Proceedings of the 21st International Conference on Computational Linguistics and 44th Annual Meeting of the ACL, pages 1089–1096, Sydney, July 2006. © 2006 Association for Computational Linguistics

Highly constrained unification grammars

Daniel Feinstein
Department of Computer Science, University of Haifa, 31905 Haifa, Israel
[email protected]

Shuly Wintner
Department of Computer Science, University of Haifa, 31905 Haifa, Israel
[email protected]

Abstract

Unification grammars are widely accepted as an expressive means for describing the structure of natural languages. In general, the recognition problem is undecidable for unification grammars. Even with restricted variants of the formalism, off-line parsable grammars, the problem is computationally hard. We present two natural constraints on unification grammars which limit their expressivity. We first show that non-reentrant unification grammars generate exactly the class of context-free languages. We then relax the constraint and show that one-reentrant unification grammars generate exactly the class of tree-adjoining languages. We thus relate the commonly used and linguistically motivated formalism of unification grammars to more restricted, computationally tractable classes of languages.

1 Introduction

Unification grammars (UG) (Shieber, 1986; Shieber, 1992; Carpenter, 1992) originated as an extension of context-free grammars, the basic idea being to augment the context-free rules with non-context-free annotations (feature structures) in order to express additional information. They can describe phonological, morphological, syntactic and semantic properties of languages simultaneously and are thus linguistically suitable for modeling natural languages. Several formulations of unification grammars have been proposed, and they are used extensively by computational linguists to describe the structure of a variety of natural languages.

Unification grammars are Turing equivalent: determining whether a given string is generated by a given grammar is as hard as deciding whether a Turing machine halts on the empty input (Johnson, 1988). Therefore, the recognition problem for unification grammars is undecidable in the general case. To ensure its decidability, several constraints on unification grammars, commonly known as the off-line parsability (OLP) constraints, were suggested, such that the recognition problem is decidable for off-line parsable grammars (Jaeger et al., 2005). The idea behind all the OLP definitions is to rule out grammars which license trees in which an unbounded amount of material is generated without expanding the frontier word. This can happen due to two kinds of rules: ε-rules (whose bodies are empty) and unit rules (whose bodies consist of a single element). However, even for unification grammars with no such rules the recognition problem is NP-hard (Barton et al., 1987).

In order for a grammar formalism to make predictions about the structure of natural language, its generative capacity must be constrained. It is now generally accepted that context-free grammars (CFGs) lack the generative power needed for this purpose (Savitch et al., 1987), due to natural language constructions such as reduplication, multiple agreement and crossed agreement. Several linguistic formalisms have been proposed as capable of modeling these phenomena, including Linear Indexed Grammars (LIG) (Gazdar, 1988), Head Grammars (Pollard, 1984), Tree Adjoining Grammars (TAG) (Joshi, 2003) and Combinatory Categorial Grammars (Steedman, 2000).
In a seminal work, Vijay-Shanker and Weir (1994) prove that all four formalisms are weakly equivalent. They all generate the class of mildly context-sensitive languages (MCSL), all members of which have recognition algorithms with time complexity O(n^6) (Vijay-Shanker and Weir, 1993; Satta, 1994).[1] As a result of the weak equivalence of four independently developed (and linguistically motivated) extensions of CFG, the class MCSL is considered to be linguistically meaningful, a natural class of languages for characterizing natural languages. Several authors tried to approximate unification grammars by means of context-free grammars (Rayner et al., 2001; Kiefer and Krieger, 2004) and even finite-state grammars (Pereira and Wright, 1997; Johnson, 1998), but we are not aware of any work which relates unification grammars with the class MCSL.

[1] The term mildly context-sensitive was coined by Joshi (1985), in reference to a less formally defined class of languages. Strictly speaking, what we call MCSL here is also known as the class of tree-adjoining languages.

The main objective of this work is to define constraints on UGs which naturally limit their generative capacity. We define two natural and easily testable syntactic constraints on UGs which ensure that grammars satisfying them generate the context-free and the mildly context-sensitive languages, respectively. The contribution of this result is twofold:

• From a theoretical point of view, constraining unification grammars to generate exactly the class MCSL results in a grammatical formalism which is, on one hand, powerful enough for linguists to express linguistic generalizations in, and on the other hand cognitively adequate, in the sense that its generative capacity is constrained;

• Practically, such a constraint can provide efficient recognition algorithms for the limited class of unification grammars.

We define some preliminary notions in section 2 and then show a constrained version of UG which generates the class CFL of context-free languages in section 3. Section 4 presents the main result, namely a restricted version of UG and a mapping of its grammars to LIG, establishing the proposition that such grammars generate exactly the class MCSL. For lack of space, we favor intuitive explanation over rigorous proofs; the full details can be found in Feinstein (2004).

2 Preliminary notions

A CFG is a four-tuple Gcf = ⟨VN, Vt, Rcf, S⟩, where Vt is a set of terminals, VN is a set of non-terminals, including the start symbol S, and Rcf is a set of productions, assumed to be in a normal form where each rule has either (zero or more) non-terminals or a single terminal in its body, and where the start symbol never occurs in the right-hand side of rules. The set of all such context-free grammars is denoted CFGS.

In a linear indexed grammar (LIG),[2] strings are derived from nonterminals with an associated stack, denoted A[l1 . . . ln], where A is a nonterminal, each li is a stack symbol, and l1 is the top of the stack. Since stacks can grow to be of unbounded size during a derivation, some way of partially specifying unbounded stacks in LIG productions is needed. We use A[l1 . . . ln ∞] to denote the nonterminal A associated with any stack η whose top n symbols are l1, l2, . . . , ln. The set of all nonterminals in VN, associated with stacks whose symbols come from Vs, is denoted VN[Vs*].

[2] The definition is based on Vijay-Shanker and Weir (1994).

Definition 1. A Linear Indexed Grammar is a five-tuple Gli = ⟨VN, Vt, Vs, Rli, S⟩, where Vt, VN and S are as above, Vs is a finite set of indices (stack symbols) and Rli is a finite set of productions in one of the following two forms:

• fixed stack: Ni[p1 . . . pn] → α

• unbounded stack: Ni[p1 . . . pn ∞] → α or Ni[p1 . . . pn ∞] → α Nj[q1 . . . qm ∞] β

where Ni, Nj ∈ VN, p1 . . . pn, q1 . . . qm ∈ Vs, n, m ≥ 0 and α, β ∈ (Vt ∪ VN[Vs*])*.

A crucial characteristic of LIG is that only one copy of the stack can be copied to a single element in the body of a rule. If more than one copy were allowed, the expressive power would grow beyond MCSL.

Definition 2. Given a LIG ⟨VN, Vt, Vs, Rli, S⟩, the derivation relation '⇒li' is defined as follows: for all Ψ1, Ψ2 ∈ (VN[Vs*] ∪ Vt)* and η ∈ Vs*,

• If Ni[p1 . . . pn] → α ∈ Rli then Ψ1 Ni[p1 . . . pn] Ψ2 ⇒li Ψ1 α Ψ2

• If Ni[p1 . . . pn ∞] → α ∈ Rli then Ψ1 Ni[p1 . . . pn η] Ψ2 ⇒li Ψ1 α Ψ2

• If Ni[p1 . . . pn ∞] → α Nj[q1 . . . qm ∞] β ∈ Rli then Ψ1 Ni[p1 . . . pn η] Ψ2 ⇒li Ψ1 α Nj[q1 . . . qm η] β Ψ2

The language generated by Gli is L(Gli) = {w ∈ Vt* | S[ ] ⇒li* w}, where '⇒li*' is the reflexive, transitive closure of '⇒li'.
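To make Definition 2 concrete, the following sketch encodes LIG symbols and a single derivation step in Python. The representation (tuples for nonterminals with stacks, a COPY marker for the one daughter that inherits the stack remainder η) is an illustrative assumption, not part of the formalism itself.

```python
# A minimal sketch of LIG symbols and one derivation step (Definition 2).
# Representation choices and names are illustrative assumptions.

from typing import List, Tuple, Union

Terminal = str                             # an element of Vt
NonTerm = Tuple[str, Tuple[str, ...]]      # (A, stack); stack[0] is the top
Symbol = Union[Terminal, NonTerm]

COPY = object()   # marks the single body element that receives the remainder eta

def apply_rule(form: List[Symbol], i: int,
               head: NonTerm, unbounded: bool,
               body: list) -> List[Symbol]:
    """Rewrite form[i] using a rule with the given head and body."""
    name, prefix = head
    sym = form[i]
    assert isinstance(sym, tuple) and sym[0] == name
    stack = sym[1]
    if unbounded:
        assert stack[:len(prefix)] == prefix   # p1..pn must be the top of the stack
        eta = stack[len(prefix):]              # the unspecified remainder
    else:
        assert stack == prefix                 # fixed stack: exact match
        eta = ()
    new_body: List[Symbol] = []
    for b in body:
        if isinstance(b, tuple) and b and b[0] is COPY:   # the stack-copying daughter
            _, (bname, bprefix) = b
            new_body.append((bname, bprefix + eta))
        else:
            new_body.append(b)
    return form[:i] + new_body + form[i + 1:]

# Example: applying N[p ∞] -> a N[q ∞] to the symbol N[p r] yields a N[q r].
step = apply_rule([("N", ("p", "r"))], 0, ("N", ("p",)), True,
                  ["a", (COPY, ("N", ("q",)))])
assert step == ["a", ("N", ("q", "r"))]
```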
Unification grammars are defined over feature structures (FSs), which are directed, connected, rooted, labeled graphs, usually depicted as attribute-value matrices (AVMs). A feature structure A can be characterized by its set of paths, ΠA, an assignment of atomic values to the ends of some paths, ΘA(·), and a reentrancy relation '↭' relating paths which lead to the same node. A sequence of feature structures, where some nodes may be shared by more than one element, is a multi-rooted structure (MRS).

Definition 3. Unification grammars are defined over a signature consisting of a finite set ATOMS of atoms, a finite set FEATS of features and a finite set WORDS of words. A unification grammar is a tuple Gu = ⟨Ru, As, L⟩, where Ru is a finite set of rules, each of which is an MRS of length n ≥ 1, L is a lexicon, which associates with every word w ∈ WORDS a finite set of feature structures, L(w), and As is a feature structure, the start symbol.

Definition 4. A unification grammar ⟨Ru, As, L⟩ over the signature ⟨ATOMS, FEATS, WORDS⟩ is non-reentrant iff for any rule ru ∈ Ru, ru is non-reentrant. It is one-reentrant iff for every rule ru ∈ Ru, ru includes at most one reentrancy, between the head of the rule and some element of the body. Let UGnr, UG1r be the sets of all non-reentrant and one-reentrant unification grammars, respectively.

Informally, a rule is non-reentrant if (on an AVM view) no reentrancy tags occur in it. When the rule is viewed as a (multi-rooted) graph, it is non-reentrant if the in-degree of all nodes is at most 1. A rule is one-reentrant if (on an AVM view) at most one reentrancy tag occurs in it, exactly twice: once in the head of the rule and once in an element of its body. When the rule is viewed as a (multi-rooted) graph, it is one-reentrant if the in-degree of all nodes is at most 1, with the exception of one node whose in-degree can be 2, provided that the only two distinct paths that lead to this node leave from the roots of the head of the rule and an element of the body.

FSs and MRSs are partially ordered by subsumption, denoted '⊑'. The least upper bound with respect to subsumption is unification, denoted '⊔'. Unification is partial; when A ⊔ B is undefined we say that the unification fails and denote it as A ⊔ B = ⊤.
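Since the constructions below manipulate non-reentrant feature structures only, very little machinery is needed to experiment with them. The following sketch is a toy model rather than an implementation of the full formalism: non-reentrant FSs are nested Python dictionaries with atoms as strings, and unification is recursive merging; reentrancy (structure sharing) is deliberately omitted.

```python
# Non-reentrant feature structures as nested dicts; atoms as strings.
# unify returns the least upper bound, or None to model failure (⊤).
# Reentrancy (structure sharing) is ignored; that is exactly the
# simplification exploited by the grammars studied in this paper.

def unify(a, b):
    if isinstance(a, str) or isinstance(b, str):
        return a if a == b else None            # atoms unify only with themselves
    result = dict(a)                             # copy a's feature-value pairs
    for feat, val in b.items():
        if feat in result:
            sub = unify(result[feat], val)
            if sub is None:
                return None                      # failure propagates upward
            result[feat] = sub
        else:
            result[feat] = val
    return result

# ⟨CAT: np, AGR: ⟨NUM: sg⟩⟩ ⊔ ⟨AGR: ⟨NUM: sg, PERS: 3⟩⟩
fs1 = {"CAT": "np", "AGR": {"NUM": "sg"}}
fs2 = {"AGR": {"NUM": "sg", "PERS": "3"}}
assert unify(fs1, fs2) == {"CAT": "np", "AGR": {"NUM": "sg", "PERS": "3"}}
assert unify({"AGR": {"NUM": "sg"}}, {"AGR": {"NUM": "pl"}}) is None
```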
Unification is lifted to MRSs: given two MRSs σ and ρ, it is possible to unify the i-th element of σ with the j-th element of ρ. This operation, called unification in context and denoted (σ, i) ⊔ (ρ, j), yields two modified variants of σ and ρ: (σ′, ρ′).

In unification grammars, forms are MRSs. A form σA = ⟨A1, . . . , Ak⟩ immediately derives another form σB = ⟨B1, . . . , Bm⟩ (denoted by σA ⇒u σB) iff there exists a rule ru ∈ Ru of length n that licenses the derivation. The head of ru is matched against some element Ai in σA using unification in context: (σA, i) ⊔ (ru, 0) = (σ′A, r′). If the unification does not fail, σB is obtained by replacing the i-th element of σ′A with the body of r′. The reflexive transitive closure of '⇒u' is denoted by '⇒u*'.

Definition 5. The language of a unification grammar Gu is L(Gu) = {w1 · · · wn ∈ WORDS* | As ⇒u* ⟨A1, . . . , An⟩}, where Ai ∈ L(wi) for 1 ≤ i ≤ n.

3 Context-free unification grammars

We define a constraint on unification grammars which ensures that grammars satisfying it generate the class CFL. The constraint disallows any reentrancies in the rules of the grammar. When rules are non-reentrant, applying a rule implies that an exact copy of the body of the rule is inserted into the generated (sentential) form, not affecting neighboring elements of the form the rule is applied to. The only difference between rule application in UGnr and the analogous operation in CFGS is that the former requires unification whereas the latter only calls for an identity check. This small difference does not affect the generative power of the formalisms, since unification can be pre-compiled in this simple case.

The trivial direction is to map a CFG to a non-reentrant unification grammar, since every CFG is, trivially, such a grammar (where terminal and non-terminal symbols are viewed as atomic feature structures). For the inverse direction, we define a mapping from UGnr to CFGS. The non-terminals of the CFG in the image of the mapping are the set of all feature structures defined in the source UG.

Definition 6. Let ug2cfg : UGnr → CFGS be a mapping of UGnr to CFGS, such that if Gu = ⟨Ru, As, L⟩ is over the signature ⟨ATOMS, FEATS, WORDS⟩ then ug2cfg(Gu) = ⟨VN, Vt, Rcf, Scf⟩, where:

• VN = {Ai | A0 → A1 . . . An ∈ Ru, i ≥ 0} ∪ {A | A ∈ L(a), a ∈ WORDS} ∪ {As}. VN is the set of all the feature structures occurring in any of the rules or the lexicon of Gu.

• Scf = As

• Vt = WORDS

• Rcf consists of the following rules:

1. Let A0 → A1 . . . An ∈ Ru and B ∈ L(b). If for some i, 1 ≤ i ≤ n, Ai ⊔ B ≠ ⊤, then Ai → b ∈ Rcf.

2. If A0 → A1 . . . An ∈ Ru and As ⊔ A0 ≠ ⊤, then Scf → A1 . . . An ∈ Rcf.

3. Let ru1 = A0 → A1 . . . An and ru2 = B0 → B1 . . . Bm, where ru1, ru2 ∈ Ru. If for some i, 1 ≤ i ≤ n, Ai ⊔ B0 ≠ ⊤, then the rule Ai → B1 . . . Bm ∈ Rcf.

The size of ug2cfg(Gu) is polynomial in the size of Gu. By induction on the lengths of the derivation sequences, we prove the following theorem:

Theorem 1. If Gu = ⟨Ru, As, L⟩ is a non-reentrant unification grammar and Gcf = ug2cfg(Gu), then L(Gcf) = L(Gu).

Corollary 2. Non-reentrant unification grammars are weakly equivalent to CFGS.
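Definition 6 is directly constructive: it only requires a unifiability test over the finitely many feature structures that occur in the grammar. The sketch below outlines the three clauses of Rcf, reusing the unify helper from the earlier sketch; the grammar encoding (rules as lists of FSs, the lexicon as a dict) and the use of table indices as CFG nonterminals are assumptions made for illustration, not the authors' construction.

```python
# Sketch of the ug2cfg construction (Definition 6), assuming the unify()
# helper sketched above and a non-reentrant unification grammar given as:
#   rules:   list of lists of feature structures [A0, A1, ..., An]
#   lexicon: dict mapping words to lists of feature structures
#   start:   the start feature structure As
# CFG nonterminals are the grammar's feature structures, addressed by their
# position in a table (dicts are not hashable, so they cannot be keys).

def ug2cfg(rules, lexicon, start):
    table = ([start]
             + [A for rule in rules for A in rule]
             + [B for fss in lexicon.values() for B in fss])
    idx = lambda fs: table.index(fs)            # lookup by structural equality
    cfg = []                                    # productions: (lhs, rhs-sequence)
    for rule in rules:
        head, body = rule[0], rule[1:]
        # Clause 2: start rules.
        if unify(start, head) is not None:
            cfg.append(("S", [idx(A) for A in body]))
        for Ai in body:
            # Clause 1: terminal rules.
            for word, fss in lexicon.items():
                if any(unify(Ai, B) is not None for B in fss):
                    cfg.append((idx(Ai), [word]))
            # Clause 3: chaining with the head of another rule.
            for other in rules:
                if unify(Ai, other[0]) is not None:
                    cfg.append((idx(Ai), [idx(B) for B in other[1:]]))
    return cfg
```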
4 Mildly context-sensitive UG

In this section we show that one-reentrant unification grammars generate exactly the class MCSL. In such grammars each rule can have at most one reentrancy, reflecting the LIG situation where stacks can be copied to exactly one daughter in each rule.

4.1 Mapping LIG to UG1r

In order to simulate a given LIG with a unification grammar, a dedicated signature is defined based on the parameters of the LIG.

Definition 7. Given a LIG ⟨VN, Vt, Vs, Rli, S⟩, let τ be ⟨ATOMS, FEATS, WORDS⟩, where ATOMS = VN ∪ Vs ∪ {elist}, FEATS = {HEAD, TAIL}, and WORDS = Vt.

We use τ throughout this section as the signature over which UGs are defined. We use FSs over the signature τ to represent and simulate LIG symbols. In particular, FSs will encode lists in the natural way, hence the features HEAD and TAIL. For the sake of brevity, we use standard list notation when FSs encode lists. LIG symbols are mapped to FSs thus:

Definition 8. Let toFs be a mapping of LIG symbols to feature structures, such that:

1. If t ∈ Vt then toFs(t) = ⟨t⟩

2. If N ∈ VN and pi ∈ Vs, 1 ≤ i ≤ n, then toFs(N[p1, . . . , pn]) = ⟨N, p1, . . . , pn⟩

The mapping toFs is extended to sequences of symbols by setting toFs(αβ) = toFs(α) toFs(β). Note that toFs is one to one.

When FSs that are images of LIG symbols are concerned, unification is reduced to identity:

Lemma 3. Let X1, X2 ∈ VN[Vs*] ∪ Vt. If toFs(X1) ⊔ toFs(X2) ≠ ⊤ then toFs(X1) = toFs(X2).

When a feature structure which is represented as an unbounded list (a list that is not terminated by elist) is unifiable with an image of a LIG symbol, the former is a prefix of the latter.

Lemma 4. Let C = ⟨p1, . . . , pn, [i]⟩ be a non-reentrant feature structure, where p1, . . . , pn ∈ Vs, and let X ∈ VN[Vs*] ∪ Vt. Then C ⊔ toFs(X) ≠ ⊤ iff toFs(X) = ⟨p1, . . . , pn, α⟩, for some α ∈ Vs*.

To simulate LIGs with UGs we represent each symbol in the LIG as a feature structure, encoding the stacks of LIG non-terminals as lists. Rules that propagate stacks (from mother to daughter) are simulated by means of reentrancy in the UG.

Definition 9. Let lig2ug be a mapping of LIGS to UG1r, such that if Gli = ⟨VN, Vt, Vs, Rli, S⟩ and Gu = ⟨Ru, As, L⟩ = lig2ug(Gli), then Gu is over the signature τ (definition 7), As = toFs(S[ ]), for all t ∈ Vt, L(t) = {toFs(t)}, and Ru is defined by:

• A LIG rule of the form X0 → α is mapped to the unification rule toFs(X0) → toFs(α)

• A LIG rule of the form Ni[p1, . . . , pn ∞] → α Nj[q1, . . . , qm ∞] β is mapped to the unification rule ⟨Ni, p1, . . . , pn, [1]⟩ → toFs(α) ⟨Nj, q1, . . . , qm, [1]⟩ toFs(β), where [1] is a reentrancy tag shared between the head and the single daughter that receives the stack.

Evidently, lig2ug(Gli) ∈ UG1r for any LIG Gli.

Theorem 5. If Gli = ⟨VN, Vt, Vs, Rli, Sli⟩ is a LIG and Gu = lig2ug(Gli) then L(Gu) = L(Gli).
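Definition 9 is essentially a change of representation. The sketch below shows one way to encode toFs (Definition 8) and the two LIG rule forms in Python, using a shared Tag object to stand for the single reentrancy between the head and the stack-copying daughter; the names and the "copy" marker are illustrative assumptions, not the paper's notation.

```python
# Sketch of toFs (Definition 8) and of the rule mapping of lig2ug
# (Definition 9).  Feature structures over {HEAD, TAIL} are shown in list
# notation; a closed list ends in "elist", and a shared Tag object models
# the single reentrancy between the head and one element of the body.

class Tag:                                   # a reentrancy tag such as [1]
    pass

def toFs(symbol):
    if isinstance(symbol, str):              # a terminal t in Vt
        return [symbol, "elist"]
    nonterm, stack = symbol                  # N[p1,...,pn]
    return [nonterm, *stack, "elist"]

def lig2ug_rule(rule):
    head, unbounded, body = rule
    if not unbounded:                        # X0 -> alpha: plain translation
        return (toFs(head), [toFs(x) for x in body])
    tail = Tag()                             # the one shared sub-structure
    name, prefix = head
    ug_head = [name, *prefix, tail]
    ug_body = []
    for x in body:
        if isinstance(x, tuple) and len(x) == 3 and x[2] == "copy":
            jname, jprefix, _ = x            # the daughter N_j[q1..qm ∞]
            ug_body.append([jname, *jprefix, tail])   # same tag: reentrancy
        else:
            ug_body.append(toFs(x))
    return (ug_head, ug_body)

# N[p ∞] -> a M[q ∞]  becomes  ⟨N, p, [1]⟩ -> ⟨a⟩ ⟨M, q, [1]⟩
head_fs, body_fs = lig2ug_rule((("N", ("p",)), True, ["a", ("M", ("q",), "copy")]))
assert head_fs[-1] is body_fs[1][-1]         # the list tail is literally shared
```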
4.2 Mapping UG1r to LIG

We are now interested in the reverse direction, namely mapping UGs to LIG. Of course, since UGs are more expressive than LIGs, only a subset of the former can be correctly simulated by the latter. The differences between the two formalisms can be summarized along three dimensions:

The basic elements: UG manipulates feature structures, and rules (and forms) are MRSs; LIG manipulates terminals and non-terminals with stacks of elements, and rules (and forms) are sequences of such symbols.

Rule application: In UG a rule is applied by unification in context of the rule and a sentential form, both of which are MRSs, whereas in LIG the head of a rule and the selected element of a sentential form must have the same non-terminal symbol and consistent stacks.

Propagation of information in rules: In UG information is shared through reentrancies, whereas in LIG information is propagated by copying the stack from the head of the rule to one element of its body.

We show that one-reentrant UGs can all be correctly mapped to LIG.

For the rest of this section we fix a signature ⟨ATOMS, FEATS, WORDS⟩ over which UGs are defined. Let NRFSS be the set of all non-reentrant FSs over this signature. One-reentrant UGs induce highly constrained (sentential) forms: in such forms, there are no reentrancies whatsoever, neither between distinct elements nor within a single element. Hence all the FSs in forms induced by a one-reentrant UG are non-reentrant.

Definition 10. Let A be a feature structure with no reentrancies. The height of A, denoted |A|, is the length of the longest path in A. This is well defined since non-reentrant feature structures are acyclic. Let Gu = ⟨Ru, As, L⟩ ∈ UG1r be a one-reentrant unification grammar. The maximum height of the grammar, maxHt(Gu), is the height of the highest feature structure in the grammar. This is well defined since all the feature structures of one-reentrant grammars are non-reentrant.

The following lemma indicates an important property of one-reentrant UGs. Informally, in any FS that is an element of a sentential form induced by such grammars, if two paths are long (specifically, longer than the maximum height of the grammar), they must have a long common prefix.

Lemma 6. Let Gu = ⟨Ru, As, L⟩ ∈ UG1r be a one-reentrant unification grammar. Let A be an element of a sentential form induced by Gu. If π · ⟨Fj⟩ · π1, π · ⟨Fk⟩ · π2 ∈ ΠA, where Fj, Fk ∈ FEATS, j ≠ k and |π1| ≤ |π2|, then |π1| ≤ maxHt(Gu).

Lemma 6 facilitates a view of all the FSs induced by such a grammar as (unboundedly long) lists of elements drawn from a finite, predefined set. The set consists of all features in FEATS and all the non-reentrant feature structures whose height is limited by the maximal height of the unification grammar. Note that even with one-reentrant UGs, feature structures can be unboundedly deep. What lemma 6 establishes is that if a feature structure induced by a one-reentrant unification grammar is deep, then it can be represented as a single "core" path which is long, and all the sub-structures which "hang" from this core are depth-bounded. We use this property to encode such feature structures as cords.

Definition 11. Let Ψ : NRFSS × PATHS → (FEATS ∪ NRFSS)* be a mapping such that if A is a non-reentrant FS and π = ⟨F1, . . . , Fn⟩ ∈ ΠA, then the cord Ψ(A, π) is ⟨A1, F1, . . . , An, Fn, An+1⟩, where for 1 ≤ i ≤ n + 1, the Ai are non-reentrant FSs such that:

• ΠAi = {⟨G⟩ · π′ | ⟨F1, . . . , Fi−1, G⟩ · π′ ∈ ΠA, and G ≠ Fi if i ≤ n} ∪ {ε}

• ΘAi(π′) = ΘA(⟨F1, . . . , Fi−1⟩ · π′) (if it is defined).

We also define last(Ψ(A, π)) = An+1. The height of a cord is defined as |Ψ(A, π)| = max_{1≤i≤n+1}(|Ai|). For each cord Ψ(A, π) we refer to A as the base feature structure and to π as the base path. The length of a cord is the length of the base path. The function Ψ is one to one: given Ψ(A, π), both A and π are uniquely determined.

Lemma 7. Let Gu be a one-reentrant unification grammar and let A be an element of a sentential form induced by Gu. Then there is a path π ∈ ΠA such that |Ψ(A, π)| < maxHt(Gu).

Lemma 7 implies that every non-reentrant FS (i.e., the FSs induced by one-reentrant grammars) can be represented as a height-limited cord. This mapping resolves the first difference between LIG and UG, by providing a representation of the basic elements. We use cords as the stack contents of LIG non-terminals: cords can be unboundedly long, but so can LIG stacks; the crucial point is that cords are height limited, implying that they can be represented using a finite number of elements.
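The cord decomposition of Definition 11 can be pictured as walking down the base path and collecting, at each step, the depth-bounded remainder that hangs off the core. The following sketch, again over the nested-dictionary toy model and with assumed helper names, computes Ψ(A, π) and the height of a feature structure.

```python
# Sketch of the cord Ψ(A, π) of Definition 11 for the toy dictionary model:
# walk down the base path π and, at each step, keep the remainder of the
# feature structure with the next base-path feature removed.

def height(fs):                       # length of the longest path (Definition 10)
    if isinstance(fs, str):
        return 0
    return max((1 + height(v) for v in fs.values()), default=0)

def cord(A, path):
    pieces = []
    node = A
    for feat in path:
        rest = {f: v for f, v in node.items() if f != feat}
        pieces.extend([rest, feat])   # A_i followed by F_i
        node = node[feat]
    pieces.append(node)               # A_{n+1}
    return pieces

A = {"CAT": "v", "ARG": {"CAT": "np", "AGR": {"NUM": "sg"}}}
c = cord(A, ["ARG", "AGR"])
# c == [{"CAT": "v"}, "ARG", {"CAT": "np"}, "AGR", {"NUM": "sg"}]
assert max(height(p) for p in c[0::2]) <= height(A)
```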
We now show how to simulate, in LIG, the unification in context of a rule and a sentential form. The first step is to have exactly one non-terminal symbol (in addition to the start symbol); when all non-terminal symbols are identical, only the content of the stack has to be taken into account. Recall that in order for a LIG rule to be applicable to a sentential form, the stack of the rule's head must be a prefix of the stack of the selected element in the form. The only question is whether the two stacks are equal (fixed rule head) or not (unbounded rule head). Since the contents of stacks are cords, we need a property relating two cords, on one hand, with unifiability of their base feature structures, on the other. Lemma 8 establishes such a property. Informally, if the base path of one cord is a prefix of the base path of the other cord and all feature structures along the common path of both cords are unifiable, then the base feature structures of both cords are unifiable. The reverse direction also holds.

Lemma 8. Let A, B ∈ NRFSS be non-reentrant feature structures and π1, π2 ∈ PATHS be paths such that π1 ∈ ΠB, π1 · π2 ∈ ΠA, Ψ(A, π1 · π2) = ⟨t_1, F_1, . . . , F_{|π1|}, t_{|π1|+1}, F_{|π1|+1}, . . . , t_{|π1·π2|+1}⟩, Ψ(B, π1) = ⟨s_1, F_1, . . . , s_{|π1|+1}⟩, and ⟨F_{|π1|+1}⟩ ∉ Π_{s_{|π1|+1}}. Then A ⊔ B ≠ ⊤ iff for all i, 1 ≤ i ≤ |π1| + 1, s_i ⊔ t_i ≠ ⊤.

The length of a cord of an element of a sentential form induced by the grammar cannot be bounded, but the length of any cord representation of a rule head is limited by the grammar height. By lemma 8, unifiability of two feature structures can be reduced to a comparison of the two cords representing them, and only the prefix of the longer cord (as long as the shorter cord) affects the result. Since the cord representation of any grammar rule's head is limited by the height of the grammar, we always choose it as the shorter cord in the comparison.

We now define, for a feature structure C (which is a head of a rule) and some path π, the set that includes all feature structures that are both unifiable with C and can be represented as a cord whose height is limited by the grammar height and whose base path is π. We call this set the compatibility set of C and π and use it to define the set of all possible prefixes of cords whose base FSs are unifiable with C (see definition 13). Crucially, the compatibility set of C is finite for any feature structure C, since the heights and the lengths of the cords are limited.

Definition 12. Given a non-reentrant feature structure C, a path π = ⟨F1, . . . , Fn⟩ ∈ ΠC and a natural number h, the compatibility set, Γ(C, π, h), is defined as the set of all feature structures A such that C ⊔ A ≠ ⊤, π ∈ ΠA, and |Ψ(A, π)| ≤ h.

The compatibility set is defined for a feature structure and a given path (when h is taken to be the grammar height). We now define two similar sets, FH and UH, for a given FS, independently of a path. When rules of a one-reentrant unification grammar are mapped to LIG rules (definition 14), FH and UH are used to define heads of fixed and unbounded LIG rules, respectively. A single unification rule is mapped to a set of LIG rules, each with a different head. The stack of the head is some member of the sets FH and UH. Each such member is a prefix of the stack of potential elements of sentential forms that the LIG rule can be applied to.
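Although Definition 12 describes a (finite) set, its three defining conditions are easy to state as a membership test. The sketch below does exactly that, reusing unify, cord and height from the earlier sketches; the paths helper enumerates Π_A for the toy dictionary representation, and all names are assumptions for illustration.

```python
# Membership test for the compatibility set Γ(C, π, h) of Definition 12,
# reusing unify(), cord() and height() from the sketches above.

def paths(fs, prefix=()):
    """Enumerate Π_fs: every path (tuple of features) from the root."""
    yield prefix
    if isinstance(fs, dict):
        for feat, val in fs.items():
            yield from paths(val, prefix + (feat,))

def in_compatibility_set(A, C, pi, h):
    return (unify(C, A) is not None                                   # C ⊔ A ≠ ⊤
            and tuple(pi) in set(paths(A))                            # π ∈ Π_A
            and max(height(p) for p in cord(A, pi)[0::2]) <= h)       # |Ψ(A, π)| ≤ h
```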
Definition 13. Let C be a non-reentrant feature structure and h be a natural number. Then:

FH(C, h) = {Ψ(A, π) | π ∈ ΠC, A ∈ Γ(C, π, h)}

UH(C, h) = {Ψ(A, π) · ⟨F⟩ | Ψ(A, π) ∈ FH(C, h), ΘC(π)↑, F ∈ FEATS, val(last(Ψ(C ⊔ A, π)), ⟨F⟩)↑}

This accounts for the second difference between LIG and one-reentrant UG, namely rule application. We now briefly illustrate our account of the last difference, propagation of information in rules. In UG1r information is shared between the rule's head and a single element in its body. Let ru = ⟨C0, . . . , Cn⟩ be a reentrant unification rule in which the path µe, leaving the e-th element of the body, is reentrant with the path µ0 leaving the head. This rule is mapped to a set of LIG rules, corresponding to the possible rule heads induced by the compatibility set of C0. Let r be a member of this set, and let X0 and Xe be the head and the e-th element of r, respectively. Reentrancy in ru is modeled in the LIG rule by copying the stack from X0 to Xe. The major complication is the contents of this stack, which varies according to the cord representations of C0 and Ce and to the reentrant paths.

Summing up, in a LIG simulating a one-reentrant UG, FSs are represented as stacks of symbols. The set of stack symbols Vs, therefore, is defined as a set of height-bounded non-reentrant FSs. Also, all the features of the UG are stack symbols. Vs is finite due to the restriction on FSs (no reentrancies and height-boundedness). The set of terminals, Vt, is the set of words of the UG. There are exactly two non-terminal symbols, S (the start symbol) and N. The set of rules is divided into four. The start rule only applies once in a derivation, simulating the situation in UGs of a rule whose head is unifiable with the start symbol. Terminal rules are a straightforward implementation of the lexicon in terms of LIG. Non-reentrant rules are simulated in a similar way to how rules of a non-reentrant UG are simulated by a CFG (section 3); the major difference is the head of the rule, X0, which is defined as explained above. One-reentrant rules are simulated similarly to non-reentrant ones, the only difference being the selected element of the rule body, Xe, which is defined as follows.

Definition 14. Let ug2lig be a mapping of UG1r to LIGS, such that if Gu = ⟨Ru, As, L⟩ ∈ UG1r then ug2lig(Gu) = ⟨VN, Vt, Vs, Rli, S⟩, where VN = {N, S} (fresh symbols), Vt = WORDS, Vs = FEATS ∪ {A | A ∈ NRFSS, |A| ≤ maxHt(Gu)}, and Rli is defined as follows:[3]

1. S[ ] → N[Ψ(As, ε)]

2. For every w ∈ WORDS such that L(w) = {C0} and for every π0 ∈ ΠC0, the rule N[Ψ(C0, π0)] → w is in Rli.

3. If ⟨C0, . . . , Cn⟩ ∈ Ru is a non-reentrant rule, then for every X0 ∈ LIGHEAD(C0) the rule X0 → N[Ψ(C1, ε)] . . . N[Ψ(Cn, ε)] is in Rli.

4. Let ru = ⟨C0, . . . , Cn⟩ ∈ Ru and (0, µ0) ↭ru (e, µe), where 1 ≤ e ≤ n. Then for every X0 ∈ LIGHEAD(C0) the rule X0 → N[Ψ(C1, ε)] . . . N[Ψ(Ce−1, ε)] Xe N[Ψ(Ce+1, ε)] . . . N[Ψ(Cn, ε)] is in Rli, where Xe is defined as follows. Let π0 be the base path of X0 and A be the base feature structure of X0. Applying the rule ru to A, define (⟨A⟩, 0) ⊔ (ru, 0) = (⟨P0⟩, ⟨P0, . . . , Pe, . . . , Pn⟩).

(a) If µ0 is not a prefix of π0 then Xe = N[Ψ(Pe, µe)].

(b) If π0 = µ0 · ν, ν ∈ PATHS, then

i. If X0 = N[Ψ(A, π0)] then Xe = N[Ψ(Pe, µe · ν)].

ii. If X0 = N[Ψ(A, π0) · ⟨F⟩ ∞] then Xe = N[Ψ(Pe, µe · ν) · ⟨F⟩ ∞].

[3] For a non-reentrant FS C0, we define LIGHEAD(C0) as {N[η] | η ∈ FH(C0, maxHt(Gu))} ∪ {N[η ∞] | η ∈ UH(C0, maxHt(Gu))}.

By induction on the lengths of the derivations we prove that the mapping is correct:

Theorem 9. If Gu ∈ UG1r, then L(Gu) = L(ug2lig(Gu)).
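For a toy signature, the set FH(C, h) of Definition 13 can even be computed by brute force: enumerate all non-reentrant feature structures whose cords along some base path of C can be height-bounded by h, and keep the cords of those in the compatibility set. The sketch below (exponential, purely illustrative, and again reusing the earlier helpers) shows that construction; UH, which additionally inspects ΘC and the last element of the cord, is omitted, and the height bound h + height(C) on the enumeration is an assumption justified only informally.

```python
# Brute-force construction of FH(C, h) (Definition 13) over a toy signature,
# reusing paths(), cord() and in_compatibility_set() from the sketches above.
# Exponential in the bound and in |FEATS|; meant only to make the set concrete.

from itertools import product

def all_fss(feats, atoms, bound):
    """All non-reentrant FSs of height at most bound over the toy signature."""
    if bound == 0:
        return list(atoms) + [{}]
    smaller = all_fss(feats, atoms, bound - 1)
    fss = list(smaller)
    for values in product(smaller + [None], repeat=len(feats)):
        fss.append({f: v for f, v in zip(feats, values) if v is not None})
    return fss

def fh(C, h, feats, atoms):
    cords = []
    # Any A with π ∈ Π_C and |Ψ(A, π)| ≤ h has height at most |π| + h,
    # so enumerating up to height(C) + h suffices for this toy setting.
    candidates = all_fss(feats, atoms, h + height(C))
    for pi in paths(C):                       # every base path of C
        for A in candidates:
            if in_compatibility_set(A, C, pi, h):
                cords.append(cord(A, list(pi)))
    return cords
```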
5 Conclusions

The main contribution of this work is the definition of two constraints on unification grammars which dramatically limit their expressivity. We prove that non-reentrant unification grammars generate exactly the class of context-free languages, and that one-reentrant unification grammars generate exactly the class of mildly context-sensitive languages. We thus obtain two linguistically plausible constrained formalisms whose computational processing is tractable.

This main result is primarily a formal grammar result. However, we maintain that it can be easily adapted such that its consequences for (practical) computational linguistics are more evident. The motivation behind this observation is that reentrancy only adds to the expressivity of a grammar formalism when it is potentially unbounded, i.e., when infinitely many feature structures can be the possible values at the end of the reentrant paths. It is therefore possible to modestly extend the class of unification grammars which can be shown to generate exactly the class of mildly context-sensitive languages, by also allowing a limited form of multiple reentrancies among the elements in a rule (e.g., to handle agreement phenomena). This can be most useful for grammar writers, and at the same time adds nothing to the expressivity of the formalism. We leave the formal details of such an extension to future work.

This work can also be extended in other directions. The mapping of one-reentrant UGs to LIG is highly verbose, resulting in LIGs with a huge number of rules. We believe that it should be possible to optimize the mapping such that much smaller grammars are generated. In particular, we are looking into mappings of one-reentrant UGs to other MCSL formalisms, notably TAG.

The two constraints on unification grammars (non-reentrant and one-reentrant) are parallel to the first two classes of the Weir (1992) hierarchy of languages. A possible extension of this work could be a definition of constraints on unification grammars that would generate all the classes of the hierarchy. Another direction is an extension of one-reentrant unification grammars, where the reentrancy does not have to be between the head and one element of the body. Also of interest are two-reentrant unification grammars, possibly with limited kinds of reentrancies.

Acknowledgments

This research was supported by The Israel Science Foundation (grant no. 136/01). We are grateful to Yael Cohen-Sygal, Nissim Francez and James Rogers for their comments and help.

References

G. Edward Barton, Jr., Robert C. Berwick, and Eric Sven Ristad. 1987. The complexity of LFG. In G. Edward Barton, Jr., Robert C. Berwick, and Eric Sven Ristad, editors, Computational Complexity and Natural Language, Computational Models of Cognition and Perception, chapter 3, pages 89–102. MIT Press, Cambridge, MA.

Bob Carpenter. 1992. The Logic of Typed Feature Structures. Cambridge University Press.

Daniel Feinstein. 2004. Computational investigation of unification grammars. Master's thesis, University of Haifa.

Gerald Gazdar. 1988. Applicability of indexed grammars to natural languages. In Uwe Reyle and Christian Rohrer, editors, Natural Language Parsing and Linguistic Theories, pages 69–94. Reidel.

Efrat Jaeger, Nissim Francez, and Shuly Wintner. 2005. Unification grammars and off-line parsability. Journal of Logic, Language and Information, 14(2):199–234.

Mark Johnson. 1988. Attribute-Value Logic and the Theory of Grammar, volume 16 of CSLI Lecture Notes. CSLI, Stanford, California.
Mark Johnson. 1998. Finite-state approximation of constraint-based grammars using left-corner grammar transforms. In Proceedings of the 17th International Conference on Computational Linguistics, pages 619–623.

Aravind K. Joshi. 1985. Tree adjoining grammars: How much context sensitivity is required to provide a reasonable structural description. In D. Dowty, L. Karttunen, and A. Zwicky, editors, Natural Language Parsing, pages 206–250. Cambridge University Press, Cambridge, U.K.

Aravind K. Joshi. 2003. Tree-adjoining grammars. In Ruslan Mitkov, editor, The Oxford Handbook of Computational Linguistics, chapter 26, pages 483–500. Oxford University Press.

Bernd Kiefer and Hans-Ulrich Krieger. 2004. A context-free superset approximation of unification-based grammars. In Harry Bunt, John Carroll, and Giorgio Satta, editors, New Developments in Parsing Technology, pages 229–250. Kluwer Academic Publishers.

Fernando C. N. Pereira and Rebecca N. Wright. 1997. Finite-state approximation of phrase-structure grammars. In Emmanuel Roche and Yves Schabes, editors, Finite-State Language Processing, Language, Speech and Communication, chapter 5, pages 149–174. MIT Press, Cambridge, MA.

Carl Pollard. 1984. Generalized phrase structure grammars, head grammars and natural language. Ph.D. thesis, Stanford University.

Manny Rayner, John Dowding, and Beth Ann Hockey. 2001. A baseline method for compiling typed unification grammars into context free language models. In Proceedings of EUROSPEECH 2001, Aalborg, Denmark.

Giorgio Satta. 1994. Tree-adjoining grammar parsing and boolean matrix multiplication. In Proceedings of the 20th Annual Meeting of the Association for Computational Linguistics, volume 20.

Walter J. Savitch, Emmon Bach, William Marsh, and Gila Safran-Naveh, editors. 1987. The Formal Complexity of Natural Language, volume 33 of Studies in Linguistics and Philosophy. D. Reidel, Dordrecht.

Stuart M. Shieber. 1986. An Introduction to Unification-Based Approaches to Grammar. Number 4 in CSLI Lecture Notes. CSLI.

Stuart M. Shieber. 1992. Constraint-Based Grammar Formalisms. MIT Press, Cambridge, MA.

Mark Steedman. 2000. The Syntactic Process. Language, Speech and Communication. The MIT Press, Cambridge, MA.

K. Vijay-Shanker and David J. Weir. 1993. Parsing some constrained grammar formalisms. Computational Linguistics, 19(4):591–636.

K. Vijay-Shanker and David J. Weir. 1994. The equivalence of four extensions of context-free grammars. Mathematical Systems Theory, 27:511–545.

David J. Weir. 1992. A geometric hierarchy beyond context-free languages. Theoretical Computer Science, 104:235–261.
Proceedings of the 21st International Conference on Computational Linguistics and 44th Annual Meeting of the ACL, pages 1097–1104, Sydney, July 2006. © 2006 Association for Computational Linguistics

A Polynomial Parsing Algorithm for the Topological Model: Synchronizing Constituent and Dependency Grammars, Illustrated by German Word Order Phenomena

Kim Gerdes and Sylvain Kahane
Proceedings of the 21st International Conference on Computational Linguistics and 44th Annual Meeting of the ACL, pages 1105–1112, Sydney, July 2006. c⃝2006 Association for Computational Linguistics Stochastic Language Generation Using WIDL-expressions and its Application in Machine Translation and Summarization Radu Soricut Information Sciences Institute University of Southern California 4676 Admiralty Way, Suite 1001 Marina del Rey, CA 90292 [email protected] Daniel Marcu Information Sciences Institute University of Southern California 4676 Admiralty Way, Suite 1001 Marina del Rey, CA 90292 [email protected] Abstract We propose WIDL-expressions as a flexible formalism that facilitates the integration of a generic sentence realization system within end-to-end language processing applications. WIDL-expressions represent compactly probability distributions over finite sets of candidate realizations, and have optimal algorithms for realization via interpolation with language model probability distributions. We show the effectiveness of a WIDL-based NLG system in two sentence realization tasks: automatic translation and headline generation. 1 Introduction The Natural Language Generation (NLG) community has produced over the years a considerable number of generic sentence realization systems: Penman (Matthiessen and Bateman, 1991), FUF (Elhadad, 1991), Nitrogen (Knight and Hatzivassiloglou, 1995), Fergus (Bangalore and Rambow, 2000), HALogen (Langkilde-Geary, 2002), Amalgam (Corston-Oliver et al., 2002), etc. However, when it comes to end-to-end, text-totext applications – Machine Translation, Summarization, Question Answering – these generic systems either cannot be employed, or, in instances where they can be, the results are significantly below that of state-of-the-art, application-specific systems (Hajic et al., 2002; Habash, 2003). We believe two reasons explain this state of affairs. First, these generic NLG systems use input representation languages with complex syntax and semantics. These languages involve deep, semanticbased subject-verb or verb-object relations (such as ACTOR, AGENT, PATIENT, etc., for Penman and FUF), syntactic relations (such as subject, object, premod, etc., for HALogen), or lexical dependencies (Fergus, Amalgam). Such inputs cannot be accurately produced by state-of-the-art analysis components from arbitrary textual input in the context of text-to-text applications. Second, most of the recent systems (starting with Nitrogen) have adopted a hybrid approach to generation, which has increased their robustness. These hybrid systems use, in a first phase, symbolic knowledge to (over)generate a large set of candidate realizations, and, in a second phase, statistical knowledge about the target language (such as stochastic language models) to rank the candidate realizations and find the best scoring one. The disadvantage of the hybrid approach – from the perspective of integrating these systems within end-to-end applications – is that the two generation phases cannot be tightly coupled. More precisely, input-driven preferences and target language–driven preferences cannot be integrated in a true probabilistic model that can be trained and tuned for maximum performance. In this paper, we propose WIDL-expressions (WIDL stands for Weighted Interleave, Disjunction, and Lock, after the names of the main operators) as a representation formalism that facilitates the integration of a generic sentence realization system within end-to-end language applications. 
The WIDL formalism, an extension of the IDL-expressions formalism of Nederhof and Satta (2004), has several crucial properties that differentiate it from previously-proposed NLG representation formalisms. First, it has a simple syntax (expressions are built using four operators) and a simple, formal semantics (probability distributions over finite sets of strings). Second, it is a compact representation that grows linearly 1105 in the number of words available for generation (see Section 2). (In contrast, representations such as word lattices (Knight and Hatzivassiloglou, 1995) or non-recursive CFGs (Langkilde-Geary, 2002) require exponential space in the number of words available for generation (Nederhof and Satta, 2004).) Third, it has good computational properties, such as optimal algorithms for intersection with -gram language models (Section 3). Fourth, it is flexible with respect to the amount of linguistic processing required to produce WIDLexpressions directly from text (Sections 4 and 5). Fifth, it allows for a tight integration of inputspecific preferences and target-language preferences via interpolation of probability distributions using log-linear models. We show the effectiveness of our proposal by directly employing a generic WIDL-based generation system in two end-to-end tasks: machine translation and automatic headline generation. 2 The WIDL Representation Language 2.1 WIDL-expressions In this section, we introduce WIDL-expressions, a formal language used to compactly represent probability distributions over finite sets of strings. Given a finite alphabet of symbols  , atomic WIDL-expressions are of the form  , with   . For a WIDL-expression  , its semantics is a probability distribution  "! # $%'&)( , where *  ,+- . and / 0012 & . Complex WIDL-expressions are created from other WIDL-expressions, by employing the following four operators, as well as operator distribution functions 354 from an alphabet 6 . Weighted Disjunction. If 87 %595959:% <; are WIDL-expressions, then = >*?@A*7 %595959-% B; , with 3C ,+ &D%595959:% . ! # $%'&)( , specified such that FE-GIHKJML<N ? @KO 3CPRQ/S & , is a WIDLexpression. Its semantics is a probability distribution 0TU  ! # $%'&)( , where    V ; 4XW<7 * PY , and the probability values are induced by 3'C and 1 )B4R , &[Z]\^Z . For example, if _ >?@D0` %ba  , c @ed^fg hji'k l-monphji'k n)q , its semantics is a probability distribution )R over   j+-` %ba . , defined by rDsDt uwvXxzy|{wxX}){ d c @ x g { d i'k l and r sIt u~v xzy|{wx€A{ d c @ x n { di'k n . Precedence. If 7 % B are WIDL-expressions, then ‚  7„ƒ   is a WIDL-expression. Its semantics is a probability distribution | R  S! # $%'&)( , where … is the set of all strings that obey the precedence imposed over the arguments, and the probability values are induced by / 0†7) and / RB- . For example, if †7‡>ˆ?b‰50` %ba  , c ‰ dfgBh[i-k l-m nhŠi'k nq , and ˆ… >ˆ?Œ‹:0 %KŽ  , c ‹†d2fgFhi'k 'mon hi-k ‘'q , then ‡’†7 ƒ ˆ represents a probability distribution “  over the set p” +-`A % ` Ž•%ba  %ba|Ž . , defined by r sIt u~v xzy|{wxX}M–b{ d c ‰ x g { c ‹ x g { di'k ‘l , r sDt u~v xzy•{wxX}M—I{ d c ‰ x g { c ‹ x n { di-k ˜n , etc. Weighted Interleave. If 87 %595959:% <; are WIDLexpressions, then ™_š)?@A*7 % B %595959:% <; , with c @ˆ›5œ “fbžMŸŒ ¢¡M£'¤¡M£X¥§¦Œq) fK¦w 5¨b©¡w¦wqˆh«ª i'mbgw¬ , ­¯®°1±D²p; , specified such that e³ GIHKJMLBN ?@ O 3CA01’ & , is a WIDL-expression. Its semantics is a probability distribution R´U*  ! 
# $%'&)( , where   consists of all the possible interleavings of strings from  PY , &µZ¶\·Z , and the probability values are induced by 3-C and 1 )B4R . The distribution function 3 C is defined either explicitly, over ­´®¸°1±D²¹p; (the set of all permutations of elements), or implicitly, as 3-CP0DºM»I±D²½¼P±D²¹ ¾M . Because the set of argument permutations is a subset of all possible interleavings, 3-C also needs to specify the probability mass for the strings that are not argument permutations, 3-CP¿¾5»ÁÀ'Âñ¾) . For example, if   šM? @ 0` ƒ a§% - , c @ d_f5gÄnh i-k l)i'm*žMŸw ¢¡¢£5¤¡M£X¥§¦ÆÅÈÇÉ Ê ËÈÌ Í h i-kÎgKÏ5mŒ¦Œ 5¨b©8¡w¦ÄÅÇÈÉ Ê ËÈÌ Í h i-k i)Ïq , its semantics is a probability distribution | 0 , with domain   ]+-` a  % 5` a§% `A a . , defined by r sDt u~v xzy•{wxX}¢A–b{ d c @ x g8n { d[i'k l)i , r sIt u~v xzy|{wx€–b}¢A{ d Ð0ÑbÒ ËXÓÎԀÕXÌRÖÈÕXÌ Í5×zØ ‰ di-kÎgbÏ , rDsIt u~v€xzy|{wxX}M–wD{ d Ð0ÑbÒ ×ÙÔÅÛÚDÕz×ÛØ ‰ di'k i)Ï . Lock. If *Ü is a WIDL-expression, then ] Ý ÜX is a WIDL-expression. The semantic mapping / 0 is the same as R Ü  , except that   contains strings in which no additional symbol can be interleaved. For example, if   šM?@A Ý 0` ƒ a  % - , c @[d,fgÞn¸h i-k l)i'm|ž¢Ÿw ¢¡M£5¤)¡M£X¥§¦ßh¶i-k nMiq , its semantics is a probability distribution R , with domain    +-5` ao% ` a D. , defined by r sDt u~v xzy•{wxX}¢A–b{ d c @ x g8n { d i-k l)i , r sIt u~v xzy|{wx€–b}MI{ d Ð0Ñ¢Ò ËÈÓÔXÕXÌRÖXÕXÌ Í×àØ ‰ di-k nMi . In Figure 1, we show a more complex WIDLexpression. The probability distribution 3 7 associated with the operator š)?~‰ assigns probability 0.2 to the argument order á &oâ ; from a probability mass of 0.7, it assigns uniformly, for each of the remaining â½ã“ä‡& æå argument permutations, a permutation probability value of C)ç è é  $9X&ê . The 1106 š¢?~‰: Ý ¿ºMÀÁ² ¾5» ƒ I±D²Á…±-º¢ % >ˆ?Œ‹A Ý w²È± P± ¾ ƒ  »:º   % Ý -ºwºM± ƒ ²± P± ¾Mb %  ƒ z²Á % 3:7‡+:á &oâ ! $9 á % Dº)»:±D²½¼P±D² ¾   JÛL ! $9 Á% ¾5»ÁÀ'Âñ¾!   J!ÛL ! $9X& . % 3 ‡+ & ! $9#" å % á ! $9Ùâ åÁ. Figure 1: An example of a WIDL-expression. remaining probability mass of 0.1 is left for the 12 shuffles associated with the unlocked expression  ƒ z² , for a shuffle probability of C)ç7 7¹  $9Î$A$%$ . The list below enumerates some of the & ¾bº)²  %(' ¿¾bº)²  ) pairs that belong to the probability distribution defined by our example: rebels fighting turkish government in iraq 0.130 in iraq attacked rebels turkish goverment 0.049 in turkish goverment iraq rebels fighting 0.005 The following result characterizes an important representation property for WIDL-expressions. Theorem 1 A WIDL-expression  over  and 6 using atomic expressions has space complexity O( ), if the operator distribution functions of  have space complexity at most O( ). For proofs and more details regarding WIDLexpressions, we refer the interested reader to (Soricut, 2006). Theorem 1 ensures that highcomplexity hypothesis spaces can be represented efficiently by WIDL-expressions (Section 5). 2.2 WIDL-graphs and Probabilistic Finite-State Acceptors WIDL-graphs. Equivalent at the representation level with WIDL-expressions, WIDL-graphs allow for formulations of algorithms that process them. For each WIDL-expression  , there exists an equivalent WIDL-graph *  . As an example, we illustrate in Figure 2(a) the WIDL-graph corresponding to the WIDL-expression in Figure 1. WIDL-graphs have an initial vertex +-, and a final vertex +%. . Vertices +PC , +0/ , and +AbC with in-going edges labeled 1 7 ? ‰ , 1  ? ‰ , and 132 ? 
‰ , respectively, and vertices + é , +½754 , and +A 2 with out-going edges labeled 6 7 ?~‰ , 6  ?b‰ , and 632 ?~‰ , respectively, result from the expansion of the š)?~‰ operator. Vertices +Áè and + 7 2 with in-going edges labeled  7 ?¹‹ ,   ?¹‹ , respectively, and vertices +17¹ and +½757 with out-going edges labeled  7 ?Œ‹ ,   ?Œ‹ , respectively, result from the expansion of the >?¹‹ operator. With each WIDL-graph *  , we associate a probability distribution. The domain of this distribution is the finite collection of strings that can be generated from the paths of a WIDL-specific traversal of *  , starting from +8, and ending in +%. . Each path (and its associated string) has a probability value induced by the probability distribution functions associated with the edge labels of *  . A WIDL-expression  and its corresponding WIDLgraph *  are said to be equivalent because they represent the same distribution ) . WIDL-graphs and Probabilistic FSA. Probabilistic finite-state acceptors (pFSA) are a wellknown formalism for representing probability distributions (Mohri et al., 2002). For a WIDLexpression  , we define a mapping, called UNFOLD, between the WIDL-graph *  and a pFSA 9  . A state : in 9  is created for each set of WIDL-graph vertices that can be reached simultaneously when traversing the graph. State : records, in what we call a š -stack (interleave stack), the order in which 1 4 ? , 6 4 ? –bordered subgraphs are traversed. Consider Figure 2(b), in which state # +PC;+04<+D 2 % + & ?b‰ â áÁ. ( (at the bottom) corresponds to reaching vertices +ÁC % +4 , and +A 2 (see the WIDL-graph in Figure 2(a)), by first reaching vertex +P 2 (inside the 1 2 ?~‰ , 6 2 ?~‰ –bordered subgraph), and then reaching vertex + 4 (inside the 1  ?b‰ , 6  ?b‰ –bordered sub-graph). A transition labeled  between two 9  states :P7 and :: in 9  exists if there exists a vertex += in the description of : 7 and a vertex + E in the description of :I such that there exists a path in *  between += and + E , and  is the only  -labeled transitions in this path. For example, transition # +AC;+04<+D 2 % + & ?b‰ â áÁ. ( ?>A@>CB D ! # +AC<+½754<+A 2 % + & ?~‰ â áÁ. ( (Figure 2(b)) results from unfolding the path +84FE ! +½7¹C #>C@>AB D ! +½7b7GE !H+½7¹ O ‰ Ð ‹ !I+½754 (Figure 2(a)). A transition labeled J between two 9p states : 7 and :  in 9  exists if there exists a vertex += in the description of : 7 and vertices + 7 E %595959-% + ; E in the description of :: , such that +=LK Y Ð !M+ 4 E N*  , &Z«\ Z (see transition # +O, %~( E ! # +AC<+/;+AbC % + & ?~‰;)w?~‰. ( ), or if there exists vertices + 7 = %595959-% + ; = in the description of : 7 and vertex + E in the description of :  , such that + 4 = P:Y Ð !Q+ E R*  , & Z´\FZ . The J -transitions 1107                                         ! ! " "# # $ $% % & &' ' ( () ) * *+ + , ,. 
./ / 0 01 1 2 23 3 45454 45454 45454 45454 45454 45454 65656 65656 65656 65656 65656 65656 75757 75757 75757 75757 75757 75757 75757 85858 85858 85858 85858 85858 85858 95959 95959 95959 95959 95959 95959 :5:5: :5:5: :5:5: :5:5: :5:5: :5:5: ;5;5;5; ;5;5;5; ;5;5;5; ;5;5;5; ;5;5;5; ;5;5;5; ;5;5;5; <5<5<5< <5<5<5< <5<5<5< <5<5<5< <5<5<5< <5<5<5< attacked attacked attacked attacked attacked rebels rebels rebels fighting rebels rebels rebels rebels rebels fighting fighting fighting fighting turkish turkish turkish turkish turkish turkish turkish government government government government government government in iraq in in in in in iraq iraq iraq iraq iraq ε ε δ1 government turkish :0.3 attacked :0.1 :0.3 :1 :1 rebels :0.2 :1 fighting :1 rebels :1 δ1 :0.18 :0.18 :1 rebels :1 rebels :1 ε 0 6 21 0 6 0 23 9 23 9 0 21 11 0 20 9 0 1520 6 20 2 0 21 s e (b) (a) rebels rebels fighting ( ( ) 2 δ1 δ1 δ1 δ1 δ1 δ1 δ2 δ2 δ2 1 2 3 2 1 1 3 )1 δ2 attacked in iraq ε ε ε ε ε ε ε ε ε ε turkish government 1 1 1 1 1 1 1 1 1 1 1 2 v v v v v v v v v v v v v v v v v v v v v v v v 1 v s e 0 1 2 3 4 v6 7 8 9 10 11 12 13 14 15 16 17 18 19 20 23 22 21 5 0 6 20 0 δ1 23 19 0 23 19 [v , ] 0 δ1 23 19 0 23 19 [v v v ,<32] [v v v ,<32] [v v v ,<3] [v v v ,<3] [v v v ,<32] [v v v ,<0] [v v v ,<32] [v v v ,<2] [v v v ,<2] [v , ] [v v v ,<1 ] ε ε δ1 δ1 δ1 δ1 δ1 δ1 δ1 δ1 δ1 δ1 0.1 } shuffles 0.7, δ1= { 2 1 3 0.2, other perms δ2 = { 1 0.35 } 0.65, 2 δ1 [v v v ,< > ] δ1 [v v v ,< 0 > ] [v v v ,< 321 > ] δ1 δ1 Figure 2: The WIDL-graph corresponding to the WIDL-expression in Figure 1 is shown in (a). The probabilistic finite-state acceptor (pFSA) that corresponds to the WIDL-graph is shown in (b). are responsible for adding and removing, respectively, the & ? , )~? symbols in the š -stack. The probabilities associated with 9  transitions are computed using the vertex set and the š -stack of each 9  state, together with the distribution functions of the > and š operators. For a detailed presentation of the UNFOLD relation we refer the reader to (Soricut, 2006). 3 Stochastic Language Generation from WIDL-expressions 3.1 Interpolating Probability Distributions in a Log-linear Framework Let us assume a finite set = of strings over a finite alphabet  , representing the set of possible sentence realizations. In a log-linear framework, we have a vector of feature functions >  & > C > 7 95959 >@? ) , and a vector of parameters A¶ & A C5A|7 95959 A ? ) . For any B  = , the interpolated probability CÞDB: can be written under a log-linear model as in Equation 1: C·DBI* EGFIH #  ? J W|C A J > J DBI (  .LK EGFIH #  ? J W|C A J > J DB Ü  ( (1) We can formulate the search problem of finding the most probable realization B under this model as shown in Equation 2, and therefore we do not need to be concerned about computing expensive normalization factors. `NMPORQÃ` F . CÞDB:*T`NMPOSQÃ` F . EGFIH #  ? J W|C A J > J DB: ( (2) For a given WIDL-expression  over  , the set = is defined by  Ä0 / b , and feature function > C is taken to be R . Any language model we want to employ may be added in Equation 2 as a feature function > 4 , \UT& . 3.2 Algorithms for Intersecting WIDL-expressions with Language Models Algorithm WIDL-NGLM-A V (Figure 3) solves the search problem defined by Equation 2 for a WIDL-expression  (which provides feature function > C ) and W -gram language models (which provide feature functions >§7 %595959:% > ?  . 
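To make the objective in Equations 1 and 2 concrete, the following is a minimal sketch (not the authors' implementation) of unnormalized log-linear selection over an explicitly enumerated candidate set. The candidate strings and all probability values are toy numbers loosely modeled on the Figure 1 example; the feature functions are written as log-probabilities (a common convention — the equations above wrap the weighted sum in an exponential, which does not change the argmax), and the weights stand in for the lambda vector that would be tuned discriminatively.

```python
import math

def loglinear_score(y, features, weights):
    """Unnormalized score exp(sum_i lambda_i * f_i(y)) from Equation 1.

    The normalization factor is dropped, as in Equation 2, because it
    does not affect which candidate maximizes the score.
    """
    return math.exp(sum(w * f(y) for w, f in zip(weights, features)))

def best_realization(candidates, features, weights):
    """Return the highest-scoring candidate realization (Equation 2)."""
    return max(candidates, key=lambda y: loglinear_score(y, features, weights))

# Toy candidate set with made-up WIDL and language-model probabilities.
widl_probs = {
    "rebels fighting turkish government in iraq": 0.130,
    "in iraq attacked rebels turkish government": 0.049,
    "in turkish government iraq rebels fighting": 0.005,
}
lm_probs = {
    "rebels fighting turkish government in iraq": 1e-9,
    "in iraq attacked rebels turkish government": 4e-11,
    "in turkish government iraq rebels fighting": 2e-13,
}
features = [
    lambda y: math.log(widl_probs[y]),  # f_0: the WIDL-expression distribution
    lambda y: math.log(lm_probs[y]),    # f_1: an n-gram language model
]
weights = [1.0, 0.8]                    # stand-in lambda values

print(best_realization(widl_probs, features, weights))
```

Enumerating the candidate set like this is feasible only for toy examples; the WIDL-NGLM-A* algorithm performs the same maximization without ever listing the candidates.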
It does so by incrementally computing UNFOLD for *  (i.e., on-demand computation of the corresponding pFSA 9  ), by keeping track of a set of active states, called XZY\[^]`_Na . The set of newly UNFOLDed states is called bdc5eGfNgih . Using Equation 1 (unnormalized), we EVALUATE the current C·DB: scores for the bIc\ejfNgkh states. Additionally, EVALUATE uses an admissible heuristic function to compute future (admissible) scores for the bIc\eGflgkh states. The algorithm PUSHes each state from the current bdc5eGfNgih into a priority queue m , which sorts the states according to their total score (current n admissible). In the next iteration, XZY5[^]o_la is a singleton set containing the state POPed out from the top of m . The admissible heuristic function we use is the one defined in (Soricut and Marcu, 2005), using Equation 1 (unnormalized) for computing the event costs. Given the existence of the admissible heuristic and the monotonicity property of the unfolding provided by the priority queue m , the proof for A V optimality (Russell and Norvig, 1995) guarantees that WIDL-NGLM-A V finds a path in 9qp that provides an optimal solution. 1108 WIDL-NGLM-A V*  % > % A“ 1 XZY5[^]o_la+ # +%, % +A. ( . 2  X& 3 while  X 4 do bIc\ejfNgkhUNFOLD *  % XZY5[^]o_laD 5 EVALUATE  bdc5eGfNgih % > % A| 6 if XZY\[^]`_Na ‡+ # +0. % +A. ( . 7 then  X$ 8 for each  [ Xl[ a in bIc\ejfNgih do PUSH  m %  [ Xl[ aP X Y\[^]`_NaPOP  m  9 return XZY5[^]o_la Figure 3: A V algorithm for interpolating WIDLexpressions with -gram language models. An important property of the WIDL-NGLM-A V algorithm is that the UNFOLD relation (and, implicitly, the 9p acceptor) is computed only partially, for those states for which the total cost is less than the cost of the optimal path. This results in important savings, both in space and time, over simply running a single-source shortest-path algorithm for directed acyclic graphs (Cormen et al., 2001) over the full acceptor 9… (Soricut and Marcu, 2005). 4 Headline Generation using WIDL-expressions We employ the WIDL formalism (Section 2) and the WIDL-NGLM-A V algorithm (Section 3) in a summarization application that aims at producing both informative and fluent headlines. Our headlines are generated in an abstractive, bottom-up manner, starting from words and phrases. A more common, extractive approach operates top-down, by starting from an extracted sentence that is compressed (Dorr et al., 2003) and annotated with additional information (Zajic et al., 2004). Automatic Creation of WIDL-expressions for Headline Generation. We generate WIDLexpressions starting from an input document. First, we extract a weighted list of topic keywords from the input document using the algorithm of Zhou and Hovy (2003). This list is enriched with phrases created from the lexical dependencies the topic keywords have in the input document. We associate probability distributions with these phrases using their frequency (we assume Keywords + iraq 0.32, syria 0.25, rebels 0.22, kurdish 0.17, turkish 0.14, attack 0.10 . Phrases iraq + in iraq 0.4, northern iraq 0.5,iraq and iran 0.1 . , syria + into syria 0.6, and syria 0.4 . rebels + attacked rebels 0.7,rebels fighting 0.3 . . . . WIDL-expression & trigram interpolation TURKISH GOVERNMENT ATTACKED REBELS IN IRAQ AND SYRIA Figure 4: Input and output for our automatic headline generation system. 
that higher frequency is indicative of increased importance) and their position in the document (we assume that proximity to the beginning of the document is also indicative of importance). In Figure 4, we present an example of input keywords and lexical-dependency phrases automatically extracted from a document describing incidents at the Turkey-Iraq border. The algorithm for producing WIDLexpressions combines the lexical-dependency phrases for each keyword using a > operator with the associated probability values for each phrase multiplied with the probability value of each topic keyword. It then combines all the > -headed expressions into a single WIDL-expression using a š operator with uniform probability. The WIDLexpression in Figure 1 is a (scaled-down) example of the expressions created by this algorithm. On average, a WIDL-expression created by this algorithm, using  " keywords and an average of Qà ê lexical-dependency phrases per keyword, compactly encodes a candidate set of about 3 million possible realizations. As the specification of the šM? operator takes space à &  for uniform 3 , Theorem 1 guarantees that the space complexity of these expressions is · Q/ . Finally, we generate headlines from WIDLexpressions using the WIDL-NGLM-A V algorithm, which interpolates the probability distributions represented by the WIDL-expressions with -gram language model distributions. The output presented in Figure 4 is the most likely headline realization produced by our system. Headline Generation Evaluation. To evaluate the accuracy of our headline generation system, we use the documents from the DUC 2003 evaluation competition. Half of these documents are used as development set (283 documents), 1109 ALG (uni) (bi) Len. Rouge  Rouge  Extractive Lead10 458 114 9.9 20.8 11.1 HedgeTrimmer  399 104 7.4 18.1 9.9 Topiary  576 115 9.9 26.2 12.5 Abstractive Keywords 585 22 9.9 26.6 5.5 Webcl 311 76 7.3 14.1 7.5 WIDL-A  562 126 10.0 25.5 12.9 Table 1: Headline generation evaluation. We compare extractive algorithms against abstractive algorithms, including our WIDL-based algorithm. and the other half is used as test set (273 documents). We automatically measure performance by comparing the produced headlines against one reference headline produced by a human using ROUGE  (Lin, 2004). For each input document, we train two language models, using the SRI Language Model Toolkit (with modified Kneser-Ney smoothing). A general trigram language model, trained on 170M English words from the Wall Street Journal, is used to model fluency. A document-specific trigram language model, trained on-the-fly for each input document, accounts for both fluency and content validity. We also employ a word-count model (which counts the number of words in a proposed realization) and a phrase-count model (which counts the number of phrases in a proposed realization), which allow us to learn to produce headlines that have restrictions in the number of words allowed (10, in our case). The interpolation weights A (Equation 2) are trained using discriminative training (Och, 2003) using ROUGE  as the objective function, on the development set. The results are presented in Table 1. We compare the performance of several extractive algorithms (which operate on an extracted sentence to arrive at a headline) against several abstractive algorithms (which create headlines starting from scratch). For the extractive algorithms, Lead10 is a baseline which simply proposes as headline the lead sentence, cut after the first 10 words. 
HedgeTrimmer  is our implementation of the Hedge Trimer system (Dorr et al., 2003), and Topiary  is our implementation of the Topiary system (Zajic et al., 2004). For the abstractive algorithms, Keywords is a baseline that proposes as headline the sequence of topic keywords, Webcl is the system THREE GORGES PROJECT IN CHINA HAS WON APPROVAL WATER IS LINK BETWEEN CLUSTER OF E. COLI CASES SRI LANKA ’S JOINT VENTURE TO EXPAND EXPORTS OPPOSITION TO EUROPEAN UNION SINGLE CURRENCY EURO OF INDIA AND BANGLADESH WATER BARRAGE Figure 5: Headlines generated automatically using a WIDL-based sentence realization system. described in (Zhou and Hovy, 2003), and WIDLA  is the algorithm described in this paper. This evaluation shows that our WIDL-based approach to generation is capable of obtaining headlines that compare favorably, in both content and fluency, with extractive, state-of-the-art results (Zajic et al., 2004), while it outperforms a previously-proposed abstractive system by a wide margin (Zhou and Hovy, 2003). Also note that our evaluation makes these results directly comparable, as they use the same parsing and topic identification algorithms. In Figure 5, we present a sample of headlines produced by our system, which includes both good and not-so-good outputs. 5 Machine Translation using WIDL-expressions We also employ our WIDL-based realization engine in a machine translation application that uses a two-phase generation approach: in a first phase, WIDL-expressions representing large sets of possible translations are created from input foreignlanguage sentences. In a second phase, we use our generic, WIDL-based sentence realization engine to intersect WIDL-expressions with an gram language model. In the experiments reported here, we translate between Chinese (source language) and English (target language). Automatic Creation of WIDL-expressions for MT. We generate WIDL-expressions from Chinese strings by exploiting a phrase-based translation table (Koehn et al., 2003). We use an algorithm resembling probabilistic bottom-up parsing to build a WIDL-expression for an input Chinese string: each contiguous span  \K%  over a Chinese string 4 = is considered a possible “constituent”, and the “non-terminals” associated with each constituent are the English phrase translations = E 4 = that correspond in the translation table to the Chinese string 84 = . Multiple-word English phrases, such as  , are represented as WIDL-expressions using the precedence ( ƒ ) and 1110               "! #%$&' " '%( )*+,-' " ./0 .12# 3+'"43567 ( 8 % #9 #:%( 7 7 :%( 7 7  ,:%( 7 7 ( ) ;< = ->@? 5 02/A BDC EF G C A H I J ; K)L = M>N? JOI K; PQ&K; KPKR; J S K; TQ) = ! >@? JMI KR; J P&K; JU K; JV K; KWR T I K; PQKR; J KK; PTK; PXR T I K; U WK; J QK; P KKR; U X) P I K; P KKR; V W&K; U S K; P S L P I K; JU K; JJ K; JV KR; K W) = 9 >N? JOI K; XQ&K; PPKR; TQ&K; XP) UYI K; J XK; TWK; T U KR; T U  T I K; TPKR; QKK; QPK; TXR Q I K; J KK; PPK; J KKR; J X L P I K; J TKR; J X&K; TTK; JJ L WIDL-expression & trigram interpolation gunman was killed by police . Figure 6: A Chinese string is converted into a WIDL-expression, which provides a translation as the best scoring hypothesis under the interpolation with a trigram language model. lock ( Ý ) operators, as Ý   †ƒ  8ƒ    . To limit the number of possible translations = E 4 = corresponding to a Chinese span 84 = , we use a probabilistic beam Z and a histogram beam : to beam out low probability translation alternatives. 
At this point, each 4 = span is “tiled” with likely translations = E 4 = taken from the translation table. Tiles that are adjacent are joined together in a larger tile by a šM? operator, where 3  +P¼P±D²…¾ >[R\ L•J ] \>  ! & . . That is, reordering of the component tiles are permitted by the š5? operators (assigned non-zero probability), but the longer the movement from the original order of the tiles, the lower the probability. (This distortion model is similar with the one used in (Koehn, 2004).) When multiple tiles are available for the same span  \K%  , they are joined by a >*? operator, where 3 is specified by the probability distributions specified in the translation table. Usually, statistical phrase-based translation tables specify not only one, but multiple distributions that account for context preferences. In our experiments, we consider four probability distributions: '  ^`_ BD %('  B_ ^o %('Ya .b  ^`_ BD , and 'ca . b  B3_ ^| , where ^ and B are Chinese-English phrase translations as they appear in the translation table. In Figure 6, we show an example of WIDL-expression created by this algorithm1. On average, a WIDL-expression created by this algorithm, using an average of  â0$ tiles per sentence (for an average input sentence length of 30 words) and an average of Q·ed possible translations per tile, encodes a candidate set of about 10 é C possible translations. As the specification of the š¢? operators takes space à &  , Theorem 1 1English reference: the gunman was shot dead by the police. guarantees that these WIDL-expressions encode compactly these huge spaces in · Q/ . In the second phase, we employ our WIDLbased realization engine to interpolate the distribution probabilities of WIDL-expressions with a trigram language model. In the notation of Equation 2, we use four feature functions >|C %595959:% > 2 for the WIDL-expression distributions (one for each probability distribution encoded); a feature function >Of for a trigram language model; a feature function > é for a word-count model, and a feature function > / for a phrase-count model. As acknowledged in the Machine Translation literature (Germann et al., 2003), full A V search is not usually possible, due to the large size of the search spaces. We therefore use an approximation algorithm, called WIDL-NGLM-A V E , which considers for unfolding only the nodes extracted from the priority queue m which already unfolded a path of length greater than or equal to the maximum length already unfolded minus Q (we used QÃTá in the experiments reported here). MT Performance Evaluation. When evaluated against the state-of-the-art, phrase-based decoder Pharaoh (Koehn, 2004), using the same experimental conditions – translation table trained on the FBIS corpus (7.2M Chinese words and 9.2M English words of parallel text), trigram language model trained on 155M words of English newswire, interpolation weights A (Equation 2) trained using discriminative training (Och, 2003) (on the 2002 NIST MT evaluation set), probabilistic beam Z set to 0.01, histogram beam : set to 10 – and BLEU (Papineni et al., 2002) as our metric, the WIDL-NGLM-A V  algorithm produces translations that have a BLEU score of 0.2570, while Pharaoh translations have a BLEU score of 0.2635. The difference is not statistically significant at 95% confidence level. These results show that the WIDL-based approach to machine translation is powerful enough to achieve translation accuracy comparable with state-of-the-art systems in machine translation. 
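To make the tiling step at the start of this section more concrete, here is a minimal sketch, under stated assumptions, of collecting translation options from a phrase table for every contiguous source span and pruning them with a probabilistic beam and a histogram beam. The phrase-table layout, the single combined score per option (standing in for the four translation-table distributions), the reading of the probabilistic beam as a threshold relative to the best option in a span, and the pinyin tokens in the toy table are all assumptions of the sketch, not details from the paper; joining same-span tiles with the weighted-disjunction operator and adjacent tiles with the weighted-interleave operator (under the distortion-style distribution described above) is left out.

```python
from collections import defaultdict

def collect_tiles(src_tokens, phrase_table, prob_beam=0.01, hist_beam=10):
    """Tile every contiguous source span with candidate translations.

    phrase_table maps a tuple of source tokens to (translation, score)
    pairs. A candidate survives only if its score is at least
    prob_beam * (best score for the span) and it is among the hist_beam
    highest-scoring candidates for that span.
    """
    tiles = defaultdict(list)
    n = len(src_tokens)
    for i in range(n):
        for j in range(i + 1, n + 1):
            options = phrase_table.get(tuple(src_tokens[i:j]), [])
            if not options:
                continue
            options = sorted(options, key=lambda x: x[1], reverse=True)
            best = options[0][1]
            kept = [(e, s) for e, s in options if s >= prob_beam * best]
            tiles[(i, j)] = kept[:hist_beam]
    return tiles

# Toy phrase table with pinyin stand-ins for the Chinese tokens of Figure 6.
phrase_table = {
    ("qiangshou",): [("the gunman", 0.5), ("gunman", 0.3), ("shooter", 0.004)],
    ("bei", "jingfang"): [("by the police", 0.6), ("by police", 0.35)],
    ("jibi",): [("was shot dead", 0.4), ("shot dead", 0.4), ("killed", 0.15)],
}
tiles = collect_tiles(["qiangshou", "bei", "jingfang", "jibi"], phrase_table)
# With prob_beam=0.01, "shooter" (0.004 < 0.01 * 0.5) is beamed out.
```

Each surviving span entry corresponds to a tile; combining the tiles with the disjunction and interleave operators then yields the WIDL-expression that is interpolated with the trigram language model in the second phase.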
6 Conclusions The approach to sentence realization we advocate in this paper relies on WIDL-expressions, a formal language with convenient theoretical properties that can accommodate a wide range of generation scenarios. In the worst case, one can work with simple bags of words that encode no context 1111 preferences (Soricut and Marcu, 2005). One can also work with bags of words and phrases that encode context preferences, a scenario that applies to current approaches in statistical machine translation (Section 5). And one can also encode context and ordering preferences typically used in summarization (Section 4). The generation engine we describe enables a tight coupling of content selection with sentence realization preferences. Its algorithm comes with theoretical guarantees about its optimality. Because the requirements for producing WIDLexpressions are minimal, our WIDL-based generation engine can be employed, with state-of-the-art results, in a variety of text-to-text applications. Acknowledgments This work was partially supported under the GALE program of the Defense Advanced Research Projects Agency, Contract No. HR0011-06-C-0022. References Srinivas Bangalore and Owen Rambow. 2000. Using TAG, a tree model, and a language model for generation. In Proceedings of the Fifth International Workshop on Tree-Adjoining Grammars (TAG+). Thomas H. Cormen, Charles E. Leiserson, Ronald L. Rivest, and Clifford Stein. 2001. Introduction to Algorithms. The MIT Press and McGraw-Hill. Simon Corston-Oliver, Michael Gamon, Eric K. Ringger, and Robert Moore. 2002. An overview of Amalgam: A machine-learned generation module. In Proceedings of the INLG. Bonnie Dorr, David Zajic, and Richard Schwartz. 2003. Hedge trimmer: a parse-and-trim approach to headline generation. In Proceedings of the HLTNAACL Text Summarization Workshop, pages 1–8. Michael Elhadad. 1991. FUF User manual — version 5.0. Technical Report CUCS-038-91, Department of Computer Science, Columbia University. Ulrich Germann, Mike Jahr, Kevin Knight, Daniel Marcu, and Kenji Yamada. 2003. Fast decoding and optimal decoding for machine translation. Artificial Intelligence, 154(1–2):127-143. Nizar Habash. 2003. Matador: A large-scale SpanishEnglish GHMT system. In Proceedings of AMTA. J. Hajic, M. Cmejrek, B. Dorr, Y. Ding, J. Eisner, D. Gildea, T. Koo, K. Parton, G. Penn, D. Radev, and O. Rambow. 2002. Natural language generation in the context of machine translation. Summer workshop final report, Johns Hopkins University. K. Knight and V. Hatzivassiloglou. 1995. Two level, many-path generation. In Proceedings of the ACL. Philipp Koehn, Franz J. Och, and Daniel Marcu. 2003. Statistical phrase based translation. In Proceedings of the HLT-NAACL, pages 127–133. Philipp Koehn. 2004. Pharaoh: a beam search decoder for phrase-based statistical machine transltion models. In Proceedings of the AMTA, pages 115–124. I. Langkilde-Geary. 2002. A foundation for generalpurpose natural language generation: sentence realization using probabilistic models of language. Ph.D. thesis, University of Southern California. Chin-Yew Lin. 2004. ROUGE: a package for automatic evaluation of summaries. In Proceedings of the Workshop on Text Summarization Branches Out (WAS 2004). Christian Matthiessen and John Bateman. 1991. Text Generation and Systemic-Functional Linguistic. Pinter Publishers, London. Mehryar Mohri, Fernando Pereira, and Michael Riley. 2002. Weighted finite-state transducers in speech recognition. Computer Speech and Language, 16(1):69–88. 
Mark-Jan Nederhof and Giorgio Satta. 2004. IDLexpressions: a formalism for representing and parsing finite languages in natural language processing. Journal of Artificial Intelligence Research, pages 287–317. Franz Josef Och. 2003. Minimum error rate training in statistical machine translation. In Proceedings of the ACL, pages 160–167. Kishore Papineni, Salim Roukos, Todd Ward, and WeiJing Zhu. 2002. BLEU: a method for automatic evaluation of machine translation. In In Proceedings of the ACL, pages 311–318. Stuart Russell and Peter Norvig. 1995. Artificial Intelligence. A Modern Approach. Prentice Hall. Radu Soricut and Daniel Marcu. 2005. Towards developing generation algorithms for text-to-text applications. In Proceedings of the ACL, pages 66–74. Radu Soricut. 2006. Natural Language Generation for Text-to-Text Applications Using an Information-Slim Representation. Ph.D. thesis, University of Southern California. David Zajic, Bonnie J. Dorr, and Richard Schwartz. 2004. BBN/UMD at DUC-2004: Topiary. In Proceedings of the NAACL Workshop on Document Understanding, pages 112–119. Liang Zhou and Eduard Hovy. 2003. Headline summarization at ISI. In Proceedings of the NAACL Workshop on Document Understanding. 1112
Proceedings of the 21st International Conference on Computational Linguistics and 44th Annual Meeting of the ACL, pages 105–112, Sydney, July 2006. c⃝2006 Association for Computational Linguistics Meaningful Clustering of Senses Helps Boost Word Sense Disambiguation Performance Roberto Navigli Dipartimento di Informatica Universit`a di Roma “La Sapienza” Roma, Italy [email protected] Abstract Fine-grained sense distinctions are one of the major obstacles to successful Word Sense Disambiguation. In this paper, we present a method for reducing the granularity of the WordNet sense inventory based on the mapping to a manually crafted dictionary encoding sense hierarchies, namely the Oxford Dictionary of English. We assess the quality of the mapping and the induced clustering, and evaluate the performance of coarse WSD systems in the Senseval-3 English all-words task. 1 Introduction Word Sense Disambiguation (WSD) is undoubtedly one of the hardest tasks in the field of Natural Language Processing. Even though some recent studies report benefits in the use of WSD in specific applications (e.g. Vickrey et al. (2005) and Stokoe (2005)), the present performance of the best ranking WSD systems does not provide a sufficient degree of accuracy to enable real-world, language-aware applications. Most of the disambiguation approaches adopt the WordNet dictionary (Fellbaum, 1998) as a sense inventory, thanks to its free availability, wide coverage, and existence of a number of standard test sets based on it. Unfortunately, WordNet is a fine-grained resource, encoding sense distinctions that are often difficult to recognize even for human annotators (Edmonds and Kilgariff, 1998). Recent estimations of the inter-annotator agreement when using the WordNet inventory report figures of 72.5% agreement in the preparation of the English all-words test set at Senseval-3 (Snyder and Palmer, 2004) and 67.3% on the Open Mind Word Expert annotation exercise (Chklovski and Mihalcea, 2002). These numbers lead us to believe that a credible upper bound for unrestricted fine-grained WSD is around 70%, a figure that state-of-the-art automatic systems find it difficult to outperform. Furthermore, even if a system were able to exceed such an upper bound, it would be unclear how to interpret such a result. It seems therefore that the major obstacle to effective WSD is the fine granularity of the WordNet sense inventory, rather than the performance of the best disambiguation systems. Interestingly, Ng et al. (1999) show that, when a coarse-grained sense inventory is adopted, the increase in interannotator agreement is much higher than the reduction of the polysemy degree. Following these observations, the main question that we tackle in this paper is: can we produce and evaluate coarse-grained sense distinctions and show that they help boost disambiguation on standard test sets? We believe that this is a crucial research topic in the field of WSD, that could potentially benefit several application areas. The contribution of this paper is two-fold. First, we provide a wide-coverage method for clustering WordNet senses via a mapping to a coarse-grained sense inventory, namely the Oxford Dictionary of English (Soanes and Stevenson, 2003) (Section 2). We show that this method is well-founded and accurate with respect to manually-made clusterings (Section 3). Second, we evaluate the performance of WSD systems when using coarse-grained sense inventories (Section 4). 
We conclude the paper with an account of related work (Section 5), and some final remarks (Section 6). 105 2 Producing a Coarse-Grained Sense Inventory In this section, we present an approach to the automatic construction of a coarse-grained sense inventory based on the mapping of WordNet senses to coarse senses in the Oxford Dictionary of English. In section 2.1, we introduce the two dictionaries, in Section 2.2 we illustrate the creation of sense descriptions from both resources, while in Section 2.3 we describe a lexical and a semantic method for mapping sense descriptions of WordNet senses to ODE coarse entries. 2.1 The Dictionaries WordNet (Fellbaum, 1998) is a computational lexicon of English which encodes concepts as synonym sets (synsets), according to psycholinguistic principles. For each word sense, WordNet provides a gloss (i.e. a textual definition) and a set of relations such as hypernymy (e.g. apple kind-of edible fruit), meronymy (e.g. computer has-part CPU), etc. The Oxford Dictionary of English (ODE) (Soanes and Stevenson, 2003)1 provides a hierarchical structure of senses, distinguishing between homonymy (i.e. completely distinct senses, like race as a competition and race as a taxonomic group) and polysemy (e.g. race as a channel and as a current). Each polysemous sense is further divided into a core sense and a set of subsenses. For each sense (both core and subsenses), the ODE provides a textual definition, and possibly hypernyms and domain labels. Excluding monosemous senses, the ODE has an average number of 2.56 senses per word compared to the average polysemy of 3.21 in WordNet on the same words (with peaks for verbs of 2.73 and 3.75 senses, respectively). In Table 1 we show an excerpt of the sense inventories of the noun race as provided by both dictionaries2. The ODE identifies 3 homonyms and 3 polysemous senses for the first homonym, while WordNet encodes a flat list of 6 senses, some of which strongly related (e.g. race#1 and race#3). Also, the ODE provides a sense (ginger 1The ODE was kindly made available by Ken Litkowski (CL Research) in the context of a license agreement. 2In the following, we denote a WordNet sense with the convention w#p#i where w is a word, p a part of speech and i is a sense number; analogously, we denote an ODE sense with the convention w#p#h.k where h is the homonym number and k is the k-th polysemous entry under homonym h. root) which is not taken into account in WordNet. The structure of the ODE senses is clearly hierarchical: if we were able to map with a high accuracy WordNet senses to ODE entries, then a sense clustering could be trivially induced from the mapping. As a result, the granularity of the WordNet inventory would be drastically reduced. Furthermore, disregarding errors, the clustering would be well-founded, as the ODE sense groupings were manually crafted by expert lexicographers. In the next section we illustrate a general way of constructing sense descriptions that we use for determining a complete, automatic mapping between the two dictionaries. 
2.2 Constructing Sense Descriptions For each word w, and for each sense S of w in a given dictionary D ∈{WORDNET, ODE}, we construct a sense description dD(S) as a bag of words: dD(S) = def D(S) ∪hyperD(S) ∪domainsD(S) where: • def D(S) is the set of words in the textual definition of S (excluding usage examples), automatically lemmatized and partof-speech tagged with the RASP statistical parser (Briscoe and Carroll, 2002); • hyperD(S) is the set of direct hypernyms of S in the taxonomy hierarchy of D (∅if hypernymy is not available); • domainsD(S) includes the set of domain labels possibly assigned to sense S (∅when no domain is assigned). Specifically, in the case of WordNet, we generate def WN(S) from the gloss of S, hyperWN(S) from the noun and verb taxonomy, and domainsWN(S) from the subject field codes, i.e. domain labels produced semi-automatically by Magnini and Cavagli`a (2000) for each WordNet synset (we exclude the general-purpose label, called FACTOTUM). For example, for the first WordNet sense of race#n we obtain the following description: dWN(race#n#1) = {competition#n} ∪ {contest#n} ∪{POLITICS#N, SPORT#N} In the case of the ODE, def ODE(S) is generated from the definitions of the core sense and the subsenses of the entry S. Hypernymy (for nouns only) and domain labels, when available, are included in the respective sets hyperODE(S) 106 Table 1: The sense inventory of race#n in WordNet and ODE (definitions are abridged, bullets (•) indicate a subsense in the ODE, arrows (→) indicate hypernymy, DOMAIN LABELS are in small caps). race#n (WordNet) #1 Any competition (→contest). #2 People who are believed to belong to the same genetic stock (→group). #3 A contest of speed (→contest). #4 The flow of air that is driven backwards by an aircraft propeller (→flow). #5 A taxonomic group that is a division of a species; usually arises as a consequence of geographical isolation within a species (→taxonomic group). #6 A canal for a current of water (→canal). race#n (ODE) #1.1 Core: SPORT A competition between runners, horses, vehicles, etc. • RACING A series of such competitions for horses or dogs • A situation in which individuals or groups compete (→contest) • ASTRONOMY The course of the sun or moon through the heavens (→ trajectory). #1.2 Core: NAUTICAL A strong or rapid current (→flow). #1.3 Core: A groove, channel, or passage. • MECHANICS A water channel • Smooth groove or guide for balls (→ indentation, conduit) • FARMING Fenced passageway in a stockyard (→route) • TEXTILES The channel along which the shuttle moves. #2.1 Core: ANTHROPOLOGY Division of humankind (→ethnic group). • The condition of belonging to a racial division or group • A group of people sharing the same culture, history, language • BIOLOGY A group of people descended from a common ancestor. #3.1 Core: BOTANY, FOOD A ginger root (→plant part). and domainsODE(S). For example, the first ODE sense of race#n is described as follows: dODE(race#n#1.1) = {competition#n, runner#n, horse#n, vehicle#n, . . . , heavens#n} ∪{contest#n, trajectory#n} ∪ {SPORT#N, RACING#N, ASTRONOMY#N} Notice that, for every S, dD(S) is non-empty as a definition is always provided by both dictionaries. This approach to sense descriptions is general enough to be applicable to any other dictionary with similar characteristics (e.g. the Longman Dictionary of Contemporary English in place of ODE). 
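As a concrete illustration of the bag-of-words descriptions just defined, the sketch below builds dD(S) for the running example. It is only an approximation of the actual pipeline: the paper lemmatizes and POS-tags definitions with the RASP parser and keeps part-of-speech labels on every item (e.g. competition#n, SPORT#N), whereas this sketch uses a naive tokenizer, a small hand-picked stopword list, and plain lowercase strings.

```python
def sense_description(definition, hypernyms=(), domains=()):
    """Build d_D(S) = def_D(S) | hyper_D(S) | domains_D(S) as a set of words.

    A whitespace tokenizer plus a small stopword list stands in for the
    lemmatization and POS-tagging step used in the paper.
    """
    stopwords = {"a", "an", "any", "the", "of", "or", "and", "to", "that",
                 "is", "are", "in", "by", "be", "which", "usually"}
    words = {w.strip(".,;:()").lower() for w in definition.split()}
    words = {w for w in words if w and w not in stopwords}
    return words | {h.lower() for h in hypernyms} | {d.lower() for d in domains}

# WordNet side of the running example, sense race#n#1:
d_wn = sense_description("any competition",
                         hypernyms=["contest"],
                         domains=["POLITICS", "SPORT"])
# -> {"competition", "contest", "politics", "sport"}

# ODE side, entry race#n#1.1 (core sense and subsenses, abridged):
d_ode = sense_description(
    "a competition between runners, horses, vehicles, etc.; "
    "a situation in which individuals or groups compete; "
    "the course of the sun or moon through the heavens",
    hypernyms=["contest", "trajectory"],
    domains=["SPORT", "RACING", "ASTRONOMY"])
```

These two descriptions are reused in the matching sketch at the end of Section 2.3.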
2.3 Mapping Word Senses In order to produce a coarse-grained version of the WordNet inventory, we aim at defining an automatic mapping between WordNet and ODE, i.e. a function µ : SensesWN →SensesODE ∪{ϵ}, where SensesD is the set of senses in the dictionary D and ϵ is a special element assigned when no plausible option is available for mapping (e.g. when the ODE encodes no entry corresponding to a WordNet sense). Given a WordNet sense S ∈SensesWN(w) we define ˆm(S), the best matching sense in the ODE, as: ˆm(S) = arg max S′∈SensesODE(w) match(S, S′) where match : SensesWN×SensesODE →[0, 1] is a function that measures the degree of matching between the sense descriptions of S and S′. We define the mapping µ as: µ(S) = ( ˆm(S) if match(S, ˆm(S)) ≥θ ϵ otherwise where θ is a threshold below which a matching between sense descriptions is considered unreliable. Finally, we define the clustering of senses c(w) of a word w as: c(w) = {µ−1(S′) : S′ ∈SensesODE(w), µ−1(S′) ̸= ∅} ∪{{S} : S ∈SensesWN(w), µ(S) = ϵ} where µ−1(S′) is the group of WordNet senses mapped to the same sense S′ of the ODE, while the second set includes singletons of WordNet senses for which no mapping can be provided according to the definition of µ. For example, an ideal mapping between entries in Table 1 would be as follows: µ(race#n#1) = race#n#1.1, µ(race#n#2) = race#n#2.1, µ(race#n#3) = race#n#1.1, µ(race#n#5) = race#n#2.1, µ(race#n#4) = race#n#1.2, µ(race#n#6) = race#n#1.3, resulting in the following clustering: c(race#n) = {{race#n#1, race#n#3}, {race#n#2, race#n#5}, {race#n#4}, {race#n#6}} In Sections 2.3.1 and 2.3.2 we describe two different choices for the match function, respectively based on the use of lexical and semantic information. 2.3.1 Lexical matching As a first approach, we adopted a purely lexical matching function based on the notion of lexical overlap (Lesk, 1986). The function counts the number of lemmas that two sense descriptions of a word have in common (we neglect parts of speech), and is normalized by the minimum of the two description lengths: matchLESK(S, S′) = |dWN(S)∩dODE(S′)| min{|dWN(S)|,|dODE(S′)|} 107 where S ∈ SensesWN(w) and S′ ∈ SensesODE(w). For instance: matchLESK(race#n#1, race#n#1.1) = 3 min{4,20} = 3 4 = 0.75 matchLESK(race#n#2, race#n#1.1) = 1 8 = 0.125 Notice that unrelated senses can get a positive score because of an overlap of the sense descriptions. In the example, group#n, the hypernym of race#n#2, is also present in the definition of race#n#1.1. 2.3.2 Semantic matching Unfortunately, the very same concept can be defined with entirely different words. To match definitions in a semantic manner we adopted a knowledge-based Word Sense Disambiguation algorithm, Structural Semantic Interconnections (SSI, Navigli and Velardi (2004)). SSI3 exploits an extensive lexical knowledge base, built upon the WordNet lexicon and enriched with collocation information representing semantic relatedness between sense pairs. Collocations are acquired from existing resources (like the Oxford Collocations, the Longman Language Activator, collocation web sites, etc.). Each collocation is mapped to the WordNet sense inventory in a semi-automatic manner and transformed into a relatedness edge (Navigli and Velardi, 2005). Given a word context C = {w1, ..., wn}, SSI builds a graph G = (V, E) such that V = nS i=1 SensesWN(wi) and (S, S′) ∈E if there is at least one semantic interconnection between S and S′ in the lexical knowledge base. 
A semantic interconnection pattern is a relevant sequence of edges selected according to a manually-created context-free grammar, i.e. a path connecting a pair of word senses, possibly including a number of intermediate concepts. The grammar consists of a small number of rules, inspired by the notion of lexical chains (Morris and Hirst, 1991). SSI performs disambiguation in an iterative fashion, by maintaining a set C of senses as a semantic context. Initially, C = V (the entire set of senses of words in C). At each step, for each sense S in C, the algorithm calculates a score of the degree of connectivity between S and the other senses in C: 3Available online from: http://lcl.di.uniroma1.it/ssi ScoreSSI(S, C) = P S′∈C\{S} P i∈IC(S,S′) 1 length(i) P S′∈C\{S} |IC(S,S′)| where IC(S, S′) is the set of interconnections between senses S and S′. The contribution of a single interconnection is given by the reciprocal of its length, calculated as the number of edges connecting its ends. The overall degree of connectivity is then normalized by the number of contributing interconnections. The highest ranking sense S of word w is chosen and the senses of w are removed from the semantic context C. The algorithm terminates when either C = ∅or there is no sense such that its score exceeds a fixed threshold. Given a word w, semantic matching is performed in two steps. First, for each dictionary D ∈{WORDNET, ODE}, and for each sense S ∈ SensesD(w), the sense description of S is disambiguated by applying SSI to dD(S). As a result, we obtain a semantic description as a bag of concepts dsem D (S). Notice that sense descriptions from both dictionaries are disambiguated with respect to the WordNet sense inventory. Second, given a WordNet sense S ∈ SensesWN(w) and an ODE sense S′ ∈SensesODE(w), we define matchSSI(S, S′) as a function of the direct relations connecting senses in dsem WN (S) and dsem ODE (S′): matchSSI(S, S′) = |c→c′:c∈dsem WN (S),c′∈dsem ODE (S′)| |dsem WN (S)|·|dsem ODE (S′)| where c →c′ denotes the existence of a relation edge in the lexical knowledge base between a concept c in the description of S and a concept c′ in the description of S′. Edges include the WordNet relation set (synonymy, hypernymy, meronymy, antonymy, similarity, nominalization, etc.) and the relatedness edge mentioned above (we adopt only direct relations to maintain a high precision). For example, some of the relations found between concepts in dsem WN (race#n#3) and dsem ODE (race#n#1.1) are: race#n#3 relation race#n#1.1 speed#n#1 related−to −→ vehicle#n#1 race#n#3 related−to −→ compete#v#1 racing#n#1 kind−of −→ sport#n#1 race#n#3 kind−of −→ contest#n#1 contributing to the final value of the function on the two senses: matchSSI(race#n#3, race#n#1.1) = 0.41 Due to the normalization factor in the denominator, these values are generally low, but unrelated 108 Table 2: Performance of the lexical and semantic mapping functions. Func. Prec. Recall F1 Acc. Lesk 84.74% 65.43% 73.84% 66.08% SSI 86.87% 79.67% 83.11% 77.94% senses have values much closer to 0. We chose SSI for the semantic matching function as it has the best performance among untrained systems on unconstrained WSD (cf. Section 4.1). 3 Evaluating the Clustering We evaluated the accuracy of the mapping produced with the lexical and semantic methods described in Sections 2.3.1 and 2.3.2, respectively. 
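Putting the pieces of Section 2.3 together, the sketch below implements the lexical variant of the matching function and the induced clustering c(w). The semantic variant would simply swap match_lesk for a function built on the SSI scores above, which requires the full lexical knowledge base and is not reproduced here; the threshold value theta = 0.25 is only a placeholder, since the text above does not give the value actually used.

```python
def match_lesk(d_wn, d_ode):
    """matchLESK(S, S') = |d_WN(S) & d_ODE(S')| / min(|d_WN(S)|, |d_ODE(S')|)."""
    if not d_wn or not d_ode:
        return 0.0
    return len(d_wn & d_ode) / min(len(d_wn), len(d_ode))

def map_and_cluster(wn_descr, ode_descr, match=match_lesk, theta=0.25):
    """Compute the mapping mu and the induced clustering c(w) for one word.

    wn_descr and ode_descr map sense identifiers (e.g. "race#n#1",
    "race#n#1.1") to their description sets. WordNet senses whose best
    match scores below theta are left as singletons (the epsilon case).
    """
    groups = {}        # ODE sense -> set of WordNet senses mapped to it
    singletons = []
    for s, d_wn in wn_descr.items():
        best_sense, best_score = None, 0.0
        for s_ode, d_ode in ode_descr.items():
            score = match(d_wn, d_ode)
            if score > best_score:
                best_sense, best_score = s_ode, score
        if best_sense is not None and best_score >= theta:
            groups.setdefault(best_sense, set()).add(s)
        else:
            singletons.append({s})
    return list(groups.values()) + singletons
```

With the two toy descriptions built in the previous sketch, match_lesk gives 3/4 = 0.75 for the pair race#n#1 and race#n#1.1, matching the value reported in Section 2.3.1.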
We produced a gold-standard data set by manually mapping 5,077 WordNet senses of 763 randomlyselected words to the respective ODE entries (distributed as follows: 466 nouns, 231 verbs, 50 adjectives, 16 adverbs). The data set was created by two annotators and included only polysemous words. These words had 2,600 senses in the ODE. Overall, 4,599 out of the 5,077 WordNet senses had a corresponding sense in ODE (i.e. the ODE covered 90.58% of the WordNet senses in the data set), while 2,053 out of the 2,600 ODE senses had an analogous entry in WordNet (i.e. WordNet covered 78.69% of the ODE senses). The WordNet clustering induced by the manual mapping was 49.85% of the original size and the average degree of polysemy decreased from 6.65 to 3.32. The reliability of our data set is substantiated by a quantitative assessment: 548 WordNet senses of 60 words were mapped to ODE entries by both annotators, with a pairwise mapping agreement of 92.7%. The average Cohen’s κ agreement between the two annotators was 0.874. In Table 2 we report the precision and recall of the lexical and semantic functions in providing the appropriate association for the set of senses having a corresponding entry in ODE (i.e. excluding the cases where a sense ϵ was assigned by the manual annotators, cf. Section 2.3). We also report in the Table the accuracy of the two functions when we view the problem as a classification task: an automatic association is correct if it corresponds to the manual association provided by the annotators or if both assign no answer (equivalently, if both provide an ϵ label). All the differences between Lesk and SSI are statistically significant (p < 0.01). As a second experiment, we used two information-theoretic measures, namely entropy and purity (Zhao and Karypis, 2004), to compare an automatic clustering c(w) (i.e. the sense groups acquired for word w) with a manual clustering ˆc(w). The entropy quantifies the distribution of the senses of a group over manually-defined groups, while the purity measures the extent to which a group contains senses primarily from one manual group. Given a word w, and a sense group G ∈c(w), the entropy of G is defined as: H(G) = − 1 log |ˆc(w)| P ˆG∈ˆc(w) | ˆG∩G| | ˆG| log(| ˆG∩G| | ˆG| ) i.e., the entropy4 of the distribution of senses of group G over the groups of the manual clustering ˆc(w). The entropy of an entire clustering c(w) is defined as: Entropy(c(w)) = P G∈c(w) |G| |SensesWN(w)|H(G) that is, the entropy of each group weighted by its size. The purity of a sense group G ∈c(w) is defined as: Pu(G) = 1 |G| max ˆG∈ˆc(w) | ˆG ∩G| i.e., the normalized size of the largest subset of G contained in a single group ˆG of the manual clustering. The overall purity of a clustering is obtained as a weighted sum of the individual cluster purities: Purity(c(w)) = P G∈c(w) |G| |SensesWN(w)|Pu(G) We calculated the entropy and purity of the clustering produced automatically with the lexical and the semantic method, when compared to the grouping induced by our manual mapping (ODE), and to the grouping manually produced for the English all-words task at Senseval-2 (3,499 senses of 403 nouns). We excluded from both gold standards words having a single cluster. The figures are shown in Table 3 (good entropy and purity values should be close to 0 and 1 respectively). Table 3 shows that the quality of the clustering induced with a semantic function outperforms both lexical overlap and a random baseline. 
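The entropy and purity figures in Table 3 follow directly from the definitions above; a minimal implementation, with clusterings represented as lists of sets of senses:

```python
import math

def group_entropy(G, manual):
    """H(G): entropy of the distribution of G's senses over the manual groups,
    normalized by the log of the number of manual groups."""
    if len(manual) < 2:
        return 0.0
    h = 0.0
    for G_hat in manual:
        p = len(G_hat & G) / len(G_hat)
        if p > 0:
            h -= p * math.log(p)
    return h / math.log(len(manual))

def group_purity(G, manual):
    """Pu(G): normalized size of the largest overlap with one manual group."""
    return max(len(G_hat & G) for G_hat in manual) / len(G)

def entropy_and_purity(clusters, manual):
    """Size-weighted averages over the automatic clustering of a word."""
    n = sum(len(G) for G in clusters)   # |SensesWN(w)|
    ent = sum(len(G) / n * group_entropy(G, manual) for G in clusters)
    pur = sum(len(G) / n * group_purity(G, manual) for G in clusters)
    return ent, pur
```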
The baseline was computed averaging among 200 random clustering solutions for each word. Random 4Notice that we are comparing clusterings against the manual clustering (rather than viceversa), as otherwise a completely unclustered solution would result in 1.0 entropy and 0.0 purity. 109 Table 3: Comparison with gold standards. Gold standard Method Entropy Purity ODE Lesk 0.15 0.87 SSI 0.11 0.87 Baseline 0.28 0.67 Senseval Lesk 0.17 0.71 SSI 0.16 0.69 Baseline 0.27 0.57 clusterings were the result of a random mapping function between WordNet and ODE senses. As expected, the automatic clusterings have a lower purity when compared to the Senseval-2 noun grouping as the granularity of the latter is much finer than ODE (entropy is only partially affected by this difference, indicating that we are producing larger groups). Indeed, our gold standard (ODE), when compared to the Senseval groupings, obtains a low purity as well (0.75) and an entropy of 0.13. 4 Evaluating Coarse-Grained WSD The main reason for building a clustering of WordNet senses is to make Word Sense Disambiguation a feasible task, thus overcoming the obstacles that even humans encounter when annotating sentences with excessively fine-grained word senses. As the semantic method outperformed the lexical overlap in the evaluations of previous Section, we decided to acquire a clustering on the entire WordNet sense inventory using this approach. As a result, we obtained a reduction of 33.54% in the number of entries (from 60,302 to 40,079 senses) and a decrease of the polysemy degree from 3.14 to 2.09. These figures exclude monosemous senses and derivatives in WordNet. As we are experimenting on an automaticallyacquired clustering, all the figures are affected by the 22.06% error rate resulting from Table 2. 4.1 Experiments on Senseval-3 As a first experiment, we assessed the effect of the automatic sense clustering on the English allwords task at Senseval-3 (Snyder and Palmer, 2004). This task required WSD systems to provide a sense choice for 2,081 content words in a set of 301 sentences from the fiction, news story, and editorial domains. We considered the three best-ranking WSD systems – GAMBL (Decadt et al., 2004), SenseLearner (Mihalcea and Faruque, 2004), and Koc Table 4: Performance of WSD systems at Senseval-3 on coarse-grained sense inventories. System Prec. Rec. F1 F1fine Gambl 0.779 0.779 0.779 0.652 SenseLearner 0.769 0.769 0.769 0.646 KOC Univ. 0.768 0.768 0.768 0.641 SSI 0.758 0.758 0.758 0.612 IRST-DDD 0.721 0.719 0.720 0.583 FS baseline 0.769 0.769 0.769 0.624 Random BL 0.497 0.497 0.497 0.340 University (Yuret, 2004) – and the best unsupervised system, namely IRST-DDD (Strapparava et al., 2004). We also included SSI as it outperforms all the untrained systems (Navigli and Velardi, 2005). To evaluate the performance of the five systems on our coarse clustering, we considered a fine-grained answer to be correct if it belongs to the same cluster as that of the correct answer. Table 4 reports the performance of the systems, together with the first sense and the random baseline (in the last column we report the performance on the original fine-grained test set). The best system, Gambl, obtains almost 78% precision and recall, an interesting figure compared to 65% performance in the fine-grained WSD task. An interesting aspect is that the ranking across systems was maintained when moving from a fine-grained to a coarse-grained sense inventory, although two systems (SSI and IRSTDDD) show the best improvement. 
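Under this scheme, scoring a system answer reduces to a cluster-membership test; a small sketch (cluster_of, which maps a fine-grained sense to its cluster identifier, is an assumed helper):

```python
def coarse_correct(system_sense, gold_sense, cluster_of):
    """A fine-grained answer is correct if it lies in the same cluster
    as the gold-standard fine-grained sense."""
    return cluster_of(system_sense) == cluster_of(gold_sense)

def coarse_precision_recall(answers, gold, cluster_of):
    """answers: item -> system sense (items may be skipped);
    gold: item -> gold sense. Skipped items hurt recall only."""
    attempted = [i for i in gold if i in answers]
    correct = sum(coarse_correct(answers[i], gold[i], cluster_of)
                  for i in attempted)
    precision = correct / len(attempted) if attempted else 0.0
    recall = correct / len(gold) if gold else 0.0
    return precision, recall
```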
In order to show that the general improvement is the result of an appropriate clustering, we assessed the performance of Gambl by averaging its results when using 100 randomly-generated different clusterings. We excluded monosemous clusters from the test set (i.e. words with all the senses mapped to the same ODE entry), so as to clarify the real impact of properly grouped clusters. As a result, the random setting obtained 64.56% average accuracy, while the performance when adopting our automatic clustering was 70.84% (1,025/1,447 items). To make it clear that the performance improvement is not only due to polysemy reduction, we considered a subset of the Senseval-3 test set including only the incorrect answers given by the fine-grained version of Gambl (623 items). In other words, on this data set Gambl performs with 0% accuracy. We compared the performance of 110 Table 5: Performance of SSI on coarse inventories (SSI∗uses a coarse-grained knowledge base). System Prec. Recall F1 SSI + baseline 0.758 0.758 0.758 SSI 0.717 0.576 0.639 SSI∗ 0.748 0.674 0.709 Gambl when adopting our automatic clustering with the accuracy of the random baseline. The results were respectively 34% and 15.32% accuracy. These experiments prove that the performance in Table 4 is not due to chance, but to an effective way of clustering word senses. Furthermore, the systems in the Table are not taking advantage of the information given by the clustering (trained systems could be retrained on the coarse clustering). To assess this aspect, we performed a further experiment. We modified the sense inventory of the SSI lexical knowledge base by adopting the coarse inventory acquired automatically. To this end, we merged the semantic interconnections belonging to the same cluster. We also disabled the first sense baseline heuristic, that most of the systems use as a back-off when they have no information about the word at hand. We call this new setting SSI∗(as opposed to SSI used in Table 4). In Table 5 we report the results. The algorithm obtains an improvement of 9.8% recall and 3.1% precision (both statistically significant, p < 0.05). The increase in recall is mostly due to the fact that different senses belonging to the same cluster now contribute together to the choice of that cluster (rather than individually to the choice of a fine-grained sense). 5 Related Work Dolan (1994) describes a method for clustering word senses with the use of information provided in the electronic version of LDOCE (textual definitions, semantic relations, domain labels, etc.). Unfortunately, the approach is not described in detail and no evaluation is provided. Most of the approaches in the literature make use of the WordNet structure to cluster its senses. Peters et al. (1998) exploit specific patterns in the WordNet hierarchy (e.g. sisters, autohyponymy, twins, etc.) to group word senses. They study semantic regularities or generalizations obtained and analyze the effect of clustering on the compatibility of language-specific wordnets. Mihalcea and Moldovan (2001) study the structure of WordNet for the identification of sense regularities: to this end, they provide a set of semantic and probabilistic rules. An evaluation of the heuristics provided leads to a polysemy reduction of 39% and an error rate of 5.6%. A different principle for clustering WordNet senses, based on the Minimum Description Length, is described by Tomuro (2001). 
The clustering is evaluated against WordNet cousins and used for the study of inter-annotator disagreement. Another approach exploits the (dis)agreements of human annotators to derive coarse-grained sense clusters (Chklovski and Mihalcea, 2003), where sense similarity is computed from confusion matrices. Agirre and Lopez (2003) analyze a set of methods to cluster WordNet senses based on the use of confusion matrices from the results of WSD systems, translation equivalences, and topic signatures (word co-occurrences extracted from the web). They assess the acquired clusterings against 20 words from the Senseval-2 sense groupings. Finally, McCarthy (2006) proposes the use of ranked lists, based on distributionally nearest neighbours, to relate word senses. This softer notion of sense relatedness allows to adopt the most appropriate granularity for a specific application. Compared to our approach, most of these methods do not evaluate the clustering produced with respect to a gold-standard clustering. Indeed, such an evaluation would be difficult and timeconsuming without a coarse sense inventory like that of ODE. A limited assessment of coarse WSD is performed by Fellbaum et al. (2001), who obtain a large improvement in the accuracy of a maximum-entropy system on clustered verbs. 6 Conclusions In this paper, we presented a study on the construction of a coarse sense inventory for the WordNet lexicon and its effects on unrestricted WSD. A key feature in our approach is the use of a well-established dictionary encoding sense hierarchies. As remarked in Section 2.2, the method can employ any dictionary with a sufficiently structured inventory of senses, and can thus be applied to reduce the granularity of, e.g., wordnets of other languages. One could argue that the adoption of the ODE as a sense inventory for WSD would be a better solution. While we are not against this possibility, there are problems that cannot be solved at present: the ODE does not encode semantic re111 lations and is not freely available. Also, most of the present research and standard data sets focus on WordNet. The fine granularity of the WordNet sense inventory is unsuitable for most applications, thus constituting an obstacle that must be overcome. We believe that the research topic analyzed in this paper is a first step towards making WSD a feasible task and enabling language-aware applications, like information retrieval, question answering, machine translation, etc. In a future work, we plan to investigate the contribution of coarse disambiguation to such real-world applications. To this end, we aim to set up an Open Mind-like experiment for the validation of the entire mapping from WordNet to ODE, so that only a minimal error rate would affect the experiments to come. Finally, the method presented here could be useful for lexicographers in the comparison of the quality of dictionaries, and in the detection of missing word senses. Acknowledgments This work is partially funded by the Interop NoE (508011), 6th European Union FP. We wish to thank Paola Velardi, Mirella Lapata and Samuel Brody for their useful comments. References Eneko Agirre and Oier Lopez. 2003. Clustering wordnet word senses. In Proc. of Conf. on Recent Advances on Natural Language (RANLP). Borovets, Bulgary. Ted Briscoe and John Carroll. 2002. Robust accurate statistical annotation of general text. In Proc. of 3rd Conference on Language Resources and Evaluation. Las Palmas, Gran Canaria. Tim Chklovski and Rada Mihalcea. 2002. 
Building a sense tagged corpus with open mind word expert. In Proc. of ACL 2002 Workshop on WSD: Recent Successes and Future Directions. Philadelphia, PA. Tim Chklovski and Rada Mihalcea. 2003. Exploiting agreement and disagreement of human annotators for word sense disambiguation. In Proc. of Recent Advances In NLP (RANLP 2003). Borovetz, Bulgaria. Bart Decadt, V´eronique Hoste, Walter Daelemans, and Antal van den Bosch. 2004. Gambl, genetic algorithm optimization of memory-based wsd. In Proc. of ACL/SIGLEX Senseval-3. Barcelona, Spain. William B. Dolan. 1994. Word sense ambiguation: Clustering related senses. In Proc. of 15th Conference on Computational Linguistics (COLING). Morristown, N.J. Philip Edmonds and Adam Kilgariff. 1998. Introduction to the special issue on evaluating word sense disambiguation systems. Journal of Natural Language Engineering, 8(4). Christiane Fellbaum, Martha Palmer, Hoa Trang Dang, Lauren Delfs, and Susanne Wolf. 2001. Manual and automatic semantic annotation with wordnet. In Proc. of NAACL Workshop on WordNet and Other Lexical Resources. Pittsburgh, PA. Christiane Fellbaum, editor. 1998. WordNet: an Electronic Lexical Database. MIT Press. Michael Lesk. 1986. Automatic sense disambiguation using machine readable dictionaries: how to tell a pine code from an ice cream cone. In Proc. of 5th Conf. on Systems Documentation. ACM Press. Bernardo Magnini and Gabriela Cavagli`a. 2000. Integrating subject field codes into wordnet. In Proc. of the 2nd Conference on Language Resources and Evaluation (LREC). Athens, Greece. Diana McCarthy. 2006. Relating wordnet senses for word sense disambiguation. In Proc. of ACL Workshop on Making Sense of Sense. Trento, Italy. Rada Mihalcea and Ehsanul Faruque. 2004. Senselearner: Minimally supervised word sense disambiguation for all words in open text. In Proc. of ACL/SIGLEX Senseval-3. Barcelona, Spain. Rada Mihalcea and Dan Moldovan. 2001. Automatic generation of a coarse grained wordnet. In Proc. of NAACL Workshop on WordNet and Other Lexical Resources. Pittsburgh, PA. Jane Morris and Graeme Hirst. 1991. Lexical cohesion computed by thesaural relations as an indicator of the structure of text. Computational Linguistics, 17(1). Roberto Navigli and Paola Velardi. 2004. Learning domain ontologies from document warehouses and dedicated websites. Computational Linguistics, 30(2). Roberto Navigli and Paola Velardi. 2005. Structural semantic interconnections: a knowledge-based approach to word sense disambiguation. IEEE Transactions on Pattern Analysis and Machine Intelligence (PAMI), 27(7). Hwee T. Ng, Chung Y. Lim, and Shou K. Foo. 1999. A case study on the inter-annotator agreement for word sense disambiguation. In Proc. of ACL Workshop: Standardizing Lexical Resources. College Park, Maryland. Wim Peters, Ivonne Peters, and Piek Vossen. 1998. Automatic sense clustering in eurowordnet. In Proc. of the 1st Conference on Language Resources and Evaluation (LREC). Granada, Spain. Benjamin Snyder and Martha Palmer. 2004. The english all-words task. In Proc. of ACL 2004 SENSEVAL-3 Workshop. Barcelona, Spain. Catherine Soanes and Angus Stevenson, editors. 2003. Oxford Dictionary of English. Oxford University Press. Christopher Stokoe. 2005. Differentiating homonymy and polysemy in information retrieval. In Proc. of the Conference on Empirical Methods in Natural Language Processing. Vancouver, Canada. Carlo Strapparava, Alfio Gliozzo, and Claudio Giuliano. 2004. Pattern abstraction and term similarity for word sense disambiguation. In Proc. 
of ACL/SIGLEX Senseval3. Barcelona, Spain. Noriko Tomuro. 2001. Tree-cut and a lexicon based on systematic polysemy. In Proc. of the Meeting of the NAACL. Pittsburgh, USA. David Vickrey, Luke Biewald, Marc Teyssier, and Daphne Koller. 2005. Word sense disambiguation vs. statistical machine translation. In Proc. of Conference on Empirical Methods in Natural Language Processing. Vancouver, Canada. Deniz Yuret. 2004. Some experiments with a naive bayes wsd system. In Proc. of ACL/SIGLEX Senseval-3. Barcelona, Spain. Ying Zhao and George Karypis. 2004. Empirical and theoretical comparisons of selected criterion functions for document clustering. Machine Learning, 55(3). 112
Proceedings of the 21st International Conference on Computational Linguistics and 44th Annual Meeting of the ACL, pages 1113–1120, Sydney, July 2006. c⃝2006 Association for Computational Linguistics Learning to Say It Well: Reranking Realizations by Predicted Synthesis Quality Crystal Nakatsu and Michael White Department of Linguistics The Ohio State University Columbus, OH 43210 USA fcnakatsu,[email protected] Abstract This paper presents a method for adapting a language generator to the strengths and weaknesses of a synthetic voice, thereby improving the naturalness of synthetic speech in a spoken language dialogue system. The method trains a discriminative reranker to select paraphrases that are predicted to sound natural when synthesized. The ranker is trained on realizer and synthesizer features in supervised fashion, using human judgements of synthetic voice quality on a sample of the paraphrases representative of the generator’s capability. Results from a cross-validation study indicate that discriminative paraphrase reranking can achieve substantial improvements in naturalness on average, ameliorating the problem of highly variable synthesis quality typically encountered with today’s unit selection synthesizers. 1 Introduction Unit selection synthesis1—a technique which concatenates segments of natural speech selected from a database—has been found to be capable of producing high quality synthetic speech, especially for utterances that are similar to the speech in the database in terms of style, delivery, and coverage (Black and Lenzo, 2001). In particular, in the limited domain of a spoken language dialogue system, it is possible to achieve highly natural synthesis with a purpose-built voice (Black and Lenzo, 2000). However, it can be difficult to develop 1See e.g. (Hunt and Black, 1996; Black and Taylor, 1997; Beutnagel et al., 1999). a synthetic voice for a dialogue system that produces natural speech completely reliably, and thus in practice output quality can be quite variable. Two important factors in this regard are the labeling process for the speech database and the direction of the dialogue system’s further development, after the voice has been built: when labels are assigned fully automatically to the recorded speech, label boundaries may be inaccurate, leading to unnatural sounding joins in speech output; and when further system development leads to the generation of utterances that are less like those in the recording script, such utterances must be synthesized using smaller units with more joins between them, which can lead to a considerable dropoff in quality. As suggested by Bulyko and Ostendorf (2002), one avenue for improving synthesis quality in a dialogue system is to have the system choose what to say in part by taking into account what is likely to sound natural when synthesized. The idea is to take advantage of the generator’s periphrastic ability:2 given a set of generated paraphrases that suitably express the desired content in the dialogue context, the system can select the specific paraphrase to use as its response according to the predicted quality of the speech synthesized for that paraphrase. In this way, if there are significant differences in the predicted synthesis quality for the various paraphrases—and if these predictions are generally borne out—then, by selecting paraphrases with high predicted synthesis quality, the dialogue system (as a whole) can more reliably produce natural sounding speech. 
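Schematically, the selection strategy just outlined amounts to the following (a sketch only; the realizer, quality predictor and synthesizer are stand-ins, not the actual system components):

```python
def respond(content, realize_nbest, predict_naturalness, synthesize, n=25):
    """Produce a spoken response for the desired content by picking, from the
    generator's N-best paraphrases, the one predicted to sound most natural."""
    paraphrases = realize_nbest(content, n)           # suitable paraphrases
    best = max(paraphrases, key=predict_naturalness)  # reranking step
    return synthesize(best)                           # waveform to play back
```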
In this paper, we present an application of dis2See e.g. (Iordanskaja et al., 1991; Langkilde and Knight, 1998; Barzilay and McKeown, 2001; Pang et al., 2003) for discussion of paraphrase in generation. 1113 criminative reranking to the task of adapting a language generator to the strengths and weaknesses of a particular synthetic voice. Our method involves training a reranker to select paraphrases that are predicted to sound natural when synthesized, from the N-best realizations produced by the generator. The ranker is trained in supervised fashion, using human judgements of synthetic voice quality on a representative sample of the paraphrases. In principle, the method can be employed with any speech synthesizer. Additionally, when features derived from the synthesizer’s unit selection search can be made available, further quality improvements become possible. The paper is organized as follows. In Section 2, we review previous work on integrating choice in language generation and speech synthesis, and on learning discriminative rerankers for generation. In Section 3, we present our method. In Section 4, we describe a cross-validation study whose results indicate that discriminative paraphrase reranking can achieve substantial improvements in naturalness on average. Finally, in Section 5, we conclude with a summary and a discussion of future work. 2 Previous Work Most previous work on integrating language generation and synthesis, e.g. (Davis and Hirschberg, 1988; Prevost and Steedman, 1994; Hitzeman et al., 1998; Pan et al., 2002), has focused on how to use the information present in the language generation component in order to specify contextually appropriate intonation for the speech synthesizer to target. For example, syntactic structure, information structure and dialogue context have all been argued to play a role in improving prosody prediction, compared to unrestricted textto-speech synthesis. While this topic remains an important area of research, our focus is instead on a different opportunity that arises in a dialogue system, namely, the possibility of choosing the exact wording and prosody of a response according to how natural it is likely to sound when synthesized. To our knowledge, Bulyko and Ostendorf (2002) were the first to propose allowing the choice of wording and prosody to be jointly determined by the language generator and speech synthesizer. In their approach, a template-based generator passes a prosodically annotated word network to the speech synthesizer, rather than a single text string (or prosodically annotated text string). To perform the unit selection search on this expanded input efficiently, they employ weighted finite-state transducers, where each step of network expansion is then followed by minimization. The weights are determined by concatenation (join) costs, relative frequencies (negative log probabilities) of the word sequences, and prosodic prediction costs, for cases where the prosody is not determined by the templates. In a perception experiment, they demonstrated that by expanding the space of candidate responses, their system achieved higher quality speech output. Following (Bulyko and Ostendorf, 2002), Stone et al. (2004) developed a method for jointly determining wording, speech and gesture. In their approach, a template-based generator produces a word lattice with intonational phrase breaks. 
A unit selection algorithm then searches for a low-cost way of realizing a path through this lattice that combines captured motion samples with recorded speech samples to create coherent phrases, blending segments of speech and motion together phrase-by-phrase into extended utterances. Video demonstrations indicate that natural and highly expressive results can be achieved, though no human evaluations are reported. In an alternative approach, Pan and Weng (2002) proposed integrating instance-based realization and synthesis. In their framework, sentence structure, wording, prosody and speech waveforms from a domain-specific corpus are simultaneously reused. To do so, they add prosodic and acoustic costs to the insertion, deletion and replacement costs used for instance-based surface realization. Their contribution focuses on how to design an appropriate speech corpus to facilitate an integrated approach to instance-based realization and synthesis, and does not report evaluation results. A drawback of these approaches to integrating choice in language generation and synthesis is that they cannot be used with most existing speech synthesizers, which do not accept (annotated) word lattices as input. In contrast, the approach we introduce here can be employed with any speech synthesizer in principle. All that is required is that the language generator be capable of producing N-best outputs; that is, the generator must be able to construct a set of suitable paraphrases ex1114 pressing the desired content, from which the top N realizations can be selected for reranking according to their predicted synthesis quality. Once the realizations have been reranked, the top scoring realization can be sent to the synthesizer as usual. Alternatively, when features derived from the synthesizer’s unit selection search can be made available—and if the time demands of the dialogue system permit—several of the top scoring reranked realizations can be sent to the synthesizer, and the resulting utterances can be rescored with the extended feature set. Our reranking approach has been inspired by previous work on reranking in parsing and generation, especially (Collins, 2000) and (Walker et al., 2002). As in Walker et al.’s (2002) method for training a sentence plan ranker, we use our generator to produce a representative sample of paraphrases and then solicit human judgements of their naturalness to use as data for training the ranker. This method is attractive when there is no suitable corpus of naturally occurring dialogues available for training purposes, as is often the case for systems that engage in human-computer dialogues that differ substantially from human-human ones. The primary difference between Walker et al.’s work and ours is that theirs examines the impact on text quality of sentence planning decisions such as aggregation, whereas ours focuses on the impact of the lexical and syntactic choice at the surface realization level on speech synthesis quality, according to the strengths and weaknesses of a particular synthetic voice. 3 Reranking Realizations by Predicted Synthesis Quality 3.1 Generating Alternatives Our experiments with integrating language generation and synthesis have been carried out in the context of the COMIC3 multimodal dialogue system (den Os and Boves, 2003). The COMIC system adds a dialogue interface to a CAD-like application used in sales situations to help clients redesign their bathrooms. 
The input to the system includes speech, handwriting, and pen gestures; the output combines synthesized speech, an animated talking head, deictic gestures at on-screen objects, and direct control of the underlying application. 3COnversational Multimodal Interaction with Computers, http://www.hcrc.ed.ac.uk/comic/. Drawing on the materials used in (Foster and White, 2005) to evaluate adaptive generation in COMIC, we selected a sample of 104 sentences from 38 different output turns across three dialogues. For each sentence in the set, a variant was included that expressed the same content adapted to a different user model or adapted to a different dialogue history. For example, a description of a certain design’s colour scheme for one user might be phrased as As you can see, the tiles have a blue and green colour scheme, whereas a variant expression of the same content for a different user could be Although the tiles have a blue colour scheme, the design does also feature green, if the user disprefers blue. In COMIC, the sentence planner uses XSLT to generate disjunctive logical forms (LFs), which specify a range of possible paraphrases in a nested free-choice form (Foster and White, 2004). Such disjunctive LFs can be efficiently realized using the OpenCCG realizer (White, 2004; White, 2006b; White, 2006a). Note that for the experiments reported here, we manually augmented the disjunctive LFs for the 104 sentences in our sample to make greater use of the periphrastic capabilities of the COMIC grammar; it remains for future work to augment the COMIC sentence planner produce these more richly disjunctive LFs automatically. OpenCCG includes an extensible API for integrating language modeling and realization. To select preferred word orders, from among all those allowed by the grammar for the input LF, we used a backoff trigram model trained on approximately 750 example target sentences, where certain words were replaced with their semantic classes (e.g. MANUFACTURER, COLOUR) for better generalization. For each of the 104 sentences in our sample, we performed 25-best realization from the disjunctive LF, and then randomly selected up to 12 different realizations to include in our experiments based on a simulated coin flip for each realization, starting with the top-scoring one. We used this procedure to sample from a larger portion of the N-best realizations, while keeping the sample size manageable. Figure 1 shows an example of 12 paraphrases for a sentence chosen for inclusion in our sample. Note that the realizations include words with pitch accent annotations as well as boundary tones as separate, punctuation-like words. Generally the 1115  thisH designH uses tiles from Villeroy and BochH ’s Funny DayH collection LL% .  thisH designH is based on the Funny DayH collection by Villeroy and BochH LL% .  thisH designH is based on Funny DayH LL% , by Villeroy and BochH LL% .  thisH designH draws from the Funny DayH collection by Villeroy and BochH LL% .  thisH one draws from Funny DayH LL% , by Villeroy and BochH LL% .  hereL+H LH% we have a design that is based on the Funny DayH collection by Villeroy and BochH LL% .  thisH designH draws from Villeroy and BochH ’s Funny DayH series LL% .  here is a design that draws from Funny DayH LL% , by Villeroy and BochH LL% .  thisH one draws from Villeroy and BochH ’s Funny DayH collection LL% .  thisH draws from the Funny DayH collection by Villeroy and BochH LL% .  thisH one draws from the Funny DayH collection by Villeroy and BochH LL% .  
here is a design that draws from Villeroy and BochH ’s Funny DayH collection LL% . Figure 1: Example of sampled periphrastic alternatives for a sentence. quality of the sampled paraphrases is very high, only occasionally including dispreferred word orders such as We here have a design in the family style, where here is in medial position rather than fronted.4 3.2 Synthesizing Utterances For synthesis, OpenCCG’s output realizations are converted to APML,5 a markup language which allows pitch accents and boundary tones to be specified, and then passed to the Festival speech synthesis system (Taylor et al., 1998; Clark et al., 2004). Festival uses the prosodic markup in the text analysis phase of synthesis in place of the structures that it would otherwise have to predict from the text. The synthesiser then uses the context provided by the markup to enforce the selec4In other examples medial position is preferred, e.g. This design here is in the family style. 5Affective Presentation Markup Language; see http://www.cstr.ed.ac.uk/projects/ festival/apml.html. tion of suitable units from the database. A custom synthetic voice for the COMIC system was developed, as follows. First, a domainspecific recording script was prepared by selecting about 150 sentences from the larger set of target sentences used to train the system’s n-gram model. The sentences were greedily selected with the goals of ensuring that (i) all words (including proper names) in the target sentences appeared at least once in the record script, and (ii) all bigrams at the level of semantic classes (e.g. MANUFACTURER, COLOUR) were covered as well. For the cross-validation study reported in the next section, we also built a trigram model on the words in the domain-specific recording script, without replacing any words with semantic classes, so that we could examine whether the more frequent occurrence of the specific words and phrases in this part of the script is predictive of synthesis quality. The domain-specific script was augmented with a set of 600 newspaper sentences selected for diphone coverage. The newspaper sentences make it possible for the voice to synthesize words outside of the domain-specific script, though not necessarily with the same quality. Once these scripts were in place, an amateur voice talent was recorded reading the sentences in the scripts during two recording sessions. Finally, after the speech files were semi-automatically segmented into individual sentences, the speech database was constructed, using fully automatic labeling. We have found that the utterances synthesized with the COMIC voice vary considerably in their naturalness, due to two main factors. First, the system underwent further development after the voice was built, leading to the addition of a variety of new phrases to the system’s repertoire, as well as many extra proper names (and their pronunciations); since these names and phrases usually require going outside of the domain-specific part of the speech database, they often (though not always) exhibit a considerable dropoff in synthesis quality.6 And second, the boundaries of the automatically assigned unit labels were not always accurate, leading to problems with unnatural joins and reduced intelligibility. 
To improve the reliability of the COMIC voice, we could have recorded more speech, or manually corrected label bound6Note that in the current version of the system, proper names are always required parts of the output, and thus the discriminative reranker cannot learn to simply choose paraphrases that leave out problematic names. 1116 aries; the goal of this paper is to examine whether the naturalness of a dialogue system’s output can be improved in a less labor-intensive way. 3.3 Rating Synthesis Quality To obtain data for training our realization reranker, we solicited judgements of the naturalness of the synthesized speech produced by Festival for the utterances in our sample COMIC corpus. Two judges (the first two authors) provided judgements on a 1–7 point scale, with higher scores representing more natural synthesis. Ratings were gathered using WebExp2,7 with the periphrastic alternatives for each sentence presented as a group in a randomized order. Note that for practical reasons, the utterances were presented out of the dialogue context, though both judges were familiar with the kinds of dialogues that the COMIC system is capable of. Though the numbers on the seven point scale were not assigned labels, they were roughly taken to be “horrible,” “poor,” “fair,” “ok,” “good,” “very good” and “perfect.” The average assigned rating across all utterances was 4.05 (“ok”), with a standard deviation of 1.56. The correlation between the two judges’ ratings was 0.45, with one judge’s ratings consistently higher than the other’s. Some common problems noted by the judges included slurred words, especially the sometimes sounding like ther or even their; clipped words, such as has shortened at times to the point of sounding like is, or though clipped to unintelligibility; unnatural phrasing or emphasis, e.g. occasional pauses before a possessive ’s, or words such as style sounding emphasized when they should be deaccented; unnatural rate changes; “choppy” speech from poor joins; and some unintelligible proper names. 3.4 Ranking While Collins (2000) and Walker et al. (2002) develop their rankers using the RankBoost algorithm (Freund et al., 1998), we have instead chosen to use Joachims’ (2002) method of formulating ranking tasks as Support Vector Machine (SVM) constraint optimization problems.8 This choice has been motivated primarily by convenience, as Joachims’ SVMlight package is easy to 7http://www.hcrc.ed.ac.uk/web exp/ 8See (Barzilay and Lapata, 2005) for another application of SVM ranking in generation, namely to the task of ranking alternative text orderings for local coherence. use; we leave it for future work to compare the performance of RankBoost and SVMlight on our ranking task. The ranker takes as input a set of paraphrases that express the desired content of each sentence, optionally together with synthesized utterances for each paraphrase. The output is a ranking of the paraphrases according to the predicted naturalness of their corresponding synthesized utterances. Ranking is more appropriate than classification for our purposes, as naturalnesss is a graded assessment rather than a categorical one. To encode the ranking task as an SVM constraint optimization problem, each paraphrase j of a sentence i is represented by a feature vector (sij) = hf1(sij); : : : ; fm(sij)i, where m is the number of features. In the training data, the feature vectors are paired with the average value of their corresponding human judgements of naturalness. 
From this data, ordered pairs of paraphrases (sij; sik) are derived, where sij has a higher naturalness rating than sik. The constraint optimization problem is then to derive a parameter vector ~w that yields a ranking score function ~w  (sij) which minimizes the number of pairwise ranking violations. Ideally, for every ordered pair (sij; sik), we would have ~w  (sij) > ~w  (sik); in practice, it is often impossible or intractable to find such a parameter vector, and thus slack variables are introduced that allow for training errors. A parameter to the algorithm controls the trade-off between ranking margin and training error. In testing, the ranker’s accuracy can be determined by comparing the ranking scores for every ordered pair (sij; sik) in the test data, and determining whether the actual preferences are borne out by the predicted preference, i.e. whether ~w  (sij) > ~w  (sik) as desired. Note that the ranking scores, unlike the original ratings, do not have any meaning in the absolute sense; their import is only to order alternative paraphrases by their predicted naturalness. In our ranking experiments, we have used SVMlight with all parameters set to their default values. 3.5 Features Table 1 shows the feature sets we have investigated for reranking, distinguished by the availability of the features and the need for discriminative training. The first row shows the feature sets that are 1117 Table 1: Feature sets for reranking. Discriminative Availability no yes Realizer NGRAMS WORDS Synthesizer COSTS ALL available to the realizer. There are two n-gram models that can be used to directly rank alternative realizations: NGRAM-1, the language model used in COMIC, and NGRAM-2, the language model derived from the domain-specific recording script; for feature values, the negative logarithms are used. There are also two WORDS feature sets (shown in the second column): WORDS-BI, which includes NGRAMS plus a feature for every possible unigram and bigram, where the value of the feature is the count of the unigram or bigram in a given realization; and WORDS-TRI, which includes all the features in WORDS-BI, plus a feature for every possible trigram. The second row shows the feature sets that require information from the synthesizer. The COSTS feature set includes NGRAMS plus the total join and target costs from the unit selection search. Note that a weighted sum of these costs could be used to directly rerank realizations, in much the same way as relative frequencies and concatenation costs are used in (Bulyko and Ostendorf, 2002); in our experiments, we let SVMlight determine how to weight these costs. Finally, there are two ALL feature sets: ALL-BI includes NGRAMS, WORDSBI and COSTS, plus features for every possible phone and diphone, and features for every specific unit in the database; ALL-TRI includes NGRAMS, WORDS-TRI, COSTS, and a feature for every phone, diphone and triphone, as well as specific units in the database. As with WORDS, the value of a feature is the count of that feature in a given synthesized utterance. 4 Cross-Validation Study To train and test our ranker on our feature sets, we partitioned the corpus into 10 folds and performed 10-fold cross-validation. For each fold, 90% of the examples were used for training the ranker and the remaining unseen 10% were used for testing. 
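The pairwise formulation can be emulated by training a standard linear SVM on difference vectors, a common reduction of the ranking problem (a sketch that approximates, rather than reproduces, the SVMlight ranking mode used here):

```python
import numpy as np
from itertools import combinations
from sklearn.svm import LinearSVC

def pairwise_data(sentence_groups):
    """sentence_groups: one list per sentence of (feature_vector, avg_rating).
    Builds difference vectors labeled +1/-1 for each ordered pair."""
    X, y = [], []
    for group in sentence_groups:
        for (fi, ri), (fk, rk) in combinations(group, 2):
            if ri == rk:
                continue                      # no preference, no constraint
            X.append(np.asarray(fi, float) - np.asarray(fk, float))
            y.append(1 if ri > rk else -1)
    return np.vstack(X), np.asarray(y)

def train_ranker(sentence_groups, C=1.0):
    X, y = pairwise_data(sentence_groups)
    return LinearSVC(C=C).fit(X, y).coef_.ravel()     # parameter vector w

def rank(w, candidates):
    """Order candidate feature vectors by the ranking score w . phi(s)."""
    return sorted(candidates, key=lambda f: float(np.dot(w, f)), reverse=True)
```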
The folds were created by randomly choosing from among the sentence groups, resulting in all of the paraphrases for a given sentence occurring in the same fold, and each occurring exTable 2: Comparison of results for differing feature sets, topline and baseline. Features Mean Score SD Accuracy (%) BEST 5.38 1.11 100.0 WORDS-TRI 4.95 1.24 77.3 ALL-BI 4.95 1.24 77.9 ALL-TRI 4.90 1.25 78.0 WORDS-BI 4.86 1.28 76.8 COSTS 4.69 1.27 68.2 NGRAM-2 4.34 1.38 56.2 NGRAM-1 4.30 1.29 53.3 RANDOM 4.11 1.22 50.0 actly once in the testing set as a whole. We evaluated the performance of our ranker by determining the average score of the best ranked paraphrase for each sentence, under each of the following feature combinations: NGRAM1, NGRAM-2, COSTS, WORDS-BI, WORDSTRI, ALL-BI, and ALL-TRI. Note that since we used the human ratings to calculate the score of the highest ranked utterance, the score of the highest ranked utterance cannot be higher than that of the highest human-rated utterance. Therefore, we effectively set the human ratings as the topline (BEST). For the baseline, we randomly chose an utterance from among the alternatives, and used its associated score. In 15 tests generating the random scores, our average scores ranged from 3.88– 4.18. We report the median score of 4.11 as the average for the baseline, along with the mean of the topline and each of the feature subsets, in Table 2. We also report the ordering accuracy of each feature set used by the ranker in Table 2. As mentioned in Section 3.4, the ordering accuracy of the ranker using a given feature set is determined by c=N, where c is the number of correctly ordered pairs (of each paraphrase, not just the top ranked one) produced by the ranker, and N is the total number of human-ranked ordered pairs. As Table 2 indicates, the mean of BEST is 5.38, whereas our ranker using WORDS-TRI features achieves a mean score of 4.95. This is a difference of 0.42 on a seven point scale, or only a 6% difference. The ordering accuracy of WORDS-TRI is 77.3%. We also measured the improvement of our ranker with each feature set over the random baseline as a percentage of the maximum possible gain (which would be to reproduce the human topline). The results appear in Figure 2. As the 1118 0 10 20 30 40 50 60 70 NGRAM-1 NGRAM-2 COSTS WORDS-BI ALL-TRI ALL-BI WORDS-TRI Figure 2: Improvement as a percentage of the maximum possible gain over the random baseline. figure indicates, the maximum possible gain our ranker achieves over the baseline is 66% (using the WORDS-TRI or ALL-BI feature set) . By comparison, NGRAM-1 and NGRAM-2 achieve less than 20% of the possible gain. To verify our main hypothesis that our ranker would significantly outperform the baselines, we computed paired one-tailed t-tests between WORDS-TRI and RANDOM (t = 2:4, p < 8:9x1013), and WORDS-TRI and NGRAM-1 (t = 1:4, p < 4:5x108). Both differences were highly significant. We also performed seven posthoc comparisons using two-tailed t-tests, as we did not have an a priori expectation as to which feature set would work better. Using the Bonferroni adjustment for multiple comparisons, the pvalue required to achieve an overall level of significance of 0.05 is 0.007. In the first post-hoc test, we found a significant difference between BEST and WORDS-TRI (t = 8:0,p < 1:86x1012), indicating that there is room for improvement of our ranker. 
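As a side note, the ordering accuracy and the gain percentage reported above reduce to simple ratios; plugging in the WORDS-TRI mean from Table 2 reproduces the roughly 66% figure shown in Figure 2:

```python
def ordering_accuracy(correct_pairs, total_pairs):
    """c / N: fraction of human-ordered pairs reproduced by the ranker."""
    return correct_pairs / total_pairs

def gain_over_baseline(score, baseline=4.11, topline=5.38):
    """Improvement as a fraction of the maximum possible gain."""
    return (score - baseline) / (topline - baseline)

print(round(gain_over_baseline(4.95), 2))   # 0.66 for WORDS-TRI
```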
However, in considering the top scoring feature sets, we did not find a significant difference between WORDS-TRI and WORDS-BI (t = 2:3, p < 0:022), from which we infer that the difference among all of WORDS-TRI, ALL-BI, ALL-TRI and WORDS-BI is not significant also. This suggests that the synthesizer features have no substantial impact on our ranker, as we would expect ALL-TRI to be significantly higher than WORDS-TRI if so. However, since COSTS does significantly improve upon NGRAM2 (t = 3:5, p < 0:001), there is some value to the use of synthesizer features in the absence of WORDS. We also looked at the comparison for the WORDS models and COSTS. While WORDS-BI did not perform significantly better than COSTS ( t = 2:3, p < 0:025), the added trigrams in WORDSTRI did improve ranker performance significantly over COSTS (t = 3:7, p < 3:29x104). Since COSTS ranks realizations in the much the same way as (Bulyko and Ostendorf, 2002), the fact that WORDS-TRI outperforms COSTS indicates that our discriminative reranking method can significantly improve upon their non-discriminative approach. 5 Conclusions In this paper, we have presented a method for adapting a language generator to the strengths and weaknesses of a particular synthetic voice by training a discriminative reranker to select paraphrases that are predicted to sound natural when synthesized. In contrast to previous work on this topic, our method can be employed with any speech synthesizer in principle, so long as features derived from the synthesizer’s unit selection search can be made available. In a case study with the COMIC dialogue system, we have demonstrated substantial improvements in the naturalness of the resulting synthetic speech, achieving two-thirds of the maximum possible gain, and raising the average rating from “ok” to “good.” We have also shown that in this study, our discriminative method significantly outperforms an approach that performs selection based solely on corpus frequencies together with target and join costs. In future work, we intend to verify the results of our cross-validation study in a perception experiment with na¨ıve subjects. We also plan to investigate whether additional features derived from the synthesizer can better detect unnatural pauses or changes in speech rate, as well as F0 contours that fail to exhibit the targeting accenting pattern. Finally, we plan to examine whether gains in quality can be achieved with an off-the-shelf, general purpose voice that are similar to those we have observed using COMIC’s limited domain voice. Acknowledgements We thank Mary Ellen Foster, Eric Fosler-Lussier and the anonymous reviewers for helpful comments and discussion. References Regina Barzilay and Mirella Lapata. 2005. Modeling local coherence: An entity-based approach. In Pro1119 ceedings of the 43rd Annual Meeting of the Association for Computational Linguistics, Ann Arbor. Regina Barzilay and Kathleen McKeown. 2001. Extracting paraphrases from a parallel corpus. In Proc. ACL/EACL. M. Beutnagel, A. Conkie, J. Schroeter, Y. Stylianou, and A. Syrdal. 1999. The AT&T Next-Gen TTS system. In Joint Meeting of ASA, EAA, and DAGA. Alan Black and Kevin Lenzo. 2000. Limited domain synthesis. In Proceedings of ICSLP2000, Beijing, China. Alan Black and Kevin Lenzo. 2001. Optimal data selection for unit selection synthesis. In 4th ISCA Speech Synthesis Workshop, Pitlochry, Scotland. Alan Black and Paul Taylor. 1997. Automatically clustering similar units for unit selection in speech synthesis. In Eurospeech ’97. 
Ivan Bulyko and Mari Ostendorf. 2002. Efficient integrated response generation from multiple targets using weighted finite state transducers. Computer Speech and Language, 16:533–550. Robert A.J. Clark, Korin Richmond, and Simon King. 2004. Festival 2 – build your own general purpose unit selection speech synthesiser. In 5th ISCA Speech Synthesis Workshop, pages 173–178, Pittsburgh, PA. Michael Collins. 2000. Discriminative reranking for natural language parsing. In Proc. ICML. James Raymond Davis and Julia Hirschberg. 1988. Assigning intonational features in synthesized spoken directions. In Proc. ACL. Els den Os and Lou Boves. 2003. Towards ambient intelligence: Multimodal computers that understand our intentions. In Proc. eChallenges-03. Mary Ellen Foster and Michael White. 2004. Techniques for Text Planning with XSLT. In Proc. 4th NLPXML Workshop. Mary Ellen Foster and Michael White. 2005. Assessing the impact of adaptive generation in the COMIC multimodal dialogue system. In Proc. IJCAI-05 Workshop on Knowledge and Representation in Practical Dialogue Systems. Y. Freund, R. Iyer, R. E. Schapire, and Y. Singer. 1998. An efficient boosting algorithm for combining preferences. In Machine Learning: Proc. of the Fifteenth International Conference. Janet Hitzeman, Alan W. Black, Chris Mellish, Jon Oberlander, and Paul Taylor. 1998. On the use of automatically generated discourse-level information in a concept-to-speech synthesis system. In Proc. ICSLP-98. A. Hunt and A. Black. 1996. Unit selection in a concatenative speech synthesis system using a large speech database. In Proc. ICASSP-96, Atlanta, Georgia. Lidija Iordanskaja, Richard Kittredge, and Alain Polg´uere. 1991. Lexical selection and paraphrase in a meaning-text generation model. In C´ecile L. Paris, William R. Swartout, and William C. Mann, editors, Natural Language Generation in Artificial Intelligence and Computational Linguistics, pages 293–312. Kluwer. Thorsten Joachims. 2002. Optimizing search engines using clickthrough data. In Proc. KDD. Irene Langkilde and Kevin Knight. 1998. Generation that exploits corpus-based statistical knowledge. In Proc. COLING-ACL. Shimei Pan and Wubin Weng. 2002. Designing a speech corpus for instance-based spoken language generation. In Proc. of the International Natural Language Generation Conference (INLG-02). Shimei Pan, Kathleen McKeown, and Julia Hirschberg. 2002. Exploring features from natural language generation for prosody modeling. Computer Speech and Language, 16:457–490. Bo Pang, Kevin Knight, and Daniel Marcu. 2003. Syntax-based alignment of multiple translations: Extracting paraphrases and generating new sentences. In Proc. HLT/NAACL. Scott Prevost and Mark Steedman. 1994. Specifying intonation from context for speech synthesis. Speech Communication, 15:139–153. Matthew Stone, Doug DeCarlo, Insuk Oh, Christian Rodriguez, Adrian Stere, Alyssa Lees, and Chris Bregler. 2004. Speaking with hands: Creating animated conversational characters from recordings of human performance. ACM Transactions on Graphics (SIGGRAPH), 23(3). P. Taylor, A. Black, and R. Caley. 1998. The architecture of the the Festival speech synthesis system. In Third International Workshop on Speech Synthesis, Sydney, Australia. Marilyn A. Walker, Owen C. Rambow, and Monica Rogati. 2002. Training a sentence planner for spoken dialogue using boosting. Computer Speech and Language, 16:409–433. Michael White. 2004. Reining in CCG Chart Realization. In Proc. INLG-04. Michael White. 2006a. 
CCG chart realization from disjunctive logical forms. In Proc. INLG-06. To appear. Michael White. 2006b. Efficient Realization of Coordinate Structures in Combinatory Categorial Grammar. Research on Language & Computation, online first, March. 1120
Proceedings of the 21st International Conference on Computational Linguistics and 44th Annual Meeting of the ACL, pages 1121–1128, Sydney, July 2006. c⃝2006 Association for Computational Linguistics An Effective Two-Stage Model for Exploiting Non-Local Dependencies in Named Entity Recognition Vijay Krishnan Computer Science Department Stanford University Stanford, CA 94305 [email protected] Christopher D. Manning Computer Science Department Stanford University Stanford, CA 94305 [email protected] Abstract This paper shows that a simple two-stage approach to handle non-local dependencies in Named Entity Recognition (NER) can outperform existing approaches that handle non-local dependencies, while being much more computationally efficient. NER systems typically use sequence models for tractable inference, but this makes them unable to capture the long distance structure present in text. We use a Conditional Random Field (CRF) based NER system using local features to make predictions and then train another CRF which uses both local information and features extracted from the output of the first CRF. Using features capturing non-local dependencies from the same document, our approach yields a 12.6% relative error reduction on the F1 score, over state-of-theart NER systems using local-information alone, when compared to the 9.3% relative error reduction offered by the best systems that exploit non-local information. Our approach also makes it easy to incorporate non-local information from other documents in the test corpus, and this gives us a 13.3% error reduction over NER systems using local-information alone. Additionally, our running time for inference is just the inference time of two sequential CRFs, which is much less than that of other more complicated approaches that directly model the dependencies and do approximate inference. 1 Introduction Named entity recognition (NER) seeks to locate and classify atomic elements in unstructured text into predefined entities such as the names of persons, organizations, locations, expressions of times, quantities, monetary values, percentages, etc. A particular problem for Named Entity Recognition(NER) systems is to exploit the presence of useful information regarding labels assigned at a long distance from a given entity. An example is the label-consistency constraint that if our text has two occurrences of New York separated by other tokens, we would want our learner to encourage both these entities to get the same label. Most statistical models currently used for Named Entity Recognition, use sequence models and thereby capture local structure. Hidden Markov Models (HMMs) (Leek, 1997; Freitag and McCallum, 1999), Conditional Markov Models (CMMs) (Borthwick, 1999; McCallum et al., 2000), and Conditional Random Fields (CRFs) (Lafferty et al., 2001) have been successfully employed in NER and other information extraction tasks. All these models encode the Markov property i.e. labels directly depend only on the labels assigned to a small window around them. These models exploit this property for tractable computation as this allows the Forward-Backward, Viterbi and Clique Calibration algorithms to become tractable. Although this constraint is essential to make exact inference tractable, it makes us unable to exploit the non-local structure present in natural language. Label consistency is an example of a non-local dependency important in NER. 
Apart from label consistency between the same token sequences, we would also like to exploit richer sources of dependencies between similar token sequences. For example, as shown in Figure 1, we would want it to encourage Einstein to be labeled “Person” if there is strong evidence that Albert Einstein should be labeled “Person”. Sequence models unfortu1121 told that Albert Einstein proved . . . on seeing Einstein at the Figure 1: An example of the label consistency problem. Here we would like our model to encourage entities Albert Einstein and Einstein to get the same label, so as to improve the chance that both are labeled PERSON. nately cannot model this due to their Markovian assumption. Recent approaches attempting to capture nonlocal dependencies model the non-local dependencies directly, and use approximate inference algorithms, since exact inference is in general, not tractable for graphs with non-local structure. Bunescu and Mooney (2004) define a Relational Markov Network (RMN) which explicitly models long-distance dependencies, and use it to represent relations between entities. Sutton and McCallum (2004) augment a sequential CRF with skip-edges i.e. edges between different occurrences of a token, in a document. Both these approaches use loopy belief propagation (Pearl, 1988; Yedidia et al., 2000) for approximate inference. Finkel et al. (2005) hand-set penalties for inconsistency in entity labeling at different occurrences in the text, based on some statistics from training data. They then employ Gibbs sampling (Geman and Geman, 1984) for dealing with their local feature weights and their non-local penalties to do approximate inference. We present a simple two-stage approach where our second CRF uses features derived from the output of the first CRF. This gives us the advantage of defining a rich set of features to model non-local dependencies, and also eliminates the need to do approximate inference, since we do not explicitly capture the non-local dependencies in a single model, like the more complex existing approaches. This also enables us to do inference efficiently since our inference time is merely the inference time of two sequential CRF’s; in contrast Finkel et al. (2005) reported an increase in running time by a factor of 30 over the sequential CRF, with their Gibbs sampling approximate inference. In all, our approach is simpler, yields higher F1 scores, and is also much more computationally efficient than existing approaches modeling nonlocal dependencies. 2 Conditional Random Fields We use a Conditional Random Field (Lafferty et al., 2001; Sha and Pereira, 2003) since it represents the state of the art in sequence modeling and has also been very effective at Named Entity Recognition. It allows us both discriminative training that CMMs offer as well and the bi-directional flow of probabilistic information across the sequence that HMMs allow, thereby giving us the best of both worlds. Due to the bi-directional flow of information, CRFs guard against the myopic locally attractive decisions that CMMs make. It is customary to use the Viterbi algorithm, to find the most probably state sequence during inference. A large number of possibly redundant and correlated features can be supplied without fear of further reducing the accuracy of a high-dimensional distribution. These are welldocumented benefits (Lafferty et al., 2001). 
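As a concrete reference for the inference step mentioned above, a minimal first-order Viterbi decoder over log-potentials (a generic sketch, not the CRF implementation used in this work):

```python
import numpy as np

def viterbi(node_scores, edge_scores):
    """node_scores: T x L array of per-position label log-potentials.
    edge_scores:  L x L array of transition log-potentials (prev -> cur).
    Returns the highest-scoring label sequence."""
    T, L = node_scores.shape
    delta = np.zeros((T, L))              # best score of a path ending at (t, label)
    backptr = np.zeros((T, L), dtype=int)
    delta[0] = node_scores[0]
    for t in range(1, T):
        scores = delta[t - 1][:, None] + edge_scores + node_scores[t][None, :]
        backptr[t] = scores.argmax(axis=0)
        delta[t] = scores.max(axis=0)
    # follow back-pointers from the best final label
    labels = [int(delta[T - 1].argmax())]
    for t in range(T - 1, 0, -1):
        labels.append(int(backptr[t][labels[-1]]))
    return labels[::-1]
```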
2.1 Our Baseline CRF for Named Entity Recognition Our baseline CRF is a sequence model in which labels for tokens directly depend only on the labels corresponding to the previous and next tokens. We use features that have been shown to be effective in NER, namely the current, previous and next words, character n-grams of the current word, Part of Speech tag of the current word and surrounding words, the shallow parse chunk of the current word, shape of the current word, the surrounding word shape sequence, the presence of a word in a left window of size 5 around the current word and the presence of a word in a left window of size 5 around the current word. This gives us a competitive baseline CRF using local information alone, whose performance is close to the best published local CRF models, for Named Entity Recognition 3 Label Consistency The intuition for modeling label consistency is that within a particular document, different occur1122 Document Level Statistics Corpus Level Statistics PER LOC ORG MISC PER LOC ORG MISC PER 3141 4 5 0 33830 113 153 0 LOC 6436 188 3 346966 6749 60 ORG 2975 0 43892 223 MISC 2030 66286 Table 1: Table showing the number of pairs of different occurrences of the same token sequence, where one occurrence is given a certain label and the other occurrence is given a certain label. We show these counts both within documents, as well as over the whole corpus. As we would expect, most pairs of the same entity sequence are labeled the same(i.e. the diagonal has most of the density) at both the document and corpus levels. These statistics are from the CoNLL 2003 English training set. Document Level Statistics Corpus Level Statistics PER LOC ORG MISC PER LOC ORG MISC PER 1941 5 2 3 9111 401 261 38 LOC 0 167 6 63 68 4560 580 1543 ORG 22 328 819 191 221 19683 5131 4752 MISC 14 224 7 365 50 12713 329 8768 Table 2: Table showing the number of (token sequence, token subsequence) pairs where the token sequence is assigned a certain entity label, and the token subsequence is assigned a certain entity label. We show these counts both within documents, as well as over the whole corpus. Rows correspond to sequences, and columns to subsequences. These statistics are from the CoNLL 2003 English training set. rences of a particular token sequence (or similar token sequences) are unlikely to have different entity labels. While this constraint holds strongly at the level of a document, there exists additional value to be derived by enforcing this constraint less strongly across different documents. We want to model label consistency as a soft and not a hard constraint; while we want to encourage different occurrences of similar token sequences to get labeled as the same entity, we do not want to force this to always hold, since there do exist exceptions, as can be seen from the off-diagonal entries of tables 1 and 2. A named entity recognition system modeling this structure would encourage all the occurrences of the token sequence to the same entity type, thereby sharing evidence among them. Thus, if the system has strong evidence about the label of a given token sequence, but is relatively unsure about the label to be assigned to another occurrence of a similar token sequence, the system can gain significantly by using the information about the label assigned to the former occurrence, to label the relatively ambiguous token sequence, leading to accuracy improvements. 
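The counts behind tables of this kind can be tallied directly from labelled data. The sketch below computes document-level label pairs for repeated occurrences of the same entity string; the IOB input format and the helper names are assumptions of the example rather than the procedure used to build the paper's tables.

```python
# Tally how often two occurrences of the same entity string inside one
# document receive the same or different labels (document-level statistics).
from collections import Counter
from itertools import combinations

def entity_spans(doc):
    """Extract (token_tuple, entity_type) spans from an IOB-labelled document."""
    spans, cur, cur_type = [], [], None
    for tok, lab in doc + [(None, "O")]:          # sentinel flushes the last span
        if lab.startswith("B-") or lab == "O" or (cur_type and lab[2:] != cur_type):
            if cur:
                spans.append((tuple(cur), cur_type))
            cur, cur_type = [], None
        if lab.startswith(("B-", "I-")):
            cur.append(tok.lower())               # case-insensitive matching
            cur_type = lab[2:]
    return spans

def document_label_pairs(docs):
    """Count label pairs over all same-string entity occurrences per document."""
    counts = Counter()
    for doc in docs:
        for (s1, t1), (s2, t2) in combinations(entity_spans(doc), 2):
            if s1 == s2:
                counts[tuple(sorted((t1, t2)))] += 1
    return counts

docs = [[("Australia", "B-LOC"), ("beat", "O"), ("England", "B-LOC"),
         (";", "O"), ("Australia", "B-ORG"), ("won", "O")]]
print(document_label_pairs(docs))                 # Counter({('LOC', 'ORG'): 1})
```

Summing the same counts over the whole collection instead of per document gives the corpus-level columns.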
The strength of the label consistency constraint, can be seen from statistics extracted from the CoNLL 2003 English training data. Table 1 shows the counts of entity labels pairs assigned for each pair of identical token sequences both within a document and across the whole corpus. As we would expect, inconsistent labelings are relatively rare and most pairs of the same entity sequence are labeled the same(i.e. the diagonal has most of the density) at both the document and corpus levels. A notable exception to this is the labeling of the same text as both organization and location within the same document and across documents. This is a due to the large amount of sports news in the CoNLL dataset due to which city and country names are often also team names. We will see that our approach is capable of exploiting this as well, i.e. we can learn a model which would not penalize an Organization-Location inconsistency as strongly as it penalizes other inconsistencies. In addition, we also want to model subsequence constraints: having seen Albert Einstein earlier in a document as a person is a good indicator that a subsequent occurrence of Einstein should also be labeled as a person. Here, we would expect that a subsequence would gain much more by knowing the label of a supersequence, than the other way around. However, as can be seen from table 2, we find that the consistency constraint does not hold nearly so strictly in this case. A very common case of this in the CoNLL dataset is that of documents containing references to both The China Daily, a newspaper, and China, the country (Finkel et al., 2005). The first should be labeled as an organization, and second as a location. The counts of subsequence labelings within a document and across documents listed in Table 2, show that there are many off-diagonal entries: the China Daily case is among the most common, occurring 328 times in the dataset. Just as we can model off-diagonal pat1123 terns with exact token sequence matches, we can also model off-diagonal patterns for the token subsequence case. In addition, we could also derive some value by enforcing some label consistency at the level of an individual token. Obviously, our model would learn much lower weights for these constraints, when compared to label consistency at the level of token sequences. 4 Our Approach to Handling non-local Dependencies To handle the non-local dependencies between same and similar token sequences, we define three sets of feature pairs where one member of the feature pair corresponds to a function of aggregate statistics of the output of the first CRF at the document level, and the other member corresponds to a function of aggregate statistics of the output of the first CRF over the whole test corpus. Thus this gives us six additional feature types for the second round CRF, namely Document-level Token-majority features, Document-level Entitymajority features, Document-level Superentitymajority features, Corpus-level Token-majority features, Corpus-level Entity-majority features and Corpus-level Superentity-majority features. These feature types are described in detail below. All these features are a function of the output labels of the first CRF, where predictions on the test set are obtained by training on all the data, and predictions on the train data are obtained by 10 fold cross-validation (details in the next section). 
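A sketch of that training protocol is given below: first-stage labels for the training set come from 10-fold cross-validation, while test-set labels come from a model trained on all of the training data. The helper callables train_crf and predict_crf stand in for whatever first-stage sequence learner is used, and the use of scikit-learn's KFold is an assumption of the illustration.

```python
# Two-stage protocol sketch: generate first-stage predictions without letting
# the second stage see over-optimistic labels on its own training data.
from sklearn.model_selection import KFold

def first_stage_predictions(train_docs, train_labels, test_docs,
                            train_crf, predict_crf):
    """Return first-stage label sequences for the training set (via 10-fold
    cross-validation) and for the test set (model trained on all train data)."""
    train_stage1 = [None] * len(train_docs)
    folds = KFold(n_splits=10, shuffle=True, random_state=0)
    for fold_train, fold_heldout in folds.split(train_docs):
        model = train_crf([train_docs[i] for i in fold_train],
                          [train_labels[i] for i in fold_train])
        for i in fold_heldout:
            train_stage1[i] = predict_crf(model, train_docs[i])

    full_model = train_crf(train_docs, train_labels)
    return train_stage1, [predict_crf(full_model, d) for d in test_docs]
```

The aggregate feature maps described next are then computed over these predicted labels rather than over gold labels.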
Our features fired based on document and corpus level statistics are: • Token-majority features: These refer to the majority label assigned to the particular token in the document/corpus. Eg: Suppose we have three occurrences of the token Australia, such that two are labeled Location and one is labeled Organization, our tokenmajority feature would take value Location for all three occurrences of the token. This feature can enable us to capture some dependence between token sequences corresponding to a single entity and having common tokens. • Entity-majority features: These refer to the majority label assigned to the particular entity in the document/corpus. Eg: Suppose we have three occurrences of the entity sequence (we define it as a token sequence labeled as a single entity by the first stage CRF) Bank of Australia, such that two are labeled Organization and one is labeled Location, our entitymajority feature would take value Organization for all tokens in all three occurrences of the entity sequence. This feature enables us to capture the dependence between identical entity sequences. For token labeled as not a Named Entity by the first CRF, this feature returns the majority label assigned to that token when it occurs as a single token named entity. • Superentity-majority features: These refer to the majority label assigned to supersequences of the particular entity in the document/corpus. By entity supersequences, we refer to entity sequences, that strictly contain within their span, another entity sequence. For example, if we have two occurrences of Bank of Australia labeled Organization and one occurrence of Australia Cup labeled Miscellaneous, then for all occurrences of the entity Australia, the superentity-majority feature would take value Organization. This feature enables us to take into account labels assigned to supersequences of a particular entity, while labeling it. For token labeled as not a Named Entity by the first CRF, this feature returns the majority label assigned to all entities containing the token within their span. The last feature enables entity sequences to benefit from labels assigned to entities which are entity supersequences of it. We attempted to add subentity-majority features, analogous to the superentity-majority features to model dependence on entity subsequences, but got no benefit from it. This is intuitive, since the basic sequence model would usually be much more certain about labels assigned to the entity supersequences, since they are longer and have more contextual information. As a result of this, while there would be several cases in which the basic sequence model would be uncertain about labels of entity subsequences but relatively certain about labels of token supersequences, the converse is very unlikely. Thus, it is difficult to profit from labels of entity subsequences while labeling entity sequences. We also attempted using more fine 1124 grained features corresponding to the majority label of supersequences that takes into account the position of the entity sequence in the entity supersequence(whether the entity sequence occurs in the start, middle or end of the supersequence), but could obtain no additional gains from this. It is to be noted that while deciding if token sequences are equal or hold a subsequencesupersequence relation, we ignore case, which clearly performs better than being sensitive to case. This is because our dataset contains several entities in allCaps such as AUSTRALIA, especially in news headlines. 
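The three document-level maps described above can be computed from the first-stage entity spans roughly as in the sketch below; the span representation and function names are assumptions of the illustration, and tokens are lowercased so that matching is case-insensitive, in line with the case handling just described.

```python
# Build the document-level token-majority, entity-majority and
# superentity-majority maps from first-stage entity spans.
from collections import Counter

def majority(votes):
    """Most frequent label in a Counter, or None if there are no votes."""
    return votes.most_common(1)[0][0] if votes else None

def document_majority_maps(pred_spans):
    """pred_spans: list of (token_tuple, entity_type) predicted by the first CRF."""
    token_votes, entity_votes, super_votes = {}, {}, {}
    for toks, typ in ((tuple(t.lower() for t in ts), ty) for ts, ty in pred_spans):
        entity_votes.setdefault(toks, Counter())[typ] += 1
        for tok in toks:
            token_votes.setdefault(tok, Counter())[typ] += 1
        # Credit every strictly shorter contiguous subsequence: this span is
        # one of its entity supersequences.
        for n in range(1, len(toks)):
            for i in range(len(toks) - n + 1):
                super_votes.setdefault(toks[i:i + n], Counter())[typ] += 1
    return ({k: majority(v) for k, v in token_votes.items()},
            {k: majority(v) for k, v in entity_votes.items()},
            {k: majority(v) for k, v in super_votes.items()})

spans = [(("Bank", "of", "Australia"), "ORG"),
         (("Bank", "of", "Australia"), "ORG"),
         (("Australia",), "LOC")]
tok_maj, ent_maj, super_maj = document_majority_maps(spans)
print(ent_maj[("australia",)])      # LOC: the label of its own occurrence
print(super_maj[("australia",)])    # ORG: the majority label of its supersequences
```

The corpus-level variants are the same computation run over the spans of the whole test collection rather than of a single document.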
Ignoring case enables us to model dependences with other occurrences with a different case such as Australia. It may appear at first glance, that our framework can only learn to encourage entities to switch to the most popular label assigned to other occurrences of the entity sequence and similar entity sequences. However this framework is capable of learning interesting off-diagonal patterns as well. To understand this, let us consider the example of different occurrences of token sequences being labeled Location and Organization. Suppose, the majority label of the token sequence is Location. While this majority label would encourage the second CRF to switch the labels of all occurrences of the token sequence to Location, it would not strongly discourage the CRF from labeling these as Organization, since there would be several occurrences of token sequences in the training data labeled Organization, with the majority label of the token sequence being Location. However it would discourage the other labels strongly. The reasoning is analogous when the majority label is Organization. In case of a tie (when computing the majority label), if the label assigned to a particular token sequence is one of the majority labels, we fire the feature corresponding to that particular label being the majority label, instead of breaking ties arbitrarily. This is done to encourage the second stage CRF to make its decision based on local information, in the absence of compelling non-local information to choose a different label. 5 Advantages of our approach With our two-stage approach, we manage to get improvements on the F1 measure over existing approaches that model non-local dependencies. At the same time, the simplicity of our two-stage approach keeps inference time down to just the inference time of two sequential CRFs, when compared to approaches such as those of Finkel et al. (2005) who report that their inference time with Gibbs sampling goes up by a factor of about 30, compared to the Viterbi algorithm for the sequential CRF. Below, we give some intuition about areas for improvement in existing work and explain how our approach incorporates the improvements. • Most existing work to capture labelconsistency, has attempted to create all n 2  pairwise dependencies between the different occurrences of an entity, (Finkel et al., 2005; Sutton and McCallum, 2004), where n is the number of occurrences of the given entity. This complicates the dependency graph making inference harder. It also leads to the penalty for deviation in labeling to grow linearly with n, since each entity would be connected to Θ(n) entities. When an entity occurs several times, these models would force all occurrences to take the same value. This is not what we want, since there exist several instances in real-life data where different entities like persons and organizations share the same name. Thus, our approach makes a certain entity’s label depend on certain aggregate information of other labels assigned to the same entity, and does not enforce pairwise dependencies. • We also exploit the fact that the predictions of a learner that takes non-local dependencies into account would have a good amount of overlap with a sequential CRF, since the sequence model is already quite competitive. 
We use this intuition to approximate the aggregate information about labels assigned to other occurrences of the entity by the nonlocal model, with the aggregate information about labels assigned to other occurrences of the entity by the sequence model. This intuition enables us to learn weights for non-local dependencies in two stages; we first get predictions from a regular sequential CRF and in turn use aggregate information about predictions made by the CRF as extra features to train a second CRF. • Most work has looked to model non-local dependencies only within a document (Finkel 1125 et al., 2005; Chieu and Ng, 2002; Sutton and McCallum, 2004; Bunescu and Mooney, 2004). Our model can capture the weaker but still important consistency constraints across the whole document collection, whereas previous work has not, for reasons of tractability. Capturing label-consistency at the level of the whole test corpus is particularly helpful for token sequences that appear only once in their documents, but occur a few times over the corpus, since they do not have strong nonlocal information from within the document. • For training our second-stage CRF, we need to get predictions on our train data as well as test data. Suppose we were to use the same train data to train the first CRF, we would get unrealistically good predictions on our train data, which would not be reflective of its performance on the test data. One option is to partition the train data. This however, can lead to a drop in performance, since the second CRF would be trained on less data. To overcome this problem, we make predictions on our train data by doing a 10-fold cross validation on the train data. For predictions on the test data, we use all the training data to train the CRF. Intuitively, we would expect that the quality of predictions with 90% of the train data would be similar to the quality of predictions with all the training data. It turns out that this is indeed the case, as can be seen from our improved performance. 6 Experiments 6.1 Dataset and Evaluation We test the effectiveness of our technique on the CoNLL 2003 English named entity recognition dataset downloadable from http://cnts.uia.ac.be/conll2003/ner/. The data comprises Reuters newswire articles annotated with four entity types: person (PER), location (LOC), organization (ORG), and miscellaneous (MISC). The data is separated into a training set, a development set (testa), and a test set (testb). The training set contains 945 documents, and approximately 203,000 tokens and the test set has 231 documents and approximately 46,000 tokens. Performance on this task is evaluated by measuring the precision and recall of annotated entities (and not tokens), combined into an F1 score. There is no partial credit for labeling part of an entity sequence correctly; an incorrect entity boundary is penalized as both a false positive and as a false negative. 6.2 Results and Discussion It can be seen from table 3, that we achieve a 12.6% relative error reduction, by restricting ourselves to features approximating non-local dependency within a document, which is higher than other approaches modeling non-local dependencies within a document. Additionally, by incorporating non-local dependencies across documents in the test corpus, we manage a 13.3% relative error reduction, over an already competitive baseline. We can see that all three features approximating non-local dependencies within a document yield reasonable gains. 
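The exact-match scoring behind these numbers (Section 6.1) can be written down in a few lines; the span-set representation is an assumption of this sketch. Note how a single boundary error costs both a false positive and a false negative.

```python
# Entity-level scoring sketch: an entity counts as correct only if its label,
# start and end all match; everything else is a miss plus a false alarm.
def entity_prf(gold_spans, pred_spans):
    """Each span set contains (doc_id, start, end, label) tuples."""
    gold, pred = set(gold_spans), set(pred_spans)
    tp = len(gold & pred)
    precision = tp / len(pred) if pred else 0.0
    recall = tp / len(gold) if gold else 0.0
    f1 = (2 * precision * recall / (precision + recall)
          if precision + recall else 0.0)
    return precision, recall, f1

gold = {(0, 0, 2, "PER"), (0, 5, 6, "LOC")}
pred = {(0, 0, 2, "PER"), (0, 5, 7, "LOC")}   # wrong right boundary on the LOC
print(entity_prf(gold, pred))                 # (0.5, 0.5, 0.5)
```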
As we would expect the additional gains from features approximating nonlocal dependencies across the whole test corpus are relatively small. We use the approximate randomization test (Yeh, 2000) for statistical significance of the difference between the basic sequential CRF and our second round CRF, which has additional features derived from the output of the first CRF. With a 1000 iterations, our improvements were statistically significant with a p-value of 0.001. Since this value is less than the cutoff threshold of 0.05, we reject the null hypothesis. The simplicity of our approach makes it easy to incorporate dependencies across the whole corpus, which would be relatively much harder to incorporate in approaches like (Bunescu and Mooney, 2004) and (Finkel et al., 2005). Additionally, our approach makes it possible to do inference in just about twice the inference time with a single sequential CRF; in contrast, approaches like Gibbs Sampling that model the dependencies directly can increase inference time by a factor of 30 (Finkel et al., 2005). An analysis of errors by the first stage CRF revealed that most errors are that of single token entities being mislabeled or missed altogether followed by a much smaller percentage of multiple token entities mislabelled completely. All our features directly encode information that is useful to reducing these errors. The widely prevalent boundary detection error is that of missing a single-token entity (i.e. labeling it as Other(O)). Our approach helps correct many such errors based on occurrences of the token in other 1126 F1 scores on the CoNLL Dataset Approach LOC ORG MISC PER ALL Relative Error reduction Bunescu and Mooney (2004) (Relational Markov Networks) Only Local Templates 80.09 Global and Local Templates 82.30 11.1% Finkel et al. (2005)(Gibbs Sampling) Local+Viterbi 88.16 80.83 78.51 90.36 85.51 Non Local+Gibbs 88.51 81.72 80.43 92.29 86.86 9.3% Our Approach with the 2-stage CRF Baseline CRF 88.09 80.88 78.26 89.76 85.29 + Document token-majority features 89.17 80.15 78.73 91.60 86.50 + Document entity-majority features 89.50 81.98 79.38 91.74 86.75 + Document superentity-majority features 89.52 82.27 79.76 92.71 87.15 12.6% + Corpus token-majority features 89.48 82.36 79.59 92.65 87.13 + Corpus entity-majority features 89.72 82.40 79.71 92.65 87.23 + Corpus superentity-majority features (All features) 89.80 82.39 79.76 92.57 87.24 13.3% Table 3: Table showing improvements obtained with our additional features, over the baseline CRF. We also compare our performance against (Bunescu and Mooney, 2004) and (Finkel et al., 2005) and find that we manage higher relative improvement than existing work despite starting from a very competitive baseline CRF. named entities. Other kinds of boundary detection errors involving multiple tokens are very rare. Our approach can also handle these errors by encouraging certain tokens to take different labels. This together with the clique features encoding the markovian dependency among neighbours can correct some multiple-token boundary detection errors. 7 Related Work Recent work looking to directly model non-local dependencies and do approximate inference are that of Bunescu and Mooney (2004), who use a Relational Markov Network (RMN) (Taskar et al., 2002) to explicitly model long-distance dependencies, Sutton and McCallum (2004), who introduce skip-chain CRFs, which add additional non-local edges to the underlying CRF sequence model (which Bunescu and Mooney (2004) lack) and Finkel et al. 
(2005) who hand-set penalties for inconsistency in labels based on the training data and then use Gibbs Sampling for doing approximate inference where the goal is to obtain the label sequence that maximizes the product of the CRF objective function and their penalty. Unfortunately, in the RMN model, the dependencies must be defined in the model structure before doing any inference, and so the authors use heuristic part-of-speech patterns, and then add dependencies between these text spans using clique templates. This generates an extremely large number of overlapping candidate entities, which renders necessary additional templates to enforce the constraint that text subsequences cannot both be different entities, something that is more naturally modeled by a CRF. Another disadvantage of this approach is that it uses loopy belief propagation and a voted perceptron for approximate learning and inference, which are inherently unstable algorithms leading to convergence problems, as noted by the authors. In the skip-chain CRFs model, the decision of which nodes to connect is also made heuristically, and because the authors focus on named entity recognition, they chose to connect all pairs of identical capitalized words. They also utilize loopy belief propagation for approximate learning and inference. It is hard to directly extend their approach to model dependencies richer than those at the token level. The approach of Finkel et al. (2005) makes it possible a to model a broader class of longdistance dependencies than Sutton and McCallum (2004), because they do not need to make any initial assumptions about which nodes should be connected and they too model dependencies between whole token sequences representing entities and between entity token sequences and their token supersequences that are entities. The disadvantage of their approach is the relatively ad-hoc selection of penalties and the high computational cost of running Gibbs sampling. Early work in discriminative NER employed two stage approaches that are broadly similar to ours, but the effectiveness of this approach appears to have been overlooked in more recent work. Mikheev et al. (1999) exploit label consistency information within a document using relatively ad hoc multi-stage labeling procedures. Borth1127 wick (1999) used a two-stage approach similar to ours with CMM’s where Reference Resolution features which encoded the frequency of occurrences of other entities similar to the current token sequence, were derived from the output of the first stage. Malouf (2002) and Curran and Clark (2003) condition the label of a token at a particular position on the label of the most recent previous instance of that same token in a previous sentence of the same document. This violates the Markov property and therefore instead of finding the maximum likelihood sequence over the entire document (exact inference), they label one sentence at a time, which allows them to condition on the maximum likelihood sequence of previous sentences. While this approach is quite effective for enforcing label consistency in many NLP tasks, it permits a forward flow of information only, which can result in loss of valuable information. Chieu and Ng (2002) propose a solution to this problem: for each token, they define additional features based on known information, taken from other occurrences of the same token in the document. 
This approach has the advantage of allowing the training procedure to automatically learn good weights for these “global” features relative to the local ones. However, it is hard to extend this to incorporate other types of non-local structure. 8 Conclusion We presented a two stage approach to model nonlocal dependencies and saw that it outperformed existing approaches to modeling non-local dependencies. Our approach also made it easy to exploit various dependencies across documents in the test corpus, whereas incorporating this information in most existing approaches would make them intractable due to the complexity of the resultant graphical model. Our simple approach is also very computationally efficient since the inference time is just twice the inference time of the basic sequential CRF, while for approaches doing approximate inference, the inference time is often well over an order of magnitude over the basic sequential CRF. The simplicity of our approach makes it easy to understand, implement, and adapt to new applications. Acknowledgments We wish to Jenny R. Finkel for discussions on NER and her CRF code. Also, thanks to Trond Grenager for NER discussions and to William Morgan for help with statistical significance tests. Also, thanks to Vignesh Ganapathy for helpful discussions and Rohini Rajaraman for comments on the writeup. This work was supported in part by a Scottish Enterprise Edinburgh-Stanford Link grant (R37588), as part of the EASIE project. References A. Borthwick. 1999. A Maximum Entropy Approach to Named Entity Recognition. Ph.D. thesis, New York University. R. Bunescu and R. J. Mooney. 2004. Collective information extraction with relational Markov networks. In Proceedings of the 42nd ACL, pages 439–446. H. L. Chieu and H. T. Ng. 2002. Named entity recognition: a maximum entropy approach using global information. In Proceedings of the 19th Coling, pages 190–196. J. R. Curran and S. Clark. 2003. Language independent NER using a maximum entropy tagger. In Proceedings of the 7th CoNLL, pages 164–167. J. Finkel, T. Grenager, and C. D. Manning. 2005. Incorporating non-local information into information extraction systems by gibbs sampling. In Proceedings of the 42nd ACL. D. Freitag and A. McCallum. 1999. Information extraction with HMMs and shrinkage. In Proceedings of the AAAI99 Workshop on Machine Learning for Information Extraction. S. Geman and D. Geman. 1984. Stochastic relaxation, Gibbs distributions, and the Bayesian restoration of images. IEEE Transitions on Pattern Analysis and Machine Intelligence, 6:721–741. J. Lafferty, A. McCallum, and F. Pereira. 2001. Conditional Random Fields: Probabilistic models for segmenting and labeling sequence data. In Proceedings of the 18th ICML, pages 282–289. Morgan Kaufmann, San Francisco, CA. T. R. Leek. 1997. Information extraction using hidden Markov models. Master’s thesis, U.C. San Diego. R. Malouf. 2002. Markov models for language-independent named entity recognition. In Proceedings of the 6th CoNLL, pages 187–190. A. McCallum, D. Freitag, and F. Pereira. 2000. Maximum entropy Markov models for information extraction and segmentation. In Proceedings of the 17th ICML, pages 591– 598. Morgan Kaufmann, San Francisco, CA. A. Mikheev, M. Moens, and C. Grover. 1999. Named entity recognition without gazetteers. In Proceedings of the 9th EACL, pages 1–8. J. Pearl. 1988. Probabilistic reasoning in intelligent systems: Networks of plausible inference. In Morgan Kauffmann. F. Sha and F. Pereira. 2003. 
Shallow parsing with conditional random fields. In Proceedings of NAACL-2003, pages 134–141. C. Sutton and A. McCallum. 2004. Collective segmentation and labeling of distant entities in information extraction. In ICML Workshop on Statistical Relational Learning and Its connections to Other Fields. B. Taskar, P. Abbeel, and D. Koller. 2002. Discriminative probabilistic models for relational data. In Proceedings of UAI-02. J. S. Yedidia, W. T. Freeman, and Y. Weiss. 2000. Generalized belief propagation. In Proceedings of NIPS-2000, pages 689–695. Alexander Yeh. 2000. More accurate tests for the statistical significance of result differences. In Proceedings of COLING 2000. 1128
Proceedings of the 21st International Conference on Computational Linguistics and 44th Annual Meeting of the ACL, pages 1129–1136, Sydney, July 2006. c⃝2006 Association for Computational Linguistics Learning Transliteration Lexicons from the Web Jin-Shea Kuo1, 2 1Chung-Hwa Telecom. Laboratories, Taiwan [email protected] Haizhou Li Institute for Infocomm Research, Singapore [email protected] Ying-Kuei Yang2 2National Taiwan University of Science and Technology, Taiwan [email protected]. ntust.edu.tw Abstract This paper presents an adaptive learning framework for Phonetic Similarity Modeling (PSM) that supports the automatic construction of transliteration lexicons. The learning algorithm starts with minimum prior knowledge about machine transliteration, and acquires knowledge iteratively from the Web. We study the active learning and the unsupervised learning strategies that minimize human supervision in terms of data labeling. The learning process refines the PSM and constructs a transliteration lexicon at the same time. We evaluate the proposed PSM and its learning algorithm through a series of systematic experiments, which show that the proposed framework is reliably effective on two independent databases. 1 Introduction In applications such as cross-lingual information retrieval (CLIR) and machine translation (MT), there is an increasing need to translate out-ofvocabulary (OOV) words, for example from an alphabetical language to Chinese. Foreign proper names constitute a good portion of OOV words, which are translated into Chinese through transliteration. Transliteration is a process of translating a foreign word into a native language by preserving its pronunciation in the original language, otherwise known as translation-bysound. MT and CLIR systems rely heavily on bilingual lexicons, which are typically compiled manually. However, in view of the current information explosion, it is labor intensive, if not impossible, to compile a complete proper nouns lexicon. The Web is growing at a fast pace and is providing a live information source that is rich in transliterations. This paper presents a novel solution for automatically constructing an English-Chinese transliteration lexicon from the Web. Research on automatic transliteration has reported promising results for regular transliteration (Wan and Verspoor, 1998; Li et al, 2004), where transliterations follow rigid guidelines. However, in Web publishing, translators in different countries and regions may not observe common guidelines. They often skew the transliterations in different ways to create special meanings to the sound equivalents, resulting in casual transliterations. In this case, the common generative models (Li et al, 2004) fail to predict the transliteration most of the time. For example, “Coca Cola” is transliterated into “ 可口可樂 /Ke-Kou-Ke-Le/” as a sound equivalent in Chinese, which literately means “happiness in the mouth”. In this paper, we are interested in constructing lexicons that cover both regular and casual transliterations. When a new English word is first introduced, many transliterations are invented. Most of them are casual transliterations because a regular transliteration typically does not have many variations. After a while, the transliterations converge into one or two popular ones. For example, “Taxi” becomes “ 的士 /Di-Shi/” in China and “ 德士 /De-Shi/” in Singapore. 
Therefore, the adequacy of a transliteration entry could be judged by its popularity and its conformity with the translation-by-sound principle. In any case, the phonetic similarity should serve as the primary basis of judgment. This paper is organized as follows. In Section 2, we briefly introduce prior works pertaining to machine transliteration. In Section 3, we propose a phonetic similarity model (PSM) for confidence scoring of transliteration. In Section 4, we propose an adaptive learning process for PSM modeling and lexicon construction. In Section 5, we conduct experiments to evaluate different adaptive learning strategies. Finally, we conclude in Section 6. 1129 2 Related Work In general, studies of transliteration fall into two categories: transliteration modeling (TM) and extraction of transliteration pairs (EX) from corpora. The TM approach models phoneme-based or grapheme-based mapping rules using a generative model that is trained from a large bilingual lexicon, with the objective of translating unknown words on the fly. The efforts are centered on establishing the phonetic relationship between transliteration pairs. Most of these works are devoted to phoneme1-based transliteration modeling (Wan and Verspoor 1998, Knight and Graehl, 1998). Suppose that EW is an English word and CW is its prospective Chinese transliteration. The phoneme-based approach first converts EW into an intermediate phonemic representation P, and then converts the phonemic representation into its Chinese counterpart CW. In this way, EW and CW form an E-C transliteration pair. In this approach, we model the transliteration using two conditional probabilities, P(CW|P) and P(P|EW), in a generative model P(CW|EW) = P(CW|P)P(P|EW). Meng (2001) proposed a rulebased mapping approach. Virga and Khudanpur (2003) and Kuo et al (2005) adopted the noisychannel modeling framework. Li et al (2004) took a different approach by introducing a joint source-channel model for direct orthography mapping (DOM), which treats transliteration as a statistical machine translation problem under monotonic constraints. The DOM approach, which is a grapheme-based approach, significantly outperforms the phoneme-based approaches in regular transliterations. It is noted that the state-of-the-art accuracy reported by Li et al (2004) for regular transliterations of the Xinhua database is about 70.1%, which leaves much room for improvement if one expects to use a generative model to construct a lexicon for casual transliterations. EX research is motivated by information retrieval techniques, where people attempt to extract transliteration pairs from corpora. The EX approach aims to construct a large and up-todate transliteration lexicon from live corpora. Towards this objective, some have proposed extracting translation pairs from parallel or comparable bitext using co-occurrence analysis 1 Both phoneme and syllable based approaches are referred to as phoneme-based here. or a context-vector approach (Fung and Yee, 1998; Nie et al, 1999). These methods compare the semantic similarities between words without taking their phonetic similarities into accounts. Lee and Chang (2003) proposed using a probabilistic model to identify E-C pairs from aligned sentences using phonetic clues. Lam et al (2004) proposed using semantic and phonetic clues to extract E-C pairs from comparable corpora. However, these approaches are subject to the availability of parallel or comparable bitext. 
A method that explores non-aligned text was proposed by harvesting katakana-English pairs from query logs (Brill et al, 2001). It was discovered that the unsupervised learning of such a transliteration model could be overwhelmed by noisy data, resulting in a decrease in model accuracy. Many efforts have been made in using Webbased resources for harvesting transliteration/ translation pairs. These include exploring query logs (Brill et al, 2001), unrelated corpus (Rapp, 1999), and parallel or comparable corpus (Fung and Yee, 1998; Nie et al, 1999; Huang et al 2005). To establish correspondence, these algorithms usually rely on one or more statistical clues, such as the correlation between word frequencies, cognates of similar spelling or pronunciations. They include two aspects. First, a robust mechanism that establishes statistical relationships between bilingual words, such as a phonetic similarity model which is motivated by the TM research; and second, an effective learning framework that is able to adaptively discover new events from the Web. In the prior work, most of the phonetic similarity models were trained on a static lexicon. In this paper, we address the EX problem by exploiting a novel Web-based resource. We also propose a phonetic similarity model that generates confidence scores for the validation of E-C pairs. In Chinese webpages, translated or transliterated terms are frequently accompanied by their original Latin words. The latter serve as the appositives of the former. A sample search result for the query submission “Kuro” is the bilingual snippet2 “...經營 Kuro 庫洛P2P 音樂交 換軟體的飛行網,3 日發表 P2P 與版權爭議的解 決方案— C2C (Content to Community)...”. The co-occurrence statistics in such a snippet was shown to be useful in constructing a transitive translation model (Lu et al, 2002). In the 2 A bilingual snippet refers to a Chinese predominant text with embedded English appositives. 1130 example above, “Content to Community” is not a transliteration of C2C, but rather an acronym expansion, while “庫洛 /Ku-Luo/”, as underlined, presents a transliteration for “Kuro”. What is important is that the E-C pairs are always closely collocated. Inspired by this observation, we propose an algorithm that searches over the close context of an English word in a bilingual snippet for the word’s transliteration candidates. The contributions of this paper include: (i) an approach to harvesting real life E-C transliteration pairs from the Web; (ii) a phonetic similarity model that evaluates the confidence of so extracted E-C pair candidates; (iii) a comparative study of several machine learning strategies. 3 Phonetic Similarity Model English and Chinese have different syllable structures. Chinese is a syllabic language where each Chinese character is a syllable in either consonant-vowel (CV) or consonant-vowel-nasal (CVN) structure. A Chinese word consists of a sequence of characters, phonetically a sequence of syllables. Thus, in first E-C transliteration, it is a natural choice to syllabify an English word by converting its phoneme sequence into a sequence of Chinese-like syllables, and then convert it into a sequence of Chinese characters. There have been several effective algorithms for the syllabification of English words for transliteration. Typical syllabification algorithms first convert English graphemes to phonemes, referred to as the letter-to-sound transformation, then syllabify the phoneme sequence into a syllable sequence. 
For this method, a letter-tosound conversion is needed (Pagel, 1998; Jurafsky, 2000). The phoneme-based syllabification algorithm is referred to as PSA. Another syllabification technique attempts to map the grapheme of an English word to syllables directly (Kuo and Yang, 2004). The grapheme-based syllabification algorithm is referred to as GSA. In general, the size of a phoneme inventory is smaller than that of a grapheme inventory. The PSA therefore requires less training data for statistical modeling (Knight, 1998); on the other hand, the grapheme-based method gets rid of the letter-to-sound conversion, which is one of the main causes of transliteration errors (Li et al, 2004). Assuming that Chinese transliterations always co-occur in proximity to their original English words, we propose a phonetic similarity modeling (PSM) that measures the phonetic similarity between candidate transliteration pairs. In a bilingual snippet, when an English word EW is spotted, the method searches for the word’s possible Chinese transliteration CW in its neighborhood. EW can be a single word or a phrase of multiple English words. Next, we formulate the PSM and the estimation of its parameters. 3.1 Generative Model Let 1 { ,... ,... } m M ES es es es = be a sequence of English syllables derived from EW, using the PSA or GSA approach, and 1 { ,... ,... } n N CS cs cs cs = be the sequence of Chinese syllables derived from CW, represented by a Chinese character string 1,... ,..., n N CW c c c → . EW and CW is a transliteration pair. The E-C transliteration can be considered a generative process formulated by the noisy channel model, with EW as the input and CW as the output. ( / ) P EW CW is estimated to characterize the noisy channel, known as the transliteration probability. ( ) P CW is a language model to characterize the source language. Applying Bayes’ rule, we have ( / ) ( / ) ( )/ ( ) P CW EW P EW CW P CW P EW = (1) Following the translation-by-sound principle, the transliteration probability ( / ) P EW CW can be approximated by the phonetic confusion probability ( / ) P ES CS , which is given as ( / ) max ( , / ), P ES CS P ES CS ∆∈Φ = ∆ (2) where Φ is the set of all possible alignment paths between ES and CS. It is not trivial to find the best alignment path ∆. One can resort to a dynamic programming algorithm. Assuming conditional independence of syllables in ES and CS, we have 1 ( / ) ( / ) M m m m P ES CS p es cs = =∏ in a special case where M N = . Note that, typically, we have N M ≤ due to syllable elision. We introduce a null syllable ϕ and a dynamic warping strategy to evaluate ( / ) P ES CS when M N ≠ (Kuo et al, 2005). With the phonetic approximation, Eq.(1) can be rewritten as ( / ) ( / ) ( )/ ( ) P CW EW P ES CS P CW P EW ≈ (3) The language model in Eq.(3) can be represented by Chinese characters n-gram statistics. 1 2 1 1 ( ) ( / , ,..., ) N n n n n P CW p c c c c − − = =∏ (4) 1131 In adopting bigram, Eq.(4) is rewritten as 1 1 2 ( ) ( ) ( / ) N n n n P CW p c p c c − = ≈ ∏ . Note that the context of EW usually has a number of competing Chinese transliteration candidates in a set, denoted as Ω. We rank the candidates by Eq.(1) to find the most likely CW for a given EW. In this process, ( ) P EW can be ignored because it is the same for all CW candidates. The CW candidate that gives the highest posterior probability is considered the most probable candidate CW′. 
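Restated compactly (a reconstruction in standard notation of Eqs. (1) to (4) above, under the same assumptions):

```latex
% Eqs. (1)-(4), restated; \Phi is the set of alignment paths between ES and CS.
\begin{align}
P(CW \mid EW) &= \frac{P(EW \mid CW)\,P(CW)}{P(EW)} \tag{1}\\
P(ES \mid CS) &= \max_{\Delta \in \Phi} P(ES, \Delta \mid CS) \tag{2}\\
P(CW \mid EW) &\approx \frac{P(ES \mid CS)\,P(CW)}{P(EW)} \tag{3}\\
P(CW) &= \prod_{n=1}^{N} p(c_n \mid c_{n-1}, c_{n-2}, \ldots, c_1) \tag{4}
\end{align}
% Under the syllable-level conditional independence assumption, and in the
% special case M = N, the transliteration probability factorizes as
% P(ES \mid CS) = \prod_{m=1}^{M} p(es_m \mid cs_m).
```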
argmax ( / ) argmax ( / ) ( ) CW CW CW P CW EW P ES CS P CW ∈Ω ∈Ω ′ = ≈ (5) However, the most probable CW′ isn’t necessarily the desired transliteration. The next step is to examine if CW′ and EW indeed form a genuine E-C pair. We define the confidence of the E-C pair as the posterior odds similar to that in a hypothesis test under the Bayesian interpretation. We have 0 H , which hypothesizes that CW′ and EW form an E-C pair, and 1 H , which hypothesizes otherwise. The posterior odds is given as follows, 0 1 ' ( / ) ( / ') ( ') ( / ) ( / ) ( ) CW CW CW P H EW P ES CS P CW P H EW P ES CS P CW σ ∈Ω ≠ = ≈∑ (6) where ' CS is the syllable sequence of CW′ , 1 ( / ) p H EW is approximated by the probability mass of the competing candidates of CW′ , or ' ( / ) ( ) CW CW CW P ES CS P CW ∈Ω ≠ ∑ . The higher the σ is, the more probable that hypothesis 0 H overtakes 1 H . The PSM formulation can be seen as an extension to prior work (Brill et al, 2001) in transliteration modeling. We introduce the posterior odds σ as the confidence score so that E-C pairs that are extracted from different contexts can be directly compared. In practice, we set a threshold for σ to decide a cutoff point for E-C pairs short-listing. 3.2 PSM Estimation The PSM parameters are estimated from the statistics of a given transliteration lexicon, which is a collection of manually selected E-C pairs in supervised learning, or a collection of high confidence E-C pairs in unsupervised learning. An initial PSM is bootstrapped using prior knowledge such as rule-based syllable mapping. Then we align the E-C pairs with the PSM and derive syllable mapping statistics for PSA and GSA syllabifications. A final PSM is a linear combination of the PSA-based PSM (PSA-PSM) and the GSA-based PSM (GSA-PSM). The PSM parameter ( / ) m n p es cs can be estimated by an Expectation-Maximization (EM) process (Dempster, 1977). In the Expectation step, we compute the counts of events such as # , m n es cs < > and # n cs < > by force-aligning the E-C pairs in the training lexicon Ψ . In the Maximization step, we estimate the PSM parameters ( / ) m n p es cs by ( / ) # , /# m n m n n p es cs es cs cs = < > < > . (7) As the EM process guarantees non-decreasing likelihood probability ( / ) P ES CS ∀Ψ ∏ , we let the EM process iterate until ( / ) P ES CS ∀Ψ ∏ converges. The EM process can be thought of as a refining process to obtain the best alignment between the E-C syllables and at the same time a re-estimating process for PSM parameters. It is summarized as follows. Start: Bootstrap PSM parameters ( / ) m n p es cs using prior phonetic mapping knowledge E-Step: Force-align corpus Ψ using existing ( / ) m n p es cs and compute the counts of # , m n es cs < > and # n cs < > ; M-Step: Re-estimate ( / ) m n p es cs using the counts from E-Step. Iterate: Repeat E-Step and M-Step until ( / ) P ES CS ∀Ψ ∏ converges. 4 Adaptive Learning Framework We propose an adaptive learning framework under which we learn PSM and harvest E-C pairs from the Web at the same time. Conceptually, the adaptive learning is carried out as follows. We obtain bilingual snippets from the Web by iteratively submitting queries to the Web search engines (Brin and Page, 1998). For each batch of querying, the query results are all normalized to plain text, from which we further extract qualified sentences. A qualified sentence has at least one English word. Under this criterion, a collection of qualified sentences can be extracted automatically. 
To label the E-C pairs, each qualified sentence is manually checked based on the following transliteration criteria: (i) if an EW is partly translated phonetically and partly translated semantically, only the phonetic transliteration constituent is extracted to form a 1132 transliteration pair; (ii) elision of English sound is accepted; (iii) multiple E-C pairs can appear in one sentence; (iv) an EW can have multiple valid Chinese transliterations and vice versa. The validation process results in a collection of qualified E-C pairs, also referred to as Distinct Qualified Transliteration Pairs (DQTPs). As formulated in Section 3, the PSM is trained using a training lexicon in a data driven manner. It is therefore very important to ensure that in the learning process we have prepared a quality training lexicon. We establish a baseline system using supervised learning. In this approach, we use human labeled data to train a model. The advantage is that it is able to establish a model quickly as long as labeled data are available. However, this method also suffers from some practical issues. First, the derived model can only be as good as the data that it sees. An adaptive mechanism is therefore needed for the model to acquire new knowledge from the dynamically growing Web. Second, a massive annotation of database is labor intensive, if not entirely impossible. To reduce the annotation needed, we discuss three adaptive strategies cast in the machine learning framework, namely active learning, unsupervised learning and active-unsupervised learning. The learning strategies can be depicted in Figure 1 with their difference being discussed next. We also train a baseline system using supervised learning approach as a reference point for benchmarking purpose. 4.1 Active Learning Active learning is based on the assumption that a small number of labeled samples, which are DQTPs here, and a large number of unlabeled Figure 1. An adaptive learning framework for automatic construction of transliteration lexicon. samples are available. This assumption is valid in most NLP tasks. In contrast to supervised learning, where the entire corpus is labeled manually, active learning selects the most useful samples for labeling and adds the labeled examples to the training set to retrain the model. This procedure is repeated until the model achieves a certain level of performance. Practically, a batch of samples is selected each time. This is called batch-based sample selection (Lewis and Catlett, 1994), as shown in the search and ranking block in Figure 1. For an active learning to be effective, we propose using three measures to select candidates for human labeling. First, we would like to select the most uncertain samples that are potentially highly informative for the PSM model. The informativeness of a sample can be quantified by its confidence score σ as in the PSM formulation. Ranking the E-C pairs by σ is referred to as C-rank. The samples of low C-rank are the interesting samples to be labeled. Second, we would like to select candidates that are of low frequency. Ranking by frequency is called Frank. During Web crawling, most of the search engines use various strategies to prevent spamming and one of fundamental tasks is to remove the duplicated Web pages. Therefore, we assume that the bilingual snippets are all unique. Intuitively, E-C pairs of low frequency indicate uncommon events which are of higher interest to the model. 
Third, we would like to select samples upon which the PSA-PSM and GSAPSM disagree the most. The disagreed upon samples represent new knowledge to the PSM. In short, we select low C-rank, low F-rank and PSM-disagreed samples for labeling because the high C-rank, high F-rank and PSM-agreed samples are already well known to the model. 4.2 Unsupervised Learning Unsupervised learning skips the human labeling step. It minimizes human supervision by automatically labeling the data. This can be effective if prior knowledge about a task is available, for example, if an initial PSM can be built based on human crafted phonetic mapping rules. This is entirely possible. Kuo et al (2005) proposed using a cross-lingual phonetic confusion matrix resulting from automatic speech recognition to bootstrap an initial PSM model. The task of labeling samples is basically to distinguish the qualified transliteration pairs from the rest. Unlike the sample selection method in active learning, here we would like to Iterate Start Final PSM Initial PSM Search & Ranking PSM Learning Lexicon Stop The Web Select & Labeling Training Samples Labeled Samples PSM Evaluation & Stop Criterion 1133 select the samples that are of high C-rank and high F-rank because they are more likely to be the desired transliteration pairs. The difference between the active learning and the unsupervised learning strategies lies in that the former selects samples for human labeling, such as in the select & labeling block in Figure 1 before passing on for PSM learning, while the latter selects the samples automatically and assumes they are all correct DQTPs. The disadvantage of unsupervised learning is that it tends to reinforce its existing knowledge rather than to discover new events. 4.3 Active-Unsupervised Learning The active learning and the unsupervised learning strategies can be complementary. Active learning minimizes the labeling effort by intelligently short-listing informative and representative samples for labeling. It makes sure that the PSM learns new and informative knowledge over iterations. Unsupervised learning effectively exploits the unlabelled data. It reinforces the knowledge that PSM has acquired and allows PSM to adapt to changes at no cost. However, we do not expect unsupervised learning to acquire new knowledge like active learning does. Intuitively, a better solution is to integrate the two strategies into one, referred to as the active-unsupervised learning strategy. In this strategy, we use active learning to select a small amount of informative and representative samples for labeling. At the same time, we select samples of high confidence score from the rest and consider them correct E-C pairs. We then merge the labeled set with the highconfidence set in the PSM re-training. 5 Experiments We first construct a development corpus by crawling of webpages. This corpus consists of about 500 MB of webpages, called SET1 (Kuo et al, 2005). Out of 80,094 qualified sentences, 8,898 DQTPs are manually extracted from SET1, which serve as the gold standard in testing. To establish a baseline system, we first train a PSM using all 8,898 DQTPs in supervised manner and conduct a closed test on SET1 as in Table 1. We further implement three PSM learning strategies and conduct a systematic series of experiments. Precision Recall F-measure closed-test 0.79 0.69 0.74 Table 1. Supervised learning test on SET1 5.1 Unsupervised Learning We follow the formulation described in Section 4.2. 
First, we derive an initial PSM using randomly selected 100 seed DQTPs and simulate the Web-based learning process with the SET1: (i) select high F-rank and high C-rank E-C pairs using PSM, (ii) add the selected E-C pairs to the DQTP pool as if they are true DQTPs, and (iii) reestimate PSM by using the updated DQTP pool. In Figure 2, we report the F-measure over iterations. The U_HF curve reflects the learning progress of using E-C pairs that occur more than once in the SET1 corpus (high F-rank). The U_HF_HR curve reflects the learning progress using a subset of E-C pairs from U_HF which has high posterior odds as defined in Eq.(6). Both selection strategies aim to select E-C pairs, which are as genuine as possible. 0 0.1 0.2 0.3 0.4 0.5 0.6 0.7 0.8 1 2 3 4 5 6 # Iteration F-measure Supervised U_HF U_HF_HR Figure 2. F-measure over iterations for unsupervised learning on SET1. We found that both U_HF and U_HF_HR give similar results in terms of F-measure. Without surprise, more iterations don’t always lead to better performance because unsupervised learning doesn’t aim to acquiring new knowledge over iterations. Nevertheless, unsupervised learning improves the initial PSM in the first iteration substantially. It can serve as an effective PSM adaptation method. 5.2 Active Learning The objective of active learning is to minimize human supervision by automatically selecting the most informative samples to be labeled. The effect of active learning is that it maximizes performance improvement with minimum annotation effort. Like in unsupervised learning, we start with the same 100 seed DQTPs and an initial PSM model and carry out experiments on SET1: (i) select low F-rank, low C-rank and GSA-PSM and PSA-PSM disagreed E-C pairs; (ii) label the selected pairs by removing the nonE-C pairs and add the labeled E-C pairs to the DQTP pool, and (iii) reestimate the PSM by using the updated DQTP pool. 1134 To select the samples, we employ 3 different strategies: A_LF_LR, where we only select low F-rank and low C-rank candidates for labeling. A_DIFF, where we only select those that GSAPSM and PSA-PSM disagreed upon; and A_DIFF_LF_LR, the union of A_LF_LR and A_DIFF selections. As shown in Figure 3, the Fmeasure of A_DIFF (0.729) and A_DIFF_LF_LR (0.731) approximate to that of supervised learning 0.735) after four iterations. 0 0.1 0.2 0.3 0.4 0.5 0.6 0.7 0.8 1 2 3 4 5 6 # Iteration F-measure Supervised A_LF_LR A_DIFF A_DIFF_LF_LR Figure 3. F-measure over iterations for active learning on SET1. With almost identical performance as supervised learning, the active learning approach has greatly reduced the number of samples for manual labeling as reported in Table 2. It is found that for active learning to reach the performance of supervised learning, A_DIFF is the most effective strategy. It reduces the labeling effort by 89.0%, from 80,094 samples to 8,750. Sample selection #samples labeled A_LF_LR 1,671 A_DIFF 8,750 Active learning A_DIFF_LF_LR 9,683 Supervised learning 80,094 Table 2. Number of total samples for manual labeling in 6 iterations of Figure 3. 5.3 Active Unsupervised Learning It would be interesting to study the performance of combining unsupervised learning and active learning. The experiment is similar to that of active learning except that, in step (iii) of active learning, we take the unlabeled high confidence candidates (high F-rank and high C-rank as in U_HF_HR of Section 5.1) as the true labeled samples and add into the DQTP pool. The result is shown in Figure 4. 
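One round of the procedure just listed, steps (i) to (iii), can be outlined as follows; the candidate fields, the scoring helpers and the human-labelling and retraining callables are placeholders for this sketch rather than the system's actual interfaces.

```python
# One active-learning round over Web-mined candidate E-C pairs.  Each candidate
# records its pair, its frequency, its PSM posterior odds, and the best
# transliteration proposed by the PSA-based and GSA-based PSMs.
def select_for_labeling(candidates, k=100):
    """Step (i): low F-rank, low C-rank, and PSA/GSA-disagreed candidates."""
    low_freq = sorted(candidates, key=lambda c: c["freq"])[:k]
    low_conf = sorted(candidates, key=lambda c: c["odds"])[:k]
    disagreed = [c for c in candidates if c["psa_choice"] != c["gsa_choice"]]
    seen, picked = set(), []
    for c in low_freq + low_conf + disagreed:     # union, first-seen order
        if c["pair"] not in seen:
            seen.add(c["pair"])
            picked.append(c)
    return picked

def active_learning_round(candidates, dqtp_pool, human_label, retrain_psm):
    """Steps (ii) and (iii): label the selection, grow the pool, re-estimate."""
    for cand in select_for_labeling(candidates):
        if human_label(cand):                     # keep only genuine E-C pairs
            dqtp_pool.add(cand["pair"])
    return retrain_psm(dqtp_pool)                 # EM re-estimation over the pool
```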
Although active unsupervised learning was reported having promising results (Riccardi and Hakkani-Tur, 2003) in some NLP tasks, it has not been as effective as active learning alone in this experiment probably due to the fact the unlabeled high confidence candidates are still too noisy to be informative. 0 0.1 0.2 0.3 0.4 0.5 0.6 0.7 0.8 1 2 3 4 5 6 # Iteration F-measure Supervised AU_LF_LR AU_DIFF AU_DIFF_LF_LR Figure 4. F-measure over iterations for active unsupervised learning on SET1. 5.4 Learning Transliteration Lexicons The ultimate objective of building a PSM is to extract a transliteration lexicon from the Web by iteratively submitting queries and harvesting new transliteration pairs from the return results until no more new pairs. For example, by submitting “Robert” to search engines, we may get “Robert羅伯特”, “Richard-理查” and “Charles-查爾斯” in return. In this way, new queries can be generated iteratively, thus new pairs are discovered. We pick the best performing SET1derived PSM trained using A_DIFF_LF_LR active learning strategy and test it on a new database SET2 which is obtained in the same way as SET1. Before adaptation After adaptation #distinct E-C pairs 137,711 130,456 Precision 0.777 0.846 #expected DQTPs 107,001 110,365 Table 3. SET1-derived PSM adapted towards SET2. SET2 contains 67,944 Web pages amounting to 3.17 GB. We extracted 2,122,026 qualified sentences from SET2. Using the PSM, we extract 137,711 distinct E-C pairs. As the gold standard for SET2 is unavailable, we randomly select 1,000 pairs for manual checking. A precision of 0.777 is reported. In this way, 107,001 DQTPs can be expected. We further carry out one iteration of unsupervised learning using U_HF_HR to adapt the SET1-derived PSM towards SET2. The results before and after adaptation are reported in Table 3. Like the experiment in Section 5.1, the unsupervised learning improves the PSM in terms of precision significantly. 1135 6 Conclusions We have proposed a framework for harvesting EC transliteration lexicons from the Web using bilingual snippets. In this framework, we formulate the PSM learning and E-C pair evaluation methods. We have studied three strategies for PSM learning aiming at reducing the human supervision. The experiments show that unsupervised learning is an effective way for rapid PSM adaptation while active learning is the most effective in achieving high performance. We find that the Web is a resourceful live corpus for real life E-C transliteration lexicon learning, especially for casual transliterations. In this paper, we use two Web databases SET1 and SET2 for simplicity. The proposed framework can be easily extended to an incremental learning framework for live databases. This paper has focused solely on use of phonetic clues for lexicon and PSM learning. We have good reason to expect the combining semantic and phonetic clues to improve the performance further. References E. Brill, G. Kacmarcik, C. Brockett. 2001. Automatically Harvesting Katakana-English Term Pairs from Search Engine Query Logs, In Proc. of NLPPRS, pp. 393-399. S. Brin and L. Page. 1998. The Anatomy of a Largescale Hypertextual Web Search Engine, In Proc. of 7th WWW, pp. 107-117. A. P. Dempster, N. M. Laird and D. B. Rubin. 1977. Maximum Likelihood from Incomplete Data via the EM Algorithm, Journal of the Royal Statistical Society, Ser. B. Vol. 39, pp. 1-38. P. Fung and L.-Y. Yee. 1998. An IR Approach for Translating New Words from Nonparallel, Comparable Texts. In Proc. of 17th COLING and 36th ACL, pp. 414-420. F. 
Huang, Y. Zhang and Stephan Vogel. 2005. Mining Key Phrase Translations from Web Corpora. In Proc. of HLT-EMNLP, pp. 483-490. D. Jurafsky and J. H. Martin. 2000. Speech and Language Processing, pp. 102-120, Prentice-Hall, New Jersey. K. Knight and J. Graehl. 1998. Machine Transliteration, Computational Linguistics, Vol. 24, No. 4, pp. 599-612. J.-S. Kuo and Y.-K. Yang. 2004. Constructing Transliterations Lexicons from Web Corpora, In the Companion Volume, 42nd ACL, pp. 102-105. J.-S. Kuo and Y.-K. Yang. 2005. Incorporating Pronunciation Variation into Extraction of Transliterated-term Pairs from Web Corpora, In Proc. of ICCC, pp. 131-138. C.-J. Lee and J.-S. Chang. 2003. Acquisition of English-Chinese Transliterated Word Pairs from Parallel-Aligned Texts Using a Statistical Machine Transliteration Model, In Proc. of HLT-NAACL Workshop Data Driven MT and Beyond, pp. 96103. D. D. Lewis and J. Catlett. 1994. Heterogeneous Uncertainty Sampling for Supervised Learning, In Proc. of ICML 1994, pp. 148-156. H. Li, M. Zhang and J. Su. 2004. A Joint Source Channel Model for Machine Transliteration, In Proc. of 42nd ACL, pp. 159-166. W. Lam, R.-Z. Huang and P.-S. Cheung. 2004. Learning Phonetic Similarity for Matching Named Entity Translations and Mining New Translations, In Proc. of 27th ACM SIGIR, pp. 289-296. W.-H. Lu, L.-F. Chien and H.-J Lee. 2002. Translation of Web Queries Using Anchor Text Mining, TALIP, Vol. 1, Issue 2, pp. 159- 172. H. M. Meng, W.-K. Lo, B. Chen and T. Tang. 2001. Generate Phonetic Cognates to Handle Name Entities in English-Chinese Cross-Language Spoken Document Retrieval, In Proc. of ASRU, pp. 311-314. J.-Y. Nie, P. Isabelle, M. Simard, and R. Durand. 1999. Cross-language Information Retrieval based on Parallel Texts and Automatic Mining of Parallel Text from the Web”, In Proc. of 22nd ACM SIGIR, pp 74-81. V. Pagel, K. Lenzo and A. Black. 1998. Letter to Sound Rules for Accented Lexicon Compression, In Proc. of ICSLP, pp. 2015-2020. R. Rapp. 1999. Automatic Identification of Word Translations from Unrelated English and German Corpora, In Proc. of 37th ACL, pp. 519-526. G. Riccardi and D. Hakkani-Tür. 2003. Active and Unsupervised Learning for Automatic Speech Recognition. In Proc. of 8th Eurospeech. P. Virga and S. Khudanpur. 2003. Transliteration of Proper Names in Cross-Lingual Information Retrieval, In Proc. of 41st ACL Workshop on Multilingual and Mixed Language Named Entity Recognition, pp. 57-64. S. Wan and C. M. Verspoor. 1998. Automatic English-Chinese Name Transliteration for Development of Multilingual Resources, In Proc. of 17th COLING and 36th ACL, pp.1352-1356. 1136
Proceedings of the 21st International Conference on Computational Linguistics and 44th Annual Meeting of the ACL, pages 1137–1144, Sydney, July 2006. c⃝2006 Association for Computational Linguistics Punjabi Machine Transliteration M. G. Abbas Malik Department of Linguistics Denis Diderot, University of Paris 7 Paris, France [email protected] Abstract Machine Transliteration is to transcribe a word written in a script with approximate phonetic equivalence in another language. It is useful for machine translation, cross-lingual information retrieval, multilingual text and speech processing. Punjabi Machine Transliteration (PMT) is a special case of machine transliteration and is a process of converting a word from Shahmukhi (based on Arabic script) to Gurmukhi (derivation of Landa, Shardha and Takri, old scripts of Indian subcontinent), two scripts of Punjabi, irrespective of the type of word. The Punjabi Machine Transliteration System uses transliteration rules (character mappings and dependency rules) for transliteration of Shahmukhi words into Gurmukhi. The PMT system can transliterate every word written in Shahmukhi. 1 Introduction Punjabi is the mother tongue of more than 110 million people of Pakistan (66 million), India (44 million) and many millions in America, Canada and Europe. It has been written in two mutually incomprehensible scripts Shahmukhi and Gurmukhi for centuries. Punjabis from Pakistan are unable to comprehend Punjabi written in Gurmukhi and Punjabis from India are unable to comprehend Punjabi written in Shahmukhi. In contrast, they do not have any problem to understand the verbal expression of each other. Punjabi Machine Transliteration (PMT) system is an effort to bridge the written communication gap between the two scripts for the benefit of the millions of Punjabis around the globe. Transliteration refers to phonetic translation across two languages with different writing systems (Knight & Graehl, 1998), such as Arabic to English (Nasreen & Leah, 2003). Most prior work has been done for Machine Translation (MT) (Knight & Leah, 97; Paola & Sanjeev, 2003; Knight & Stall, 1998) from English to other major languages of the world like Arabic, Chinese, etc. for cross-lingual information retrieval (Pirkola et al, 2003), for the development of multilingual resources (Yan et al, 2003; Kang & Kim, 2000) and for the development of crosslingual applications. PMT is a special kind of machine transliteration. It converts a Shahmukhi word into a Gurmukhi word irrespective of the type constraints of the word. It not only preserves the phonetics of the transliterated word but in contrast to usual transliteration, also preserves the meaning. Two scripts are discussed and compared. Based on this comparison and analysis, character mappings between Shahmukhi and Gurmukhi are drawn and transliteration rules are discussed. Finally, architecture and process of the PMT system are discussed. When it is applied to Punjabi Unicode encoded text especially designed for testing, the results were complied and analyzed. PMT system will provide basis for CrossScriptural Information Retrieval (CSIR) and Cross-Scriptural Application Development (CSAD). 2 Punjabi Machine Transliteration According to Paola (2003), “When writing a foreign name in one’s native language, one tries to preserve the way it sounds, i.e. 
one uses an orthographic representation which, when read aloud by the native speaker of the language, sounds as it would when spoken by a speaker of the foreign language – a process referred to as Transliteration”. Usually, transliteration is referred to phonetic translation of a word of some 1137 specific type (proper nouns, technical terms, etc) across languages with different writing systems. Native speakers may not understand the meaning of transliterated word. PMT is a special type of Machine Transliteration in which a word is transliterated across two different writing systems used for the same language. It is independent of the type constraint of the word. It preserves both the phonetics as well as the meaning of transliterated word. 3 Scripts of Punjabi 3.1 Shahmukhi Shahmukhi derives its character set form the Arabic alphabet. It is a right-to-left script and the shape assumed by a character in a word is context sensitive, i.e. the shape of a character is different depending whether the position of the character is at the beginning, in the middle or at the end of the word. Normally, it is written in Nastalique, a highly complex writing system that is cursive and context-sensitive. A sentence illustrating Shahmukhi is given below: X}Z Ìáââ y6– ÌÐâ< ڻ6– ~@ԧð ÌÌ6=ҊP It has 49 consonants, 16 diacritical marks and 16 vowels, etc. (Malik 2005) 3.2 Gurmukhi Gurmukhi derives its character set from old scripts of the Indian Sub-continent i.e. Landa (script of North West), Sharda (script of Kashmir) and Takri (script of western Himalaya). It is a left-to-right syllabic script. A sentence illustrating Gurmukhi is given below: ਪੰਜਾਬੀ ਮੇਰੀ ਮਾਣ ਜੋਗੀ ਮ ਬੋਲੀ ਏ. It has 38 consonants, 10 vowels characters, 9 vowel symbols, 2 symbols for nasal sounds and 1 symbol that duplicates the sound of a consonant. (Bhatia 2003, Malik 2005) 4 Analysis and PMT Rules Punjabi is written in two completely different scripts. One script is right-to-left and the other is left-to-right. One is Arabic based cursive and the other is syllabic. But both of them represent the phonetic repository of Punjabi. These phonetic sounds are used to determine the relation between the characters of two scripts. On the basis of this idea, character mappings are determined. For the analysis and comparison, both scripts are subdivided into different group on the basis of types of characters e.g. consonants, vowels, diacritical marks, etc. 4.1 Consonant Mapping Consonants can be further subdivided into two groups: Aspirated Consonants: There are sixteen aspirated consonants in Punjabi (Malik, 2005). Ten of these aspirated consonants (JJ[bʰ], JJ[pʰ], JJ[ṱʰ], JJ[ʈʰ], bY[ʤʰ], bb[ʧʰ], |e[ḓʰ], |e[ɖʰ], ÏÏ[kʰ], ÏÏ[gʰ]) are very frequently used in Punjabi as compared to the remaining six aspirates (|g[rʰ], |h[ɽʰ], Ïà[lʰ], Jb[mʰ], JJ[nʰ], |z[vʰ]). In Shahmukhi, aspirated consonants are represented by the combination of a consonant (to be aspirated) and HEH-DOACHASHMEE (|). For example [ [b] + | [h] = JJ [bʰ] and ` [ʤ] + | [h] = Yb [ʤʰ]. In Gurmukhi, each frequently used aspiratedconsonant is represented by a unique character. But, less frequent aspirated consonants are represented by the combination of a consonant (to be aspirated) and sub-joined PAIREEN HAAHAA e.g. ਲ [l] + ◌੍ + ਹ [h] = ਲ‰ (Ïà) [lʰ] and ਵ [v] + ◌੍ + ਹ [h] = ਵ‰ ) |z ( [vʰ], where ◌੍ is the sub-joiner. The sub-joiner character (◌੍) tells that the following ਹ [h] is going to change the shape of PAIREEN HAAHHA. 
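In code, this composition can be pictured with a short, purely illustrative Python sketch; the Unicode code points and the particular consonant pairings used here are assumptions chosen for the example and are not taken from any published PMT code:

# Illustrative sketch only (assumed Unicode pairings, not the authors' implementation):
# frequently used aspirates map to a single Gurmukhi letter, while the less frequent
# six are composed as base consonant + sub-joiner (virama, U+0A4D) + HAAHAA (U+0A39).
FREQUENT_ASPIRATES = {
    "\u0628\u06BE": "\u0A2D",  # BEH + HEH-DOACHASHMEE  -> BHA [bʰ]
    "\u067E\u06BE": "\u0A2B",  # PEH + HEH-DOACHASHMEE  -> PHA [pʰ]
    "\u062C\u06BE": "\u0A1D",  # JEEM + HEH-DOACHASHMEE -> JHA [ʤʰ]
}
SUB_JOINER = "\u0A4D"  # Gurmukhi virama, written as a sub-joiner
HAAHAA = "\u0A39"      # Gurmukhi HA

def gurmukhi_aspirate(shahmukhi_pair, gurmukhi_base):
    """Return the Gurmukhi form of a Shahmukhi consonant + HEH-DOACHASHMEE pair."""
    if shahmukhi_pair in FREQUENT_ASPIRATES:
        return FREQUENT_ASPIRATES[shahmukhi_pair]
    # less frequent aspirates: base consonant + sub-joined HAAHAA, e.g. LA -> [lʰ]
    return gurmukhi_base + SUB_JOINER + HAAHAA

print(gurmukhi_aspirate("\u0644\u06BE", "\u0A32"))  # prints the PAIREEN HAAHAA form of [lʰ]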
The mapping of ten frequently used aspirated consonants is given in Table 1. Sr. Shahmukhi Gurmukhi Sr. Shahmukhi Gurmukhi 1 JJ [bʰ] ਭ 6 bb [ʧʰ] ਛ 2 JJ [pʰ] ਫ 7 |e [ḓʰ] ਧ 3 JJ [ṱʰ] ਥ 8 |e [ɖʰ] ਢ 4 JJ [ʈʰ] ਠ 9 ÏÏ [kʰ] ਖ 5 bY [ʤʰ] ਝ 10 ÏÏ [gʰ] ਘ Table 1: Aspirated Consonants Mapping The mapping for the remaining six aspirates is covered under non-aspirated consonants. Non-Aspirated Consonants: In case of nonaspirated consonants, Shahmukhi has more consonants than Gurmukhi, which follows the one symbol for one sound principle. On the other hand there are more then one characters for a single sound in Shahmukhi. For example, Seh 1138 (_), Seen (k) and Sad (m) represent [s] and [s] has one equivalent in Gurmukhi i.e. Sassaa (ਸ). Similarly other characters like ਅ [a], ਤ [ṱ], ਹ [h] and ਜ਼ [z] have multiple equivalents in Shahmukhi. Non-aspirated consonants mapping is given in Table 2. Sr. Shahmukhi Gurmukhi Sr. Shahmukhi Gurmukhi 1 [ [b] ਬ 21 o [ṱ] ਤ 2 \ [p] ਪ 22 p [z] ਜ਼ 3 ] [ṱ] ਤ 23 q [ʔ] ਅ 4 ^ [ʈ] ਟ 24 r [ɤ] ਗ਼ 5 _ [s] ਸ 25 s [f] ਫ਼ 6 ` [ʤ] ਜ 26 t [q] ’ 7 a [ʧ] ਚ 27 u [k] ਕ 8 b [h] ਹ 28 v [g] ਗ 9 c [x] ਖ਼ 29 w [l] ਲ 10 e [ḓ] ਦ 30 wؕ [ɭ] ਲ਼ 11 e [ɖ] ਡ 31 x [m] ਮ 12 f [z] ਜ਼ 32 y [n] ਨ 13 g [r] ਰ 33 [ ڻɳ] ਣ 14 h [ɽ] ੜ 35 y [ŋ] ◌ਂ 15 i [z] ਜ਼ 35 z [v] ਵ 16 j [ʒ] ਜ਼ 36 { [h] ਹ 17 k [s] ਸ 37 | [h] ◌੍ਹ 18 l [ʃ] ਸ਼ 38 ~ [j] ਯ 19 m [s] ਸ 39 } [j] ਯ 20 n [z] ਜ਼ Table 2: Non-Aspirated Consonants Mapping 4.2 Vowel Mapping Punjabi contains ten vowels. In Shahmukhi, these vowels are represented with help of four long vowels (Alef Madda (W), Alef (Z), Vav (z) and Choti Yeh (~)) and three short vowels (Arabic Fatha – Zabar (F◌), Arabic Damma – Pesh (E◌) and Arabic Kasra – Zer (G◌)). Note that the last two long vowels are also used as consonants. Hamza (Y) is a special character and always comes between two vowel sounds as a place holder. For example, in õGõ66W [ɑsɑɪʃ] (comfort), Hamza (Y) is separating two vowel sounds Alef (Z) and Zer (G◌), in zW [ɑo] (come), Hamza (Y) is separating two vowel sounds Alef Madda (W) [ɑ] and Vav (z) [o], etc. In the first example õGõ66W [ɑsɑɪʃ] (comfort), Hamza (Y) is separating two vowel sounds Alef (Z) and Zer (G◌), but normally Zer (G◌) is dropped by common people. So Hamza (Y) is mapped on ਇ [ɪ] when it is followed by a consonant. In Gurmukhi, vowels are represented by ten independent vowel characters (ਅ, ਆ, ਇ, ਈ, ਉ, ਊ, ਏ, ਐ, ਓ, ਔ) and nine dependent vowel signs (◌ਾ, ਿ◌, ◌ੀ, ◌ੁ, ◌ੂ, ◌ੇ, ◌ੈ, ◌ੋ, ◌ੌ). When a vowel sound comes at the start of a word or is independent of some consonant in the middle or end of a word, independent vowels are used; otherwise dependent vowel signs are used. The analysis of vowels is shown in Table 4 and the vowel mapping is given in Table 3. Sr. Shahmukhi Gurmukhi Sr. Shahmukhi Gurmukhi 1 FZ [ə] ਅ 11 Z[ə] ਅ,◌ਾ 2 [ ﺁɑ] ਆ 12 G◌ [ɪ] ਿ◌ 3 GZ [ɪ] ਇ 13 ﯼG◌ [i] ◌ੀ 4 [ اِﯼi] ਈ 14 E◌ [ʊ] ◌ੁ 5 EZ [ʊ] ਉ 15 z E◌ [u] ◌ੂ 6 zEZ [u] ਊ 16 } [e] ◌ੇ 7 }Z [e] ਏ 17 } F◌ [æ] ◌ੈ 8 }FZ [æ] ਐ 18 z [o] ◌ੋ 9 zZ [o] ਓ 19 Fz [Ɔ] ◌ੌ 10 zFZ [Ɔ] ਔ 20 Y [ɪ] ਇ Table 3: Vowels Mapping 1139 Vowel Shahmukhi Gurmukhi Example ɑ Represented by Alef Madda (W) in the beginning of a word and by Alef (Z) in the middle or at the end of a word. Represented by ਆ and ◌ਾ ÌòeW → ਆਦਮੀ [ɑdmi] (man) 66z6Ï → ਜਾਵਣਾ [ʤɑvɳɑ] (go) ə Represented by Alef (Z) in the beginning of a word and with Zabar (F◌) elsewhere. Represented by ਅ in the beginning. 
H`Z → ਅੱਜ [ɑʤʤ] (today) e Represented by the combinations of Alef (Z) and Choti Yeh (~) in the beginning; a consonant and Choti Yeh (~) in the middle and a consonant and Baree Yeh (}) at the end of a word. Represented by ਏ and ◌ੇ uOääZ → ਏਧਰ [eḓʰər] (here), Z@ԧð → ਮੇਰਾ [merɑ] (mine), }g66 → ਸਾਰੇ [sɑre] (all) æ Represented by the combination of Alef (Z), Zabar (F◌) and Choti Yeh (~) in the beginning; a consonant, Zabar (F◌) and Choti Yeh (~) in the middle and a consonant, Zabar (F◌) and Baree Yeh (}) at the end of a word. Represented by ਐ and ◌ੈ E} FZԡ → ਐਹ [æh] (this), I‚Fr → ਮੈਲ [mæl] (dirt), Fì → ਹੈ [hæ] (is) ɪ Represented by the combination of Alef (Z) and Zer (G◌) in the beginning and a consonant and Zer (G◌) in the middle of a word. It never appears at the end of a word. Represented by ਇ and ਿ◌ âH§GZ → ਇੱਕੋ [ɪkko] (one), lGg66 → ਬਾਿਰਸ਼ [bɑrɪsh] (rain) i Represented by the combination of Alef (Z), Zer (G◌) and Choti Yeh (~) in the beginning; a consonant, Zer (G◌) and Choti Yeh (~) in the middle and a consonant and Choti Yeh (~) at the end of a word Represented by ਈ and ◌ੀ @υԝGZ → ਈਤਰ [iṱər] (mean) ~@ԧGðZ → ਅਮੀਰੀ [ɑmiri] (richness), ÌÌ6=ҊP → ਪੰਜਾਬੀ [pənʤɑbi] (Punjabi) ʊ Represented by the combination of Alef (Z) and Pesh (E◌) in the beginning; a consonant and Pesh (E◌) in the middle of a word. It never appears at the end of a word. Represented by ਉ and ◌ੁ uOHeEZ → Žਧਰ [ʊḓḓhr] (there) HIEï → ਮੁੱਲ [mʊll] (price) u Represented by the combination of Alef (Z), Pesh (E◌) and Vav (z) in the beginning, a consonant, Pesh (E◌) and Vav (z) in the middle and at the end of a word. Represented by ਊ and ◌ੂ zEegEZ → ਉਰਦੂ [ʊrḓu] ]gâEß → ਸੂਰਤ [surṱ] (face) o Represented by the combination of Alef (Z) and Vav (z) in the beginning; a consonant and Vav (z) in the middle and at the end of a word. Represented by ਓ and ◌ੋ h6J zZ՜ → ਓਛਾੜ [oʧhɑɽ] (cover), iâðww → ਪੜ‰ੋਲਾ [pɽholɑ] (a big pot in which wheat is stored) Ɔ Represented by the combination of Alef (Z), Zabar (F◌) and Vav (z) in the beginning; a consonant, Zabar (F◌) and Vav (z) in the middle and at the end of a word. Represented by ਔ and ◌ੌ ZhzFZ → ਔੜਾ [Ɔɽɑ] (hindrance), ]âFñ → ਮੌਤ [mƆṱ] (death) Note: Where → means ‘its equivalent in Gurmukhi is’. Table 4: Vowels Analysis of Punjabi for PMT 1140 4.3 Sub-Joins (PAIREEN) of Gurmukhi There are three PAIREEN (sub-joins) in Gurmukhi, “Haahaa”, “Vaavaa” and “Raaraa” shown in Table 5. For PMT, if HEH-DOACHASHMEE (|) does come after the less frequently used aspirated consonants then it is transliterated into PAIREEN Haahaa. Other PAIREENS are very rare in their usage and are used only in Sanskrit loan words. In present day writings, PAIREEN Vaavaa and Raaraa are being replaced by normal Vaavaa (ਵ) and Raaraa (ਰ) respectively. Sr. PAIREEN Shahmukhi Gurmukhi English 1 H JHçEo ਬੁੱਲ‰ Lips 2 R 6–gäs" ਚੰਦ‡ਮਾ Moon 3 Í y6˜ԡFâÎ ਸˆੈਮਾਨ Selfrespect Table 5: Sub-joins (PAIREEN) of Gurmukhi 4.4 Diacritical Marks Both in Shahmukhi and Gurmukhi, diacritical marks (dependent vowel signs in Gurmukhi) are the back bone of the vowel system and are very important for the correct pronunciation and understanding the meaning of a word. There are sixteen diacritical marks in Shahmukhi and nine dependent vowel sings in Gurmukhi (Malik, 2005). The mapping of diacritical marks is given in Table 6. Sr. Shahmukhi Gurmukhi Sr. 
Shahmukhi Gurmukhi 1 F◌ [ə] --- 9 F◌ [ɪn] ਿ◌ਨ 2 G◌ [ɪ] ਿ◌ 10 H◌ ◌ੱ 3 E◌ [ʊ] ◌ੁ 11 W◌ --- 4 ؕ --- 12 Y◌ --- 5 F◌ [ən] ਨ 13 Y◌ --- 6 E◌ [ʊn] ◌ੂਨ 14 G◌ --- 7 E◌ --- 15 --- 8 --- 16 G◌ [ɑ] ◌ਾ Table 6: Diacritical Mapping Diacritical marks in Shahmukhi are very important for the correct pronunciation and understanding the meaning of a word. But they are sparingly used in writing by common people. In the normal text of Shahmukhi books, newspapers, and magazines etc. one will not find the diacritical marks. The pronunciation of a word and its meaning would be comprehended with the help of the context in which it is used. For example, E} FZԡ uuu ~ww ~hâa }Z X @ԧð ~ ~hâa }Z wi X In the first sentence, the word ~hâa is pronounced as [ʧɔɽi] and it conveys the meaning of ‘wide’. In the second sentence, the word ~hâa is pronounced as [ʧuɽi] and it conveys the meaning of ‘bangle’. There should be Zabar (F◌) after Cheh (a) and Pesh (E◌) after Cheh (a) in the first and second words respectively, to remove the ambiguities. It is clear from the above example that diacritical marks are essential for removing ambiguities, natural language processing and speech synthesis. 4.5 Other Symbols Punctuation marks in Gurmukhi are the same as in English, except the full stop. DANDA (।) and double DANDA (॥) of Devanagri script are used for the full stop instead. In case of Shahmukhi, these are same as in Arabic. The mapping of digits and punctuation marks is given in Table 7. Sr. Shahmukhi Gurmukhi Sr. Shahmukhi Gurmukhi 1 0 ੦ 8 7 ੭ 2 1 ੧ 9 8 ੮ 3 2 ੨ 10 9 ੯ 4 3 ੩ 11 Ô , 5 4 ੪ 12 ? ? 6 5 ੫ 13 ; ; 7 6 ੬ 14 X । Table 7: Other Symbols Mapping 4.6 Dependency Rules Character mappings alone are not sufficient for PMT. They require certain dependency or contextual rules for producing correct transliteration. The basic idea behind these rules is the same as that of the character mappings. These rules include rules for aspirated consonants, nonaspirated consonants, Alef (Z), Alef Madda (W), Vav (z), Choti Yeh (~) etc. Only some of these rules are discussed here due to space limitations. Rules for Consonants: Shahmukhi consonants are transliterated into their equivalent 1141 Gurmukhi consonants e.g. k → ਸ [s]. Any diacritical mark except Shadda (H◌) is ignored at this point and is treated in rules for vowels or in rules for diacritical marks. In Shahmukhi, Shadda (H◌) is placed after the consonant but in Gurmukhi, its equivalent Addak (◌ੱ) is placed before the consonant e.g. \ + H◌ → ◌ੱਪ [pp]. Both Shadda (H◌) and Addak (◌ੱ) double the sound a consonant after or before which they are placed. This rule is applicable to all consonants in Table 1 and 2 except Ain (q), Noon (y), Noonghunna (y), Vav (z), Heh Gol ({), Dochashmee Heh (|), Choti Yeh (~) and Baree Yeh (}). These characters are treated separately. Rule for Hamza (Y): Hamza (Y) is a special character of Shahmukhi. Rules for Hamza (Y) are: − If Hamza (Y) is followed by Choti Yeh (~), then Hamza (Y) and Choti Yeh (~) will be transliterated into ਈ [i]. − If Hamza (Y) is followed by Baree Yeh (}), then Hamza (Y) and Baree Yeh (}) will be transliterated into ਏ [e]. − If Hamza (Y) is followed by Zer (G◌), then Hamza (Y) and Zer (G◌) will be transliterated into ਇ [ɪ]. − If Hamza (Y) is followed by Pesh (E◌), then Hamza (Y) and Pesh (E◌) will be transliterated into ਉ [ʊ]. In all other cases, Hamza (Y) will be transliterated into ਇ [ɪ]. 5 PMT System 5.1 System Architecture The architecture of PMT system and its functionality are described in this section. 
The system architecture of Punjabi Machine Transliteration System is shown in figure 1. Unicode encoded Shahmukhi text input is received by the Input Text Parser that parses it into Shahmukhi words by using simple parsing techniques. These words are called Shahmukhi Tokens. Then these tokens are given to the Transliteration Component. This component gives each token to the PMT Token Converter that converts a Shahmukhi Token into a Gurmukhi Token by using the PMT Rules Manager, which consists of character mappings and dependency rules. The PMT Token Converter then gives the Gurmukhi Token back to the Transliteration Component. When all Shahmukhi Tokens are converted into Gurmukhi Tokens, then all Gurmukhi Tokens are passed to the Output Text Generator that generates the output Unicode encoded Gurmukhi text. The main PMT process is done by the PMT Token Converter and the PMT Rules Manager. Figure 1: Architecture of PMT System PMT system is a rule based transliteration system and is very robust. It is fast and accurate in its working. It can be used in domains involving Information Communication Technology (web, WAP, instant messaging, etc.). 5.2 PMT Process The PMT Process is implemented in the PMT Token Converter and the PMT Rules Manager. For PMT, each Shahmukhi Token is parsed into its constituent characters and the character dependencies are determined on the basis of the occurrence and the contextual placement of the character in the token. In each Shahmukhi Token, there are some characters that bear dependencies and some characters are independent of such contextual dependencies for transliteration. If the character under consideration bears a dependency, then it is resolved and transliterated with the help of dependency rules. Input Text Parser PMT Rules Manager Character Mappings Dependency Rules Unicode Encoded Shahmukhi Text Unicode Encoded Gurmukhi Text PMT Token Converter Shahmukhi Token Gurmukhi Token Punjabi Machine Transliteration System Output Text Generator Transliteration Component Shahmukhi Tokens Gurmukhi Tokens 1142 If the character under consideration does not bear a dependency, then its transliteration is achieved by character mapping. This is done through mapping a character of the Shahmukhi token to its equivalent Gurmukhi character with the help of character mapping tables 1, 2, 3, 6 and 7, whichever is applicable. In this way, a Shahmukhi Token is transliterated into its equivalent Gurmukhi Token. Consider some input Shahmukhi text S. First it is parsed into Shahmukhi Tokens (S1, S2… SN). Suppose that Si = “y63„Zz” [vɑlejɑ̃] is the i th Shahmukhi Token. Si is parsed into characters Vav (z) [v], Alef (Z) [ɑ], Lam (w) [l], Choti Yeh (~) [j], Alef (Z) [ɑ] and Noon Ghunna (y) [ŋ]. Then PMT mappings and dependency rules are applied to transliterate the Shahmukhi Token into a Gurmukhi Token. The Gurmukhi Token Gi=“ਵਾਿਲਆਂ” is generated from Si. The step by step process is clearly shown in Table 8. Sr. Character(s) Parsed Gurmukhi Token Mapping or Rule Applied 1 z → ਵ [v] ਵ Mapping Table 4 2 Z → ◌ਾ [ɑ] ਵਾ Rule for ALEF 3 w → ਲ [l] ਵਾਲ Mapping Table 4 4 66 → ਿ◌ਆ [ɪɑ] ਵਾਿਲਆ Rule for YEH 5 y → ◌ਂ [ŋ] ਵਾਿਲਆਂ Rule for NOONGHUNNA Note: → is read as ‘is transliterated into’. Table 8: Methodology of PMTS In this way, all Shahmukhi Tokens are transliterated into Gurmukhi Tokens (G1, G2 … Gn). From these Gurmukhi Tokens, Gurmukhi text G is generated. 
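The token-by-token process illustrated in Table 8 can be summarized with the following minimal sketch. The mappings dictionary and the rules object with an apply method are hypothetical stand-ins for the character mapping tables and the PMT Rules Manager; this is not the actual PMT implementation, whose code is not published.

# Minimal sketch of the PMT process (hypothetical interfaces, not the actual system):
# each Shahmukhi token is scanned character by character; characters that trigger a
# dependency rule (Hamza, Shadda, vowel combinations, ...) are resolved by the rules
# manager, and all remaining characters fall back to the character-mapping tables.
def transliterate_token(token, mappings, rules):
    output = []
    i = 0
    while i < len(token):
        handled, gurmukhi, consumed = rules.apply(token, i)  # dependency rules first
        if not handled:
            gurmukhi, consumed = mappings.get(token[i], token[i]), 1  # plain mapping
        output.append(gurmukhi)
        i += consumed
    return "".join(output)

def pmt(shahmukhi_text, mappings, rules):
    tokens = shahmukhi_text.split()  # Input Text Parser: simple whitespace tokenization
    gurmukhi_tokens = [transliterate_token(t, mappings, rules) for t in tokens]
    return " ".join(gurmukhi_tokens)  # Output Text Generator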
The important point to be noted here is that input Shahmukhi text must contain all necessary diacritical marks, which are necessary for the correct pronunciation and understanding the meaning of the transliterated word. 6 Evaluation Experiments 6.1 Input Selection The first task for evaluation of the PMT system is the selection of input texts. To consider the historical aspects, two manuscripts, poetry by Maqbal (Maqbal) and Heer by Waris Shah (Waris, 1766) were selected. Geographically Punjab is divided into four parts eastern Punjab (Indian Punjab), central Punjab, southern Punjab and northern Punjab. All these geographical regions represent the major dialects of Punjabi. Hayms of Baba Nanak (eastern Punjab), Heer by Waris Shah (central Punjab), Hayms by Khawaja Farid (southern Punjab) and Saif-ul-Malooq by Mian Muhammad Bakhsh (northern Punjab) were selected for the evaluation of PMT system. All the above selected texts are categorized as classical literature of Punjabi. In modern literature, poetry and short stories of different poets and writers were selected from some issues of Puncham (monthly Punjabi magazine since 1985) and other published books. All of these selected texts were then compiled into Unicode encoded text as none of them were available in this form before. The main task after the compilation of all the selected texts into Unicode encoded texts is to put all necessary diacritical marks in the text. This is done with help of dictionaries. The accuracy of the PMT system depends upon the necessary diacritical marks. Absence of the necessary diacritical marks affects the accuracy greatly. 6.2 Results After the compilation of selected input texts, they are transliterated into Gurmukhi texts by using the PMT system. Then the transliterated Gurmukhi texts are tested for errors and accuracy. Testing is done manually with help of dictionaries of Shahmukhi and Gurmukhi by persons who know both scripts. The results are given in Table 9. Source Total Words Accuracy Manuscripts 1,007 98.21 Baba Nanak 3,918 98.47 Khawaja Farid 2,289 98.25 Waris Shah 14,225 98.95 Mian Muhammad Bakhsh 7,245 98.52 Modern lieratutre 16,736 99.39 Total 45,420 98.95 Table 9: Results of PMT System If we look at the results, it is clear that the PMT system gives more than 98% accuracy on classical literature and more than 99% accuracy on the modern literature. So PMT system fulfills the requirement of transliteration across two scripts of Punjabi. The only constraint to achieve this accuracy is that input text must contain all necessary diacritical marks for removing ambiguities. 1143 7 Conclusion Shahmukhi and Gurmukhi being the only two prevailing scripts for Punjabi expressions encompass a population of almost 110 million around the globe. PMT is an endeavor to bridge the ethnical, cultural and geographical divisions between the Punjabi speaking communities. By implementing this system of transliteration, new horizons for thought, idea and belief will be shared and the world will gain an impetus on the efforts harmonizing relationships between nations. The large repository of historical, literary and religious work done by generations will now be available for easy transformation and critique for all. The research has future milestone enabling PMT system for back machine transliteration from Gurmukhi to Shahmukhi. Reference Ari Pirkola, Jarmo Toivonen, Heikki Keskustalo, Kari Visala, and Kalervo Järvelin. 2003. Fuzzy Translation of Cross-Lingual Spelling Variants. 
In Proceedings of the 26th annual international ACM SIGIR conference on Research and development in informaion retrieval. pp: 345 – 352 Baba Guru Nanak, arranged by Muhammad Asif Khan. 1998. " HH66 6666 63r Wi (Sayings of Baba Nanak in Punjabi Shahmukhi). Pakistan Punjabi Adbi Board, Lahore Bhatia, Tej K. 2003. The Gurmukhi Script and Other Writing Systems of Punjab: History, Structure and Identity. International Symposium on Indic Script: Past and future organized by Research Institute for the Languages and Cultures of Asia and Africa and Tokyo University of Foreign Studies, December 17 – 19. pp: 181 – 213 In-Ho Kang and GilChang Kim. 2000. English-toKorean transliteration using multiple unbounded overlapping phoneme chunks. In Proceedings of the 17th conference on Computational Linguistics. 1: 418 – 424 Khawaja Farid (arranged by Muhammad Asif Khan). " ääGuu EbZâa 63r Wi (Sayings of Khawaja Farid in Punjabi Shahmukhi). Pakistan Punjabi Adbi Board, Lahore Knight, K. and Stalls, B. G. 1998. Translating Names and Technical Terms in Arabic Tex. Proceedings of the COLING/ACL Workshop on Computational Approaches to Semitic Languages Knight, Kevin and Graehl, Jonathan. 1997. Machine Transliteration. In Proceedings of the 35th Annual Meeting of the Association for Computational Linguistics. pp. 128-135 Knight, Kevin; Morgan Kaufmann and Graehl, Jonathan. 1998. Machine Transliteration. In Computational Linguistics. 24(4): 599 – 612 Malik, M. G. Abbas. 2005. Towards Unicode Compatible Punjabi Character Set. In proceedings of 27th Internationalization and Unicode Conference, 6 – 8 April, Berlin, Germany Maqbal. Gbäæ _âú . Punjabi Manuscript in Oriental Section, Main Library University of the Punjab, Quaid-e-Azam Campus, Lahore Pakistan; 7 pages; Access # 8773 Mian Muhammad Bakhsh (Edited by Fareer Muhammad Faqeer). 2000. Saif-ul-Malooq. Al-Faisal Pub. Urdu Bazar, Lahore Nasreen AbdulJaleel, Leah S. Larkey. 2003. Statistical transliteration for English-Arabic cross language information retrieval. In Proceedings of the 12th international conference on information and knowledge management. pp: 139 – 146 Paola Virga and Sanjeev Khudanpur. 2003. Transliteration of proper names in cross-language applications. In Proceedings of the 26th annual international ACM SIGIR conference on Research and development in information retrieval. pp: 365 – 366 Rahman Tariq. 2004. Language Policy and Localization in Pakistan: Proposal for a Paradigmatic Shift. Crossing the Digital Divide, SCALLA Conference on Computational Linguistics, 5 – 7 January 2004 Sung Young Jung, SungLim Hong and Eunok Peak. 2000. An English to Korean transliteration model of extended markov window. In Proceedings of the 17th conference on Computational Linguistics. 1:383 – 389 Tanveer Bukhari. 2000. zegEZ ÌÌ6=Ҋ ›~ P Ö. Urdu Science Board, 299 Uper Mall, Lahore Waris Shah. 1766. 6J Zg @ԧ¦ 6= . Punjabi Manuscript in Oriental Section, Main Library University of the Punjab, Quaid-e-Azam Campus, Lahore Pakistan; 48 pages; Access # [Ui VI 135/]1443 Waris Shah (arranged by Naseem Ijaz). 1977. 6J Zg @ԧ¦ 6= . Lehran, Punjabi Journal, Lahore Yan Qu, Gregory Grefenstette, David A. Evans. 2003. Automatic transliteration for Japanese-to-English text retrieval. In Proceedings of the 26th annual international ACM SIGIR conference on Research and development in information retrieval. pp: 353 – 360 1144
Proceedings of the 21st International Conference on Computational Linguistics and 44th Annual Meeting of the ACL, pages 1145–1152, Sydney, July 2006. c⃝2006 Association for Computational Linguistics Multilingual Document Clustering: an Heuristic Approach Based on Cognate Named Entities Soto Montalvo GAVAB Group URJC [email protected] Raquel Mart´ınez NLP&IR Group UNED [email protected] Arantza Casillas Dpt. EE UPV-EHU [email protected] V´ıctor Fresno GAVAB Group URJC [email protected] Abstract This paper presents an approach for Multilingual Document Clustering in comparable corpora. The algorithm is of heuristic nature and it uses as unique evidence for clustering the identification of cognate named entities between both sides of the comparable corpora. One of the main advantages of this approach is that it does not depend on bilingual or multilingual resources. However, it depends on the possibility of identifying cognate named entities between the languages used in the corpus. An additional advantage of the approach is that it does not need any information about the right number of clusters; the algorithm calculates it. We have tested this approach with a comparable corpus of news written in English and Spanish. In addition, we have compared the results with a system which translates selected document features. The obtained results are encouraging. 1 Introduction Multilingual Document Clustering (MDC) involves dividing a set of n documents, written in different languages, into a specified number k of clusters, so the documents that are similar to other documents are in the same cluster. Meanwhile a multilingual cluster is composed of documents written in different languages, a monolingual cluster is composed of documents written in one language. MDC has many applications. The increasing amount of documents written in different languages that are available electronically, leads to develop applications to manage that amount of information for filtering, retrieving and grouping multilingual documents. MDC tools can make easier tasks such as Cross-Lingual Information Retrieval, the training of parameters in statistics based machine translation, or the alignment of parallel and non parallel corpora, among others. MDC systems have developed different solutions to group related documents. The strategies employed can be classified in two main groups: the ones which use translation technologies, and the ones that transform the document into a language-independent representation. One of the crucial issues regarding the methods based on document or features translation is the correctness of the proper translation. Bilingual resources usually suggest more than one sense for a source word and it is not a trivial task to select the appropriate one. Although word-sense disambiguation methods can be applied, these are not free of errors. On the other hand, methods based on language-independent representation also have limitations. For instance, those based on thesaurus depend on the thesaurus scope. Numbers or dates identification can be appropriate for some types of clustering and documents; however, for other types of documents or clustering it could not be so relevant and even it could be a source of noise. In this work we dealt with MDC and we proposed an approach based only on cognate Named Entities (NE) identification. We have tested this approach with a comparable corpus of news written in English and Spanish, obtaining encouraging results. 
One of the main advantages of this approach is that it does not depend on multilingual resources such as dictionaries, machine translation systems, thesaurus or gazetteers. In addition, no information about the right number of clusters has 1145 to be provided to the algorithm. It only depends on the possibility of identifying cognate named entities between the languages involved in the corpus. It could be particularly appropriate for news corpus, where named entities play an important role. In order to compare the results of our approach with other based on features translation, we also dealt with this one, as baseline approach. The system uses EuroWordNet (Vossen, 1998) to translate the features. We tried different features categories and combinations of them in order to determine which ones lead to improve MDC results in this approach. In the following section we relate previous work in the field. In Section 3 we present our approach for MDC. Section 4 describes the system we compare our approach with, as well as the experiments and the results. Finally, Section 5 summarizes the conclusions and the future work. 2 Related Work MDC is normally applied with parallel (Silva et. al., 2004) or comparable corpus (Chen and Lin, 2000), (Rauber et. al., 2001), (Lawrence, 2003), (Steinberger et. al., 2002), (Mathieu et. al, 2004), (Pouliquen et. al., 2004). In the case of the comparable corpora, the documents usually are news articles. Considering the approaches based on translation technology, two different strategies are employed: (1) translate the whole document to an anchor language, and (2) translate some features of the document to an anchor language. With regard to the first approach, some authors use machine translation systems, whereas others translate the document word by word consulting a bilingual dictionary. In (Lawrence, 2003), the author presents several experiments for clustering a Russian-English multilingual corpus; several of these experiments are based on using a machine translation system. Columbia’s Newsblaster system (Kirk et al., 2004) clusters news into events, it categorizes events into broad topic and summarizes multiple articles on each event. In the clustering process non-English documents are translated using simple dictionary lookup techniques for translating Japanese and Russian documents, and the Systran translation system for the other languages used in the system. When the solution involves translating only some features, first it is necessary to select these features (usually entities, verbs, nouns) and then translate them with a bilingual dictionary or/and consulting a parallel corpus. In (Mathieu et. al, 2004) before the clustering process, the authors perform a linguistic analysis which extracts lemmas and recognizes named entities (location, organization, person, time expression, numeric expression, product or event); then, the documents are represented by a set of terms (keywords or named entity types). In addition, they use document frequency to select relevant features among the extracted terms. Finally, the solution uses bilingual dictionaries to translate the selected features. In (Rauber et. al., 2001) the authors present a methodology in which documents are parsed to extract features: all the words which appear in n documents except the stopwords. Then, standard machine translation techniques are used to create a monolingual corpus. 
After the translation process the documents are automatically organized into separate clusters using an un-supervised neural network. Some approaches first carry out an independent clustering in each language, that is a monolingual clustering, and then they find relations among the obtained clusters generating the multilingual clusters. Others solutions start with a multilingual clustering to look for relations between the documents of all the involved languages. This is the case of (Chen and Lin, 2000), where the authors propose an architecture of multilingual news summarizer which includes monolingual and multilingual clustering; the multilingual clustering takes input from the monolingual clusters. The authors select different type of features depending on the clustering: for the monolingual clustering they use only named entities, for the multilingual clustering they extract verbs besides named entities. The strategies that use language-independent representation try to normalize or standardize the document contents in a language-neutral way; for example: (1) by mapping text contents to an independent knowledge representation, or (2) by recognizing language independent text features inside the documents. Both approaches can be employed isolated or combined. The first approach involves the use of existing multilingual linguistic resources, such as thesaurus, to create a text representation consisting of a set of thesaurus items. Normally, in a multilingual thesaurus, elements in different languages are 1146 related via language-independent items. So, two documents written in different languages can be considered similar if they have similar representation according to the thesaurus. In some cases, it is necessary to use the thesaurus in combination with a machine learning method for mapping correctly documents onto thesaurus. In (Steinberger et. al., 2002) the authors present an approach to calculate the semantic similarity by representing the document contents in a language independent way, using the descriptor terms of the multilingual thesaurus Eurovoc. The second approach, recognition of language independent text features, involves the recognition of elements such as: dates, numbers, and named entities. In others works, for instance (Silva et. al., 2004), the authors present a method based on Relevant Expressions (RE). The RE are multilingual lexical units of any length automatically extracted from the documents using the LiPXtractor extractor, a language independent statistics-based tool. The RE are used as base features to obtain a reduced set of new features for the multilingual clustering, but the clusters obtained are monolingual. Others works combine recognition of independent text features (numbers, dates, names, cognates) with mapping text contents to a thesaurus. In (Pouliquen et. al., 2004) the cross-lingual news cluster similarity is based on a linear combination of three types of input: (a) cognates, (b) automatically detected references of geographical place names, and (c) the results of a mapping process onto a multilingual classification system which maps documents onto the multilingual thesaurus Eurovoc. In (Steinberger et. al., 2004) it is proposed to extract language-independent text features using gazetteers and regular expressions besides thesaurus and classification systems. None of the revised works use as unique evidence for multilingual clustering the identification of cognate named entities between both sides of the comparable corpora. 
3 MDC by Cognate NE Identification We propose an approach for MDC based only on cognate NE identification. The NEs categories that we take into account are: PERSON, ORGANIZATION, LOCATION, and MISCELLANY. Other numerical categories such as DATE, TIME or NUMBER are not considered because we think they are less relevant regarding the content of the document. In addition, they can lead to group documents with few content in common. The process has two main phases: (1) cognate NE identification and (2) clustering. Both phases are described in detail in the following sections. 3.1 Cognate NE identification This phase consists of three steps: 1. Detection and classification of the NEs in each side of the corpus. 2. Identification of cognates between the NEs of both sides of the comparable corpus. 3. To work out a statistic of the number of documents that share cognates of the different NE categories. Regarding the first step, it is carried out in each side of the corpus separately. In our case we used a corpus with morphosyntactical annotations and the NEs identified and classified with the FreeLing tool (Carreras et al., 2004). In order to identify the cognates between NEs 4 steps are carried out: • Obtaining two list of NEs, one for each language. • Identification of entity mentions in each language. For instance, “Ernesto Zedillo”, “Zedillo”, “Sr. Zedillo” will be considered as the same entity after this step since they refer to the same person. This step is only applied to entities of PERSON category. The identification of NE mentions, as well as cognate NE, is based on the use of the Levenshtein edit-distance function (LD). This measure is obtained by finding the cheapest way to transform one string into another. Transformations are the one-step operations of insertion, deletion and substitution. The result is an integer value that is normalized by the length of the longest string. In addition, constraints regarding the number of words that the NEs are made up, as well as the order of the words are applied. • Identification of cognates between the NEs of both sides of the comparable corpus. It is also based on the LD. In addition, also 1147 constraints regarding the number and the order of the words are applied. First, we tried cognate identification only between NEs of the same category (PERSON with PERSON, ...) or between any category and MISCELLANY (PERSON with MISCELLANY, . .. ). Next, with the rest of NEs that have not been considered as cognate, a next step is applied without the constraint of being to the same category or MISCELLANY. As result of this step a list of corresponding bilingual cognates is obtained. • The same procedure carried out for obtaining bilingual cognates is used to obtain two more lists of cognates, one per language, between the NEs of the same language. Finally, a statistic of the number of documents that share cognates of the different NE categories is worked out. This information can be used by the algorithm (or the user) to select the NE category used as constraint in the clustering steps 1(a) and 2(b). 3.2 Clustering The algorithm for clustering multilingual documents based on cognate NEs is of heuristic nature. It consists of 3 main phases: (1) first clusters creation, (2) addition of remaining documents to existing clusters, and (3) final cluster adjustment. 1. First clusters creation. This phase consists of 2 steps. (a) First, documents in different languages that have more cognates in common than a threshold are grouped into the same cluster. 
In addition, at least one of the cognates has to be of a specific category (PERSON, LOCATION or ORGANIZATION), and the number of mentions has to be similar; a threshold determines the similarity degree. After this step some documents are assigned to clusters while the others are free (with no cluster assigned). (b) Next, it is tried to assign each free document to an existing cluster. This is possible if there is a document in the cluster that has more cognates in common with the free document than a threshold, with no constraints regarding the NE category. If it is not possible, a new cluster is created. This step can also have as result free documents. At this point the number of clusters created is fixed for the next phase. 2. Addition of the rest of the documents to existing clusters. This phase is carried out in 2 steps. (a) A document is added to a cluster that contains a document which has more cognates in common than a threshold. (b) Until now, the cognate NEs have been compared between both sides of the corpus, that is a bilingual comparison. In this step, the NEs of a language are compared with those of the same language. This can be described like a monolingual comparison step. The aim is to group similar documents of the same language if the bilingual comparison steps have not been successful. As in the other cases, a document is added to a cluster with at least a document of the same language which has more cognates in common than a threshold. In addition, at least one of the cognates have to be of a specific category (PERSON, LOCATION or ORGANIZATION). 3. Final cluster adjustment. Finally, if there are still free documents, each one is assigned to the cluster with more cognates in common, without constraints or threshold. Nonetheless, if free documents are left because they do not have any cognates in common with those assigned to the existing clusters, new clusters can be created. Most of the thresholds can be customized in order to permit and make the experiments easier. In addition, the parameters customization allows the adaptation to different type of corpus or content. For example, in steps 1(a) and 2(b) we enforce at least on match in a specific NE category. This parameter can be customized in order to guide the grouping towards some type of NE. In Section 4.5 the exact values we used are described. Our approach is an heuristic method that following an agglomerative approach and in an iterative way, decides the number of clusters and 1148 locates each document in a cluster; everything is based in cognate NEs identification. The final number of clusters depends on the threshold values. 4 Evaluation We wanted not only determine whether our approach was successful for MDC or not, but we also wanted to compare its results with other approach based on feature translation. That is why we try MDC by selecting and translating the features of the documents. In this Section, first the MCD by feature translation is described; next, the corpus, the experiments and the results are presented. 4.1 MDC by Feature Translation In this approach we emphasize the feature selection based on NEs identification and the grammatical category of the words. The selection of features we applied is based on previous work (Casillas et. al, 2004), in which several document representations are tested in order to study which of them lead to better monolingual clustering results. We used this MDC approach as baseline method. The approach we implemented consists of the following steps: 1. 
Selection of features (NE, noun, verb, adjective, ...) and its context (the whole document or the first paragraph). Normally, the journalist style includes the heart of the news in the first paragraph; taking this into account we have experimented with the whole document and only with the first paragraph. 2. Translation of the features by using EuroWordNet 1.0. We translate English into Spanish. When more than one sense for a single word is provided, we disambiguate by selecting one sense if it appears in the Spanish corpus. Since we work with a comparable corpus, we expect that the correct translation of a word appears in it. 3. In order to generate the document representation we use the TF-IDF function to weight the features. 4. Use of an clustering algorithm. Particularly, we used a partitioning algorithm of the CLUTO (Karypis, 2002) library for clustering. 4.2 Corpus A Comparable Corpus is a collection of similar texts in different languages or in different varieties of a language. In this work we compiled a collection of news written in Spanish and English belonging to the same period of time. The news are categorized and come from the news agency EFE compiled by HERMES project (http://nlp.uned.es/hermes/index.html). That collection can be considered like a comparable corpus. We have used three subset of that collection. The first subset, call S1, consists on 65 news, 32 in Spanish and 33 in English; we used it in order to train the threshold values. The second one, S2, is composed of 79 Spanish news and 70 English news, that is 149 news. The third subset, S3, contains 179 news: 93 in Spanish and 86 in English. In order to test the MDC results we carried out a manual clustering with each subset. Three persons read every document and grouped them considering the content of each one. They judged independently and only the identical resultant clusters were selected. The human clustering solution is composed of 12 clusters for subset S1, 26 clusters for subset S2, and 33 clusters for S3. All the clusters are multilingual in the three subsets. In the experimentation process of our approach the first subset, S1, was used to train the parameters and threshold values; with the second one and the third one the best parameters values were applied. 4.3 Evaluation metric The quality of the experimentation results are determined by means of an external evaluation measure, the F-measure (van Rijsbergen, 1974). This measure compares the human solution with the system one. The F-measure combines the precision and recall measures: F(i, j) = 2 × Recall(i, j) × Precision(i, j) (Precision(i, j) + Recall(i, j)) , (1) where Recall(i, j) = nij ni , Precision(i, j) = nij nj , nij is the number of members of cluster human solution i in cluster j, nj is the number of members of cluster j and ni is the number of members of cluster human solution i. For all the clusters: F = X i ni n max{F(i)} (2) The closer to 1 the F-measure value the better. 1149 4.4 Experiments and Results with MDC by Feature Translation After trying with features of different grammatical categories and combinations of them, Table 1 and Table 2 only show the best results of the experiments. The first column of both tables indicates the features used in clustering: NOM (nouns), VER (verbs), ADJ (adjectives), ALL (all the lemmas), NE (named entities), and 1rst PAR (those of the first paragraph of the previous categories). The second column is the F-measure, and the third one indicates the number of multilingual clusters obtained. 
Note that the number of total clusters of each subset is provided to the clustering algorithm. As can be seen in the tables, the results depend on the features selected. 4.5 Experiments and Results with MDC by Cognate NE The threshold for the LD in order to determine whether two NEs are cognate or not is 0.2, except for entities of ORGANIZATION and LOCATION categories which is 0.3 when they have more than one word. Regarding the thresholds of the clustering phase (Section 3.2), after training the thresholds with the collection S1 of 65 news articles we have concluded: • The first step in the clustering phase, 1(a), performs a good first grouping with threshold relatively high; in this case 6 or 7. That is, documents in different languages that have more cognates in common than 6 or 7 are grouped into the same cluster. In addition, at least one of the cognates have to be of an specific category, and the difference between the number of mentions have to be equal or less than 2. Of course, these threshold are applied after checking that there are documents that meet the requirements. If they do not, thresholds are reduced. This first step creates multilingual clusters with high cohesiveness. • Steps 1(b) and 2(a) lead to good results with small threshold values: 1 or 2. They are designed to give priority to the addition of documents to existing clusters. In fact, only step 1(b) can create new clusters. • Step 2(b) tries to group similar documents of the same language when the bilingual comparison steps could not be able to deal with them. This step leads to good results with a threshold value similar to 1(a) step, and with the same NE category. On the other hand, regarding the NE category enforce on match in steps 1(a) and 2(b), we tried with the two NE categories of cognates shared by the most number of documents. Particularly, with S2 and S3 corpus the NE categories of the cognates shared by the most number of documents was LOCATION followed by PERSON. We experimented with both categories. Table 3 and Table 4 show the results of the application of the cognate NE approach to subsets S2 and S3 respectively. The first column of both tables indicates the thresholds for each step of the algorithm. Second and third columns show the results by selecting PERSON category as NE category to be shared by at least a cognate in steps 1(a) and 2(b); whereas fourth and fifth columns are calculated with LOCATION NE category. The results are quite similar but slightly better with LOCATION category, that is the cognate NE category shared by the most number of documents. Although none of the results got the exact number of clusters, it is remarkable that the resulting values are close to the right ones. In fact, no information about the right number of cluster is provided to the algorithm. If we compare the performance of the two approaches (Table 3 with Table 1 and Table 4 with Table 2) our approach obtains better results. With the subset S3 the results of the F-measure of both approaches are more similar than with the subset S2, but the F-measure values of our approach are still slightly better. To sum up, our approach obtains slightly better results that the one based on feature translation with the same corpora. In addition, the number of multilingual clusters is closer to the reference solution. We think that it is remarkable that our approach reaches results that can be comparable with those obtained by means of features translation. 
We will have to test the algorithm with different corpora (with some monolingual clusters, different languages) in order to confirm its performance. 5 Conclusions and Future Work We have presented a novel approach for Multilingual Document Clustering based only on cognate 1150 Selected Features F-measure Multilin. Clus./Total NOM, VER 0.8533 21/26 NOM, ADJ 0.8405 21/26 ALL 0.8209 21/26 NE 0.8117 19/26 NOM, VER, ADJ 0.7984 20/26 NOM, VER, ADJ, 1rst PAR 0.7570 21/26 NOM, ADJ, 1rst PAR 0.7515 22/26 ALL, 1rst PAR 0.7473 19/26 NOM, VER, 1rst PAR 0.7371 20/26 Table 1: MDC results with the feature translation approach and subset S2 Selected Features F-measure Multilin. Clus. /Total NOM, ADJ 0.8291 26/33 ALL 0.8126 27/33 NOM, VER 0.8028 26/33 NE 0.8015 23/33 NOM, VER, ADJ 0.7917 25/33 NOM, ADJ, 1rst PAR 0.7520 28/33 NOM, VER, ADJ, 1rst PAR 0.7484 26/33 ALL, 1rst PAR 0.7288 26/33 NOM, VER, 1rst PAR 0.7200 24/33 Table 2: MDC results with the feature translation approach and subset S3 Thresholds 1(a), 2(b) match on PERSON 1(a), 2(b) match on LOCATION Steps Results Clusters Results Clusters 1(a) 1(b) 2(a) 2(b) F-measure Multil./Calc./Total F-measure Multil./Calc./Total 6 2 1 5 0.9097 24/24/26 0.9097 24/24/26 6 2 1 6 0.8961 24/24/26 0.8961 24/24/26 6 2 1 7 0.8955 24/24/26 0.8955 24/24/26 6 2 2 5 0.8861 24/24/26 0.8913 24/24/26 7 2 1 5 0.8859 24/24/26 0.8913 24/24/26 6 2 2 4 0.8785 24/24/26 0.8899 24/24/26 6 2 2 6 0.8773 24/24/26 0.8833 24/24/26 6 2 2 7 0.8773 24/24/26 0.8708 24/24/26 Table 3: MDC results with the cognate NE approach and S2 subset Thresholds 1(a), 2(b) match on PERSON 1(a), 2(b) match on LOCATION Steps Results Clusters Results Clusters 1(a) 1(b) 2(a) 2(b) F-measure Multil./Calc./Total F-measure Multil./Calc./Total 7 2 1 5 0.8587 30/30/33 0.8621 30/30/33 6 2 1 5 0.8552 30/30/33 0.8552 30/30/33 6 2 1 6 0.8482 30/30/33 0.8483 30/30/33 6 2 1 7 0.8471 30/30/33 0.8470 30/30/33 6 2 2 5 0.8354 30/30/33 0.8393 30/30/33 6 2 2 6 0.8353 30/30/33 0.8474 30/30/33 6 2 2 4 0.8323 30/30/33 0.8474 30/30/33 6 2 2 7 0.8213 30/30/33 0.8134 30/30/33 Table 4: MDC results with the cognate NE approach and S3 subset 1151 named entities identification. One of the main advantages of this approach is that it does not depend on multilingual resources such as dictionaries, machine translation systems, thesaurus or gazetteers. The only requirement to fulfill is that the languages involved in the corpus have to permit the possibility of identifying cognate named entities. Another advantage of the approach is that it does not need any information about the right number of clusters. In fact, the algorithm calculates it by using the threshold values of the algorithm. We have tested this approach with a comparable corpus of news written in English and Spanish, obtaining encouraging results. We think that this approach could be particularly appropriate for news articles corpus, where named entities play an important role. Even more, when there is no previous evidence of the right number of clusters. In addition, we have compared our approach with other based on feature translation, resulting that our approach presents a slightly better performance. Future work will include the compilation of more corpora, the incorporation of machine learning techniques in order to obtain the thresholds more appropriate for different type of corpus. In addition, we will study if changing the order of the bilingual and monolingual comparison steps the performance varies significantly for different type of corpus. 
Acknowledgements We wish to thank the anonymous reviewers for their helpful and instructive comments. This work has been partially supported by MCyT TIN200508943-C02-02. References Benoit Mathieu, Romanic Besancon and Christian Fluhr. 2004. “Multilingual document clusters discovery”. RIAO’2004, p. 1-10. Arantza Casillas, M. Teresa Gonz´alez de Lena and Raquel Mart´ınez. 2004. “Sampling and Feature Selection in a Genetic Algorithm for Document Clustering”. Computational Linguistics and Intelligent Text Processing, CICLing’04. Lecture Notes in Computer Science, Springer-Verlag, p. 601-612. Hsin-Hsi Chen and Chuan-Jie Lin. 2000. “A Multilingual News Summarizer”. Proceedings of 18th International Conference on Computational Linguistics, p. 159-165. Xavier Carreras, I. Chao, Lluis Padr´o and M. Padr´o 2004 “An Open-Source Suite of Language Analyzers”. Proceedings of the 4th International Conference on Language Resources and Evaluation (LREC’04). Lisbon, Portugal. http://garraf.epsevg.upc.es/freeling/. Karypis G. 2002. “ CLUTO: A Clustering Toolkit”. Technical Report: 02-017. University of Minnesota, Department of Computer Science, Minneapolis, MN 55455. David Kirk Evans, Judith L. Klavans and Kathleen McKeown. 2004. “Columbian Newsblaster: Multilingual News Summarization on the Web”. Proceedings of the Human Language Technology Conference and the North American Chapter of the Association for Computational Linguistics Annual Meeting, HLT-NAACL’2004. Lawrence J. Leftin. 2003. “Newsblaster RussianEnglish Clustering Performance Analysis”. Columbia computer science Technical Reports. Bruno Pouliquen, Ralf Steinberger, Camelia Ignat, Emilia Ksper and Irina Temikova. 2004. “Multilingual and cross-lingual news topic tracking”. Proceedings of the 20th International Conference on computational Linguistics, p. 23-27. Andreas Rauber, Michael Dittenbach and Dieter Merkl. 2001. “Towards Automatic Content-Based Organization of Multilingual Digital Libraries: An English, French, and German View of the Russian Information Agency Novosti News”. Third All-Russian Conference Digital Libraries: Advanced Methods and Technologies, Digital Collections Petrozavodsk, RCDI’2001. van Rijsbergen, C.J. 1974. “Foundations of evaluation”. Journal of Documentation, 30 (1974), p. 365373. Joaquin Silva, J. Mexia, Carlos Coelho and Gabriel Lopes. 2004. ”A Statistical Approach for Multilingual Document Clustering and Topic Extraction form Clusters”. Pliska Studia Mathematica Bulgarica, v.16,p. 207-228. Ralf Steinberger, Bruno Pouliquen, and Johan Scheer. 2002. “Cross-Lingual Document Similarity Calculation Using the Multilingual Thesaurus EUROVOC”. Computational Linguistics and Intelligent Text Processing, CICling’02. Lecture Notes in Computer Science, Springer-Verlag, p. 415-424. Ralf Steinberger, Bruno Pouliquen, and Camelia Ignat. 2004. “Exploiting multilingual nomenclatures and language-independent text features as an interlingua for cross-lingual text analysis applications”. Slovenian Language Technology Conference. Information Society, SLTC 2004. Vossen, P. 1998. “Introduction to EuroWordNet”. Computers and the Humanities Special Issue on EuroWordNet. 1152
2006
144
Proceedings of the 21st International Conference on Computational Linguistics and 44th Annual Meeting of the ACL, pages 1153–1160, Sydney, July 2006. c⃝2006 Association for Computational Linguistics Time Period Identification of Events in Text Taichi Noro† Takashi Inui†† Hiroya Takamura‡ Manabu Okumura‡ †Interdisciplinary Graduate School of Science and Engineering Tokyo Institute of Technology 4259 Nagatsuta-cho, Midori-ku, Yokohama, Kanagawa, Japan ††Japan Society for the Promotion of Science ‡Precision and Intelligence Laboratory, Tokyo Institute of Technology {norot, tinui}@lr.pi.titech.ac.jp,{takamura, oku}@pi.titech.ac.jp Abstract This study aims at identifying when an event written in text occurs. In particular, we classify a sentence for an event into four time-slots; morning, daytime, evening, and night. To realize our goal, we focus on expressions associated with time-slot (time-associated words). However, listing up all the time-associated words is impractical, because there are numerous time-associated expressions. We therefore use a semi-supervised learning method, the Naïve Bayes classifier backed up with the Expectation Maximization algorithm, in order to iteratively extract time-associated words while improving the classifier. We also propose to use Support Vector Machines to filter out noisy instances that indicates no specific time period. As a result of experiments, the proposed method achieved 0.864 of accuracy and outperformed other methods. 1 Introduction In recent years, the spread of the internet has accelerated. The documents on the internet have increased their importance as targets of business marketing. Such circumstances have evoked many studies on information extraction from text especially on the internet, such as sentiment analysis and extraction of location information. In this paper, we focus on the extraction of temporal information. Many authors of documents on the web often write about events in their daily life. Identifying when the events occur provides us valuable information. For example, we can use temporal information as a new axis in the information retrieval. From time-annotated text, companies can figure out when customers use their products. We can explore activities of users for marketing researches, such as “What do people eat in the morning?”, “What do people spend money for in daytime?” Most of previous work on temporal processing of events in text dealt with only newswire text. In those researches, it is assumed that temporal expressions indicating the time-period of events are often explicitly written in text. Some examples of explicit temporal expressions are as follows: “on March 23”, “at 7 p.m.”. However, other types of text including web diaries and blogs contain few explicit temporal expressions. Therefore one cannot acquire sufficient temporal information using existing methods. Although dealing with such text as web diaries and blogs is a hard problem, those types of text are excellent information sources due to their overwhelmingly huge amount. In this paper, we propose a method for estimating occurrence time of events expressed in informal text. In particular, we classify sentences in text into one of four time-slots; morning, daytime, evening, and night. To realize our goal, we focus on expressions associated with time-slot (hereafter, called time-associated words), such as “commute (morning)”, “nap (daytime)” and “cocktail (night)”. Explicit temporal expressions have more certain information than the timeassociated words. 
However, these expressions are rare in usual text. On the other hand, although the time-associated words provide us only indirect information for estimating occurrence time of events, these words frequently appear in usual text. Actually, Figure 2 (we will discuss the graph in Section 5.2, again) shows the number of sentences including explicit tem1153 poral expressions and time-associated words respectively in text. The numbers are obtained from a corpus we used in this paper. We can figure out that there are much more time-associated words than explicit temporal expressions in blog text. In other words, we can deal with wide coverage of sentences in informal text by our method with time-associated words. However, listing up all the time-associated words is impractical, because there are numerous time-associated expressions. Therefore, we use a semi-supervised method with a small amount of labeled data and a large amount of unlabeled data, because to prepare a large quantity of labeled data is costly, while unlabeled data is easy to obtain. Specifically, we adopt the Naïve Bayes classifier backed up with the Expectation Maximization (EM) algorithm (Dempster et al., 1977) for semi-supervised learning. In addition, we propose to use Support Vector Machines to filter out noisy sentences that degrade the performance of the semi-supervised method. In our experiments using blog data, we obtained 0.864 of accuracy, and we have shown effectiveness of the proposed method. This paper is organized as follows. In Section 2 we briefly describe related work. In Section 3 we describe the details of our corpus. The proposed method is presented in Section 4. In Section 5, we describe experimental results and discussions. We conclude the paper in Section 6. 2 Related Work The task of time period identification is new and has not been explored much to date. Setzer et al. (2001) and Mani et al. (2000) aimed at annotating newswire text for analyzing temporal information. However, these previous work are different from ours, because these work only dealt with newswire text including a lot of explicit temporal expressions. Tsuchiya et al. (2005) pursued a similar goal as ours. They manually prepared a dictionary with temporal information. They use the handcrafted dictionary and some inference rules to determine the time periods of events. In contrast, we do not resort to such a hand-crafted material, which requires much labor and cost. Our method automatically acquires temporal information from actual data of people's activities (blog). Henceforth, we can get temporal information associated with your daily life that would be not existed in a dictionary. 3 Corpus In this section, we describe a corpus made from blog entries. The corpus is used for training and test data of machine learning methods mentioned in Section 4. The blog entries we used are collected by the method of Nanno et al. (2004). All the entries are written in Japanese. All the entries are split into sentences automatically by some heuristic rules. In the next section, we are going to explain “time-slot” tag added at every sentence. 3.1 Time-Slot Tag The “time-slot” tag represents when an event occurs in five classes; “morning”, “daytime”, “evening”, “night”, and “time-unknown”. “Timeunknown” means that there is no temporal information. We set the criteria of time-slot tags as follows. 
Morning: 04:00--10:59, from early morning till before noon; breakfast.
Daytime: 11:00--15:59, from noon till before dusk; lunch.
Evening: 16:00--17:59, from dusk till before sunset.
Night: 18:00--03:59, from sunset till dawn; dinner.
Note that the above criteria are only rough standards; we consider the time-slot recognized by the author to be more important. For example, given the expression "about 3 o'clock this morning", we judge the case as "morning" (not "night") because the author wrote "this morning". To annotate the sentences in text, we used two different clues. One is the explicit temporal expressions or time-associated words included in the sentence to be judged; the other is the contextual information around the sentence to be judged. Examples corresponding to the former case are as follows:
Example 1
a. I went to the post office by bicycle in the morning.
b. I had spaghetti at a restaurant at noon.
c. I cooked stew as dinner on that day.
Suppose that the two sentences in Example 2 appear successively in a document.
Example 2
1. I went to X by bicycle in the morning.
2. I went to a shop on the way back from X.
In this case, we first judge the first sentence as morning. Next, we judge the second sentence as morning by contextual information (i.e., the preceding sentence is judged as morning), although we cannot know the time period from the content of the second sentence itself.
3.2 Corpus Statistics
We manually annotated the corpus. The number of blog entries is 7,413 and the number of sentences is 70,775; of these, 14,220 sentences represent events. (Since the aim of this study is the time-slot classification of events, we treat only sentences expressing an event.) The frequency distribution of time-slot tags is shown in Table 1. The table shows that the number of time-unknown sentences is much larger than that of the other classes. This bias would affect our classification process; therefore, we propose a method for tackling the problem.
Table 1: The numbers of time-slot tags.
morning 711
daytime 599
evening 207
night 1,035
time-unknown 11,668
Total 14,220
4 Proposed Method
4.1 Basic Idea
Suppose, for example, that "breakfast" is a strong clue for the morning class, i.e., the word is a time-associated word of morning. We can thereby classify the sentence "I have cereal for breakfast." into the morning class, and "cereal" will in turn become a time-associated word of morning. Therefore we can use "cereal" as a clue for time-slot classification.
By iterating this process, we can obtain a large number of time-associated words by bootstrapping, improving sentence classification performance at the same time. To realize this bootstrapping, we use the EM algorithm, which has a theoretical basis in maximizing the likelihood of incomplete data and can enhance supervised learning methods. We specifically adopted the combination of the Naïve Bayes classifier and the EM algorithm, which has been proven effective for text classification (Nigam et al., 2000).
4.2 Naïve Bayes Classifier
In this section, we describe the multinomial model, a kind of Naïve Bayes classifier. The generative probability of an example x given a category c has the form
P(x|c; θ) = P(|x|) ∏_w P(w|c)^{N(w,x)} / N(w,x)!   (1)
where P(|x|) denotes the probability that a sentence of length |x| occurs and N(w,x) denotes the number of occurrences of word w in text x. The occurrence of a sentence is modeled as a set of trials in which a word is drawn from the whole vocabulary. In time-slot classification, x corresponds to a sentence and c to one of the time-slots in {morning, daytime, evening, night}. The features are the words in the sentence; a detailed description of the features is given in Section 4.5.
4.3 Incorporation of Unlabeled Data with the EM Algorithm
The EM algorithm (Dempster et al., 1977) is a method for estimating a model that maximizes the likelihood of the data when some variables cannot be observed (latent variables). Nigam et al. (2000) proposed a combination of the Naïve Bayes classifier and the EM algorithm. Ignoring the factors of Eq. (1) that do not depend on the category, we obtain
P(x|c; θ) ∝ ∏_w P(w|c)^{N(w,x)}   (2)
P(x|θ) ∝ ∑_c P(c|θ) ∏_w P(w|c)^{N(w,x)}   (3)
where θ denotes the model parameters. If we regard c as a latent variable and introduce a Dirichlet distribution as the prior over the parameters, the Q-function (i.e., the expected log-likelihood) of this model is defined as
Q(θ) = log P(θ) + ∑_{x∈D} ∑_c P(c|x; θ) log ( P(c|θ) ∏_w P(w|c)^{N(w,x)} )   (4)
where P(θ) ∝ ∏_c P(c|θ)^{α−1} ∏_{c,w} P(w|c)^{α−1}, α is a user-given parameter, and D is the set of examples used for model estimation. From this Q-function we obtain the following EM equations.
E-step:
P(c|x; θ) = P(c|θ) P(x|c; θ) / ∑_{c'} P(c'|θ) P(x|c'; θ)   (5)
M-step:
P(c|θ) = ( (α − 1) + ∑_{x∈D} P(c|x; θ) ) / ( (α − 1)C + |D| )   (6)
P(w|c; θ) = ( (α − 1) + ∑_{x∈D} N(w,x) P(c|x; θ) ) / ( (α − 1)W + ∑_{w'} ∑_{x∈D} N(w',x) P(c|x; θ) )   (7)
where C denotes the number of categories and W the number of distinct features. For a labeled example x, Eq. (5) is not used; instead, P(c|x; θ) is set to 1.0 if c is the category of x and to 0 otherwise.
Instead of the usual EM algorithm, we use the tempered EM algorithm (Hofmann, 2001), which allows us to control the complexity of the model. We realize it by substituting the following equation for Eq. (5) in the E-step:
P(c|x; θ) = { P(c|θ) P(x|c; θ) }^β / ∑_{c'} { P(c'|θ) P(x|c'; θ) }^β   (8)
where β is a positive hyper-parameter that controls the complexity of the model. By decreasing β, we can reduce the influence of intermediate classification results when those results are unreliable.
Too much influence from the unlabeled data sometimes deteriorates the model estimation. Therefore, we introduce a further hyper-parameter λ (0 ≤ λ ≤ 1) which acts as a weight on the unlabeled data, and replace the second term on the right-hand side of Eq. (4) with
∑_{x∈D_l} ∑_c P(c|x; θ) log ( P(c|θ) ∏_w P(w|c)^{N(w,x)} ) + λ ∑_{x∈D_u} ∑_c P(c|x; θ) log ( P(c|θ) ∏_w P(w|c)^{N(w,x)} )
where D_l denotes the labeled data and D_u the unlabeled data. We can reduce the influence of the unlabeled data by decreasing λ. We derived new update rules from this new Q-function. The EM computation stops when the difference between successive values of the Q-function falls below a threshold.
4.4 Class Imbalance Problem
We have two problems with respect to the "time-unknown" tag. The first is the class imbalance problem (Japkowicz, 2000). The number of time-unknown time-slot sentences is much larger than that of the other sentences, as shown in Table 1.
There are more than ten times as many time-unknown sentences as sentences of the other classes. Second, there are no time-associated words in the sentences categorized as "time-unknown", so the feature distribution of time-unknown sentences is remarkably different from that of the others. Both properties can be expected to adversely affect the proposed method. Several methodologies have been proposed to solve the class imbalance problem, such as Zhang and Mani (2003), Fan et al. (1999) and Abe et al. (2004). In our case, however, we have to resolve the latter problem in addition to the class imbalance problem. To deal with both problems simultaneously and precisely, we develop a cascaded classification procedure.
[Figure 1: The flow of the 2-step classification. Step 1 is the SVM time-unknown filter, which separates time-unknown sentences from the rest; Step 2 is the NB+EM time-slot classifier, which assigns morning, daytime, evening, or night.]
4.5 Time-Slot Classification Method
It is desirable that the NB+EM process handle only "time-known" sentences, to avoid the above-mentioned problems. For that purpose we prepare another classifier that filters out time-unknown sentences before the NB+EM process. Thus, we propose a classification method with two steps (Method A). The flow of the 2-step classification is shown in Figure 1; ovals represent classifiers and arrows represent the flow of data. The first classifier (hereafter the "time-unknown" filter) classifies sentences into two classes, "time-unknown" and "time-known". The "time-known" class is a coarse class consisting of the four time-slots (morning, daytime, evening, and night). We use Support Vector Machines as the classifier; the features are all the words included in the sentence to be classified. The second classifier (the time-slot classifier) classifies "time-known" sentences into the four classes. We use the Naïve Bayes classifier backed up with the Expectation Maximization (EM) algorithm described in Section 4.3. The features for the time-slot classifier are the words whose part of speech is noun or verb; this feature set is called NORMAL in the rest of this paper. In addition, we use information from the previous and the following sentences in the blog entry: the words included in those sentences are also used as features, and this feature set is called CONTEXT. The features in CONTEXT should be effective for estimating the time-slot of a sentence, as discussed for Example 2 in Section 3.1. We also use a simple classifier (Method B) for comparison. Method B classifies sentences of all time-slots (morning through night, plus time-unknown) in just one step, again using the Naïve Bayes classifier backed up with the EM algorithm. The features are the words (whose part of speech is noun or verb) included in the sentence to be classified.
5 Experimental Results and Discussion
5.1 Time-Slot Classifier with Time-Associated Words
5.1.1 Time-Unknown Filter
We used 11,668 positive (time-unknown) samples and 2,552 negative (morning through night) samples. We conducted a classification experiment with Support Vector Machines using 10-fold cross validation, with the TinySVM2 software package as the implementation. The soft margin parameter was estimated automatically by 10-fold cross validation on the training data. The result is shown in Table 2, which shows that the "time-unknown" filter achieved good performance, with an F-measure of 0.899.
In addition, since we obtained a high recall (0.969), many of the noisy sentences will be filtered out at this step and the classifier of the second step is likely to perform well. Accuracy 0.878 Precision 0.838 Recall 0.969 F-measure 0.899 Table 2: Classification result of the time-unknown filter. 5.1.2 Time-Slot Classification In step 2, we used “time-known” sentences classified by the unknown filter as test data. We conducted a classification experiment by Naïve Bayes classifier + the EM algorithm with 10-fold cross validation. For unlabeled data, we used 64,782 sentences, which have no intersection with the labeled data. The parameters, λ and β , are automatically estimated by 10-fold cross validation with training data. The result is shown in Table 3. Accuracy Method NORMAL CONTEXT Explicit 0.109 Baseline 0.406 NB 0.567 0.464 NB + EM 0.673 0.670 Table 3: The result of time-slot classifier. 2 http://www.chasen.org/~taku/software/TinySVM 1157 Table 4: Confusion matrix of output. morning daytime evening night rank word p(c|w) word p(c|w) word p(c|w) word p(c|w) 1 this morning 0.729 noon 0.728 evening 0.750 last night 0.702 2 morning 0.673 early after noon 0.674 sunset 0.557 night 0.689 3 breakfast 0.659 afternoon 0.667 academy 0.448 fireworks 0.688 4 early morning 0.656 daytime 0.655 dusk 0.430 dinner 0.684 5 before noon 0.617 lunch 0.653 Hills 0.429 go to bed 0.664 6 compacted snow 0.603 lunch 0.636 run on 0.429 night 0.641 7 commute 0.561 lunch break 0.629 directions 0.429 bow 0.634 8 --- 0.541 lunch 0.607 pinecone 0.429 overtime 0.606 9 parade 0.540 noon 0.567 priest 0.428 year-end party 0.603 10 wake up 0.520 butterfly 0.558 sand beach 0.428 dinner 0.574 11 leave harbor 0.504 Chinese food 0.554 --- 0.413 beach 0.572 12 rise late 0.504 forenoon 0.541 Omori 0.413 cocktail 0.570 13 cargo work 0.504 breast-feeding 0.536 fan 0.413 me 0.562 14 alarm clock 0.497 nap 0.521 Haneda 0.412 Tomoyuki 0.560 15 --- 0.494 diaper 0.511 preview 0.402 return home 0.557 16 sunglow 0.490 Japanese food 0.502 cloud 0.396 close 0.555 17 wheel 0.479 star festival 0.502 Dominus 0.392 stay up late 0.551 18 wake up 0.477 hot noodle 0.502 slip 0.392 tonight 0.549 19 perm 0.474 pharmacy 0.477 tasting 0.391 night 0.534 20 morning paper 0.470 noodle 0.476 nest 0.386 every night 0.521 Table 5: Time-associated words examples. In Table 3, “Explicit” indicates the result by a simple classifier based on regular expressions 3 including explicit temporal expressions. The baseline method classifies all sentences into night because the number of night sentences is the largest. The “CONTEXT” column shows the results obtained by classifiers learned with the features in CONTEXT in addition to the features 3 For example, we classify sentences matching following regular expressions into morning class: [(gozen)(gozen-no)(asa) (asa-no)(am)(AM)(amno)(AM-no)][456789(10)] ji, [(04)(05)(06)(07)(08) (09)]ji, [(04)(05)(06)(07) (08) (09)]:[0-9]{2,2}, [456789(10)][(am)(AM)]. (“gozen”, “gozen‐no” means before noon. “asa”, “asa-no” means morning. “ji” means o’clock.) in NORMAL. The accuracy of the Explicit method is lower than the baseline. This means existing methods based on explicit temporal expressions cannot work well in blog text. The accuracy of the method 'NB' exceeds that of the baseline by 16%. Furthermore, the accuracy of the proposed method 'NB+EM' exceeds that of the 'NB' by 11%. Thus, we figure out that using unlabeled data improves the performance of our time-slot classification. 
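To make the NB+EM scheme behind these numbers concrete, here is a minimal sketch of semi-supervised multinomial Naïve Bayes with a tempered E-step (β) and a weight λ on the unlabeled data, in the spirit of Eqs. (5)–(8). It uses simple additive smoothing in place of the Dirichlet prior, and all function and variable names are our own; it is an illustration, not the authors' implementation.

```python
import numpy as np

def train_semi_nb(Xl, yl, Xu, n_classes, alpha=1.0, beta=1.0, lam=1.0, n_iter=30):
    """Semi-supervised multinomial Naive Bayes trained with (tempered) EM.
    Xl: (Nl, V) term counts of labeled sentences, yl: (Nl,) class indices,
    Xu: (Nu, V) term counts of unlabeled sentences."""
    V = Xl.shape[1]
    Rl = np.zeros((Xl.shape[0], n_classes))
    Rl[np.arange(Xl.shape[0]), yl] = 1.0           # labeled posteriors stay one-hot (Eq. 5 not used)
    X = np.vstack([Xl, Xu])

    def m_step(Ru):
        R = np.vstack([Rl, lam * Ru])              # lam down-weights the unlabeled counts
        class_count = R.sum(axis=0)
        log_prior = np.log(class_count + alpha) - np.log(class_count.sum() + alpha * n_classes)
        word_count = R.T @ X                       # (C, V) expected word counts per class
        log_condp = np.log(word_count + alpha) - np.log(
            word_count.sum(axis=1, keepdims=True) + alpha * V)
        return log_prior, log_condp

    def e_step(Xdocs, log_prior, log_condp):
        joint = beta * (Xdocs @ log_condp.T + log_prior)   # tempered E-step (cf. Eq. 8)
        joint -= joint.max(axis=1, keepdims=True)
        post = np.exp(joint)
        return post / post.sum(axis=1, keepdims=True)

    log_prior, log_condp = m_step(np.zeros((Xu.shape[0], n_classes)))  # init from labeled data
    for _ in range(n_iter):
        Ru = e_step(Xu, log_prior, log_condp)      # E-step on the unlabeled sentences
        log_prior, log_condp = m_step(Ru)          # M-step re-estimates the parameters
    return log_prior, log_condp

def predict_time_slot(Xdocs, log_prior, log_condp):
    return np.argmax(Xdocs @ log_condp.T + log_prior, axis=1)
```

Keeping the labeled posteriors fixed at one-hot vectors mirrors the treatment of labeled examples in Section 4.3, and setting lam=0 recovers a purely supervised NB baseline trained on the labeled data alone.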
In this experiment, unfortunately, CONTEXT only deteriorated the accuracy. The time-slot tags of the sentences preceding or following the target sentence may still provide information to improve the accuracy. Thus, we tried a sequential tagging method for sentences, in which tags are output of time-slot classifier morning daytime evening night time-unknown sum morning 332 14 1 37 327 711 daytime 30 212 1 44 312 599 evening 4 5 70 18 110 207 night 21 19 4 382 609 1035 time-slot tag time-unknown 85 66 13 203 11301 11668 sum 472 316 89 684 12659 14220 1158 predicted in the order of their occurrence. The predicted tags are used as features in the prediction of the next tag. This type of sequential tagging method regard as a chunking procedure (Kudo and Matsumoto, 2000) at sentence level. We conducted time-slot (five classes) classification experiment, and tried forward tagging and backward tagging, with several window sizes. We used YamCha4, the multi-purpose text chunker using Support Vector Machines, as an experimental tool. However, any tagging direction and window sizes did not improve the performance of classification. Although a chunking method has possibility of correctly classifying a sequence of text units, it can be adversely biased by the preceding or the following tag. The sentences in blog used in our experiments would not have a very clear tendency in order of tags. This is why the chunking-method failed to improve the performance in this task. We would like to try other bias-free methods such as Conditional Random Fields (Lafferty et al., 2001) for future work. 5.1.3 2-step Classification Finally, we show an accuracy of the 2-step classifier (Method A) and compare it with those of other classifiers in Table 6. The accuracies are calculated with the equation: . In Table 6, the baseline method classifies all sentences into time-unknown because the number of time-unknown sentences is the largest. Accuracy of Method A (proposed method) is higher than that of Method B (4.1% over). These results show that time-unknown sentences adversely affect the classifier learning, and 2-step classification is an effective method. Table 4 shows the confusion matrix corresponding to the Method A (NORMAL). From this table, we can see Method A works well for classification of morning, daytime, evening, and night, but has some difficulty in 4 http://www.chasen.org/~taku/software/YamCha Table 6: Comparison of the methods for five class classification Figure 2: Change of # sentences that have timeassociated words: “Explicit” indicates the number of sentences including explicit temporal expressions, “NE-TIME” indicates the number of sentences including NE-TIME tag. classification of time-unknown. The 11.7% of samples were wrongly classified into “night” or “unknown”. We briefly describe an error analysis. We found that our classifier tends to wrongly classify samples in which two or more events are written in a sentence. The followings are examples: Example 3 a. I attended a party last night, and I got back on the first train in this morning because the party was running over. b. I bought a cake this morning, and ate it after the dinner. 5.2 Examples of Time-Associated Words Table 5 shows some time-associated words obtained by the proposed method. The words are sorted in the descending order of the value of ( ) w c P | . Although some consist of two or three words, their original forms in Japanese consist of one word. There are some expressions appearing more than once, such as “dinner”. 
Actually these expressions have different forms in Japanese. Meaningless (non-word) strings caused by morMethod Conclusive accuracy Explicit 0.833 Baseline 0.821 Method A (NORMAL) 0.864 Method A (CONTEXT) 0.862 Method B 0.823 0 1000 2000 3000 4000 5000 1 10 20 30 40 50 60 70 80 90 100 # time-associated words (N-best) # sentences including timeassociated words Explicit NE-TIME # time-unknown sentences correctly classified by the time-unknown filter # known sentences correctly classified by the time-slot classifier + # sentences with a time-slot tag value 1159 phological analysis error are presented as the symbol “---”. We obtained a lot of interesting time-associated words, such as “commute (morning)”, “fireworks (night)”, and “cocktail (night)”. Most words obtained are significantly different from explicit temporal expressions and NETIME expressions. Figure 2 shows the number of sentences including time-associated words in blog text. The horizontal axis represents the number of timeassociated words. We sort the words in the descending order of and selected the top N words. The vertical axis represents the number of sentences including any N-best time-associated words. We also show the number of sentences including explicit temporal expressions, and the number of sentences including NE-TIME tag (Sekine and Isahara, 1999) for comparison. The set of explicit temporal expressions was extracted by the method described in Section 5.1.2. We used a Japanese linguistic analyzer “CaboCha ( w c P | ) 5 ” to obtain NE-TIME information. From this graph, we can confirm that the number of target sentences of our proposed method is larger than that of existing methods. 6 Conclusion In our study, we proposed a method for identifying when an event in text occurs. We succeeded in using a semi-supervised method, the Naïve Bayes Classifier enhanced by the EM algorithm, with a small amount of labeled data and a large amount of unlabeled data. In order to avoid the class imbalance problem, we used a 2-step classifier, which first filters out time-unknown sentences and then classifies the remaining sentences into one of 4 classes. The proposed method outperformed the simple 1-step method. We obtained 86.4% of accuracy that exceeds the existing method and the baseline method. References Naoki Abe, Bianca Zadrozny, John Langford. 2004. An Iterative Method for Multi-class Cost-sensitive Learning. In Proc. of the 10th. ACM SIGKDD, pp.3–11. Arthur P. Dempster, Nan M. laird, and Donald B. Rubin. 1977. Maximum likelihood from incomplete data via the EM algorithm. Journal of the 5 http://chasen.org/~taku/software/cabocha/ Royal Statistical Society Series B, Vol. 39, No. 1, pp.1–38. Wei Fan, Salvatore J. Stolfo, Junxin Zhang, Philip K. Chan. 1999. AdaCost: Misclassification Costsensitive Boosting. In Proc. of ICML, pp.97–105. Thomas Hofmann. 2001. Unsupervised learning by probabilistic latent semantic analysis. Machine Learning, 42:177–196. Nathalie Japkowicz. 2000. Learning from Imbalanced Data Sets: A Comparison of Various Strategies. In Proc. of the AAAI Workshop on Learning from Imbalanced Data Sets, pp.10 –15. Taku Kudo, Yuji Matsumoto. 2000. Use of Support Vector Learning for Chunking Identification, In Proc of the 4th CoNLL, pp.142–144. John Lafferty, Andrew McCallum, and Fernando Pereira. 2001. Conditional random fields: Probabilistic models for segmenting and labeling sequence data, In Proc. of ICML, pp.282–289. Inderjeet Mani, George Wilson 2000. Robust Temporal Processing of News. In Proc. 
of the 38th ACL, pp.69–76. Tomoyuki Nanno, Yasuhiro Suzuki, Toshiaki Fujiki, Manabu Okumura. 2004. Automatically Collecting and Monitoring Japanese Weblogs. Journal for Japanese Society for Artificial Intelligence , Vol.19, No.6, pp.511–520. (in Japanese) Kamal Nigam, Andrew McCallum, Sebastian Thrun, and Tom Mitchell. 2000. Text classification from labeled and unlabeled documents using EM. Machine Learning, Vol. 39, No.2/3, pp.103–134. Satoshi Sekine, Hitoshi Isahara. 1999. IREX project overview. Proceedings of the IREX Workshop. Andrea Setzer, Robert Gaizauskas. 2001. A Pilot Study on Annotating Temporal Relations in Text. In Proc. of the ACL-2001 Workshop on Temporal and Spatial Information Processing, Toulose, France, July, pp.88–95. Seiji Tsuchiya, Hirokazu Watabe, Tsukasa Kawaoka. 2005. Evaluation of a Time Judgement Technique Based on an Association Mechanism. IPSG SIG Technical Reports,2005-NL-168, pp.113–118. (in Japanese) Jianping Zhang, Inderjeet Mani. 2003. kNN Approach to Unbalanced Data Distributions: A Case Study involving Information Extraction. In Proc. of ICML Workshop on Learning from Imbalanced Datasets II., pp.42–48. 1160
2006
145
Proceedings of the 21st International Conference on Computational Linguistics and 44th Annual Meeting of the ACL, pages 1161–1168, Sydney, July 2006. c⃝2006 Association for Computational Linguistics Optimal Constituent Alignment with Edge Covers for Semantic Projection Sebastian Padó Computational Linguistics Saarland University Saarbrücken, Germany [email protected] Mirella Lapata School of Informatics University of Edinburgh Edinburgh, UK [email protected] Abstract Given a parallel corpus, semantic projection attempts to transfer semantic role annotations from one language to another, typically by exploiting word alignments. In this paper, we present an improved method for obtaining constituent alignments between parallel sentences to guide the role projection task. Our extensions are twofold: (a) we model constituent alignment as minimum weight edge covers in a bipartite graph, which allows us to find a globally optimal solution efficiently; (b) we propose tree pruning as a promising strategy for reducing alignment noise. Experimental results on an English-German parallel corpus demonstrate improvements over state-of-the-art models. 1 Introduction Recent years have witnessed increased interest in data-driven methods for many natural language processing (NLP) tasks, ranging from part-ofspeech tagging, to parsing, and semantic role labelling. The success of these methods is due partly to the availability of large amounts of training data annotated with rich linguistic information. Unfortunately, such resources are largely absent for almost all languages except English. Given the data requirements for supervised learning, and the current paucity of suitable data for many languages, methods for generating annotations (semi-)automatically are becoming increasingly popular. Annotation projection tackles this problem by leveraging parallel corpora and the high-accuracy tools (e.g., parsers, taggers) available for a few languages. Specifically, through the use of word alignments, annotations are transfered from resource-rich languages onto low density ones. The projection process can be decomposed into three steps: (a) determining the units of projection; these are typically words but can also be chunks or syntactic constituents; (b) inducing alignments between the projection units and projecting annotations along these alignments; (c) reducing the amount of noise in the projected annotations, often due to errors and omissions in the word alignment. The degree to which analyses are parallel across languages is crucial for the success of projection approaches. A number of recent studies rely on this notion of parallelism and demonstrate that annotations can be adequately projected for parts of speech (Yarowsky and Ngai, 2001; Hi and Hwa, 2005), chunks (Yarowsky and Ngai, 2001), and dependencies (Hwa et al., 2002). In previous work (Padó and Lapata, 2005) we considered the annotation projection of semantic roles conveyed by sentential constituents such as AGENT, PATIENT, or INSTRUMENT. Semantic roles exhibit a high degree of parallelism across languages (Boas, 2005) and thus appear amenable to projection. Furthermore, corpora labelled with semantic role information can be used to train shallow semantic parsers (Gildea and Jurafsky, 2002), which could in turn benefit applications in need of broad-coverage semantic analysis. Examples include question answering, information extraction, and notably machine translation. 
Our experiments concentrated primarily on the first projection step, i.e., establishing the right level of linguistic analysis for effecting projection. We showed that projection schemes based on constituent alignments significantly outperform schemes that rely exclusively on word alignments. A local optimisation strategy was used to find constituent alignments, while relying on a simple filtering technique to handle noise. The study described here generalises our earlier semantic role projection framework in two important ways. First, we formalise constituent projection as the search for a minimum weight edge cover in a weighted bipartite graph. This formalisation 1161 efficiently yields constituent alignments that are globally optimal. Second, we propose tree pruning as a general noise reduction strategy, which exploits both structural and linguistic information to enable projection. Furthermore, we quantitatively assess the impact of noise on the task by evaluating both on automatic and manual word alignments. In Section 2, we describe the task of rolesemantic projection and the syntax-based framework introduced in Padó and Lapata (2005). Section 3 explains how semantic role projection can be modelled with minimum weight edge covers in bipartite graphs. Section 4 presents our tree pruning strategy. We present our evaluation framework and results in Section 5. A discussion of related and future work concludes the paper. 2 Cross-lingual Semantic Role projection Semantic role projection is illustrated in Figure 1 using English and German as the source-target language pair. We assume a FrameNet-style semantic analysis (Fillmore et al., 2003). In this paradigm, the semantics of predicates and their arguments are described in terms of frames, conceptual structures which model prototypical situations. The English sentence Kim promised to be on time in Figure 1 is an instance of the COMMITMENT frame. In this particular example, the frame introduces two roles, i.e., SPEAKER (Kim) and MESSAGE (to be on time). Other possible, though unrealised, roles are ADDRESSEE, MESSAGE, and TOPIC. The COMMITMENT frame can be introduced by promise and several other verbs and nouns such as consent or threat. We also assume that frame-semantic annotations can be obtained reliably through shallow semantic parsing.1 Following the assignment of semantic roles on the English side, (imperfect) word alignments are used to infer semantic alignments between constituents (e.g., to be on time is aligned with pünktlich zu kommen), and the role labels are transferred from one language to the other. Note that role projection can only take place if the source predicate (here promised) is word-aligned to a target predicate (here versprach) evoking the same frame; if this is not the case (e.g., in metaphors), projected roles will not be generally appropriate. We represent the source and target sentences as sets of linguistic units, Us and Ut, respectively. 1See Carreras and Màrquez (2005) for an overview of recent approaches to semantic parsing. Kim versprach, pünktlich zu kommen Kim promised to be on time S S NP NP Commitment Message Speaker Commitment Speaker Message Figure 1: Projection of semantic roles from English to German (word alignments as dotted lines) The assignment of semantic roles on the source side is a function roles : R →2Us from roles to sets of source units. Constituent alignments are obtained in two steps. 
First, a real-valued function sim : Us ×Ut →R estimates pairwise similarities between source and target units. To make our model robust to alignment noise, we use only content words to compute the similarity function. Next, a decision procedure uses the similarity function to determine the set of semantically equivalent, i.e., aligned units A ⊆Us ×Ut. Once A is known, semantic projection reduces to transferring the semantic roles from the source units onto their aligned target counterparts: rolet(r) = {ut |∃us ∈roles(r) : (us,ut) ∈A} In Padó and Lapata (2005), we evaluated two main parameters within this framework: (a) the choice of linguistic units and (b) methods for computing semantic alignments. Our results revealed that constituent-based models outperformed wordbased ones by a wide margin (0.65 Fscore vs. 0.46), thus demonstrating the importance of bracketing in amending errors and omissions in the automatic word alignment. We also compared two simplistic alignment schemes, backward alignment and forward alignment. The first scheme aligns each target constituent to its most similar source constituent, whereas the second (A f ) aligns each source constituent to its most similar target constituent: A f = {(us,ut)|ut = argmax u′t∈Ut sim(us,u′ t)} 1162 An example constituent alignment obtained from the forward scheme is shown in Figure 2 (left side). The nodes represent constituents in the source and target language and the edges indicate the resulting alignment. Forward alignment generally outperformed backward alignment (0.65 Fscore vs. 0.45). Both procedures have a time complexity quadratic in the maximal number of sentence nodes: O(|Us||Ut|) = O(max(|Us|,|Ut|)2). A shortcoming common to both decision procedures is that they are local, i.e., they optimise the alignment for each node independently of all other nodes. Consider again Figure 2. Here, the forward procedure creates alignments for all source nodes, but leaves constituents from the target set unaligned (see target node (1)). Moreover, local alignment methods constitute a rather weak model of semantic equivalence since they allow one target node to correspond to any number of source nodes (see target node (3) in Figure 2, which is aligned to three source nodes). In fact, by allowing any alignment between constituents, the local models can disregard important linguistic information, thus potentially leading to suboptimal results. We investigate this possibility by proposing well-understood global optimisation models which suitably constrain the resulting alignments. Besides matching constituents reliably, poor word alignments are a major stumbling block for achieving accurate projections. Previous research addresses this problem in a post-processing step, by reestimating parameter values (Yarowsky and Ngai, 2001), by applying transformation rules (Hwa et al., 2002), by using manually labelled data (Hi and Hwa, 2005), or by relying on linguistic criteria (Padó and Lapata, 2005). In this paper, we present a novel filtering technique based on tree pruning which removes extraneous constituents in a preprocessing stage, thereby disassociating filtering from the alignment computation. In the remainder of this paper, we present the details of our global optimisation and filtering techniques. We only consider constituent-based models, since these obtained the best performance in our previous study (Padó and Lapata, 2005). 
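As a concrete rendering of the forward alignment scheme and the role-transfer step recalled above, the short sketch below spells both out in code. Constituents are treated as opaque (hashable) objects, sim stands for any pairwise similarity function, and the helper names are ours.

```python
def forward_align(source_units, target_units, sim):
    """Local forward alignment A_f: link every source constituent to its
    most similar target constituent (assumes target_units is non-empty)."""
    return {(us, max(target_units, key=lambda ut: sim(us, ut)))
            for us in source_units}

def project_roles(source_roles, alignment):
    """Transfer roles along the alignment:
    role_t(r) = {u_t | there is u_s in role_s(r) with (u_s, u_t) in A}."""
    return {role: {ut for (us, ut) in alignment if us in units}
            for role, units in source_roles.items()}
```

Because each source constituent receives exactly one link, target constituents can remain unaligned or attract several source constituents, which is precisely the weakness of local alignment noted above.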
3 Globally optimal constituent alignment We model constituent alignment as a minimum weight bipartite edge cover problem. A bipartite graph is a graph G = (V,E) whose node set V is partitioned into two nonempty sets V1 and V2 in such a way that every edge E joins a node in V1 to a node in V2. In a weighted bipartite graph a weight is assigned to each edge. An edge cover is a subgraph of a bipartite graph so that each node is linked to at least one node of the other partition. A minimum weight edge cover is an edge cover with the least possible sum of edge weights. In our projection application, the two partitions are the sets of source and target sentence constituents, Us and Ut, respectively. Each source node is connected to all target nodes and each target node to all source nodes; these edges can be thought of as potential constituent alignments. The edge weights, which represent the (dis)similarity between nodes us and ut are set to 1−sim(us,ut).2 The minimum weight edge cover then represents the alignment with the maximal similarity between source and target constituents. Below, we present details on graph edge covers and a more restricted kind, minimum weight perfect bipartite matchings. We also discuss their computation. Edge covers Given a bipartite graph G, a minimum weight edge cover Ae can be defined as: Ae = argmin Edge cover E ∑ (us,ut)∈E 1−sim(us,ut) An example edge cover is illustrated in Figure 2 (middle). Edge covers are somewhat more constrained compared to the local model described above: all source and target nodes have to take part in some alignment. We argue that this is desirable in modelling constituent alignment, since important linguistic units will not be ignored. As can be seen, edge covers allow one-to-many alignments which are common when translating from one language to another. For example, an English constituent might be split into several German constituents or alternatively two English constituents might be merged into a single German constituent. In Figure 2, the source nodes (3) and (4) correspond to target node (4). Since each node of either side has to participate in at least one alignment, edge covers cannot account for insertions arising when constituents in the source language have no counterpart in their target language, or vice versa, as is the case for deletions. Weighted perfect bipartite matchings Perfect bipartite matchings are a more constrained version of edge covers, in which each node has exactly one adjacent edge. This restricts constituent 2The choice of similarity function is discussed in Section 5. 1163 2 3 4 5 6 1 2 3 4 1 Us Ut r1 r2 r2 r1 r2 2 3 4 5 6 1 2 3 4 1 Us Ut r1 r2 r2 r1 r2 2 3 4 5 6 1 2 3 4 1 Us Ut r1 r2 r2 r1 r2 d d Figure 2: Constituent alignments and role projections resulting from different decision procedures (Us,Ut: sets of source and target constituents; r1,r2: two semantic roles). Left: local forward alignment; middle: edge cover; right: perfect matching with dummy nodes alignment to a bijective function: each source constituent is linked to exactly one target constituent, and vice versa. Analogously, a minimum weight perfect bipartite matching Am is a minimum weight edge cover obeying the one-to-one constraint: Am = argmin Matching M ∑ (us,ut)∈M 1−sim(us,ut) An example of a perfect bipartite matching is given in Figure 2 (right), where each node has exactly one adjacent edge. Note that the target side contains two nodes labelled (d), a shorthand for “dummy” node. 
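Dummy nodes, discussed next, are what make this perfect matching applicable to sentence pairs of unequal length. The sketch below pads the smaller partition with dummy nodes of similarity zero and solves the resulting assignment problem; SciPy's linear_sum_assignment is used here purely as a stand-in for the Jonker and Volgenant (1987) solver employed in the paper, and the function name is our own.

```python
import numpy as np
from scipy.optimize import linear_sum_assignment

def min_weight_perfect_matching(source_units, target_units, sim):
    """Minimum weight perfect bipartite matching A_m over constituent lists,
    with edge weights 1 - sim(u_s, u_t); padded (dummy) rows and columns keep
    weight 1 (similarity 0) and their links are dropped from the result."""
    n = max(len(source_units), len(target_units))
    cost = np.ones((n, n))                      # dummy pairs: 1 - 0 = 1
    for i, us in enumerate(source_units):
        for j, ut in enumerate(target_units):
            cost[i, j] = 1.0 - sim(us, ut)
    rows, cols = linear_sum_assignment(cost)    # globally optimal assignment
    return {(source_units[i], target_units[j])
            for i, j in zip(rows, cols)
            if i < len(source_units) and j < len(target_units)}
```

Dropping the links that involve a padded index corresponds to ignoring alignments to dummy nodes during projection.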
Since sentence pairs will often differ in length, the resulting graph partitions will have different sizes as well. In such cases, dummy nodes are introduced in the smaller partition to enable perfect matching. Dummy nodes are assigned a similarity of zero with all other nodes. Alignments to dummy nodes (such as for source nodes (3) and (6)) are ignored during projection. Perfect matchings are more restrictive models of constituent alignment than edge covers. Being bijective, the resulting alignments cannot model splitting or merging operations at all. Insertions and deletions can be modelled only indirectly by aligning nodes in the larger partition to dummy nodes on the other side (see the source side in Figure 2 where nodes (3) and (6) are aligned to (d)). Section 5 assesses if these modelling limitations impact the quality of the resulting alignments. Algorithms Minimum weight perfect matchings in bipartite graphs can be computed efficiently in cubic time using algorithms for network optimisation (Fredman and Tarjan, 1987; time O(|Us|2 log|Us|+|Us|2|Ut|)) or algorithms for the equivalent linear assignment problem (Jonker and Volgenant, 1987; time O(max(|Us|,|Ut|)3)). Their complexity is a linear factor slower than the quadratic runtime of the local optimisation methods presented in Section 2. The computation of (general) edge covers has been investigated by Eiter and Mannila (1997) in the context of distance metrics for point sets. They show that edge covers can be reduced to minimum weight perfect matchings of an auxiliary bipartite graph with two partitions of size |Us| + |Ut|. This allows the computation of general minimum weight edge covers in time O((|Us|+|Ut|)3). 4 Filtering via Tree Pruning We introduce two filtering techniques which effectively remove constituents from source and target trees before alignment takes place. Tree pruning as a preprocessing step is more general and more efficient than our original post-processing filter (Padó and Lapata, 2005) which was embedded into the similarity function. Not only does tree pruning not interfere with the similarity function but also reduces the size of the graph, thus speeding up the algorithms discussed in the previous section. We present two instantiations of tree pruning: word-based filtering, which subsumes our earlier method, and argument-based filtering, which eliminates unlikely argument candidates. Word-based filtering This technique removes terminal nodes from parse trees according to certain linguistic or alignment-based criteria. We apply two word-based filters in our experiments. The first removes non-content words, i.e., all words which are not adjectives, adverbs, verbs, or nouns, from the source and target sen1164 Kim versprach, pünktlich zu kommen. VP S VP S Figure 3: Filtering of unlikely arguments (predicate in boldface, potential arguments in boxes). tences (Padó and Lapata, 2005). We also use a novel filter which removes all words which remain unaligned in the automatic word alignment. Nonterminal nodes whose terminals are removed by these filters, are also pruned. Argument filtering Previous work in shallow semantic parsing has demonstrated that not all nodes in a tree are equally probable as semantic roles for a given predicate (Xue and Palmer, 2004). In fact, assuming a perfect parse, there is a “set of likely arguments”, to which almost all semantic roles roles should be assigned to. 
This set of likely arguments consists of all constituents which are a child of some ancestor of the predicate, provided that (a) they do not dominate the predicate themselves and (b) there is no sentence boundary between a constituent and its predicate. This definition covers long-distance dependencies such as control constructions for verbs, or support constructions for nouns and adjectives, and can be extended slightly to accommodate coordination. This argument-based filter reduces target trees to a set of likely arguments. In the example in Figure 3, all tree nodes are removed except Kim and pünktlich zu kommen. 5 Evaluation Set-up Data For evaluation, we used the parallel corpus3 from our earlier work (Padó and Lapata, 2005). It consists of 1,000 English-German sentence pairs from the Europarl corpus (Koehn, 2005). The sentences were automatically parsed (using Collin’s 1997 parser for English and Dubey’s 2005 parser for German), and manually annotated with FrameNet-like semantic roles (see Padó and Lapata 2005 for details.) Word alignments were computed with the GIZA++ toolkit (Och and Ney, 2003), using the 3The corpus can be downloaded from http://www. coli.uni-saarland.de/~pado/projection/. entire English-German Europarl bitext as training data (20M words). We used the GIZA++ default settings to induce alignments for both directions (source-target, target-source). Following common practise in MT (Koehn et al., 2003), we considered only their intersection (bidirectional alignments are known to exhibit high precision). We also produced manual word alignments for all sentences in our corpus, using the GIZA++ alignments as a starting point and following the Blinker annotation guidelines (Melamed, 1998). Method and parameter choice The constituent alignment models we present are unsupervised in that they do not require labelled data for inferring correct alignments. Nevertheless, our models have three parameters: (a) the similarity measure for identifying semantically equivalent constituents; (b) the filtering procedure for removing noise in the data (e.g., wrong alignments); and (c) the decision procedure for projection. We retained the similarity measure introduced in Padó and Lapata (2005) which computes the overlap between a source constituent and its candidate projection, in both directions. Let y(cs) and y(ct) denote the yield of a source and target constituent, respectively, and al(T) the union of all word alignments for a token set T: sim(cs,ct) = |y(ct)∩al(y(cs))| |y(cs)| |y(cs)∩al(y(ct))| |y(ct)| We examined three filtering procedures (see Section 4): removing non-aligned words (NA), removing non-content words (NC), and removing unlikely arguments (Arg). These were combined with three decision procedures: local forward alignment (Forward), perfect matching (PerfMatch), and edge cover matching (EdgeCover) (see Section 3). We used Jonker and Volgenant’s (1987) solver4 to compute weighted perfect matchings. In order to find optimal parameter settings for our models, we split our corpus randomly into a development and test set (both 50% of the data) and examined the parameter space exhaustively on the development set. The performance of the best models was then assessed on the test data. The models had to predict semantic roles for German, using English gold standard roles as input, and were evaluated against German gold standard 4The software is available from http://www. magiclogic.com/assignment.html. 
1165 Model Prec Rec F-score WordBL 45.6 44.8 45.1 Forward 66.0 56.5 60.9 PerfMatch 71.7 54.7 62.1 No Filter EdgeCover 65.6 57.3 61.2 UpperBnd 85.0 84.0 84.0 Model Prec Rec F-score WordBL 45.6 44.8 45.1 Forward 74.1 56.1 63.9 PerfMatch 73.3 62.1 67.2 NA Filter EdgeCover 70.5 62.9 66.5 UpperBnd 85.0 84.0 84.0 Model Prec Rec F-score WordBL 45.6 44.8 45.1 Forward 64.3 47.8 54.8 PerfMatch 73.1 56.9 64.0 NC Filter EdgeCover 67.5 57.0 61.8 UpperBnd 85.0 84.0 84.0 Model Prec Rec F-score WordBL 45.6 44.8 45.1 Forward 69.9 60.7 65.0 PerfMatch 80.4 48.1 60.2 Arg Filter EdgeCover 69.6 60.6 64.8 UpperBnd 85.0 84.0 84.0 Table 1: Model comparison using intersective alignments (development set) roles. To gauge the extent to which alignment errors are harmful, we present results both on intersective and manual alignments. Upper bound and baseline In Padó and Lapata (2005), we assessed the feasibility of semantic role projection by measuring how well annotators agreed on identifying roles and their spans. We obtained an inter-annotator agreement of 0.84 (F-score), which can serve as an upper bound for the projection task. As a baseline, we use a simple word-based model (WordBL) from the same study. The units of this model are words, and the span of a projected role is the union of all target terminals aligned to a terminal of the source role. 6 Results Development set Our results on the development set are summarised in Table 1. We show how performance varies for each model according to different filtering procedures when automatically produced word alignments are used. No filtering is applied to the baseline model (WordBL). Without filtering, local and global models yield comparable performance. Models based on perfect bipartite matchings (PerfMatch) and edge covers (EdgeCover) obtain slight F-score improvements over the forward alignment model (Forward). It is worth noticing that PerfMatch yields a significantly higher precision (using a χ2 test, p < 0.01) than Forward and EdgeCover. This indicates that, even without filtering, PerfMatch delivers rather accurate projections, however with low recall. Model performance seems to increase with tree pruning. When non-aligned words are removed (Table 1, NA Filter), PerfMatch and EdgeCover reach an F-score of 67.2 and 66.5, respectively. This is an increase of approximately 3% over the local Forward model. Although the latter model yields high precision (74.1%), its recall is significantly lower than PerfMatch and EdgeCover (p < 0.01). This demonstrates the usefulness of filtering for the more constrained global models which as discussed in Section 3 can only represent a limited set of alignment possibilities. The non-content words filter (NC filter) yields smaller improvements. In fact, for the Forward model, results are worse than applying no filtering at all. We conjecture that NC is an overly aggressive filter which removes projection-critical words. This is supported by the relatively low recall values. In comparison to NA, recall drops by 8.3% for Forward and by almost 6% for PerfMatch and EdgeCover. Nevertheless, both PerfMatch and EdgeCover outperform the local Forward model. PerfMatch is the best performing model reaching an F-score of 64.0%. We now consider how the models behave when the argument-based filter is applied (Arg, Table 1, bottom). As can be seen, the local model benefits most from this filter, whereas PerfMatch is worst affected; it obtains its highest precision (80.4%) as well as its lowest recall (48.1%). 
This is somewhat expected since the filter removes the majority of nodes in the target partition causing a proliferation of dummy nodes. The resulting edge covers are relatively “unnatural”, thus counterbalancing the advantages of global optimisation. To summarise, we find on the development set that PerfMatch in the NA Filter condition obtains the best performance (F-score 67.2%), followed closely by EdgeCover (F-score 66.5%) in the same 1166 Model Prec Rec F-score WordBL 45.7 45.0 43.3 Forward (Arg) 72.4 63.2 67.5 PerfMatch (NA) 75.7 63.7 69.2 EdgeCover (NA) 73.0 64.9 68.7 Intersective UpperBnd 85.0 84.0 84.0 Model Prec Rec F-score WordBL 62.1 60.7 61.4 Forward (Arg) 72.2 68.6 70.4 PerfMatch (NA) 75.7 67.5 71.4 EdgeCover (NA) 71.9 69.3 70.6 Manual UpperBnd 85.0 84.0 84.0 Table 2: Model comparison using intersective and manual alignments (test set) condition. In general, PerfMatch seems less sensitive to the type of filtering used; it yields best results in three out of four filtering conditions (see boldface figures in Table 1). Our results further indicate that Arg boosts the performance of the local model by guiding it towards linguistically appropriate alignments.5 A comparative analysis of the output of PerfMatch and EdgeCover revealed that the two models make similar errors (85% overlap). Disagreements, however, arise with regard to misparses. Consider as an example the sentence pair: The Charter is [NP an opportunity to bring the EU closer to the people.] Die Charta ist [NP eine Chance], [S die EU den Bürgern näherzubringen.] An ideal algorithm would align the English NP to both the German NP and S. EdgeCover, which can model one-to-many-relationships, acts “confidently” and aligns the NP to the German S to maximise the overlap similarity, incurring both a precision and a recall error. PerfMatch, on the other hand, cannot handle one-to-many relationships, acts “cautiously” and aligns the English NP to a dummy node, leading to a recall error. Thus, even though EdgeCover’s analysis is partly right, it will come out worse than PerfMatch, given the current dataset and evaluation method. Test set We now examine whether our results carry over to the test data. Table 2 shows the 5Experiments using different filter combinations did not lead to performance gains over individual filters and are not reported here due to lack of space. performance of the best models (Forward (Arg), PerfMatch (NA), and EdgeCover (NA)) on automatic (Intersective) and manual (Manual) alignments.6 All models perform significantly better than the baseline but significantly worse than the upper bound (both in terms of precision and recall, p < 0.01). PerfMatch and EdgeCover yield better F-scores than the Forward model. In fact, PerfMatch yields a significantly better precision than Forward (p < 0.01). Relatively small performance gains are observed when manual alignments are used. The Fscore increases by 2.9% for Forward, 2.2% for PerfMatch, and 1.9% for EdgeCover. Also note that this better performance is primarily due to a significant increase in recall (p < 0.01), but not precision. This is an encouraging result indicating that our filters and graph-based algorithms eliminate alignment noise to a large extent. Analysis of the models’ output revealed that the remaining errors are mostly due to incorrect parses (none of the parsers employed in this work were trained on the Europarl corpus) but also to modelling deficiencies. 
Recall from Section 3 that our global models cannot currently capture one-to-zero correspondences, i.e., deletions and insertions. 7 Related work Previous work has primarily focused on the projection of grammatical (Yarowsky and Ngai, 2001) and syntactic information (Hwa et al., 2002). An exception is Fung and Chen (2004), who also attempt to induce FrameNet-style annotations in Chinese. Their method maps English FrameNet entries to concepts listed in HowNet7, an on-line ontology for Chinese, without using parallel texts. The present work extends our earlier projection framework (Padó and Lapata, 2005) by proposing global methods for automatic constituent alignment. Although our models are evaluated on the semantic role projection task, we believe they also show promise in the context of statistical machine translation. Especially for systems that use syntactic information to enhance translation quality. For example, Xia and McCord (2004) exploit constituent alignment for rearranging sentences in the source language so as to make their word or6Our results on the test set are slightly higher in comparison to the development set. The fluctuation reflects natural randomness in the partitioning of our corpus. 7See http://www.keenage.com/zhiwang/e_ zhiwang.html. 1167 der similar to that of the target language. They learn tree reordering rules by aligning constituents heuristically using a naive local optimisation procedure analogous to forward alignment. A similar approach is described in Collins et al. (2005); however, the rules are manually specified and the constituent alignment step reduces to inspection of the source-target sentence pairs. The global optimisation models presented in this paper could be easily employed for the reordering task common to both approaches. Other approaches treat rewrite rules not as a preprocessing step (e.g., for reordering source strings), but as a part of the translation model itself (Gildea, 2003; Gildea, 2004). Constituent alignments are learnt by estimating the probability of tree transformations, such as node deletions, insertions, and reorderings. These models have a greater expressive power than our edge cover models; however, this implies that approximations are often used to make computation feasible. 8 Conclusions In this paper, we have proposed a novel method for obtaining constituent alignments between parallel sentences and have shown that it is useful for semantic role projection. A key aspect of our approach is the formalisation of constituent alignment as the search for a minimum weight edge cover in a bipartite graph. This formalisation provides efficient mechanisms for aligning constituents and yields results superior to heuristic approaches. Furthermore, we have shown that treebased noise filtering techniques are essential for good performance. Our approach rests on the assumption that constituent alignment can be determined solely from the lexical similarity between constituents. Although this allows us to model constituent alignments efficiently as edge covers, it falls short of modelling translational divergences such as substitutions or insertions/deletions. In future work, we will investigate minimal tree edit distance (Bille, 2005) and related formalisms which are defined on tree structures and can therefore model divergences explicitly. However, it is an open question whether cross-linguistic syntactic analyses are similar enough to allow for structure-driven computation of alignments. 
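To make the formalisation concrete, here is a small sketch of the perfect-matching variant (PerfMatch) of constituent alignment. It is only an illustration under our own simplifications: constituents are represented by their word yields, similarity is plain word overlap through a given word alignment, the dummy cost is fixed, and the function names are ours rather than part of the system described above.

```python
# Illustrative sketch: constituent alignment as minimum-weight bipartite
# matching (PerfMatch-style).  Each source constituent is assigned either a
# target constituent or a dummy node; dummy columns let sources stay unaligned.
import numpy as np
from scipy.optimize import linear_sum_assignment

def overlap_similarity(src_words, tgt_words, word_alignment):
    """Fraction of source words aligned (via word_alignment: source word ->
    set of aligned target words) to some word of the target constituent."""
    if not src_words:
        return 0.0
    tgt_set = set(tgt_words)
    hits = sum(1 for w in src_words if word_alignment.get(w, set()) & tgt_set)
    return hits / len(src_words)

def perfect_matching_alignment(src_constituents, tgt_constituents,
                               word_alignment, dummy_cost=1.0):
    """Return {source index: target index or None} under a min-cost matching."""
    n_src, n_tgt = len(src_constituents), len(tgt_constituents)
    cost = np.full((n_src, n_tgt + n_src), dummy_cost)   # extra dummy columns
    for i, s in enumerate(src_constituents):
        for j, t in enumerate(tgt_constituents):
            cost[i, j] = 1.0 - overlap_similarity(s, t, word_alignment)
    rows, cols = linear_sum_assignment(cost)             # Hungarian-style solver
    return {i: (j if j < n_tgt else None) for i, j in zip(rows, cols)}
```

Replacing the one-to-one assignment with a minimum-weight edge cover, in which every node must be covered by at least one edge, gives the one-to-many behaviour of the EdgeCover model discussed above.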
Acknowledgments The authors acknowledge the support of DFG (Padó; grant Pi-154/9-2) and EPSRC (Lapata; grant GR/T04540/01). References P. Bille. 2005. A survey on tree edit distance and related problems. Theoretical Computer Science, 337(1-3):217– 239. H. C. Boas. 2005. Semantic frames as interlingual representations for multilingual lexical databases. International Journal of Lexicography, 18(4):445–478. X. Carreras, L. Màrquez, eds. 2005. Proceedings of the CoNLL shared task: Semantic role labelling, Boston, MA, 2005. M. Collins, P. Koehn, I. Kuˇcerová. 2005. Clause restructuring for statistical machine translation. In Proceedings of the 43rd ACL, 531–540, Ann Arbor, MI. M. Collins. 1997. Three generative, lexicalised models for statistical parsing. In Proceedings of the ACL/EACL, 16– 23, Madrid, Spain. A. Dubey. 2005. What to do when lexicalization fails: parsing German with suffix analysis and smoothing. In Proceedings of the 43rd ACL, 314–321, Ann Arbor, MI. T. Eiter, H. Mannila. 1997. Distance measures for point sets and their computation. Acta Informatica, 34(2):109–133. C. J. Fillmore, C. R. Johnson, M. R. Petruck. 2003. Background to FrameNet. International Journal of Lexicography, 16:235–250. M. L. Fredman, R. E. Tarjan. 1987. Fibonacci heaps and their uses in improved network optimization algorithms. Journal of the ACM, 34(3):596–615. P. Fung, B. Chen. 2004. BiFrameNet: Bilingual frame semantics resources construction by cross-lingual induction. In Proceedings of the 20th COLING, 931–935, Geneva, Switzerland. D. Gildea, D. Jurafsky. 2002. Automatic labeling of semantic roles. Computational Linguistics, 28(3):245–288. D. Gildea. 2003. Loosely tree-based alignment for machine translation. In Proceedings of the 41st ACL, 80–87, Sapporo, Japan. D. Gildea. 2004. Dependencies vs. constituents for treebased alignment. In Proceedings of the EMNLP, 214–221, Barcelona, Spain. C. Hi, R. Hwa. 2005. A backoff model for bootstrapping resources for non-english languages. In Proceedings of the HLT/EMNLP, 851–858, Vancouver, BC. R. Hwa, P. Resnik, A. Weinberg, O. Kolak. 2002. Evaluation of translational correspondence using annotation projection. In Proceedings of the 40th ACL, 392–399, Philadelphia, PA. R. Jonker, T. Volgenant. 1987. A shortest augmenting path algorithm for dense and sparse linear assignment problems. Computing, 38:325–340. P. Koehn, F. J. Och, D. Marcu. 2003. Statistical phrase-based translation. In Proceedings of the HLT/NAACL, 127–133, Edmonton, AL. P. Koehn. 2005. Europarl: A parallel corpus for statistical machine translation. In Proceedings of the MT Summit X, Phuket, Thailand. I. D. Melamed. 1998. Manual annotation of translational equivalence: The Blinker project. Technical Report IRCS TR #98-07, IRCS, University of Pennsylvania, 1998. F. J. Och, H. Ney. 2003. A systematic comparison of various statistical alignment models. Computational Linguistics, 29(1):19–52. S. Padó, M. Lapata. 2005. Cross-lingual projection of role-semantic information. In Proceedings of the HLT/EMNLP, 859–866, Vancouver, BC. F. Xia, M. McCord. 2004. Improving a statistical MT system with automatically learned rewrite patterns. In Proceedings of the 20th COLING, 508–514, Geneva, Switzerland. N. Xue, M. Palmer. 2004. Calibrating features for semantic role labeling. In Proceedings of the EMNLP, 88–94, Barcelona, Spain. D. Yarowsky, G. Ngai. 2001. Inducing multilingual text analysis tools via robust projection across aligned corpora. In Proceedings of the HLT, 161–168, San Diego, CA. 1168
2006
146
Proceedings of the 21st International Conference on Computational Linguistics and 44th Annual Meeting of the ACL, pages 1169–1176, Sydney, July 2006. c⃝2006 Association for Computational Linguistics Utilizing Co-Occurrence of Answers in Question Answering Abstract In this paper, we discuss how to utilize the co-occurrence of answers in building an automatic question answering system that answers a series of questions on a specific topic in a batch mode. Experiments show that the answers to the many of the questions in the series usually have a high degree of co-occurrence in relevant document passages. This feature sometimes can’t be easily utilized in an automatic QA system which processes questions independently. However it can be utilized in a QA system that processes questions in a batch mode. We have used our pervious TREC QA system as baseline and augmented it with new answer clustering and co-occurrence maximization components to build the batch QA system. The experiment results show that the QA system running under the batch mode get significant performance improvement over our baseline TREC QA system. 1 Introduction Question answering of a series of questions on one topic has gained more and more research interest in the recent years. The current TREC QA test set contains factoid and list questions grouped into different series, where each series has the target of a definition associated with it (Overview of the TREC 2004 Question Answering Track, Voorhees 2005). Usually, the target is also called “topic” by QA researchers. One of the restrictions of TREC QA is that “questions within a series must be processed in order, without looking ahead.” That is, systems are allowed to use answers to earlier questions to help answer later questions in the same series, but can not use later questions to help answer earlier questions. This requirement models the dialogue discourse between the user and the QA system. However our experiments on interactive QA system show that some impatient QA users will throw a bunch of questions to the system and waiting for the answers returned in all. This prompted us to consider building a QA system which can accept as many questions as possible from users once in all and utilizing the relations between these questions to help find answers. We would also like to know the performance difference between the QA system processing the question series in an order and the QA system processing the question series as a whole. We call the second type of QA system as batch QA system to avoid the ambiguity in the following description in this paper. What kind of relations between questions could be utilized is a key problem in building the batch QA system. By observing the test questions of TREC QA, we found that the questions given under the same topic are not independent at all. Figure-1 shows a series of three questions proposed under the topic “Russian submarine Kursk Sinks” and some relevant passages to this topic found in the TREC data set. These passages contain answers not to just one but to two or three of the questions. This indicates that the answers to these questions have high co-occurrence. In an automatic QA system which processes the questions independently, the answers to the questions may or may not always be extracted due to algorithmic limitations or noisy information around the correct answer. 
However in building a batch QA system, the interdependence between the answers could be utilized to help to filter out the noisy information and pinpoint the correct answer for each question in the series. Min Wu1 and Tomek Strzalkowski1,2 1 ILS Institute, University at Albany, State University of New York 1400 Washington Ave SS261, Albany NY, 12222 2Institute of Computer Science, Polish Academy of Sciences [email protected], [email protected] 1169 We will discuss later in this paper how to utilize the co-occurrence of answers to a series of questions in building a batch QA system. The remainder of this paper is organized as follows. In the next section, we review the current techniques used in building an automatic QA system. Section 3 introduces the answers co-occurrence and how to cluster questions by the cooccurrence of their answers. Section 4.1 describes our TREC QA system and section 4.2 describes how to build a batch QA system by augmenting the TREC QA system with question clustering and answer co-occurrence maximization. Section 4.3 describes the experiments and explains the experimental results. Finally we conclude with the discussion of future work. 2 Related Work During recent years, many automatic QA systems have been developed and the techniques used in these systems cover logic inference, syntactic relation analysis, information extraction and proximity search, some systems also utilize pre-compiled knowledge base and external online knowledge resource. The LCC system (Moldovan & Rus, 2001; Harabagiu et al. 2004) uses a logic prover to select answer from related passages. With the aid of extended WordNet and knowledge base, the text terms are converted to logical forms that can be proved to match the question logical forms. The IBM’s PIQUANT system (Chu-Carroll et al, 2003; Prager et al, 2004) adopts a QA-byDossier-with-Constraints approach, which utilizes the natural constraints between the answer to the main question and the answers to the auxiliary questions. Syntactic dependency matching has also been applied in many QA systems (Cui et al, 2005; Katz and Lin 2003). The syntactic dependency relations of a candidate sentence are matched against the syntactic dependency relations in the question in order to decide if the candidate sentence contains the answer. Although surface text pattern matching is a comparatively simple method, it is very efficient for simple factoid questions and is used by many QA systems (Hovy et al 2001; Soubbotin, M. and S. Soubbotin 2003). As a powerful web search engine and external online knowledge resource, Google has been widely adopted in QA systems (Hovy et al 2001; Cui 2005) as a tool to help passage retrieval and answer validation. Current QA systems mentioned above and represented at TREC have been developed to answer one question at the time. This may partially be an artifact of the earlier TREC QA evaluations which used large sets of independent questions. It may also partially reflect the intention of the current TREC QA Track that the question series introduced in TREC QA 2004 (Voorhees 2005) simulate an interaction with a human, thus expected to arrive one at a time. The co-occurrence of answers of a series of highly related questions has not yet been fully utilized in current automatic QA systems participating TREC. 
In this situation, we think it worthwhile to find out whether a series of highly related questions on a specific topic such as the TREC QA test questions can be answered together in a batch mode by utilizing the cooccurrences of the answers and how much it will help improve the QA system performance. 3 Answer Co-Occurrence and Question Clustering Many QA systems utilize the co-occurrence of question terms in passage retrieval (Cui 2005). Topic Russian submarine Kursk sinks 1. When did the submarine sink? August 12 2. How many crewmen were lost in the disaster? 118 3. In what sea did the submarine sink? Barents Sea Some Related Passages Russian officials have speculated that the Kursk collided with another vessel in the Barents Sea, and usually blame an unspecified foreign submarine. All 118 officers and sailors aboard were killed. The Russian governmental commission on the accident of the submarine Kursk sinking in the Barents Sea on August 12 has rejected 11 original explanations for the disaster. .... as the same one carried aboard the nuclear submarine Kursk, which sank in the Barents Sea on Aug. 12, killing all 118 crewmen aboard. The navy said Saturday that most of the 118-man crew died Aug. 12 when a huge explosion .... Chief of Staff of the Russian Northern Fleet Mikhail Motsak Monday officially confirmed the deaths of 118 crewmen on board the Kursk nuclear submarine that went to the bottom of the Barents Sea on August 12. Figure-1 Questions and Related Passages 1170 Some QA systems utilize the co-occurrence of question terms and answer terms in answer validation. These methods are based on the assumption that the co-occurrences of question terms and answer terms are relatively higher than the co-occurrences of other terms. Usually the cooccurrence are measured by pointwise mutual information between terms. During the development of our TREC QA system, we found the answers of some questions in a series have higher co-occurrence. For example, in a series of questions on a topic of disaster event, the answers to questions such as “when the event occurred”, “where the event occurred” and “how many were injured in the event” have high co-occurrence in relatively short passages. Also, in a series of questions on a topic of some person, the answers to questions such as “when did he die”, “where did he die” and “how did he die” have high co-occurrence. To utilize this answers co-occurrence effectively in a batch QA system, we need to know which questions are expected to have higher answers co-occurrence and cluster these questions to maximize the answers co-occurrence among the questions in the cluster. Currently, the topics used in TREC QA test questions fall into four categories: “Person”, “Organization”, “Event” and “Things”. The topic can be viewed as an object and the series of questions can be viewed as asking for the attributes of the object. In this point of view, to find out which questions have higher answers cooccurrence is to find out which attributes of the object (topic) have high co-occurrence. We started with three categories of TREC QA topics: “Event”, “Person” and “Organization”. For “Event” topic category, we divided it into two sub-categories: “Disaster Event” and “Sport Event”. From the 2004 & 2005 TREC QA test questions, we manually collected frequently asked questions on each topic category and mapped these questions to the corresponding attributes of the topic. 
We focused on frequently asked questions because these questions are easier to be classified and thus served as a good starting point for our work. However for this technique to scale in the future, we are expecting to integrate automatic topic model detection into the system. For topic category “Person”, the attributes and corresponding named entity (NE) tags list as follows. For each topic category, we collected 20 sample topics as well as the corresponding attributes information about these topics. The sample topic “Rocky Marciano” and the attributes are listed as follows: From each attribute of the sample topic, an appropriate question can be formulated and relevant passages about this question were retrieved from TREC data (AQUAINT Data) and the web. A topic-related passages collection was formed by the relevant passages of questions on all attributes under the topic. Among the topic-related passages, the pointwise mutual information (PMI) of attribute values were calculated which consequently formed a symmetric mutual information matrix. The PMI of two attribute values x and y was calculated by the following equation. ) ( ) ( ) , ( log ) , ( y p x p y x p y x PMI = All the mutual information matrixes under the topic category were added up and averaged in order to get one mutual information matrix which reflects the general co-occurrence relaAttribute Attribute Value Birth Date September 1, 1923 Birth Place Brockton, MA Death Date August 31, 1969 Death Place Iowa Death Reason airplane crash Death Age 45 Buried Place Fort Lauderdale, FL Nationality American Occupation heavyweight champion boxer Father Pierino Marchegiano Mother Pasqualena Marchegiano Wife Barbara Cousins Children Mary Ann, Rocco Kevin No. of Children two Real Name Rocco Francis Marchegiano Nick Name none Affiliation none Education none Attribute Attribute’s NE tag Birth Date Date Birth Place Location Death Date Date Death Place Location Death Reason Disease, Accident Death Age Number Nationality Nationality Occupation Occupation Father Person Mother Person Wife Person Children Person Number of Children Number Real Name Person, Other Nick Name Person, Other Affiliation Organization Education Organization 1171 tions between attributes under the topic category. We clustered the attributes by their mutual information value. Our clustering strategy was to cluster attributes whose pointwise mutual information is greater than a threshold λ. We choose λ as equal to 60% of the maximum value in the matrix. The operations described above were automatically carried out by our carefully designed training system. The clusters learned for each topic category is listed as follows. The reason for the clustering of attributes of topic category is for the convenience of building a batch QA system. When a batch QA system is processing a series of questions under a topic, some of the questions in the series are mapped to the attributes of the topic and thus grouped together according to the attribute clusters. Then questions in the same group are processed together to obtain a maximum of answers cooccurrence. More details are given in section 4.2. 4 Experiment Setup and Evaluation 4.1 Baseline System The baseline system is an automatic IE-driven (Information Extraction) QA system. We call it IE-driven because the main techniques used in the baseline system: surface pattern matching and N-gram proximity search need to be applied to NE-tagged (Named Entity) passages. The system architecture is illustrated in Figure-2. 
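Before walking through the Figure-2 architecture, the attribute-clustering step just described can be illustrated with a short sketch. This is our own rendering, not the authors' code: the per-topic PMI matrices are assumed to be given as NumPy arrays over a fixed attribute order, and only the thresholding step (λ set to 60% of the maximum of the averaged matrix) is shown; how the above-threshold pairs are grouped into the possibly overlapping clusters listed further below is left open.

```python
# Sketch of the attribute-clustering threshold of Section 3: average the
# per-topic PMI matrices for a topic category, set lambda to 60% of the
# maximum averaged value, and return the attribute pairs whose averaged PMI
# exceeds it.  Matrix layout and names are illustrative assumptions.
import numpy as np

def linked_attribute_pairs(pmi_matrices, attributes, ratio=0.6):
    avg = np.mean(np.stack(pmi_matrices), axis=0)   # averaged, symmetric MI matrix
    threshold = ratio * avg.max()                   # lambda
    pairs = []
    for i in range(len(attributes)):
        for j in range(i + 1, len(attributes)):
            if avg[i, j] > threshold:
                pairs.append((attributes[i], attributes[j]))
    return pairs

# e.g. linked_attribute_pairs(matrices, ["Birth Date", "Birth Place", "Death Date", ...])
```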
The components indicated by dash lines are not included in the baseline system and they are added to the baseline system to build a batch QA system. As shown in the figure with light color, the two components are question classification and co-occurrence maximization. Both our baseline system and batch QA system didn’t utilize any pre-compiled knowledge base. In the question analysis component, questions are classified by their syntactic structure and answer target. The answer targets are classified as named entity types. The retrieved documents are segmented into passages and filtered by topic keywords, question keywords and answer target. The answer selection methods we used are surface text pattern matching and n-gram proximity search. We build a pattern learning system to automatically extract answer patterns from the TREC data and the web. These answer patterns are scored by their frequency, sorted by question type and represented as regular expressions with terms of “NP”, “VP”, “VPN”, “ADVP”, “be”, “in”, “of”, “on”, “by”, “at”, “which”, “when”, “where”, “who”, “,”, “-“, “(“. Some sample answer patterns of question type “when_be_np_vp” are listed as follows. When applying these answer patterns to extract answer from candidate passages, the terms such as “NP”, “VP”, “VPN”, “ADVP” and “be” are replaced with the corresponding question terms. The replaced patterns can be matched directly to the candidate passages and answer candidate be extracted. Some similar proximity search methods have been applied in document and passage retrieval in the previous research. We applied n-gram proximity search to answer questions whose answers can’t be extracted by surface text pattern matching. Around every named entity in the filtered candidate passages, question terms as well as topic terms are matched as n-grams. A question term is tokenized by word. We matched the longest possible sequence of tokenized word within the 100 word sliding window around the named entity. Once a sequence is matched, the corresponding word tokens are removed from the ADVP1 VP in <Date>([^<>]+?)<\/Date> NP1.{1,15}VP.{1,30} in <Date>([^<>]+?)<\/Date> NP1.{1,30} be VP in <Date>([^<>]+?)<\/Date> NP1, which be VP in <Date>([^<>]+?)<\/Date> VP NP1.{1,15} at .{1,15}<Date>([^<>]+?)<\/Date> ADVP1.{1,80}NP1.{1,80}<Date>([^<>]+?)<\/Date> NP1, VP in <Date>([^<>]+?)<\/Date> NP1 of <Date>([^<>]+?)<\/Date> NP1 be VP in <Date>([^<>]+?)<\/Date> “Person” Topic Cluster1: Birth Date; Birth Place Cluster2a: Death Date; Death Place; Death Reason; Death Age Cluster2b: Death Date; Birth Date Cluster3: Father; Mother Cluster4: Wife; Children; Number of Children Cluster5: Nationality; Occupation “Disaster Event” Topic Cluster1: Event Date; Event Location; Event Casualty; Cluster2: Organization Involved, Person Involved “Sport Event” Topic Cluster1: Winner; Winning Score Cluster2: Location, Date “Organization” Topic Cluster1: Founded Date; Founded Location; Founder Cluster2: Headquarters; Number of Members 1172 token list and the same searching and matching is repeated until the token list is empty or no sequence of tokenized word can be matched. The named entity is scored by the average weighted distance score of question terms and topic terms. Let Num(ti...tj) denotes the number of all matched n-grams, d(E, ti...tj) denotes the word distance between the named entity and the matched n-gram, W1(ti...tj) denotes the topic weight of the matched n-gram, W2(ti...tj) denotes the length weight of the matched n-gram. 
If ti...tj contains topic terms or the question verb phrase, W1 is assigned 0.5; otherwise it is assigned 1.0. The value of the length weight W2 is determined by λ, the ratio of the matched n-gram length to the question term length: W2(ti...tj) = 0.4 if λ < 0.4; 0.6 if 0.4 ≤ λ ≤ 0.6; 0.8 if 0.6 < λ ≤ 0.75; and 0.9 if λ > 0.75. The weighted distance score D(E, QTerm) of a question term and the final score S(E) of the named entity are calculated by the following equations:

D(E, QTerm) = \sum_{t_i \ldots t_j} \frac{d(E, t_i \ldots t_j) \times W_1(t_i \ldots t_j) \times W_2(t_i \ldots t_j)}{Num(t_i \ldots t_j)}

S(E) = \frac{1}{N} \sum_{i=1}^{N} D(E, QTerm_i)

4.2 Batch QA System The batch QA system is built from the baseline system and two added components: question classification and co-occurrence maximization. In the batch QA system, questions are classified before they are syntactically and semantically analyzed. The classification process consists of two steps: topic categorization and question mapping. First, the topic of the question series is classified into the appropriate topic category; then the questions are mapped to the corresponding attributes and clustered according to the mapped attributes. Since the attributes of a topic category are collected from frequently asked questions, some questions in a series cannot be mapped to any attribute; these unmapped questions are processed individually. Topic categorization is done by a Naïve Bayes classifier which employs features such as stemmed question terms and named entities in the question. The training data is a collection of 85 question series labeled with one of four topic categories: "Person", "Disaster Event", "Sport Event" and "Organization". The mapping of a question to a topic attribute is based on example-based syntactic pattern matching and keyword matching. The questions grouped together are processed as a question cluster. After answer selection and ranking, each question in the cluster gets the top 10 scored candidate answers, which form an answer vector A(a1, ..., a10). Figure-2: Baseline QA system and batch QA system (question clustering and co-occurrence maximization are the added components, shown with dashed lines and light color; the remaining components are syntactic chunking, type categorization, query generation, target classification, document retrieval, passage filtering, surface text pattern matching, n-gram proximity search, and answer ranking, drawing on pattern files and the NE-tagged corpus, AQUAINT/Web). Suppose there are n questions in the cluster; the task of answer co-occurrence maximization is then to retrieve the combination of n answers, one per question, that has maximum pointwise mutual information (PMI). This combination is assumed to be the set of answers to the questions in the cluster. There are a total of 10^n possible combinations among the candidate answers, so computing the PMI of every combination is computationally inefficient. Moreover, some combinations containing noisy information may have higher co-occurrence than the correct answer combination. For example, the correct answer combination for the questions shown in Figure-1 is "August 12; 118; Barents Sea". However, the combination "Aug. 12; two; U.S." has higher pointwise mutual information, due to frequently occurring noisy information such as "two U.S. submarines" and "two explosions in the area Aug. 12 at the time". We therefore use a greedy selection procedure instead, sketched below and described in detail in the next paragraph.
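The greedy selection can be sketched roughly as follows. The code is illustrative only: the PMI between two candidate answers is left as a caller-supplied function (in the system it is estimated from co-occurrence in the retrieved passages), and the data layout and names are our own assumptions.

```python
# Rough sketch of greedy answer co-occurrence maximization: seed the final
# answer list with the single highest-scored candidate in the cluster, then
# repeatedly commit, for one remaining question at a time, the candidate with
# the highest total PMI against the answers already selected.
def maximize_cooccurrence(candidates, scores, pmi):
    """candidates: {question_id: [top-10 candidate answers]}
       scores:     {question_id: [scores parallel to candidates]}
       pmi:        function (answer_a, answer_b) -> float"""
    remaining = {q: list(cands) for q, cands in candidates.items()}
    # 1. Start from the globally highest-scored candidate answer.
    first_q = max(remaining, key=lambda q: max(scores[q]))
    best_k = max(range(len(scores[first_q])), key=lambda k: scores[first_q][k])
    final = {first_q: remaining.pop(first_q)[best_k]}
    # 2. Greedily add one answer per remaining question.
    while remaining:
        def gain(ans):
            return sum(pmi(ans, chosen) for chosen in final.values())
        q, ans = max(((q, a) for q, cands in remaining.items() for a in cands),
                     key=lambda qa: gain(qa[1]))
        final[q] = ans
        remaining.pop(q)
    return final
```

Compared with scoring all 10^n combinations, each greedy step only evaluates the remaining candidates, at the cost of depending heavily on the first selected answer being correct, which is exactly the sensitivity discussed next.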
To reduce this negative effect brought by the noisy information, we started from the highest scored answer and put it in the final answer list. Then we added the answers one by one to the final answer list. The added answer has the highest PMI with the answers in the final answer list. It is important here to choose the first answer added to the final answer list correctly. Otherwise, the following added answers will be negatively affected. So in our batch QA system, a correct answer should be scored highest among all the answer candidates of the questions in the cluster. Although this can’t be always achieved, it can be approximated by setting higher threshold both in passage scoring and answer ranking. However, in the baseline system, passages are not scored. They are equally processed because we wanted to retrieve as many answer candidates as possible and answer candidates are ranked by their matching score and redundancy score. 4.3 Performance Evaluation The data corpus we used is TREC QA data (AQUAINT Corpus). The test questions are TREC QA 2004 and TREC QA 2005 questions. Each topic is followed with a series of factoid questions. The number of questions selected from TREC 2004 collection is 230 and the number of question series is 65. The number of questions selected from TREC 2005 collection is 362 and the number of question series is 75. We performed 4 different experiments: (1). Baseline system. (2). Batch QA system (Baseline system with co-occurrence maximization). (3). Baseline system with web supporting. (4). Batch QA with web supporting. We introduced web supporting into the experiments because usually the information on the web tends to share more co-occurrence and redundancy which is also proved by our results. Compared between the baseline system and batch system, the experiment results show that the overall accuracy score has been improved from 0.34 to 0.39 on TREC 2004 test questions and from 0.31 to 0.37 on TREC 2005 test questions. Compared between the baseline system and batch system with web supporting, the accuracy score can be improved up to 0.498. We also noticed that the average number of questions under each topic in TREC 2004 test questions is 3.538, which is significantly lower than the 4.8267 average in TREC 2005 questions series. This may explain why the improvement we obtained on TREC2004 data is not as significant as the improvement obtained on TREC 2005 questions. The accuracy score of each TREC2005 question series is also calculated. Figure3-4 shows the comparisons between 4 different experiment methods. We also calculate the number of question series with accuracy increased, unchanged and decreased. It is also shown in the following table. (“+” means number of question series with accuracy increased, “=” unchanged and “-” decreased.) TREC2005 Question Series (75 question series) + - = Baseline + Co-occurrence 25 5 45 Baseline + Web 40 2 33 Baseline + Co-occurrence + Web 49 2 24 Accuracy Com parison on Different Methods 0 0.1 0.2 0.3 0.4 0.5 0.6 1 2 3 4 TREC2004 TREC2005 1174 Some question series get unchanged accuracy because the questions can’t be clustered according to our clustering template so that it can’t utilize the co-occurrence of answers in the cluster. Some question series get decreased accuracy because the questions because the noisy information had even higher co-occurrence, the error occurred during the question clustering and the answers didn’t show any co-relations in the retrieved passages at all. 
A deep and further error analysis is necessary for this answer cooccurrence maximization technique to be applied topic independently. 5 Discussion and Future Work We have demonstrated that in a QA system, answering a series of inter-related questions can be improved by grouping the questions by expected co-occurrence of answers in text. The improvement can be made without exploiting the pre-compiled knowledge base. Although our system can cluster frequently asked questions on topics of “Events”, “Persons” and “Organizations”, there are still some highly related questions which can’t be clustered by our method. Here are some examples. To cluster these questions, we plan to utilize event detection techniques and set up an event topic “Carlos the Jackal captured” during the answering process, which will make it easier to cluster “When was the Carlos the Jackal captured?” and “Where was the Carlos the Jackal captured?” Can this answers co-occurrence maximization approach be applied to improve QA performance Topic Carlos the Jackal 1. When was he captured? 2. Where was he captured? Topic boxer Floyd Patterson 1. When did he win the title? 2. How old was he when he won the title? 3. Who did he beat to win the title? Accuracy on TREC2005 Test Questions 0 0.2 0.4 0.6 0.8 1 1.2 1 4 7 10 13 16 19 22 25 28 31 34 37 40 43 46 49 52 55 58 61 64 67 70 73 question series accuracy baseline baseline+co_occurrence baseline+w eb baseline+w eb+co_occurrence Accuracy on TREC2004 Test Questions 0 0.2 0.4 0.6 0.8 1 1.2 1 4 7 10 13 16 19 22 25 28 31 34 37 40 43 46 49 52 55 58 61 64 question series accuracy baseline baseline+co_occurrence baseline+w eb baseline+w eb+co_occurrence Figure 3-4 Comparison of TREC2004/2005 Question Series Accuracy 1175 on single questions (i.e. 1-series)? As suggested in the reference paper (Chu-Carrol and Prager), we may be able to add related (unasked) questions to form a cluster around the single question. Another open issue is what kind of effect will this technique bring to answering series of “list” questions, i.e., where each question expects a list of items as answer. As we know that the answers of some “list” questions have pretty high co-occurrence while others don’t have cooccurrence at all. Future work involves experiments conducted on these aspects. Acknowledgement The Authors wish to thank BBN for the use of NE tagging software IdentiFinder, CIIR at University of Massachusetts for the use of Inquery search engine, Stanford University NLP group for the use of Stanford parser. Thanks also to the anonymous reviewers for their helpful comments. References Chu-Carrol, J., J. Prager, C. Welty, K. Czuba and D. Ferrucci. “A Multi-Strategy and MultiSource Approach to Question Answering”, In Proceedings of the 11th TREC, 2003. Cui, H., K. Li, R. Sun, T.-S. Chua and M.-Y. Kan. “National University of Singapore at the TREC 13 Question Answering Main Task”. In Proceedings of the 13th TREC, 2005. Han, K.-S., H. Chung, S.-B. Kim, Y.-I. Song, J.Y. Lee, and H.-C. Rim. “Korea University Question Answering System at TREC 2004”. In Proceedings of the 13th TREC, 2005. Harabagiu, S., D. Moldovan, C. Clark, M. Bowden, J. Williams and J. Bensley. “Answer Mining by Combining Extraction Techniques with Abductive Reasoning”. In Proceedings of 12th TREC, 2004. Hovy, E. L. Gerber, U. Hermjakob, M. Junk and C.-Y. Lin. “Question Answering in Webclopedia”. In Proceedings of the 9th TREC, 2001. Lin, J., D. Quan, V. Sinha, K. Bakshi, D. Huynh, B. Katz and D. R. Karger. 
“The Role of Context in Question Answering Systems”. In CHI 2003. Katz, B. and J. Lin. “Selectively Using Relations to Improve Precision in Question Answering”. In Proceedings of the EACL-2003 Workshop on Natural Language Processing for Question Answering. 2003. Moldovan, D. and V. Rus. “Logical Form Transformation of WordNet and its Applicability to Question Answering”. In Proceedings of the ACL, 2001. Monz. C. “Minimal Span Weighting Retrieval for Question Answering” In Proceedings of the SIGIR Workshop on Information Retrieval for Question Answering. 2004. Prager, J., E. Brown, A. Coden and D. Radev. “Question-Answering by Predictive Annotation”. In Proceedings of SIGIR 2000, pp. 184191. 2000. Prager, J., J. Chu-Carroll and K. Czuba. “Question Answering Using Constraint Satisfaction: QA-By-Dossier-With-Constraints”. In Proceedings of the 42nd ACL. 2004. Ravichandran, D. and E. Hovy. “Learning Surface Text Patterns for a Question Answering System”. In Proceedings of 40th ACL. 2002. Soubbotin, M. and S. Soubbotin. “Patterns of Potential Answer Expressions as Clues to the Right Answers”. In Proceedings of 11th TREC. 2003. Voorhees, E. “Using Question Series to Evaluate Question Answering System Effectiveness”. In Proceedings of HLT 2005. 2005. 1176
2006
147
Proceedings of the 21st International Conference on Computational Linguistics and 44th Annual Meeting of the ACL, pages 113–120, Sydney, July 2006. c⃝2006 Association for Computational Linguistics Espresso: Leveraging Generic Patterns for Automatically Harvesting Semantic Relations Patrick Pantel Information Sciences Institute University of Southern California 4676 Admiralty Way Marina del Rey, CA 90292 [email protected] Marco Pennacchiotti ART Group - DISP University of Rome “Tor Vergata” Viale del Politecnico 1 Rome, Italy [email protected] Abstract In this paper, we present Espresso, a weakly-supervised, general-purpose, and accurate algorithm for harvesting semantic relations. The main contributions are: i) a method for exploiting generic patterns by filtering incorrect instances using the Web; and ii) a principled measure of pattern and instance reliability enabling the filtering algorithm. We present an empirical comparison of Espresso with various state of the art systems, on different size and genre corpora, on extracting various general and specific relations. Experimental results show that our exploitation of generic patterns substantially increases system recall with small effect on overall precision. 1 Introduction Recent attention to knowledge-rich problems such as question answering (Pasca and Harabagiu 2001) and textual entailment (Geffet and Dagan 2005) has encouraged natural language processing researchers to develop algorithms for automatically harvesting shallow semantic resources. With seemingly endless amounts of textual data at our disposal, we have a tremendous opportunity to automatically grow semantic term banks and ontological resources. To date, researchers have harvested, with varying success, several resources, including concept lists (Lin and Pantel 2002), topic signatures (Lin and Hovy 2000), facts (Etzioni et al. 2005), and word similarity lists (Hindle 1990). Many recent efforts have also focused on extracting semantic relations between entities, such as entailments (Szpektor et al. 2004), is-a (Ravichandran and Hovy 2002), part-of (Girju et al. 2006), and other relations. The following desiderata outline the properties of an ideal relation harvesting algorithm: • Performance: it must generate both high precision and high recall relation instances; • Minimal supervision: it must require little or no human annotation; • Breadth: it must be applicable to varying corpus sizes and domains; and • Generality: it must be applicable to a wide variety of relations (i.e., not just is-a or part-of). To our knowledge, no previous harvesting algorithm addresses all these properties concurrently. In this paper, we present Espresso, a generalpurpose, broad, and accurate corpus harvesting algorithm requiring minimal supervision. The main algorithmic contribution is a novel method for exploiting generic patterns, which are broad coverage noisy patterns – i.e., patterns with high recall and low precision. Insofar, difficulties in using these patterns have been a major impediment for minimally supervised algorithms resulting in either very low precision or recall. We propose a method to automatically detect generic patterns and to separate their correct and incorrect instances. The key intuition behind the algorithm is that given a set of reliable (high precision) patterns on a corpus, correct instances of a generic pattern will fire more with reliable patterns on a very large corpus, like the Web, than incorrect ones. 
Below is a summary of the main contributions of this paper: • Algorithm for exploiting generic patterns: Unlike previous algorithms that require significant manual work to make use of generic patterns, we propose an unsupervised Webfiltering method for using generic patterns; and • Principled reliability measure: We propose a new measure of pattern and instance reliability which enables the use of generic patterns. 113 Espresso addresses the desiderata as follows: • Performance: Espresso generates balanced precision and recall relation instances by exploiting generic patterns; • Minimal supervision: Espresso requires as input only a small number of seed instances; • Breadth: Espresso works on both small and large corpora – it uses Web and syntactic expansions to compensate for lacks of redundancy in small corpora; • Generality: Espresso is amenable to a wide variety of binary relations, from classical is-a and part-of to specific ones such as reaction and succession. Previous work like (Girju et al. 2006) that has made use of generic patterns through filtering has shown both high precision and high recall, at the expensive cost of much manual semantic annotation. Minimally supervised algorithms, like (Hearst 1992; Pantel et al. 2004), typically ignore generic patterns since system precision dramatically decreases from the introduced noise and bootstrapping quickly spins out of control. 2 Relevant Work To date, most research on relation harvesting has focused on is-a and part-of. Approaches fall into two categories: pattern- and clustering-based. Most common are pattern-based approaches. Hearst (1992) pioneered using patterns to extract hyponym (is-a) relations. Manually building three lexico-syntactic patterns, Hearst sketched a bootstrapping algorithm to learn more patterns from instances, which has served as the model for most subsequent pattern-based algorithms. Berland and Charniak (1999) proposed a system for part-of relation extraction, based on the (Hearst 1992) approach. Seed instances are used to infer linguistic patterns that are used to extract new instances. While this study introduces statistical measures to evaluate instance quality, it remains vulnerable to data sparseness and has the limitation of considering only one-word terms. Improving upon (Berland and Charniak 1999), Girju et al. (2006) employ machine learning algorithms and WordNet (Fellbaum 1998) to disambiguate part-of generic patterns like “X’s Y” and “X of Y”. This study is the first extensive attempt to make use of generic patterns. In order to discard incorrect instances, they learn WordNetbased selectional restrictions, like “X(scene#4)’s Y(movie#1)”. While making huge grounds on improving precision/recall, heavy supervision is required through manual semantic annotations. Ravichandran and Hovy (2002) focus on scaling relation extraction to the Web. A simple and effective algorithm is proposed to infer surface patterns from a small set of instance seeds by extracting substrings relating seeds in corpus sentences. The approach gives good results on specific relations such as birthdates, however it has low precision on generic ones like is-a and partof. Pantel et al. (2004) proposed a similar, highly scalable approach, based on an edit-distance technique, to learn lexico-POS patterns, showing both good performance and efficiency. Espresso uses a similar approach to infer patterns, but we make use of generic patterns and apply refining techniques to deal with wide variety of relations. 
Other pattern-based algorithms include (Riloff and Shepherd 1997), who used a semi-automatic method for discovering similar words using a few seed examples, KnowItAll (Etzioni et al. 2005) that performs large-scale extraction of facts from the Web, Mann (2002) who used part of speech patterns to extract a subset of is-a relations involving proper nouns, and (Downey et al. 2005) who formalized the problem of relation extraction in a coherent and effective combinatorial model that is shown to outperform previous probabilistic frameworks. Clustering approaches have so far been applied only to is-a extraction. These methods use clustering algorithms to group words according to their meanings in text, label the clusters using its members’ lexical or syntactic dependencies, and then extract an is-a relation between each cluster member and the cluster label. Caraballo (1999) proposed the first attempt, which used conjunction and apposition features to build noun clusters. Recently, Pantel and Ravichandran (2004) extended this approach by making use of all syntactic dependency features for each noun. The advantage of clustering approaches is that they permit algorithms to identify is-a relations that do not explicitly appear in text, however they generally fail to produce coherent clusters from fewer than 100 million words; hence they are unreliable for small corpora. 3 The Espresso Algorithm Espresso is based on the framework adopted in (Hearst 1992). It is a minimally supervised bootstrapping algorithm that takes as input a few seed instances of a particular relation and iteratively learns surface patterns to extract more instances. The key to Espresso lies in its use of generic patters, i.e., those broad coverage noisy patterns that 114 extract both many correct and incorrect relation instances. For example, for part-of relations, the pattern “X of Y” extracts many correct relation instances like “wheel of the car” but also many incorrect ones like “house of representatives”. The key assumption behind Espresso is that in very large corpora, like the Web, correct instances generated by a generic pattern will be instantiated by some reliable patterns, where reliable patterns are patterns that have high precision but often very low recall (e.g., “X consists of Y” for part-of relations). In this section, we describe the overall architecture of Espresso, propose a principled measure of reliability, and give an algorithm for exploiting generic patterns. 3.1 System Architecture Espresso iterates between the following three phases: pattern induction, pattern ranking/selection, and instance extraction. The algorithm begins with seed instances of a particular binary relation (e.g., is-a) and then iterates through the phases until it extracts τ1 patterns or the average pattern score decreases by more than τ2 from the previous iteration. In our experiments, we set τ1 = 5 and τ2 = 50%. For our tokenization, in order to harvest multiword terms as relation instances, we adopt a slightly modified version of the term definition given in (Justeson 1995), as it is one of the most commonly used in the NLP literature: ((Adj|Noun)+|((Adj|Noun)*(NounPrep)?)(Adj|Noun)*)Noun Pattern Induction In the pattern induction phase, Espresso infers a set of surface patterns P that connects as many of the seed instances as possible in a given corpus. Any pattern learning algorithm would do. We chose the state of the art algorithm described in (Ravichandran and Hovy 2002) with the following slight modification. 
For each input instance {x, y}, we first retrieve all sentences containing the two terms x and y. The sentences are then generalized into a set of new sentences Sx,y by replacing all terminological expressions by a terminological label, TR. For example: “Because/IN HF/NNP is/VBZ a/DT weak/JJ acid/NN and/CC x is/VBZ a/DT y” is generalized as: “Because/IN TR is/VBZ a/DT TR and/CC x is/VBZ a/DT y” Term generalization is useful for small corpora to ease data sparseness. Generalized patterns are naturally less precise, but this is ameliorated by our filtering step described in Section 3.3. As in the original algorithm, all substrings linking terms x and y are then extracted from Sx,y, and overall frequencies are computed to form P. Pattern Ranking/Selection In (Ravichandran and Hovy 2002), a frequency threshold on the patterns in P is set to select the final patterns. However, low frequency patterns may in fact be very good. In this paper, instead of frequency, we propose a novel measure of pattern reliability, rπ, which is described in detail in Section 3.2. Espresso ranks all patterns in P according to reliability rπ and discards all but the top-k, where k is set to the number of patterns from the previous iteration plus one. In general, we expect that the set of patterns is formed by those of the previous iteration plus a new one. Yet, new statistical evidence can lead the algorithm to discard a pattern that was previously discovered. Instance Extraction In this phase, Espresso retrieves from the corpus the set of instances I that match any of the patterns in P. In Section 3.2, we propose a principled measure of instance reliability, rι, for ranking instances. Next, Espresso filters incorrect instances using the algorithm proposed in Section 3.3 and then selects the highest scoring m instances, according to rι, as input for the subsequent iteration. We experimentally set m=200. In small corpora, the number of extracted instances can be too low to guarantee sufficient statistical evidence for the pattern discovery phase of the next iteration. In such cases, the system enters an expansion phase, where instances are expanded as follows: Web expansion: New instances of the patterns in P are retrieved from the Web, using the Google search engine. Specifically, for each instance {x, y}∈ I, the system creates a set of queries, using each pattern in P instantiated with y. For example, given the instance “Italy, country” and the pattern “Y such as X”, the resulting Google query will be “country such as *”. New instances are then created from the retrieved Web results (e.g. “Canada, country”) and added to I. The noise generated from this expansion is attenuated by the filtering algorithm described in Section 3.3. Syntactic expansion: New instances are created from each instance {x, y}∈ I by extracting sub-terminological expressions from x corresponding to the syntactic head of terms. For ex115 ample, the relation “new record of a criminal conviction part-of FBI report” expands to: “new record part-of FBI report”, and “record part-of FBI report”. 3.2 Pattern and Instance Reliability Intuitively, a reliable pattern is one that is both highly precise and one that extracts many instances. The recall of a pattern p can be approximated by the fraction of input instances that are extracted by p. 
Since it is non-trivial to automatically estimate the precision of a pattern, we are wary of keeping patterns that generate many instances (i.e., patterns with high recall but potentially disastrous precision). Hence, we desire patterns that are highly associated with the input instances. Pointwise mutual information (Cover and Thomas 1991) is a commonly used metric for measuring this strength of association between two events x and y:

pmi(x, y) = \log \frac{P(x, y)}{P(x)\,P(y)}

We define the reliability of a pattern p, rπ(p), as its average strength of association across each input instance i in I, weighted by the reliability of each instance i:

r_\pi(p) = \frac{\sum_{i \in I} \left( \frac{pmi(i, p)}{max_{pmi}} \times r_\iota(i) \right)}{|I|}

where rι(i) is the reliability of instance i (defined below) and max_pmi is the maximum pointwise mutual information between all patterns and all instances. rπ(p) ranges over [0, 1]. The reliability of each manually supplied seed instance is rι(i) = 1. The pointwise mutual information between instance i = {x, y} and pattern p is estimated using the following formula:

pmi(i, p) = \log \frac{|x, p, y|}{|x, *, y|\;|*, p, *|}

where |x, p, y| is the frequency of pattern p instantiated with terms x and y and where the asterisk (*) represents a wildcard. A well-known problem is that pointwise mutual information is biased towards infrequent events. We thus multiply pmi(i, p) by the discounting factor suggested in (Pantel and Ravichandran 2004). Estimating the reliability of an instance is similar to estimating the reliability of a pattern. Intuitively, a reliable instance is one that is highly associated with as many reliable patterns as possible (i.e., we have more confidence in an instance when multiple reliable patterns instantiate it). Hence, analogous to our pattern reliability measure, we define the reliability of an instance i, rι(i), as:

r_\iota(i) = \frac{\sum_{p \in P'} \left( \frac{pmi(i, p)}{max_{pmi}} \times r_\pi(p) \right)}{|P'|}

where rπ(p) is the reliability of pattern p (defined earlier) and max_pmi is as before. Note that rι(i) and rπ(p) are recursively defined, where rι(i) = 1 for the manually supplied seed instances. 3.3 Exploiting Generic Patterns Generic patterns are high-recall / low-precision patterns (e.g., the pattern "X of Y" can ambiguously express part-of, is-a, and possession relations). Using them blindly increases system recall while dramatically reducing precision; minimally supervised algorithms have typically ignored them for this reason, and only heavily supervised approaches, like (Girju et al. 2006), have successfully exploited them. Espresso's recall can be significantly increased by automatically separating correct instances extracted by generic patterns from incorrect ones. The challenge is to harness the expressive power of the generic patterns while remaining minimally supervised. The intuition behind our method is that in a very large corpus, like the Web, correct instances of a generic pattern will be instantiated by many of Espresso's reliable patterns accepted in P. Recall that, by definition, Espresso's reliable patterns extract instances with high precision (yet often low recall). In a very large corpus, like the Web, we assume that a correct instance will occur in at least one of Espresso's reliable patterns even though the patterns' recall is low. Intuitively, our confidence in a correct instance increases when i) the instance is associated with many reliable patterns, and ii) its association with the reliable patterns is high.
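Before moving to the confidence measure that formalises this intuition, the reliability computation of Section 3.2 can be sketched as follows. This is a minimal illustration under our own assumptions: the pmi values are taken as precomputed (and already discounted), seed instances start at reliability 1.0, and the names are ours.

```python
# Minimal sketch of the mutually recursive reliability measures: patterns are
# scored from the current instance reliabilities, then instances are re-scored
# from the new pattern reliabilities.  pmi is a dict {(instance, pattern): value}.
def update_reliabilities(pmi, patterns, instances, r_instance):
    max_pmi = max(pmi.values()) if pmi else 1.0
    r_pattern = {}
    for p in patterns:
        r_pattern[p] = sum((pmi.get((i, p), 0.0) / max_pmi) * r_instance.get(i, 0.0)
                           for i in instances) / len(instances)
    new_r_instance = {}
    for i in instances:
        new_r_instance[i] = sum((pmi.get((i, p), 0.0) / max_pmi) * r_pattern[p]
                                for p in patterns) / len(patterns)
    return r_pattern, new_r_instance

# Seeds are initialised with reliability 1.0, as in the text:
# r_inst = {seed: 1.0 for seed in seed_instances}
# r_pat, r_inst = update_reliabilities(pmi_table, candidate_patterns, all_instances, r_inst)
```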
At a given Espresso iteration, where P_R represents the set of previously selected reliable patterns, this intuition is captured by the following measure of confidence in an instance i = {x, y}:

S(i) = \sum_{p \in P_R} S_p(i) \times \frac{r_\pi(p)}{T}

where T is the sum of the reliability scores rπ(p) for each pattern p ∈ P_R, and

S_p(i) = pmi(i, p) = \log \frac{|x, p, y|}{|x, *, y|\;|*, p, *|}

where the pointwise mutual information between instance i and pattern p is estimated with Google as follows:

S_p(i) \approx \frac{|x, p, y|}{|x| \times |p| \times |y|}

An instance i is rejected if S(i) is smaller than some threshold τ. Although this filtering may also be applied to reliable patterns, we found this to be detrimental in our experiments since most instances generated by reliable patterns are correct. In Espresso, we classify a pattern as generic when it generates more than 10 times the instances of previously accepted reliable patterns. 4 Experimental Results In this section, we present an empirical comparison of Espresso with three state of the art systems on the task of extracting various semantic relations. 4.1 Experimental Setup We perform our experiments using the following two datasets:
• TREC: This dataset consists of a sample of articles from the Aquaint (TREC-9) newswire text collection. The sample consists of 5,951,432 words extracted from the following data files: AP890101 – AP890131, AP890201 – AP890228, and AP890310 – AP890319.
• CHEM: This small dataset of 313,590 words consists of a college level textbook of introductory chemistry (Brown et al. 2003).
Each corpus is pre-processed using the Alembic Workbench POS-tagger (Day et al. 1997). Below we describe the systems used in our empirical evaluation of Espresso.
• RH02: The algorithm by Ravichandran and Hovy (2002) described in Section 2.
• GI03: The algorithm by Girju et al. (2006) described in Section 2.
• PR04: The algorithm by Pantel and Ravichandran (2004) described in Section 2.
• ESP-: The Espresso algorithm using the pattern and instance reliability measures, but without using generic patterns.
• ESP+: The full Espresso algorithm described in this paper exploiting generic patterns.
For ESP+, we experimentally set τ from Section 3.3 to τ = 0.4 for TREC and τ = 0.3 for CHEM by manually inspecting a small set of instances. Espresso is designed to extract various semantic relations exemplified by a given small set of seed instances. We consider the standard is-a and part-of relations as well as the following more specific relations:
• succession: This relation indicates that a person succeeds another in a position or title. For example, George Bush succeeded Bill Clinton and Pope Benedict XVI succeeded Pope John Paul II. We evaluate this relation on the TREC-9 corpus.
• reaction: This relation occurs between chemical elements/molecules that can be combined in a chemical reaction. For example, hydrogen gas reacts-with oxygen gas and zinc reacts-with hydrochloric acid. We evaluate this relation on the CHEM corpus.
• production: This relation occurs when a process or element/object produces a result1. For example, ammonia produces nitric oxide. We evaluate this relation on the CHEM corpus.
For each semantic relation, we manually extracted a small set of seed examples. The seeds were used for both Espresso as well as RH02. Table 1 lists a sample of the seeds as well as sample outputs from Espresso.
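The thresholds τ = 0.4 (TREC) and τ = 0.3 (CHEM) set above plug directly into the Section 3.3 filter; a sketch of how such a filter might be applied follows. It is not the authors' implementation: `hit_count` stands in for the Google hit counts used to estimate S_p(i), the pattern object's `text` and `instantiate` fields are hypothetical, and no query-formulation details are implied.

```python
# Sketch of the generic-pattern filter of Section 3.3: an instance harvested by
# a generic pattern is kept only if its reliability-weighted association with
# the reliable patterns P_R reaches the threshold tau (0.4 for TREC, 0.3 for CHEM).
def instance_confidence(x, y, reliable_patterns, r_pattern, hit_count):
    T = sum(r_pattern[p] for p in reliable_patterns)
    if T == 0:
        return 0.0
    score = 0.0
    for p in reliable_patterns:
        # S_p(i) ~ |x, p, y| / (|x| * |p| * |y|), estimated from Web hit counts
        denom = hit_count(x) * hit_count(p.text) * hit_count(y)
        s_p = hit_count(p.instantiate(x, y)) / denom if denom else 0.0
        score += s_p * (r_pattern[p] / T)
    return score

def filter_generic_instances(instances, reliable_patterns, r_pattern,
                             hit_count, tau=0.4):
    return [(x, y) for (x, y) in instances
            if instance_confidence(x, y, reliable_patterns, r_pattern,
                                   hit_count) >= tau]
```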
4.2 Precision and Recall We implemented the systems outlined in Section 4.1, except for GI03, and applied them to the 1 Production is an ambiguous relation; it is intended to be a causation relation in the context of chemical reactions. Table 1. Sample seeds used for each semantic relation and sample outputs from Espresso. The number in the parentheses for each relation denotes the total number of seeds used as input for the system. Is-a (12) Part-Of (12) Succession (12) Reaction (13) Production (14) Seeds wheat :: crop George Wendt :: star nitrogen :: element diborane :: substance leader :: panel city :: region ion :: matter oxygen :: water Khrushchev :: Stalin Carla Hills :: Yeutter Bush :: Reagan Julio Barbosa :: Mendes magnesium :: oxygen hydrazine :: water aluminum metal :: oxygen lithium metal :: fluorine gas bright flame :: flares hydrogen :: metal hydrides ammonia :: nitric oxide copper :: brown gas Espresso Picasso :: artist tax :: charge protein :: biopolymer HCl :: strong acid trees :: land material :: FBI report oxygen :: air atom :: molecule Ford :: Nixon Setrakian :: John Griesemer Camero Cardiel :: Camacho Susan Weiss :: editor hydrogen :: oxygen Ni :: HCl carbon dioxide :: methane boron :: fluorine electron :: ions glycerin :: nitroglycerin kidneys :: kidney stones ions :: charge 117 Table 8. System performance: CHEM/production. SYSTEM INSTANCES PRECISION* REL RECALL† RH02 197 57.5% 0.80 ESP- 196 72.5% 1.00 ESP+ 1676 55.8% 6.58 TREC and CHEM datasets. For each output set, per relation, we evaluate the precision of the system by extracting a random sample of instances (50 for the TREC corpus and 20 for the CHEM corpus) and evaluating their quality manually using two human judges (a total of 680 instances were annotated per judge). For each instance, judges may assign a score of 1 for correct, 0 for incorrect, and ½ for partially correct. Example instances that were judged partially correct include “analyst is-a manager” and “pilot is-a teacher”. The kappa statistic (Siegel and Castellan Jr. 1988) on this task was Κ = 0.692. The precision for a given set of instances is the sum of the judges’ scores divided by the total instances. Although knowing the total number of correct instances of a particular relation in any nontrivial corpus is impossible, it is possible to compute the recall of a system relative to another system’s recall. Following (Pantel et al. 2004), we define the relative recall of system A given system B, RA|B, as: B P A P C C R R R B A B A C C C C B A B A B A × × = = = = | where RA is the recall of A, CA is the number of correct instances extracted by A, C is the (unknown) total number of correct instances in the corpus, PA is A’s precision in our experiments, 2 The kappa statistic jumps to Κ = 0.79 if we treat partially correct classifications as correct. and |A| is the total number of instances discovered by A. Tables 2 – 8 report the total number of instances, precision, and relative recall of each system on the TREC-9 and CHEM corpora 3 4. The relative recall is always given in relation to the ESP- system. For example, in Table 2, RH02 has a relative recall of 5.31 with ESP-, which means that the RH02 system outputs 5.31 times more correct relations than ESP- (at a cost of much lower precision). Similarly, PR04 has a relative recall of 0.23 with ESP-, which means that PR04 outputs 4.35 fewer correct relations than ESP- (also with a smaller precision). 
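The relative recall figures quoted above follow directly from the definition; since the unknown total C cancels, the computation reduces to a one-liner (the function name and the worked example are ours, with numbers taken from Table 2):

```python
# Relative recall of system A with respect to system B: the unknown number of
# correct instances C cancels, leaving R_A|B = (P_A * |A|) / (P_B * |B|).
def relative_recall(precision_a, total_a, precision_b, total_b):
    return (precision_a * total_a) / (precision_b * total_b)

# Example from Table 2 (TREC/is-a), RH02 vs. ESP-:
# relative_recall(0.28, 57525, 0.73, 4154)  ->  approximately 5.31
```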
We did not include the results from GI03 in the tables since the system is only applicable to part-of relations and we did not reproduce it. However, the authors evaluated their system on a sample of the TREC-9 dataset and reported 83% precision and 72% recall (this algorithm is heavily supervised).
* Because of the small evaluation sets, we estimate the 95% confidence intervals using bootstrap resampling to be in the order of ±10–15% (absolute numbers).
† Relative recall is given in relation to ESP-.

Table 2. System performance: TREC/is-a.
  SYSTEM  INSTANCES  PRECISION*  REL RECALL†
  RH02    57,525     28.0%       5.31
  PR04    1,504      47.0%       0.23
  ESP-    4,154      73.0%       1.00
  ESP+    69,156     36.2%       8.26

Table 3. System performance: CHEM/is-a.
  SYSTEM  INSTANCES  PRECISION*  REL RECALL†
  RH02    2,556      25.0%       3.76
  PR04    108        40.0%       0.25
  ESP-    200        85.0%       1.00
  ESP+    1,490      76.0%       6.66

Table 4. System performance: TREC/part-of.
  SYSTEM  INSTANCES  PRECISION*  REL RECALL†
  RH02    12,828     35.0%       42.52
  ESP-    132        80.0%       1.00
  ESP+    87,203     69.9%       577.22

Table 5. System performance: CHEM/part-of.
  SYSTEM  INSTANCES  PRECISION*  REL RECALL†
  RH02    11,582     33.8%       58.78
  ESP-    111        60.0%       1.00
  ESP+    5,973      50.7%       45.47

Table 6. System performance: TREC/succession.
  SYSTEM  INSTANCES  PRECISION*  REL RECALL†
  RH02    49,798     2.0%        36.96
  ESP-    55         49.0%       1.00
  ESP+    55         49.0%       1.00

Table 7. System performance: CHEM/reaction.
  SYSTEM  INSTANCES  PRECISION*  REL RECALL†
  RH02    6,083      30.0%       53.67
  ESP-    40         85.0%       1.00
  ESP+    3,102      91.4%       89.39

In all tables, RH02 extracts many more relations than ESP-, but with a much lower precision, because it uses generic patterns without filtering. The high precision of ESP- is due to the effective reliability measures presented in Section 3.2.

4.3 Effect of Generic Patterns
Experimental results, for all relations and the two different corpus sizes, show that ESP- greatly outperforms the other methods on precision. However, without the use of generic patterns, the ESP- system shows lower recall in all but the production relation. As hypothesized, exploiting generic patterns using the algorithm from Section 3.3 substantially improves recall without much deterioration in precision. ESP+ shows one to two orders of magnitude improvement on recall while losing on average below 10% precision. The succession relation in Table 6 was the only relation where Espresso found no generic pattern. For the other relations, Espresso found from one to five generic patterns. Table 4 shows the power of generic patterns, where system recall increases by 577 times with only a 10% drop in precision. In Table 7, we see a case where the combination of filtering with a large increase in retrieved instances resulted in both higher precision and recall.
In order to better analyze our use of generic patterns, we performed the following experiment. For each relation, we randomly sampled 100 instances for each generic pattern and built a gold standard for them (by manually tagging each instance as correct or incorrect). We then sorted the 100 instances according to the scoring formula S(i) derived in Section 3.3 and computed the average precision, recall, and F-score of the top-K ranked instances for each pattern (recall can be computed directly here since we built a gold standard for each set of 100 samples). Due to lack of space, we only present the graphs for four of the 22 generic patterns: "X is a Y" for the is-a relation of Table 2, "X in the Y" for the part-of relation of Table 4, "X in Y" for the part-of relation of Table 5, and "X and Y" for the reaction relation of Table 7. Figure 1 illustrates the results. In each figure, notice that recall climbs at a much faster rate than precision decreases.
This indicates that the scoring function of Section 3.3 effectively separates correct and incorrect instances. In Figure 1a), there is a big initial drop in precision that accounts for the poor precision reported in Table 1. Recall that the cutoff points on S(i) were set to τ = 0.4 for TREC and τ = 0.3 for CHEM. The figures show that this cutoff is far from the maximum F-score. An interesting avenue of future work would be to automatically determine the proper threshold for each individual generic pattern instead of setting a uniform threshold.

Figure 1. Precision, recall and F-score curves of the Top-K% ranking instances of patterns "X is a Y" (TREC/is-a), "X in Y" (TREC/part-of), "X in the Y" (CHEM/part-of), and "X and Y" (CHEM/reaction). [Four panels: a) TREC/is-a: "X is a Y"; b) TREC/part-of: "X in the Y"; c) CHEM/part-of: "X in Y"; d) CHEM/reaction: "X and Y". Each panel plots precision, recall and F-score (0 to 1) against the Top-K% of ranked instances, K = 5 to 95.]

5 Conclusions
We proposed a weakly-supervised, general-purpose, and accurate algorithm, called Espresso, for harvesting binary semantic relations from raw text. The main contributions are: i) a method for exploiting generic patterns by filtering incorrect instances using the Web; and ii) a principled measure of pattern and instance reliability enabling the filtering algorithm.
We have empirically compared Espresso's precision and recall with other systems on both a small domain-specific textbook and on a larger corpus of general news, and have extracted several standard and specific semantic relations: is-a, part-of, succession, reaction, and production. Espresso achieves higher and more balanced performance than other state of the art systems. By exploiting generic patterns, system recall substantially increases with little effect on precision.
There are many avenues of future work both in improving system performance and making use of the relations in applications like question answering. For the former, we plan to investigate the use of WordNet to automatically learn selectional constraints on generic patterns, as proposed by (Girju et al. 2006). We expect here that negative instances will play a key role in determining the selectional restrictions.
Espresso is the first system, to our knowledge, to emphasize concurrently performance, minimal supervision, breadth, and generality. It remains to be seen whether one could enrich existing ontologies with relations harvested by Espresso, and it is our hope that these relations will benefit NLP applications.

References
Berland, M. and E. Charniak, 1999. Finding parts in very large corpora. In Proceedings of ACL-1999. pp. 57-64. College Park, MD.
Brown, T.L.; LeMay, H.E.; Bursten, B.E.; and Burdge, J.R. 2003. Chemistry: The Central Science, Ninth Edition. Prentice Hall.
Caraballo, S. 1999. Automatic acquisition of a hypernym-labeled noun hierarchy from text. In Proceedings of ACL-99. pp. 120-126, Baltimore, MD.
Cover, T.M. and Thomas, J.A. 1991. Elements of Information Theory. John Wiley & Sons.
Day, D.; Aberdeen, J.; Hirschman, L.; Kozierok, R.; Robinson, P.; and Vilain, M. 1997. Mixed-initiative development of language processing systems. In Proceedings of ANLP-97. Washington D.C.
Downey, D.; Etzioni, O.; and Soderland, S. 2005.
A Probabilistic model of redundancy in information extraction. In Proceedings of IJCAI-05. pp. 1034-1041. Edinburgh, Scotland. Etzioni, O.; Cafarella, M.J.; Downey, D.; Popescu, A.-M.; Shaked, T.; Soderland, S.; Weld, D.S.; and Yates, A. 2005. Unsupervised named-entity extraction from the Web: An experimental study. Artificial Intelligence, 165(1): 91-134. Fellbaum, C. 1998. WordNet: An Electronic Lexical Database. MIT Press. Geffet, M. and Dagan, I. 2005. The Distributional Inclusion Hypotheses and Lexical Entailment. In Proceedings of ACL-2005. Ann Arbor, MI. Girju, R.; Badulescu, A.; and Moldovan, D. 2006. Automatic Discovery of Part-Whole Relations. Computational Linguistics, 32(1): 83-135. Hearst, M. 1992. Automatic acquisition of hyponyms from large text corpora. In Proceedings of COLING-92. pp. 539-545. Nantes, France. Hindle, D. 1990. Noun classification from predicateargument structures. In Proceedings of ACL-90. pp. 268– 275. Pittsburgh, PA. Justeson J.S. and Katz S.M. 1995. Technical Terminology: some linguistic properties and algorithms for identification in text. In Proceedings of ICCL-95. pp.539-545. Nantes, France. Lin, C.-Y. and Hovy, E.H.. 2000. The Automated acquisition of topic signatures for text summarization. In Proceedings of COLING-00. pp. 495-501. Saarbrücken, Germany. Lin, D. and Pantel, P. 2002. Concept discovery from text. In Proceedings of COLING-02. pp. 577-583. Taipei, Taiwan. Mann, G. S. 2002. Fine-Grained Proper Noun Ontologies for Question Answering. In Proceedings of SemaNet’ 02: Building and Using Semantic Networks, Taipei, Taiwan. Pantel, P. and Ravichandran, D. 2004. Automatically labeling semantic classes. In Proceedings of HLT/NAACL-04. pp. 321-328. Boston, MA. Pantel, P.; Ravichandran, D.; Hovy, E.H. 2004. Towards terascale knowledge acquisition. In Proceedings of COLING-04. pp. 771-777. Geneva, Switzerland. Pasca, M. and Harabagiu, S. 2001. The informative role of WordNet in Open-Domain Question Answering. In Proceedings of NAACL-01 Workshop on WordNet and Other Lexical Resources. pp. 138-143. Pittsburgh, PA. Ravichandran, D. and Hovy, E.H. 2002. Learning surface text patterns for a question answering system. In Proceedings of ACL-2002. pp. 41-47. Philadelphia, PA. Riloff, E. and Shepherd, J. 1997. A corpus-based approach for building semantic lexicons. In Proceedings of EMNLP-97. Siegel, S. and Castellan Jr., N. J. 1988. Nonparametric Statistics for the Behavioral Sciences. McGraw-Hill. Szpektor, I.; Tanev, H.; Dagan, I.; and Coppola, B. 2004. Scaling web-based acquisition of entailment relations. In Proceedings of EMNLP-04. Barcelona, Spain. 120
Proceedings of the 21st International Conference on Computational Linguistics and 44th Annual Meeting of the ACL, pages 121–128, Sydney, July 2006. c⃝2006 Association for Computational Linguistics Modeling Commonality among Related Classes in Relation Extraction Zhou GuoDong Su Jian Zhang Min Institute for Infocomm Research 21 Heng Mui Keng Terrace, Singapore 119613 Email: {zhougd, sujian, mzhang}@i2r.a-star.edu.sg Abstract This paper proposes a novel hierarchical learning strategy to deal with the data sparseness problem in relation extraction by modeling the commonality among related classes. For each class in the hierarchy either manually predefined or automatically clustered, a linear discriminative function is determined in a topdown way using a perceptron algorithm with the lower-level weight vector derived from the upper-level weight vector. As the upper-level class normally has much more positive training examples than the lower-level class, the corresponding linear discriminative function can be determined more reliably. The upperlevel discriminative function then can effectively guide the discriminative function learning in the lower-level, which otherwise might suffer from limited training data. Evaluation on the ACE RDC 2003 corpus shows that the hierarchical strategy much improves the performance by 5.6 and 5.1 in F-measure on least- and medium- frequent relations respectively. It also shows that our system outperforms the previous best-reported system by 2.7 in F-measure on the 24 subtypes using the same feature set. 1 Introduction With the dramatic increase in the amount of textual information available in digital archives and the WWW, there has been growing interest in techniques for automatically extracting information from text. Information Extraction (IE) is such a technology that IE systems are expected to identify relevant information (usually of predefined types) from text documents in a certain domain and put them in a structured format. According to the scope of the ACE program (ACE 2000-2005), current research in IE has three main objectives: Entity Detection and Tracking (EDT), Relation Detection and Characterization (RDC), and Event Detection and Characterization (EDC). This paper will focus on the ACE RDC task, which detects and classifies various semantic relations between two entities. For example, we want to determine whether a person is at a location, based on the evidence in the context. Extraction of semantic relationships between entities can be very useful for applications such as question answering, e.g. to answer the query “Who is the president of the United States?”. One major challenge in relation extraction is due to the data sparseness problem (Zhou et al 2005). As the largest annotated corpus in relation extraction, the ACE RDC 2003 corpus shows that different subtypes/types of relations are much unevenly distributed and a few relation subtypes, such as the subtype “Founder” under the type “ROLE”, suffers from a small amount of annotated data. Further experimentation in this paper (please see Figure 2) shows that most relation subtypes suffer from the lack of the training data and fail to achieve steady performance given the current corpus size. Given the relative large size of this corpus, it will be time-consuming and very expensive to further expand the corpus with a reasonable gain in performance. 
Even if we can somehow expend the corpus and achieve steady performance on major relation subtypes, it will be still far beyond practice for those minor subtypes given the much unevenly distribution among different relation subtypes. While various machine learning approaches, such as generative modeling (Miller et al 2000), maximum entropy (Kambhatla 2004) and support vector machines (Zhao and Grisman 2005; Zhou et al 2005), have been applied in the relation extraction task, no explicit learning strategy is proposed to deal with the inherent data sparseness problem caused by the much uneven distribution among different relations. This paper proposes a novel hierarchical learning strategy to deal with the data sparseness problem by modeling the commonality among related classes. Through organizing various classes hierarchically, a linear discriminative function is determined for each class in a topdown way using a perceptron algorithm with the lower-level weight vector derived from the upper-level weight vector. Evaluation on the ACE RDC 2003 corpus shows that the hierarchical 121 strategy achieves much better performance than the flat strategy on least- and medium-frequent relations. It also shows that our system based on the hierarchical strategy outperforms the previous best-reported system. The rest of this paper is organized as follows. Section 2 presents related work. Section 3 describes the hierarchical learning strategy using the perceptron algorithm. Finally, we present experimentation in Section 4 and conclude this paper in Section 5. 2 Related Work The relation extraction task was formulated at MUC-7(1998). With the increasing popularity of ACE, this task is starting to attract more and more researchers within the natural language processing and machine learning communities. Typical works include Miller et al (2000), Zelenko et al (2003), Culotta and Sorensen (2004), Bunescu and Mooney (2005a), Bunescu and Mooney (2005b), Zhang et al (2005), Roth and Yih (2002), Kambhatla (2004), Zhao and Grisman (2005) and Zhou et al (2005). Miller et al (2000) augmented syntactic full parse trees with semantic information of entities and relations, and built generative models to integrate various tasks such as POS tagging, named entity recognition, template element extraction and relation extraction. The problem is that such integration may impose big challenges, e.g. the need of a large annotated corpus. To overcome the data sparseness problem, generative models typically applied some smoothing techniques to integrate different scales of contexts in parameter estimation, e.g. the back-off approach in Miller et al (2000). Zelenko et al (2003) proposed extracting relations by computing kernel functions between parse trees. Culotta and Sorensen (2004) extended this work to estimate kernel functions between augmented dependency trees and achieved Fmeasure of 45.8 on the 5 relation types in the ACE RDC 2003 corpus1. Bunescu and Mooney (2005a) proposed a shortest path dependency kernel. They argued that the information to model a relationship between two entities can be typically captured by the shortest path between them in the dependency graph. It achieved the Fmeasure of 52.5 on the 5 relation types in the ACE RDC 2003 corpus. Bunescu and Mooney (2005b) proposed a subsequence kernel and ap 1 The ACE RDC 2003 corpus defines 5/24 relation types/subtypes between 4 entity types. plied it in protein interaction and ACE relation extraction tasks. 
Zhang et al (2005) adopted clustering algorithms in unsupervised relation extraction using tree kernels. To overcome the data sparseness problem, various scales of sub-trees are applied in the tree kernel computation. Although tree kernel-based approaches are able to explore the huge implicit feature space without much feature engineering, further research work is necessary to make them effective and efficient. Comparably, feature-based approaches achieved much success recently. Roth and Yih (2002) used the SNoW classifier to incorporate various features such as word, part-of-speech and semantic information from WordNet, and proposed a probabilistic reasoning approach to integrate named entity recognition and relation extraction. Kambhatla (2004) employed maximum entropy models with features derived from word, entity type, mention level, overlap, dependency tree, parse tree and achieved Fmeasure of 52.8 on the 24 relation subtypes in the ACE RDC 2003 corpus. Zhao and Grisman (2005) 2 combined various kinds of knowledge from tokenization, sentence parsing and deep dependency analysis through support vector machines and achieved F-measure of 70.1 on the 7 relation types of the ACE RDC 2004 corpus3. Zhou et al (2005) further systematically explored diverse lexical, syntactic and semantic features through support vector machines and achieved Fmeasure of 68.1 and 55.5 on the 5 relation types and the 24 relation subtypes in the ACE RDC 2003 corpus respectively. To overcome the data sparseness problem, feature-based approaches normally incorporate various scales of contexts into the feature vector extensively. These approaches then depend on adopted learning algorithms to weight and combine each feature effectively. For example, an exponential model and a linear model are applied in the maximum entropy models and support vector machines respectively to combine each feature via the learned weight vector. In summary, although various approaches have been employed in relation extraction, they implicitly attack the data sparseness problem by using features of different contexts in featurebased approaches or including different sub 2 Here, we classify this paper into feature-based approaches since the feature space in the kernels of Zhao and Grisman (2005) can be easily represented by an explicit feature vector. 3 The ACE RDC 2004 corpus defines 7/27 relation types/subtypes between 7 entity types. 122 structures in kernel-based approaches. Until now, there are no explicit ways to capture the hierarchical topology in relation extraction. Currently, all the current approaches apply the flat learning strategy which equally treats training examples in different relations independently and ignore the commonality among different relations. This paper proposes a novel hierarchical learning strategy to resolve this problem by considering the relatedness among different relations and capturing the commonality among related relations. By doing so, the data sparseness problem can be well dealt with and much better performance can be achieved, especially for those relations with small amounts of annotated examples. 3 Hierarchical Learning Strategy Traditional classifier learning approaches apply the flat learning strategy. That is, they equally treat training examples in different classes independently and ignore the commonality among related classes. 
The flat strategy will not cause any problem when there is a large amount of training examples for each class, since, in this case, a classifier learning approach can always learn a nearly optimal discriminative function for each class against the remaining classes. However, such a flat strategy may cause big problems when there is only a small amount of training examples for some of the classes. In this case, a classifier learning approach may fail to learn a reliable (or nearly optimal) discriminative function for a class with a small amount of training examples, and, as a result, may significantly affect the performance of the class or even the overall performance. To overcome the inherent problems in the flat strategy, this paper proposes a hierarchical learning strategy which explores the inherent commonality among related classes through a class hierarchy. In this way, the training examples of related classes can help in learning a reliable discriminative function for a class with only a small amount of training examples. To reduce computation time and memory requirements, we will only consider linear classifiers and apply the simple and widely-used perceptron algorithm for this purpose, with more options open for future research. In the following, we will first introduce the perceptron algorithm in linear classifier learning, followed by the hierarchical learning strategy using the perceptron algorithm. Finally, we will consider several ways of building the class hierarchy.

3.1 Perceptron Algorithm
_______________________________________
Input: the initial weight vector w, the training example sequence (x_t, y_t) ∈ X × Y, t = 1, 2, ..., T, and the maximal number N of iterations over the training sequence (e.g. 10 in this paper; see footnote 4)
Output: the weight vector w* for the linear discriminative function f = w ⋅ x
BEGIN
  w_1 = w
  REPEAT for t = 1, 2, ..., T*N
    1. Receive the instance x_t ∈ R^n
    2. Compute the output o_t = w_t ⋅ x_t
    3. Give the prediction ŷ_t = sign(o_t)
    4. Receive the desired label y_t ∈ {-1, +1}
    5. Update the hypothesis according to
         w_{t+1} = w_t + δ_t y_t x_t    (1)
       where δ_t = 0 if the margin of w_t at the given example (x_t, y_t) is positive, i.e. y_t w_t ⋅ x_t > 0, and δ_t = 1 otherwise
  END REPEAT
  Return w* = (Σ_{i=N-4}^{N} w_{T·i}) / 5
END
_______________________________________
Figure 1: the perceptron algorithm

This section first deals with binary classification using linear classifiers. Assume an instance space X = R^n and a binary label space Y = {-1, +1}. With any weight vector w ∈ R^n and a given instance x ∈ R^n, we associate a linear classifier h_w with a linear discriminative function f(x) = w ⋅ x (see footnote 5) by h_w(x) = sign(w ⋅ x), where sign(w ⋅ x) = -1 if w ⋅ x < 0 and sign(w ⋅ x) = +1 otherwise. Here, the margin of w at (x_t, y_t) is defined as y_t w ⋅ x_t. Then if the margin is positive, we have a correct prediction with h_w(x) = y_t, and if the margin is negative, we have an error with h_w(x) ≠ y_t. Therefore, given a sequence of training examples (x_t, y_t) ∈ X × Y, t = 1, 2, ..., T, linear classifier learning attempts to find a weight vector w that achieves a positive margin on as many examples as possible.

Footnote 4: The training example sequence is fed N times for better performance. Moreover, this number can control the maximal effect a training example can have. This is similar to the regularization parameter C in SVM, which affects the trade-off between complexity and the proportion of non-separable examples. As a result, it can be used to control over-fitting and robustness.
Footnote 5: w ⋅ x denotes the dot product of the weight vector w ∈ R^n and a given instance x ∈ R^n.
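As a rough illustration (not the authors' code), the training loop of Figure 1 can be written as follows; the only assumptions are NumPy arrays for the feature vectors, labels in {-1, +1}, and averaging over the weight vectors of the last five passes as in the Return step.

```python
import numpy as np

def train_perceptron(X, y, w_init=None, n_passes=10, n_avg=5):
    """Sketch of the training loop in Figure 1.

    X: (T, n) array of feature vectors; y: length-T array of labels in {-1, +1}.
    w_init: initial weight vector w; the flat strategy uses the zero vector,
            while the hierarchical strategy of Section 3.2 passes in the
            upper-level weight vector here.
    Returns w*, the average of the weight vectors after the last n_avg passes.
    """
    T, n = X.shape
    w = np.zeros(n) if w_init is None else np.asarray(w_init, dtype=float).copy()
    end_of_pass = []
    for _ in range(n_passes):
        for t in range(T):
            if y[t] * np.dot(w, X[t]) <= 0:      # non-positive margin: delta_t = 1
                w = w + y[t] * X[t]              # Equation (1)
        end_of_pass.append(w.copy())
    return np.mean(end_of_pass[-n_avg:], axis=0)
```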
The well-known perceptron algorithm, as shown in Figure 1, belongs to online learning of linear classifiers, where the learning algorithm represents its t-th hypothesis by a weight vector w_t ∈ R^n. At trial t, an online algorithm receives an instance x_t ∈ R^n, makes its prediction ŷ_t = sign(w_t ⋅ x_t) and receives the desired label y_t ∈ {-1, +1}. What distinguishes different online algorithms is how they update w_t into w_{t+1} based on the example (x_t, y_t) received at trial t. In particular, the perceptron algorithm updates the hypothesis by adding a scalar multiple of the instance, as shown in Equation 1 of Figure 1, when there is an error. Normally, the traditional perceptron algorithm initializes the hypothesis as the zero vector, w_1 = 0. This is usually the most natural choice, lacking any other preference.

Smoothing
In order to further improve the performance, we iteratively feed the training examples for a possibly better discriminative function. In this paper, we have set the maximal iteration number to 10 for both efficiency and stable performance, and the final weight vector in the discriminative function is averaged over those of the discriminative functions in the last few iterations (e.g. 5 in this paper).

Bagging
One more problem with any online classifier learning algorithm, including the perceptron algorithm, is that the learned discriminative function somewhat depends on the feeding order of the training examples. In order to eliminate such dependence and further improve the performance, an ensemble technique, called bagging (Breiman 1996), is applied in this paper. In bagging, the bootstrap technique is first used to build M (e.g. 10 in this paper) replicate sample sets by randomly re-sampling with replacement from the given training set repeatedly. Then, each training sample set is used to train a certain discriminative function. Finally, the final weight vector in the discriminative function is averaged over those of the M discriminative functions in the ensemble.

Multi-Class Classification
Basically, the perceptron algorithm is only for binary classification. Therefore, we must extend the perceptron algorithm to multi-class classification, such as the ACE RDC task. For efficiency, we apply the one vs. others strategy, which builds K classifiers so as to separate one class from all the others. However, the outputs of the perceptron algorithms for different classes may not be directly comparable, since any positive scalar multiple of the weight vector will not affect the actual prediction of a perceptron algorithm. For comparability, we map the perceptron algorithm output into a probability by using an additional sigmoid model:

  p(y = 1 | f) = 1 / (1 + exp(A·f + B))    (2)

where f = w ⋅ x is the output of a perceptron algorithm and the coefficients A and B are to be trained using the model trust algorithm as described in Platt (1999). The final decision for an instance in multi-class classification is determined by the class which has the maximal probability from the corresponding perceptron algorithm.

3.2 Hierarchical Learning Strategy using the Perceptron Algorithm
Assume we have a class hierarchy for a task, e.g. the one in the ACE RDC 2003 corpus as shown in Table 1 of Section 4.1. The hierarchical learning strategy explores the inherent commonality among related classes in a top-down way.
For each class in the hierarchy, a linear discriminative function is determined in a top-down way with the lower-level weight vector derived from the upper-level weight vector iteratively. This is done by initializing the weight vector in training the linear discriminative function for the lowerlevel class as that of the upper-level class. That is, the lower-level discriminative function has the preference toward the discriminative function of its upper-level class. For an example, let’s look at the training of the “Located” relation subtype in the class hierarchy as shown in Table 1: 1) Train the weight vector of the linear discriminative function for the “YES” relation vs. the “NON” relation with the weight vector initialized as the zero vector. 2) Train the weight vector of the linear discriminative function for the “AT” relation type vs. all the remaining relation types (including the “NON” relation) with the weight vector initialized as the weight vector of the linear discriminative function for the “YES” relation vs. the “NON” relation. 3) Train the weight vector of the linear discriminative function for the “Located” relation subtype vs. all the remaining relation subtypes under all the relation types (including the “NON” relation) with the 124 weight vector initialized as the weight vector of the linear discriminative function for the “AT” relation type vs. all the remaining relation types. 4) Return the above trained weight vector as the discriminatie function for the “Located” relation subtype. In this way, the training examples in different classes are not treated independently any more, and the commonality among related classes can be captured via the hierarchical learning strategy. The intuition behind this strategy is that the upper-level class normally has more positive training examples than the lower-level class so that the corresponding linear discriminative function can be determined more reliably. In this way, the training examples of related classes can help in learning a reliable discriminative function for a class with only a small amount of training examples in a top-down way and thus alleviate its data sparseness problem. 3.3 Building the Class Hierarchy We have just described the hierarchical learning strategy using a given class hierarchy. Normally, a rough class hierarchy can be given manually according to human intuition, such as the one in the ACE RDC 2003 corpus. In order to explore more commonality among sibling classes, we make use of binary hierarchical clustering for sibling classes at both lowest and all levels. This can be done by first using the flat learning strategy to learn the discriminative functions for individual classes and then iteratively combining the two most related classes using the cosine similarity function between their weight vectors in a bottom-up way. The intuition is that related classes should have similar hyper-planes to separate from other classes and thus have similar weight vectors. • Lowest-level hybrid: Binary hierarchical clustering is only done at the lowest level while keeping the upper-level class hierarchy. That is, only sibling classes at the lowest level are hierarchically clustered. • All-level hybrid: Binary hierarchical clustering is done at all levels in a bottom-up way. That is, sibling classes at the lowest level are hierarchically clustered first and then sibling classes at the upper-level. In this way, the binary class hierarchy can be built iteratively in a bottom-up way. 
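To make the top-down derivation concrete, the sketch below is an illustrative reading of the strategy rather than the authors' implementation. It reuses the train_perceptron helper sketched after Figure 1; HIERARCHY, covers and train_subtype are hypothetical names, and the path encoding of the class hierarchy is an assumption made for illustration.

```python
import numpy as np

# Hypothetical encoding of (part of) the class hierarchy in Table 1:
# each subtype is mapped to its path from the root ("YES" = some relation holds).
HIERARCHY = {
    "Located":  ["YES", "AT", "Located"],
    "Based-In": ["YES", "AT", "Based-In"],
    "Part-Of":  ["YES", "PART", "Part-Of"],
    # ... remaining subtypes
}

def covers(node, label, hierarchy):
    """True if subtype `label` is dominated by `node` in the class hierarchy."""
    return node in hierarchy.get(label, [])

def train_subtype(X, labels, subtype, hierarchy=HIERARCHY):
    """Top-down training of the discriminative function for one relation subtype.

    labels: the subtype label of each training example, with "NON" for
    unrelated entity-mention pairs.
    """
    w = np.zeros(X.shape[1])                      # the root level starts from the zero vector
    for node in hierarchy[subtype]:               # e.g. "YES" -> "AT" -> "Located"
        y = np.array([+1 if covers(node, lab, hierarchy) else -1 for lab in labels])
        w = train_perceptron(X, y, w_init=w)      # lower level initialized from upper level
    return w
```

In this sketch the examples of sibling subtypes (e.g. Based-In) contribute positively when the type-level separator (e.g. AT) is trained, which is exactly how the commonality among related classes is shared downwards.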
4 Experimentation This paper uses the ACE RDC 2003 corpus provided by LDC to train and evaluate the hierarchical learning strategy. Same as Zhou et al (2005), we only model explicit relations and explicitly model the argument order of the two mentions involved. 4.1 Experimental Setting Type Subtype Freq Bin Type AT Based-In 347 Medium Located 2126 Large Residence 308 Medium NEAR Relative-Location 201 Medium PART Part-Of 947 Large Subsidiary 355 Medium Other 6 Small ROLE Affiliate-Partner 204 Medium Citizen-Of 328 Medium Client 144 Small Founder 26 Small General-Staff 1331 Large Management 1242 Large Member 1091 Large Owner 232 Medium Other 158 Small SOCIAL Associate 91 Small Grandparent 12 Small Other-Personal 85 Small Other-Professional 339 Medium Other-Relative 78 Small Parent 127 Small Sibling 18 Small Spouse 77 Small Table 1: Statistics of relation types and subtypes in the training data of the ACE RDC 2003 corpus (Note: According to frequency, all the subtypes are divided into three bins: large/ middle/ small, with 400 as the lower threshold for the large bin and 200 as the upper threshold for the small bin). The training data consists of 674 documents (~300k words) with 9683 relation examples while the held-out testing data consists of 97 documents (~50k words) with 1386 relation examples. All the experiments are done five times on the 24 relation subtypes in the ACE corpus, except otherwise specified, with the final performance averaged using the same re-sampling with replacement strategy as the one in the bagging technique. Table 1 lists various types and subtypes of relations for the ACE RDC 2003 corpus, along with their occurrence frequency in the training data. It shows that this corpus suffers from a small amount of annotated data for a few subtypes such as the subtype “Founder” under the type “ROLE”. For comparison, we also adopt the same feature set as Zhou et al (2005): word, entity type, 125 mention level, overlap, base phrase chunking, dependency tree, parse tree and semantic information. 4.2 Experimental Results Table 2 shows the performance of the hierarchical learning strategy using the existing class hierarchy in the given ACE corpus and its comparison with the flat learning strategy, using the perceptron algorithm. It shows that the pure hierarchical strategy outperforms the pure flat strategy by 1.5 (56.9 vs. 55.4) in F-measure. It also shows that further smoothing and bagging improve the performance of the hierarchical and flat strategies by 0.6 and 0.9 in F-measure respectively. As a result, the final hierarchical strategy achieves F-measure of 57.8 and outperforms the final flat strategy by 1.8 in F-measure. Strategies P R F Flat 58.2 52.8 55.4 Flat+Smoothing 58.9 53.1 55.9 Flat+Bagging 59.0 53.1 55.9 Flat+Both 59.1 53.2 56.0 Hierarchical 61.9 52.6 56.9 Hierarchical+Smoothing 62.7 53.1 57.5 Hierarchical+Bagging 62.9 53.1 57.6 Hierarchical+Both 63.0 53.4 57.8 Table 2: Performance of the hierarchical learning strategy using the existing class hierarchy and its comparison with the flat learning strategy Class Hierarchies P R F Existing 63.0 53.4 57.8 Entirely Automatic 63.4 53.1 57.8 Lowest-level Hybrid 63.6 53.5 58.1 All-level Hybrid 63.6 53.6 58.2 Table 3: Performance of the hierarchical learning strategy using different class hierarchies Table 3 compares the performance of the hierarchical learning strategy using different class hierarchies. 
It shows that, the lowest-level hybrid approach, which only automatically updates the existing class hierarchy at the lowest level, improves the performance by 0.3 in F-measure while further updating the class hierarchy at upper levels in the all-level hybrid approach only has very slight effect. This is largely due to the fact that the major data sparseness problem occurs at the lowest level, i.e. the relation subtype level in the ACE corpus. As a result, the final hierarchical learning strategy using the class hierarchy built with the all-level hybrid approach achieves F-measure of 58.2 in F-measure, which outperforms the final flat strategy by 2.2 in Fmeasure. In order to justify the usefulness of our hierarchical learning strategy when a rough class hierarchy is not available and difficult to determine manually, we also experiment using entirely automatically built class hierarchy (using the traditional binary hierarchical clustering algorithm and the cosine similarity measurement) without considering the existing class hierarchy. Table 3 shows that using automatically built class hierarchy performs comparably with using only the existing one. With the major goal of resolving the data sparseness problem for the classes with a small amount of training examples, Table 4 compares the best-performed hierarchical and flat learning strategies on the relation subtypes of different training data sizes. Here, we divide various relation subtypes into three bins: large/middle/small, according to their available training data sizes. For the ACE RDC 2003 corpus, we use 400 as the lower threshold for the large bin6 and 200 as the upper threshold for the small bin7. As a result, the large/medium/small bin includes 5/8/11 relation subtypes, respectively. Please see Table 1 for details. Table 4 shows that the hierarchical strategy outperforms the flat strategy by 1.0/5.1/5.6 in F-measure on the large/middle/small bin respectively. This indicates that the hierarchical strategy performs much better than the flat strategy for those classes with a small or medium amount of annotated examples although the hierarchical strategy only performs slightly better by 1.0 and 2.2 in Fmeasure than the flat strategy on those classes with a large size of annotated corpus and on all classes as a whole respectively. This suggests that the proposed hierarchical strategy can well deal with the data sparseness problem in the ACE RDC 2003 corpus. An interesting question is about the similarity between the linear discriminative functions learned using the hierarchical and flat learning strategies. Table 4 compares the cosine similarities between the weight vectors of the linear discriminative functions using the two strategies for different bins, weighted by the training data sizes 6 The reason to choose this threshold is that no relation subtype in the ACE RC 2003 corpus has training examples in between 400 and 900. 7 A few minor relation subtypes only have very few examples in the testing set. The reason to choose this threshold is to guarantee a reasonable number of testing examples in the small bin. For the ACE RC 2003 corpus, using 200 as the upper threshold will fill the small bin with about 100 testing examples while using 100 will include too few testing examples for reasonable performance evaluation. 126 of different relation subtypes. 
It shows that the linear discriminative functions learned using the two strategies are very similar (with the cosine similarity 0.98) for the relation subtypes belonging to the large bin while the linear discriminative functions learned using the two strategies are not for the relation subtypes belonging to the medium/small bin with the cosine similarity 0.92/0.81 respectively. This means that the use of the hierarchical strategy over the flat strategy only has very slight change on the linear discriminative functions for those classes with a large amount of annotated examples while its effect on those with a small amount of annotated examples is obvious. This contributes to and explains (the degree of) the performance difference between the two strategies on the different training data sizes as shown in Table 4. Due to the difficulty of building a large annotated corpus, another interesting question is about the learning curve of the hierarchical learning strategy and its comparison with the flat learning strategy. Figure 2 shows the effect of different training data sizes for some major relation subtypes while keeping all the training examples of remaining relation subtypes. It shows that the hierarchical strategy performs much better than the flat strategy when only a small amount of training examples is available. It also shows that the hierarchical strategy can achieve stable performance much faster than the flat strategy. Finally, it shows that the ACE RDC 2003 task suffers from the lack of training examples. Among the three major relation subtypes, only the subtype “Located” achieves steady performance. Finally, we also compare our system with the previous best-reported systems, such as Kambhatla (2004) and Zhou et al (2005). Table 5 shows that our system outperforms the previous best-reported system by 2.7 (58.2 vs. 55.5) in Fmeasure, largely due to the gain in recall. It indicates that, although support vector machines and maximum entropy models always perform better than the simple perceptron algorithm in most (if not all) applications, the hierarchical learning strategy using the perceptron algorithm can easily overcome the difference and outperforms the flat learning strategy using the overwhelming support vector machines and maximum entropy models in relation extraction, at least on the ACE RDC 2003 corpus. Large Bin (0.98) Middle Bin (0.92) Small Bin (0.81) Bin Type(cosine similarity) P R F P R F P R F Flat Strategy 62.3 61.9 62.1 60.8 38.7 47.3 33.0 21.7 26.2 Hierarchical Strategy 66.4 60.2 63.1 67.6 42.7 52.4 40.2 26.3 31.8 Table 4: Comparison of the hierarchical and flat learning strategies on the relation subtypes of different training data sizes. Notes: the figures in the parentheses indicate the cosine similarities between the weight vectors of the linear discriminative functions learned using the two strategies. 
Figure 2: Learning curve of the hierarchical strategy and its comparison with the flat strategy for some major relation subtypes (Note: FS for the flat strategy and HS for the hierarchical strategy). [The figure plots F-measure (10 to 70) against training data size (200 to 2000) for HS and FS on the General-Staff, Part-Of and Located subtypes.]

Table 5: Comparison of our system with other best-reported systems.
  System                                               P     R     F
  Ours: Perceptron Algorithm + Hierarchical Strategy   63.6  53.6  58.2
  Zhou et al (2005): SVM + Flat Strategy               63.1  49.5  55.5
  Kambhatla (2004): Maximum Entropy + Flat Strategy    63.5  45.2  52.8

5 Conclusion
This paper proposes a novel hierarchical learning strategy to deal with the data sparseness problem in relation extraction by modeling the commonality among related classes. For each class in a class hierarchy, a linear discriminative function is determined in a top-down way using the perceptron algorithm, with the lower-level weight vector derived from the upper-level weight vector. In this way, the upper-level discriminative function can effectively guide the lower-level discriminative function learning. Evaluation on the ACE RDC 2003 corpus shows that the hierarchical strategy performs much better than the flat strategy in resolving the critical data sparseness problem in relation extraction.
In future work, we will explore the hierarchical learning strategy using other machine learning approaches besides online classifier learning approaches such as the simple perceptron algorithm applied in this paper. Moreover, just as indicated in Figure 2, most relation subtypes in the ACE RDC 2003 corpus (arguably the largest annotated corpus in relation extraction) suffer from the lack of training examples. Therefore, a critical research direction in relation extraction is how to rely on semi-supervised learning approaches (e.g. bootstrapping) to alleviate the dependency on a large amount of annotated training examples and achieve better and steadier performance. Finally, our current work assumes that NER has been done perfectly. Therefore, it would be interesting to see how imperfect NER affects the performance of relation extraction. This will be done by integrating the relation extraction system with our previously developed NER system as described in Zhou and Su (2002).

References
ACE. (2000-2005). Automatic Content Extraction. http://www.ldc.upenn.edu/Projects/ACE/
Bunescu R. & Mooney R.J. (2005a). A shortest path dependency kernel for relation extraction. HLT/EMNLP'2005: 724-731. 6-8 Oct 2005. Vancouver, B.C.
Bunescu R. & Mooney R.J. (2005b). Subsequence kernels for relation extraction. NIPS'2005. Vancouver, BC, December 2005.
Breiman L. (1996). Bagging predictors. Machine Learning, 24(2): 123-140.
Collins M. (1999). Head-driven statistical models for natural language parsing. Ph.D. Dissertation, University of Pennsylvania.
Culotta A. and Sorensen J. (2004). Dependency tree kernels for relation extraction. ACL'2004. 423-429. 21-26 July 2004. Barcelona, Spain.
Kambhatla N. (2004). Combining lexical, syntactic and semantic features with Maximum Entropy models for extracting relations. ACL'2004 (Poster). 178-181. 21-26 July 2004. Barcelona, Spain.
Miller G.A. (1990). WordNet: An online lexical database. International Journal of Lexicography. 3(4):235-312.
Miller S., Fox H., Ramshaw L. and Weischedel R. (2000). A novel use of statistical parsing to extract information from text. ANLP'2000. 226-233. 29 April - 4 May 2000, Seattle, USA.
MUC-7. (1998).
Proceedings of the 7th Message Understanding Conference (MUC-7). Morgan Kaufmann, San Mateo, CA. Platt J. 1999. Probabilistic Outputs for Support Vector Machines and Comparisions to regularized Likelihood Methods. In Advances in Large Margin Classifiers. Edited by Smola .J., Bartlett P., Scholkopf B. and Schuurmans D. MIT Press. Roth D. and Yih W.T. (2002). Probabilistic reasoning for entities and relation recognition. CoLING’2002. 835-841.26-30 Aug 2002. Taiwan. Zelenko D., Aone C. and Richardella. (2003). Kernel methods for relation extraction. Journal of Machine Learning Research. 3(Feb):1083-1106. Zhang M., Su J., Wang D.M., Zhou G.D. and Tan C.L. (2005). Discovering Relations from a Large Raw Corpus Using Tree Similarity-based Clustering, IJCNLP’2005, Lecture Notes in Computer Science (LNCS 3651). 378-389. 11-16 Oct 2005. Jeju Island, South Korea. Zhao S.B. and Grisman R. 2005. Extracting relations with integrated information using kernel methods. ACL’2005: 419-426. Univ of Michigan-Ann Arbor, USA, 25-30 June 2005. Zhou G.D. and Su Jian. Named Entity Recognition Using a HMM-based Chunk Tagger, ACL’2002. pp473-480. Philadelphia. July 2002. Zhou G.D., Su J. Zhang J. and Zhang M. (2005). Exploring various knowledge in relation extraction. ACL’2005. 427-434. 25-30 June, Ann Arbor, Michgan, USA. 128
Proceedings of the 21st International Conference on Computational Linguistics and 44th Annual Meeting of the ACL, pages 129–136, Sydney, July 2006. c⃝2006 Association for Computational Linguistics Relation Extraction Using Label Propagation Based Semi-supervised Learning Jinxiu Chen1 Donghong Ji1 Chew Lim Tan2 Zhengyu Niu1 1Institute for Infocomm Research 2Department of Computer Science 21 Heng Mui Keng Terrace National University of Singapore 119613 Singapore 117543 Singapore {jinxiu,dhji,zniu}@i2r.a-star.edu.sg [email protected] Abstract Shortage of manually labeled data is an obstacle to supervised relation extraction methods. In this paper we investigate a graph based semi-supervised learning algorithm, a label propagation (LP) algorithm, for relation extraction. It represents labeled and unlabeled examples and their distances as the nodes and the weights of edges of a graph, and tries to obtain a labeling function to satisfy two constraints: 1) it should be fixed on the labeled nodes, 2) it should be smooth on the whole graph. Experiment results on the ACE corpus showed that this LP algorithm achieves better performance than SVM when only very few labeled examples are available, and it also performs better than bootstrapping for the relation extraction task. 1 Introduction Relation extraction is the task of detecting and classifying relationships between two entities from text. Many machine learning methods have been proposed to address this problem, e.g., supervised learning algorithms (Miller et al., 2000; Zelenko et al., 2002; Culotta and Soresen, 2004; Kambhatla, 2004; Zhou et al., 2005), semi-supervised learning algorithms (Brin, 1998; Agichtein and Gravano, 2000; Zhang, 2004), and unsupervised learning algorithms (Hasegawa et al., 2004). Supervised methods for relation extraction perform well on the ACE Data, but they require a large amount of manually labeled relation instances. Unsupervised methods do not need the definition of relation types and manually labeled data, but they cannot detect relations between entity pairs and its result cannot be directly used in many NLP tasks since there is no relation type label attached to each instance in clustering result. Considering both the availability of a large amount of untagged corpora and direct usage of extracted relations, semisupervised learning methods has received great attention. DIPRE (Dual Iterative Pattern Relation Expansion) (Brin, 1998) is a bootstrapping-based system that used a pattern matching system as classifier to exploit the duality between sets of patterns and relations. Snowball (Agichtein and Gravano, 2000) is another system that used bootstrapping techniques for extracting relations from unstructured text. Snowball shares much in common with DIPRE, including the employment of the bootstrapping framework as well as the use of pattern matching to extract new candidate relations. The third system approaches relation classification problem with bootstrapping on top of SVM, proposed by Zhang (2004). This system focuses on the ACE subproblem, RDC, and extracts various lexical and syntactic features for the classification task. However, Zhang (2004)’s method doesn’t actually “detect” relaitons but only performs relation classification between two entities given that they are known to be related. Bootstrapping works by iteratively classifying unlabeled examples and adding confidently classified examples into labeled data using a model learned from augmented labeled data in previous iteration. 
It 129 can be found that the affinity information among unlabeled examples is not fully explored in this bootstrapping process. Recently a promising family of semi-supervised learning algorithm is introduced, which can effectively combine unlabeled data with labeled data in learning process by exploiting manifold structure (cluster structure) in data (Belkin and Niyogi, 2002; Blum and Chawla, 2001; Blum et al., 2004; Zhu and Ghahramani, 2002; Zhu et al., 2003). These graph-based semi-supervised methods usually define a graph where the nodes represent labeled and unlabeled examples in a dataset, and edges (may be weighted) reflect the similarity of examples. Then one wants a labeling function to satisfy two constraints at the same time: 1) it should be close to the given labels on the labeled nodes, and 2) it should be smooth on the whole graph. This can be expressed in a regularization framework where the first term is a loss function, and the second term is a regularizer. These methods differ from traditional semisupervised learning methods in that they use graph structure to smooth the labeling function. To the best of our knowledge, no work has been done on using graph based semi-supervised learning algorithms for relation extraction. Here we investigate a label propagation algorithm (LP) (Zhu and Ghahramani, 2002) for relation extraction task. This algorithm works by representing labeled and unlabeled examples as vertices in a connected graph, then propagating the label information from any vertex to nearby vertices through weighted edges iteratively, finally inferring the labels of unlabeled examples after the propagation process converges. In this paper we focus on the ACE RDC task1. The rest of this paper is organized as follows. Section 2 presents related work. Section 3 formulates relation extraction problem in the context of semisupervised learning and describes our proposed approach. Then we provide experimental results of our proposed method and compare with a popular supervised learning algorithm (SVM) and bootstrapping algorithm in Section 4. Finally we conclude our work in section 5. 1 http://www.ldc.upenn.edu/Projects/ACE/, Three tasks of ACE program: Entity Detection and Tracking (EDT), Relation Detection and Characterization (RDC), and Event Detection and Characterization (EDC) 2 The Proposed Method 2.1 Problem Definition The problem of relation extraction is to assign an appropriate relation type to an occurrence of two entity pairs in a given context. It can be represented as follows: R →(Cpre, e1, Cmid, e2, Cpost) (1) where e1 and e2 denote the entity mentions, and Cpre,Cmid,and Cpost are the contexts before, between and after the entity mention pairs. In this paper, we set the mid-context window as the words between the two entity mentions and the pre- and postcontext as up to two words before and after the corresponding entity mention. Let X = {xi}n i=1 be a set of contexts of occurrences of all the entity mention pairs, where xi represents the contexts of the i-th occurrence, and n is the total number of occurrences. The first l examples (or contexts) are labeled as yg ( yg ∈{rj}R j=1, rj denotes relation type and R is the total number of relation types). The remaining u(u = n −l) examples are unlabeled. Intuitively, if two occurrences of entity mention pairs have the similarity context, they tend to hold the same relation type. 
Based on the assumption, we define a graph where the vertices represent the contexts of labeled and unlabeled occurrences of entity mention pairs, and the edge between any two vertices xi and xj is weighted so that the closer the vertices in some distance measure, the larger the weight associated with this edge. Hence, the weights are defined as follows: Wij = exp(−s2 ij α2 ) (2) where sij is the similarity between xi and xj calculated by some similarity measures, e.g., cosine similarity, and α is used to scale the weights. In this paper, we set α as the average similarity between labeled examples from different classes. 2.2 A Label Propagation Algorithm In the LP algorithm, the label information of any vertex in a graph is propagated to nearby vertices through weighted edges until a global stable stage is achieved. Larger edge weights allow labels to travel 130 through easier. Thus the closer the examples are, the more likely they have similar labels. We define soft label as a vector that is a probabilistic distribution over all the classes. In the label propagation process, the soft label of each initial labeled example is clamped in each iteration to replenish label sources from these labeled data. Thus the labeled data act like sources to push out labels through unlabeled data. With this push from labeled examples, the class boundaries will be pushed through edges with large weights and settle in gaps along edges with small weights. Hopefully, the values of Wij across different classes would be as small as possible and the values of Wij within the same class would be as large as possible. This will make label propagation to stay within the same class. This label propagation process will make the labeling function smooth on the graph. Define an n × n probabilistic transition matrix T Tij = P(j →i) = wij Pn k=1 wkj (3) where Tij is the probability to jump from vertex xj to vertex xi. We define a n × R label matrix Y , where Yij representing the probabilities of vertex yi to have the label rj. Then the label propagation algorithm consists the following main steps: Step1 : Initialization • Set the iteration index t = 0; • Let Y 0 be the initial soft labels attached to each vertex, where Y 0 ij = 1 if yi is label rj and 0 otherwise. • Let Y 0 L be the top l rows of Y 0 and Y 0 U be the remaining u rows. Y 0 L is consistent with the labeling in labeled data and the initialization of Y 0 U can be arbitrary. Step 2 : Propagate the labels of any vertex to nearby vertices by Y t+1 = TY t , where T is the row-normalized matrix of T, i.e. Tij = Tij/ P k Tik, which can maintain the class probability interpretation. Step 3 : Clamp the labeled data, that is, replace the top l row of Y t+1 with Y 0 L. Step 4 : Repeat from step 2 until Y converges. Step 5 : Assign xh(l + 1 ≤h ≤n) with a label: yh = argmaxjYhj. The above algorithm can ensure that the labeled data YL never changes since it is clamped in Step 3. Actually we are interested in only YU. This algorithm has been shown to converge to a unique solution ˆYU = limt→∞Y t U = (I −¯Tuu)−1 ¯TulY 0 L (Zhu and Ghahramani, 2002). Here, ¯Tuu and ¯Tul are acquired by splitting matrix ¯T after the l-th row and the l-th column into 4 sub-matrices. And I is u × u identity matrix. We can see that the initialization of Y 0 U in this solution is not important, since Y 0 U does not affect the estimation of ˆYU. 
3 Experiments and Results 3.1 Feature Set Following (Zhang, 2004), we used lexical and syntactic features in the contexts of entity pairs, which are extracted and computed from the parse trees derived from Charniak Parser (Charniak, 1999) and the Chunklink script 2 written by Sabine Buchholz from Tilburg University. Words: Surface tokens of the two entities and words in the three contexts. Entity Type: the entity type of both entity mentions, which can be PERSON, ORGANIZATION, FACILITY, LOCATION and GPE. POS features: Part-Of-Speech tags corresponding to all tokens in the two entities and words in the three contexts. Chunking features: This category of features are extracted from the chunklink representation, which includes: • Chunk tag information of the two entities and words in the three contexts. The “0” tag means that the word is not in any chunk. The “I-XP” tag means that this word is inside an XP chunk. The “B-XP” by default means that the word is at the beginning of an XP chunk. • Grammatical function of the two entities and words in the three contexts. The 2Software available at http://ilk.uvt.nl/∼sabine/chunklink/ 131 last word in each chunk is its head, and the function of the head is the function of the whole chunk. “NP-SBJ” means a NP chunk as the subject of the sentence. The other words in a chunk that are not the head have “NOFUNC” as their function. • IOB-chains of the heads of the two entities. So-called IOB-chain, noting the syntactic categories of all the constituents on the path from the root node to this leaf node of tree. The position information is also specified in the description of each feature above. For example, word features with position information include: 1) WE1 (WE2): all words in e1 (e2) 2) WHE1 (WHE2): head word of e1 (e2) 3) WMNULL: no words in Cmid 4) WMFL: the only word in Cmid 5) WMF, WML, WM2, WM3, ...: first word, last word, second word, third word, ...in Cmid when at least two words in Cmid 6) WEL1, WEL2, ...: first word, second word, ... before e1 7) WER1, WER2, ...: first word, second word, ... after e2 We combine the above lexical and syntactic features with their position information in the contexts to form context vectors. Before that, we filter out low frequency features which appeared only once in the dataset. 3.2 Similarity Measures The similarity sij between two occurrences of entity pairs is important to the performance of the LP algorithm. In this paper, we investigated two similarity measures, cosine similarity measure and JensenShannon (JS) divergence (Lin, 1991). Cosine similarity is commonly used semantic distance, which measures the angle between two feature vectors. JS divergence has ever been used as distance measure for document clustering, which outperforms cosine similarity based document clustering (Slonim et al., 2002). JS divergence measures the distance between two probability distributions if feature vector is considered as probability distribution over features. JS divergence is defined as follows: Table 1: Frequency of Relation SubTypes in the ACE training and devtest corpus. 
3.3 Experimental Evaluation

3.3.1 Experiment Setup

We evaluated this label propagation based relation extraction method on the relation subtype detection and characterization task of the official ACE 2003 corpus. The corpus contains 519 files from sources including broadcast, newswire and newspaper. We dealt only with intra-sentence explicit relations and assumed that all entities had been detected beforehand in the EDT sub-task of ACE. Table 1 lists the types and subtypes of relations for the ACE Relation Detection and Characterization (RDC) task, along with their frequency of occurrence in the ACE training set and test set. We constructed labeled data by randomly sampling some examples from the ACE training data and additionally sampling the same number of examples from the pool of unrelated entity pairs for the "NONE" class. We used the remaining examples in the ACE training set and the whole ACE test set as unlabeled data. The test set was used for the final evaluation.

Table 1: Frequency of relation subtypes in the ACE training and devtest corpus.

  Type   SubType              Training   Devtest
  ROLE   General-Staff           550       149
         Management              677       122
         Citizen-Of              127        24
         Founder                  11         5
         Owner                   146        15
         Affiliate-Partner       111        15
         Member                  460       145
         Client                   67        13
         Other                    15         7
  PART   Part-Of                 490       103
         Subsidiary               85        19
         Other                     2         1
  AT     Located                 975       192
         Based-In                187        64
         Residence               154        54
  SOC    Other-Professional      195        25
         Other-Personal           60        10
         Parent                   68        24
         Spouse                   21         4
         Associate                49         7
         Other-Relative           23        10
         Sibling                   7         4
         GrandParent               6         1
  NEAR   Relative-Location        88        32

Table 2: The performance of SVM and LP with different sizes of labeled data for relation detection on relation subtypes. The LP algorithm is run with two similarity measures: cosine similarity and JS divergence.

                    SVM                LP_Cosine           LP_JS
  Percentage    P     R     F       P     R     F       P     R     F
  1%           35.9  32.6  34.4    58.3  56.1  57.1    58.5  58.7  58.5
  10%          51.3  41.5  45.9    64.5  57.5  60.7    64.6  62.0  63.2
  25%          67.1  52.9  59.1    68.7  59.0  63.4    68.9  63.7  66.1
  50%          74.0  57.8  64.9    69.9  61.8  65.6    70.1  64.1  66.9
  75%          77.6  59.4  67.2    71.8  63.4  67.3    72.4  64.8  68.3
  100%         79.8  62.9  70.3    73.9  66.9  70.2    74.2  68.2  71.1

Table 3: The performance of SVM and LP with different sizes of labeled data for relation detection and classification on relation subtypes. The LP algorithm is run with two similarity measures: cosine similarity and JS divergence.

                    SVM                LP_Cosine           LP_JS
  Percentage    P     R     F       P     R     F       P     R     F
  1%           31.6  26.1  28.6    39.6  37.5  38.5    40.1  38.0  39.0
  10%          39.1  32.7  35.6    45.9  39.6  42.5    46.2  41.6  43.7
  25%          49.8  35.0  41.1    51.0  44.5  47.3    52.3  46.0  48.9
  50%          52.5  41.3  46.2    54.1  48.6  51.2    54.9  50.8  52.7
  75%          58.7  46.7  52.0    56.0  52.0  53.9    56.1  52.6  54.3
  100%         60.8  48.9  54.2    56.2  52.3  54.1    56.3  52.9  54.6

3.3.2 LP vs. SVM

Support Vector Machines (SVM) are a state-of-the-art technique for the relation extraction task. In this experiment we use the LIBSVM tool (a library for support vector machines, available at http://www.csie.ntu.edu.tw/~cjlin/libsvm) with a linear kernel function. For the comparison between SVM and LP, we ran SVM and LP with different sizes of labeled data and evaluated their performance on the unlabeled data using precision, recall and F-measure. First, we ran the SVM or LP algorithm to detect possible relations among the unlabeled data: if an entity mention pair is classified not into the "NONE" class but into one of the other 24 subtype classes, then it has a relation.
We then constructed labeled datasets with different sampling sizes l: 1%×N_train, 10%×N_train, 25%×N_train, 50%×N_train, 75%×N_train and 100%×N_train, where N_train is the number of examples in the ACE training set. If any relation subtype was absent from a sampled labeled set, we redid the sampling. For each size we performed 20 trials and report average scores on the test set over these 20 random trials.

Table 2 reports the performance of SVM and LP with different sizes of labeled data on the relation detection task. We used the same sampled labeled data for LP as the training data for the SVM model. From Table 2 we see that both LP_Cosine and LP_JS achieve higher Recall than SVM. Specifically, with a small labeled dataset (percentage of labeled data ≤ 25%), the performance improvement by LP is significant. When the percentage of labeled data increases from 50% to 100%, LP_Cosine is still comparable to SVM in F-measure, while LP_JS achieves slightly better F-measure than SVM. Moreover, LP_JS consistently outperforms LP_Cosine.

Table 3 reports the performance of relation classification using SVM and LP with different sizes of labeled data; the reported figures are the average values of Precision, Recall and F-measure over the major relation subtypes. From Table 3 we see that LP_Cosine and LP_JS outperform SVM in F-measure in almost all settings of labeled data, which is due to the increase in Recall. With a smaller labeled dataset (percentage of labeled data ≤ 50%), the gap between LP and SVM is larger. When the percentage of labeled data increases from 75% to 100%, the performance of the LP algorithm is still comparable to SVM. Again, the LP algorithm based on JS divergence consistently outperforms the LP algorithm based on cosine similarity. Figure 1 visualizes the accuracy of the three algorithms: the gap between the SVM curve and the LP_JS curve is large when the percentage of labeled data is relatively low.

Figure 1: Comparison of the performance of SVM and LP with different sizes of labeled data (F-measure against the percentage of labeled examples, for SVM, LP_Cosine and LP_JS).

3.3.3 An Example

For Figure 2, we selected 25 instances from the training set and 15 instances from the test set of the ACE corpus, covering five relation types. Using the Isomap tool (available at http://isomap.stanford.edu/), the 40 instances with 229 feature dimensions are visualized in a two-dimensional space. We randomly sampled only one labeled example for each relation type from the 25 training examples as labeled data. Figures 2(a) and 2(b) show the initial state and the ground-truth result, respectively. Figure 2(c) reports the classification result on the test set by SVM (accuracy = 4/15 = 26.7%), and Figure 2(d) gives the classification result on both training and test sets by LP (accuracy = 11/15 = 73.3%). Comparing Figure 2(b) and Figure 2(c), we find that many examples are misclassified from class ⋄ into other classes. This may be because the SVM method ignores the intrinsic structure in the data. In Figure 2(d), the labels of unlabeled examples are determined not only by nearby labeled examples but also by nearby unlabeled examples, so the LP strategy achieves better performance than the local-consistency-based SVM strategy when the size of the labeled data is quite small.

Figure 2: An example: comparison of the SVM and LP algorithms on a data set from the ACE corpus. ◦ and △ denote the unlabeled examples in the training set and test set respectively, and the other symbols (⋄, ×, □, + and ▽) represent the labeled examples of the respective relation types sampled from the training set.
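The small-seed behaviour illustrated in Figure 2 can be reproduced in spirit with off-the-shelf implementations. The sketch below assumes a feature matrix X, gold labels y and index arrays for the seed and test examples; it uses scikit-learn's LabelPropagation (which follows Zhu and Ghahramani (2002)) rather than the exact setup of the paper.

```python
import numpy as np
from sklearn.svm import SVC
from sklearn.semi_supervised import LabelPropagation

def few_seed_comparison(X, y, seed_idx, test_idx, gamma=20.0):
    """Accuracy of a linear SVM vs. graph-based label propagation when only
    the examples in seed_idx are labeled; test_idx indexes held-out points."""
    svm = SVC(kernel="linear").fit(X[seed_idx], y[seed_idx])
    svm_acc = float((svm.predict(X[test_idx]) == y[test_idx]).mean())

    y_semi = np.full(len(y), -1, dtype=int)     # -1 marks unlabeled examples
    y_semi[seed_idx] = y[seed_idx]
    lp = LabelPropagation(kernel="rbf", gamma=gamma).fit(X, y_semi)
    lp_acc = float((lp.transduction_[test_idx] == y[test_idx]).mean())
    return svm_acc, lp_acc
```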
3.3.4 LP vs. Bootstrapping

Zhang (2004) performs relation classification on the ACE corpus with bootstrapping on top of SVM. To compare with that bootstrapped SVM algorithm, we used the same feature stream setting and randomly selected 100 instances from the training data as the initial labeled data. Table 4 lists the performance of the bootstrapped SVM method from (Zhang, 2004) and of the LP method with 100 seed labeled examples on the relation type classification task. The LP algorithm outperforms the bootstrapped SVM algorithm on four of the relation type classification tasks and performs comparably on the "SOC" relation.

Table 4: Comparison of the performance of the bootstrapped SVM method from (Zhang, 2004) and the LP method with 100 seed labeled examples for the relation type classification task.

                    Bootstrapping          LP_JS
  Relation type    P     R     F        P     R     F
  ROLE            78.5  69.7  73.8     81.0  74.7  77.7
  PART            65.6  34.1  44.9     70.1  41.6  52.2
  AT              61.0  84.8  70.9     74.2  79.1  76.6
  SOC             47.0  57.4  51.7     45.0  59.1  51.0
  NEAR             −     −     −       13.7  12.5  13.0

4 Discussion

In this paper we have investigated a graph-based semi-supervised learning approach to the relation extraction problem. Experimental results showed that the LP algorithm performs better than SVM and bootstrapping. We draw some findings from these results.

The LP-based relation extraction method can use the graph structure to smooth the labels of unlabeled examples. Therefore, the labels of unlabeled examples are determined not only by nearby labeled examples, but also by nearby unlabeled examples. For supervised methods such as SVM, very few labeled examples are not enough to reveal the structure of each class, so they cannot perform well: the classification hyperplane is learned only from the few labeled data, and the coherent structure in the unlabeled data is not exploited when inferring the class boundary. Hence, our LP-based semi-supervised method achieves better performance on both relation detection and classification when only a few labeled examples are available.

Currently, most work on the RDC task of ACE has focused on supervised learning methods (Culotta and Soresen, 2004; Kambhatla, 2004; Zhou et al., 2005). Table 5 lists a comparison of these methods on relation detection and classification.

Table 5: Comparison of the performance of previous methods on the ACE RDC task.

                                                      Relation         Detection and        Detection and
                                                      detection        classification       classification
                                                                       on types             on subtypes
  Method                                             P     R     F    P     R     F        P     R     F
  Culotta and Soresen (2004), tree kernel based     81.2  51.8  63.2  67.1  35.0  45.8      −     −     −
  Kambhatla (2004), feature based, Maximum Entropy   −     −     −     −     −     −       63.5  45.2  52.8
  Zhou et al. (2005), feature based, SVM            84.8  66.7  74.7  77.2  60.7  68.0     63.1  49.5  55.5

Zhou et al. (2005) reported the best result, 63.1%/49.5%/55.5% in Precision/Recall/F-measure on relation subtype classification, using a feature-based method, which outperforms the tree-kernel-based method of Culotta and Soresen (2004). Compared with Zhou et al.'s method, the performance of the LP algorithm is slightly lower; this may be because we used a much simpler feature set. Our work in this paper focuses on the investigation of a graph-based semi-supervised learning algorithm for relation extraction.
In the future, we would like to use more effective feature sets Zhou et al. (2005) or kernel based similarity measure with LP for relation extraction. 5 Conclusion and Future Work This paper approaches the problem of semisupervised relation extraction using a label propagation algorithm. It represents labeled and unlabeled examples and their distances as the nodes and the weights of edges of a graph, and tries to obtain a labeling function to satisfy two constraints: 1) it should be fixed on the labeled nodes, 2) it should be smooth on the whole graph. In the classification process, the labels of unlabeled examples are determined not only by nearby labeled examples, but also by nearby unlabeled examples. Our experimental results demonstrated that this graph based algorithm can achieve better performance than SVM when only very few labeled examples are available, and also outperforms the bootstrapping method for relation extraction task. In the future, we would like to investigate more effective feature set or use feature selection to improve the performance of this graph-based semisupervised relation extraction method. 135 References Agichtein E. and Gravano L.. 2000. Snowball: Extracting Relations from large Plain-Text Collections, In Proceedings of the 5th ACM International Conference on Digital Libraries (ACMDL’00). Belkin M. and Niyogi P.. 2002. Using Manifold Structure for Partially Labeled Classification. Advances in Neural Infomation Processing Systems 15. Blum A. and Chawla S. 2001. Learning from Labeled and Unlabeled Data Using Graph Mincuts. In Proceedings of the 18th International Conference on Machine Learning. Blum A., Lafferty J., Rwebangira R. and Reddy R. 2004. Semi-Supervised Learning Using Randomized Mincuts. In Proceedings of the 21th International Conference on Machine Learning.. Brin Sergey. 1998. Extracting patterns and relations from world wide web. In Proceedings of WebDB Workshop at 6th International Conference on Extending Database Technology (WebDB’98). pages 172-183. Charniak E. 1999. A Maximum-entropy-inspired parser. Technical Report CS-99-12. Computer Science Department, Brown University. Culotta A. and Soresen J. 2004. Dependency tree kernels for relation extraction, In Proceedings of 42th Annual Meeting of the Association for Computational Linguistics. 21-26 July 2004. Barcelona, Spain. Hasegawa T., Sekine S. and Grishman R. 2004. Discovering Relations among Named Entities from Large Corpora, In Proceeding of Conference ACL2004. Barcelona, Spain. Kambhatla N. 2004. Combining lexical, syntactic and semantic features with Maximum Entropy Models for extracting relations, In Proceedings of 42th Annual Meeting of the Association for Computational Linguistics.. 21-26 July 2004. Barcelona, Spain. Lin J. 1991. Divergence Measures Based on the Shannon Entropy. IEEE Transactions on Information Theory. Vol 37, No.1, 145-150. Miller S.,Fox H.,Ramshaw L. and Weischedel R. 2000. A novel use of statistical parsing to extract information from text. In Proceedings of 6th Applied Natural Language Processing Conference 29 April-4 may 2000, Seattle USA. Slonim, N., Friedman, N., and Tishby, N. 2002. Unsupervised Document Classification Using Sequential Information Maximization. In Proceedings of the 25th Annual International ACM SIGIR Conference on Research and Development in Information Retrieval. Yarowsky D. 1995. Unsupervised Word Sense Disambiguation Rivaling Supervised Methods. In Proceedings of the 33rd Annual Meeting of the Association for Computational Linguistics. 
pp.189-196. Zelenko D., Aone C. and Richardella A. 2002. Kernel Methods for Relation Extraction, Proceedings of the Conference on Empirical Methods in Natural Language Processing (EMNLP). Philadelphia. Zhang Zhu. 2004. Weakly-supervised relation classification for Information Extraction, In Proceedings of ACM 13th conference on Information and Knowledge Management (CIKM’2004). 8-13 Nov 2004. Washington D.C.,USA. Zhou GuoDong, Su Jian, Zhang Jie and Zhang min. 2005. Exploring Various Knowledge in Relation Extraction. In Proceedings of 43th Annual Meeting of the Association for Computational Linguistics. USA. Zhu Xiaojin and Ghahramani Zoubin. 2002. Learning from Labeled and Unlabeled Data with Label Propagation. CMU CALD tech report CMU-CALD-02-107. Zhu Xiaojin, Ghahramani Zoubin, and Lafferty J. 2003. Semi-Supervised Learning Using Gaussian Fields and Harmonic Functions. In Proceedings of the 20th International Conference on Machine Learning. 136
Proceedings of the 21st International Conference on Computational Linguistics and 44th Annual Meeting of the ACL, pages 137–144, Sydney, July 2006. c⃝2006 Association for Computational Linguistics Polarized Unification Grammars Sylvain Kahane Modyco, Université Paris 10 [email protected] Abstract This paper proposes a generic mathematical formalism for the combination of various structures: strings, trees, dags, graphs and products of them. The polarization of the objects of the elementary structures controls the saturation of the final structure. This formalism is both elementary and powerful enough to strongly simulate many grammar formalisms, such as rewriting systems, dependency grammars, TAG, HPSG and LFG. 1 Introduction Our aim is to propose a generic formalism as simple as possible but powerful enough to write real grammars for natural language and to handle complex linguistic structures. The formalism we propose can strongly simulate most rule-based formalisms used in linguistics.1 Language utterances are both strongly structured and compositional and the structure of a complex utterance can be obtained by combining elementary structures associated to the elementary units of language.2 The most simple way to 1 A formalism A strongly simulates a formalism B if A has a better strong generative capacity than B, that is, if A can generate the languages generated by B with the same structures associated to the utterances of these languages. 2 Whether a natural language utterance contains one or several structures depends on our point of view. On the one hand it is clear that a sentence can receive various structures according to the semantic, syntactic, morphological or phonological point of view. On the other hand these different structures are not independent from each other and even if they are not structures on the same objects (for instance the semantic units do not correspond one to one to the syntactic units, that is the words) there are links between the different objects of these structures. In other words, considering separately the different simple structures of the sentence does not take into account the whole structure of the sentence, because we lost the interrelation between structures of different levels. combine two structures A and B is unification, that is, to build a new structure C by partially superimposing A and B and identifying a part of the objects of A with those of B. This idea recalls an old idea, used by Jespersen (1924), Tesnière (1934) or Ajduckiewicz (1935): the sentence is like a molecule whose words are atoms, each word bearing a valence (a linguistic term explicitly borrowed from chemistry) that forces or allows it to meet some other words. Nevertheless, unification grammars cannot directly take into account the fact that some linguistic units are unsaturated in a sense that they must absolutely combine with other structures to form a stable unit. Saturation is ensured by additional mechanisms, such as the distinction of terminal and non-terminal symbols in rewriting systems or by requiring some features to have an empty list as a value in HPSG. This paper presents a new family of formalisms, Polarized Unification Grammars (PUGs). PUGs extend Unification Grammars with an explicit control of the saturation of structures by attributing a polarity to each object. Using polarities allows integrating the treatment of saturation in the formalism of the rules. 
Thus the processing of saturation will pilot the combination of structures during the generation processing. Some polarities are neutral, others are not, but a final structure must be completely neutral. Two nonneutral objects can unify (that is, identify) and form a neutral object (that is, neutralizing each other). Proper unification holds no equivalent. Polarization takes its source in categorial grammar and subsequent works on resourcesensitive logic (see Lambek’s, Girard’s or van Benthem’s works). Nasr (1995) is among the first to introduce a rule-based formalism using an explicit polarization of structures. Duchier & Thater (1999) propose a formalism for tree description where they put forward the notion of polarity (and they uses the terms of polarity and neutralization). Perrier (2000) is probably the 137 first to develop a linguistic formalism entirely based on these ideas, the Interaction Grammar. PUG is both an elementary formalism (structures simply combine by identifying objects) and a powerful formalism, equivalent to Turing machines and capable of handling strings, trees, dags, n-graphs and products of such structures (such as ordered trees).3 But, above all, PUG is a well-adapted formalism for writing grammars and it is capable of strongly simulating many classic formalisms. Part 2 presents the general framework of PUG and its system of polarities. Part 3 proposes several examples of PUG and the translation in PUG of rewriting grammars, TAG, HPSG and LFG. We hope that these translations shed light on some common features of these formalisms. 2 Polarities and unification 2.1 Polarized Unification Grammars Polarized Unification Grammars generate sets of finite structures. A structure is based on objects. For instance, for a (directed) graph, objects are nodes and edges. These two types of objects are linked, giving us the proper structure: if X is the set of nodes and U, the set of edges, the graph is defined by two maps π1 and π2 from U into X which associate an edge with its source and its target. Our structures are polarized, that is, objects are associated to polarities. The set P of polarities is provided with an operation noted “.” and called product. The product is commutative and generally associative; (P, . ) is called the system of polarities. A non-empty strict subset N of P contains the neutral polarities. A polarized structure is neutral if all its polarities are neutral. Structures are defined on a collection of objects of various types (syntactic nodes, semantic nodes, syntactic edges …) and a collection of maps: structural maps linking objects to objects (such as source and target for edges), label maps linking objects to labels and polarity maps linking objects to polarities. Structures combine by unification. The unification of two structures A and B gives a new structure A⊕B obtained by “pasting” together these structures and identifying a part of the objects of the first structure with objects of the second structure. When two polarized structures A 3 A dag is a directed acyclic graph. An n-graph is a graph whose nodes are edges of a (n-1)-graph and a 1-graph is a standard graph. and B are unified, the polarity of an object of A⊕B obtained by identifying two objects of A and B is the product of their polarities; if the two objects bear the same map, these maps must be identified and their values, unified. (For instance identifying two edges forces us to identify their sources and targets.) 
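Section 2.2 below instantiates the polarities and their product concretely. As a quick illustration of how such a commutative product with a failure value might be represented, consider the following sketch (the names are ours; impossibility of combination is encoded as the absence of an entry).

```python
# The five polarities of Section 2.2, written out as words: black = saturated,
# white = obligatory context, "-" = need, "+" = resource, grey = absolutely neutral.
BLACK, WHITE, MINUS, PLUS, GREY = "black", "white", "-", "+", "grey"
NEUTRAL = {BLACK, GREY}

# Commutative product; pairs that are missing cannot be combined.
_PRODUCT = {
    frozenset([GREY]):         GREY,    # grey . grey
    frozenset([GREY, WHITE]):  WHITE,   # grey is the neutral element
    frozenset([GREY, MINUS]):  MINUS,
    frozenset([GREY, PLUS]):   PLUS,
    frozenset([GREY, BLACK]):  BLACK,
    frozenset([WHITE]):        WHITE,   # white . white
    frozenset([WHITE, MINUS]): MINUS,
    frozenset([WHITE, PLUS]):  PLUS,
    frozenset([WHITE, BLACK]): BLACK,
    frozenset([MINUS, PLUS]):  BLACK,   # a need meets its resource: saturated
}

def product(p, q):
    """Product of two polarities; None encodes the impossibility of combining."""
    return _PRODUCT.get(frozenset([p, q]))

def is_neutral(polarities):
    """A final structure is acceptable only if all its objects are neutral."""
    return all(p in NEUTRAL for p in polarities)
```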
A Polarized Unification Grammar (PUG) is defined by a finite family T of types of objects, a set of maps attached to the objects of each type, a system (P, .) of polarities, a subset N of P of neutral polarities, and a finite set of elementary polarized structures whose objects are described by T; one elementary structure is marked as the initial one and must be used exactly once. The structures generated by the grammar are the neutral structures obtained by combining the initial structure with a finite number of elementary structures. In the derivation process, elementary structures combine successively, each new elementary structure combining with at least one object of the previous result; this ensures that the derived structure is continuous. Polarities are only needed to control saturation and are not considered when the strong generative capacity of the grammar is estimated. Polarities belong to the declarative part of the grammar, but they mainly play a role in the processing of the grammar.

2.2 The system of polarities

In this paper we use the system of polarities P = {black, white, –, +, grey}, where black (■) is saturated, + is positive, – is negative, white (□) is an obligatory context and grey is absolutely neutral, with N = {black, grey}, and a product defined by the following array (where ⊥ represents the impossibility of combining). Note that grey is the neutral element of the product. The symbol – can be interpreted as a need and + as the corresponding resource.

    ·    |  grey   white   –       +       black
  grey   |  grey   white   –       +       black
  white  |  white  white   –       +       black
  –      |  –      –       ⊥       black   ⊥
  +      |  +      +       black   ⊥       ⊥
  black  |  black  black   ⊥       ⊥       ⊥

The system {white, black} is used by Nasr (1995), while the four-valued system {■, ■, –, +}, noted {=, ↔, ←, →}, is considered by Bonfante et al. (2004), who show the advantages of negative and positive polarities for prefiltering in parsing: a set of structures bearing negative and positive polarities can only be reduced to a neutral structure if, for each object type, the sum of negative polarities equals the sum of positive polarities.

The system (P, .) we have presented is commutative and associative. Commutativity implies that the combination of two structures is not procedurally oriented (we can begin a derivation with any elementary structure, provided we use the initial structure exactly once). Associativity implies that the combination of structures is unordered: if an object B must combine with A and C, there is no precedence order between the combination of A and B and that of B and C, that is, A⊕(B⊕C) = (A⊕B)⊕C.

If we leave polarities aside, our formalism is trivially monotonic: the combination of two structures A and B by a PUG gives a structure A⊕B that contains A and B as substructures. We can add a (partial) order on P in order to make the formalism monotonic.4 Let ≤ be this order. In order to give us a monotonic formalism, ≤ must verify the following monotonicity property: ∀x,y∈P, x.y ≥ x. This yields the following order: grey < white < +/– < black. A PUG built with an ordered system of polarities (P, ., ≤) verifying the monotonicity property is monotonic. Monotonicity implies good computational properties; for instance, it allows translating parsing with PUG into a problem of constraint resolution (Duchier & Thater, 1999).

3 Examples of PUGs

3.1 Tree grammars

The first tree grammars belonging to the paradigm of PUGs were proposed by Nasr (1995).
The following grammar G1 allows generating all finite trees (a tree is a connected directed graph such that every node except one is the target of at most one edge); objects are nodes and edges; the initial structure (the box on the left) is reduced to a black node; the grammar has only one other elementary structure, which is composed of a black edge linking a white node to a black node. Each white node must unify with a black node in order to be neutralized and each black node can unify with whatever number of white nodes. It can easily be verified that the structures generated by the grammar are trees, because every node has one and only one governor, except the node introduced by the initial structure, which is the root of the tree. 4 I was suggested this idea by Guy Perrier. G1 G2 The grammar G1 does not control the number of dependents of nodes. A grammar like G2 allows controlling the valence of each node, but it does not ensure that generated structures are trees, because two white nodes can unify and a node can have more than one governor.5 The grammar G3 solves the problem. In fact, G3 can be viewed as the superimposition of G1 and G2. Indeed, if P0 = {□,■}, P1 = P0×P0 = {(□,□),(□,■),(■,□),(■,■)} is equivalent to {□,+,– ,■}. The first polarity controls the tree structure as G1 does, while the second polarity controls the valence as G2 does. G3 With the same principles, one can build a dependency grammar generating the syntactic dependency trees of a fragment of natural language. Grammar G4, directly inspired from Nasr 1995, proposes a fragment of grammar for English generating the syntactic tree of Peter eats red beans. Nodes of this grammar are labeled by two label maps, /cat/ and /lex/. Note that the root of G4 (Dependency grammar for English) 5 Nasr 1995 proposes such a grammar in order to generate trees. He uses an external requirement, which forces, when two structures are combined, the root of one to combine with a node of the other one. subj dobj cat: V lex: eat cat: V cat: N cat: Adj lex: red cat: N cat: N cat: N lex: Peter mod cat: N lex: beans 139 a b c the elementary structure of an adjective is a white node, allowing an unlimited number of such structures to adjoin to a noun. 3.2 Rewriting systems and ordered trees PUG can simulate any rewriting system and have the weak generative capacity of Turing machines. We follow ideas developed by Burroni 1993 or Dymetman 1999, themselves following van Kampen 1933’s ideas. A sequence abc is represented by a string of labeled edges a, b and c: Intuitively, edges are intervals and nodes model their extremities. This is the simplest way to model linear order and precedence rules: X precedes Y iff the end of X is the beginning of Y. The initial category S of the grammar gives us the initial structure: A terminal symbol a corresponds to a positive edge: A rewriting rule ABC → DE gives us the elementary structure: This elementary structure is a “cell” whose upper frontier is a string of positive edges corresponding to the left part of the rule, while the lower frontier is a string of negative edges corresponding to the right part of the rule. Each positive edge must unify with a negative edge and vice versa, in order to give a black edge. Nodes are grey (= absolutely neutral) and their unification is entirely driven by the unification of edges. Cells will unify with each other to give a final structure representing the derivation structure of a sequence, which is the lower edge of this structure. 
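As a data-level illustration of this encoding (not of the unification machinery itself), the sketch below builds the polarized edges of a cell for a rewriting rule, of a terminal, and of the initial structure; the initial structure is not spelled out in the text above, so we assume here that it is a single negative edge labeled with the axiom, and all names are ours.

```python
from dataclasses import dataclass
from itertools import count

_fresh = count()  # generator of fresh grey nodes (interval extremities)

@dataclass
class Edge:
    label: str      # grammar symbol carried by the edge
    polarity: str   # "+", "-", or "black" once two edges have been identified
    src: int        # grey node: beginning of the spanned interval
    tgt: int        # grey node: end of the spanned interval

def cell_for_rule(lhs, rhs):
    """Elementary structure ('cell') for a rule lhs -> rhs: a positive upper
    frontier spelling out lhs and a negative lower frontier spelling out rhs,
    sharing their first and last nodes."""
    start, end = next(_fresh), next(_fresh)
    def frontier(symbols, polarity):
        nodes = [start] + [next(_fresh) for _ in symbols[:-1]] + [end]
        return [Edge(s, polarity, nodes[i], nodes[i + 1])
                for i, s in enumerate(symbols)]
    return frontier(lhs, "+") + frontier(rhs, "-")

def terminal(symbol):
    """A terminal symbol corresponds to a single positive edge."""
    a, b = next(_fresh), next(_fresh)
    return [Edge(symbol, "+", a, b)]

def initial_structure(axiom="S"):
    # Assumption: one negative edge labeled with the axiom, to be neutralized
    # by the positive upper frontier of the cell of an axiom rule.
    a, b = next(_fresh), next(_fresh)
    return [Edge(axiom, "-", a, b)]

# Example: the context-free rule S -> NP VP yields cell_for_rule(["S"], ["NP", "VP"]).
```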
The next figure shows the derivation structure of the sequence Peter eats red beans with a standard phrase structure grammar, which can be reconstructed by the reader. In such a representation, edges represent phrases and correspond to intervals in the cutting of the sequence, while nodes are bounds of these intervals. For a context-free rewriting system, the grammar generates the derivation tree, which can be represented in a more traditional way by adding the branches of the tree (giving us a 2-graph). Let us recall that a derivation tree for a contextfree grammar is an ordered tree. An ordered tree combines two structures on the same set of nodes: a structure of tree and a precedence relation on the node of the tree. Here the precedence relation is explicitly represented (a “node” of the tree precedes another “node” if the target of the first one is the source of the second one). It is not possible, with a PUG, to generate the derivation tree, including the precedence relation, in a simpler way.6 Note that the usual representation of ordered trees (where the precedence relation is not explicit, but only deductible from the planarity of the representation) is very misleading from the computational viewpoint. When they calculate the precedence relation, parsers (of the CKY type for instance) in fact calculate a data structure like the one we present here, where beginnings and ends of phrase are explicitly considered as objects. 3.3 TAG (Tree Adjoining Grammar) PUG has a clear kinship with TAG, which is the first formalism based on combination of structures to be studied at length. TAGs are generally presented as grammars combining (ordered) trees. In fact, as a tree grammar, TAG is not 6 The most natural idea would be to encode a rewriting rule with a tree of depth 1 and the precedence relation with edges from a node to its successor. The difficulty is then to propagate the order relation to the descendants of two sister nodes when we apply a rewriting rule by substituting a tree of depth 1. The simplest solution is undeniably the one presented here, consisting to introduce objects representing the beginning and the end of phrases (our grey nodes) and to indicate the relation between a phrase, its beginning and its end by representing the phrase with an edge from the beginning to the end. S A B C D E beans Peter S NP VP N eats red Adj NP V S NP VP a 140 monotonic and cannot be simulated with PUG. As shown by Vijay-Shanker 1992, to obtain a monotonic formalism, TAG must be viewed as a grammar combining quasi-trees. Intuitively, a quasi-tree is a tree whose nodes has been split in two parts and have each one an upper part and a lower part, between which another quasi-tree can be inserted (this is the famous adjoining operation of TAG). Formally, a quasi-tree is a tree whose branches have two types: dependency relations and dominance relations (respectively noted by plain lines and dotted lines). Two nodes linked by a negative dominance relation are potentially the two parts of a same node; only the lower part can have dependents. The next figures give an α-tree (= to be substituted) and a β-tree (= to be adjoined) with the corresponding PUG structures.7 A substitution node (like D↓) gives a negative node, which will unify with the root of an α tree. A β-tree gives a white root node and a black foot node, which will unify with the upper and the lower part of a split node. 
This is why the root and the foot node are linked by a positive dominance link, which will unify with a negative dominance link connecting the two parts of a split node. An α tree and its PUG translation 7 For sake of simplicity, we leave aside the precedence relation on sister nodes. It might be encoded in the same way as context-free rewriting systems, by modeling seminodes of TAG trees by edges. It does not pose any problem but would make the figures difficult to read. A β tree and its PUG translation At the end of the derivation, the structure must be a tree and all nodes must be reconstructed: this is why we introduce the next rule, which presents a positive dominance link linking a node to itself and which will force two seminodes to unify by neutralizing the dominance link between them. This last rule again shows the advantage of PUG: the reunification of nodes, which is procedurally ensured in Vijay-Shanker 1992 is given here as a declarative rule. 3.4 HPSG (Head-driven Phrase Structure Grammar) There are two ways to translate feature structures (FSs) into PUG. Clearly atomic values must be labels and (embedded) feature structures must be nodes, but features can be translated by maps or by edges, that is, objects. Encoding features by maps ensures to identify them in PUG. Encoding them by edges allows us to polarize them and control the number of identifications.8 For the sake of clarification of HSPG structures, we choose to translate structural features such as HDTR and NHDTR, which give the phrase structure and which never unify with other “features”, by edges and other features by maps (which will be represented by hashed arrows). In any case, the result looks like a dag whose “edges” (true edges and maps) represent features and whose nodes represent values (e.g. Kesper & Mönnich 2003). We exemplify the translation of HPSG in PUG with the schema of combination 8 Perrier 2000 uses a feature-structure based formalism where only features are polarized. Although more or less equivalent we prefer to polarize the FS themselves, i.e. the nodes. A A A* C D↓ A C C A D A A B C A B A B C D↓ C D 141 SC HD H Q HD SC HD elist NHDTR HDTR SC of head phrase with a subcategorized sister phrase, namely the head-daughter-phrase:9 HEAD: 1 SUBCAT: 3 HDTR: HEAD: 1 SUBCAT: 〈 2 〉 ⊕ 3 NHDTR: HEAD: 2 SUBCAT : elist This FS gives the following structure, where a list is represented recursively in two pieces: its head (value of H) and its queue (value of Q). A negative node of this FS can be neutralized by the combination with a similar FS representing a phrase or with a lexical entry. The next figure proposes a lexical entry for eat, indicating that eat is a V whose SUBCAT list contains two phrases headed by an N (for sake of simplicity we deal with the subject as a subcategorized phrase). The combination of two head-daughterphrases with the lexical entry of eat gives us the previous lexicalized rule, equivalent to the rule for eat of the dependency grammar G4 (/subj/ is the NHDTR of the maximal projection and /obj/ 9 Numbers in boxes are values shared by several features. The value of SUBCAT (= SC) is a list (the list of subcategorized phrases). The non-head daughter phrase (NHDTR) has a saturated valence and so needs an empty SUBCAT list (elist). The subcat list of the head daughter phrase (HDTR) is the concatenation, noted ⊕, of two lists: a list with one element that is the description of the non-head daughter phrase and the SUBCAT list of the whole phrase. 
The rest of the description of this phrase (value of HEAD) is equal to the one of the head daughter phrase. the NHDTR of the intermediate projection of eat). Polarization of objects shows exactly what is constructed by each rule and what are the requests filled by other rules. Moreover it allows us to force SUBCAT lists to be instantiated (and therefore allows us to control the saturation of the valence), which is ensured in the usual formalism of HPSG by a bottom-up procedural presentation. 3.5 LFG (Lexical Functional Grammar) and synchronous grammars We propose a translation of LFG into PUG that makes LFG appear as a synchronous grammar approach (see Shieber & Schabes 1990). LFG synchronizes two structures (a phrase structure or c-structure and a dependency/functional structure or f-structure) and it can be viewed as the synchronization of a phrase structure grammar and a dependency grammar. Let us consider a first LFG rule and its translation in PUG: [1] S → NP VP ↓ = ↑ SUBJ ↓ = ↑ Equations under phrases (in the right side of [1]) ensure the synchronization between the objects of the c-structure and the f-structure: each phrase is synchronized with a “functional” node. Symbols ↓ and ↑ respectively designate the functional node synchronized with the current phrase and the one synchronized with the mother phrase (here S). Thus the equation ↓=↑ means that the current phrase (VP) and its mother (S) are synchronized with the same functional node. The eat V Q H cat lex HD HD SC HD SC SC HDTR HD HD NHDT R HDTR NHDTR Q elist cat N H cat N SC elist SC elist eat V Q H cat Q elist SC HD cat N H cat N SUBJ S S NP VP 142 expression ↑ SUBJ designates the functional node depending on ↑ by the relation SUBJ. In PUG we model the synchronization of the phrases and the functional nodes by synchronization links (represented by dotted lines with diamond-shaped polarities) (see Bresnan 2000 for non-formalized similar representations). The two synchronizations ensured by the two constraints ↓=↑ SUBJ and ↓=↑ of [1], and therefore built by this rule, are polarized in black. A phrasal rule such as [1] introduces an fstructure with a totally white polarization. It will be neutralized by lexical rules such as [2]: [2] V → wants ↑ PRED = ‘want 〈SUBJ,VCOMP〉’ ↑ SUBJ = ↑ VCOMP SUBJ The feature Pred is interpreted as the labeling of the functional node, while the valence 〈SUBJ,VCOMP〉 gives us two black edges and two white nodes. The functional equation ↑SUBJ = ↑ VCOMP SUBJ introduces a white edge SUBJ between the nodes ↑ SUBJ and ↑VCOMP (and is therefore to be interpreted very differently from the constraints of [1], which introduce black synchronization links.) PUG allows to easily split up a rule into more elementary rules. For instance, the rule [1] can be split up into three rules: a phrase structure rules linearizing the daughter phrases and two rules of synchronization indicating the functional link between a phrase and one of its daughter phrases. Our decomposition shows that LFG articulated two different grammars: a classical phrase structure generating the c-structure and an interface grammar between c- and f-structures (and even a third grammar because the f-structure is really generated only by the lexical rules). With PUG it is easy to join two (or more) grammars: it suffices to add on the objects by both grammars a white polarity that will be saturated in the other grammar (and vice versa) (Kahane & Lareau 2005). 
Let us consider another problem, illustrated here by the rule for the topicalization of an object. The unbounded dependency of the object with its functional governor is an undetermined path expressed by a regular expression (here VCOMP* OBJ; functional uncertainty, Kaplan & Zaenen 1989). [3] S' → NP S ↓ = ↑ VCOMP* OBJ ↓ = ↑ ↓ = ↑ TOP The path VCOMP* (represented by a dashed arrow) is expanded by the following regular grammar, with two rules, one for the propagation and one for the ending. Again the translation into PUG brings to the fore some fundamental components of the formalism (like synchronization links) and some non-explicit mechanisms such as the fact that the lexical equation ↑ PRED = ‘want 〈SUBJ,VCOMP〉’ introduces both resources (a node ‘want’) and needs (its valence). 4 Conclusion The PUG formalism is extremely simple: it only imposes that combining two structures involves at least the unification of two objects. Forcing or forbidding more objects to combine is then entirely controlled by polarization of objects. Polarization will thus guide the process of combination of elementary structures. In spite of its simplicity, the PUG formalism is powerful enough to elegantly simulate most of the rulebased formalisms used in formal linguistics and NLP. This sheds new light on these formalisms and allows us to bring to the fore the exact nature SUBJ S NP VP S NP ⊕ S VP ⊕ VCOMP* VCOMP VCOMP* V wants SUBJ VCOMP SUBJ ‘want’ S' S VCOMP* TOP NP OBJ VCOMP* 143 of the structures they handle and to extract some procedural mechanisms hidden by the formalism. But above all, the PUG formalism allows us to write separately several modules of the grammar handling various structures and to put them together in a same formalism by synchronization of the grammars, as we show with our translation of LFG. Thus PUGs extend unification grammars based on feature structures by allowing a greatest diversity of geometric structures and a best control of resources. Further investigations must concern the computational properties of PUGs, notably restrictions allowing polynomial time parsing. Acknowledgements I thank Benoît Crabbé, Denys Duchier, Kim Gerdes, François Lareau, François Métayer, Piet Mertens, Guy Perrier, Alain Polguère and Benoît Sagot for their numerous remarks and enlightening commentaries. References Ajdukiewicz K. 1935. Die syntaktische Konnexität. Studia Philosophica, 1:1-27. Bonfante G., Guillaume B., Perrier G. 2004. Polarization and abstraction of grammatical formalisms as methods for lexical disambiguation. Proceedings of CoLing, Genève, Switzerland, 303-309. Bresnan J. 2001. Lexical-Functional Syntax. Blackwell. Burroni A. 1993. Higher-dimensional word problems with applications to equational logic. Theoretical Computer Sciences, 115:43-62. Duchier D., Thater S. 1999. Parsing with tree descriptions: a constraint-based approach. NLULP 1999 (Natural Language Understanding and Logic Programming), Las Cruces, NM. Dymetman M. 1999. Some Remarks on the Geometry of Grammar. Proceedings of MOL'6 (6th Meeting on the Mathematics of Language), Orlando, FA. Girard J.-Y. 1987. Linear logic. Theoretical Computer Sciences, 50(1):1-102. Kahane S., Lareau F. 2005. Meaning-Text Unification Grammar: modularity and polarization, Second International Conference on Meaning-Text Theory, Moscow, 197-206. Kaplan R.M., Zaenen A. 1989. Long-distance dependencies, constituent structure, and functional uncertainty. In Alternative Conceptions of Phrase Structure, M. Baltin & A. Kroch (eds), 17-42, Chicago Univ. 
Press. Kepser S., Mönnich U. 2003. Graph properties of HPSG feature structures. Proceedings of Formal Grammar, Vienna, Austria, 115-124. Nasr A. 1995. A formalism and a parser for lexicalised dependency grammars. Proceedings of the 4th Int. Workshop on Parsing Tecnologies, State Univ. of NY Press. Perrier G. 2000. Interaction Grammars. Proceedings of CoLing, Sarrebrücken, 7 p. Shieber S. M. & Schabes Y. 1990. Synchronous treeadjoining grammars, Proceedings of CoLing, vol. 3, 253-258, Helsinki, Finland. Tesnière L. 1934. Comment construire une syntaxe. Bulletin de la Faculté des Lettres de Strasbourg, 7: 219-229. van Kampen E. 1933. On some lemmas in the theory of groups. American Journal of Mathematics, 55:268–73. Vijay-Shanker K. 1992. Using description of trees in a Tree Adjoining Grammar. Computational Linguistics, 18(4):481-517. 144
Proceedings of the 21st International Conference on Computational Linguistics and 44th Annual Meeting of the ACL, pages 145–152, Sydney, July 2006. c⃝2006 Association for Computational Linguistics Partially Specified Signatures: a Vehicle for Grammar Modularity Yael Cohen-Sygal Dept. of Computer Science University of Haifa [email protected] Shuly Wintner Dept. of Computer Science University of Haifa [email protected] Abstract This work provides the essential foundations for modular construction of (typed) unification grammars for natural languages. Much of the information in such grammars is encoded in the signature, and hence the key is facilitating a modularized development of type signatures. We introduce a definition of signature modules and show how two modules combine. Our definitions are motivated by the actual needs of grammar developers obtained through a careful examination of large scale grammars. We show that our definitions meet these needs by conforming to a detailed set of desiderata. 1 Introduction Development of large scale grammars for natural languages is an active area of research in human language technology. Such grammars are developed not only for purposes of theoretical linguistic research, but also for natural language applications such as machine translation, speech generation, etc. Wide-coverage grammars are being developed for various languages (Oepen et al., 2002; Hinrichs et al., 2004; Bender et al., 2005; King et al., 2005) in several theoretical frameworks, e.g., LFG (Dalrymple, 2001) and HPSG (Pollard and Sag, 1994). Grammar development is a complex enterprise: it is not unusual for a single grammar to be developed by a team including several linguists, computational linguists and computer scientists. The scale of grammars is overwhelming: for example, the English resource grammar (Copestake and Flickinger, 2000) includes thousands of types. This raises problems reminiscent of those encountered in large-scale software development. Yet while software engineering provides adequate solutions for the programmer, no grammar development environment supports even the most basic needs, such as grammar modularization, combination of sub-grammars, separate compilation and automatic linkage of grammars, information encapsulation, etc. This work provides the essential foundations for modular construction of signatures in typed unification grammars. After a review of some basic notions and a survey of related work we list a set of desiderata in section 4, which leads to a definition of signature modules in section 5. In section 6 we show how two modules are combined, outlining the mathematical properties of the combination (proofs are suppressed for lack of space). Extending the resulting module to a stand-alone type signature is the topic of section 7. We conclude with suggestions for future research. 2 Type signatures We assume familiarity with theories of (typed) unification grammars, as formulated by, e.g., Carpenter (1992) and Penn (2000). The definitions in this section set the notation and recall basic notions. For a partial function F, ‘F(x)↓’ means that F is defined for the value x. Definition 1 Given a partially ordered set ⟨P, ≤⟩, the set of upper bounds of a subset S ⊆P is the set Su = {y ∈P | ∀x ∈S x ≤y}. For a given partially ordered set ⟨P, ≤⟩, if S ⊆ P has a least element then it is unique. 
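As a small illustration of these order-theoretic notions (upper bounds, least upper bounds, and the bounded-completeness condition of Definition 2 below), consider the following sketch; `leq` stands for any callable implementing the partial order, and the brute-force BCPO check is only meant for tiny examples.

```python
from itertools import chain, combinations

def upper_bounds(S, P, leq):
    """Upper bounds of S in the finite poset (P, leq): {y in P | x leq y for all x in S}."""
    return {y for y in P if all(leq(x, y) for x in S)}

def least_upper_bound(S, P, leq):
    """The lub of S if it exists, else None."""
    ub = upper_bounds(S, P, leq)
    least = [y for y in ub if all(leq(y, z) for z in ub)]
    return least[0] if least else None

def is_bcpo(P, leq):
    """Definition 2, checked by enumerating all subsets (exponential)."""
    subsets = chain.from_iterable(combinations(P, r) for r in range(len(P) + 1))
    return all(least_upper_bound(set(S), P, leq) is not None
               for S in subsets if upper_bounds(set(S), P, leq))
```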
Definition 2 A partially ordered set ⟨P, ≤⟩is a bounded complete partial order (BCPO) if for every S ⊆P such that Su ̸= ∅, Su has a least element, called a least upper bound (lub). Definition 3 A type signature is a structure ⟨TYPE, ⊑, FEAT, Approp⟩, where: 1. ⟨TYPE, ⊑⟩is a finite bounded complete partial order (the type hierarchy) 145 2. FEAT is a finite set, disjoint from TYPE. 3. Approp : TYPE×FEAT →TYPE (the appropriateness specification) is a partial function such that for every F ∈FEAT: (a) (Feature Introduction) there exists a type Intro(F) ∈ TYPE such that Approp(Intro(F), F)↓, and for every t ∈ TYPE, if Approp(t, F) ↓, then Intro(F) ⊑t; (b) (Upward Closure) if Approp(s, F) ↓ and s ⊑t, then Approp(t, F) ↓and Approp(s, F) ⊑Approp(t, F). Notice that every signature has a least type, since the subset S = ∅of TYPE has the non-empty set of upper bounds, Su = TYPE, which must have a least element due to bounded completeness. Definition 4 Let ⟨TYPE, ⊑⟩be a type hierarchy and let x, y ∈TYPE. If x ⊑y, then x is a supertype of y and y is a subtype of x. If x ⊑y, x ̸= y and there is no z such that x ⊑z ⊑y and z ̸= x, y then x is an immediate supertype of y and y is an immediate subtype of x. 3 Related Work Several authors address the issue of grammar modularization in unification formalisms. Moshier (1997) views HPSG , and in particular its signature, as a collection of constraints over maps between sets. This allows the grammar writer to specify any partial information about the signature, and provides the needed mathematical and computational capabilities to integrate the information with the rest of the signature. However, this work does not define modules or module interaction. It does not address several basic issues such as bounded completeness of the partial order and the feature introduction and upward closure conditions of the appropriateness specification. Furthermore, Moshier (1997) shows how signatures are distributed into components, but not the conditions they are required to obey in order to assure the well-definedness of the combination. Keselj (2001) presents a modular HPSG, where each module is an ordinary type signature, but each of the sets FEAT and TYPE is divided into two disjoint sets of private and public elements. In this solution, modules do not support specification of partial information; module combination is not associative; and the only channel of interaction between modules is the names of types. Kaplan et al. (2002) introduce a system designed for building a grammar by both extending and restricting another grammar. An LFG grammar is presented to the system in a priority-ordered sequence of files where the grammar can include only one definition of an item of a given type (e.g., rule) with a particular name. Items in a higher priority file override lower priority items of the same type with the same name. The override convention makes it possible to add, delete or modify rules. However, a basis grammar is needed and when modifying a rule, the entire rule has to be rewritten even if the modifications are minor. The only interaction among files in this approach is overriding of information. King et al. (2005) augment LFG with a makeshift signature to allow modular development of untyped unification grammars. In addition, they suggest that any development team should agree in advance on the feature space. This work emphasizes the observation that the modularization of the signature is the key for modular development of grammars. 
However, the proposed solution is adhoc and cannot be taken seriously as a concept of modularization. In particular, the suggestion for an agreement on the feature space undermines the essence of modular design. Several works address the problem of modularity in other, related, formalisms. Candito (1996) introduces a description language for the trees of LTAG. Combining two descriptions is done by conjunction. To constrain undesired combinations, Candito (1996) uses a finite set of names where each node of a tree description is associated with a name. The only channel of interaction between two descriptions is the names of the nodes, which can be used only to allow identification but not to prevent it. To overcome these shortcomings, Crabb´e and Duchier (2004) suggest to replace node naming by colors. Then, when unifying two trees, the colors can prevent or force the identification of nodes. Adapting this solution to type signatures would yield undesired orderdependence (see below). 4 Desiderata To better understand the needs of grammar developers we carefully explored two existing grammars: the LINGO grammar matrix (Bender et al., 2002), which is a basis grammar for the rapid development of cross-linguistically consistent gram146 mars; and a grammar of a fragment of Modern Hebrew, focusing on inverted constructions (Melnik, 2006). These grammars were chosen since they are comprehensive enough to reflect the kind of data large scale grammar encode, but are not too large to encumber this process. Motivated by these two grammars, we experimented with ways to divide the signatures of grammars into modules and with different methods of module interaction. This process resulted in the following desiderata for a beneficial solution for signature modularization: 1. The grammar designer should be provided with as much flexibility as possible. Modules should not be unnecessarily constrained. 2. Signature modules should provide means for specifying partial information about the components of a grammar. 3. A good solution should enable one module to refer to types defined in another. Moreover, it should enable the designer of module Mi to use a type defined in Mj without specifying the type explicitly. Rather, some of the attributes of the type can be (partially) specified, e.g., its immediate subtypes or its appropriateness conditions. 4. While modules can specify partial information, it must be possible to deterministically extend a module (which can be the result of the combination of several modules) into a full type signature. 5. Signature combination must be associative and commutative: the order in which modules are combined must not affect the result. The solution we propose below satisfies these requirements.1 5 Partially specified signatures We define partially specified signatures (PSSs), also referred to as modules below, which are structures containing partial information about a signature: part of the subsumption relation and part of the appropriateness specification. We assume enumerable, disjoint sets TYPE of types and FEAT of features, over which signatures are defined. We begin, however, by defining partially labeled graphs, of which PSSs are a special case. 1The examples in the paper are inspired by actual grammars but are obviously much simplified. Definition 5 A partially labeled graph (PLG) over TYPE and FEAT is a finite, directed labeled graph S = ⟨Q, T, ⪯, Ap⟩, where: 1. Q is a finite, nonempty set of nodes, disjoint from TYPE and FEAT. 2. 
T : Q →TYPE is a partial function, marking some of the nodes with types. 3. ⪯⊆Q × Q is a relation specifying (immediate) subsumption. 4. Ap ⊆Q × FEAT × Q is a relation specifying appropriateness. Definition 6 A partially specified signature (PSS) over TYPE and FEAT is a PLG S = ⟨Q, T, ⪯, Ap⟩, where: 1. T is one to one. 2. ‘⪯’ is antireflexive; its reflexive-transitive closure, denoted ‘ ∗ ⪯’, is antisymmetric. 3. (a) (Relaxed Upward Closure) for all q1, q′ 1, q2 ∈ Q and F ∈ FEAT, if (q1, F, q2) ∈Ap and q1 ∗ ⪯q′ 1, then there exists q′ 2 ∈Q such that q2 ∗ ⪯q′ 2 and (q′ 1, F, q′ 2) ∈Ap; and (b) (Maximality) for all q1, q2 ∈Q and F ∈ FEAT, if (q1, F, q2) ∈Ap then for all q′ 2 ∈Q such that q′ 2 ∗ ⪯q2 and q2 ̸= q′ 2, (q1, F, q′ 2) /∈Ap. A PSS is a finite directed graph whose nodes denote types and whose edges denote the subsumption and appropriateness relations. Nodes can be marked by types through the function T, but can also be anonymous (unmarked). Anonymous nodes facilitate reference, in one module, to types that are defined in another module. T is oneto-one since we assume that two marked nodes denote different types. The ‘⪯’ relation specifies an immediate subsumption order over the nodes, with the intention that this order hold later for the types denoted by nodes. This is why ‘ ∗ ⪯’ is required to be a partial order. The type hierarchy of a type signature is a BCPO, but current approaches (Copestake, 2002) relax this requirement to allow more flexibility in grammar design. PSS subsumption is also a partial order but not necessarily a bounded complete 147 one. After all modules are combined, the resulting subsumption relation will be extended to a BCPO (see section 7), but any intermediate result can be a general partial order. Relaxing the BCPO requirement also helps guaranteeing the associativity of module combination. Consider now the appropriateness relation. In contrast to type signatures, Ap is not required to be a function. Rather, it is a relation which may specify several appropriate nodes for the values of a feature F at a node q. The intention is that the eventual value of Approp(T(q), F) be the lub of the types of all those nodes q′ such that Ap(q, F, q′). This relaxation allows more ways for modules to interact. We do restrict the Ap relation, however. Condition 3a enforces a relaxed version of upward closure. Condition 3b disallows redundant appropriateness arcs: if two nodes are appropriate for the same node and feature, then they should not be related by subsumption. The feature introduction condition of type signatures is not enforced by PSSs. This, again, results in more flexibility for the grammar designer; the condition is restored after all modules combine, see section 7. Example 1 A simple PSS S1 is depicted in Figure 1, where solid arrows represent the ‘⪯’ (subsumption) relation and dashed arrows, labeled by features, the Ap relation. S1 stipulates two subtypes of cat, n and v, with a common subtype, gerund. The feature AGR is appropriate for all three categories, with distinct (but anonymous) values for Approp(n, AGR) and Approp(v, AGR). Approp(gerund, AGR) will eventually be the lub of Approp(n, AGR) and Approp(v, AGR), hence the multiple outgoing AGR arcs from gerund. Observe that in S1, ‘⪯’ is not a BCPO, Ap is not a function and the feature introduction condition does not hold. 
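A sketch of the structures of Definitions 5 and 6 as plain data, together with a brute-force check of the PSS conditions, is given below; the record and function names are ours, and the compaction operation (whose details are suppressed above) is not reproduced here.

```python
from dataclasses import dataclass

@dataclass
class PLG:
    """Partially labeled graph (Definition 5)."""
    Q: set      # nodes
    T: dict     # partial, one-to-one typing: node -> type name
    sub: set    # immediate subsumption: pairs (q1, q2)
    Ap: set     # appropriateness: triples (q1, feature, q2)

def star(Q, edges):
    """Reflexive-transitive closure of a relation over Q."""
    closure = {(q, q) for q in Q} | set(edges)
    changed = True
    while changed:
        changed = False
        for (a, b) in list(closure):
            for (c, d) in list(closure):
                if b == c and (a, d) not in closure:
                    closure.add((a, d))
                    changed = True
    return closure

def is_pss(g: PLG) -> bool:
    """Check the conditions of Definition 6 (a sketch)."""
    if len(set(g.T.values())) != len(g.T):          # T is one-to-one
        return False
    if any(a == b for (a, b) in g.sub):             # antireflexive
        return False
    s = star(g.Q, g.sub)
    if any((a, b) in s and (b, a) in s and a != b   # closure antisymmetric
           for a in g.Q for b in g.Q):
        return False
    for (q1, f, q2) in g.Ap:                        # relaxed upward closure
        for q1p in g.Q:
            if (q1, q1p) in s and not any(
                    (q1p, f, q2p) in g.Ap and (q2, q2p) in s for q2p in g.Q):
                return False
    for (q1, f, q2) in g.Ap:                        # maximality
        for (q1b, fb, q2b) in g.Ap:
            if q1b == q1 and fb == f and q2b != q2 and (q2b, q2) in s:
                return False
    return True
```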
gerund n v cat agr AGR AGR AGR AGR Figure 1: A partially specified signature, S1 We impose an additional restriction on PSSs: a PSS is well-formed if any two different anonymous nodes are distinguishable, i.e., if each node is unique with respect to the information it encodes. If two nodes are indistinguishable then one of them can be removed without affecting the information encoded by the PSS. The existence of indistinguishable nodes in a PSS unnecessarily increases its size, resulting in inefficient processing. Given a PSS S, it can be compacted into a PSS, compact(S), by unifying all the indistinguishable nodes in S. compact(S) encodes the same information as S but does not include indistinguishable nodes. Two nodes, only one of which is anonymous, can still be otherwise indistinguishable. Such nodes will, eventually, be coalesced, but only after all modules are combined (to ensure the associativity of module combination). The detailed computation of the compacted PSS is suppressed for lack of space. Example 2 Let S2 be the PSS of Figure 2. S2 includes two pairs of indistinguishable nodes: q2, q4 and q6, q7. The compacted PSS of S2 is depicted in Figure 3. All nodes in compact(S2) are pairwise distinguishable. q6 q7 b q8 q2 q3 q4 q5 q1 a F F F F Figure 2: A partially specified signature with indistinguishable nodes, S2 b a F F F Figure 3: The compacted partially specified signature of S2 Proposition 1 If S is a PSS then compact(S) is a well formed PSS. 148 6 Module combination We now describe how to combine modules, an operation we call merge bellow. When two modules are combined, nodes that are marked by the same type are coalesced along with their attributes. Nodes that are marked by different types cannot be coalesced and must denote different types. The main complication is caused when two anonymous nodes are considered: such nodes are coalesced only if they are indistinguishable. The merge of two modules is performed in several stages: First, the two graphs are unioned (this is a simple pointwise union of the coordinates of the graph, see definition 7). Then the resulting graph is compacted, coalescing nodes marked by the same type as well as indistinguishable anonymous nodes. However, the resulting graph does not necessarily maintain the relaxed upward closure and maximality conditions, and therefore some modifications are needed. This is done by Ap-Closure, see definition 8. Finally, the addition of appropriateness arcs may turn two anonymous distinguishable nodes into indistinguishable ones and therefore another compactness operation is needed (definition 9). Definition 7 Let S1 = ⟨Q1, T1, ⪯1, Ap1⟩, S2 = ⟨Q2, T2, ⪯2, Ap2⟩be two PLGssuch that Q1 ∩ Q2 = ∅. The union of S1 and S2, denoted S1∪S2, is the PLG S = ⟨Q1 ∪Q2, T1 ∪T2, ⪯1 ∪⪯2, Ap1 ∪Ap2⟩. Definition 8 Let S = ⟨Q, T, ⪯, Ap⟩be a PLG. The Ap-Closure of S, denoted ApCl(S), is the PLG ⟨Q, T, ⪯, Ap′′⟩where: • Ap′ = {(q1, F, q2) | q1, q2 ∈Q and there exists q′ 1 ∈ Q such that q′ 1 ∗ ⪯ q1 and (q′ 1, F, q2) ∈Ap} • Ap′′ = {(q1, F, q2) ∈Ap′ | for all q′ 2 ∈Q, such that q2 ∗ ⪯q′ 2 and q2 ̸= q′ 2, (q1, F, q′ 2) /∈ Ap′} Ap-Closure adds to a PLG the arcs required for it to maintain the relaxed upward closure and maximality conditions. First, arcs are added (Ap′) to maintain upward closure (to create the relations between elements separated between the two modules and related by mutual elements). Then, redundant arcs are removed to maintain the maximality condition (the removed arcs may be added by Ap′ but may also exist in Ap). 
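Continuing the sketch above, Ap-Closure (Definition 8) is a direct transcription of the two set definitions; the function name ap_closure is ours.

```python
# A sketch of Ap-Closure, reusing the PLG class and subsumes() from the previous sketch.
def ap_closure(g):
    # Ap': propagate every appropriateness arc downward to all nodes it subsumes
    # (subsumes() is reflexive, so Ap is contained in Ap').
    ap1 = {(q1, f, q2)
           for (q1p, f, q2) in g.Ap
           for q1 in g.Q
           if g.subsumes(q1p, q1)}
    # Ap'': drop an arc whenever the same node and feature also reach a strictly
    # more specific target, so only maximal targets survive.
    ap2 = {(q1, f, q2) for (q1, f, q2) in ap1
           if not any(q2 != q2p and g.subsumes(q2, q2p) and (q1, f, q2p) in ap1
                      for q2p in g.Q)}
    return PLG(g.Q, g.T, g.sub, ap2)
```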
Notice that Ap ⊆Ap′ since for all (q1, F, q2) ∈Ap, by choosing q′ 1 = q1 it follows that q′ 1 = q1 ∗ ⪯q1 and (q′ 1, F, q2) = (q1, F, q2) ∈Ap and hence (q′ 1, F, q2) = (q1, F, q2) ∈Ap′. Two PSSs can be merged only if the resulting subsumption relation is indeed a partial order, where the only obstacle can be the antisymmetry of the resulting relation. The combination of the appropriateness relations, in contrast, cannot cause the merge operation to fail because any violation of the appropriateness conditions in PSSs can be deterministically resolved. Definition 9 Let S1 = ⟨Q1, T1, ⪯1, Ap1⟩, S2 = ⟨Q2, T2, ⪯2, Ap2⟩be two PSSs such that Q1 ∩ Q2 = ∅. S1, S2 are mergeable if there are no q1, q2 ∈Q1 and q3, q4 ∈Q2 such that the following hold: 1. T1(q1)↓, T1(q2)↓, T2(q3)↓and T2(q4)↓ 2. T1(q1) = T2(q4) and T1(q2) = T2(q3) 3. q1 ∗ ⪯1 q2 and q3 ∗ ⪯2 q4 If S1 and S2 are mergeable, then their merge, denoted S1⋒S2, is compact(ApCl(compact(S1∪ S2))). In the merged module, pairs of nodes marked by the same type and pairs of indistinguishable anonymous nodes are coalesced. An anonymous node cannot be coalesced with a typed node, even if they are otherwise indistinguishable, since that will result in an unassociative combination operation. Anonymous nodes are assigned types only after all modules combine, see section 7.1. If a node has multiple outgoing Ap-arcs labeled with the same feature, these arcs are not replaced by a single arc, even if the lub of the target nodes exists in the resulting PSS. Again, this is done to guarantee the associativity of the merge operation. Example 3 Figure 4 depicts a na¨ıve agreement module, S5. Combined with S1 of Figure 1, S1 ⋒S5 = S5 ⋒S1 = S6, where S6 is depicted in Figure 5. All dashed arrows are labeled AGR, but these labels are suppressed for readability. Example 4 Let S7 and S8 be the PSSs depicted in Figures 6 and 7, respectively. Then S7 ⋒S8 = S8⋒S7 = S9, where S9 is depicted in Figure 8. By standard convention, Ap arcs that can be inferred by upward closure are not depicted. 149 n nagr gerund vagr v agr Figure 4: Na¨ıve agreement module, S5 gerund n v vagr nagr cat agr Figure 5: S6 = S1 ⋒S5 Proposition 2 Given two mergeable PSSs S1, S2, S1 ⋒S2 is a well formed PSS. Proposition 3 PSS merge is commutative: for any two PSSs, S1, S2, S1⋒S2 = S2⋒S1. In particular, either both are defined or both are undefined. Proposition 4 PSS merge is associative: for all S1, S2, S3, (S1 ⋒S2) ⋒S3 = S1 ⋒(S2 ⋒S3). 7 Extending PSSs to type signatures When developing large scale grammars, the signature can be distributed among several modules. A PSS encodes only partial information and therefore is not required to conform with all the constraints imposed on ordinary signatures. After all the modules are combined, however, the PSS must be extended into a signature. This process is done in 4 stages, each dealing with one property: 1. Name resolution: assigning types to anonymous nodes (section 7.1); 2. Determinizing Ap, converting it from a relation to a function (section 7.2); 3. Extending ‘⪯’ to a BCPO. This is done using the algorithm of Penn (2000); 4. Extending Ap to a full appropriateness specification by enforcing the feature introduction condition: Again, we use the person nvagr bool vagr nagr agr num NUM PERSON DEF Figure 6: An agreement module, S7 first second third + − sg person pl bool num Figure 7: A partially specified signature, S8 first second third + − person bool nvagr vagr nagr sg pl agr num NUM DEF PERSON Figure 8: S9 = S7 ⋒S8 algorithm of Penn (2000). 
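With Ap-Closure in hand, both the merge of Definition 9 and the four-stage extension just listed can be summarized in code. The sketch below builds on the earlier ones; all function names are ours, compact and the last two stages (Penn's BCPO completion and feature introduction) are taken as given functions since their details are not spelled out here, and the mergeability guard reads Definition 9 as ruling out subsumption cycles between two distinct types.

```python
# A sketch built on the PLG class, subsumes() and ap_closure() from above.
def mergeable(s1, s2):
    """S1 and S2 clash only if their typed subsumption orders would form a cycle."""
    def typed_pairs(s):
        return {(s.T[a], s.T[b]) for a in s.T for b in s.T
                if a != b and s.subsumes(a, b)}
    p1, p2 = typed_pairs(s1), typed_pairs(s2)
    return not any((b, a) in p2 for (a, b) in p1)

def merge(s1, s2, compact):
    """S1 merged with S2 = compact(ApCl(compact(S1 union S2))); node sets assumed disjoint."""
    if not mergeable(s1, s2):
        raise ValueError("modules are not mergeable")
    union = PLG(s1.Q | s2.Q, {**s1.T, **s2.T}, s1.sub | s2.sub, s1.Ap | s2.Ap)
    return compact(ap_closure(compact(union)))

def extend_to_signature(pss, resolve_names, consolidate, to_bcpo, introduce_features):
    """The four stages above, applied once all modules have been combined."""
    s = resolve_names(pss)        # 1. assign types to anonymous nodes (section 7.1)
    s = consolidate(s)            # 2. turn Ap into a function (section 7.2)
    s = to_bcpo(s)                # 3. extend subsumption to a BCPO (Penn, 2000)
    return introduce_features(s)  # 4. enforce feature introduction (Penn, 2000)
```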
7.1 Name resolution By the definition of a well-formed PSS, each anonymous node is unique with respect to the information it encodes among the anonymous nodes, but there may exist a marked node encoding the same information. The goal of the name resolution procedure is to assign a type to every anonymous node, by coalescing it with a similar marked node, if one exists. If no such node exists, or if there is more than one such node, the anonymous node is given an arbitrary type. The name resolution algorithm iterates as long as there are nodes to coalesce. In each iteration, for each anonymous node the set of its similar typed nodes is computed. Then, using this computation, anonymous nodes are coalesced with their paired similar typed node, if such a node uniquely exists. After coalescing all such pairs, the resulting PSS may be non well-formed and therefore the PSS is compacted. Compactness can trigger more pairs that need to be coalesced, and therefore the above procedure is repeated. When no pairs that need to be coalesced are left, the remaining anonymous nodes are assigned arbitrary names and the algorithm halts. The detailed algorithm is suppressed for lack of space. 150 Example 5 Let S6 be the PSS depicted in Figure 5. Executing the name resolution algorithm on this module results in the PSS of Figure 9 (AGR-labels are suppressed for readability.) The two anonymous nodes in S6 are coalesced with the nodes marked nagr and vagr, as per their attributes. Cf. Figure 1, in particular how two anonymous nodes in S1 are assigned types from S5 (Figure 4). gerund n v vagr nagr cat agr Figure 9: Name resolution result for S6 7.2 Appropriateness consolidation For each node q, the set of outgoing appropriateness arcs with the same label F, {(q, F, q′)}, is replaced by the single arc (q, F, ql), where ql is marked by the lub of the types of all q′. If no lub exists, a new node is added and is marked by the lub. The result is that the appropriateness relation is a function, and upward closure is preserved; feature introduction is dealt with separately. The input to the following procedure is a PSS whose typing function, T, is total; its output is a PSS whose typing function, T, is total and whose appropriateness relation is a function. Let S = ⟨Q, T, ⪯, Ap⟩be a PSS. For each q ∈Q and F ∈ FEAT, let target(q, F) = {q′ | (q, F, q′) ∈Ap} sup(q) = {q′ ∈Q | q′ ⪯q} sub(q) = {q′ ∈Q | q ⪯q′} out(q) = {(F, q′) | (q, F, q′) ∈Ap Algorithm 1 Appropriateness consolidation (S = ⟨Q, T, ⪯, Ap⟩) 1. Find a node q and a feature F for which |target(q, F)| > 1 and for all q′ ∈Q such that q′ ∗ ⪯q, |target(q′, F)| ≤1. If no such pair exists, halt. 2. If target(q, F) has a lub, p, then: (a) for all q′ ∈target(q, F), remove the arc (q, F, q′) from Ap. (b) add the arc (q, F, p) to Ap. (c) for all q′ ∈Q such that q ∗ ⪯q′, if (q′, F, p) /∈Ap then add (q′, F, p) to Ap. (d) go to (1). 3. (a) Add a new node, p, to Q with: • sup(p) = target(q, F) • sub(p) = (target(q, F))u • out(p) = S q′∈target(q,F) out(q′) (b) Mark p with a fresh type from NAMES. (c) For all q′ ∈Q such that q ∗ ⪯q′, add (q′, F, p) to Ap. (d) For all q′ ∈target(q, F), remove the arc (q, F, q′) from Ap. (e) Add (q, F, p) to Ap. (f) go to (1). The order in which nodes are selected in step 1 of the algorithm is from supertypes to subtypes. This is done to preserve upward closure. 
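The consolidate step referred to in the pipeline sketch above can be fleshed out as a rough transcription of Algorithm 1 over the same PLG representation. It is only a sketch: step 3 is simplified in that the sub and out attributes of a freshly introduced node are not wired up, and the fresh node and type names are arbitrary.

```python
from itertools import count

def consolidate(g):
    """Turn the appropriateness relation of a PSS into a function, top-down."""
    new_id = count()
    def targets(q, f):
        return {q2 for (q1, feat, q2) in g.Ap if q1 == q and feat == f}
    def lub(nodes):
        # Upper bounds w.r.t. subsumption are common subtypes; the lub is the
        # most general one, if it exists.
        ubs = [u for u in g.Q if all(g.subsumes(n, u) for n in nodes)]
        least = [u for u in ubs if all(g.subsumes(u, v) for v in ubs)]
        return least[0] if least else None
    while True:
        # Step 1: a node/feature pair with several targets, all of whose proper
        # supertypes have already been consolidated for that feature.
        pick = next(((q, f) for (q, f, _) in set(g.Ap)
                     if len(targets(q, f)) > 1
                     and all(len(targets(qp, f)) <= 1
                             for qp in g.Q if qp != q and g.subsumes(qp, q))),
                    None)
        if pick is None:
            return g
        q, f = pick
        tgt = targets(q, f)
        p = lub(tgt)
        if p is None:                       # Step 3: introduce a fresh lub node
            i = next(new_id)
            p = ("lub", i)                  # any unused identifier will do
            g.Q.add(p)
            g.T[p] = f"new{i}"              # step 3b: a fresh type name
            for t in tgt:
                g.sub.add((t, p))           # step 3a: sup(p) = target(q, F)
            # (step 3a also fixes sub(p) and out(p); omitted in this sketch)
        g.Ap -= {(q, f, t) for t in tgt}    # steps 2a/3d: drop the multiple arcs
        g.Ap.add((q, f, p))                 # steps 2b/3e: a single arc to the lub
        for qd in g.Q:                      # steps 2c/3c: push the arc to subtypes of q
            if g.subsumes(q, qd):
                g.Ap.add((qd, f, p))
```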
In addition, when replacing a set of outgoing appropriateness arcs with the same label F, {(q, F, q′)}, by a single arc (q, F, ql), ql is added as an appropriate value for F and all the subtypes of q. Again, this is done to preserve upward closure. If a new node is added (stage 3), then its appropriate features and values are inherited from its immediate supertypes. During the iterations of the algorithm, condition 3b (maximality) of the definition of a PSS may be violated but the resulting graph is guaranteed to be a PSS. Example 6 Consider the PSS depicted in Figure 9. Executing the appropriateness consolidation algorithm on this module results in the module depicted in Figure 10. AGR-labels are suppressed. gerund new n v vagr nagr cat agr Figure 10: Appropriateness consolidation result 8 Conclusions We advocate the use of PSSs as the correct concept of signature modules, supporting interaction 151 among grammar modules. Unlike existing approaches, our solution is formally defined, mathematically proven and can be easily and efficiently implemented. Module combination is a commutative and associative operation which meets all the desiderata listed in section 4. There is an obvious trade-off between flexibility and strong typedeness, and our definitions can be finely tuned to fit various points along this spectrum. In this paper we prefer flexibility, following Melnik (2005), but future work will investigate other options. There are various other directions for future research. First, grammar rules can be distributed among modules in addition to the signature. The definition of modules can then be extended to include also parts of the grammar. Then, various combination operators can be defined for grammar modules (cf. Wintner (2002)). We are actively pursuing this line of research. Finally, while this work is mainly theoretical, it has important practical implications. We would like to integrate our solutions in an existing environment for grammar development. An environment that supports modular construction of large scale grammars will greatly contribute to grammar development and will have a significant impact on practical implementations of grammatical formalisms. 9 Acknowledgments We are grateful to Gerald Penn and Nissim Francez for their comments on an earlier version of this paper. This research was supported by The Israel Science Foundation (grant no. 136/01). References Emily M. Bender, Dan Flickinger, and Stephan Oepen. 2002. The grammar matrix: An open-source starterkit for the rapid development of cross-linguistically consistent broad-coverage precision grammars. In Proceedings of ACL Workshop on Grammar Engineering. Taipei, Taiwan, pages 8–14. Emily M. Bender, Dan Flickinger, Fredrik Fouvry, and Melanie Siegel. 2005. Shared representation in multilingual grammar engineering. Research on Language and Computation, 3:131–138. Marie-H´el`ene Candito. 1996. A principle-based hierarchical representation of LTAGs. In COLING-96, pages 194–199, Copenhagen, Denemark. Bob Carpenter. 1992. The Logic of Typed Feature Structures. Cambridge Tracts in Theoretical Computer Science. Cambridge University Press. Ann Copestake and Dan Flickinger. 2000. An open-source grammar development environment and broad-coverage English grammar using HPSG. In Proceedings of LREC, Athens, Greece. Ann Copestake. 2002. Implementing typed feature structures grammars. CSLI publications, Stanford. Benoit Crabb´e and Denys Duchier. 2004. Metagrammar redux. In CSLP, Copenhagen, Denemark. Mary Dalrymple. 2001. 
Lexical Functional Grammar, volume 34 of Syntax and Semantics. Academic Press. Erhard W. Hinrichs, W. Detmar Meurers, and Shuly Wintner. 2004. Linguistic theory and grammar implementation. Research on Language and Computation, 2:155–163. Ronald M. Kaplan, Tracy Holloway King, and John T. Maxwell. 2002. Adapting existing grammars: the XLE experience. In COLING-02 workshop on Grammar engineering and evaluation, pages 1–7, Morristown, NJ, USA. Vlado Keselj. 2001. Modular HPSG. Technical Report CS-2001-05, Department of Computer Science, University of Waterloo, Waterloo, Ontario, Canada. Tracy Holloway King, Martin Forst, Jonas Kuhn, and Miriam Butt. 2005. The feature space in parallel grammar writing. Research on Language and Computation, 3:139–163. Nurit Melnik. 2005. From “hand-written” to implemented HPSG theories. In Proceedings of HPSG2005, Lisbon, Portugal. Nurit Melnik. 2006. A constructional approach to verb-initial constructions in Modern Hebrew. Cognitive Linguistics, 17(2). To appear. Andrew M. Moshier. 1997. Is HPSG featureless or unprincipled? Linguistics and Philosophy, 20(6):669– 695. Stephan Oepen, Daniel Flickinger, J. Tsujii, and Hans Uszkoreit, editors. 2002. Collaborative Language Engineering: A Case Study in Efficient GrammarBased Processing. CSLI Publications, Stanford. Gerald B. Penn. 2000. The algebraic structure of attributed type signatures. Ph.D. thesis, School of Computer Science, Carnegie Mellon University, Pittsburgh, PA. Carl Pollard and Ivan A. Sag. 1994. Head-Driven Phrase Structure Grammar. University of Chicago Press and CSLI Publications. Shuly Wintner. 2002. Modular context-free grammars. Grammars, 5(1):41–63. 152
Proceedings of the 21st International Conference on Computational Linguistics and 44th Annual Meeting of the ACL, pages 9–16, Sydney, July 2006. c⃝2006 Association for Computational Linguistics Going Beyond AER: An Extensive Analysis of Word Alignments and Their Impact on MT Necip Fazil Ayan and Bonnie J. Dorr Institute of Advanced Computer Studies (UMIACS) University of Maryland College Park, MD 20742 {nfa,bonnie}@umiacs.umd.edu Abstract This paper presents an extensive evaluation of five different alignments and investigates their impact on the corresponding MT system output. We introduce new measures for intrinsic evaluations and examine the distribution of phrases and untranslated words during decoding to identify which characteristics of different alignments affect translation. We show that precision-oriented alignments yield better MT output (translating more words and using longer phrases) than recalloriented alignments. 1 Introduction Word alignments are a by-product of statistical machine translation (MT) and play a crucial role in MT performance. In recent years, researchers have proposed several algorithms to generate word alignments. However, evaluating word alignments is difficult because even humans have difficulty performing this task. The state-of-the art evaluation metric— alignment error rate (AER)—attempts to balance the precision and recall scores at the level of alignment links (Och and Ney, 2000). Other metrics assess the impact of alignments externally, e.g., different alignments are tested by comparing the corresponding MT outputs using automated evaluation metrics (e.g., BLEU (Papineni et al., 2002) or METEOR (Banerjee and Lavie, 2005)). However, these studies showed that AER and BLEU do not correlate well (Callison-Burch et al., 2004; Goutte et al., 2004; Ittycheriah and Roukos, 2005). Despite significant AER improvements achieved by several researchers, the improvements in BLEU scores are insignificant or, at best, small. This paper demonstrates the difficulty in assessing whether alignment quality makes a difference in MT performance. We describe the impact of certain alignment characteristics on MT performance but also identify several alignment-related factors that impact MT performance regardless of the quality of the initial alignments. In so doing, we begin to answer long-standing questions about the value of alignment in the context of MT. We first evaluate 5 different word alignments intrinsically, using: (1) community-standard metrics—precision, recall and AER; and (2) a new measure called consistent phrase error rate (CPER). Next, we observe the impact of different alignments on MT performance. We present BLEU scores on a phrase-based MT system, Pharaoh (Koehn, 2004), using five different alignments to extract phrases. We investigate the impact of different settings for phrase extraction, lexical weighting, maximum phrase length and training data. Finally, we present a quantitative analysis of which phrases are chosen during the actual decoding process and show how the distribution of the phrases differ from one alignment into another. Our experiments show that precision-oriented alignments yield better phrases for MT than recalloriented alignments. Specifically, they cover a higher percentage of our test sets and result in fewer untranslated words and selection of longer phrases during decoding. The next section describes work related to our alignment evaluation approach. 
Following this we outline different intrinsic evaluation measures of alignment and we propose a new measure to evaluate word alignments within phrase-based MT framework. We then present several experiments to measure the impact of different word alignments on a phrase-based MT system, and investigate how different alignments change the phrase 9 selection in the same MT system. 2 Related Work Starting with the IBM models (Brown et al., 1993), researchers have developed various statistical word alignment systems based on different models, such as hidden Markov models (HMM) (Vogel et al., 1996), log-linear models (Och and Ney, 2003), and similarity-based heuristic methods (Melamed, 2000). These methods are unsupervised, i.e., the only input is large parallel corpora. In recent years, researchers have shown that even using a limited amount of manually aligned data improves word alignment significantly (Callison-Burch et al., 2004). Supervised learning techniques, such as perceptron learning, maximum entropy modeling or maximum weighted bipartite matching, have been shown to provide further improvements on word alignments (Ayan et al., 2005; Moore, 2005; Ittycheriah and Roukos, 2005; Taskar et al., 2005). The standard technique for evaluating word alignments is to represent alignments as a set of links (i.e., pairs of words) and to compare the generated alignment against manual alignment of the same data at the level of links. Manual alignments are represented by two sets: Probable (P) alignments and Sure (S) alignments, where S ⊆ P. Given A, P and S, the most commonly used metrics—precision (Pr), recall (Rc) and alignment error rate (AER)—are defined as follows: Pr = |A ∩P| |A| Rc = |A ∩S| |S| AER = 1 −|A ∩S| + |A ∩P| |A| + |S| Another approach to evaluating alignments is to measure their impact on an external application, e.g., statistical MT. In recent years, phrase-based systems (Koehn, 2004; Chiang, 2005) have been shown to outperform word-based MT systems; therefore, in this paper, we use a publicly-available phrase-based MT system, Pharaoh (Koehn, 2004), to investigate the impact of different alignments. Although it is possible to estimate phrases directly from a training corpus (Marcu and Wong, 2002), most phrase-based MT systems (Koehn, 2004; Chiang, 2005) start with a word alignment and extract phrases that are consistent with the given alignment. Once the consistent phrases are extracted, they are assigned multiple scores (such Test Lang # of # Words Source Pair Sent’s (en/fl) en-ch 491 14K/12K NIST MTEval’2002 en-ar 450 13K/11K NIST MTEval’2003 Training en-ch 107K 4.1M/3.3M FBIS en-ar 44K 1.4M/1.1M News + Treebank Table 1: Test and Training Data Used for Experiments as translation probabilities and lexical weights), and the decoder’s job is to choose the correct phrases based on those scores using a log-linear model. 3 Intrinsic Evaluation of Alignments Our goal is to compare different alignments and to investigate how their characteristics affect the MT systems. We evaluate alignments in terms of precision, recall, alignment error rate (AER), and a new measure called consistent phrase error rate (CPER). We focus on 5 different alignments obtained by combining two uni-directional alignments. Each uni-directional alignment is the result of running GIZA++ (Och, 2000b) in one of two directions (source-to-target and vice versa) with default configurations. The combined alignments that are used in this paper are as follows: 1. Union of both directions (SU), 2. 
Intersection of both directions (SI), 3. A heuristic based combination technique called grow-diag-final (SG), which is the default alignment combination heuristic employed in Pharaoh (Koehn, 2004), 4-5. Two supervised alignment combination techniques (SA and SB) using 2 and 4 input alignments as described in (Ayan et al., 2005). This paper examines the impact of alignments according to their orientation toward precision or recall. Among the five alignments above, SU and SG are recall-oriented while the other three are precision-oriented. SB is an improved version of SA which attempts to increase recall without a significant sacrifice in precision. Manually aligned data from two language pairs are used in our intrinsic evaluations using the five combinations above. A summary of the training and test data is presented in Table 1. Our gold standard for each language pair is a manually aligned corpus. English-Chinese an10 notations distinguish between sure and probable alignment links, but English-Arabic annotations do not. The details of how the annotations are done can be found in (Ayan et al., 2005) and (Ittycheriah and Roukos, 2005). 3.1 Precision, Recall and AER Table 2 presents the precision, recall, and AER for 5 different alignments on 2 language pairs. For each of these metrics, a different system achieves the best score – respectively, these are SI, SU, and SB. SU and SG yield low precision, high recall alignments. In contrast, SI yields very high precision but very low recall. SA and SB attempt to balance these two measures but their precision is still higher than their recall. Both systems have nearly the same precision but SB yields significantly higher recall than SA. Align. en-ch en-ar Sys. Pr Rc AER Pr Rc AER SU 58.3 84.5 31.6 56.0 84.1 32.8 SG 61.9 82.6 29.7 60.2 83.0 30.2 SI 94.8 53.6 31.2 96.1 57.1 28.4 SA 87.0 74.6 19.5 88.6 71.1 21.1 SB 87.8 80.5 15.9 90.1 76.1 17.5 Table 2: Comparison of 5 Different Alignments using AER (on English-Chinese and English-Arabic) 3.2 Consistent Phrase Error Rate In this section, we present a new method, called consistent phrase error rate (CPER), for evaluating word alignments in the context of phrasebased MT. The idea is to compare phrases consistent with a given alignment against phrases that would be consistent with human alignments. CPER is similar to AER but operates at the phrase level instead of at the word level. To compute CPER, we define a link in terms of the position of its start and end words in the phrases. For instance, the phrase link (i1, i2, j1, j2) indicates that the English phrase ei1, . . . , ei2 and the FL phrase fj1, . . . , fj2 are consistent with the given alignment. Once we generate the set of phrases PA and PG that are consistent with a given alignment A and a manual alignment G, respectively, we compute precision (Pr), recall (Rc), and CPER as follows:1 Pr = |PA ∩PG| |PA| Rc = |PA ∩PG| |PG| CPER = 1 −2 × Pr × Rc Pr + Rc 1Note that CPER is equal to 1 - F-score. Chinese Arabic Align. CPER-3 CPER-7 CPER-3 CPER-7 SU 63.2 73.3 55.6 67.1 SG 59.5 69.4 52.0 62.6 SI 50.8 69.8 50.7 67.6 SA 40.8 51.6 42.0 54.1 SB 36.8 45.1 36.1 46.6 Table 3: Consistent Phrase Error Rates with Maximum Phrase Lengths of 3 and 7 CPER penalizes incorrect or missing alignment links more severely than AER. While computing AER, an incorrect alignment link reduces the number of correct alignment links by 1, affecting precision and recall slightly. Similarly, if there is a missing link, only the recall is reduced slightly. 
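As an informal illustration (not part of the experimental code used here), both scores can be computed directly from their definitions once the link sets and consistent-phrase sets are available. The sketch below assumes A, S and P are nonempty Python sets of word-index pairs (with S a subset of P) and that PA and PG are sets of (i1, i2, j1, j2) phrase links; the extraction of consistent phrases itself is left aside.

```python
def aer(A, S, P):
    """Link-level precision, recall and alignment error rate."""
    pr = len(A & P) / len(A)
    rc = len(A & S) / len(S)
    err = 1 - (len(A & S) + len(A & P)) / (len(A) + len(S))
    return pr, rc, err

def cper(PA, PG):
    """Phrase-level precision, recall and consistent phrase error rate (1 - F-score)."""
    pr = len(PA & PG) / len(PA)
    rc = len(PA & PG) / len(PG)
    err = 1.0 if pr + rc == 0 else 1 - (2 * pr * rc) / (pr + rc)
    return pr, rc, err
```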
However, when computing CPER, an incorrect or missing alignment link might result in more than one phrase pair being eliminated from or added to the set of phrases. Thus, the impact is more severe on both precision and recall. Figure 1: Sample phrases that are generated from a human alignment and an automated alignment: Gray cells show the alignment links, and rectangles show the possible phrases. In Figure 1, the first box represents a manual alignment and the other two represent automated alignments A. In the case of a missing alignment link (Figure 1b), PA includes 9 valid phrases. For this alignment, AER = 1 −(2 × 2/2 × 2/3)/(2/2 + 2/3) = 0.2 and CPER = 1 −(2 × 5/9 × 5/6)/(5/9 + 5/6) = 0.33. In the case of an incorrect alignment link (Figure 1c), PA includes only 2 valid phrases, which results in a higher CPER (1 −(2 × 2/2 × 2/6)/(2/2 + 2/6) = 0.49) but a lower AER (1 −(2 × 3/4 × 3/3)/(3/4 + 3/3) = 0.14). Table 3 presents the CPER values on two different language pairs, using 2 different maximum phrase lengths. For both maximum phrase lengths, SA and SB yield the lowest CPER. For all 5 alignments—in both languages—CPER increases as the length of the phrase increases. For all alignments except SI, this amount of increase is nearly the same on both languages. Since SI contains very few alignment points, the number of generated phrases dramatically increases, yielding 11 poor precision and CPER as the maximum phrase length increases. 4 Evaluating Alignments within MT We now move from intrinsic measurement to extrinsic measurement using an off-the-shelf phrasebased MT system Pharaoh (Koehn, 2004). Our goal is to identify the characteristics of alignments that change MT behavior and the types of changes induced by these characteristics. All MT system components were kept the same in our experiments except for the component that generates a phrase table from a given alignment. We used the corpora presented in Table 1 to train the MT system. The phrases were scored using translation probabilities and lexical weights in two directions and a phrase penalty score. We also use a language model, a distortion model and a word penalty feature for MT. We measure the impact of different alignments on Pharaoh using three different settings: 1. Different maximum phrase length, 2. Different sizes of training data, and 3. Different lexical weighting. For maximum phrase length, we used 3 (based on what was suggested by (Koehn et al., 2003) and 7 (the default maximum phrase length in Pharaoh). For lexical weighting, we used the original weighting scheme employed in Pharaoh and a modified version. We realized that the publiclyavailable implementation of Pharaoh computes the lexical weights only for non-NULL alignment links. As a consequence, loose phrases containing NULL-aligned words along their edges receive the same lexical weighting as tight phrases without NULL-aligned words along the edges. We therefore adopted a modified weighting scheme following (Koehn et al., 2003), which incorporates NULL alignments. MT output was evaluated using the standard evaluation metric BLEU (Papineni et al., 2002).2 The parameters of the MT System were optimized for BLEU metric on NIST MTEval’2002 test sets using minimum error rate training (Och, 2003), and the systems were tested on NIST MTEval’2003 test sets for both languages. 2We used the NIST script (version 11a) for BLEU with its default settings: case-insensitive matching of n-grams up to n = 4, and the shortest reference sentence for the brevity penalty. 
The words that were not translated during decoding were deleted from the MT output before running the BLEU script. The SRI Language Modeling Toolkit was used to train a trigram model with modified Kneser-Ney smoothing on 155M words of English newswire text, mostly from the Xinhua portion of the Gigaword corpus. During decoding, the number of English phrases per FL phrase was limited to 100 and phrase distortion was limited to 4. 4.1 BLEU Score Comparison Table 4 presents the BLEU scores for Pharaoh runs on Chinese with five different alignments using different settings for maximum phrase length (3 vs. 7), size of training data (107K vs. 241K), and lexical weighting (original vs. modified).3 The modified lexical weighting yields huge improvements when the alignment leaves several words unaligned: the BLEU score for SA goes from 24.26 to 25.31 and the BLEU score for SB goes from 23.91 to 25.38. In contrast, when the alignments contain a high number of alignment links (e.g., SU and SG), modifying lexical weighting does not bring significant improvements because the number of phrases containing unaligned words is relatively low. Increasing the phrase length increases the BLEU scores for all systems by nearly 0.7 points and increasing the size of the training data increases the BLEU scores by 1.5-2 points for all systems. For all settings, SU yields the lowest BLEU scores while SB clearly outperforms the others. Table 5 presents BLEU scores for Pharaoh runs on 5 different alignments on English-Arabic, using different settings for lexical weighting and maximum phrase lengths.4 Using the original lexical weighting, SA and SB perform better than the others while SU and SI yield the worst results. Modifying the lexical weighting leads to slight reductions in BLEU scores for SU and SG, but improves the scores for the other 3 alignments significantly. Finally, increasing the maximum phrase length to 7 leads to additional improvements in BLEU scores, where SG and SU benefit nearly 2 BLEU points. As in English-Chinese, the worst BLEU scores are obtained by SU while the best scores are produced by SB. As we see from the tables, the relation between intrinsic alignment measures (AER and CPER) 3We could not run SB on the larger corpus because of the lack of required inputs. 4Due to lack of additional training data, we could not do experiments using different sizes of training data on EnglishArabic. 12 Original Modified Modified Modified Alignment Max Phr Len = 3 Max Phr Len=3 Max Phr Len=7 Max Phr Len=3 |Corpus| = 107K |Corpus| = 107K |Corpus| = 107K |Corpus| = 241K SU 22.56 22.66 23.30 24.40 SG 23.65 23.79 24.48 25.54 SI 23.60 23.97 24.76 26.06 SA 24.26 25.31 25.99 26.92 SB 23.91 25.38 26.14 N/A Table 4: BLEU Scores on English-Chinese with Different Lexical Weightings, Maximum Phrase Lengths and Training Data LW=Org LW=Mod LW=Mod Alignment MPL=3 MPL=3 MPL=7 SU 41.97 41.72 43.50 SG 44.06 43.82 45.78 SI 42.29 42.76 43.88 SA 44.49 45.23 46.06 SB 44.92 45.39 46.66 Table 5: BLEU Scores on English-Arabic with Different Lexical Weightings and Maximum Phrase Lengths and the corresponding BLEU scores varies, depending on the language, lexical weighting, maximum phrase length, and training data size. For example, using a modified lexical weighting, the systems are ranked according to their BLEU scores as follows: SB, SA, SG, SI, SU—an ordering that differs from that of AER but is identical to that of CPER (with a phrase length of 3) for Chinese. 
On the other hand, in Arabic, both AER and CPER provide a slightly different ranking from that of BLEU, with SG and SI swapping places. 4.2 Tight vs. Loose Phrases To demonstrate how alignment-related components of the MT system might change the translation quality significantly, we did an additional experiment to compare different techniques for extracting phrases from a given alignment. Specifically, we are comparing two techniques for phrase extraction: 1. Loose phrases (the original ‘consistent phrase extraction’ method) 2. Tight phrases (the set of phrases where the first/last words on each side are forced to align to some word in the phrase pair) Using tight phrases penalizes alignments with many unaligned words, whereas using loose phrases rewards them. Our goal is to compare the performance of precision-oriented vs. recalloriented alignments when we allow only tight phrases in the phrase extraction step. To simplify things, we used only 2 alignments: SG, the best recall-oriented alignment, and SB, the best precision-oriented alignment. For this experiment, we used modified lexical weighting and a maximum phrase length of 7. Chinese Arabic Alignment Loose Tight Loose Tight SG 24.48 23.19 45.78 43.67 SB 26.14 22.68 46.66 40.10 Table 6: BLEU Scores with Loose vs. Tight Phrases Table 6 presents the BLEU scores for SG and SB using two different phrase extraction techniques on English-Chinese and English-Arabic. In both languages, SB outperforms SG significantly when loose phrases are used. However, when we use only tight phrases, the performance of SB gets significantly worse (3.5 to 6.5 BLEU-score reduction in comparison to loose phrases). The performance of SG also gets worse but the degree of BLEUscore reduction is less than that of SB. Overall SG performs better than SB with tight phrases; for English-Arabic, the difference between the two systems is more than 3 BLEU points. Note that, as before, the relation between the alignment measures and the BLEU scores varies, this time depending on whether loose phrases or tight phrases are used: both CPER and AER track the BLEU rankings for loose (but not for tight) phrases. This suggests that changing alignment-related components of the system (i.e., phrase extraction and phrase scoring) influences the overall translation quality significantly for a particular alignment. Therefore, when comparing two alignments in the context of a MT system, it is important to take the alignment characteristics into account. For instance, alignments with many unaligned words are severely penalized when using tight phrases. 4.3 Untranslated Words We analyzed the percentage of words left untranslated during decoding. Figure 2 shows the percentage of untranslated words in the FL using the Chinese and Arabic NIST MTEval’2003 test sets. On English-Chinese data (using all four settings given in Table 4) SU and SG yield the highest percentage of untranslated words while SI produces the lowest percentage of untranslated words. SA and SB leave about 2% of the FL words phrases 13 Figure 2: Percentage of untranslated words out of the total number of FL words without translating them. Increasing the training data size reduces the percentage of untranslated words by nearly half with all five alignments. No significant impact on untranslated words is observed from modifying the lexical weights and changing the phrase length. On English-Arabic data, all alignments result in higher percentages of untranslated words than English-Chinese, most likely due to data sparsity. 
As in Chinese-to-English translation, SU is the worst and SB is the best. SI behaves quite differently, leaving nearly 7% of the words untranslated—an indicator of why it produces a higher BLEU score on Chinese but a lower score on Arabic compared to other alignments. 4.4 Analysis of Phrase Tables This section presents several experiments to analyze how different alignments affect the size of the generated phrase tables, the distribution of the phrases that are used in decoding, and the coverage of the test set with the generated phrase tables. Size of Phrase Tables The major impact of using different alignments in a phrase-based MT system is that each one results in a different phrase table. Table 7 presents the number of phrases that are extracted from five alignments using two different maximum phrase lengths (3 vs. 7) in two languages, after filtering the phrase table for MTEval’2003 test set. The size of the phrase table increases dramatically as the number of links in the initial alignment gets smaller. As a result, for both languages, SU and SG yield a much smaller Chinese Arabic Alignment MPL=3 MPL=7 MPL=3 MPL=7 SU 106 122 32 38 SG 161 181 48 55 SI 1331 3498 377 984 SA 954 1856 297 594 SB 876 1624 262 486 Table 7: Number of Phrases in the Phrase Table Filtered for MTEval’2003 Test Sets (in thousands) phrase table than the other three alignments. As the maximum phrase length increases, the size of the phrase table gets bigger for all alignments; however, the growth of the table is more significant for precision-oriented alignments due to the high number of unaligned words. Distribution of Phrases To investigate how the decoder chooses phrases of different lengths, we analyzed the distribution of the phrases in the filtered phrase table and the phrases that were used to decode Chinese MTEval’2003 test set.5 For the remaining experiments in the paper, we use modified lexical weighting, a maximum phrase length of 7, and 107K sentence pairs for training. The top row in Figure 3 shows the distribution of the phrases generated by the five alignments (using a maximum phrase length of 7) according to their length. The “j-i” designators correspond to the phrase pairs with j FL words and i English words. For SU and SG, the majority of the phrases contain only one FL word, and the percentage of the phrases with more than 2 FL words is less than 18%. For the other three alignments, however, the distribution of the phrases is almost inverted. For SI, nearly 62% of the phrases contain more than 3 words on either FL or English side; for SA and SB, this percentage is around 45-50%. Given the completely different phrase distribution, the most obvious question is whether the longer phrases generated by SI, SA and SB are actually used in decoding. In order to investigate this, we did an analysis of the phrases used to decode the same test set. The bottom row of Figure 3 shows the percentage of phrases used to decode the Chinese MTEval’2003 test set. The distribution of the actual phrases used in decoding is completely the reverse of the distribution of the phrases in the entire filtered table. For all five alignments, the majority of the used phrases is one-to-one (between 5Due to lack of space, we will present results on ChineseEnglish only in the rest of this paper but the Arabic-English results show the same trends. 
14 Figure 3: Distribution of the phrases in the phrase table filtered for Chinese MTEval’2003 test set (top row) and the phrases used in decoding the same test set (bottom row) according to their lengths 50-65% of the total number of phrases used in decoding). SI, SA and SB use the other phrase pairs (particularly 1-to-2 phrases) more than SU and SG. Note that SI, SA and SB use only a small portion of the phrases with more than 3 words although the majority of the phrase table contains phrases with more than 3 words on one side. It is surprising that the inclusion of phrase pairs with more than 3 words in the search space increases the BLEU score although the majority of the phrases used in decoding is mostly one-to-one. Length of the Phrases used in Decoding We also investigated the number and length of phrases that are used to decode the given test set for different alignments. Table 8 presents the average number of English and FL words in the phrases used in decoding Chinese MTEval’2003 test set. The decoder uses fewer phrases with SI, SA and SB than for the other two, thus yielding a higher number of FL words per phrase. The number of English words per phrase is also higher for these three systems than the other two. Coverage of the Test Set Finally, we examine the coverage of a test set using phrases of a specific length in the phrase table. Table 9 presents Alignment |Eng| |FL| SU 1.39 1.28 SG 1.45 1.33 SI 1.51 1.55 SA 1.54 1.55 SB 1.56 1.52 Table 8: The average length of the phrases that are used in decoding Chinese MTEval’2003 test set the coverage of the Chinese MTEval’2003 test set (source side) using only phrases of a particular length (from 1 to 7). For this experiment, we assume that a word in the test set is covered if it is part of a phrase pair that exists in the phrase table (if a word is part of multiple phrases, it is counted only once). Not surprisingly, using only phrases with one FL word, more than 90% of the test set can be covered for all 5 alignments. As the length of the phrases increases, the coverage of the test set decreases. For instance, using phrases with 5 FL words results in less than 5% coverage of the test set. Phrase Length (FL) A 1 2 3 4 5 6 7 SU 92.2 59.5 21.4 6.7 1.3 0.4 0.1 SG 95.5 64.4 24.9 7.4 1.6 0.5 0.3 SI 97.8 75.8 38.0 13.8 4.6 1.9 1.2 SA 97.3 75.3 36.1 12.5 3.8 1.5 0.8 SB 97.5 74.8 35.7 12.4 4.2 1.8 0.9 Table 9: Coverage of Chinese MTEval’2003 Test Set Using Phrases with a Specific Length on FL side (in percentages) Table 9 reveals that the coverage of the test set is higher for precision-oriented alignments than recall-oriented alignments for all different lengths of the phrases. For instance, SI, SA, and SB cover nearly 75% of the corpus using only phrases with 2 FL words, and nearly 36% of the corpus using phrases with 3 FL words. This suggests that recalloriented alignments fail to catch a significant number of phrases that would be useful to decode this test set, and precision-oriented alignments would yield potentially more useful phrases. Since precision-oriented alignments make a higher number of longer phrases available to the decoder (based on the coverage of phrases presented in Table 9), they are used more during decoding. Consequently, the major difference between the alignments is the coverage of the phrases extracted from different alignments. 
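A possible way to compute the coverage figures of Table 9 (ours, shown only to make the counting convention explicit) is sketched below; it assumes test sentences are lists of FL tokens and that the source sides of the filtered phrase table are available as a set of token tuples of the length under consideration.

```python
def coverage(sentences, source_phrases, length):
    """Percentage of test-set words covered by source phrases of a given length."""
    covered = total = 0
    for sent in sentences:
        hit = set()
        for i in range(len(sent) - length + 1):
            if tuple(sent[i:i + length]) in source_phrases:
                hit.update(range(i, i + length))   # each word is counted only once
        covered += len(hit)
        total += len(sent)
    return 100.0 * covered / total
```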
The more the phrase table covers the test set, the more the longer phrases are used during decoding, and precision-oriented alignments are better at generating high-coverage phrases than recall-oriented alignments. 15 5 Conclusions and Future Work This paper investigated how different alignments change the behavior of phrase-based MT. We showed that AER is a poor indicator of MT performance because it penalizes incorrect links less than is reflected in the corresponding phrasebased MT. During phrase-based MT, an incorrect alignment link might prevent extraction of several phrases, but the number of phrases affected by that link depends on the context. We designed CPER, a new phrase-oriented metric that is more informative than AER when the alignments are used in a phrase-based MT system because it is an indicator of how the set of phrases differ from one alignment to the next according to a pre-specified maximum phrase length. Even with refined evaluation metrics (including CPER), we found it difficult to assess the impact of alignment on MT performance because word alignment is not the only factor that affects the choice of the correct words (or phrases) during decoding. We empirically showed that different phrase extraction techniques result in better MT output for certain alignments but the MT performance gets worse for other alignments. Similarly, adjusting the scores assigned to the phrases makes a significant difference for certain alignments while it has no impact on some others. Consequently, when comparing two BLEU scores, it is difficult to determine whether the alignments are bad to start with or the set of extracted phrases is bad or the phrases extracted from the alignments are assigned bad scores. This suggests that finding a direct correlation between AER (or even CPER) and the automated MT metrics is infeasible. We demonstrated that recall-oriented alignment methods yield smaller phrase tables and a higher number of untranslated words when compared to precision-oriented methods. We also showed that the phrases extracted from recall-oriented alignments cover a smaller portion of a given test set when compared to precision-oriented alignments. Finally, we showed that the decoder with recalloriented alignments uses shorter phrases more frequently as a result of unavailability of longer phrases that are extracted. Future work will involve an investigation into how the phrase extraction and scoring should be adjusted to take the nature of the alignment into account and how the phrase-table size might be reduced without sacrificing the MT output quality. Acknowledgments This work has been supported, in part, under ONR MURI Contract FCPO.810548265 and the GALE program of the Defense Advanced Research Projects Agency, Contract No. HR0011-06-2-0001. We also thank Adam Lopez for his very helpful comments on earlier drafts of this paper. References Necip F. Ayan, Bonnie J. Dorr, and Christof Monz. 2005. Neuralign: Combining word alignments using neural networks. In Proceedings of EMNLP’2005, pages 65–72. Stanjeev Banerjee and Alon Lavie. 2005. Meteor: An automatic metric for MT evaluation with improved correlation with human judgments. In Proceedings of Workshop on Intrinsic and Extrinsic Evaluation Measures for MT and/or Summarization at ACL-2005. Peter F. Brown, Stephan A. Della Pietra, and Robert L. Mercer. 1993. The mathematics of statistical machine translation: Parameter estimation. Computational Linguistics, 19(2):263–311. Chris Callison-Burch, David Talbot, and Miles Osborne. 
2004. Statistical machine translation with word- and sentence-aligned parallel corpora. In Proceedings of ACL’2004. David Chiang. 2005. A hierarchical phrase-based model for statistical machine translation. In Proceedings of ACL’2005. Cyril Goutte, Kenji Yamada, and Eric Gaussier. 2004. Aligning words using matrix factorisation. In Proceedings of ACL’2004, pages 502–509. Abraham Ittycheriah and Salim Roukos. 2005. A maximum entropy word aligner for arabic-english machine translation. In Proceedings of EMNLP’2005. Philipp Koehn, Franz J. Och, and Daniel Marcu. 2003. Statistical phrase-based translation. In Proceedings of HLTNAACL’2003. Philipp Koehn. 2004. Pharaoh: A beam search decoder for phrase-based statistical machine translation. In Proceedings of AMTA’2004. Daniel Marcu and William Wong. 2002. A phrase-based, joint probability model for statistical machine translation. In Proceedings of EMNLP’2002. I. Dan Melamed. 2000. Models of translational equivalence among words. Computational Linguistics, 26(2):221– 249. Robert C. Moore. 2005. A discriminative framework for bilingual word alignment. In Proceedings of EMNLP’2005. Franz J. Och and Hermann Ney. 2000. A comparison of alignment models for statistical machine translation. In Proceedings of COLING’2000. Franz J. Och. 2000b. GIZA++: Training of statistical translation models. Technical report, RWTH Aachen, University of Technology. Franz J. Och and Hermann Ney. 2003. A systematic comparison of various statistical alignment models. Computational Linguistics, 29(1):9–51, March. Franz J. Och. 2003. Minimum error rate training in statistical machine translation. In Proceedings of ACL’2003. Kishore Papineni, Salim Roukos, Todd Ward, and Wei-Jing Zhu. 2002. BLEU: a method for automatic evaluation of machine translation. In Proceedings of ACL’2002. Ben Taskar, Simon Lacoste-Julien, and Dan Klein. 2005. A discriminative matching approach to word alignment. In Proceedings of EMNLP’2005. Stefan Vogel, Hermann Ney, and Christoph Tillmann. 1996. HMM-based word alignment in statistical translation. In Proceedings of COLING’1996, pages 836–841. 16
Proceedings of the 21st International Conference on Computational Linguistics and 44th Annual Meeting of the ACL, pages 153–160, Sydney, July 2006. c⃝2006 Association for Computational Linguistics Morphology-Syntax Interface for Turkish LFG ¨Ozlem C¸ etino˘glu Faculty of Engineering and Natural Sciences Sabancı University 34956, Istanbul, Turkey [email protected] Kemal Oflazer Faculty of Engineering and Natural Sciences Sabancı University 34956, Istanbul, Turkey [email protected] Abstract This paper investigates the use of sublexical units as a solution to handling the complex morphology with productive derivational processes, in the development of a lexical functional grammar for Turkish. Such sublexical units make it possible to expose the internal structure of words with multiple derivations to the grammar rules in a uniform manner. This in turn leads to more succinct and manageable rules. Further, the semantics of the derivations can also be systematically reflected in a compositional way by constructing PRED values on the fly. We illustrate how we use sublexical units for handling simple productive derivational morphology and more interesting cases such as causativization, etc., which change verb valency. Our priority is to handle several linguistic phenomena in order to observe the effects of our approach on both the c-structure and the f-structure representation, and grammar writing, leaving the coverage and evaluation issues aside for the moment. 1 Introduction This paper presents highlights of a large scale lexical functional grammar for Turkish that is being developed in the context of the ParGram project1 In order to incorporate in a manageable way, the complex morphology and the syntactic relations mediated by morphological units, and to handle lexical representations of very productive derivations, we have opted to develop the grammar using sublexical units called inflectional groups. Inflectional groups (IGs hereafter) represent the inflectional properties of segments of a complex 1http://www2.parc.com/istl/groups/nltt/ pargram/ word structure separated by derivational boundaries. An IG is typically larger than a morpheme but smaller than a word (except when the word has no derivational morphology in which case the IG corresponds to the word). It turns out that it is the IGs that actually define syntactic relations between words. A grammar for Turkish that is based on words as units would have to refer to information encoded at arbitrary positions in words, making the task of the grammar writer much harder. On the other hand, treating morphemes as units in the grammar level implies that the grammar will have to know about morphotactics making either the morphological analyzer redundant, or repeating the information in the morphological analyzer at the grammar level which is not very desirable. IGs bring a certain form of normalization to the lexical representation of a language like Turkish, so that units in which the grammar rules refer to are simple enough to allow easy access to the information encoded in complex word structures. That IGs delineate productive derivational processes in words necessitates a mechanism that reflects the effect of the derivations to semantic representations and valency changes. For instance, English LFG (Kaplan and Bresnan, 1982) represents derivations as a part of the lexicon; both happy and happiness are separately lexicalized. 
Lexicalized representations of adjectives such as easy and easier are related, so that both lexicalized and phrasal comparatives would have the same feature structure; easier would have the feature structure (1)      PRED ‘easy’ ADJUNCT  PRED ‘more’  DEG-DIM pos DEGREE comparative       Encoding derivations in the lexicon could be applicable for languages with relatively unproductive derivational phenomena, but it certainly is not 153 possible to represent in the grammar lexicon,2 all derived forms as lexemes for an agglutinative language like Turkish. Thus one needs to incorporate such derivational processes in a principled way along with the computation of the effects on derivations on the representation of the semantic information. Lexical functional grammar (LFG) (Kaplan and Bresnan, 1982) is a theory representing the syntax in two parallel levels: Constituent structures (c-structures) have the form of context-free phrase structure trees. Functional structures (f-structures) are sets of pairs of attributes and values; attributes may be features, such as tense and gender, or functions, such as subject and object. C-structures define the syntactic representation and f-structures define more semantic representation. Therefore c-structures are more language specific whereas f-structures of the same phrase for different languages are expected to be similar to each other. The remainder of the paper is organized as follows: Section 2 reviews the related work both on Turkish, and on issues similar to those addressed in this paper. Section 3 motivates and presents IGs while Section 4 explains how they are employed in a LFG setting. Section 5 summarizes the architecture and the current status of the our system. Finally we give conclusions in Section 6. 2 Related Work G¨ung¨ord¨u and Oflazer (1995) describes a rather extensive grammar for Turkish using the LFG formalism. Although this grammar had a good coverage and handled phenomena such as freeconstituent order, the underlying implementation was based on pseudo-unification. But most crucially, it employed a rather standard approach to represent the lexical units: words with multiple nested derivations were represented with complex nested feature structures where linguistically relevant information could be embedded at unpredictable depths which made access to them in rules extremely complex and unwieldy. Bozs¸ahin (2002) employed morphemes overtly as lexical units in a CCG framework to account for a variety of linguistic phenomena in a prototype implementation. The drawback was that morphotactics was explicitly raised to the level of the sentence grammar, hence the categorial lexicon accounted for both constituent order and the morpheme order with no distinction. Oflazer’s dependency parser (2003) used IGs as units between which dependency relations were established. Another parser based on IGs is Eryi˘git and Oflazer’s 2We use this term to distinguish the lexicon used by the morphological analyzer. (2006) statistical dependency parser for Turkish. C¸ akıcı (2005), used relations between IG-based representations encoded within the Turkish Treebank (Oflazer et al., 2003) to automatically induce a CCG grammar lexicon for Turkish. In a more general setting, Butt and King (2005) have handled the morphological causative in Urdu as a separate node in c-structure rules using LFG’s restriction operator in semantic construction of causatives. 
Their approach is quite similar to ours yet differs in an important way: the rules explicitly use morphemes as constituents so it is not clear if this is just for this case, or all morphology is handled at the syntax level. 3 Inflectional Groups as Sublexical Units Turkish is an agglutinative language where a sequence of inflectional and derivational morphemes get affixed to a root (Oflazer, 1994). At the syntax level, the unmarked constituent order is SOV, but constituent order may vary freely as demanded by the discourse context. Essentially all constituent orders are possible, especially at the main sentence level, with very minimal formal constraints. In written text however, the unmarked order is dominant at both the main sentence and embedded clause level. Turkish morphotactics is quite complicated: a given word form may involve multiple derivations and the number of word forms one can generate from a nominal or verbal root is theoretically infinite. Turkish words found in typical text average about 3-4 morphemes including the stem, with an average of about 1.23 derivations per word, but given that certain noninflecting function words such as conjuctions, determiners, etc. are rather frequent, this number is rather close to 2 for inflecting word classes. Statistics from the Turkish Treebank indicate that for sentences ranging between 2 words to 40 words (with an average of about 8 words), the number of IGs range from 2 to 55 IGs (with an average of 10 IGs per sentence) (Eryi˘git and Oflazer, 2006). The morphological analysis of a word can be represented as a sequence of tags corresponding to the morphemes. In our morphological analyzer output, the tag ˆDB denotes derivation boundaries that we also use to define IGs. If we represent the morphological information in Turkish in the following general form: root+IG DB+IG  DB+ DB+IG . then each IG denotes the relevant sequence of inflectional features including the part-of-speech for the root (in IG ) and for any of the derived forms. A given word may have multiple such representations depending on any morphological ambiguity brought about by alternative segmentations of the 154 Figure 1: Modifier-head relations in the NP eski kitaplarımdaki hikayeler word, and by ambiguous interpretations of morphemes. For instance, the morphological analysis of the derived modifier cezalandırılacak (literally, “(the one) that will be given punishment”) would be :3 ceza(punishment)+Noun+A3sg+Pnon+Nom ˆDB+Verb+Acquire ˆDB+Verb+Caus ˆDB+Verb+Pass+Pos ˆDB+Adj+FutPart+Pnon The five IGs in this word are: 1. +Noun+A3sg+Pnon+Nom 2. +Verb+Acquire 3. +Verb+Caus 4. +Verb+Pass+Pos 5. +Adj+FutPart+Pnon The first IG indicates that the root is a singular noun with nominative case marker and no possessive marker. The second IG indicates a derivation into a verb whose semantics is “to acquire” the preceding noun. The third IG indicates that a causative verb (equivalent to “to punish” in English), is derived from the previous verb. The fourth IG indicates the derivation of a passive verb with positive polarity from the previous verb. Finally the last IG represents a derivation into future participle which will function as a modifier in the sentence. 
The simple phrase eski kitaplarımdaki hikayeler (the stories in my old books) in Figure 1 will help clarify how IGs are involved in syntactic relations: Here, eski (old) modifies kitap (book) and not hikayeler (stories),4 and the locative phrase eski 3The morphological features other than the obvious partof-speech features are: +A3sg: 3sg number-person agreement, +Pnon: no possesive agreement, +Nom: Nominative case, +Acquire: acquire verb, +Caus: causative verb, +Pass: passive verb, +FutPart: Derived future participle, +Pos: Positive Polarity. 4Though looking at just the last POS of the words one sees an +Adj +Adj +Noun sequence which may imply that both adjectives modify the noun hikayeler kitaplarımda (in my old books) modifies hikayeler with the help of derivational suffix -ki. Morpheme boundaries are represented by ’+’ sign and morphemes in solid boxes actually define one IG. The dashed box around solid boxes is for word boundary. As the example indicates, IGs may consist of one or more morphemes. Example (2) shows the corresponding fstructure for this NP. Supporting the dependency representation in Figure 1, f-structure of adjective eski is placed as the adjunct of kitaplarımda, at the innermost level. The semantics of the relative suffix -ki is shown as ’rel OBJ ’ where the fstructure that represents the NP eski kitaplarımda is the OBJ of the derived adjective. The new fstructure with a PRED constructed on the fly, then modifies the noun hikayeler. The derived adjective behaves essentially like a lexical adjective. The effect of using IGs as the representative units can be explicitly seen in c-structure where each IG corresponds to a separate node as in Example (3).5 Here, DS stands for derivational suffix. (2)                PRED ‘hikaye’ ADJUNCT          PRED ‘relkitap’ OBJ     PRED ‘kitap’ ADJUNCT PRED ‘eski’ ATYPE attributive  CASE loc, NUM pl      ATYPE attributive           CASE NOM, NUM PL                 (3) NP     AP     NP       AP A eski NP N kitaplarımda DS ki NP N hikayeler Figure 2 shows the modifier-head relations for a more complex example given in Example (4) where we observe a chain/hierarchy of relations between IGs (4) mavi blue renkli color-WITH elbiselideki dress-WITH-LOC-REL kitap book 5Note that placing the sublexical units of a word in separate nodes goes against the Lexical Integrity principle of LFG (Dalrymple, 2001). The issue is currently being discussed within the LFG community (T. H. King, personal communication). 155 ‘the book on the one with the blue colored dress’ Figure 2: Syntactic Relations in the NP mavi renkli elbiselideki kitap Examples (5) and (6) show respectively the constituent structure (c-structure) and the corresponding feature structure (f-structure) for this noun phrase. Within the tree representation, each IG corresponds to a separate node. Thus, the LFG grammar rules constructing the c-structures are coded using IGs as units of parsing. If an IG contains the root morpheme of a word, then the node corresponding to that IG is named as one of the syntactic category symbols. The rest of the IGs are given the node name DS (to indicate derivational suffix), no matter what the content of the IG is. The semantic representation of derivational suffixes plays an important role in f-structure construction. 
In almost all cases, each derivation that is induced by an overt or a covert affix gets a OBJ feature which is then unified with the f-structure of the preceding stem already constructed, to obtain the feature structure of the derived form, with the PRED of the derived form being constructed on the fly. A PRED feature thus constructed however is not meant to necessarily have a precise lexical semantics. Most derivational suffixes have a consistent (lexical) semantics6, but some don’t, that is, the precise additional lexical semantics that the derivational suffix brings in, depends on the stem it is affixed to. Nevertheless, we represent both cases in the same manner, leaving the determination of the precise lexical semantics aside. If we consider Figure 2 in terms of dependency relations, the adjective mavi (blue) modifies the noun renk (color) and then the derivational suffix -li (with) kicks in although the -li is attached to renk only. Therefore, the semantics of the phrase should be with(blue color), not blue with(color). With the approach we take, this difference can easily be represented in both the fstructure as in the leftmost branch in Example (5) 6e.g., the “to acquire” example earlier and the c-structure as in the middle ADJUNCT f-structure in Example (6). Each DS in c-structure gives rise to an OBJject in c-structure. More precisely, a derived phrase is always represented as a binary tree where the right daughter is always a DS. In f-structure DS unifies with the mother fstructure and inserts PRED feature which subcategorizes for a OBJ. The left daughter of the binary tree is the original form of the phrase that is derived, and it unifies with the OBJ of the mother f-structure. (5) NP       AP       NP       AP       NP       AP       NP     AP A mavi NP N renk DS li NP N elbise DS li DS de DS ki NP N kitap 4 Inflectional Groups in Practice We have already seen how the IGs are used to construct on the fly PRED features that reflect the lexical semantics of the derivation. In this section we describe how we handle phenomena where the derivational suffix in question does not explicitly affect the semantic representation in PRED feature but determines the semantic role so as to unify the derived form or its components with the appropriate external f-structure. 4.1 Sentential Complements and Adjuncts, and Relative Clauses In Turkish, sentential complements and adjuncts are marked by productive verbal derivations into nominals (infinitives, participles) or adverbials, while relative clauses with subject and non-subject (object or adjunct) gaps are formed by participles which function as adjectivals modifying a head noun. Example (7) shows a simple sentence that will be used in the following examples. 156 (6)                                     PRED ‘kitap’ ADJUNCT                               PRED ‘relzero-deriv’ OBJ                          PRED ‘zero-derivwith’ OBJ                    PRED ‘withelbise’ OBJ               PRED ‘elbise’ ADJUNCT         PRED ‘withrenk’ OBJ    PRED ‘renk’ ADJUNCT  PRED ‘mavi’  CASE nom, NUM sg, PERS 3     ATYPE attributive          CASE nom, NUM sg, PERS 3                ATYPE attributive                     CASE loc, NUM sg, PERS 3                           ATYPE attributive                                CASE NOM, NUM SG, PERS 3                                      (7) Kız Girl-NOM adamı man-ACC aradı. 
call-PAST ‘The girl called the man’ In (8), we see a past-participle form heading a sentential complement functioning as an object for the verb s¨oyledi (said). (8) Manav Grocer-NOM kızın girl-GEN adamı man-ACC aradı˘gını call-PASTPART-ACC s¨oyledi. say-PAST ‘The grocer said that the girl called the man’ Once the grammar encounters such a sentential complement, everything up to the participle IG is parsed, as a normal sentence and then the participle IG appends nominal features, e.g., CASE, to the existing f-structure. The final f-structure is for a noun phrase, which now is the object of the matrix verb, as shown in Example (9). Since the participle IG has the right set of syntactic features of a noun, no new rules are needed to incorporate the derived f-structure to the rest of the grammar, that is, the derived phrase can be used as if it is a simple NP within the rules. The same mechanism is used for all kinds of verbal derivations into infinitives, adverbial adjuncts, including those derivations encoded by lexical reduplications identified by multi-word construct processors. (9)                               PRED ‘s¨oylemanav, ara’ SUBJ PRED ‘manav’ CASE nom, NUM sg, PERS 3  OBJ                PRED ‘arakz, adam’ SUBJ PRED ‘kz’ CASE gen, NUM sg, PERS 3  OBJ PRED ‘adam’ CASE acc, NUM sg, PERS 3  CHECK  PART pastpart CASE acc, NUM sg, PERS 3, VTYPE main CLAUSE-TYPE nom                 TNS-ASP  TENSE past NUM SG, PERS 3, VTYPE MAIN                                Relative clauses also admit to a similar mechanism. Relative clauses in Turkish are gapped sentences which function as modifiers of nominal heads. Turkish relative clauses have been previously studied (Barker et al., 1990; G¨ung¨ord¨u and Engdahl, 1998) and found to pose interesting issues for linguistic and computational modeling. Our aim here is not to address this problem in its generality but show with a simple example, how our treatment of IGs encoding derived forms handle the mechanics of generating f-structures for such cases. Kaplan and Zaenen (1988) have suggested a general approach for handling long distance dependencies. They have extended the LFG notation and allowed regular expressions in place of simple attributes within f-structure constraints so that phenomena requiring infinite disjunctive enumeration can be described with a finite expression. We basically follow this approach and once we derive the participle phrase we unify it with the appropriate argument of the verb using rules based on functional uncertainty. Example (10) shows a relative clause where a participle form is used as a modifier of a head noun, adam in this case. (10) Manavın Grocer-GEN kızın girl-GEN []  obj-gap aradı˘gını call-PASTPART-ACC s¨oyledi˘gi say-PASTPART adam  man-NOM ‘The man the grocer said the girl called’ This time, the sentence is parsed with a gap with an appropriate functional uncertainty constraint, and when the participle IG is encountered the sentence f-structure is derived into an adjective and the gap in the derived form, the object here, is then unified with the head word as marked with co-indexation in Example (11). The example sentence (10) includes Example (8) as a relative clause with the object extracted, hence the similarity in the f-structures can be observed easily. The ADJUNCT in Example (11) 157 is almost the same as the whole f-structure of Example (9), differing in TNS-ASP and ADJUNCTTYPE features. 
At the grammar level, both the relative clause and the complete sentence is parsed with the same core sentence rule. To understand whether the core sentence is a complete sentence or not, the finite verb requirement is checked. Since the requirement is met by the existence of TENSE feature, Example (8) is parsed as a complete sentence. Indeed the relative clause also includes temporal information as ‘pastpart’ value of PART feature, of the ADJUNCT f-structure, denoting a past event. (11)                                      PRED ’adam’ ADJUNCT                                PRED ‘s¨oylemanav, ara’ SUBJ PRED ‘manav’ CASE gen, NUM sg, PERS 3 OBJ               PRED ‘arakz, adam’ SUBJ PRED ‘kz’ CASE gen, NUM sg, PERS 3 OBJ  PRED ‘adam’  CHECK  PART pastpart  CASE acc, NUM sg, PERS 3, VTYPE main CLAUSE-TYPE nom                CHECK  PART pastpart  NUM sg, PERS 3, VTYPE main ADJUNCT-TYPE relative                                 CASE NOM, NUM SG, PERS 3                                       4.2 Causatives Turkish verbal morphotactics allows the production multiply causative forms for verbs.7 Such verb formations are also treated as verbal derivations and hence define IGs. For instance, the morphological analysis for the verb aradı (s/he called) is ara+Verb+Pos+Past+A3sg and for its causative arattı (s/he made (someone else) call) the analysis is ara+VerbˆDB+Verb+Caus+Pos+Past+A3sg. In Example (12) we see a sentence and its causative form followed by respective f-structures for these sentences in Examples (13) and (14). The detailed morphological analyses of the verbs are given to emphasize the morphosyntactic relation between the bare and causatived versions of the verb. (12) a. Kız Girl-NOM adamı man-ACC aradı. call-PAST ‘The girl called the man’ b. Manav Grocer-NOM kıza girl-DAT adamı man-ACC arattı. call-CAUS-PAST ‘The grocer made the girl call the man’ 7Passive, reflexive, reciprocal/collective verb formations are also handled in morphology, though the latter two are not productive due to semantic constraints. On the other hand it is possible for a verb to have multiple causative markers, though in practice 2-3 seem to be the maximum observed. (13)              PRED ‘arakz, adam’ SUBJ PRED ‘kz’ CASE nom, NUM sg, PERS 3  OBJ PRED ‘adam’ CASE acc, NUM sg, PERS 3  TNS-ASP  TENSE past NUM SG, PERS 3,VTYPE MAIN               (14)                               PRED ‘causmanav, kz, adam, arakz , adam’ SUBJ  PRED ‘manav’ OBJ  PRED ‘kz’ OBJTH  PRED ‘adam’  XCOMP          PRED ‘arakz , adam’ SUBJ PRED ‘kz’ CASE dat, NUM sg, PERS 3  OBJ PRED ‘adam’ CASE acc, NUM sg, PERS 3   VTYPE main           TNS-ASP  TENSE past NUM SG, PERS 3,VTYPE MAIN                                The end-result of processing an IG which has a verb with a causative form is to create a larger fstructure whose PRED feature has a SUBJect, an OBJect and a XCOMPlement. The f-structure of the first verb is the complement in the f-structure of the causative form, that is, its whole structure is embedded into the mother f-structure in an encapsulated way. The object of the causative (causee - that who is caused by the causer – the subject of the causative verb) is unified with the subject the inner f-structure. If the original verb is transitive, the object of the original verb is further unified with the OBJTH of the causative verb. 
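A minimal sketch of this construction step is given below, with f-structures encoded as plain dictionaries and a hypothetical causativize helper; the actual grammar states these constraints as functional equations rather than procedural code.

```python
# Illustrative sketch (not the actual grammar rules): building the f-structure
# of a causativized verb from that of its base verb. The base f-structure
# becomes the XCOMP, the causee is unified with the embedded SUBJ, and for a
# transitive base verb the original object is shared with OBJTH.
def causativize(base_fs, causer_fs, causee_fs):
    xcomp = dict(base_fs)
    xcomp["SUBJ"] = causee_fs                 # causee acts as subject of the base verb
    caus_fs = {"SUBJ": causer_fs,             # the causer
               "OBJ": causee_fs,              # the causee, shared with XCOMP's SUBJ
               "XCOMP": xcomp}
    args = [causer_fs["PRED"], causee_fs["PRED"]]
    if "OBJ" in base_fs:                      # transitive base verb:
        caus_fs["OBJTH"] = base_fs["OBJ"]     # its object surfaces as OBJTH
        args.append(base_fs["OBJ"]["PRED"])
    caus_fs["PRED"] = "caus<" + ", ".join(args + [base_fs["PRED"]]) + ">"
    return caus_fs


# cf. Example (12): "kız adamı aradı" -> "manav kıza adamı arattı"
ara = {"PRED": "ara<kız, adam>", "OBJ": {"PRED": "adam", "CASE": "acc"}}
print(causativize(ara, {"PRED": "manav"}, {"PRED": "kız", "CASE": "dat"}))
# PRED becomes 'caus<manav, kız, adam, ara<kız, adam>>' as in Example (14)
```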
All of grammatical functions in the inner f-structure, namely XCOMP, are also represented in the mother f-structure and are placed as arguments of caus since the flat representation is required to enable free word order in sentence level. Though not explicit in the sample f-structures, the important part is unifying the object and former subject with appropriate case markers, since the functions of the phrases in the sentence are decided with the help of case markers due to free word order. If the verb that is causativized subcategorizes for an direct object in accusative case, after causative formation, the new object unified with the subject of the causativized verb should be in dative case (Example 15). But if the verb in question subcategorizes for a dative or an ablative oblique object, then this object will be transformed into a direct object in accusative case after causativization (Example 16). That is, the causativation will select the case of the object of the causative verb, so as not to “interfere” with the object of the verb that is causativized. In causativized intransitive verbs the causative object is always in accusative case. 158 (15) a. adam man-NOM kadını woman-ACC aradı. call-PAST ‘the man called the woman’ b. adama man-DAT kadını woman-ACC arattı. call-CAUS-PAST ‘(s/he) made the man call the woman’ (16) a. adam man-NOM kadına woman-DAT vurdu. hit-PAST ‘the man hit the woman’ b. adamı man-ACC kadına woman-DAT vurdurdu. hit-CAUS-PAST ‘(s/he) made the man hit the woman’ All other derivational phenomena can be solved in a similar way by establishing the appropriate semantic representation for the derived IG and its effect on the semantic representation. 5 Current Implementation The implementation of the Turkish LFG grammar is based on the Xerox Linguistic Environment (XLE) (Maxwell III and Kaplan, 1996), a grammar development platform that facilitates the integration of various modules, such as tokenizers, finite-state morphological analyzers, and lexicons. We have integrated into XLE, a series of finite state transducers for morphological analysis and for multi-word processing for handling lexicalized, semi-lexicalized collocations and a limited form of non-lexicalized collocations. The finite state modules provide the relevant ambiguous morphological interpretations for words and their split into IGs, but do not provide syntactically relevant semantic and subcategorization information for root words. Such information is encoded in a lexicon of root words on the grammar side. The grammar developed so far addresses many important aspects ranging from free constituent order, subject and non-subject extractions, all kinds of subordinate clauses mediated by derivational morphology and has a very wide coverage NP subgrammar. As we have also emphasized earlier, the actual grammar rules are oblivious to the source of the IGs, so that the same rule handles an adjective - noun phrase regardless of whether the adjective is lexical or a derived one. So all such relations in Figure 28 are handled with the same phrase structure rule. The grammar is however lacking the treatment of certain interesting features of Turkish such as suspended affixation (Kabak, 2007) in which the inflectional features of the last element in a coordination have a phrasal scope, that is, all other 8Except the last one which requires some additional treatment with respect to definiteness. 
coordinated constituents have certain default features which are then “overridden” by the features of the last element in the coordination. A very simple case of such suspended affixation is exemplified in (17a) and (17b). Note that although this is not due to derivational morphology that we have emphasized in the previous examples, it is due to a more general nature of morphology in which affixes may have phrasal scopes. (17) a. kız girl adam man-NOM ve and kadını woman-ACC aradı. call-PAST ‘the girl called the man and the woman’ b. kız girl [adam [man ve and kadın]-ı woman]-ACC aradı. call-PAST ‘the girl called the man and the woman’ Suspended affixation is an example of a phenomenon that IGs do not seem directly suitable for. The unification of the coordinated IGs have to be done in a way in which non-default features of the final constituent is percolated to the upper node in the tree as is usually done with phrase structure grammars but unlike coordination is handled in such grammars. 6 Conclusions and Future Work This paper has described the highlights of our work on developing a LFG grammar for Turkish employing sublexical constituents, that we have called inflectional groups. Such a sublexical constituent choice has enabled us to handle the very productive derivational morphology in Turkish in a rather principled way and has made the grammar more or less oblivious to morphological complexity. Our current and future work involves extending the coverage of the grammar and lexicon as we have so far included in the grammar lexicon only a small subset of the root lexicon of the morphological analyzer, annotated with the semantic and subcategorization features relevant to the linguistic phenomena that we have handled. We also intend to use the Turkish Treebank (Oflazer et al., 2003), as a resource to extract statistical information along the lines of Frank et al. (2003) and O’Donovan et al. (2005). Acknowledgement This work is supported by TUBITAK (The Scientific and Technical Research Council of Turkey) by grant 105E021. 159 References Chris Barker, Jorge Hankamer, and John Moore, 1990. Grammatical Relations, chapter Wa and Ga in Turkish. CSLI. Cem Bozs¸ahin. 2002. The combinatory morphemic lexicon. Computational Linguistics, 28(2):145–186. Miriam Butt and Tracey Holloway King. 2005. Restriction for morphological valency alternations: The Urdu causative. In Proceedings of The 10th International LFG Conference, Bergen, Norway. CSLI Publications. Ruken C¸ akıcı. 2005. Automatic induction of a CCG grammar for Turkish. In Proceedings of the ACL Student Research Workshop, pages 73–78, Ann Arbor, Michigan, June. Association for Computational Linguistics. Mary Dalrymple. 2001. Lexical Functional Grammar, volume 34 of Syntax and Semantics. Academic Press, New York. G¨uls¸en Eryi˘git and Kemal Oflazer. 2006. Statistical dependency parsing for turkish. In Proceedings of EACL 2006 - The 11th Conference of the European Chapter of the Association for Computational Linguistics, Trento, Italy. Association for Computational Linguistics. Anette Frank, Louisa Sadler, Josef van Genabith, and Andy Way. 2003. From treebank resources to LFG f-structures:automatic f-structure annotation of treebank trees and CFGs extracted from treebanks. In Anne Abeille, editor, Treebanks. Kluwer Academic Publishers, Dordrecht. Zelal G¨ung¨ord¨u and Elisabeth Engdahl. 1998. A relational approach to relativization in Turkish. In Joint Conference on Formal Grammar, HPSG and Categorial Grammar, Saarbr¨ucken, Germany, August. 
Zelal G¨ung¨ord¨u and Kemal Oflazer. 1995. Parsing Turkish using the Lexical Functional Grammar formalism. Machine Translation, 10(4):515–544. Barıs¸ Kabak. 2007. Turkish suspended affixation. Linguistics, 45. (to appear). Ronald M. Kaplan and Joan Bresnan. 1982. Lexicalfunctional grammar: A formal system for grammatical representation. In Joan Bresnan, editor, The Mental Representation of Grammatical Relations, pages 173–281. MIT Press, Cambridge, MA. Ronald M. Kaplan and Annie Zaenen. 1988. Longdistance dependencies, constituent structure, and functional uncertainty. In M. Baitin and A. Kroch, editors, Alternative Conceptions of Phrase Structure. University of Chicago Press, Chicago. John T. Maxwell III and Ronald M. Kaplan. 1996. An efficient parser for LFG. In Miriam Butt and Tracy Holloway King, editors, The Proceedings of the LFG ’96 Conference, Rank Xerox, Grenoble. Ruth O’Donovan, Michael Burke, Aoife Cahill, Josef van Genabith, and Andy Way. 2005. Large-scale induction and evaluation of lexical resources from the Penn-II and Penn-III Treebanks. Computational Linguistics, 31(3):329–365. Kemal Oflazer, Bilge Say, Dilek Zeynep Hakkani-T¨ur, and G¨okhan T¨ur. 2003. Building a Turkish treebank. In Anne Abeille, editor, Building and Exploiting Syntactically-annotated Corpora. Kluwer Academic Publishers. Kemal Oflazer. 1994. Two-level description of Turkish morphology. Literary and Linguistic Computing, 9(2):137–148. Kemal Oflazer. 2003. Dependency parsing with an extended finite-state approach. Computational Linguistics, 29(4):515–544. 160
2006
20
Proceedings of the 21st International Conference on Computational Linguistics and 44th Annual Meeting of the ACL, pages 161–168, Sydney, July 2006. c⃝2006 Association for Computational Linguistics PCFGs with Syntactic and Prosodic Indicators of Speech Repairs John Halea Izhak Shafranb Lisa Yungc Bonnie Dorrd Mary Harperde Anna Krasnyanskayaf Matthew Leaseg Yang Liuh Brian Roarki Matthew Snoverd Robin Stewartj a Michigan State University; b,c Johns Hopkins University; d University of Maryland, College Park; e Purdue University f UCLA; g Brown University; h University of Texas at Dallas; i Oregon Health & Sciences University; j Williams College Abstract A grammatical method of combining two kinds of speech repair cues is presented. One cue, prosodic disjuncture, is detected by a decision tree-based ensemble classifier that uses acoustic cues to identify where normal prosody seems to be interrupted (Lickley, 1996). The other cue, syntactic parallelism, codifies the expectation that repairs continue a syntactic category that was left unfinished in the reparandum (Levelt, 1983). The two cues are combined in a Treebank PCFG whose states are split using a few simple tree transformations. Parsing performance on the Switchboard and Fisher corpora suggests that these two cues help to locate speech repairs in a synergistic way. 1 Introduction Speech repairs, as in example (1), are one kind of disfluent element that complicates any sort of syntax-sensitive processing of conversational speech. (1) and [ the first kind of invasion of ] the first type of privacy seemed invaded to me The problem is that the bracketed reparandum region (following the terminology of Shriberg (1994)) is approximately repeated as the speaker The authors are very grateful for Eugene Charniak’s help adapting his parser. We also thank the Center for Language and Speech processing at Johns Hopkins for hosting the summer workshop where much of this work was done. This material is based upon work supported by the National Science Foundation (NSF) under Grant No. 0121285. Any opinions, findings and conclusions or recommendations expressed in this material are those of the authors and do not necessarily reflect the views of the NSF. “repairs” what he or she has already uttered. This extra material renders the entire utterance ungrammatical—the string would not be generated by a correct grammar of fluent English. In particular, attractive tools for natural language understanding systems, such as Treebank grammars for written corpora, naturally lack appropriate rules for analyzing these constructions. One possible response to this mismatch between grammatical resources and the brute facts of disfluent speech is to make one look more like the other, for the purpose of parsing. In this separate-processing approach, reparanda are located through a variety of acoustic, lexical or string-based techniques, then excised before submission to a parser (Stolcke and Shriberg, 1996; Heeman and Allen, 1999; Spilker et al., 2000; Johnson and Charniak, 2004). The resulting parse tree then has the reparandum re-attached in a standardized way (Charniak and Johnson, 2001). An alternative strategy, adopted in this paper, is to use the same grammar to model fluent speech, disfluent speech, and their interleaving. Such an integrated approach can use syntactic properties of the reparandum itself. For instance, in example (1) the reparandum is an unfinished noun phrase, the repair a finished noun phrase. 
This sort of phrasal correspondence, while not absolute, is strong in conversational speech, and cannot be exploited on the separate-processing approach. Section 3 applies metarules (Weischedel and Sondheimer, 1983; McKelvie, 1998a; Core and Schubert, 1999) in recognizing these correspondences using standard context-free grammars. At the same time as it defies parsing, conversational speech offers the possibility of leveraging prosodic cues to speech repairs. Sec161 Figure 1: The pause between two or s and the glottalization at the end of the first makes it easy for a listener to identify the repair. tion 2 describes a classifier that learns to label prosodic breaks suggesting upcoming disfluency. These marks can be propagated up into parse trees and used in a probabilistic context-free grammar (PCFG) whose states are systematically split to encode the additional information. Section 4 reports results on Switchboard (Godfrey et al., 1992) and Fisher EARS RT04F data, suggesting these two features can bring about independent improvements in speech repair detection. Section 5 suggests underlying linguistic and statistical reasons for these improvements. Section 6 compares the proposed grammatical method to other related work, including state of the art separate-processing approaches. Section 7 concludes by indicating a way that string- and treebased approaches to reparandum identification could be combined. 2 Prosodic disjuncture Everyday experience as well as acoustic analysis suggests that the syntactic interruption in speech repairs is typically accompanied by a change in prosody (Nakatani and Hirschberg, 1994; Shriberg, 1994). For instance, the spectrogram corresponding to example (2), shown in Figure 1, (2) the jehovah’s witness or [ or ] mormons or someone reveals a noticeable pause between the occurrence of the two ors, and an unexpected glottalization at the end of the first one. Both kinds of cues have been advanced as explanations for human listeners’ ability to identify the reparandum even before the repair occurs. Retaining only the second explanation, Lickley (1996) proposes that there is no “edit signal” per se but that repair is cued by the absence of smooth formant transitions and lack of normal juncture phenomena. One way to capture this notion in the syntax is to enhance the input with a special disjuncture symbol. This symbol can then be propagated in the grammar, as illustrated in Figure 2. This work uses a suffix ˜+ to encode the perception of abnormal prosody after a word, along with phrasal -BRK tags to decorate the path upwards to reparandum constituents labeled EDITED. Such NP NP EDITED CC NP NP NNP CC−BRK or NNPS DT NNP POS witness the jehovah ’s or~+ mormons Figure 2: Propagating BRK, the evidence of disfluent juncture, from acoustics to syntax. disjuncture symbols are identified in the ToBI labeling scheme as break indices (Price et al., 1991; Silverman et al., 1992). The availability of a corpus annotated with ToBI labels makes it possible to design a break index classifier via supervised training. The corpus is a subset of the Switchboard corpus, consisting of sixty-four telephone conversations manually annotated by an experienced linguist according to a simplified ToBI labeling scheme (Ostendorf et al., 2001). In ToBI, degree of disjuncture is indicated by integer values from 0 to 4, where a value of 0 corresponds to clitic and 4 to a major phrase break. 
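The percolation in Figure 2 can be sketched as a small transformation over training trees. In the fragment below, trees are assumed to be nested lists of the form [label, child1, ...], words flagged by the break classifier carry the disjuncture suffix (written ~+), and the helper is an illustrative reconstruction rather than the transform actually used in the experiments.

```python
# Illustrative sketch: decorate every nonterminal on the path from a word
# carrying the "~+" disjuncture suffix up to its EDITED ancestor with -BRK,
# as in Figure 2. (A break word with no EDITED ancestor would decorate the
# path to the root; the real transform may treat such cases differently.)
def mark_brk(tree):
    """Return (transformed_tree, subtree_contains_unabsorbed_break)."""
    if isinstance(tree, str):                     # terminal (word)
        return tree, tree.endswith("~+")
    label, children = tree[0], tree[1:]
    new_children, has_break = [], False
    for child in children:
        new_child, child_break = mark_brk(child)
        new_children.append(new_child)
        has_break = has_break or child_break
    if has_break and not label.startswith("EDITED"):
        return [label + "-BRK"] + new_children, True   # keep decorating upward
    return [label] + new_children, False               # EDITED (or no break) stops it


print(mark_brk(["EDITED", ["CC", "or~+"]])[0])    # ['EDITED', ['CC-BRK', 'or~+']]
```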
In addition, a suffix p denotes perceptually disfluent events reflecting, for example, 162 hesitation or planning. In conversational speech the intermediate levels occur infrequently and the break indices can be broadly categorized into three groups, namely, 1, 4 and p as in Wong et al. (2005). A classifier was developed to predict three break indices at each word boundary based on variations in pitch, duration and energy associated with word, syllable or sub-syllabic constituents (Shriberg et al., 2005; Sonmez et al., 1998). To compute these features, phone-level time-alignments were obtained from an automatic speech recognition system. The duration of these phonological constituents were derived from the ASR alignment, while energy and pitch were computed every 10ms with snack, a public-domain sound toolkit (Sjlander, 2001). The duration, energy, and pitch were post-processed according to stylization procedures outlined in Sonmez et al. (1998) and normalized to account for variability across speakers. Since the input vector can have missing values such as the absence of pitch during unvoiced sound, only decision tree based classifiers were investigated. Decision trees can handle missing features gracefully. By choosing different combinations of splitting and stopping criteria, an ensemble of decision trees was built using the publicly-available IND package (Buntine, 1992). These individual classifiers were then combined into ensemble-based classifiers. Several classifiers were investigated for detecting break indices. On ten-fold cross-validation, a bagging-based classifier (Breiman, 1996) predicted prosodic breaks with an accuracy of 83.12% while chance was 67.66%. This compares favorably with the performance of the supervised classifiers on a similar task in Wong et al. (2005). Random forests and hidden Markov models provide marginal improvements at considerable computational cost (Harper et al., 2005). For speech repair, the focus is on detecting disfluent breaks. The precision and recall trade-off on its detection can be adjusted using a threshold on the posterior probability of predicting “p”, as shown in Figure 3. In essence, the large number of acoustic and prosodic features related to disfluency are encoded via the ToBI label ‘p’, and provided as additional observations to the PCFG. This is unlike previous work on incorporating prosodic information (Gre0 0.1 0.2 0.3 0.4 0.5 0.6 0 0.1 0.2 0.3 0.4 0.5 0.6 Probability of Miss Probability of False Alarm Figure 3: DET curve for detecting disfluent breaks from acoustics. gory et al., 2004; Lease et al., 2005; Kahn et al., 2005) as described further in Section 6. 3 Syntactic parallelism The other striking property of speech repairs is their parallel character: subsequent repair regions ‘line up’ with preceding reparandum regions. This property can be harnessed to better estimate the length of the reparandum by considering parallelism from the perspective of syntax. For instance, in Figure 4(a) the unfinished reparandum noun phrase is repaired by another noun phrase – the syntactic categories are parallel. 3.1 Levelt’s WFR and Conjunction The idea that the reparandum is syntactically parallel to the repair can be traced back to Levelt (1983). Examining a corpus of Dutch picture descriptions, Levelt proposes a bi-conditional wellformedness rule for repairs (WFR) that relates the structure of repairs to the structure of conjunctions. 
The WFR conceptualizes repairs as the conjunction of an unfinished reparandum string (α) with a properly finished repair (γ). Its original formulation, repeated here, ignores optional interregna like “er” or “I mean.” Well-formedness rule for repairs (WFR) A repair ⟨α γ⟩is well-formed if and only if there is a string β such that the string ⟨αβ and∗γ⟩ is well-formed, where β is a completion of the constituent directly dominating the last element of α. (and is to be deleted if that last element is itself a sentence connective) In other words, the string α is a prefix of a phrase whose completion, β—if it were present—would 163 render the whole phrase αβ grammatically conjoinable with the repair γ. In example (1) α is the string ‘the first kind of invasion of’, γ is ‘the first type of privacy’ and β is probably the single word ‘privacy.’ This kind of conjoinability typically requires the syntactic categories of the conjuncts to be the same (Chomsky, 1957, 36). That is, a rule schema such as (2) where X is a syntactic category, is preferred over one where X is not constrained to be the same on either side of the conjunction. X →X Conj X (2) If, as schema (2) suggests, conjunction does favor like-categories, and, as Levelt suggests, wellformed repairs are conjoinable with finished versions of their reparanda, then the syntactic categories of repairs ought to match the syntactic categories of (finished versions of) reparanda. 3.2 A WFR for grammars Levelt’s WFR imposes two requirements on a grammar • distinguishing a separate category of ‘unfinished’ phrases • identifying a syntactic category for reparanda Both requirements can be met by adapting Treebank grammars to mirror the analysis of McKelvie1 (1998a; 1998b). McKelvie derives phrase structure rules for speech repairs from fluent rules by adding a new feature called abort that can take values true and false. For a given grammar rule of the form A →B C a metarule creates other rules of the form A [abort = Q] → B [abort = false] C [abort = Q] where Q is a propositional variable. These rules say, in effect, that the constituent A is aborted just in case the last daughter C is aborted. Rules that don’t involve a constant value for Q ensure that the same value appears on parents and children. The 1McKelvie’s metarule approach declaratively expresses Hindle’s (1983) Stack Editor and Category Copy Editor rules. This classic work effectively states the WFR as a program for the Fidditch deterministic parser. WFR is then implemented by rule schemas such as (3) X →X [abort = true] (AFF) X (3) that permit the optional interregnum AFF to conjoin an unfinished X-phrase (the reparandum) with a finished X-phrase (the repair) that comes after it. 3.3 A WFR for Treebanks McKelvie’s formulation of Levelt’s WFR can be applied to Treebanks by systematically recoding the annotations to indicate which phrases are unfinished and to distinguish matching from nonmatching repairs. 3.3.1 Unfinished phrases Some Treebanks already mark unfinished phrases. For instance, the Penn Treebank policy (Marcus et al., 1993; Marcus et al., 1994) is to annotate the lowest node that is unfinished with an -UNF tag as in Figure 4(a). It is straightforward to propagate this mark upwards in the tree from wherever it is annotated to the nearest enclosing EDITED node, just as -BRK is propagated upwards from disjuncture marks on individual words. This percolation simulates the action of McKelvie’s [abort = true]. 
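This percolation can be sketched in the same nested-list style as the -BRK transform above; the helper is again an illustrative reconstruction, not the tree-transformation code actually used (the daughter annotation of the EDITED node itself is a separate recoding step, described next).

```python
# Illustrative sketch: propagate the Penn Treebank -UNF mark from the lowest
# unfinished node upward to the nearest enclosing EDITED node, as in Fig. 4.
def propagate_unf(tree):
    """Return (transformed_tree, subtree_dominates_unfinished_node)."""
    if isinstance(tree, str):                      # terminal
        return tree, False
    label, children = tree[0], tree[1:]
    new_children, unfinished = [], label.endswith("-UNF")
    for child in children:
        new_child, child_unf = propagate_unf(child)
        new_children.append(new_child)
        unfinished = unfinished or child_unf
    if label.startswith("EDITED"):
        return [label] + new_children, False       # percolation stops at EDITED
    if unfinished and not label.endswith("-UNF"):
        label += "-UNF"
    return [label] + new_children, unfinished


# The reparandum of example (1), with only the lowest unfinished node marked:
edited = ["EDITED",
          ["NP", ["NP", ["DT", "the"], ["JJ", "first"], ["NN", "kind"]],
                 ["PP-UNF", ["IN", "of"],
                            ["NP", ["NP", ["NN", "invasion"]],
                                   ["PP-UNF", ["IN", "of"]]]]]]
print(propagate_unf(edited)[0])   # yields the -UNF-propagated tree of Figure 4(b)
```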
The resulting PCFG is one in which distributions on phrase structure rules with ‘missing’ daughters are segregated from distributions on ‘complete’ rules. 3.4 Reparanda categories The other key element of Levelt’s WFR is the idea of conjunction of elements that are in some sense the same. In the Penn Treebank annotation scheme, reparanda always receive the label EDITED. This means that the syntactic category of the reparandum is hidden from any rule which could favor matching it with that of the repair. Adding an additional mark on this EDITED node (a kind of daughter annotation) rectifies the situation, as depicted in Figure 4(b), which adds the notation -childNP to a tree in which the unfinished tags have been propagated upwards. This allows a Treebank PCFG to represent the generalization that speech repairs tend to respect syntactic category. 4 Results Three kinds of experiments examined the effectiveness of syntactic and prosodic indicators of 164 S CC EDITED NP and NP NP NP PP DT JJ NN IN NP the first kind of NP PP−UNF NN IN invasion of DT JJ NN the first type (a) The lowest unfinished node is given. S CC EDITED−childNP NP and NP−UNF NP NP PP−UNF DT JJ NN IN NP−UNF the first kind of NP PP−UNF NN IN invasion of DT JJ NN the first type (b) -UNF propagated, daughter-annotated Switchboard tree Figure 4: Input (a) and output (b) of tree transformations. speech repairs. The first two use the CYK algorithm to find the most likely parse tree on a grammar read-off from example trees annotated as in Figures 2 and 4. The third experiment measures the benefit from syntactic indicators alone in Charniak’s lexicalized parser (Charniak, 2000). The tables in subsections 4.1, 4.2, and 4.3 summarize the accuracy of output parse trees on two measures. One is the standard Parseval F-measure, which tracks the precision and recall for all labeled constituents as compared to a gold-standard parse. The other measure, EDIT-finding F, restricts consideration to just constituents that are reparanda. It measures the per-word performance identifying a word as dominated by EDITED or not. As in previous studies, reference transcripts were used in all cases. A check (√) indicates an experiment where prosodic breaks where automatically inferred by the classifier described in section 2, whereas in the (×) rows no prosodic information was used. 4.1 CYK on Fisher Table 1 summarizes the accuracy of a standard CYK parser on the newly-treebanked Fisher corpus (LDC2005E15) of phone conversations, collected as part of the DARPA EARS program. The parser was trained on the entire Switchboard corpus (ca. 107K utterances) then tested on the 5368-utterance ‘dev2’ subset of the Fisher data. This test set was tagged using MXPOST (Ratnaparkhi, 1996) which was itself trained on Switchboard. Finally, as described in section 2 these tags were augmented with a special prosodic break symbol if the decision tree rated the probability a ToBI ‘p’ symbol higher than the threshold value of 0.75. Annotation Break index Parseval F EDIT F none × 66.54 22.9 √ 66.08 26.1 daughter annotation × 66.41 29.4 √ 65.81 31.6 -UNF propagation × 67.06 31.5 √ 66.45 34.8 both × 69.21 40.2 √ 67.02 40.6 Table 1: Improvement on Fisher, MXPOSTed tags. The Fisher results in Table 1 show that syntactic and prosodic indicators provide different kinds of benefits that combine in an additive way. Presumably because of state-splitting, improvement in EDIT-finding comes at the cost of a small decrement in overall parsing performance. 
4.2 CYK on Switchboard Table 2 presents the results of similar experiments on the Switchboard corpus following the 165 train/dev/test partition of Charniak and Johnson (2001). In these experiments, the parser was given correct part-of-speech tags as input. Annotation Break index Parseval F EDIT F none × 70.92 18.2 √ 69.98 22.5 daughter annotation × 71.13 25.0 √ 70.06 25.5 -UNF propagation × 71.71 31.1 √ 70.36 30.0 both × 71.16 41.7 √ 71.05 36.2 Table 2: Improvement on Switchboard, gold tags. The Switchboard results demonstrate independent improvement from the syntactic annotations. The prosodic annotation helps on its own and in combination with the daughter annotation that implements Levelt’s WFR. 4.3 Lexicalized parser Finally, Table 3 reports the performance of Charniak’s non-reranking, lexicalized parser on the Switchboard corpus, using the same test/dev/train partition. Annotation Parseval F EDIT F baseline 83.86 57.6 daughter annotation 80.85 67.2 -UNF propagation 81.68 64.7 both 80.16 70.0 flattened EDITED 82.13 64.4 Table 3: Charniak as an improved EDIT-finder. Since Charniak’s parser does its own tagging, this experiment did not examine the utility of prosodic disjuncture marks. However, the combination of daughter annotation and -UNF propagation does lead to a better grammar-based reparandum-finder than parsers trained on flattened EDITED regions. More broadly, the results suggest that Levelt’s WFR is synergistic with the kind of head-to-head lexical dependencies that Charniak’s parser uses. 5 Discussion The pattern of improvement in tables 1, 2, and 3 from none or baseline rows where no syntactic parallelism or break index information is used, to subsequent rows where it is used, suggest why these techniques work. Unfinished-category annotation improves performance by preventing the grammar of unfinished constituents from being polluted by the grammar of finished constituents. Such purification is independent of the fact that rules with daughters labeled EDITED-childXP tend to also mention categories labeled XP further to the right (or NP and VP, when XP starts with S). This preference for syntactic parallelism can be triggered either by externally-suggested ToBI break indices or grammar rules annotated with -UNF. The prediction of a disfluent break could be further improved by POS features and Ngram language model scores (Spilker et al., 2001; Liu, 2004). 6 Related Work There have been relatively few attempts to harness prosodic cues in parsing. In a spoken language system for VERBMOBIL task, Batliner and colleagues (2001) utilize prosodic cues to dramatically reduce lexical analyses of disfluencies in a end-to-end real-time system. They tackle speech repair by a cascade of two stages – identification of potential interruption points using prosodic cues with 90% recall and many false alarms, and the lexical analyses of their neighborhood. Their approach, however, does not exploit the synergy between prosodic and syntactic features in speech repair. In Gregory et al. (2004), over 100 real-valued acoustic and prosodic features were quantized into a heuristically selected set of discrete symbols, which were then treated as pseudo-punctuation in a PCFG, assuming that prosodic cues function like punctuation. The resulting grammar suffered from data sparsity and failed to provide any benefits. Maximum entropy based models have been more successful in utilizing prosodic cues. For instance, in Lease et al. 
(2005), interruption point probabilities, predicted by prosodic classifiers, were quantized and introduced as features into a speech repair model along with a variety of TAG and PCFG features. Towards a clearer picture of the interaction with syntax and prosody, this work uses ToBI to capture prosodic cues. Such a method is analogous to Kahn et al. (2005) but in a generative framework. The TAG-based model of Johnson and Charniak (2004) is a separate-processing approach that rep166 resents the state of the art in reparandum-finding. Johnson and Charniak explicitly model the crossed dependencies between individual words in the reparandum and repair regions, intersecting this sequence model with a parser-derived language model for fluent speech. This second step improves on Stolcke and Shriberg (1996) and Heeman and Allen (1999) and outperforms the specific grammar-based reparandum-finders tested in section 4. However, because of separate-processing the TAG channel model’s analyses do not reflect the syntactic structure of the sentence being analyzed, and thus that particular TAG-based model cannot make use of properties that depend on the phrase structure of the reparandum region. This includes the syntactic category parallelism discussed in section 3 but also predicate-argument structure. If edit hypotheses were augmented to mention particular tree nodes where the reparandum should be attached, such syntactic parallelism constraints could be exploited in the reranking framework of Johnson et al. (2004). The approach in section 3 is more closely related to that of Core and Schubert (1999) who also use metarules to allow a parser to switch from speaker to speaker as users interrupt one another. They describe their metarule facility as a modification of chart parsing that involves copying of specific arcs just in case specific conditions arise. That approach uses a combination of longest-first heuristics and thresholds rather than a complete probabilistic model such as a PCFG. Section 3’s PCFG approach can also be viewed as a declarative generalization of Roark’s (2004) EDIT-CHILD function. This function helps an incremental parser decide upon particular treedrawing actions in syntactically-parallel contexts like speech repairs. Whereas Roark conditions the expansion of the first constituent of the repair upon the corresponding first constituent of the reparandum, in the PCFG approach there exists a separate rule (and thus a separate probability) for each alternative sequence of reparandum constituents. 7 Conclusion Conventional PCFGs can improve their detection of speech repairs by incorporating Lickley’s hypothesis about interrupted prosody and by implementing Levelt’s well-formedness rule. These benefits are additive. The strengths of these simple tree-based techniques should be combinable with sophisticated string-based (Johnson and Charniak, 2004; Liu, 2004; Zhang and Weng, 2005) approaches by applying the methods of Wieling et al. (2005) for constraining parses by externally-suggested brackets. References L. Breiman. 1996. Bagging predictors. Machine Learning, 24(2):123–140. W. Buntine. 1992. Tree classication software. In Technology 2002: The Third National Technology Transfer Conference and Exposition, Baltimore. E. Charniak and M. Johnson. 2001. Edit detection and parsing for transcribed speech. In Proceedings of the 2nd Meeting of the North American Chapter of the Association for Computational Linguistics, pages 118–126. E. Charniak. 2000. A maximum-entropy-inspired parser. 
In Proceedings of NAACL-00, pages 132– 139. N. Chomsky. 1957. Syntactic Structures. Anua Linguarum Series Minor 4, Series Volume 4. Mouton de Gruyter, The Hague. M. G. Core and L. K. Schubert. 1999. A syntactic framework for speech repairs and other disruptions. In Proceedings of the 37th Annual Meeting of the Association for Computational Linguistics, pages 413– 420. J. J. Godfrey, E. C. Holliman, and J. McDaniel. 1992. SWITCHBOARD: Telephone speech corpus for research and development. In Proceedings of ICASSP, volume I, pages 517–520, San Francisco. M. Gregory, M. Johnson, and E. Charniak. 2004. Sentence-internal prosody does not help parsing the way punctuation does. In Proceedings of North American Association for Computational Linguistics. M. Harper, B. Dorr, J. Hale, B. Roark, I. Shafran, M. Lease, Y. Liu, M. Snover, and L. Yung. 2005. Parsing and spoken structural event detection. In 2005 Johns Hopkins Summer Workshop Final Report. P. A. Heeman and J. F. Allen. 1999. Speech repairs, intonational phrases and discourse markers: modeling speakers’ utterances in spoken dialog. Computational Linguistics, 25(4):527–571. D. Hindle. 1983. Deterministic parsing of syntactic non-fluencies. In Proceedings of the ACL. M. Johnson and E. Charniak. 2004. A TAG-based noisy channel model of speech repairs. In Proceedings of ACL, pages 33–39. 167 M. Johnson, E. Charniak, and M. Lease. 2004. An improved model for recognizing disfluencies in conversational speech. In Proceedings of Rich Transcription Workshop. J. G. Kahn, M. Lease, E. Charniak, M. Johnson, and M. Ostendorf. 2005. Effective use of prosody in parsing conversational speech. In Proceedings of Human Language Technology Conference and Conference on Empirical Methods in Natural Language Processing, pages 233–240. M. Lease, E. Charniak, and M. Johnson. 2005. Parsing and its applications for conversational speech. In Proceedings of ICASSP. W. J. M. Levelt. 1983. Monitoring and self-repair in speech. Cognitive Science, 14:41–104. R. J. Lickley. 1996. Juncture cues to disfluency. In Proceedings the International Conference on Speech and Language Processing. Y. Liu. 2004. Structural Event Detection for Rich Transcription of Speech. Ph.D. thesis, Purdue University. M. Marcus, B. Santorini, and M. A. Marcinkiewicz. 1993. Building a large annotated corpus of English: The Penn Treebank. Computational Linguistics, 19(2):313–330. M. Marcus, G. Kim, M. A. Marcinkiewicz, R. MacIntyre, A. Bies, M. Ferguson, K. Katz, and B. Schasberger. 1994. The Penn Treebank: Annotating Predicate Argument Structure. In Proceedings of the 1994 ARPA Human Language Technology Workshop. D. McKelvie. 1998a. SDP – Spoken Dialog Parser. ESRC project on Robust Parsing and Part-of-speech Tagging of Transcribed Speech Corpora, May. D. McKelvie. 1998b. The syntax of disfluency in spontaneous spoken language. ESRC project on Robust Parsing and Part-of-speech Tagging of Transcribed Speech Corpora, May. C. Nakatani and J. Hirschberg. 1994. A corpus-based study of repair cues in spontaneous speech. Journal of the Acoustical Society of America, 95(3):1603– 1616, March. M. Ostendorf, I. Shafran, S. Shattuck-Hufnagel, L. Carmichael, and W. Byrne. 2001. A prosodically labelled database of spontaneous speech. In Proc. ISCA Tutorial and Research Workshop on Prosody in Speech Recognition and Understanding, pages 119–121. P. Price, M. Ostendorf, S. Shattuck-Hufnagel, and C. Fong. 1991. The use of prosody in syntactic disambiguation. Journal of the Acoustic Society of America, 90:2956–2970. A. 
Ratnaparkhi. 1996. A maximum entropy part-ofspeech tagger. In Proceedings of Empirical Methods in Natural Language Processing Conference, pages 133–141. B. Roark. 2004. Robust garden path parsing. Natural Language Engineering, 10(1):1–24. E. Shriberg, L. Ferrer, S. Kajarekar, A. Venkataraman, and A. Stolcke. 2005. Modeling prosodic feature sequences for speaker recognition. Speech Communication, 46(3-4):455–472. E. Shriberg. 1994. Preliminaries to a Theory of Speech Disfluencies. Ph.D. thesis, UC Berkeley. H. F. Silverman, M. Beckman, J. Pitrelli, M. Ostendorf, C. Wightman, P. Price, J. Pierrehumbert, and J. Hirshberg. 1992. ToBI: A standard for labeling English prosody. In Proceedings of ICSLP, volume 2, pages 867–870. K. Sjlander, 2001. The Snack sound visualization module. Royal Institute of Technology in Stockholm. http://www.speech.kth.se/SNACK. K. Sonmez, E. Shriberg, L. Heck, and M. Weintraub. 1998. Modeling dynamic prosodic variation for speaker verification. In Proceedings of ICSLP, volume 7, pages 3189–3192. J¨org Spilker, Martin Klarner, and G¨unther G¨orz. 2000. Processing self-corrections in a speech-to-speech system. In Wolfgang Wahlster, editor, Verbmobil: Foundations of speech-to-speech translation, pages 131–140. Springer-Verlag, Berlin. J. Spilker, A. Batliner, and E. N¨oth. 2001. How to repair speech repairs in an end-to-end system. In R. Lickley and L. Shriberg, editors, Proc. of ISCA Workshop on Disfluency in Spontaneous Speech, pages 73–76. A. Stolcke and E. Shriberg. 1996. Statistical language modeling for speech disfluencies. In Proceedings of the IEEE International Conference on Acoustics, Speech and Signal Processing, pages 405–408, Atlanta, GA. R. M. Weischedel and N. K. Sondheimer. 1983. Meta-rules as a basis for processing ill-formed input. American Journal of Computational Linguistics, 9(3-4):161–177. M. Wieling, M-J. Nederhof, and G. van Noord. 2005. Parsing partially bracketed input. Talk presented at Computational Linguistics in the Netherlands. D. Wong, M. Ostendorf, and J. G. Kahn. 2005. Using weakly supervised learning to improve prosody labeling. Technical Report UWEETR-2005-0003, University of Washington Electrical Engineering Dept. Q. Zhang and F. Weng. 2005. Exploring features for identifying edited regions in disfluent sentences. In Proceedings of the Nineth International Workshop on Parsing Technologies, pages 179–185. 168
2006
21
Proceedings of the 21st International Conference on Computational Linguistics and 44th Annual Meeting of the ACL, pages 169–176, Sydney, July 2006. c⃝2006 Association for Computational Linguistics Dependency Parsing of Japanese Spoken Monologue Based on Clause Boundaries Tomohiro Ohno†a) Shigeki Matsubara‡ Hideki Kashioka§ Takehiko Maruyama♯and Yasuyoshi Inagaki♮ †Graduate School of Information Science, Nagoya University, Japan ‡Information Technology Center, Nagoya University, Japan §ATR Spoken Language Communication Research Laboratories, Japan ♯The National Institute for Japanese Language, Japan ♮Faculty of Information Science and Technology, Aichi Prefectural University, Japan a)[email protected] Abstract Spoken monologues feature greater sentence length and structural complexity than do spoken dialogues. To achieve high parsing performance for spoken monologues, it could prove effective to simplify the structure by dividing a sentence into suitable language units. This paper proposes a method for dependency parsing of Japanese monologues based on sentence segmentation. In this method, the dependency parsing is executed in two stages: at the clause level and the sentence level. First, the dependencies within a clause are identified by dividing a sentence into clauses and executing stochastic dependency parsing for each clause. Next, the dependencies over clause boundaries are identified stochastically, and the dependency structure of the entire sentence is thus completed. An experiment using a spoken monologue corpus shows this method to be effective for efficient dependency parsing of Japanese monologue sentences. 1 Introduction Recently, monologue data such as a lecture and commentary by a professional have been considered as human valuable intellectual property and have gathered attention. In applications, such as automatic summarization, machine translation and so on, for using these monologue data as intellectual property effectively and efficiently, it is necessary not only just to accumulate but also to structure the monologue data. However, few attempts have been made to parse spoken monologues. Spontaneously spoken monologues include a lot of grammatically ill-formed linguistic phenomena such as fillers, hesitations and selfrepairs. In order to robustly deal with their extragrammaticality, some techniques for parsing of dialogue sentences have been proposed (Core and Schubert, 1999; Delmonte, 2003; Ohno et al., 2005b). On the other hand, monologues also have the characteristic feature that a sentence is generally longer and structurally more complicated than a sentence in dialogues which have been dealt with by the previous researches. Therefore, for a monologue sentence the parsing time would increase and the parsing accuracy would decrease. It is thought that more effective, high-performance spoken monologue parsing could be achieved by dividing a sentence into suitable language units for simplicity. This paper proposes a method for dependency parsing of monologue sentences based on sentence segmentation. The method executes dependency parsing in two stages: at the clause level and at the sentence level. First, a dependency relation from one bunsetsu1 to another within a clause is identified by dividing a sentence into clauses based on clause boundary detection and then executing stochastic dependency parsing for each clause. Next, the dependency structure of the entire sentence is completed by identifying the dependencies over clause boundaries stochastically. 
An experiment on monologue dependency parsing showed that the parsing time can be drasti1A bunsetsu is the linguistic unit in Japanese that roughly corresponds to a basic phrase in English. A bunsetsu consists of one independent word and more than zero ancillary words. A dependency is a modification relation in which a dependent bunsetsu depends on a head bunsetsu. That is, the dependent bunsetsu and the head bunsetsu work as modifier and modifyee, respectively. 169 世論 調査に より ますと 死刑を 支持する という 人が 八十パー セント近く に なって おります 総理府 が 発表いた しました 先日 :Dependency relation whose dependent bunsetsu is not the last bunsetsu of a clause :Dependency relation whose dependent bunsetsu is the last bunsetsu of a clause :Bunsetsu :Clause boundary :Clause The public opinion poll that the Prime Minister‘s Office announced the other day indicates that the ratio of people advocating capital punishment is nearly 80% the other day that the Prime Minister’s Office announced The public opinion poll indicates that capital punishment advocating the ratio of people nearly 80% is 世論 調査に より ますと 死刑を 支持する という 人が 八十パー セント近く に なって おります 総理府 が 発表いた しました 先日 :Dependency relation whose dependent bunsetsu is not the last bunsetsu of a clause :Dependency relation whose dependent bunsetsu is the last bunsetsu of a clause :Bunsetsu :Clause boundary :Clause The public opinion poll that the Prime Minister‘s Office announced the other day indicates that the ratio of people advocating capital punishment is nearly 80% the other day that the Prime Minister’s Office announced The public opinion poll indicates that capital punishment advocating the ratio of people nearly 80% is Figure 1: Relation between clause boundary and dependency structure cally shortened and the parsing accuracy can be increased. This paper is organized as follows: The next section describes a parsing unit of Japanese monologue. Section 3 presents dependency parsing based on clause boundaries. The parsing experiment and the discussion are reported in Sections 4 and 5, respectively. The related works are described in Section 6. 2 Parsing Unit of Japanese Monologues Our method achieves an efficient parsing by adopting a shorter unit than a sentence as a parsing unit. Since the search range of a dependency relation can be narrowed by dividing a long monologue sentence into small units, we can expect the parsing time to be shortened. 2.1 Clauses and Dependencies In Japanese, a clause basically contains one verb phrase. Therefore, a complex sentence or a compound sentence contains one or more clauses. Moreover, since a clause constitutes a syntactically sufficient and semantically meaningful language unit, it can be used as an alternative parsing unit to a sentence. Our proposed method assumes that a sentence is a sequence of one or more clauses, and every bunsetsu in a clause, except the final bunsetsu, depends on another bunsetsu in the same clause. As an example, the dependency structure of the Japanese sentence: 先日総理府が発表いたしました世論調査によ りますと死刑を支持するという人が八十パーセ ント近くになっております(The public opinion poll that the Prime Minister’s Office announced the other day indicates that the ratio of people advocating capital punishment is nearly 80%) is presented in Fig. 1. This sentence consists of four clauses: • 先日総理府が発表いたしました(that the Prime Minister’s Office announced the other day) • 世論調査によりますと(The public opinion poll indicates that) • 死刑を支持するという(advocating capital punishment) • 人が八十パーセント近くになっております (the ratio of people is nearly 80%) Each clause forms a dependency structure (solid arrows in Fig. 
1), and a dependency relation from the final bunsetsu links the clause with another clause (dotted arrows in Fig. 1). 2.2 Clause Boundary Unit In adopting a clause as an alternative parsing unit, it is necessary to divide a monologue sentence into clauses as the preprocessing for the following dependency parsing. However, since some kinds of clauses are embedded in main clauses, it is fundamentally difficult to divide a monologue into clauses in one dimension (Kashioka and Maruyama, 2004). Therefore, by using a clause boundary annotation program (Maruyama et al., 2004), we approximately achieve the clause segmentation of a monologue sentence. This program can identify units corresponding to clauses by detecting the end boundaries of clauses. Furthermore, the program can specify the positions and types of clause boundaries simply from a local morphological analysis. That is, for a sentence morphologically analyzed by ChaSen (Matsumoto et al., 1999), the positions of clause boundaries are identified and clause boundary labels are inserted there. There exist 147 labels such as “compound clause” and “adnominal clause.” 2 In our research, we adopt the unit sandwiched between two clause boundaries detected by clause boundary analysis, were called the clause boundary unit, as an alternative parsing unit. Here, we regard the label name provided for the end boundary of a clause boundary unit as that unit’s type. 2The labels include a few other constituents that do not strictly represent clause boundaries but can be regarded as being syntactically independent elements, such as “topicalized element,” “conjunctives,” “interjections,” and so on. 170 Table 1: 200 sentences in “Asu-Wo-Yomu” sentences 200 clause boundary units 951 bunsetsus 2,430 morphemes 6,017 dependencies over clause boundaries 94 2.3 Relation between Clause Boundary Units and Dependency Structures To clarify the relation between clause boundary units and dependency structures, we investigated the monologue corpus “Asu-Wo-Yomu 3.” In the investigation, we used 200 sentences for which morphological analysis, bunsetsu segmentation, clause boundary analysis, and dependency parsing were automatically performed and then modified by hand. Here, the specification of the partsof-speech is in accordance with that of the IPA parts-of-speech used in the ChaSen morphological analyzer (Matsumoto et al., 1999), the rules of the bunsetsu segmentation with those of CSJ (Maekawa et al., 2000), the rules of the clause boundary analysis with those of Maruyama et al. (Maruyama et al., 2004), and the dependency grammar with that of the Kyoto Corpus (Kurohashi and Nagao, 1997). Table 1 shows the results of analyzing the 200 sentences. Among the 1,479 bunsetsus in the difference set between all bunsetsus (2,430) and the final bunsetsus (951) of clause boundary units, only 94 bunsetsus depend on a bunsetsu located outside the clause boundary unit. This result means that 93.6% (1,385/1,479) of all dependency relations are within a clause boundary unit. Therefore, the results confirmed that the assumption made by our research is valid to some extent. 3 Dependency Parsing Based on Clause Boundaries In accordance with the assumption described in Section 2, in our method, the transcribed sentence on which morphological analysis, clause boundary detection, and bunsetsu segmentation are performed is considered the input 4. The dependency 3Asu-Wo-Yomu is a collection of transcriptions of a TV commentary program of the Japan Broadcasting Corporation (NHK). 
The commentator speaks on some current social issue for 10 minutes. 4It is difficult to preliminarily divide a monologue into sentences because there are no clear sentence breaks in monologues. However, since some methods for detecting sentence boundaries have already been proposed (Huang and Zweig, 2002; Shitaoka et al., 2004), we assume that they can be detected automatically before dependency parsing. parsing is executed based on the following procedures: 1. Clause-level parsing: The internal dependency relations of clause boundary units are identified for every clause boundary unit in one sentence. 2. Sentence-level parsing: The dependency relations in which the dependent unit is the final bunsetsu of the clause boundary units are identified. In this paper, we describe a sequence of clause boundary units in a sentence as C1 · · · Cm, a sequence of bunsetsus in a clause boundary unit Ci as bi 1 · · · bi ni, a dependency relation in which the dependent bunsetsu is a bunsetsu bi k as dep(bi k), and a dependency structure of a sentence as {dep(b1 1), · · · , dep(bm nm−1)}. First, our method parses the dependency structure {dep(bi 1), · · · , dep(bi ni−1)} within the clause boundary unit whenever a clause boundary unit Ci is inputted. Then, it parses the dependency structure {dep(b1 n1), · · · , dep(bm−1 nm−1)}, which is a set of dependency relations whose dependent bunsetsu is the final bunsetsu of each clause boundary unit in the input sentence. In addition, in both of the above procedures, our method assumes the following three syntactic constraints: 1. No dependency is directed from right to left. 2. Dependencies don’t cross each other. 3. Each bunsetsu, except the final one in a sentence, depends on only one bunsetsu. These constraints are usually used for Japanese dependency parsing. 3.1 Clause-level Dependency Parsing Dependency parsing within a clause boundary unit, when the sequence of bunsetsus in an input clause boundary unit Ci is described as Bi (= bi 1 · · · bi ni), identifies the dependency structure Si (= {dep(bi 1), · · · , dep(bi ni−1)}), which maximizes the conditional probability P(Si|Bi). At this level, the head bunsetsu of the final bunsetsu bi ni of a clause boundary unit is not identified. Assuming that each dependency is independent of the others, P(Si|Bi) can be calculated as follows: P(Si|Bi) = ni−1 Y k=1 P(bi k rel →bi l|Bi), (1) 171 where P(bi k rel →bi l|Bi) is the probability that a bunsetsu bi k depends on a bunsetsu bi l when the sequence of bunsetsus Bi is provided. Unlike the conventional stochastic sentence-by-sentence dependency parsing method, in our method, Bi is the sequence of bunsetsus that constitutes not a sentence but a clause. The structure Si, which maximizes the conditional probability P(Si|Bi), is regarded as the dependency structure of Bi and calculated by dynamic programming (DP). Next, we explain the calculation of P(bi k rel → bi l|Bi). First, the basic form of independent words in a dependent bunsetsu is represented by hi k, its parts-of-speech ti k, and type of dependency ri k, while the basic form of the independent word in a head bunsetsu is represented by hi l, and its partsof-speech ti l. Furthermore, the distance between bunsetsus is described as dii kl. Here, if a dependent bunsetsu has one or more ancillary words, the type of dependency is the lexicon, part-of-speech and conjugated form of the rightmost ancillary word, and if not so, it is the part-of-speech and conjugated form of the rightmost morpheme. 
The type of dependency ri k is the same attribute used in our stochastic method proposed for robust dependency parsing of spoken language dialogue (Ohno et al., 2005b). Then dii kl takes 1 or more than 1, that is, a binary value. Incidentally, the above attributes are the same as those used by the conventional stochastic dependency parsing methods (Collins, 1996; Ratnaparkhi, 1997; Fujio and Matsumoto, 1998; Uchimoto et al., 1999; Charniak, 2000; Kudo and Matsumoto, 2002). Additionally, we prepared the attribute ei l to indicate whether bi l is the final bunsetsu of a clause boundary unit. Since we can consider a clause boundary unit as a unit corresponding to a simple sentence, we can treat the final bunsetsu of a clause boundary unit as a sentence-end bunsetsu. The attribute that indicates whether a head bunsetsu is a sentence-end bunsetsu has often been used in conventional sentence-by-sentence parsing methods (e.g. Uchimoto et al., 1999). By using the above attributes, the conditional probability P(bi k rel →bi l|Bi) is calculated as follows: P(bi k rel →bi l|Bi) (2) ∼= P(bi k rel →bi l|hi k, hi l, ti k, ti l, ri k, dii kl, ei l) = F(bi k rel →bi l, hi k, hi l, ti k, ti l, ri k, dii kl, ei l) F(hi k, hi l, ti k, ti l, ri k, dii kl, ei l) . Note that F is a co-occurrence frequency function. In order to resolve the sparse data problems caused by estimating P(bi k rel →bi l|Bi) with formula (2), we adopted the smoothing method described by Fujio and Matsumoto (Fujio and Matsumoto, 1998): if F(hi k, hi l, ti k, ti l, ri k, dii kl, ei l) in formula (2) is 0, we estimate P(bi k rel →bi l|Bi) by using formula (3). P(bi k rel →bi l|Bi) (3) ∼= P(bi k rel →bi l|ti k, ti l, ri k, dii kl, ei l) = F(bi k rel →bi l, ti k, ti l, ri k, dii kl, ei l) F(ti k, ti l, ri k, dii kl, ei l) 3.2 Sentence-level Dependency Parsing Here, the head bunsetsu of the final bunsetsu of a clause boundary unit is identified. Let B (= B1 · · · Bn) be the sequence of bunsetsus of one sentence and Sfin be a set of dependency relations whose dependent bunsetsu is the final bunsetsu of a clause boundary unit, {dep(b1 n1), · · · , dep(bm−1 nm−1)}; then Sfin, which makes P(Sfin|B) the maximum, is calculated by DP. The P(Sfin|B) can be calculated as follows: P(Sfin|B) = m−1 Y i=1 P(bi ni rel →bj l |B), (4) where P(bi ni rel →bj l |B) is the probability that a bunsetsu bi ni depends on a bunsetsu bj l when the sequence of the sentence’s bunsetsus, B, is provided. Our method parses by giving consideration to the dependency structures in each clause boundary unit, which were previously parsed. That is, the method does not consider all bunsetsus located on the right-hand side as candidates for a head bunsetsu but calculates only dependency relations within each clause boundary unit that do not cross any other relation in previously parsed dependency structures. In the case of Fig. 1, the method calculates by assuming that only three bunsetsus “人が(the ratio of people),” or “なっ ております(is)” can be the head bunsetsu of the bunsetsu “指示するという(advocating).” In addition, P(bi ni rel →bj l |B) is calculated as in Eq. (5). Equation (5) uses all of the attributes used in Eq. (2), in addition to the attribute sj l , which indicates whether the head bunsetsu of bj l is the final bunsetsu of a sentence. 
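Both estimates reduce to relative frequencies over co-occurrence counts with a single backoff step; the following is a minimal sketch of Eqs. (2) and (3) (Eq. (5) is analogous), assuming two frequency tables collected from the training corpus. The function and dictionary names are illustrative, not the paper's implementation, and the behaviour when even the part-of-speech-level count is zero is not specified in the paper.

    # freq_dep[key]: attribute tuples observed as actual dependencies;
    # freq_all[key]: all candidate dependent/head pairs with those attributes.
    def dep_probability(freq_dep, freq_all, hk, hl, tk, tl, rk, dkl, el):
        lexical_key = (hk, hl, tk, tl, rk, dkl, el)   # Eq. (2): word-level estimate
        if freq_all.get(lexical_key, 0) > 0:
            return freq_dep.get(lexical_key, 0) / freq_all[lexical_key]
        pos_key = (tk, tl, rk, dkl, el)               # Eq. (3): back off to POS level
        if freq_all.get(pos_key, 0) > 0:
            return freq_dep.get(pos_key, 0) / freq_all[pos_key]
        return 0.0                                    # unseen even after backoff (assumption)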
Here, we take into 172 Table 2: Size of experimental data set (Asu-WoYomu) test data learning data programs 8 95 sentences 500 5,532 clause boundary units 2,237 26,318 bunsetsus 5,298 65,821 morphemes 13,342 165,129 Note that the commentator of each program is different. Table 3: Experimental results on parsing time our method conv. method average time (msec) 10.9 51.9 programming language: LISP computer used: Pentium4 2.4 GHz, Linux account the analysis result that about 70% of the final bunsetsus of clause boundary units depend on the final bunsetsu of other clause boundary units 5 and also use the attribute ej l at this phase. P(bi ni rel →bj l |B) (5) ∼= P(bi ni rel →bj l |hi ni, hj l , ti ni, tj l , ri ni, dij nil, ej l , sj l ) = F(bi ni rel →bj l , hi ni, hj l , ti ni, tj l , ri ni, dij nil, ej l , sj l ) F(hini, hj l , tini, tj l , rini, dij nil, ej l , sj l ) 4 Parsing Experiment To evaluate the effectiveness of our method for Japanese spoken monologue, we conducted an experiment on dependency parsing. 4.1 Outline of Experiment We used the spoken monologue corpus “ AsuWo-Yomu, ”annotated with information on morphological analysis, clause boundary detection, bunsetsu segmentation, and dependency analysis6. Table 2 shows the data used for the experiment. We used 500 sentences as the test data. Although our method assumes that a dependency relation does not cross clause boundaries, there were 152 dependency relations that contradicted this assumption. This means that the dependency accuracy of our method is not over 96.8% (4,646/4,798). On the other hand, we used 5,532 sentences as the learning data. To carry out comparative evaluation of our method’s effectiveness, we executed parsing for 5We analyzed the 200 sentences described in Section 2.3 and confirmed 70.6% (522/751) of the final bunsetsus of clause boundary units depended on the final bunsetsu of other clause boundary units. 6Here, the specifications of these annotations are in accordance with those described in Section 2.3. 0 50 100 150 200 250 300 350 400 0 5 10 15 20 25 30 Parsing time [msec] Length of sentence [number of bunsetsu] our method conv. method Figure 2: Relation between sentence length and parsing time the above-mentioned data by the following two methods and obtained, respectively, the parsing time and parsing accuracy. • Our method: First, our method provides clause boundaries for a sequence of bunsetsus of an input sentence and identifies all clause boundary units in a sentence by performing clause boundary analysis (CBAP) (Maruyama et al., 2004). After that, our method executes the dependency parsing described in Section 3. • Conventional method: This method parses a sentence at one time without dividing it into clause boundary units. Here, the probability that a bunsetsu depends on another bunsetsu, when the sequence of bunsetsus of a sentence is provided, is calculated as in Eq. (5), where the attribute e was eliminated. This conventional method has been implemented by us based on the previous research (Fujio and Matsumoto, 1998). 4.2 Experimental Results The parsing times of both methods are shown in Table 3. The parsing speed of our method improves by about 5 times on average in comparison with the conventional method. Here, the parsing time of our method includes the time taken not only for the dependency parsing but also for the clause boundary analysis. The average time taken for clause boundary analysis was about 1.2 millisecond per sentence. 
Therefore, the time cost of performing clause boundary analysis as a preprocessing of dependency parsing can be considered small enough to disregard. Figure 2 shows the relation between sentence length and parsing time 173 Table 4: Experimental results on parsing accuracy our method conv. method bunsetsu within a clause boundary unit (except final bunsetsu) 88.2% (2,701/3,061) 84.7% (2,592/3,061) final bunsetsu of a clause boundary unit 65.6% (1,140/1,737) 63.3% (1,100/1,737) total 80.1% (3,841/4,798) 76.9% (3,692/4,798) Table 5: Experimental results on clause boundary analysis (CBAP) recall 95.7% (2,140/2,237) precision 96.9% (2,140/2,209) for both methods, and it is clear from this figure that the parsing time of the conventional method begins to rapidly increase when the length of a sentence becomes 12 or more bunsetsus. In contrast, our method changes little in relation to parsing time. Here, since the sentences used in the experiment are composed of 11.8 bunsetsus on average, this result shows that our method is suitable for improving the parsing time of a monologue sentence whose length is longer than the average. Table 4 shows the parsing accuracy of both methods. The first line of Table 4 shows the parsing accuracy for all bunsetsus within clause boundary units except the final bunsetsus of the clause boundary units. The second line shows the parsing accuracy for the final bunsetsus of all clause boundary units except the sentence-end bunsetsus. We confirmed that our method could analyze with a higher accuracy than the conventional method. Here, Table 5 shows the accuracy of the clause boundary analysis executed by CBAP. Since the precision and recall is high, we can assume that the clause boundary analysis exerts almost no harmful influence on the following dependency parsing. As mentioned above, it is clear that our method is more effective than the conventional method in shortening parsing time and increasing parsing accuracy. 5 Discussions Our method assumes that dependency relations within a clause boundary unit do not cross clause boundaries. Due to this assumption, the method cannot correctly parse the dependency relations over clause boundaries. However, the experimental results indicated that the accuracy of our method was higher than that of the conventional method. In this section, we first discuss the effect of our method on parsing accuracy, separately for bunTable 6: Comparison of parsing accuracy between conventional method and our method (for bunsetsu within a clause boundary unit except final bunsetsu) `````````` conv. method our method correct incorrect total correct 2,499 93 2,592 incorrect 202 267 469 total 2,701 360 3,061 setsus within clause boundary units (except the final bunsetsus) and the final bunsetsus of clause boundary units. Next, we discuss the problem of our method’s inability to parse dependency relations over clause boundaries. 5.1 Parsing Accuracy for Bunsetsu within a Clause Boundary Unit (except final bunsetsu) Table 6 compares parsing accuracies for bunsetsus within clause boundary units (except the final bunsetsus) between the conventional method and our method. There are 3,061 bunsetsus within clause boundary units except the final bunsetsu, among which 2,499 were correctly parsed by both methods. There were 202 dependency relations correctly parsed by our method but incorrectly parsed by the conventional method. This means that our method can narrow down the candidates for a head bunsetsu. 
In contrast, 93 dependency relations were correctly parsed solely by the conventional method. Among these, 46 were dependency relations over clause boundaries, which cannot in principle be parsed by our method. This means that our method can correctly parse almost all of the dependency relations that the conventional method can correctly parse except for dependency relations over clause boundaries. 5.2 Parsing Accuracy for Final Bunsetsu of a Clause Boundary Unit We can see from Table 4 that the parsing accuracy for the final bunsetsus of clause boundary units by both methods is much worse than that for bunsetsus within the clause boundary units (except the final bunsetsus). This means that it is difficult 174 Table 7: Comparison of parsing accuracy between conventional method and our method (for final bunsetsu of a clause boundary unit) `````````` conv. method our method correct incorrect total correct 1037 63 1,100 incorrect 103 534 637 total 1,140 597 1,737 Table 8: Parsing accuracy for dependency relations over clause boundaries our method conv. method recall 1.3% (2/152) 30.3% (46/152) precision 11.8% (2/ 17) 25.3% (46/182) to identify dependency relations whose dependent bunsetsu is the final one of a clause boundary unit. Table 7 compares how the two methods parse the dependency relations when the dependent bunsetsu is the final bunsetsu of a clause boundary unit. There are 1,737 dependency relations whose dependent bunsetsu is the final bunsetsu of a clause boundary unit, among which 1,037 were correctly parsed by both methods. The number of dependency relations correctly parsed only by our method was 103. This number is higher than that of dependency relations correctly parsed by only the conventional method. This result might be attributed to our method’s effect; that is, our method narrows down the candidates internally for a head bunsetsu based on the first-parsed dependency structure for clause boundary units. 5.3 Dependency Relations over Clause Boundaries Table 8 shows the accuracy of both methods for parsing dependency relations over clause boundaries. Since our method parses based on the assumption that those dependency relations do not exist, it cannot correctly parse anything. Although, from the experimental results, our method could identify two dependency relations over clause boundaries, these were identified only because dependency parsing for some sentences was performed based on wrong clause boundaries that were provided by clause boundary analysis. On the other hand, the conventional method correctly parsed 46 dependency relations among 152 that crossed a clause boundary in the test data. Since the conventional method could correctly parse only 30.3% of those dependency relations, we can see that it is in principle difficult to identify the dependency relations. 6 Related Works Since monologue sentences tend to be long and have complex structures, it is important to consider the features. Although there have been very few studies on parsing monologue sentences, some studies on parsing written language have dealt with long-sentence parsing. To resolve the syntactic ambiguity of a long sentence, some of them have focused attention on the “clause.” First, there are the studies that focused attention on compound clauses (Agarwal and Boggess, 1992; Kurohashi and Nagao, 1994). These tried to improve the parsing accuracy of long sentences by identifying the boundaries of coordinate structures. 
Next, other research efforts utilized the three categories into which various types of subordinate clauses are hierarchically classified based on the “scope-embedding preference” of Japanese subordinate clauses (Shirai et al., 1995; Utsuro et al., 2000). Furthermore, Kim et al. (Kim and Lee, 2004) divided a sentence into “S(ubject)-clauses,” which were defined as a group of words containing several predicates and their common subject. The above studies have attempted to reduce the parsing ambiguity between specific types of clauses in order to improve the parsing accuracy of an entire sentence. On the other hand, our method utilizes all types of clauses without limiting them to specific types of clauses. To improve the accuracy of longsentence parsing, we thought that it would be more effective to cyclopaedically divide a sentence into all types of clauses and then parse the local dependency structure of each clause. Moreover, since our method can perform dependency parsing clause-by-clause, we can reasonably expect our method to be applicable to incremental parsing (Ohno et al., 2005a). 7 Conclusions In this paper, we proposed a technique for dependency parsing of monologue sentences based on clause-boundary detection. The method can achieve more effective, high-performance spoken monologue parsing by dividing a sentence into clauses, which are considered as suitable language units for simplicity. To evaluate the effectiveness of our method for Japanese spoken monologue, we conducted an experiment on dependency parsing of the spoken monologue sentences recorded in the “Asu-Wo-Yomu.” From the experimental re175 sults, we confirmed that our method shortened the parsing time and increased the parsing accuracy compared with the conventional method, which parses a sentence without dividing it into clauses. Future research will include making a thorough investigation into the relation between dependency type and the type of clause boundary unit. After that, we plan to investigate techniques for identifying the dependency relations over clause boundaries. Furthermore, as the experiment described in this paper has shown the effectiveness of our technique for dependency parsing of long sentences in spoken monologues, so our technique can be expected to be effective in written language also. Therefore, we want to examine the effectiveness by conducting the parsing experiment of long sentences in written language such as newspaper articles. 8 Acknowledgements This research was supported in part by a contract with the Strategic Information and Communications R&D Promotion Programme, Ministry of Internal Affairs and Communications and the Grandin-Aid for Young Scientists of JSPS. The first author is partially supported by JSPS Research Fellowships for Young Scientists. References R. Agarwal and L. Boggess. 1992. A simple but useful approach to conjunct indentification. In Proc. of 30th ACL, pages 15–21. E. Charniak. 2000. A maximum-entropy-inspired parser. In Proc. of 1st NAACL, pages 132–139. M. Collins. 1996. A new statistical parser based on bigram lexical dependencies. In Proc. of 34th ACL, pages 184–191. Mark G. Core and Lenhart K. Schubert. 1999. A syntactic framework for speech repairs and other disruptions. In Proc. of 37th ACL, pages 413–420. R. Delmonte. 2003. Parsing spontaneous speech. In Proc. of 8th EUROSPEECH, pages 1999–2004. M. Fujio and Y. Matsumoto. 1998. Japanese dependency structure analysis based on lexicalized statistics. In Proc. of 3rd EMNLP, pages 87–96. J. Huang and G. 
Zweig. 2002. Maximum entropy model for punctuation annotation from speech. In Proc. of 7th ICSLP, pages 917–920. H. Kashioka and T. Maruyama. 2004. Segmentation of semantic unit in Japanese monologue. In Proc. of ICSLT-O-COCOSDA 2004, pages 87–92. M. Kim and J. Lee. 2004. Syntactic analysis of long sentences based on s-clauses. In Proc. of 1st IJCNLP, pages 420–427. T. Kudo and Y. Matsumoto. 2002. Japanese dependency analyisis using cascaded chunking. In Proc. of 6th CoNLL, pages 63–69. S. Kurohashi and M. Nagao. 1994. A syntactic analysis method of long Japanese sentences based on the detection of conjunctive structures. Computational Linguistics, 20(4):507–534. S. Kurohashi and M. Nagao. 1997. Building a Japanese parsed corpus while improving the parsing system. In Proc. of 4th NLPRS, pages 451–456. K. Maekawa, H. Koiso, S. Furui, and H. Isahara. 2000. Spontaneous speech corpus of Japanese. In Proc. of 2nd LREC, pages 947–952. T. Maruyama, H. Kashioka, T. Kumano, and H. Tanaka. 2004. Development and evaluation of Japanese clause boundaries annotation program. Journal of Natural Language Processing, 11(3):39– 68. (In Japanese). Y. Matsumoto, A. Kitauchi, T. Yamashita, and Y. Hirano, 1999. Japanese Morphological Analysis System ChaSen version 2.0 Manual. NAIST Technical Report, NAIST-IS-TR99009. T. Ohno, S. Matsubara, H. Kashioka, N. Kato, and Y. Inagaki. 2005a. Incremental dependency parsing of Japanese spoken monologue based on clause boundaries. In Proc. of 9th EUROSPEECH, pages 3449–3452. T. Ohno, S. Matsubara, N. Kawaguchi, and Y. Inagaki. 2005b. Robust dependency parsing of spontaneous Japanese spoken language. IEICE Transactions on Information and Systems, E88-D(3):545–552. A. Ratnaparkhi. 1997. A liner observed time statistical parser based on maximum entropy models. In Proc. of 2nd EMNLP, pages 1–10. S. Shirai, S. Ikehara, A. Yokoo, and J. Kimura. 1995. A new dependency analysis method based on semantically embedded sentence structures and its performance on Japanese subordinate clause. Journal of Information Processing Society of Japan, 36(10):2353–2361. (In Japanese). K. Shitaoka, K. Uchimoto, T. Kawahara, and H. Isahara. 2004. Dependency structure analysis and sentence boundary detection in spontaneous Japanese. In Proc. of 20th COLING, pages 1107–1113. K. Uchimoto, S. Sekine, and K. Isahara. 1999. Japanese dependency structure analysis based on maximum entropy models. In Proc. of 9th EACL, pages 196–203. T. Utsuro, S. Nishiokayama, M. Fujio, and Y. Matsumoto. 2000. Analyzing dependencies of Japanese subordinate clauses based on statistics of scope embedding preference. In Proc. of 6th ANLP, pages 110–117. 176
Proceedings of the 21st International Conference on Computational Linguistics and 44th Annual Meeting of the ACL, pages 177–184, Sydney, July 2006. c⃝2006 Association for Computational Linguistics Trace Prediction and Recovery With Unlexicalized PCFGs and Slash Features Helmut Schmid IMS, University of Stuttgart [email protected] Abstract This paper describes a parser which generates parse trees with empty elements in which traces and fillers are co-indexed. The parser is an unlexicalized PCFG parser which is guaranteed to return the most probable parse. The grammar is extracted from a version of the PENN treebank which was automatically annotated with features in the style of Klein and Manning (2003). The annotation includes GPSG-style slash features which link traces and fillers, and other features which improve the general parsing accuracy. In an evaluation on the PENN treebank (Marcus et al., 1993), the parser outperformed other unlexicalized PCFG parsers in terms of labeled bracketing fscore. Its results for the empty category prediction task and the trace-filler coindexation task exceed all previously reported results with 84.1% and 77.4% fscore, respectively. 1 Introduction Empty categories (also called null elements) are used in the annotation of the PENN treebank (Marcus et al., 1993) in order to represent syntactic phenomena like constituent movement (e.g. whextraction), discontinuous constituents, and missing elements (PRO elements, empty complementizers and relative pronouns). Moved constituents are co-indexed with a trace which is located at the position where the moved constituent is to be interpreted. Figure 1 shows an example of constituent movement in a relative clause. Empty categories provide important information for the semantic interpretation, in particular NP NP NNS things SBAR WHPP-1 IN of WHNP WDT which S NP-SBJ PRP they VP VBP are ADJP-PRD JJ unaware PP -NONE*T*-1 Figure 1: Co-indexation of traces and fillers for determining the predicate-argument structure of a sentence. However, most broad-coverage statistical parsers (Collins, 1997; Charniak, 2000, and others) which are trained on the PENN treebank generate parse trees without empty categories. In order to augment such parsers with empty category prediction, three rather different strategies have been proposed: (i) pre-processing of the input sentence with a tagger which inserts empty categories into the input string of the parser (Dienes and Dubey, 2003b; Dienes and Dubey, 2003a). The parser treats the empty elements like normal input tokens. (ii) post-processing of the parse trees with a pattern matcher which adds empty categories after parsing (Johnson, 2001; Campbell, 2004; Levy and Manning, 2004) (iii) in-processing of the empty categories with a slash percolation mechanism (Dienes and Dubey, 2003b; Dienes and Dubey, 2003a). The empty elements are here generated by the grammar. Good results have been obtained with all three approaches, but (Dienes and Dubey, 2003b) reported that in their experiments, the in-processing of the empty categories only worked with lexicalized parsing. They explain that their unlex177 icalized PCFG parser produced poor results because the beam search strategy applied there eliminated many correct constituents with empty elements. The scores of these constituents were too low compared with the scores of constituents without empty elements. They speculated that “doing an exhaustive search might help” here. 
In this paper, we confirm this hypothesis and show that it is possible to accurately predict empty categories with unlexicalized PCFG parsing and slash features if the true Viterbi parse is computed. In our experiments, we used the BitPar parser (Schmid, 2004) and a PCFG which was extracted from a version of the PENN treebank that was automatically annotated with features in the style of (Klein and Manning, 2003). 2 Feature Annotation A context-free grammar which generates empty categories has to make sure that a filler exists for each trace and vice versa. A well-known technique which enforces this constraint is the GPSGstyle percolation of a slash feature: All constituents on the direct path from the trace to the filler are annotated with a special feature which represents the category of the filler as shown in figure 2. In order to restore the original treebank anNP NP NNS things SBAR WHPP/WHPP IN of WHNP WDT which S/WHPP NP-SBJ PRP they VP/WHPP VBP are ADJP-PRD/WHPP JJ unaware PP/WHPP -NONE-/WHPP *T*/WHPP Figure 2: Slash features: The filler node of category WHNP is linked to the trace node via percolation of a slash feature. The trace node is labeled with *T*. notation with co-reference indices from the representation with slash features, the parse tree has to be traversed starting at a trace node and following the nodes annotated with the respective filler category until the filler node is encountered. Normally, the filler node is a sister node of an ancestor node of the trace, i.e. the filler c-commands the trace node, but in case of clausal fillers it is also possible that the filler dominates the trace. An example is the sentence “S-1 She had – he informed her *1 – kidney trouble” whose parse tree is shown in figure 3. Besides the slash features, we used other features in order to improve the parsing accuracy of the PCFG, inspired by the work of Klein and Manning (2003). The most important ones of these features1 will now be described in detail. Section 4.3 shows the impact of these features on labeled bracketing accuracy and empty category prediction. VP feature VPs were annotated with a feature that distinguishes between finite, infinitive, toinfinitive, gerund, past participle, and passive VPs. S feature The S node feature distinguishes between imperatives, finite clauses, and several types of small clauses. Parent features Modifier categories like SBAR, PP, ADVP, RB and NP-ADV were annotated with a parent feature (cf. Johnson (1998)). The parent features distinguish between verbal (VP), adjectival (ADJP, WHADJP), adverbial (ADVP, WHADVP), nominal (NP, WHNP, QP), prepositional (PP) and other parents. PENN tags The PENN treebank annotation uses semantic tags to refine syntactic categories. Most parsers ignore this information. We preserved the tags ADV, CLR, DIR, EXT, IMP, LGS, LOC, MNR, NOM, PRD, PRP, SBJ and TMP in combination with selected categories. Auxiliary feature We added a feature to the part-of-speech tags of verbs in order to distinguish between be, do, have, and full verbs. Agreement feature Finite VPs are marked with 3s (n3s) if they are headed by a verb with part-ofspeech VBZ (VBP). Genitive feature NP nodes which dominate a node of the category POS (possessive marker) are marked with a genitive flag. Base NPs NPs dominating a node of category NN, NNS, NNP, NNPS, DT, CD, JJ, JJR, JJS, PRP, RB, or EX are marked as base NPs. 
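As an illustration of the trace-filler recovery step described at the beginning of this section, the following sketch walks up from a trace along slash-annotated ancestors until a sister node of the filler category is found. The node attributes (label, slash, parent, children) are assumed here and do not reflect the parser's actual data structures, and the rarer case of a clausal filler dominating its trace (Figure 3) is not handled.

    # Hedged sketch only: assumes tree nodes with .label, .slash, .parent
    # and .children attributes.
    def find_filler(trace, filler_cat):
        node = trace
        while node.parent is not None:
            parent = node.parent
            for sister in parent.children:
                # The filler normally c-commands the trace, i.e. it is a
                # sister of one of the slash-annotated ancestors.
                if sister is not node and sister.label == filler_cat:
                    return sister
            if parent.slash != filler_cat:
                break                  # left the slash path without finding a filler
            node = parent              # keep climbing along the slash path
        return None                    # e.g. a clausal filler dominating the trace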
1The complete annotation program is available from the author’s home page at http://www.ims.unistuttgart.de/ schmid 178 S-1 NP-SBJ PRP She VP VBD had PRN : – S NP-SBJ PRP he VP VBD informed NP PRP her SBAR -NONE0 S -NONE*T*-1 : – NP NN kidney NN trouble . . Figure 3: Example of a filler which dominates its trace IN feature The part-of-speech tags of the 45 most frequent prepositions were lexicalized by adding the preposition as a feature. The new partof-speech tag of the preposition “by” is “IN/by”. Irregular adverbs The part-of-speech tags of the adverbs “as”, “so”, “about”, and “not” were also lexicalized. Currency feature NP and QP nodes are marked with a currency flag if they dominate a node of category $, #, or SYM. Percent feature Nodes of the category NP or QP are marked with a percent flag if they dominate the subtree (NN %). Any node which immediately dominates the token %, is marked, as well. Punctuation feature Nodes which dominate sentential punctuation (.?!) are marked. DT feature Nodes of category DT are split into indefinite articles (a, an), definite articles (the), and demonstratives (this, that, those, these). WH feature The wh-tags (WDT, WP, WRB, WDT) of the words which, what, who, how, and that are also lexicalized. Colon feature The part-of-speech tag ’:’ was replaced with “;”, “–” or “...” if it dominated a corresponding token. DomV feature Nodes of a non-verbal syntactic category are marked with a feature if they dominate a node of category VP, SINV, S, SQ, SBAR, or SBARQ. Gap feature S nodes dominating an empty NP are marked with the feature gap. Subcategorization feature The part-of-speech tags of verbs are annotated with a feature which encodes the sequence of arguments. The encoding maps reflexive NPs to r, NP/NP-PRD/SBARNOM to n, ADJP-PRD to j, ADVP-PRD to a, PRT to t, PP/PP-DIR to p, SBAR/SBAR-CLR to b, S/fin to sf, S/ppres/gap to sg, S/to/gap to st, other S nodes to so, VP/ppres to vg, VP/ppast to vn, VP/pas to vp, VP/inf to vi, and other VPs to vo. A verb with an NP and a PP argument, for instance, is annotated with the feature np. Adjectives, adverbs, and nouns may also get a subcat feature which encodes a single argument using a less fine-grained encoding which maps PP to p, NP to n, S to s, and SBAR to b. A node of category NN or NNS e.g. is marked with a subcat feature if it is followed by an argument category unless the argument is a PP which is headed by the preposition of. RC feature In relative clauses with an empty relative pronoun of category WHADVP, we mark the SBAR node of the relative clause, the NP node to which it is attached, and its head child of category NN or NNS, if the head word is either way, ways, reason, reasons, day, days, time, moment, place, or position. This feature helps the parser to correctly insert WHADVP rather than WHNP. Figure 4 shows a sample tree. TMP features Each node on the path between an NP-TMP or PP-TMP node and its nominal head is labeled with the feature tmp. This feature helps the parser to identify temporal NPs and PPs. MNR and EXT features Similarly, each node on the path between an NP-EXT, NP-MNR or ADVP-TMP node and its head is labeled with the 179 NP NP/x NN/x time SBAR/x WHADVP-1 -NONE0 S NP-SBJ -NONE* VP TO to VP VB relax ADVP-TMP -NONE*T*-1 Figure 4: Annotation of relative clauses with empty relative pronoun of category WHADVP feature ext or mnr. 
ADJP features Nodes of category ADJP which are dominated by an NP node are labeled with the feature “post” if they are in final position and the feature “attr” otherwise. JJ feature Nodes of category JJ which are dominated by an ADJP-PRD node are labeled with the feature “prd”. JJ-tmp feature JJ nodes which are dominated by an NP-TMP node and which themselves dominate one of the words “last”, “next”, “late”, “previous”, “early”, or “past” are labeled with tmp. QP feature If some node dominates an NP node followed by an NP-ADV node as in (NP (NP one dollar) (NP-ADV a day)), the first child NP node is labeled with the feature “qp”. If the parent is an NP node, it is also labeled with “qp”. NP-pp feature NP nodes which dominate a PP node are labeled with the feature pp. If this PP itself is headed by the preposition of, then it is annotated with the feature of. MWL feature In adverbial phrases which neither dominate an adverb nor another adverbial phrase, we lexicalize the part-of-speech tags of a small set of words like “least” (at least), “kind”, or “sort” which appear frequently in such adverbial phrases. Case feature Pronouns like he or him , but not ambiguous pronouns like it are marked with nom or acc, respectively. Expletives If a subject NP dominates an NP which consists of the pronoun it, and an S-trace in sentences like It is important to..., the dominated NP is marked with the feature expl. LST feature The parent nodes of LST nodes2 are marked with the feature lst. Complex conjunctions In SBAR constituents starting with an IN and an NN child node (usually indicating one of the two complex conjunctions “in order to” or “in case of”), we mark the NN child with the feature sbar. LGS feature The PENN treebank marks the logical subject of passive clauses which are realized by a by-PP with the semantic tag LGS. We move this tag to the dominating PP. OC feature Verbs are marked with an object control feature if they have an NP argument which dominates an NP filler and an S argument which dominates an NP trace. An example is the sentence She asked him to come. Corrections The part-of-speech tags of the PENN treebank are not always correct. Some of the errors (like the tag NNS in VP-initial position) can be identified and corrected automatically in the training data. Correcting tags did not always improve parsing accuracy, so it was done selectively. The gap and domV features described above were also used by Klein and Manning (2003). All features were automatically added to the PENN treebank by means of an annotation program. Figure 5 shows an example of an annotated parse tree. 3 Parameter Smoothing We extracted the grammar from sections 2–21 of the annotated version of the PENN treebank. In order to increase the coverage of the grammar, we selectively applied markovization to the grammar (cf. Klein and Manning (2003)) by replacing long infrequent rules with a set of binary rules. Markovization was only applied if none of the non-terminals on the right hand side of the rule had a slash feature in order to avoid negative effects on the slash feature percolation mechanism. The probabilities of the grammar rules were directly estimated with relative frequencies. No smoothing was applied, here. The lexical probabilities, on the other hand, were smoothed with 2LST annotates the list symbol in enumerations. 180 S/fin/. 
NP-SBJ/3s/domV_<S> NP/base/3s/expl PRP/expl It S_<S> -NONE-_<S> *EXP*_#<S> VP/3s+<S> VBZ/pst ’s PP/V IN/up up PP/PP TO to NP/base PRP you S/to/gap+#<S> NP-SBJ -NONE* VP/to TO to VP/inf VV/r protect NP/refl/base PRP/refl yourself Figure 5: An Annotated Parse Tree the following technique which was adopted from Klein and Manning (2003). Each word is assigned to one of 216 word classes. The word classes are defined with regular expressions. Examples are the class [A-Za-z0-9-]+-old which contains the word 20-year-old, the class [a-z][az]+ifies which contains clarifies, and a class which contains a list of capitalized adjectives like Advanced. The word classes are ordered. If a string is matched by the regular expressions of more than one word class, then it is assigned to the first of these word classes. For each word class, we compute part-of-speech probabilities with relative frequencies. The part-of-speech frequencies  of a word  are smoothed by adding the part-of-speech probability    of the word class  according to equation 1 in order to obtain the smoothed frequency   . The part-ofspeech probability of the word class is weighted by a parameter  whose value was set to 4 after testing on held-out data. The lexical probabilities are finally estimated from the smoothed frequencies according to equation 2.         (1)      !#"%$  &' (2) 4 Evaluation In our experiments, we used the usual splitting of the PENN treebank into training data (sections 2– 21), held-out data (section 22), and test data (section 23). The grammar extracted from the automatically annotated version of the training corpus contained 52,297 rules with 3,453 different non-terminals. Subtrees which dominated only empty categories were collapsed into a single empty element symbol. The parser skips over these symbols during parsing, but adds them to the output parse. Overall, there were 308 different empty element symbols in the grammar. Parsing section 23 took 169 minutes on a DualOpteron system with 2.2 GHz CPUs, which is about 4.2 seconds per sentence. precision recall f-score this paper 86.9 86.3 86.6 Klein/Manning 86.3 85.1 85.7 Table 1: Labeled bracketing accuracy on section 23 Table 1 shows the labeled bracketing accuracy of the parser on the whole section 23 and compares it to the results reported in Klein and Manning (2003) for sentences with up to 100 words. 4.1 Empty Category Prediction Table 2 reports the accuracy of the parser in the empty category (EC) prediction task for ECs occurring more than 6 times. Following Johnson (2001), an empty category was considered correct if the treebank parse contained an empty node of the same category at the same string position. Empty SBAR nodes which dominate an empty S node are treated as a single empty element and listed as SBAR-S in table 2. Frequent types of empty elements are recognized quite reliably. Exceptions are the traces of adverbial and prepositional phrases where the recall was only 65% and 48%, respectively, and empty relative pronouns of type WHNP and WHADVP with f-scores around 60%. A couple of empty relative pronouns of type WHADVP were mis-analyzed as WHNP which explains why the precision is higher than the recall for WHADVP, but vice versa for WHNP. 181 prec. recall f-sc. freq. 
NP * 87.0 85.9 86.5 1607 NP *T* 84.9 87.6 86.2 508 0 95.2 89.7 92.3 416 *U* 95.3 93.8 94.5 388 ADVP *T* 80.3 64.7 71.7 170 S *T* 86.7 93.8 90.1 160 SBAR-S *T* 88.5 76.7 82.1 120 WHNP 0 57.6 63.6 60.4 107 WHADVP 0 75.0 50.0 60.0 36 PP *ICH* 11.1 3.4 5.3 29 PP *T* 73.7 48.3 58.3 29 SBAR *EXP* 28.6 12.5 17.4 16 VP *?* 33.3 40.0 36.4 15 S *ICH* 61.5 57.1 59.3 14 S *EXP* 66.7 71.4 69.0 14 SBAR *ICH* 60.0 25.0 35.3 12 NP *?* 50.0 9.1 15.4 11 ADJP *T* 100.0 77.8 87.5 9 SBAR-S *?* 66.7 25.0 36.4 8 VP *T* 100.0 37.5 54.5 8 overall 86.0 82.3 84.1 3716 Table 2: Accuracy of empty category prediction on section 23. The first column shows the type of the empty element and – except for empty complementizers and empty units – also the category. The last column shows the frequency in the test data. The accuracy of the pseudo attachment labels *RNR*, *ICH*, *EXP*, and *PPA* was generally low with a precision of 41%, recall of 21%, and f-score of 28%. Empty elements with a test corpus frequency below 8 were almost never generated by the parser. 4.2 Co-Indexation Table 3 shows the accuracy of the parser on the co-indexation task. A co-indexation of a trace and a filler is represented by a 5-tuple consisting of the category and the string position of the trace, as well as the category, start and end position of the filler. A co-indexation is judged correct if the treebank parse contains the same 5-tuple. For NP3 and S4 traces of type ‘*T*’, the coindexation results are quite good with 85% and 92% f-score, respectively. For ‘*T*’-traces of 3NP traces of type *T* result from wh-extraction in questions and relative clauses and from fronting. 4S traces of type *T* occur in sentences with quoted speech like the sentence “That’s true!”, he said *T*. other categories and for NP traces of type ‘*’,5 the parser shows high precision, but moderate recall. The recall of infrequent types of empty elements is again low, as in the recognition task. prec. rec. f-sc. freq. NP * 81.1 72.1 76.4 1140 WH NP *T* 83.7 86.8 85.2 507 S *T* 92.0 91.0 91.5 277 WH ADVP *T* 78.6 63.2 70.1 163 PP *ICH* 14.3 3.4 5.6 29 WH PP *T* 68.8 50.0 57.9 22 SBAR *EXP* 25.0 12.5 16.7 16 S *ICH* 57.1 53.3 55.2 15 S *EXP* 66.7 71.4 69.0 14 SBAR *ICH* 60.0 25.0 35.3 12 VP *T* 33.3 12.5 18.2 8 ADVP *T* 60.0 42.9 50.0 7 PP *T* 100.0 28.6 44.4 7 overall 81.7 73.5 77.4 2264 Table 3: Co-indexation accuracy on section 23. The first column shows the category and type of the trace. If the filler category of the filler is different from the category of the trace, it is added in front. The filler category is abbreviated to “WH” if the rest is identical to the trace category. The last column shows the frequency in the test data. In order to get an impression how often EC prediction errors resulted from misplacement rather than omission, we computed EC prediction accuracies without comparing the EC positions. We observed the largest f-score increase for ADVP *T* and PP *T*, where attachment ambiguities are likely, and for VP *?* which is infrequent. 4.3 Feature Evaluation We ran a series of evaluations on held-out data in order to determine the impact of the different features which we described in section 2 on the parsing accuracy. In each run, we deleted one of the features and measured how the accuracy changed compared to the baseline system with all features. The results are shown in table 4. 
5The trace type ‘*’ combines two types of traces with different linguistic properties, namely empty objects of passive constructions which are co-indexed with the subject, and empty subjects of participial and infinitive clauses which are co-indexed with an NP of the matrix clause. 182 Feature LB EC CI slash feature 0.43 – – VP features 2.93 6.38 5.46 PENN tags 2.34 4.54 6.75 IN feature 2.02 2.57 5.63 S features 0.49 3.08 4.13 V subcat feature 0.68 3.17 2.94 punctuation feat. 0.82 1.11 1.86 all PENN tags 0.84 0.69 2.03 domV feature 1.76 0.15 0.00 gap feature 0.04 1.20 1.32 DT feature 0.57 0.44 0.99 RC feature 0.00 1.11 1.10 colon feature 0.41 0.84 0.44 ADV parent 0.50 0.04 0.93 auxiliary feat. 0.40 0.29 0.77 SBAR parent 0.45 0.24 0.71 agreement feat. 0.05 0.52 1.15 ADVP subcat feat. 0.33 0.32 0.55 genitive feat. 0.39 0.29 0.44 NP subcat feat. 0.33 0.08 0.76 no-tmp 0.14 0.90 0.16 base NP feat. 0.47 -0.24 0.55 tag correction 0.13 0.37 0.44 irr. adverb feat. 0.04 0.56 0.39 PP parent 0.08 0.04 0.82 ADJP features 0.14 0.41 0.33 currency feat. 0.06 0.82 0.00 qp feature 0.13 0.14 0.50 PP tmp feature -0.24 0.65 0.60 WH feature 0.11 0.25 0.27 percent feat. 0.34 -0.10 0.10 NP-ADV parent f. 0.07 0.14 0.39 MNR feature 0.08 0.35 0.11 JJ feature 0.08 0.18 0.27 case feature 0.05 0.14 0.27 Expletive feat. -0.01 0.16 0.27 LGS feature 0.17 0.07 0.00 ADJ subcat 0.00 0.00 0.33 OC feature 0.00 0.00 0.22 JJ-tmp feat. 0.09 0.00 0.00 refl. pronoun 0.02 -0.03 0.16 EXT feature -0.04 0.09 0.16 MWL feature 0.05 0.00 0.00 complex conj. f. 0.07 -0.07 0.00 LST feature 0.12 -0.12 -0.11 NP-pp feature 0.13 -0.57 -0.39 Table 4: Differences between the baseline f-scores for labeled bracketing, EC prediction, and coindexation (CI) and the f-scores without the specified feature. 5 Comparison Table 7 compares the empty category prediction results of our parser with those reported in Johnson (2001), Dienes and Dubey (2003b) and Campbell (2004). In terms of recall and f-score, our parser outperforms the other parsers. In terms of precision, the tagger of Dienes and Dubey is the best, but its recall is the lowest of all systems. prec. recall f-score this paper 86.0 82.3 84.1 Campbell 85.2 81.7 83.4 Dienes & Dubey 86.5 72.9 79.1 Johnson 85 74 79 Table 5: Accuracy of empty category prediction on section 23 The good performance of our parser on the empty element recognition task is remarkable considering the fact that its performance on the labeled bracketing task is 3% lower than that of the Charniak (2000) parser used by Campbell (2004). prec. recall f-score this paper 81.7 73.5 77.4 Campbell 78.3 75.1 76.7 Dienes & Dubey (b) 81.5 68.7 74.6 Dienes & Dubey (a) 80.5 66.0 72.6 Johnson 73 63 68 Table 6: Co-indexation accuracy on section 23 Table 6 compares our co-indexation results with those reported in Johnson (2001), Dienes and Dubey (2003b), Dienes and Dubey (2003a), and Campbell (2004). Our parser achieves the highest precision and f-score. Campbell (2004) reports a higher recall, but lower precision. Table 7 shows the trace prediction accuracies of our parser, Johnson’s (2001) parser with parser input and perfect input, and Campbell’s (2004) parser with perfect input. The accuracy of Johnson’s parser is consistently lower than that of the other parsers and it has particular difficulties with ADVP traces, SBAR traces, and empty relative pronouns (WHNP 0). 
Campbell’s parser and our parser cannot be directly compared, but when we take the respective performance difference to Johnson’s parser as evidence, we might conclude that Campbell’s parser works particularly well on NP *, *U*, and WHNP 0, whereas our system 183 paper J1 J2 C NP * 83.2 82 91 97.5 NP *T* 86.2 81 91 96.2 0 92.3 88 96 98.5 *U* 94.5 92 95 98.6 ADVP *T* 71.7 56 66 79.9 S *T* 90.1 88 90 92.7 SBAR-S *T* 82.1 70 74 84.4 WHNP 0 60.4 47 77 92.4 WHADVP 0 60.0 – – 73.3 Table 7: Comparison of the empty category prediction accuracies for different categories in this paper (paper), in (Johnson, 2001) with parser input (J1), in (Johnson, 2001) with perfect input (J2), and in (Campbell, 2004) with perfect input. is slightly better on empty complementizers (0), ADVP traces, and SBAR traces. 6 Summary We presented an unlexicalized PCFG parser which applies a slash feature percolation mechanism to generate parse trees with empty elements and coindexation of traces and fillers. The grammar was extracted from a version of the PENN treebank which was annotated with slash features and a set of other features that were added in order to improve the general parsing accuracy. The parser computes true Viterbi parses unlike most other parsers for treebank grammars which are not guaranteed to produce the most likely parse tree because they apply pruning strategies like beam search. We evaluated the parser using the standard PENN treebank training and test data. The labeled bracketing f-score of 86.6% is – to our knowledge – the best f-score reported for unlexicalized PCFGs, exceeding that of Klein and Manning (2003) by almost 1%. On the empty category prediction task, our parser outperforms the best previously reported system (Campbell, 2004) by 0.7% reaching an f-score of 84.1%, although the general parsing accuracy of our unlexicalized parser is 3% lower than that of the parser used by Campbell (2004). Our parser also ranks highest in terms of the co-indexation accuracy with 77.4% f-score, again outperforming the system of Campbell (2004) by 0.7%. References Richard Campbell. 2004. Using linguistic principles to recover empty categories. In Proceedings of the 42nd Annual Meeting of the ACL, pages 645–652, Barcelona, Spain. Eugene Charniak. 2000. A maximum-entropyinspired parser. In Proceedings of the 1st Meeting of the North American Chapter of the Association for Computational Linguistics (ANLP-NAACL 2000), pages 132–139, Seattle, Washington. Michael Collins. 1997. Three generative, lexicalised models for statistical parsing. In Proceedings of the 35th Annual Meeting of the ACL, Madrid, Spain. Péter Dienes and Amit Dubey. 2003a. Antecedent recovery: Experiments with a trace tagger. In Proceedings of the 2003 Conference on Empirical Methods in Natural Language Processing, Sapporo, Japan. Péter Dienes and Amit Dubey. 2003b. Deep syntactic processing by combining shallow methods. In Proceedings of the 41st Annual Meeting of the ACL, pages 431–438, Sapporo, Japan. Mark Johnson. 1998. PCFG models of linguistic tree representations. Computational Linguistics, 24(4):613–632. Mark Johnson. 2001. A simple pattern-matching algorithm for recovering empty nodes and their antecedents. In Proceedings of the 39th Annual Meeting of the ACL, pages 136–143, Toulouse, France. Dan Klein and Christopher D. Manning. 2003. Accurate unlexicalized parsing. In Proceedings of the 41st Annual Meeting of the ACL, pages 423–430, Sapporo, Japan. Roger Levy and Christopher D. Manning. 2004. 
Deep dependencies from context-free statistical parsers: Correcting the surface dependency approximation. In Proceedings of the 42nd Annual Meeting of the ACL, pages 327–334, Barcelona, Spain. Mitchell P. Marcus, Beatrice Santorini, and Mary Ann Marcinkiewicz. 1993. Building a large annotated corpus of English: the Penn Treebank. Computational Linguistics, 19(2):313–330, June. Helmut Schmid. 2004. Efficient parsing of highly ambiguous context-free grammars with bit vectors. In Proceedings of the 20th International Conference on Computational Linguistics (COLING 2004), volume 1, pages 162–168, Geneva, Switzerland. 184
Proceedings of the 21st International Conference on Computational Linguistics and 44th Annual Meeting of the ACL, pages 185–192, Sydney, July 2006. c⃝2006 Association for Computational Linguistics Learning More Effective Dialogue Strategies Using Limited Dialogue Move Features Matthew Frampton and Oliver Lemon HCRC, School of Informatics University of Edinburgh Edinburgh, EH8 9LW, UK [email protected], [email protected] Abstract We explore the use of restricted dialogue contexts in reinforcement learning (RL) of effective dialogue strategies for information seeking spoken dialogue systems (e.g. COMMUNICATOR (Walker et al., 2001)). The contexts we use are richer than previous research in this area, e.g. (Levin and Pieraccini, 1997; Scheffler and Young, 2001; Singh et al., 2002; Pietquin, 2004), which use only slot-based information, but are much less complex than the full dialogue “Information States” explored in (Henderson et al., 2005), for which tractabe learning is an issue. We explore how incrementally adding richer features allows learning of more effective dialogue strategies. We use 2 user simulations learned from COMMUNICATOR data (Walker et al., 2001; Georgila et al., 2005b) to explore the effects of different features on learned dialogue strategies. Our results show that adding the dialogue moves of the last system and user turns increases the average reward of the automatically learned strategies by 65.9% over the original (hand-coded) COMMUNICATOR systems, and by 7.8% over a baseline RL policy that uses only slot-status features. We show that the learned strategies exhibit an emergent “focus switching” strategy and effective use of the ‘give help’ action. 1 Introduction Reinforcement Learning (RL) applied to the problem of dialogue management attempts to find optimal mappings from dialogue contexts to system actions. The idea of using Markov Decision Processes (MDPs) and reinforcement learning to design dialogue strategies for dialogue systems was first proposed by (Levin and Pieraccini, 1997). There, and in subsequent work such as (Singh et al., 2002; Pietquin, 2004; Scheffler and Young, 2001), only very limited state information was used in strategy learning, based always on the number and status of filled information slots in the application (e.g. departure-city is filled, destination-city is unfilled). This we refer to as low-level contextual information. Much prior work (Singh et al., 2002) concentrated only on specific strategy decisions (e.g. confirmation and initiative strategies), rather than the full problem of what system dialogue move to take next. The simple strategies learned for low-level definitions of state cannot be sensitive to (sometimes critical) aspects of the dialogue context, such as the user’s last dialogue move (DM) (e.g. requesthelp) unless that move directly affects the status of an information slot (e.g. provide-info(destinationcity)). We refer to additional contextual information such as the system and user’s last dialogue moves as high-level contextual information. (Frampton and Lemon, 2005) learned full strategies with limited ‘high-level’ information (i.e. the dialogue move(s) of the last user utterance) and only used a stochastic user simulation whose probabilities were supplied via commonsense and intuition, rather than learned from data. This paper uses data-driven n-gram user simulations (Georgila et al., 2005a) and a richer dialogue context. 
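To make the distinction concrete, a slot-based state records only which information slots are empty, filled or confirmed, whereas the richer context additionally records the dialogue moves of the last system and user turns. The feature names and values below are purely illustrative, not the exact encoding used by the learner.

    # Illustrative only -- not the exact state encoding used in the experiments.
    slot_state = {                       # "low-level" context: slot status only
        "departure_city":   "confirmed",
        "destination_city": "filled",
        "departure_date":   "empty",
        "departure_time":   "empty",
    }
    extended_state = dict(slot_state,    # "high-level" context adds dialogue moves
                          last_system_move="ask(departure_date)",
                          last_user_move="request_help")
    # A user's request for help changes no slot, so a slot-based policy cannot
    # react to it, while the extended state makes it visible to the learner.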
On the other hand, increasing the size of the state space for RL has the danger of making the learning problem intractable, and at the very least means that data is more sparse and state approximation methods may need to be used (Henderson et al., 2005). To date, the use of very large state spaces relies on a “hybrid” supervised/reinforcement learning technique, where the reinforcement learning element has not yet been shown to significantly improve policies over the purely supervised case (Henderson et al., 2005). 185 The extended state spaces that we propose are based on theories of dialogue such as (Clark, 1996; Searle, 1969; Austin, 1962; Larsson and Traum, 2000), where which actions a dialogue participant can or should take next are not based solely on the task-state (i.e. in our domain, which slots are filled), but also on wider contextual factors such as a user’s dialogue moves or speech acts. In future work we also intend to use feature selection techniques (e.g. correlation-based feature subset (CFS) evaluation (Rieser and Lemon, 2006)) on the COMMUNICATOR data (Georgila et al., 2005a; Walker et al., 2001) in order to identify additional context features that it may be effective to represent in the state. 1.1 Methodology To explore these issues we have developed a Reinforcement Learning (RL) program to learn dialogue strategies while accurate simulated users (Georgila et al., 2005a) converse with a dialogue manager. See (Singh et al., 2002; Scheffler and Young, 2001) and (Sutton and Barto, 1998) for a detailed description of Markov Decision Processes and the relevant RL algorithms. In dialogue management we are faced with the problem of deciding which dialogue actions it is best to perform in different states. We use (RL) because it is a method of learning by delayed reward using trial-and-error search. These two properties appear to make RL techniques a good fit with the problem of automatically optimising dialogue strategies, because in task-oriented dialogue often the “reward” of the dialogue (e.g. successfully booking a flight) is not obtainable immediately, and the large space of possible dialogues for any task makes some degree of trial-and-error exploration necessary. We use both 4-gram and 5-gram user simulations for testing and for training (i.e. train with 4-gram, test with 5-gram, and vice-versa). These simulations also simulate ASR errors since the probabilities are learned from recognition hypotheses and system behaviour logged in the COMMUNICATOR data (Walker et al., 2001) further annotated with speech acts and contexts by (Georgila et al., 2005b). Here the task domain is flight-booking, and the aim for the dialogue manager is to obtain values for the user’s flight information “slots” i.e. departure city, destination city, departure date and departure time, before making a database query. We add the dialogue moves of the last user and system turns as context features and use these in strategy learning. We compare the learned strategies to 2 baselines: the original COMMUNICATOR systems and an RL strategy which uses only slot status features. 1.2 Outline Section 2 contains a description of our basic experimental framework, and a detailed description of the reinforcement learning component and user simulations. Sections 3 and 4 describe the experiments and analyse our results, and in section 5 we conclude and suggest future work. 
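Before describing the experimental framework, it may help to see the feature sets being compared in concrete form. The sketch below is an illustrative encoding of our own, not the system's actual data structures: the baseline state records only the status of the four flight-booking slots, while the richer strategies compared in section 3 add the last user dialogue move and, further, the last system dialogue move.

```python
from dataclasses import dataclass
from typing import Optional, Tuple

# Each of the four flight-booking slots (departure city, destination city,
# departure date, departure time) is either empty, filled or confirmed.

@dataclass(frozen=True)
class DialogueState:
    """Illustrative RL state; the field names are ours, not the system's."""
    slots: Tuple[str, str, str, str]          # slot-status features (baseline)
    last_user_move: Optional[str] = None      # added for the UDM strategy
    last_system_move: Optional[str] = None    # added for the USDM strategy

def baseline_features(state):
    return state.slots

def udm_features(state):
    return state.slots + (state.last_user_move,)

def usdm_features(state):
    return state.slots + (state.last_system_move, state.last_user_move)

# Example: all slots empty, the user stayed quiet after the system asked slot 3.
s = DialogueState(("empty",) * 4, last_user_move="quiet", last_system_move="askSlot3")
print(usdm_features(s))   # ('empty', 'empty', 'empty', 'empty', 'askSlot3', 'quiet')
```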
2 The Experimental Framework Each experiment is executed using the DIPPER Information State Update dialogue manager (Bos et al., 2003) (which here is used to track and update dialogue context rather than deciding which actions to take), a Reinforcement Learning program (which determines the next dialogue action to take), and various user simulations. In sections 2.3 and 2.4 we give more details about the reinforcement learner and user simulations. 2.1 The action set for the learner Below is a list of all the different actions that the RL dialogue manager can take and must learn to choose between based on the context: 1. An open question e.g. ‘How may I help you?’ 2. Ask the value for any one of slots 1...n. 3. Explicitly confirm any one of slots 1...n. 4. Ask for the nth slot whilst implicitly confirming1 either slot value n −1 e.g. ‘So you want to fly from London to where?’, or slot value n + 1 5. Give help. 6. Pass to human operator. 7. Database Query. There are a couple of restrictions regarding which actions can be taken in which states: an open question is only available at the start of the dialogue, and the dialogue manager can only try to confirm non-empty slots. 2.2 The Reward Function We employ an “all-or-nothing” reward function which is as follows: 1. Database query, all slots confirmed: +100 2. Any other database query: −75 1Where n = 1 we implicitly confirm the final slot and where n = 4 we implicitly confirm the first slot. This action set does not include actions that ask the nth slot whilst implicitly confirming slot value n −2. These will be added in future experiments as we continue to increase the action and state space. 186 3. User simulation hangs-up: −100 4. DIPPER passes to a human operator: −50 5. Each system turn: −5 To maximise the chances of a slot value being correct, it must be confirmed rather than just filled. The reward function reflects the fact that a successful dialogue manager must maximise its chances of getting the slots correct i.e. they must all be confirmed. (Walker et al., 2000) showed with the PARADISE evaluation that confirming slots increases user satisfaction. The maximum reward that can be obtained for a single dialogue is 85, (the dialogue manager prompts the user, the user replies by filling all four of the slots in a single utterance, and the dialogue manager then confirms all four slots and submits a database query). 2.3 The Reinforcement Learner’s Parameters When the reinforcement learner agent is initialized, it is given a parameter string which includes the following: 1. Step Parameter: α = decreasing 2. Discount Factor: γ = 1 3. Action Selection Type = softmax (alternative is ϵ-greedy) 4. Action Selection Parameter: temperature = 15 5. Eligibility Trace Parameter: λ = 0.9 6. Eligibility Trace = replacing (alternative is accumulating) 7. Initial Q-values = 25 The reinforcement learner updates its Q-values using the Sarsa(λ) algorithm (see (Sutton and Barto, 1998)). The first parameter is the stepparameter α which may be a value between 0 and 1, or specified as decreasing. If it is decreasing, as it is in our experiments, then for any given Q-value update α is 1 k where k is the number of times that the state-action pair for which the update is being performed has been visited. This kind of step parameter will ensure that given a sufficient number of training dialogues, each of the Q-values will eventually converge. The second parameter (discount factor) γ may take a value between 0 and 1. 
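The reward function of section 2.2 and the parameter string above can be summarised in a compact sketch. The names and structure below are illustrative assumptions rather than the actual implementation, and the Sarsa(λ) update itself is omitted; the numerical values are those given in the text.

```python
def dialogue_reward(outcome: str, n_system_turns: int) -> int:
    """All-or-nothing reward of section 2.2 (values from the paper)."""
    final = {
        "db_query_all_confirmed": +100,
        "db_query_other":          -75,
        "user_hangup":            -100,
        "pass_to_operator":        -50,
    }[outcome]
    return final - 5 * n_system_turns   # -5 per system turn

# The maximum single-dialogue reward of 85 corresponds to a fully
# confirmed database query after three system turns (100 - 3*5).
assert dialogue_reward("db_query_all_confirmed", 3) == 85

# Learner settings listed in section 2.3 (illustrative constant names).
SARSA_CONFIG = {
    "step_parameter":     "decreasing",   # alpha = 1/k on the k-th visit
    "discount_factor":    1.0,            # gamma
    "action_selection":   "softmax",
    "temperature":        15,
    "eligibility_lambda": 0.9,
    "trace_type":         "replacing",
    "initial_q_value":    25,
}
```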
For the dialogue management problem we set it to 1 so that future rewards are taken into account as strongly as possible. Apart from updating Q-values, the reinforcement learner must also choose the next action for the dialogue manager and the third parameter specifies whether it does this by ϵ-greedy or softmax action selection (here we have used softmax). The fifth parameter, the eligibility trace parameter λ, may take a value between 0 and 1, and the sixth parameter specifies whether the eligibility traces are replacing or accumulating. We used replacing traces because they produced faster learning for the slot-filling task. The seventh parameter is for supplying the initial Q-values. 2.4 N-Gram User Simulations Here user simulations, rather than real users, interact with the dialogue system during learning. This is because thousands of dialogues may be necessary to train even a simple system (here we train on up to 50000 dialogues), and for a proper exploration of the state-action space the system should sometimes take actions that are not optimal for the current situation, making it a sadistic and timeconsuming procedure for any human training the system. (Eckert et al., 1997) were the first to use a user simulation for this purpose, but it was not goal-directed and so could produce inconsistent utterances. The later simulations of (Pietquin, 2004) and (Scheffler and Young, 2001) were to some extent “goal-directed” and also incorporated an ASR error simulation. The user simulations interact with the system via intentions. Intentions are preferred because they are easier to generate than word sequences and because they allow error modelling of all parts of the system, for example ASR error modelling and semantic errors. The user and ASR simulations must be realistic if the learned strategy is to be directly applicable in a real system. The n-gram user simulations used here (see (Georgila et al., 2005a) for details and evaluation results) treat a dialogue as a sequence of pairs of speech acts and tasks. They take as input the n−1 most recent speech act-task pairs in the dialogue history, and based on n-gram probabilities learned from the COMMUNICATOR data (automatically annotated with speech acts and Information States (Georgila et al., 2005b)), they then output a user utterance as a further speech-act task pair. These user simulations incorporate the effects of ASR errors since they are built from the user utterances as they were recognized by the ASR components of the original COMMUNICATOR systems. Note that the user simulations do not provide instantiated slot values e.g. a response to provide a destination city is the speech-act task pair “[provide info] [dest city]”. We cannot assume that two such responses in the same dialogue refer to the same 187 destination cities. Hence in the dialogue manager’s Information State where we record whether a slot is empty, filled, or confirmed, we only update from filled to confirmed when the slot value is implicitly or explicitly confirmed. An additional function maps the user speech-act task pairs to a form that can be interpreted by the dialogue manager. Post-mapping user responses are made up of one or more of the following types of utterance: (1) Stay quiet, (2) Provide 1 or more slot values, (3) Yes, (4) No, (5) Ask for help, (6) Hang-up, (7) Null (out-of-domain or no ASR hypothesis). 
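A rough sketch of how such an n-gram user simulation can be driven is given below. The probability table, speech-act names and mapping function are illustrative assumptions, not the simulations of (Georgila et al., 2005a); the essential point is that the next user speech-act/task pair is sampled conditioned on the n−1 most recent pairs, and then mapped to one of the post-mapping response types listed above.

```python
import random

# A user turn is represented as a (speech_act, task) pair,
# e.g. ("provide_info", "dest_city"); the act names here are illustrative.

class NGramUserSimulation:
    """Illustrative n-gram user simulation: the next user speech-act/task
    pair is sampled conditioned on the n-1 most recent pairs in the history."""

    def __init__(self, n, ngram_probs):
        # ngram_probs maps a context tuple of n-1 pairs to a list of
        # (next_pair, probability) entries estimated from corpus data.
        self.n = n
        self.ngram_probs = ngram_probs

    def next_user_move(self, history):
        context = tuple(history[-(self.n - 1):])
        candidates = self.ngram_probs.get(context, [(("null", "null"), 1.0)])
        pairs, probs = zip(*candidates)
        return random.choices(pairs, weights=probs, k=1)[0]

def to_dm_response(pair):
    """Map a speech-act/task pair onto one of the post-mapping response
    types listed above; slot values themselves are never instantiated."""
    act, task = pair
    return {
        "provide_info":  f"provide_slot({task})",
        "yes_answer":    "yes",
        "no_answer":     "no",
        "request_help":  "ask_for_help",
        "hangup":        "hang_up",
        "null":          "null",
    }.get(act, "stay_quiet")
```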
The quality of the 4 and 5-gram user simulations has been established through a variety of metrics and against the behaviour of the actual users of the COMMUNICATOR systems, see (Georgila et al., 2005a). 2.4.1 Limitations of the user simulations The user and ASR simulations are a fundamentally important factor in determining the nature of the learned strategies. For this reason we should note the limitations of the n-gram simulations used here. A first limitation is that we cannot be sure that the COMMUNICATOR training data is sufficiently complete, and a second is that the n-gram simulations only use a window of n moves in the dialogue history. This second limitation becomes a problem when the user simulation’s current move ought to take into account something that occurred at an earlier stage in the dialogue. This might result in the user simulation repeating a slot value unnecessarily, or the chance of an ASR error for a particular word being independent of whether the same word was previously recognised correctly. The latter case means we cannot simulate for example, a particular slot value always being liable to misrecognition. These limitations will affect the nature of the learned strategies. Different state features may assume more or less importance than they would if the simulations were more realistic. This is a point that we will return to in the analysis of the experimental results. In future work we will use the more accurate user simulations recently developed following (Georgila et al., 2005a) and we expect that these will improve our results still further. 3 Experiments First we learned strategies with the 4-gram user simulation and tested with the 5-gram simulation, and then did the reverse. We experimented with different feature sets, exploring whether better strategies could be learned by adding limited context features. We used two baselines for comparison: • The performance of the original COMMUNICATOR systems in the data set (Walker et al., 2001). • An RL baseline dialogue manager learned using only slot-status features i.e. for each of slots 1 −4, is the slot empty, filled or confirmed? We then learned two further strategies: • Strategy 2 (UDM) was learned by adding the user’s last dialogue move to the state. • Strategy 3 (USDM) was learned by adding both the user and system’s last dialogue moves to the state. The possible system and user dialogue moves were those given in sections 2.1 and 2.4 respectively, and the reward function was that described in section 2.2. 3.1 The COMMUNICATOR data baseline We computed the scores for the original handcoded COMMUNICATOR systems as was done by (Henderson et al., 2005), and we call this the “HLG05” score. This scoring function is based on task completion and dialogue length rewards as determined by the PARADISE evaluation (Walker et al., 2000). This function gives 25 points for each slot which is filled, another 25 for each that is confirmed, and deducts 1 point for each system action. In this case the maximum possible score is 197 i.e. 200 minus 3 actions, (the system prompts the user, the user replies by filling all four of the slots in one turn, and the system then confirms all four slots and offers the flight). The average score for the 1242 dialogues in the COMMUNICATOR dataset where the aim was to fill and confirm only the same four slots as we have used here was 115.26. The other COMMUNICATOR dialogues involved different slots relating to return flights, hotel-bookings and car-rentals. 
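For reference, the HLG05 scoring scheme described above can be written as a one-line function; this is a reconstruction from the published description, not the original scoring code.

```python
def hlg05_score(n_filled: int, n_confirmed: int, n_system_actions: int) -> int:
    """Task-completion / dialogue-length score used for the COMMUNICATOR baseline:
    25 points per filled slot, 25 per confirmed slot, minus 1 per system action."""
    return 25 * n_filled + 25 * n_confirmed - n_system_actions

# Best case described in the text: all four slots filled and confirmed in a
# three-action dialogue gives the maximum score of 197 (i.e. 200 - 3).
assert hlg05_score(4, 4, 3) == 197

# The average over the 1242 comparable COMMUNICATOR dialogues was 115.26.
```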
4 Results Figure 1 tracks the improvement of the 3 learned strategies for 50000 training dialogues with the 4gram user simulation, and figure 2 for 50000 training dialogues with the 5-gram simulation. They show the average reward (according to the function of section 2.2) obtained by each strategy over intervals of 1000 training dialogues. Table 1 shows the results for testing the strategies learned after 50000 training dialogues (the baseline RL strategy, strategy 2 (UDM) and strategy 3 (USDM)). The ‘a’ strategies were trained with the 4-gram user simulation and tested with 188 Features Av. Score HLG05 Filled Slots Conf. Slots Length 4 →5 gram = (a) RL Baseline (a) Slots status 51.67 190.32 100 100 −9.68 RL Strat 2, UDM (a) + Last User DM 53.65** 190.67 100 100 −9.33 RL Strat 3, USDM (a) + Last System DM 54.9** 190.98 100 100 −9.02 5 →4 gram = (b) RL Baseline (b) Slots status 51.4 190.28 100 100 −9.72 RL Strat 2, UDM (b) + Last User DM 54.46* 190.83 100 100 −9.17 RL Strat 3, USDM (b) + Last System DM 56.24** 191.25 100 100 −8.75 RL Baseline (av) Slots status 51.54 190.3 100 100 −9.7 RL Strat 2, UDM (av) + Last User DM 54.06** 190.75 100 100 −9.25 RL Strat 3, USDM (av) + Last System DM 55.57** 191.16 100 100 −8.84 COMM Systems 115.26 84.6 63.7 −33.1 Hybrid RL *** Information States 142.6 88.1 70.9 −16.4 Table 1: Testing the learned strategies after 50000 training dialogues, average reward achieved per dialogue over 1000 test dialogues. (a) = strategy trained using 4-gram and tested with 5-gram; (b) = strategy trained with 5-gram and tested with 4-gram; (av) = average; * significance level p < 0.025; ** significance level p < 0.005; *** Note: The Hybrid RL scores (here updated from (Henderson et al., 2005)) are not directly comparable since that system has a larger action set and fewer policy constraints. the 5-gram, while the ‘b’ strategies were trained with the 5-gram user simulation and tested with the 4-gram. The table also shows average scores for the strategies. Column 2 contains the average reward obtained per dialogue by each strategy over 1000 test dialogues (computed using the function of section 2.2). The 1000 test dialogues for each strategy were divided into 10 sets of 100. We carried out t-tests and found that in both the ‘a’ and ‘b’ cases, strategy 2 (UDM) performs significantly better than the RL baseline (significance levels p < 0.005 and p < 0.025), and strategy 3 (USDM) performs significantly better than strategy 2 (UDM) (significance level p < 0.005). With respect to average performance, strategy 2 (UDM) improves over the RL baseline by 4.9%, and strategy 3 (USDM) improves by 7.8%. Although there seem to be only negligible qualitative differences between strategies 2(b) and 3(b) and their ‘a’ equivalents, the former perform slightly better in testing. This suggests that the 4-gram simulation used for testing the ‘b’ strategies is a little more reliable in filling and confirming slot values than the 5-gram. The 3rd column “HLG05” shows the average scores for the dialogues as computed by the reward function of (Henderson et al., 2005). This is done for comparison with that work but also with the COMMUNICATOR data baseline. Using the HLG05 reward function, strategy 3 (USDM) improves over the original COMMUNICATOR systems baseline by 65.9%. The components making up the reward are shown in the final 3 columns of table 1. Here we see that all of the RL strategies are able to fill and confirm all of the 4 slots when conversing with the simulated COMMUNICATOR users. 
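The significance tests reported in Table 1 can be sketched as follows. The paper states only that t-tests were carried out over 10 sets of 100 test dialogues per strategy, so the use of an unpaired two-sample test on per-set mean rewards is an assumption here.

```python
from scipy import stats

def chunks(xs, size):
    """Split a flat list into consecutive groups of `size` items."""
    return [xs[i:i + size] for i in range(0, len(xs), size)]

def compare_strategies(rewards_a, rewards_b):
    """rewards_a / rewards_b: per-dialogue rewards for the 1000 test dialogues
    of two strategies. Each is grouped into 10 sets of 100 and the mean reward
    per set is compared (our reading of the evaluation protocol)."""
    means_a = [sum(c) / len(c) for c in chunks(rewards_a, 100)]
    means_b = [sum(c) / len(c) for c in chunks(rewards_b, 100)]
    t, p = stats.ttest_ind(means_a, means_b)   # unpaired test: an assumption
    return t, p
```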
The only variation is in the average length of dialogue required to confirm all four slots. The COMMUNICATOR systems were often unable to confirm or fill all of the user slots, and the dialogues were quite long on average. As stated in section 2.4.1, the n-gram simulations do not simulate the case of a particular user goal utterance being unrecognisable for the system. This was a problem that could be encountered by the real COMMUNICATOR systems. Nevertheless, the performance of all the learned strategies compares very well to the COMMUNICATOR data baseline. For example, in an average dialogue, the RL strategies filled and confirmed all four slots with around 9 actions not including offering the flight, but the COMMUNICATOR systems took an average of around 33 actions per dialogue, and often failed to complete the task. With respect to the hybrid RL result of (Henderson et al., 2005), shown in the final row of the table, Strategy 3 (USDM) shows a 34% improvement, though these results are not directly comparable because that system uses a larger action set and has fewer constraints (e.g. it can ask “how may I help you?” at any time, not just at the start of a dialogue). Finally, let us note that the performance of the RL strategies is close to optimal, but that there is some room for improvement. With respect to the HLG05 metric, the optimal system score would be 197, but this would only be available in rare cases where the simulated user supplies all 4 slots in the 189 -120 -100 -80 -60 -40 -20 0 20 40 0 5 10 15 20 25 30 35 40 45 50 Average Reward Number of Dialogues (Thousands) Training With 4-gram Baseline Strategy 2 Strategy 3 Figure 1: Training the dialogue strategies with the 4-gram user simulation first utterance. With respect to the metric we have used here (with a −5 per system turn penalty), the optimal score is 85 (and we currently score an average of 55.57). Thus we expect that there are still further improvments that can be made to more fully exploit the dialogue context (see section 4.3). 4.1 Qualitative Analysis Below are a list of general characteristics of the learned strategies: 1. The reinforcement learner learns to query the database only in states where all four slots have been confirmed. 2. With sufficient exploration, the reinforcement learner learns not to pass the call to a human operator in any state. 3. The learned strategies employ implicit confirmations wherever possible. This allows them to fill and confirm the slots in fewer turns than if they simply asked the slot values and then used explicit confirmation. 4. As a result of characteristic 3, which slots can be asked and implicitly confirmed at the same time influences the order in which the learned strategies attempt to fill and confirm each slot, e.g. if the status of the third slot is ‘filled’ and the others are ‘empty’, the learner learns to ask for the second or fourth slot -120 -100 -80 -60 -40 -20 0 20 40 0 5 10 15 20 25 30 35 40 45 50 Average Reward Number of Dialogues (Thousands) Training With 5-gram Baseline Strategy 2 Strategy 3 Figure 2: Training the dialogue strategies with the 5-gram user simulation rather than the first, since it can implicitly confirm the third while it asks for the second or fourth slots, but it cannot implicitly confirm the third while it asks for the first slot. This action is not available (see section 2.1). 
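Characteristic 4 can be made concrete with a small sketch of the slot-ordering preference the learner appears to have discovered. This is an illustration of the learned behaviour, not code from the system: because the available actions implicitly confirm only a cyclically adjacent slot (section 2.1), asking an empty slot next to a filled one lets a single turn both elicit and confirm information.

```python
def next_slot_to_ask(slots):
    """slots: list of four statuses in ('empty', 'filled', 'confirmed').
    Prefer an empty slot that is cyclically adjacent to a filled slot, so that
    the 'ask slot n, implicitly confirm slot n-1 or n+1' action applies."""
    n = len(slots)
    for i, status in enumerate(slots):
        if status != "empty":
            continue
        left, right = slots[(i - 1) % n], slots[(i + 1) % n]
        if "filled" in (left, right):
            return i
    # Otherwise fall back to the first empty slot, if any.
    empties = [i for i, s in enumerate(slots) if s == "empty"]
    return empties[0] if empties else None

# Example from the text: only the third slot (index 2) is filled, so the
# learner asks the second or fourth slot rather than the first.
print(next_slot_to_ask(["empty", "empty", "filled", "empty"]))   # -> 1
```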
4.2 Emergent behaviour In testing the UDM strategy (2) filled and confirmed all of the slots in fewer turns on average than the RL baseline, and strategy 3 (USDM) did this in fewer turns than strategy 2 (UDM). What then were the qualitative differences between the three strategies? The behaviour of the three strategies only seems to really deviate when a user response fails to fill or confirm one or more slots. Then the baseline strategy’s state has not changed and so it will repeat its last dialogue move, whereas the state for strategies 2 (UDM) and 3 (USDM) has changed and as a result, these may now try different actions. It is in such circumstances that the UDM strategy seems to be more effective than the baseline, and strategy 3 (USDM) more effective than the UDM strategy. In figure 3 we show illustrative state and learned action pairs for the different strategies. They relate to a situation where the first user response(s) in the dialogue has/have failed to fill a single slot value. NB: here ‘emp’ stands for ‘empty’ and ‘fill’ for ‘filled’ and they appear in the first four state variables, which stand for slot states. For strategy 2 (UDM), the fifth variable represents the user’s last 190 dialogue move, and the for strategy 3 (USDM), the fifth variable represents the system’s last dialogue move, and the sixth, the user’s last dialogue move. BASELINE STRATEGY State: [emp,emp,emp,emp] Action: askSlot2 STRATEGY 2 (UDM) State: [emp,emp,emp,emp,user(quiet)] Action: askSlot3 State: [emp,emp,emp,emp,user(null)] Action: askSlot1 STRATEGY 3 (USDM) State: [emp,emp,emp,emp,askSlot3,user(quiet)] Action: askSlot3 State: [emp,emp,emp,emp,askSlot3,user(null)] Action: giveHelp State: [emp,emp,emp,emp,giveHelp,user(quiet)] Action: askSlot3 State: [emp,emp,emp,emp,giveHelp,user(null)] Action: askSlot3 Figure 3: Examples of the different learned strategies and emergent behaviours: focus switching (for UDM) and giving help (for USDM) Here we can see that should the user responses continue to fail to provide a slot value, the baseline’s state will be unchanged and so the strategy will simply ask for slot 2 again. The state for strategy 2 (UDM) does change however. This strategy switches focus between slots 3 and 1 depending on whether the user’s last dialogue move was ‘null’ or ‘quiet’ NB. As stated in section 2.4, ‘null’ means out-of-domain or that there was no ASR hypothesis. Strategy 3 (USDM) is different again. Knowledge of the system’s last dialogue move as well as the user’s last move has enabled the learner to make effective use of the ‘give help’ action, rather than to rely on switching focus. When the user’s last dialogue move is ‘null’ in response to the system move ‘askSlot3’, then the strategy uses the ‘give help’ action before returning to ask for slot 3 again. The example described here is not the only example of strategy 2 (UDM) employing focus switching while strategy 3 (USDM) prefers to use the ‘give help’ action when a user response fails to fill or confirm a slot. This kind of behaviour in strategies 2 and 3 is emergent dialogue behaviour that has been learned by the system rather than explicitly programmed. 4.3 Further possibilities for improvement over the RL baseline Further improvements over the RL baseline might be possible with a wider set of system actions. Strategies 2 and 3 may learn to make more effective use of additional actions than the baseline e.g. 
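The state–action pairs of Figure 3 can equivalently be read as entries in a learned greedy policy table. The sketch below reproduces them in that form (the encoding is ours) to emphasise how the additional state features license the focus-switching and give-help behaviours that the baseline state cannot express.

```python
# Greedy policy entries corresponding to Figure 3.  States are
# (slot1, slot2, slot3, slot4, [last system move,] last user move).
BASELINE_POLICY = {
    ("emp", "emp", "emp", "emp"): "askSlot2",   # state unchanged, so it repeats
}

UDM_POLICY = {
    ("emp", "emp", "emp", "emp", "user(quiet)"): "askSlot3",
    ("emp", "emp", "emp", "emp", "user(null)"):  "askSlot1",   # focus switching
}

USDM_POLICY = {
    ("emp", "emp", "emp", "emp", "askSlot3", "user(quiet)"): "askSlot3",
    ("emp", "emp", "emp", "emp", "askSlot3", "user(null)"):  "giveHelp",
    ("emp", "emp", "emp", "emp", "giveHelp", "user(quiet)"): "askSlot3",
    ("emp", "emp", "emp", "emp", "giveHelp", "user(null)"):  "askSlot3",
}
```

Read this way, the tables make the qualitative difference explicit: the baseline has a single entry for the all-empty state, whereas the richer states give the learner distinct entries to attach different actions to.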
additional actions that implicitly confirm one slot whilst asking another may allow more of the switching focus described in section 4.1. Other possible additional actions include actions that ask for or confirm two or more slots simultaneously. In section 2.4.1, we highlighted the fact that the n-gram user simulations are not completely realistic and that this will make certain state features more or less important in learning a strategy. Thus had we been able to use even more realistic user simulations, including certain additional context features in the state might have enabled a greater improvement over the baseline. Dialogue length is an example of a feature that could have made a difference had the simulations been able to simulate the case of a particular goal utterance being unrecognisable for the system. The reinforcement learner may then be able to use the dialogue length feature to learn when to give up asking for a particular slot value and make a partially complete database query. This would of course require a reward function that gave some reward to partially complete database queries rather than the all-ornothing reward function used here. 5 Conclusion and Future Work We have used user simulations that are n-gram models learned from COMMUNICATOR data to explore reinforcement learning of full dialogue strategies with some “high-level” context information (the user and and system’s last dialogue moves). Almost all previous work (e.g. (Singh et al., 2002; Pietquin, 2004; Scheffler and Young, 2001)) has included only low-level information in state representations. In contrast, the exploration of very large state spaces to date relies on a “hybrid” supervised/reinforcement learning technique, where the reinforcement learning element has not been shown to significantly improve policies over the purely supervised case (Henderson et al., 2005). We presented our experimental environment, the reinforcement learner, the simulated users, and our methodology. In testing with the simulated COMMUNICATOR users, the new strategies learned with higher-level (i.e. dialogue move) information in the state outperformed the lowlevel RL baseline (only slot status information) 191 by 7.8% and the original COMMUNICATOR systems by 65.9%. These strategies obtained more reward than the RL baseline by filling and confirming all of the slots with fewer system turns on average. Moreover, the learned strategies show interesting emergent dialogue behaviour such as making effective use of the ‘give help’ action and switching focus to different subtasks when the current subtask is proving problematic. In future work, we plan to use even more realistic user simulations, for example those developed following (Georgila et al., 2005a), which incorporate elements of goal-directed user behaviour. We will continue to investigate whether we can maintain tractability and learn superior strategies as we add incrementally more high-level contextual information to the state. At some stage this may necessitate using a generalisation method such as linear function approximation (Henderson et al., 2005). We also intend to use feature selection techniques (e.g. CFS subset evaluation (Rieser and Lemon, 2006)) on in order to determine which contextual features this suggests are important. We will also carry out a more direct comparison with the hybrid strategies learned by (Henderson et al., 2005). In the slightly longer term, we will test our learned strategies on humans using a full spoken dialogue system. 
We hypothesize that the strategies which perform the best in terms of task completion and user satisfaction scores (Walker et al., 2000) will be those learned with high-level dialogue context information in the state. Acknowledgements This work is supported by the ESRC and the TALK project, www.talk-project.org. References John L. Austin. 1962. How To Do Things With Words. Oxford University Press. Johan Bos, Ewan Klein, Oliver Lemon, and Tetsushi Oka. 2003. Dipper: Description and formalisation of an information-state update dialogue system architecture. In 4th SIGdial Workshop on Discourse and Dialogue, Sapporo. Herbert H. Clark. 1996. Using Language. Cambridge University Press. Weiland Eckert, Esther Levin, and Roberto Pieraccini. 1997. User modeling for spoken dialogue system evaluation. In IEEE Workshop on Automatic Speech Recognition and Understanding. Matthew Frampton and Oliver Lemon. 2005. Reinforcement Learning Of Dialogue Strategies Using The User’s Last Dialogue Act. In IJCAI workshop on Knowledge and Reasoning in Practical Dialogue Systems. Kallirroi Georgila, James Henderson, and Oliver Lemon. 2005a. Learning User Simulations for Information State Update Dialogue Systems. In Interspeech/Eurospeech: the 9th biennial conference of the International Speech Communication Association. Kallirroi Georgila, Oliver Lemon, and James Henderson. 2005b. Automatic annotation of COMMUNICATOR dialogue data for learning dialogue strategies and user simulations. In Ninth Workshop on the Semantics and Pragmatics of Dialogue (SEMDIAL: DIALOR). James Henderson, Oliver Lemon, and Kallirroi Georgila. 2005. Hybrid Reinforcement/Supervised Learning for Dialogue Policies from COMMUNICATOR data. In IJCAI workshop on Knowledge and Reasoning in Practical Dialogue Systems,. Staffan Larsson and David Traum. 2000. Information state and dialogue management in the TRINDI Dialogue Move Engine Toolkit. Natural Language Engineering, 6(3-4):323–340. Esther Levin and Roberto Pieraccini. 1997. A stochastic model of computer-human interaction for learning dialogue strategies. In Eurospeech, Rhodes,Greece. Olivier Pietquin. 2004. A Framework for Unsupervised Learning of Dialogue Strategies. Presses Universitaires de Louvain, SIMILAR Collection. Verena Rieser and Oliver Lemon. 2006. Using machine learning to explore human multimodal clarification strategies. In Proc. ACL. Konrad Scheffler and Steve Young. 2001. Corpusbased dialogue simulation for automatic strategy learning and evaluation. In NAACL-2001 Workshop on Adaptation in Dialogue Systems, Pittsburgh, USA. John R. Searle. 1969. Speech Acts. Cambridge University Press. Satinder Singh, Diane Litman, Michael Kearns, and Marilyn Walker. 2002. Optimizing dialogue management with reinforcement learning: Experiments with the NJFun system. Journal of Artificial Intelligence Research (JAIR). Richard Sutton and Andrew Barto. 1998. Reinforcement Learning. MIT Press. Marilyn A. Walker, Candace A. Kamm, and Diane J. Litman. 2000. Towards Developing General Models of Usability with PARADISE. Natural Language Engineering, 6(3). Marilyn A. Walker, Rebecca J. Passonneau, and Julie E. Boland. 2001. Quantitative and Qualitative Evaluation of Darpa Communicator Spoken Dialogue Systems. In Meeting of the Association for Computational Linguistics, pages 515–522. 192
Proceedings of the 21st International Conference on Computational Linguistics and 44th Annual Meeting of the ACL, pages 193–200, Sydney, July 2006. c⃝2006 Association for Computational Linguistics Dependencies between Student State and Speech Recognition Problems in Spoken Tutoring Dialogues Mihai Rotaru University of Pittsburgh Pittsburgh, USA [email protected] Diane J. Litman University of Pittsburgh Pittsburgh, USA [email protected] Abstract Speech recognition problems are a reality in current spoken dialogue systems. In order to better understand these phenomena, we study dependencies between speech recognition problems and several higher level dialogue factors that define our notion of student state: frustration/anger, certainty and correctness. We apply Chi Square (χ2) analysis to a corpus of speech-based computer tutoring dialogues to discover these dependencies both within and across turns. Significant dependencies are combined to produce interesting insights regarding speech recognition problems and to propose new strategies for handling these problems. We also find that tutoring, as a new domain for speech applications, exhibits interesting tradeoffs and new factors to consider for spoken dialogue design. 1 Introduction Designing a spoken dialogue system involves many non-trivial decisions. One factor that the designer has to take into account is the presence of speech recognition problems (SRP). Previous work (Walker et al., 2000) has shown that the number of SRP is negatively correlated with overall user satisfaction. Given the negative impact of SRP, there has been a lot of work in trying to understand this phenomenon and its implications for building dialogue systems. Most of the previous work has focused on lower level details of SRP: identifying components responsible for SRP (acoustic model, language model, search algorithm (Chase, 1997)) or prosodic characterization of SRP (Hirschberg et al., 2004). We extend previous work by analyzing the relationship between SRP and higher level dialogue factors. Recent work has shown that dialogue design can benefit from several higher level dialogue factors: dialogue acts (Frampton and Lemon, 2005; Walker et al., 2001), pragmatic plausibility (Gabsdil and Lemon, 2004). Also, it is widely believed that user emotions, as another example of higher level factor, interact with SRP but, currently, there is little hard evidence to support this intuition. We perform our analysis on three high level dialogue factors: frustration/anger, certainty and correctness. Frustration and anger have been observed as the most frequent emotional class in many dialogue systems (Ang et al., 2002) and are associated with a higher word error rate (Bulyko et al., 2005). For this reason, we use the presence of emotions like frustration and anger as our first dialogue factor. Our other two factors are inspired by another contribution of our study: looking at speechbased computer tutoring dialogues instead of more commonly used information retrieval dialogues. Implementing spoken dialogue systems in a new domain has shown that many practices do not port well to the new domain (e.g. confirmation of long prompts (Kearns et al., 2002)). Tutoring, as a new domain for speech applications (Litman and Forbes-Riley, 2004; Pon-Barry et al., 2004), brings forward new factors that can be important for spoken dialogue design. Here we focus on certainty and correctness. 
Both factors have been shown to play an important role in the tutoring process (Forbes-Riley and Litman, 2005; Liscombe et al., 2005). A common practice in previous work on emotion prediction (Ang et al., 2002; Litman and Forbes-Riley, 2004) is to transform an initial finer level emotion annotation (five or more labels) into a coarser level annotation (2-3 labels). We wanted to understand if this practice can im193 pact the dependencies we observe from the data. To test this, we combine our two emotion1 factors (frustration/anger and certainty) into a binary emotional/non-emotional annotation. To understand the relationship between SRP and our three factors, we take a three-step approach. In the first step, dependencies between SRP and our three factors are discovered using the Chi Square (χ2) test. Similar analyses on human-human dialogues have yielded interesting insights about human-human conversations (Forbes-Riley and Litman, 2005; Skantze, 2005). In the second step, significant dependencies are combined to produce interesting insights regarding SRP and to propose strategies for handling SRP. Validating these strategies is the purpose of the third step. In this paper, we focus on the first two steps; the third step is left as future work. Our analysis produces several interesting insights and strategies which confirm the utility of the proposed approach. With respect to insights, we show that user emotions interact with SRP. We also find that incorrect/uncertain student turns have more SRP than expected. In addition, we find that the emotion annotation level affects the interactions we observe from the data, with finer-level emotions yielding more interactions and insights. In terms of strategies, our data suggests that favoring misrecognitions over rejections (by lowering the rejection threshold) might be more beneficial for our tutoring task – at least in terms of reducing the number of emotional student turns. Also, as a general design practice in the spoken tutoring applications, we find an interesting tradeoff between the pedagogical value of asking difficult questions and the system’s ability to recognize the student answer. 2 Corpus The corpus analyzed in this paper consists of 95 experimentally obtained spoken tutoring dialogues between 20 students and our system ITSPOKE (Litman and Forbes-Riley, 2004), a speech-enabled version of the text-based WHY2 conceptual physics tutoring system (VanLehn et al., 2002). When interacting with ITSPOKE, students first type an essay answering a qualitative physics problem using a graphical user interface. ITSPOKE then engages the student in spoken dialogue (using speech-based input and output) to correct misconceptions and elicit more complete 1 We use the term “emotion” loosely to cover both affects and attitudes that can impact student learning. explanations, after which the student revises the essay, thereby ending the tutoring or causing another round of tutoring/essay revision. For recognition, we use the Sphinx2 speech recognizer with stochastic language models. Because speech recognition is imperfect, after the data was collected, each student utterance in our corpus was manually transcribed by a project staff member. An annotated excerpt from our corpus is shown in Figure 1 (punctuation added for clarity). The excerpts show both what the student said (the STD labels) and what ITSPOKE recognized (the ASR labels). The excerpt is also annotated with concepts that will be described next. 
2.1 Speech Recognition Problems (SRP) One form of SRP is the Rejection. Rejections occur when ITSPOKE is not confident enough in the recognition hypothesis and asks the student to repeat (Figure 1, STD3,4). For our χ2 analysis, we define the REJ variable with two values: Rej (a rejection occurred in the turn) and noRej (no rejection occurred in the turn). Not surprisingly, ITSPOKE also misrecognized some student turns. When ITSPOKE heard something different than what the student actually said but was confident in its hypothesis, we call this an ASR Misrecognition (a binary version of the commonly used Word Error Rate) (Figure 1, STD1,2). Similarly, we define the ASR MIS variable with two values: AsrMis and noAsrMis. Semantic accuracy is more relevant for dialogue evaluation, as it does not penalize for word errors that are unimportant to overall utterance interpretation. In the case of form-based information access spoken dialogue systems, computing semantic accuracy is straightforward (i.e. concept accuracy = percentage of correctly recognized concepts). In contrast, in the tutoring domain there are no clear forms with slots to be filled. We base our semantic accuracy on the “correctness” measure of the student turn. For each student turn, ITSPOKE interprets it and labels its correctness with regard to whether the student correctly answered the tutor question (see the labels between square brackets in Figure 1). We define Semantic Misrecognition as cases where ITSPOKE was confident in its recognition hypothesis and the correctness interpretation of the recognition hypothesis is different from the correctness interpretation of the manual transcript (Figure 1, STD1). Similarly, we define the SEM MIS variable with two values: SemMis and noSemMis. The top part of Table 1 lists the distribution for our three SRP variables. 194 3 Student State Annotation Our definition of student state includes annotation on three dimensions (our three dialogue factors): Frustration/Anger/Hyperarticulation, Certainty and Correctness (see Table 1). The first two dimensions capture the student emotions and the last one the student correctness. ITSPOKE: Good. Apply the same force to an object of mass M3, which is three times the mass of M1. What is the magnitude M3's acceleration in terms of A1 [acceleration of the object M1]? STD1: third of a one (CERT=Uncertain, CRCT=C) [correct] ASR1: they're it but a one (AsrMis, SemMis) [incorrect] ITSPOKE: The acceleration of M3 [...] would be onethird the acceleration of M1. [...] In general for the same force what remains constant? STD2: weight (CRCT=I) [incorrect] ASR2: great (AsrMis) [incorrect] ITSPOKE: Well... The product of the mass of a body and its acceleration remains constant for a given force. What is the relationship between a force acting on the box, the mass of the box, and its acceleration? STD3: newton's second law (CERT=Uncertain, CRCT=PC) ASR3: the cancel the fire (Rej) ITSPOKE: Could you please repeat that? STD4: newton's second law (FAH=FrAng, CRCT=PC) ASR4: newton second long (Rej) Figure 1: Human-Computer Dialogue Excerpt The Frustration/Anger/Hyperarticulation dimension captures the perceived negative student emotional response to the interaction with the system. Three labels were used to annotate this dimension: frustration-anger, hyperarticulation and neutral. Similar to (Ang et al., 2002), because frustration and anger can be difficult to distinguish reliably, they were collapsed into a single label: frustration-anger (Figure 1, STD4). 
Often, frustration and anger is prosodically marked and in many cases the prosody used is consistent with hyperarticulation (Ang et al., 2002). For this reason we included in this dimension the hyperarticulation label (even though hyperarticulation is not an emotion but a state). We used the hyperarticulation label for turns where no frustration or anger was perceived but nevertheless were hyperarticulated. For our interaction experiments we define the FAH variable with three values: FrAng (frustration-anger), Hyp (hyperarticulation) and Neutral. The Certainty dimension captures the perceived student reaction to the questions asked by our computer tutor and her overall reaction to the tutoring domain (Liscombe et al., 2005). (Forbes-Riley and Litman, 2005) show that student certainty interacts with a human tutor’s dialogue decision process (i.e. the choice of feedback). Four labels were used for this dimension: certain, uncertain (e.g. Figure 1, STD1), mixed and neutral. In a small number of turns, both certainty and uncertainty were expressed and these turns were labeled as mixed (e.g. the student was certain about a concept, but uncertain about another concept needed to answer the tutor’s question). For our interaction experiments we define the CERT variable with four values: Certain, Uncertain, Mixed and Neutral. Variable Values Student turns (2334) Speech recognition problems ASR MIS AsrMis noAsrMis 25.4% 74.6% SEM MIS SemMis noSemMis 5.7% 94.3% REJ Rej noRej 7.0% 93.0% Student state FAH FrAng Hyp Neutral 9.9% 2.1% 88.0% CERT Certain Uncertain Mixed Neutral 41.3% 19.1% 2.4% 37.3% CRCT C I PC UA 63.3% 23.3% 6.2% 7.1% EnE Emotional Neutral 64.8% 35.2% Table 1: Variable distributions in our corpus. To test the impact of the emotion annotation level, we define the Emotional/Non-Emotional annotation based on our two emotional dimensions: neutral turns on both the FAH and the CERT dimension are labeled as neutral2; all other turns were labeled as emotional. Consequently, we define the EnE variable with two values: Emotional and Neutral. Correctness is also an important factor of the student state. In addition to the correctness labels assigned by ITSPOKE (recall the definition of SEM MIS), each student turn was manually annotated by a project staff member in terms of their physics-related correctness. Our annotator used the human transcripts and his physics knowledge to label each student turn for various 2 To be consistent with our previous work, we label hyperarticulated turns as emotional even though hyperarticulation is not an emotion. 195 degrees of correctness: correct, partially correct, incorrect and unable to answer. Our system can ask the student to provide multiple pieces of information in her answer (e.g. the question “Try to name the forces acting on the packet. Please, specify their directions.” asks for both the names of the forces and their direction). If the student answer is correct and contains all pieces of information, it was labeled as correct (e.g. “gravity, down”). The partially correct label was used for turns where part of the answer was correct but the rest was either incorrect (e.g. “gravity, up”) or omitted some information from the ideal correct answer (e.g. “gravity”). Turns that were completely incorrect (e.g. “no forces”) were labeled as incorrect. Turns where the students did not answer the computer tutor’s question were labeled as “unable to answer”. In these turns the student used either variants of “I don’t know” or simply did not say anything. 
For our interaction experiments we defined the CRCT variable with four values: C (correct), I (incorrect), PC (partially correct) and UA (unable to answer). Please note that our definition of student state is from the tutor’s perspective. As we mentioned before, our emotion annotation is for perceived emotions. Similarly, the notion of correctness is from the tutor’s perspective. For example, the student might think she is correct but, in reality, her answer is incorrect. This correctness should be contrasted with the correctness used to define SEM MIS. The SEM MIS correctness uses ITSPOKE’s language understanding module applied to recognition hypothesis or the manual transcript, while the student state’s correctness uses our annotator’s language understanding. All our student state annotations are at the turn level and were performed manually by the same annotator. While an inter-annotator agreement study is the best way to test the reliability of our two emotional annotations (FAH and CERT), our experience with annotating student emotions (Litman and Forbes-Riley, 2004) has shown that this type of annotation can be performed reliably. Given the general importance of the student’s uncertainty for tutoring, a second annotator has been commissioned to annotate our corpus for the presence or absence of uncertainty. This annotation can be directly compared with a binary version of CERT: Uncertain+Mixed versus Certain+Neutral. The comparison yields an agreement of 90% with a Kappa of 0.68. Moreover, if we rerun our study on the second annotation, we find similar dependencies. We are currently planning to perform a second annotation of the FAH dimension to validate its reliability. We believe that our correctness annotation (CRCT) is reliable due to the simplicity of the task: the annotator uses his language understanding to match the human transcript to a list of correct/incorrect answers. When we compared this annotation with the correctness assigned by ITSPOKE on the human transcript, we found an agreement of 90% with a Kappa of 0.79. 4 Identifying dependencies using χ2 To discover the dependencies between our variables, we apply the χ2 test. We illustrate our analysis method on the interaction between certainty (CERT) and rejection (REJ). The χ2 value assesses whether the differences between observed and expected counts are large enough to conclude a statistically significant dependency between the two variables (Table 2, last column). For Table 2, which has 3 degrees of freedom ((41)*(2-1)), the critical χ2 value at a p<0.05 is 7.81. We thus conclude that there is a statistically significant dependency between the student certainty in a turn and the rejection of that turn. Combination Obs. Exp. χ2 CERT – REJ 11.45 Certain – Rej - 49 67 9.13 Uncertain – Rej + 43 31 6.15 Table 2: CERT – REJ interaction. If any of the two variables involved in a significant dependency has more than 2 possible values, we can look more deeply into this overall interaction by investigating how particular values interact with each other. To do that, we compute a binary variable for each variable’s value in part and study dependencies between these variables. For example, for the value ‘Certain’ of variable CERT we create a binary variable with two values: ‘Certain’ and ‘Anything Else’ (in this case Uncertain, Mixed and Neutral). By studying the dependency between binary variables we can understand how the interaction works. 
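A sketch of this procedure in code: form the 2×2 contingency table for a binary state value against a binary SRP variable, run the χ2 test, and read the direction of the dependency from observed versus expected counts. The per-cell totals below are approximate reconstructions from the marginals in Table 1 (the paper reports only the significant observed/expected pairs), and scipy's implementation stands in for whatever was actually used.

```python
from scipy.stats import chi2_contingency

def dependency(observed_2x2):
    """observed_2x2 = [[a, b], [c, d]]: counts for (value, not-value) x (SRP, no-SRP)."""
    chi2, p, dof, expected = chi2_contingency(observed_2x2, correction=False)
    sign = "+" if observed_2x2[0][0] > expected[0][0] else "-"
    return chi2, p, sign, expected[0][0]

# Uncertain vs. Rejection: 43 uncertain turns were rejected; the remaining cells
# are reconstructed from Table 1 (19.1% Uncertain, 7.0% Rej, N = 2334).
uncertain_rej = [[43, 403],     # Uncertain:     rejected / not rejected
                 [120, 1768]]   # not Uncertain: rejected / not rejected
chi2, p, sign, exp = dependency(uncertain_rej)
print(sign, round(exp, 1), round(chi2, 2))
# '+' with expected ~31; chi2 ~6, close to the 6.15 of Table 2 (small differences
# come from the reconstructed marginal counts).
```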
Table 2 reports in rows 3 and 4 all significant interactions between the values of variables CERT and REJ. Each row shows: 1) the value for each original variable, 2) the sign of the dependency, 3) the observed counts, 4) the expected counts and 5) the χ2 value. For example, in our data there are 49 rejected turns in which the student was certain. This value is smaller than the expected counts (67); the dependency between Certain and Rej is significant with a χ2 value of 9.13. A comparison of the observed counts and expected counts reveals the direction 196 (sign) of the dependency. In our case we see that certain turns are rejected less than expected (row 3), while uncertain turns are rejected more than expected (row 4). On the other hand, there is no interaction between neutral turns and rejections or between mixed turns and rejections. Thus, the CERT – REJ interaction is explained only by the interaction between Certain and Rej and the interaction between Uncertain and Rej. 5 Results - dependencies In this section we present all significant dependencies between SRP and student state both within and across turns. Within turn interactions analyze the contribution of the student state to the recognition of the turn. They were motivated by the widely believed intuition that emotion interacts with SRP. Across turn interactions look at the contribution of previous SRP to the current student state. Our previous work (Rotaru and Litman, 2005) had shown that certain SRP will correlate with emotional responses from the user. We also study the impact of the emotion annotation level (EnE versus FAH/CERT) on the interactions we observe. The implications of these dependencies will be discussed in Section 6. 5.1 Within turn interactions For the FAH dimension, we find only one significant interaction: the interaction between the FAH student state and the rejection of the current turn (Table 3). By studying values’ interactions, we find that turns where the student is frustrated or angry are rejected more than expected (34 instead of 16; Figure 1, STD4 is one of them). Similarly, turns where the student response is hyperarticulated are also rejected more than expected (similar to observations in (Soltau and Waibel, 2000)). In contrast, neutral turns in the FAH dimension are rejected less than expected. Surprisingly, FrAng does not interact with AsrMis as observed in (Bulyko et al., 2005) but they use the full word error rate measure instead of the binary version used in this paper. Combination Obs. Exp. χ2 FAH – REJ 77.92 FrAng – Rej + 34 16 23.61 Hyp – Rej + 16 3 50.76 Neutral – Rej - 113 143 57.90 Table 3: FAH – REJ interaction. Next we investigate how our second emotion annotation, CERT, interacts with SRP. All significant dependencies are reported in Tables 2 and 4. In contrast with the FAH dimension, here we see that the interaction direction depends on the valence. We find that ‘Certain’ turns have less SRP than expected (in terms of AsrMis and Rej). In contrast, ‘Uncertain’ turns have more SRP both in terms of AsrMis and Rej. ‘Mixed’ turns interact only with AsrMis, allowing us to conclude that the presence of uncertainty in the student turn (partial or overall) will result in ASR problems more than expected. Interestingly, on this dimension, neutral turns do not interact with any of our three SRP. Combination Obs. Exp. χ2 CERT – ASRMIS 38.41 Certain – AsrMis - 204 244 15.32 Uncertain – AsrMis + 138 112 9.46 Mixed – AsrMis + 29 13 22.27 Table 4: CERT – ASRMIS interaction. 
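As a quick check on where the expected counts in Tables 2–4 come from: under independence, the expected count for a (state value, SRP) cell is N × P(value) × P(SRP), with the marginal probabilities taken from Table 1. The snippet below is a worked verification, not code from the study.

```python
N = 2334  # student turns (Table 1)

def expected_count(p_value: float, p_srp: float, n: int = N) -> float:
    """Expected joint count if the two variables were independent."""
    return n * p_value * p_srp

# FrAng (9.9%) x Rej (7.0%)        -> ~16, as in Table 3
# Certain (41.3%) x AsrMis (25.4%) -> ~245, close to the 244 in Table 4
print(round(expected_count(0.099, 0.070)))   # 16
print(round(expected_count(0.413, 0.254)))   # 245
```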
Finally, we look at interactions between student correctness and SRP. Here we find significant dependencies with all types of SRP (see Table 5). In general, correct student turns have fewer SRP while incorrect, partially correct or UA turns have more SRP than expected. Partially correct turns have more AsrMis and SemMis problems than expected, but are rejected less than expected. Interestingly, UA turns interact only with rejections: these turns are rejected more than expected. An analysis of our corpus reveals that in most rejected UA turns the student does not say anything; in these cases, the system’s recognition module thought the student said something but the system correctly rejects the recognition hypothesis. Combination Obs. Exp. χ2 CRCT – ASRMIS 65.17 C – AsrMis - 295 374 62.03 I – AsrMis + 198 137 45.95 PC – AsrMis + 50 37 5.9 CRCT – SEMMIS 20.44 C – SemMis + 100 84 7.83 I – SemMis - 14 31 13.09 PC – SemMis + 15 8 5.62 CRCT – REJ 99.48 C – Rej - 53 102 70.14 I – Rej + 84 37 79.61 PC – Rej - 4 10 4.39 UA – Rej + 21 11 9.19 Table 5: Interactions between Correctness and SRP. The only exception to the rule is SEM MIS. We believe that SEM MIS behavior is explained by the “catch-all” implementation in our system. In ITSPOKE, for each tutor question there is a list of anticipated answers. All other answers are 197 treated as incorrect. Thus, it is less likely that a recognition problem in an incorrect turn will affect the correctness interpretation (e.g. Figure 1, STD2: very unlikely to misrecognize the incorrect “weight” with the anticipated “the product of mass and acceleration”). In contrast, in correct turns recognition problems are more likely to screw up the correctness interpretation (e.g. misrecognizing “gravity down” as “gravity sound”). 5.2 Across turn interactions Next we look at the contribution of previous SRP – variable name or value followed by (-1) – to the current student state. Please note that there are two factors involved here: the presence of the SRP and the SRP handling strategy. In ITSPOKE, whenever a student turn is rejected, unless this is the third rejection in a row, the student is asked to repeat using variations of “Could you please repeat that?”. In all other cases, ITSPOKE makes use of the available information ignoring any potential ASR errors. Combination Obs. Exp. χ2 ASRMIS(-1) – FAH 7.64 AsrMis(-1) – FrAng -t 46 58 3.73 AsrMis(-1) – Hyp -t 7 12 3.52 AsrMis(-1) – Neutral + 527 509 6.82 REJ(-1) – FAH 409.31 Rej(-1) – FrAng + 36 16 28.95 Rej(-1) – Hyp + 38 3 369.03 Rej(-1) – Neutral - 88 142 182.9 REJ(-1) – CRCT 57.68 Rej(-1) – C - 68 101 31.94 Rej(-1) – I + 74 37 49.71 Rej(-1) – PC - 3 10 6.25 Table 6: Interactions across turns (t – trend, p<0.1). Here we find only 3 interactions (Table 6). We find that after a non-harmful SRP (AsrMis) the student is less frustrated and hyperarticulated than expected. This result is not surprising since an AsrMis does not have any effect on the normal dialogue flow. In contrast, after rejections we observe several negative events. We find a highly significant interaction between a previous rejection and the student FAH state, with student being more frustrated and more hyperarticulated than expected (e.g. Figure 1, STD4). Not only does the system elicit an emotional reaction from the student after a rejection, but her subsequent response to the repetition request suffers in terms of the correctness. 
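The "catch-all" behaviour invoked to explain the SEM MIS pattern can be sketched as follows. This is a deliberate simplification under our own naming, not ITSPOKE's language-understanding module: a recognition error on an incorrect answer usually yields another unanticipated string and leaves the label unchanged, whereas an error on a correct answer can knock it off the anticipated-answer list.

```python
def interpret_correctness(utterance: str, anticipated_answers: set) -> str:
    """Catch-all interpretation: any utterance that does not match an
    anticipated answer to the current tutor question is labelled incorrect."""
    return "correct" if utterance in anticipated_answers else "incorrect"

# Incorrect student turn from Figure 1: "weight" misrecognized as "great".
# Both miss the anticipated answer, so the label is unaffected (no SemMis).
anticipated = {"the product of mass and acceleration"}
assert interpret_correctness("weight", anticipated) == "incorrect"
assert interpret_correctness("great", anticipated) == "incorrect"

# Correct turn: misrecognizing "gravity down" as "gravity sound" flips the
# label from correct to incorrect, producing a SemMis (cf. Table 5).
anticipated = {"gravity down"}
assert interpret_correctness("gravity down", anticipated) == "correct"
assert interpret_correctness("gravity sound", anticipated) == "incorrect"
```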
We find that after rejections student answers are correct or partially correct less than expected and incorrect more than expected. The REJ(-1) – CRCT interaction might be explained by the CRCT – REJ interaction (Table 5) if, in general, after a rejection the student repeats her previous turn. An annotation of responses to rejections as in (Swerts et al., 2000) (repeat, rephrase etc.) should provide additional insights. We were surprised to see that a previous SemMis (more harmful than an AsrMis but less disruptive than a Rej) does not interact with the student state; also the student certainty does not interact with previous SRP. 5.3 Emotion annotation level We also study the impact of the emotion annotation level on the interactions we can observe from our corpus. In this section, we look at interactions between SRP and our coarse-level emotion annotation (EnE) both within and across turns. Our results are similar with the results of our previous work (Rotaru and Litman, 2005) on a smaller corpus and a similar annotation scheme. We find again only one significant interaction: rejections are followed by more emotional turns than expected (Table 7). The strength of the interaction is smaller than in previous work, though the results can not be compared directly. No other dependencies are present. Combination Obs. Exp. χ2 REJ(-1) – EnE 6.19 Rej(-1) – Emotional + 119 104 6.19 Table 7: REJ(-1) – EnE interaction. We believe that the REJ(-1) – EnE interaction is explained mainly by the FAH dimension. Not only is there no interaction between REJ(-1) and CERT, but the inclusion of the CERT dimension in the EnE annotation decreases the strength of the interaction between REJ and FAH (the χ2 value decreases from 409.31 for FAH to a mere 6.19 for EnE). Collapsing emotional classes also prevents us from seeing any within turn interactions. These observations suggest that what is being counted as an emotion for a binary emotion annotation is critical its success. In our case, if we look at affect (FAH) or attitude (CERT) in isolation we find many interactions; in contrast, combining them offers little insight. 6 Results – insights & strategies Our results put a spotlight on several interesting observations which we discuss below. Emotions interact with SRP The dependencies between FAH/CERT and various SRP (Tables 2-4) provide evidence that user’s emotions interact with the system’s ability 198 to recognize the current turn. This is a widely believed intuition with little empirical support so far. Thus, our notion of student state can be a useful higher level information source for SRP predictors. Similar to (Hirschberg et al., 2004), we believe that peculiarities in the acoustic/prosodic profile of specific student states are responsible for their SRP. Indeed, previous work has shown that the acoustic/prosodic information plays an important role in characterizing and predicting both FAH (Ang et al., 2002; Soltau and Waibel, 2000) and CERT (Liscombe et al., 2005; Swerts and Krahmer, 2005). The impact of the emotion annotation level A comparison of the interactions yielded by various levels of emotion annotation shows the importance of the annotation level. When using a coarser level annotation (EnE) we find only one interaction. By using a finer level annotation, not only we can understand this interaction better but we also discover new interactions (five interactions with FAH and CERT). Moreover, various state annotations interact differently with SRP. 
For example, non-neutral turns in the FAH dimension (FrAng and Hyp) will be always rejected more than expected (Table 3); in contrast, interactions between non-neutral turns in the CERT dimension and rejections depend on the valence (‘certain’ turns will be rejected less than expected while ‘uncertain’ will be rejected more than expected; recall Table 2). We also see that the neutral turns interact with SRP depending on the dimension that defines them: FAH neutral turns interact with SRP (Table 3) while CERT neutral turns do not (Tables 2 and 4). This insight suggests an interesting tradeoff between the practicality of collapsing emotional classes (Ang et al., 2002; Litman and ForbesRiley, 2004) and the ability to observe meaningful interactions via finer level annotations. Rejections: impact and a handling strategy Our results indicate that rejections and ITSPOKE’s current rejection-handling strategy are problematic. We find that rejections are followed by more emotional turns (Table 7). A similar effect was observed in our previous work (Rotaru and Litman, 2005). The fact that it generalizes across annotation scheme and corpus, emphasizes its importance. When a finer level annotation is used, we find that rejections are followed more than expected by a frustrated, angry and hyperarticulated user (Table 6). Moreover, these subsequent turns can result in additional rejections (Table 3). Asking to repeat after a rejection does not also help in terms of correctness: the subsequent student answer is actually incorrect more than expected (Table 6). These interactions suggest an interesting strategy for our tutoring task: favoring misrecognitions over rejections (by lowering the rejection threshold). First, since rejected turns are more than expected incorrect (Table 5), the actual recognized hypothesis for such turns turn is very likely to be interpreted as incorrect. Thus, accepting a rejected turn instead of rejecting it will have the same outcome in terms of correctness: an incorrect answer. In this way, instead of attempting to acquire the actual student answer by asking to repeat, the system can skip these extra turn(s) and use the current hypothesis. Second, the other two SRP are less taxing in terms of eliciting FAH emotions (recall Table 6; note that a SemMis might activate an unwarranted and lengthy knowledge remediation subdialogue). This suggests that continuing the conversation will be more beneficial even if the system misunderstood the student. A similar behavior was observed in human-human conversations through a noisy speech channel (Skantze, 2005). Correctness/certainty–SRP interactions We also find an interesting interaction between correctness/certainty and system’s ability to recognize that turn. In general correct/certain turns have less SRP while incorrect/uncertain turns have more SRP than expected. This observation suggests that the computer tutor should ask the right question (in terms of its difficulty) at the right time. Intuitively, asking a more complicated question when the student is not prepared to answer it will increase the likelihood of an incorrect or uncertain answer. But our observations show that the computer tutor has more trouble recognizing correctly these types of answers. This suggests an interesting tradeoff between the tutor’s question difficulty and the system’s ability to recognize the student answer. This tradeoff is similar in spirit to the initiative-SRP tradeoff that is well known when designing informationseeking systems (e.g. 
system initiative is often used instead of a more natural mixed initiative strategy, in order to minimize SRP). 7 Conclusions In this paper we analyze the interactions between SRP and three higher level dialogue factors that define our notion of student state: frustration/anger/hyperarticulation, certainty and correctness. Our analysis produces several interesting insights and strategies which confirm the 199 utility of the proposed approach. We show that user emotions interact with SRP and that the emotion annotation level affects the interactions we observe from the data, with finer-level emotions yielding more interactions and insights. We also find that tutoring, as a new domain for speech applications, brings forward new important factors for spoken dialogue design: certainty and correctness. Both factors interact with SRP and these interactions highlight an interesting design practice in the spoken tutoring applications: the tradeoff between the pedagogical value of asking difficult questions and the system’s ability to recognize the student answer (at least in our system). The particularities of the tutoring domain also suggest favoring misrecognitions over rejections to reduce the negative impact of asking to repeat after rejections. In our future work, we plan to move to the third step of our approach: testing the strategies suggested by our results. For example, we will implement a new version of ITSPOKE that never rejects the student turn. Next, the current version and the new version will be compared with respect to users’ emotional response. Similarly, to test the tradeoff hypothesis, we will implement a version of ITSPOKE that asks difficult questions first and then falls back to simpler questions. A comparison of the two versions in terms of the number of SRP can be used for validation. While our results might be dependent on the tutoring system used in this experiment, we believe that our findings can be of interest to practitioners building similar voice-based applications. Moreover, our approach can be applied easily to studying other systems. Acknowledgements This work is supported by NSF Grant No. 0328431. We thank Dan Bohus, Kate ForbesRiley, Joel Tetreault and our anonymous reviewers for their helpful comments. References J. Ang, R. Dhillon, A. Krupski, A. Shriberg and A. Stolcke. 2002. Prosody-based automatic detection of annoyance and frustration in human-computer dialog. In Proc. of ICSLP. I. Bulyko, K. Kirchhoff, M. Ostendorf and J. Goldberg. 2005. Error-correction detection and response generation in a spoken dialogue system. Speech Communication, 45(3). L. Chase. 1997. Blame Assignment for Errors Made by Large Vocabulary Speech Recognizers. In Proc. of Eurospeech. K. Forbes-Riley and D. J. Litman. 2005. Using Bigrams to Identify Relationships Between Student Certainness States and Tutor Responses in a Spoken Dialogue Corpus. In Proc. of SIGdial. M. Frampton and O. Lemon. 2005. Reinforcement Learning of Dialogue Strategies using the User's Last Dialogue Act. In Proc. of IJCAI Workshop on Know.&Reasoning in Practical Dialogue Systems. M. Gabsdil and O. Lemon. 2004. Combining Acoustic and Pragmatic Features to Predict Recognition Performance in Spoken Dialogue Systems. In Proc. of ACL. J. Hirschberg, D. Litman and M. Swerts. 2004. Prosodic and Other Cues to Speech Recognition Failures. Speech Communication, 43(1-2). M. Kearns, C. Isbell, S. Singh, D. Litman and J. Howe. 2002. CobotDS: A Spoken Dialogue System for Chat. In Proc. 
of National Conference on Artificial Intelligence (AAAI). J. Liscombe, J. Hirschberg and J. J. Venditti. 2005. Detecting Certainness in Spoken Tutorial Dialogues. In Proc. of Interspeech. D. Litman and K. Forbes-Riley. 2004. Annotating Student Emotional States in Spoken Tutoring Dialogues. In Proc. of SIGdial Workshop on Discourse and Dialogue (SIGdial). H. Pon-Barry, B. Clark, E. O. Bratt, K. Schultz and S. Peters. 2004. Evaluating the effectiveness of Scot:a spoken conversational tutor. In Proc. of ITS Workshop on Dialogue-based Intellig. Tutoring Systems. M. Rotaru and D. Litman. 2005. Interactions between Speech Recognition Problems and User Emotions. In Proc. of Eurospeech. G. Skantze. 2005. Exploring human error recovery strategies: Implications for spoken dialogue systems. Speech Communication, 45(3). H. Soltau and A. Waibel. 2000. Specialized acoustic models for hyperarticulated speech. In Proc. of ICASSP. M. Swerts and E. Krahmer. 2005. Audiovisual Prosody and Feeling of Knowing. Journal of Memory and Language, 53. M. Swerts, D. Litman and J. Hirschberg. 2000. Corrections in Spoken Dialogue Systems. In Proc. of ICSLP. K. VanLehn, P. W. Jordan, C. P. Rosé, et al. 2002. The Architecture of Why2-Atlas: A Coach for Qualitative Physics Essay Writing. In Proc. of Intelligent Tutoring Systems (ITS). M. Walker, D. Litman, C. Kamm and A. Abella. 2000. Towards Developing General Models of Usability with PARADISE. Natural Language Engineering. M. Walker, R. Passonneau and J. Boland. 2001. Quantitative and Qualitative Evaluation of Darpa Communicator Spoken Dialogue Systems. In Proc. of ACL. 200
Proceedings of the 21st International Conference on Computational Linguistics and 44th Annual Meeting of the ACL, pages 201–208, Sydney, July 2006. c⃝2006 Association for Computational Linguistics Learning the Structure of Task-driven Human-Human Dialogs Srinivas Bangalore AT&T Labs-Research 180 Park Ave Florham Park, NJ 07932 [email protected] Giuseppe Di Fabbrizio AT&T Labs-Research 180 Park Ave Florham Park, NJ 07932 [email protected] Amanda Stent Dept of Computer Science Stony Brook University Stony Brook, NY [email protected] Abstract Data-driven techniques have been used for many computational linguistics tasks. Models derived from data are generally more robust than hand-crafted systems since they better reflect the distribution of the phenomena being modeled. With the availability of large corpora of spoken dialog, dialog management is now reaping the benefits of data-driven techniques. In this paper, we compare two approaches to modeling subtask structure in dialog: a chunk-based model of subdialog sequences, and a parse-based, or hierarchical, model. We evaluate these models using customer agent dialogs from a catalog service domain. 1 Introduction As large amounts of language data have become available, approaches to sentence-level processing tasks such as parsing, language modeling, named-entity detection and machine translation have become increasingly data-driven and empirical. Models for these tasks can be trained to capture the distributions of phenomena in the data resulting in improved robustness and adaptability. However, this trend has yet to significantly impact approaches to dialog management in dialog systems. Dialog managers (both plan-based and call-flow based, for example (Di Fabbrizio and Lewis, 2004; Larsson et al., 1999)) have traditionally been hand-crafted and consequently somewhat brittle and rigid. With the ability to record, store and process large numbers of human-human dialogs (e.g. from call centers), we anticipate that data-driven methods will increasingly influence approaches to dialog management. A successful dialog system relies on the synergistic working of several components: speech recognition (ASR), spoken language understanding (SLU), dialog management (DM), language generation (LG) and text-to-speech synthesis (TTS). While data-driven approaches to ASR and SLU are prevalent, such approaches to DM, LG and TTS are much less well-developed. In ongoing work, we are investigating data-driven approaches for building all components of spoken dialog systems. In this paper, we address one aspect of this problem – inferring predictive models to structure taskoriented dialogs. We view this problem as a first step in predicting the system state of a dialog manager and in predicting the system utterance during an incremental execution of a dialog. In particular, we learn models for predicting dialog acts of utterances, and models for predicting subtask structures of dialogs. We use three different dialog act tag sets for three different human-human dialog corpora. We compare a flat chunk-based model to a hierarchical parse-based model as models for predicting the task structure of dialogs. The outline of this paper is as follows: In Section 2, we review current approaches to building dialog systems. In Section 3, we review related work in data-driven dialog modeling. In Section 4, we present our view of analyzing the structure of task-oriented human-human dialogs. 
In Section 5, we discuss the problem of segmenting and labeling dialog structure and building models for predicting these labels. In Section 6, we report experimental results on Maptask, Switchboard and a dialog data collection from a catalog ordering service domain. 2 Current Methodology for Building Dialog systems Current approaches to building dialog systems involve several manual steps and careful crafting of different modules for a particular domain or application. The process starts with a small scale “Wizard-of-Oz” data collection where subjects talk to a machine driven by a human ‘behind the curtains’. A user experience (UE) engineer analyzes the collected dialogs, subject matter expert interviews, user testimonials and other evidences (e.g. customer care history records). This heterogeneous set of information helps the UE engineer to design some system functionalities, mainly: the 201 semantic scope (e.g. call-types in the case of call routing systems), the LG model, and the DM strategy. A larger automated data collection follows, and the collected data is transcribed and labeled by expert labelers following the UE engineer recommendations. Finally, the transcribed and labeled data is used to train both the ASR and the SLU. This approach has proven itself in many commercial dialog systems. However, the initial UE requirements phase is an expensive and errorprone process because it involves non-trivial design decisions that can only be evaluated after system deployment. Moreover, scalability is compromised by the time, cost and high level of UE knowhow needed to reach a consistent design. The process of building speech-enabled automated contact center services has been formalized and cast into a scalable commercial environment in which dialog components developed for different applications are reused and adapted (Gilbert et al., 2005). However, we still believe that exploiting dialog data to train/adapt or complement hand-crafted components will be vital for robust and adaptable spoken dialog systems. 3 Related Work In this paper, we discuss methods for automatically creating models of dialog structure using dialog act and task/subtask information. Relevant related work includes research on automatic dialog act tagging and stochastic dialog management, and on building hierarchical models of plans using task/subtask information. There has been considerable research on statistical dialog act tagging (Core, 1998; Jurafsky et al., 1998; Poesio and Mikheev, 1998; Samuel et al., 1998; Stolcke et al., 2000; Hastie et al., 2002). Several disambiguation methods (n-gram models, hidden Markov models, maximum entropy models) that include a variety of features (cue phrases, speaker ID, word n-grams, prosodic features, syntactic features, dialog history) have been used. In this paper, we show that use of extended context gives improved results for this task. Approaches to dialog management include AI-style plan recognition-based approaches (e.g. (Sidner, 1985; Litman and Allen, 1987; Rich and Sidner, 1997; Carberry, 2001; Bohus and Rudnicky, 2003)) and information state-based approaches (e.g. (Larsson et al., 1999; Bos et al., 2003; Lemon and Gruenstein, 2004)). In recent years, there has been considerable research on how to automatically learn models of both types from data. 
Researchers who treat dialog as a sequence of information states have used reinforcement learning and/or Markov decision processes to build stochastic models for dialog management that are evaluated by means of dialog simulations (Levin and Pieraccini, 1997; Scheffler and Young, 2002; Singh et al., 2002; Williams et al., 2005; Henderson et al., 2005; Frampton and Lemon, 2005). Most recently, Henderson et al. showed that it is possible to automatically learn good dialog management strategies from automatically labeled data over a large potential space of dialog states (Henderson et al., 2005); and Frampton and Lemon showed that the use of context information (the user’s last dialog act) can improve the performance of learned strategies (Frampton and Lemon, 2005). In this paper, we combine the use of automatically labeled data and extended context for automatic dialog modeling. Other researchers have looked at probabilistic models for plan recognition such as extensions of Hidden Markov Models (Bui, 2003) and probabilistic context-free grammars (Alexandersson and Reithinger, 1997; Pynadath and Wellman, 2000). In this paper, we compare hierarchical grammarstyle and flat chunking-style models of dialog. In recent research, Hardy (2004) used a large corpus of transcribed and annotated telephone conversations to develop the Amities dialog system. For their dialog manager, they trained separate task and dialog act classifiers on this corpus. For task identification they report an accuracy of 85% (true task is one of the top 2 results returned by the classifier); for dialog act tagging they report 86% accuracy. 4 Structural Analysis of a Dialog We consider a task-oriented dialog to be the result of incremental creation of a shared plan by the participants (Lochbaum, 1998). The shared plan is represented as a single tree that encapsulates the task structure (dominance and precedence relations among tasks), dialog act structure (sequences of dialog acts), and linguistic structure of utterances (inter-clausal relations and predicateargument relations within a clause), as illustrated in Figure 1. As the dialog proceeds, an utterance from a participant is accommodated into the tree in an incremental manner, much like an incremental syntactic parser accommodates the next word into a partial parse tree (Alexandersson and Reithinger, 1997). With this model, we can tightly couple language understanding and dialog management using a shared representation, which leads to improved accuracy (Taylor et al., 1998). In order to infer models for predicting the structure of task-oriented dialogs, we label humanhuman dialogs with the hierarchical information shown in Figure 1 in several stages: utterance segmentation (Section 4.1), syntactic annotation (Section 4.2), dialog act tagging (Section 4.3) and 202 subtask labeling (Section 5). Dialog Task Topic/Subtask Topic/Subtask Task Task Clause Utterance Utterance Utterance Topic/Subtask DialogAct,Pred−Args DialogAct,Pred−Args DialogAct,Pred−Args Figure 1: Structural analysis of a dialog 4.1 Utterance Segmentation The task of ”cleaning up” spoken language utterances by detecting and removing speech repairs and dysfluencies and identifying sentence boundaries has been a focus of spoken language parsing research for several years (e.g. (Bear et al., 1992; Seneff, 1992; Shriberg et al., 2000; Charniak and Johnson, 2001)). We use a system that segments the ASR output of a user’s utterance into clauses. 
The system annotates an utterance for sentence boundaries, restarts and repairs, and identifies coordinating conjunctions, filled pauses and discourse markers. These annotations are done using a cascade of classifiers, details of which are described in (Bangalore and Gupta, 2004). 4.2 Syntactic Annotation We automatically annotate a user’s utterance with supertags (Bangalore and Joshi, 1999). Supertags encapsulate predicate-argument information in a local structure. They are composed with each other using the substitution and adjunction operations of Tree-Adjoining Grammars (Joshi, 1987) to derive a dependency analysis of an utterance and its predicate-argument structure. 4.3 Dialog Act Tagging We use a domain-specific dialog act tagging scheme based on an adapted version of DAMSL (Core, 1998). The DAMSL scheme is quite comprehensive, but as others have also found (Jurafsky et al., 1998), the multi-dimensionality of the scheme makes the building of models from DAMSL-tagged data complex. Furthermore, the generality of the DAMSL tags reduces their utility for natural language generation. Other tagging schemes, such as the Maptask scheme (Carletta et al., 1997), are also too general for our purposes. We were particularly concerned with obtaining sufficient discriminatory power between different types of statement (for generation), and to include an out-of-domain tag (for interpretation). We provide a sample list of our dialog act tags in Table 2. Our experiments in automatic dialog act tagging are described in Section 6.3. 5 Modeling Subtask Structure Figure 2 shows the task structure for a sample dialog in our domain (catalog ordering). An order placement task is typically composed of the sequence of subtasks opening, contact-information, order-item, related-offers, summary. Subtasks can be nested; the nesting structure can be as deep as five levels. Most often the nesting is at the left or right frontier of the subtask tree. Opening Order Placement Contact Info Delivery Info Shipping Info Closing Summary Payment Info Order Item Figure 2: A sample task structure in our application domain. Contact Info Order Item Payment Info Summary Closing Shipping Info Delivery Info Opening Figure 3: An example output of the chunk model’s task structure The goal of subtask segmentation is to predict if the current utterance in the dialog is part of the current subtask or starts a new subtask. We compare two models for recovering the subtask structure – a chunk-based model and a parse-based model. In the chunk-based model, we recover the precedence relations (sequence) of the subtasks but not dominance relations (subtask structure) among the subtasks. Figure 3 shows a sample output from the chunk model. In the parse model, we recover the complete task structure from the sequence of utterances as shown in Figure 2. Here, we describe our two models. We present our experiments on subtask segmentation and labeling in Section 6.4. 5.1 Chunk-based model This model is similar to the second one described in (Poesio and Mikheev, 1998), except that we use tasks and subtasks rather than dialog games. We model the prediction problem as a classification task as follows: given a sequence of utterances  in a dialog         and a 203 subtask label vocabulary   , we need to predict the best subtask label sequence  "!      #    %$ as shown in equation 1. &('*)+-,/.10/23,/4 5 6 798 &:'; <*= (1) Each subtask has begin, middle (possibly absent) and end utterances. 
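Equation (1) did not survive extraction; one plausible reading of the chunk model's prediction problem and of its per-utterance approximation, in our own notation rather than the authors', is the following.

```latex
% s_i: subtask label of utterance u_i; Phi: local contextual features.
\hat{s}_1 \ldots \hat{s}_n
  \;=\; \arg\max_{s_1 \ldots s_n} P(s_1 \ldots s_n \mid u_1 \ldots u_n)
  \;\approx\; \arg\max_{s_1 \ldots s_n}
      \prod_{i=1}^{n} P\big(s_i \mid \Phi(u_i, u_{i-1}, \ldots)\big)
```

Each per-utterance decision is conditioned on local features of the current and preceding utterances, and each subtask surfaces as a run of begin, middle and end utterances.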
If we incorporate this information, the refined vocabulary of subtask labels is "> @? BA    $   /BC ED  -FG . In our experiments, we use a classifier to assign to each utterance a refined subtask label conditioned on a vector of local contextual features ( H ). In the interest of using an incremental left-to-right decoder, we restrict the contextual features to be from the preceding context only. Furthermore, the search is limited to the label sequences that respect precedence among the refined labels (begin I middle I end). This constraint is expressed in a grammar G encoded as a regular expression ( JKML  ON  /BA   $   ! /BC   ! ). However, in order to cope with the prediction errors of the classifier, we approximate J3ML  with an P -gram language model on sequences of the refined tag labels: &:' ) Q +R,/.S0/2K,4 5 61T 5 6ST1U V WYX[Z 798 &(' Q ; <*= (2) \ ,/.S0/2K,4 5 6 T 5 6ST1U V WYX[Z ] ^`_ 798baBc _ ; d= (3) In order to estimate the conditional distribution e   D H  we use the general technique of choosing the maximum entropy (maxent) distribution that properly estimates the average of each feature over the training data (Berger et al., 1996). This can be written as a Gibbs distribution parameterized with weights f , where g is the size of the label set. Thus, 798ba%c _ ; d=+ h`i1jlkbmon p qsr tvuxw:y h i1jlk%n p (4) We use the machine learning toolkit LLAMA (Haffner, 2006) to estimate the conditional distribution using maxent. LLAMA encodes multiclass maxent as binary maxent, in order to increase the speed of training and to scale this method to large data sets. Each of the g classes in the set z{> is encoded as a bit vector such that, in the vector for class | , the |B}~ bit is one and all other bits are zero. Then, g one-vs-other binary classifiers are used as follows. 798x€ ; ‚=ƒ+…„‚† 798ˆ‡ € ; ‚=+ h i1‰Šn ‹ h i ‰ n ‹9Œ h i ‰ n ‹ + „ „ Œ h Ž iS ‰ n ‹ (5) where f‘ ’ is the parameter vector for the antilabel “ ” and fƒ• ’  f ’—– f  ’ . In order to compute e   D H  , we use class independence assumption and require that ”  ™˜ and for all šœ›  | ”ž  Ÿ . 798ba%c _ ; ‚=+ 798x€ _ ; ‚= r ^ ¡`¢ w _ 798x€ ¡ ; ‚= 5.2 Parse-based Model As seen in Figure 3, the chunk model does not capture dominance relations among subtasks, which are important for resolving anaphoric references (Grosz and Sidner, 1986). Also, the chunk model is representationally inadequate for centerembedded nestings of subtasks, which do occur in our domain, although less frequently than the more prevalent “tail-recursive” structures. In this model, we are interested in finding the most likely plan tree ( e  ) given the sequence of utterances: 7 ' ) +-,/.S0/2K,/4 £ 6 798¤7 'z; <*= (6) For real-time dialog management we use a topdown incremental parser that incorporates bottomup information (Roark, 2001). We rewrite equation (6) to exploit the subtask sequence provided by the chunk model as shown in Equation 7. For the purpose of this paper, we approximate Equation 7 using one-best (or k-best) chunk output.1 7 '*)¥+ ,/.S0/2K,4 £ 6 ¦ 5 6 798 &('; <*= 798¤7 '; &('‘= (7) \ ,/.S0/2K,4 £ 6 798¤7 '; &(' ) = (8) where &(' ) +-,/.S0/2K,/4 5/6 798 &:'; <*= (9) 6 Experiments and Results In this section, we present the results of our experiments for modeling subtask structure. 6.1 Data As our primary data set, we used 915 telephonebased customer-agent dialogs related to the task of ordering products from a catalog. Each dialog was transcribed by hand; all numbers (telephone, credit card, etc.) 
were removed for privacy reasons. The average dialog lasted for 3.71 1However, it is conceivable to parse the multiple hypotheses of chunks (encoded as a weighted lattice) produced by the chunk model. 204 minutes and included 61.45 changes of speaker. A single customer-service representative might participate in several dialogs, but customers are represented by only one dialog each. Although the majority of the dialogs were on-topic, some were idiosyncratic, including: requests for order corrections, transfers to customer service, incorrectly dialed numbers, and long friendly out-of-domain asides. Annotations applied to these dialogs include: utterance segmentation (Section 4.1), syntactic annotation (Section 4.2), dialog act tagging (Section 4.3) and subtask segmentation (Section 5). The former two annotations are domainindependent while the latter are domain-specific. 6.2 Features Offline natural language processing systems, such as part-of-speech taggers and chunkers, rely on both static and dynamic features. Static features are derived from the local context of the text being tagged. Dynamic features are computed based on previous predictions. The use of dynamic features usually requires a search for the globally optimal sequence, which is not possible when doing incremental processing. For dialog act tagging and subtask segmentation during dialog management, we need to predict incrementally since it would be unrealistic to wait for the entire dialog before decoding. Thus, in order to train the dialog act (DA) and subtask segmentation classifiers, we use only static features from the current and left context as shown in Table 1.2 This obviates the need for constructing a search network and performing a dynamic programming search during decoding. In lieu of the dynamic context, we use larger static context to compute features – word trigrams and trigrams of words annotated with supertags computed from up to three previous utterances. Label Type Features Dialog Speaker, word trigrams from Acts current/previous utterance(s) supertagged utterance Subtask Speaker, word trigrams from current utterance, previous utterance(s)/turn Table 1: Features used for the classifiers. 6.3 Dialog Act Labeling For dialog act labeling, we built models from our corpus and from the Maptask (Carletta et al., 1997) and Switchboard-DAMSL (Jurafsky et al., 1998) corpora. From the files for the Maptask corpus, we extracted the moves, words and speaker information (follower/giver). Instead of using the 2We could use dynamic contexts as well and adopt a greedy decoding algorithm instead of a viterbi search. We have not explored this approach in this paper. raw move information, we augmented each move with speaker information, so that for example, the instruct move was split into instruct-giver and instruct-follower. For the Switchboard corpus, we clustered the original labels, removing most of the multidimensional tags and combining together tags with minimum training data as described in (Jurafsky et al., 1998). For all three corpora, nonsentence elements (e.g., dysfluencies, discourse markers, etc.) and restarts (with and without repairs) were kept; non-verbal content (e.g., laughs, background noise, etc.) was removed. As mentioned in Section 4, we use a domainspecific tag set containing 67 dialog act tags for the catalog corpus. In Table 2, we give examples of our tags. We manually annotated 1864 clauses from 20 dialogs selected at random from our corpus and used a ten-fold cross-validation scheme for testing. 
In our annotation, a single utterance may have multiple dialog act labels. For our experiments with the Switchboard-DAMSL corpus, we used 42 dialog act tags obtained by clustering over the 375 unique tags in the data. This corpus has 1155 dialogs and 218,898 utterances; 173 dialogs, selected at random, were used for testing. The Maptask tagging scheme has 12 unique dialog act tags; augmented with speaker information, we get 24 tags. This corpus has 128 dialogs and 26181 utterances; ten-fold cross validation was used for testing. Type Subtype Ask Info Explain Catalog, CC Related, Discount, Order Info Order Problem, Payment Rel, Product Info Promotions, Related Offer, Shipping ConversAck, Goodbye, Hello, Help, Hold, -ational YoureWelcome, Thanks, Yes, No, Ack, Repeat, Not(Information) Request Code, Order Problem, Address, Catalog, CC Related, Change Order, Conf, Credit, Customer Info, Info, Make Order, Name, Order Info, Order Status, Payment Rel, Phone Number, Product Info, Promotions, Shipping, Store Info YNQ Address, Email, Info, Order Info, Order Status,Promotions, Related Offer Table 2: Sample set of dialog act labels Table 3 shows the error rates for automatic dialog act labeling using word trigram features from the current and previous utterance. We compare error rates for our tag set to those of SwitchboardDAMSL and Maptask using the same features and the same classifier learner. The error rates for the catalog and the Maptask corpus are an average of ten-fold cross-validation. We suspect that the larger error rate for our domain compared to Maptask and Switchboard might be due to the small size of our annotated corpus (about 2K utterances for our domain as against about 20K utterances for 205 Maptask and 200K utterances for DAMSL). The error rates for the Switchboard-DAMSL data are significantly better than previously published results (28% error rate) (Jurafsky et al., 1998) with the same tag set. This improvement is attributable to the richer feature set we use and a discriminative modeling framework that supports a large number of features, in contrast to the generative model used in (Jurafsky et al., 1998). A similar obeservation applies to the results on Maptask dialog act tagging. Our model outperforms previously published results (42.8% error rate) (Poesio and Mikheev, 1998). In labeling the Switchboard data, long utterances were split into slash units (Meteer et.al., 1995). A speaker’s turn can be divided in one or more slash units and a slash unit can extend over multiple turns, for example: sv B.64 utt3: C but, F uh – b A.65 utt1: Uh-huh. / + B.66 utt1: – people want all of that / sv B.66 utt2: C and not all of those are necessities. / b A.67 utt1: Right . / The labelers were instructed to label on the basis of the whole slash unit. This makes, for example, the dysfluency turn B.64 a Statement opinion (sv) rather than a non-verbal. For the purpose of discriminative learning, this could introduce noisy data since the context associated to the labeling decision shows later in the dialog. To address this issue, we compare 2 classifiers: the first (non-merged), simply propagates the same label to each continuation, cross turn slash unit; the second (merged) combines the units in one single utterance. Although the merged classifier breaks the regular structure of the dialog, the results in Table 3 show better overall performance. 
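A minimal sketch of the merging step, under the assumption of a simple record layout (speaker, tag, text, continuation flag) rather than the actual transcript format:

```python
# Toy slash-unit records. In the transcripts a continuation is marked with "+";
# here a boolean flag stands in for that mark. Layout and data are invented.
units = [
    ("B", "sv", "but, uh --", False),
    ("A", "b",  "Uh-huh.", False),
    ("B", "+",  "-- people want all of that", True),
    ("B", "sv", "and not all of those are necessities.", False),
    ("A", "b",  "Right.", False),
]

def merge_slash_units(units):
    """Fold each continuation into the earlier unit by the same speaker, so the
    classifier sees one complete utterance per label (the 'merged' setup)."""
    merged = []
    open_by_speaker = {}            # speaker -> index of their last open unit
    for speaker, tag, text, is_cont in units:
        if is_cont and speaker in open_by_speaker:
            idx = open_by_speaker[speaker]
            spk, t, txt = merged[idx]
            merged[idx] = (spk, t, txt + " " + text)
        else:
            open_by_speaker[speaker] = len(merged)
            merged.append((speaker, tag, text))
    return merged

for unit in merge_slash_units(units):
    print(unit)
```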
Tagset current + stagged + 3 previous utterance utterance (stagged) utterance Catalog 46.3 46.1 42.2 Domain DAMSL 24.7 23.8 19.1 (non-merged) DAMSL 22.0 20.6 16.5 (merged) Maptask 34.3 33.9 30.3 Table 3: Error rates in dialog act tagging 6.4 Subtask Segmentation and Labeling For subtask labeling, we used a random partition of 864 dialogs from our catalog domain as the training set and 51 dialogs as the test set. All the dialogs were annotated with subtask labels by hand. We used a set of 18 labels grouped as shown in Figure 4. Type Subtask Labels 1 opening, closing 2 contact-information, delivery-information, payment-information, shipping-address,summary 3 order-item, related-offer, order-problem discount, order-change, check-availability 4 call-forward, out-of-domain, misc-other, sub-call Table 4: Subtask label set 6.4.1 Chunk-based Model Table 5 shows error rates on the test set when predicting refined subtask labels using word P gram features computed on different dialog contexts. The well-formedness constraint on the refined subtask labels significantly improves prediction accuracy. Utterance context is also very helpful; just one utterance of left-hand context leads to a 10% absolute reduction in error rate, with further reductions for additional context. While the use of trigram features helps, it is not as helpful as other contextual information. We used the dialog act tagger trained from Switchboard-DAMSL corpus to automatically annotate the catalog domain utterances. We included these tags as features for the classifier, however, we did not see an improvement in the error rates, probably due to the high error rate of the dialog act tagger. Feature Utterance Context Context Current +prev +three prev utt/with DA utt/with DA utt/with DA Unigram 42.9/42.4 33.6/34.1 30.0/30.3 (53.4/52.8) (43.0/43.0) (37.6/37.6) Trigram 41.7/41.7 31.6/31.4 30.0/29.1 (52.5/52.0) (42.9/42.7) (37.6/37.4) Table 5: Error rate for predicting the refined subtask labels. The error rates without the wellformedness constraint is shown in parenthesis. The error rates with dialog acts as features are separated by a slash. 6.4.2 Parsing-based Model We retrained a top-down incremental parser (Roark, 2001) on the plan trees in the training dialogs. For the test dialogs, we used the § -best (k=50) refined subtask labels for each utterance as predicted by the chunk-based classifier to create a lattice of subtask label sequences. For each dialog we then created P -best sequences (100-best for these experiments) of subtask labels; these were parsed and (re-)ranked by the parser.3 We combine the weights of the subtask label sequences assigned by the classifier with the parse score assigned by the parser and select the top 3Ideally, we would have parsed the subtask label lattice directly, however, the parser has to be reimplemented to parse such lattice inputs. 206 Features Constraints No Constraint Sequence Constraint Parser Constraint Current Utt 54.4 42.0 41.5 + DA 53.8 40.5 40.2 Current+Prev Utt 41.6 27.7 27.7 +DA 40.0 28.8 28.1 Current+3 Prev Utt 37.5 24.7 24.7 +DA 39.7 29.6 28.9 Table 6: Error rates for task structure prediction, with no constraints, sequence constraints and parser constraints scoring sequence from the list for each dialog. The results are shown in Table 6. It can be seen that using the parsing constraint does not help the subtask label sequence prediction significantly. The chunk-based model gives almost the same accuracy, and is incremental and more efficient. 
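A hedged sketch of the reranking step described above: the label sequences, scores and interpolation weight are invented, and parser_log_score is a placeholder for a call to the trained incremental parser, not an implementation of it.

```python
# Hypothetical k-best subtask label sequences for one dialog, each paired with
# a log-domain score from the chunk classifier. Data invented for illustration.
kbest = [
    (["opening-b", "order-item-b", "order-item-e", "closing-b"], -12.3),
    (["opening-b", "contact-info-b", "order-item-b", "closing-b"], -13.1),
    (["opening-b", "order-item-b", "summary-b", "closing-b"], -13.4),
]

def parser_log_score(label_sequence):
    """Stand-in for the parser's log score of the best plan tree over the
    sequence; a real system would query the trained top-down parser here."""
    return -0.5 * len(label_sequence)           # placeholder only

def rerank(kbest, weight=0.5):
    """Linearly interpolate chunker and parser scores, best sequence first."""
    scored = [(weight * chunk_score + (1.0 - weight) * parser_log_score(seq), seq)
              for seq, chunk_score in kbest]
    return sorted(scored, key=lambda pair: pair[0], reverse=True)

best_score, best_seq = rerank(kbest)[0]
print(best_score, best_seq)
```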
7 Discussion The experiments reported in this section have been performed on transcribed speech. The audio for these dialogs, collected at a call center, were stored in a compressed format, so the speech recognition error rate is high. In future work, we will assess the performance of dialog structure prediction on recognized speech. The research presented in this paper is but one step, albeit a crucial one, towards achieving the goal of inducing human-machine dialog systems using human-human dialogs. Dialog structure information is necessary for language generation (predicting the agents’ response) and dialog state specific text-to-speech synthesis. However, there are several challenging problems that remain to be addressed. The structuring of dialogs has another application in call center analytics. It is routine practice to monitor, analyze and mine call center data based on indicators such as the average length of dialogs, the task completion rate in order to estimate the efficiency of a call center. By incorporating structure to the dialogs, as presented in this paper, the analysis of dialogs can be performed at a more finegrained (task and subtask) level. 8 Conclusions In order to build a dialog manager using a datadriven approach, the following are necessary: a model for labeling/interpreting the user’s current action; a model for identifying the current subtask/topic; and a model for predicting what the system’s next action should be. Prior research in plan identification and in dialog act labeling has identified possible features for use in such models, but has not looked at the performance of different feature sets (reflecting different amounts of context and different views of dialog) across different domains (label sets). In this paper, we compared the performance of a dialog act labeler/predictor across three different tag sets: one using very detailed, domain-specific dialog acts usable for interpretation and generation; and two using generalpurpose dialog acts and corpora available to the larger research community. We then compared two models for subtask labeling: a flat, chunkbased model and a hierarchical, parsing-based model. Findings include that simpler chunk-based models perform as well as hierarchical models for subtask labeling and that a dialog act feature is not helpful for subtask labeling. In on-going work, we are using our best performing models for both DM and LG components (to predict the next dialog move(s), and to select the next system utterance). In future work, we will address the use of data-driven dialog management to improve SLU. 9 Acknowledgments We thank Barbara Hollister and her team for their effort in annotating the dialogs for dialog acts and subtask structure. We thank Patrick Haffner for providing us with the LLAMA machine learning toolkit and Brian Roark for providing us with his top-down parser used in our experiments. We also thank Alistair Conkie, Mazin Gilbert, Narendra Gupta, and Benjamin Stern for discussions during the course of this work. References J. Alexandersson and N. Reithinger. 1997. Learning dialogue structures from a corpus. In Proceedings of Eurospeech’97. S. Bangalore and N. Gupta. 2004. Extracting clauses in dialogue corpora : Application to spoken language understanding. Journal Traitement Automatique des Langues (TAL), 45(2). S. Bangalore and A. K. Joshi. 1999. Supertagging: An approach to almost parsing. Computational Linguistics, 25(2). J. Bear et al. 1992. 
Integrating multiple knowledge sources for detection and correction of repairs in human-computer dialog. In Proceedings of ACL’92. 207 A. Berger, S.D. Pietra, and V.D. Pietra. 1996. A Maximum Entropy Approach to Natural Language Processing. Computational Linguistics, 22(1):39–71. D. Bohus and A. Rudnicky. 2003. RavenClaw: Dialog management using hierarchical task decomposition and an expectation agenda. In Proceedings of Eurospeech’03. J. Bos et al. 2003. DIPPER: Description and formalisation of an information-state update dialogue system architecture. In Proceedings of SIGdial. H.H. Bui. 2003. A general model for online probabalistic plan recognition. In Proceedings of IJCAI’03. S. Carberry. 2001. Techniques for plan recognition. User Modeling and User-Adapted Interaction, 11(1–2). J. Carletta et al. 1997. The reliability of a dialog structure coding scheme. Computational Linguistics, 23(1). E. Charniak and M. Johnson. 2001. Edit detection and parsing for transcribed speech. In Proceedings of NAACL’01. M. Core. 1998. Analyzing and predicting patterns of DAMSL utterance tags. In Proceedings of the AAAI spring symposium on Applying machine learning to discourse processing. M. Meteer et.al. 1995. Dysfluency annotation stylebook for the switchboard corpus. Distributed by LDC. G. Di Fabbrizio and C. Lewis. 2004. Florence: a dialogue manager framework for spoken dialogue systems. In ICSLP 2004, 8th International Conference on Spoken Language Processing, Jeju, Jeju Island, Korea, October 4-8. M. Frampton and O. Lemon. 2005. Reinforcement learning of dialogue strategies using the user’s last dialogue act. In Proceedings of the 4th IJCAI workshop on knowledge and reasoning in practical dialogue systems. M. Gilbert et al. 2005. Intelligent virtual agents for contact center automation. IEEE Signal Processing Magazine, 22(5), September. B.J. Grosz and C.L. Sidner. 1986. Attention, intentions and the structure of discoursep. Computational Linguistics, 12(3). P. Haffner. 2006. Scaling large margin classifiers for spoken language understanding. Speech Communication, 48(4). H. Hardy et al. 2004. Data-driven strategies for an automated dialogue system. In Proceedings of ACL’04. H. Wright Hastie et al. 2002. Automatically predicting dialogue structure using prosodic features. Speech Communication, 36(1–2). J. Henderson et al. 2005. Hybrid reinforcement/supervised learning for dialogue policies from COMMUNICATOR data. In Proceedings of the 4th IJCAI workshop on knowledge and reasoning in practical dialogue systems. A. K. Joshi. 1987. An introduction to tree adjoining grammars. In A. Manaster-Ramer, editor, Mathematics of Language. John Benjamins, Amsterdam. D. Jurafsky et al. 1998. Switchboard discourse language modeling project report. Technical Report Research Note 30, Center for Speech and Language Processing, Johns Hopkins University, Baltimore, MD. S. Larsson et al. 1999. TrindiKit manual. Technical report, TRINDI Deliverable D2.2. O. Lemon and A. Gruenstein. 2004. Multithreaded context for robust conversational interfaces: Context-sensitive speech recognition and interpretation of corrective fragments. ACM Transactions on Computer-Human Interaction, 11(3). E. Levin and R. Pieraccini. 1997. A stochastic model of computer-human interaction for learning dialogue strategies. In Proceedings of Eurospeech’97. D. Litman and J. Allen. 1987. A plan recognition model for subdialogs in conversations. Cognitive Science, 11(2). K. Lochbaum. 1998. A collaborative planning model of intentional structure. 
Computational Linguistics, 24(4). M. Poesio and A. Mikheev. 1998. The predictive power of game structure in dialogue act recognition: experimental results using maximum entropy estimation. In Proceedings of ICSLP’98. D.V. Pynadath and M.P. Wellman. 2000. Probabilistic statedependent grammars for plan recognition. In In Proceedings of the 16th Conference on Uncertainty in Artificial Intelligence (UAI-2000). C. Rich and C.L. Sidner. 1997. COLLAGEN: When agents collaborate with people. In Proceedings of the First International Conference on Autonomous Agents (Agents’97). B. Roark. 2001. Probabilistic top-down parsing and language modeling. Computational Linguistics, 27(2). K. Samuel et al. 1998. Computing dialogue acts from features with transformation-based learning. In Proceedings of the AAAI spring symposium on Applying machine learning to discourse processing. K. Scheffler and S. Young. 2002. Automatic learning of dialogue strategy using dialogue simulation and reinforcement learning. In Proceedings of HLT’02. S. Seneff. 1992. A relaxation method for understanding spontaneous speech utterances. In Proceedings of the Speech and Natural Language Workshop, San Mateo, CA. E. Shriberg et al. 2000. Prosody-based automatic segmentation of speech into sentences and topics. Speech Communication, 32, September. C.L. Sidner. 1985. Plan parsing for intended response recognition in discourse. Computational Intelligence, 1(1). S. Singh et al. 2002. Optimizing dialogue management with reinforcement learning: Experiments with the NJFun system. Journal of Artificial Intelligence Research, 16. A. Stolcke et al. 2000. Dialogue act modeling for automatic tagging and recognition of conversational speech. Computational Linguistics, 26(3). P. Taylor et al. 1998. Intonation and dialogue context as constraints for speech recognition. Language and Speech, 41(3). J. Williams et al. 2005. Partially observable Markov decision processes with continuous observations for dialogue management. In Proceedings of SIGdial. 208
Proceedings of the 21st International Conference on Computational Linguistics and 44th Annual Meeting of the ACL, pages 209–216, Sydney, July 2006. c⃝2006 Association for Computational Linguistics Semi-Supervised Conditional Random Fields for Improved Sequence Segmentation and Labeling Feng Jiao University of Waterloo Shaojun Wang Chi-Hoon Lee Russell Greiner Dale Schuurmans University of Alberta Abstract We present a new semi-supervised training procedure for conditional random fields (CRFs) that can be used to train sequence segmentors and labelers from a combination of labeled and unlabeled training data. Our approach is based on extending the minimum entropy regularization framework to the structured prediction case, yielding a training objective that combines unlabeled conditional entropy with labeled conditional likelihood. Although the training objective is no longer concave, it can still be used to improve an initial model (e.g. obtained from supervised training) by iterative ascent. We apply our new training algorithm to the problem of identifying gene and protein mentions in biological texts, and show that incorporating unlabeled data improves the performance of the supervised CRF in this case. 1 Introduction Semi-supervised learning is often touted as one of the most natural forms of training for language processing tasks, since unlabeled data is so plentiful whereas labeled data is usually quite limited or expensive to obtain. The attractiveness of semisupervised learning for language tasks is further heightened by the fact that the models learned are large and complex, and generally even thousands of labeled examples can only sparsely cover the parameter space. Moreover, in complex structured prediction tasks, such as parsing or sequence modeling (part-of-speech tagging, word segmentation, named entity recognition, and so on), it is considerably more difficult to obtain labeled training data than for classification tasks (such as document classification), since hand-labeling individual words and word boundaries is much harder than assigning text-level class labels. Many approaches have been proposed for semisupervised learning in the past, including: generative models (Castelli and Cover 1996; Cohen and Cozman 2006; Nigam et al. 2000), self-learning (Celeux and Govaert 1992; Yarowsky 1995), cotraining (Blum and Mitchell 1998), informationtheoretic regularization (Corduneanu and Jaakkola 2006; Grandvalet and Bengio 2004), and graphbased transductive methods (Zhou et al. 2004; Zhou et al. 2005; Zhu et al. 2003). Unfortunately, these techniques have been developed primarily for single class label classification problems, or class label classification with a structured input (Zhou et al. 2004; Zhou et al. 2005; Zhu et al. 2003). Although still highly desirable, semi-supervised learning for structured classification problems like sequence segmentation and labeling have not been as widely studied as in the other semi-supervised settings mentioned above, with the sole exception of generative models. With generative models, it is natural to include unlabeled data using an expectation-maximization approach (Nigam et al. 2000). However, generative models generally do not achieve the same accuracy as discriminatively trained models, and therefore it is preferable to focus on discriminative approaches. Unfortunately, it is far from obvious how unlabeled training data can be naturally incorporated into a discriminative training criterion. 
For example, unlabeled data simply cancels from the objective if one attempts to use a traditional conditional likelihood criterion. Nevertheless, recent progress has been made on incorporating unlabeled data in discriminative training procedures. For example, dependencies can be introduced between the labels of nearby instances and thereby have an effect on training (Zhu et al. 2003; Li and McCallum 2005; Altun et al. 2005). These models are trained to encourage nearby data points to have the same class label, and they can obtain impressive accuracy using a very small amount of labeled data. However, since they model pairwise similarities among data points, most of these approaches require joint inference over the whole data set at test time, which is not practical for large data sets. In this paper, we propose a new semi-supervised training method for conditional random fields (CRFs) that incorporates both labeled and unlabeled sequence data to estimate a discriminative 209 structured predictor. CRFs are a flexible and powerful model for structured predictors based on undirected graphical models that have been globally conditioned on a set of input covariates (Lafferty et al. 2001). CRFs have proved to be particularly useful for sequence segmentation and labeling tasks, since, as conditional models of the labels given inputs, they relax the independence assumptions made by traditional generative models like hidden Markov models. As such, CRFs provide additional flexibility for using arbitrary overlapping features of the input sequence to define a structured conditional model over the output sequence, while maintaining two advantages: first, efficient dynamic program can be used for inference in both classification and training, and second, the training objective is concave in the model parameters, which permits global optimization. To obtain a new semi-supervised training algorithm for CRFs, we extend the minimum entropy regularization framework of Grandvalet and Bengio (2004) to structured predictors. The resulting objective combines the likelihood of the CRF on labeled training data with its conditional entropy on unlabeled training data. Unfortunately, the maximization objective is no longer concave, but we can still use it to effectively improve an initial supervised model. To develop an effective training procedure, we first show how the derivative of the new objective can be computed from the covariance matrix of the features on the unlabeled data (combined with the labeled conditional likelihood). This relationship facilitates the development of an efficient dynamic programming for computing the gradient, and thereby allows us to perform efficient iterative ascent for training. We apply our new training technique to the problem of sequence labeling and segmentation, and demonstrate it specifically on the problem of identifying gene and protein mentions in biological texts. Our results show the advantage of semi-supervised learning over the standard supervised algorithm. 2 Semi-supervised CRF training In what follows, we use the same notation as (Lafferty et al. 2001). Let be a random variable over data sequences to be labeled, and  be a random variable over corresponding label sequences. All components,  , of  are assumed to range over a finite label alphabet  . For example, might range over sentences and  over part-of-speech taggings of those sentences; hence  would be the set of possible part-of-speech tags in this case. Assume we have a set of labeled examples,      !" 
$#%&'$#%)( , and unlabeled examples, *+ $#%,-   -$./ ( . We would like to build a CRF model 021 %3 -4 5 6 1 -87:9<; >= ?@ A CB @EDF@ '8 (  5 6 1 - 7:9<; HG B  D '8I ( over sequential input and output data ' , where B  B     ! B = J , D '8  D  '8    D = '8 J and 6 1 -K ?ML 7:9<; HG B  D '8I ( Our goal is to learn such a model from the combined set of labeled and unlabeled examples,  FN O* . The standard supervised CRF training procedure is based upon maximizing the log conditional likelihood of the labeled examples in + PRQ B K # ? S A UTWVYX 021   S  3   S  8Z\[/ B  (1) where [ B  is any standard regularizer on B , e.g. [/ B ]_^ B ^&`MaFb . Regularization can be used to limit over-fitting on rare features and avoid degeneracy in the case of correlated features. Obviously, (1) ignores the unlabeled examples in  * . To make full use of the available training data, we propose a semi-supervised learning algorithm that exploits a form of entropy regularization on the unlabeled data. Specifically, for a semisupervised CRF, we propose to maximize the following objective c Q B d # ? S A UTWVYX 021   S  3   S  'Z\[ B  (2) e f . ? S A #%,- ?"L 021 %3   S   TWVYX 021 %3   S   where the first term is the penalized log conditional likelihood of the labeled data under the CRF, (1), and the second line is the negative conditional entropy of the CRF on the unlabeled data. Here, f is a tradeoff parameter that controls the influence of the unlabeled data. 210 This approach resembles that taken by (Grandvalet and Bengio 2004) for single variable classification, but here applied to structured CRF training. The motivation is that minimizing conditional entropy over unlabeled data encourages the algorithm to find putative labelings for the unlabeled data that are mutually reinforcing with the supervised labels; that is, greater certainty on the putative labelings coincides with greater conditional likelihood on the supervised labels, and vice versa. For a single classification variable this criterion has been shown to effectively partition unlabeled data into clusters (Grandvalet and Bengio 2004; Roberts et al. 2000). To motivate the approach in more detail, consider the overlap between the probability distribution over a label sequence and the empirical distribution of  0 - on the unlabeled data /* . The overlap can be measured by the Kullback-Leibler divergence  0 1 %3 -  0 -"^  0 - . It is well known that Kullback-Leibler divergence (Cover and Thomas 1991) is positive and increases as the overlap between the two distributions decreases. In other words, maximizing Kullback-Leibler divergence implies that the overlap between two distributions is minimized. The total overlap over all possible label sequences can be defined as ? L  021 %3 -  0 -"^  0 -  ?"L ?   021 %3 -  0 - TWVYX 021 %3 -  0 -  0 -  ?    0 - ?"L 021 %3 - T VYX 021 %3 - which motivates the negative entropy term in (2). The combined training objective (2) exploits unlabeled data to improve the CRF model, as we will see. However, one drawback with this approach is that the entropy regularization term is not concave. To see why, note that the entropy regularizer can be seen as a composition, B   D B  , where D    , D <   L L TWVYX L and L  =   , L B        7:9<;  = @ A  B @FDF@ '8 ( . For scalar B , the second derivative of a composition,  D  , is given by (Boyd and Vandenberghe 2004)   B    B  J /` D B !  B  e D B  J   B  Although D and #" are concave here, since D is not nondecreasing, is not necessarily concave. 
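The equations in this section were damaged in extraction; from the surrounding prose, one plausible reconstruction of the model and of objectives (1) and (2), in our own notation, is:

```latex
% Conditional random field over label sequence y given input sequence x:
p_\theta(y \mid x) \;=\; \frac{1}{Z_\theta(x)}
    \exp\Big(\textstyle\sum_{k} \theta_k f_k(x, y)\Big),
\qquad
Z_\theta(x) \;=\; \sum_{y'} \exp\Big(\textstyle\sum_{k} \theta_k f_k(x, y')\Big)

% (1) Penalized conditional log-likelihood on the N labeled sequences:
\mathrm{CL}(\theta) \;=\;
    \sum_{i=1}^{N} \log p_\theta\big(y^{(i)} \mid x^{(i)}\big) \;-\; U(\theta)

% (2) Semi-supervised objective: (1) plus a weighted negative conditional
% entropy on the unlabeled sequences x^{(N+1)}, ..., x^{(M)}:
\mathrm{RL}(\theta) \;=\; \mathrm{CL}(\theta)
    \;+\; \gamma \sum_{j=N+1}^{M} \sum_{y}
        p_\theta\big(y \mid x^{(j)}\big) \log p_\theta\big(y \mid x^{(j)}\big)
```

Here U(θ) is a regularizer such as ‖θ‖²/(2σ²), as we read it, and γ ≥ 0 trades off the influence of the unlabeled data.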
So in general there are local maxima in (2). 3 An efficient training procedure As (2) is not concave, many of the standard global maximization techniques do not apply. However, one can still use unlabeled data to improve a supervised CRF via iterative ascent. To derive an efficient iterative ascent procedure, we need to compute gradient of (2) with respect to the parameters B . Taking derivative of the objective function (2) with respect to B yields Appendix A for the derivation) $ $ B c Q B  (3)  # ? S A  % D   S    S  8Z ? L 021 %3   S   D   S    S  '& Z $ $ B [/ B  e f . ? S A # ,-( V)*   L  +-,/. 0 D   S  8!1 B The first three items on the right hand side are just the standard gradient of the CRF objective, $ PRQ B a $ B (Lafferty et al. 2001), and the final item is the gradient of the entropy regularizer (the derivation of which is given in Appendix A. Here, ( V) *   L   +-,2. 43 D  S 865 is the conditional covariance matrix of the features, D87 '8 , given sample sequence   S  . In particular, the :9 <;  th element of this matrix is given by ( V) *   L    0 D 7 '8 DF@ '8!1 >= *   L    D 7 '8 DF@ '8 ( Z?= *   L    D@7 '8 (A= *   L    DF@ '8 (  ?ML 021 %3 - D 7 '8 DF@ '8 ( (4) Z ?"L 021 %3 - D 7 '8 ( ?ML 0 1  3 - DF@ ' ( To efficiently calculate the gradient, we need to be able to efficiently compute the expectations with respect to  in (3) and (4). However, this can pose a challenge in general, because there are exponentially many values for  . Techniques for computing the linear feature expectations in (3) are already well known if  is sufficiently structured (e.g.  forms a Markov chain) (Lafferty et al. 2001). However, we now have to develop efficient techniques for computing the quadratic feature expectations in (4). For the quadratic feature expectations, first note that the diagonal terms, 9 CB , are straightforward, since each feature is an indicator, we have 211 that D 7 ' `  D 7 '8 , and therefore the diagonal terms in the conditional covariance are just linear feature expectations = *   L    D 7 '8 ` (  = *   L    D 7 '8 ( as before. For the off diagonal terms, 9 B , however, we need to develop a new algorithm. Fortunately, for structured label sequences,  , one can devise an efficient algorithm for calculating the quadratic expectations based on nested dynamic programming. To illustrate the idea, we assume that the dependencies of  , conditioned on , form a Markov chain. Define one feature for each state pair    , and one feature for each state-observation pair R , which we express with indicator functions D " " G   I%3 *   -  C  *     H  and "  %3 -     C  R respectively. Following (Lafferty et al. 2001), we also add special start and stop states,   start and  ,-  stop. The conditional probability of a label sequence can now be expressed concisely in a matrix form. For each position 9 in the observation sequence  , define the 3  3 3  3 matrix random variable  7 -  7   3 - by  7   3 -  7:9<; ! 7   3 - where 7   3 -  ? @#" @FDF@%$ & 7  3 ' (    ) e ?Y@+* @ @ $ 7 %3 ,(   ) Here & 7 is the edge with labels  7    7  and 7 is the vertex with label  7 . For each index 9O/.<   -0 e 5 define the forward vectors 1 7 - with base case 1  3 -32 55476 /89;:=<9 . V 9;> 7 <;? 4 8 7 and recurrence 1 7 - 1 7  -  7 - Similarly, the backward vectors @ 7 - are given by @  ,- 3 -  2 55476 /89 V ; . V 9;> 7 <;? 
With these definitions, the expectation of the product of each pair of feature functions, $f_j(x,y) f_k(x,y)$, $f_j(x,y) g_k(x,y)$, and $g_j(x,y) g_k(x,y)$, for $j, k = 1, \ldots, K$, $j \neq k$, can be recursively calculated. First define the summary matrix

$$M_{c,d}(y, y'|x) \;=\; \Big[\prod_{i=c}^{d} M_i(x)\Big]_{y, y'}$$

taken to be the identity matrix when $c > d$. Then the quadratic feature expectations can be computed by the following recursion, where the two double sums in each expectation correspond to the two cases depending on which feature occurs first ($e_a$ occurring before $e_b$, or the reverse).

$$E_{p_\theta(y|x)}\big[f_j(x,y)\, f_k(x,y)\big] \;=\; \sum_{1 \le a < b \le n+1} \sum_{y'_a, y_a} f_j\big(e_a, y|_{e_a}=(y'_a,y_a), x\big) \sum_{y'_b, y_b} f_k\big(e_b, y|_{e_b}=(y'_b,y_b), x\big)\; \frac{\alpha_{a-1}(y'_a|x)\, M_a(y'_a, y_a|x)\, M_{a+1,b-1}(y_a, y'_b|x)\, M_b(y'_b, y_b|x)\, \beta_b(y_b|x)}{Z_\theta(x)}$$
$$\qquad +\; \sum_{1 \le b < a \le n+1} \sum_{y'_a, y_a} f_j\big(e_a, y|_{e_a}=(y'_a,y_a), x\big) \sum_{y'_b, y_b} f_k\big(e_b, y|_{e_b}=(y'_b,y_b), x\big)\; \frac{\alpha_{b-1}(y'_b|x)\, M_b(y'_b, y_b|x)\, M_{b+1,a-1}(y_b, y'_a|x)\, M_a(y'_a, y_a|x)\, \beta_a(y_a|x)}{Z_\theta(x)}$$

$$E_{p_\theta(y|x)}\big[f_j(x,y)\, g_k(x,y)\big] \;=\; \sum_{1 \le a < b \le n+1} \sum_{y'_a, y_a} f_j\big(e_a, y|_{e_a}=(y'_a,y_a), x\big) \sum_{y_b} g_k\big(v_b, y|_{v_b}=y_b, x\big)\; \frac{\alpha_{a-1}(y'_a|x)\, M_a(y'_a, y_a|x)\, M_{a+1,b}(y_a, y_b|x)\, \beta_b(y_b|x)}{Z_\theta(x)}$$
$$\qquad +\; \sum_{1 \le b < a \le n+1} \sum_{y'_a, y_a} f_j\big(e_a, y|_{e_a}=(y'_a,y_a), x\big) \sum_{y_b} g_k\big(v_b, y|_{v_b}=y_b, x\big)\; \frac{\alpha_{b}(y_b|x)\, M_{b+1,a-1}(y_b, y'_a|x)\, M_a(y'_a, y_a|x)\, \beta_a(y_a|x)}{Z_\theta(x)}$$

$$E_{p_\theta(y|x)}\big[g_j(x,y)\, g_k(x,y)\big] \;=\; \sum_{1 \le a < b \le n+1} \sum_{y_a} g_j\big(v_a, y|_{v_a}=y_a, x\big) \sum_{y_b} g_k\big(v_b, y|_{v_b}=y_b, x\big)\; \frac{\alpha_{a}(y_a|x)\, M_{a+1,b}(y_a, y_b|x)\, \beta_b(y_b|x)}{Z_\theta(x)}$$
$$\qquad +\; \sum_{1 \le b < a \le n+1} \sum_{y_a} g_j\big(v_a, y|_{v_a}=y_a, x\big) \sum_{y_b} g_k\big(v_b, y|_{v_b}=y_b, x\big)\; \frac{\alpha_{b}(y_b|x)\, M_{b+1,a}(y_b, y_a|x)\, \beta_a(y_a|x)}{Z_\theta(x)}$$

The computation of these expectations can be organized in a trellis, as illustrated in Figure 1.

Figure 1: Trellis for computing the expectation of a feature product over a pair of feature functions, where one feature occurs before the other. This leads to one double sum.

Once we obtain the gradient of the objective function (2), we use limited-memory L-BFGS, a quasi-Newton optimization algorithm (McCallum 2002; Nocedal and Wright 2000), to find the local maxima, with the initial value being set to be the optimal solution of the supervised CRF on labeled data.

4 Time and space complexity

The time and space complexity of the semi-supervised CRF training procedure is greater than that of standard supervised CRF training, but nevertheless remains a small degree polynomial in the size of the training data. Let

$n_l$ = size of the labeled set
$n_u$ = size of the unlabeled set
$m_l$ = labeled sequence length
$m_u$ = unlabeled sequence length
$m_t$ = test sequence length
$s$ = number of states
$c$ = number of training iterations.

Then the time required to classify a test sequence is $O(m_t s^2)$, independent of training method, since the Viterbi decoder needs to access each path. For training, supervised CRF training requires $O(c\, n_l m_l s^2)$ time, whereas semi-supervised CRF training requires $O(c\, n_l m_l s^2 + c\, n_u m_u^2 s^3)$ time. The additional cost for semi-supervised training arises from the extra nested loop required to calculate the quadratic feature expectations, which introduces an additional $m_u s$ factor.

However, the space requirements of the two training methods are the same. That is, even though the covariance matrix has size $O(K^2)$, there is never any need to store the entire matrix in memory. Rather, since we only need to compute the product of the covariance with $\theta$, the calculation can be performed iteratively without using extra space beyond that already required by supervised CRF training.
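The remark that only the product of the covariance with $\theta$ is ever needed can be seen in the following brute-force sketch, which enumerates the label set explicitly (so it is exponential in the sequence length and useful only as a reference against which a nested dynamic-programming implementation can be checked). It never materializes the $K \times K$ matrix; the helper name and the data layout are ours.

```python
import numpy as np

def cov_times_theta(theta, feats):
    """Compute cov_{p_theta(y|x)}[f(x,y)] @ theta by enumerating Y.

    feats: (|Y|, K) matrix whose rows are the feature vectors f(x, y)."""
    scores = feats @ theta                            # <theta, f(x, y)> for every y
    p = np.exp(scores - np.logaddexp.reduce(scores))  # p_theta(y|x)
    mean_f = p @ feats                                # E[f(x, y)]
    mean_s = p @ scores                               # E[<theta, f(x, y)>]
    # cov @ theta = E[f <theta, f>] - E[f] E[<theta, f>]; no K x K matrix is formed
    return feats.T @ (p * scores) - mean_f * mean_s
```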
5 Identifying gene and protein mentions

We have developed our new semi-supervised training procedure to address the problem of information extraction from biomedical text, which has received significant attention in the past few years. We have specifically focused on the problem of identifying explicit mentions of gene and protein names (McDonald and Pereira 2005). Recently, McDonald and Pereira (2005) have obtained interesting results on this problem by using a standard supervised CRF approach. However, our contention is that stronger results could be obtained in this domain by exploiting a large corpus of unannotated biomedical text to improve the quality of the predictions, which we now show.

Given a biomedical text, the task of identifying gene mentions can be interpreted as a tagging task, where each word in the text can be labeled with a tag that indicates whether it is the beginning of a gene mention (B), the continuation of a gene mention (I), or outside of any gene mention (O). To compare the performance of different taggers learned by different mechanisms, one can measure the precision, recall and F-measure, given by

precision = (# correct predictions) / (# predicted gene mentions)
recall = (# correct predictions) / (# true gene mentions)
F-measure = (2 × precision × recall) / (precision + recall)
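Since precision, recall and F-measure are computed over gene mentions rather than individual tags, evaluation requires converting B/I/O tag sequences into mention spans first. The sketch below shows one straightforward way to do this; the exact treatment of malformed tag sequences (for example an I tag with no preceding B) in the evaluation actually used by the authors may differ.

```python
def spans(tags):
    """Extract mention spans (start, end) from a B/I/O tag sequence; an I tag
    with no open mention is ignored here."""
    found, start = set(), None
    for i, tag in enumerate(list(tags) + ["O"]):      # sentinel flushes a final span
        if tag in ("B", "O"):
            if start is not None:
                found.add((start, i))
                start = None
            if tag == "B":
                start = i
    return found

def precision_recall_f(gold_tags, pred_tags):
    gold, pred = spans(gold_tags), spans(pred_tags)
    correct = len(gold & pred)
    precision = correct / len(pred) if pred else 0.0
    recall = correct / len(gold) if gold else 0.0
    f = 2 * precision * recall / (precision + recall) if precision + recall else 0.0
    return precision, recall, f
```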
In our evaluation, we compared the proposed semi-supervised learning approach to the state of the art supervised CRF of McDonald and Pereira (2005), and also to self-training (Celeux and Govaert 1992; Yarowsky 1995), using the same feature set as (McDonald and Pereira 2005). The CRF training procedures, supervised and semi-supervised, were run with the same regularization function, $U(\theta) = \|\theta\|^2/2\sigma^2$, used in (McDonald and Pereira 2005).

First we evaluated the performance of the semi-supervised CRF in detail, by varying the ratio between the amount of labeled and unlabeled data, and also varying the tradeoff parameter $\gamma$. We chose a labeled training set A consisting of 5448 words, and considered alternative unlabeled training sets B (5210 words), C (10,208 words), and D (25,145 words), consisting of the same, 2 times and 5 times as many sentences as A respectively. All of these sets were disjoint and selected randomly from the full corpus, the smaller one in (McDonald et al. 2005), consisting of 184,903 words in total. To determine sensitivity to the parameter $\gamma$ we examined a range of discrete values from 0.1 to 20.

In our first experiment, we train the CRF models using labeled set A and unlabeled sets B, C and D respectively. Then we test the performance on the sets B, C and D respectively. The results of our evaluation are shown in Table 1. The performance of the supervised CRF algorithm, trained only on the labeled set A, is given on the first row in Table 1 (corresponding to $\gamma = 0$). By comparison, the results obtained by the semi-supervised CRFs on the held-out sets B, C and D are given in Table 1 by increasing the value of $\gamma$. The results of this experiment demonstrate quite clearly that in most cases the semi-supervised CRF obtains higher precision, recall and F-measure than the fully supervised CRF, yielding a 20% improvement in the best case.

In our second experiment, again we train the CRF models using labeled set A and unlabeled sets B, C and D respectively with increasing values of $\gamma$, but we test the performance on the held-out set E, which is the full corpus minus the labeled set A and the unlabeled sets B, C and D. The results of our evaluation are shown in Table 2 and Figure 2. The blue line in Figure 2 is the result of the supervised CRF algorithm, trained only on the labeled set A. In particular, by using the supervised CRF model, the system predicted 3334 out of 7472 gene mentions, of which 2435 were correct, resulting in a precision of 0.73, recall of 0.33 and F-measure of 0.45. The other curves are those of the semi-supervised CRFs. The results of this experiment demonstrate quite clearly that the semi-supervised CRFs simultaneously increase both the number of predicted gene mentions and the number of correct predictions; thus the precision remains almost the same as the supervised CRF, and the recall increases significantly.

Figure 2: Performance of the supervised and semi-supervised CRFs, plotted as the number of correct predictions (TP) against $\gamma$, with one curve per unlabeled set. The sets B, C and D refer to the unlabeled training set used by the semi-supervised algorithm.

Both experiments, as illustrated in Figure 2 and Tables 1 and 2, show that clearly better results are obtained by incorporating additional unlabeled training data, even when evaluating on disjoint testing data (Figure 2). The performance of the semi-supervised CRF is not overly sensitive to the tradeoff parameter $\gamma$, except that $\gamma$ cannot be set too large.

Table 1: Performance of the semi-supervised CRFs obtained on the held-out sets B, C and D

  γ     Test Set B (trained on A and B)     Test Set C (trained on A and C)     Test Set D (trained on A and D)
        Precision  Recall  F-Measure        Precision  Recall  F-Measure        Precision  Recall  F-Measure
  0     0.80       0.36    0.50             0.77       0.29    0.43             0.74       0.30    0.43
  0.1   0.82       0.40    0.54             0.79       0.32    0.46             0.74       0.31    0.44
  0.5   0.82       0.40    0.54             0.79       0.33    0.46             0.74       0.31    0.44
  1     0.82       0.40    0.54             0.77       0.34    0.47             0.73       0.33    0.45
  5     0.84       0.45    0.59             0.78       0.38    0.51             0.72       0.36    0.48
  10    0.78       0.46    0.58             0.66       0.38    0.48             0.66       0.38    0.47

Table 2: Performance of the semi-supervised CRFs trained by using unlabeled sets B, C and D

  γ     Test Set E (trained on A and B)     Test Set E (trained on A and C)     Test Set E (trained on A and D)
        # predicted  # correct              # predicted  # correct              # predicted  # correct
  0.1   3345         2446                   3376         2470                   3366         2466
  0.5   3413         2489                   3450         2510                   3376         2469
  1     3446         2503                   3588         2580                   3607         2590
  5     4089         2878                   4206         2947                   4165         2888
  10    4450         2799                   4762         2827                   4778         2845

5.1 Comparison to self-training

For completeness, we also compared our results to the self-learning algorithm, which has commonly been referred to as bootstrapping in natural language processing and was originally popularized by the work of Yarowsky in word sense disambiguation (Abney 2004; Yarowsky 1995). In fact, similar ideas have been developed in pattern recognition under the name of the decision-directed algorithm (Duda and Hart 1973), and can also be traced back to the 1970s in the EM literature (Celeux and Govaert 1992). The basic algorithm works as follows (a schematic version is sketched below):

1. Given the labeled and unlabeled data, begin with a seed set of labeled examples chosen from the labeled data.
2. For $t = 0, 1, \ldots$
   (a) Train the supervised CRF on the current labeled examples, obtaining $\theta^{(t)}$.
   (b) For each unlabeled sequence $x^{(i)}$, find $y^{(i)(t)} = \arg\max_y p_{\theta^{(t)}}(y|x^{(i)})$ via Viterbi decoding or another inference algorithm, and add the pair $(x^{(i)}, y^{(i)(t)})$ to the set of labeled examples (replacing any previous label for $x^{(i)}$ if present).
   (c) If $y^{(i)(t)} = y^{(i)(t-1)}$ for every unlabeled $x^{(i)}$, stop; otherwise set $t = t+1$ and iterate.

We implemented this self-training approach and tried it in our experiments. Unfortunately, we were not able to obtain any improvements over the standard supervised CRF with self-learning, using A as the labeled set and B ∪ C ∪ D as the unlabeled set. The semi-supervised CRF remains the best of the approaches we have tried on this problem.
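For concreteness, the self-training loop above can be written schematically as follows. The functions train_crf and viterbi_decode stand for an existing supervised CRF trainer and decoder; they, and the overall layout, are placeholders of ours rather than the authors' code.

```python
def self_train(labeled_seed, unlabeled, train_crf, viterbi_decode, max_iters=50):
    """Schematic self-training: retrain on the seed set plus the current
    putative labelings until those labelings stop changing."""
    examples = list(labeled_seed)                    # seed set, step 1
    previous = None
    theta = None
    for _ in range(max_iters):
        theta = train_crf(examples)                  # step 2(a)
        current = [viterbi_decode(theta, x) for x in unlabeled]   # step 2(b)
        examples = list(labeled_seed) + list(zip(unlabeled, current))
        if current == previous:                      # step 2(c): labels unchanged
            break
        previous = current
    return theta
```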
6 Conclusions and further directions We have presented a new semi-supervised training algorithm for CRFs, based on extending minimum conditional entropy regularization to the structured prediction case. Our approach is motivated by the information-theoretic argument (Grandvalet and Bengio 2004; Roberts et al. 2000) that unlabeled examples can provide the most benefit when classes have small overlap. An iterative ascent optimization procedure was developed for this new criterion, which exploits a nested dynamic programming approach to efficiently compute the covariance matrix of the features. We applied our new approach to the problem of identifying gene name occurrences in biological text, exploiting the availability of auxiliary unlabeled data to improve the performance of the state of the art supervised CRF approach in this domain. Our semi-supervised CRF approach shares all of the benefits of the standard CRF training, including the ability to exploit arbitrary features of the inputs, while obtaining improved accuracy through the use of unlabeled data. The main drawback is that training time is increased because of the extra nested loop needed to calculate feature covariances. Nevertheless, the algorithm is sufficiently efficient to be trained on unlabeled data sets that yield a notable improvement in classification accuracy over standard supervised training. To further accelerate the training process of our semi-supervised CRFs, we may apply stochastic gradient optimization method with adaptive gain adjustment as proposed by Vishwanathan et al. (2006). Acknowledgments Research supported by Genome Alberta, Genome Canada, and the Alberta Ingenuity Centre for Machine Learning. References S. Abney. (2004). Understanding the Yarowsky algorithm. Computational Linguistics, 30(3):365-395. Y. Altun, D. McAllester and M. Belkin. (2005). Maximum margin semi-supervised learning for structured variables. Advances in Neural Information Processing Systems 18. A. Blum and T. Mitchell. (1998). Combining labeled and unlabeled data with co-training. Proceedings of the Workshop on Computational Learning Theory, 92-100. S. Boyd and L. Vandenberghe. (2004). Convex Optimization. Cambridge University Press. V. Castelli and T. Cover. (1996). The relative value of labeled and unlabeled samples in pattern recognition with an unknown mixing parameter. IEEE Trans. on Information Theory, 42(6):2102-2117. G. Celeux and G. Govaert. (1992). A classification EM algorithm for clustering and two stochastic versions. Computational Statistics and Data Analysis, 14:315-332. 215 I. Cohen and F. Cozman. (2006). Risks of semi-supervised learning. Semi-Supervised Learning, O. Chapelle, B. Scholk¨opf and A. Zien, (Editors), 55-70, MIT Press. A. Corduneanu and T. Jaakkola. (2006). Data dependent regularization. Semi-Supervised Learning, O. Chapelle, B. Scholk¨opf and A. Zien, (Editors), 163-182, MIT Press. T. Cover and J. Thomas, (1991). Elements of Information Theory, John Wiley & Sons. R. Duda and P. Hart. (1973). Pattern Classification and Scene Analysis, John Wiley & Sons. Y. Grandvalet and Y. Bengio. (2004). Semi-supervised learning by entropy minimization, Advances in Neural Information Processing Systems, 17:529-536. J. Lafferty, A. McCallum and F. Pereira. (2001). Conditional random fields: probabilistic models for segmenting and labeling sequence data. Proceedings of the 18th International Conference on Machine Learning, 282-289. W. Li and A. McCallum. (2005). 
Semi-supervised sequence modeling with syntactic topic models. Proceedings of the Twentieth National Conference on Artificial Intelligence, 813-818. A. McCallum. (2002). MALLET: A machine learning for language toolkit. [http://mallet.cs.umass.edu] R. McDonald, K. Lerman and Y. Jin. (2005). Conditional random field biomedical entity tagger. [http://www.seas.upenn.edu/~sryantm/software/BioTagger/] R. McDonald and F. Pereira. (2005). Identifying gene and protein mentions in text using conditional random fields. BMC Bioinformatics 2005, 6(Suppl 1):S6. K. Nigam, A. McCallum, S. Thrun and T. Mitchell. (2000). Text classification from labeled and unlabeled documents using EM. Machine Learning, 39(2/3):135-167. J. Nocedal and S. Wright. (2000). Numerical Optimization, Springer. S. Roberts, R. Everson and I. Rezek. (2000). Maximum certainty data partitioning. Pattern Recognition, 33(5):833-839. S. Vishwanathan, N. Schraudolph, M. Schmidt and K. Murphy. (2006). Accelerated training of conditional random fields with stochastic meta-descent. Proceedings of the 23rd International Conference on Machine Learning. D. Yarowsky. (1995). Unsupervised word sense disambiguation rivaling supervised methods. Proceedings of the 33rd Annual Meeting of the Association for Computational Linguistics, 189-196. D. Zhou, O. Bousquet, T. Navin Lal, J. Weston and B. Schölkopf. (2004). Learning with local and global consistency. Advances in Neural Information Processing Systems, 16:321-328. D. Zhou, J. Huang and B. Schölkopf. (2005). Learning from labeled and unlabeled data on a directed graph. Proceedings of the 22nd International Conference on Machine Learning, 1041-1048. X. Zhu, Z. Ghahramani and J. Lafferty. (2003). Semi-supervised learning using Gaussian fields and harmonic functions. Proceedings of the 20th International Conference on Machine Learning, 912-919.

A Deriving the gradient of the entropy

We wish to show that

$$\frac{\partial}{\partial\theta}\Big[\gamma \sum_{i=N+1}^{M} \sum_{y} p_\theta(y|x^{(i)}) \log p_\theta(y|x^{(i)})\Big] \;=\; \gamma \sum_{i=N+1}^{M} \mathrm{cov}_{p_\theta(y|x^{(i)})}\big[\mathbf{f}(x^{(i)}, y)\big]\,\theta \qquad (5)$$

First, note that some simple calculation yields

$$\frac{\partial \log Z_\theta(x^{(i)})}{\partial \theta_j} \;=\; \sum_{y} p_\theta(y|x^{(i)})\, f_j(x^{(i)}, y)$$

and

$$\frac{\partial p_\theta(y|x^{(i)})}{\partial \theta_j} \;=\; \frac{\partial}{\partial \theta_j}\,\frac{\exp\big(\langle\theta, \mathbf{f}(x^{(i)}, y)\rangle\big)}{Z_\theta(x^{(i)})} \;=\; p_\theta(y|x^{(i)})\, f_j(x^{(i)}, y) \;-\; p_\theta(y|x^{(i)}) \sum_{y'} p_\theta(y'|x^{(i)})\, f_j(x^{(i)}, y')$$

Therefore

$$\frac{\partial}{\partial \theta_j}\Big[\gamma \sum_{i=N+1}^{M} \sum_{y} p_\theta(y|x^{(i)}) \log p_\theta(y|x^{(i)})\Big]$$
$$=\; \gamma \sum_{i=N+1}^{M} \frac{\partial}{\partial \theta_j}\Big[\sum_{y} p_\theta(y|x^{(i)})\, \langle\theta, \mathbf{f}(x^{(i)}, y)\rangle \;-\; \log Z_\theta(x^{(i)})\Big]$$
$$=\; \gamma \sum_{i=N+1}^{M} \Big[\sum_{y} p_\theta(y|x^{(i)})\, f_j(x^{(i)}, y) \;+\; \sum_{y} \frac{\partial p_\theta(y|x^{(i)})}{\partial \theta_j}\, \langle\theta, \mathbf{f}(x^{(i)}, y)\rangle \;-\; \sum_{y} p_\theta(y|x^{(i)})\, f_j(x^{(i)}, y)\Big]$$
$$=\; \gamma \sum_{i=N+1}^{M} \Big[\sum_{y} p_\theta(y|x^{(i)})\, f_j(x^{(i)}, y)\, \langle\theta, \mathbf{f}(x^{(i)}, y)\rangle \;-\; \Big(\sum_{y} p_\theta(y|x^{(i)})\, \langle\theta, \mathbf{f}(x^{(i)}, y)\rangle\Big)\Big(\sum_{y} p_\theta(y|x^{(i)})\, f_j(x^{(i)}, y)\Big)\Big]$$
$$=\; \gamma \sum_{i=N+1}^{M} \sum_{k} \theta_k \Big[\sum_{y} p_\theta(y|x^{(i)})\, f_j(x^{(i)}, y)\, f_k(x^{(i)}, y) \;-\; \Big(\sum_{y} p_\theta(y|x^{(i)})\, f_k(x^{(i)}, y)\Big)\Big(\sum_{y} p_\theta(y|x^{(i)})\, f_j(x^{(i)}, y)\Big)\Big]$$

In the vector form, this can be written as (5).
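Identity (5) is easy to sanity-check numerically on a toy example where the label set is enumerated explicitly; the snippet below compares the analytic gradient of the (unweighted, single-example) negative entropy with a central finite difference. The toy sizes and helper names are ours.

```python
import numpy as np

def neg_entropy(theta, feats):
    scores = feats @ theta
    log_p = scores - np.logaddexp.reduce(scores)
    return float(np.sum(np.exp(log_p) * log_p))      # sum_y p_theta(y|x) log p_theta(y|x)

def neg_entropy_grad(theta, feats):
    scores = feats @ theta
    p = np.exp(scores - np.logaddexp.reduce(scores))
    mean_f = p @ feats
    cov = feats.T @ (feats * p[:, None]) - np.outer(mean_f, mean_f)
    return cov @ theta                               # right-hand side of (5), one example

rng = np.random.default_rng(0)
feats = rng.normal(size=(6, 4))                      # 6 labelings, 4 features
theta = rng.normal(size=4)
eps, j = 1e-6, 2
unit = np.eye(4)[j]
numeric = (neg_entropy(theta + eps * unit, feats)
           - neg_entropy(theta - eps * unit, feats)) / (2 * eps)
print(numeric, neg_entropy_grad(theta, feats)[j])    # the two values should agree
```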
2006
27
Proceedings of the 21st International Conference on Computational Linguistics and 44th Annual Meeting of the ACL, pages 217–224, Sydney, July 2006. c⃝2006 Association for Computational Linguistics Training Conditional Random Fields with Multivariate Evaluation Measures Jun Suzuki, Erik McDermott and Hideki Isozaki NTT Communication Science Laboratories, NTT Corp. 2-4 Hikaridai, Seika-cho, Soraku-gun, Kyoto, 619-0237 Japan {jun, mcd, isozaki}@cslab.kecl.ntt.co.jp Abstract This paper proposes a framework for training Conditional Random Fields (CRFs) to optimize multivariate evaluation measures, including non-linear measures such as F-score. Our proposed framework is derived from an error minimization approach that provides a simple solution for directly optimizing any evaluation measure. Specifically focusing on sequential segmentation tasks, i.e. text chunking and named entity recognition, we introduce a loss function that closely reflects the target evaluation measure for these tasks, namely, segmentation F-score. Our experiments show that our method performs better than standard CRF training. 1 Introduction Conditional random fields (CRFs) are a recently introduced formalism (Lafferty et al., 2001) for representing a conditional model p(y|x), where both a set of inputs, x, and a set of outputs, y, display non-trivial interdependency. CRFs are basically defined as a discriminative model of Markov random fields conditioned on inputs (observations) x. Unlike generative models, CRFs model only the output y’s distribution over x. This allows CRFs to use flexible features such as complicated functions of multiple observations. The modeling power of CRFs has been of great benefit in several applications, such as shallow parsing (Sha and Pereira, 2003) and information extraction (McCallum and Li, 2003). Since the introduction of CRFs, intensive research has been undertaken to boost their effectiveness. The first approach to estimating CRF parameters is the maximum likelihood (ML) criterion over conditional probability p(y|x) itself (Lafferty et al., 2001). The ML criterion, however, is prone to over-fitting the training data, especially since CRFs are often trained with a very large number of correlated features. The maximum a posteriori (MAP) criterion over parameters, λ, given x and y is the natural choice for reducing over-fitting (Sha and Pereira, 2003). Moreover, the Bayes approach, which optimizes both MAP and the prior distribution of the parameters, has also been proposed (Qi et al., 2005). Furthermore, large margin criteria have been employed to optimize the model parameters (Taskar et al., 2004; Tsochantaridis et al., 2005). These training criteria have yielded excellent results for various tasks. However, real world tasks are evaluated by task-specific evaluation measures, including non-linear measures such as Fscore, while all of the above criteria achieve optimization based on the linear combination of average accuracies, or error rates, rather than a given task-specific evaluation measure. For example, sequential segmentation tasks (SSTs), such as text chunking and named entity recognition, are generally evaluated with the segmentation F-score. This inconsistency between the objective function during training and the task evaluation measure might produce a suboptimal result. In fact, to overcome this inconsistency, an SVM-based multivariate optimization method has recently been proposed (Joachims, 2005). 
Moreover, an F-score optimization method for logistic regression has also been proposed (Jansche, 2005). In the same spirit as the above studies, we first propose a generalization framework for CRF training that allows us to optimize directly not only the error rate, but also any evaluation measure. In other words, our framework can incorporate any evaluation measure of interest into the loss function and then optimize this loss function as the training objective function. Our proposed framework is fundamentally derived from an approach to (smoothed) error rate minimization well 217 known in the speech and pattern recognition community, namely the Minimum Classification Error (MCE) framework (Juang and Katagiri, 1992). The framework of MCE criterion training supports the theoretical background of our method. The approach proposed here subsumes the conventional ML/MAP criteria training of CRFs, as described in the following. After describing the new framework, as an example of optimizing multivariate evaluation measures, we focus on SSTs and introduce a segmentation F-score loss function for CRFs. 2 CRFs and Training Criteria Given an input (observation) x∈X and parameter vector λ = {λ1, . . . , λM}, CRFs define the conditional probability p(y|x) of a particular output y ∈Y as being proportional to a product of potential functions on the cliques of a graph, which represents the interdependency of y and x. That is: p(y|x; λ) = 1 Zλ(x) Y c∈C(y,x) Φc(y, x; λ) where Φc(y, x; λ) is a non-negative real value potential function on a clique c ∈C(y, x). Zλ(x)= P ˜y∈Y Q c∈C(˜y,x) Φc(˜y, x; λ) is a normalization factor over all output values, Y. Following the definitions of (Sha and Pereira, 2003), a log-linear combination of weighted features, Φc(y, x; λ) = exp(λ · f c(y, x)), is used as individual potential functions, where f c represents a feature vector obtained from the corresponding clique c. That is, Q c∈C(y,x) Φc(y, x) = exp(λ·F(y, x)), where F(y, x)=P c f c(y, x) is the CRF’s global feature vector for x and y. The most probable output ˆy is given by ˆy = arg maxy∈Y p(y|x; λ). However Zλ(x) never affects the decision of ˆy since Zλ(x) does not depend on y. Thus, we can obtain the following discriminant function for CRFs: ˆy = arg max y∈Y λ · F(y, x). (1) The maximum (log-)likelihood (ML) of the conditional probability p(y|x; λ) of training data {(xk, y∗k)}N k=1 w.r.t. parameters λ is the most basic CRF training criterion, that is, arg maxλ P k log p(y∗k|xk; λ), where y∗k is the correct output for the given xk. Maximizing the conditional log-likelihood given by CRFs is equivalent to minimizing the log-loss function, P k −log p(y∗k|xk; λ). We minimize the following loss function for the ML criterion training of CRFs: LML λ = X k h −λ · F(y∗k, xk) + log Zλ(xk) i . To reduce over-fitting, the Maximum a Posteriori (MAP) criterion of parameters λ, that is, arg maxλ P k log p(λ|y∗k, xk) ∝ P k log p(y∗k|xk; λ)p(λ), is now the most widely used CRF training criterion. Therefore, we minimize the following loss function for the MAP criterion training of CRFs: LMAP λ = LML λ −log p(λ). (2) There are several possible choices when selecting a prior distribution p(λ). This paper only considers Lφ-norm prior, p(λ) ∝exp(−||λ||φ/φC), which becomes a Gaussian prior when φ=2. The essential difference between ML and MAP is simply that MAP has this prior term in the objective function. This paper sometimes refers to the ML and MAP criterion training of CRFs as ML/MAP. 
In order to estimate the parameters λ, we seek a zero of the gradient over the parameters λ: ∇LMAP λ = −∇log p(λ) + X k  −F(y∗k, xk) + X y∈Yk exp(λ·F(y, xk)) Zλ(xk) ·F(y, xk)  . (3) The gradient of ML is Eq. 3 without the gradient term of the prior, −∇log p(λ). The details of actual optimization procedures for linear chain CRFs, which are typical CRF applications, have already been reported (Sha and Pereira, 2003). 3 MCE Criterion Training for CRFs The Minimum Classification Error (MCE) framework first arose out of a broader family of approaches to pattern classifier design known as Generalized Probabilistic Descent (GPD) (Katagiri et al., 1991). The MCE criterion minimizes an empirical loss corresponding to a smooth approximation of the classification error. This MCE loss is itself defined in terms of a misclassification measure derived from the discriminant functions of a given task. Via the smoothing parameters, the MCE loss function can be made arbitrarily close to the binary classification error. An important property of this framework is that it makes it 218 possible in principle to achieve the optimal Bayes error even under incorrect modeling assumptions. It is easy to extend the MCE framework to use evaluation measures other than the classification error, namely the linear combination of error rates. Thus, it is possible to optimize directly a variety of (smoothed) evaluation measures. This is the approach proposed in this article. We first introduce a framework for MCE criterion training, focusing only on error rate optimization. Sec. 4 then describes an example of minimizing a different multivariate evaluation measure using MCE criterion training. 3.1 Brief Overview of MCE Let x ∈X be an input, and y ∈Y be an output. The Bayes decision rule decides the most probable output ˆy for x, by using the maximum a posteriori probability, ˆy = arg maxy∈Y p(y|x; λ). In general, p(y|x; λ) can be replaced by a more general discriminant function, that is, ˆy = arg max y∈Y g(y, x, λ). (4) Using the discriminant functions for the possible output of the task, the misclassification measure d() is defined as follows: d(y∗,x, λ)=−g(y∗,x, λ) + max y∈Y\y∗g(y, x, λ). (5) where y∗is the correct output for x. Here it can be noted that, for a given x, d()≥0 indicates misclassification. By using d(), the minimization of the error rate can be rewritten as the minimization of the sum of 0-1 (step) losses of the given training data. That is, arg minλ Lλ where Lλ= X k δ(d(y∗k, xk, λ)). (6) δ(r) is a step function returning 0 if r<0 and 1 otherwise. That is, δ is 0 if the value of the discriminant function of the correct output g(y∗k, xk, λ) is greater than that of the maximum incorrect output g(yk, xk, λ), and δ is 1 otherwise. Eq. 5 is not an appropriate function for optimization since it is a discontinuous function w.r.t. the parameters λ. One choice of continuous misclassification measure consists of substituting ‘max’ with ‘soft-max’, maxk rk ≈ log P k exp(rk). As a result d(y∗, x, λ)=−g∗+log " A X y∈Y\y∗ exp(ψg) # 1 ψ , (7) where g∗= g(y∗, x, λ), g = g(y, x, λ), and A = 1 |Y|−1. ψ is a positive constant that represents Lψnorm. When ψ approaches ∞, Eq. 7 converges to Eq. 5. Note that we can design any misclassification measure, including non-linear measures for d(). Some examples are shown in the Appendices. Of even greater concern is the fact that the step function δ is discontinuous; minimization of Eq. 6 is therefore NP-complete. 
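As a small illustration of Eq. (7), the following sketch computes the soft-max misclassification measure for the case where the candidate outputs can be enumerated and scored; as ψ grows it approaches the hard measure of Eq. (5). The function name and the assumption that the discriminant scores are available as an array are ours.

```python
import numpy as np

def misclassification_measure(scores, correct, psi=1.0):
    """d(y*, x, lambda) of Eq. (7): scores holds g(y, x, lambda) for every
    candidate y; correct is the index of the correct output y*."""
    g_star = scores[correct]
    rivals = np.delete(scores, correct)              # all incorrect outputs
    a = 1.0 / len(rivals)                            # A = 1 / (|Y| - 1)
    soft_max = (np.log(a) + np.logaddexp.reduce(psi * rivals)) / psi
    return -g_star + soft_max                        # tends to -g* + max over incorrect y as psi grows
```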
In the MCE formalism, δ() is replaced with an approximated 0-1 loss function, l(), which we refer to as a smoothing function. A typical choice for l() is the sigmoid function, lsig(), which is differentiable and provides a good approximation of the 0-1 loss when the hyper-parameter α is large (see Eq. 8). Another choice is the (regularized) logistic function, llog(), that gives the upper bound of the 0-1 loss. Logistic loss is used as a conventional CRF loss function and provides convexity while the sigmoid function does not. These two smoothing functions can be written as follows: lsig = (1 + exp(−α · d(y∗, x, λ) −β))−1 llog = α−1 · log(1 + exp(α · d(y∗, x, λ) + β)), (8) where α and β are the hyper-parameters of the training. We can introduce a regularization term to reduce over-fitting, which is derived using the same sense as in MAP, Eq. 2. Finally, the objective function of the MCE criterion with the regularization term can be rewritten in the following form: LMCE λ = Fl,d,g,λ h {(xk, y∗k)}N k=1 i + ||λ||φ φC . (9) Then, the objective function of the MCE criterion that minimizes the error rate is Eq. 9 and F MCE l,d,g,λ = 1 N N X k=1 l(d(y∗k, xk, λ)) (10) is substituted for Fl,d,g,λ. Since N is constant, we can eliminate the term 1/N in actual use. 3.2 Formalization We simply substitute the discriminant function of the CRFs into that of the MCE criterion: g(y, x, λ) = log p(y|x; λ) ∝λ · F(y, x) (11) Basically, CRF training with the MCE criterion optimizes Eq. 9 with Eq. 11 after the selection of an appropriate misclassification measure, d(), and 219 smoothing function, l(). Although there is no restriction on the choice of d() and l(), in this work we select sigmoid or logistic functions for l() and Eq. 7 for d(). The gradient of the loss function Eq. 9 can be decomposed by the following chain rule: ∇LMCE λ = ∂F() ∂l() · ∂l() ∂d() · ∂d() ∂λ + ||λ||φ−1 C . The derivatives of l() w.r.t. d() given in Eq. 8 are written as: ∂lsig/∂d = α ·lsig ·(1−lsig) and ∂llog/∂d=lsig. The derivative of d() of Eq. 7 w.r.t. parameters λ is written in this form: ∂d() ∂λ = − Zλ(x, ψ) Zλ(x, ψ)−exp(ψg∗) ·F(y∗, x) + X y∈Y  exp(ψg) Zλ(x, ψ)−exp(ψg∗) ·F(y, x)  (12) where g = λ · F(y, x), g∗= λ · F(y∗, x), and Zλ(x, ψ)=P y∈Y exp(ψg). Note that we can obtain exactly the same loss function as ML/MAP with appropriate choices of F(), l() and d(). The details are provided in the Appendices. Therefore, ML/MAP can be seen as one special case of the framework proposed here. In other words, our method provides a generalized framework of CRF training. 3.3 Optimization Procedure With linear chain CRFs, we can calculate the objective function, Eq. 9 combined with Eq. 10, and the gradient, Eq. 12, by using the variant of the forward-backward and Viterbi algorithm described in (Sha and Pereira, 2003). Moreover, for the parameter optimization process, we can simply exploit gradient descent or quasi-Newton methods such as L-BFGS (Liu and Nocedal, 1989) as well as ML/MAP optimization. If we select ψ = ∞for Eq. 7, we only need to evaluate the correct and the maximum incorrect output. As we know, the maximum output can be efficiently calculated with the Viterbi algorithm, which is the same as calculating Eq. 1. Therefore, we can find the maximum incorrect output by using the A* algorithm (Hart et al., 1968), if the maximum output is the correct output, and by using the Viterbi algorithm otherwise. It may be feared that since the objective function is not differentiable everywhere for ψ = ∞, problems for optimization would occur. 
However, it has been shown (Le Roux and McDermott, 2005) that even simple gradient-based (firstorder) optimization methods such as GPD and (approximated) second-order methods such as QuickProp (Fahlman, 1988) and BFGS-based methods have yielded good experimental optimization results. 4 Multivariate Evaluation Measures Thus far, we have discussed the error rate version of MCE. Unlike ML/MAP, the framework of MCE criterion training allows the embedding of not only a linear combination of error rates, but also any evaluation measure, including non-linear measures. Several non-linear objective functions, such as F-score for text classification (Gao et al., 2003), and BLEU-score and some other evaluation measures for statistical machine translation (Och, 2003), have been introduced with reference to the framework of MCE criterion training. 4.1 Sequential Segmentation Tasks (SSTs) Hereafter, we focus solely on CRFs in sequences, namely the linear chain CRF. We assume that x and y have the same length: x=(x1, . . . , xn) and y=(y1, . . . , yn). In a linear chain CRF, yi depends only on yi−1. Sequential segmentation tasks (SSTs), such as text chunking (Chunking) and named entity recognition (NER), which constitute the shared tasks of the Conference of Natural Language Learning (CoNLL) 2000, 2002 and 2003, are typical CRF applications. These tasks require the extraction of pre-defined segments, referred to as target segments, from given texts. Fig. 1 shows typical examples of SSTs. These tasks are generally treated as sequential labeling problems incorporating the IOB tagging scheme (Ramshaw and Marcus, 1995). The IOB tagging scheme, where we only consider the IOB2 scheme, is also shown in Fig. 1. B-X, I-X and O indicate that the word in question is the beginning of the tag ‘X’, inside the tag ‘X’, and outside any target segment, respectively. Therefore, a segment is defined as a sequence of a few outputs. 4.2 Segmentation F-score Loss for SSTs The standard evaluation measure of SSTs is the segmentation F-score (Sang and Buchholz, 2000): Fγ = (γ2 + 1) · TP γ2 · FN + FP + (γ2 + 1) · TP (13) 220 He reckons the current account deficit will narrow to only # 1.8 billion . NP VP NP VP PP NP B-NP B-VP B-NP I-NP I-NP I-NP B-VP I-VP B-PP B-NP I-NP I-NP I-NP O x: y: Seg.: United Nation official Ekeus Smith heads for Baghdad . B-ORG I-ORG O O O B-PER I-PER B-LOC O x: y: Seg.: ORG PER LOC Text Chunking Named Entity Recognition y1 y2 y3 y4 y5 y6 y7 y8 y9 y10 y11 y12 y13 y14 Dep.: y1 y2 y3 y4 y5 y6 y7 y8 y9 Dep.: Figure 1: Examples of sequential segmentation tasks (SSTs): text chunking (Chunking) and named entity recognition (NER). where TP, FP and FN represent true positive, false positive and false negative counts, respectively. The individual evaluation units used to calculate TP, FN and PN, are not individual outputs yi or output sequences y, but rather segments. We need to define a segment-wise loss, in contrast to the standard CRF loss, which is sometimes referred to as an (entire) sequential loss (Kakade et al., 2002; Altun et al., 2003). First, we consider the point-wise decision w.r.t. Eq. 1, that is, ˆyi = arg maxyi∈Y1 g(y, x, i, λ). The point-wise discriminant function can be written as follows: g(y, x, i, λ) = max y′∈Y|y|[yi] λ · F(y′, x) (14) where Yj represents a set of all y whose length is j, and Y[yi] represents a set of all y that contain yi in the i’th position. Note that the same output ˆy can be obtained with Eqs. 1 and 14, that is, ˆy = (ˆy1, . . . , ˆyn). 
This point-wise discriminant function is different from that described in (Kakade et al., 2002; Altun et al., 2003), which is calculated based on marginals. Let ysj be an output sequence corresponding to the j-th segment of y, where sj represents a sequence of indices of y, that is, sj = (sj,1, . . . , sj,|sj|). An example of the Chunking data shown in Fig. 1, ys4 is (B-VP, I-VP) where s4 = (7, 8). Let Y[ysj] be a set of all outputs whose positions from sj,1 to sj,|sj| are ysj = (ysj,1, . . . , ysj,|sj|). Then, we can define a segment-wise discriminant function w.r.t. Eq. 1. That is, g(y, x, sj, λ) = max y′∈Y|y|[ysj ] λ · F(y′, x). (15) Note again that the same output ˆy can be obtained using Eqs. 1 and 15, as with the piece-wise discriminant function described above. This property is needed for evaluating segments since we do not know the correct segments of the test data; we can maintain consistency even if we use Eq. 1 for testing and Eq. 15 for training. Moreover, Eq. 15 obviously reduces to Eq. 14 if the length of all segments is 1. Then, the segment-wise misclassification measure d(y∗, x, sj, λ) can be obtained simply by replacing the discriminant function of the entire sequence g(y, x, λ) with that of segmentwise g(y, x, sj, λ) in Eq. 7. Let s∗k be a segment sequence corresponding to the correct output y∗k for a given xk, and S(xk) be all possible segments for a given xk. Then, approximated evaluation functions of TP, FP and FN can be defined as follows: TPl = X k X s∗ j ∈s∗k h 1−l(d(y∗k, xk, s∗ j, λ)) i ·δ(s∗ j) FPl = X k X s′ j∈S(xk)\s∗k l(d(y∗k, xk, s′ j, λ))·δ(s′ j) FNl = X k X s∗ j ∈s∗k l(d(y∗k, xk, s∗ j, λ))·δ(s∗ j) where δ(sj) returns 1 if segment sj is a target segment, and returns 0 otherwise. For the NER data shown in Fig. 1, ‘ORG’, ‘PER’ and ‘LOC’ are the target segments, while segments that are labeled ‘O’ in y are not. Since TPl should not have a value of less than zero, we select sigmoid loss as the smoothing function l(). The second summation of TPl and FNl performs a summation over correct segments s∗. In contrast, the second summation in FPl takes all possible segments into account, but excludes the correct segments s∗. Although an efficient way to evaluate all possible segments has been proposed in the context of semi-Markov CRFs (Sarawagi and Cohen, 2004), we introduce a simple alternative method. If we select ψ = ∞for d() in Eq. 7, we only need to evaluate the segments corresponding to the maximum incorrect output ˜y to calculate FPl. That is, s′ j ∈S(xk)\s∗k can be reduced to s′ j ∈˜sk, where ˜sk represents segments corresponding to the maximum incorrect output ˜y. In practice, this reduces the calculation cost and so we used this method for our experiments described in the next section. Maximizing the segmentation Fγ-score, Eq. 13, 221 is equivalent to minimizing γ2·FN+FP (γ2+1)·TP , since Eq. 13 can also be written as Fγ = 1 1+ γ2·F N+F P (γ2+1)·T P . Thus, an objective function closely reflecting the segmentation Fγ-score based on the MCE criterion can be written as Eq. 9 while replacing Fl,d,g,λ with: F MCE-F l,d,g,λ = γ2 · FNl + FPl (γ2 + 1) · TPl . (16) The derivative of Eq. 16 w.r.t. l() is given by the following equation: ∂F MCE-F l,d,g,λ ∂l() = ( γ2 ZD + (γ2+1)·ZN Z2 D , if δ(s∗ j) = 1 1 ZD , otherwise where ZN and ZD represent the numerator and denominator of Eq. 16, respectively. In the optimization process of the segmentation F-score objective function, we can efficiently calculate Eq. 
15 by using the forward and backward Viterbi algorithm, which is almost the same as calculating Eq. 3 with a variant of the forwardbackward algorithm (Sha and Pereira, 2003). The same numerical optimization methods described in Sec. 3.3 can be employed for this optimization. 5 Experiments We used the same Chunking and ‘English’ NER task data used for the shared tasks of CoNLL2000 (Sang and Buchholz, 2000) and CoNLL2003 (Sang and De Meulder, 2003), respectively. Chunking data was obtained from the Wall Street Journal (WSJ) corpus: sections 15-18 as training data (8,936 sentences and 211,727 tokens), and section 20 as test data (2,012 sentences and 47,377 tokens), with 11 different chunk-tags, such as NP and VP plus the ‘O’ tag, which represents the outside of any target chunk (segment). The English NER data was taken from the Reuters Corpus21. The data consists of 203,621, 51,362 and 46,435 tokens from 14,987, 3,466 and 3,684 sentences in training, development and test data, respectively, with four named entity tags, PERSON, LOCATION, ORGANIZATION and MISC, plus the ‘O’ tag. 5.1 Comparison Methods and Parameters For ML and MAP, we performed exactly the same training procedure described in (Sha and Pereira, 2003) with L-BFGS optimization. For MCE, we 1http://trec.nist.gov/data/reuters/reuters.html only considered d() with ψ = ∞as described in Sec. 4.2, and used QuickProp optimization2. For MAP, MCE and MCE-F, we used the L2norm regularization. We selected a value of C from 1.0 × 10n where n takes a value from -5 to 5 in intervals 1 by development data3. The tuning of smoothing function hyper-parameters is not considered in this paper; that is, α=1 and β=0 were used for all the experiments. We evaluated the performance by Eq. 13 with γ = 1, which is the evaluation measure used in CoNLL-2000 and 2003. Moreover, we evaluated the performance by using the average sentence accuracy, since the conventional ML/MAP objective function reflects this sequential accuracy. 5.2 Features As regards the basic feature set for Chunking, we followed (Kudo and Matsumoto, 2001), which is the same feature set that provided the best result in CoNLL-2000. We expanded the basic features by using bigram combinations of the same types of features, such as words and part-of-speech tags, within window size 5. In contrast to the above, we used the original feature set for NER. We used features derived only from the data provided by CoNLL-2003 with the addition of character-level regular expressions of uppercases [A-Z], lowercases [a-z], digits [0-9] or others, and prefixes and suffixes of one to four letters. We also expanded the above basic features by using bigram combinations within window size 5. Note that we never used features derived from external information such as the Web, or a dictionary, which have been used in many previous studies but which are difficult to employ for validating the experiments. 5.3 Results and Discussion Our experiments were designed to investigate the impact of eliminating the inconsistency between objective functions and evaluation measures, that is, to compare ML/MAP and MCE-F. Table 1 shows the results of Chunking and NER. The Fγ=1 and ‘Sent’ columns show the performance evaluated using segmentation F-score and 2In order to realize faster convergence, we applied online GPD optimization for the first ten iterations. 3Chunking has no common development set. 
We first train the systems with all but the last 2000 sentences in the training data as a development set to obtain C, and then retrain them with all the training data. 222 Table 1: Performance of text chunking and named entity recognition data (CoNLL-2000 and 2003) Chunking NER l() n Fγ=1 Sent n Fγ=1 Sent MCE-F (sig) 5 93.96 60.44 4 84.72 78.72 MCE (log) 3 93.92 60.19 3 84.30 78.02 MCE (sig) 3 93.85 60.14 3 83.82 77.52 MAP 0 93.71 59.15 0 83.79 77.39 ML 93.19 56.26 82.39 75.71 sentence accuracy, respectively. MCE-F refers to the results obtained from optimizing Eq. 9 based on Eq. 16. In addition, we evaluated the error rate version of MCE. MCE(log) and MCE(sig) indicate that logistic and sigmoid functions are selected for l(), respectively, when optimizing Eq. 9 based on Eq. 10. Moreover, MCE(log) and MCE(sig) used d() based on ψ=∞, and were optimized using QuickProp; these are the same conditions as used for MCE-F. We found that MCE-F exhibited the best results for both Chunking and NER. There is a significant difference (p<0.01) between MCE-F and ML/MAP with the McNemar test, in terms of the correctness of both individual outputs, yk i , and sentences, yk. NER data has 83.3% (170524/204567) and 82.6% (38554/46666) of ‘O’ tags in the training and test data, respectively while the corresponding values of the Chunking data are only 13.1% (27902/211727) and 13.0% (6180/47377). In general, such an imbalanced data set is unsuitable for accuracy-based evaluation. This may be one reason why MCE-F improved the NER results much more than the Chunking results. The only difference between MCE(sig) and MCE-F is the objective function. The corresponding results reveal the effectiveness of using an objective function that is consistent as the evaluation measure for the target task. These results show that minimizing the error rate is not optimal for improving the segmentation F-score evaluation measure. Eliminating the inconsistency between the task evaluation measure and the objective function during the training can improve the overall performance. 5.3.1 Influence of Initial Parameters While ML/MAP and MCE(log) is convex w.r.t. the parameters, neither the objective function of MCE-F, nor that of MCE(sig), is convex. Therefore, initial parameters can affect the optimization Table 2: Performance when initial parameters are derived from MAP Chunking NER l() n Fγ=1 Sent n Fγ=1 Sent MCE-F (sig) 5 94.03 60.74 4 85.29 79.26 MCE (sig) 3 93.97 60.59 3 84.57 77.71 results, since QuickProp as well as L-BFGS can only find local optima. The previous experiments were only performed with all parameters initialized at zero. In this experiment, the parameters obtained by the MAPtrained model were used as the initial values of MCE-F and MCE(sig). This evaluation setting appears to be similar to reranking, although we used exactly the same model and feature set. Table 2 shows the results of Chunking and NER obtained with this parameter initialization setting. When we compare Tables 1 and 2, we find that the initialization with the MAP parameter values further improves performance. 6 Related Work Various loss functions have been proposed for designing CRFs (Kakade et al., 2002; Altun et al., 2003). This work also takes the design of the loss functions for CRFs into consideration. However, we proposed a general framework for designing these loss function that included non-linear loss functions, which has not been considered in previous work. 
With Chunking, (Kudo and Matsumoto, 2001) reported the best F-score of 93.91 with the voting of several models trained by Support Vector Machine in the same experimental settings and with the same feature set. MCE-F with the MAP parameter initialization achieved an F-score of 94.03, which surpasses the above result without manual parameter tuning. With NER, we cannot make a direct comparison with previous work in the same experimental settings because of the different feature set, as described in Sec. 5.2. However, MCE-F showed the better performance of 85.29 compared with (McCallum and Li, 2003) of 84.04, which used the MAP training of CRFs with a feature selection architecture, yielding similar results to the MAP results described here. 223 7 Conclusions We proposed a framework for training CRFs based on optimization criteria directly related to target multivariate evaluation measures. We first provided a general framework of CRF training based on MCE criterion. Then, specifically focusing on SSTs, we introduced an approximate segmentation F-score objective function. Experimental results showed that eliminating the inconsistency between the task evaluation measure and the objective function used during training improves the overall performance in the target task without any change in feature set or model. Appendices Misclassification measures Another type of misclassification measure using soft-max is (Katagiri et al., 1991): d(y, x, λ) = −g∗+  A X y∈Y\y∗ gψ  1 ψ . Another d(), for g in the range [0, ∞): d(y, x, λ) = h A P y∈Y\y∗gψi 1 ψ /g∗. Comparison of ML/MAP and MCE If we select llog() with α=1 and β =0, and use Eq. 7 with ψ = 1 and without the term A for d(). We can obtain the same loss function as ML/MAP: log (1 + exp(−g∗+ log(Zλ −exp(g∗)))) = log  exp(g∗) + (Zλ −exp(g∗)) exp(g∗)  = −g∗+ log(Zλ). References Y. Altun, M. Johnson, and T. Hofmann. 2003. Investigating Loss Functions and Optimization Methods for Discriminative Learning of Label Sequences. In Proc. of EMNLP2003, pages 145–152. S. E. Fahlman. 1988. An Empirical Study of Learning Speech in Backpropagation Networks. In Technical Report CMU-CS-88-162, Carnegie Mellon University. S. Gao, W. Wu, C.-H. Lee, and T.-S. Chua. 2003. A Maximal Figure-of-Merit Approach to Text Categorization. In Proc. of SIGIR’03, pages 174–181. P. E. Hart, N. J. Nilsson, and B. Raphael. 1968. A Formal Basis for the Heuristic Determination of Minimum Cost Paths. IEEE Trans. on Systems Science and Cybernetics, SSC-4(2):100–107. M. Jansche. 2005. Maximum Expected F-Measure Training of Logistic Regression Models. In Proc. of HLT/EMNLP2005, pages 692–699. T. Joachims. 2005. A Support Vector Method for Multivariate Performance Measures. In Proc. of ICML-2005, pages 377–384. B. H. Juang and S. Katagiri. 1992. Discriminative Learning for Minimum Error Classification. IEEE Trans. on Signal Processing, 40(12):3043–3053. S. Kakade, Y. W. Teh, and S. Roweis. 2002. An Alternative Objective Function for Markovian Fields. In Proc. of ICML-2002, pages 275–282. S. Katagiri, C. H. Lee, and B.-H. Juang. 1991. New Discriminative Training Algorithms based on the Generalized Descent Method. In Proc. of IEEE Workshop on Neural Networks for Signal Processing, pages 299–308. T. Kudo and Y. Matsumoto. 2001. Chunking with Support Vector Machines. In Proc. of NAACL-2001, pages 192– 199. J. Lafferty, A. McCallum, and F. Pereira. 2001. Conditional Random Fields: Probabilistic Models for Segmenting and Labeling Sequence Data. In Proc. of ICML-2001, pages 282–289. D. C. Liu and J. 
Nocedal. 1989. On the Limited Memory BFGS Method for Large-scale Optimization. Mathematic Programming, (45):503–528. A. McCallum and W. Li. 2003. Early Results for Named Entity Recognition with Conditional Random Fields Feature Induction and Web-Enhanced Lexicons. In Proc. of CoNLL-2003, pages 188–191. F. J. Och. 2003. Minimum Error Rate Training in Statistical Machine Translation. In Proc. of ACL-2003, pages 160– 167. Y. Qi, M. Szummer, and T. P. Minka. 2005. Bayesian Conditional Random Fields. In Proc. of AI & Statistics 2005. L. A. Ramshaw and M. P. Marcus. 1995. Text Chunking using Transformation-based Learning. In Proc. of VLC1995, pages 88–94. J. Le Roux and E. McDermott. 2005. Optimization Methods for Discriminative Training. In Proc. of Eurospeech 2005, pages 3341–3344. E. F. Tjong Kim Sang and S. Buchholz. 2000. Introduction to the CoNLL-2000 Shared Task: Chunking. In Proc. of CoNLL/LLL-2000, pages 127–132. E. F. Tjong Kim Sang and F. De Meulder. 2003. Introduction to the CoNLL-2003 Shared Task: Language-Independent Named Entity Recognition. In Proc. of CoNLL-2003, pages 142–147. S. Sarawagi and W. W. Cohen. 2004. Semi-Markov Conditional Random Fields for Information Extraction. In Proc of NIPS-2004. F. Sha and F. Pereira. 2003. Shallow Parsing with Conditional Random Fields. In Proc. of HLT/NAACL-2003, pages 213–220. B. Taskar, C. Guestrin, and D. Koller. 2004. Max-Margin Markov Networks. In Proc. of NIPS-2004. I. Tsochantaridis, T. Joachims and T. Hofmann, and Y. Altun. 2005. Large Margin Methods for Structured and Interdependent Output Variables. JMLR, 6:1453–1484. 224
2006
28
Proceedings of the 21st International Conference on Computational Linguistics and 44th Annual Meeting of the ACL, pages 225–232, Sydney, July 2006. c⃝2006 Association for Computational Linguistics Approximation Lasso Methods for Language Modeling Jianfeng Gao Microsoft Research One Microsoft Way Redmond WA 98052 USA [email protected] Hisami Suzuki Microsoft Research One Microsoft Way Redmond WA 98052 USA [email protected] Bin Yu Department of Statistics University of California Berkeley., CA 94720 U.S.A. [email protected] Abstract Lasso is a regularization method for parameter estimation in linear models. It optimizes the model parameters with respect to a loss function subject to model complexities. This paper explores the use of lasso for statistical language modeling for text input. Owing to the very large number of parameters, directly optimizing the penalized lasso loss function is impossible. Therefore, we investigate two approximation methods, the boosted lasso (BLasso) and the forward stagewise linear regression (FSLR). Both methods, when used with the exponential loss function, bear strong resemblance to the boosting algorithm which has been used as a discriminative training method for language modeling. Evaluations on the task of Japanese text input show that BLasso is able to produce the best approximation to the lasso solution, and leads to a significant improvement, in terms of character error rate, over boosting and the traditional maximum likelihood estimation. 1 Introduction Language modeling (LM) is fundamental to a wide range of applications. Recently, it has been shown that a linear model estimated using discriminative training methods, such as the boosting and perceptron algorithms, outperforms significantly a traditional word trigram model trained using maximum likelihood estimation (MLE) on several tasks such as speech recognition and Asian language text input (Bacchiani et al. 2004; Roark et al. 2004; Gao et al. 2005; Suzuki and Gao 2005). The success of discriminative training methods is largely due to fact that unlike the traditional approach (e.g., MLE) that maximizes the function (e.g., likelihood of training data) that is loosely associated with error rate, discriminative training methods aim to directly minimize the error rate on training data even if they reduce the likelihood. However, given a finite set of training samples, discriminative training methods could lead to an arbitrary complex model for the purpose of achieving zero training error. It is well-known that complex models exhibit high variance and perform poorly on unseen data. Therefore some regularization methods have to be used to control the complexity of the model. Lasso is a regularization method for parameter estimation in linear models. It optimizes the model parameters with respect to a loss function subject to model complexities. The basic idea of lasso is originally proposed by Tibshirani (1996). Recently, there have been several implementations and experiments of lasso on multi-class classification tasks where only a small number of features need to be handled and the lasso solution can be directly computed via numerical methods. To our knowledge, this paper presents the first empirical study of lasso for a realistic, large scale task: LM for Asian language text input. Because the task utilizes millions of features and training samples, directly optimizing the penalized lasso loss function is impossible. 
Therefore, two approximation methods, the boosted lasso (BLasso, Zhao and Yu 2004) and the forward stagewise linear regression (FSLR, Hastie et al. 2001), are investigated. Both methods, when used with the exponential loss function, bear strong resemblance to the boosting algorithm which has been used as a discriminative training method for LM. Evaluations on the task of Japanese text input show that BLasso is able to produce the best approximation to the lasso solution, and leads to a significant improvement, in terms of character error rate, over the boosting algorithm and the traditional MLE. 2 LM Task and Problem Definition This paper studies LM on the application of Asian language (e.g. Chinese or Japanese) text input, a standard method of inputting Chinese or Japanese text by converting the input phonetic symbols into the appropriate word string. In this paper we call the task IME, which stands for 225 input method editor, based on the name of the commonly used Windows-based application. Performance on IME is measured in terms of the character error rate (CER), which is the number of characters wrongly converted from the phonetic string divided by the number of characters in the correct transcript. Similar to speech recognition, IME is viewed as a Bayes decision problem. Let A be the input phonetic string. An IME system’s task is to choose the most likely word string W* among those candidates that could be converted from A: ) | ( ) ( max arg ) | ( max arg (A) ) ( * W A P W P A W P W W A W GEN GEN ∈ ∈ = = (1) where GEN(A) denotes the candidate set given A. Unlike speech recognition, however, there is no acoustic ambiguity as the phonetic string is inputted by users. Moreover, we can assume a unique mapping from W and A in IME as words have unique readings, i.e. P(A|W) = 1. So the decision of Equation (1) depends solely upon P(W), making IME an ideal evaluation test bed for LM. In this study, the LM task for IME is formulated under the framework of linear models (e.g., Duda et al. 2001). We use the following notation, adapted from Collins and Koo (2005): • Training data is a set of example input/output pairs. In LM for IME, training samples are represented as {Ai, WiR}, for i = 1…M, where each Ai is an input phonetic string and WiR is the reference transcript of Ai. • We assume some way of generating a set of candidate word strings given A, denoted by GEN(A). In our experiments, GEN(A) consists of top n word strings converted from A using a baseline IME system that uses only a word trigram model. • We assume a set of D+1 features fd(W), for d = 0…D. The features could be arbitrary functions that map W to real values. Using vector notation, we have f(W)∈ℜD+1, where f(W) = [f0(W), f1(W), …, fD(W)]T. f0(W) is called the base feature, and is defined in our case as the log probability that the word trigram model assigns to W. Other features (fd(W), for d = 1…D) are defined as the counts of word n-grams (n = 1 and 2 in our experiments) in W. • Finally, the parameters of the model form a vector of D+1 dimensions, each for one feature function, λ = [λ0, λ1, …, λD]. The score of a word string W can be written as ) ( ) , ( W W Score λf λ = ∑ = = D d d d W f λ 0 ) ( . (2) The decision rule of Equation (1) is rewritten as ) , ( max arg ) , ( (A) * λ λ GEN W Score A W W∈ = . (3) Equation (3) views IME as a ranking problem, where the model gives the ranking score, not probabilities. We therefore do not evaluate the model via perplexity. 
Now, assume that we can measure the number of conversion errors in W by comparing it with a reference transcript WR using an error function Er(WR,W), which is the string edit distance function in our case. We call the sum of error counts over the training samples sample risk. Our goal then is to search for the best parameter set λ which minimizes the sample risk, as in Equation (4): ∑ = = M i i i R i def MSR A W W ... 1 * )) , ( , Er( min arg λ λ λ . (4) However, (4) cannot be optimized easily since Er(.) is a piecewise constant (or step) function of λ and its gradient is undefined. Therefore, discriminative methods apply different approaches that optimize it approximately. The boosting algorithm described below is one of such approaches. 3 Boosting This section gives a brief review of the boosting algorithm, following the description of some recent work (e.g., Schapire and Singer 1999; Collins and Koo 2005). The boosting algorithm uses an exponential loss function (ExpLoss) to approximate the sample risk in Equation (4). We define the margin of the pair (WR, W) with respect to the model λ as ) , ( ) , ( ) , ( λ λ W Score W Score W W M R R − = (5) Then, ExpLoss is defined as ∑ ∑ = ∈ − = M i A W i R i i i W W M ... 1 ) ( )) , ( exp( ) ExpLoss( GEN λ (6) Notice that ExpLoss is convex so there is no problem with local minima when optimizing it. It is shown in Freund et al. (1998) and Collins and Koo (2005) that there exist gradient search procedures that converge to the right solution. Figure 1 summarizes the boosting algorithm we used. After initialization, Steps 2 and 3 are 1 Set λ0 = argminλ0ExpLoss(λ); and λd = 0 for d=1…D 2 Select a feature fk* which has largest estimated impact on reducing ExpLoss of Eq. (6) 3 Update λk* Å λk* + δ*, and return to Step 2 Figure 1: The boosting algorithm 226 repeated N times; at each iteration, a feature is chosen and its weight is updated as follows. First, we define Upd(λ, k, δ) as an updated model, with the same parameter values as λ with the exception of λk, which is incremented by δ } ,..., ,..., , { ) , , Upd( 1 0 D k k λ δ λ λ λ δ + = λ Then, Steps 2 and 3 in Figure 1 can be rewritten as Equations (7) and (8), respectively. )) , , d( ExpLoss(Up min arg *) *, ( , δ δ δ k k k λ = (7) *) *, , Upd( 1 δ k t t − = λ λ (8) The boosting algorithm can be too greedy: Each iteration usually reduces the ExpLoss(.) on training data, so for the number of iterations large enough this loss can be made arbitrarily small. However, fitting training data too well eventually leads to overfiting, which degrades the performance on unseen test data (even though in boosting overfitting can happen very slowly). Shrinkage is a simple approach to dealing with the overfitting problem. It scales the incremental step δ by a small constant ν, ν ∈ (0, 1). Thus, the update of Equation (8) with shrinkage is *) *, , Upd( 1 νδ k t t − = λ λ (9) Empirically, it has been found that smaller values of ν lead to smaller numbers of test errors. 4 Lasso Lasso is a regularization method for estimation in linear models (Tibshirani 1996). It regularizes or shrinks a fitted model through an L1 penalty or constraint. Let T(λ) denote the L1 penalty of the model, i.e., T(λ) = ∑d = 0…D|λd|. 
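To keep the preceding definitions concrete, the sketch below spells out ExpLoss (Equation (6)), a single boosting iteration with shrinkage (Figure 1 together with Equations (7)-(9)), and the L1 penalty T(λ) just introduced. It is a naive rendering meant only to make the updates explicit: each training sample is assumed to be stored as a pair (reference feature vector, list of candidate feature vectors), and the continuous search for the optimal step δ* is replaced by a small grid of candidate steps, an assumption made here for brevity (the paper's own closed-form update appears later as Equation (16)).

import numpy as np

def exploss(lam, samples):
    # Equation (6): sum over samples i and candidates W in GEN(A_i) of
    # exp(-M(W_i^R, W)), with the margin M defined in Equation (5).
    total = 0.0
    for ref_feats, cand_feats in samples:
        ref_score = np.dot(lam, ref_feats)
        for f in cand_feats:
            total += np.exp(np.dot(lam, f) - ref_score)
    return total

def boosting_step(lam, samples, deltas=(-0.5, -0.1, 0.1, 0.5), nu=0.1):
    # Steps 2-3 of Figure 1 with shrinkage (Equations (7)-(9)): pick the
    # (feature k*, step delta*) pair that most reduces ExpLoss, then move
    # lambda_k* by nu * delta* only.
    best_k, best_delta, best_loss = None, 0.0, exploss(lam, samples)
    for k in range(len(lam)):
        for delta in deltas:
            trial = np.array(lam, dtype=float)
            trial[k] += delta                    # Upd(lambda, k, delta)
            loss = exploss(trial, samples)
            if loss < best_loss:
                best_k, best_delta, best_loss = k, delta, loss
    if best_k is not None:
        lam[best_k] += nu * best_delta
    return lam

def l1_penalty(lam):
    # T(lambda) = sum_d |lambda_d|
    return float(np.sum(np.abs(lam)))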
We then optimize the model λ so as to minimize a regularized loss function on training data, called lasso loss defined as ) ( ) ExpLoss( ) , LassoLoss( λ λ λ T α α + = (10) where T(λ) generally penalizes larger models (or complex models), and the parameter α controls the amount of regularization applied to the estimate. Setting α = 0 reverses the LassoLoss to the unregularized ExpLoss; as α increases, the model coefficients all shrink, each ultimately becoming zero. In practice, α should be adaptively chosen to minimize an estimate of expected loss, e.g., α decreases with the increase of the number of iterations. Computation of the solution to the lasso problem has been studied for special loss functions. For least square regression, there is a fast algorithm LARS to find the whole lasso path for different α’ s (Obsborn et al. 2000a; 2000b; Efron et al. 2004); for 1-norm SVM, it can be transformed into a linear programming problem with a fast algorithm similar to LARS (Zhu et al. 2003). However, the solution to the lasso problem for a general convex loss function and an adaptive α remains open. More importantly for our purposes, directly minimizing lasso function of Equation (10) with respect to λ is not possible when a very large number of model parameters are employed, as in our task of LM for IME. Therefore we investigate below two methods that closely approximate the effect of the lasso, and are very similar to the boosting algorithm. It is also worth noting the difference between L1 and L2 penalty. The classical Ridge Regression setting uses an L2 penalty in Equation (10) i.e., T(λ) = ∑d = 0…D(λd)2, which is much easier to minimize (for least square loss but not for ExpLoss). However, recent research (Donoho et al. 1995) shows that the L1 penalty is better suited for sparse situations, where there are only a small number of features with nonzero weights among all candidate features. We find that our task is indeed a sparse situation: among 860,000 features, in the resulting linear model only around 5,000 features have nonzero weights. We then focus on the L1 penalty. We leave the empirical comparison of the L1 and L2 penalty on the LM task to future work. 4.1 Forward Stagewise Linear Regression (FSLR) The first approximation method we used is FSLR, described in (Algorithm 10.4, Hastie et al. 2001), where Steps 2 and 3 in Figure 1 are performed according to Equations (7) and (11), respectively. )) , , d( ExpLoss(Up min arg *) *, ( , δ δ δ k k k λ = (7) *)) sign( *, , Upd( 1 δ ε × = − k t t λ λ (11) Notice that FSLR is very similar to the boosting algorithm with shrinkage in that at each step, the feature fk* that has largest estimated impact on reducing ExpLoss is selected. The only difference is that FSLR updates the weight of fk* by a small fixed step size ε. By taking such small steps, FSLR imposes some implicit regularization, and can closely approximate the effect of the lasso in a local sense (Hastie et al. 2001). Empirically, we find that the performance of the boosting algorithm with shrinkage closely resembles that of FSLR, with the learning rate parameter ν corresponding to ε. 227 4.2 Boosted Lasso (BLasso) The second method we used is a modified version of the BLasso algorithm described in Zhao and Yu (2004). There are two major differences between BLasso and FSLR. At each iteration, BLasso can take either a forward step or a backward step. 
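Before turning to BLasso, the lasso loss of Equation (10) and the FSLR update of Equations (7) and (11) can be sketched as follows, reusing exploss and l1_penalty from the sketch above. The grid over candidate steps again stands in for the continuous search over δ and is only an assumption of this sketch.

def lasso_loss(lam, samples, alpha):
    # Equation (10): LassoLoss(lambda, alpha) = ExpLoss(lambda) + alpha * T(lambda)
    return exploss(lam, samples) + alpha * l1_penalty(lam)

def fslr_step(lam, samples, eps=0.5, deltas=(-0.5, -0.1, 0.1, 0.5)):
    # FSLR: select (k*, delta*) exactly as in boosting (Equation (7)), but
    # update lambda_k* by a fixed step eps in the direction sign(delta*)
    # (Equation (11)).
    best_k, best_delta, best_loss = None, 0.0, float("inf")
    for k in range(len(lam)):
        for delta in deltas:
            trial = np.array(lam, dtype=float)
            trial[k] += delta
            loss = exploss(trial, samples)
            if loss < best_loss:
                best_k, best_delta, best_loss = k, delta, loss
    lam[best_k] += eps * (1.0 if best_delta >= 0 else -1.0)
    return lam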
Similar to the boosting algorithm and FSLR, at each forward step, a feature is selected and its weight is updated according to Equations (12) and (13). )) , , d( ExpLoss(Up min arg *) *, ( , δ δ ε δ k k k λ ± = = (12) *)) sign( *, , Upd( 1 δ ε × = − k t t λ λ (13) However, there is an important difference between Equations (12) and (7). In the boosting algorithm with shrinkage and FSLR, as shown in Equation (7), a feature is selected by its impact on reducing the loss with its optimal update δ*. In contract, in BLasso, as shown in Equation (12), the optimization over δ is removed, and for each feature, its loss is calculated with an update of either +ε or -ε, i.e., the grid search is used for feature selection. We will show later that this seemingly trivial difference brings a significant improvement. The backward step is unique to BLasso. In each iteration, a feature is selected and its weight is updated backward if and only if it leads to a decrease of the lasso loss, as shown in Equations (14) and (15): ) ) sign( , , d( ExpLoss(Up min arg * 0 , ε λ λ × − = ≠ k k k k k λ (14) ) ) sign( *, , Upd( * 1 ε λ × − = − k t t k λ λ θ α α > − − − ) , LassoLoss( ) , LassoLoss( if 1 1 t t t t λ λ (15) where θ is a tolerance parameter. Figure 2 summarizes the BLasso algorithm we used. After initialization, Steps 4 and 5 are repeated N times; at each iteration, a feature is chosen and its weight is updated either backward or forward by a fixed amount ε. Notice that the value of α is adaptively chosen according to the reduction of ExpLoss during training. The algorithm starts with a large initial α, and then at each forward step the value of α decreases until the ExpLoss stops decreasing. This is intuitively desirable: It is expected that most highly effective features are selected in early stages of training, so the reduction of ExpLoss at each step in early stages are more substantial than in later stages. These early steps coincide with the boosting steps most of the time. In other words, the effect of backward steps is more visible at later stages. Our implementation of BLasso differs slightly from the original algorithm described in Zhao and Yu (2004). Firstly, because the value of the base feature f0 is the log probability (assigned by a word trigram model) and has a different range from that of other features as in Equation (2), λ0 is set to optimize ExpLoss in the initialization step (Step 1 in Figure 2) and remains fixed during training. As suggested by Collins and Koo (2005), this ensures that the contribution of the log-likelihood feature f0 is well-calibrated with respect to ExpLoss. Secondly, when updating a feature weight, if the size of the optimal update step (computed via Equation (7)) is smaller than ε, we use the optimal step to update the feature. Therefore, in our implementation BLasso does not always take a fixed step; it may take steps whose size is smaller than ε. In our initial experiments we found that both changes (also used in our implementations of boosting and FSLR) were crucial to the performance of the methods. 1 Initialize λ0: set λ0 = argminλ0ExpLoss(λ), and λd = 0 for d=1…D. 2 Take a forward step according to Eq. (12) and (13), and the updated model is denoted by λ1 3 Initialize α = (ExpLoss(λ0)-ExpLoss(λ1))/ε 4 Take a backward step if and only if it leads to a decrease of LassoLoss according to Eq. (14) and (15), where θ = 0; otherwise 5 Take a forward step according to Eq. (12) and (13); update α = min(α, (ExpLoss(λt-1)-ExpLoss(λt))/ε ); and return to Step 4. 
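The procedure listed above can be rendered schematically in code as follows. This is not the authors' implementation: Step 1 (fitting λ0 for the base feature) is omitted and we simply start from the λ passed in, forward steps use the ±ε grid search of Equations (12)-(13), the backward test follows Equations (14)-(15) with θ = 0, and exploss, l1_penalty and lasso_loss are reused from the earlier sketches.

def forward_step(lam, samples, eps):
    # Equations (12)-(13): grid search over all features with steps +eps / -eps,
    # then move the winning feature by exactly eps in the winning direction.
    best_k, best_sign, best_loss = 0, 1.0, float("inf")
    for k in range(len(lam)):
        for sign in (1.0, -1.0):
            trial = np.array(lam, dtype=float)
            trial[k] += sign * eps
            loss = exploss(trial, samples)
            if loss < best_loss:
                best_k, best_sign, best_loss = k, sign, loss
    lam[best_k] += best_sign * eps
    return lam, best_loss

def blasso(lam, samples, eps=0.5, n_iter=100, theta=0.0):
    loss_before = exploss(lam, samples)
    lam, loss_after = forward_step(lam, samples, eps)          # Step 2
    alpha = (loss_before - loss_after) / eps                   # Step 3
    for _ in range(n_iter):
        # Step 4: best backward candidate among features with nonzero weight.
        trials = []
        for k in range(len(lam)):
            if lam[k] != 0.0:
                trial = np.array(lam, dtype=float)
                trial[k] -= np.sign(trial[k]) * eps            # Equation (14)
                trials.append((exploss(trial, samples), trial))
        stepped_back = False
        if trials:
            trial_loss, trial_lam = min(trials, key=lambda t: t[0])
            decrease = lasso_loss(lam, samples, alpha) - (trial_loss + alpha * l1_penalty(trial_lam))
            if decrease > theta:                               # Equation (15)
                lam, stepped_back = trial_lam, True
        if not stepped_back:
            # Step 5: forward step, then shrink alpha when ExpLoss stops dropping fast.
            loss_before = exploss(lam, samples)
            lam, loss_after = forward_step(lam, samples, eps)
            alpha = min(alpha, (loss_before - loss_after) / eps)
    return lam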
Figure 2: The BLasso algorithm (Zhao and Yu 2004) provides theoretical justifications for BLasso. It has been proved that (1) it guarantees that it is safe for BLasso to start with an initial α which is the largest α that would allow an ε step away from 0 (i.e., larger α’s correspond to T(λ)=0); (2) for each value of α, BLasso performs coordinate descent (i.e., reduces ExpLoss by updating the weight of a feature) until there is no descent step; and (3) for each step where the value of α decreases, it guarantees that the lasso loss is reduced. As a result, it can be proved that for a finite number of features and θ = 0, the BLasso algorithm shown in Figure 2 converges to the lasso solution when ε Æ 0. 5 Evaluation 5.1 Settings We evaluated the training methods described above in the so-called cross-domain language model adaptation paradigm, where we adapt a model trained on one domain (which we call the 228 background domain) to a different domain (adaptation domain), for which only a small amount of training data is available. The data sets we used in our experiments came from five distinct sources of text. A 36-million-word Nikkei Newspaper corpus was used as the background domain, on which the word trigram model was trained. We used four adaptation domains: Yomiuri (newspaper corpus), TuneUp (balanced corpus containing newspapers and other sources of text), Encarta (encyclopedia) and Shincho (collection of novels). All corpora have been pre-word-segmented using a lexicon containing 167,107 entries. For each of the four domains, we created training data consisting of 72K sentences (0.9M~1.7M words) and test data of 5K sentences (65K~120K words) from each adaptation domain. The first 800 and 8,000 sentences of each adaptation training data were also used to show how different sizes of training data affected the performances of various adaptation methods. Another 5K-sentence subset was used as held-out data for each domain. We created the training samples for discriminative learning as follows. For each phonetic string A in adaptation training data, we produced a lattice of candidate word strings W using the baseline system described in (Gao et al. 2002), which uses a word trigram model trained via MLE on the Nikkei Newspaper corpus. For efficiency, we kept only the best 20 hypotheses in its candidate conversion set GEN(A) for each training sample for discriminative training. The oracle best hypothesis, which gives the minimum number of errors, was used as the reference transcript of A. We used unigrams and bigrams that occurred more than once in the training set as features in the linear model of Equation (2). The total number of candidate features we used was around 860,000. 5.2 Main Results Table 1 summarizes the results of various model training (adaptation) methods in terms of CER (%) and CER reduction (in parentheses) over comparing models. In the first column, the numbers in parentheses next to the domain name indicates the number of training sentences used for adaptation. Baseline, with results shown in Column 3, is the word trigram model. As expected, the CER correlates very well the similarity between the background domain and the adaptation domain, where domain similarity is measured in terms of cross entropy (Yuan et al. 2005) as shown in Column 2. MAP (maximum a posteriori), with results shown in Column 4, is a traditional LM adaptation method where the parameters of the background model are adjusted in such a way that maximizes the likelihood of the adaptation data. 
Our implementation takes the form of linear interpolation as described in Bacchiani et al. (2004): P(wi|h) = λPb(wi|h) + (1-λ)Pa(wi|h), where Pb is the probability of the background model, Pa is the probability trained on adaptation data using MLE and the history h corresponds to two preceding words (i.e. Pb and Pa are trigram probabilities). λ is the interpolation weight optimized on held-out data. Boosting, with results shown in Column 5, is the algorithm described in Figure 1. In our implementation, we use the shrinkage method suggested by Schapire and Singer (1999) and Collins and Koo (2005). At each iteration, we used the following update for the kth feature Z C Z C k k k ε ε δ + + = + _ log 2 1 (16) where Ck+ is a value increasing exponentially with the sum of margins of (WR, W) pairs over the set where fk is seen in WR but not in W; Ck- is the value related to the sum of margins over the set where fk is seen in W but not in WR. ε is a smoothing factor (whose value is optimized on held-out data) and Z is a normalization constant (whose value is the ExpLoss(.) of training data according to the current model). We see that εZ in Equation (16) plays the same role as ν in Equation (9). BLasso, with results shown in Column 6, is the algorithm described in Figure 2. We find that the performance of BLasso is not very sensitive to the selection of the step size ε across training sets of different domains and sizes. Although small ε is preferred in theory as discussed earlier, it would lead to a very slow convergence. Therefore, in our experiments, we always use a large step (ε = 0.5) and use the so-called early stopping strategy, i.e., the number of iterations before stopping is optimized on held-out data. In the task of LM for IME, there are millions of features and training samples, forming an extremely large and sparse matrix. We therefore applied the techniques described in Collins and Koo (2005) to speed up the training procedure. The resulting algorithms run in around 15 and 30 minutes respectively for Boosting and BLasso to converge on an XEON™ MP 1.90GHz machine when training on an 8K-sentnece training set. 229 The results in Table 1 give rise to several observations. First of all, both discriminative training methods (i.e., Boosting and BLasso) outperform MAP substantially. The improvement margins are larger when the background and adaptation domains are more similar. The phenomenon is attributed to the underlying difference between the two adaptation methods: MAP aims to improve the likelihood of a distribution, so if the adaptation domain is very similar to the background domain, the difference between the two underlying distributions is so small that MAP cannot adjust the model effectively. Discriminative methods, on the other hand, do not have this limitation for they aim to reduce errors directly. Secondly, BLasso outperforms Boosting significantly (p-value < 0.01) on all test sets. The improvement margins vary with the training sets of different domains and sizes. In general, in cases where the adaptation domain is less similar to the background domain and larger training set is used, the improvement of BLasso is more visible. Note that the CER results of FSLR are not included in Table 1 because it achieves very similar results to the boosting algorithm with shrinkage if the controlling parameters of both algorithms are optimized via cross-validation. We shall discuss their difference in the next section. 
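For completeness, the two model-update formulas used in this section are simple enough to state directly in code (a sketch; the variable names are ours): the linear interpolation used for MAP adaptation and the smoothed per-feature boosting update of Equation (16).

import math

def map_interpolate(p_background, p_adapt, lam):
    # MAP adaptation via linear interpolation (Bacchiani et al. 2004):
    # P(w|h) = lam * Pb(w|h) + (1 - lam) * Pa(w|h).
    return lam * p_background + (1.0 - lam) * p_adapt

def boosting_update(c_plus, c_minus, z, eps):
    # Equation (16): delta_k = 0.5 * log((C_k+ + eps*Z) / (C_k- + eps*Z));
    # eps*Z is the smoothing term playing the role of nu in Equation (9).
    return 0.5 * math.log((c_plus + eps * z) / (c_minus + eps * z))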
5.3 Dicussion This section investigates what components of BLasso bring the improvement over Boosting. Comparing the algorithms in Figures 1 and 2, we notice three differences between BLasso and Boosting: (i) the use of backward steps in BLasso; (ii) BLasso uses the grid search (fixed step size) for feature selection in Equation (12) while Boosting uses the continuous search (optimal step size) in Equation (7); and (iii) BLasso uses a fixed step size for feature update in Equation (13) while Boosting uses an optimal step size in Equation (8). We then investigate these differences in turn. To study the impact of backward steps, we compared BLasso with the boosting algorithm with a fixed step search and a fixed step update, henceforth referred to as F-Boosting. F-Boosting was implemented as Figure 2, by setting a large value to θ in Equation (15), i.e., θ = 103, to prohibit backward steps. We find that although the training error curves of BLasso and F-Boosting are almost identical, the T(λ) curves grow apart with iterations, as shown in Figure 3. The results show that with backward steps, BLasso achieves a better approximation to the true lasso solution: It leads to a model with similar training errors but less complex (in terms of L1 penalty). In our experiments we find that the benefit of using backward steps is only visible in later iterations when BLasso’s backward steps kick in. A typical example is shown in Figure 4. The early steps fit to highly effective features and in these steps BLasso and F-Boosting agree. For later steps, fine-tuning of features is required. BLasso with backward steps provides a better mechanism than F-Boosting to revise the previously chosen features to accommodate this fine level of tuning. Consequently we observe the superior performance of BLasso at later stages as shown in our experiments. As well-known in linear regression models, when there are many strongly correlated features, model parameters can be poorly estimated and exhibit high variance. By imposing a model size constraint, as in lasso, this phenomenon is alleviated. Therefore, we speculate that a better approximation to lasso, as BLasso with backward steps, would be superior in eliminating the negative effect of strongly correlated features in model estimation. To verify our speculation, we performed the following experiments. For each training set, in addition to word unigram and bigram features, we introduced a new type of features called headword bigram. As described in Gao et al. (2002), headwords are defined as the content words of the sentence. Therefore, headword bigrams constitute a special type of skipping bigrams which can capture dependency between two words that may not be adjacent. In reality, a large portion of headword bigrams are identical to word bigrams, as two headwords can occur next to each other in text. In the adaptation test data we used, we find that headword bigram features are for the most part either completely overlapping with the word bigram features (i.e., all instances of headword bigrams also count as word bigrams) or not overlapping at all (i.e., a headword bigram feature is not observed as a word bigram feature) – less than 20% of headword bigram features displayed a variable degree of overlap with word bigram features. In our data, the rate of completely overlapping features is 25% to 47% depending on the adaptation domain. From this, we can say that the headword bigram features show moderate to high degree of correlation with the word bigram features. 
We then used BLasso and F-Boosting to train the linear language models including both word bigram and headword bigram features. We find that although the CER reduction by adding 230 headword features is overall very small, the difference between the two versions of BLasso is more visible in all four test sets. Comparing Figures 5 – 8 with Figure 4, it can be seen that BLasso with backward steps outperforms the one without backward steps in much earlier stages of training with a larger margin. For example, on Encarta data sets, BLasso outperforms F-Boosting after around 18,000 iterations with headword features (Figure 7), as opposed to 25,000 iterations without headword features (Figure 4). The results seem to corroborate our speculation that BLasso is more robust in the presence of highly correlated features. To investigate the impact of using the grid search (fixed step size) versus the continuous search (optimal step size) for feature selection, we compared F-Boosting with FSLR since they differs only in their search methods for feature selection. As shown in Figures 5 to 8, although FSLR is robust in that its test errors do not increase after many iterations, F-Boosting can reach a much lower error rate on three out of four test sets. Therefore, in the task of LM for IME where CER is the most important metric, the grid search for feature selection is more desirable. To investigate the impact of using a fixed versus an optimal step size for feature update, we compared FSLR with Boosting. Although both algorithms achieve very similar CER results, the performance of FSLR is much less sensitive to the selected fixed step size. For example, we can select any value from 0.2 to 0.8, and in most settings FSLR achieves the very similar lowest CER after 20,000 iterations, and will stay there for many iterations. In contrast, in Boosting, the optimal value of ε in Equation (16) varies with the sizes and domains of training data, and has to be tuned carefully. We thus conclude that in our task FSLR is more robust against different training settings and a fixed step size for feature update is more preferred. 6 Conclusion This paper investigates two approximation lasso methods for LM applied to a realistic task with a very large number of features with sparse feature space. Our results on Japanese text input are promising. BLasso outperforms the boosting algorithm significantly in terms of CER reduction on all experimental settings. We have shown that this superior performance is a consequence of BLasso’s backward step and its fixed step size in both feature selection and feature weight update. Our experimental results in Section 5 show that the use of backward step is vital for model fine-tuning after major features are selected and for coping with strongly correlated features; the fixed step size of BLasso is responsible for the improvement of CER and the robustness of the results. Experiments on other data sets and theoretical analysis are needed to further support our findings in this paper. References Bacchiani, M., Roark, B., and Saraclar, M. 2004. Language model adaptation with MAP estimation and the perceptron algorithm. In HLT-NAACL 2004. 21-24. Collins, Michael and Terry Koo 2005. Discriminative reranking for natural language parsing. Computational Linguistics 31(1): 25-69. Duda, Richard O, Hart, Peter E. and Stork, David G. 2001. Pattern classification. John Wiley & Sons, Inc. Donoho, D., I. Johnstone, G. Kerkyachairan, and D. Picard. 1995. Wavelet shrinkage; asymptopia? 
(with discussion), J. Royal. Statist. Soc. 57: 201-337. Efron, B., T. Hastie, I. Johnstone, and R. Tibshirani. 2004. Least angle regression. Ann. Statist. 32, 407-499. Freund, Y, R. Iyer, R. E. Schapire, and Y. Singer. 1998. An efficient boosting algorithm for combining preferences. In ICML’98. Hastie, T., R. Tibshirani and J. Friedman. 2001. The elements of statistical learning. Springer-Verlag, New York. Gao, Jianfeng, Hisami Suzuki and Yang Wen. 2002. Exploiting headword dependency and predictive clustering for language modeling. In EMNLP 2002. Gao. J., Yu, H., Yuan, W., and Xu, P. 2005. Minimum sample risk methods for language modeling. In HLT/EMNLP 2005. Osborne, M.R. and Presnell, B. and Turlach B.A. 2000a. A new approach to variable selection in least squares problems. Journal of Numerical Analysis, 20(3). Osborne, M.R. and Presnell, B. and Turlach B.A. 2000b. On the lasso and its dual. Journal of Computational and Graphical Statistics, 9(2): 319-337. Roark, Brian, Murat Saraclar and Michael Collins. 2004. Corrective language modeling for large vocabulary ASR with the perceptron algorithm. In ICASSP 2004. Schapire, Robert E. and Yoram Singer. 1999. Improved boosting algorithms using confidence-rated predictions. Machine Learning, 37(3): 297-336. Suzuki, Hisami and Jianfeng Gao. 2005. A comparative study on language model adaptation using new evaluation metrics. In HLT/EMNLP 2005. Tibshirani, R. 1996. Regression shrinkage and selection via the lasso. J. R. Statist. Soc. B, 58(1): 267-288. Yuan, W., J. Gao and H. Suzuki. 2005. An Empirical Study on Language Model Adaptation Using a Metric of Domain Similarity. In IJCNLP 05. Zhao, P. and B. Yu. 2004. Boosted lasso. Tech Report, Statistics Department, U. C. Berkeley. Zhu, J. S. Rosset, T. Hastie, and R. Tibshirani. 2003. 1-norm support vector machines. NIPS 16. MIT Press. 231 Table 1. CER (%) and CER reduction (%) (Y=Yomiuri; T=TuneUp; E=Encarta; S=-Shincho) Domain Entropy vs.Nikkei Baseline MAP (over Baseline) Boosting (over MAP) BLasso (over MAP/Boosting) Y (800) 7.69 3.70 3.70 (+0.00) 3.13 (+15.41) 3.01 (+18.65/+3.83) Y (8K) 7.69 3.70 3.69 (+0.27) 2.88 (+21.95) 2.85 (+22.76/+1.04) Y (72K) 7.69 3.70 3.69 (+0.27) 2.78 (+24.66) 2.73 (+26.02/+1.80) T (800) 7.95 5.81 5.81 (+0.00) 5.69 (+2.07) 5.63 (+3.10/+1.05) T (8K) 7.95 5.81 5.70 (+1.89) 5.48 (+5.48) 5.33 (+6.49/+2.74) T (72K) 7.95 5.81 5.47 (+5.85) 5.33 (+2.56) 5.05 (+7.68/+5.25) E (800) 9.30 10.24 9.60 (+6.25) 9.82 (-2.29) 9.18 (+4.38/+6.52) E (8K) 9.30 10.24 8.64 (+15.63) 8.54 (+1.16) 8.04 (+6.94/+5.85) E (72K) 9.30 10.24 7.98 (+22.07) 7.53 (+5.64) 7.20 (+9.77/+4.38) S (800) 9.40 12.18 11.86 (+2.63) 11.91 (-0.42) 11.79 (+0.59/+1.01) S (8K) 9.40 12.18 11.15 (+8.46) 11.09 (+0.54) 10.73 (+3.77/+3.25) S (72K) 9.40 12.18 10.76 (+11.66) 10.25 (+4.74) 9.64 (+10.41/+5.95) Figure 3. L1 curves: models are trained on the E(8K) dataset. Figure 4. Test error curves: models are trained on the E(8K) dataset. Figure 5. Test error curves: models are trained on the Y(8K) dataset, including headword bigram features. Figure 6. Test error curves: models are trained on the T(8K) dataset, including headword bigram features. Figure 7. Test error curves: models are trained on the E(8K) dataset, including headword bigram features. Figure 8. Test error curves: models are trained on the S(8K) dataset, including headword bigram features. 232
2006
29
Proceedings of the 21st International Conference on Computational Linguistics and 44th Annual Meeting of the ACL, pages 17–24, Sydney, July 2006. c⃝2006 Association for Computational Linguistics Unsupervised Topic Modelling for Multi-Party Spoken Discourse Matthew Purver CSLI Stanford University Stanford, CA 94305, USA [email protected] Konrad P. K¨ording Dept. of Brain & Cognitive Sciences Massachusetts Institute of Technology Cambridge, MA 02139, USA [email protected] Thomas L. Griffiths Dept. of Cognitive & Linguistic Sciences Brown University Providence, RI 02912, USA tom [email protected] Joshua B. Tenenbaum Dept. of Brain & Cognitive Sciences Massachusetts Institute of Technology Cambridge, MA 02139, USA [email protected] Abstract We present a method for unsupervised topic modelling which adapts methods used in document classification (Blei et al., 2003; Griffiths and Steyvers, 2004) to unsegmented multi-party discourse transcripts. We show how Bayesian inference in this generative model can be used to simultaneously address the problems of topic segmentation and topic identification: automatically segmenting multi-party meetings into topically coherent segments with performance which compares well with previous unsupervised segmentation-only methods (Galley et al., 2003) while simultaneously extracting topics which rate highly when assessed for coherence by human judges. We also show that this method appears robust in the face of off-topic dialogue and speech recognition errors. 1 Introduction Topic segmentation – division of a text or discourse into topically coherent segments – and topic identification – classification of those segments by subject matter – are joint problems. Both are necessary steps in automatic indexing, retrieval and summarization from large datasets, whether spoken or written. Both have received significant attention in the past (see Section 2), but most approaches have been targeted at either text or monologue, and most address only one of the two issues (usually for the very good reason that the dataset itself provides the other, for example by the explicit separation of individual documents or news stories in a collection). Spoken multi-party meetings pose a difficult problem: firstly, neither the segmentation nor the discussed topics can be taken as given; secondly, the discourse is by nature less tidily structured and less restricted in domain; and thirdly, speech recognition results have unavoidably high levels of error due to the noisy multispeaker environment. In this paper we present a method for unsupervised topic modelling which allows us to approach both problems simultaneously, inferring a set of topics while providing a segmentation into topically coherent segments. We show that this model can address these problems over multi-party discourse transcripts, providing good segmentation performance on a corpus of meetings (comparable to the best previous unsupervised method that we are aware of (Galley et al., 2003)), while also inferring a set of topics rated as semantically coherent by human judges. We then show that its segmentation performance appears relatively robust to speech recognition errors, giving us confidence that it can be successfully applied in a real speech-processing system. The plan of the paper is as follows. Section 2 below briefly discusses previous approaches to the identification and segmentation problems. Section 3 then describes the model we use here. 
Section 4 then details our experiments and results, and conclusions are drawn in Section 5. 2 Background and Related Work In this paper we are interested in spoken discourse, and in particular multi-party human-human meetings. Our overall aim is to produce information which can be used to summarize, browse and/or retrieve the information contained in meetings. User studies (Lisowska et al., 2004; Banerjee et al., 2005) have shown that topic information is important here: people are likely to want to know 17 which topics were discussed in a particular meeting, as well as have access to the discussion on particular topics in which they are interested. Of course, this requires both identification of the topics discussed, and segmentation into the periods of topically related discussion. Work on automatic topic segmentation of text and monologue has been prolific, with a variety of approaches used. (Hearst, 1994) uses a measure of lexical cohesion between adjoining paragraphs in text; (Reynar, 1999) and (Beeferman et al., 1999) combine a variety of features such as statistical language modelling, cue phrases, discourse information and the presence of pronouns or named entities to segment broadcast news; (Maskey and Hirschberg, 2003) use entirely non-lexical features. Recent advances have used generative models, allowing lexical models of the topics themselves to be built while segmenting (Imai et al., 1997; Barzilay and Lee, 2004), and we take a similar approach here, although with some important differences detailed below. Turning to multi-party discourse and meetings, however, most previous work on automatic segmentation (Reiter and Rigoll, 2004; Dielmann and Renals, 2004; Banerjee and Rudnicky, 2004), treats segments as representing meeting phases or events which characterize the type or style of discourse taking place (presentation, briefing, discussion etc.), rather than the topic or subject matter. While we expect some correlation between these two types of segmentation, they are clearly different problems. However, one comparable study is described in (Galley et al., 2003). Here, a lexical cohesion approach was used to develop an essentially unsupervised segmentation tool (LCSeg) which was applied to both text and meeting transcripts, giving performance better than that achieved by applying text/monologue-based techniques (see Section 4 below), and we take this as our benchmark for the segmentation problem. Note that they improved their accuracy by combining the unsupervised output with discourse features in a supervised classifier – while we do not attempt a similar comparison here, we expect a similar technique would yield similar segmentation improvements. In contrast, we take a generative approach, modelling the text as being generated by a sequence of mixtures of underlying topics. The approach is unsupervised, allowing both segmentation and topic extraction from unlabelled data. 3 Learning topics and segments We specify our model to address the problem of topic segmentation: attempting to break the discourse into discrete segments in which a particular set of topics are discussed. Assume we have a corpus of U utterances, ordered in sequence. The uth utterance consists of Nu words, chosen from a vocabulary of size W. The set of words associated with the uth utterance are denoted wu, and indexed as wu,i. The entire corpus is represented by w. 
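In implementation terms this notation amounts to a very simple data layout; the concrete names and values below are illustrative only and not taken from the paper.

# corpus[u][i] holds the vocabulary index of word w_{u,i}; utterances are
# kept in their original temporal order, and W is the vocabulary size.
corpus = [
    [3, 17, 4],        # utterance u = 0 (N_0 = 3 word tokens)
    [3, 8, 8, 21],     # utterance u = 1 (N_1 = 4 word tokens)
]
W = 25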
Following previous work on probabilistic topic models (Hofmann, 1999; Blei et al., 2003; Griffiths and Steyvers, 2004), we model each utterance as being generated from a particular distribution over topics, where each topic is a probability distribution over words. The utterances are ordered sequentially, and we assume a Markov structure on the distribution over topics: with high probability, the distribution for utterance u is the same as for utterance u−1; otherwise, we sample a new distribution over topics. This pattern of dependency is produced by associating a binary switching variable with each utterance, indicating whether its topic is the same as that of the previous utterance. The joint states of all the switching variables define segments that should be semantically coherent, because their words are generated by the same topic vector. We will first describe this generative model in more detail, and then discuss inference in this model. 3.1 A hierarchical Bayesian model We are interested in where changes occur in the set of topics discussed in these utterances. To this end, let cu indicate whether a change in the distribution over topics occurs at the uth utterance and let P(cu = 1) = π (where π thus defines the expected number of segments). The distribution over topics associated with the uth utterance will be denoted θ(u), and is a multinomial distribution over T topics, with the probability of topic t being θ(u) t . If cu = 0, then θ(u) = θ(u−1). Otherwise, θ(u) is drawn from a symmetric Dirichlet distribution with parameter α. The distribution is thus: P(θ(u)|cu, θ(u−1)) = ( δ(θ(u), θ(u−1)) cu = 0 Γ(T α) Γ(α)T QT t=1(θ(u) t )α−1 cu = 1 18 Figure 1: Graphical models indicating the dependencies among variables in (a) the topic segmentation model and (b) the hidden Markov model used as a comparison. where δ(·, ·) is the Dirac delta function, and Γ(·) is the generalized factorial function. This distribution is not well-defined when u = 1, so we set c1 = 1 and draw θ(1) from a symmetric Dirichlet(α) distribution accordingly. As in (Hofmann, 1999; Blei et al., 2003; Griffiths and Steyvers, 2004), each topic Tj is a multinomial distribution φ(j) over words, and the probability of the word w under that topic is φ(j) w . The uth utterance is generated by sampling a topic assignment zu,i for each word i in that utterance with P(zu,i = t|θ(u)) = θ(u) t , and then sampling a word wu,i from φ(j), with P(wu,i = w|zu,i = j, φ(j)) = φ(j) w . If we assume that π is generated from a symmetric Beta(γ) distribution, and each φ(j) is generated from a symmetric Dirichlet(β) distribution, we obtain a joint distribution over all of these variables with the dependency structure shown in Figure 1A. 3.2 Inference Assessing the posterior probability distribution over topic changes c given a corpus w can be simplified by integrating out the parameters θ, φ, and π. According to Bayes rule we have: P(z, c|w) = P(w|z)P(z|c)P(c) P z,c P(w|z)P(z|c)P(c) (1) Evaluating P(c) requires integrating over π. Specifically, we have: P(c) = R 1 0 P(c|π)P(π) dπ = Γ(2γ) Γ(γ)2 Γ(n1+γ)Γ(n0+γ) Γ(N+2γ) (2) where n1 is the number of utterances for which cu = 1, and n0 is the number of utterances for which cu = 0. 
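In an implementation, the ratio of Gamma functions in Equation (2) is best evaluated in log space to avoid overflow; a minimal sketch using the same counts is:

from math import lgamma

def log_p_c(c, gamma):
    # Equation (2) in log space; c is the list of binary switching variables c_u,
    # n1 = #{u : c_u = 1}, n0 = #{u : c_u = 0}, and N = n1 + n0 = len(c).
    n1 = sum(c)
    n0 = len(c) - n1
    return (lgamma(2 * gamma) - 2 * lgamma(gamma)
            + lgamma(n1 + gamma) + lgamma(n0 + gamma) - lgamma(len(c) + 2 * gamma))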
Computing P(w|z) proceeds along similar lines: P(w|z) = R ∆T W P(w|z, φ)P(φ) dφ = “ Γ(W β) Γ(β)W ”T QT t=1 QW w=1 Γ(n(t) w +β) Γ(n(t) · +W β) (3) where ∆T W is the T-dimensional cross-product of the multinomial simplex on W points, n(t) w is the number of times word w is assigned to topic t in z, and n(t) · is the total number of words assigned to topic t in z. To evaluate P(z|c) we have: P(z|c) = Z ∆U T P(z|θ)P(θ|c) dθ (4) The fact that the cu variables effectively divide the sequence of utterances into segments that use the same distribution over topics simplifies solving the integral and we obtain: P(z|c) = „Γ(Tα) Γ(α)T «n1 Y u∈U1 QT t=1 Γ(n(Su) t + α) Γ(n(Su) · + Tα) . (5) 19 P(cu|c−u, z, w) ∝ 8 > > > < > > > : QT t=1 Γ(n (S0 u) t +α) Γ(n (S0u) · +T α) n0+γ N+2γ cu = 0 Γ(T α) Γ(α)T QT t=1 Γ(n (S1 u−1) t +α) Γ(n (S1 u−1) · +T α) QT t=1 Γ(n (S1 u) t +α) Γ(n (S1u) · +T α) n1+γ N+2γ cu = 1 (7) where U1 = {u|cu = 1}, U0 = {u|cu = 0}, Su denotes the set of utterances that share the same topic distribution (i.e. belong to the same segment) as u, and n(Su) t is the number of times topic t appears in the segment Su (i.e. in the values of zu′ corresponding for u′ ∈Su). Equations 2, 3, and 5 allow us to evaluate the numerator of the expression in Equation 1. However, computing the denominator is intractable. Consequently, we sample from the posterior distribution P(z, c|w) using Markov chain Monte Carlo (MCMC) (Gilks et al., 1996). We use Gibbs sampling, drawing the topic assignment for each word, zu,i, conditioned on all other topic assignments, z−(u,i), all topic change indicators, c, and all words, w; and then drawing the topic change indicator for each utterance, cu, conditioned on all other topic change indicators, c−u, all topic assignments z, and all words w. The conditional probabilities we need can be derived directly from Equations 2, 3, and 5. The conditional probability of zu,i indicates the probability that wu,i should be assigned to a particular topic, given other assignments, the current segmentation, and the words in the utterances. Cancelling constant terms, we obtain: P(zu,i|z−(u,i), c, w) = n(t) wu,i + β n(t) · + Wβ n(Su) zu,i + α n(Su) · + Tα . (6) where all counts (i.e. the n terms) exclude zu,i. The conditional probability of cu indicates the probability that a new segment should start at u. In sampling cu from this distribution, we are splitting or merging segments. Similarly we obtain the expression in (7), where S1 u is Su for the segmentation when cu = 1, S0 u is Su for the segmentation when cu = 0, and all counts (e.g. n1) exclude cu. For this paper, we fixed α, β and γ at 0.01. Our algorithm is related to (Barzilay and Lee, 2004)’s approach to text segmentation, which uses a hidden Markov model (HMM) to model segmentation and topic inference for text using a bigram representation in restricted domains. Due to the adaptive combination of different topics our algorithm can be expected to generalize well to larger domains. It also relates to earlier work by (Blei and Moreno, 2001) that uses a topic representation but also does not allow adaptively combining different topics. 
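As an illustration of the collapsed Gibbs sampler, the conditional of Equation (6) for a single word's topic assignment can be computed from running count tables as below. The array names and shapes (topic-by-word counts n_tw, per-topic totals n_t, segment-by-topic counts n_st, per-segment totals n_s) are assumptions of this sketch, all counts are taken to already exclude the word being resampled, and the analogous resampling of c_u via Equation (7) is omitted.

import numpy as np

def sample_z(word, seg, n_tw, n_t, n_st, n_s, T, W, alpha, beta, rng):
    # Equation (6): P(z_{u,i} = t | ...) is proportional to
    #   (n_tw[t, word] + beta) / (n_t[t] + W * beta)
    # * (n_st[seg, t] + alpha) / (n_s[seg] + T * alpha).
    p = ((n_tw[:, word] + beta) / (n_t + W * beta)
         * (n_st[seg] + alpha) / (n_s[seg] + T * alpha))
    return rng.choice(T, p=p / p.sum())    # rng: e.g. np.random.default_rng()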
However, while HMM approaches allow a segmentation of the data by topic, they do not allow adaptively combining different topics into segments: while a new segment can be modelled as being identical to a topic that has already been observed, it can not be modelled as a combination of the previously observed topics.1 Note that while (Imai et al., 1997)’s HMM approach allows topic mixtures, it requires supervision with hand-labelled topics. In our experiments we therefore compared our results with those obtained by a similar but simpler 10 state HMM, using a similar Gibbs sampling algorithm. The key difference between the two models is shown in Figure 1. In the HMM, all variation in the content of utterances is modelled at a single level, with each segment having a distribution over words corresponding to a single state. The hierarchical structure of our topic segmentation model allows variation in content to be expressed at two levels, with each segment being produced from a linear combination of the distributions associated with each topic. Consequently, our model can often capture the content of a sequence of words by postulating a single segment with a novel distribution over topics, while the HMM has to frequently switch between states. 4 Experiments 4.1 Experiment 0: Simulated data To analyze the properties of this algorithm we first applied it to a simulated dataset: a sequence of 10,000 words chosen from a vocabulary of 25. Each segment of 100 successive words had a con1Say that a particular corpus leads us to infer topics corresponding to “speech recognition” and “discourse understanding”. A single discussion concerning speech recognition for discourse understanding could be modelled by our algorithm as a single segment with a suitable weighted mixture of the two topics; a HMM approach would tend to split it into multiple segments (or require a specific topic for this segment). 20 Figure 2: Simulated data: A) inferred topics; B) segmentation probabilities; C) HMM version. stant topic distribution (with distributions for different segments drawn from a Dirichlet distribution with β = 0.1), and each subsequence of 10 words was taken to be one utterance. The topicword assignments were chosen such that when the vocabulary is aligned in a 5×5 grid the topics were binary bars. The inference algorithm was then run for 200,000 iterations, with samples collected after every 1,000 iterations to minimize autocorrelation. Figure 2 shows the inferred topic-word distributions and segment boundaries, which correspond well with those used to generate the data. 4.2 Experiment 1: The ICSI corpus We applied the algorithm to the ICSI meeting corpus transcripts (Janin et al., 2003), consisting of manual transcriptions of 75 meetings. For evaluation, we use (Galley et al., 2003)’s set of human-annotated segmentations, which covers a sub-portion of 25 meetings and takes a relatively coarse-grained approach to topic with an average of 5-6 topic segments per meeting. Note that these segmentations were not used in training the model: topic inference and segmentation was unsupervised, with the human annotations used only to provide some knowledge of the overall segmentation density and to evaluate performance. The transcripts from all 75 meetings were linearized by utterance start time and merged into a single dataset that contained 607,263 word tokens. 
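As an aside, synthetic data in the style of Experiment 0 (Section 4.1) is easy to generate and makes a convenient sanity check for a reimplementation before moving to the meeting transcripts. The sketch below follows the description above (5x5 "bar" topics over a 25-word vocabulary, 100-word segments each with its own Dirichlet-drawn topic mixture, 10-word utterances); the remaining parameter choices and the random seed are assumptions.

import numpy as np

def make_bar_topics():
    # 10 topics over a 25-word vocabulary arranged on a 5x5 grid:
    # topics 0-4 put uniform mass on one row, topics 5-9 on one column.
    grid = np.arange(25).reshape(5, 5)
    bars = [grid[r, :] for r in range(5)] + [grid[:, c] for c in range(5)]
    topics = np.zeros((10, 25))
    for t, words in enumerate(bars):
        topics[t, words] = 1.0 / 5
    return topics

def simulate(n_segments=100, seg_len=100, utt_len=10, beta=0.1, seed=0):
    rng = np.random.default_rng(seed)
    topics = make_bar_topics()
    stream = []
    for _ in range(n_segments):
        theta = rng.dirichlet(beta * np.ones(10))      # per-segment topic mixture
        for _ in range(seg_len):
            t = rng.choice(10, p=theta)                # topic for this word
            stream.append(int(rng.choice(25, p=topics[t])))
    # cut the 10,000-word stream into utterances of utt_len words each
    return [stream[i:i + utt_len] for i in range(0, len(stream), utt_len)]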
We sampled for 200,000 iterations of MCMC, taking samples every 1,000 iterations, and then averaged the sampled cu variables over the last 100 samples to derive an estimate for the posterior probability of a segmentation boundary at each utterance start. This probability was then thresholded to derive a final segmentation which was compared to the manual annotations. More precisely, we apply a small amount of smoothing (Gaussian kernel convolution) and take the midpoints of any areas above a set threshold to be the segment boundaries. Varying this threshold allows us to segment the discourse in a more or less finegrained way (and we anticipate that this could be user-settable in a meeting browsing application). If the correct number of segments is known for a meeting, this can be used directly to determine the optimum threshold, increasing performance; if not, we must set it at a level which corresponds to the desired general level of granularity. For each set of annotations, we therefore performed two sets of segmentations: one in which the threshold was set for each meeting to give the known goldstandard number of segments, and one in which the threshold was set on a separate development set to give the overall corpus-wide average number of segments, and held constant for all test meetings.2 This also allows us to compare our results with those of (Galley et al., 2003), who apply a similar threshold to their lexical cohesion function and give corresponding results produced with known/unknown numbers of segments. Segmentation We assessed segmentation performance using the Pk and WindowDiff (WD) error measures proposed by (Beeferman et al., 1999) and (Pevzner and Hearst, 2002) respectively; both intuitively provide a measure of the probability that two points drawn from the meeting will be incorrectly separated by a hypothesized segment boundary – thus, lower Pk and WD figures indicate better agreement with the human-annotated results.3 For the numbers of segments we are dealing with, a baseline of segmenting the discourse into equal-length segments gives both Pk and WD about 50%. In order to investigate the effect of the number of underlying topics T, we tested models using 2, 5, 10 and 20 topics. We then compared performance with (Galley et al., 2003)’s LCSeg tool, and with a 10-state HMM model as described above. Results are shown in Table 1, averaged over the 25 test meetings. Results show that our model significantly outperforms the HMM equivalent – because the HMM cannot combine different topics, it places a lot of segmentation boundaries, resulting in inferior performance. Using stemming and a bigram 2The development set was formed from the other meetings in the same ICSI subject areas as the annotated test meetings. 3WD takes into account the likely number of incorrectly separating hypothesized boundaries; Pk only a binary correct/incorrect classification. 21 Figure 3: Results from the ICSI corpus: A) the words most indicative for each topic; B) Probability of a segment boundary, compared with human segmentation, for an arbitrary subset of the data; C) Receiveroperator characteristic (ROC) curves for predicting human segmentation, and conditional probabilities of placing a boundary at an offset from a human boundary; D) subjective topic coherence ratings. Number of topics T Model 2 5 10 20 HMM LCSeg Pk .284 .297 .329 .290 .375 .319 known unknown Model Pk WD Pk WD T = 10 .289 .329 .329 .353 LCSeg .264 .294 .319 .359 Table 1: Results on the ICSI meeting corpus. 
representation, however, might improve its performance (Barzilay and Lee, 2004), although similar benefits might equally apply to our model. It also performs comparably to (Galley et al., 2003)’s unsupervised performance (exceeding it for some settings of T). It does not perform as well as their hybrid supervised system, which combined LCSeg with supervised learning over discourse features (Pk = .23); but we expect that a similar approach would be possible here, combining our segmentation probabilities with other discourse-based features in a supervised way for improved performance. Interestingly, segmentation quality, at least at this relatively coarse-grained level, seems hardly affected by the overall number of topics T. Figure 3B shows an example for one meeting of how the inferred topic segmentation probabilities at each utterance compare with the gold-standard segment boundaries. Figure 3C illustrates the performance difference between our model and the HMM equivalent at an example segment boundary: for this example, the HMM model gives almost no discrimination. Identification Figure 3A shows the most indicative words for a subset of the topics inferred at the last iteration. Encouragingly, most topics seem intuitively to reflect the subjects we know were discussed in the ICSI meetings – the majority of them (67 meetings) are taken from the weekly meetings of 3 distinct research groups, where discussions centered around speech recognition techniques (topics 2, 5), meeting recording, annotation and hardware setup (topics 6, 3, 1, 8), robust language processing (topic 7). Others reflect general classes of words which are independent of subject matter (topic 4). To compare the quality of these inferred topics we performed an experiment in which 7 human observers rated (on a scale of 1 to 9) the semantic coherence of 50 lists of 10 words each. Of these lists, 40 contained the most indicative words for each of the 10 topics from different models: the topic segmentation model; a topic model that had the same number of segments but with fixed evenly spread segmentation boundaries; an equiv22 alent with randomly placed segmentation boundaries; and the HMM. The other 10 lists contained random samples of 10 words from the other 40 lists. Results are shown in Figure 3D, with the topic segmentation model producing the most coherent topics and the HMM model and random words scoring less well. Interestingly, using an even distribution of boundaries but allowing the topic model to infer topics performs similarly well with even segmentation, but badly with random segmentation – topic quality is thus not very susceptible to the precise segmentation of the text, but does require some reasonable approximation (on ICSI data, an even segmentation gives a Pk of about 50%, while random segmentations can do much worse). However, note that the full topic segmentation model is able to identify meaningful segmentation boundaries at the same time as inferring topics. 4.3 Experiment 2: Dialogue robustness Meetings often include off-topic dialogue, in particular at the beginning and end, where informal chat and meta-dialogue are common. Galley et al. (2003) annotated these sections explicitly, together with the ICSI “digit-task” sections (participants read sequences of digits to provide data for speech recognition experiments), and removed them from their data, as did we in Experiment 1 above. 
While this seems reasonable for the purposes of investigating ideal algorithm performance, in real situations we will be faced with such off-topic dialogue, and would obviously prefer segmentation performance not to be badly affected (and ideally, enabling segmentation of the off-topic sections from the meeting proper). One might suspect that an unsupervised generative model such as ours might not be robust in the presence of numerous off-topic words, as spurious topics might be inferred and used in the mixture model throughout. In order to investigate this, we therefore also tested on the full dataset without removing these sections (806,026 word tokens in total), and added the section boundaries as further desired gold-standard segmentation boundaries. Table 2 shows the results: performance is not significantly affected, and again is very similar for both our model and LCSeg. 4.4 Experiment 3: Speech recognition The experiments so far have all used manual word transcriptions. Of course, in real meeting proknown unknown Experiment Model Pk WD Pk WD 2 T = 10 .296 .342 .325 .366 (off-topic data) LCSeg .307 .338 .322 .386 3 T = 10 .266 .306 .291 .331 (ASR data) LCSeg .289 .339 .378 .472 Table 2: Results for Experiments 2 & 3: robustness to off-topic and ASR data. cessing systems, we will have to deal with speech recognition (ASR) errors. We therefore also tested on 1-best ASR output provided by ICSI, and results are shown in Table 2. The “off-topic” and “digits” sections were removed in this test, so results are comparable with Experiment 1. Segmentation accuracy seems extremely robust; interestingly, LCSeg’s results are less robust (the drop in performance is higher), especially when the number of segments in a meeting is unknown. It is surprising to notice that the segmentation accuracy in this experiment was actually slightly higher than achieved in Experiment 1 (especially given that ASR word error rates were generally above 20%). This may simply be a smoothing effect: differences in vocabulary and its distribution can effectively change the prior towards sparsity instantiated in the Dirichlet distributions. 5 Summary and Future Work We have presented an unsupervised generative model which allows topic segmentation and identification from unlabelled data. Performance on the ICSI corpus of multi-party meetings is comparable with the previous unsupervised segmentation results, and the extracted topics are rated well by human judges. Segmentation accuracy is robust in the face of noise, both in the form of off-topic discussion and speech recognition hypotheses. Future Work Spoken discourse exhibits several features not derived from the words themselves but which seem intuitively useful for segmentation, e.g. speaker changes, speaker identities and roles, silences, overlaps, prosody and so on. As shown by (Galley et al., 2003), some of these features can be combined with lexical information to improve segmentation performance (although in a supervised manner), and (Maskey and Hirschberg, 2003) show some success in broadcast news segmentation using only these kinds of non-lexical features. We are currently investigating the addition of non-lexical features as observed outputs in 23 our unsupervised generative model. 
We are also investigating improvements into the lexical model as presented here, firstly via simple techniques such as word stemming and replacement of named entities by generic class tokens (Barzilay and Lee, 2004); but also via the use of multiple ASR hypotheses by incorporating word confusion networks into our model. We expect that this will allow improved segmentation and identification performance with ASR data. Acknowledgements This work was supported by the CALO project (DARPA grant NBCH-D-03-0010). We thank Elizabeth Shriberg and Andreas Stolcke for providing automatic speech recognition data for the ICSI corpus and for their helpful advice; John Niekrasz and Alex Gruenstein for help with the NOMOS corpus annotation tool; and Michel Galley for discussion of his approach and results. References Satanjeev Banerjee and Alex Rudnicky. 2004. Using simple speech-based features to detect the state of a meeting and the roles of the meeting participants. In Proceedings of the 8th International Conference on Spoken Language Processing. Satanjeev Banerjee, Carolyn Ros´e, and Alex Rudnicky. 2005. The necessity of a meeting recording and playback system, and the benefit of topic-level annotations to meeting browsing. In Proceedings of the 10th International Conference on Human-Computer Interaction. Regina Barzilay and Lillian Lee. 2004. Catching the drift: Probabilistic content models, with applications to generation and summarization. In HLT-NAACL 2004: Proceedings of the Main Conference, pages 113–120. Doug Beeferman, Adam Berger, and John D. Lafferty. 1999. Statistical models for text segmentation. Machine Learning, 34(1-3):177–210. David Blei and Pedro Moreno. 2001. Topic segmentation with an aspect hidden Markov model. In Proceedings of the 24th Annual International Conference on Research and Development in Information Retrieval, pages 343–348. David Blei, Andrew Ng, and Michael Jordan. 2003. Latent Dirichlet allocation. Journal of Machine Learning Research, 3:993–1022. Alfred Dielmann and Steve Renals. 2004. Dynamic Bayesian Networks for meeting structuring. In Proceedings of the IEEE International Conference on Acoustics, Speech, and Signal Processing (ICASSP). Michel Galley, Kathleen McKeown, Eric FoslerLussier, and Hongyan Jing. 2003. Discourse segmentation of multi-party conversation. In Proceedings of the 41st Annual Meeting of the Association for Computational Linguistics, pages 562–569. W.R. Gilks, S. Richardson, and D.J. Spiegelhalter, editors. 1996. Markov Chain Monte Carlo in Practice. Chapman and Hall, Suffolk. Thomas Griffiths and Mark Steyvers. 2004. Finding scientific topics. Proceedings of the National Academy of Science, 101:5228–5235. Marti A. Hearst. 1994. Multi-paragraph segmentation of expository text. In Proc. 32nd Meeting of the Association for Computational Linguistics, Los Cruces, NM, June. Thomas Hofmann. 1999. Probablistic latent semantic indexing. In Proceedings of the 22nd Annual SIGIR Conference on Research and Development in Information Retrieval, pages 50–57. Toru Imai, Richard Schwartz, Francis Kubala, and Long Nguyen. 1997. Improved topic discrimination of broadcast news using a model of multiple simultaneous topics. In Proceedings of the IEEE International Conference on Acoustics, Speech, and Signal Processing (ICASSP), pages 727–730. Adam Janin, Don Baron, Jane Edwards, Dan Ellis, David Gelbart, Nelson Morgan, Barbara Peskin, Thilo Pfau, Elizabeth Shriberg, Andreas Stolcke, and Chuck Wooters. 2003. The ICSI Meeting Corpus. 
In Proceedings of the IEEE International Conference on Acoustics, Speech, and Signal Processing (ICASSP), pages 364–367. Agnes Lisowska, Andrei Popescu-Belis, and Susan Armstrong. 2004. User query analysis for the specification and evaluation of a dialogue processing and retrieval system. In Proceedings of the 4th International Conference on Language Resources and Evaluation. Sameer R. Maskey and Julia Hirschberg. 2003. Automatic summarization of broadcast news using structural features. In Eurospeech 2003, Geneva, Switzerland. Lev Pevzner and Marti Hearst. 2002. A critique and improvement of an evaluation metric for text segmentation. Computational Linguistics, 28(1):19–36. Stephan Reiter and Gerhard Rigoll. 2004. Segmentation and classification of meeting events using multiple classifier fusion and dynamic programming. In Proceedings of the International Conference on Pattern Recognition. Jeffrey Reynar. 1999. Statistical models for topic segmentation. In Proceedings of the 37th Annual Meeting of the Association for Computational Linguistics, pages 357–364.
Proceedings of the 21st International Conference on Computational Linguistics and 44th Annual Meeting of the ACL, pages 233–240, Sydney, July 2006. c⃝2006 Association for Computational Linguistics Automated Japanese Essay Scoring System based on Articles Written by Experts Tsunenori Ishioka Research Division The National Center for University Entrance Examinations Tokyo 153-8501, Japan [email protected] Masayuki Kameda Software Research Center Ricoh Co., Ltd. Tokyo 112-0002, Japan [email protected] Abstract We have developed an automated Japanese essay scoring system called Jess. The system needs expert writings rather than expert raters to build the evaluation model. By detecting statistical outliers of predetermined aimed essay features compared with many professional writings for each prompt, our system can evaluate essays. The following three features are examined: (1) rhetoric — syntactic variety, or the use of various structures in the arrangement of phases, clauses, and sentences, (2) organization — characteristics associated with the orderly presentation of ideas, such as rhetorical features and linguistic cues, and (3) content — vocabulary related to the topic, such as relevant information and precise or specialized vocabulary. The final evaluation score is calculated by deducting from a perfect score assigned by a learning process using editorials and columns from the Mainichi Daily News newspaper. A diagnosis for the essay is also given. 1 Introduction When giving an essay test, the examiner expects a written essay to reflect the writing ability of the examinee. A variety of factors, however, can affect scores in a complicated manner. Cooper (1984) states that “various factors including the writer, topic, mode, time limit, examination situation, and rater can introduce error into the scoring of essays used to measure writing ability.” Most of these factors are present in giving tests, and the human rater, in particular, is a major error factor in the scoring of essays. In fact, many other factors influence the scoring of essay tests, as listed below, and much research has been devoted. Handwriting skill (handwriting quality, spelling) (Chase, 1979; Marshall and Powers, 1969) Serial effects of rating (the order in which essay answers are rated) (Hughes et al., 1983) Topic selection (how should essays written on different topics be rated?) (Meyer, 1939) Other error factors (writer’s gender, ethnic group, etc.) (Chase, 1986) In recent years, with the aim of removing these error factors and establishing fairness, considerable research has been performed on computerbased automated essay scoring (AES) systems (Burstein et al., 1998; Foltz et al., 1999; Page et al., 1997; Powers et al., 2000; Rudner and Liang, 2002). The AES systems provide the users with prompt feedback to improve their writings. Therefore, many practical AES systems have been used. Erater (Burstein et al., 1998), developed by the Educational Testing Service, began being used for operational scoring of the Analytical Writing Assessment in the Graduate Management Admission Test (GMAT), an entrance examination for business graduate schools, in February 1999, and it has scored approximately 360,000 essays per year. The system includes several independent NLP-based modules for identifying features relevant to the scoring guide from three categories: syntax, discourse, and topic. Each of the featurerecognition modules correlate the essay scores with assigned by human readers. 
E-rater uses a model-building module to select and weight predictive features for essay scoring. Project Essay 233 Grade (PEG), which was the first automated essay scorer, uses a regression model like e-rater (Page et al., 1997). IntelliMetric (Elliot, 2003) was first commercially released by Vantage Learning in January 1998 and was the first AI-based essay-scoring tool available to educational agencies. The system internalizes the pooled wisdom of many expert scorers. The Intelligent Essay Assessor (IEA) is a set of software tools for scoring the quality of the conceptual content of essays based on latent semantic analysis (Foltz et al., 1999). The Bayesian Essay Test Scoring sYstem (BETSY) is a windows-based program that classifies text based on trained material. The features include multi-nomial and Bernoulli Naive Bayes models (Rudner and Liang, 2002). Note that all above-mentioned systems are based on the assumption that the true quality of essays must be defined by human judges. However, Bennet and Bejar (1998) have criticized the overreliance on human ratings as the sole criterion for evaluating computer performance because ratings are typically based as a constructed rubric that may ultimately achieve acceptable reliability at the cost of validity. In addition, Friedman, in research during the 1980s, found that holistic ratings by human raters did not award particularly high marks to professionally written essays mixed in with student productions. This is a kind of negative halo effect: create a bad impression, and you will be scored low on everything. Thus, Bereiter (2003) insists that another approach to doing better than ordinary human raters would be to use expert writers rather than expert raters. Reputable professional writers produce sophisticated and easy-toread essays. The use of professional writings as the criterion, whether the evaluation is based on holistic or trait rating, has an advantage, described below. The methods based on expert rater evaluations require much effort to set up the model for each prompt. For example, e-rater and PEG use some sort of regression approaches in setting up the statistical models. Depending on how many variables are involved, these models may require thousands of cases to derive stable regression weights. BETSY requires the Bayesian rules, and IntelliMetric, the AI-based rules. Thus, the methodology limits the grader’s practical utility to largescale testing operations in which such data collection is feasible. On the other hand, a method based on professional writings can overcome this; i.e., in our system, we need not set up a model simulating a human rater because thousands of articles by professional writers can easily be obtained via various electronic media. By detecting a statistical outlier to predetermined essay features compared with many professional writings for each prompt, our system can evaluate essays. In Japan, it is possible to obtain complete articles from the Mainichi Daily News newspaper up to 2005 from Nichigai Associates, Inc. and from the Nihon Keizai newspaper up to 2004 from Nikkei Books and Software, Inc. for purposes of linguistic study. In short, it is relatively easy to collect editorials and columns (e.g., “Yoroku”) on some form of electronic media for use as essay models. Literary works in the public domain can be accessed from Aozora Bunko (http://www.aozora.gr.jp/). 
Furthermore, with regard to morphological analysis, the basis of Japanese natural language processing, a number of free Japanese morphological analyzers are available. These include JUMAN (http://www-lab25.kuee.kyotou.ac.jp/nlresource/juman.html), developed by the Language Media Laboratory of Kyoto University, and ChaSen (http://chasen.aist-nara.ac.jp/, used in this study) from the Matsumoto Laboratory of the Nara Institute of Science and Technology. Likewise, for syntactic analysis, free resources are available such as KNP (http://www-lab25. kuee.kyoto-u.ac.jp/nlresource/knp.html) from Kyoto University, SAX and BUP (http://cactus.aistnara.ac.jp/lab/nlt/ sax,bup  .html) from the Nara Institute of Science and Technology, and the MSLR parser (http://tanaka-www.cs.titech.ac.jp/ pub/mslr/index-j.html) from the Tanaka Tokunaga Laboratory of the Tokyo Institute of Technology. With resources such as these, we prepared tools for computer processing of the articles and columns that we collect as essay models. In addition, for the scoring of essays, where it is essential to evaluate whether content is suitable, i.e., whether a written essay responds appropriately to the essay prompt, it is becoming possible for us to use semantic search technologies not based on pattern matching as used by search engines on the Web. The methods for implementing such technologies are explained in detail by Ishioka and Kameda (1999) and elsewhere. We believe that this statistical outlier detection ap234 proach to using published professional essays and columns as models makes it possible to develop a system essentially superior to other AES systems. We have named this automated Japanese essay scoring system “Jess.” This system evaluates essays based on three features : (1) rhetoric, (2) organization, and (3) content, which are basically the same as the structure, organization, and content used by e-rater. Jess also allows the user to designate weights (allotted points) for each of these essay features. If the user does not explicitly specify the point allotment, the default weights are 5, 2, and 3 for structure, organization, and content, respectively, for a total of 10 points. (Incidentally, a perfect score in e-rater is 6.) This default point allotment in which “rhetoric” is weighted higher than “organization” and “content” is based on the work of Watanabe et al. (1988). In that research, 15 criteria were given for scoring essays: (1) wrong/omitted characters, (2) strong vocabulary, (3) character usage, (4) grammar, (5) style, (6) topic relevance, (7) ideas, (8) sentence structure, (9) power of expression, (10) knowledge, (11) logic/consistency, (12) power of thinking/judgment, (13) complacency, (14) nuance, and (15) affinity. Here, correlation coefficients were given to reflect the evaluation value of each of these criteria. For example, (3) character usage, which is deeply related to “rhetoric,” turned out to have the highest correlation coefficient at 0.58, and (1) wrong/omitted characters was relatively high at 0.36. In addition, (8) sentence structure and (11) logic/consistency, which is deeply related to “organization,” had correlation coefficients of 0.32 and 0.26, respectively, both lower than that of “rhetoric,” and (6) topic relevance and (14) nuance, which are though to be deeply related to “content,” had correlation coefficients of 0.27 and 0.32, respectively. 
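To make the point-deduction scheme above concrete, the sketch below combines per-feature deductions under the default allotment of 5, 2 and 3 points for rhetoric, organization and content. It is only an illustration of the weighting described in the text: the function name, the clamping at zero, and the form of the per-feature deduction inputs are assumptions, not part of Jess itself.

```python
# A minimal sketch of the point-deduction scoring scheme described above.
# The deduction inputs and the clamping behaviour are illustrative
# assumptions, not a transcription of Jess.

DEFAULT_WEIGHTS = {"rhetoric": 5.0, "organization": 2.0, "content": 3.0}

def combine_score(deductions, weights=DEFAULT_WEIGHTS):
    """Start each feature at its allotted points and subtract the deductions
    accumulated for that feature, never going below zero."""
    total = 0.0
    for feature, allotted in weights.items():
        lost = min(deductions.get(feature, 0.0), allotted)
        total += allotted - lost
    return total

# Example: an essay losing 1.5 points on rhetoric and 0.5 on content
# scores 10 - 2.0 = 8.0 under the default allotment.
print(combine_score({"rhetoric": 1.5, "content": 0.5}))
```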
Our system, Jess, is the first automated Japanese essay scorer and has become most famous in Japan, since it was introduced in February 2005 in a headline in the Asahi Daily News, which is well known as the most reliable and most representative newspaper of Japan. The following sections describe the scoring criteria of Jess in detail. Sections 2, 3, and 4 examine rhetoric, organization, and content, respectively. Section 5 presents an application example and associated operation times, and section 6 concludes the paper. 2 Rhetoric As metrics to portray rhetoric, Jess uses (1) ease of reading, (2) diversity of vocabulary, (3) percentage of big words (long, difficult words), and (4) percentage of passive sentences, in accordance with Maekawa (1995) and Nagao (1996). These metrics are broken down further into various statistical quantities in the following sections. The distributions of these statistical quantities were obtained from the editorials and columns stored on the Mainichi Daily News CD-ROMs. Though most of these distributions are asymmetrical (skewed), they are each treated as a distribution of an ideal essay. In the event that a score (obtained statistical quantity) turns out to be an outlier value with respect to such an ideal distribution, that score is judged to be “inappropriate” for that metric. The points originally allotted to the metric are then reduced, and a comment to that effect is output. An “outlier” is an item of data more than 1.5 times the interquartile range. (In a box-and-whisker plot, whiskers are drawn up to the maximum and minimum data points within 1.5 times the interquartile range.) In scoring, the relative weights of the broken-down metrics are equivalent with the exception of “diversity of vocabulary,” which is given a weight twice that of the others because we consider it an index contributing to not only “rhetoric” but to “content” as well. 2.1 Ease of reading The following items are considered indexes of “ease of reading.” 1. Median and maximum sentence length Shorter sentences are generally assumed to make for easier reading (Knuth et al., 1988). Many books on writing in the Japanese language, moreover, state that a sentence should be no longer than 40 or 50 characters. Median and maximum sentence length can therefore be treated as an index. The reason the median value is used as opposed to the average is that sentence-length distributions are skewed in most cases. The relative weight used in the evaluation of median and maximum sentence length is equivalent to that of the indexes described below. Sentence length is also known to be quite effective for determining style. 2. Median and maximum clause length 235 In addition to periods (.), commas (,) can also contribute to ease of reading. Here, text between commas is called a “clause.” The number of characters in a clause is also an evaluation index. 3. Median and maximum number of phrases in clauses A human being cannot understand many things at one time. The limit of human shortterm memory is said to be seven things in general, and that is thought to limit the length of clauses. Actually, on surveying the number of phrases in clauses from editorials in the Mainichi Daily News, we found it to have a median of four, which is highly compatible with the short-term memory maximum of seven things. 4. Kanji/kana ratio To simplify text and make it easier to read, a writer will generally reduce kanji (Chinese characters) intentionally. 
In fact, an appropriate range for the kanji/kana ratio in essays is thought to exist, and this range is taken to be an evaluation index. The kanji/kana ratio is also thought to be one aspect of style. 5. Number of attributive declined or conjugated words (embedded sentences) The declined or conjugated forms of attributive modifiers indicate the existence of “embedded sentences,” and their quantity is thought to affect ease of understanding. 6. Maximum number of consecutive infinitiveform or conjunctive-particle clauses Consecutive infinitive-form or conjunctiveparticle clauses, if many, are also thought to affect ease of understanding. Note that not this “average size” but “maximum number” of consecutive infinitive-form or conjunctiveparticle clauses holds significant meaning as an indicator of the depth of dependency affecting ease of understanding. 2.2 Diversity of vocabulary Yule (1944) used a variety of statistical quantities in his analysis of writing. The most famous of these is an index of vocabulary concentration called the characteristic value. The value of is non-negative, increases as vocabulary becomes more concentrated, and conversely, decreases as vocabulary becomes more diversified. The median values of for editorials and columns in the Mainichi Daily News were found to be 87.3 and 101.3, respectively. Incidentally, other characteristic values indicating vocabulary concentration have been proposed. See Tweedie et al. (1998), for example. 2.3 Percentage of big words It is thought that the use of big words, to whatever extent, cannot help but impress the reader. On investigating big words in Japanese, however, care must be taken because simply measuring the length of a word may lead to erroneous conclusions. While “big word” in English is usually synonymous with “long word,” a word expressed in kanji becomes longer when expressed in kana characters. That is to say, a “small word” in Japanese may become a big word simply due to notation. The number of characters in a word must therefore be counted after converting it to kana characters (i.e., to its “reading”) to judge whether that word is big or small. In editorials from the Mainichi Daily News, the median number of characters in nouns after conversion to kana was found to be 4, with 5 being the 3rd quartile (upper 25%). We therefore assumed for the time being that nouns having readings of 6 or more characters were big words, and with this as a guideline, we again measured the percentage of nouns in a document that were big words. Since the number of characters in a reading is an integer value, this percentage would not necessarily be 25%, but a distribution that takes a value near that percentage on average can be obtained. 2.4 Percentage of passive sentences It is generally felt that text should be written in active voice as much as possible, and that text with many passive sentences is poor writing (Knuth et al., 1988). For this reason, the percentage of passive sentences is also used as an index of rhetoric. Grammatically speaking, passive voice is distinguished from active voice in Japanese by the auxiliary verbs “reru” and “rareru”. In addition to passivity, however, these two auxiliary verbs can also indicate respect, possibility, and spontaneity. In fact, they may be used to indicate respect even in the case of active voice. This distinction, however, while necessary in analysis at the semantic level, is not used in morphological analysis and syntactic analysis. 
For example, in the case that the object 236 of respect is “teacher” (sensei) or “your husband” (goshujin), the use of “reru” and “rareru” auxiliary verbs here would certainly indicate respect. This meaning, however, belongs entirely to the world of semantics. We can assume that such an indication of respect would not be found in essays required for tests, and consequently, that the use of “reru” and “rareru” in itself would indicate the passive voice in such an essay. 3 Organization Comprehending the flow of a discussion is essential to understanding the connection between various assertions. To help the reader to catch this flow, the frequent use of conjunctive expressions is useful. In Japanese writing, however, the use of conjunctive expressions tends to alienate the reader, and such expressions, if used at all, are preferably vague. At times, in fact, presenting multiple descriptions or posing several questions seeped in ambiguity can produce interesting effects and result in a beautiful passage (Noya, 1997). In essays tests, however, examinees are not asked to come up with “beautiful passages.” They are required, rather, to write logically while making a conscious effort to use conjunctive expressions. We therefore attempt to determine the logical structure of a document by detecting the occurrence of conjunctive expressions. In this effort, we use a method based on cue words as described in Quirk et al. (1985) for measuring the organization of a document. This method, which is also used in e-rater, the basis of our system, looks for phrases like “in summary” and “in conclusion” that indicate summarization, and words like “perhaps” and “possibly” that indicate conviction or thinking when examining a matter in depth, for example. Now, a conjunctive relationship can be broadly divided into “forward connection” and “reverse connection.” “Forward connection” has a rather broad meaning indicating a general conjunctive structure that leaves discussion flow unchanged. In contrast, “reverse connection” corresponds to a conjunctive relationship that changes the flow of discussion. These logical structures can be classified as follows according to Noya (1997). The “forward connection” structure comes in the following types. Addition: A conjunctive relationship that adds emphasis. A good example is “in addition,” while other examples include “moreover” and “rather.” Abbreviation of such words is not infrequent. Explanation: A conjunctive relationship typified by words and phrases such as “namely,” “in short,” “in other words,” and “in summary.” It can be broken down further into “summarization” (summarizing and clarifying what was just described), “elaboration” (in contrast to “summarization,” begins with an overview followed by a detailed description), and “substitution” (saying the same thing in another way to aid in understanding or to make a greater impression). Demonstration: A structure indicating a reasonconsequence relation. Expressions indicating a reason include “because” and “the reason is,” and those indicating a consequence include “as a result,” “accordingly,” “therefore,” and “that is why.” Conjunctive particles in Japanese like “node” (since) and “kara” (because) also indicate a reason-consequence relation. Illustration: A conjunctive relationship most typified by the phrase “for example” having a structure that either explains or demonstrates by example. The “reverse connection” structure comes in the following types. 
Transition: A conjunctive relationship indicating a change in emphasis from A to B expressed by such structures as “A ..., but B...” and “A...; however, B...). Restriction: A conjunctive relationship indicating a continued emphasis on A. Also referred to as a “proviso” structure typically expressed by “though in fact” and “but then.” Concession: A type of transition that takes on a conversational structure in the case of concession or compromise. Typical expressions indicating this relationship are “certainly” and “of course.” Contrast: A conjunctive relationship typically expressed by “at the same time,” “on the other hand,” and “in contrast.” We extracted all    phrases indicating conjunctive relationships from editorials of the Mainichi Daily News, and classified them into the above four categories for forward connection and 237 those for reverse connection for a total of eight exclusive categories. In Jess, the system attaches labels to conjunctive relationships and tallies them to judge the strength of the discourse in the essay being scored. As in the case of rhetoric, Jess learns what an appropriate number of conjunctive relationships should be from editorials of the Mainichi Daily News, and deducts from the initially allotted points in the event of an outlier value in the model distribution. In the scoring, we also determined whether the pattern in which these conjunctive relationships appeared in the essay was singular compared to that in the model editorials. This was accomplished by considering a trigram model (Jelinek, 1991) for the appearance patterns of forward and reverse connections. In general, an -gram model can be represented by a stochastic finite automaton, and in a trigram model, each state of an automaton is labeled by a symbol sequence of length 2. The set of symbols here is    forwardconnection,   reverse-connection  . Each state transition is assigned a conditional output probability as shown in Table 1. The symbol  here indicates no (prior) relationship. The initial state is shown as  . For example, the expression     signifies the probability that “   forward connection” will appear as the initial state. Table 1: State transition probabilities on  forward-connection,   reverse-connection    !#"$%&'()*  +-,.'%'/  0-1 %&'%2  13!4  56*  0#+7%# 8  1#+   %29  +-+:%&  %2;*  !-!< %8  ,3" %# %.  =-,:> %?%2;*  0#+@%& %%'A  1#+ >'BC   !-!D%&EC *  +31 In this way, the probability of occurrence of certain F forward-connection  and   reverseconnection  patterns can be obtained by taking the product of appropriate conditional probabilities listed in Table 1. For example, the probability of occurrence G of the pattern IH  HJKHJ  turns out to be LNMPOQOSRTLNM /RLNM  RLNM VU WLNMXLVY . Furthermore, given that the probability of   appearing without prior information is 0.47 and that of   appearing without prior information is 0.53, the probability Z that a forward connection occurs three times and a reverse connection once under the condition of no prior information would be LNMPO>[V\]R*LNM QY  LNMXL  . As shown by this example, an occurrence probability that is greater for no prior information would indicate that the forward-connection and reverse-connection appearance pattern is singular, in which case the points initially allocated to conjunctive relationships in a discussion would be reduced. The trigram model may overcome the restrictions that the essay should be written in a pyramid structure or the reversal. 
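The singularity check on connective patterns described above can be sketched as follows: the probability of the observed sequence of forward ('f') and reverse ('r') connections under the learned trigram model is compared with its probability under no prior information (0.47 for a forward connection and 0.53 for a reverse connection, as given in the text). The trigram transition probabilities below are illustrative placeholders, not the values learned from the Mainichi editorials.

```python
# A minimal sketch of the trigram-based singularity check, assuming the
# connection pattern detected in an essay is given as a list of 'f'/'r'
# symbols. TRIGRAM holds placeholder values; PRIOR holds the prior-only
# probabilities quoted in the text.

# State = the previous two symbols; '*' marks the absence of a prior symbol.
TRIGRAM = {
    ("*", "*"): {"f": 0.55, "r": 0.45},
    ("*", "f"): {"f": 0.60, "r": 0.40},
    ("*", "r"): {"f": 0.50, "r": 0.50},
    ("f", "f"): {"f": 0.65, "r": 0.35},
    ("f", "r"): {"f": 0.45, "r": 0.55},
    ("r", "f"): {"f": 0.55, "r": 0.45},
    ("r", "r"): {"f": 0.40, "r": 0.60},
}
PRIOR = {"f": 0.47, "r": 0.53}  # probabilities with no prior information

def trigram_probability(pattern):
    """Probability of a connection pattern under the trigram model."""
    prob, history = 1.0, ("*", "*")
    for symbol in pattern:
        prob *= TRIGRAM[history][symbol]
        history = (history[1], symbol)
    return prob

def prior_probability(pattern):
    """Probability of the same pattern with no prior information."""
    prob = 1.0
    for symbol in pattern:
        prob *= PRIOR[symbol]
    return prob

def is_singular(pattern):
    """Judged singular (and penalized) when the prior-only probability
    exceeds the trigram probability, as described in the text."""
    return prior_probability(pattern) > trigram_probability(pattern)

print(is_singular(["f", "f", "f", "r"]))
```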
4 Content A technique called latent semantic indexing can be used to check whether the content of a written essay responds appropriately to the essay prompt. The usefulness of this technique has been stressed at the Text REtrieval Conference (TREC) and elsewhere. Latent semantic indexing begins after performing singular value decomposition on ^8R_ term-document matrix ` ( ^  number of words; _  number of documents) indicating the frequency of words appearing in a sufficiently large number of documents. Matrix ` is generally a huge sparse matrix, and SVDPACK (Berry, 1992) is known to be an effective software package for performing singular value decomposition on a matrix of this type. This package allows the use of eight different algorithms, and Ishioka and Kameda (1999) give a detailed comparison and evaluation of these algorithms in terms of their applicability to Japanese documents. Matrix ` must first be converted to the Harwell-Boeing sparse matrix format (Duff et al., 1989) in order to use SVDPACK. This format can store the data of a sparse matrix in an efficient manner, thereby saving disk space and significantly decreasing data read-in time. 5 Application 5.1 An E-rater Demonstration An e-rater demonstration can be viewed at www.ets.org, where by clicking “Products a erater Home a Demo.” In this demonstration, seven response patterns (seven essays) are evaluated. The scoring breakdown, given a perfect score of six, was one each for scores of 6, 5, 4, and 2 and three for a score of 3. We translated essays A-to-G on that Web site into Japanese and then scored them using Jess, as shown in Table 2. The second and third columns show e-rater and Jess scores, respectively, and the fourth column shows the number of characters in each essay. 238 Table 2: Comparison of scoring results Essay E-rater Jess No. of Characters Time (s) A 4 6.9 (4.1) 687 1.00 B 3 5.1 (3.0) 431 1.01 C 6 8.3 (5.0) 1,884 1.35 D 2 3.1 (1.9) 297 0.94 E 3 7.9 (4.7) 726 0.99 F 5 8.4 (5.0) 1,478 1.14 G 3 6.0 (3.6) 504 0.95 A perfect score in Jess is 10 with 5 points allocated to rhetoric, 2 to organization, and 3 to content as standard. For purposes of comparison, the Jess score converted to e-rater’s 6-point system is shown in parentheses. As can be seen here, essays given good scores by e-rater are also given good scores by Jess, and the two sets of scores show good agreement. However, e-rater (and probably human raters) tends to give more points to longer essays despite similar writing formats. Here, a difference appears between e-rater and Jess, which uses the point-deduction system for scoring. Examining the scores for essay C, for example, we see that e-rater gave a perfect score of 6, while Jess gave only a score of 5 after converting to e-rater’s 6-point system. In other words, the length of the essay could not compensate for various weak points in the essay under Jess’s point-deduction system. The fifth column in Table 2 shows the processing time (CPU time) for Jess. The computer used was Plat’Home Standard System 801S using an 800-MHz Intel Pentium III running RedHat 7.2. The Jess program is written in C shell script, jgawk, jsed, and C, and comes to just under 10,000 lines. In addition to the ChaSen morphological analysis system, Jess also needs the kakasi kanji/kana converter program (http://kakasi.namagu.org/) to operate. At present, it runs only on UNIX. Jess can be executed on the Web at http://coca.rd.dnc.ac.jp/jess/. 
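Returning briefly to the latent semantic indexing step of Section 4: the sketch below illustrates the same idea on a toy dense term-document matrix using numpy's SVD. Jess itself applies SVDPACK to a large sparse matrix stored in Harwell-Boeing format, so the dense decomposition, the fold-in routine and the cosine comparison here are simplifying assumptions for illustration only.

```python
# A toy sketch of latent semantic indexing: decompose a term-document
# frequency matrix, keep k latent dimensions, fold new documents into the
# reduced space, and compare them by cosine similarity.

import numpy as np

# Rows = words, columns = documents (toy frequency counts).
A = np.array([
    [2, 0, 1, 0],
    [1, 1, 0, 0],
    [0, 2, 0, 1],
    [0, 0, 1, 2],
], dtype=float)

k = 2  # number of latent dimensions to keep
U, s, Vt = np.linalg.svd(A, full_matrices=False)
U_k, S_k = U[:, :k], np.diag(s[:k])

def fold_in(word_freq_vector):
    """Project a new document (word-frequency vector) into the latent space."""
    return np.linalg.inv(S_k) @ U_k.T @ word_freq_vector

def cosine(a, b):
    return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b) + 1e-12))

# Similarity between an essay and the prompt, both folded into the space.
essay = fold_in(np.array([1.0, 0.0, 2.0, 0.0]))
prompt = fold_in(np.array([2.0, 1.0, 1.0, 0.0]))
print(cosine(essay, prompt))
```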
5.2 An Example of using a Web Entry Sheet Four hundred eighty applicants who were eager to be hired by a certain company entered their essays using a Web form without a time restriction, with the size of the text restricted implicitly by the Web screen, to about 800 characters. The theme of the essay was “What does working mean in your life.” Table 3 summarizes the correlation coefficients between the Jess score, average score of expert raters, and score of the linguistic understanding test (LUT), developed by Recruit Management Solutions Co., Ltd. The LUT is designed to measure the ability to grasp the correct meaning of words that are the elements of a sentence, and to understand the composition and the summary of a text. Five expert raters reted the essays, and three of these scored each essay independently. Table 3: Correlation between Jess score, average of expert raters, and linguistic understanding test Jess Ave. of Experts Ave. of Experts 0.57 LUT 0.08 0.13 We found that the correlation between the Jess score and the average of the expert raters’ scores is not small (0.57), and is larger than the correlation coefficient between the expert raters’ scores of 0.48. That means that Jess is superior to the expert raters on average, and is substitutable for them. Note that the restriction of the text size (800 characters in this case) caused the low correlation owing to the difficulty in evaluating the organization and the development of the arguments; the essay scores even in expert rater tend to be dispersed. We also found that neither the expert raters nor Jess, had much correlation with LUT, which shows that LUT does not reflect features indicating writing ability. That is, LUT measures quite different laterals from writing ability. Another experiment using 143 universitystudents’ essays collected at the National Institute for Japanese Language shows a similar result: for the essays on “smoking,” the correlation between Jess and the expert raters was 0.83, which is higher than the average correlation of expert raters (0.70); for the essays on “festivals in Japan,” the former is 0.84, the latter, 0.73. Three of four raters graded each essay independently. 6 Conclusion An automated Japanese essay scoring system called Jess has been created for scoring essays in college-entrance exams. This system has been shown to be valid for essays of 800 to 1,600 characters. Jess, however, uses editorials and columns taken from the Mainichi Daily News newspaper as learning models, and such models are not sufficient for learning terms used in scientific and technical fields such as computers. Consequently, we found that Jess could return a low evaluation of “content” even for an essay that responded well to the essay prompt. When analyzing content, a mechanism is needed for automatically selecting 239 a term-document cooccurrence matrix in accordance with the essay targeted for evaluation. This enable the users to avoid reverse-engineering that poor quality essays would produce perfect scores, because thresholds for detecting the outliers on rhetoric features may be varied. Acknowledgements We would like to extend their deep appreciation to Professor Eiji Muraki, currently of Tohoku University, Graduate School of Educational Informatics, Research Division, who, while resident at Educational Testing Service (ETS), was kind enough to arrange a visit for us during our survey of the e-rater system. References Bennet, R.E. and Bejar, I.I. 1998. 
Validity and automated scoring: It’s not only the scoring, Educational Measurement: Issues and Practice, 17(4):9–17. Bereiter, C. 2003. Foreword. In Shermis, M. and Burstein, J. eds. Automated essay scoring: A cross-disciplinary perspective. Hillsdale, NJ: Lawrence Erlbaum Associates. Berry, M.W. 1992. Large scale singular value computations, International Journal of Supercomputer Applications, 6(1):13–49. Burstein, J., Kukich, K., Wolff, S., Lu, C., Chodorow, M., Braden-Harder, L., and Harris, M.D. 1998. Automated Scoring Using A Hybrid Feature Identification Technique. The Annual Meeting of the Association for Computational Linguistics. Available online: www.ets.org/research/erater.html Chase, C.I. 1986. Essay test scoring: interaction of relevant variables, Journal of Educational Measurement, 23(1):33–41. Chase, C.I. 1979. The impact of achievement expectations and handwriting quality on scoring essay tests, Journal of Educational Measurement, 16(1):293–297. Cooper, P.L. 1984. The assessment of writing ability: a review of research, GRE Board Research Report, GREB No.82-15R. Available online: www.gre.org/reswrit.html#TheAssessmentofWriting Deerwester, S., Dumais, S.T., Furnas, G.W., Landauer, T.K. and Harshman, R. 1990. Indexing by latent semantic analysis. Journal of the American Society for Information Science, 41(7):391–407. Duff, I.S., Grimes, R.G. and Lewis, J.G. 1989. Sparse matrix test problem. ACM Trans. Math. Software, 15:1–14. Elliot, S. 2003. IntelliMetric: From Here to Validity, 71–86. In Shermis, M. and Burstein, J. eds. Automated essay scoring: A cross-disciplinary perspective. Hillsdale, NJ: Lawrence Erlbaum Associates. Foltz, P.W., Laham, D. and Landauer, T.K. 1999. Automated Essay Scoring: Applications to Educational Technology. EdMedia ’99. Hughes, D.C., Keeling, B. and Tuck, B.F. 1983. The effects of instructions to scorers intended to reduce context effects in essay scoring, Educational and Psychological Measurement, 43:1047–1050. Ishioka, T. and Kameda, M. 1999. Document retrieval based on Words’ cooccurrences — the algorithm and its applications (in Japanese), Japanese Journal of Applied Statistics, 28(2):107–121. Jelinek, F. 1991. Up from trigrams! The struggle for improved language models, the European Conference on Speech Communication and Technology (EUROSPEECH-91), 1037–1040. Knuth, D.E., Larrabee, T. and Roberts, P.M. 1988. Mathematical Writing, Stanford University Computer Science Department, Report Number: STAN-CS-88-1193. Maekawa, M. 1995. Scientific Analysis of Writing (in Japanese), ISBN 4-00-007953-0, Iwanami Shoten. Marshall, J.C. and Powers, J.M. 1969. Writing neatness, composition errors and essay grades, Journal of Educational Measurement, 6(2):97–101. Meyer, G. 1939. The choice of questions on essay examinations, Journal of Educational Psychology, 30(3):161–171. Nagao, M. (ed.) 1996. Natural Language Processing (in Japanese), The Iwanami Software Science Series 15, ISBN 4-00-10355-5. Noya, S. 1997. Logical Training (in Japanese), Sangyo Tosho, ISBN 4-7828-0205-6. Page, E.B., Poggio, J.P. and Keith, T.Z. 1997. Computer analysis of student essays: Finding trait differences in the student profile. AERA/NCME Symposium on Grading Essays by Computer. Powers, D.E., Burstein, J.C., Chodorow, M., Fowles, M.E., and Kukich, K. 2000. Comparing the validity of automated and human essay scoring, GRE No. 98-08a. Princeton, NJ: Educational Testing Service. Quirk, R., Greenbaum, S., Leech, G. and Svartvik, J. 1985.
A Comprehensive Grammar of the English Language, Longman. Rudner, L.M. and Liang, L. 2002. National Council on Measurement in Education, New Orleans, LA. Available online: http://ericae.net/betsy/papers/n2002e.pdf Tweedie, F.J. and Baayen, R.H. 1998. How Variable May a Constant Be? Measures of Lexical Richness in Perspective, Computers and the Humanities, 32:323–352. Watanabe, H., Taira, Y. and Inoue, T. 1988. An Analysis of Essay Examination Data (in Japanese), Research bulletin, Faculty of Education, University of Tokyo, 28:143–164. Yule, G.U. 1944. The Statistical Study of Literary Vocabulary, Cambridge University Press, Cambridge.
Proceedings of the 21st International Conference on Computational Linguistics and 44th Annual Meeting of the ACL, pages 241–248, Sydney, July 2006. c⃝2006 Association for Computational Linguistics A Feedback-Augmented Method for Detecting Errors in the Writing of Learners of English Ryo Nagata Hyogo University of Teacher Education 6731494, Japan [email protected] Atsuo Kawai Mie University 5148507, Japan [email protected] Koichiro Morihiro Hyogo University of Teacher Education 6731494, Japan [email protected] Naoki Isu Mie University 5148507, Japan [email protected] Abstract This paper proposes a method for detecting errors in article usage and singular plural usage based on the mass count distinction. First, it learns decision lists from training data generated automatically to distinguish mass and count nouns. Then, in order to improve its performance, it is augmented by feedback that is obtained from the writing of learners. Finally, it detects errors by applying rules to the mass count distinction. Experiments show that it achieves a recall of 0.71 and a precision of 0.72 and outperforms other methods used for comparison when augmented by feedback. 1 Introduction Although several researchers (Kawai et al., 1984; McCoy et al., 1996; Schneider and McCoy, 1998; Tschichold et al., 1997) have shown that rulebased methods are effective to detecting grammatical errors in the writing of learners of English, it has been pointed out that it is hard to write rules for detecting errors concerning the articles and singular plural usage. To be precise, it is hard to write rules for distinguishing mass and count nouns which are particularly important in detecting these errors (Kawai et al., 1984). The major reason for this is that whether a noun is a mass noun or a count noun greatly depends on its meaning or its surrounding context (refer to Allan (1980) and Bond (2005) for details of the mass count distinction). The above errors are very common among Japanese learners of English (Kawai et al., 1984; Izumi et al., 2003). This is perhaps because the Japanese language does not have a mass count distinction system similar to that of English. Thus, it is favorable for error detection systems aiming at Japanese learners to be capable of detecting these errors. In other words, such systems need to somehow distinguish mass and count nouns. This paper proposes a method for distinguishing mass and count nouns in context to complement the conventional rules for detecting grammatical errors. In this method, first, training data, which consist of instances of mass and count nouns, are automatically generated from a corpus. Then, decision lists for distinguishing mass and count nouns are learned from the training data. Finally, the decision lists are used with the conventional rules to detect the target errors. The proposed method requires a corpus to learn decision lists for distinguishing mass and count nouns. General corpora such as newspaper articles can be used for the purpose. However, a drawback to it is that there are differences in character between general corpora and the writing of non-native learners of English (Granger, 1998; Chodorow and Leacock, 2000). For instance, Chodorow and Leacock (2000) point out that the word concentrate is usually used as a noun in a general corpus whereas it is a verb 91% of the time in essays written by non-native learners of English. Consequently, the differences affect the performance of the proposed method. 
In order to reduce the drawback, the proposed method is augmented by feedback; it takes as feedback learners’ essays whose errors are corrected by a teacher of English (hereafter, referred to as the feedback corpus). In essence, the feedback corpus could be added to a general corpus to generate training data. Or, ideally training data could be generated only from the feedback corpus just as 241 from a general corpus. However, this causes a serious problem in practice since the size of the feedback corpus is normally far smaller than that of a general corpus. To make it practical, this paper discusses the problem and explores its solution. The rest of this paper is structured as follows. Section 2 describes the method for detecting the target errors based on the mass count distinction. Section 3 explains how the method is augmented by feedback. Section 4 discusses experiments conducted to evaluate the proposed method. 2 Method for detecting the target errors 2.1 Generating training data First, instances of the target noun that head their noun phrase (NP) are collected from a corpus with their surrounding words. This can be simply done by an existing chunker or parser. Then, the collected instances are tagged with mass or count by the following tagging rules. For example, the underlined chicken: ... are a lot of chickens in the roost ... is tagged as ... are a lot of chickens/count in the roost ... because it is in plural form. We have made tagging rules based on linguistic knowledge (Huddleston and Pullum, 2002). Figure 1 and Table 1 represent the tagging rules. Figure 1 shows the framework of the tagging rules. Each node in Figure 1 represents a question applied to the instance in question. For example, the root node reads “Is the instance in question plural?”. Each leaf represents a result of the classification. For example, if the answer is yes at the root node, the instance in question is tagged with count. Otherwise, the question at the lower node is applied and so on. The tagging rules do not classify instances as mass or count in some cases. These unclassified instances are tagged with the symbol “?”. Unfortunately, they cannot readily be included in training data. For simplicity of implementation, they are excluded from training data1. Note that the tagging rules can be used only for generating training data. They cannot be used to distinguish mass and count nouns in the writing of learners of English for the purpose of detecting 1According to experiments we have conducted, approximately 30% of instances are tagged with “?” on average. It is highly possible that performance of the proposed method will improve if these instances are included in the training data. the target errors since they are based on the articles and the distinction between singular and plural. Finally, the tagged instances are stored in a file with their surrounding words. Each line of it consists of one of the tagged instances and its surrounding words as in the above chicken example. 2.2 Learning Decision Lists In the proposed method, decision lists are used for distinguishing mass and count nouns. One of the reasons for the use of decision lists is that they have been shown to be effective to the word sense disambiguation task and the mass count distinction is highly related to word sense as we will see in this section. Another reason is that rules for distinguishing mass and count nouns are observable in decision lists, which helps understand and improve the proposed method. 
A decision list consists of a set of rules. Each rule matches the template as follows: If a condition is true, then a decision (1) To define the template in the proposed method, let us have a look at the following two examples: 1. I read the paper. 2. The paper is made of hemp pulp. The underlined papers in both sentences cannot simply be classified as mass or count by the tagging rules presented in Section 2.1 because both are singular and modified by the definite article. Nevertheless, we can tell that the former is a count noun and the latter is a mass noun from the contexts. This suggests that the mass count distinction is often determined by words surrounding the target noun. In example 1, we can tell that the paper refers to something that can be read such as a newspaper or a scientific paper from read, and therefore it is a count noun. Likewise, in example 2, we can tell that the paper refers to a certain substance from made and pulp, and therefore it is a mass noun. Taking this observation into account, we define the template based on words surrounding the target noun. To formalize the template, we will use a random variable  that takes either   or  to denote that the target noun is a mass noun or a count noun, respectively. We will also use  and  to denote a word and a certain context around the target noun, respectively. We define 242                             yes yes yes yes no no no no           yes no COUNT modified by a little? ? COUNT MASS ? MASS plural? modified by one of the words in Table 1(a)? modified by one of the words in Table 1(b)? modified by one of the words in Table 1(c)? Figure 1: Framework of the tagging rules Table 1: Words used in the tagging rules (a) (b) (c) the indefinite article much the definite article another less demonstrative adjectives one enough possessive adjectives each sufficient interrogative adjectives – – quantifiers – – ’s genitives three types of  :  ,  , and  that denote the contexts consisting of the noun phrase that the target noun heads,  words to the left of the noun phrase, and  words to its right, respectively. Then the template is formalized by: If word  appears in context  of the target noun, then it is distinguished as  Hereafter, to keep the notation simple, it will be abbreviated to   (2) Now rules that match the template can be obtained from the training data. All we need to do is to collect words in  from the training data. Here, the words in Table 1 are excluded. Also, function words (except prepositions), cardinal and quasi-cardinal numerals, and the target noun are excluded. All words are reduced to their morphological stem and converted entirely to lower case when collected. For example, the following tagged instance: She ate fried chicken/mass for dinner. would give a set of rules that match the template:  "!#   $&%('*),+    $ %.#  . /10 2 %.#    for the target noun chicken when 4365 . In addition, a default rule is defined. It is based on the target noun itself and used when no other applicable rules are found in the decision list for the target noun. It is defined by 7 8 major (3) where  and  major denote the target noun and the majority of 8 in the training data, respectively. Equation (3) reads “If the target noun appears, then it is distinguished by the majority”. The log-likelihood ratio (Yarowsky, 1995) decides in which order rules are applied to the target noun in novel context. It is defined by2 9 .: <; 8>=  <? <; 8>=  <? (4) where 8 is the exclusive event of  and @; A=  7? 
is the probability that the target noun is used as 8 when  appears in the context  . It is important to exercise some care in estimating @; 8>=  ? . In principle, we could simply 2For the default rule, the log-likelihood ratio is defined by replacing B2C and DFE with G and DFE major, respectively. 243 count the number of times that  appears in the context  of the target noun used as  in the training data. However, this estimate can be unreliable, when  does not appear often in the context. To solve this problem, using a smoothing parameter H (Yarowsky, 1996), <; 8>=  7? is estimated by3 <; 8>=  <? 3 $ ;IKJ  ? LH $ ;I <? MFH (5) where $ ;I 7? and $ ;I  J  ? are occurrences of  appearing in  and those in  of the target noun used as 8 , respectively. The constant  is the number of possible classes, that is, N3O ( P  or  ) in our case, and introduced to satisfy @; A=  7?  @; A=  Q? 3R . In this paper, H is set to 1. Rules in a decision list are sorted in descending order by the log-likelihood ratio. They are tested on the target noun in novel context in this order. Rules sorted below the default rule are discarded4 because they are never used as we will see in Section 2.3. Table 2 shows part of a decision list for the target noun chicken that was learned from a subset of the BNC (British National Corpus) (Burnard, 1995). Note that the rules are divided into two columns for the purpose of illustration in Table 2; in practice, they are merged into one. Table 2: Rules in a decision list Mass Count   LLR   LLR  0 S T !# 1.49  !# 1.49 $ 0 .U !# 1.28 & V  # 1.32 /10  U !# 1.23  0 : ) + 1.23 . 0  # 1.23 %  !# 1.23  % W # 1.18 :X: ),+ 1.18 target noun: chicken, 43Y5 LLR (Log-Likelihood Ratio) On one hand, we associate the words in the left half with food or cooking. On the other hand, we associate those in the right half with animals or birds. From this observation, we can say that chicken in the sense of an animal or a bird is a count noun but a mass noun when referring to food 3The probability for the default rule is estimated just as the log-likelihood ratio for the default rule above. 4It depends on the target noun how many rules are discarded. or cooking, which agrees with the knowledge presented in previous work (Ostler and Atkins, 1991). 2.3 Distinguishing mass and count nouns To distinguish the target noun in novel context, each rule in the decision list is tested on it in the sorted order until the first applicable one is found. It is distinguished according to the first applicable one. Ties are broken by the rules below. It should be noted that rules sorted below the default rule are never used because the default rule is always applicable to the target noun. This is the reason why rules sorted below the default rule are discarded as mentioned in Section 2.2. 2.4 Detecting the target errors The target errors are detected by the following three steps. Rules in each step are examined on each target noun in the target text. In the first step, any mass noun in plural form is detected as an error5. If an error is detected in this step, the rest of the steps are not applied. In the second step, errors are detected by the rules described in Table 3. The symbol “ Z ” in Table 3 denotes that the combination of the corresponding row and column is erroneous. For example, the fifth row denotes that singular and plural count nouns modified by much are erroneous. The symbol “–” denotes that no error can be detected by the table. 
If one of the rules in Table 3 is applied to the target noun, the third step is not applied. In the third step, errors are detected by the rules described in Table 4. The symbols “ Z ” and “–” are the same as in Table 3. In addition, the indefinite article that modifies other than the head noun is judged to be erroneous Table 3: Detection rules (i) Count Mass Pattern Sing. Pl. Sing. [ another, each, one \ – Z Z [ all, enough, sufficient \ Z – – [ much \ Z Z – [ that, this \ – Z – [ few, many, several \ Z – Z [ these, those \ Z – Z [ various, numerous \ Z – Z cardinal numbers exc. one Z – Z 5Mass nouns can be used in plural in some cases. However, they are rare especially in the writing of learners of English. 244 Table 4: Detection rules (ii) Singular Plural a/an the ] a/an the ] Mass Z – – Z Z Z Count – – Z Z – – (e.g., *an expensive). Likewise, the definite article that modifies other than the head noun or adjective is judged to be erroneous (e.g., *the them). Also, we have made exceptions to the rules. The following combinations are excluded from the detection in the second and third steps: head nouns modified by interrogative adjectives (e.g., what), possessive adjectives (e.g., my), ’s genitives, “some”, “any”, or “no”. 3 Feedback-augmented method As mentioned in Section 1, the proposed method takes the feedback corpus6 as feedback to improve its performance. In essence, decision lists could be learned from a corpus consisting of a general corpus and the feedback corpus. However, since the size of the feedback corpus is normally far smaller than that of general corpora, so is the effect of the feedback corpus on @; A= ^ ? . This means that the feedback corpus hardly has effect on the performance. Instead, @; A=  7? can be estimated by interpolating the probabilities estimated from the feedback corpus and the general corpus according to confidences of their estimates. It is favorable that the interpolated probability approaches to the probability estimated from the feedback corpus as its confidence increases; the more confident its estimate is, the more effect it has on the interpolated probability. Here, confidence of ratio  is measured by the reciprocal of variance of the ratio (Tanaka, 1977). Variance is calculated by @; R_  ?  (6) where  denotes the number of samples used for calculating the ratio. Therefore, confidence of the estimate of the conditional probability used in the proposed method is measured by 3 $ ;I ? @; 8>=  7? ; R_ @; A=  Q?`? (7) 6The feedback corpus refers to learners’ essays whose errors are corrected as mentioned in Section 1. To formalize the interpolated probability, we will use the symbols aSb , dc , TaSb , and c to denote the conditional probabilities estimated from the feedback corpus and the general corpus, and their confidences, respectively. Then, the interpolated probability &e is estimated by7 e 3 f c gihkj gml ;n&aTb  c ? J TaTb_op qc &aSbqJ TaTb_rp c (8) In Equation (8), the effect of saTb on e becomes large as its confidence increases. It should also be noted that when its confidence exceeds that of  c , the general corpus is no longer used in the interpolated probability. A problem that arises in Equation (8) is that 2aTb hardly has effect on &e when a much larger general corpus is used than the feedback corpus even if taTb is estimated with a sufficient confidence. 
For example, &aSb estimated from 100 samples, which are a relatively large number for estimating a probability, hardly has effect on ue when  c is estimated from 10000 samples; roughly, saSb has a RVvPRTw*w effect of  c on e . One way to prevent this is to limit the effect of c to some extent. It can be realized by taking the log of ,c in Equation (8). That is, the interpolated probability is estimated by e 3 f  c xgih`j y{z`| g l ;n&aTb   c.? J} TaTb~o4€*‚ c &aSbqJ} TaTb~r4€*‚ qc (9) It is arguable what base of the log should be used. In this paper, it is set to 2 so that the effect of  c on the interpolated probability becomes large when the confidence of the estimate of the conditional probability estimated from the feedback corpus is small (that is, when there is little data in the feedback corpus for the estimate)8. In summary, Equation (9) interpolates between the conditional probabilities estimated from the feedback corpus and the general corpus in the feedback-augmented method. The interpolated probability is then used to calculate the loglikelihood ratio. Doing so, the proposed method takes the feedback corpus as feedback to improve its performance. 7In general, the interpolated probability needs to be normalized to satisfy ƒ…„*†s‡‰ˆ . In our case, however, it is always satisfied without normalization since „ h`j‹Š DFEŒ B CŽ~ „ h`j‹Š DE‘Œ B C’Ž ‡…ˆ and „ l Š DFEŒ B CŽ~ „ l Š DEŒ B CŽ ‡…ˆ are satisfied. 8We tested several bases in the experiments and found there were little difference in performance between them. 245 4 Experiments 4.1 Experimental Conditions A set of essays9 written by Japanese learners of English was used as the target essays in the experiments. It consisted of 47 essays (3180 words) on the topic traveling. A native speaker of English who was a professional rewriter of English recognized 105 target errors in it. The written part of the British National Corpus (BNC) (Burnard, 1995) was used to learn decision lists. Sentences the OAK system10, which was used to extract NPs from the corpus, failed to analyze were excluded. After these operations, the size of the corpus approximately amounted to 80 million words. Hereafter, the corpus will be referred to as the BNC. As another corpus, the English concept explication in the EDR English-Japanese Bilingual dictionary and the EDR corpus (1993) were used; it will be referred to as the EDR corpus, hereafter. Its size amounted to about 3 million words. Performance of the proposed method was evaluated by recall and precision. Recall is defined by No. of target errors detected correctly No. of target errors in the target essays (10) Precision is defined by No. of target errors detected correctly No. of detected errors (11) 4.2 Experimental Procedures First, decision lists for each target noun in the target essays were learned from the BNC11. To extract noun phrases and their head nouns, the OAK system was used. An optimal value for  (window size of context) was estimated as follows. For 25 nouns shown in (Huddleston and Pullum, 2002) as examples of nouns used as both mass and count nouns, accuracy on the BNC was calculated using ten-fold cross validation. As a result of setting small ( M3“5 ), medium ( M3NRTw ), and large ( M3•”(w ) window sizes, it turned out that –3—5 maximized the average accuracy. Following this result, A3Y5 was selected in the experiments. Second, the target nouns were distinguished whether they were mass or count by the learned 9http://www.eng.ritsumei.ac.jp/lcorpus/. 
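The estimates at the core of the feedback-augmented method can be sketched as follows: the smoothed conditional probability and log-likelihood ratio of Section 2.2, the confidence measure of Equations (6) and (7), and the interpolation of Section 3. The exact weighting inside interpolate() is one reading of Equations (8) and (9) that is consistent with the prose (the feedback estimate gains influence with its confidence, and the general corpus stops contributing once the feedback confidence reaches the log of the general-corpus confidence); treat that weighting, like the function names, as an assumption rather than a transcription of the paper.

```python
# A minimal sketch, assuming counts f(w, C) and f(w) are available from the
# feedback corpus and the general corpus for a context word w of the target
# noun. The interpolation formula is a hedged reading of Equations (8)-(9).

import math

ALPHA = 1.0   # smoothing parameter, set to 1 in the paper
K = 2         # number of classes: mass and count

def smoothed_prob(f_w_C, f_w):
    """P(C | w in context) = (f(w, C) + alpha) / (f(w) + K * alpha)."""
    return (f_w_C + ALPHA) / (f_w + K * ALPHA)

def log_likelihood_ratio(f_w_C, f_w):
    """Sort key for decision-list rules: log P(C | w) / P(not-C | w)."""
    p = smoothed_prob(f_w_C, f_w)
    return math.log(p / (1.0 - p))

def confidence(p, n):
    """Reciprocal of the variance of a ratio, p(1 - p) / n."""
    return n / (p * (1.0 - p))

def interpolate(p_fb, n_fb, p_g, n_g):
    """Combine feedback-corpus and general-corpus estimates by confidence."""
    c_fb = confidence(p_fb, n_fb)
    c_g = math.log2(confidence(p_g, n_g))   # log limits the general corpus
    if c_fb >= c_g:
        return p_fb                          # general corpus no longer used
    return (c_fb * p_fb + (c_g - c_fb) * p_g) / c_g

# Example: 1 of 2 feedback occurrences vs. 600 of 1000 general occurrences.
p = interpolate(smoothed_prob(1, 2), 2, smoothed_prob(600, 1000), 1000)
print(p, log_likelihood_ratio(600, 1000))
```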
4 Experiments

4.1 Experimental Conditions

A set of essays written by Japanese learners of English was used as the target essays in the experiments.9 It consisted of 47 essays (3180 words) on the topic of traveling. A native speaker of English who was a professional rewriter of English recognized 105 target errors in it.

The written part of the British National Corpus (BNC) (Burnard, 1995) was used to learn decision lists. Sentences that the OAK system,10 which was used to extract NPs from the corpus, failed to analyze were excluded. After these operations, the size of the corpus amounted to approximately 80 million words. Hereafter, this corpus will be referred to as the BNC. As another corpus, the English concept explications in the EDR English-Japanese Bilingual Dictionary and the EDR corpus (1993) were used; this will be referred to as the EDR corpus hereafter. Its size amounted to about 3 million words.

Performance of the proposed method was evaluated by recall and precision. Recall is defined by

    Recall = (No. of target errors detected correctly) / (No. of target errors in the target essays)    (10)

Precision is defined by

    Precision = (No. of target errors detected correctly) / (No. of detected errors)    (11)

4.2 Experimental Procedures

First, decision lists for each target noun in the target essays were learned from the BNC.11 To extract noun phrases and their head nouns, the OAK system was used. An optimal window size for the context was estimated as follows. For 25 nouns shown in Huddleston and Pullum (2002) as examples of nouns used as both mass and count nouns, accuracy on the BNC was calculated using ten-fold cross-validation. Comparing small, medium, and large window sizes, the small window size maximized the average accuracy, and it was therefore selected in the experiments.

Second, the target nouns were classified as mass or count by the learned decision lists, and then the target errors were detected by applying the detection rules to the mass/count distinction. As preprocessing, spelling errors were corrected using a spell checker. The results of the detection were compared to the errors recognized by the native speaker of English. From the comparison, recall and precision were calculated.

Then, the feedback-augmented method was evaluated on the same target essays. Each target essay in turn was left out, and all the remaining target essays were used as a feedback corpus. The target errors in the left-out essay were detected using the feedback-augmented method. The results of all 47 detections were integrated into one to calculate overall performance. This way of providing feedback can be regarded as using revised essays previously written in one class to detect errors in essays on the same topic written in other classes.

Finally, the above two methods were compared with their seven variants shown in Table 5. "DL" in Table 5 refers to the nine decision-list-based methods (the above two methods and their seven variants). The words in brackets denote the corpora used to learn the decision lists; the symbol "+FB" means that the feedback corpus was simply added to the general corpus. The subscripts FB8 and FB9 indicate that the feedback was done by using Equation (8) and Equation (9), respectively.

In addition to the seven variants, two kinds of earlier methods were used for comparison. One was a rule-based method (Kawai et al., 1984). It judges singular head nouns with no determiner to be erroneous, since missing articles are most common in the writing of Japanese learners of English. In the experiments, this was implemented by treating all nouns as count nouns and applying the same detection rules as in the proposed method to the countability.

The other was a web-based method (Lapata and Keller, 2005) for generating articles.12 It retrieves web counts for queries consisting of the two words preceding the NP that the target noun heads, one of the articles (a/an, the, or no article), and the core NP. All queries are performed as exact matches using quotation marks and submitted to the Google search engine in lower case. For example, in the case of "*She is good student.", it retrieves web counts for "she is a good student", "she is the good student", and "she is good student". Then, it generates the article that maximizes the web counts. We extended it to make it capable of detecting our target errors. First, the singular/plural distinction was taken into account in the queries (e.g., "she is a good students", "she is the good students", and "she is good students" in addition to the above three queries). The one(s) that maximized the web counts was judged to be correct; the rest were judged to be erroneous. Second, if determiners other than the articles modify the head noun, only the distinction between singular and plural was taken into account (e.g., "he has some book" vs. "he has some books"). In the case of "much/many", the target noun in singular form modified by "much" and that in plural form modified by "many" were compared (e.g., "he has much furniture" vs. "he has many furnitures"). Finally, some rules were used to detect literal errors. For example, plural head nouns modified by "this" were judged to be erroneous.

9 http://www.eng.ritsumei.ac.jp/lcorpus/.
10 OAK System Homepage: http://nlp.cs.nyu.edu/oak/.
11 If no instance of the target noun is found in the general corpora (and also in the feedback corpus in the case of the feedback-augmented method), the target noun is ignored in the error detection procedure.
12 There are other statistical methods that could be used for comparison, including Lee (2004) and Minnen (2000). Lapata and Keller (2005) report that the web-based method is the best-performing article generation method.
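As a rough illustration of how the six queries for the web-based baseline just described could be assembled, consider the sketch below. The helper and its argument names are hypothetical; it simplifies a/an selection and does not reproduce Lapata and Keller's (2005) actual implementation.

```python
def build_queries(two_preceding_words, core_np_singular, core_np_plural):
    """Return the exact-match queries: {a, the, no article} x {singular, plural}.
    The a/an distinction is deliberately simplified in this sketch."""
    articles = ["a", "the", ""]          # "" = zero article
    queries = []
    for np in (core_np_singular, core_np_plural):
        for art in articles:
            words = two_preceding_words + ([art] if art else []) + [np]
            queries.append('"%s"' % " ".join(words).lower())
    return queries

# e.g. for "*She is good student."
for q in build_queries(["she", "is"], "good student", "good students"):
    print(q)
```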
4.3 Experimental Results and Discussion

Table 5 shows the experimental results. "Rule-based" and "Web-based" in Table 5 refer to the rule-based method and the web-based method, respectively. The other symbols are as explained in Section 4.2.

Table 5: Experimental results

Method        | Recall | Precision
DL (BNC)      | 0.66   | 0.65
DL (BNC+FB)   | 0.66   | 0.65
DL_FB8 (BNC)  | 0.66   | 0.65
DL_FB9 (BNC)  | 0.69   | 0.70
DL (EDR)      | 0.70   | 0.68
DL (EDR+FB)   | 0.71   | 0.69
DL_FB8 (EDR)  | 0.71   | 0.70
DL_FB9 (EDR)  | 0.71   | 0.72
DL (FB)       | 0.43   | 0.76
Rule-based    | 0.59   | 0.39
Web-based     | 0.49   | 0.53

As we can see from Table 5, all the decision-list-based methods outperform the earlier methods. The rule-based method treated all nouns as count nouns, and thus it did not work well at all on mass nouns. This caused a lot of false positives and false negatives. The web-based method suffered a lot from errors other than the target errors, since it implicitly assumed that there were no errors except the target errors. Contrary to this assumption, the target essays contained not only the target errors but also other errors, since they were written by Japanese learners of English. This indicates that the queries often contained these other errors when web counts were retrieved. These errors made the web counts useless, and thus the method did not perform well. By contrast, the decision-list-based methods did, because they distinguished mass and count nouns by the word around the target noun that was most likely to be effective according to the log-likelihood ratio;13 the best-performing decision-list-based method (DL_FB9 (EDR)) is significantly superior to the best-performing14 non-decision-list-based method (Web-based) in both recall and precision at the 99% confidence level.

Table 5 also shows that the feedback-augmented methods benefit from feedback. The only exception is DL_FB8 (BNC). The reason is that the size of the BNC is far larger than that of the feedback corpus, and thus the feedback did not affect the performance. This also explains why simply adding the feedback corpus to the general corpus achieved little or no improvement, as "DL (EDR+FB)" and "DL (BNC+FB)" show. Unlike these, both DL_FB9 (BNC) and DL_FB9 (EDR) benefit from feedback, since the effect of the general corpus is limited to some extent by the log function in Equation (9). Because of this, both benefit from feedback despite the differences in size between the feedback corpus and the general corpus.

Although the experimental results have shown that the feedback-augmented method is effective in detecting the target errors in the writing of Japanese learners of English, even the best-performing method (DL_FB9 (EDR)) made 30 false negatives and 29 false positives. About 70% of the false negatives were errors that required sources of information other than the mass/count distinction to be detected. For example, extra definite articles (e.g., *the traveling) cannot be detected even if the correct mass/count distinction is given. Thus, only a little improvement in recall is expected however much feedback corpus data become available. On the other hand, most of the false positives were due to the decision lists themselves. Considering this, it is highly possible that precision will improve as the size of the feedback corpus increases.

13 Indeed, words around the target noun were effective. The default rules were used about 60% and 30% of the time in "DL (EDR)" and "DL (BNC)", respectively; when only the default rules were used, "DL (EDR)" ("DL (BNC)") achieved 0.66 (0.56) in recall and 0.58 (0.53) in precision.
14 "Best performing" here means best performing in terms of F-measure.
5 Conclusions

This paper has proposed a feedback-augmented method for distinguishing mass and count nouns to complement the conventional rules for detecting grammatical errors. The experiments have shown that the proposed method detected 71% of the target errors in the writing of Japanese learners of English with a precision of 72% when it was augmented by feedback. From these results, we conclude that the feedback-augmented method is effective in detecting errors concerning the articles and singular/plural usage in the writing of Japanese learners of English.

Although it is not taken into account in this paper, the feedback corpus contains further useful information. For example, we can obtain training data consisting of instances of errors by comparing the feedback corpus with its uncorrected original. Also, by comparing it with the results of detection, we can measure the performance of each rule used in the detection, which makes it possible to increase or decrease their log-likelihood ratios according to their performance. We will investigate how to exploit these sources of information in future work.

Acknowledgments

The authors would like to thank Sekine Satoshi, who has developed the OAK System. The authors would also like to thank the three anonymous reviewers for their useful comments on this paper.
References

K. Allan. 1980. Nouns and countability. Language (Journal of the Linguistic Society of America), 56(3):541-567.
F. Bond. 2005. Translating the Untranslatable. CSLI Publications, Stanford.
L. Burnard. 1995. Users Reference Guide for the British National Corpus, version 1.0. Oxford University Computing Services, Oxford.
M. Chodorow and C. Leacock. 2000. An unsupervised method for detecting grammatical errors. In Proc. of 1st Meeting of the North American Chapter of ACL, pages 140-147.
Japan Electronic Dictionary Research Institute, Ltd. 1993. EDR electronic dictionary specifications guide. Japan Electronic Dictionary Research Institute, Ltd., Tokyo.
S. Granger. 1998. Prefabricated patterns in advanced EFL writing: collocations and formulae. In A. P. Cowie, editor, Phraseology: Theory, Analysis, and Applications, pages 145-160. Clarendon Press.
R. Huddleston and G. K. Pullum. 2002. The Cambridge Grammar of the English Language. Cambridge University Press, Cambridge.
E. Izumi, K. Uchimoto, T. Saiga, T. Supnithi, and H. Isahara. 2003. Automatic error detection in the Japanese learners' English spoken data. In Proc. of 41st Annual Meeting of ACL, pages 145-148.
A. Kawai, K. Sugihara, and N. Sugie. 1984. ASPEC-I: An error detection system for English composition. IPSJ Journal (in Japanese), 25(6):1072-1079.
M. Lapata and F. Keller. 2005. Web-based models for natural language processing. ACM Transactions on Speech and Language Processing, 2(1):1-31.
J. Lee. 2004. Automatic article restoration. In Proc. of the Human Language Technology Conference of the North American Chapter of ACL, pages 31-36.
K. F. McCoy, C. A. Pennington, and L. Z. Suri. 1996. English error correction: A syntactic user model based on principled "mal-rule" scoring. In Proc. of 5th International Conference on User Modeling, pages 69-66.
G. Minnen, F. Bond, and A. Copestake. 2000. Memory-based learning for article generation. In Proc. of CoNLL-2000 and LLL-2000 Workshop, pages 43-48.
N. Ostler and B. T. S. Atkins. 1991. Predictable meaning shift: Some linguistic properties of lexical implication rules. In Proc. of 1st SIGLEX Workshop on Lexical Semantics and Knowledge Representation, pages 87-100.
D. Schneider and K. F. McCoy. 1998. Recognizing syntactic errors in the writing of second language learners. In Proc. of 17th International Conference on Computational Linguistics, pages 1198-1205.
Y. Tanaka. 1977. Psychological Methods (in Japanese). University of Tokyo Press.
C. Tschichold, F. Bodmer, E. Cornu, F. Grosjean, L. Grosjean, N. Kübler, N. Lewy, and C. Tschumi. 1997. Developing a new grammar checker for English as a second language. In Proc. of the From Research to Commercial Applications Workshop, pages 7-12.
D. Yarowsky. 1995. Unsupervised word sense disambiguation rivaling supervised methods. In Proc. of 33rd Annual Meeting of ACL, pages 189-196.
D. Yarowsky. 1996. Homograph Disambiguation in Speech Synthesis. Springer-Verlag.
Proceedings of the 21st International Conference on Computational Linguistics and 44th Annual Meeting of the ACL, pages 249-256, Sydney, July 2006. © 2006 Association for Computational Linguistics

Correcting ESL Errors Using Phrasal SMT Techniques

Chris Brockett, William B. Dolan, and Michael Gamon
Natural Language Processing Group, Microsoft Research
One Microsoft Way, Redmond, WA 98005, USA
{chrisbkt,billdol,mgamon}@microsoft.com

Abstract

This paper presents a pilot study of the use of phrasal Statistical Machine Translation (SMT) techniques to identify and correct writing errors made by learners of English as a Second Language (ESL). Using examples of mass noun errors found in the Chinese Learner Error Corpus (CLEC) to guide creation of an engineered training set, we show that application of the SMT paradigm can capture errors not well addressed by widely-used proofing tools designed for native speakers. Our system was able to correct 61.81% of mistakes in a set of naturally occurring examples of mass noun errors found on the World Wide Web, suggesting that efforts to collect alignable corpora of pre- and post-editing ESL writing samples can enable the development of SMT-based writing assistance tools capable of repairing many of the complex syntactic and lexical problems found in the writing of ESL learners.

1 Introduction

Every day, in schools, universities and businesses around the world, in email and on blogs and websites, people create texts in languages that are not their own, most notably English. Yet, for writers of English as a Second Language (ESL), useful editorial assistance geared to their needs is surprisingly hard to come by. Grammar checkers such as that provided in Microsoft Word have been designed primarily with native speakers in mind. Moreover, despite growing demand for ESL proofing tools, there has been remarkably little progress in this area over the last decade. Research into computer feedback for ESL writers remains largely focused on small-scale pedagogical systems implemented within the framework of CALL (Computer Aided Language Learning) (Reuer 2003; Vandeventer Faltin, 2003), while commercial ESL grammar checkers remain brittle and difficult to customize to meet the needs of ESL writers of different first-language (L1) backgrounds and skill levels. Some researchers have begun to apply statistical techniques to identify learner errors in the context of essay evaluation (Chodorow & Leacock, 2000; Lonsdale & Strong-Krause, 2003), to detect non-native text (Tomokiyo & Jones, 2001), and to support lexical selection by ESL learners through first-language translation (Liu et al., 2000). However, none of this work appears to directly address the more general problem of how to robustly provide feedback to ESL writers—and for that matter non-native writers in any second language—in a way that is easily tailored to different L1 backgrounds and second-language (L2) skill levels. In this paper, we show that a noisy channel model instantiated within the paradigm of Statistical Machine Translation (SMT) (Brown et al., 1993) can successfully provide editorial assistance for non-native writers. In particular, the SMT approach provides a natural mechanism for suggesting a correction, rather than simply stranding the user with a flag indicating that the text contains an error. Section 2 further motivates the approach and briefly describes our SMT system.
Section 3 discusses the data used in our experiment, which is aimed at repairing a common type of ESL error that is not well-handled by current grammar checking technology: mass/count noun confusions. Section 4 presents experimental results, along with an analysis of errors produced by the system. Finally, we present discussion and some future directions for investigation.

2 Error Correction as SMT

2.1 Beyond Grammar Checking

A major difficulty for ESL proofing is that errors of grammar, lexical choice, idiomaticity, and style rarely occur in isolation. Instead, any given sentence produced by an ESL learner may involve a complex combination of all these error types. It is difficult enough to design a proofing tool that can reliably correct individual errors; the simultaneous combination of multiple errors is beyond the capabilities of current proofing tools designed for native speakers. Consider the following example, written by a Korean speaker and found on the World Wide Web, which involves the misapplication of countability to a mass noun:

And I knew many informations about Christmas while I was preparing this article.

The grammar and spelling checkers in Microsoft Word 2003 correctly suggest many → much and informations → information. Accepting these proposed changes, however, does not render the sentence entirely native-like. Substituting the word much for many leaves the sentence stilted in a way that is probably undetectable to an inexperienced non-native speaker, while the use of the word knew represents a lexical selection error that falls well outside the scope of conventional proofing tools. A better rewrite might be:

And I learned a lot of information about Christmas while I was preparing this article.

or, even more colloquially:

And I learned a lot about Christmas while I was preparing this article.

Repairing the error in the original sentence, then, is not a simple matter of fixing an agreement marker or substituting one determiner for another. Instead, wholesale replacement of the phrase knew many informations with the phrase learned a lot is needed to produce idiomatic-sounding output. Seen in these terms, the process of mapping from a raw, ESL-authored string to its colloquial equivalent looks remarkably like translation. Our goal is to show that providing editorial assistance for writers should be viewed as a special case of translation. Rather than learning how strings in one language map to strings in another, however, "translation" now involves learning how systematic patterns of errors in ESL learners' English map to corresponding patterns in native English.

2.2 A Noisy Channel Model of ESL Errors

If ESL error correction is seen as a translation task, the task can be treated as an SMT problem using the noisy channel model of (Brown et al., 1993): here the L2 sentence produced by the learner can be regarded as having been corrupted by noise in the form of interference from his or her L1 model and incomplete language models internalized during language learning. The task, then, is to reconstruct a corresponding valid sentence of L2 (target).
Accordingly, we can seek to probabilistically identify the optimal correct target sentence(s) T* of an ESL input sentence S by applying the familiar SMT formula:

    T* = argmax_T P(T | S) = argmax_T P(S | T) P(T)

In the context of this model, editorial assistance becomes a matter of identifying those segments of the optimal target sentence or sentences that differ from the writer's original input and displaying them to the user. In practice, the patterns of errors produced by ESL writers of specific L1 backgrounds can be captured in the channel model as an emergent property of training data consisting of ESL sentences aligned with their corrected, edited counterparts. The highest frequency errors and infelicities should emerge as targets for replacement, while lesser frequency or idiosyncratic problems will in general not surface as false flags.

2.3 Implementation

In this paper, we explore the use of a large-scale production statistical machine translation system to correct a class of ESL errors. A detailed description of the system can be found in (Menezes & Quirk 2005) and (Quirk et al., 2005). In keeping with current best practices in SMT, our system is a phrasal machine translation system that attempts to learn mappings between "phrases" (which may not correspond to linguistic units) rather than individual words. What distinguishes this system from other phrasal SMT systems is that rather than aligning simple sequences of words, it maps small phrasal "treelets" generated by a dependency parse to corresponding strings in the target. This "Tree-To-String" model holds promise in that it allows us to potentially benefit from being able to access a certain amount of structural information during translation, without necessarily being completely tied to the need for a fully well-formed linguistic analysis of the input—an important consideration when it is sought to handle ungrammatical or otherwise ill-formed ESL input, but also simultaneously to capture relationships not involving contiguous strings, for example determiner-noun relations. In our pilot study, this system was employed without modification to the system architecture. The sole adjustment made was to have both Source (erroneous) and Target (correct) sentences tokenized using an English language tokenizer. N-best results for phrasal alignment and ordering models in the decoder were optimized by lambda training via Maximum Bleu, along the lines described in (Och, 2003).

3 Data Development

3.1 Identifying Mass Nouns

In this paper, we focus on countability errors associated with mass nouns. This class of errors (involving nouns that cannot be counted, such as information, pollution, and homework) is characteristically encountered in ESL writing by native speakers of several East Asian languages (Dalgish, 1983; Hua & Lee, 2004).1 We began by identifying a list of English nouns that are frequently involved in mass/count errors in writing by Chinese ESL learners, by taking the intersection of words which:

• occurred in either the Longman Dictionary of Contemporary English or the American Heritage Dictionary with a mass sense
• were involved in n ≥ 2 mass/count errors in the Chinese Learner English Corpus CLEC (Gui and Yang, 2003), either tagged as a mass noun error or else with an adjacent tag indicating an article error.2

1 These constructions are also problematic for handcrafted MT systems (Bond et al., 1994).
2 CLEC tagging is not comprehensive; some common mass noun errors (e.g., make a good progress) are not tagged in this corpus.
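The intersection step just described might look roughly like the following sketch. The dictionary set and error counts shown are toy placeholders, not the authors' actual resources, and the function name is ours.

```python
def select_target_nouns(mass_sense_nouns, clec_error_counts, min_errors=2):
    """Intersect nouns that have a dictionary mass sense with nouns that have at least
    min_errors tagged mass/count (or adjacent article) errors in the learner corpus."""
    return sorted(n for n, k in clec_error_counts.items()
                  if k >= min_errors and n in mass_sense_nouns)

# toy illustration only; the paper additionally removed items with a function-word
# sense (e.g., "will") in a manual step not shown here
mass_sense = {"information", "advice", "equipment", "will"}
errors = {"information": 7, "advice": 3, "will": 5, "computer": 4}
print(select_target_nouns(mass_sense, errors))  # ['advice', 'information', 'will']
```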
This procedure yielded a list of 14 words: knowledge, food, homework, fruit, news, color, nutrition, equipment, paper, advice, haste, information, lunch, and tea.3 Countability errors involving these words are scattered across 46 sentences in the CLEC corpus. For a baseline representing the level of writing assistance currently available to the average ESL writer, we submitted these sentences to the proofing tools in Microsoft Word 2003. The spelling and grammar checkers correctly identified 21 of the 46 relevant errors, proposed one incorrect substitution (a few advice → a few advices), and failed to flag the remaining 25 errors. With one exception, the proofing tools successfully detected as spelling errors incorrect plurals on lexical items that permit only mass noun interpretations (e.g., informations), but ignored plural forms like fruits and papers even when contextually inappropriate. The proofing tools in Word 2003 also detected singular determiner mismatches with obligatory plural forms (e.g. a news).

3 Terms that also had a function word sense, such as will, were eliminated for this experiment.

3.2 Training Data

The errors identified in these sentences provided an informal template for engineering the data in our training set, which was created by manipulating well-formed, edited English sentences. Raw data came from a corpus of ~484.6 million words of Reuters Limited newswire articles, released between 1995 and 1998, combined with a ~7,175,000-word collection of articles from multiple news sources from 2004-2005. The resulting dataset was large enough to ensure that all targeted forms occurred with some frequency. From this dataset we culled about 346,000 sentences containing examples of the 14 targeted words. We then used hand-constructed regular expressions to convert these sentences into mostly-ungrammatical strings that exhibited characteristics of the CLEC data, for example:

• much → many: much advice → many advice
• some → a/an: some advice → an advice
• conversions to plurals: much good advice → many good advices
• deletion of counters: piece(s)/item(s)/sheet(s) of
• insertion of determiners

These were produced in multiple combinations for broad coverage, for example:

I'm not trying to give you legal advice. →
• I'm not trying to give you a legal advice.
• I'm not trying to give you the legal advice.
• I'm not trying to give you the legal advices.

A total of 24128 sentences from the news data were "lesioned" in this manner to create a set of 65826 sentence pairs. To create a balanced training set that would not introduce too many artifacts of the substitution (e.g., many should not always be recast as much just because that is the only mapping observed in the training data), we randomly created an equivalent number of identity-mapped pairs from the 346,000 examples, with each sentence mapping to itself. Training sets of various sizes up to 45,000 pairs were then randomly extracted from the lesioned and non-lesioned pairs so that data from both sets occurred in roughly equal proportions. Thus the 45K data set contains approximately 22,500 lesioned examples. An additional 1,000 randomly selected lesioned sentences were set aside for lambda training the SMT system's ordering and replacement models.
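The following sketch suggests what such regular-expression "lesioning" might look like. The patterns are illustrative assumptions only; the paper's hand-constructed expressions are not reproduced here.

```python
import re

# Each rule rewrites a well-formed string into a plausible learner-style error.
LESION_RULES = [
    (re.compile(r"\bmuch (\w+ )?advice\b"),
     lambda m: "many " + (m.group(1) or "") + "advices"),          # much (good) advice -> many (good) advices
    (re.compile(r"\bsome advice\b"), lambda m: "an advice"),        # some -> a/an
    (re.compile(r"\b(piece|pieces|item|items|sheet|sheets) of "),
     lambda m: ""),                                                 # deletion of counters
]

def lesion(sentence):
    """Apply each rule independently, yielding zero or more 'lesioned' variants."""
    variants = []
    for pattern, repl in LESION_RULES:
        damaged, n = pattern.subn(repl, sentence)
        if n:
            variants.append(damaged)
    return variants

print(lesion("I'm not trying to give you a piece of legal advice."))
print(lesion("She gave me much good advice."))
```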
4 Evaluation

4.1 Test Data

The amount of tagged data in CLEC is too small to yield both development and test sets from the same data. In order to create a test set, we had a third party collect 150 examples of the 14 words from English websites in China. After minor cleanup to eliminate sentences irrelevant to the task,4 we ended up with 123 example sentences to use as the test set. The test examples vary widely in style, from the highly casual to more formal public announcements. Thirteen examples were determined to contain no errors relevant to our experiment, but were retained in the data.5

4 In addition to eliminating cases that only involved subject-verb number agreement, we excluded a small amount of spam-like word salad, several instances of the word homework being misused to mean "work done out of the home", and one misidentified quotation from Scott's Ivanhoe.
5 This test set may be downloaded at http://research.microsoft.com/research/downloads

4.2 Results

Table 1 shows per-sentence results of translating the test set on systems built with training data sets of various sizes (given in thousands of sentence pairs). Numbers for the proofing tools in Word 2003 are presented by way of comparison, with the caveat that these tools have been intentionally implemented conservatively so as not to potentially irritate native users with false flags.

Table 1. Replacement percentages (per sentence basis) using different training data sets

Data Size | Whole | Partial | Correctly Left | New Error | Missed | Word Order Error
45K       | 55.28 | 0.81    | 8.13           | 12.20     | 21.14  | 1.63
30K       | 36.59 | 4.07    | 7.32           | 16.26     | 32.52  | 3.25
15K       | 47.15 | 2.44    | 5.69           | 11.38     | 29.27  | 4.07
cf. Word  | 29.27 | 0.81    | 10.57          | 1.63      | 57.72  | N/A

For our purposes, a replacement string is viewed as correct if, in the view of a native speaker who might be helping an ESL writer, the replacement would appear more natural and hence potentially useful as a suggestion in the context of that sentence taken in isolation. Number disagreement between subject and verb was ignored for the purposes of this evaluation, since these errors were not modeled when we introduced lesions into the data. A correction counted as Whole if the system produced a contextually plausible substitution meeting two criteria: 1) number and 2) determiner/quantifier selection (e.g., many informations → much information). Transformations involving bare singular targets (e.g., the fruits → fruit) also counted as Whole. Partial corrections are those where only one of the two criteria was met and part of the desired correction was missing (e.g., an equipments → an equipment versus the targeted bare noun equipment).
This makes it difficult to set a meaningful lower bound on the amount of training data that might be needed for adequate coverage. Nonetheless, it is evident from the table, that given sufficient data, SMT techniques can successfully offer corrections for a significant percentage of cases of the phenomena in question. Table 2 shows some sample inputs together with successful corrections made by the system. Table 3 illustrates a case where two valid corrections are found in the 5-best ranked translations; intervening candidates were identical with the top-ranked candidate. 4.3 Error Analysis Table 1 also indicates that errors associated with the SMT system itself are encouragingly few. A small number of errors in word order were found, one of which resulted in a severely garbled sentence in the 45K data set. In general, the percentage of this type of error declines consistently with growth of the training data size. Linearity of the training data may play a role, since the sentence pairs differ by only a few words. On the whole, however, we expect the system’s order model to benefit from more training data. The most frequent single class of newly introduced error relates to sporadic substitution of the word their for determiners a/the. This is associated with three words, lunch, tea, and haste, and is the principal contributor to the lower percentages in the Correctly Left bin, as compared with Word. This overgeneralization error reflects our attempt to engineer the discontinuous mapping the X of them Æ their X, motivated by examples like the following, encountered in the CLEC dataset: Input Shanghai residents can buy the fruits for a cheaper price than before. Replacement Shanghai residents can buy fruit for a cheaper price than before . Input Thank u for giving me so many advice. Replacement thank u for giving me so much advice . Input Acquiring the knowledge of information warfare is key to winning wars Replacement acquiring knowledge of information warfare is key to winning wars Input Many knowledge about Li Bai can be gain through it. Replacement much knowledge about Li Bai can be gain through it . Input I especially like drinking the tea. Replacement i especially like drinking tea . Input Icons printed on a paper have been brought from Europe, and were pasted on boards on Taiwan. Replacement icons printed on paper have been brought from Europe , and were pasted on boards on Taiwan . Table 2. Sample corrections, using 45K engineered training data 253 In this equal world, lots of people are still concerned on the colors of them … The inability of our translation system to handle such discontinuities in a unitary manner reflects the limited ability of current SMT modeling techniques to capture long-distance effects. Similar alternations are rife in bilingual data, e.g., ne…pas in French (Fox, 2002) and separable prefixes in German (Collins et al. 2005). As SMT models become more adept at modeling long-distance effects in a principled manner, monolingual proofing will benefit as well. The Missed category is heterogeneous. The SMT system has an inherent bias against deletion, with the result that unwanted determiners tended not to be deleted, especially in the smaller training sets. Other errors related to coverage in the development data set. Several occurrences of greengrocer’s apostrophes (tea’s, equipment’s) caused correction failures: these were not anticipated when engineering the training data. 
Likewise, the test data presented several malformed quantifiers and quantifier-like phrases (plenty tea → plenty of tea, a lot information → a lot of information, few information → too little information) that had been unattested in the development set. Examples such as these highlight the difficulty in obtaining complete coverage when using handcrafted techniques, whether to engineer errors, as in our case, or to handcraft targeted correction solutions.

The system performed poorly on words that commonly present both mass and count noun senses in ways that are apt to confuse L2 writers. One problematic case was paper. The following sentences, for example, remained uncorrected:

He published many paper in provincial and national publication.
He has published thirty-two pieces of papers.

Large amounts of additional training data would doubtless be helpful in providing contextual resolutions to these problems. Improved alignment models may also play a role here in capturing complex structures of the kind represented by constructions involving counters.

5 Discussion

The artificially-engineered training data that we relied on for our experiments proved surprisingly useful in modeling real errors made by non-native speakers. However, this is obviously a less than ideal data source, since the errors introduced by regular expressions are homogeneously distributed in a way that naturally-occurring errors are not, creating artifacts that undoubtedly impair our SMT models. Artificial data of this sort may be useful as proof of concept, but hand engineering such data plainly does not present a viable path to developing real world applications. In order to be able to handle the rich panoply of errors and error interactions encountered in the text of second language learners, large quantities of naturally-occurring "before" and "after" texts will need to be collected.

By way of illustration, Table 4 shows the results of "translating" our test data into more natural English by hand and dumping the pre- and post-editing pairs into the 45K training set.6 Although we were unable to exactly recover the target sentences, inspection showed that 25 sentences had improved, some significantly, as Table 4 shows. Under the right conditions, the SMT system can capture contextual morphological alternations (nutrition/nutritious), together with complex mappings represented by the dependencies learn → knowledge → many (ESL) and gain → knowledge → a lot of (English).

6 Since a single example of each pair was insufficient to override the system's inherent bias towards unigram mappings, 5 copies of each pair were appended to the training data.

Table 4. Contextual corrections before and after adding "translations" to 45K training data

Input sentence:                  And we can learn many knowledge or new information from TV.
45K system output:               and we can learn much knowledge or new information from TV .
45K + translation system output: we can gain a lot of knowledge or new information from TV .
Input sentence:                  The following is one of the homework for last week.
45K system output:               the following is one of their homework for last week .
45K + translation system output: the following is one of the homework assignments for last week .
Input sentence:                  i like mushroom,its very nutrition
45K system output:               i like mushroom , its very nutrition
45K + translation system output: i like mushroom , its very nutritious
As a practical matter, however, parallel data of the kind needed is far from easy to come by. This does not mean, however, that such data does not exist. The void left by commercial grammar checkers is filled, largely unobserved, by a number of services that provide editorial assistance, ranging from foreign language teachers, to language helpdesks in multinational corporations, to mentoring services for conferences. Translation bureaus frequently offer editing services for nonnative speakers. Yet, unlike translation, the “before” and “after” texts are rarely recycled in a form that can be used to build translation models. Although collecting this data will involve a large investment in time, effort, and infrastructure, a serious effort along these lines is likely to prove fruitful in terms of making it possible to apply the SMT paradigm to ESL error correction. 5.2 Feedback to SMT One challenge faced by the SMT model is the extremely high quality that will need to be attained before a system might be usable. Since it is highly undesirable that learners should be presented with inaccurate feedback that they may not have the experience or knowledge to assess, the quality bar imposed on error correction is far higher than is that tolerated in machine translation. Exploration of error correction and writing assistance using SMT models may thus prove an important venue for testing new SMT models. 5.3 Advantages of the SMT Approach Statistical Machine Translation has provided a hugely successful research paradigm within the field of natural language processing over the last decade. One of the major advantages of using SMT in ESL writing assistance is that it can be expected to benefit automatically from any progress made in SMT itself. In fact, the approach presented here benefits from all the advantages of statistical machine translation. Since the architecture is not dependent on hard-to-maintain rules or regular expressions, little or no linguistic expertise will be required in developing and maintain applications. As with SMT, this expertise is pushed into the data component, to be handled by instructors and editors, who do not need programming or scripting skills. We expect it to be possible, moreover, once parallel data becomes available, to quickly ramp up new systems to accommodate the needs of Input sentence And we can learn many knowledge or new information from TV. 45K system output and we can learn much knowledge or new information from TV . 45K + translation system output we can gain a lot of knowledge or new information from TV . Input sentence The following is one of the homework for last week. 45K system output the following is one of their homework for last week . 45K + translation system output the following is one of the homework assignments for last week . Input sentence i like mushroom,its very nutrition 45K system output i like mushroom , its very nutrition 45K + translation system output i like mushroom , its very nutritious Table 4. Contextual corrections before and after adding “translations” to 45K training data 255 learners with different first-language backgrounds and different skill levels and to writing assistance for learners of L2s other than English. It is also likely that this architecture may have applications in pedagogical environments and as a tool to assist editors and instructors who deal regularly with ESL texts, much in the manner of either Human Assisted Machine Translation or Machine Assisted Human Translation. 
We also believe that this same architecture could be extended naturally to provide grammar and style tools for native writers.

6 Conclusion and Future Directions

In this pilot study we have shown that SMT techniques have potential to provide error correction and stylistic writing assistance to L2 learners. The next step will be to obtain a large dataset of pre- and post-editing ESL text with which to train a model that does not rely on engineered data. A major purpose of the present study has been to determine whether our hypothesis is robust enough to warrant the cost and effort of a collection or data creation effort. Although we anticipate that it will take a significant lead time to assemble the necessary aligned data, once a sufficiently large corpus is in hand, we expect to begin exploring ways to improve our SMT system by tailoring it more specifically to the demands of editorial assistance. In particular, we expect to be looking into alternative word alignment models and possibly enhancing our system's decoder using some of the richer, more structured language models that are beginning to emerge.

Acknowledgements

The authors have benefited extensively from discussions with Casey Whitelaw when he interned at Microsoft Research during the summer of 2005. We also thank the Butler Hill Group for collecting the examples in our test set.

References

Francis Bond, Kentaro Ogura and Satoru Ikehara. 1994. Countability and Number in Japanese-to-English Machine Translation. COLING-94.
Peter E. Brown, Stephen A. Della Pietra, Robert L. Mercer, and Vincent J. Della Pietra. 1993. The Mathematics of Statistical Machine Translation. Computational Linguistics, 19(2): 263-311.
Martin Chodorow and Claudia Leacock. 2000. An Unsupervised Method for Detecting Grammatical Errors. NAACL 2000.
Michael Collins, Philipp Koehn and Ivona Kučerová. 2005. Clause Restructuring for Statistical Machine Translation. ACL 2005, 531-540.
Gerard M. Dalgish. 1984. Computer-Assisted ESL Research. CALICO Journal, 2(2): 32-33.
Heidi J. Fox. 2002. Phrasal Cohesion and Statistical Machine Translation. EMNLP 2002.
Shicun Gui and Huizhong Yang (eds). 2003. Zhongguo Xuexizhe Yingyu Yuliaoku (Chinese Learner English Corpus). Shanghai: Shanghai Waiyu Jiaoyu Chubanshe. (In Chinese).
Hua Dongfan and Thomas Hun-Tak Lee. 2004. Chinese ESL Learners' Understanding of the English Count-Mass Distinction. In Proceedings of the 7th Generative Approaches to Second Language Acquisition Conference (GASLA 2004).
Ting Liu, Ming Zhou, Jianfeng Gao, Endong Xun, and Changning Huang. 2000. PENS: A Machine-aided English Writing System for Chinese Users. ACL 2000.
Deryle Lonsdale and Diane Strong-Krause. 2003. Automated Rating of ESL Essays. In Proceedings of the HLT/NAACL Workshop: Building Educational Applications Using Natural Language Processing.
Arul Menezes and Chris Quirk. 2005. Microsoft Research Treelet Translation System: IWSLT Evaluation. Proceedings of the International Workshop on Spoken Language Translation.
Franz Josef Och. 2003. Minimum error rate training in statistical machine translation. ACL 2003.
Franz Josef Och and Hermann Ney. 2000. Improved Statistical Alignment Models. ACL 2000.
Chris Quirk, Arul Menezes, and Colin Cherry. 2005. Dependency Tree Translation: Syntactically Informed Phrasal SMT. ACL 2005.
Veit Reuer. 2003. Error Recognition and Feedback with Lexical Functional Grammar. CALICO Journal, 20(3): 497-512.
Laura Mayfield Tomokiyo and Rosie Jones. 2001. You're not from round here, are you? Naive Bayes Detection of Non-Native Utterance Text. NAACL 2001.
Anne Vandeventer Faltin. 2003. Natural language processing tools for computer assisted language learning. Linguistik online, 17(5/03). (http://www.linguistik-online.de/17_03/vandeventer.html)
Proceedings of the 21st International Conference on Computational Linguistics and 44th Annual Meeting of the ACL, pages 257-264, Sydney, July 2006. © 2006 Association for Computational Linguistics

Graph Transformations in Data-Driven Dependency Parsing

Jens Nilsson, Växjö University, [email protected]
Joakim Nivre, Växjö University and Uppsala University, [email protected]
Johan Hall, Växjö University, [email protected]

Abstract

Transforming syntactic representations in order to improve parsing accuracy has been exploited successfully in statistical parsing systems using constituency-based representations. In this paper, we show that similar transformations can give substantial improvements also in data-driven dependency parsing. Experiments on the Prague Dependency Treebank show that systematic transformations of coordinate structures and verb groups result in a 10% error reduction for a deterministic data-driven dependency parser. Combining these transformations with previously proposed techniques for recovering non-projective dependencies leads to state-of-the-art accuracy for the given data set.

1 Introduction

It has become increasingly clear that the choice of suitable internal representations can be a very important factor in data-driven approaches to syntactic parsing, and that accuracy can often be improved by internal transformations of a given kind of representation. This is well illustrated by the Collins parser (Collins, 1997; Collins, 1999), scrutinized by Bikel (2004), where several transformations are applied in order to improve the analysis of noun phrases, coordination and punctuation. Other examples can be found in the work of Johnson (1998) and Klein and Manning (2003), which show that well-chosen transformations of syntactic representations can greatly improve the parsing accuracy obtained with probabilistic context-free grammars.

In this paper, we apply essentially the same techniques to data-driven dependency parsing, specifically targeting the analysis of coordination and verb groups, two very common constructions that pose special problems for dependency-based approaches. The basic idea is that we can facilitate learning by transforming the training data for the parser and that we can subsequently recover the original representations by applying an inverse transformation to the parser's output.

The data used in the experiments come from the Prague Dependency Treebank (PDT) (Hajič, 1998; Hajič et al., 2001), the largest available dependency treebank, annotated according to the theory of Functional Generative Description (FGD) (Sgall et al., 1986). The parser used is MaltParser (Nivre and Hall, 2005; Nivre et al., 2006), a freely available system that combines a deterministic parsing strategy with discriminative classifiers for predicting the next parser action.

The paper is structured as follows. Section 2 provides the necessary background, including a definition of dependency graphs, a discussion of different approaches to the analysis of coordination and verb groups in dependency grammar, as well as brief descriptions of PDT, MaltParser and some related work. Section 3 introduces a set of dependency graph transformations, specifically defined to deal with the dependency annotation found in PDT, which are experimentally evaluated in section 4.
While the experiments reported in section 4.1 deal with pure treebank transformations, in order to establish an upper bound on what can be achieved in parsing, the experiments presented in section 4.2 examine the effects of different transformations on parsing accuracy. Finally, in section 4.3, we combine these transformations with previously proposed techniques in order to optimize overall parsing accuracy. We conclude in section 5.

2 Background

2.1 Dependency Graphs

The basic idea in dependency parsing is that the syntactic analysis consists in establishing typed, binary relations, called dependencies, between the words of a sentence. This kind of analysis can be represented by a labeled directed graph, defined as follows:

• Let R = {r1, . . . , rm} be a set of dependency types (arc labels).
• A dependency graph for a string of words W = w1 . . . wn is a labeled directed graph G = (W, A), where:
  - W is the set of nodes, i.e. word tokens in the input string, ordered by a linear precedence relation <.
  - A is a set of labeled arcs (wi, r, wj), wi, wj ∈ W, r ∈ R.
• A dependency graph G = (W, A) is well-formed iff it is acyclic and no node has an in-degree greater than 1.

We will use the notation wi →r wj to symbolize that (wi, r, wj) ∈ A, where wi is referred to as the head and wj as the dependent. We say that an arc is projective iff, for every word wj occurring between wi and wk (i.e., wi < wj < wk or wi > wj > wk), there is a path from wi to wj. A graph is projective iff all its arcs are projective. Figure 1 shows a well-formed (projective) dependency graph for a sentence from the Prague Dependency Treebank.

2.2 Coordination and Verb Groups

Dependency grammar assumes that syntactic structure consists of lexical nodes linked by binary dependencies. Dependency theories are thus best suited for binary syntactic constructions, where one element can clearly be distinguished as the syntactic head. The analysis of coordination is problematic in this respect, since it normally involves at least one conjunction and two conjuncts. The verb group, potentially consisting of a whole chain of verb forms, is another type of construction where the syntactic relation between elements is not clear-cut in dependency terms.

Several solutions have been proposed to the problem of coordination. One alternative is to avoid creating dependency relations between the conjuncts, and instead let the conjuncts have a direct dependency relation to the same head (Tesnière, 1959; Hudson, 1990). Another approach is to make the conjunction the head and let the conjuncts depend on the conjunction. This analysis, which appears well motivated on semantic grounds, is adopted in the FGD framework and will therefore be called Prague style (PS). It is exemplified in figure 1, where the conjunction a (and) is the head of the conjuncts bojovností and tvrdostí. A different solution is to adopt a more hierarchical analysis, where the conjunction depends on the first conjunct, while the second conjunct depends on the conjunction. In cases of multiple coordination, this can be generalized to a chain, where each element except the first depends on the preceding one. This more syntactically oriented approach has been advocated notably by Mel'čuk (1988) and will be called Mel'čuk style (MS). It is illustrated in figure 2, which shows a transformed version of the dependency graph in figure 1, where the elements of the coordination form a chain with the first conjunct (bojovností) as the topmost head.
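As an illustration of the definitions in Section 2.1, the sketch below encodes a dependency graph as a head array and checks well-formedness and projectivity. The representation and helper names are our own and are not taken from MaltParser or the paper.

```python
def is_well_formed(heads):
    """heads[i] is the index of token i's head, or None for the root.
    Single-headedness is guaranteed by the representation; check acyclicity."""
    for start in range(len(heads)):
        seen, node = set(), start
        while node is not None:
            if node in seen:
                return False          # cycle found
            seen.add(node)
            node = heads[node]
    return True

def is_projective(heads):
    """An arc (h, d) is projective iff every word between h and d is reachable
    from h; a graph is projective iff all its arcs are."""
    def reaches(h, x):
        while x is not None:
            if x == h:
                return True
            x = heads[x]
        return False
    for d, h in enumerate(heads):
        if h is None:
            continue
        lo, hi = min(h, d), max(h, d)
        if not all(reaches(h, x) for x in range(lo + 1, hi)):
            return False
    return True

# toy example: token 1 is the root and heads tokens 0 and 2
heads = [1, None, 1]
print(is_well_formed(heads), is_projective(heads))
```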
Lombardo and Lesmo (1998) conjecture that MS is more suitable than PS for incremental dependency parsing. The difference between the more semantically oriented PS and the more syntactically oriented MS is seen also in the analysis of verb groups, where the former treats the main verb as the head, since it is the bearer of valency, while the latter treats the auxiliary verb as the head, since it is the finite element of the clause. Without questioning the theoretical validity of either approach, we can again ask which analysis is best suited to achieve high accuracy in parsing.

Figure 1: Dependency graph for a Czech sentence from the Prague Dependency Treebank ("The final of the tournament was distinguished by great fighting spirit and unexpected hardness").

Figure 2: Transformed dependency graph for a Czech sentence from the Prague Dependency Treebank.

2.3 PDT

PDT (Hajič, 1998; Hajič et al., 2001) consists of 1.5M words of newspaper text, annotated in three layers: morphological, analytical and tectogrammatical. In this paper, we are only concerned with the analytical layer, which contains a surface-syntactic dependency analysis, involving a set of 28 dependency types, and not restricted to projective dependency graphs.1 The annotation follows FGD, which means that it involves a PS analysis of both coordination and verb groups. Whether better parsing accuracy can be obtained by transforming this to MS is one of the hypotheses explored in the experimental study below.

1 About 2% of all dependencies are non-projective and about 25% of all sentences have a non-projective dependency graph (Nivre and Nilsson, 2005).

2.4 MaltParser

MaltParser (Nivre and Hall, 2005; Nivre et al., 2006) is a data-driven parser-generator, which can induce a dependency parser from a treebank, and which supports several parsing algorithms and learning algorithms. In the experiments below we use the algorithm of Nivre (2003), which constructs a labeled dependency graph in one left-to-right pass over the input. Classifiers that predict the next parser action are constructed through memory-based learning (MBL), using the TIMBL software package (Daelemans and Van den Bosch, 2005), and support vector machines (SVM), using LIBSVM (Chang and Lin, 2005).

2.5 Related Work

Other ways of improving parsing accuracy with respect to coordination include learning patterns of morphological and semantic information for the conjuncts (Park and Cho, 2000). More specifically for PDT, Collins et al. (1999) relabel coordinated phrases after converting dependency structures to phrase structures, and Zeman (2004) uses a kind of pattern matching, based on frequencies of the parts-of-speech of conjuncts and conjunctions. Zeman also mentions experiments to transform the dependency structure for coordination but does not present any results.
Graph transformations in dependency parsing have also been used in order to recover non-projective dependencies together with parsers that are restricted to projective dependency graphs. Thus, Nivre and Nilsson (2005) improve parsing accuracy for MaltParser by projectivizing training data and applying an inverse transformation to the output of the parser, while Hall and Novák (2005) apply post-processing to the output of Charniak's parser (Charniak, 2000). In the final experiments below, we combine these techniques with the transformations investigated in this paper.

3 Dependency Graph Transformations

In this section, we describe algorithms for transforming dependency graphs in PDT from PS to MS and back, starting with coordination and continuing with verb groups.

3.1 Coordination

The PS-to-MS transformation for coordination will be designated τc(∆), where ∆ is a data set. The transformation begins with the identification of a base conjunction, based on its dependency type (Coord) and/or its part-of-speech (Jˆ). For example, the word a (and) in figure 1 is identified as a base conjunction.
The dependency type r of each wd ∈D can be replaced by a completely new dependency type r+ (e.g., Atr+), theoretically increasing the number of dependency types to 2 · |R|. The inverse transformation, τ −1 c (∆), again starts by identifying base conjunctions, using the same conditions as before. For each identified base conjunction, it calls a procedure that performs the inverse transformation by traversing the chain of conjuncts and separators “upwards” (right-to-left), collecting conjuncts (C), separators (S) and potential conjunction dependents (Dpot). When this is done, the former head of the leftmost conjunct (C1) becomes the head of the rightmost (base) conjunction (Smn−1). In figure 2, the leftmost conjunct is bojovnost´ı, with the head vyznaˇcovalo, and the rightmost (and only) conjunction is a, which will then have vyznaˇcovalo as its new head. All conjuncts in the chain become dependents of the rightmost conjunction, which means that the structure is converted back to the one depicted in figure 1. As mentioned above, the original structure in figure 1 did not have any coordination dependents, but Velkou ∈Dpot. The last step of the inverse transformation is therefore to sort out conjunction dependents from conjunct dependents, where the former will attach to the base conjunction. Four versions have been implemented, two of which take into account the fact that the dependency types AuxG, AuxX, AuxY, and Pred are the only dependency types that are more frequent as conjunction dependents (D) than as conjunct dependents in the training data set: • τc: Do not extend arc labels in τc. Leave all words in Dpot in place in τ −1 c . • τc∗: Do not extend arc labels in τc. Attach all words with label AuxG, AuxX, AuxY or Pred to the base conjunction in τ −1 c . • τc+: Extend arc labels from r to r+ for D elements in τc. Attach all words with label r+ to the base conjunction (and change the label to r) in τ −1 c . • τc+∗: Extend arc labels from r to r+ for D elements in τc, except for the labels AuxG, AuxX, AuxY and Pred. Attach all words with label r+, AuxG, AuxX, AuxY, or Pred to the base conjunction (and change the label to r if necessary) in τ −1 c . 260 3.2 Verb Groups To transform verb groups from PS to MS, the transformation algorithm, τv(∆), starts by identifying all auxiliary verbs in a sentence. These will belong to the set A and are processed from left to right. A word waux ∈A iff wmain AuxV −→waux, where wmain is the main verb. The transformation into MS reverses the relation between the verbs, i.e., waux AuxV −→wmain, and the former head of wmain becomes the new head of waux. The main verb can be located on either side of the auxiliary verb and can have other dependents (whereas auxiliary verbs never have dependents), which means that dependency relations to other dependents of wmain may become non-projective through the transformation. To avoid this, all dependents to the left of the rightmost verb will depend on the leftmost verb, whereas the others will depend on the rightmost verb. Performing the inverse transformation for verb groups, τ −1 v (∆), is quite simple and essentially the same procedure inverted. Each sentence is traversed from right to left looking for arcs of the type waux AuxV −→wmain. For every such arc, the head of waux will be the new head of wmain, and wmain the new head of waux. Furthermore, since waux does not have dependents in PS, all dependents of waux in MS will become dependents of wmain in PS. 
4 Experiments All experiments are based on PDT 1.0, which is divided into three data sets, a training set (∆t), a development test set (∆d), and an evaluation test set (∆e). Table 1 shows the size of each data set, as well as the relative frequency of the specific constructions that are in focus here. Only 1.3% of all words in the training data are identified as auxiliary verbs (A), whereas coordination (S and C) is more common in PDT. This implies that coordination transformations are more likely to have a greater impact on overall accuracy compared to the verb group transformations. In the parsing experiments reported in sections 4.1–4.2, we use ∆t for training, ∆d for tuning, and ∆e for the final evaluation. The part-of-speech tagging used (both in training and testing) is the HMM tagging distributed with the treebank, with a tagging accuracy of 94.1%, and with the tagset compressed to 61 tags as in Collins et al. (1999). Data #S #W %S %C %A ∆t 73088 1256k 3.9 7.7 1.3 ∆d 7319 126k 4.0 7.8 1.4 ∆e 7507 126k 3.8 7.3 1.4 Table 1: PDT data sets; S = sentence, W = word; S = separator, C = conjunct, A = auxiliary verb T AS τc 97.8 τc∗ 98.6 τc+ 99.6 τc+∗ 99.4 τv 100.0 Table 2: Transformations; T = transformation; AS = attachment score (unlabeled) of τ −1(τ(∆t)) compared to ∆t MaltParser is used with the parsing algorithm of Nivre (2003) together with the feature model used for parsing Czech by Nivre and Nilsson (2005). In section 4.2 we use MBL, again with the same settings as Nivre and Nilsson (2005),3 and in section 4.2 we use SVM with a polynomial kernel of degree 2.4 The metrics for evaluation are the attachment score (AS) (labeled and unlabeled), i.e., the proportion of words that are assigned the correct head, and the exact match (EM) score (labeled and unlabeled), i.e., the proportion of sentences that are assigned a completely correct analysis. All tokens, including punctuation, are included in the evaluation scores. Statistical significance is assessed using McNemar’s test. 4.1 Experiment 1: Transformations The algorithms are fairly simple. In addition, there will always be a small proportion of syntactic constructions that do not follow the expected pattern. Hence, the transformation and inverse transformation will inevitably result in some distortion. In order to estimate the expected reduction in parsing accuracy due to this distortion, we first consider a pure treebank transformation experiment, where we compare τ −1(τ(∆t)) to ∆t, for all the different transformations τ defined in the previous section. The results are shown in table 2. We see that, even though coordination is more frequent, verb groups are easier to handle.5 The 3TIMBL parameters: -k5 -mM -L3 -w0 -dID. 4LIBSVM parameters: -s0 -t1 -d2 -g0.12 -r0 -c1 -e0.1. 5The result is rounded to 100.0% but the transformed tree261 coordination version with the least loss of information (τc+) fails to recover the correct head for 0.4% of all words in ∆t. The difference between τc+ and τc is expected. However, in the next section this will be contrasted with the increased burden on the parser for τc+, since it is also responsible for selecting the correct dependency type for each arc among as many as 2 · |R| types instead of |R|. 4.2 Experiment 2: Parsing Parsing experiments are carried out in four steps (for a given transformation τ): 1. Transform the training data set into τ(∆t). 2. Train a parser p on τ(∆t). 3. Parse a test set ∆using p with output p(∆). 4. Transform the parser output into τ −1(p(∆)). 
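These four steps, together with the attachment-score and exact-match metrics defined above, can be sketched as a small driver. In the sketch, transform, inverse and train_parser stand for caller-supplied functions (for instance τc+∗, its inverse, and a MaltParser wrapper); they are placeholders, not real MaltParser APIs.

```python
def evaluate(gold_sents, pred_sents):
    """Attachment score and exact match; all tokens, including punctuation.

    A sentence is a list of (head, deprel) pairs, one pair per token.
    """
    tok = uas = las = 0
    sent = uem = lem = 0
    for gold, pred in zip(gold_sents, pred_sents):
        u_ok = [g[0] == p[0] for g, p in zip(gold, pred)]
        l_ok = [g == p for g, p in zip(gold, pred)]
        tok += len(gold)
        uas += sum(u_ok)
        las += sum(l_ok)
        sent += 1
        uem += all(u_ok)
        lem += all(l_ok)
    return {'AS_U': uas / tok, 'AS_L': las / tok,
            'EM_U': uem / sent, 'EM_L': lem / sent}


def run_experiment(train, test, gold, transform, inverse, train_parser):
    """The four-step protocol for one transformation tau."""
    parser = train_parser(transform(train))     # steps 1-2
    predicted = parser(test)                    # step 3
    return evaluate(gold, inverse(predicted))   # step 4, then scoring
```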
Table 3 presents the results for a selection of transformations using MaltParser with MBL, tested on the evaluation test set ∆e with the untransformed data as baseline. Rows 2–5 show that transforming coordinate structures to MS improves parsing accuracy compared to the baseline, regardless of which transformation and inverse transformation are used. Moreover, the parser benefits from the verb group transformation, as seen in row 6. The final row shows the best combination of a coordination transformation with the verb group transformation, which amounts to an improvement of roughly two percentage points, or a ten percent overall error reduction, for unlabeled accuracy. All improvements over the baseline are statistically significant (McNemar’s test) with respect to attachment score (labeled and unlabeled) and unlabeled exact match, with p < 0.01 except for the unlabeled exact match score of the verb group transformation, where 0.01 < p < 0.05. For the labeled exact match, no differences are significant. The experimental results indicate that MS is more suitable than PS as the target representation for deterministic data-driven dependency parsing. A relevant question is of course why this is the case. A partial explanation may be found in the “short-dependency preference” exhibited by most parsers (Eisner and Smith, 2005), with MaltParser being no exception. The first row of table 4 shows the accuracy of the parser for different arc lengths under the baseline condition (i.e., with no transformations). We see that it performs very well on bank contains 19 erroneous heads. AS EM T U L U L None 79.08 72.83 28.99 21.15 τc 80.55 74.06 30.08 21.27 τc∗ 80.90 74.41 30.56 21.42 τc+ 80.58 74.07 30.42 21.17 τc+∗ 80.87 74.36 30.89 21.38 τv 79.28 72.97 29.53 21.38 τv◦τc+∗ 81.01 74.51 31.02 21.57 Table 3: Parsing accuracy (MBL, ∆e); T = transformation; AS = attachment score, EM = exact match; U = unlabeled, L = labeled AS ∆e 90.1 83.6 70.5 59.5 45.9 Length: 1 2-3 4-6 7-10 11∆t 51.9 29.4 11.2 4.4 3.0 τc(∆t) 54.1 29.1 10.7 3.8 2.4 τv(∆t) 52.9 29.2 10.7 4.2 2.9 Table 4: Baseline labeled AS per arc length on ∆e (row 1); proportion of arcs per arc length in ∆t (rows 3–5) short arcs, but that accuracy drops quite rapidly as the arcs get longer. This can be related to the mean arc length in ∆t, which is 2.59 in the untransformed version, 2.40 in τc(∆t) and 2.54 in τv(∆t). Rows 3-5 in table 4 show the distribution of arcs for different arc lengths in different versions of the data set. Both τc and τv make arcs shorter on average, which may facilitate the task for the parser. Another possible explanation is that learning is facilitated if similar constructions are represented similarly. For instance, it is probable that learning is made more difficult when a unit has different heads depending on whether it is part of a coordination or not. 4.3 Experiment 3: Optimization In this section we combine the best results from the previous section with the graph transformations proposed by Nivre and Nilsson (2005) to recover non-projective dependencies. We write τp for the projectivization of training data and τ −1 p for the inverse transformation applied to the parser’s output.6 In addition, we replace MBL with SVM, a learning algorithm that tends to give higher accuracy in classifier-based parsing although it is more 6More precisely, we use the variant called PATH in Nivre and Nilsson (2005). 
262 AS EM T LA U L U L None MBL 79.08 72.83 28.99 21.15 τp MBL 80.79 74.39 31.54 22.53 τp◦τv◦τc+∗ MBL 82.93 76.31 34.17 23.01 None SVM 81.09 75.68 32.24 25.02 τp SVM 82.93 77.28 35.99 27.05 τp◦τv◦τc+∗ SVM 84.55 78.82 37.63 27.69 Table 5: Optimized parsing results (SVM, ∆e); T = transformation; LA = learning algorithm; AS = attachment score, EM = exact match; U = unlabeled, L = labeled T P:S R:S P:C R:C P:A R:A P:M R:M None 52.63 72.35 55.15 67.03 82.17 82.21 69.95 69.07 τp◦τv◦τc+∗ 63.73 82.10 63.20 75.14 90.89 92.79 80.02 81.40 Table 6: Detailed results for SVM; T = transformation; P = unlabeled precision, R = unlabeled recall costly to train (Sagae and Lavie, 2005). Table 5 shows the results, for both MBL and SVM, of the baseline, the pure pseudo-projective parsing, and the combination of pseudo-projective parsing with PS-to-MS transformations. We see that pseudo-projective parsing brings a very consistent increase in accuracy of at least 1.5 percentage points, which is more than that reported by Nivre and Nilsson (2005), and that the addition of the PS-to-MS transformations increases accuracy with about the same margin. We also see that SVM outperforms MBL by about two percentage points across the board, and that the positive effect of the graph transformations is most pronounced for the unlabeled exact match score, where the improvement is more than five percentage points overall for both MBL and SVM. Table 6 gives a more detailed analysis of the parsing results for SVM, comparing the optimal parser to the baseline, and considering specifically the (unlabeled) precision and recall of the categories involved in coordination (separators S and conjuncts C) and verb groups (auxiliary verbs A and main verbs M). All figures indicate, without exception, that the transformations result in higher precision and recall for all directly involved words. (All differences are significant beyond the 0.01 level.) It is worth noting that the error reduction is actually higher for A and M than for S and C, although the former are less frequent. With respect to unlabeled attachment score, the results of the optimized parser are slightly below the best published results for a single parser. Hall and Nov´ak (2005) report a score of 85.1%, applying a corrective model to the output of Charniak’s parser; McDonald and Pereira (2006) achieve a score of 85.2% using a second-order spanning tree algorithm. Using ensemble methods and a pool of different parsers, Zeman and ˇZabokrtsk´y (2005) attain a top score of 87.0%. For unlabeled exact match, our results are better than any previously reported results, including those of McDonald and Pereira (2006). (For the labeled scores, we are not aware of any comparable results in the literature.) 5 Conclusion The results presented in this paper confirm that choosing the right representation is important in parsing. By systematically transforming the representation of coordinate structures and verb groups in PDT, we achieve a 10% error reduction for a data-driven dependency parser. Adding graph transformations for non-projective dependency parsing gives a total error reduction of about 20% (even more for unlabeled exact match). In this way, we achieve state-of-the-art accuracy with a deterministic, classifier-based dependency parser. Acknowledgements The research presented in this paper was partially supported by the Swedish Research Council. 
We are grateful to Jan Hajiˇc and Daniel Zeman for help with the Czech data and to three anonymous reviewers for helpful comments and suggestions. 263 References Daniel M. Bikel. 2004. Intricacies of Collins’ parsing model. Computational Linguistics, 30:479–511. Alena B¨ohmov´a, Jan Hajiˇc, Eva Hajiˇcov´a, and Barbora Hladk´a. 2003. The Prague Dependency Treebank: A three-level annotation scenario. In Anne Abeill´e, editor, Treebanks: Building and Using Syntactically Annotated Corpora. Kluwer Academic Publishers. Chih-Chung Chang and Chih-Jen Lin. 2005. LIBSVM: A library for support vector machines. Eugene Charniak. 2000. A maximum-entropyinspired parser. In Proceedings of the First Meeting of the North American Chapter of the Association for Computational Linguistics (NAACL), pages 132–139. Michael Collins, Jan Hajiˇc, Eric Brill, Lance Ramshaw, and Christoph Tillmann. 1999. A statistical parser for Czech. In Proceedings of the 37th Annual Meeting of the Association for Computational Linguistics (ACL), pages 505–512. Michael Collins. 1997. Three generative, lexicalised models for statistical parsing. In Proceedings of the 35th Annatual Meeting of the Association for Computational Linguistics (ACL), pages 16–23. Michael Collins. 1999. Head-Driven Statistical Models for Natural Language Parsing. Ph.D. thesis, University of Pennsylvania. Walter Daelemans and Antal Van den Bosch. 2005. Memory-Based Language Processing. Cambridge University Press. Jason Eisner and Noah A. Smith. 2005. Parsing with soft and hard constraints on dependency length. In Proceedings of the 9th International Workshop on Parsing Technologies (IWPT). Jan Hajiˇc, Barbora Vidova Hladka, Jarmila Panevov´a, Eva Hajiˇcov´a, Petr Sgall, and Petr Pajas. 2001. Prague Dependency Treebank 1.0. LDC, 2001T10. Jan Hajiˇc. 1998. Building a Syntactically Annotated Corpus: The Prague Dependency Treebank. In Issues of Valency and Meaning, pages 12–19. Prague Karolinum, Charles University Press. Keith Hall and Vaclav Nov´ak. 2005. Corrective modeling for non-projective dependency parsing. In Proceedings of the 9th International Workshop on Parsing Technologies (IWPT). Richard Hudson. 1990. English Word Grammar. Basil Blackwell. Mark Johnson. 1998. Pcfg models of linguistic tree representations. Computational Linguistics, 24:613–632. Dan Klein and Christopher Manning. 2003. Accurate unlexicalized parsing. In Proceedings of the 41st Annual Meeting of the Association for Computational Linguistics (ACL), pages 423–430. Vincenzo Lombardo and Leonardo Lesmo. 1998. Unit coordination and gapping in dependency theory. In Proceedings of the Workshop on Processing of Dependency-Based Grammars, pages 11–20. Ryan McDonald and Fernando Pereira. 2006. Online learning of approximate dependency parsing algorithms. In Proceedings of the 11th Conference of the European Chapter of the Association for Computational Linguistics (EACL). Igor Mel’cuk. 1988. Dependency Syntax: Theory and Practice. State University of New York Press. Joakim Nivre and Johan Hall. 2005. MaltParser: A language-independent system for data-driven dependency parsing. In Proceedings of the Fourth Workshop on Treebanks and Linguistic Theories (TLT), pages 137–148. Joakim Nivre and Jens Nilsson. 2005. Pseudoprojective dependency parsing. In Proceedings of the 43rd Annual Meeting of the Association for Computational Linguistics (ACL), pages 99–106. Joakim Nivre, Johan Hall, and Jens Nilsson. 2006. MaltParser: A data-driven parser-generator for dependency parsing. 
In Proceedings of the 5th International Conference on Language Resources and Evaluation. Joakim Nivre. 2003. An efficient algorithm for projective dependency parsing. In Proceedings of the 8th International Workshop on Parsing Technologies (IWPT), pages 149–160. Jong C. Park and Hyung Joon Cho. 2000. Informed parsing for coordination with combinatory categorial grammar. In Proceedings of the 18th International Conference on Computational Linguistics (COLING), pages 593–599. Kenji Sagae and Alon Lavie. 2005. A classifier-based parser with linear run-time complexity. In Proceedings of the 9th International Workshop on Parsing Technologies (IWPT), pages 125–132. Petr Sgall, Eva Hajiˇcov´a, and Jarmila Panevov´a. 1986. The Meaning of the Sentence in Its Pragmatic Aspects. Reidel. Lucien Tesni`ere. 1959. ´El´ements de syntaxe structurale. Editions Klincksieck. Daniel Zeman and Zdenˇek ˇZabokrtsk´y. 2005. Improving parsing accuracy by combining diverse dependency parsers. In Proceedings of the 9th International Workshop on Parsing Technologies (IWPT). Daniel Zeman. 2004. Parsing with a Statistical Dependency Model. Ph.D. thesis, Charles University. 264
Proceedings of the 21st International Conference on Computational Linguistics and 44th Annual Meeting of the ACL, pages 265–272, Sydney, July 2006. c⃝2006 Association for Computational Linguistics Learning to Generate Naturalistic Utterances Using Reviews in Spoken Dialogue Systems Ryuichiro Higashinaka NTT Corporation [email protected] Rashmi Prasad University of Pennsylvania [email protected] Marilyn A. Walker University of Sheffield [email protected] Abstract Spoken language generation for dialogue systems requires a dictionary of mappings between semantic representations of concepts the system wants to express and realizations of those concepts. Dictionary creation is a costly process; it is currently done by hand for each dialogue domain. We propose a novel unsupervised method for learning such mappings from user reviews in the target domain, and test it on restaurant reviews. We test the hypothesis that user reviews that provide individual ratings for distinguished attributes of the domain entity make it possible to map review sentences to their semantic representation with high precision. Experimental analyses show that the mappings learned cover most of the domain ontology, and provide good linguistic variation. A subjective user evaluation shows that the consistency between the semantic representations and the learned realizations is high and that the naturalness of the realizations is higher than a hand-crafted baseline. 1 Introduction One obstacle to the widespread deployment of spoken dialogue systems is the cost involved with hand-crafting the spoken language generation module. Spoken language generation requires a dictionary of mappings between semantic representations of concepts the system wants to express and realizations of those concepts. Dictionary creation is a costly process: an automatic method for creating them would make dialogue technology more scalable. A secondary benefit is that a learned dictionary may produce more natural and colloquial utterances. We propose a novel method for mining user reviews to automatically acquire a domain specific generation dictionary for information presentation in a dialogue system. Our hypothesis is that reviews that provide individual ratings for various distinguished attributes of review entities can be used to map review sentences to a semantic repAn example user review (we8there.com) Ratings Food=5, Service=5, Atmosphere=5, Value=5, Overall=5 Review comment The best Spanish food in New York. I am from Spain and I had my 28th birthday there and we all had a great time. Salud! ↓ Review comment after named entity recognition The best {NE=foodtype, string=Spanish} {NE=food, string=food, rating=5} in {NE=location, string=New York}. . . . ↓ Mapping between a semantic representation (a set of relations) and a syntactic structure (DSyntS) • Relations: RESTAURANT has FOODTYPE RESTAURANT has foodquality=5 RESTAURANT has LOCATION ([foodtype, food=5, location] for shorthand.) • DSyntS: ⎡ ⎢⎢⎢⎢⎢⎢⎢⎢⎢⎢⎢⎢⎢⎢⎢⎢⎢⎢⎣ lexeme : food class : common noun number : sg article : def ATTR  lexeme : best class : adjective  ATTR ⎡ ⎣ lexeme : FOODTYPE class : common noun number : sg article : no-art ⎤ ⎦ ATTR ⎡ ⎢⎢⎢⎣ lexeme : in class : preposition II ⎡ ⎣ lexeme : LOCATION class : proper noun number : sg article : no-art ⎤ ⎦ ⎤ ⎥⎥⎥⎦ ⎤ ⎥⎥⎥⎥⎥⎥⎥⎥⎥⎥⎥⎥⎥⎥⎥⎥⎥⎥⎦ Figure 1: Example of procedure for acquiring a generation dictionary mapping. resentation. 
Figure 1 shows a user review in the restaurant domain, where we hypothesize that the user rating food=5 indicates that the semantic representation for the sentence “The best Spanish food in New York” includes the relation ‘RESTAURANT has foodquality=5.’ We apply the method to extract 451 mappings from restaurant reviews. Experimental analyses show that the mappings learned cover most of the domain ontology, and provide good linguistic variation. A subjective user evaluation indicates that the consistency between the semantic representations and the learned realizations is high and that the naturalness of the realizations is significantly higher than a hand-crafted baseline. 265 Section 2 provides a step-by-step description of the method. Sections 3 and 4 present the evaluation results. Section 5 covers related work. Section 6 summarizes and discusses future work. 2 Learning a Generation Dictionary Our automatically created generation dictionary consists of triples (U, R, S) representing a mapping between the original utterance U in the user review, its semantic representation R(U), and its syntactic structure S(U). Although templates are widely used in many practical systems (Seneff and Polifroni, 2000; Theune, 2003), we derive syntactic structures to represent the potential realizations, in order to allow aggregation, and other syntactic transformations of utterances, as well as context specific prosody assignment (Walker et al., 2003; Moore et al., 2004). The method is outlined briefly in Fig. 1 and described below. It comprises the following steps: 1. Collect user reviews on the web to create a population of utterances U. 2. To derive semantic representations R(U): • Identify distinguished attributes and construct a domain ontology; • Specify lexicalizations of attributes; • Scrape webpages’ structured data for named-entities; • Tag named-entities. 3. Derive syntactic representations S(U). 4. Filter inappropriate mappings. 5. Add mappings (U, R, S) to dictionary. 2.1 Creating the corpus We created a corpus of restaurant reviews by scraping 3,004 user reviews of 1,810 restaurants posted at we8there.com (http://www.we8there.com/), where each individual review includes a 1-to-5 Likert-scale rating of different restaurant attributes. The corpus consists of 18,466 sentences. 2.2 Deriving semantic representations The distinguished attributes are extracted from the webpages for each restaurant entity. They include attributes that the users are asked to rate, i.e. food, service, atmosphere, value, and overall, which have scalar values. In addition, other attributes are extracted from the webpage, such as the name, foodtype and location of the restaurant, which have categorical values. The name attribute is assumed to correspond to the restaurant entity. Given the distinguished attributes, a Dist. Attr. Lexicalization food food, meal service service, staff, waitstaff, wait staff, server, waiter, waitress atmosphere atmosphere, decor, ambience, decoration value value, price, overprice, pricey, expensive, inexpensive, cheap, affordable, afford overall recommend, place, experience, establishment Table 1: Lexicalizations for distinguished attributes. simple domain ontology can be automatically derived by assuming that a meronymy relation, represented by the predicate ‘has’, holds between the entity type (RESTAURANT) and the distinguished attributes. 
Thus, the domain ontology consists of the relations: ⎧ ⎪ ⎪ ⎪ ⎪ ⎪ ⎪ ⎪ ⎪ ⎨ ⎪ ⎪ ⎪ ⎪ ⎪ ⎪ ⎪ ⎪ ⎩ RESTAURANT has foodquality RESTAURANT has servicequality RESTAURANT has valuequality RESTAURANT has atmospherequality RESTAURANT has overallquality RESTAURANT has foodtype RESTAURANT has location We assume that, although users may discuss other attributes of the entity, at least some of the utterances in the reviews realize the relations specified in the ontology. Our problem then is to identify these utterances. We test the hypothesis that, if an utterance U contains named-entities corresponding to the distinguished attributes, that R for that utterance includes the relation concerning that attribute in the domain ontology. We define named-entities for lexicalizations of the distinguished attributes, starting with the seed word for that attribute on the webpage (Table 1).1 For named-entity recognition, we use GATE (Cunningham et al., 2002), augmented with namedentity lists for locations, food types, restaurant names, and food subtypes (e.g. pizza), scraped from the we8there webpages. We also hypothesize that the rating given for the distinguished attribute specifies the scalar value of the relation. For example, a sentence containing food or meal is assumed to realize the relation ‘RESTAURANT has foodquality.’, and the value of the foodquality attribute is assumed to be the value specified in the user rating for that attribute, e.g. ‘RESTAURANT has foodquality = 5’ in Fig. 1. Similarly, the other relations in Fig. 1 are assumed to be realized by the utterance “The best Spanish food in New York” because it contains 1In future, we will investigate other techniques for bootstrapping these lexicalizations from the seed word on the webpage. 266 filter filtered retained No Relations Filter 7,947 10,519 Other Relations Filter 5,351 5,168 Contextual Filter 2,973 2,195 Unknown Words Filter 1,467 728 Parsing Filter 216 512 Table 2: Filtering statistics: the number of sentences filtered and retained by each filter. one FOODTYPE named-entity and one LOCATION named-entity. Values of categorical attributes are replaced by variables representing their type before the learned mappings are added to the dictionary, as shown in Fig. 1. 2.3 Parsing and DSyntS conversion We adopt Deep Syntactic Structures (DSyntSs) as a format for syntactic structures because they can be realized by the fast portable realizer RealPro (Lavoie and Rambow, 1997). Since DSyntSs are a type of dependency structure, we first process the sentences with Minipar (Lin, 1998), and then convert Minipar’s representation into DSyntS. Since user reviews are different from the newspaper articles on which Minipar was trained, the output of Minipar can be inaccurate, leading to failure in conversion. We check whether conversion is successful in the filtering stage. 2.4 Filtering The goal of filtering is to identify U that realize the distinguished attributes and to guarantee high precision for the learned mappings. Recall is less important since systems need to convey requested information as accurately as possible. Our procedure for deriving semantic representations is based on the hypothesis that if U contains named-entities that realize the distinguished attributes, that R will include the relevant relation in the domain ontology. 
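Before the filters are described, this hypothesised derivation of R(U) can be made concrete. The sketch below maps one named-entity-tagged review sentence and its ratings to a set of relations; the lexicalization table is abridged from Table 1, multi-word lexicalizations and inflection are ignored, and all names (dictionary keys, relation strings) are illustrative rather than the system's internal representation.

```python
LEXICALIZATIONS = {     # abridged from Table 1
    'foodquality': {'food', 'meal'},
    'servicequality': {'service', 'staff', 'waitstaff', 'server',
                       'waiter', 'waitress'},
    'atmospherequality': {'atmosphere', 'decor', 'ambience', 'decoration'},
    'valuequality': {'value', 'price', 'pricey', 'expensive',
                     'inexpensive', 'cheap', 'affordable'},
    'overallquality': {'recommend', 'place', 'experience', 'establishment'},
}


def semantic_representation(tokens, named_entities, ratings):
    """Hypothesised R(U) for one review sentence.

    tokens: lower-cased words of the sentence.
    named_entities: (type, string) pairs from the NE recogniser,
                    e.g. [('foodtype', 'Spanish'), ('location', 'New York')].
    ratings: the review's scores keyed by attribute, e.g. {'foodquality': 5}.
    """
    relations = set()
    for attribute, lexemes in LEXICALIZATIONS.items():
        if any(t in lexemes for t in tokens):
            relations.add(f'RESTAURANT has {attribute}={ratings[attribute]}')
    for ne_type, _ in named_entities:
        if ne_type in ('foodtype', 'location'):
            relations.add(f'RESTAURANT has {ne_type.upper()}')
    return relations
```

Applied to the Figure 1 sentence, with tokens ['the', 'best', 'spanish', 'food', 'in', 'new', 'york'], the two categorical named entities shown there, and the review's ratings, this returns the three relations abbreviated [foodtype, food=5, location] in the figure.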
We also assume that if U contains namedentities that are not covered by the domain ontology, or words indicating that the meaning of U depends on the surrounding context, that R will not completely characterizes the meaning of U, and so U should be eliminated. We also require an accurate S for U. Therefore, the filters described below eliminate U that (1) realize semantic relations not in the ontology; (2) contain words indicating that its meaning depends on the context; (3) contain unknown words; or (4) cannot be parsed accurately. No Relations Filter: The sentence does not contain any named-entities for the distinguished attributes. Other Relations Filter: The sentence contains named-entities for food subtypes, person Rating Dist.Attr. 1 2 3 4 5 Total food 5 8 6 18 57 94 service 15 3 6 17 56 97 atmosphere 0 3 3 8 31 45 value 0 0 1 8 12 21 overall 3 2 5 15 45 70 Total 23 15 21 64 201 327 Table 3: Domain coverage of single scalar-valued relation mappings. names, country names, dates (e.g., today, tomorrow, Aug. 26th) or prices (e.g., 12 dollars), or POS tag CD for numerals. These indicate relations not in the ontology. Contextual Filter: The sentence contains indexicals such as I, you, that or cohesive markers of rhetorical relations that connect it to some part of the preceding text, which means that the sentence cannot be interpreted out of context. These include discourse markers, such as list item markers with LS as the POS tag, that signal the organization structure of the text (Hirschberg and Litman, 1987), as well as discourse connectives that signal semantic and pragmatic relations of the sentence with other parts of the text (Knott, 1996), such as coordinating conjunctions at the beginning of the utterance like and and but etc., and conjunct adverbs such as however, also, then. Unknown Words Filter: The sentence contains words not in WordNet (Fellbaum, 1998) (which includes typographical errors), or POS tags contain NN (Noun), which may indicate an unknown named-entity, or the sentence has more than a fixed length of words,2 indicating that its meaning may not be estimated solely by named entities. Parsing Filter: The sentence fails the parsing to DSyntS conversion. Failures are automatically detected by comparing the original sentence with the one realized by RealPro taking the converted DSyntS as an input. We apply the filters, in a cascading manner, to the 18,466 sentences with semantic representations. As a result, we obtain 512 (2.8%) mappings of (U, R, S). After removing 61 duplicates, 451 distinct (2.4%) mappings remain. Table 2 shows the number of sentences eliminated by each filter. 3 Objective Evaluation We evaluate the learned expressions with respect to domain coverage, linguistic variation and generativity. 2We used 20 as a threshold. 267 # Combination of Dist. Attrs Count 1 food-service 39 2 food-value 21 3 atmosphere-food 14 4 atmosphere-service 10 5 atmosphere-food-service 7 6 food-foodtype 4 7 atmosphere-food-value 4 8 location-overall 3 9 food-foodtype-value 3 10 food-service-value 2 11 food-foodtype-location 2 12 food-overall 2 13 atmosphere-foodtype 2 14 atmosphere-overall 2 15 service-value 1 16 overall-service 1 17 overall-value 1 18 foodtype-overall 1 19 food-foodtype-location-overall 1 20 atmosphere-food-service-value 1 21 atmosphere-food-overallservice-value 1 Total 122 Table 4: Counts for multi-relation mappings. 3.1 Domain Coverage To be usable for a dialogue system, the mappings must have good domain coverage. 
Table 3 shows the distribution of the 327 mappings realizing a single scalar-valued relation, categorized by the associated rating score.3 For example, there are 57 mappings with R of ‘RESTAURANT has foodquality=5,’ and a large number of mappings for both the foodquality and servicequality relations. Although we could not obtain mappings for some relations such as price={1,2}, coverage for expressing a single relation is fairly complete. There are also mappings that express several relations. Table 4 shows the counts of mappings for multi-relation mappings, with those containing a food or service relation occurring more frequently as in the single scalar-valued relation mappings. We found only 21 combinations of relations, which is surprising given the large potential number of combinations (There are 50 combinations if we treat relations with different scalar values differently). We also find that most of the mappings have two or three relations, perhaps suggesting that system utterances should not express too many relations in a single sentence. 3.2 Linguistic Variation We also wish to assess whether the linguistic variation of the learned mappings was greater than what we could easily have generated with a hand-crafted dictionary, or a hand-crafted dictionary augmented with aggregation operators, as in 3There are two other single-relation but not scalar-valued mappings that concern LOCATION in our mappings. (Walker et al., 2003). Thus, we first categorized the mappings by the patterns of the DSyntSs. Table 5 shows the most common syntactic patterns (more than 10 occurrences), indicating that 30% of the learned patterns consist of the simple form “X is ADJ” where ADJ is an adjective, or “X is RB ADJ,” where RB is a degree modifier. Furthermore, up to 55% of the learned mappings could be generated from these basic patterns by the application of a combination operator that coordinates multiple adjectives, or coordinates predications over distinct attributes. However, there are 137 syntactic patterns in all, 97 with unique syntactic structures and 21 with two occurrences, accounting for 45% of the learned mappings. Table 6 shows examples of learned mappings with distinct syntactic structures. It would be surprising to see this type of variety in a hand-crafted generation dictionary. In addition, the learned mappings contain 275 distinct lexemes, with a minimum of 2, maximum of 15, and mean of 4.63 lexemes per DSyntS, indicating that the method extracts a wide variety of expressions of varying lengths. Another interesting aspect of the learned mappings is the wide variety of adjectival phrases (APs) in the common patterns. Tables 7 and 8 show the APs in single scalar-valued relation mappings for food and service categorized by the associated ratings. Tables for atmosphere, value and overall can be found in the Appendix. Moreover, the meanings for some of the learned APs are very specific to the particular attribute, e.g. cold and burnt associated with foodquality of 1, attentive and prompt for servicequality of 5, silly and inattentive for servicequality of 1. and mellow for atmosphere of 5. In addition, our method places the adjectival phrases (APs) in the common patterns on a more fine-grained scale of 1 to 5, similar to the strength classifications in (Wilson et al., 2004), in contrast to other automatic methods that classify expressions into a binary positive or negative polarity (e.g. (Turney, 2002)). 
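Pattern counts of this kind can be reproduced with a few lines once each learned realization has been reduced to a POS-tag sequence; the flattening used here is only an illustration of the bookkeeping, not the authors' categorization code.

```python
from collections import Counter


def pattern_profile(tagged_realizations):
    """Frequency profile of flattened POS patterns (cf. Table 5).

    tagged_realizations: one list of POS tags per learned utterance,
    e.g. ['NN', 'VB', 'JJ'] for "The atmosphere is wonderful."
    """
    counts = Counter(' '.join(tags) for tags in tagged_realizations)
    total = sum(counts.values())
    running = 0.0
    for pattern, n in counts.most_common():
        running += n / total
        yield pattern, n, n / total, running   # pattern, count, ratio, accum.
```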
3.3 Generativity Our motivation for deriving syntactic representations for the learned expressions was the possibility of using an off-the-shelf sentence planner to derive new combinations of relations, and apply aggregation and other syntactic transformations. We examined how many of the learned DSyntSs can be combined with each other, by taking every pair of DSyntSs in the mappings and applying the built-in merge operation in the SPaRKy generator (Walker et al., 2003). We found that only 306 combinations out of a potential 81,318 268 # syntactic pattern example utterance count ratio accum. 1 NN VB JJ The atmosphere is wonderful. 92 20.4% 20.4% 2 NN VB RB JJ The atmosphere was very nice. 52 11.5% 31.9% 3 JJ NN Bad service. 36 8.0% 39.9% 4 NN VB JJ CC JJ The food was flavorful but cold. 25 5.5% 45.5% 5 RB JJ NN Very trendy ambience. 22 4.9% 50.3% 6 NN VB JJ CC NN VB JJ The food is excellent and the atmosphere is great. 13 2.9% 53.2% 7 NN CC NN VB JJ The food and service were fantastic. 10 2.2% 55.4% Table 5: Common syntactic patterns of DSyntSs, flattened to a POS sequence for readability. NN, VB, JJ, RB, CC stand for noun, verb, adjective, adverb, and conjunction, respectively. [overall=1, value=2] Very disappointing experience for the money charged. [food=5, value=5] The food is excellent and plentiful at a reasonable price. [food=5, service=5] The food is exquisite as well as the service and setting. [food=5, service=5] The food was spectacular and so was the service. [food=5, foodtype, value=5] Best FOODTYPE food with a great value for money. [food=5, foodtype, value=5] An absolutely outstanding value with fantastic FOODTYPE food. [food=5, foodtype, location, overall=5] This is the best place to eat FOODTYPE food in LOCATION. [food=5, foodtype] Simply amazing FOODTYPE food. [food=5, foodtype] RESTAURANTNAME is the best of the best for FOODTYPE food. [food=5] The food is to die for. [food=5] What incredible food. [food=4] Very pleasantly surprised by the food. [food=1] The food has gone downhill. [atmosphere=5, overall=5] This is a quiet little place with great atmosphere. [atmosphere=5, food=5, overall=5, service=5, value=5] The food, service and ambience of the place are all fabulous and the prices are downright cheap. Table 6: Acquired generation patterns (with shorthand for relations in square brackets) whose syntactic patterns occurred only once. combinations (0.37%) were successful. This is because the merge operation in SPaRKy requires that the subjects and the verbs of the two DSyntSs are identical, e.g. the subject is RESTAURANT and verb is has, whereas the learned DSyntSs often place the attribute in subject position as a definite noun phrase. However, the learned DSyntS can be incorporated into SPaRKy using the semantic representations to substitute learned DSyntSs into nodes in the sentence plan tree. Figure 2 shows some example utterances generated by SPaRKy with its original dictionary and example utterances when the learned mappings are incorporated. The resulting utterances seem more natural and colloquial; we examine whether this is true in the next section. 
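For concreteness, the single-relation part of this hand-crafted baseline can be written as a small template function; the slot and relation names are illustrative, and the clause combining used for the multi-relation utterances is not shown.

```python
SCALAR_ADJ = {1: 'mediocre', 2: 'decent', 3: 'good',
              4: 'very good', 5: 'excellent'}
SCALAR_LEX = {'foodquality': 'food quality', 'servicequality': 'service',
              'atmospherequality': 'atmosphere', 'valuequality': 'value',
              'overallquality': 'overall'}


def baseline_realization(relation, value, name='RESTAURANT'):
    """Hand-crafted single-relation baseline, as described above."""
    if relation in SCALAR_LEX:
        return f'{name} has {SCALAR_ADJ[value]} {SCALAR_LEX[relation]}.'
    if relation == 'location':
        return f'{name} is located in {value}.'
    if relation == 'foodtype':
        return f'{name} is a {value} restaurant.'
    raise ValueError(f'no baseline template for relation {relation!r}')
```

For example, baseline_realization('foodquality', 1) yields "RESTAURANT has mediocre food quality.", the baseline counterpart of the learned mappings in Table 7.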
4 Subjective Evaluation We evaluate the obtained mappings in two respects: the consistency between the automatically derived semantic representation and the realizafood=1 awful, bad, burnt, cold, very ordinary food=2 acceptable, bad, flavored, not enough, very bland, very good food=3 adequate, bland and mediocre, flavorful but cold, pretty good, rather bland, very good food=4 absolutely wonderful, awesome, decent, excellent, good, good and generous, great, outstanding, rather good, really good, traditional, very fresh and tasty, very good, very very good food=5 absolutely delicious, absolutely fantastic, absolutely great, absolutely terrific, ample, well seasoned and hot, awesome, best, delectable and plentiful, delicious, delicious but simple, excellent, exquisite, fabulous, fancy but tasty, fantastic, fresh, good, great, hot, incredible, just fantastic, large and satisfying, outstanding, plentiful and outstanding, plentiful and tasty, quick and hot, simply great, so delicious, so very tasty, superb, terrific, tremendous, very good, wonderful Table 7: Adjectival phrases (APs) in single scalarvalued relation mappings for foodquality. tion, and the naturalness of the realization. For comparison, we used a baseline of handcrafted mappings from (Walker et al., 2003) except that we changed the word decor to atmosphere and added five mappings for overall. For scalar relations, this consists of the realization “RESTAURANT has ADJ LEX” where ADJ is mediocre, decent, good, very good, or excellent for rating values 1-5, and LEX is food quality, service, atmosphere, value, or overall depending on the relation. RESTAURANT is filled with the name of a restaurant at runtime. For example, ‘RESTAURANT has foodquality=1’ is realized as “RESTAURANT has mediocre food quality.” The location and food type relations are mapped to “RESTAURANT is located in LOCATION” and “RESTAURANT is a FOODTYPE restaurant.” The learned mappings include 23 distinct semantic representations for a single-relation (22 for scalar-valued relations and one for location) and 50 for multi-relations. Therefore, using the handcrafted mappings, we first created 23 utterances for the single-relations. We then created three utterances for each of 50 multi-relations using different clause-combining operations from (Walker et al., 2003). 
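The significance tests and the correlation reported here can be computed with standard routines, assuming the per-judgement scores are pooled into flat lists (ten judgements per utterance); this is a sketch, not the authors' analysis script, and the ANOVA is not shown.

```python
from scipy import stats


def compare_conditions(baseline_scores, learned_scores):
    """Two-sample t-test on pooled 1-to-5 judgement scores for one dimension."""
    return stats.ttest_ind(baseline_scores, learned_scores)


def consistency_naturalness_correlation(consistency, naturalness):
    """Pearson correlation between the two judgement dimensions."""
    r, _ = stats.pearsonr(consistency, naturalness)
    return r
```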
This gave a total of 173 baseline utterances, which together with 451 learned mappings, 269 service=1 awful, bad, great, horrendous, horrible, inattentive, forgetful and slow, marginal, really slow, silly and inattentive, still marginal, terrible, young service=2 overly slow, very slow and inattentive service=3 bad, bland and mediocre, friendly and knowledgeable, good, pleasant, prompt, very friendly service=4 all very warm and welcoming, attentive, extremely friendly and good, extremely pleasant, fantastic, friendly, friendly and helpful, good, great, great and courteous, prompt and friendly, really friendly, so nice, swift and friendly, very friendly, very friendly and accommodating service=5 all courteous, excellent, excellent and friendly, extremely friendly, fabulous, fantastic, friendly, friendly and helpful, friendly and very attentive, good, great, great, prompt and courteous, happy and friendly, impeccable, intrusive, legendary, outstanding, pleasant, polite, attentive and prompt, prompt and courteous, prompt and pleasant, quick and cheerful, stupendous, superb, the most attentive, unbelievable, very attentive, very congenial, very courteous, very friendly, very friendly and helpful, very friendly and pleasant, very friendly and totally personal, very friendly and welcoming, very good, very helpful, very timely, warm and friendly, wonderful Table 8: Adjectival phrases (APs) in single scalarvalued relation mappings for servicequality. yielded 624 utterances for evaluation. Ten subjects, all native English speakers, evaluated the mappings by reading them from a webpage. For each system utterance, the subjects were asked to express their degree of agreement, on a scale of 1 (lowest) to 5 (highest), with the statement (a) The meaning of the utterance is consistent with the ratings expressing their semantics, and with the statement (b) The style of the utterance is very natural and colloquial. They were asked not to correct their decisions and also to rate each utterance on its own merit. 4.1 Results Table 9 shows the means and standard deviations of the scores for baseline vs. learned utterances for consistency and naturalness. A t-test shows that the consistency of the learned expression is significantly lower than the baseline (df=4712, p < .001) but that their naturalness is significantly higher than the baseline (df=3107, p < .001). However, consistency is still high. Only 14 of the learned utterances (shown in Tab. 10) have a mean consistency score lower than 3, which indicates that, by and large, the human judges felt that the inferred semantic representations were consistent with the meaning of the learned expressions. The correlation coefficient between consistency and naturalness scores is 0.42, which indicates that consisOriginal SPaRKy utterances • Babbo has the best overall quality among the selected restaurants with excellent decor, excellent service and superb food quality. • Babbo has excellent decor and superb food quality with excellent service. It has the best overall quality among the selected restaurants. ↓ Combination of SPaRKy and learned DSyntS • Because the food is excellent, the wait staff is professional and the decor is beautiful and very comfortable, Babbo has the best overall quality among the selected restaurants. • Babbo has the best overall quality among the selected restaurants because atmosphere is exceptionally nice, food is excellent and the service is superb. 
• Babbo has superb food quality, the service is exceptional and the atmosphere is very creative. It has the best overall quality among the selected restaurants. Figure 2: Utterances incorporating learned DSyntSs (Bold font) in SPaRKy. baseline learned stat. mean sd. mean sd. sig. Consistency 4.714 0.588 4.459 0.890 + Naturalness 4.227 0.852 4.613 0.844 + Table 9: Consistency and naturalness scores averaged over 10 subjects. tency does not greatly relate to naturalness. We also performed an ANOVA (ANalysis Of VAriance) of the effect of each relation in R on naturalness and consistency. There were no significant effects except that mappings combining food, service, and atmosphere were significantly worse (df=1, F=7.79, p=0.005). However, there is a trend for mappings to be rated higher for the food attribute (df=1, F=3.14, p=0.08) and the value attribute (df=1, F=3.55, p=0.06) for consistency, suggesting that perhaps it is easier to learn some mappings than others. 5 Related Work Automatically finding sentences with the same meaning has been extensively studied in the field of automatic paraphrasing using parallel corpora and corpora with multiple descriptions of the same events (Barzilay and McKeown, 2001; Barzilay and Lee, 2003). Other work finds predicates of similar meanings by using the similarity of contexts around the predicates (Lin and Pantel, 2001). However, these studies find a set of sentences with the same meaning, but do not associate a specific meaning with the sentences. One exception is (Barzilay and Lee, 2002), which derives mappings between semantic representations and realizations using a parallel (but unaligned) corpus consisting of both complex semantic input and corresponding natural language verbalizations for mathemat270 shorthand for relations and utterance score [food=4] The food is delicious and beautifully prepared. 2.9 [overall=4] A wonderful experience. 2.9 [service=3] The service is bland and mediocre. 2.8 [atmosphere=2] The atmosphere here is eclectic. 2.6 [overall=3] Really fancy place. 2.6 [food=3, service=4] Wonderful service and great food. 2.5 [service=4] The service is fantastic. 2.5 [overall=2] The RESTAURANTNAME is once a great place to go and socialize. 2.2 [atmosphere=2] The atmosphere is unique and pleasant. 2.0 [food=5, foodtype] FOODTYPE and FOODTYPE food. 1.8 [service=3] Waitstaff is friendly and knowledgeable. 1.7 [atmosphere=5, food=5, service=5] The atmosphere, food and service. 1.6 [overall=3] Overall, a great experience. 1.4 [service=1] The waiter is great. 1.4 Table 10: The 14 utterances with consistency scores below 3. ical proofs. However, our technique does not require parallel corpora or previously existing semantic transcripts or labeling, and user reviews are widely available in many different domains (See http://www.epinions.com/). There is also significant previous work on mining user reviews. For example, Hu and Liu (2005) use reviews to find adjectives to describe products, and Popescu and Etzioni (2005) automatically find features of a product together with the polarity of adjectives used to describe them. They both aim at summarizing reviews so that users can make decisions easily. Our method is also capable of finding polarities of modifying expressions including adjectives, but on a more fine-grained scale of 1 to 5. However, it might be possible to use their approach to create rating information for raw review texts as in (Pang and Lee, 2005), so that we can create mappings from reviews without ratings. 
6 Summary and Future Work We proposed automatically obtaining mappings between semantic representations and realizations from reviews with individual ratings. The results show that: (1) the learned mappings provide good coverage of the domain ontology and exhibit good linguistic variation; (2) the consistency between the semantic representations and realizations is high; and (3) the naturalness of the realizations are significantly higher than the baseline. There are also limitations in our method. Even though consistency is rated highly by human subjects, this may actually be a judgement of whether the polarity of the learned mapping is correctly placed on the 1 to 5 rating scale. Thus, alternate ways of expressing, for example foodquality=5, shown in Table 7, cannot be guaranteed to be synonymous, which may be required for use in spoken language generation. Rather, an examination of the adjectival phrases in Table 7 shows that different aspects of the food are discussed. For example ample and plentiful refer to the portion size, fancy may refer to the presentation, and delicious describes the flavors. This suggests that perhaps the ontology would benefit from representing these sub-attributes of the food attribute, and sub-attributes in general. Another problem with consistency is that the same AP, e.g. very good in Table 7 may appear with multiple ratings. For example, very good is used for every foodquality rating from 2 to 5. Thus some further automatic or by-hand analysis is required to refine what is learned before actual use in spoken language generation. Still, our method could reduce the amount of time a system designer spends developing the spoken language generator, and increase the naturalness of spoken language generation. Another issue is that the recall appears to be quite low given that all of the sentences concern the same domain: only 2.4% of the sentences could be used to create the mappings. One way to increase recall might be to automatically augment the list of distinguished attribute lexicalizations, using WordNet or work on automatic identification of synonyms, such as (Lin and Pantel, 2001). However, the method here has high precision, and automatic techniques may introduce noise. A related issue is that the filters are in some cases too strict. For example the contextual filter is based on POS-tags, so that sentences that do not require the prior context for their interpretation are eliminated, such as sentences containing subordinating conjunctions like because, when, if, whose arguments are both given in the same sentence (Prasad et al., 2005). In addition, recall is affected by the domain ontology, and the automatically constructed domain ontology from the review webpages may not cover all of the domain. In some review domains, the attributes that get individual ratings are a limited subset of the domain ontology. Techniques for automatic feature identification (Hu and Liu, 2005; Popescu and Etzioni, 2005) could possibly help here, although these techniques currently have the limitation that they do not automatically identify different lexicalizations of the same feature. A different type of limitation is that dialogue systems need to generate utterances for information gathering whereas the mappings we obtained 271 can only be used for information presentation. Thus these would have to be constructed by hand, as in current practice, or perhaps other types of corpora or resources could be utilized. 
In addition, the utility of syntactic structures in the mappings should be further examined, especially given the failures in DSyntS conversion. An alternative would be to leave some sentences unparsed and use them as templates with hybrid generation techniques (White and Caldwell, 1998). Finally, while we believe that this technique will apply across domains, it would be useful to test it on domains such as movie reviews or product reviews, which have more complex domain ontologies. Acknowledgments We thank the anonymous reviewers for their helpful comments. This work was supported by a Royal Society Wolfson award to Marilyn Walker and a research collaboration grant from NTT to the Cognitive Systems Group at the University of Sheffield. References Regina Barzilay and Lillian Lee. 2002. Bootstrapping lexical choice via multiple-sequence alignment. In Proc. EMNLP, pages 164–171. Regina Barzilay and Lillian Lee. 2003. Learning to paraphrase: An unsupervised approach using multiplesequence alignment. In Proc. HLT/NAACL, pages 16–23. Regina Barzilay and Kathleen McKeown. 2001. Extracting paraphrases from a parallel corpus. In Proc. 39th ACL, pages 50–57. Hamish Cunningham, Diana Maynard, Kalina Bontcheva, and Valentin Tablan. 2002. GATE: A framework and graphical development environment for robust NLP tools and applications. In Proc. 40th ACL. Christiane Fellbaum. 1998. WordNet: An Electronic Lexical Database (Language, Speech, and Communication). The MIT Press. Julia Hirschberg and Diane. J. Litman. 1987. Now let’s talk about NOW: Identifying cue phrases intonationally. In Proc. 25th ACL, pages 163–171. Minqing Hu and Bing Liu. 2005. Mining and summarizing customer reviews. In Proc. KDD, pages 168–177. Alistair Knott. 1996. A Data-Driven Methodology for Motivating a Set of Coherence Relations. Ph.D. thesis, University of Edinburgh, Edinburgh. Benoit Lavoie and Owen Rambow. 1997. A fast and portable realizer for text generation systems. In Proc. 5th Applied NLP, pages 265–268. Dekang Lin and Patrick Pantel. 2001. Discovery of inference rules for question answering. Natural Language Engineering, 7(4):343–360. Dekang Lin. 1998. Dependency-based evaluation of MINIPAR. In Workshop on the Evaluation of Parsing Systems. Johanna D. Moore, Mary Ellen Foster, Oliver Lemon, and Michael White. 2004. Generating tailored, comparative descriptions in spoken dialogue. In Proc. 7th FLAIR. Bo Pang and Lillian Lee. 2005. Seeing stars: Exploiting class relationships for sentiment categorization with respect to rating scales. In Proc. 43st ACL, pages 115–124. Ana-Maria Popescu and Oren Etzioni. 2005. Extracting product features and opinions from reviews. In Proc. HLT/EMNLP, pages 339–346. Rashmi Prasad, Aravind Joshi, Nikhil Dinesh, Alan Lee, Eleni Miltsakaki, and Bonnie Webber. 2005. The Penn Discourse TreeBank as a resource for natural language generation. In Proc. Corpus Linguistics Workshop on Using Corpora for NLG. Stephanie Seneff and Joseph Polifroni. 2000. Formal and natural language generation in the mercury conversational system. In Proc. ICSLP, volume 2, pages 767–770. Mari¨et Theune. 2003. From monologue to dialogue: natural language generation in OVIS. In AAAI 2003 Spring Symposium on Natural Language Generation in Written and Spoken Dialogue, pages 141–150. Peter D. Turney. 2002. Thumbs up or thumbs down? semantic orientation applied to unsupervised classification of reviews. In Proc. 40th ACL, pages 417–424. Marilyn Walker, Rashmi Prasad, and Amanda Stent. 2003. 
A trainable generator for recommendations in multimodal dialog. In Proc. Eurospeech, pages 1697–1700. Michael White and Ted Caldwell. 1998. EXEMPLARS: A practical, extensible framework for dynamic text generation. In Proc. INLG, pages 266–275. Theresa Wilson, Janyce Wiebe, and Rebecca Hwa. 2004. Just how mad are you? finding strong and weak opinion clauses. In Proc. AAAI, pages 761–769. Appendix Adjectival phrases (APs) in single scalar-valued relation mappings for atmosphere, value, and overall. atmosphere=2 eclectic, unique and pleasant atmosphere=3 busy, pleasant but extremely hot atmosphere=4 fantastic, great, quite nice and simple, typical, very casual, very trendy, wonderful atmosphere=5 beautiful, comfortable, excellent, great, interior, lovely, mellow, nice, nice and comfortable, phenomenal, pleasant, quite pleasant, unbelievably beautiful, very comfortable, very cozy, very friendly, very intimate, very nice, very nice and relaxing, very pleasant, very relaxing, warm and contemporary, warm and very comfortable, wonderful value=3 very reasonable value=4 great, pretty good, reasonable, very good value=5 best, extremely reasonable, good, great, reasonable, totally reasonable, very good, very reasonable overall=1 just bad, nice, thoroughly humiliating overall=2 great, really bad overall=3 bad, decent, great, interesting, really fancy overall=4 excellent, good, great, just great, never busy, not very busy, outstanding, recommended, wonderful overall=5 amazing, awesome, capacious, delightful, extremely pleasant, fantastic, good, great, local, marvelous, neat, new, overall, overwhelmingly pleasant, pampering, peaceful but idyllic, really cool, really great, really neat, really nice, special, tasty, truly great, ultimate, unique and enjoyable, very enjoyable, very excellent, very good, very nice, very wonderful, warm and friendly, wonderful 272
Proceedings of the 21st International Conference on Computational Linguistics and 44th Annual Meeting of the ACL, pages 273–280, Sydney, July 2006. c⃝2006 Association for Computational Linguistics Measuring Language Divergence by Intra-Lexical Comparison T. Mark Ellison Informatics University of Edinburgh [email protected] Simon Kirby Language Evolution and Computation Research Unit Philosophy, Psychology and Language Sciences, University of Edinburgh [email protected] Abstract This paper presents a method for building genetic language taxonomies based on a new approach to comparing lexical forms. Instead of comparing forms cross-linguistically, a matrix of languageinternal similarities between forms is calculated. These matrices are then compared to give distances between languages. We argue that this coheres better with current thinking in linguistics and psycholinguistics. An implementation of this approach, called PHILOLOGICON, is described, along with its application to Dyen et al.’s (1992) ninety-five wordlists from Indo-European languages. 1 Introduction Recently, there has been burgeoning interest in the computational construction of genetic language taxonomies (Dyen et al., 1992; Nerbonne and Heeringa, 1997; Kondrak, 2002; Ringe et al., 2002; Benedetto et al., 2002; McMahon and McMahon, 2003; Gray and Atkinson, 2003; Nakleh et al., 2005). One common approach to building language taxonomies is to ascribe language-language distances, and then use a generic algorithm to construct a tree which explains these distances as much as possible. Two questions arise with this approach. The first asks what aspects of languages are important in measuring inter-language distance. The second asks how to measure distance given these aspects. A more traditional approach to building language taxonomies (Dyen et al., 1992) answers these questions in terms of cognates. A word in language A is said to be cognate with word in language B if the forms shared a common ancestor in the parent language of A and B. In the cognatecounting method, inter-language distance depends on the lexical forms of the languages. The distance between two languages is a function of the number or fraction of these forms which are cognate between the two languages1. This approach to building language taxonomies is hard to implement in toto because constructing ancestor forms is not easily automatable. More recent approaches, such as Kondrak’s (2002) and Heggarty et al’s (2005) work on dialect comparison, take the synchronic word forms themselves as the language aspect to be compared. Variations on edit distance (see Kessler (2005) for a survey) are then used to evaluate differences between languages for each word, and these differences are aggregated to give a distance between languages or dialects as a whole. This approach is largely automatable, although some methods do require human intervention. In this paper, we present novel answers to the two questions. The features of language we will compare are not sets of words or phonological forms. Instead we compare the similarities between forms, expressed as confusion probabilities. The distribution of confusion probabilities in one language is called a lexical metric. Section 2 presents the definition of lexical metrics and some arguments for their being good language representatives for the purposes of comparison. The distance between two languages is the divergence their lexical metrics. 
In section 3, we detail two methods for measuring this divergence: 1McMahon and McMahon (2003) for an account of treeinference from the cognate percentages in the Dyen et al. (1992) data. 273 Kullback-Liebler (herafter KL) divergence and Rao distance. The subsequent section (4) describes the application of our approach to automatically constructing a taxonomy of Indo-European languages from Dyen et al. (1992) data. Section 5 suggests how lexical metrics can help identify cognates. The final section (6) presents our conclusions, and discusses possible future directions for this work. Versions of the software and data files described in the paper will be made available to coincide with its publication. 2 Lexical Metric The first question posed by the distance-based approach to genetic language taxonomy is: what should we compare? In some approaches (Kondrak, 2002; McMahon et al., 2005; Heggarty et al., 2005; Nerbonne and Heeringa, 1997), the answer to this question is that we should compare the phonetic or phonological realisations of a particular set of meanings across the range of languages being studied. There are a number of problems with using lexical forms in this way. Firstly, in order to compare forms from different languages, we need to embed them in common phonetic space. This phonetic space provides granularity, marking two phones as identical or distinct, and where there is a graded measure of phonetic distinction it measures this. There is growing doubt in the field of phonology and phonetics about the meaningfulness of assuming of a common phonetic space. Port and Leary (2005) argue convincingly that this assumption, while having played a fundamental role in much recent linguistic theorising, is nevertheless unfounded. The degree of difference between sounds, and consequently, the degree of phonetic difference between words can only be ascertained within the context of a single language. It may be argued that a common phonetic space can be found in either acoustics or degrees of freedom in the speech articulators. Language-specific categorisation of sound, however, often restructures this space, sometimes with distinct sounds being treated as homophones. One example of this is the realisation of orthographic rr in European Portuguese: it is indifferently realised with an apical or a uvular trill, different sounds made at distinct points of articulation. If there is no language-independent, common phonetic space with an equally common similarity measure, there can be no principled approach to comparing forms in one language with those of another. In contrast, language-specific word-similarity is well-founded. A number of psycholinguistic models of spoken word recognition (Luce et al., 1990) are based on the idea of lexical neighbourhoods. When a word is accessed during processing, the other words that are phonemically or orthographically similar are also activated. This effect can be detected using experimental paradigms such as priming. Our approach, therefore, is to abandon the cross-linguistic comparison of phonetic realisations, in favour of language-internal comparison of forms. (See also work by Shillcock et al. (2001) and Tamariz (2005)). 2.1 Confusion probabilities One psychologically well-grounded way of describing the similarity of words is in terms of their confusion probabilities. Two words have high confusion probability if it is likely that one word could be produced or understood when the other was intended. 
This type of confusion can be measured experimentally by giving subjects words in noisy environments and measuring what they apprehend. A less pathological way in which confusion probability is realised is in coactivation. If a person hears a word, then they more easily and more quickly recognise similar words. This coactivation occurs because the phonological realisation of words is not completely separate in the mind. Instead, realisations are interdependent with realisations of similar words. We propose that confusion probabilities are ideal information to constitute the lexical metric. They are language-specific, psychologically grounded, can be determined by experiment, and integrate with existing psycholinguistic models of word recognition. 2.2 NAM and beyond Unfortunately, experimentally determined confusion probabilities for a large number of languages are not available. Fortunately, models of spoken word recognition allow us to predict these probabilities from easily-computable measures of word similarity. 274 For example, the neighbourhood activation model (NAM) (Luce et al., 1990; Luce and Pisoni, 1998) predicts confusion probabilities from the relative frequency of words in the neighbourhood of the target. Words are in the neighbourhood of the target if their Levenstein (1965) edit distance from the target is one. The more frequent the word is, the greater its likelihood of replacing the target. Bailey and Hahn (2001) argue, however, that the all-or-nothing nature of the lexical neighbourhood is insufficient. Instead word similarity is the complex function of frequency and phonetic similarity shown in equation (1). Here A, B, C and D are constants of the model, u and v are words, and d is a phonetic similarity model. s = (AF(u)2 + BF(u) + C)e−D.d(u,v) (1) We have adapted this model slightly, in line with NAM, taking the similarity s to be the probability of confusing stimulus v with form u. Also, as our data usually offers no frequency information, we have adopted the maximum entropy assumption, namely, that all relative frequencies are equal. Consequently, the probability of confusion of two words depends solely on their similarity distance. While this assumption degrades the psychological reality of the model, it does not render it useless, as the similarity measure continues to provide important distinctions in neighbourhood confusability. We also assume for simplicity, that the constant D has the value 1. With these simplifications, equation (2) shows the probability of apprehending word w, out of a set W of possible alternatives, given a stimulus word ws. P(w|ws) = e−d(w,ws)/N(ws) (2) The normalising constant N(s) is the sum of the non-normalised values for e−d(w,ws) for all words w. N(ws) = X w∈W e−d(u,v) 2.3 Scaled edit distances Kidd and Watson (1992) have shown that discriminability of frequency and of duration of tones in a tone sequence depends on its length as a proportion of the length of the sequence. Kapatsinski (2006) uses this, with other evidence, to argue that word recognition edit distances must be scaled by word-length. There are other reasons for coming to the same conclusion. The simple Levenstein distance exaggerates the disparity between long words in comparison with short words. A word of consisting of 10 symbols, purely by virtue of its length, will on average be marked as more different from other words than a word of length two. 
For example, Levenstein distance between interested and rest is six, the same as the distance between rest and by, even though the latter two have nothing in common. As a consequence, close phonetic transcriptions, which by their very nature are likely to involve more symbols per word, will result in larger edit distances than broad phonemic transcriptions of the same data. To alleviate this problem, we define a new edit distance function d2 which scales Levenstein distances by the average length of the words being compared (see equation 3). Now the distance between interested and rest is 0.86, while that between rest and by is 2.0, reflecting the greater relative difference in the second pair. d2(w2, w1) = 2d(w2, w1) |w1| + |w2| (3) Note that by scaling the raw edit distance with the average lengths of the words, we are preserving the symmetric property of the distance measure. There are other methods of comparing strings, for example string kernels (Shawe-Taylor and Cristianini, 2004), but using Levenstein distance keeps us coherent with the psycholinguistic accounts of word similarity. 2.4 Lexical Metric Bringing this all together, we can define the lexical metric. A lexicon L is a mapping from a set of meanings M, such as “DOG”, “TO RUN”, “GREEN”, etc., onto a set F of forms such as /pies/, /biec/, /zielony/. The confusion probability P of m1 for m2 in lexical L is the normalised negative exponential of the scaled edit-distance of the corresponding forms. It is worth noting that when frequencies are assumed to follow the maximum entropy distribution, this connection between confusion probabilities and distances (see equation 4) is the same as that proposed by Shepard (1987). 275 P(m1|m2; L) = e−d2(L(m1),L(m2)) N(m2; L) (4) A lexical metric of L is the mapping LM(L) : M2 →[0, 1] which assigns to each pair of meanings m1, m2 the probability of confusing m1 for m2, scaled by the frequency of m2. LM(L)(m1, m2) = P(L(m1)|L(m2))P(m2) = e−d2(L(m1),L(m2)) N(m2; L)|M| where N(m2; L) is the normalising function defined in equation (5). N(m2; L) = X m∈M e−d2(L(m),L(m2)) (5) Table 1 shows a minimal lexicon consisting only of the numbers one to five, and a corresponding lexical metric. The values in the lexical metric are one two three four five one 0.102 0.027 0.023 0.024 0.024 two 0.028 0.107 0.024 0.026 0.015 three 0.024 0.024 0.107 0.023 0.023 four 0.025 0.025 0.022 0.104 0.023 five 0.026 0.015 0.023 0.025 0.111 Table 1: A lexical metric on a mini-lexicon consisting of the numbers one to five. inferred word confusion probabilities. The matrix is normalised so that the sum of each row is 0.2, ie. one-fifth for each of the five words, so the total of the matrix is one. Note that the diagonal values vary because the off-diagonal values in each row vary, and consequently, so does the normalisation for the row. 3 Language-Language Distance In the previous section, we introduced the lexical metric as the key measurable for comparing languages. Since lexical metrics are probability distributions, comparison of metrics means measuring the difference between probability distributions. To do this, we use two measures: the symmetric Kullback-Liebler divergence (Jeffreys, 1946) and the Rao distance (Rao, 1949; Atkinson and Mitchell, 1981; Micchelli and Noakes, 2005) based on Fisher Information (Fisher, 1959). These can be defined in terms the geometric path from one distribution to another. 
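To make the construction concrete, the sketch below (hypothetical Python, not the PHILOLOGICON implementation itself) builds lexical metrics for a toy lexicon following equations (3)-(5), and then computes the symmetric Kullback-Leibler divergence and the Rao distance of Sections 3.1-3.3 below from the quantities k(α), k′(α) and k′′(α). The edit distance here counts a substitution as a deletion plus an insertion, which reproduces the distances quoted above (0.86 for interested/rest, 2.0 for rest/by); whether the authors used exactly this cost scheme is an assumption, as are the simplified toy word forms.

```python
import math
import numpy as np

def edit_distance(a, b):
    # Insertions and deletions cost 1; a substitution is counted as a deletion
    # plus an insertion (cost 2), matching the example distances in the text.
    prev = list(range(len(b) + 1))
    for i, ca in enumerate(a, 1):
        cur = [i]
        for j, cb in enumerate(b, 1):
            cur.append(min(prev[j] + 1, cur[j - 1] + 1,
                           prev[j - 1] + (0 if ca == cb else 2)))
        prev = cur
    return prev[-1]

def d2(w1, w2):
    # Length-scaled edit distance, equation (3).
    return 2.0 * edit_distance(w1, w2) / (len(w1) + len(w2))

def lexical_metric(lexicon):
    # Confusion-probability matrix LM(L), equations (4)-(5); `lexicon` maps
    # meanings to forms, and every row sums to 1/|M| as in Table 1.
    meanings = sorted(lexicon)
    n = len(meanings)
    M = np.zeros((n, n))
    for i, m2 in enumerate(meanings):
        row = np.array([math.exp(-d2(lexicon[m1], lexicon[m2])) for m1 in meanings])
        M[i] = row / (row.sum() * n)
    return meanings, M

def divergences(P, Q):
    # Symmetric KL divergence (equation 10) and Rao distance (equation 14),
    # derived from k(alpha) and its first two derivatives (equations 7, 8, 11).
    P, Q = P.ravel(), Q.ravel()
    log_ratio = np.log(Q / P)
    k  = lambda a: np.sum(P ** (1 - a) * Q ** a)
    k1 = lambda a: np.sum(log_ratio * P ** (1 - a) * Q ** a)
    k2 = lambda a: np.sum(log_ratio ** 2 * P ** (1 - a) * Q ** a)
    kl = (k1(1.0) - k1(0.0)) / 2.0
    rao = math.sqrt((k(0.5) * k2(0.5) - k1(0.5) ** 2) / k(0.5) ** 2)
    return kl, rao

# Toy comparison of the numerals one to five (forms simplified to ASCII).
_, P = lexical_metric({"1": "one", "2": "two", "3": "three", "4": "four", "5": "five"})
_, Q = lexical_metric({"1": "en", "2": "tva", "3": "tre", "4": "fyra", "5": "fem"})
print(divergences(P, Q))
```

Because both matrices are proper joint distributions over the same meaning set, the two languages can be compared directly once the metrics are cropped to the meanings they share.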
3.1 Geometric paths The geometric path between two distributions P and Q is a conditional distribution R with a continuous parameter α such that at α = 0, the distribution is P, and at α = 1 it is Q. This conditional distribution is called the geometric because it consists of normalised weighted geometric means of the two defining distributions (equation 6). R( ¯w|α) = P( ¯w)αQ( ¯w)1−α/k(α; P, Q) (6) The function k(α; P, Q) is a normaliser for the conditional distribution, being the sum of the weighted geometric means of values from P and Q (equation 7). This value is known as the Chernoff coefficient or Helliger path (Basseville, 1989). For brevity, the P, Q arguments to k will be treated as implicit and not expressed in equations. k(α) = X ¯w∈W 2 P( ¯w)1−αQ( ¯w)α (7) 3.2 Kullback-Liebler distance The first-order (equation 8) differential of the normaliser with regard to α is of particular interest. k′(α) = X ¯w∈W 2 log Q( ¯w) P( ¯w)P( ¯w)1−αQ( ¯w)α (8) At α = 0, this value is the negative of the Kullback-Liebler distance KL(P|Q) of Q with regard to P (Basseville, 1989). At α = 1, it is the Kullback-Liebler distance KL(Q|P) of P with regard to Q. Jeffreys’ (1946) measure is a symmetrisation of KL distance, by averaging the commutations (equations 9,10). KL(P, Q) = KL(Q|P) + KL(P|Q) 2 (9) = k′(1) −k′(0) 2 (10) 3.3 Rao distance Rao distance depends on the second-order (equation 11) differential of the normaliser with regard to α. k′′(α) = X ¯w∈W 2 log2 Q( ¯w) P( ¯w)P( ¯w)1−αQ( ¯w)α (11) Fisher information is defined as in equation (12). FI(P, x) = − Z ∂2 log P(y|x) ∂x2 P(y|x)dy (12) 276 Equation (13) expresses Fisher information along the path R from P to Q at point α using k and its first two derivatives. FI(R, α) = k(α)k′′(α) −k′(α)2 k(α)2 (13) The Rao distance r(P, Q) along R can be approximated by the square root of the Fisher information at the path’s midpoint α = 0.5. r(P, Q) = s k(0.5)k′′(0.5) −k′(0.5)2 k(0.5)2 (14) 3.4 The PHILOLOGICON algorithm Bringing these pieces together, the PHILOLOGICON algorithm for measuring the divergence between two languages has the following steps: 1. determine their joint confusion probability matrices, P and Q, 2. substitute these into equation (7), equation (8) and equation (11) to calculate k(0), k(0.5), k(1), k′(0.5), and k′′(0.5), 3. and put these into equation (10) and equation (14) to calculate the KL and Rao distances between between the languages. 4 Indo-European The ideal data for reconstructing Indo-European would be an accurate phonemic transcription of words used to express specifically defined meanings. Sadly, this kind of data is not readily available. However, as a stop-gap measure, we can adopt the data that Dyen et al. collected to construct a Indo-European taxonomy using the cognate method. 4.1 Dyen et al’s data Dyen et al. (1992) collected 95 data sets, each pairing a meaning from a Swadesh (1952)-like 200word list with its expression in the corresponding language. The compilers annotated with data with cognacy relations, as part of their own taxonomic analysis of Indo-European. There are problems with using Dyen’s data for the purposes of the current paper. Firstly, the word forms collected are not phonetic, phonological or even full orthographic representations. As the authors state, the forms are expressed in sufficient detail to allow an interested reader acquainted with the language in question to identify which word is being expressed. Secondly, many meanings offer alternative forms, presumably corresponding to synonyms. 
For a human analyst using the cognate approach, this means that a language can participate in two (or more) word-derivation systems. In preparing this data for processing, we have consistently chosen the first of any alternatives. A further difficulty lies in the fact that many languages are not represented by the full 200 meanings. Consequently, in comparing lexical metrics from two data sets, we frequently need to restrict the metrics to only those meanings expressed in both the sets. This means that the KL divergence or the Rao distance between two languages were measured on lexical metrics cropped and rescaled to the meanings common to both data-sets. In most cases, this was still more than 190 words. Despite these mismatches between Dyen et al.’s data and our needs, it provides an testbed for the PHILOLOGICON algorithm. Our reasoning being, that if successful with this data, the method is reasonably reliable. Data was extracted to languagespecific files, and preprocessed to clean up problems such as those described above. An additional data-set was added with random data to act as an outlier to root the tree. 4.2 Processing the data PHILOLOGICON software was then used to calculate the lexical metrics corresponding to the individual data files and to measure KL divergences and Rao distances between them. The program NEIGHBOR from the PHYLIP2 package was used to construct trees from the results. 4.3 The results The tree based on Rao distances is shown in figure 1. The discussion follows this tree except in those few cases mentioning differences in the KL tree. The standard against which we measure the success of our trees is the conservative traditional taxonomy to be found in the Ethnologue (Grimes and Grimes, 2000). The fit with this taxonomy was so good that we have labelled the major branches with their traditional names: Celtic, Germanic, etc. In fact, in most cases, the branchinternal divisions — eg. Brythonic/Goidelic in Celtic, Western/Eastern/Southern in Slavic, or 2See http://evolution.genetics.washington.edu/phylip.html. 277 Western/Northern in Germanic — also accord. Note that PHILOLOGICON even groups Baltic and Slavic together into a super-branch Balto-Slavic. Where languages are clearly out of place in comparison to the traditional taxonomy, these are highlighted: visually in the tree, and verbally in the following text. In almost every case, there are obvious contact phenomena which explain the deviation from the standard taxonomy. Armenian was grouped with the Indo-Iranian languages. Interestingly, Armenian was at first thought to be an Iranian language, as it shares much vocabulary with these languages. The common vocabulary is now thought to be the result of borrowing, rather than common genetic origin. In the KL tree, Armenian is placed outside of the Indo-Iranian languages, except for Gypsy. On the other hand, in this tree, Ossetic is placed as an outlier of the Indian group, while its traditional classification (and the Rao distance tree) puts it among the Iranian languages. Gypsy is an Indian language, related to Hindi. It has, however, been surrounded by European languages for some centuries. The effects of this influence is the likely cause for it being classified as an outlier in the Indo-Iranian family. A similar situation exists for Slavic: one of the two lists that Dyen et al. offer for Slovenian is classed as an outlier in Slavic, rather than classifying it with the Southern Slavic languages. 
The other Slovenian list is classified correctly with Serbocroatian. It is possible that the significant impact of Italian on Slovenian has made it an outlier. In Germanic, it is English that is the outlier. This may be due to the impact of the English creole, Takitaki, on the hierarchy. This language is closest to English, but is very distinct from the rest of the Germanic languages. Another misclassification also is the result of contact phenomena. According to the Ethnologue, Sardinian is Southern Romance, a separate branch from Italian or from Spanish. However, its constant contact with Italian has influenced the language such that it is classified here with Italian. We can offer no explanation for why Wakhi ends up an outlier to all the groups. In conclusion, despite the noisy state of Dyen et al.’s data (for our purposes), the PHILOLOGICON generates a taxonomy close to that constructed using the traditional methods of historical linguistics. Where it deviates, the deviation usually points to identifiable contact between languages. G r e e k I n d o − I r a n i a n A l b a n i a n B a l t o − S l a v i c G e r m a n i c R o m a n c e C e l t i c Wakhi Greek D Greek MD Greek ML Greek Mod Greek K Afghan Waziri Armenian List Baluchi Persian List Tadzik Ossetic Bengali Hindi Lahnda Panjabi ST Gujarati Marathi Khaskura Nepali List Kashmiri Singhalese Gypsy Gk ALBANIAN Albanian G Albanian C Albanian K Albanian Top Albanian T Bulgarian BULGARIAN P MACEDONIAN P Macedonian Serbocroatian SERBOCROATIAN P SLOVENIAN P Byelorussian BYELORUSSIAN P Russian RUSSIAN P Ukrainian UKRAINIAN P Czech CZECH P Slovak SLOVAK P Czech E Lusatian L Lusatian U Polish POLISH P Slovenian Latvian Lithuanian O Lithuanian ST Afrikaans Dutch List Flemish Frisian German ST Penn Dutch Danish Riksmal Swedish List Swedish Up Swedish VL Faroese Icelandic ST English ST Takitaki Brazilian Portuguese ST Spanish Catalan Italian Sardinian L Sardinian N Ladin French Walloon Provencal French Creole C French Creole D Rumanian List Vlach Breton List Breton ST Breton SE Welsh C Welsh N Irish A Irish B Random Armenian Mod Sardinian C Figure 1: Taxonomy of 95 Indo-European data sets and artificial outlier using PHILOLOGICON and PHYLIP 278 5 Reconstruction and Cognacy Subsection 3.1 described the construction of geometric paths from one lexical metric to another. This section describes how the synthetic lexical metric at the midpoint of the path can indicate which words are cognate between the two languages. The synthetic lexical metric (equation 15) applies the formula for the geometric path equation (6) to the lexical metrics equation (5) of the languages being compared, at the midpoint α = 0.5. R 1 2 (m1, m2) = p P(m1|m2)Q(m1|m2) |M|k(1 2) (15) If the words for m1 and m2 in both languages have common origins in a parent language, then it is reasonable to expect that their confusion probabilities in both languages will be similar. Of course different cognate pairs m1, m2 will have differing values for R, but the confusion probabilities in P and Q will be similar, and consequently, the reinforce the variance. If either m1 or m2, or both, is non-cognate, that is, has been replaced by another arbitrary form at some point in the history of either language, then the P and Q for this pair will take independently varying values. Consequently, the geometric mean of these values is likely to take a value more closely bound to the average, than in the purely cognate case. 
Thus rows in the lexical metric with wider dynamic ranges are likely to correspond to cognate words. Rows corresponding to non-cognates are likely to have smaller dynamic ranges. The dynamic range can be measured by taking the Shannon information of the probabilities in the row. Table 2 shows the most low- and highinformation rows from English and Swedish (Dyen et al’s (1992) data). At the extremes of low and high information, the words are invariably cognate and non-cognate. Between these extremes, the division is not so clear cut, due to chance effects in the data. 6 Conclusions and Future Directions In this paper, we have presented a distancebased method, called PHILOLOGICON, that constructs genetic trees on the basis of lexica from each language. The method only compares words language-internally, where comparison seems both psychologically real and reliable, English Swedish 104(h −¯h) Low Information we vi -1.30 here her -1.19 to sit sitta -1.14 to flow flyta -1.04 wide vid -0.97 : scratch klosa 0.78 dirty smutsig 0.79 left (hand) vanster 0.84 because emedan 0.89 High Information Table 2: Shannon information of confusion distributions in the reconstruction of English and Swedish. Information levels are shown translated so that the average is zero. and never cross-linguistically, where comparison is less well-founded. It uses measures founded in information theory to compare the intra-lexical differences. The method successfully, if not perfectly, recreated the phylogenetic tree of Indo-European languages on the basis of noisy data. In further work, we plan to improve both the quantity and the quality of the data. Since most of the mis-placements on the tree could be accounted for by contact phenomena, it is possible that a network-drawing, rather than tree-drawing, analysis would produce better results. Likewise, we plan to develop the method for identifying cognates. The key improvement needed is a way to distinguish indeterminate distances in reconstructed lexical metrics from determinate but uniform ones. This may be achieved by retaining information about the distribution of the original values which were combined to form the reconstructed metric. References C. Atkinson and A.F.S. Mitchell. 1981. Rao’s distance measure. Sankhy¯a, 4:345–365. Todd M. Bailey and Ulrike Hahn. 2001. Determinants of wordlikeness: Phonotactics or lexical neighborhoods? Journal of Memory and Language, 44:568– 591. Michle Basseville. 1989. Distance measures for signal processing and pattern recognition. Signal Processing, 18(4):349–369, December. 279 D. Benedetto, E. Caglioti, and V. Loreto. 2002. Language trees and zipping. Physical Review Letters, 88. Isidore Dyen, Joseph B. Kruskal, and Paul Black. 1992. An indo-european classification: a lexicostatistical experiment. Transactions of the American Philosophical Society, 82(5). R.A. Fisher. 1959. Statistical Methods and Scientific Inference. Oliver and Boyd, London. Russell D. Gray and Quentin D. Atkinson. 2003. Language-tree divergence times support the anatolian theory of indo-european origin. Nature, 426:435–439. B.F. Grimes and J.E. Grimes, editors. 2000. Ethnologue: Languages of the World. SIL International, 14th edition. Paul Heggarty, April McMahon, and Robert McMahon, 2005. Perspectives on Variation, chapter From phonetic similarity to dialect classification. Mouton de Gruyter. H. Jeffreys. 1946. An invariant form for the prior probability in estimation problems. Proc. Roy. Soc. A, 186:453–461. Vsevolod Kapatsinski. 2006. 
Sound similarity relations in the mental lexicon: Modeling the lexicon as a complex network. Technical Report 27, Indiana University Speech Research Lab. Brett Kessler. 2005. Phonetic comparison algorithms. Transactions of the Philological Society, 103(2):243–260. Gary R. Kidd and C.S. Watson. 1992. The ”proportion-of-the-total-duration rule for the discrimination of auditory patterns. Journal of the Acoustic Society of America, 92:3109–3118. Grzegorz Kondrak. 2002. Algorithms for Language Reconstruction. Ph.D. thesis, University of Toronto. V.I. Levenstein. 1965. Binary codes capable of correcting deletions, insertions and reversals. Doklady Akademii Nauk SSSR, 163(4):845–848. Paul Luce and D. Pisoni. 1998. Recognizing spoken words: The neighborhood activation model. Ear and Hearing, 19:1–36. Paul Luce, D. Pisoni, and S. Goldinger, 1990. Cognitive Models of Speech Perception: Psycholinguistic and Computational Perspectives, chapter Similarity neighborhoods of spoken words, pages 122– 147. MIT Press, Cambridge, MA. April McMahon and Robert McMahon. 2003. Finding families: quantitative methods in language classification. Transactions of the Philological Society, 101:7–55. April McMahon, Paul Heggarty, Robert McMahon, and Natalia Slaska. 2005. Swadesh sublists and the benefits of borrowing: an andean case study. Transactions of the Philological Society, 103(2):147–170. Charles A. Micchelli and Lyle Noakes. 2005. Rao distances. Journal of Multivariate Analysis, 92(1):97– 115. Luay Nakleh, Tandy Warnow, Don Ringe, and Steven N. Evans. 2005. A comparison of phylogenetic reconstruction methods on an ie dataset. Transactions of the Philological Society, 103(2):171–192. J. Nerbonne and W. Heeringa. 1997. Measuring dialect distance phonetically. In Proceedings of SIGPHON-97: 3rd Meeting of the ACL Special Interest Group in Computational Phonology. B. Port and A. Leary. 2005. Against formal phonology. Language, 81(4):927–964. C.R. Rao. 1949. On the distance between two populations. Sankhy¯a, 9:246–248. D. Ringe, Tandy Warnow, and A. Taylor. 2002. Indoeuropean and computational cladistics. Transactions of the Philological Society, 100(1):59–129. John Shawe-Taylor and Nello Cristianini. 2004. Kernel Methods for Pattern Analysis. Cambridge University Press. R.N. Shepard. 1987. Toward a universal law of generalization for physical science. Science, 237:1317– 1323. Richard C. Shillcock, Simon Kirby, Scott McDonald, and Chris Brew. 2001. Filled pauses and their status in the mental lexicon. In Proceedings of the 2001 Conference of Disfluency in Spontaneous Speech, pages 53–56. M. Swadesh. 1952. Lexico-statistic dating of prehistoric ethnic contacts. Proceedings of the American philosophical society, 96(4). Monica Tamariz. 2005. Exploring the Adaptive Structure of the Mental Lexicon. Ph.D. thesis, University of Edinburgh. 280
Proceedings of the 21st International Conference on Computational Linguistics and 44th Annual Meeting of the ACL, pages 281–288, Sydney, July 2006. c⃝2006 Association for Computational Linguistics Enhancing electronic dictionaries with an index based on associations Olivier Ferret CEA –LIST/LIC2M 18 Route du Panorama F-92265 Fontenay-aux-Roses [email protected] Michael Zock1 LIF-CNRS 163 Avenue de Luminy F-13288 Marseille Cedex 9 [email protected] Abstract A good dictionary contains not only many entries and a lot of information concerning each one of them, but also adequate means to reveal the stored information. Information access depends crucially on the quality of the index. We will present here some ideas of how a dictionary could be enhanced to support a speaker/writer to find the word s/he is looking for. To this end we suggest to add to an existing electronic resource an index based on the notion of association. We will also present preliminary work of how a subset of such associations, for example, topical associations, can be acquired by filtering a network of lexical co-occurrences extracted from a corpus. 1 Introduction A dictionary user typically pursues one of two goals (Humble, 2001): as a decoder (reading, listening), he may look for the definition or the translation of a specific target word, while as an encoder (speaker, writer) he may want to find a word that expresses well not only a given concept, but is also appropriate in a given context. Obviously, readers and writers come to the dictionary with different mindsets, information and expectations concerning input and output. While the decoder can provide the word he wants additional information for, the encoder (language producer) provides the meaning of a word for which he lacks the corresponding form. In sum, users with different goals need access to different indexes, one that is based on form (decoding), 1 In alphabetical order the other being based on meaning or meaning relations (encoding). Our concern here is more with the encoder, i.e. lexical access in language production, a feature largely neglected in lexicographical work. Yet, a good dictionary contains not only many entries and a lot of information concerning each one of them, but also efficient means to reveal the stored information. Because, what is a huge dictionary good for, if one cannot access the information it contains? 2 Lexical access on the basis of what: concepts (i.e. meanings) or words? Broadly speaking, there are two views concerning lexicalization: the process is conceptuallydriven (meaning, or parts of it are the starting point) or lexically-driven2: the target word is accessed via a source word. This is typically the case when we are looking for a synonym, antonym, hypernym (paradigmatic associations), or any of its syntagmatic associates (red-rose, coffee-black), the kind of association we will be concerned with here. Yet, besides conceptual knowledge, people seem also to know a lot of things concerning the lexical form (Brown and Mc Neill, 1966): number of syllables, beginning/ending of the target word, part of speech (noun, verb, adjective, etc.), origin (Greek or Latin), gender (Vigliocco et al., 2 Of course, the input can also be hybrid, that is, it can be composed of a conceptual and a linguistic component. 
For example, in order to express the notion of intensity, MAGN in Mel’čuk’s theory (Mel’čuk et al., 1995), a speaker or writer has to use different words (very, seriously, high) depending on the form of the argument (ill, wounded, price), as he says very ill, seriously wounded, high price. In each case he expresses the very same notion, but by using a different word. While he could use the adverb very for qualifying the state of somebody’s health (he is ill), he cannot do so when qualifying the words injury or price. Likewise, he cannot use this specific adverb to qualify the noun illness. 281 1997). While in principle, all this information could be used to constrain the search space, we will deal here only with one aspect, the words’ relations to other concepts or words (associative knowledge). Suppose, you were looking for a word expressing the following ideas: domesticated animal, producing milk suitable for making cheese. Suppose further that you knew that the target word was neither cow, buffalo nor sheep. While none of this information is sufficient to guarantee the access of the intended word goat, the information at hand (part of the definition) could certainly be used3. Besides this type of information, people often have other kinds of knowledge concerning the target word. In particular, they know how the latter relates to other words. For example, they know that goats and sheep are somehow connected, sharing a great number of features, that both are animals (hypernym), that sheep are appreciated for their wool and meat, that they tend to follow each other blindly, etc., while goats manage to survive, while hardly eating anything, etc. In sum, people have in their mind a huge lexico-conceptual network, with words 4 , concepts or ideas being highly interconnected. Hence, any one of them can evoke the other. The likelihood for this to happen depends on such factors as frequency (associative strength), saliency and distance (direct vs. indirect access). As one can see, associations are a very general and powerful mechanism. No matter what we hear, read or say, anything is likely to remind us of something else. This being so, we should make use of it. 3 For some concrete proposals going in this direction, see dictionaries offering reverse lookup: http://www.ultralingua. net/ ,http://www.onelook.com/reverse-dictionary.shtml. 4 Of course, one can question the very fact that people store words in their mind. Rather than considering the human mind as a wordstore one might consider it as a wordfactory. Indeed, by looking at some of the work done by psychologists who try to emulate the mental lexicon (Levelt et al., 1999) one gets the impression that words are synthesized rather than located and call up. In this case one might conclude that rather than having words in our mind we have a set of highly distributed, more or less abstract information. By propagating energy rather than data —(as there is no message passing, transformation or cumulation of information, there is only activation spreading, that is, changes of energy levels, call it weights, electronic impulses, or whatever),— that we propagate signals, activating ultimately certain peripherical organs (larynx, tongue, mouth, lips, hands) in such a way as to produce movements or sounds, that, not knowing better, we call words. 
3 Accessing the target word by navigating in a huge associative network If one agrees with what we have just said, one could view the mental lexicon as a huge semantic network composed of nodes (words and concepts) and links (associations), with either being able to activate the other5. Finding a word involves entering the network and following the links leading from the source node (the first word that comes to your mind) to the target word (the one you are looking for). Suppose you wanted to find the word nurse (target word), yet the only token coming to your mind is hospital. In this case the system would generate internally a graph with the source word at the center and all the associated words at the periphery. Put differently, the system would build internally a semantic network with hospital in the center and all its associated words as satellites (see Figure 1, next page). Obviously, the greater the number of associations, the more complex the graph. Given the diversity of situations in which a given object may occur we are likely to build many associations. In other words, lexical graphs tend to become complex, too complex to be a good representation to support navigation. Readability is hampered by at least two factors: high connectivity (the great number of links or associations emanating from each word), and distribution: conceptually related nodes, that is, nodes activated by the same kind of association are scattered around, that is, they do not necessarily occur next to each other, which is quite confusing for the user. In order to solve this problem, we suggest to display by category (chunks) all the words linked by the same kind of association to the source word (see Figure 2). Hence, rather than displaying all the connected words as a flat list, we suggest to present them in chunks to allow for categorial search. Having chosen a category, the user will be presented a list of words or categories from which he must choose. If the target word is in the category chosen by the user (suppose he looked for a hypernym, hence he checked the ISA-bag), search stops, otherwise it continues. The user could choose either another category (e.g. AKO or TIORA), or a word in the current list, which would then become the new starting point. 5 While the links in our brain may only be weighted, they need to be labelled to become interpretable for human beings using them for navigational purposes in a lexicon. 282 DENTIST assistant near-synonym GYNECOLOGIST PHYSICIAN HEALTH INSTITUTION CLINIC DOCTOR SANATORIUM PSYCHIATRIC HOSPITAL MILITARY HOSPITAL ASYLUM treat A O K take care of treat HOSPITAL PATIENT INMATE TIORA synonym ISA A O K A O K A O K A O K ISA ISA ISA ISA ISA TIORA TIORA nurse Internal Representation Figure 1: Search based on navigating in a network (internal representation) AKO: a kind of; ISA: subtype; TIORA: Typically Involved Object, Relation or Actor. list of potential target words (LOPTW) source word link link link link link LOPTW LOPTW list of potential target words (LOPTW) ... Abstract representation of the search graph hospital TIORA ISA AKO clinic, sanatorium, ... military hospital, psychiatric hospital inmate SYNONYM nurse doctor, ... patient ... A concrete example Figure 2: Proposed candidates, grouped by family, i.e. 
according to the nature of the link As one can see, the fact that the links are labeled has some very important consequences: (a) While maintaining the power of a highly connected graph (possible cyclic navigation), it has at the interface level the simplicity of a tree: each node points only to data of the same type, i.e. to the same kind of association. (b) With words being presented in clusters, navigation can be accomplished by clicking on the appropriate category. The assumption being that the user generally knows to which category the target word belongs (or at least, he can recognize within which of the listed categories it falls), and that categorical search is in principle faster than search in a huge list of unordered (or, alphabetically ordered) words6. Obviously, in order to allow for this kind of access, the resource has to be built accordingly. This requires at least two things: (a) indexing words by the associations they evoke, (b) identi 6 Even though very important, at this stage we shall not worry too much for the names given to the links. Indeed, one might question nearly all of them. What is important is the underlying rational: help users to navigate on the basis of symbolically qualified links. In reality a whole set of words (synonyms, of course, but not only) could amount to a link, i.e. be its conceptual equivalent. 283 fying and labelling the most frequent/useful associations. This is precisely our goal. Actually, we propose to build an associative network by enriching an existing electronic dictionary (essentially) with (syntagmatic) associations coming from a corpus, representing the average citizen’s shared, basic knowledge of the world (encyclopaedia). While some associations are too complex to be extracted automatically by machine, others are clearly within reach. We will illustrate in the next section how this can be achieved. 4 Automatic extraction of topical relations 4.1 Definition of the problem We have argued in the previous sections that dictionaries must contain many kinds of relations on the syntagmatic and paradigmatic axis to allow for natural and flexible access of words. Synonymy, hypernymy or meronymy fall clearly in this latter category, and well known resources like WordNet (Miller, 1995), EuroWordNet (Vossen, 1998) or MindNet (Richardson et al., 1998) contain them. However, as various researchers have pointed out (Harabagiu et al., 1999), these networks lack information, in particular with regard to syntagmatic associations, which are generally unsystematic. These latter, called TIORA (Zock and Bilac, 2004) or topical relations (Ferret, 2002) account for the fact that two words refer to the same topic, or take part in the same situation or scenario. Word-pairs like doctor–hospital, burglar–policeman or plane–airport, are examples in case. The lack of such topical relations in resources like WordNet has been dubbed as the tennis problem (Roger Chaffin, cited in Fellbaum, 1998). Some of these links have been introduced more recently in WordNet via the domain relation. Yet their number remains still very small. For instance, WordNet 2.1 does not contain any of the three associations mentioned here above, despite their high frequency. The lack of systematicity of these topical relations makes their extraction and typing very difficult on a large scale. This is why some researchers have proposed to use automatic learning techniques to extend lexical networks like WordNet. 
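As a minimal sketch of such an index (hypothetical code, not the proposed system; the particular words and groupings are illustrative only, loosely following Figures 1 and 2), the labelled associations can be stored per word and returned grouped by link type, so that navigation proceeds category by category:

```python
# Hypothetical labelled association index; entries would be built from an
# existing dictionary enriched with corpus-derived associations.
network = {
    "hospital": {
        "TIORA": ["nurse", "doctor", "patient"],
        "AKO":   ["clinic", "sanatorium", "military hospital", "psychiatric hospital"],
        "ISA":   ["health institution"],
    },
    # ... one entry per word
}

def candidates(source_word, link_type=None):
    # Return associated words for one link type, or all of them grouped by type
    # (the chunked presentation of Figure 2).
    links = network.get(source_word, {})
    return links.get(link_type, []) if link_type else links

# Schematic navigation loop: starting from "hospital", the user inspects the
# TIORA chunk and finds "nurse"; otherwise a chosen word (or another link type)
# becomes the new starting point and the lookup is repeated.
```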
In (Harabagiu & Moldovan, 1998), this was done by extracting topical relations from the glosses associated to the synsets. Other researchers used external sources: Mandala et al. (1999) integrated co-occurrences and a thesaurus to WordNet for query expansion; Agirre et al. (2001) built topic signatures from texts in relation to synsets; Magnini and Cavagliá (2000) annotated the synsets with Subject Field Codes. This last idea has been taken up and extended by (Avancini et al., 2003) who expanded the domains built from this annotation. Despite the improvements, all these approaches are limited by the fact that they rely too heavily on WordNet and some of its more sophisticated features (such as the definitions associated with the synsets). While often being exploited by acquisition methods, these features are generally lacking in similar lexico-semantic networks. Moreover, these methods attempt to learn topical knowledge from a lexical network rather than topical relations. Since our goal is different, we have chosen not to rely on any significant resource, all the more as we would like our method to be applicable to a wide array of languages. In consequence, we took an incremental approach (Ferret, 2006): starting from a network of lexical co-occurrences7 collected from a large corpus, we used these latter to select potential topical relations by using a topical analyzer. 4.2 From a network of co-occurrences to a set of Topical Units We start by extracting lexical co-occurrences from a corpus to build a network. To this end we follow the method introduced by (Church and Hanks, 1990), i.e. by sliding a window of a given size over some texts. The parameters of this extraction were set in such a way as to catch the most obvious topical relations: the window was fairly large (20-words wide), and while it took text boundaries into account, it ignored the order of the co-occurrences. Like (Church and Hanks, 1990), we used mutual information to measure the cohesion between two words. The finite size of the corpus allows us to normalize this measure in line with the maximal mutual information relative to the corpus. This network is used by TOPICOLL (Ferret, 2002), a topic analyzer, which performs simultaneously three tasks, relevant for this goal: • it segments texts into topically homogeneous segments; • it selects in each segment the most representative words of its topic; 7 Such a network is only another view of a set of cooccurrences: its nodes are the co-occurrent words and its edges are the co-occurrence relations. 284 • it proposes a restricted set of words from the co-occurrence network to expand the selected words of the segment. These three tasks rely on a common mechanism: a window is moved over the text to be analyzed in order to limit the focus space of the analysis. This latter contains a lemmatized version of the text’s plain words. For each position of this window, we select only words of the cooccurrence network that are linked to at least three other words of the window (see Figure 3). This leads to select both words that are in the window (first order co-occurrents) and words coming from the network (second order cooccurrents). The number of links between the selected words of the network, called expansion words, and those of the window is a good indicator of the topical coherence of the window’s content. Hence, when their number is small, a segment boundary can be assumed. This is the basic principle underlying our topic analyzer. 
0.14 0.21 0.10 0.18 0.13 0.17 w5 w4 w3 w2 w1 0.48 = pw3x0.18+pw4x0.13 +pw5x0.17 selected word from the co-occurrence network (with its weight) 1.0 word from text (with p its weight in the window, equal to 0.21 link in the co-occurrence network (with its cohesion value) 1.0 1.0 1.0 1.0 1.0 wi, n1 n2 1.0 for all words of the window in this example) 0.48 Figure 3: Selection and weighting of words from the co-occurrence network The words selected for each position of the window are summed, to keep only those occurring in 75% of the positions of the segment. This allows reducing the number of words selected from non-topical co-occurrences. Once a corpus has been processed by TOPICOLL, we obtain a set of segments and a set of expansion words for each one of them. The association of the selected words of a segment and its expansion words is called a Topical Unit. Since both sets of words are selected for reasons of topical homogeneity, their co-occurrence is more likely to be a topical relation than in our initial network. 4.3 Filtering of Topical Units Before recording the co-occurrences in the Topical Units built in this way, the units are filtered twice. The first filter aims at discarding heterogeneous Topical Units, which can arise as a side effect of a document whose topics are so intermingled that it is impossible to get a reliable linear segmentation of the text. We consider that this occurs when for a given text segment, no word can be selected as a representative of the topic of the segment. Moreover, we only keep the Topical Units that contain at least two words from their original segment. A topic is defined here as a configuration of words. Note that the identification of such a configuration cannot be based solely on a single word. Text words Expansion words surveillance (watch) police_judiciaire (judiciary police) téléphonique (telephone) écrouer (to imprison) juge (judge) garde_à_vue (police custody) policier (policeman) écoute_téléphonique (phone tapping) brigade (squad) juge_d’instruction (examining judge) enquête (investigation) contrôle_judiciaire (judicial review) placer (to put) Table 1: Content of a filtered Topical Unit The second filter is applied to the expansion words of each Topical Unit to increase their topical homogeneity. The principle of the filtering of these words is the same as the principle of their selection described in Section 4.2: an expansion word is kept if it is linked in the co-occurrence network to at least three text words of the Topical Unit. Moreover, a selective threshold is applied to the frequency and the cohesion of the cooccurrences supporting these links: only cooccurrences whose frequency and cohesion are respectively higher or equal to 15 and 0.15 are used. For instance in Table 1, which shows an example of a Topical Unit after its filtering, écrouer (to imprison) is selected, because it is linked in the co-occurrence network to the following words of the text: juge (judge): 52 (frequency) – 0.17 (cohesion) policier (policeman): 56 – 0.17 enquête (investigation): 42 – 0.16 285 word freq. word freq. word freq. word freq. 
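The second filter can be sketched as follows (hypothetical code; the thresholds are those given above, while the data structures and function names are assumptions rather than TOPICOLL's actual interfaces): an expansion word survives only if it is linked, with sufficient frequency and cohesion, to at least three text words of its Topical Unit.

```python
# Co-occurrence network assumed stored as {word: {co-occurring word: (frequency, cohesion)}}.
MIN_FREQ, MIN_COHESION, MIN_LINKS = 15, 0.15, 3

def filter_expansion_words(text_words, expansion_words, network):
    kept = []
    for exp in expansion_words:
        links = 0
        for tw in text_words:
            freq, coh = network.get(exp, {}).get(tw, (0, 0.0))
            if freq >= MIN_FREQ and coh >= MIN_COHESION:
                links += 1
        if links >= MIN_LINKS:
            kept.append(exp)
    return kept

# With the figures quoted above, 'écrouer' (to imprison) is kept: it is supported
# by 'juge' (52, 0.17), 'policier' (56, 0.17) and 'enquête' (42, 0.16).
```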
scène (stage) 884 théâtral (dramatic) 62 cynique (cynical) 26 scénique (theatrical) 14 théâtre (theater) 679 scénariste (scriptwriter) 51 miss (miss) 20 Chabol (Chabol) 13 réalisateur (director) 220 comique (comic) 51 parti_pris (bias) 16 Tchekov (Tchekov) 13 cinéaste (film-marker) 135 oscar (oscar) 40 monologue (monolog) 15 allocataire (beneficiary) 13 comédie (comedy) 104 film_américain (american film) 38 revisiter (to revisit) 14 satirique (satirical) 13 costumer (to dress up) 63 hollywoodien (Hollywood) 30 gros_plan (close-up) 14 Table 2: Co-occurrents of the word acteur (actor) with a cohesion of 0.16 (the co-occurrents removed by our filtering method are underlined) 4.4 From Topical Units to a network of topical relations After the filtering, a Topical Unit gathers a set of words supposed to be strongly coherent from the topical point of view. Next, we record the cooccurrences between these words for all the Topical Units remaining after filtering. Hence, we get a large set of topical co-occurrences, despite the fact that a significant number of nontopical co-occurrences remains, the filtering of Topical Units being an unsupervised process. The frequency of a co-occurrence in this case is given by the number of Topical Units containing both words simultaneously. No distinction concerning the origin of the words of the Topical Units is made. The network of topical co-occurrences built from Topical Units is a subset of the initial network. However, it also contains co-occurrences that are not part of it, i.e. co-occurrences that were not extracted from the corpus used for setting the initial network or co-occurrences whose frequency in this corpus was too low. Only some of these “new” co-occurrences are topical. Since it is difficult to estimate globally which ones are interesting, we have decided to focus our attention only on the co-occurrences of the topical network already present in the initial network. Thus, we only use the network of topical cooccurrences as a filter for the initial cooccurrence network. Before doing so, we filter the topical network in order to discard cooccurrences whose frequency is too low, that is, co-occurrences that are unstable and not representative. From the use of the final network by TOPICOLL (see Section 4.5), we set the threshold experimentally to 5. Finally, the initial network is filtered by keeping only co-occurrences present in the topical network. Their frequency and cohesion are taken from the initial network. While the frequencies given by the topical network are potentially interesting for their topical significance, we do not use them because the results of the filtering of Topical Units are too hard to evaluate. 4.5 Results and evaluation We applied the method described here to an initial co-occurrence network extracted from a corpus of 24 months of Le Monde, a major French newspaper. The size of the corpus was around 39 million words. The initial network contained 18,958 words and 341,549 relations. The first run produced 382,208 Topical Units. After filtering, we kept 59% of them. The network built from these Topical Units was made of 11,674 words and 2,864,473 co-occurrences. 70% of these cooccurrences were new with regard to the initial network and were discarded. Finally, we got a filtered network of 7,160 words and 183,074 relations, which represents a cut of 46% of the initial network. A qualitative study showed that most of the discarded relations are non-topical. 
This is illustrated by Table 2, which gives the cooccurrents of the word acteur (actor) that are filtered by our method among its co-occurrents with a high cohesion (equal to 0.16). For instance, the words cynique (cynical) or allocataire (beneficiary) are cohesive co-occurrents of the 286 word actor, even though they are not topically linked to it. These words are filtered out, while we keep words like gros_plan (close-up) or scénique (theatrical), which topically cohere with acteur (actor) despite their lower frequency than the discarded words. Recall8 Precision F1measure Error (Pk)9 initial (I) 0.85 0.79 0.82 0.20 topical filtering (T) 0.85 0.79 0.82 0.21 frequency filtering (F) 0.83 0.71 0.77 0.25 Table 3: TOPICOLL’s results with different networks In order to evaluate more objectively our work, we compared the quantitative results of TOPICOLL with the initial network and its filtered version. The evaluation showed that the performance of the segmenter remains stable, even if we use a topically filtered network (see Table 3). Moreover, it became obvious that a network filtered only by frequency and cohesion performs significantly less well, even with a comparable size. For testing the statistical significance of these results, we applied to the Pk values a one-side t-test with a null hypothesis of equal means. Levels lower or equal to 0.05 are considered as statistically significant: pval (I-T): 0.08 pval (I-F): 0.02 pval (T-F): 0.05 These values confirm that the difference between the initial network (I) and the topically filtered one (T) is actually not significant, whereas the filtering based on co-occurrence frequencies leads to significantly lower results, both compared to the initial network and the topically filtered one. Hence, one may conclude that our 8 Precision is given by Nt / Nb and recall by Nt / D, with D being the number of document breaks, Nb the number of boundaries found by TOPICOLL and Nt the number of boundaries that are document breaks (the boundary should not be farther than 9 plain words from the document break). 9 Pk (Beeferman et al., 1999) evaluates the probability that a randomly chosen pair of words, separated by k words, is wrongly classified, i.e. they are found in the same segment by TOPICOLL, while they are actually in different ones (miss of a document break), or they are found in different segments, while they are actually in the same one (false alarm). method is an effective way of selecting topical relations by preference. 5 Discussion and conclusion We have raised and partially answered the question of how a dictionary should be indexed in order to support word access, a question initially addressed in (Zock, 2002) and (Zock and Bilac, 2004). We were particularly concerned with the language producer, as his needs (and knowledge at the onset) are quite different from the ones of the language receiver (listener/reader). It seems that, in order to achieve our goal, we need to do two things: add to an existing electronic dictionary information that people tend to associate with a word, that is, build and enrich a semantic network, and provide a tool to navigate in it. To this end we have suggested to label the links, as this would reduce the graph complexity and allow for type-based navigation. Actually our basic proposal is to extend a resource like WordNet by adding certain links, in particular on the syntagmatic axis. 
These links are associations, and their role consists in helping the encoder to find ideas (concepts/words) related to a given stimulus (brainstorming), or to find the word he is thinking of (word access). One problem that we are confronted with is to identify possible associations. Ideally we would need a complete list, but unfortunately, this does not exist. Yet, there is a lot of highly relevant information out there. For example, Mel’cuk’s lexical functions (Mel’cuk, 1995), Fillmore’s FRAMENET10, work on ontologies (CYC), thesaurus (Roget), WordNets (the original version from Princeton, various Euro-WordNets, BalkaNet), HowNet11, the work done by MICRA, the FACTOTUM project 12 , or the Wordsmyth dictionary/thesaurus13. Since words are linked via associations, it is important to reveal these links. Once this is done, words can be accessed by following these links. We have presented here some preliminary work for extracting an important subset of such links from texts, topical associations, which are generally absent from dictionaries or resources like WordNet. An evaluation of the topic segmentation has shown that the relations extracted are sound from the topical point of view, and that they can be extracted automatically. However, 10 http://www.icsi.berkeley.edu/~framenet/ 11 http://www.keenage.com/html/e_index.html 12 http://humanities.uchicago.edu/homes/MICRA/ 13 http://www.wordsmyth.com/ 287 they still contain too much noise to be directly exploitable by an end user for accessing a word in a dictionary. One way of reducing the noise of the extracted relations would be to build from each text a representation of its topics and to record the co-occurrences in these representations rather than in the segments delimited by a topic segmenter. This is a hypothesis we are currently exploring. While we have focused here only on word access on the basis of (other) words, one should not forget that most of the time speakers or writers start from meanings. Hence, we shall consider this point more carefully in our future work, by taking a serious look at the proposals made by Bilac et al. (2004); Durgar and Oflazer (2004), or Dutoit and Nugues (2002). References Eneko Agirre, Olatz Ansa, David Martinez and Eduard Hovy. 2001. Enriching WordNet concepts with topic signatures. In NAACL’01 Workshop on WordNet and Other Lexical Resources: Applications, Extensions and Customizations. Henri Avancini, Alberto Lavelli, Bernardo Magnini, Fabrizio Sebastiani and Roberto Zanoli. 2003. Expanding Domain-Specific Lexicons by Term Categorization. In 18th ACM Symposium on Applied Computing (SAC-03). Doug Beeferman, Adam Berger and Lafferty. 1999. Statistical Models for Text Segmentation. Machine Learning, 34(1): 177-210. Slaven Bilac, Wataru Watanabe, Taiichi Hashimoto, Takenobu Tokunaga and Hozumi Tanaka. 2004. Dictionary search based on the target word description. In Tenth Annual Meeting of The Association for Natural Language Processing (NLP2004), pages 556-559. Roger Brown and David McNeill. 1996. The tip of the tongue phenomenon. Journal of Verbal Learning and Verbal Behaviour, 5: 325-337. Kenneth Church and Patrick Hanks. 1990. Word Association Norms, Mutual Information, And Lexicography. Computational Linguistics, 16(1): 177210. Ilknur Durgar El-Kahlout and Kemal Oflazer. 2004. Use of Wordnet for Retrieving Words from Their Meanings, In 2nd Global WordNet Conference, Brno Dominique Dutoit and Pierre Nugues. 2002. A lexical network and an algorithm to find words from definitions. 
In 15th European Conference on Artificial Intelligence (ECAI 2002), Lyon, pages 450-454, IOS Press. Christiane Fellbaum. 1998. WordNet - An Electronic Lexical Database, MIT Press. Olivier Ferret. 2006. Building a network of topical relations from a corpus. In LREC 2006. Olivier Ferret. 2002. Using collocations for topic segmentation and link detection. In COLING 2002, pages 260-266. Sanda M. Harabagiu, George A. Miller and Dan I. Moldovan. 1999. WordNet 2 - A Morphologically and Semantically Enhanced Resource. In ACLSIGLEX99: Standardizing Lexical Resources, pages 1-8. Sanda M. Harabagiu and Dan I. Moldovan. 1998. Knowledge Processing on an Extended WordNet. In WordNet - An Electronic Lexical Database, pages 379-405. Philip Humble. 2001. Dictionaries and Language Learners, Haag and Herchen. William Levelt, Ardi Roelofs and Antje Meyer. 1999. A theory of lexical access in speech production, Behavioral and Brain Sciences, 22: 1-75. Bernardo Magnini and Gabriela Cavagliá. 2000. Integrating Subject Field Codes into WordNet. In LREC 2000. Rila Mandala, Takenobu Tokunaga and Hozumi Tanaka. 1999. Complementing WordNet with Roget’s and Corpus-based Thesauri for Information Retrieval. In EACL 99. Igor Mel’čuk, Arno Clas and Alain Polguère. 1995. Introduction à la lexicologie explicative et combinatoire, Louvain, Duculot. George A. Miller. 1995. WordNet: A lexical Database, Communications of the ACM. 38(11): 39-41. Stephen D. Richardson, William B. Dolan and Lucy Vanderwende. 1998. MindNet: Acquiring and Structuring Semantic Information from Text. In ACL-COLING’98, pages 1098-1102. Piek Vossen. 1998. EuroWordNet: A Multilingual Database with Lexical Semantic Networks. Kluwer Academic Publisher. Gabriella Vigliocco, Antonini, T., and Merryl Garrett. 1997. Grammatical gender is on the tip of Italian tongues. Psychological Science, 8: 314-317. Michael Zock. 2002. Sorry, what was your name again, or how to overcome the tip-of-the tongue problem with the help of a computer? In SemaNet workshop, COLING 2002, Taipei. http://acl.ldc.upenn.edu /W/W02/W02-1118.pdf Michael Zock and Slaven Bilac. 2004. Word lookup on the basis of associations: from an idea to a roadmap. In COLING 2004 workshop: Enhancing and using dictionaries, Geneva. http://acl.ldc.upenn.edu/ coling2004/W10/pdf/5.pdf 288
2006
36
Proceedings of the 21st International Conference on Computational Linguistics and 44th Annual Meeting of the ACL, pages 289–296, Sydney, July 2006. c⃝2006 Association for Computational Linguistics Guiding a Constraint Dependency Parser with Supertags Kilian Foth, Tomas By, and Wolfgang Menzel Department f¨ur Informatik, Universit¨at Hamburg, Germany foth|by|[email protected] Abstract We investigate the utility of supertag information for guiding an existing dependency parser of German. Using weighted constraints to integrate the additionally available information, the decision process of the parser is influenced by changing its preferences, without excluding alternative structural interpretations from being considered. The paper reports on a series of experiments using varying models of supertags that significantly increase the parsing accuracy. In addition, an upper bound on the accuracy that can be achieved with perfect supertags is estimated. 1 Introduction Supertagging is based on the combination of two powerful and influential ideas of natural language processing: On the one hand, parsing is (at least partially) reduced to a decision on the optimal sequence of categories, a problem for which efficient and easily trainable procedures exist. On the other hand, supertagging exploits complex categories, i.e. tree fragments which much better reflect the mutual compatibility between neighbouring lexical items than say part-of-speech tags. Bangalore and Joshi (1999) derived the notion of supertag within the framework of Lexicalized Tree-Adjoining Grammars (LTAG) (Schabes and Joshi, 1991). They considered supertagging a process of almost parsing, since all that needs to be done after having a sufficiently reliable sequence of supertags available is to decide on their combination into a spanning tree for the complete sentence. Thus the approach lends itself easily to preprocessing sentences or filtering parsing results with the goal of guiding the parser or reducing its output ambiguity. Nasr and Rambow (2004) estimated that perfect supertag information already provides for a parsing accuracy of 98% if a correct supertag assignment were available. Unfortunately, perfectly reliable supertag information cannot be expected; usually this uncertainty is compensated by running the tagger in multi-tagging mode, expecting that the reliability can be increased by not forcing the tagger to take unreliable decisions but instead offering a set of alternatives from which a subsequent processing component can choose. A grammar formalism which seems particularly well suited to decompose structural descriptions into lexicalized tree fragments is dependency grammar. It allows us to define supertags on different levels of granularity (White, 2000; Wang and Harper, 2002), thus facilitating a fine grained analysis of how the different aspects of supertag information influence the parsing behaviour. In the following we will use this characteristic to study in more detail the utility of different kinds of supertag information for guiding the parsing process. Usually supertags are combined with a parser in a filtering mode, i.e. parsing hypotheses which are not compatible with the supertag predictions are simply discarded. 
Drawing on the ability of Weighted Constraint Dependency Grammar (WCDG) (Schr¨oder et al., 2000) to deal with defeasible constraints, here we try another option for making available supertag information: Using a score to estimate the general reliability of unique supertag decisions, the information can be combined with evidence derived from other constraints of the grammar in a soft manner. It makes possible to rank parsing hypotheses according to their plausibility and allows the parser to even override potentially wrong supertag decisions. Starting from a range of possible supertag models, Section 2 explores the reliability with which dependency-based supertags can be determined on 289 SUBJC PN ATTR DET PP OBJA ATTR DET SUBJ DET KONJ AUX S EXPL es mag sein , daß die Franzosen kein schlüssiges Konzept für eine echte Partnerschaft besitzen . Figure 1: Dependency tree for sentence 19601 of the NEGRA corpus. different levels of granularity. Then, Section 3 describes how supertags are integrated into the existing parser for German. The complex nature of supertags as we define them makes it possible to separate the different structural predictions made by a single supertag into components and study their contributions independently (c.f. Section 4). We can show that indeed the parser is robust enough to tolerate supertag errors and that even with a fairly low tagger performance it can profit from the additional, though unreliable information. 2 Supertagging German text In defining the nature of supertags for dependency parsing, a trade-off has to be made between expressiveness and accuracy. A simple definition with very small number of supertags will not be able to capture the full variety of syntactic contexts that actually occur, while an overly expressive definition may lead to a tag set that is so large that it cannot be accurately learnt from the training data. The local context of a word to be encoded in a supertag could include its edge label, the attachment direction, the occurrence of obligatory1 or of all dependents, whether each predicted dependent occurs to the right or to the left of the word, and the relative order among different dependents. The simplest useful task that could be asked of a supertagger would be to predict the dependency relation that each word enters. In terms of the WCDG formalism, this means associating each word at least with one of the syntactic labels that decorate dependency edges, such as SUBJ or DET; in other words, the supertag set would be identical to the label set. The example sentence 1The model of German used here considers the objects of verbs, prepositions and conjunctions to be obligatory and most other relations as optional. This corresponds closely to the set of needs roles of (Wang and Harper, 2002). “Es mag sein, daß die Franzosen kein schl¨ussiges Konzept f¨ur eine echte Partnerschaft besitzen.” (Perhaps the French do not have a viable concept for a true partnership.) if analyzed as in Figure 1, would then be described by a supertag sequence beginning with EXPL S AUX ... Following (Wang and Harper, 2002), we further classify dependencies into Left (L), Right (R), and No attachments (N), depending on whether a word is attached to its left or right, or not at all. We combine the label with the attachment direction to obtain composite supertags. The sequence of supertags describing the example sentence would then begin with EXPL/R S/N AUX/L ... 
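To make this concrete, the sketch below derives label-plus-direction supertags of the model-B kind from a dependency-annotated sentence. The (word, head index, label) triples and the helper names are illustrative assumptions, not part of the WCDG or TnT tooling.

```python
# Minimal sketch: derive label/direction supertags from a dependency-
# annotated sentence. The (word, head_index, label) triples are an assumed
# input format; a head index of 0 marks the unattached root.

def direction(position, head_index):
    """L = head to the left, R = head to the right, N = not attached."""
    if head_index == 0:
        return "N"
    return "L" if head_index < position else "R"

def label_direction_supertags(sentence):
    tags = []
    for position, (word, head_index, label) in enumerate(sentence, start=1):
        tags.append((word, f"{label}/{direction(position, head_index)}"))
    return tags

# Beginning of the example sentence of Figure 1:
sentence = [("es", 2, "EXPL"), ("mag", 0, "S"), ("sein", 2, "AUX")]
print(label_direction_supertags(sentence))
# [('es', 'EXPL/R'), ('mag', 'S/N'), ('sein', 'AUX/L')]
```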
Although this kind of supertag describes the role of each word in a sentence, it still does not specify the entire local context; for instance, it associates the information that a word functions as a subject only with the subject and not with the verb that takes the subject. In other words, it does not predict the relations under a given word. Greater expressivity is reached by also encoding the labels of these relations into the supertag. For instance, the word ‘mag’ in the example sentence is modified by an expletive (EXPL) on its left side and by an auxiliary (AUX) and a subject clause (SUBJC) dependency on its right side. To capture this extended local context, these labels must be encoded into the supertag. We add the local context of a word to the end of its supertag, separated with the delimiter +. This yields the expression S/N+AUX,EXPL,SUBJC. If we also want to express that the EXPL precedes the word but the AUX follows it, we can instead add two new fields to the left and to the right of the supertag, which leads to the new supertag EXPL+S/N+AUX,SUBJC. Table 1 shows the annotation of the example us290 Word Supertag model J es +EXPL/R+ mag EXPL+S/N+AUX,SUBJC sein +AUX/L+ , +/N+ daß +KONJ/R+ die +DET/R+ Franzosen DET+SUBJ/R+ kein +DET/R+ schl¨ussiges +ATTR/R+ Konzept ATTR,DET+OBJA/R+PP f¨ur +PP/L+PN eine +DET/R+ echte +ATTR/R+ Partnerschaft ATTR,DET+PN/L+ besitzen KONJ,OBJA,SUBJ+SUBJC/L+ . +/N+ Table 1: An annotation of the example sentence ST Prediction of #tags SuperCommo- label direc- depen- order tag ponent del tion dents accuracy accuracy A yes no none no 35 84.1% 84.1% B yes yes none no 73 78.9% 85.7% C yes no oblig. no 914 81.1% 88.5% D yes yes oblig. no 1336 76.9% 90.8% E yes no oblig. yes 1465 80.6% 91.8% F yes yes oblig. yes 2026 76.2% 90.9% G yes no all no 6858 71.8% 81.3% H yes yes all no 8684 67.9% 85.8% I yes no all yes 10762 71.6% 84.3% J yes yes all yes 12947 67.6% 84.5% Table 2: Definition of all supertag models used. ing the most sophisticated supertag model. Note that the notation +EXPL/R+ explicitly represents the fact that the word labelled EXPL has no dependents of its own, while the simpler EXPL/R made no assertion of this kind. The extended context specification with two + delimiters expresses the complete set of dependents of a word and whether they occur to its left or right. However, it does not distinguish the order of the left or right dependents among each other (we order the labels on either side alphabetically for consistency). Also, duplicate labels among the dependents on either side are not represented. For instance, a verb with two post-modifying prepositions would still list PP only once in its right context. This ensures that the set of possible supertags is finite. The full set of different supertag models we used is given in Table 2. Note that the more complicated models G, H, I and J predict all dependents of each word, while the others predict obligatory dependents only, which should be an easier task. To obtain and evaluate supertag predictions, we used the NEGRA and TIGER corpora (Brants et al., 1997; Brants et al., 2002), automatically transformed into dependency format with the freely available tool DepSy (Daum et al., 2004). As our test set we used sentences 18,602–19,601 of the NEGRA corpus, for comparability to earlier work. All other sentences (59,622 sentences with 1,032,091 words) were used as the training set. For each word in the training set, the local context was extracted and expressed in our supertag notation. 
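Before turning to tagger training, the following sketch shows how the richest context-encoding supertags of Table 1 could be assembled from the same assumed triple representation used above; it merely illustrates the notation and is not the DepSy-based extraction actually used.

```python
# Minimal sketch: build a 'left deps + label/direction + right deps'
# supertag for one word from (word, head_index, label) triples.
# Duplicate dependent labels collapse via the sets; labels on each side
# are sorted alphabetically, as described in the text.

def context_supertag(position, sentence):
    _, head, label = sentence[position - 1]
    dirn = "N" if head == 0 else ("L" if head < position else "R")
    left, right = set(), set()
    for dep_pos, (_, dep_head, dep_label) in enumerate(sentence, start=1):
        if dep_head == position:
            (left if dep_pos < position else right).add(dep_label)
    return f"{','.join(sorted(left))}+{label}/{dirn}+{','.join(sorted(right))}"

sentence = [("es", 2, "EXPL"), ("mag", 0, "S"), ("sein", 2, "AUX")]
print(context_supertag(2, sentence))  # 'EXPL+S/N+AUX'
# With the complete sentence of Figure 1, the right-hand side of 'mag'
# would also contain SUBJC, giving EXPL+S/N+AUX,SUBJC as in Table 1.
```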
The word/supertag pairs were then used to train the statistical part-of-speech tagger TnT (Brants, 2000), which performs trigram tagging efficiently and allows easy retraining on different data. However, a few of TnT’s limitations had to be worked around: since it cannot deal with words that have more than 510 different possible tags, we systematically replaced the rarest tags in the training set with a generic ‘OTHER’ tag until the limit was met. Also, in tagging mode it can fail to process sentences with many unknown words in close succession. In such cases, we simply ran it on shorter fragments of the sentence until no error occurred. Fewer than 0.5% of all sentences were affected by this problem even with the largest tag set. A more serious problem arises when using a stochastic process to assign tags that partially predict structure: the tags emitted by the model may contradict each other. Consider, for instance, the following supertagger output for the previous example sentence: es: +EXPL/R+ mag: +S/N+AUX,SUBJC sein: PRED+AUX/L+ ... The supertagger correctly predicts that the first three labels are EXPL, S, and AUX. It also predicts that the word ‘sein’ has a preceding PRED complement, but this is impossible if the two preceding words are labelled EXPL and S. Such contradictory information is not fatal in a robust system, but it is likely to cause unnecessary work for the parser when some rules demand the impossible. We therefore decided simply to ignore context predictions when they contradict the basic label predictions made for the same sentence; in other words, we pretend that the prediction for the third word was just +AUX/L+ rather than PRED+AUX/L+. Up to 13% of all predictions were simplified in this way for the most complex supertag model. The last columns of Table 2 give the number of different supertags in the training set and the performance of the retrained TnT on the test set in single-tagging mode. Although the number of oc291 curring tags rises and the prediction accuracy falls with the supertag complexity, the correlation is not absolute: It seems markedly easier to predict supertags with complements but no direction information (C) than supertags with direction information but no complements (B), although the tag set is larger by an order of magnitude. In fact, the prediction of attachment direction seems much more difficult than that of undirected supertags in every case, due to the semi-free word order of German. The greater tag set size when predicting complements of each words is at least partly offset by the contextual information available to the n-gram model, since it is much more likely that a word will have, e.g., a ‘SUBJ’ complement when an adjacent ‘SUBJ’ supertag is present. For the simplest model A, all 35 possible supertags actually occur, while in the most complicated model J, only 12,947 different supertags are observed in the training data (out of a theoretically possible 1024 for a set of 35 edge labels). Note that this is still considerably larger than most other reported supertag sets. The prediction quality falls to rather low values with the more complicated models; however, our goal in this paper is not to optimize the supertagger, but to estimate the effect that an imperfect one has on an existing parser. Altogether most results fall into a range of 70–80% of accuracy; as we will see later, this is in fact enough to provide a benefit to automatic parsing. 
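As an aside on the consistency filtering described above, the sketch below gives one plausible reading of how contradictory context predictions can be stripped back to plain label/direction predictions; the tuple layout is invented for illustration and is not the filter actually implemented.

```python
# Minimal sketch: drop the predicted left/right context of a word whenever
# one of its predicted dependent labels does not occur among the basic
# label predictions on that side (one plausible reading of the filter).

def strip_contradictions(predicted):
    """predicted: list of (left_deps, 'LABEL/DIR', right_deps) per word."""
    labels = [core.split("/")[0] for _, core, _ in predicted]
    cleaned = []
    for i, (left, core, right) in enumerate(predicted):
        left_ok = all(lab in labels[:i] for lab in left)
        right_ok = all(lab in labels[i + 1:] for lab in right)
        cleaned.append((left, core, right) if left_ok and right_ok
                       else ((), core, ()))
    return cleaned

# Only the first three words of the example sentence are shown.
tags = [((), "EXPL/R", ()),
        ((), "S/N", ("AUX", "SUBJC")),
        (("PRED",), "AUX/L", ())]
print(strip_contradictions(tags)[2])  # ((), 'AUX/L', ()) -- PRED dropped
```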
Although supertag accuracy is usually determined by simply counting matching and nonmatching predictions, a more accurate measure should take into account how many of the individual predictions that are combined into a supertag are correct or wrong. For instance, a word that is attached to its left as a subject, is preceded by a preposition and an attributive adjective, and followed by an apposition would bear the supertag PP,ATTR+SUBJ/L+APP. Since the prepositional attachment is notoriously difficult to predict, a supertagger might miss it and emit the slightly different tag ATTR+SUBJ/L+APP. Although this supertag is technically wrong, it is in fact much more right than wrong: of the four predictions of label, direction, preceding and following dependents, three are correct and only one is wrong. We therefore define the component accuracy for a given model as the ratio of correct predictions among the possible ones, which results in a value of 0.75 rather than 0 for the example prediction. The component accuracy of the supertag model J e. g. is in fact 84.5% rather than 67.6%. We would expect the component accuracy to match the effect on parsing more closely than the supertag accuracy. 3 Using supertag information in WCDG Weighted Constraint Dependency Grammar (WCDG) is a formalism in which declarative constraints can be formulated that describe well-formed dependency trees in a particular natural language. A grammar composed of such constraints can be used for parsing by feeding it to a constraint-solving component that searches for structures that satisfy the constraints. Each constraint carries a numeric score or penalty between 0 and 1 that indicates its importance. The penalties of all instances of constraint violations are multiplied to yield a score for an entire analysis; hence, an analysis that satisfies all rules of the WCDG bears the score 1, while lower values indicate small or large aberrations from the language norm. A constraint penalty of 0, then, corresponds to a hard constraint, since every analysis that violates such a constraint will always bear the worst possible score of 0. This means that of two constraints, the one with the lower penalty is more important to the grammar. Since constraints can be soft as well as hard, parsing in the WCDG formalism amounts to multidimensional optimization. Of two possible analyses of an utterance, the one that satisfies more (or more important) constraints is always preferred. All knowledge about grammatical rules is encoded in the constraints that (together with the lexicon) constitute the grammar. Adding a constraint which is sensitive to supertag predictions will therefore change the objective function of the optimization problem, hopefully leading to a higher share of correct attachments. Details about the WDCG parser can be found in (Foth and Menzel, 2006). A grammar of German is available (Foth et al., 2004) that achieves a good accuracy on written German input. Despite its good results, it seems probable that the information provided by a supertag prediction component could improve the accuracy further. First, because the optimization problem that WCDG defines is infeasible to solve exactly, the parser must usually use incomplete, 292 heuristic algorithms to try to compute the optimal analysis. This means that it sometimes fails to find the correct analysis even if the language model accurately defines it, because of search errors during heuristic optimization. 
A component that makes specific predictions about local structure could guide the process so that the correct alternative is tried first in more cases, and help prevent such search errors. Second, the existing grammar rules deal mainly with structural compatibility, while supertagging exploits patterns in the sequence of words in its input, i. e. both models contribute complementary information. Moreover, the parser can be expected to profit from supertags providing highly lexicalized pieces of information. Supertag Component Parsing accuracy Model accuracy accuracy unlabelled labelled baseline – – 89.6% 87.9% A 84.1% 84.1% 90.8% 89.4% B 78.9% 85.7% 90.6% 89.2% C 81.1% 88.5% 91.0% 89.6% D 76.9% 90.8% 91.1% 89.8% E 80.6% 91.8% 90.9% 89.6% F 76.2% 90.9% 91.4% 90.0% G 71.8% 81.3% 90.8% 89.4% H 67.9% 85.8% 90.8% 89.4% I 71.6% 84.3% 91.8% 90.4% J 67.6% 84.5% 91.8% 90.5% Table 3: Influence of supertag integration on parsing accuracy. Parsing accuracy Constraint penalty unlabelled labelled 0.0 3.7% 3.7% 0.05 85.2% 83.5% 0.1 87.6% 85.7% 0.2 88.9% 87.3% 0.5 91.2% 89.5% 0.7 91.5% 90.1% 0.9 91.8% 90.5% 0.95 91.1% 89.8% 1.0 89.6% 87.9% Table 4: Parsing accuracy depending on different strength of supertag integration. To make the information from the supertag sequence available to the parser, we treat the complex supertags as a set of predictions and write constraints to prefer those analyses that satisfy them. The predictions of label and direction made by models A and B are mapped onto two constraints which demand that each word in the analysis should exhibit the predicted label and direction. The more complicated supertag models constrain the local context of each word further. Effectively, they predict that the specified dependents of a word occur, and that no other dependents occur. The former prediction equates to an existence condition, so constraints are added which demand the presence of the predicted relation types under that word (one for left dependents and one for right dependents). The latter prediction disallows all other dependents; it is implemented by two constraints that test the edge label of each word-to-word attachment against the set of predicted dependents of the regent (again, separately for left and right dependents). Altogether six new constraints are added to the grammar which refer to the output of the supertagger on the current sentence. Note that in contrast to most other approaches we do not perform multi-supertagging; exactly one supertag is assumed for each word. Alternatives could be integrated by computing the logical disjunctions of the predictions made by each supertag, and then adapting the new constraints accordingly. 4 Experiments We tested the effect of supertag predictions on a full parser by adding the new constraints to the WCDG of German described in (Foth et al., 2004) and re-parsing the same 1,000 sentences from the NEGRA corpus. The quality of a dependency parser such as this can be measured as the ratio of correctly attached words to all words (structural accuracy) or the ratio of the correctly attached and correctly labelled words to all words (labelled accuracy). Note that because the parser always finds exactly one analysis with exactly one subordination per word, there is no distinction between recall and precision. The structural accuracy without any supertags is 89.6%. To determine the best trade-off between complexity and prediction quality, we tested all 10 supertag models against the baseline case of no supertags at all. 
The results are given in Table 3. Two observations can be made about the effect of the supertag model on parsing. Firstly, all types of supertag prediction, even the very basic model A which predicts only edge labels, improve the overall accuracy of parsing, although the baseline is already quite high. Second, the richer models of supertags appear to be more suitable for guiding the parser than the simpler ones, even though their own accuracy is markedly lower; almost one third of the supertag predictions according to the most compli293 cated definition J are wrong, but nevertheless their inclusion reduces the remaining error rate of the parser by over 20%. This result confirms the assumption that if supertags are integrated as individual constraints, their component accuracy is more important than the supertag accuracy. The decreasing accuracy of more complex supertags is more than counterbalanced by the additional information that they contribute to the analysis. Obviously, this trend cannot continue indefinitely; a supertag definition that predicted even larger parts of the dependency tree would certainly lead to much lower accuracy by even the most lenient measure, and a prediction that is mostly wrong must ultimately degrade parsing performance. Since the most complex model J shows no parsing improvement over its successor I, this point might already have been reached. The use of supertags in WCDG is comparable to previous work which integrated POS tagging and chunk parsing. (Foth and Hagenstr¨om, 2002; Daum et al., 2003) showed that the correct balance between the new knowledge and the existing grammar is crucial for successful integration. This is achieved by means of an additional parameter, modeling how trustworthy supertag predictions are considered. Its effect is shown in Table 4. As expected, making supertag constraints hard (with a value of 0.0) over-constrains most parsing problems, so that hardly any analyses can be computed. Other values near 0 avoid this problem but still lead to much worse overall performance, as wrong or even impossible predictions too often overrule the normal syntax constraints. The previously used value of 0.9 actually yields the best results with this particular grammar. The fact that a statistical model can improve parsing performance when superimposed on a sophisticated hand-written grammar is of particular interest because the statistical model we used is so simple, and in fact not particularly accurate; it certainly does not represent the state of the art in supertagging. This gives rise to the hope that as better supertaggers for German become available, parsing results will continue to see additional improvements, i.e., future supertagging research will directly benefit parsing. The obvious question is how great this benefit might conceivably become under optimal conditions. To obtain this upper limit of the utility of supertags we repeated Supertag Constraint penalty model 0.9 0.0 A 92.7% / 92.2% 94.0% / 94.0% B 94.3% / 93.7% 96.0% / 96.0% C 92.8% / 92.4% 94.1% / 94.1% D 94.3% / 93.8% 96.0% / 96.0% E 93.1% / 92.6% 94.3% / 94.3% F 94.6% / 94.1% 96.1% / 96.1% G 94.2% / 93.7% 95.8% / 95.8% H 95.2% / 94.7% 97.4% / 97.4% I 97.1% / 96.8% 99.5% / 99.5% J 97.1% / 96.8% 99.6% / 99.6% Table 5: Unlabelled and labelled parsing accuracy with a simulated perfect supertagger. the process of translating each supertag into additional WCDG constraints, but this time using the test set itself rather than TnT’s predictions. 
Table 5 again gives the unlabelled and labelled parsing accuracy for all 10 different supertag models with the integration strengths of 0 and 0.9. (Note that since all our models predict the edge label of each word, hard integration of perfect predictions eliminates the difference between labelled und unlabelled accuracy.) As expected, an improved accuracy of supertagging would lead to improved parsing accuracy in each case. In fact, knowing the correct supertag would solve the parsing problem almost completely with the more complex models. This confirms earlier findings for English (Nasr and Rambow, 2004). Since perfect supertaggers are not available, we have to make do with the imperfect ones that do exist. One method of avoiding some errors introduced by supertagging would be to reject supertag predictions that tend to be wrong. To this end, we ran the supertagger on its training set and determined the average component accuracy of each occurring supertag. The supertags whose average precision fell below a variable threshold were not considered during parsing as if the supertagger had not made a prediction. This means that a threshold of 100% corresponds to the baseline of not using supertags at all, while a threshold of 0% prunes nothing, so that these two cases duplicate the first and last line from Table 2. As Table 6 shows, pruning supertags that are wrong more often than they are right results in a further small improvement in parsing accuracy: unlabelled syntax accuracy rises up to 92.1% against the 91.8% if all supertags of model J are used. However, the effect is not very noticeable, so that it would be almost certainly more useful to 294 Parsing accuracy Threshold unlabelled labelled 0% 91.8% 90.5% 20% 91.8% 90.4% 40% 91.9% 90.5% 50% 92.0% 90.7% 60% 92.1% 91.0% 80% 91.4% 90.0% 100% 89.6% 87.9% Table 6: Parsing accuracy with empirically pruned supertag predictions. improve the supertagger itself rather than secondguess its output. 5 Related work Supertagging was originally suggested as a method to reduce lexical ambiguity, and thereby the amount of disambiguation work done by the parser. Sakar et al. (2000) report that this increases the speed of their LTAG parser by a factor of 26 (from 548k to 21k seconds) but at the price of only being able to parse 59% of the sentences in their test data (of 2250 sentences), because too often the correct supertag is missing from the output of the supertagger. Chen et al. (2002) investigate different supertagging methods as pre-processors to a Tree-Adjoining Grammar parser, and they claim a 1-best supertagging accuracy of 81.47%, and a 4best accuracy of 91.41%. With the latter they reach the highest parser coverage, about three quarters of the 1700 sentences in their test data. Clark and Curran (2004a; 2004b) describe a combination of supertagger and parser for parsing Combinatory Categorial Grammar, where the tagger is used to filter the parses produced by the grammar, before the computation of the model parameters. The parser uses an incremental method: the supertagger first assigns a small number of categories to each word, and the parser requests more alternatives only if the analysis fails. They report 91.4% precision and 91.0% recall of unlabelled dependencies and a speed of 1.6 minutes to parse 2401 sentences, and claim a parser speedup of a factor of 77 thanks to supertagging. 
The supertagging approach that is closest to ours in terms of linguistic representations is probably (Wang and Harper, 2002; Wang and Harper, 2004) whose ‘Super Abstract Role Values’ are very similar to our model F supertags (Table 2). It is interesting to note that they only report between 328 and 791 SuperARVs for different corpora, whereas we have 2026 category F supertags. Part of the difference is explained by our larger label set: 35, the same as the number of model A supertags in table 2 against their 24 (White, 2000, p. 50). Also, we are not using the same corpus. In addition to determining the optimal SuperARV sequence in isolation, Wang and Harper (2002) also combine the SuperARV n-gram probabilities with a dependency assignment probability into a dependency parser for English. A maximum tagging accuracy of 96.3% (for sentences up to 100 words) is achieved using a 4-gram n-best tagger producing the 100 best SuperARV sequences for a sentence. The tightly integrated model is able to determine 96.6% of SuperARVs correctly. The parser itself reaches a labelled precision of 92.6% and a labelled recall of 92.2% (Wang and Harper, 2004). In general, the effect of supertagging in the other systems mentioned here is to reduce the ambiguity in the input to the parser and thereby increase its speed, in some cases dramatically. For us, supertagging decreases the speed slightly, because additional constraints means more work for the parser, and because our supertagger-parser integration is not yet optimal. On the other hand it gives us better parsing accuracy. Using a constraint penalty of 0.0 for the supertagger integration (c.f. Table 5) does speed up our parser several times, but would only be practical with very high tagging accuracy. An important point is that for some other systems, like (Sarkar et al., 2000) and (Chen et al., 2002), parsing is not actually feasible without the supertagging speedup. 6 Conclusions and future work We have shown that a statistical supertagging component can significantly improve the parsing accuracy of a general-purpose dependency parser for German. The error rate among syntactic attachments can be reduced by 24% over an already competitive baseline. After all, the integration of the supertagging results helped to reach a quality level which compares favourably with the state-of-the-art in probabilistic dependency parsing for German as defined with 87.34%/90.38% labelled/unlabelled attachment accuracy on this years shared CoNLL task by (McDonald et al., 2005) (see (Foth and Menzel, 2006) for a more detailed comparison). Although the statistical model used in our system is rather simple-minded, it clearly captures at least some distributional char295 acteristics of German text that the hand-written rules do not. A crucial factor for success is the defeasible integration of the supertagging predictions via soft constraints. Rather than pursuing a strict filtering approach where supertagging errors are partially compensated by an n-best selection, we commit to only one supertag per word, but reduce its influence. Treating supertag predictions as weak preferences yields the best results. By measuring the accuracy of the different types of predictions made by complex supertags, different weights could also be assigned to the six new constraints. Of the investigated supertag models, the most complex ones guide the parser best, although their own accuracy is not the best one, even when measured by the more pertinent component accuracy. 
Since purely statistical parsing methods do not reach comparable parsing accuracy on the same data, we assume that this trend does not continue indefinitely, but would stop at some point, perhaps already reached. References S. Bangalore and A. K. Joshi. 1999. Supertagging: an approach to almost parsing. Computational Linguistics, 25(2):237–265. T. Brants, R. Hendriks, S. Kramp, B. Krenn, C. Preis, W. Skut, and H. Uszkoreit. 1997. Das NEGRAAnnotationsschema. Technical report, Universit¨at des Saarlandes, Computerlinguistik. S. Brants, St. Dipper, S. Hansen, W. Lezius, and G. Smith. 2002. The TIGER treebank. In Proc. Workshop on Treebanks and Linguistic Theories, Sozopol. T. Brants. 2000. TnT – A statistical part-of-speech tagger. In Proc. the 6th Conf. on Applied Natural Language Processing, ANLP-2000, pages 224–231, Seattle, WA. J. Chen, S. Bangalore, M. Collins, and O. Rambow. 2002. Reranking an N-gram supertagger. In Proc. 6th Int. Workshop on Tree Adjoining Grammar and Related Frameworks. S. Clark and J. R. Curran. 2004a. The importance of supertagging for wide-coverage CCG parsing. In Proc. 20th Int. Conf. on Computational Linguistics. S. Clark and J. R. Curran. 2004b. Parsing the WSJ using CCG and log-linear models. In Proc. 42nd Meeting of the ACL. M. Daum, K. Foth, and W. Menzel. 2003. Constraint based integration of deep and shallow parsing techniques. In Proc. 11th Conf. of the EACL, Budapest, Hungary. M. Daum, K. Foth, and W. Menzel. 2004. Automatic transformation of phrase treebanks to dependency trees. In Proc. 4th Int. Conf. on Language Resources and Evaluation, LREC-2004, pages 99–106, Lisbon, Portugal. K. Foth and J. Hagenstr¨om. 2002. Tagging for robust parsers. In 2nd Workshop on Robust Methods in Analysis of Natural Language Data, ROMAND-2002, pages 21 – 32, Frascati, Italy. K. Foth and W. Menzel. 2006. Hybrid parsing: Using probabilistic models as predictors for a symbolic parser. In Proc. 21st Int. Conf. on Computational Linguistics, Coling-ACL-2006, Sydney. K. Foth, M. Daum, and W. Menzel. 2004. A broadcoverage parser for german based on defeasible constraints. In 7. Konferenz zur Verarbeitung nat¨urlicher Sprache, KONVENS-2004, pages 45–52, Wien. R. McDonald, F. Pereira, K. Ribarov, and J. Hajic. 2005. Non-projective dependency parsing using spanning tree algorithms. In Proc. Human Language Technology Conference, HLT/EMNLP-2005, Vancouver, B.C. A. Nasr and O. Rambow. 2004. A simple stringrewriting formalism for dependency grammar. In Coling-Workshop Recent Advances in Dependency Grammar, pages 17–24, Geneva, Switzerland. A. Sarkar, F. Xia, and A. Joshi. 2000. Some experiments on indicators of parsing complexity for lexicalized grammars. In Proc. COLING Workshop on Efficiency in Large-Scale Parsing Systems. Y. Schabes and A. K. Joshi. 1991. Parsing with lexicalized tree adjoining grammar. In M. Tomita, editor, Current Issues in Parsing Technologies. Kluwer Academic Publishers. I. Schr¨oder, W. Menzel, K. Foth, and M. Schulz. 2000. Modeling dependency grammar with restricted constraints. Traitement Automatique des Langues (T.A.L.), 41(1):97–126. W. Wang and M. P. Harper. 2002. The SuperARV language model: Investigating the effectiveness of tightly integrating multiple knowledge sources. In Proc. Conf. on Empirical Methods in Natural Language Processing, EMNLP-2002, pages 238–247, Philadelphia, PA. W. Wang and M. P. Harper. 2004. A statistical constraint dependency grammar (CDG) parser. In Proc. 
ACL Workshop Incremental Parsing: Bringing Engineering and Cognition Together, pages 42–49, Barcelona, Spain. Ch. M. White. 2000. Rapid Grammar Development and Parsing: Constraint Dependency Grammar with Abstract Role Values. Ph.D. thesis, Purdue University, West Lafayette, IN.
2006
37
Proceedings of the 21st International Conference on Computational Linguistics and 44th Annual Meeting of the ACL, pages 297–304, Sydney, July 2006. c⃝2006 Association for Computational Linguistics Efficient Unsupervised Discovery of Word Categories Using Symmetric Patterns and High Frequency Words Dmitry Davidov ICNC The Hebrew University Jerusalem 91904, Israel [email protected] Ari Rappoport Institute of Computer Science The Hebrew University Jerusalem 91904, Israel www.cs.huji.ac.il/∼arir Abstract We present a novel approach for discovering word categories, sets of words sharing a significant aspect of their meaning. We utilize meta-patterns of highfrequency words and content words in order to discover pattern candidates. Symmetric patterns are then identified using graph-based measures, and word categories are created based on graph clique sets. Our method is the first pattern-based method that requires no corpus annotation or manually provided seed patterns or words. We evaluate our algorithm on very large corpora in two languages, using both human judgments and WordNetbased evaluation. Our fully unsupervised results are superior to previous work that used a POS tagged corpus, and computation time for huge corpora are orders of magnitude faster than previously reported. 1 Introduction Lexical resources are crucial in most NLP tasks and are extensively used by people. Manual compilation of lexical resources is labor intensive, error prone, and susceptible to arbitrary human decisions. Hence there is a need for automatic authoring that would be as unsupervised and languageindependent as possible. An important type of lexical resource is that given by grouping words into categories. In general, the notion of a category is a fundamental one in cognitive psychology (Matlin, 2005). A lexical category is a set of words that share a significant aspect of their meaning, e.g., sets of words denoting vehicles, types of food, tool names, etc. A word can obviously belong to more than a single category. We will use ‘category’ instead of ‘lexical category’ for brevity1. Grouping of words into categories is useful in itself (e.g., for the construction of thesauri), and can serve as the starting point in many applications, such as ontology construction and enhancement, discovery of verb subcategorization frames, etc. Our goal in this paper is a fully unsupervised discovery of categories from large unannotated text corpora. We aim for categories containing single words (multi-word lexical items will be dealt with in future papers.) Our approach is based on patterns, and utilizes the following stages: 1. Discovery of a set of pattern candidates that might be useful for induction of lexical relationships. We do this in a fully unsupervised manner, using meta-patterns comprised of high frequency words and content words. 2. Identification of pattern candidates that give rise to symmetric lexical relationships. This is done using simple measures in a word relationship graph. 3. Usage of a novel graph clique-set algorithm in order to generate categories from information on the co-occurrence of content words in the symmetric patterns. We performed a thorough evaluation on two English corpora (the BNC and a 68GB web corpus) and on a 33GB Russian corpus, and a sanity-check test on smaller Danish, Irish and Portuguese corpora. Evaluations were done using both human 1Some people use the term ‘concept’. 
We adhere to the cognitive psychology terminology, in which ‘concept’ refers to the mental representation of a category (Matlin, 2005). 297 judgments and WordNet in a setting quite similar to that done (for the BNC) in previous work. Our unsupervised results are superior to previous work that used a POS tagged corpus, are less language dependent, and are very efficient computationally2. Patterns are a common approach in lexical acquisition. Our approach is novel in several aspects: (1) we discover patterns in a fully unsupervised manner, as opposed to using a manually prepared pattern set, pattern seed or words seeds; (2) our pattern discovery requires no annotation of the input corpus, as opposed to requiring POS tagging or partial or full parsing; (3) we discover general symmetric patterns, as opposed to using a few hard-coded ones such as ‘x and y’; (4) the cliqueset graph algorithm in stage 3 is novel. In addition, we demonstrated the relatively language independent nature of our approach by evaluating on very large corpora in two languages3. Section 2 surveys previous work. Section 3 describes pattern discovery, and Section 4 describes the formation of categories. Evaluation is presented in Section 5, and a discussion in Section 6. 2 Previous Work Much work has been done on lexical acquisition of all sorts. The three main distinguishing axes are (1) the type of corpus annotation and other human input used; (2) the type of lexical relationship targeted; and (3) the basic algorithmic approach. The two main approaches are pattern-based discovery and clustering of context feature vectors. Many of the papers cited below aim at the construction of hyponym (is-a) hierarchies. Note that they can also be viewed as algorithms for category discovery, because a subtree in such a hierarchy defines a lexical category. A first major algorithmic approach is to represent word contexts as vectors in some space and use similarity measures and automatic clustering in that space (Curran and Moens, 2002). Pereira (1993) and Lin (1998) use syntactic features in the vector definition. (Pantel and Lin, 2002) improves on the latter by clustering by committee. Caraballo (1999) uses conjunction and appositive annotations in the vector representation. 2We did not compare against methods that use richer syntactic information, both because they are supervised and because they are much more computationally demanding. 3We are not aware of any multilingual evaluation previously reported on the task. The only previous works addressing our problem and not requiring any syntactic annotation are those that decompose a lexically-defined matrix (by SVD, PCA etc), e.g. (Sch¨utze, 1998; Deerwester et al, 1990). Such matrix decomposition is computationally heavy and has not been proven to scale well when the number of words assigned to categories grows. Agglomerative clustering (e.g., (Brown et al, 1992; Li, 1996)) can produce hierarchical word categories from an unannotated corpus. However, we are not aware of work in this direction that has been evaluated with good results on lexical category acquisition. The technique is also quite demanding computationally. The second main algorithmic approach is to use lexico-syntactic patterns. Patterns have been shown to produce more accurate results than feature vectors, at a lower computational cost on large corpora (Pantel et al, 2004). 
Hearst (1992) uses a manually prepared set of initial lexical patterns in order to discover hierarchical categories, and utilizes those categories in order to automatically discover additional patterns. (Berland and Charniak, 1999) use hand crafted patterns to discover part-of (meronymy) relationships, and (Chklovski and Pantel, 2004) discover various interesting relations between verbs. Both use information obtained by parsing. (Pantel et al, 2004) reduce the depth of the linguistic data used but still requires POS tagging. Many papers directly target specific applications, and build lexical resources as a side effect. Named Entity Recognition can be viewed as an instance of our problem where the desired categories contain words that are names of entities of a particular kind, as done in (Freitag, 2004) using coclustering. Many Information Extraction papers discover relationships between words using syntactic patterns (Riloff and Jones, 1999). (Widdows and Dorow, 2002; Dorow et al, 2005) discover categories using two hard-coded symmetric patterns, and are thus the closest to us. They also introduce an elegant graph representation that we adopted. They report good results. However, they require POS tagging of the corpus, use only two hard-coded patterns (‘x and y’, ‘x or y’), deal only with nouns, and require non-trivial computations on the graph. A third, less common, approach uses settheoretic inference, for example (Cimiano et al, 298 2005). Again, that paper uses syntactic information. In summary, no previous work has combined the accuracy, scalability and performance advantages of patterns with the fully unsupervised, unannotated nature possible with clustering approaches. This severely limits the applicability of previous work on the huge corpora available at present. 3 Discovery of Patterns Our first step is the discovery of patterns that are useful for lexical category acquisition. We use two main stages: discovery of pattern candidates, and identification of the symmetric patterns among the candidates. 3.1 Pattern Candidates An examination of the patterns found useful in previous work shows that they contain one or more very frequent word, such as ‘and’, ‘is’, etc. Our approach towards unsupervised pattern induction is to find such words and utilize them. We define a high frequency word (HFW) as a word appearing more than TH times per million words, and a content word (CW) as a word appearing less than TC times per a million words4. Now define a meta-pattern as any sequence of HFWs and CWs. In this paper we require that meta-patterns obey the following constraints: (1) at most 4 words; (2) exactly two content words; (3) no two consecutive CWs. The rationale is to see what can be achieved using relatively short patterns and where the discovered categories contain single words only. We will relax these constraints in future papers. Our meta-patterns here are thus of four types: CHC, CHCH, CHHC, and HCHC. In order to focus on patterns that are more likely to provide high quality categories, we removed patterns that appear in the corpus less than TP times per million words. Since we can ensure that the number of HFWs is bounded, the total number of pattern candidates is bounded as well. Hence, this stage can be computed in time linear in the size of the corpus (assuming the corpus has been already pre-processed to allow direct access to a word by its index.) 4Considerations for the selection of thresholds are discussed in Section 5. 
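A minimal sketch of this candidate-extraction stage is given below, assuming a tokenised corpus held in memory and the per-million thresholds quoted in Section 5; the function names and corpus interface are illustrative assumptions rather than the implementation used in the experiments.

```python
# Minimal sketch: collect CHC, CHCH, CHHC and HCHC meta-pattern candidates.
# TH, TC and TP are the per-million thresholds; HFWs and CWs are disjoint
# because TH > TC.
from collections import Counter

TH, TC, TP = 100, 50, 20

def classify(tokens):
    scale = 1_000_000 / len(tokens)
    freq = Counter(tokens)
    hfw = {w for w, c in freq.items() if c * scale > TH}   # high frequency
    cw = {w for w, c in freq.items() if c * scale < TC}    # content words
    return hfw, cw

def candidate_patterns(tokens):
    hfw, cw = classify(tokens)
    counts = Counter()
    for i in range(len(tokens)):
        for shape in ("CHC", "CHCH", "CHHC", "HCHC"):
            window = tokens[i:i + len(shape)]
            if len(window) < len(shape):
                continue
            if all((t in cw) if s == "C" else (t in hfw)
                   for s, t in zip(shape, window)):
                slots = iter(("x", "y"))
                pattern = tuple(next(slots) if s == "C" else t
                                for s, t in zip(shape, window))
                counts[pattern] += 1
    scale = 1_000_000 / len(tokens)
    return {p: c for p, c in counts.items() if c * scale >= TP}
```

On a real corpus a frequent function word such as 'and' falls into the HFW set and most nouns into the CW set, so an instance like 'book and newspaper' is counted under the candidate ('x', 'and', 'y'), i.e. the pattern 'x and y'.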
3.2 Symmetric Patterns Many of the pattern candidates discovered in the previous stage are not usable. In order to find a usable subset, we focus on the symmetric patterns. Our rationale is that two content-bearing words that appear in a symmetric pattern are likely to be semantically similar in some sense. This simple observation turns out to be very powerful, as shown by our results. We will eventually combine data from several patterns and from different corpus windows (Section 4.) For identifying symmetric patterns, we use a version of the graph representation of (Widdows and Dorow, 2002). We first define the singlepattern graph G(P) as follows. Nodes correspond to content words, and there is a directed arc A(x, y) from node x to node y iff (1) the words x and y both appear in an instance of the pattern P as its two CWs; and (2) x precedes y in P. Denote by Nodes(G), Arcs(G) the nodes and arcs in a graph G, respectively. We now compute three measures on G(P) and combine them for all pattern candidates to filter asymmetric ones. The first measure (M1) counts the proportion of words that can appear in both slots of the pattern, out of the total number of words. The reasoning here is that if a pattern allows a large percentage of words to participate in both slots, its chances of being a symmetric pattern are greater: M1 := |{x|∃yA(x, y) ∧∃zA(z, x)}| |Nodes(G(P))| M1 filters well patterns that connect words having different parts of speech. However, it may fail to filter patterns that contain multiple levels of asymmetric relationships. For example, in the pattern ‘x belongs to y’, we may find a word B on both sides (‘A belongs to B’, ‘B belongs to C’) while the pattern is still asymmetric. In order to detect symmetric relationships in a finer manner, for the second and third measures we define SymG(P), the symmetric subgraph of G(P), containing only the bidirectional arcs and nodes of G(P): SymG(P) = {{x}, {(x, y)}|A(x, y) ∧A(y, x)} The second and third measures count the proportion of the number of symmetric nodes and edges in G(P), respectively: M2 := |Nodes(SymG(P))| |Nodes(G(P))| 299 M3 := |Arcs(SymG(P))| |Arcs(G(P))| All three measures yield values in [0, 1], and in all three a higher value indicates more symmetry. M2 and M3 are obviously correlated, but they capture different aspects of a pattern’s nature: M3 is informative for highly interconnected but small word categories (e.g., month names), while M2 is useful for larger categories that are more loosely connected in the corpus. We use the three measures as follows. For each measure, we prepare a sorted list of all candidate patterns. We remove patterns that are not in the top ZT (we use 100, see Section 5) in any of the three lists, and patterns that are in the bottom ZB in at least one of the lists. The remaining patterns constitute our final list of symmetric patterns. We do not rank the final list, since the category discovery algorithm of the next section does not need such a ranking. Defining and utilizing such a ranking is a subject for future work. A sparse matrix representation of each graph can be computed in time linear in the size of the input corpus, since (1) the number of patterns |P| is bounded, (2) vocabulary size |V | (the total number of graph nodes) is much smaller than corpus size, and (3) the average node degree is much smaller than |V | (in practice, with the thresholds used, it is a small constant.) 4 Discovery of Categories After the end of the previous stage we have a set of symmetric patterns. 
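For reference, the three symmetry measures of Section 3.2 can be sketched compactly, assuming the instances of a candidate pattern have been collected into a set of directed (x, y) arcs; this illustrates the definitions only, not the filtering code itself.

```python
# Minimal sketch of the symmetry measures M1-M3 for one candidate pattern,
# given its instances as a set of directed (x, y) arcs between content words.

def symmetry_measures(arcs):
    nodes = {x for x, _ in arcs} | {y for _, y in arcs}
    heads = {x for x, _ in arcs}              # words seen in the first slot
    tails = {y for _, y in arcs}              # words seen in the second slot
    sym_arcs = {(x, y) for (x, y) in arcs if (y, x) in arcs}
    sym_nodes = {x for x, _ in sym_arcs} | {y for _, y in sym_arcs}
    m1 = len(heads & tails) / len(nodes)      # words usable in both slots
    m2 = len(sym_nodes) / len(nodes)          # proportion of symmetric nodes
    m3 = len(sym_arcs) / len(arcs)            # proportion of symmetric arcs
    return m1, m2, m3

# 'x and y' seen as: book/newspaper in both orders, book/note in one order
arcs = {("book", "newspaper"), ("newspaper", "book"), ("book", "note")}
print(symmetry_measures(arcs))  # approximately (0.67, 0.67, 0.67)
```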
We now use them in order to discover categories. In this section we describe the graph clique-set method for generating initial categories, and category pruning techniques for increased quality. 4.1 The Clique-Set Method Our approach to category discovery is based on connectivity structures in the all-pattern word relationship graph G, resulting from merging all of the single-pattern graphs into a single unified graph. The graph G can be built in time O(|V | × |P| × AverageDegree(G(P))) = O(|V |) (we use V rather than Nodes(G) for brevity.) When building G, no special treatment is done when one pattern is contained within another. For example, any pattern of the form CHC is contained in a pattern of the form HCHC (‘x and y’, ‘both x and y’.) The shared part yields exactly the same subgraph. This policy could be changed for a discovery of finer relationships. The main observation on G is that words that are highly interconnected are good candidates to form a category. This is the same general observation exploited by (Widdows and Dorow, 2002), who try to find graph regions that are more connected internally than externally. We use a different algorithm. We find all strong n-cliques (subgraphs containing n nodes that are all bidirectionally interconnected.) A clique Q defines a category that contains the nodes in Q plus all of the nodes that are (1) at least unidirectionally connected to all nodes in Q, and (2) bidirectionally connected to at least one node in Q. In practice we use 2-cliques. The strongly connected cliques are the bidirectional arcs in G and their nodes. For each such arc A, a category is generated that contains the nodes of all triangles that contain A and at least one additional bidirectional arc. For example, suppose the corpus contains the text fragments ‘book and newspaper’, ‘newspaper and book’, ‘book and note’, ‘note and book’ and ‘note and newspaper’. In this case the three words are assigned to a category. Note that a pair of nodes connected by a symmetric arc can appear in more than a single category. For example, suppose a graph G containing five nodes and seven arcs that define exactly three strongly connected triangles, ABC, ABD, ACE. The arc (A, B) yields a category {A, B, C, D}, and the arc (A, C) yields a category {A, C, B, E}. Nodes A and C appear in both categories. Category merging is described below. This stage requires an O(1) computation for each bidirectional arc of each node, so its complexity is O(|V | × AverageDegree(G)) = O(|V |). 4.2 Enhancing Category Quality: Category Merging and Corpus Windowing In order to cover as many words as possible, we use the smallest clique, a single symmetric arc. This creates redundant categories. We enhance the quality of the categories by merging them and by windowing on the corpus. We use two simple merge heuristics. First, if two categories are identical we treat them as one. Second, given two categories Q, R, we merge them iff there’s more than a 50% overlap between them: (|Q T R| > |Q|/2) ∧(|Q T R| > |R|/2). 300 This could be added to the clique-set stage, but the phrasing above is simpler to explain and implement. In order to increase category quality and remove categories that are too context-specific, we use a simple corpus windowing technique. Instead of running the algorithm of this section on the whole corpus, we divide the corpus into windows of equal size (see Section 5 for size determination) and perform the category discovery algorithm of this section on each window independently. 
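A simplified greedy sketch that combines the 2-clique category generation of Section 4.1 with the 50% overlap merge just defined is given below; the dictionary-based graph, the ordering of merges and the in-place unions are shortcuts taken for illustration, not the algorithm as implemented.

```python
# Minimal sketch: categories from symmetric arcs of the all-pattern graph G
# (given as word -> set of out-neighbours), followed by a greedy 50% merge.

def categories_from_graph(graph):
    def connected(u, v):          # at least unidirectionally connected
        return v in graph.get(u, set()) or u in graph.get(v, set())
    def bidirectional(u, v):
        return v in graph.get(u, set()) and u in graph.get(v, set())

    cats = []
    words = list(graph)
    for a in words:
        for b in graph[a]:
            if a < b and bidirectional(a, b):   # visit each symmetric arc once
                cat = {a, b}
                for c in words:
                    if c not in (a, b) and connected(c, a) and connected(c, b) \
                       and (bidirectional(c, a) or bidirectional(c, b)):
                        cat.add(c)
                cats.append(cat)
    return merge(cats)

def merge(cats):
    merged = []
    for cat in cats:
        for other in merged:
            if len(cat & other) > len(cat) / 2 and len(cat & other) > len(other) / 2:
                other |= cat
                break
        else:
            merged.append(set(cat))
    return merged

# The 'book'/'newspaper'/'note' example from the text:
g = {"book": {"newspaper", "note"}, "newspaper": {"book"}, "note": {"book", "newspaper"}}
print(categories_from_graph(g))  # [{'book', 'newspaper', 'note'}]
```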
Merging is also performed in each window separately. We now have a set of categories for each window. For the final set, we select only those categories that appear in at least two of the windows. This technique reduces noise at the potential cost of lowering coverage. However, the numbers of categories discovered and words they contain is still very large (see Section 5), so windowing achieves higher precision without hurting coverage in practice. The complexity of the merge stage is O(|V |) times the average number of categories per word times the average number of words per category. The latter two are small in practice, so complexity amounts to O(|V |). 5 Evaluation Lexical acquisition algorithms are notoriously hard to evaluate. We have attempted to be as thorough as possible, using several languages and both automatic and human evaluation. In the automatic part, we followed as closely as possible the methodology and data used in previous work, so that meaningful comparisons could be made. 5.1 Languages and Corpora We performed in-depth evaluation on two languages, English and Russian, using three corpora, two for English and one for Russian. The first English corpus is the BNC, containing about 100M words. The second English corpus, Dmoz (Gabrilovich and Markovitch, 2005), is a web corpus obtained by crawling and cleaning the URLs in the Open Directory Project (dmoz.org), resulting in 68GB containing about 8.2G words from 50M web pages. The Russian corpus was assembled from many web sites and carefully filtered for duplicates, to yield 33GB and 4G words. It is a varied corpus comprising literature, technical texts, news, newsgroups, etc. As a preliminary sanity-check test we also applied our method to smaller corpora in Danish, Irish and Portuguese, and noted some substantial similarities in the discovered patterns. For example, in all 5 languages the pattern corresponding to ‘x and y’ was among the 50 selected. 5.2 Thresholds, Statistics and Examples The thresholds TH, TC, TP , ZT , ZB, were determined by memory size considerations: we computed thresholds that would give us the maximal number of words, while enabling the pattern access table to reside in main memory. The resulting numbers are 100, 50, 20, 100, 100. Corpus window size was determined by starting from a very small window size, defining at random a single window of that size, running the algorithm, and iterating this process with increased window sizes until reaching a desired vocabulary category participation percentage (i.e., x% of the different words in the corpus assigned into categories. We used 5%.) This process has only a negligible effect on running times, because each iteration is run only on a single window, not on the whole corpus. The table below gives some statistics. V is the total number of different words in the corpus. W is the number of words belonging to at least one of our categories. C is the number of categories (after merging and windowing.) AS is the average category size. Running times are in minutes on a 2.53Ghz Pentium 4 XP machine with 1GB memory. Note how small they are, when compared to (Pantel et al, 2004), which took 4 days for a smaller corpus using the same CPU. V W C AS Time Dmoz 16M 330K 142K 12.8 93m BNC 337K 25K 9.6K 10.2 6.8m Russian 10M 235K 115K 11.6 60m Among the patterns discovered are the ubiquitous ‘x and y’, ‘x or y’ and many patterns containing them. 
Additional patterns include ‘from x to y’, ‘x and/or y’ (punctuation is treated here as white space), ‘x and a y’, and ‘neither x nor y’. We discover categories of different parts of speech. Among the noun ones, there are many whose precision is 100%: 37 countries, 18 languages, 51 chemical elements, 62 animals, 28 types of meat, 19 fruits, 32 university names, etc. A nice verb category example is {dive, snorkel, swim, float, surf, sail, canoe, kayak, paddle, tube, drift}. A nice adjective example is {amazing, 301 awesome, fascinating, inspiring, inspirational, exciting, fantastic, breathtaking, gorgeous.} 5.3 Human Judgment Evaluation The purpose of the human evaluation was dual: to assess the quality of the discovered categories in terms of precision, and to compare with those obtained by a baseline clustering algorithm. For the baseline, we implemented k-means as follows. We have removed stopwords from the corpus, and then used as features the words which appear before or after the target word. In the calculation of feature values and inter-vector distances, and in the removal of less informative features, we have strictly followed (Pantel and Lin, 2002). We ran the algorithm 10 times using k = 500 with randomly selected centroids, producing 5000 clusters. We then merged the resulting clusters using the same 50% overlap criterion as in our algorithm. The result included 3090, 2116, and 3206 clusters for Dmoz, BNC and Russian respectively. We used 8 subjects for evaluation of the English categories and 15 subjects for evaluation of the Russian ones. In order to assess the subjects’ reliability, we also included random categories (see below.) The experiment contained two parts. In Part I, subjects were given 40 triplets of words and were asked to rank them using the following scale: (1) the words definitely share a significant part of their meaning; (2) the words have a shared meaning but only in some context; (3) the words have a shared meaning only under a very unusual context/situation; (4) the words do not share any meaning; (5) I am not familiar enough with some/all of the words. The 40 triplets were obtained as follows. 20 of our categories were selected at random from the non-overlapping categories we have discovered, and three words were selected from each of these at random. 10 triplets were selected in the same manner from the categories produced by k-means, and 10 triplets were generated by random selection of content words from the same window in the corpus. In Part II, subjects were given the full categories of the triplets that were graded as 1 or 2 in Part I (that is, the full ‘good’ categories in terms of sharing of meaning.) They were asked to grade the categories from 1 (worst) to 10 (best) according to how much the full category had met the expectations they had when seeing only the triplet. Results are given in Table 1. The first line gives the average percentage of triplets that were given scores of 1 or 2 (that is, ‘significant shared meaning’.) The 2nd line gives the average score of a triplet (1 is best.) In these lines scores of 5 were not counted. The 3rd line gives the average score given to a full category (10 is best.) Interevaluator Kappa between scores 1,2 and 3,4 was 0.56, 0.67 and 0.72 for Dmoz, BNC and Russian respectively. Our algorithm clearly outperforms k-means, which outperforms random. We believe that the Russian results are better because the percentage of native speakers among our subjects for Russian was larger than that for English. 
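The reliability figures above can be reproduced with a small agreement routine. The text does not state which kappa variant was used; the sketch below assumes pairwise Cohen's kappa over judgments collapsed into "shared meaning" (scores 1-2) versus "no shared meaning" (scores 3-4), with score-5 items dropped.

```python
def cohen_kappa_binary(scores_a, scores_b):
    """Agreement on 'shared meaning' (1-2) vs. 'not shared' (3-4); 5s dropped."""
    pairs = [(a, b) for a, b in zip(scores_a, scores_b) if a != 5 and b != 5]
    n = len(pairs)
    pos_a = [a <= 2 for a, _ in pairs]
    pos_b = [b <= 2 for _, b in pairs]
    p_obs = sum(x == y for x, y in zip(pos_a, pos_b)) / n
    # Chance agreement from the two annotators' marginal rates.
    ra, rb = sum(pos_a) / n, sum(pos_b) / n
    p_exp = ra * rb + (1 - ra) * (1 - rb)
    return (p_obs - p_exp) / (1 - p_exp)

# Example: two annotators rating the same eight triplets.
print(round(cohen_kappa_binary([1, 2, 4, 1, 3, 2, 4, 1],
                               [2, 1, 3, 1, 2, 2, 4, 3]), 2))
```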
5.4 WordNet-Based Evaluation The major guideline in this part of the evaluation was to compare our results with previous work having a similar goal (Widdows and Dorow, 2002). We have followed their methodology as best as we could, using the same WordNet (WN) categories and the same corpus (BNC) in addition to the Dmoz and Russian corpora5. The evaluation method is as follows. We took the exact 10 WN subsets referred to as ‘subjects’ in (Widdows and Dorow, 2002), and removed all multi-word items. We now selected at random 10 pairs of words from each subject. For each pair, we found the largest of our discovered categories containing it (if there isn’t one, we pick another pair. This is valid because our Recall is obviously not even close to 100%, so if we did not pick another pair we would seriously harm the validity of the evaluation.) The various morphological forms of the same word were treated as one during the evaluation. The only difference from the (Widdows and Dorow, 2002) experiment is the usage of pairs rather than single words. We did this in order to disambiguate our categories. This was not needed in (Widdows and Dorow, 2002) because they had directly accessed the word graph, which may be an advantage in some applications. The Russian evaluation posed a bit of a problem because the Russian WordNet is not readily available and its coverage is rather small. Fortunately, the subject list is such that WordNet words 5(Widdows and Dorow, 2002) also reports results for an LSA-based clustering algorithm that are vastly inferior to the pattern-based ones. 302 Dmoz BNC Russian us k-means random us k-means random us k-means random avg ‘shared meaning’ (%) 80.53 18.25 1.43 86.87 8.52 0.00 95.00 18.96 7.33 avg triplet score (1-4) 1.74 3.34 3.88 1.56 3.61 3.94 1.34 3.32 3.76 avg category score (1-10) 9.27 4.00 1.8 9.31 4.50 0.00 8.50 4.66 3.32 Table 1: Results of evaluation by human judgment of three data sets (ours, that obtained by k-means, and random categories) on the three corpora. See text for detailed explanations. could be translated unambiguously to Russian and words in our discovered categories could be translated unambiguously into English. This was the methodology taken. For each found category C containing N words, we computed the following (see Table 2): (1) Precision: the number of words present in both C and WN divided by N; (2) Precision*: the number of correct words divided by N. Correct words are either words that appear in the WN subtree, or words whose entry in the American Heritage Dictionary or the Britannica directly defines them as belonging to the given class (e.g., ‘keyboard’ is defined as ‘a piano’; ‘mitt’ is defined by ‘a type of glove’.) This was done in order to overcome the relative poorness of WordNet; (3) Recall: the number of words present in both C and WN divided by the number of (single) words in WN; (4) The number of correctly discovered words (New) that are not in WN. The Table also shows the number of WN words (:WN), in order to get a feeling by how much WN could be improved here. For each subject, we show the average over the 10 randomly selected pairs. Table 2 also shows the average of each measure over the subjects, and the two precision measures when computed on the total set of WN words. The (uncorrected) precision is the only metric given in (Widdows and Dorow, 2002), who reported 82% (for the BNC.) Our method gives 90.47% for this metric on the same corpus. 
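A small helper makes the four WordNet-based measures explicit. Here `category` and `wn_subject` are plain sets of single-word strings; collapsing morphological variants and the dictionary check behind Precision* are assumed to happen upstream, and the example words are invented.

```python
def wn_scores(category, wn_subject, extra_correct=frozenset()):
    """Precision, Precision*, Recall and New for one discovered category."""
    in_wn = category & wn_subject
    correct = in_wn | (category & extra_correct)      # WN hits plus dictionary-verified words
    return {
        'precision': len(in_wn) / len(category),
        'precision_star': len(correct) / len(category),
        'recall': len(in_wn) / len(wn_subject),
        'new': len(correct - wn_subject),              # correct words missing from WN
    }

# Toy example: a 4-word category scored against a 10-word WN subject list.
cat = {'flute', 'oboe', 'violin', 'theremin'}
wn = {'flute', 'oboe', 'violin', 'piano', 'drum', 'cello',
      'harp', 'organ', 'tuba', 'viola'}
print(wn_scores(cat, wn, extra_correct={'theremin'}))
```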
5.5 Summary Our human-evaluated and WordNet-based results are better than the baseline and previous work respectively. Both are also of good standalone quality. Clearly, evaluation methodology for lexical acquisition tasks should be improved, which is an interesting research direction in itself. Examining our categories at random, we found a nice example that shows how difficult it is to evaluate the task and how useful automatic category discovery can be, as opposed to manual definition. Consider the following category, discovered in the Dmoz corpus: {nightcrawlers, chicken, shrimp, liver, leeches}. We did not know why these words were grouped together; if asked in an evaluation, we would give the category a very low score. However, after some web search, we found that this is a ‘fish bait’ category, especially suitable for catfish. 6 Discussion We have presented a novel method for patternbased discovery of lexical semantic categories. It is the first pattern-based lexical acquisition method that is fully unsupervised, requiring no corpus annotation or manually provided patterns or words. Pattern candidates are discovered using meta-patterns of high frequency and content words, and symmetric patterns are discovered using simple graph-theoretic measures. Categories are generated using a novel graph clique-set algorithm. The only other fully unsupervised lexical category acquisition approach is based on decomposition of a matrix defined by context feature vectors, and it has not been shown to scale well yet. Our algorithm was evaluated using both human judgment and automatic comparisons with WordNet, and results were superior to previous work (although it used a POS tagged corpus) and more efficient computationally. Our algorithm is also easy to implement. Computational efficiency and specifically lack of annotation are important criteria, because they allow usage of huge corpora, which are presently becoming available and growing in size. There are many directions to pursue in the future: (1) support multi-word lexical items; (2) increase category quality by improved merge algorithms; (3) discover various relationships (e.g., hyponymy) between the discovered categories; (4) discover finer inter-word relationships, such as verb selection preferences; (5) study various properties of discovered patterns in a detailed manner; and (6) adapt the algorithm to morphologically rich languages. 303 Subject Prec. Prec.* Rec. 
New:WN Dmoz instruments 79.25 89.34 34.54 7.2:163 vehicles 80.17 86.84 18.35 6.3:407 academic 78.78 89.32 30.83 15.5:396 body parts 73.85 79.29 5.95 9.1:1491 foodstuff 83.94 90.51 28.41 26.3:1209 clothes 83.41 89.43 10.65 4.5:539 tools 83.99 89.91 21.69 4.3:219 places 76.96 84.45 25.82 6.3:232 crimes 76.32 86.99 31.86 4.7:102 diseases 81.33 88.99 19.58 6.8:332 set avg 79.80 87.51 22.77 9.1:509 all words 79.32 86.94 BNC instruments 92.68 95.43 9.51 0.6:163 vehicles 94.16 95.23 3.81 0.2:407 academic 93.45 96.10 12.02 0.6:396 body parts 96.38 97.60 0.97 0.3:1491 foodstuff 93.76 94.36 3.60 0.6:1209 cloths 93.49 94.90 4.04 0.3:539 tools 96.84 97.24 6.67 0.1:219 places 87.88 97.25 6.42 1.5:232 crimes 83.79 91.99 19.61 2.6:102 diseases 95.16 97.14 5.54 0.5:332 set avg 92.76 95.72 7.22 0.73:509 all words 90.47 93.80 Russian instruments 82.46 89.09 25.28 3.4:163 vehicles 83.16 89.58 16.31 5.1:407 academic 87.27 92.92 15.71 4.9:396 body parts 81.42 89.68 3.94 8.3:1491 foodstuff 80.34 89.23 13.41 24.3:1209 clothes 82.47 87.75 15.94 5.1:539 tools 79.69 86.98 21.14 3.7:219 places 82.25 90.20 33.66 8.5:232 crimes 84.77 93.26 34.22 3.3:102 diseases 80.11 87.70 20.69 7.7:332 set avg 82.39 89.64 20.03 7.43:509 all words 80.67 89.17 Table 2: WordNet evaluation. Note the BNC ‘all words’ precision of 90.47%. This metric was reported to be 82% in (Widdows and Dorow, 2002). It should be noted that our algorithm can be viewed as one for automatic discovery of word senses, because it allows a word to participate in more than a single category. When merged properly, the different categories containing a word can be viewed as the set of its senses. We are planning an evaluation according to this measure after improving the merge stage. References Matthew Berland and Eugene Charniak, 1999. Finding parts in very large corpora. ACL ’99. Peter Brown, Vincent Della Pietra, Peter deSouza, Jenifer Lai, Robert Mercer, 1992. Class-based ngram models for natural language. Comp. Linguistics, 18(4):468–479. Sharon Caraballo, 1999. Automatic construction of a hypernym-labeled noun hierarchy from text. ACL ’99. Timothy Chklovski, Patrick Pantel, 2004. VerbOcean: mining the web for fine-grained semantic verb relations. EMNLP ’04. Philipp Cimiano, Andreas Hotho, Steffen Staab, 2005. Learning concept hierarchies from text corpora using formal concept analysis. J. of Artificial Intelligence Research, 24:305–339. James Curran, Marc Moens, 2002. Improvements in automatic thesaurus extraction. ACL Workshop on Unsupervised Lexical Acquisition, 2002. Scott Deerwester, Susan Dumais, George Furnas, Thomas Landauer, Richard Harshman, 1990. Indexing by latent semantic analysis. J. of the American Society for Info. Science, 41(6):391–407. Beate Dorow, Dominic Widdows, Katarina Ling, JeanPierre Eckmann, Danilo Sergi, Elisha Moses, 2005. Using curvature and Markov clustering in graphs for lexical acquisition and word sense discrimination. MEANING ’05. Dayne Freitag, 2004. Trained named entity recognition using distributional clusters. EMNLP ’04. Evgeniy Gabrilovich, Shaul Markovitch, 2005. Feature generation for text categorization using world knowledge. IJCAI ’05. Marti Hearst, 1992. Automatic acquisition of hyponyms from large text corpora. COLING ’92. Hang Li, Naoki Abe, 1996. Clustering words with the MDL principle. COLING ’96. Dekang Lin, 1998. Automatic retrieval and clustering of similar words. COLING ’98. Margaret Matlin, 2005. Cognition, 6th edition. John Wiley & Sons. Patrick Pantel, Dekang Lin, 2002. 
Discovering word senses from text. SIGKDD ’02.
Patrick Pantel, Deepak Ravichandran, Eduard Hovy, 2004. Towards terascale knowledge acquisition. COLING ’04.
Fernando Pereira, Naftali Tishby, Lillian Lee, 1993. Distributional clustering of English words. ACL ’93.
Ellen Riloff, Rosie Jones, 1999. Learning dictionaries for information extraction by multi-level bootstrapping. AAAI ’99.
Hinrich Schütze, 1998. Automatic word sense discrimination. Comp. Linguistics, 24(1):97–123.
Dominic Widdows, Beate Dorow, 2002. A graph model for unsupervised lexical acquisition. COLING ’02.
Proceedings of the 21st International Conference on Computational Linguistics and 44th Annual Meeting of the ACL, pages 305–312, Sydney, July 2006. c⃝2006 Association for Computational Linguistics Bayesian Query-Focused Summarization Hal Daum´e III and Daniel Marcu Information Sciences Institute 4676 Admiralty Way, Suite 1001 Marina del Rey, CA 90292 [email protected],[email protected] Abstract We present BAYESUM (for “Bayesian summarization”), a model for sentence extraction in query-focused summarization. BAYESUM leverages the common case in which multiple documents are relevant to a single query. Using these documents as reinforcement for query terms, BAYESUM is not afflicted by the paucity of information in short queries. We show that approximate inference in BAYESUM is possible on large data sets and results in a stateof-the-art summarization system. Furthermore, we show how BAYESUM can be understood as a justified query expansion technique in the language modeling for IR framework. 1 Introduction We describe BAYESUM, an algorithm for performing query-focused summarization in the common case that there are many relevant documents for a given query. Given a query and a collection of relevant documents, our algorithm functions by asking itself the following question: what is it about these relevant documents that differentiates them from the non-relevant documents? BAYESUM can be seen as providing a statistical formulation of this exact question. The key requirement of BAYESUM is that multiple relevant documents are known for the query in question. This is not a severe limitation. In two well-studied problems, it is the de-facto standard. In standard multidocument summarization (with or without a query), we have access to known relevant documents for some user need. Similarly, in the case of a web-search application, an underlying IR engine will retrieve multiple (presumably) relevant documents for a given query. For both of these tasks, BAYESUM performs well, even when the underlying retrieval model is noisy. The idea of leveraging known relevant documents is known as query expansion in the information retrieval community, where it has been shown to be successful in ad hoc retrieval tasks. Viewed from the perspective of IR, our work can be interpreted in two ways. First, it can be seen as an application of query expansion to the summarization task (or, in IR terminology, passage retrieval); see (Liu and Croft, 2002; Murdock and Croft, 2005). Second, and more importantly, it can be seen as a method for query expansion in a non-ad-hoc manner. That is, BAYESUM is a statistically justified query expansion method in the language modeling for IR framework (Ponte and Croft, 1998). 2 Bayesian Query-Focused Summarization In this section, we describe our Bayesian queryfocused summarization model (BAYESUM). This task is very similar to the standard ad-hoc IR task, with the important distinction that we are comparing query models against sentence models, rather than against document models. The shortness of sentences means that one must do a good job of creating the query models. To maintain generality, so that our model is applicable to any problem for which multiple relevant documents are known for a query, we formulate our model in terms of relevance judgments. For a collection of D documents and Q queries, we assume we have a D × Q binary matrix r, where rdq = 1 if an only if document d is relevant to query q. 
In multidocument summarization, rdq will be 1 exactly when d is in the document set corresponding to query q; in search-engine sum305 marization, it will be 1 exactly when d is returned by the search engine for query q. 2.1 Language Modeling for IR BAYESUM is built on the concept of language models for information retrieval. The idea behind the language modeling techniques used in IR is to represent either queries or documents (or both) as probability distributions, and then use standard probabilistic techniques for comparing them. These probability distributions are almost always “bag of words” distributions that assign a probability to words from a fixed vocabulary V. One approach is to build a probability distribution for a given document, pd(·), and to look at the probability of a query under that distribution: pd(q). Documents are ranked according to how likely they make the query (Ponte and Croft, 1998). Other researchers have built probability distributions over queries pq(·) and ranked documents according to how likely they look under the query model: pq(d) (Lafferty and Zhai, 2001). A third approach builds a probability distribution pq(·) for the query, a probability distribution pd(·) for the document and then measures the similarity between these two distributions using KL divergence (Lavrenko et al., 2002): KL (pq || pd) = X w∈V pq(w) log pq(w) pd(w) (1) The KL divergence between two probability distributions is zero when they are identical and otherwise strictly positive. It implicitly assumes that both distributions pq and pd have the same support: they assign non-zero probability to exactly the same subset of V; in order to account for this, the distributions pq and pd are smoothed against a background general English model. This final mode—the KL model—is the one on which BAYESUM is based. 2.2 Bayesian Statistical Model In the language of information retrieval, the queryfocused sentence extraction task boils down to estimating a good query model, pq(·). Once we have such a model, we could estimate sentence models for each sentence in a relevant document, and rank the sentences according to Eq (1). The BAYESUM system is based on the following model: we hypothesize that a sentence appears in a document because it is relevant to some query, because it provides background information about the document (but is not relevant to a known query) or simply because it contains useless, general English filler. Similarly, we model each word as appearing for one of those purposes. More specifically, our model assumes that each word can be assigned a discrete, exact source, such as “this word is relevant to query q1” or “this word is general English.” At the sentence level, however, sentences are assigned degrees: “this sentence is 60% about query q1, 30% background document information, and 10% general English.” To model this, we define a general English language model, pG(·) to capture the English filler. Furthermore, for each document dk, we define a background document language model, pdk(·); similarly, for each query qj, we define a query-specific language model pqj(·). Every word in a document dk is modeled as being generated from a mixture of pG, pdk and {pqj : query qj is relevant to document dk}. Supposing there are J total queries and K total documents, we say that the nth word from the sth sentence in document d, wdsn, has a corresponding hidden variable, zdsn that specifies exactly which of these distributions is used to generate that one word. 
In particular, zdsn is a vector of length 1 + J + K, where exactly one element is 1 and the rest are 0. At the sentence level, we introduce a second layer of hidden variables. For the sth sentence in document d, we let πds be a vector also of length 1 + J + K that represents our degree of belief that this sentence came from any of the models. The πdss lie in the J + K-dimensional simplex ∆J+K = {θ = ⟨θ1, . . . , θJ+K+1⟩: (∀i) θi ≥ 0, P i θi = 1}. The interpretation of the π variables is that if the “general English” component of π is 0.9, then 90% of the words in this sentence will be general English. The π and z variables are constrained so that a sentence cannot be generated by a document language model other than its own document and cannot be generated by a query language model for a query to which it is not relevant. Since the πs are unknown, and it is unlikely that there is a “true” correct value, we place a corpuslevel prior on them. Since π is a multinomial distribution over its corresponding zs, it is natural to use a Dirichlet distribution as a prior over π. A Dirichlet distribution is parameterized by a vector α of equal length to the corresponding multinomial parameter, again with the positivity restric306 tion, but no longer required to sum to one. It has continuous density over a variable θ1, . . . , θI given by: Dir(θ | α) = Γ( P i αi) Q i Γ(αi) Q i θαi−1 i . The first term is a normalization term that ensures that R ∆I dθ Dir(θ | α) = 1. 2.3 Generative Story The generative story for our model defines a distribution over a corpus of queries, {qj}1:J, and documents, {dk}1:K, as follows: 1. For each query j = 1 . . . J: Generate each word qjn in qj by pqj(qjn) 2. For each document k = 1 . . . K and each sentence s in document k: (a) Select the current sentence degree πks by Dir(πks | α)rk(πks) (b) For each word wksn in sentence s: • Select the word source zksn according to Mult(z | πks) • Generate the word wksn by    pG(wksn) if zksn = 0 pdk(wksn) if zksn = k + 1 pqj(wksn) if zksn = j + K + 1 We used r to denote relevance judgments: rk(π) = 0 if any document component of π except the one corresponding to k is non-zero, or if any query component of π except those queries to which document k is deemed relevant is non-zero (this prevents a document using the “wrong” document or query components). We have further assumed that the z vector is laid out so that z0 corresponds to general English, zk+1 corresponds to document dk for 0 ≤j < J and that zj+K+1 corresponds to query qj for 0 ≤k < K. 2.4 Graphical Model The graphical model corresponding to this generative story is in Figure 1. This model depicts the four known parameters in square boxes (α, pQ, pD and pG) with the three observed random variables in shaded circles (the queries q, the relevance judgments r and the words w) and two unobserved random variables in empty circles (the word-level indicator variables z and the sentence level degrees π). The rounded plates denote replication: there are J queries and K documents, containing S sentences in a given document and N words in a given sentence. The joint probability over the observed random variables is given in Eq (2): w z r q pQ pG pD K J N π α S Figure 1: Graphical model for the Bayesian Query-Focused Summarization Model. p (q1:J, r, d1:K) = " Y j Y n pqj (qjn) # × (2) " Y k Y s Z ∆ dπks p (πks | α, r) Y n X zksn p (zksn | πks) p (wksn | zksn) # This expression computes the probability of the data by integrating out the unknown variables. 
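As a sanity check on the generative story, the following toy forward sampler draws a single sentence under the constraint r. The vocabulary, the value of α, and the component language models are invented for illustration and are not part of the model's actual parameterization; component indices follow the layout above (0 for general English, 1..K for documents, K+1..K+J for queries).

```python
import numpy as np

rng = np.random.default_rng(0)
vocab = ['the', 'force', 'mass', 'query', 'filler']
J, K = 2, 1                                    # number of queries, documents

def random_lm():
    p = rng.random(len(vocab))
    return p / p.sum()

p_G = random_lm()                              # general English model
p_D = [random_lm() for _ in range(K)]          # per-document models
p_Q = [random_lm() for _ in range(J)]          # per-query models

def sample_sentence(doc, relevant_queries, alpha=0.5, length=8):
    # The relevance constraint r zeroes out disallowed components before pi is drawn.
    allowed = [0, 1 + doc] + [1 + K + q for q in relevant_queries]
    conc = np.zeros(1 + K + J)
    conc[allowed] = alpha
    pi = np.zeros_like(conc)
    pi[allowed] = rng.dirichlet(conc[allowed])   # sentence-level degrees
    words, sources = [], []
    for _ in range(length):
        z = rng.choice(len(pi), p=pi)            # word-level source indicator
        if z == 0:
            dist = p_G
        elif z <= K:
            dist = p_D[z - 1]
        else:
            dist = p_Q[z - K - 1]
        words.append(rng.choice(vocab, p=dist))
        sources.append(int(z))
    return words, sources

print(sample_sentence(doc=0, relevant_queries=[1]))
```

Exact computation of Eq (2) itself, of course, requires integrating or summing out these hidden variables rather than sampling them.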
In the case of the π variables, this is accomplished by integrating over ∆, the multinomial simplex, according to the prior distribution given by α. In the case of the z variables, this is accomplished by summing over all possible (discrete) values. The final word probability is conditioned on the z value by selecting the appropriate distribution from pG, pD and pQ. Computing this expression and finding optimal model parameters is intractable due to the coupling of the variables under the integral. 3 Statistical Inference in BAYESUM Bayesian inference problems often give rise to intractable integrals, and a large variety of techniques have been proposed to deal with this. The most popular are Markov Chain Monte Carlo (MCMC), the Laplace (or saddle-point) approximation and the variational approximation. A third, less common, but very effective technique, especially for dealing with mixture models, is expectation propagation (Minka, 2001). In this paper, we will focus on expectation propagation; experiments not reported here have shown variational 307 EM to perform comparably but take roughly 50% longer to converge. Expectation propagation (EP) is an inference technique introduced by Minka (2001) as a generalization of both belief propagation and assumed density filtering. In his thesis, Minka showed that EP is very effective in mixture modeling problems, and later demonstrated its superiority to variational techniques in the Generative Aspect Model (Minka and Lafferty, 2003). The key idea is to compute an integral of a product of terms by iteratively applying a sequence of “deletion/inclusion” steps. Given an integral of the form: R ∆dπ p(π) Q n tn(π), EP approximates each term tn by a simpler term ˜tn, giving Eq (3). Z ∆ dπ q(π) q(π) = p(π) Y n ˜tn(π) (3) In each deletion/inclusion step, one of the approximate terms is deleted from q(·), leaving q−n(·) = q(·)/˜tn(·). A new approximation for tn(·) is computed so that tn(·)q−n(·) has the same integral, mean and variance as ˜tn(·)q−n(·). This new approximation, ˜tn(·) is then included back into the full expression for q(·) and the process repeats. This algorithm always has a fixed point and there are methods for ensuring that the approximation remains in a location where the integral is well-defined. Unlike variational EM, the approximation given by EP is global, and often leads to much more reliable estimates of the true integral. In the case of our model, we follow Minka and Lafferty (2003), who adapts latent Dirichlet allocation of Blei et al. (2003) to EP. Due to space constraints, we omit the inference algorithms and instead direct the interested reader to the description given by Minka and Lafferty (2003). 4 Search-Engine Experiments The first experiments we run are for query-focused single document summarization, where relevant documents are returned from a search engine, and a short summary is desired of each document. 4.1 Data The data we use to train and test BAYESUM is drawn from the Text REtrieval Conference (TREC) competitions. This data set consists of queries, documents and relevance judgments, exactly as required by our model. The queries are typically broken down into four fields of increasing length: the title (3-4 words), the summary (1 sentence), the narrative (2-4 sentences) and the concepts (a list of keywords). Obviously, one would expect that the longer the query, the better a model would be able to do, and this is borne out experimentally (Section 4.5). 
Of the TREC data, we have trained our model on 350 queries (queries numbered 51-350 and 401-450) and all corresponding relevant documents. This amounts to roughly 43k documents, 2.1m sentences and 65.8m words. The mean number of relevant documents per query is 137 and the median is 81 (the most prolific query has 968 relevant documents). On the other hand, each document is relevant to, on average, 1.11 queries (the median is 5.5 and the most generally relevant document is relevant to 20 different queries). In all cases, we apply stemming using the Porter stemmer; for all other models, we remove stop words. In order to evaluate our model, we had seven human judges manually perform the queryfocused sentence extraction task. The judges were supplied with the full TREC query and a single document relevant to that query, and were asked to select up to four sentences from the document that best met the needs given by the query. Each judge annotated 25 queries with some overlap to allow for an evaluation of inter-annotator agreement, yielding annotations for a total of 166 unique query/document pairs. On the doubly annotated data, we computed the inter-annotator agreement using the kappa measure. The kappa value found was 0.58, which is low, but not abysmal (also, keep in mind that this is computed over only 25 of the 166 examples). 4.2 Evaluation Criteria Since there are differing numbers of sentences selected per document by the human judges, one cannot compute precision and recall; instead, we opt for other standard IR performance measures. We consider three related criteria: mean average precision (MAP), mean reciprocal rank (MRR) and precision at 2 (P@2). MAP is computed by calculating precision at every sentence as ordered by the system up until all relevant sentences are selected and averaged. MRR is the reciprocal of the rank of the first relevant sentence. P@2 is the precision computed at the first point that two relevant sentences have been selected (in the rare case that 308 humans selected only one sentence, we use P@1). 4.3 Baseline Models As baselines, we consider four strawman models and two state-of-the-art information retrieval models. The first strawman, RANDOM ranks sentences randomly. The second strawman, POSITION, ranks sentences according to their absolute position (in the context of non-query-focused summarization, this is an incredibly powerful baseline). The third and fourth models are based on the vector space interpretation of IR. The third model, JACCARD, uses standard Jaccard distance score (intersection over union) between each sentence and the query to rank sentences. The fourth, COSINE, uses TF-IDF weighted cosine similarity. The two state-of-the-art IR models used as comparative systems are based on the language modeling framework described in Section 2.1. These systems compute a language model for each query and for each sentence in a document. Sentences are then ranked according to the KL divergence between the query model and the sentence model, smoothed against a general model estimated from the entire collection, as described in the case of document retrieval by Lavrenko et al. (2002). This is the first system we compare against, called KL. The second true system, KL+REL is based on augmenting the KL system with blind relevance feedback (query expansion). Specifically, we first run each query against the document set returned by the relevance judgments and retrieve the top n sentences. 
We then expand the query by interpolating the original query model with a query model estimated on these sentences. This serves as a method of query expansion. We ran experiments ranging n in {5, 10, 25, 50, 100} and the interpolation parameter λ in {0.2, 0.4, 0.6, 0.8} and used oracle selection (on MRR) to choose the values that performed best (the results are thus overly optimistic). These values were n = 25 and λ = 0.4. Of all the systems compared, only BAYESUM and the KL+REL model use the relevance judgments; however, they both have access to exactly the same information. The other models only run on the subset of the data used for evaluation (the corpus language model for the KL system and the IDF values for the COSINE model are computed on the full data set). EP ran for 2.5 hours. MAP MRR P@2 RANDOM 19.9 37.3 16.6 POSITION 24.8 41.6 19.9 JACCARD 17.9 29.3 16.7 COSINE 29.6 50.3 23.7 KL 36.6 64.1 27.6 KL+REL 36.3 62.9 29.2 BAYESUM 44.1 70.8 33.6 Table 1: Empirical results for the baseline models as well as BAYESUM, when all query fields are used. 4.4 Performance on all Query Fields Our first evaluation compares results when all query fields are used (title, summary, description and concepts1). These results are shown in Table 1. As we can see from these results, the JACCARD system alone is not sufficient to beat the position-based baseline. The COSINE does beat the position baseline by a bit of a margin (5 points better in MAP, 9 points in MRR and 4 points in P@2), and is in turn beaten by the KL system (which is 7 points, 14 points and 4 points better in MAP, MRR and P@2, respectively). Blind relevance feedback (parameters of which were chosen by an oracle to maximize the P@2 metric) actually hurts MAP and MRR performance by 0.3 and 1.2, respectively, and increases P@2 by 1.5. Over the best performing baseline system (either KL or KL+REL), BAYESUM wins by a margin of 7.5 points in MAP, 6.7 for MRR and 4.4 for P@2. 4.5 Varying Query Fields Our next experimental comparison has to do with reducing the amount of information given in the query. In Table 2, we show the performance of the KL, KL-REL and BAYESUM systems, as we use different query fields. There are several things to notice in these results. First, the standard KL model without blind relevance feedback performs worse than the position-based model when only the 3-4 word title is available. Second, BAYESUM using only the title outperform the KL model with relevance feedback using all fields. In fact, one can apply BAYESUM without using any of the query fields; in this case, only the relevance judgments are available to make sense 1A reviewer pointed out that concepts were later removed from TREC because they were “too good.” Section 4.5 considers the case without the concepts field. 309 MAP MRR P@2 POSITION 24.8 41.6 19.9 Title KL 19.9 32.6 17.8 KL-Rel 31.9 53.8 26.1 BAYESUM 41.1 65.7 31.6 +Description KL 31.5 58.3 24.1 KL-Rel 32.6 55.0 26.2 BAYESUM 40.9 66.9 31.0 +Summary KL 31.6 56.9 23.8 KL-Rel 34.2 48.5 27.0 BAYESUM 42.0 67.8 31.8 +Concepts KL 36.7 64.2 27.6 KL-Rel 36.3 62.9 29.2 BAYESUM 44.1 70.8 33.6 No Query BAYESUM 39.4 64.7 30.4 Table 2: Empirical results for the position-based model, the KL-based models and BAYESUM, with different inputs. of what the query might be. Even in this circumstance, BAYESUM achieves a MAP of 39.4, an MRR of 64.7 and a P@2 of 30.4, still better across the board than KL-REL with all query fields. 
While initially this seems counterintuitive, it is actually not so unreasonable: there is significantly more information available in several hundred positive relevance judgments than in a few sentences. However, the simple blind relevance feedback mechanism so popular in IR is unable to adequately model this. With the exception of the KL model without relevance feedback, adding the description on top of the title does not seem to make any difference for any of the models (and, in fact, occasionally hurts according to some metrics). Adding the summary improves performance in most cases, but not significantly. Adding concepts tends to improve results slightly more substantially than any other. 4.6 Noisy Relevance Judgments Our model hinges on the assumption that, for a given query, we have access to a collection of known relevant documents. In most real-world cases, this assumption is violated. Even in multidocument summarization as run in the DUC competitions, the assumption of access to a collection of documents all relevant to a user need is unrealistic. In the real world, we will have to deal with document collections that “accidentally” contain irrelevant documents. The experiments in this section show that BAYESUM is comparatively robust. For this experiment, we use the IR engine that performed best in the TREC 1 evaluation: Inquery (Callan et al., 1992). We used the offi0.4 0.5 0.6 0.7 0.8 0.9 1 28 30 32 34 36 38 40 42 44 R−precision of IR Engine Mean Average Precision of Sentence Extraction KL−Rel (title only) BayeSum (title only) KL−Rel (title+desc+sum) BayeSum (title+desc+sum) KL−Rel (all fields) BayeSum (all fields) Figure 2: Performance with noisy relevance judgments. The X-axis is the R-precision of the IR engine and the Y-axis is the summarization performance in MAP. Solid lines are BAYESUM, dotted lines are KL-Rel. Blue/stars indicate title only, red/circles indicated title+description+summary and black/pluses indicate all fields. cial TREC results of Inquery on the subset of the TREC corpus we consider. The Inquery Rprecision on this task is 0.39 using title only, and 0.51 using all fields. In order to obtain curves as the IR engine improves, we have linearly interpolated the Inquery rankings with the true relevance judgments. By tweaking the interpolation parameter, we obtain an IR engine with improving performance, but with a reasonable bias. We have run both BAYESUM and KL-Rel on the relevance judgments obtained by this method for six values of the interpolation parameter. The results are shown in Figure 2. As we can observe from the figure, the solid lines (BAYESUM) are always above the dotted lines (KL-Rel). Considering the KL-Rel results alone, we can see that for a non-perfect IR engine, it makes little difference what query fields we use for the summarization task: they all obtain roughly equal scores. This is because the performance in KL-Rel is dominated by the performance of the IR engine. Looking only at the BAYESUM results, we can see a much stronger, and perhaps surprising difference. For an imperfect IR system, it is better to use only the title than to use the title, description and summary for the summarization component. We believe this is because the title is more on topic than the other fields, which contain terms like “A relevant document should describe ....” Never310 theless, BAYESUM has a more upward trend than KL-Rel, which indicates that improved IR will result in improved summarization for BAYESUM but not for KL-Rel. 
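For reference, the three ranking metrics reported throughout this section can be computed as follows. Here `ranked` is the system's ordering of sentence ids and `relevant` the set chosen by the human judges; the average-precision formulation below is the standard one and is assumed to match the computation described in Section 4.2.

```python
def average_precision(ranked, relevant):
    """Mean of precision-at-rank over the ranks where relevant sentences appear."""
    hits, total = 0, 0.0
    for i, s in enumerate(ranked, start=1):
        if s in relevant:
            hits += 1
            total += hits / i
    return total / len(relevant)

def reciprocal_rank(ranked, relevant):
    for i, s in enumerate(ranked, start=1):
        if s in relevant:
            return 1.0 / i
    return 0.0

def precision_at_n_relevant(ranked, relevant, n=2):
    """P@2: precision at the first point where two relevant sentences are selected
    (falls back to P@1 when the judges selected only one sentence)."""
    n = min(n, len(relevant))
    hits = 0
    for i, s in enumerate(ranked, start=1):
        if s in relevant:
            hits += 1
            if hits == n:
                return hits / i
    return 0.0
```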
5 Multidocument Experiments We present two results using BAYESUM in the multidocument summarization settings, based on the official results from the Multilingual Summarization Evaluation (MSE) and Document Understanding Conference (DUC) competitions in 2005. 5.1 Performance at MSE 2005 We participated in the Multilingual Summarization Evaluation (MSE) workshop with a system based on BAYESUM. The task for this competition was generic (no query) multidocument summarization. Fortunately, not having a query is not a hindrance to our model. To account for the redundancy present in document collections, we applied a greedy selection technique that selects sentences central to the document cluster but far from previously selected sentences (Daum´e III and Marcu, 2005a). In MSE, our system performed very well. According to the human “pyramid” evaluation, our system came first with a score of 0.529; the next best score was 0.489. In the automatic “Basic Element” evaluation, our system scored 0.0704 (with a 95% confidence interval of [0.0429, 0.1057]), which was the third best score on a site basis (out of 10 sites), and was not statistically significantly different from the best system, which scored 0.0981. 5.2 Performance at DUC 2005 We also participated in the Document Understanding Conference (DUC) competition. The chosen task for DUC was query-focused multidocument summarization. We entered a nearly identical system to DUC as to MSE, with an additional rulebased sentence compression component (Daum´e III and Marcu, 2005b). Human evaluators considered both responsiveness (how well did the summary answer the query) and linguistic quality. Our system achieved the highest responsiveness score in the competition. We scored more poorly on the linguistic quality evaluation, which (only 5 out of about 30 systems performed worse); this is likely due to the sentence compression we performed on top of BAYESUM. On the automatic Rouge-based evaluations, our system performed between third and sixth (depending on the Rouge parameters), but was never statistically significantly worse than the best performing systems. 6 Discussion and Future Work In this paper we have described a model for automatically generating a query-focused summary, when one has access to multiple relevance judgments. Our Bayesian Query-Focused Summarization model (BAYESUM) consistently outperforms contending, state of the art information retrieval models, even when it is forced to work with significantly less information (either in the complexity of the query terms or the quality of relevance judgments documents). When we applied our system as a stand-alone summarization model in the 2005 MSE and DUC tasks, we achieved among the highest scores in the evaluation metrics. The primary weakness of the model is that it currently only operates in a purely extractive setting. One question that arises is: why does BAYESUM so strongly outperform KL-Rel, given that BAYESUM can be seen as Bayesian formalism for relevance feedback (query expansion)? Both models have access to exactly the same information: the queries and the true relevance judgments. This is especially interesting due to the fact that the two relevance feedback parameters for KLRel were chosen optimally in our experiments, yet BAYESUM consistently won out. One explanation for this performance win is that BAYESUM provides a separate weight for each word, for each query. This gives it significantly more flexibility. 
Doing something similar with ad-hoc query expansion techniques is difficult due to the enormous number of parameters; see, for instance, (Buckley and Salton, 1995). One significant advantage of working in the Bayesian statistical framework is that it gives us a straightforward way to integrate other sources of knowledge into our model in a coherent manner. One could consider, for instance, to extend this model to the multi-document setting, where one would need to explicitly model redundancy across documents. Alternatively, one could include user models to account for novelty or user preferences along the lines of Zhang et al. (2002). Our model is similar in spirit to the randomwalk summarization model (Otterbacher et al., 2005). However, our model has several advantages over this technique. First, our model has 311 no tunable parameters: the random-walk method has many (graph connectivity, various thresholds, choice of similarity metrics, etc.). Moreover, since our model is properly Bayesian, it is straightforward to extend it to model other aspects of the problem, or to related problems. Doing so in a non ad-hoc manner in the random-walk model would be nearly impossible. Another interesting avenue of future work is to relax the bag-of-words assumption. Recent work has shown, in related models, how this can be done for moving from bag-of-words models to bag-ofngram models (Wallach, 2006); more interesting than moving to ngrams would be to move to dependency parse trees, which could likely be accounted for in a similar fashion. One could also potentially relax the assumption that the relevance judgments are known, and attempt to integrate them out as well, essentially simultaneously performing IR and summarization. Acknowledgments. We thank Dave Blei and Tom Minka for discussions related to topic models, and to the anonymous reviewers, whose comments have been of great benefit. This work was partially supported by the National Science Foundation, Grant IIS-0326276. References David Blei, Andrew Ng, and Michael Jordan. 2003. Latent Dirichlet allocation. Journal of Machine Learning Research (JMLR), 3:993–1022, January. Chris Buckley and Gerard Salton. 1995. Optimization of relevance feedback weights. In Proceedings of the Conference on Research and Developments in Information Retrieval (SIGIR). Jamie Callan, Bruce Croft, and Stephen Harding. 1992. The INQUERY retrieval system. In Proceedings of the 3rd International Conference on Database and Expert Systems Applications. Hal Daum´e III and Daniel Marcu. 2005a. Bayesian multi-document summarization at MSE. In ACL 2005 Workshop on Intrinsic and Extrinsic Evaluation Measures. Hal Daum´e III and Daniel Marcu. 2005b. Bayesian summarization at DUC and a suggestion for extrinsic evaluation. In Document Understanding Conference. John Lafferty and ChengXiang Zhai. 2001. Document language models, query models, and risk minimization for information retrieval. In Proceedings of the Conference on Research and Developments in Information Retrieval (SIGIR). Victor Lavrenko, M. Choquette, and Bruce Croft. 2002. Crosslingual relevance models. In Proceedings of the Conference on Research and Developments in Information Retrieval (SIGIR). Xiaoyong Liu and Bruce Croft. 2002. Passage retrieval based on language models. In Processing of the Conference on Information and Knowledge Management (CIKM). Thomas Minka and John Lafferty. 2003. Expectationpropagation for the generative aspect model. 
In Proceedings of the Conference on Uncertainty in Artificial Intelligence (UAI).
Thomas Minka. 2001. A family of algorithms for approximate Bayesian inference. Ph.D. thesis, Massachusetts Institute of Technology, Cambridge, MA.
Vanessa Murdock and Bruce Croft. 2005. A translation model for sentence retrieval. In Proceedings of the Joint Conference on Human Language Technology and Empirical Methods in Natural Language Processing (HLT/EMNLP), pages 684–691.
Jahna Otterbacher, Gunes Erkan, and Dragomir R. Radev. 2005. Using random walks for question-focused sentence retrieval. In Proceedings of the Joint Conference on Human Language Technology and Empirical Methods in Natural Language Processing (HLT/EMNLP).
Jay M. Ponte and Bruce Croft. 1998. A language modeling approach to information retrieval. In Proceedings of the Conference on Research and Developments in Information Retrieval (SIGIR).
Hanna Wallach. 2006. Topic modeling: beyond bag-of-words. In Proceedings of the International Conference on Machine Learning (ICML).
Yi Zhang, Jamie Callan, and Thomas Minka. 2002. Novelty and redundancy detection in adaptive filtering. In Proceedings of the Conference on Research and Developments in Information Retrieval (SIGIR).
Proceedings of the 21st International Conference on Computational Linguistics and 44th Annual Meeting of the ACL, pages 25–32, Sydney, July 2006. c⃝2006 Association for Computational Linguistics Minimum Cut Model for Spoken Lecture Segmentation Igor Malioutov and Regina Barzilay Computer Science and Artificial Intelligence Laboratory Massachusetts Institute of Technology {igorm,regina}@csail.mit.edu Abstract We consider the task of unsupervised lecture segmentation. We formalize segmentation as a graph-partitioning task that optimizes the normalized cut criterion. Our approach moves beyond localized comparisons and takes into account longrange cohesion dependencies. Our results demonstrate that global analysis improves the segmentation accuracy and is robust in the presence of speech recognition errors. 1 Introduction The development of computational models of text structure is a central concern in natural language processing. Text segmentation is an important instance of such work. The task is to partition a text into a linear sequence of topically coherent segments and thereby induce a content structure of the text. The applications of the derived representation are broad, encompassing information retrieval, question-answering and summarization. Not surprisingly, text segmentation has been extensively investigated over the last decade. Following the first unsupervised segmentation approach by Hearst (1994), most algorithms assume that variations in lexical distribution indicate topic changes. When documents exhibit sharp variations in lexical distribution, these algorithms are likely to detect segment boundaries accurately. For example, most algorithms achieve high performance on synthetic collections, generated by concatenation of random text blocks (Choi, 2000). The difficulty arises, however, when transitions between topics are smooth and distributional variations are subtle. This is evident in the performance of existing unsupervised algorithms on less structured datasets, such as spoken meeting transcripts (Galley et al., 2003). Therefore, a more refined analysis of lexical distribution is needed. Our work addresses this challenge by casting text segmentation in a graph-theoretic framework. We abstract a text into a weighted undirected graph, where the nodes of the graph correspond to sentences and edge weights represent the pairwise sentence similarity. In this framework, text segmentation corresponds to a graph partitioning that optimizes the normalized-cut criterion (Shi and Malik, 2000). This criterion measures both the similarity within each partition and the dissimilarity across different partitions. Thus, our approach moves beyond localized comparisons and takes into account long-range changes in lexical distribution. Our key hypothesis is that global analysis yields more accurate segmentation results than local models. We tested our algorithm on a corpus of spoken lectures. Segmentation in this domain is challenging in several respects. Being less structured than written text, lecture material exhibits digressions, disfluencies, and other artifacts of spontaneous communication. In addition, the output of speech recognizers is fraught with high word error rates due to specialized technical vocabulary and lack of in-domain spoken data for training. Finally, pedagogical considerations call for fluent transitions between different topics in a lecture, further complicating the segmentation task. 
Our experimental results confirm our hypothesis: considering long-distance lexical dependencies yields substantial gains in segmentation performance. Our graph-theoretic approach compares favorably to state-of-the-art segmentation algorithms and attains results close to the range of human agreement scores. Another attractive prop25 erty of the algorithm is its robustness to noise: the accuracy of our algorithm does not deteriorate significantly when applied to speech recognition output. 2 Previous Work Most unsupervised algorithms assume that fragments of text with homogeneous lexical distribution correspond to topically coherent segments. Previous research has analyzed various facets of lexical distribution, including lexical weighting, similarity computation, and smoothing (Hearst, 1994; Utiyama and Isahara, 2001; Choi, 2000; Reynar, 1998; Kehagias et al., 2003; Ji and Zha, 2003). The focus of our work, however, is on an orthogonal yet fundamental aspect of this analysis — the impact of long-range cohesion dependencies on segmentation performance. In contrast to previous approaches, the homogeneity of a segment is determined not only by the similarity of its words, but also by their relation to words in other segments of the text. We show that optimizing our global objective enables us to detect subtle topical changes. Graph-Theoretic Approaches in Vision Segmentation Our work is inspired by minimum-cutbased segmentation algorithms developed for image analysis. Shi and Malik (2000) introduced the normalized-cut criterion and demonstrated its practical benefits for segmenting static images. Our method, however, is not a simple application of the existing approach to a new task. First, in order to make it work in the new linguistic framework, we had to redefine the underlying representation and introduce a variety of smoothing and lexical weighting techniques. Second, the computational techniques for finding the optimal partitioning are also quite different. Since the minimization of the normalized cut is NP-complete in the general case, researchers in vision have to approximate this computation. Fortunately, we can find an exact solution due to the linearity constraint on text segmentation. 3 Minimum Cut Framework Linguistic research has shown that word repetition in a particular section of a text is a device for creating thematic cohesion (Halliday and Hasan, 1976), and that changes in the lexical distributions usually signal topic transitions. Figure 1: Sentence similarity plot for a Physics lecture, with vertical lines indicating true segment boundaries. Figure 1 illustrates these properties in a lecture transcript from an undergraduate Physics class. We use the text Dotplotting representation by (Church, 1993) and plot the cosine similarity scores between every pair of sentences in the text. The intensity of a point (i, j) on the plot indicates the degree to which the i-th sentence in the text is similar to the j-th sentence. The true segment boundaries are denoted by vertical lines. This similarity plot reveals a block structure where true boundaries delimit blocks of text with high inter-sentential similarity. Sentences found in different blocks, on the other hand, tend to exhibit low similarity. u1 u2 u3 un Figure 2: Graph-based Representation of Text Formalizing the Objective Whereas previous unsupervised approaches to segmentation rested on intuitive notions of similarity density, we formalize the objective of text segmentation through cuts on graphs. 
We aim to jointly maximize the intra-segmental similarity and minimize the similarity between different segments. In other words, we want to find the segmentation with a maximally homogeneous set of segments that are also maxi26 mally different from each other. Let G = {V, E} be an undirected, weighted graph, where V is the set of nodes corresponding to sentences in the text and E is the set of weighted edges (See Figure 2). The edge weights, w(u, v), define a measure of similarity between pairs of nodes u and v, where higher scores indicate higher similarity. Section 4 provides more details on graph construction. We consider the problem of partitioning the graph into two disjoint sets of nodes A and B. We aim to minimize the cut, which is defined to be the sum of the crossing edges between the two sets of nodes. In other words, we want to split the sentences into two maximally dissimilar classes by choosing A and B to minimize: cut(A, B) = X u∈A,v∈B w(u, v) However, we need to ensure that the two partitions are not only maximally different from each other, but also that they are themselves homogeneous by accounting for intra-partition node similarity. We formulate this requirement in the framework of normalized cuts (Shi and Malik, 2000), where the cut value is normalized by the volume of the corresponding partitions. The volume of the partition is the sum of its edges to the whole graph: vol(A) = X u∈A,v∈V w(u, v) The normalized cut criterion (Ncut) is then defined as follows: Ncut(A, B) = cut(A, B) vol(A) + cut(A, B) vol(B) By minimizing this objective we simultaneously minimize the similarity across partitions and maximize the similarity within partitions. This formulation also allows us to decompose the objective into a sum of individual terms, and formulate a dynamic programming solution to the multiway cut problem. This criterion is naturally extended to a k-way normalized cut: Ncutk(V ) = cut(A1, V −A1) vol(A1) + . . . + cut(Ak, V −Ak) vol(Ak) where A1 . . . Ak form a partition of the graph, and V −Ak is the set difference between the entire graph and partition k. Decoding Papadimitriou proved that the problem of minimizing normalized cuts on graphs is NP-complete (Shi and Malik, 2000). However, in our case, the multi-way cut is constrained to preserve the linearity of the segmentation. By segmentation linearity, we mean that all of the nodes between the leftmost and the rightmost nodes of a particular partition have to belong to that partition. With this constraint, we formulate a dynamic programming algorithm for exactly finding the minimum normalized multiway cut in polynomial time: C [i, k] = min j<k  C [i −1, j] + cut [Aj,k, V −Aj,k] vol [Aj,k]  (1) B [i, k] = argmin j<k  C [i −1, j] + cut [Aj,k, V −Aj,k] vol [Aj,k]  (2) s.t. C [0, 1] = 0, C [0, k] = ∞, 1 < k ≤N (3) B [0, k] = 1, 1 ≤k ≤N (4) C [i, k] is the normalized cut value of the optimal segmentation of the first k sentences into i segments. The i-th segment, Aj,k, begins at node uj and ends at node uk. B [i, k] is the back-pointer table from which we recover the optimal sequence of segment boundaries. Equations 3 and 4 capture respectively the condition that the normalized cut value of the trivial segmentation of an empty text into one segment is zero and the constraint that the first segment starts with the first node. The time complexity of the dynamic programming algorithm is O(KN2), where K is the number of partitions and N is the number of nodes in the graph or sentences in the transcript. 
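The recurrence above translates directly into code. The sketch below assumes a dense symmetric similarity matrix W with W[u, v] = w(u, v) over the sentences; the 0-based, half-open segment indexing is our own convention and the toy matrix is invented.

```python
import numpy as np

def min_ncut_segmentation(W, num_segments):
    """Exact minimum normalized multiway cut under the linearity constraint."""
    N = W.shape[0]
    row_sums = W.sum(axis=1)

    def ncut_term(a, b):
        """Normalized-cut contribution of the segment spanning sentences a..b-1."""
        vol = row_sums[a:b].sum()
        internal = W[a:b, a:b].sum()
        cut = vol - internal               # weight of edges leaving the segment
        return cut / vol if vol > 0 else 0.0

    INF = float('inf')
    C = np.full((num_segments + 1, N + 1), INF)   # C[i, k]: best value, i segments, k sentences
    B = np.zeros((num_segments + 1, N + 1), dtype=int)
    C[0, 0] = 0.0
    for i in range(1, num_segments + 1):
        for k in range(i, N + 1):
            for j in range(i - 1, k):      # i-th segment covers sentences j..k-1
                cand = C[i - 1, j] + ncut_term(j, k)
                if cand < C[i, k]:
                    C[i, k], B[i, k] = cand, j
    # Recover the indices at which each segment starts from the back-pointer table.
    bounds, k = [], N
    for i in range(num_segments, 0, -1):
        bounds.append(int(B[i, k]))
        k = B[i, k]
    return C[num_segments, N], sorted(bounds)

# Tiny example: two blocks of mutually similar sentences.
W = np.array([[1.0, 0.9, 0.1, 0.1],
              [0.9, 1.0, 0.1, 0.1],
              [0.1, 0.1, 1.0, 0.9],
              [0.1, 0.1, 0.9, 1.0]])
print(min_ncut_segmentation(W, 2))   # -> (about 0.19, [0, 2]): segments start at sentences 0 and 2
```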
4 Building the Graph Clearly, the performance of our model depends on the underlying representation, the definition of the pairwise similarity function, and various other model parameters. In this section we provide further details on the graph construction process. Preprocessing Before building the graph, we apply standard text preprocessing techniques to the text. We stem words with the Porter stemmer (Porter, 1980) to alleviate the sparsity of word counts through stem equivalence classes. We also remove words matching a prespecified list of stop words. 27 Graph Topology As we noted in the previous section, the normalized cut criterion considers long-term similarity relationships between nodes. This effect is achieved by constructing a fullyconnected graph. However, considering all pairwise relations in a long text may be detrimental to segmentation accuracy. Therefore, we discard edges between sentences exceeding a certain threshold distance. This reduction in the graph size also provides us with computational savings. Similarity Computation In computing pairwise sentence similarities, sentences are represented as vectors of word counts. Cosine similarity is commonly used in text segmentation (Hearst, 1994). To avoid numerical precision issues when summing a series of very small scores, we compute exponentiated cosine similarity scores between pairs of sentence vectors: w(si, sj) = e si·sj ||si||×||sj|| We further refine our analysis by smoothing the similarity metric. When comparing two sentences, we also take into account similarity between their immediate neighborhoods. The smoothing is achieved by adding counts of words that occur in adjoining sentences to the current sentence feature vector. These counts are weighted in accordance to their distance from the current sentence: ˜si = i+k X j=i e−α(j−i)sj, where si are vectors of word counts, and α is a parameter that controls the degree of smoothing. In the formulation above we use sentences as our nodes. However, we can also represent graph nodes with non-overlapping blocks of words of fixed length. This is desirable, since the lecture transcripts lack sentence boundary markers, and short utterances can skew the cosine similarity scores. The optimal length of the block is tuned on a heldout development set. Lexical Weighting Previous research has shown that weighting schemes play an important role in segmentation performance (Ji and Zha, 2003; Choi et al., 2001). Of particular concern are words that may not be common in general English discourse but that occur throughout the text for a particular lecture or subject. For example, in a lecture about support vector machines, the occurrence of the term “SVM” is not going to convey a lot of information about the distribution of Segments per Total Word ASR WER Corpus Lectures Lecture Tokens Accuracy Physics 33 5.9 232K 19.4% AI 22 12.3 182K × Table 1: Lecture Corpus Statistics sub-topics, even though it is a fairly rare term in general English and bears much semantic content. The same words can convey varying degrees of information across different lectures, and term weighting specific to individual lectures becomes important in the similarity computation. In order to address this issue, we introduce a variation on the tf-idf scoring scheme used in the information-retrieval literature (Salton and Buckley, 1988). A transcript is split uniformly into N chunks; each chunk serves as the equivalent of documents in the tf-idf computation. 
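A minimal version of this preprocessing step, assuming NLTK's implementation of the Porter stemmer and an illustrative stop-word list (the actual list used is not given):

```python
from nltk.stem.porter import PorterStemmer

STOP_WORDS = {'the', 'a', 'an', 'and', 'or', 'of', 'to', 'is', 'are', 'on', 'in', 'that'}
stemmer = PorterStemmer()

def preprocess(sentence):
    """Lowercase, drop stop words, and map tokens to Porter stem classes."""
    return [stemmer.stem(tok) for tok in sentence.lower().split()
            if tok not in STOP_WORDS]

print(preprocess("The forces are acting on the masses"))
# e.g. ['forc', 'act', 'mass']; exact stems depend on the stemmer version
```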
The weights are computed separately for each transcript, since topic and word distributions vary across lectures. 5 Evaluation Set-Up In this section we present the different corpora used to evaluate our model and provide a brief overview of the evaluation metrics. Next, we describe our human segmentation study on the corpus of spoken lecture data. 5.1 Parameter Estimation A heldout development set of three lectures isused for estimating the optimal word block length for representing nodes, the threshold distances for discarding node edges, the number of uniform chunks for estimating tf-idf lexical weights, the alpha parameter for smoothing, and the length of the smoothing window. We use a simple greedy search procedure for optimizing the parameters. 5.2 Corpora We evaluate our segmentation algorithm on three sets of data. Two of the datasets we use are new segmentation collections that we have compiled for this study,1 and the remaining set includes a standard collection previously used for evaluation of segmentation algorithms. Various corpus statistics for the new datasets are presented in Table 1. Below we briefly describe each corpus. Physics Lectures Our first corpus consists of spoken lecture transcripts from an undergraduate 1Our materials are publicly available at http://www. csail.mit.edu/˜igorm/acl06.html 28 Physics class. In contrast to other segmentation datasets, our corpus contains much longer texts. A typical lecture of 90 minutes has 500 to 700 sentences with 8500 words, which corresponds to about 15 pages of raw text. We have access both to manual transcriptions of these lectures and also output from an automatic speech recognition system. The word error rate for the latter is 19.4%,2 which is representative of state-of-the-art performance on lecture material (Leeuwis et al., 2003). The Physics lecture transcript segmentations were produced by the teaching staff of the introductory Physics course at the Massachusetts Institute of Technology. Their objective was to facilitate access to lecture recordings available on the class website. This segmentation conveys the high-level topical structure of the lectures. On average, a lecture was annotated with six segments, and a typical segment corresponds to two pages of a transcript. Artificial Intelligence Lectures Our second lecture corpus differs in subject matter, lecturing style, and segmentation granularity. The graduate Artificial Intelligence class has, on average, twelve segments per lecture, and a typical segment is about half of a page. One segment roughly corresponds to the content of a slide. This time the segmentation was obtained from the lecturer herself. The lecturer went through the transcripts of lecture recordings and segmented the lectures with the objective of making the segments correspond to presentation slides for the lectures. Due to the low recording quality, we were unable to obtain the ASR transcripts for this class. Therefore, we only use manual transcriptions of these lectures. Synthetic Corpus Also as part of our analysis, we used the synthetic corpus created by Choi (2000) which is commonly used in the evaluation of segmentation algorithms. This corpus consists of a set of concatenated segments randomly sampled from the Brown corpus. The length of the segments in this corpus ranges from three to eleven sentences. 
It is important to note that the lexical transitions in these concatenated texts are very sharp, since the segments come from texts written in widely varying language styles on completely different topics. 2A speaker-dependent model of the lecturer was trained on 38 hours of lectures from other courses using the SUMMIT segment-based Speech Recognizer (Glass, 2003). 5.3 Evaluation Metric We use the Pk and WindowDiff measures to evaluate our system (Beeferman et al., 1999; Pevzner and Hearst, 2002). The Pk measure estimates the probability that a randomly chosen pair of words within a window of length k words is inconsistently classified. The WindowDiff metric is a variant of the Pk measure, which penalizes false positives on an equal basis with near misses. Both of these metrics are defined with respect to the average segment length of texts and exhibit high variability on real data. We follow Choi (2000) and compute the mean segment length used in determining the parameter k on each reference text separately. We also plot the Receiver Operating Characteristic (ROC) curve to gauge performance at a finer level of discrimination (Swets, 1988). The ROC plot is the plot of the true positive rate against the false positive rate for various settings of a decision criterion. In our case, the true positive rate is the fraction of boundaries correctly classified, and the false positive rate is the fraction of non-boundary positions incorrectly classified as boundaries. In computing the true and false positive rates, we vary the threshold distance to the true boundary within which a hypothesized boundary is considered correct. Larger areas under the ROC curve of a classifier indicate better discriminative performance. 5.4 Human Segmentation Study Spoken lectures are very different in style from other corpora used in human segmentation studies (Hearst, 1994; Galley et al., 2003). We are interested in analyzing human performance on a corpus of lecture transcripts with much longer texts and a less clear-cut concept of a sub-topic. We define a segment to be a sub-topic that signals a prominent shift in subject matter. Disregarding this sub-topic change would impair the high-level understanding of the structure and the content of the lecture. As part of our human segmentation analysis, we asked three annotators to segment the Physics lecture corpus. These annotators had taken the class in the past and were familiar with the subject matter under consideration. We wrote a detailed instruction manual for the task, with annotation guidelines for the most part following the model used by Gruenstein et al. (2005). The annotators were instructed to segment at a level of granularity 29 O A B C MEAN SEG. COUNT 6.6 8.9 18.4 13.8 MEAN SEG. LENGTH 69.4 51.5 24.9 33.2 SEG. LENGTH DEV. 39.6 37.4 34.5 39.4 Table 2: Annotator Segmentation Statistics for the first ten Physics lectures. REF/HYP O A B C O 0 0.243 0.418 0.312 A 0.219 0 0.400 0.355 B 0.314 0.337 0 0.332 C 0.260 0.296 0.370 0 Table 3: Pk annotation agreement between different pairs of annotators. that would identify most of the prominent topical transitions necessary for a summary of the lecture. The annotators used the NOMOS annotation software toolkit, developed for meeting segmentation (Gruenstein et al., 2005). They were provided with recorded audio of the lectures and the corresponding text transcriptions. 
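The two evaluation metrics introduced in Section 5.3 admit a compact reference implementation, sketched below. Boundaries are represented as the sentence positions at which a new segment starts, and the window size k defaults to half the mean reference segment length, following the convention described above; this is a textbook formulation, and the exact scoring script used to produce the reported numbers may differ in detail.

```python
def pk_and_windowdiff(ref_bounds, hyp_bounds, n, k=None):
    """Pk and WindowDiff for a text of n sentences. `ref_bounds` and
    `hyp_bounds` are collections of boundary positions: a boundary at
    position i marks a break between sentences i-1 and i (1 <= i < n).
    """
    if k is None:
        # Half the average reference segment length, as is conventional.
        k = max(1, round(n / (2.0 * (len(ref_bounds) + 1))))

    def seg_ids(bounds):
        # Map each sentence position to the id of the segment containing it.
        ids, seg = [], 0
        for i in range(n):
            if i in bounds:
                seg += 1
            ids.append(seg)
        return ids

    ref, hyp = seg_ids(set(ref_bounds)), seg_ids(set(hyp_bounds))

    pk_err = wd_err = 0
    windows = n - k
    for i in range(windows):
        same_ref = ref[i] == ref[i + k]
        same_hyp = hyp[i] == hyp[i + k]
        pk_err += same_ref != same_hyp            # Pk: same/different disagreement
        b_ref = ref[i + k] - ref[i]               # boundaries inside the window
        b_hyp = hyp[i + k] - hyp[i]
        wd_err += b_ref != b_hyp                  # WindowDiff: boundary-count mismatch
    return pk_err / windows, wd_err / windows
```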
We intentionally did not provide the subjects with the target number of boundaries, since we wanted to see if the annotators would converge on a common segmentation granularity. Table 2 presents the annotator segmentation statistics. We see two classes of segmentation granularities. The original reference (O) and annotator A segmented at a coarse level with an average of 6.6 and 8.9 segments per lecture, respectively. Annotators B and C operated at much finer levels of discrimination with 18.4 and 13.8 segments per lecture on average. We conclude that multiple levels of granularity are acceptable in spoken lecture segmentation. This is expected given the length of the lectures and varying human judgments in selecting relevant topical content. Following previous studies, we quantify the level of annotator agreement with the Pk measure (Gruenstein et al., 2005).3 Table 3 shows the annotator agreement scores between different pairs of annotators. Pk measures ranged from 0.24 and 0.42. We observe greater consistency at similar levels of granularity, and less so across the two 3Kappa measure would not be the appropriate measure in this case, because it is not sensitive to near misses, and we cannot make the required independence assumption on the placement of boundaries. EDGE CUTOFF 10 25 50 100 200 NONE PHYSICS (MANUAL) PK 0.394 0.373 0.341 0.295 0.311 0.330 WD 0.404 0.383 0.352 0.308 0.329 0.350 PHYSICS (ASR) PK 0.440 0.371 0.343 0.330 0.322 0.359 WD 0.456 0.383 0.356 0.343 0.342 0.398 AI PK 0.480 0.422 0.408 0.416 0.393 0.397 WD 0.493 0.435 0.420 0.440 0.424 0.432 CHOI PK 0.222 0.202 0.213 0.216 0.208 0.208 WD 0.234 0.222 0.233 0.238 0.230 0.230 Table 4: Edges between nodes separated beyond a certain threshold distance are removed. classes. Note that annotator A operated at a level of granularity consistent with the original reference segmentation. Hence, the 0.24 Pk measure score serves as the benchmark with which we can compare the results attained by segmentation algorithms on the Physics lecture data. As an additional point of reference we note that the uniform and random baseline segmentations attain 0.469 and 0.493 Pk measure, respectively, on the Physics lecture set. 6 Experimental Results 0 0.1 0.2 0.3 0.4 0.5 0 0.1 0.2 0.3 0.4 0.5 0.6 0.7 0.8 False Positive Rate True Positive Rate Cutoff=5 Cutoff=100 Figure 3: ROC plot for the Minimum Cut Segmenter on thirty Physics Lectures, with edge cutoffs set at five and hundred sentences. Benefits of global analysis We first determine the impact of long-range pairwise similarity dependencies on segmentation performance. Our 30 CHOI UI MINCUT PHYSICS (MANUAL) PK 0.372 0.310 0.298 WD 0.385 0.323 0.311 PHYSICS (ASR TRANSCRIPTS) PK 0.361 0.352 0.322 WD 0.376 0.364 0.340 AI PK 0.445 0.374 0.383 WD 0.478 0.420 0.417 CHOI PK 0.110 0.105 0.212 WD 0.121 0.116 0.234 Table 5: Performance analysis of different algorithms using the Pk and WindowDiff measures, with three lectures heldout for development. key hypothesis is that considering long-distance lexical relations contributes to the effectiveness of the algorithm. To test this hypothesis, we discard edges between nodes that are more than a certain number of sentences apart. We test the system on a range of data sets, including the Physics and AI lectures and the synthetic corpus created by Choi (2000). We also include segmentation results on Physics ASR transcripts. The results in Table 4 confirm our hypothesis — taking into account non-local lexical dependencies helps across different domains. 
On manually transcribed Physics lecture data, for example, the algorithm yields 0.394 Pk measure when taking into account edges separated by up to ten sentences. When dependencies up to a hundred sentences are considered, the algorithm yields a 25% reduction in Pk measure. Figure 3 shows the ROC plot for the segmentation of the Physics lecture data with different cutoff parameters, again demonstrating clear gains attained by employing longrange dependencies. As Table 4 shows, the improvement is consistent across all lecture datasets. We note, however, that after some point increasing the threshold degrades performance, because it introduces too many spurious dependencies (see the last column of Table 4). The speaker will occasionally return to a topic described at the beginning of the lecture, and this will bias the algorithm to put the segment boundary closer to the end of the lecture. Long-range dependencies do not improve the performance on the synthetic dataset. This is expected since the segments in the synthetic dataset are randomly selected from widely-varying documents in the Brown corpus, even spanning different genres of written language. So, effectively, there are no genuine long-range dependencies that can be exploited by the algorithm. Comparison with local dependency models We compare our system with the state-of-the-art similarity-based segmentation system developed by Choi (2000). We use the publicly available implementation of the system and optimize the system on a range of mask-sizes and different parameter settings described in (Choi, 2000) on a heldout development set of three lectures. To control for segmentation granularity, we specify the number of segments in the reference (“O”) segmentation for both our system and the baseline. Table 5 shows that the Minimum Cut algorithm consistently outperforms the similarity-based baseline on all the lecture datasets. We attribute this gain to the presence of more attenuated topic transitions in spoken language. Since spoken language is more spontaneous and less structured than written language, the speaker needs to keep the listener abreast of the changes in topic content by introducing subtle cues and references to prior topics in the course of topical transitions. Non-local dependencies help to elucidate shifts in focus, because the strength of a particular transition is measured with respect to other local and long-distance contextual discourse relationships. Our system does not outperform Choi’s algorithm on the synthetic data. This again can be attributed to the discrepancy in distributional properties of the synthetic corpus which lacks coherence in its thematic shifts and the lecture corpus of spontaneous speech with smooth distributional variations. We also note that we did not try to adjust our model to optimize its performance on the synthetic data. The smoothing method developed for lecture segmentation may not be appropriate for short segments ranging from three to eleven sentences that constitute the synthetic set. We also compared our method with another state-of-the-art algorithm which does not explicitly rely on pairwise similarity analysis. This algorithm (Utiyama and Isahara, 2001) (UI) computes the optimal segmentation by estimating changes in the language model predictions over different partitions. We used the publicly available implemen31 tation of the system that does not require parameter tuning on a heldout development set. 
Again, our method achieves favorable performance on a range of lecture data sets (See Table 5), and both algorithms attain results close to the range of human agreement scores. The attractive feature of our algorithm, however, is robustness to recognition errors — testing it on the ASR transcripts caused only 7.8% relative increase in Pk measure (from 0.298 to 0.322), compared to a 13.5% relative increase for the UI system. We attribute this feature to the fact that the model is less dependent on individual recognition errors, which have a detrimental effect on the local segment language modeling probability estimates for the UI system. The block-level similarity function is not as sensitive to individual word errors, because the partition volume normalization factor dampens their overall effect on the derived models. 7 Conclusions In this paper we studied the impact of long-range dependencies on the accuracy of text segmentation. We modeled text segmentation as a graphpartitioning task aiming to simultaneously optimize the total similarity within each segment and dissimilarity across various segments. We showed that global analysis of lexical distribution improves the segmentation accuracy and is robust in the presence of recognition errors. Combining global analysis with advanced methods for smoothing (Ji and Zha, 2003) and weighting could further boost the segmentation performance. Our current implementation does not automatically determine the granularity of a resulting segmentation. This issue has been explored in the past (Ji and Zha, 2003; Utiyama and Isahara, 2001), and we will explore the existing strategies in our framework. We believe that the algorithm has to produce segmentations for various levels of granularity, depending on the needs of the application that employs it. Our ultimate goal is to automatically generate tables of content for lectures. We plan to investigate strategies for generating titles that will succinctly describe the content of each segment. We will explore how the interaction between the generation and segmentation components can improve the performance of such a system as a whole. 8 Acknowledgements The authors acknowledge the support of the National Science Foundation (CAREER grant IIS-0448168, grant IIS0415865, and the NSF Graduate Fellowship). Any opinions, findings, conclusions or recommendations expressed in this publication are those of the author(s) and do not necessarily reflect the views of the National Science Foundation. We would like to thank Masao Utiyama for providing us with an implementation of his segmentation system and Alex Gruenstein for assisting us with the NOMOS toolkit. We are grateful to David Karger for an illuminating discussion on the Minimum Cut algorithm. We also would like to acknowledge the MIT NLP and Speech Groups, the three annotators, and the three anonymous reviewers for valuable comments, suggestions, and help. References D. Beeferman, A. Berger, J. D. Lafferty. 1999. Statistical models for text segmentation. Machine Learning, 34(13):177–210. F. Choi, P. Wiemer-Hastings, J. Moore. 2001. Latent semantic analysis for text segmentation. In Proceedings of EMNLP, 109–117. F. Y. Y. Choi. 2000. Advances in domain independent linear text segmentation. In Proceedings of the NAACL, 26–33. K. W. Church. 1993. Char align: A program for aligning parallel texts at the character level. In Proceedings of the ACL, 1–8. M. Galley, K. McKeown, E. Fosler-Lussier, H. Jing. 2003. Discourse segmentation of multi-party conversation. 
In Proceedings of the ACL, 562–569. J. R. Glass. 2003. A probabilistic framework for segmentbased speech recognition. Computer Speech and Language, 17(2–3):137–152. A. Gruenstein, J. Niekrasz, M. Purver. 2005. Meeting structure annotation: Data and tools. In Proceedings of the SIGdial Workshop on Discourse and Dialogue, 117–127. M. A. K. Halliday, R. Hasan. 1976. Cohesion in English. Longman, London. M. Hearst. 1994. Multi-paragraph segmentation of expository text. In Proceedings of the ACL, 9–16. X. Ji, H. Zha. 2003. Domain-independent text segmentation using anisotropic diffusion and dynamic programming. In Proceedings of SIGIR, 322–329. A. Kehagias, P. Fragkou, V. Petridis. 2003. Linear text segmentation using a dynamic programming algorithm. In Proceedings of the EACL, 171–178. E. Leeuwis, M. Federico, M. Cettolo. 2003. Language modeling and transcription of the ted corpus lectures. In Proceedings of ICASSP, 232–235. L. Pevzner, M. Hearst. 2002. A critique and improvement of an evaluation metric for text segmentation. Computational Linguistics, 28(1):pp. 19–36. M. F. Porter. 1980. An algorithm for suffix stripping. Program, 14(3):130–137. J. Reynar. 1998. Topic segmentation: Algorithms and applications. Ph.D. thesis, University of Pennsylvania. G. Salton, C. Buckley. 1988. Term weighting approaches in automatic text retrieval. Information Processing and Management, 24(5):513–523. J. Shi, J. Malik. 2000. Normalized cuts and image segmentation. IEEE Transactions on Pattern Analysis and Machine Intelligence, 22(8):888–905. J. Swets. 1988. Measuring the accuracy of diagnostic systems. Science, 240(4857):1285–1293. M. Utiyama, H. Isahara. 2001. A statistical model for domain-independent text segmentation. In Proceedings of the ACL, 499–506. 32
Proceedings of the 21st International Conference on Computational Linguistics and 44th Annual Meeting of the ACL, pages 313–320, Sydney, July 2006. c⃝2006 Association for Computational Linguistics Expressing Implicit Semantic Relations without Supervision Peter D. Turney Institute for Information Technology National Research Council Canada M-50 Montreal Road Ottawa, Ontario, Canada, K1A 0R6 [email protected] Abstract We present an unsupervised learning algorithm that mines large text corpora for patterns that express implicit semantic relations. For a given input word pair Y X : with some unspecified semantic relations, the corresponding output list of patterns m P P , , 1  is ranked according to how well each pattern iP expresses the relations between X and Y . For example, given ostrich = X and bird = Y , the two highest ranking output patterns are “X is the largest Y” and “Y such as the X”. The output patterns are intended to be useful for finding further pairs with the same relations, to support the construction of lexicons, ontologies, and semantic networks. The patterns are sorted by pertinence, where the pertinence of a pattern iP for a word pair Y X : is the expected relational similarity between the given pair and typical pairs for iP . The algorithm is empirically evaluated on two tasks, solving multiple-choice SAT word analogy questions and classifying semantic relations in noun-modifier pairs. On both tasks, the algorithm achieves stateof-the-art results, performing significantly better than several alternative pattern ranking algorithms, based on tf-idf. 1 Introduction In a widely cited paper, Hearst (1992) showed that the lexico-syntactic pattern “Y such as the X” can be used to mine large text corpora for word pairs Y X : in which X is a hyponym (type) of Y. For example, if we search in a large corpus using the pattern “Y such as the X” and we find the string “bird such as the ostrich”, then we can infer that “ostrich” is a hyponym of “bird”. Berland and Charniak (1999) demonstrated that the patterns “Y’s X” and “X of the Y” can be used to mine corpora for pairs Y X : in which X is a meronym (part) of Y (e.g., “wheel of the car”). Here we consider the inverse of this problem: Given a word pair Y X : with some unspecified semantic relations, can we mine a large text corpus for lexico-syntactic patterns that express the implicit relations between X and Y ? For example, if we are given the pair ostrich:bird, can we discover the pattern “Y such as the X”? We are particularly interested in discovering high quality patterns that are reliable for mining further word pairs with the same semantic relations. In our experiments, we use a corpus of web pages containing about 10 10 5× English words (Terra and Clarke, 2003). From co-occurrences of the pair ostrich:bird in this corpus, we can generate 516 patterns of the form “X ... Y” and 452 patterns of the form “Y ... X”. Most of these patterns are not very useful for text mining. The main challenge is to find a way of ranking the patterns, so that patterns like “Y such as the X” are highly ranked. Another challenge is to find a way to empirically evaluate the performance of any such pattern ranking algorithm. For a given input word pair Y X : with some unspecified semantic relations, we rank the corresponding output list of patterns m P P , , 1  in order of decreasing pertinence. The pertinence of a pattern iP for a word pair Y X : is the expected relational similarity between the given pair and typical pairs that fit iP . 
We define pertinence more precisely in Section 2. Hearst (1992) suggests that her work may be useful for building a thesaurus. Berland and Charniak (1999) suggest their work may be useful for building a lexicon or ontology, like WordNet. Our algorithm is also applicable to these tasks. Other potential applications and related problems are discussed in Section 3. To calculate pertinence, we must be able to measure relational similarity. Our measure is based on Latent Relational Analysis (Turney, 2005). The details are given in Section 4. Given a word pair Y X : , we want our algorithm to rank the corresponding list of patterns 313 m P P , , 1  according to their value for mining text, in support of semantic network construction and similar tasks. Unfortunately, it is difficult to measure performance on such tasks. Therefore our experiments are based on two tasks that provide objective performance measures. In Section 5, ranking algorithms are compared by their performance on solving multiple-choice SAT word analogy questions. In Section 6, they are compared by their performance on classifying semantic relations in noun-modifier pairs. The experiments demonstrate that ranking by pertinence is significantly better than several alternative pattern ranking algorithms, based on tf-idf. The performance of pertinence on these two tasks is slightly below the best performance that has been reported so far (Turney, 2005), but the difference is not statistically significant. We discuss the results in Section 7 and conclude in Section 8. 2 Pertinence The relational similarity between two pairs of words, 1 1 :Y X and 2 2 :Y X , is the degree to which their semantic relations are analogous. For example, mason:stone and carpenter:wood have a high degree of relational similarity. Measuring relational similarity will be discussed in Section 4. For now, assume that we have a measure of the relational similarity between pairs of words, ℜ ∈ ) : , : ( sim 2 2 1 1 r Y X Y X . Let } : , , : { 1 1 n n Y X Y X W  = be a set of word pairs and let } , , { 1 m P P P  = be a set of patterns. The pertinence of pattern iP to a word pair j j Y X : is the expected relational similarity between a word pair k k Y X : , randomly selected from W according to the probability distribution ) : ( p i k k P Y X , and the word pair j j Y X : : ) , : ( pertinence i j j P Y X  = ⋅ = n k k k j j i k k Y X Y X P Y X 1 r ) : , : ( sim ) : ( p The conditional probability ) : ( p i k k P Y X can be interpreted as the degree to which the pair k k Y X : is representative (i.e., typical) of pairs that fit the pattern iP . That is, iP is pertinent to j j Y X : if highly typical word pairs k k Y X : for the pattern iP tend to be relationally similar to j j Y X : . Pertinence tends to be highest with patterns that are unambiguous. The maximum value of ) , : ( pertinence i j j P Y X is attained when the pair j j Y X : belongs to a cluster of highly similar pairs and the conditional probability distribution ) : ( p i k k P Y X is concentrated on the cluster. An ambiguous pattern, with its probability spread over multiple clusters, will have less pertinence. If a pattern with high pertinence is used for text mining, it will tend to produce word pairs that are very similar to the given word pair; this follows from the definition of pertinence. We believe this definition is the first formal measure of quality for text mining patterns. Let i kf , be the number of occurrences in a corpus of the word pair k k Y X : with the pattern iP . 
We could estimate ) : ( p i k k P Y X as follows:  = = n j i j i k i k k f f P Y X 1 , , ) : ( p Instead, we first estimate ) : ( p k k i Y X P :  = = m j j k i k k k i f f Y X P 1 , , ) : ( p Then we apply Bayes’ Theorem:  = ⋅ ⋅ = n j j j i j j k k i k k i k k Y X P Y X Y X P Y X P Y X 1 ) : p( ) : p( ) : p( ) : p( ) : p( We assume n Y X j j 1 ) : p( = for all pairs in W :  = = n j j j i k k i i k k Y X P Y X P P Y X 1 ) : p( ) : p( ) : p( The use of Bayes’ Theorem and the assumption that n Y X j j 1 ) : p( = for all word pairs is a way of smoothing the probability ) : ( p i k k P Y X , similar to Laplace smoothing. 3 Related Work Hearst (1992) describes a method for finding patterns like “Y such as the X”, but her method requires human judgement. Berland and Charniak (1999) use Hearst’s manual procedure. Riloff and Jones (1999) use a mutual bootstrapping technique that can find patterns automatically, but the bootstrapping requires an initial seed of manually chosen examples for each class of words. Miller et al. (2000) propose an approach to relation extraction that was evaluated in the Seventh Message Understanding Conference (MUC7). Their algorithm requires labeled examples of each relation. Similarly, Zelenko et al. (2003) use a supervised kernel method that requires labeled training examples. Agichtein and Gravano (2000) also require training examples for each relation. Brin (1998) uses bootstrapping from seed examples of author:title pairs to discover patterns for mining further pairs. Yangarber et al. (2000) and Yangarber (2003) present an algorithm that can find patterns automatically, but it requires an initial seed of manually designed patterns for each semantic relation. Stevenson (2004) uses WordNet to extract relations from text, but also requires initial seed patterns for each relation. 314 Lapata (2002) examines the task of expressing the implicit relations in nominalizations, which are noun compounds whose head noun is derived from a verb and whose modifier can be interpreted as an argument of the verb. In contrast with this work, our algorithm is not restricted to nominalizations. Section 6 shows that our algorithm works with arbitrary noun compounds and the SAT questions in Section 5 include all nine possible pairings of nouns, verbs, and adjectives. As far as we know, our algorithm is the first unsupervised learning algorithm that can find patterns for semantic relations, given only a large corpus (e.g., in our experiments, about 10 10 5× words) and a moderately sized set of word pairs (e.g., 600 or more pairs in the experiments), such that the members of each pair appear together frequently in short phrases in the corpus. These word pairs are not seeds, since the algorithm does not require the pairs to be labeled or grouped; we do not assume they are homogenous. The word pairs that we need could be generated automatically, by searching for word pairs that co-occur frequently in the corpus. However, our evaluation methods (Sections 5 and 6) both involve a predetermined list of word pairs. If our algorithm were allowed to generate its own word pairs, the overlap with the predetermined lists would likely be small. This is a limitation of our evaluation methods rather than the algorithm. Since any two word pairs may have some relations in common and some that are not shared, our algorithm generates a unique list of patterns for each input word pair. 
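Before turning to the algorithm itself (Section 4), the estimation of pertinence described in Section 2 can be summarized in a short sketch. The code below assumes a pair-by-pattern frequency matrix F and a precomputed matrix S of relational similarities between word pairs (how S is obtained is the subject of Section 4); the function name, the guard against division by zero, and the use of negative infinity to mark unobserved pair–pattern combinations are illustrative choices, not details of the authors' implementation.

```python
import numpy as np

def pertinence_scores(F, S):
    """Pertinence of every pattern for every word pair, following the
    estimation in Section 2. `F` is an (n_pairs, n_patterns) matrix of raw
    frequencies f_{k,i}; `S` is an (n_pairs, n_pairs) matrix of relational
    similarities sim_r. Returns an (n_pairs, n_patterns) matrix whose entry
    [j, i] is pertinence(X_j:Y_j, P_i). Sketch only.
    """
    F = np.asarray(F, dtype=float)
    S = np.asarray(S, dtype=float)

    # p(P_i | X_k:Y_k): normalise each pair's pattern frequencies.
    row_sums = F.sum(axis=1, keepdims=True)
    p_pattern_given_pair = F / np.maximum(row_sums, 1e-12)

    # Bayes' Theorem with a uniform prior p(X_k:Y_k) = 1/n makes
    # p(X_k:Y_k | P_i) proportional to p(P_i | X_k:Y_k).
    col_sums = p_pattern_given_pair.sum(axis=0, keepdims=True)
    p_pair_given_pattern = p_pattern_given_pair / np.maximum(col_sums, 1e-12)

    # pertinence(X_j:Y_j, P_i) = sum_k p(X_k:Y_k | P_i) * sim_r(X_k:Y_k, X_j:Y_j)
    pert = S.T @ p_pair_given_pattern             # shape (n_pairs, n_patterns)

    # Only score patterns actually observed with the pair in the corpus.
    pert[p_pattern_given_pair == 0] = -np.inf
    return pert
```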
For example, mason:stone and carpenter:wood share the pattern “X carves Y”, but the patterns “X nails Y” and “X bends Y” are unique to carpenter:wood. The ranked list of patterns for a word pair Y X : gives the relations between X and Y in the corpus, sorted with the most pertinent (i.e., characteristic, distinctive, unambiguous) relations first. Turney (2005) gives an algorithm for measuring the relational similarity between two pairs of words, called Latent Relational Analysis (LRA). This algorithm can be used to solve multiplechoice word analogy questions and to classify noun-modifier pairs (Turney, 2005), but it does not attempt to express the implicit semantic relations. Turney (2005) maps each pair Y X : to a high-dimensional vector v . The value of each element iv in v is based on the frequency, for the pair Y X : , of a corresponding pattern iP . The relational similarity between two pairs, 1 1 :Y X and 2 2 :Y X , is derived from the cosine of the angle between their two vectors. A limitation of this approach is that the semantic content of the vectors is difficult to interpret; the magnitude of an element iv is not a good indicator of how well the corresponding pattern iP expresses a relation of Y X : . This claim is supported by the experiments in Sections 5 and 6. Pertinence (as defined in Section 2) builds on the measure of relational similarity in Turney (2005), but it has the advantage that the semantic content can be interpreted; we can point to specific patterns and say that they express the implicit relations. Furthermore, we can use the patterns to find other pairs with the same relations. Hearst (1992) processed her text with a partof-speech tagger and a unification-based constituent analyzer. This makes it possible to use more general patterns. For example, instead of the literal string pattern “Y such as the X”, where X and Y are words, Hearst (1992) used the more abstract pattern “ 0 NP such as 1 NP ”, where i NP represents a noun phrase. For the sake of simplicity, we have avoided part-of-speech tagging, which limits us to literal patterns. We plan to experiment with tagging in future work. 4 The Algorithm The algorithm takes as input a set of word pairs } : , , : { 1 1 n n Y X Y X W  = and produces as output ranked lists of patterns m P P , , 1  for each input pair. The following steps are similar to the algorithm of Turney (2005), with several changes to support the calculation of pertinence. 1. Find phrases: For each pair i i Y X : , make a list of phrases in the corpus that contain the pair. We use the Waterloo MultiText System (Clarke et al., 1998) to search in a corpus of about 10 10 5× English words (Terra and Clarke, 2003). Make one list of phrases that begin with i X and end with iY and a second list for the opposite order. Each phrase must have one to three intervening words between i X and iY . The first and last words in the phrase do not need to exactly match i X and iY . The MultiText query language allows different suffixes. Veale (2004) has observed that it is easier to identify semantic relations between nouns than between other parts of speech. Therefore we use WordNet 2.0 (Miller, 1995) to guess whether i X and iY are likely to be nouns. When they are nouns, we are relatively strict about suffixes; we only allow variation in pluralization. For all other parts of speech, we are liberal about suffixes. For example, we allow an adjective such as “inflated” to match a noun such as “inflation”. 
With MultiText, the query “inflat*” matches both “inflated” and “inflation”. 2. Generate patterns: For each list of phrases, generate a list of patterns, based on the phrases. Replace the first word in each phrase with the generic marker “X” and replace the last word with “Y”. The intervening words in each phrase 315 may be either left as they are or replaced with the wildcard “*”. For example, the phrase “carpenter nails the wood” yields the patterns “X nails the Y”, “X nails * Y”, “X * the Y”, and “X * * Y”. Do not allow duplicate patterns in a list, but note the number of times a pattern is generated for each word pair i i Y X : in each order ( i X first and iY last or vice versa). We call this the pattern frequency. It is a local frequency count, analogous to term frequency in information retrieval. 3. Count pair frequency: The pair frequency for a pattern is the number of lists from the preceding step that contain the given pattern. It is a global frequency count, analogous to document frequency in information retrieval. Note that a pair i i Y X : yields two lists of phrases and hence two lists of patterns. A given pattern might appear in zero, one, or two of the lists for i i Y X : . 4. Map pairs to rows: In preparation for building a matrix X , create a mapping of word pairs to row numbers. For each pair i i Y X : , create a row for i i Y X : and another row for i i X Y : . If W does not already contain } : , , : { 1 1 n n X Y X Y  , then we have effectively doubled the number of word pairs, which increases the sample size for calculating pertinence. 5. Map patterns to columns: Create a mapping of patterns to column numbers. For each unique pattern of the form “X ... Y” from Step 2, create a column for the original pattern “X ... Y” and another column for the same pattern with X and Y swapped, “Y ... X”. Step 2 can generate millions of distinct patterns. The experiment in Section 5 results in 1,706,845 distinct patterns, yielding 3,413,690 columns. This is too many columns for matrix operations with today’s standard desktop computer. Most of the patterns have a very low pair frequency. For the experiment in Section 5, 1,371,702 of the patterns have a pair frequency of one. To keep the matrix X manageable, we drop all patterns with a pair frequency less than ten. For Section 5, this leaves 42,032 patterns, yielding 84,064 columns. Turney (2005) limited the matrix to 8,000 columns, but a larger pool of patterns is better for our purposes, since it increases the likelihood of finding good patterns for expressing the semantic relations of a given word pair. 6. Build a sparse matrix: Build a matrix X in sparse matrix format. The value for the cell in row i and column j is the pattern frequency of the j-th pattern for the the i-th word pair. 7. Calculate entropy: Apply log and entropy transformations to the sparse matrix X (Landauer and Dumais, 1997). Each cell is replaced with its logarithm, multiplied by a weight based on the negative entropy of the corresponding column vector in the matrix. This gives more weight to patterns that vary substantially in frequency for each pair. 8. Apply SVD: After log and entropy transforms, apply the Singular Value Decomposition (SVD) to X (Golub and Van Loan, 1996). SVD decomposes X into a product of three matrices T V UΣ , where U and V are in column orthonormal form (i.e., the columns are orthogonal and have unit length) and Σ is a diagonal matrix of singular values (hence SVD). If X is of rank r , then Σ is also of rank r . 
Let k Σ , where r k < , be the diagonal matrix formed from the top k singular values, and let k U and k V be the matrices produced by selecting the corresponding columns from U and V . The matrix T k k k V U Σ is the matrix of rank k that best approximates the original matrix X , in the sense that it minimizes the approximation errors (Golub and Van Loan, 1996). Following Landauer and Dumais (1997), we use 300 = k . We may think of this matrix T k k k V U Σ as a smoothed version of the original matrix. SVD is used to reduce noise and compensate for sparseness (Landauer and Dumais, 1997). 9. Calculate cosines: The relational similarity between two pairs, ) : , : ( sim 2 2 1 1 r Y X Y X , is given by the cosine of the angle between their corresponding row vectors in the matrix T k k k V U Σ (Turney, 2005). To calculate pertinence, we will need the relational similarity between all possible pairs of pairs. All of the cosines can be efficiently derived from the matrix T k k k k ) ( Σ Σ U U (Landauer and Dumais, 1997). 10. Calculate conditional probabilities: Using Bayes’ Theorem (see Section 2) and the raw frequency data in the matrix X from Step 6, before log and entropy transforms, calculate the conditional probability ) : ( p j i i P Y X for every row (word pair) and every column (pattern). 11. Calculate pertinence: With the cosines from Step 9 and the conditional probabilities from Step 10, calculate ) , : ( pertinence j i i P Y X for every row i i Y X : and every column j P for which 0 ) : ( p > j i i P Y X . When 0 ) : ( p = j i i P Y X , it is possible that 0 ) , : ( pertinence > j i i P Y X , but we avoid calculating pertinence in these cases for two reasons. First, it speeds computation, because X is sparse, so 0 ) : ( p = j i i P Y X for most rows and columns. Second, 0 ) : ( p = j i i P Y X implies that the pattern j P does not actually appear with the word pair i i Y X : in the corpus; we are only guessing that the pattern is appropriate for the word pair, and we could be wrong. Therefore we prefer to limit ourselves to patterns and word pairs that have actually been observed in the corpus. For each pair i i Y X : in W, output two separate ranked lists, one for patterns of the form “X … Y” and another for patterns of the form 316 “Y … X”, where the patterns in both lists are sorted in order of decreasing pertinence to i i Y X : . Ranking serves as a kind of normalization. We have found that the relative rank of a pattern is more reliable as an indicator of its importance than the absolute pertinence. This is analogous to information retrieval, where documents are ranked in order of their relevance to a query. The relative rank of a document is more important than its actual numerical score (which is usually hidden from the user of a search engine). Having two separate ranked lists helps to avoid bias. For example, ostrich:bird generates 516 patterns of the form “X ... Y” and 452 patterns of the form “Y ... X”. Since there are more patterns of the form “X ... Y”, there is a slight bias towards these patterns. If the two lists were merged, the “Y ... X” patterns would be at a disadvantage. 5 Experiments with Word Analogies In these experiments, we evaluate pertinence using 374 college-level multiple-choice word analogies, taken from the SAT test. For each question, there is a target word pair, called the stem pair, and five choice pairs. The task is to find the choice that is most analogous (i.e., has the highest relational similarity) to the stem. 
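Steps 7–9 can be illustrated with the following sketch, which computes the relational similarity matrix from the pair-by-pattern frequency matrix. For clarity it operates on a dense array and a full SVD; an implementation at the scale described above would use a sparse matrix and a truncated SVD routine (e.g., scipy.sparse.linalg.svds), and the exact form of the entropy weighting here is an assumption rather than a reproduction of the authors' code.

```python
import numpy as np

def relational_similarities(X, k=300):
    """Sketch of Steps 7-9: log/entropy weighting, rank-k SVD approximation,
    and pairwise cosines between row vectors as relational similarities.
    `X` is the pair-by-pattern frequency matrix from Step 6.
    """
    X = np.asarray(X, dtype=float)
    n_pairs = X.shape[0]

    # Step 7: log transform, weighted by one minus the normalised entropy
    # of each column (patterns with skewed frequency get more weight).
    logX = np.log(X + 1.0)
    p = X / np.maximum(X.sum(axis=0, keepdims=True), 1e-12)
    with np.errstate(divide="ignore", invalid="ignore"):
        ent = -np.nansum(p * np.log(p), axis=0) / np.log(max(n_pairs, 2))
    W = logX * (1.0 - ent)

    # Step 8: keep the top k singular values (k = 300 in the paper).
    U, s, Vt = np.linalg.svd(W, full_matrices=False)
    k = min(k, len(s))
    rows = U[:, :k] * s[:k]                       # rows of U_k * Sigma_k

    # Step 9: cosine of the angle between row vectors.
    norms = np.linalg.norm(rows, axis=1, keepdims=True)
    rows = rows / np.maximum(norms, 1e-12)
    return rows @ rows.T                          # (n_pairs, n_pairs) cosines
```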
This choice pair is called the solution and the other choices are distractors. Since there are six word pairs per question (the stem and the five choices), there are 2244 6 374 = × pairs in the input set W. In Step 4 of the algorithm, we double the pairs, but we also drop some pairs because they do not co-occur in the corpus. This leaves us with 4194 rows in the matrix. As mentioned in Step 5, the matrix has 84,064 columns (patterns). The sparse matrix density is 0.91%. To answer a SAT question, we generate ranked lists of patterns for each of the six word pairs. Each choice is evaluated by taking the intersection of its patterns with the stem’s patterns. The shared patterns are scored by the average of their rank in the stem’s lists and the choice’s lists. Since the lists are sorted in order of decreasing pertinence, a low score means a high pertinence. Our guess is the choice with the lowest scoring shared pattern. Table 1 shows three examples, two questions that are answered correctly followed by one that is answered incorrectly. The correct answers are in bold font. For the first question, the stem is ostrich:bird and the best choice is (a) lion:cat. The highest ranking pattern that is shared by both of these pairs is “Y such as the X”. The third question illustrates that, even when the answer is incorrect, the best shared pattern (“Y powered * * X”) may be plausible. Word pair Best shared pattern Score 1. ostrich:bird (a) lion:cat “Y such as the X” 1.0 (b) goose:flock “X * * breeding Y” 43.5 (c) ewe:sheep “X are the only Y” 13.5 (d) cub:bear “Y are called X” 29.0 (e) primate:monkey “Y is the * X” 80.0 2. traffic:street (a) ship:gangplank “X * down the Y” 53.0 (b) crop:harvest “X * adjacent * Y” 248.0 (c) car:garage “X * a residential Y” 63.0 (d) pedestrians:feet “Y * accommodate X” 23.0 (e) water:riverbed “Y that carry X” 17.0 3. locomotive:train (a) horse:saddle “X carrying * Y” 82.0 (b) tractor:plow “X pulled * Y” 7.0 (c) rudder:rowboat “Y * X” 319.0 (d) camel:desert “Y with two X” 43.0 (e) gasoline:automobile “Y powered * * X” 5.0 Table 1. Three examples of SAT questions. Table 2 shows the four highest ranking patterns for the stem and solution for the first example. The pattern “X lion Y” is anomalous, but the other patterns seem reasonable. The shared pattern “Y such as the X” is ranked 1 for both pairs, hence the average score for this pattern is 1.0, as shown in Table 1. Note that the “ostrich is the largest bird” and “lions are large cats”, but the largest cat is the Siberian tiger. Word pair “X ... Y” “Y ... X” ostrich:bird “X is the largest Y” “Y such as the X” “X is * largest Y” “Y such * the X” lion:cat “X lion Y” “Y such as the X” “X are large Y” “Y and mountain X” Table 2. The highest ranking patterns. Table 3 lists the top five pairs in W that match the pattern “Y such as the X”. The pairs are sorted by ) : ( p P Y X . The pattern “Y such as the X” is one of 146 patterns that are shared by ostrich:bird and lion:cat. Most of these shared patterns are not very informative. Word pair Conditional probability heart:organ 0.49342 dodo:bird 0.08888 elbow:joint 0.06385 ostrich:bird 0.05774 semaphore:signal 0.03741 Table 3. The top five pairs for “Y such as the X”. In Table 4, we compare ranking patterns by pertinence to ranking by various other measures, mostly based on varieties of tf-idf (term frequency times inverse document frequency, a common way to rank documents in information retrieval). The tf-idf measures are taken from Salton and Buckley (1988). 
For comparison, we also include three algorithms that do not rank 317 patterns (the bottom three rows in the table). These three algorithms can answer the SAT questions, but they do not provide any kind of explanation for their answers. Algorithm Prec. Rec. F 1 pertinence (Step 11) 55.7 53.5 54.6 2 log and entropy matrix (Step 7) 43.5 41.7 42.6 3 TF = f, IDF = log((N-n)/n) 43.2 41.4 42.3 4 TF = log(f+1), IDF = log(N/n) 42.9 41.2 42.0 5 TF = f, IDF = log(N/n) 42.9 41.2 42.0 6 TF = log(f+1), IDF = log((N-n)/n) 42.3 40.6 41.4 7 TF = 1.0, IDF = 1/n 41.5 39.8 40.6 8 TF = f, IDF = 1/n 41.5 39.8 40.6 9 TF = 0.5 + 0.5 * (f/F), IDF = log(N/n) 41.5 39.8 40.6 10 TF = log(f+1), IDF = 1/n 41.2 39.6 40.4 11 p(X:Y|P) (Step 10) 39.8 38.2 39.0 12 SVD matrix (Step 8) 35.9 34.5 35.2 13 random 27.0 25.9 26.4 14 TF = 1/f, IDF = 1.0 26.7 25.7 26.2 15 TF = f, IDF = 1.0 (Step 6) 18.1 17.4 17.7 16 Turney (2005) 56.8 56.1 56.4 17 Turney and Littman (2005) 47.7 47.1 47.4 18 Veale (2004) 42.8 42.8 42.8 Table 4. Performance of various algorithms on SAT. All of the pattern ranking algorithms are given exactly the same sets of patterns to rank. Any differences in performance are due to the ranking method alone. The algorithms may skip questions when the word pairs do not co-occur in the corpus. All of the ranking algorithms skip the same set of 15 of the 374 SAT questions. Precision is defined as the percentage of correct answers out of the questions that were answered (not skipped). Recall is the percentage of correct answers out of the maximum possible number correct (374). The F measure is the harmonic mean of precision and recall. For the tf-idf methods in Table 4, f is the pattern frequency, n is the pair frequency, F is the maximum f for all patterns for the given word pair, and N is the total number of word pairs. By “TF = f, IDF = n / 1 ”, for example (row 8), we mean that f plays a role that is analogous to term frequency and n / 1 plays a role that is analogous to inverse document frequency. That is, in row 8, the patterns are ranked in decreasing order of pattern frequency divided by pair frequency. Table 4 also shows some ranking methods based on intermediate calculations in the algorithm in Section 4. For example, row 2 in Table 4 gives the results when patterns are ranked in order of decreasing values in the corresponding cells of the matrix X from Step 7. Row 12 in Table 4 shows the results we would get using Latent Relational Analysis (Turney, 2005) to rank patterns. The results in row 12 support the claim made in Section 3, that LRA is not suitable for ranking patterns, although it works well for answering the SAT questions (as we see in row 16). The vectors in LRA yield a good measure of relational similarity, but the magnitude of the value of a specific element in a vector is not a good indicator of the quality of the corresponding pattern. The best method for ranking patterns is pertinence (row 1 in Table 4). As a point of comparison, the performance of the average senior highschool student on the SAT analogies is about 57% (Turney and Littman, 2005). The second best method is to use the values in the matrix X after the log and entropy transformations in Step 7 (row 2). The difference between these two methods is statistically significant with 95% confidence. Pertinence (row 1) performs slightly below Latent Relational Analysis (row 16; Turney, 2005), but the difference is not significant. 
Randomly guessing answers should yield an F of 20% (1 out of 5 choices), but ranking patterns randomly (row 13) results in an F of 26.4%. This is because the stem pair tends to share more patterns with the solution pair than with the distractors. The minimum of a large set of random numbers is likely to be lower than the minimum of a small set of random numbers. 6 Experiments with Noun-Modifiers In these experiments, we evaluate pertinence on the task of classifying noun-modifier pairs. The problem is to classify a noun-modifier pair, such as “flu virus”, according to the semantic relation between the head noun (virus) and the modifier (flu). For example, “flu virus” is classified as a causality relation (the flu is caused by a virus). For these experiments, we use a set of 600 manually labeled noun-modifier pairs (Nastase and Szpakowicz, 2003). There are five general classes of labels with thirty subclasses. We present here the results with five classes; the results with thirty subclasses follow the same trends (that is, pertinence performs significantly better than the other ranking methods). The five classes are causality (storm cloud), temporality (daily exercise), spatial (desert storm), participant (student protest), and quality (expensive book). The input set W consists of the 600 nounmodifier pairs. This set is doubled in Step 4, but we drop some pairs because they do not co-occur in the corpus, leaving us with 1184 rows in the matrix. There are 16,849 distinct patterns with a pair frequency of ten or more, resulting in 33,698 columns. The matrix density is 2.57%. 318 To classify a noun-modifier pair, we use a single nearest neighbour algorithm with leave-oneout cross-validation. We split the set 600 times. Each pair gets a turn as the single testing example, while the other 599 pairs serve as training examples. The testing example is classified according to the label of its nearest neighbour in the training set. The distance between two nounmodifier pairs is measured by the average rank of their best shared pattern. Table 5 shows the resulting precision, recall, and F, when ranking patterns by pertinence. Class name Prec. Rec. F Class size causality 37.3 36.0 36.7 86 participant 61.1 64.4 62.7 260 quality 49.3 50.7 50.0 146 spatial 43.9 32.7 37.5 56 temporality 64.7 63.5 64.1 52 all 51.3 49.5 50.2 600 Table 5. Performance on noun-modifiers. To gain some insight into the algorithm, we examined the 600 best shared patterns for each pair and its single nearest neighbour. For each of the five classes, Table 6 lists the most frequent pattern among the best shared patterns for the given class. All of these patterns seem appropriate for their respective classes. Class Most frequent pattern Example pair causality “Y * causes X” “cold virus” participant “Y of his X” “dream analysis” quality “Y made of X” “copper coin” spatial “X * * terrestrial Y” “aquatic mammal” temporality “Y in * early X” “morning frost” Table 6. Most frequent of the best shared patterns. Table 7 gives the performance of pertinence on the noun-modifier problem, compared to various other pattern ranking methods. The bottom two rows are included for comparison; they are not pattern ranking algorithms. The best method for ranking patterns is pertinence (row 1 in Table 7). The difference between pertinence and the second best ranking method (row 2) is statistically significant with 95% confidence. 
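The nearest-neighbour classification scheme used in this section can be sketched as follows. For simplicity the code assumes a single ranked pattern list per pair (the full system keeps two lists, one per argument order) and measures the distance between two pairs as the average rank of their best shared pattern; all names are illustrative.

```python
def loo_nearest_neighbour_accuracy(ranked_patterns, labels):
    """Leave-one-out single-nearest-neighbour classification of
    noun-modifier pairs. `ranked_patterns[i]` is the ranked pattern list
    for pair i (most pertinent first); `labels[i]` is its gold class.
    Sketch only.
    """
    # Map each pattern to its 1-based rank for every pair.
    ranks = [{p: r for r, p in enumerate(pats, start=1)}
             for pats in ranked_patterns]

    def distance(i, j):
        shared = ranks[i].keys() & ranks[j].keys()
        if not shared:
            return float("inf")                   # no shared pattern at all
        # Average rank of the best (lowest-scoring) shared pattern.
        return min((ranks[i][p] + ranks[j][p]) / 2.0 for p in shared)

    correct = 0
    for i in range(len(labels)):
        neighbours = [j for j in range(len(labels)) if j != i]
        nearest = min(neighbours, key=lambda j: distance(i, j))
        correct += labels[nearest] == labels[i]
    return correct / len(labels)
```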
Latent Relational Analysis (row 16) performs slightly better than pertinence (row 1), but the difference is not statistically significant. Row 6 in Table 7 shows the results we would get using Latent Relational Analysis (Turney, 2005) to rank patterns. Again, the results support the claim in Section 3, that LRA is not suitable for ranking patterns. LRA can classify the nounmodifiers (as we see in row 16), but it cannot express the implicit semantic relations that make an unlabeled noun-modifier in the testing set similar to its nearest neighbour in the training set. Algorithm Prec. Rec. F 1 pertinence (Step 11) 51.3 49.5 50.2 2 TF = log(f+1), IDF = 1/n 37.4 36.5 36.9 3 TF = log(f+1), IDF = log(N/n) 36.5 36.0 36.2 4 TF = log(f+1), IDF = log((N-n)/n) 36.0 35.4 35.7 5 TF = f, IDF = log((N-n)/n) 36.0 35.3 35.6 6 SVD matrix (Step 8) 43.9 33.4 34.8 7 TF = f, IDF = 1/n 35.4 33.6 34.3 8 log and entropy matrix (Step 7) 35.6 33.3 34.1 9 TF = f, IDF = log(N/n) 34.1 31.4 32.2 10 TF = 0.5 + 0.5 * (f/F), IDF = log(N/n) 31.9 31.7 31.6 11 p(X:Y|P) (Step 10) 31.8 30.8 31.2 12 TF = 1.0, IDF = 1/n 29.2 28.8 28.7 13 random 19.4 19.3 19.2 14 TF = 1/f, IDF = 1.0 20.3 20.7 19.2 15 TF = f, IDF = 1.0 (Step 6) 12.8 19.7 8.0 16 Turney (2005) 55.9 53.6 54.6 17 Turney and Littman (2005) 43.4 43.1 43.2 Table 7. Performance on noun-modifiers. 7 Discussion Computing pertinence took about 18 hours for the experiments in Section 5 and 9 hours for Section 6. In both cases, the majority of the time was spent in Step 1, using MultiText (Clarke et al., 1998) to search through the corpus of 10 10 5× words. MultiText was running on a Beowulf cluster with sixteen 2.4 GHz Intel Xeon CPUs. The corpus and the search index require about one terabyte of disk space. This may seem computationally demanding by today’s standards, but progress in hardware will soon allow an average desktop computer to handle corpora of this size. Although the performance on the SAT analogy questions (54.6%) is near the level of the average senior highschool student (57%), there is room for improvement. For applications such as building a thesaurus, lexicon, or ontology, this level of performance suggests that our algorithm could assist, but not replace, a human expert. One possible improvement would be to add part-of-speech tagging or parsing. We have done some preliminary experiments with parsing and plan to explore tagging as well. A difficulty is that much of the text in our corpus does not consist of properly formed sentences, since the text comes from web pages. This poses problems for most part-of-speech taggers and parsers. 8 Conclusion Latent Relational Analysis (Turney, 2005) provides a way to measure the relational similarity between two word pairs, but it gives us little insight into how the two pairs are similar. In effect, 319 LRA is a black box. The main contribution of this paper is the idea of pertinence, which allows us to take an opaque measure of relational similarity and use it to find patterns that express the implicit semantic relations between two words. The experiments in Sections 5 and 6 show that ranking patterns by pertinence is superior to ranking them by a variety of tf-idf methods. On the word analogy and noun-modifier tasks, pertinence performs as well as the state-of-the-art, LRA, but pertinence goes beyond LRA by making relations explicit. Acknowledgements Thanks to Joel Martin, David Nadeau, and Deniz Yuret for helpful comments and suggestions. References Eugene Agichtein and Luis Gravano. 2000. 
Snowball: Extracting relations from large plain-text collections. In Proceedings of the Fifth ACM Conference on Digital Libraries (ACM DL 2000), pages 85-94. Matthew Berland and Eugene Charniak. 1999. Finding parts in very large corpora. In Proceedings of the 37th Annual Meeting of the Association for Computational Linguistics (ACL-99), pages 57-64. Sergey Brin. 1998. Extracting patterns and relations from the World Wide Web. In WebDB Workshop at the 6th International Conference on Extending Database Technology (EDBT-98), pages 172-183. Charles L.A. Clarke, Gordon V. Cormack, and Christopher R. Palmer. 1998. An overview of MultiText. ACM SIGIR Forum, 32(2):14-15. Gene H. Golub and Charles F. Van Loan. 1996. Matrix Computations. Third edition. Johns Hopkins University Press, Baltimore, MD. Marti A. Hearst. 1992. Automatic acquisition of hyponyms from large text corpora. In Proceedings of the 14th International Conference on Computational Linguistics (COLING-92), pages 539-545. Thomas K. Landauer and Susan T. Dumais. 1997. A solution to Plato’s problem: The latent semantic analysis theory of the acquisition, induction, and representation of knowledge. Psychological Review, 104(2):211-240. Maria Lapata. 2002. The disambiguation of nominalisations. Computational Linguistics, 28(3):357-388. George A. Miller. 1995. WordNet: A lexical database for English. Communications of the ACM, 38(11):39-41. Scott Miller, Heidi Fox, Lance Ramshaw, and Ralph Weischedel. 2000. A novel use of statistical parsing to extract information from text. In Proceedings of the Sixth Applied Natural Language Processing Conference (ANLP 2000), pages 226-233. Vivi Nastase and Stan Szpakowicz. 2003. Exploring noun-modifier semantic relations. In Fifth International Workshop on Computational Semantics (IWCS-5), pages 285-301. Ellen Riloff and Rosie Jones. 1999. Learning dictionaries for information extraction by multi-level bootstrapping. In Proceedings of the 16th National Conference on Artificial Intelligence (AAAI-99), pages 474-479. Gerard Salton and Chris Buckley. 1988. Term weighting approaches in automatic text retrieval. Information Processing and Management, 24(5):513-523. Mark Stevenson. 2004. An unsupervised WordNetbased algorithm for relation extraction. In Proceedings of the Fourth International Conference on Language Resources and Evaluation (LREC) Workshop, Beyond Named Entity Recognition: Semantic Labelling for NLP Tasks, Lisbon, Portugal. Egidio Terra and Charles L.A. Clarke. 2003. Frequency estimates for statistical word similarity measures. In Proceedings of the Human Language Technology and North American Chapter of Association of Computational Linguistics Conference (HLT/NAACL-03), pages 244-251. Peter D. Turney. 2005. Measuring semantic similarity by latent relational analysis. In Proceedings of the Nineteenth International Joint Conference on Artificial Intelligence (IJCAI-05), pages 1136-1141. Peter D. Turney and Michael L. Littman. 2005. Corpus-based learning of analogies and semantic relations. Machine Learning, 60(1-3):251-278. Tony Veale. 2004. WordNet sits the SAT: A knowledge-based approach to lexical analogy. In Proceedings of the 16th European Conference on Artificial Intelligence (ECAI 2004), pages 606-612. Roman Yangarber, Ralph Grishman, Pasi Tapanainen, and Silja Huttunen. 2000. Unsupervised discovery of scenario-level patterns for information extraction. In Proceedings of the Sixth Applied Natural Language Processing Conference (ANLP 2000), pages 282-289. Roman Yangarber. 2003. 
Counter-training in discovery of semantic patterns. In Proceedings of the 41st Annual Meeting of the Association for Computational Linguistics (ACL-03), pages 343-350. Dmitry Zelenko, Chinatsu Aone, and Anthony Richardella. 2003. Kernel methods for relation extraction. Journal of Machine Learning Research, 3:1083-1106. 320
Proceedings of the 21st International Conference on Computational Linguistics and 44th Annual Meeting of the ACL, pages 321–328, Sydney, July 2006. c⃝2006 Association for Computational Linguistics Hybrid Parsing: Using Probabilistic Models as Predictors for a Symbolic Parser Kilian A. Foth, Wolfgang Menzel Department of Informatics Universit¨at Hamburg, Germany {foth|menzel}@informatik.uni-hamburg.de Abstract In this paper we investigate the benefit of stochastic predictor components for the parsing quality which can be obtained with a rule-based dependency grammar. By including a chunker, a supertagger, a PP attacher, and a fast probabilistic parser we were able to improve upon the baseline by 3.2%, bringing the overall labelled accuracy to 91.1% on the German NEGRA corpus. We attribute the successful integration to the ability of the underlying grammar model to combine uncertain evidence in a soft manner, thus avoiding the problem of error propagation. 1 Introduction There seems to be an upper limit for the level of quality that can be achieved by a parser if it is confined to information drawn from a single source. Stochastic parsers for English trained on the Penn Treebank have peaked their performance around 90% (Charniak, 2000). Parsing of German seems to be even harder and parsers trained on the NEGRA corpus or an enriched version of it still perform considerably worse. On the other hand, a great number of shallow components like taggers, chunkers, supertaggers, as well as general or specialized attachment predictors have been developed that might provide additional information to further improve the quality of a parser’s output, as long as their contributions are in some sense complementory. Despite these prospects, such possibilities have rarely been investigated so far. To estimate the degree to which the desired synergy between heterogeneous knowledge sources can be achieved, we have established an experimental framework for syntactic analysis which allows us to plug in a wide variety of external predictor components, and to integrate their contributions as additional evidence in the general decision-making on the optimal structural interpretation. We refer to this approach as hybrid parsing because it combines different kinds of linguistic models, which have been acquired in totally different ways, ranging from manually compiled rule sets to statistically trained components. In this paper we investigate the benefit of external predictor components for the parsing quality which can be obtained with a rule-based grammar. For that purpose we trained a range of predictor components and integrated their output into the parser by means of soft constraints. Accordingly, the goal of our research was not to extensively optimize the predictor components themselves, but to quantify their contribution to the overall parsing quality. The results of these experiments not only lead to a better understanding of the utility of the different knowledge sources, but also allow us to derive empirically based priorities for further improving them. We are able to show that the potential of WCDG for information fusion is strong enough to accomodate even rather unreliable information from a wide range of predictor components. Using this potential we were able to reach a quality level for dependency parsing German which is unprecendented so far. 2 Hybrid Parsing A hybridization seems advantageous even among purely stochastic models. 
Depending on their degree of sophistication, they can and must be trained on quite different kinds of data collections, which due to the necessary annotation effort are available in vastly different amounts: While training a probabilistic parser or a supertagger usually 321 requires a fully developed tree bank, in the case of taggers or chunkers a much more shallow and less expensive annotation suffices. Using a set of rather simple heuristics, a PP-attacher can even be trained on huge amounts of plain text. Another reason for considering hybrid approaches is the influence that contextual factors might exert on the process of determining the most plausible sentence interpretation. Since this influence is dynamically changing with the environment, it can hardly be captured from available corpus data at all. To gain a benefit from such contextual cues, e.g. in a dialogue system, requires to integrate yet another kind of external information. Unfortunately, stochastic predictor components are usually not perfect, at best producing preferences and guiding hints instead of reliable certainties. Integrating a number of them into a single systems poses the problem of error propagation. Whenever one component decides on the input of another, the subsequent one will most probably fail whenever the decision was wrong; if not, the erroneous information was not crucial anyhow. Dubey (2005) reported how serious this problem can be when he coupled a tagger with a subsequent parser, and noted that tagging errors are by far the most important source of parsing errors. As soon as more than two components are involved, the combination of different error sources migth easily lead to a substantial decrease of the overall quality instead of achieving the desired synergy. Moreover, the likelihood of conflicting contributions will rise tremendously the more predictor components are involved. Therefore, it is far from obvious that additional information always helps. Certainly, a processing regime is needed which can deal with conflicting information by taking its reliability (or relative strength) into account. Such a preference-based decision procedure would then allow stronger valued evidence to override weaker one. 3 WCDG An architecture which fulfills this requirement is Weighted Constraint Dependency Grammar, which was based on a model originally proposed by Maruyama (1990) and later extended with weights (Schr¨oder, 2002). A WCDG models natural language as labelled dependency trees on words, with no intermediate constituents assumed. It is entirely declarative: it only contains rules (called constraints) that explicitly describe the properties of well-formed trees, but no derivation rules. For instance, a constraint can state that determiners must precede their regents, or that there cannot be two determiners for the same regent, or that a determiner and its regent must agree in number, or that a countable noun must have a determiner. Further details can be found in (Foth, 2004). There is only a trivial generator component which enumerates all possible combinations of labelled word-to-word subordinations; among these any combination that satisfies the constraints is considered a correct analysis. Constraints on trees can be hard or soft. Of the examples above, the first two should probably be considered hard, but the last two could be made defeasible, particularly if a robust coverage of potentially faulty input is desired. 
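To make the declarative character of such constraints concrete, the fragment below encodes the four example rules as weighted predicates over dependency edges. This is only an illustrative sketch in Python: the Word and Edge records, the predicate signatures, and the particular weights are invented for the example and are not WCDG's actual constraint language. Weight 0.0 is used here to mark a hard constraint, anticipating the multiplicative scoring scheme described next.

```python
from dataclasses import dataclass
from typing import Optional

@dataclass
class Word:
    form: str
    cat: str                    # part of speech, e.g. "DET", "NN"
    num: Optional[str] = None   # grammatical number, e.g. "sg" / "pl"
    pos: int = 0                # linear position in the sentence

@dataclass
class Edge:
    dep: Word                   # the subordinated word
    regent: Optional[Word]      # None for the root
    label: str

# Each constraint is a (name, predicate over one edge and the full analysis, weight)
# triple.  Weight 0.0 marks a hard constraint; weights below 1.0 are dispreferences.
CONSTRAINTS = [
    ("det_precedes_regent",
     lambda e, tree: e.dep.cat != "DET"
         or (e.regent is not None and e.dep.pos < e.regent.pos),
     0.0),
    ("at_most_one_det_per_regent",
     lambda e, tree: e.dep.cat != "DET" or sum(
         1 for o in tree if o.regent is e.regent and o.dep.cat == "DET") <= 1,
     0.0),
    ("det_regent_number_agreement",
     lambda e, tree: e.dep.cat != "DET"
         or (e.regent is not None and e.dep.num == e.regent.num),
     0.1),
    ("countable_noun_has_det",
     lambda e, tree: e.dep.cat != "NN" or any(
         o.regent is e.dep and o.dep.cat == "DET" for o in tree),
     0.2),
]
```

Representing constraints as uniform (name, predicate, weight) triples keeps hard rules and soft preferences in a single inventory, which is exactly the property the scoring mechanism described next relies on.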
When two alternative analyses of the same input violate different constraints, the one that satisfies the more important constraint should be preferred. WCDG ensures this by assigning every analysis a score that is the product of the weights of all instances of constraint failures. Parsing tries to retrieve the analysis with the highest score. The weight of a constraint is usually determined by the grammar writer as it is formulated. Rules whose violation would produce nonsensical structures are usually made hard, while rules that enforce preferred but not required properties receive less weight. Obviously this classification depends on the purpose of a parsing system; a prescriptive language definition would enforce grammatical principles such as agreement with hard constraints, while a robust grammar must allow violations but disprefer them via soft constraints. In practice, the precise weight of a constraint is not particularly important as long as the relative importance of two rules is clearly reflected in their weights (for instance, a misinflected determiner is a language error, but probably a less severe one than duplicate determiners). There have been attempts to compute the weights of a WCDG automatically by observing which weight vectors perform best on a given corpus (Schr¨oder et al., 2001), but weights computed completely automatically failed to improve on the original, handscored grammar. Weighted constraints provide an ideal interface to integrate arbitrary predictor components in a soft manner. Thus, external predictions are treated 322 the same way as grammar-internal preferences, e.g. on word order or distance. In contrast to a filtering approach such a strong integration does not blindly rely on the available predictions but is able to question them as long as there is strong enough combined evidence from the grammar and the other predictor components. For our investigations, we used the reference implementation of WCDG available from http://nats-www.informatik. uni-hamburg.de/download, which allows constraints to express any formalizable property of a dependency tree. This great expressiveness has the disadvantage that the parsing problem becomes NP-complete and cannot be solved efficiently. However, good success has been achieved with transformation-based solution methods that start out with an educated guess about the optimal tree and use constraint failures as cues where to change labels, subordinations, or lexical readings. As an example we show intermediate and final analyses of a sentence from our test set (negra-s18959): ‘Hier kletterte die Marke von 420 auf 570 Mark.’ (Here the figure rose from 420 to 570 DM). SUBJ PN PP PN PP OBJA DET S ADV hier kletterte die Marke von 420 auf 570 Mark . In the first analysis, subject and object relations are analysed wrongly, and the noun phrase ‘570 Mark’ has not been recognized. The analysis is imperfect because the common noun ‘Mark’ lacks a Determiner. PN ATTR PP PN PP SUBJ DET S ADV hier kletterte die Marke von 420 auf 570 Mark . The final analysis correctly takes ‘570 Mark’ as the kernel of the last preposition, and ‘Marke’ as the subject. Altogether, three dependency edges had to be changed to arrive at this solution. Figure 1 shows the pseudocode of the best solution algorithm for WCDG described so far (Foth et al., 2000). Although it cannot guarantee to find the best solution to the constraint satisfaction problem, it requires only limited space and can be interrupted at any time and still returns a solution. 
If not interrupted, the algorithm terminates when A := the set of levels of analysis W:= the set of all lexical readings of words in the sentence L := the set of defined dependency labels E := A × W × W × L = the base set of dependency edges D := A × W = the set of domains da,w of all constraint variables B := ∅= the best analysis found C := ∅= the current analysis { Create the search space. } for e ∈E if eval(e) > 0 then da,w := da,w ∪{e} { Build initial analysis. } for da,w ∈D e0 = arg max e∈da,w score(C ∪{e}) C := C ∪{e0} B := C T := ∅= tabu set of conflicts removed so far. U := ∅= set of unremovable conflicts. i := the penalty threshold above which conflicts are ignored. n := 0 { Remove conflicts. } while ∃c ∈eval(C) \ U : penalty(c) > i and no interruption occurred { Determine which conflict to resolve. } cn := arg max c∈eval(C)\U penalty(c) T := T ∪{c} { Find the best resolution set. } Rn := arg max R ∈× domains(cn) score(replace(C, R)) where replace(C, R) does not cause any c ∈T and |R \ C| <= 2 if no Rn can be found { Consider c0 unremovable. } n := 0, C := B, T := ∅, U := U ∪{c0} else { Take a step. } n := n + 1, C := replace(C, Rn) if score(C) > score(B) n := 0, B := C, T := ∅, U := U ∩eval(C) return B Figure 1: Basic algorithm for heuristic transformational search. no constraints with a weight less than a predefined threshold are violated. In contrast, a complete search usually requires more time and space than available, and often fails to return a usable result at all. All experiments described in this paper were conducted with the transformational search. For our investigation we use a comprehensive grammar of German expressed in about 1,000 constraints (Foth et al., 2005). It is intended to cover modern German completely and to be ro323 bust against many kinds of language error. A large WCDG such as this that is written entirely by hand can describe natural language with great precision, but at the price of very great effort for the grammar writer. Also, because many incorrect analyses are allowed, the space of possible trees becomes even larger than it would be for a prescriptive grammar. 4 Predictor components Many rules of a language have the character of general preferences so weak that they are easily overlooked even by a language expert; for instance, the ordering of elements in the German mittelfeld is subject to several types of preference rules. Other regularities depend crucially on the lexical identity of the words concerned; modelling these fully would require the writing of a specific constraint for each word, which is all but infeasible. Empirically obtained information about the behaviour of a language would be welcome in such cases where manual constraints are not obvious or would require too much effort. This has already been demonstrated for the case of part-of-speech tagging: because contextual cues are very effective in determining the categories of ambiguous words, purely stochastical models can achieve a high accuracy. (Hagenstr¨om and Foth, 2002) show that the TnT tagger (Brants, 2000) can be profitably integrated into WCDG parsing: A constraint that prefers analyses which conform to TnT’s category predictions can greatly reduce the number of spurious readings of lexically ambiguous words. Due to the soft integration of the tagger, though, the parser is not forced to accept its predictions unchallenged, but can override them if the wider syntactic context suggests this. 
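The pseudocode of Figure 1 is easier to digest next to a stripped-down version. The following Python sketch keeps only two ingredients, the multiplicative score over violated constraint instances and a greedy single-edge repair loop. It is a deliberate simplification rather than a re-implementation: the tabu set, the penalty threshold, the multi-edge resolution sets, and the anytime behaviour of the real algorithm are all omitted, and the candidate_edges generator merely stands in for WCDG's generator component.

```python
def score(tree, constraints):
    """Score of an analysis: product of the weights of all constraint violations.

    tree is a list of dependency edges; constraints is a list of
    (name, predicate, weight) triples as in the sketch above.  A flawless
    analysis scores 1.0; a violated hard constraint (weight 0.0) zeroes it.
    """
    s = 1.0
    for edge in tree:
        for _name, holds, weight in constraints:
            if not holds(edge, tree):
                s *= weight
    return s


def repair(tree, constraints, candidate_edges, max_rounds=100):
    """Greedy transformation-based repair (a strong simplification of Figure 1).

    candidate_edges(edge) is assumed to enumerate alternative subordinations or
    labels for an edge; providing it is the part this sketch leaves out.
    Without a tabu list the search simply stops at the first local optimum.
    """
    best = list(tree)
    best_score = score(best, constraints)
    for _ in range(max_rounds):
        improved = False
        for i in range(len(best)):
            for alt in candidate_edges(best[i]):
                trial = best[:i] + [alt] + best[i + 1:]
                trial_score = score(trial, constraints)
                if trial_score > best_score:
                    best, best_score, improved = trial, trial_score, True
        if not improved:
            break
    return best
```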
In our experiments (line 1 in Table 1) this happens 75 times; 52 of these cases were actual errors committed by the tagger. These advantages taken together made the tagger the by far most valuable information source, whithout which the analysis of arbitrary input would not be feasible at all. Therefore, we use this component (POS) in all subsequent experiments. Starting from this observation, we extended the idea to integrate several other external components that predict particular aspects of syntax analyses. Where possible, we re-used publicly available components to make the predictions rather than construct the best predictors possible; it is likely that better predictors could be found, but components ‘off the shelf’ or written in the simplest workable way proved enough to demonstrate a positive benefit of the technique in each case. For the task of predicting the boundaries of major constituents in a sentence (chunk parsing, CP), we used the decision tree model TreeTagger (Schmid, 1994), which was trained on articles from Stuttgarter Zeitung. The noun, verb and prepositional chunk boundaries that it predicts are fed into a constraint which requires all chunk heads to be attached outside the current chunk, and all other words within it. Obviously such information can greatly reduce the number of structural alternatives that have to be considered during parsing. On our test set, the TreeTagger achieves a precision of 88.0% and a recall of 89.5%. Models for category disambiguation can easily be extended to predict not only the syntactic category, but also the local syntactic environment of each word (supertagging). Supertags have been successfully applied to guide parsing in symbolic frameworks such as Lexicalised Tree-Adjoning grammar (Bangalore and Joshi, 1999). To obtain and evaluate supertag predictions, we re-trained the TnT Tagger on the combined NEGRA and TIGER treebanks (1997; 2002). Putting aside the standard NEGRA test set, this amounts to 59,622 sentences with 1,032,091 words as training data. For each word in the training set, the local context was extracted and encoded into a linear representation. The output of the retrained TnT then predicts the label of each word, whether it follows or precedes its regent, and what other types of relations are found below it. Each of these predictions is fed into a constraint which weakly prefers dependencies that do not violate the respective prediction (ST). Due to the high number of 12947 supertags in the maximally detailed model, the accuracy of the supertagger for complete supertags is as low as 67.6%. Considering that a detailed supertag corresponds to several distinct predictions (about label, direction etc.), it might be more appropriate to measure the average accuracy of these distinct predictions; by this measure, the individual predictions of the supertagger are 84.5% accurate; see (Foth et al., 2006) for details. As with many parsers, the attachment of prepositions poses a particular problem for the base WCDG of German, because it is depends largely upon lexicalized information that is not widely used in its constraints. 
However, such information 324 Reannotated Transformed Predictors Dependencies Dependencies 1: POS only 89.7%/87.9% 88.3%/85.6% 2: POS+CP 90.2%/88.4% 88.7%/86.0% 3: POS+PP 90.9%/89.1% 89.6%/86.8% 4: POS+ST 92.1%/90.7% 90.7%/88.5% 5: POS+SR 91.4%/90.0% 90.0%/87.7% 6: POS+PP+SR 91.6%/90.2% 90.1%/87.8% 7: POS+ST+SR 92.3%/90.9% 90.8%/88.8% 8: POS+ST+PP 92.1%/90.7% 90.7%/88.5% 9: all five 92.5%/91.1% 91.0%/89.0% Table 1: Structural/labelled parsing accuracy with various predictor components. can be automatically extracted from large corpora of trees or even raw text: prepositions that tend to occur in the vicinity of specific nouns or verbs more often than chance would suggest can be assumed to modify those words preferentially (Volk, 2002). A simple probabilistic model of PP attachment (PP) was used that counts only the occurrences of prepositions and potential attachment words (ignoring the information in the kernel noun of the PP). It was trained on both the available tree banks and on 295,000,000 words of raw text drawn from the taz corpus of German newspaper text. When used to predict the probability of the possible regents of each preposition in each sentence, it achieved an accuracy of 79.4% and 78.3%, respectively (see (Foth and Menzel, 2006) for details). The predictions were integrated into the grammar by another constraint which disprefers all possible regents to the corresponding degree (except for the predicted regent, which is not penalized at all). Finally, we used a full dependency parser in order to obtain structural predictions for all words, and not merely for chunk heads or prepositions. We constructed a probabilistic shift-reduce parser (SR) for labelled dependency trees using the model described by (Nivre, 2003): from all available dependency trees, we reconstructed the series of parse actions (shift, reduce and attach) that would have constructed the tree, and then trained a simple maximum-likelihood model that predicts parse actions based on features of the current state such as the categories of the current and following words, the environment of the top stack word constructed so far, and the distance between the top word and the next word. This oracle parser achieves a structural and labelled accuracy of 84.8%/80.5% on the test set but can only predict projective dependency trees, which causes problems with about 1% of the edges in the 125,000 dependency trees used for training; in the interest of simplicity we did not address this issue specially, instead relying on the ability of the WCDG parser to robustly integrate even predictions which are wrong by definition. 5 Evaluation Since the WCDG parser never fails on typical treebank sentences, and always delivers an analysis that contains exactly one subordination for each word, the common measures of precision, recall and f-score all coincide; all three are summarized as accuracy here. We measure the structural (i.e. unlabelled) accuracy as the ratio of correctly attached words to all words; the labelled accuracy counts only those words that have the correct regent and also bear the correct label. For comparison with previous work, we used the next-to-last 1,000 sentences of the NEGRA corpus as our test set. Table 1 shows the accuracy obtained.1 The gold standard used for evaluation was derived from the annotations of the NEGRA treebank (version 2.0) in a semi-automatic procedure. First, the NEGRA phrase structures were automatically transformed to dependency trees with the DEPSY tool (Daum et al., 2004). 
However, before the parsing experiments, the results were manually corrected to (1) take care of systematic inconsistencies between the NEGRA annotations and the WCDG annotations (e.g. for nonprojectivities, which in our case are used only if necessary for an ambiguity free attachment of verbal arguments, relative clauses and coordinations, but not for other types of adjuncts) and (2) to remove inconsistencies with NEGRAs own annotation guidelines (e.g. with regard to elliptical and co-ordinated structures, adverbs and subordinated main clauses.) To illustrate the consequences of these corrections we report in Table 1 both kinds of results: those obtained on our WCDG-conform annotations (reannotated) and the others on the raw output of the automatic conversion (trans1Note that the POS model employed by TnT was trained on the entire NEGRA corpus, so that there is an overlap between the training set of TnT and the test set of the parser. However, control experiments showed that a POS model trained on the NEGRA and TIGER treebanks minus the test set results in the same parsing accuracy, and in fact slightly better POS accuracy. All other statistical predictors were trained on data disjunct from the test set. 325 formed), although the latter ones introduce a systematic mismatch between the gold standard and the design principles of the grammar. The experiments 2–5 show the effect of adding the POS tagger and one of the other predictor components to the parser. The chunk parser yields only a slight improvement of about 0.5% accuracy; this is most probably because the baseline parser (line 1) does not make very many mistakes at this level anyway. For instance, the relation type with the highest error rate is prepositional attachment, about which the chunk parser makes no predictions at all. In fact, the benefit of the PP component alone (line 3) is much larger even though it predicts only the regents of prepositions. The two other components make predictions about all types of relations, and yield even bigger benefits. When more than one other predictor is added to the grammar, the beneft is generally higher than that of either alone, but smaller than the sum of both. An exception is seen in line 8, where the combination of POS tagging, supertagging and PP prediction fails to better the results of just POS tagging and supertagging (line 4). Individual inspection of the results suggests that the lexicalized information of the PP attacher is often counteracted by the less informed predictions of the supertagger (this was confirmed in preliminary experiments by a gain in accuracy when prepositions were exempted from the supertag constraint). Finally, combining all five predictors results in the highest accuracy of all, improving over the first experiment by 2.8% and 3.2% for structural and labelled accuracy respectively. We see that the introduction of stochastical information into the handwritten language model is generally helpful, although the different predictors contribute different types of information. The POS tagger and PP attacher capture lexicalized regularities which are genuinely new to the grammar: in effect, they refine the language model of the grammar in places that would be tedious to describe through individual rules. In contrast, the more global components tend to make the same predictions as the WCDG itself, only explicitly. This guides the parser so that it tends to check the correct alternative first more often, and has a greater chance of finding the global optimum. 
This explains why their addition increases parsing accuracy even when their own accuracy is markedly lower than even the baseline (line 1). 6 Related work The idea of integrating knowledge sources of different origin is not particularly new. It has been successfully used in areas like speech recognition or statistical machine translation where acoustic models or bilingual mappings have to be combined with (monolingual) language models. A similar architecture has been adopted by (Wang and Harper, 2004) who train an n-best supertagger and an attachment predictor on the Penn Treebank and obtain an labelled F-score of 92.4%, thus slightly outperforming the results of (Collins, 1999) who obtained 92.0% on the same sentences, but evaluating on transformed phrase structure trees instead on directly computed dependency relations. Similar to our approach, the result of (Wang and Harper, 2004) was achieved by integrating the evidence of two (stochastic) components into a single decision procedure on the optimal interpretation. Both, however, have been trained on the very same data set. Combining more than two different knowledge sources into a system for syntactic parsing to our knowledge has never been attempted so far. The possible synergy between different knowledge sources is often assumed but viable alternatives to filtering or selection in a pipelined architecture have not yet been been demonstrated successfully. Therefore, external evidence is either used to restrict the space of possibilities for a subsequent component (Clark and Curran, 2004) or to choose among the alternative results which a traditional rule-based parser usually delivers (Malouf and van Noord, 2004). In contrast to these approaches, our system directly integrates the available evidence into the decision procedure of the rule-based parser by modifying the objective function in a way that helps guiding the parsing process towards the desired interpretation. This seems to be crucial for being able to extend the approach to multiple predictors. An extensive evaluation of probabilistic dependency parsers has recently been carried out within the framework of the 2006 CoNLL shared task (see http://nextens.uvt.nl/ ∼conll). Most successful for many of the 13 different languages has been the system described in (McDonald et al., 2005). This approach is based on a procedure for online large margin learning and considers a huge number of locally available features to predict dependency attachments with326 out being restricted to projective structures. For German it achieves 87.34% labelled and 90.38% unlabelled attachment accuracy. These results are particularly impressive, since due to the strictly local evaluation of attachment hypotheses the runtime complexity of the parser is only O(n2). Although a similar source of text has been used for this evaluation (newspaper), the numbers cannot be directly compared to our results since both the test set and the annotation guidelines differ from those used in our experiments. Moreover, the different methodologies adopted for system development clearly favour a manual grammar development, where more lexical resources are available and because of human involvement a perfect isolation between test and training data can only be guaranteed for the probabilistic components. 
On the other hand CoNLL restricted itself to the easier attachment task and therefore provided the gold standard POS tag as part of the input data, whereas in our case pure word form sequences are analysed and POS disambiguation is part of the task to be solved. Finally, punctuation has been ignored in the CoNLL evaluation, while we included it in the attachment scores. To compensate for the last two effects we re-evaluated our parser without considering punctuation but providing it with perfect POS tags. Thus, under similar conditions as used for the CoNLL evaluation we achieved a labelled accuracy of 90.4% and an unlabelled one of 91.9%. Less obvious, though, is a comparison with results which have been obtained for phrase structure trees. Here the state of the art for German is defined by a system which applies treebank transformations to the original NEGRA treebank and extends a Collins-style parser with a suffix analysis (Dubey, 2005). Using the same test set as the one described above, but restricting the maximum sentence length to 40 and providing the correct POS tag, the system achieved a labelled bracket F-score of 76.3%. 7 Conclusions We have presented an architecture for the fusion of information contributed from a variety of components which are either based on expert knowledge or have been trained on quite different data collections. The results of the experiments show that there is a high degree of synergy between these different contributions, even if they themselves are fairly unreliable. Integrating all the available predictors we were able to improve the overall labelled accuracy on a standard test set for German to 91.1%, a level which is as least as good as the results reported for alternative approaches to parsing German. The result we obtained also challenges the common perception that rule-based parsers are necessarily inferior to stochastic ones. Supplied with appropriate helper components, the WCDG parser not only reached a surprisingly high level of output quality but in addition appears to be fairly stable against changes in the text type it is applied to (Foth et al., 2005). We attribute the successful integration of different information sources primarily to the fundamental ability of the WCDG grammar to combine evidence in a soft manner. If unreliable information needs to be integrated, this possibility is certainly an undispensible prerequisite for preventing local errors from accumulating and leading to an unacceptably low degree of reliability for the whole system eventually. By integrating the different predictors into the WCDG parsers’s general mechanism for evidence arbitration, we not only avoided the adverse effect of individual error rates multiplying out, but instead were able to even raise the degree of output quality substantially. From the fact that the combination of all predictor components achieved the best results, even if the individual predictions are fairly unreliable, we can also conclude that diversity in the selection of predictor components is more important than the reliability of their contributions. Among the available predictor components which could be integrated into the parser additionally, the approach of (McDonald et al., 2005) certainly looks most promising. 
Compared to the shift-reduce parser which has been used as one of the predictor components for our experiments, it seems particularly attractive because it is able to predict non-projective structures without any additional provision, thus avoiding the misfit between our (non-projective) gold standard annotations and the restriction to projective structures that our shiftreduce parser suffers from. Another interesting goal of future work might be to even consider dynamic predictors, which can change their behaviour according to text type and perhaps even to text structure. This, however, would also require extending and adapting the cur327 rently dominating standard scenario of parser evaluation substantially. References Srinivas Bangalore and Aravind K. Joshi. 1999. Supertagging: an approach to almost parsing. Computational Linguistics, 25(2):237–265. Thorsten Brants, Roland Hendriks, Sabine Kramp, Brigitte Krenn, Cordula Preis, Wojciech Skut, and Hans Uszkoreit. 1997. Das NEGRAAnnotationsschema. Negra project report, Universit¨at des Saarlandes, Computerlinguistik, Saarbr¨ucken, Germany. Sabine Brants, Stefanie Dipper, Silvia Hansen, Wolfgang Lezius, and George Smith. 2002. The TIGER treebank. In Proceedings of the Workshop on Treebanks and Linguistic Theories, Sozopol. Thorsten Brants. 2000. TnT – A Statistical Part-ofSpeech Tagger. In Proceedings of the Sixth Applied Natural Language Processing Conference (ANLP2000), Seattle, WA, USA. Eugene Charniak. 2000. A maximum-entropyinspired parser. In Proc. NAACL-2000. Stephen Clark and James R. Curran. 2004. The importance of supertagging for wide-coverage CCG parsing. In Proc. 20th Int. Conf. on Computational Linguistics, Coling-2004. Michael Collins. 1999. Head-Driven Statistical Models for Natural Language Parsing. Phd thesis, University of Pennsylvania, Philadephia, PA. Michael Daum, Kilian Foth, and Wolfgang Menzel. 2004. Automatic transformation of phrase treebanks to dependency trees. In Proc. 4th Int. Conf. on Language Resources and Evaluation, LREC-2004, Lisbon, Portugal. Amit Dubey. 2005. What to do when lexicalization fails: parsing German with suffix analysis and smoothing. In Proc. 43rd Annual Meeting of the ACL, Ann Arbor, MI. Kilian Foth and Wolfgang Menzel. 2006. The benefit of stochastic PP-attachment to a rule-based parser. In Proc. 21st Int. Conf. on Computational Linguistics, Coling-ACL-2006, Sydney. Kilian A. Foth, Wolfgang Menzel, and Ingo Schr¨oder. 2000. A Transformation-based Parsing Technique with Anytime Properties. In 4th Int. Workshop on Parsing Technologies, IWPT-2000, pages 89 – 100. Kilian Foth, Michael Daum, and Wolfgang Menzel. 2005. Parsing unrestricted German text with defeasible constraints. In H. Christiansen, P. R. Skadhauge, and J. Villadsen, editors, Constraint Solving and Language Processing, volume 3438 of Lecture Notes in Artificial Intelligence, pages 140–157. Springer-Verlag, Berlin. Kilian Foth, Tomas By, and Wolfgang Menzel. 2006. Guiding a constraint dependency parser with supertags. In Proc. 21st Int. Conf. on Computational Linguistics, Coling-ACL-2006, Sydney. Kilian Foth. 2004. Writing Weighted Constraints for Large Dependency Grammars. In Proc. Recent Advances in Dependency Grammars, COLINGWorkshop 2004, Geneva, Switzerland. Jochen Hagenstr¨om and Kilian A. Foth. 2002. Tagging for robust parsers. In Proc. 2nd. Int. Workshop, Robust Methods in Analysis of Natural Language Data, ROMAND-2002. Robert Malouf and Gertjan van Noord. 2004. 
Wide coverage parsing with stochastic attribute value grammars. In Proc. IJCNLP-04 Workshop Beyond Shallow Analyses - Formalisms and statistical modeling for deep analyses, Sanya City, China. Hiroshi Maruyama. 1990. Structural disambiguation with constraint propagation. In Proc. 28th Annual Meeting of the ACL (ACL-90), pages 31–38, Pittsburgh, PA. Ryan McDonald, Fernando Pereira, Kiril Ribarov, and Jan Hajic. 2005. Non-projective dependency parsing using spanning tree algorithms. In Proc. Human Language Technology Conference / Conference on Empirical Methods in Natural Language Processing, HLT/EMNLP-2005, Vancouver, B.C. Joakim Nivre. 2003. An Efficient Algorithm for Projective Dependency Parsing. In Proc. 4th International Workshop on Parsing Technologies, IWPT2003, pages 149–160. Helmut Schmid. 1994. Probabilistic part-of-speech tagging using decision trees. In Int. Conf. on New Methods in Language Processing, Manchester, UK. Ingo Schr¨oder, Horia F. Pop, Wolfgang Menzel, and Kilian Foth. 2001. Learning grammar weights using genetic algorithms. In Proceedings Euroconference Recent Advances in Natural Language Processing, pages 235–239, Tsigov Chark, Bulgaria. Ingo Schr¨oder. 2002. Natural Language Parsing with Graded Constraints. Ph.D. thesis, Dept. of Computer Science, University of Hamburg, Germany. Martin Volk. 2002. Combining unsupervised and supervised methods for pp attachment disambiguation. In Proc. of COLING-2002, Taipeh. Wen Wang and Mary P. Harper. 2004. A statistical constraint dependency grammar (CDG) parser. In Proc. ACL Workshop Incremental Parsing: Bringing Engineering and Cognition Together, pages 42–49, Barcelona, Spain. 328
Proceedings of the 21st International Conference on Computational Linguistics and 44th Annual Meeting of the ACL, pages 329–336, Sydney, July 2006. c⃝2006 Association for Computational Linguistics Error mining in parsing results Benoît Sagot and Éric de la Clergerie Projet ATOLL - INRIA Domaine de Voluceau, B.P. 105 78153 Le Chesnay Cedex, France {benoit.sagot,eric.de_la_clergerie}@inria.fr Abstract We introduce an error mining technique for automatically detecting errors in resources that are used in parsing systems. We applied this technique on parsing results produced on several million words by two distinct parsing systems, which share the syntactic lexicon and the pre-parsing processing chain. We were thus able to identify missing and erroneous information in these resources. 1 Introduction Natural language parsing is a hard task, partly because of the complexity and the volume of information that have to be taken into account about words and syntactic constructions. However, it is necessary to have access to such information, stored in resources such as lexica and grammars, and to try and minimize the amount of missing and erroneous information in these resources. To achieve this, the use of these resources at a largescale in parsers is a very promising approach (van Noord, 2004), and in particular the analysis of situations that lead to a parsing failure: one can learn from one’s own mistakes. We introduce a probabilistic model that allows to identify forms and form bigrams that may be the source of errors, thanks to a corpus of parsed sentences. In order to facilitate the exploitation of forms and form bigrams detected by the model, and in particular to identify causes of errors, we have developed a visualization environment. The whole system has been tested on parsing results produced for several multi-million-word corpora and with two different parsers for French, namely SXLFG and FRMG. However, the error mining technique which is the topic of this paper is fully system- and language-independent. It could be applied without any change on parsing results produced by any system working on any language. The only information that is needed is a boolean value for each sentence which indicates if it has been successfully parsed or not. 2 Principles 2.1 General idea The idea we implemented is inspired from (van Noord, 2004). In order to identify missing and erroneous information in a parsing system, one can analyze a large corpus and study with statistical tools what differentiates sentences for which parsing succeeded from sentences for which it failed. The simplest application of this idea is to look for forms, called suspicious forms, that are found more frequently in sentences that could not be parsed. This is what van Noord (2004) does, without trying to identify a suspicious form in any sentence whose parsing failed, and thus without taking into account the fact that there is (at least) one cause of error in each unparsable sentence.1 On the contrary, we will look, in each sentence on which parsing failed, for the form that has the highest probability of being the cause of this failure: it is the main suspect of the sentence. This form may be incorrectly or only partially described in the lexicon, it may take part in constructions that are not described in the grammar, or it may exemplify imperfections of the pre-syntactic processing chain. 
(Footnote 1: Indeed, van Noord defines the suspicion rate of a form f as the rate of unparsable sentences among sentences that contain f.)

This idea can easily be extended to sequences of forms, which is what we do by taking form bigrams into account, but also to lemmas (or sequences of lemmas).

2.2 Form-level probabilistic model

We suppose that the corpus is split into sentences and that sentences are segmented into forms. We denote by $s_i$ the $i$-th sentence and by $o_{i,j}$ ($1 \le j \le |s_i|$) the occurrences of forms that constitute $s_i$, with $F(o_{i,j})$ the corresponding forms. Finally, we call error the function that associates to each sentence $s_i$ either 1, if the parsing of $s_i$ failed, or 0, if it succeeded.

Let $O_f$ be the set of occurrences of a form $f$ in the corpus: $O_f = \{o_{i,j} \mid F(o_{i,j}) = f\}$. The number of occurrences of $f$ in the corpus is therefore $|O_f|$.

Let us first define the mean global suspicion rate $S$, that is, the mean probability that a given occurrence of a form is the cause of a parsing failure. We make the assumption that the failure of the parsing of a sentence has a unique cause (here, a unique form). This assumption, which is not necessarily exactly verified, simplifies the model and leads to good results. If we call $occ_{total}$ the total number of form occurrences in the corpus, we then have:
$$S = \frac{\sum_i error(s_i)}{occ_{total}}$$

Let $f$ be a form that occurs as the $j$-th form of sentence $s_i$, which means that $F(o_{i,j}) = f$. Let us assume that the parsing of $s_i$ failed: $error(s_i) = 1$. We call suspicion rate of the $j$-th form $o_{i,j}$ of sentence $s_i$ the probability, denoted by $S_{i,j}$, that the occurrence $o_{i,j}$ of form $f$ is the cause of the parsing failure of $s_i$. If, on the contrary, the parsing of $s_i$ succeeded, its occurrences have a suspicion rate equal to zero. We then define the mean suspicion rate $S_f$ of a form $f$ as the mean of the suspicion rates of its occurrences:
$$S_f = \frac{1}{|O_f|} \cdot \sum_{o_{i,j} \in O_f} S_{i,j}$$

To compute these rates, we use a fix-point algorithm that iterates the following computations a certain number of times. Let us assume that we have just completed the $n$-th iteration: we know, for each sentence $s_i$ and each occurrence $o_{i,j}$ of this sentence, the estimation of its suspicion rate computed by the $n$-th iteration, denoted by $S^{(n)}_{i,j}$. From this estimation, we compute the $(n+1)$-th estimation of the mean suspicion rate of each form $f$, denoted by $S^{(n+1)}_f$:
$$S^{(n+1)}_f = \frac{1}{|O_f|} \cdot \sum_{o_{i,j} \in O_f} S^{(n)}_{i,j}$$

This rate allows us to compute a new estimation of the suspicion rate of all occurrences, by giving each occurrence $o_{i,j}$ of a sentence $s_i$ a suspicion rate that is exactly the estimation $S^{(n+1)}_{F(o_{i,j})}$ of the mean suspicion rate of the corresponding form, and then performing a sentence-level normalization. Thus:
$$S^{(n+1)}_{i,j} = error(s_i) \cdot \frac{S^{(n+1)}_{F(o_{i,j})}}{\sum_{1 \le j' \le |s_i|} S^{(n+1)}_{F(o_{i,j'})}}$$

At this point, the $(n+1)$-th iteration is completed, and we can resume these computations until convergence on a fix-point. To start the whole process, we simply set, for an occurrence $o_{i,j}$ of sentence $s_i$, $S^{(0)}_{i,j} = error(s_i)/|s_i|$. This means that for a non-parsable sentence, we start from a baseline where all of its occurrences have an equal probability of being the cause of the failure.
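As a concrete rendering of these equations, the following Python sketch implements the fix-point computation directly from the formulas above. It is an illustration only (the paper's own implementation, mentioned below, is a heavily optimized Perl script), and the function and variable names are ours.

```python
from collections import defaultdict

def suspicion_rates(sentences, errors, iterations=50):
    """Fix-point estimation of the mean suspicion rate S_f of every form.

    sentences: one list of forms per sentence
    errors:    parallel list of 0/1 flags (1 = parsing of the sentence failed)
    Returns a dict mapping each form f to its estimated S_f.
    """
    occurrences = defaultdict(list)            # form -> list of (i, j) positions
    for i, sent in enumerate(sentences):
        for j, form in enumerate(sent):
            occurrences[form].append((i, j))

    # Initialization: S^(0)_{i,j} = error(s_i) / |s_i|
    S_occ = [[errors[i] / len(sent)] * len(sent) if sent else []
             for i, sent in enumerate(sentences)]

    S_form = {}
    for _ in range(iterations):
        # Form-level step: S^(n+1)_f = mean of S^(n)_{i,j} over O_f
        S_form = {f: sum(S_occ[i][j] for i, j in occs) / len(occs)
                  for f, occs in occurrences.items()}
        # Occurrence-level step: reassign S_f to each occurrence, then
        # normalize within every non-parsable sentence.
        for i, sent in enumerate(sentences):
            if not errors[i] or not sent:
                continue
            raw = [S_form[form] for form in sent]
            total = sum(raw)
            S_occ[i] = [r / total if total else 0.0 for r in raw]
    return S_form
```

Ranking the returned estimates by a measure that also takes frequency into account (the M_f measure introduced in section 3.3) then yields the suspicious-form lists shown in the result tables.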
After a few dozen iterations, we get stabilized estimations of the mean suspicion rate of each form, which allows us:

• to identify the forms that most probably cause errors,

• for each form $f$, to identify non-parsable sentences $s_i$ where an occurrence $o_{i,j} \in O_f$ of $f$ is a main suspect and where $o_{i,j}$ has a very high suspicion rate among all occurrences of form $f$.

(We also performed experiments in which $S_f$ was estimated by another estimator, namely the smoothed mean suspicion rate $\tilde{S}^{(n)}_f$, which takes into account the number of occurrences of $f$. Indeed, the confidence we can have in the estimation $S^{(n)}_f$ is lower if the number of occurrences of $f$ is lower. Hence the idea of smoothing $S^{(n)}_f$ by replacing it with a weighted mean $\tilde{S}^{(n)}_f$ of $S^{(n)}_f$ and $S$, where the weights $\lambda$ and $1-\lambda$ depend on $|O_f|$: if $|O_f|$ is high, $\tilde{S}^{(n)}_f$ is close to $S^{(n)}_f$; if it is low, it is closer to $S$:
$$\tilde{S}^{(n)}_f = \lambda(|O_f|) \cdot S^{(n)}_f + (1 - \lambda(|O_f|)) \cdot S.$$
In these experiments, we used the smoothing function $\lambda(|O_f|) = 1 - e^{-\beta |O_f|}$ with $\beta = 0.1$. But this model, used with the ranking according to $M_f = S_f \cdot \ln|O_f|$ (see below), leads to results that are very similar to those obtained without smoothing. We therefore describe the smoothing-less model, which has the advantage of not relying on an empirically chosen smoothing function.)

We implemented this algorithm as a Perl script, with strong optimizations of data structures so as to reduce memory and time usage. In particular, form-level structures are shared between sentences.

2.3 Extensions of the model

This model already gives very good results, as we shall see in section 4. However, it can be extended in different ways, some of which we have already implemented. First of all, it is possible not to stick to forms. Indeed, we do not only work on forms, but on couples made out of a form (a lexical entry) and one or several token(s) that correspond to this form in the raw text (a token being a portion of text delimited by spaces or punctuation tokens).

Moreover, one can look for the cause of a parsing failure not only in the presence of a form in the sentence, but also in the presence of a bigram of forms (one could generalize this to n-grams, but as n gets higher the number of occurrences of each n-gram gets lower, leading to non-significant statistics). To perform this, one just needs to extend the notions of form and occurrence, by saying that a (generalized) form is a unigram or a bigram of forms, and that a (generalized) occurrence is an occurrence of a generalized form, i.e., an occurrence of a unigram or a bigram of forms. The results we present in section 4 include this extension, as well as the previous one.

Another possible generalization would be to take into account facts about the sentence that are not simultaneous (such as form unigrams and form bigrams) but mutually exclusive, and that must therefore be probabilized as well. We have not yet implemented such a mechanism, but it would be very interesting, because it would allow us to go beyond forms or n-grams of forms and to manipulate lemmas as well (since a given form usually has several possible lemmas).

3 Experiments

In order to validate our approach, we applied these principles to look for error causes in the parsing results given by two deep parsing systems for French, FRMG and SXLFG, on large corpora.

3.1 Parsers

Both parsing systems we used are based on deep non-probabilistic parsers.
They share: • the Lefff 2 syntactic lexicon for French (Sagot et al., 2005), that contains 500,000 entries (representing 400,000 different forms) ; each lexical entry contains morphological information, sub-categorization frames (when relevant), and complementary syntactic information, in particular for verbal forms (controls, attributives, impersonals,. . . ), • the SXPipe pre-syntactic processing chain (Sagot and Boullier, 2005), that converts a raw text in a sequence of DAGs of forms that are present in the Lefff ; SXPipe contains, among other modules, a sentence-level segmenter, a tokenization and spelling-error correction module, named-entities recognizers, and a non-deterministic multi-word identifier. But FRMG and SXLFG use completely different parsers, that rely on different formalisms, on different grammars and on different parser builder. Therefore, the comparison of error mining results on the output of these two systems makes it possible to distinguish errors coming from the Lefff or from SXPipe from those coming to one grammar or the other. Let us describe in more details the characteristics of these two parsers. The FRMG parser (Thomasset and Villemonte de la Clergerie, 2005) is based on a compact TAG for French that is automatically generated from a meta-grammar. The compilation and execution of the parser is performed in the framework of the DYALOG system (Villemonte de la Clergerie, 2005). The SXLFG parser (Boullier and Sagot, 2005b; Boullier and Sagot, 2005a) is an efficient and robust LFG parser. Parsing is performed in two steps. First, an Earley-like parser builds a shared forest that represents all constituent structures that satisfy the context-free skeleton of the grammar. Then functional structures are built, in one or more bottom-up passes. Parsing efficiency is achieved thanks to several techniques such as compact data representation, systematic use of structure and computation sharing, lazy evaluation and heuristic and almost non-destructive pruning during parsing. Both parsers implement also advanced error recovery and tolerance techniques, but they were 331 corpus #sentences #success (%) #forms #occ S (%) Date MD/FRMG 330,938 136,885 (41.30%) 255,616 10,422,926 1.86% Jul. 05 MD/SXLFG 567,039 343,988 (60.66%) 327,785 14,482,059 1.54% Mar. 05 EASy/FRMG 39,872 16,477 (41.32%) 61,135 878,156 2.66% Dec. 05 EASy/SXLFG 39,872 21,067 (52.84%) 61,135 878,156 2.15% Dec. 05 Table 1: General information on corpora and parsing results useless for the experiments described here, since we want only to distinguish sentences that receive a full parse (without any recovery technique) from those that do not. 3.2 Corpora We parsed with these two systems the following corpora: MD corpus : This corpus is made out of 14.5 million words (570,000 sentences) of general journalistic corpus that are articles from the Monde diplomatique. EASy corpus : This is the 40,000-sentence corpus that has been built for the EASy parsing evaluation campaign for French (Paroubek et al., 2005). We only used the raw corpus (without taking into account the fact that a manual parse is available for 10% of all sentences). The EASy corpus contains several sub-corpora of varied style: journalistic, literacy, legal, medical, transcription of oral, email, questions, etc. Both corpora are raw in the sense that no cleaning whatsoever has been performed so as to eliminate some sequences of characters that can not really be considered as sentences. 
Table 1 gives some general information on these corpora as well as the results we got with both parsing systems. It shall be noticed that both parsers did not parse exactly the same set and the same number of sentences for the MD corpus, and that they do not define in the exactly same way the notion of sentence. 3.3 Results visualization environment We developed a visualization tool for the results of the error mining, that allows to examine and annotate them. It has the form of an HTML page that uses dynamic generation methods, in particular javascript. An example is shown on Figure 1. To achieve this, suspicious forms are ranked according to a measure Mf that models, for a given form f, the benefit there is to try and correct the (potential) corresponding error in the resources. A user who wants to concentrate on almost certain errors rather than on most frequent ones can visualize suspicious forms ranked according to Mf = Sf. On the contrary, a user who wants to concentrate on most frequent potential errors, rather than on the confidence that the algorithm has given to errors, can visualize suspicious forms ranked according to4 Mf = Sf|Of|. The default choice, which is adopted to produce all tables shown in this paper, is a balance between these two possibilities, and ranks suspicious forms according to Mf = Sf · ln |Of|. The visualization environment allows to browse through (ranked) suspicious forms in a scrolling list on the left part of the page (A). When the suspicious form is associated to a token that is the same as the form, only the form is shown. Otherwise, the token is separated from the form by the symbol “ / ”. The right part of the page shows various pieces of information about the currently selected form. After having given its rank according to the ranking measure Mf that has been chosen (B), a field is available to add or edit an annotation associated with the suspicious form (D). These annotations, aimed to ease the analysis of the error mining results by linguists and by the developers of parsers and resources (lexica, grammars), are saved in a database (SQLITE). Statistical information is also given about f (E), including its number of occurrences occf, the number of occurrences of f in non-parsable sentences, the final estimation of its mean suspicion rate Sf and the rate err(f) of non-parsable sentences among those where f appears. This indications are complemented by a brief summary of the iterative process that shows the convergence of the successive estimations of Sf. The lower part of the page gives a mean to identify the cause of f-related errors by showing 4Let f be a form. The suspicion rate Sf can be considered as the probability for a particular occurrence of f to cause a parsing error. Therefore, Sf|Of| models the number of occurrences of f that do cause a parsing error. 332 A B C D E F G H Figure 1: Error mining results visualization environment (results are shown for MD/FRMG). f’s entries in the Lefff lexicon (G) as well as nonparsable sentences where f is the main suspect and where one of its occurrences has a particularly high suspicion rate5 (H). The whole page (with annotations) can be sent by e-mail, for example to the developer of the lexicon or to the developer of one parser or the other (C). 4 Results In this section, we mostly focus on the results of our error mining algorithm on the parsing results provided by SXLFG on the MD corpus. 
We first present results when only forms are taken into account, and then give an insight on results when both forms and form bigrams are considered. 5Such an information, which is extremely valuable for the developers of the resources, can not be obtained by global (form-level and not occurrence-level) approaches such as the err(f)-based approach of (van Noord, 2004). Indeed, enumerating all sentences which include a given form f, and which did not receive a full parse, is not precise enough: it would show at the same time sentences wich fail because of f (e.g., because its lexical entry lacks a given subcategorization frame) and sentences which fail for an other independent reason. 4.1 Finding suspicious forms The execution of our error mining script on MD/SXLFG, with imax = 50 iterations and when only (isolated) forms are taken into account, takes less than one hour on a 3.2 GHz PC running Linux with a 1.5 Go RAM. It outputs 18,334 relevant suspicious forms (out of the 327,785 possible ones), where a relevant suspicious form is defined as a form f that satisfies the following arbitrary constraints:6 S(imax) f > 1, 5 · S and |Of| > 5. We still can not prove theoretically the convergence of the algorithm.7 But among the 1000 bestranked forms, the last iteration induces a mean variation of the suspicion rate that is less than 0.01%. On a smaller corpus like the EASy corpus, 200 iterations take 260s. The algorithm outputs less than 3,000 relevant suspicious forms (out of the 61,125 possible ones). Convergence information 6These constraints filter results, but all forms are taken into account during all iterations of the algorithm. 7However, the algorithms shares many common points with iterative algorithm that are known to converge and that have been proposed to find maximum entropy probability distributions under a set of constraints (Berger et al., 1996). Such an algorithm is compared to ours later on in this paper. 333 is the same as what has been said above for the MD corpus. Table 2 gives an idea of the repartition of suspicious forms w.r.t. their frequency (for FRMG on MD), showing that rare forms have a greater probability to be suspicious. The most frequent suspicious form is the double-quote, with (only) Sf = 9%, partly because of segmentation problems. 4.2 Analyzing results Table 3 gives an insight on the output of our algorithm on parsing results obtained by SXLFG on the MD corpus. For each form f (in fact, for each couple of the form (token,form)), this table displays its suspicion rate and its number of occurrences, as well as the rate err(f) of non-parsable sentences among those where f appears and a short manual analysis of the underlying error. In fact, a more in-depth manual analysis of the results shows that they are very good: errors are correctly identified, that can be associated with four error sources: (1) the Lefff lexicon, (2) the SXPipe pre-syntactic processing chain, (3) imperfections of the grammar, but also (4) problems related to the corpus itself (and to the fact that it is a raw corpus, with meta-data and typographic noise). On the EASy corpus, results are also relevant, but sometimes more difficult to interpret, because of the relative small size of the corpus and because of its heterogeneity. In particular, it contains email and oral transcriptions sub-corpora that introduce a lot of noise. Segmentation problems (caused both by SXPipe and by the corpus itself, which is already segmented) play an especially important role. 
4.3 Comparing results with results of other algorithms In order to validate our approach, we compared our results with results given by two other relevant algorithms: • van Noord’s (van Noord, 2004) (form-level and non-iterative) evaluation of err(f) (the rate of non-parsable sentences among sentences containing the form f), • a standard (occurrence-level and iterative) maximum entropy evaluation of each form’s contribution to the success or the failure of a sentence (we used the MEGAM package (Daumé III, 2004)). As done for our algorithm, we do not rank forms directly according to the suspicion rate Sf computed by these algorithms. Instead, we use the Mf measure presented above (Mf = Sf ·ln |Of|). Using directly van Noord’s measure selects as most suspicious words very rare words, which shows the importance of a good balance between suspicion rate and frequency (as noted by (van Noord, 2004) in the discussion of his results). This remark applies to the maximum entropy measure as well. Table 4 shows for all algorithms the 10 bestranked suspicious forms, complemented by a manual evaluation of their relevance. One clearly sees that our approach leads to the best results. Van Noord’s technique has been initially designed to find errors in resources that already ensured a very high coverage. On our systems, whose development is less advanced, this technique ranks as most suspicious forms those which are simply the most frequent ones. It seems to be the case for the standard maximum entropy algorithm, thus showing the importance to take into account the fact that there is at least one cause of error in any sentence whose parsing failed, not only to identify a main suspicious form in each sentence, but also to get relevant global results. 4.4 Comparing results for both parsers We complemented the separated study of error mining results on the output of both parsers by an analysis of merged results. We computed for each form the harmonic mean of both measures Mf = Sf · ln |Of| obtained for each parsing system. Results (not shown here) are very interesting, because they identify errors that come mostly from resources that are shared by both systems (the Lefff lexicon and the pre-syntactic processing chain SXPipe). Although some errors come from common lacks of coverage in both grammars, it is nevertheless a very efficient mean to get a first repartition between error sources. 4.5 Introducing form bigrams As said before, we also performed experiments where not only forms but also form bigrams are treated as potential causes of errors. This approach allows to identify situations where a form is not in itself a relevant cause of error, but leads often to a parse failure when immediately followed or preceded by an other form. Table 5 shows best-ranked form bigrams (forms that are ranked in-between are not shown, to em334 #occ > 100 000 > 10 000 > 1000 > 100 > 10 #forms 13 84 947 8345 40 393 #suspicious forms (%) 1 (7.6%) 13 (15.5%) 177 (18.7%) 1919 (23%) 12 022 (29.8%) Table 2: Suspicious forms repartition for MD/FRMG Rank Token(s)/form S(50) f |Of| err(f) Mf Error cause 1 _____/_UNDERSCORE 100% 6399 100% 8.76 corpus: typographic noise 2 (...) 
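The van Noord baseline and the M_f ranking that is applied uniformly to all three methods (as explained in the next paragraph) are simple enough to state in a few lines. The Python fragment below is again only an illustrative sketch with names of our own choosing; it takes as input the per-form suspicion rates and occurrence counts produced by the fix-point computation of section 2.2, and it does not attempt to sketch the maximum entropy comparison point.

```python
import math

def err(form, sentences, errors):
    """van Noord's global measure: the rate of non-parsable sentences among
    the sentences that contain the given form."""
    flags = [errors[i] for i, sent in enumerate(sentences) if form in sent]
    return sum(flags) / len(flags) if flags else 0.0

def rank_by_M(S_form, occ_counts):
    """Rank forms by M_f = S_f * ln|O_f|, the default ranking measure used
    for the result tables (suspicion rate balanced against frequency)."""
    return sorted(S_form,
                  key=lambda f: S_form[f] * math.log(occ_counts[f]),
                  reverse=True)
```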
46% 2168 67% 2.82 SXPipe: should be treated as skippable words 3 2_]/_NUMBER 76% 30 93% 2.58 SXPipe: bad treatment of list constructs 4 privées 39% 589 87% 2.53 Lefff : misses as an adjective 5 Haaretz/_Uw 51% 149 70% 2.53 SXPipe: needs local grammars for references 6 contesté 52% 122 90% 2.52 Lefff : misses as an adjective 7 occupés 38% 601 86% 2.42 Lefff : misses as an adjective 8 privée 35% 834 82% 2.38 Lefff : misses as an adjective 9 [...] 44% 193 71% 2.33 SXPipe: should be treated as skippable words 10 faudrait 36% 603 85% 2.32 Lefff : can have a nominal object Table 3: Analysis of the 10 best-ranked forms (ranked according to Mf = Sf · ln |Of|) this paper global maxent Rank Token(s)/form Eval Token(s)/form Eval Token(s)/form Eval 1 _____/_UNDERSCORE ++ * + pour 2 (...) ++ , ) 3 2_]/_NUMBER ++ livre à 4 privées ++ . qu’il/qu’ 5 Haaretz/_Uw ++ de sont 6 contesté ++ ; le 7 occupés ++ : qu’un/qu’ + 8 privée ++ la qu’un/un + 9 [...] ++ ´étrangères que 10 faudrait ++ lecteurs pourrait Table 4: The 10 best-ranked suspicious forms, according the the Mf measure, as computed by different algorithms: ours (this paper), a standard maximum entropy algorithm (maxent) and van Noord’s rate err(f) (global). Rank Tokens and forms Mf Error cause 4 Toutes/toutes les 2.73 grammar: badly treated pre-determiner adjective 6 y en 2,34 grammar: problem with the construction il y en a. . . 7 in “ 1.81 Lefff : in misses as a preposition, which happends before book titles (hence the “) 10 donne à 1.44 Lefff : donner should sub-categorize à-vcomps (donner à voir. . . ) 11 de demain 1.19 Lefff : demain misses as common noun (standard adv are not preceded by prep) 16 ( 22/_NUMBER 0.86 grammar: footnote references not treated 16 22/_NUMBER ) 0.86 as above Table 5: Best ranked form bigrams (forms ranked inbetween are not shown; ranked according to Mf = Sf · ln |Of|). These results have been computed on a subset of the MD corpus (60,000 sentences). 335 phasize bigram results), with the same data as in table 3. 5 Conclusions and perspectives As we have shown, parsing large corpora allows to set up error mining techniques, so as to identify missing and erroneous information in the different resources that are used by full-featured parsing systems. The technique described in this paper and its implementation on forms and form bigrams has already allowed us to detect many errors and omissions in the Lefff lexicon, to point out inappropriate behaviors of the SXPipe pre-syntactic processing chain, and to reveal the lack of coverage of the grammars for certain phenomena. We intend to carry on and extend this work. First of all, the visualization environment can be enhanced, as is the case for the implementation of the algorithm itself. We would also like to integrate to the model the possibility that facts taken into account (today, forms and form bigrams) are not necessarily certain, because some of them could be the consequence of an ambiguity. For example, for a given form, several lemmas are often possible. The probabilization of these lemmas would thus allow to look for most suspicious lemmas. We are already working on a module that will allow not only to detect errors, for example in the lexicon, but also to propose a correction. To achieve this, we want to parse anew all nonparsable sentences, after having replaced their main suspects by a special form that receives under-specified lexical information. 
These information can be either very general, or can be computed by appropriate generalization patterns applied on the information associated by the lexicon with the original form. A statistical study of the new parsing results will make it possible to propose corrections concerning the involved forms. References A. Berger, S. Della Pietra, and V. Della Pietra. 1996. A maximun entropy approach to natural language processing. Computational Linguistics, 22(1):pp. 39– 71. Pierre Boullier and Benoît Sagot. 2005a. Analyse syntaxique profonde à grande échelle: SXLFG. Traitement Automatique des Langues (T.A.L.), 46(2). Pierre Boullier and Benoît Sagot. 2005b. Efficient and robust LFG parsing: SxLfg. In Proceedings of IWPT’05, Vancouver, Canada, October. Hal Daumé III. 2004. Notes on CG and LM-BFGS optimization of logistic regression. Paper available at http://www.isi.edu/~hdaume/docs/ daume04cg-bfgs.ps, implementation available at http://www.isi.edu/~hdaume/ megam/. Patrick Paroubek, Louis-Gabriel Pouillot, Isabelle Robba, and Anne Vilnat. 2005. EASy : campagne d’évaluation des analyseurs syntaxiques. In Proceedings of the EASy workshop of TALN 2005, Dourdan, France. Benoît Sagot and Pierre Boullier. 2005. From raw corpus to word lattices: robust pre-parsing processing. In Proceedings of L&TC 2005, Pozna´n, Pologne. Benoît Sagot, Lionel Clément, Éric Villemonte de la Clergerie, and Pierre Boullier. 2005. Vers un méta-lexique pour le français : architecture, acquisition, utilisation. Journée d’étude de l’ATALA sur l’interface lexique-grammaireet les lexiques syntaxiques et sémantiques, March. François Thomasset and Éric Villemonte de la Clergerie. 2005. Comment obtenir plus des métagrammaires. In Proceedings of TALN’05, Dourdan, France, June. ATALA. Gertjan van Noord. 2004. Error mining for widecoverage grammar engineering. In Proc. of ACL 2004, Barcelona, Spain. Éric Villemonte de la Clergerie. 2005. DyALog: a tabular logic programming based environment for NLP. In Proceedings of 2nd International Workshop on Constraint Solving and Language Processing (CSLP’05), Barcelona, Spain, October. 336
2006
42
Proceedings of the 21st International Conference on Computational Linguistics and 44th Annual Meeting of the ACL, pages 337–344, Sydney, July 2006. c⃝2006 Association for Computational Linguistics Reranking and Self-Training for Parser Adaptation David McClosky, Eugene Charniak, and Mark Johnson Brown Laboratory for Linguistic Information Processing (BLLIP) Brown University Providence, RI 02912 {dmcc|ec|mj}@cs.brown.edu Abstract Statistical parsers trained and tested on the Penn Wall Street Journal (WSJ) treebank have shown vast improvements over the last 10 years. Much of this improvement, however, is based upon an ever-increasing number of features to be trained on (typically) the WSJ treebank data. This has led to concern that such parsers may be too finely tuned to this corpus at the expense of portability to other genres. Such worries have merit. The standard “Charniak parser” checks in at a labeled precisionrecall f-measure of 89.7% on the Penn WSJ test set, but only 82.9% on the test set from the Brown treebank corpus. This paper should allay these fears. In particular, we show that the reranking parser described in Charniak and Johnson (2005) improves performance of the parser on Brown to 85.2%. Furthermore, use of the self-training techniques described in (McClosky et al., 2006) raise this to 87.8% (an error reduction of 28%) again without any use of labeled Brown data. This is remarkable since training the parser and reranker on labeled Brown data achieves only 88.4%. 1 Introduction Modern statistical parsers require treebanks to train their parameters, but their performance declines when one parses genres more distant from the training data’s domain. Furthermore, the treebanks required to train said parsers are expensive and difficult to produce. Naturally, one of the goals of statistical parsing is to produce a broad-coverage parser which is relatively insensitive to textual domain. But the lack of corpora has led to a situation where much of the current work on parsing is performed on a single domain using training data from that domain — the Wall Street Journal (WSJ) section of the Penn Treebank (Marcus et al., 1993). Given the aforementioned costs, it is unlikely that many significant treebanks will be created for new genres. Thus, parser adaptation attempts to leverage existing labeled data from one domain and create a parser capable of parsing a different domain. Unfortunately, the state of the art in parser portability (i.e. using a parser trained on one domain to parse a different domain) is not good. The “Charniak parser” has a labeled precision-recall f-measure of 89.7% on WSJ but a lowly 82.9% on the test set from the Brown corpus treebank. Furthermore, the treebanked Brown data is mostly general non-fiction and much closer to WSJ than, e.g., medical corpora would be. Thus, most work on parser adaptation resorts to using some labeled in-domain data to fortify the larger quantity of outof-domain data. In this paper, we present some encouraging results on parser adaptation without any in-domain data. (Though we also present results with indomain data as a reference point.) In particular we note the effects of two comparatively recent techniques for parser improvement. The first of these, parse-reranking (Collins, 2000; Charniak and Johnson, 2005) starts with a “standard” generative parser, but uses it to generate the n-best parses rather than a single parse. 
Then a reranking phase uses more detailed features, features which would (mostly) be impossible to incorporate in the initial phase, to reorder 337 the list and pick a possibly different best parse. At first blush one might think that gathering even more fine-grained features from a WSJ treebank would not help adaptation. However, we find that reranking improves the parsers performance from 82.9% to 85.2%. The second technique is self-training — parsing unlabeled data and adding it to the training corpus. Recent work, (McClosky et al., 2006), has shown that adding many millions of words of machine parsed and reranked LA Times articles does, in fact, improve performance of the parser on the closely related WSJ data. Here we show that it also helps the father-afield Brown data. Adding it improves performance yet-again, this time from 85.2% to 87.8%, for a net error reduction of 28%. It is interesting to compare this to our results for a completely Brown trained system (i.e. one in which the first-phase parser is trained on just Brown training data, and the second-phase reranker is trained on Brown 50-best lists). This system performs at a 88.4% level — only slightly higher than that achieved by our system with only WSJ data. 2 Related Work Work in parser adaptation is premised on the assumption that one wants a single parser that can handle a wide variety of domains. While this is the goal of the majority of parsing researchers, it is not quite universal. Sekine (1997) observes that for parsing a specific domain, data from that domain is most beneficial, followed by data from the same class, data from a different class, and data from a different domain. He also notes that different domains have very different structures by looking at frequent grammar productions. For these reasons he takes the position that we should, instead, simply create treebanks for a large number of domains. While this is a coherent position, it is far from the majority view. There are many different approaches to parser adaptation. Steedman et al. (2003) apply cotraining to parser adaptation and find that cotraining can work across domains. The need to parse biomedical literature inspires (Clegg and Shepherd, 2005; Lease and Charniak, 2005). Clegg and Shepherd (2005) provide an extensive side-by-side performance analysis of several modern statistical parsers when faced with such data. They find that techniques which combine differTraining Testing f-measure Gildea Bacchiani WSJ WSJ 86.4 87.0 WSJ Brown 80.6 81.1 Brown Brown 84.0 84.7 WSJ+Brown Brown 84.3 85.6 Table 1: Gildea and Bacchiani results on WSJ and Brown test corpora using different WSJ and Brown training sets. Gildea evaluates on sentences of length ≤40, Bacchiani on all sentences. ent parsers such as voting schemes and parse selection can improve performance on biomedical data. Lease and Charniak (2005) use the Charniak parser for biomedical data and find that the use of out-of-domain trees and in-domain vocabulary information can considerably improve performance. However, the work which is most directly comparable to ours is that of (Ratnaparkhi, 1999; Hwa, 1999; Gildea, 2001; Bacchiani et al., 2006). All of these papers look at what happens to modern WSJ-trained statistical parsers (Ratnaparkhi’s, Collins’, Gildea’s and Roark’s, respectively) as training data varies in size or usefulness (because we are testing on something other than WSJ). 
We concentrate particularly on the work of (Gildea, 2001; Bacchiani et al., 2006) as they provide results which are directly comparable to those presented in this paper. Looking at Table 1, the first line shows us the standard training and testing on WSJ — both parsers perform in the 86-87% range. The next line shows what happens when parsing Brown using a WSJ-trained parser. As with the Charniak parser, both parsers take an approximately 6% hit. It is at this point that our work deviates from these two papers. Lacking alternatives, both (Gildea, 2001) and (Bacchiani et al., 2006) give up on adapting a pure WSJ trained system, instead looking at the issue of how much of an improvement one gets over a pure Brown system by adding WSJ data (as seen in the last two lines of Table 1). Both systems use a “model-merging” (Bacchiani et al., 2006) approach. The different corpora are, in effect, concatenated together. However, (Bacchiani et al., 2006) achieve a larger gain by weighting the in-domain (Brown) data more heavily than the out-of-domain WSJ data. One can imagine, for instance, five copies of the Brown data concatenated with just one copy of WSJ data. 338 3 Corpora We primarily use three corpora in this paper. Selftraining requires labeled and unlabeled data. We assume that these sets of data must be in similar domains (e.g. news articles) though the effectiveness of self-training across domains is currently an open question. Thus, we have labeled (WSJ) and unlabeled (NANC) out-of-domain data and labeled in-domain data (BROWN). Unfortunately, lacking a corresponding corpus to NANC for BROWN, we cannot perform the opposite scenario and adapt BROWN to WSJ. 3.1 Brown The BROWN corpus (Francis and Kuˇcera, 1979) consists of many different genres of text, intended to approximate a “balanced” corpus. While the full corpus consists of fiction and nonfiction domains, the sections that have been annotated in Treebank II bracketing are primarily those containing fiction. Examples of the sections annotated include science fiction, humor, romance, mystery, adventure, and “popular lore.” We use the same divisions as Bacchiani et al. (2006), who base their divisions on Gildea (2001). Each division of the corpus consists of sentences from all available genres. The training division consists of approximately 80% of the data, while held-out development and testing divisions each make up 10% of the data. The treebanked sections contain approximately 25,000 sentences (458,000 words). 3.2 Wall Street Journal Our out-of-domain data is the Wall Street Journal (WSJ) portion of the Penn Treebank (Marcus et al., 1993) which consists of about 40,000 sentences (one million words) annotated with syntactic information. We use the standard divisions: Sections 2 through 21 are used for training, section 24 for held-out development, and section 23 for final testing. 3.3 North American News Corpus In addition to labeled news data, we make use of a large quantity of unlabeled news data. The unlabeled data is the North American News Corpus, NANC (Graff, 1995), which is approximately 24 million unlabeled sentences from various news sources. NANC contains no syntactic information and sentence boundaries are induced by a simple discriminative model. We also perform some basic cleanups on NANC to ease parsing. NANC contains news articles from various news sources including the Wall Street Journal, though for this paper, we only use articles from the LA Times portion. 
To use the data from NANC, we use self-training (McClosky et al., 2006). First, we take a WSJ trained reranking parser (i.e. both the parser and reranker are built from WSJ training data) and parse the sentences from NANC with the 50-best (Charniak and Johnson, 2005) parser. Next, the 50-best parses are reordered by the reranker. Finally, the 1-best parses after reranking are combined with the WSJ training set to retrain the firststage parser.1 McClosky et al. (2006) find that the self-trained models help considerably when parsing WSJ. 4 Experiments We use the Charniak and Johnson (2005) reranking parser in our experiments. Unless mentioned otherwise, we use the WSJ-trained reranker (as opposed to a BROWN-trained reranker). To evaluate, we report bracketing f-scores.2 Parser f-scores reported are for sentences up to 100 words long, while reranking parser f-scores are over all sentences. For simplicity and ease of comparison, most of our evaluations are performed on the development section of BROWN. 4.1 Adapting self-training Our first experiment examines the performance of the self-trained parsers. While the parsers are created entirely from labeled WSJ data and unlabeled NANC data, they perform extremely well on BROWN development (Table 2). The trends are the same as in (McClosky et al., 2006): Adding NANC data improves parsing performance on BROWN development considerably, improving the f-score from 83.9% to 86.4%. As more NANC data is added, the f-score appears to approach an asymptote. The NANC data appears to help reduce data sparsity and fill in some of the gaps in the WSJ model. Additionally, the reranker provides further benefit and adds an absolute 1-2% to the fscore. The improvements appear to be orthogonal, as our best performance is reached when we use the reranker and add 2,500k self-trained sentences from NANC. 1We trained a new reranker from this data as well, but it does not seem to get significantly different performance. 2The harmonic mean of labeled precision (P) and labeled recall (R), i.e. f = 2×P ×R P +R 339 Sentences added Parser Reranking Parser Baseline BROWN 86.4 87.4 Baseline WSJ 83.9 85.8 WSJ+50k 84.8 86.6 WSJ+250k 85.7 87.2 WSJ+500k 86.0 87.3 WSJ+750k 86.1 87.5 WSJ+1,000k 86.2 87.3 WSJ+1,500k 86.2 87.6 WSJ+2,000k 86.1 87.7 WSJ+2,500k 86.4 87.7 Table 2: Effects of adding NANC sentences to WSJ training data on parsing performance. f-scores for the parser with and without the WSJ reranker are shown when evaluating on BROWN development. For this experiment, we use the WSJ-trained reranker. The results are even more surprising when we compare against a parser3 trained on the labeled training section of the BROWN corpus, with parameters tuned against its held-out section. Despite seeing no in-domain data, the WSJ based parser is able to match the BROWN based parser. For the remainder of this paper, we will refer to the model trained on WSJ+2,500k sentences of NANC as our “best WSJ+NANC” model. We also note that this “best” parser is different from the “best” parser for parsing WSJ, which was trained on WSJ with a relative weight4 of 5 and 1,750k sentences from NANC. For parsing BROWN, the difference between these two parsers is not large, though. Increasing the relative weight of WSJ sentences versus NANC sentences when testing on BROWN development does not appear to have a significant effect. While (McClosky et al., 2006) showed that this technique was effective when testing on WSJ, the true distribution was closer to WSJ so it made sense to emphasize it. 
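Before turning to in-domain data, the self-training procedure described at the beginning of this section can be summarised in a short sketch. The parser and reranker wrappers and their method names are hypothetical, not the interface of the actual Charniak and Johnson software; the sketch only mirrors the reported steps (50-best parsing of NANC, reranking, adding the 1-best trees to the WSJ training set and retraining the first stage), with the relative corpus weight of footnote 4 expressed as duplicated copies of the WSJ trees.

    def self_train(parser, reranker, wsj_trees, unlabeled_sents, wsj_weight=1):
        """Sketch of the self-training loop (hypothetical interfaces).

        parser, reranker -- wrappers around a WSJ-trained first-stage 50-best
        parser and a WSJ-trained discriminative reranker.
        wsj_weight -- relative weight of WSJ, i.e. how many copies of the WSJ
        trees are added to the combined training set (cf. footnote 4).
        """
        self_trained = []
        for sent in unlabeled_sents:
            nbest = parser.parse_nbest(sent, n=50)   # 50-best parses
            best = reranker.rerank(nbest)[0]         # 1-best parse after reranking
            self_trained.append(best)
        combined = wsj_trees * wsj_weight + self_trained
        parser.retrain(combined)                     # only the first stage is retrained
        return parser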
4.2 Incorporating In-Domain Data Up to this point, we have only considered the situation where we have no in-domain data. We now 3In this case, only the parser is trained on BROWN. In section 4.3, we compare against a fully BROWN-trained reranking parser as well. 4A relative weight of n is equivalent to using n copies of the corpus, i.e. an event that occurred x times in the corpus would occur x×n times in the weighted corpus. Thus, larger corpora will tend to dominate smaller corpora of the same relative weight in terms of event counts. explore different ways of making use of labeled and unlabeled in-domain data. Bacchiani et al. (2006) applies self-training to parser adaptation to utilize unlabeled in-domain data. The authors find that it helps quite a bit when adapting from BROWN to WSJ. They use a parser trained from the BROWN train set to parse WSJ and add the parsed WSJ sentences to their training set. We perform a similar experiment, using our WSJtrained reranking parser to parse BROWN train and testing on BROWN development. We achieved a boost from 84.8% to 85.6% when we added the parsed BROWN sentences to our training. Adding in 1,000k sentences from NANC as well, we saw a further increase to 86.3%. However, the technique does not seem as effective in our case. While the self-trained BROWN data helps the parser, it adversely affects the performance of the reranking parser. When self-trained BROWN data is added to WSJ training, the reranking parser’s performance drops from 86.6% to 86.1%. We see a similar degradation as NANC data is added to the training set as well. We are not yet able to explain this unusual behavior. We now turn to the scenario where we have some labeled in-domain data. The most obvious way to incorporate labeled in-domain data is to combine it with the labeled out-of-domain data. We have already seen the results (Gildea, 2001) and (Bacchiani et al., 2006) achieve in Table 1. We explore various combinations of BROWN, WSJ, and NANC corpora. Because we are mainly interested in exploring techniques with self-trained models rather than optimizing performance, we only consider weighting each corpus with a relative weight of one for this paper. The models generated are tuned on section 24 from WSJ. The results are summarized in Table 3. While both WSJ and BROWN models benefit from a small amount of NANC data, adding more than 250k NANC sentences to the BROWN or combined models causes their performance to drop. This is not surprising, though, since adding “too much” NANC overwhelms the more accurate BROWN or WSJ counts. By weighting the counts from each corpus appropriately, this problem can be avoided. Another way to incorporate labeled data is to tune the parser back-off parameters on it. Bacchiani et al. (2006) report that tuning on held-out BROWN data gives a large improvement over tun340 ing on WSJ data. The improvement is mostly (but not entirely) in precision. We do not see the same improvement (Figure 1) but this is likely due to differences in the parsers. However, we do see a similar improvement for parsing accuracy once NANC data has been added. The reranking parser generally sees an improvement, but it does not appear to be significant. 4.3 Reranker Portability We have shown that the WSJ-trained reranker is actually quite portable to the BROWN fiction domain. This is surprising given the large number of features (over a million in the case of the WSJ reranker) tuned to adjust for errors made in the 50best lists by the first-stage parser. 
It would seem the corrections memorized by the reranker are not as domain-specific as we might expect. As further evidence, we present the results of applying the WSJ model to the Switchboard corpus — a domain much less similar to WSJ than BROWN. In Table 4, we see that while the parser’s performance is low, self-training and reranking provide orthogonal benefits. The improvements represent a 12% error reduction with no additional in-domain data. Naturally, in-domain data and speech-specific handling (e.g. disfluency modeling) would probably help dramatically as well. Finally, to compare against a model fully trained on BROWN data, we created a BROWN reranker. We parsed the BROWN training set with 20-fold cross-validation, selected features that occurred 5 times or more in the training set, and fed the 50-best lists from the parser to a numerical optimizer to estimate feature weights. The resulting reranker model had approximately 700,000 features, which is about half as many as the WSJ trained reranker. This may be due to the smaller size of the BROWN training set or because the feature schemas for the reranker were developed on WSJ data. As seen in Table 5, the BROWN reranker is not a significant improvement over the WSJ reranker for parsing BROWN data. 5 Analysis We perform several types of analysis to measure some of the differences and similarities between the BROWN-trained and WSJ-trained reranking parsers. While the two parsers agree on a large number of parse brackets (Section 5.2), there are categorical differences between them (as seen in Parser model Parser f-score Reranker f-score WSJ 74.0 75.9 WSJ+NANC 75.6 77.0 Table 4: Parser and reranking parser performance on the SWITCHBOARD development corpus. In this case, WSJ+NANC is a model created from WSJ and 1,750k sentences from NANC. Model 1-best 10-best 25-best 50-best WSJ 82.6 88.9 90.7 91.9 WSJ+NANC 86.4 92.1 93.5 94.3 BROWN 86.3 92.0 93.3 94.2 Table 6: Oracle f-scores of top n parses produced by baseline WSJ parser, a combined WSJ and NANC parser, and a baseline BROWN parser. Section 5.3). 5.1 Oracle Scores Table 6 shows the f-scores of an “oracle reranker” — i.e. one which would always choose the parse with the highest f-score in the n-best list. While the WSJ parser has relatively low f-scores, adding NANC data results in a parser with comparable oracle scores as the parser trained from BROWN training. Thus, the WSJ+NANC model has better oracle rates than the WSJ model (McClosky et al., 2006) for both the WSJ and BROWN domains. 5.2 Parser Agreement In this section, we compare the output of the WSJ+NANC-trained and BROWN-trained reranking parsers. We use evalb to calculate how similar the two sets of output are on a bracket level. Table 7 shows various statistics. The two parsers achieved an 88.0% f-score between them. Additionally, the two parsers agreed on all brackets almost half the time. The part of speech tagging agreement is fairly high as well. Considering they were created from different corpora, this seems like a high level of agreement. 5.3 Statistical Analysis We conducted randomization tests for the significance of the difference in corpus f-score, based on the randomization version of the paired sample ttest described by Cohen (1995). 
The null hypothesis is that the two parsers being compared are in fact behaving identically, so permuting or swapping the parse trees produced by the parsers for 341 WSJ tuned parser BROWN tuned parser WSJ tuned reranking parser BROWN tuned reranking parser NANC sentences added f-score 2000k 1750k 1500k 1250k 1000k 750k 500k 250k 0k 87.8 87.0 86.0 85.0 83.8 Figure 1: Precision and recall f-scores when testing on BROWN development as a function of the number of NANC sentences added under four test conditions. “BROWN tuned” indicates that BROWN training data was used to tune the parameters (since the normal held-out section was being used for testing). For “WSJ tuned,” we tuned the parameters from section 24 of WSJ. Tuning on BROWN helps the parser, but not for the reranking parser. Parser model Parser alone Reranking parser WSJ alone 83.9 85.8 WSJ+2,500k NANC 86.4 87.7 BROWN alone 86.3 87.4 BROWN+50k NANC 86.8 88.0 BROWN+250k NANC 86.8 88.1 BROWN+500k NANC 86.7 87.8 WSJ+BROWN 86.5 88.1 WSJ+BROWN+50k NANC 86.8 88.1 WSJ+BROWN+250k NANC 86.8 88.1 WSJ+BROWN+500k NANC 86.6 87.7 Table 3: f-scores from various combinations of WSJ, NANC, and BROWN corpora on BROWN development. The reranking parser used the WSJ-trained reranker model. The BROWN parsing model is naturally better than the WSJ model for this task, but combining the two training corpora results in a better model (as in Gildea (2001)). Adding small amounts of NANC further improves the models. Parser model Parser alone WSJ-reranker BROWN-reranker WSJ 82.9 85.2 85.2 WSJ+NANC 87.1 87.8 87.9 BROWN 86.7 88.2 88.4 Table 5: Performance of various combinations of parser and reranker models when evaluated on BROWN test. The WSJ+NANC parser with the WSJ reranker comes close to the BROWN-trained reranking parser. The BROWN reranker provides only a small improvement over its WSJ counterpart, which is not statistically significant. 342 Bracketing agreement f-score 88.03% Complete match 44.92% Average crossing brackets 0.94 POS Tagging agreement 94.85% Table 7: Agreement between the WSJ+NANC parser with the WSJ reranker and the BROWN parser with the BROWN reranker. Complete match is how often the two reranking parsers returned the exact same parse. the same test sentence should not affect the corpus f-scores. By estimating the proportion of permutations that result in an absolute difference in corpus f-scores at least as great as that observed in the actual output, we obtain a distributionfree estimate of significance that is robust against parser and evaluator failures. The results of this test are shown in Table 8. The table shows that the BROWN reranker is not significantly different from the WSJ reranker. In order to better understand the difference between the reranking parser trained on Brown and the WSJ+NANC/WSJ reranking parser (a reranking parser with the first-stage trained on WSJ+NANC and the second-stage trained on WSJ) on Brown data, we constructed a logistic regression model of the difference between the two parsers’ fscores on the development data using the R statistical package5. Of the 2,078 sentences in the development data, 29 sentences were discarded because evalb failed to evaluate at least one of the parses.6 A Wilcoxon signed rank test on the remaining 2,049 paired sentence level f-scores was significant at p = 0.0003. Of these 2,049 sentences, there were 983 parse pairs with the same sentence-level f-score. 
Of the 1,066 sentences for which the parsers produced parses with different f-scores, there were 580 sentences for which the BROWN/BROWN parser produced a parse with a higher sentence-level f-score and 486 sentences for which the WSJ+NANC/WSJ parser produce a parse with a higher f-score. We constructed a generalized linear model with a binomial link with BROWN/BROWN f-score > WSJ+NANC/WSJ f-score as the predicted variable, and sentence length, the number of prepositions (IN), the number of conjunctions (CC) and Brown 5http://www.r-project.org 6This occurs when an apostrophe is analyzed as a possessive marker in the gold tree and a punctuation symbol in the parse tree, or vice versa. Feature Estimate z-value Pr(> |z|) (Intercept) 0.054 0.3 0.77 IN -0.134 -4.4 8.4e-06 *** ID=G 0.584 2.5 0.011 * ID=K 0.697 2.9 0.003 ** ID=L 0.552 2.3 0.021 * ID=M 0.376 0.9 0.33 ID=N 0.642 2.7 0.0055 ** ID=P 0.624 2.7 0.0069 ** ID=R 0.040 0.1 0.90 Table 9: The logistic model of BROWN/BROWN f-score > WSJ+NANC/WSJ f-score identified by model selection. The feature IN is the number prepositions in the sentence, while ID identifies the Brown subcorpus that the sentence comes from. Stars indicate significance level. subcorpus ID as explanatory variables. Model selection (using the “step” procedure) discarded all but the IN and Brown ID explanatory variables. The final estimated model is shown in Table 9. It shows that the WSJ+NANC/WSJ parser becomes more likely to have a higher f-score than the BROWN/BROWN parser as the number of prepositions in the sentence increases, and that the BROWN/BROWN parser is more likely to have a higher f-score on Brown sections K, N, P, G and L (these are the general fiction, adventure and western fiction, romance and love story, letters and memories, and mystery sections of the Brown corpus, respectively). The three sections of BROWN not in this list are F, M, and R (popular lore, science fiction, and humor). 6 Conclusions and Future Work We have demonstrated that rerankers and selftrained models can work well across domains. Models self-trained on WSJ appear to be better parsing models in general, the benefits of which are not limited to the WSJ domain. The WSJtrained reranker using out-of-domain LA Times parses (produced by the WSJ-trained reranker) achieves a labeled precision-recall f-measure of 87.8% on Brown data, nearly equal to the performance one achieves by using a purely Brown trained parser-reranker. The 87.8% f-score on Brown represents a 24% error reduction on the corpus. Of course, as corpora differences go, Brown is relatively close to WSJ. While we also find that our 343 WSJ+NANC/WSJ BROWN/WSJ BROWN/BROWN WSJ/WSJ 0.025 (0) 0.030 (0) 0.031 (0) WSJ+NANC/WSJ 0.004 (0.1) 0.006 (0.025) BROWN/WSJ 0.002 (0.27) Table 8: The difference in corpus f-score between the various reranking parsers, and the significance of the difference in parentheses as estimated by a randomization test with 106 samples. “x/y” indicates that the first-stage parser was trained on data set x and the second-stage reranker was trained on data set y. “best” WSJ-parser-reranker improves performance on the Switchboard corpus, it starts from a much lower base (74.0%), and achieves a much less significant improvement (3% absolute, 11% error reduction). Bridging these larger gaps is still for the future. One intriguing idea is what we call “self-trained bridging-corpora.” We have not yet experimented with medical text but we expect that the “best” WSJ+NANC parser will not perform very well. 
However, suppose one does self-training on a biology textbook instead of the LA Times. One might hope that such a text will split the difference between more “normal” newspaper articles and the specialized medical text. Thus, a selftrained parser based upon such text might do much better than our standard “best.” This is, of course, highly speculative. Acknowledgments This work was supported by NSF grants LIS9720368, and IIS0095940, and DARPA GALE contract HR0011-06-20001. We would like to thank the BLLIP team for their comments. References Michiel Bacchiani, Michael Riley, Brian Roark, and Richard Sproat. 2006. MAP adaptation of stochastic grammars. Computer Speech and Language, 20(1):41–68. Eugene Charniak and Mark Johnson. 2005. Coarseto-fine n-best parsing and MaxEnt discriminative reranking. In Proc. of the 2005 Meeting of the Assoc. for Computational Linguistics (ACL), pages 173–180. Andrew B. Clegg and Adrian Shepherd. 2005. Evaluating and integrating treebank parsers on a biomedical corpus. In Proceedings of the ACL Workshop on Software. Paul R. Cohen. 1995. Empirical Methods for Artificial Intelligence. The MIT Press, Cambridge, Massachusetts. Michael Collins. 2000. Discriminative reranking for natural language parsing. In Machine Learning: Proceedings of the Seventeenth International Conference (ICML 2000), pages 175–182, Stanford, California. W. Nelson Francis and Henry Kuˇcera. 1979. Manual of Information to accompany a Standard Corpus of Present-day Edited American English, for use with Digital Computers. Brown University, Providence, Rhode Island. Daniel Gildea. 2001. Corpus variation and parser performance. In Empirical Methods in Natural Language Processing (EMNLP), pages 167–202. David Graff. 1995. North American News Text Corpus. Linguistic Data Consortium. LDC95T21. Rebecca Hwa. 1999. Supervised grammar induction using training data with limited constituent information. In Proceedings of the 37th Annual Meeting of the Association for Computational Linguistics, pages 72–80, University of Maryland. Matthew Lease and Eugene Charniak. 2005. Parsing biomedical literature. In Second International Joint Conference on Natural Language Processing (IJCNLP’05). Michell P. Marcus, Beatrice Santorini, and Mary Ann Marcinkiewicz. 1993. Building a large annotated corpus of English: The Penn Treebank. Comp. Linguistics, 19(2):313–330. David McClosky, Eugene Charniak, and Mark Johnson. 2006. Effective self-training for parsing. In Proceedings of HLT-NAACL 2006. Adwait Ratnaparkhi. 1999. Learning to parse natural language with maximum entropy models. Machine Learning, 34(1-3):151–175. Satoshi Sekine. 1997. The domain dependence of parsing. In Proc. Applied Natural Language Processing (ANLP), pages 96–102. Mark Steedman, Miles Osborne, Anoop Sarkar, Stephen Clark, Rebecca Hwa, Julia Hockenmaier, Paul Ruhlen, Steven Baker, and Jeremiah Crim. 2003. Bootstrapping statistical parsers from small datasets. In Proc. of European ACL (EACL), pages 331–338. 344
2006
43
Proceedings of the 21st International Conference on Computational Linguistics and 44th Annual Meeting of the ACL, pages 345–352, Sydney, July 2006. c⃝2006 Association for Computational Linguistics Automatic Classification of Verbs in Biomedical Texts Anna Korhonen University of Cambridge Computer Laboratory 15 JJ Thomson Avenue Cambridge CB3 0GD, UK [email protected] Yuval Krymolowski Dept. of Computer Science Technion Haifa 32000 Israel [email protected] Nigel Collier National Institute of Informatics Hitotsubashi 2-1-2 Chiyoda-ku, Tokyo 101-8430 Japan [email protected] Abstract Lexical classes, when tailored to the application and domain in question, can provide an effective means to deal with a number of natural language processing (NLP) tasks. While manual construction of such classes is difficult, recent research shows that it is possible to automatically induce verb classes from cross-domain corpora with promising accuracy. We report a novel experiment where similar technology is applied to the important, challenging domain of biomedicine. We show that the resulting classification, acquired from a corpus of biomedical journal articles, is highly accurate and strongly domainspecific. It can be used to aid BIO-NLP directly or as useful material for investigating the syntax and semantics of verbs in biomedical texts. 1 Introduction Lexical classes which capture the close relation between the syntax and semantics of verbs have attracted considerable interest in NLP (Jackendoff, 1990; Levin, 1993; Dorr, 1997; Prescher et al., 2000). Such classes are useful for their ability to capture generalizations about a range of linguistic properties. For example, verbs which share the meaning of ‘manner of motion’ (such as travel, run, walk), behave similarly also in terms of subcategorization (I traveled/ran/walked, I traveled/ran/walked to London, I traveled/ran/walked five miles). Although the correspondence between the syntax and semantics of words is not perfect and the classes do not provide means for full semantic inferencing, their predictive power is nevertheless considerable. NLP systems can benefit from lexical classes in many ways. Such classes define the mapping from surface realization of arguments to predicateargument structure, and are therefore an important component of any system which needs the latter. As the classes can capture higher level abstractions they can be used as a means to abstract away from individual words when required. They are also helpful in many operational contexts where lexical information must be acquired from small application-specific corpora. Their predictive power can help compensate for lack of data fully exemplifying the behavior of relevant words. Lexical verb classes have been used to support various (multilingual) tasks, such as computational lexicography, language generation, machine translation, word sense disambiguation, semantic role labeling, and subcategorization acquisition (Dorr, 1997; Prescher et al., 2000; Korhonen, 2002). However, large-scale exploitation of the classes in real-world or domain-sensitive tasks has not been possible because the existing classifications, e.g. (Levin, 1993), are incomprehensive and unsuitable for specific domains. 
While manual classification of large numbers of words has proved difficult and time-consuming, recent research shows that it is possible to automatically induce lexical classes from corpus data with promising accuracy (Merlo and Stevenson, 2001; Brew and Schulte im Walde, 2002; Korhonen et al., 2003). A number of ML methods have been applied to classify words using features pertaining to mainly syntactic structure (e.g. statistical distributions of subcategorization frames (SCFs) or general patterns of syntactic behaviour, e.g. transitivity, passivisability) which have been extracted from corpora using e.g. part-of-speech tagging or robust statistical parsing techniques. 345 This research has been encouraging but it has so far concentrated on general language. Domainspecific lexical classification remains unexplored, although it is arguably important: existing classifications are unsuitable for domain-specific applications and these often challenging applications might benefit from improved performance by utilizing lexical classes the most. In this paper, we extend an existing approach to lexical classification (Korhonen et al., 2003) and apply it (without any domain specific tuning) to the domain of biomedicine. We focus on biomedicine for several reasons: (i) NLP is critically needed to assist the processing, mining and extraction of knowledge from the rapidly growing literature in this area, (ii) the domain lexical resources (e.g. UMLS metathesaurus and lexicon1) do not provide sufficient information about verbs and (iii) being linguistically challenging, the domain provides a good test case for examining the potential of automatic classification. We report an experiment where a classification is induced for 192 relatively frequent verbs from a corpus of 2230 biomedical journal articles. The results, evaluated with domain experts, show that the approach is capable of acquiring classes with accuracy higher than that reported in previous work on general language. We discuss reasons for this and show that the resulting classes differ substantially from those in extant lexical resources. They constitute the first syntactic-semantic verb classification for the biomedical domain and could be readily applied to support BIO-NLP. We discuss the domain-specific issues related to our task in section 2. The approach to automatic classification is presented in section 3. Details of the experimental evaluation are supplied in section 4. Section 5 provides discussion and section 6 concludes with directions for future work. 2 The Biomedical Domain and Our Task Recent years have seen a massive growth in the scientific literature in the domain of biomedicine. For example, the MEDLINE database2 which currently contains around 16M references to journal articles, expands with 0.5M new references each year. Because future research in the biomedical sciences depends on making use of all this existing knowledge, there is a strong need for the develop1http://www.nlm.nih.gov/research/umls 2http://www.ncbi.nlm.nih.gov/PubMed/ ment of NLP tools which can be used to automatically locate, organize and manage facts related to published experimental results. In recent years, major progress has been made on information retrieval and on the extraction of specific relations e.g. between proteins and cell types from biomedical texts (Hirschman et al., 2002). Other tasks, such as the extraction of factual information, remain a bigger challenge. This is partly due to the challenging nature of biomedical texts. 
They are complex both in terms of syntax and semantics, containing complex nominals, modal subordination, anaphoric links, etc. Researchers have recently began to use deeper NLP techniques (e.g. statistical parsing) in the domain because they are not challenged by the complex structures to the same extent than shallow techniques (e.g. regular expression patterns) are (Lease and Charniak, 2005). However, deeper techniques require richer domain-specific lexical information for optimal performance than is provided by existing lexicons (e.g. UMLS). This is particularly important for verbs, which are central to the structure and meaning of sentences. Where the lexical information is absent, lexical classes can compensate for it or aid in obtaining it in the ways described in section 1. Consider e.g. the INDICATE and ACTIVATE verb classes in Figure 1. They capture the fact that their members are similar in terms of syntax and semantics: they have similar SCFs and selectional preferences, and they can be used to make similar statements which describe similar events. Such information can be used to build a richer lexicon capable of supporting key tasks such as parsing, predicate-argument identification, event extraction and the identification of biomedical (e.g. interaction) relations. While an abundance of work has been conducted on semantic classification of biomedical terms and nouns, less work has been done on the (manual or automatic) semantic classification of verbs in the biomedical domain (Friedman et al., 2002; Hatzivassiloglou and Weng, 2002; Spasic et al., 2005). No previous work exists in this domain on the type of lexical (i.e. syntactic-semantic) verb classification this paper focuses on. To get an initial idea about the differences between our target classification and a general language classification, we examined the extent to which individual verbs and their frequencies differ in biomedical and general language texts. We 346 PROTEINS: p53 p53 Tp53 Dmp53 ... ACTIVATE suggests demonstrates indicates implies... GENES: WAF1 WAF1 CIP1 p21 ... It INDICATE that activates up-regulates induces stimulates... ... Figure 1: Sample lexical classes BIO BNC show do suggest say use make indicate go contain see describe take express get bind know require come observe give find think determine use demonstrate find perform look induce want Table 1: The 15 most frequent verbs in the biomedical data and in the BNC created a corpus of 2230 biomedical journal articles (see section 4.1 for details) and compared the distribution of verbs in this corpus with that in the British National Corpus (BNC) (Leech, 1992). We calculated the Spearman rank correlation between the 1165 verbs which occurred in both corpora. The result was only a weak correlation: 0.37 ± 0.03. When the scope was restricted to the 100 most frequent verbs in the biomedical data, the correlation was 0.12 ± 0.10 which is only 1.2σ away from zero. The dissimilarity between the distributions is further indicated by the KullbackLeibler distance of 0.97. Table 1 illustrates some of these big differences by showing the list of 15 most frequent verbs in the two corpora. 3 Approach We extended the system of Korhonen et al. (2003) with additional clustering techniques (introduced in sections 3.2.2 and 3.2.4) and used it to obtain the classification for the biomedical domain. The system (i) extracts features from corpus data and (ii) clusters them using five different methods. These steps are described in the following two sections, respectively. 
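Schematically, the pipeline reduces to normalising per-verb SCF counts into relative frequency distributions and handing them to a clustering method. The sketch below uses hypothetical interfaces; the actual feature extraction and clustering components are described in the two sections that follow.

    def classify_verbs(scf_counts, cluster_fn, n_clusters):
        """Two-step pipeline sketch (hypothetical interfaces).

        scf_counts -- {verb: {scf_type: count}}, as delivered by step (i),
        the subcategorization acquisition system.
        cluster_fn -- any of the five clustering methods of step (ii), taking
        per-verb SCF distributions and a requested number of clusters.
        """
        distributions = {}
        for verb, counts in scf_counts.items():
            total = sum(counts.values())
            distributions[verb] = {scf: c / total for scf, c in counts.items()}
        return cluster_fn(distributions, n_clusters)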
3.1 Feature Extraction We employ as features distributions of SCFs specific to given verbs. We extract them from corpus data using the comprehensive subcategorization acquisition system of Briscoe and Carroll (1997) (Korhonen, 2002). The system incorporates RASP, a domain-independent robust statistical parser (Briscoe and Carroll, 2002), which tags, lemmatizes and parses data yielding complete though shallow parses and a SCF classifier which incorporates an extensive inventory of 163 verbal SCFs3. The SCFs abstract over specific lexically-governed particles and prepositions and specific predicate selectional preferences. In our work, we parameterized two high frequency SCFs for prepositions (PP and NP + PP SCFs). No filtering of potentially noisy SCFs was done to provide clustering with as much information as possible. 3.2 Classification The SCF frequency distributions constitute the input data to automatic classification. We experiment with five clustering methods: the simple hard nearest neighbours method and four probabilistic methods – two variants of Probabilistic Latent Semantic Analysis and two information theoretic methods (the Information Bottleneck and the Information Distortion). 3.2.1 Nearest Neighbours The first method collects the nearest neighbours (NN) of each verb. It (i) calculates the JensenShannon divergence (JS) between the SCF distributions of each pair of verbs, (ii) connects each verb with the most similar other verb, and finally (iii) finds all the connected components. The NN method is very simple. It outputs only one clustering configuration and therefore does not allow examining different cluster granularities. 3.2.2 Probabilistic Latent Semantic Analysis The Probabilistic Latent Semantic Analysis (PLSA, Hoffman (2001)) assumes a generative model for the data, defined by selecting (i) a verb verbi, (ii) a semantic class classk from the distribution p(Classes | verbi), and (iii) a SCF scfj from the distribution p(SCFs | classk). PLSA uses Expectation Maximization (EM) to find the distribution ˜p(SCFs | Clusters, V erbs) which maximises the likelihood of the observed counts. It does this by minimising the cost function F = −β log Likelihood(˜p | data) + H(˜p) . 3See http://www.cl.cam.ac.uk/users/alk23/subcat/subcat.html for further detail. 347 For β = 1 minimising F is equivalent to the standard EM procedure while for β < 1 the distribution ˜p tends to be more evenly spread. We use β = 1 (PLSA/EM) and β = 0.75 (PLSAβ=0.75). We currently “harden” the output and assign each verb to the most probable cluster only4. 3.2.3 Information Bottleneck The Information Bottleneck (Tishby et al., 1999) (IB) is an information-theoretic method which controls the balance between: (i) the loss of information by representing verbs as clusters (I(Clusters; V erbs)), which has to be minimal, and (ii) the relevance of the output clusters for representing the SCF distribution (I(Clusters; SCFs)) which has to be maximal. The balance between these two quantities ensures optimal compression of data through clusters. The trade-off between the two constraints is realized through minimising the cost function: LIB = I(Clusters; V erbs) −βI(Clusters; SCFs) , where β is a parameter that balances the constraints. IB takes three inputs: (i) SCF-verb distributions, (ii) the desired number of clusters K, and (iii) the initial value of β. It then looks for the minimal β that decreases LIB compared to its value with the initial β, using the given K. IB delivers as output the probabilities p(K|V ). 
It gives an indication for the most informative number of output configurations: the ones for which the relevance information increases more sharply between K −1 and K clusters than between K and K + 1. 3.2.4 Information Distortion The Information Distortion method (Dimitrov and Miller, 2001) (ID) is otherwise similar to IB but LID differs from LIB by an additional term that adds a bias towards clusters of similar size: LID = −H(Clusters | V erbs) −βI(Clusters; SCFs) = LIB −H(Clusters) . ID yields more evenly divided clusters than IB. 4 Experimental Evaluation 4.1 Data We downloaded the data for our experiment from the MEDLINE database, from three of the 10 lead4The same approach was used with the information theoretic methods. It made sense in this initial work on biomedical classification. In the future we could use soft clustering a means to investigate polysemy. ing journals in biomedicine: 1) Genes & Development (molecular biology, molecular genetics), 2) Journal of Biological Chemistry (biochemistry and molecular biology) and 3) Journal of Cell Biology (cellular structure and function). 2230 fulltext articles from years 2003-2004 were used. The data included 11.5M words and 323,307 sentences in total. 192 medium to high frequency verbs (with the minimum of 300 occurrences in the data) were selected for experimentation5. This test set was big enough to produce a useful classification but small enough to enable thorough evaluation in this first attempt to classify verbs in the biomedical domain. 4.2 Processing the Data The data was first processed using the feature extraction module. 233 (preposition-specific) SCF types appeared in the resulting lexicon, 36 per verb on average.6 The classification module was then applied. NN produced Knn = 42 clusters. From the other methods we requested K = 2 to 60 clusters. We chose for evaluation the outputs corresponding to the most informative values of K: 20, 33, 53 for IB, and 17, 33, 53 for ID. 4.3 Gold Standard Because no target lexical classification was available for the biomedical domain, human experts (4 domain experts and 2 linguists) were used to create the gold standard. They were asked to examine whether the test verbs similar in terms of their syntactic properties (i.e. verbs with similar SCF distributions) are similar also in terms of semantics (i.e. they share a common meaning). Where this was the case, a verb class was identified and named. The domain experts examined the 116 verbs whose analysis required domain knowledge (e.g. activate, solubilize, harvest), while the linguists analysed the remaining 76 general or scientific text verbs (e.g. demonstrate, hypothesize, appear). The linguists used Levin (1993) classes as gold standard classes whenever possible and created novel ones when needed. The domain experts used two purely semantic classifications of biomedical verbs (Friedman et al., 2002; Spasic et al., 2005)7 as a starting point where this was pos5230 verbs were employed initially but 38 were dropped later so that each (coarse-grained) class would have the minimum of 2 members in the gold standard. 6This number is high because no filtering of potentially noisy SCFs was done. 7See http://www.cbr-masterclass.org. 
348 1 Have an effect on activity (BIO/29) 8 Physical Relation 1.1 Activate / Inactivate Between Molecules (BIO/20) 1.1.1 Change activity: activate, inhibit 8.1 Binding: bind, attach 1.1.2 Suppress: suppress, repress 8.2 Translocate and Segregate 1.1.3 Stimulate: stimulate 8.2.1 Translocate: shift, switch 1.1.4 Inactivate: delay, diminish 8.2.2 Segregate: segregate, export 1.2 Affect 8.3 Transmit 1.2.1 Modulate: stabilize, modulate 8.3.1 Transport: deliver, transmit 1.2.2 Regulate: control, support 8.3.2 Link: connect, map 1.3 Increase / decrease: increase, decrease 9 Report (GEN/30) 1.4 Modify: modify, catalyze 9.1 Investigate 2 Biochemical events (BIO/12) 9.1.1 Examine: evaluate, analyze 2.1 Express: express, overexpress 9.1.2 Establish: test, investigate 2.2 Modification 9.1.3 Confirm: verify, determine 2.2.1 Biochemical modification: 9.2 Suggest dephosphorylate, phosphorylate 9.2.1 Presentational: 2.2.2 Cleave: cleave hypothesize, conclude 2.3 Interact: react, interfere 9.2.2 Cognitive: 3 Removal (BIO/6) consider, believe 3.1 Omit: displace, deplete 9.3 Indicate: demonstrate, imply 3.2 Subtract: draw, dissect 10 Perform (GEN/10) 4 Experimental Procedures (BIO/30) 10.1 Quantify 4.1 Prepare 10.1.1 Quantitate: quantify, measure 4.1.1 Wash: wash, rinse 10.1.2 Calculate: calculate, record 4.1.2 Mix: mix 10.1.3 Conduct: perform, conduct 4.1.3 Label: stain, immunoblot 10.2 Score: score, count 4.1.4 Incubate: preincubate, incubate 11 Release (BIO/4): detach, dissociate 4.1.5 Elute: elute 12 Use (GEN/4): utilize, employ 4.2 Precipitate: coprecipitate 13 Include (GEN/11) coimmunoprecipitate 13.1 Encompass: encompass, span 4.3 Solubilize: solubilize,lyse 13.2 Include: contain, carry 4.4 Dissolve: homogenize, dissolve 14 Call (GEN/3): name, designate 4.5 Place: load, mount 15 Move (GEN/12) 5 Process (BIO/5): linearize, overlap 15.1 Proceed: 6 Transfect (BIO/4): inject, microinject progress, proceed 7 Collect (BIO/6) 15.2 Emerge: 7.1 Collect: harvest, select arise, emerge 7.2 Process: centrifuge, recover 16 Appear (GEN/6): appear, occur Table 2: The gold standard classification with a few example verbs per class sible (i.e. where they included our test verbs and also captured their relevant senses)8. The experts created a 3-level gold standard which includes both broad and finer-grained classes. Only those classes / memberships were included which all the experts (in the two teams) agreed on.9 The resulting gold standard including 16, 34 and 50 classes is illustrated in table 2 with 1-2 example verbs per class. The table indicates which classes were created by domain experts (BIO) and which by linguists (GEN). Each class was associated with 1-30 member verbs10. The total number of verbs is indicated in the table (e.g. 10 for PERFORM class). 4.4 Measures The clusters were evaluated against the gold standard using measures which are applicable to all the 8Purely semantic classes tend to be finer-grained than lexical classes and not necessarily syntactic in nature. Only these two classifications were found to be similar enough to our target classification to provide a useful starting point. Section 5 includes a summary of the similarities/differences between our gold standard and these other classifications. 9Experts were allowed to discuss the problematic cases to obtain maximal accuracy - hence no inter-annotator agreement is reported. 10The minimum of 2 member verbs were required at the coarser-grained levels of 16 and 34 classes. 
4.4 Measures

The clusters were evaluated against the gold standard using measures which are applicable to all the classification methods and which deliver a numerical value that is easy to interpret. The first measure, the adjusted pairwise precision, evaluates clusters in terms of verb pairs:

APP = (1/K) Σ_{i=1..K} (num. of correct pairs in k_i / num. of pairs in k_i) · (|k_i| − 1)/(|k_i| + 1)

APP is the average proportion of all within-cluster pairs that are correctly co-assigned. Multiplied by a factor that increases with cluster size, it compensates for a bias towards small clusters.

The second measure is modified purity, a global measure which evaluates the mean precision of clusters. Each cluster is associated with its prevalent class. The number of verbs in a cluster k_i that take this class is denoted by nprevalent(k_i). Verbs that do not take it are considered errors. Clusters where nprevalent(k_i) = 1 are disregarded so as not to introduce a bias towards singletons:

mPUR = Σ_{nprevalent(k_i) ≥ 2} nprevalent(k_i) / (number of verbs)

The third measure is the weighted class accuracy, the proportion of members of dominant clusters DOM-CLUST_i within all classes c_i:

ACC = Σ_{i=1..C} (verbs in DOM-CLUST_i) / (number of verbs)

mPUR can be seen to measure the precision of clusters and ACC the recall. We define an F measure as the harmonic mean of mPUR and ACC:

F = 2 · mPUR · ACC / (mPUR + ACC)

The statistical significance of the results is measured by randomisation tests where verbs are swapped between the clusters and the resulting clusters are evaluated. The swapping is repeated 100 times for each output, and the average av_swaps and the standard deviation σ_swaps are measured. The significance is the scaled difference signif = (result − av_swaps) / σ_swaps.

4.5 Results from Quantitative Evaluation

Table 3 shows the performance of the five clustering methods for K = 42 clusters (as produced by the NN method) at the 3 levels of gold standard classification. Although the two PLSA variants (particularly PLSAβ=0.75) produce a fairly accurate coarse-grained classification, they perform worse than all the other methods at the finer-grained levels of gold standard, particularly according to the global measures.

             16 Classes            34 Classes            50 Classes
             APP mPUR ACC  F       APP mPUR ACC  F       APP mPUR ACC  F
NN            81   86  39  53       64   74  62  67       54   67  73  69
IB            74   88  47  61       61   76  74  75       55   69  87  76
ID            79   89  37  52       63   78  65  70       53   70  77  73
PLSA/EM       55   72  49  58       43   53  61  57       35   47  66  55
PLSAβ=0.75    65   71  68  70       53   48  76  58       41   34  77  47

Table 3: The performance of the NN, PLSA, IB and ID methods with Knn = 42 clusters

Being based on pairwise similarities, NN shows mostly better performance than IB and ID on the pairwise measure APP, but the global measures are better for IB and ID. The differences are smaller in mPUR (yet significant: 2σ between NN and IB and 3σ between NN and ID) but more notable in ACC (which is e.g. 8-12% better for IB than for NN). Also the F results suggest that the two information theoretic methods are better overall than the simple NN method. IB and ID also have the advantage (over NN) that they can be used to produce a hierarchical verb classification. Table 4 shows the results for IB and ID for the informative values of K.

 K  Method   16 Classes            34 Classes            50 Classes
             APP mPUR ACC  F       APP mPUR ACC  F       APP mPUR ACC  F
 20 IB        74   77  66  71       60   56  86  67       54   48  93  63
 17 ID        67   76  60  67       43   56  81  66       34   46  91  61
 33 IB        78   87  52  65       69   75  81  77       61   67  93  77
 33 ID        81   88  43  57       65   75  70  72       54   67  82  73
 53 IB        71   87  41  55       61   78  66  71       54   72  79  75
 53 ID        79   89  33  48       66   79  55  64       53   72  68  69

Table 4: The performance of IB and ID for the 3 levels of class hierarchy for informative values of K
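As a concrete illustration of the global measures defined in Section 4.4, the following sketch computes mPUR, ACC and F for a hard clustering; the dict-based representation and helper names are ours, and APP and the randomisation test are omitted.

from collections import Counter

def mpur_acc_f(clusters, gold):
    """clusters, gold: dicts mapping the same set of verbs to a cluster id
    and to a gold-standard class, respectively."""
    n = len(gold)

    # mPUR: each cluster is scored by its prevalent gold class; clusters whose
    # prevalent class covers only one verb are disregarded.
    correct = 0
    for c in set(clusters.values()):
        members = [v for v, ci in clusters.items() if ci == c]
        n_prev = Counter(gold[v] for v in members).most_common(1)[0][1]
        if n_prev >= 2:
            correct += n_prev
    mpur = correct / n

    # ACC: for each gold class, count its members that fall into the class's
    # dominant cluster.
    dominant = 0
    for cls in set(gold.values()):
        members = [v for v, gi in gold.items() if gi == cls]
        dominant += Counter(clusters[v] for v in members).most_common(1)[0][1]
    acc = dominant / n

    f = 2 * mpur * acc / (mpur + acc)
    return mpur, acc, f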
The bold font indicates the results when the match between the values of K and the number of classes at the particular level of the gold standard is the closest. IB is clearly better than ID at all levels of gold standard. It yields its best results at the medium level (34 classes) with K = 33: F = 77 and APP = 69 (the results for ID are F = 72 and APP = 65). At the most fine-grained level (50 classes), IB is equally good according to F with K = 33, but APP is 8% lower. Although ID is occasionally better than IB according to APP and mPUR (see e.g. the results for 16 classes with K = 53) this never happens in the case where the correspondence between the number of gold standard classes and the values of K is the closest. In other words, the informative values of K prove really informative for IB. The lower performance of ID seems to be due to its tendency to create evenly sized clusters. All the methods perform significantly better than our random baseline. The significance of the results with respect to two swaps was at the 2σ level, corresponding to a 97% confidence that the results are above random. 4.6 Qualitative Evaluation We performed further, qualitative analysis of clusters produced by the best performing method IB. Consider the following clusters: A: inject, transfect, microinfect, contransfect (6) B: harvest, select, collect (7.1) centrifuge, process, recover (7.2) C: wash, rinse (4.1.1) immunoblot (4.1.3) overlap (5) D: activate (1.1.1) When looking at coarse-grained outputs, interestingly, K as low as 8 learned the broad distinction between biomedical and general language verbs (the two verb types appeared only rarely in the same clusters) and produced large semantically meaningful groups of classes (e.g. the coarse-grained classes EXPERIMENTAL PROCEDURES, TRANSFECT and COLLECT were mapped together). K = 12 was sufficient to identify several classes with very particular syntax One of them was TRANSFECT (see A above) whose members were distinguished easily because of their typical SCFs (e.g. inject /transfect/microinfect/contransfect X with/into Y). On the other hand, even K = 53 could not identify classes with very similar (yet un-identical) syntax. These included many semantically similar sub-classes (e.g. the two sub-classes of COLLECT 350 shown in B whose members take similar NP and PP SCFs). However, also a few semantically different verbs clustered wrongly because of this reason, such as the ones exemplified in C. In C, immunoblot (from the LABEL class) is still somewhat related to wash and rinse (the WASH class) because they all belong to the larger EXPERIMENTAL PROCEDURES class, but overlap (from the PROCESS class) shows up in the cluster merely because of syntactic idiosyncracy. While parser errors caused by the challenging biomedical texts were visible in some SCFs (e.g. looking at a sample of SCFs, some adjunct instances were listed in the argument slots of the frames), the cases where this resulted in incorrect classification were not numerous11. One representative singleton resulting from these errors is exemplified in D. Activate appears in relatively complicated sentence structures, which gives rise to incorrect SCFs. For example, MECs cultured on 2D planar substrates transiently activate MAP kinase in response to EGF, whereas... gets incorrectly analysed as SCF NP-NP, while The effect of the constitutively activated ARF6-Q67L mutant was investigated... receives the incorrect SCF analysis NP-SCOMP. 
Most parser errors are caused by unknown domainspecific words and phrases. 5 Discussion Due to differences in the task and experimental setup, direct comparison of our results with previously published ones is impossible. The closest possible comparison point is (Korhonen et al., 2003) which reported 50-59% mPUR and 15-19% APP on using IB to assign 110 polysemous (general language) verbs into 34 classes. Our results are substantially better, although we made no effort to restrict our scope to monosemous verbs12 and although we focussed on a linguistically challenging domain. It seems that our better result is largely due to the higher uniformity of verb senses in the biomedical domain. We could not investigate this effect systematically because no manually sense 11This is partly because the mistakes of the parser are somewhat consistent (similar for similar verbs) and partly because the SCFs gather data from hundreds of corpus instances, many of which are analysed correctly. 12Most of our test verbs are polysemous according to WordNet (WN) (Miller, 1990), but this is not a fully reliable indication because WN is not specific to this domain. annotated data (or a comprehensive list of verb senses) exists for the domain. However, examination of a number of corpus instances suggests that the use of verbs is fairly conventionalized in our data13. Where verbs show less sense variation, they show less SCF variation, which aids the discovery of verb classes. Korhonen et al. (2003) observed the opposite with general language data. We examined, class by class, to what extent our domain-specific gold standard differs from the related general (Levin, 1993) and domain classifications (Spasic et al., 2005; Friedman et al., 2002) (recall that the latter were purely semantic classifications as no lexical ones were available for biomedicine): 33 (of the 50) classes in the gold standard are biomedical. Only 6 of these correspond (fully or mostly) to the semantic classes in the domain classifications. 17 are unrelated to any of the classes in Levin (1993) while 16 bear vague resemblance to them (e.g. our TRANSPORT verbs are also listed under Levin’s SEND verbs) but are too different (semantically and syntactically) to be combined. 17 (of the 50) classes are general (scientific) classes. 4 of these are absent in Levin (e.g. QUANTITATE). 13 are included in Levin, but 8 of them have a more restricted sense (and fewer members) than the corresponding Levin class. Only the remaining 5 classes are identical (in terms of members and their properties) to Levin classes. These results highlight the importance of building or tuning lexical resources specific to different domains, and demonstrate the usefulness of automatic lexical acquisition for this work. 6 Conclusion This paper has shown that current domainindependent NLP and ML technology can be used to automatically induce a relatively high accuracy verb classification from a linguistically challenging corpus of biomedical texts. The lexical classification resulting from our work is strongly domain-specific (it differs substantially from previous ones) and it can be readily used to aid BIONLP. It can provide useful material for investigating the syntax and semantics of verbs in biomedical data or for supplementing existing domain lexical resources with additional information (e.g. 13The different sub-domains of the biomedical domain may, of course, be even more conventionalized (Friedman et al., 2002). 351 semantic classifications with additional member verbs). 
Lexical resources enriched with verb class information can, in turn, better benefit practical tasks such as parsing, predicate-argument identification, event extraction, identification of biomedical relation patterns, among others. In the future, we plan to improve the accuracy of automatic classification by seeding it with domain-specific information (e.g. using named entity recognition and anaphoric linking techniques similar to those of Vlachos et al. (2006)). We also plan to conduct a bigger experiment with a larger number of verbs and demonstrate the usefulness of the bigger classification for practical BIO-NLP application tasks. In addition, we plan to apply similar technology to other interesting domains (e.g. tourism, law, astronomy). This will not only enable us to experiment with cross-domain lexical class variation but also help to determine whether automatic acquisition techniques benefit, in general, from domain-specific tuning. Acknowledgement We would like to thank Yoko Mizuta, Shoko Kawamato, Sven Demiya, and Parantu Shah for their help in creating the gold standard. References C. Brew and S. Schulte im Walde. 2002. Spectral clustering for German verbs. In Conference on Empirical Methods in Natural Language Processing, Philadelphia, USA. E. J. Briscoe and J. Carroll. 1997. Automatic extraction of subcategorization from corpora. In 5th ACL Conference on Applied Natural Language Processing, pages 356–363, Washington DC. E. J. Briscoe and J. Carroll. 2002. Robust accurate statistical annotation of general text. In 3rd International Conference on Language Resources and Evaluation, pages 1499–1504, Las Palmas, Gran Canaria. A. G. Dimitrov and J. P. Miller. 2001. Neural coding and decoding: communication channels and quantization. Network: Computation in Neural Systems, 12(4):441–472. B. Dorr. 1997. Large-scale dictionary construction for foreign language tutoring and interlingual machine translation. Machine Translation, 12(4):271–325. C. Friedman, P. Kra, and A. Rzhetsky. 2002. Two biomedical sublanguages: a description based on the theories of Zellig Harris. Journal of Biomedical Informatics, 35(4):222–235. V. Hatzivassiloglou and W. Weng. 2002. Learning anchor verbs for biological interaction patterns from published text articles. International Journal of Medical Inf., 67:19–32. L. Hirschman, J. C. Park, J. Tsujii, L. Wong, and C. H. Wu. 2002. Accomplishments and challenges in literature data mining for biology. Journal of Bioinformatics, 18(12):1553–1561. T. Hoffman. 2001. Unsupervised learning by probabilistic latent semantic analysis. Machine Learning, 42(1):177–196. R. Jackendoff. 1990. Semantic Structures. MIT Press, Cambridge, Massachusetts. A. Korhonen, Y. Krymolowski, and Z. Marx. 2003. Clustering polysemic subcategorization frame distributions semantically. In Proceedings of the 41st Annual Meeting of the Association for Computational Linguistics, pages 64–71, Sapporo, Japan. A. Korhonen. 2002. Subcategorization Acquisition. Ph.D. thesis, University of Cambridge, UK. M. Lease and E. Charniak. 2005. Parsing biomedical literature. In Second International Joint Conference on Natural Language Processing, pages 58–69. G. Leech. 1992. 100 million words of English: the British National Corpus. Language Research, 28(1):1–13. B. Levin. 1993. English Verb Classes and Alternations. Chicago University Press, Chicago. P. Merlo and S. Stevenson. 2001. Automatic verb classification based on statistical distributions of argument structure. Computational Linguistics, 27(3):373–408. 
G. A. Miller. 1990. WordNet: An on-line lexical database. International Journal of Lexicography, 3(4):235–312. D. Prescher, S. Riezler, and M. Rooth. 2000. Using a probabilistic class-based lexicon for lexical ambiguity resolution. In 18th International Conference on Computational Linguistics, pages 649–655, Saarbr¨ucken, Germany. I. Spasic, S. Ananiadou, and J. Tsujii. 2005. Masterclass: A case-based reasoning system for the classification of biomedical terms. Journal of Bioinformatics, 21(11):2748–2758. N. Tishby, F. C. Pereira, and W. Bialek. 1999. The information bottleneck method. In Proc. of the 37th Annual Allerton Conference on Communication, Control and Computing, pages 368–377. A. Vlachos, C. Gasperin, I. Lewin, and E. J. Briscoe. 2006. Bootstrapping the recognition and anaphoric linking of named entitites in drosophila articles. In Pacific Symposium in Biocomputing. 352
Proceedings of the 21st International Conference on Computational Linguistics and 44th Annual Meeting of the ACL, pages 353–360, Sydney, July 2006. c⃝2006 Association for Computational Linguistics Selection of Effective Contextual Information for Automatic Synonym Acquisition Masato Hagiwara, Yasuhiro Ogawa, and Katsuhiko Toyama Graduate School of Information Science, Nagoya University Furo-cho, Chikusa-ku, Nagoya, JAPAN 464-8603 {hagiwara, yasuhiro, toyama}@kl.i.is.nagoya-u.ac.jp Abstract Various methods have been proposed for automatic synonym acquisition, as synonyms are one of the most fundamental lexical knowledge. Whereas many methods are based on contextual clues of words, little attention has been paid to what kind of categories of contextual information are useful for the purpose. This study has experimentally investigated the impact of contextual information selection, by extracting three kinds of word relationships from corpora: dependency, sentence co-occurrence, and proximity. The evaluation result shows that while dependency and proximity perform relatively well by themselves, combination of two or more kinds of contextual information gives more stable performance. We’ve further investigated useful selection of dependency relations and modification categories, and it is found that modification has the greatest contribution, even greater than the widely adopted subjectobject combination. 1 Introduction Lexical knowledge is one of the most important resources in natural language applications, making it almost indispensable for higher levels of syntactical and semantic processing. Among many kinds of lexical relations, synonyms are especially useful ones, having broad range of applications such as query expansion technique in information retrieval and automatic thesaurus construction. Various methods (Hindle, 1990; Lin, 1998; Hagiwara et al., 2005) have been proposed for synonym acquisition. Most of the acquisition methods are based on distributional hypothesis (Harris, 1985), which states that semantically similar words share similar contexts, and it has been experimentally shown considerably plausible. However, whereas many methods which adopt the hypothesis are based on contextual clues concerning words, and there has been much consideration on the language models such as Latent Semantic Indexing (Deerwester et al., 1990) and Probabilistic LSI (Hofmann, 1999) and synonym acquisition method, almost no attention has been paid to what kind of categories of contextual information, or their combinations, are useful for word featuring in terms of synonym acquisition. For example, Hindle (1990) used cooccurrences between verbs and their subjects and objects, and proposed a similarity metric based on mutual information, but no exploration concerning the effectiveness of other kinds of word relationship is provided, although it is extendable to any kinds of contextual information. Lin (1998) also proposed an information theorybased similarity metric, using a broad-coverage parser and extracting wider range of grammatical relationship including modifications, but he didn’t further investigate what kind of relationships actually had important contributions to acquisition, either. The selection of useful contextual information is considered to have a critical impact on the performance of synonym acquisition. This is an independent problem from the choice of language model or acquisition method, and should therefore be examined by itself. 
The purpose of this study is to experimentally investigate the impact of contextual information selection for automatic synonym acquisition. Because nouns are the main target of synonym acquisition, here we limit the target of acquisition to nouns, and firstly extract the co-occurrences between nouns and three categories of contextual information — dependency, sentence co-occurrence, and proximity — from each of three different corpora, and the performance of individual categories and their combinations is evaluated. Since dependency and modification relations are considered to have greater contributions in contextual information and in the dependency category, respectively, these categories are then broken down into smaller categories to examine their individual significance. Because consideration of the language model and acquisition methods is not within the scope of the current study, the widely used vector space model (VSM), tf·idf weighting scheme, and cosine measure are adopted for similarity calculation. The result is evaluated using two automatic evaluation methods we proposed and implemented: discrimination rate and correlation coefficient based on the existing thesaurus WordNet.

This paper is organized as follows: in Section 2, the three kinds of contextual information we use are described, and the following Section 3 explains the synonym acquisition method. In Section 4 the evaluation method we employed is detailed, which consists of the calculation methods of reference similarity, discrimination rate, and correlation coefficient. Section 5 provides the experimental conditions and results of contextual information selection, followed by dependency and modification selection. Section 6 concludes this paper.

2 Contextual Information

In this study, we focused on three kinds of contextual information: dependency between words, sentence co-occurrence, and proximity, that is, co-occurrence with other words in a window, details of which are provided in the following sections.

2.1 Dependency

The first category of contextual information we employed is the dependency between words in a sentence, which we suppose is most commonly used for synonym acquisition as the context of words. The dependency here includes predicate-argument structure such as subjects and objects of verbs, and modifications of nouns. As the extraction of accurate and comprehensive grammatical relations is in itself a difficult task, the sophisticated parser RASP Toolkit (Briscoe and Carroll, 2002) was utilized to extract this kind of word relation.

Figure 1: Hierarchy of grammatical relations and groups (labels include dependent, arg_mod, arg, aux, conj, mod, ncmod, xmod, cmod, detmod, subj_or_dobj, subj, ncsubj, xsubj, csubj, comp, obj, dobj, obj2, iobj, clausal, xcomp, ccomp)

RASP analyzes input sentences and provides a wide variety of grammatical information such as POS tags, dependency structure, and parsed trees as output, among which we paid attention to the dependency structure called grammatical relations (GRs) (Briscoe et al., 2002). GRs represent relationships among two or more words and are specified by labels, which construct the hierarchy shown in Figure 1. In this hierarchy, the upper levels correspond to more general relations whereas the lower levels correspond to more specific ones. Although the most general relationship in GRs is “dependent”, more specific labels are assigned whenever possible. The representation of the contextual information using GRs is as follows.
Take the following sentence for example: Shipments have been relatively level since January, the Commerce Department noted. RASP outputs the extracted GRs as n-ary relations as follows: (ncsubj note Department obj) (ncsubj be Shipment _) (xcomp _ be level) (mod _ level relatively) (aux _ be have) (ncmod since be January) (mod _ Department note) (ncmod _ Department Commerce) 354 (detmod _ Department the) (ncmod _ be Department) While most of GRs extracted by RASP are binary relations of head and dependent, there are some relations that contain additional slot or extra information regarding the relations, as shown “ncsubj” and “ncmod” in the above example. To obtain the final representation that we require for synonym acquisition, that is, the co-occurrence between words and their contexts, these relationships must be converted to binary relations, i.e., co-occurrence. We consider the concatenation of all the rest of the target word as context: Department ncsubj:note:*:obj shipment ncsubj:be:*:_ January ncmod:since:be:* Department mod:_:*:note Department ncmod:_:*:Commerce Commerce ncmod:_:Department:* Department detmod:_:*:the Department ncmod:_:be:* The slot for the target word is replaced by “*” in the context. Note that only the contexts for nouns are extracted because our purpose here is the automatic extraction of synonymous nouns. 2.2 Sentence Co-occurrence As the second category of contextual information, we used the sentence co-occurrence, i.e., which sentence words appear in. Using this context is, in other words, essentially the same as featuring words with the sentences in which they occur. Treating single sentences as documents, this featuring corresponds to exploiting transposed termdocument matrix in the information retrieval context, and the underlying assumption is that words that commonly appear in the similar documents or sentences are considered semantically similar. 2.3 Proximity The third category of contextual information, proximity, utilizes tokens that appear in the vicinity of the target word in a sentence. The basic assumption here is that the more similar the distribution of proceeding and succeeding words of the target words are, the more similar meaning these two words possess, and its effectiveness has been previously shown (Macro Baroni and Sabrina Bisi, 2004). To capture the word proximity, we consider a window with a certain radius, and treat the label of the word and its position within the window as context. The contexts for the previous example sentence, when the window radius is 3, are then: shipment R1:have shipment R2:be shipment R3:relatively January L1:since January L2:level January L3:relatively January R1:, January R2:the January R3:Commerce Commerce L1:the Commerce L2:, Commerce L3:January Commerce R1:Department ... Note that the proximity includes tokens such as punctuation marks as context, because we suppose they offer useful contextual information as well. 3 Synonym Acquisition Method The purpose of the current study is to investigate the impact of the contextual information selection, not the language model itself, we employed one of the most commonly used method: vector space model (VSM) and tf·idf weighting scheme. In this framework, each word is represented as a vector in a vector space, whose dimensions correspond to contexts. The elements of the vectors given by tf·idf are the co-occurrence frequencies of words and contexts, weighted by normalized idf. 
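As a rough sketch of the context extraction described in Section 2 (the weighting itself is detailed next), the following code turns RASP-style GR tuples and token windows into word-context co-occurrence pairs; it assumes the GRs are available as tuples of strings, and omits lemmatisation and the restriction of target words to nouns.

def dependency_contexts(grs, target):
    """grs: GR tuples such as ('ncmod', '_', 'Department', 'Commerce').
    Returns (target, context) pairs where the target's slot is replaced by '*'
    and the remaining slots are concatenated, e.g. 'ncmod:_:*:Commerce'."""
    pairs = []
    for gr in grs:
        for i, token in enumerate(gr):
            if token == target:
                slots = list(gr)
                slots[i] = '*'
                pairs.append((target, ':'.join(slots)))
    return pairs

def proximity_contexts(tokens, target, radius=3):
    """Window contexts Ln/Rn for every occurrence of target in a token list,
    including punctuation tokens, as in Section 2.3."""
    pairs = []
    for i, token in enumerate(tokens):
        if token != target:
            continue
        for d in range(1, radius + 1):
            if i - d >= 0:
                pairs.append((target, 'L%d:%s' % (d, tokens[i - d])))
            if i + d < len(tokens):
                pairs.append((target, 'R%d:%s' % (d, tokens[i + d])))
    return pairs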
That is, denoting the number of distinct words and contexts as N and M, respectively, wi = t[tf(wi, c1) · idf(c1) ... tf(wi, cM) · idf(cM)], (1) where tf(wi, cj) is the co-occurrence frequency of word wi and context cj. idf(cj) is given by idf(cj) = log(N/df(cj)) maxk log(N/df(vk)), (2) where df(cj) is the number of distinct words that co-occur with context cj. Although VSM and tf·idf are naive and simple compared to other language models like LSI and PLSI, they have been shown effective enough for the purpose (Hagiwara et al., 2005). The similarity between two words are then calculated as the cosine value of two corresponding vectors. 4 Evaluation This section describes the evaluation methods we employed for automatic synonym acquisition. The evaluation is to measure how similar the obtained similarities are to the “true” similarities. We firstly prepared the reference similarities from the existing thesaurus WordNet as described in Section 4.1, 355 and by comparing the reference and obtained similarities, two evaluation measures, discrimination rate and correlation coefficient, are calculated automatically as described in Sections 4.2 and 4.3. 4.1 Reference similarity calculation using WordNet As the basis for automatic evaluation methods, the reference similarity, which is the answer value that similarity of a certain pair of words “should take,” is required. We obtained the reference similarity using the calculation based on thesaurus tree structure (Nagao, 1996). This calculation method requires no other resources such as corpus, thus it is simple to implement and widely used. The similarity between word sense wi and word sense vj is obtained using tree structure as follows. Let the depth1 of node wi be di, the depth of node vj be dj, and the maximum depth of the common ancestors of both nodes be ddca. The similarity between wi and vj is then calculated as sim(wi, vj) = 2 · ddca di + dj , (3) which takes the value between 0.0 and 1.0. Figure 2 shows the example of calculating the similarity between the word senses “hill” and “coast.” The number on the side of each word sense represents the word’s depth. From this tree structure, the similarity is obtained: sim(“hill”, “coast”) = 2 · 3 5 + 5 = 0.6. (4) The similarity between word w with senses w1, ..., wn and word v with senses v1, ..., vm is defined as the maximum similarity between all the pairs of word senses: sim(w, v) = max i,j sim(wi, vj), (5) whose idea came from Lin’s method (Lin, 1998). 4.2 Discrimination Rate The following two sections describe two evaluation measures based on the reference similarity. The first one is discrimination rate (DR). DR, originally proposed by Kojima et al. (2004), is the rate 1To be precise, the structure of WordNet, where some word senses have more than one parent, isn’t a tree but a DAG. The depth of a node is, therefore, defined here as the “maximum distance” from the root node. entity 0 inanimate-object 1 natural-object 2 geological-formation 3 4 natural-elevation 5 hill shore 4 coast 5 Figure 2: Example of automatic similarity calculation based on tree structure (answer, reply) (phone, telephone) (sign, signal) (concern, worry) (animal, coffee) (him, technology) (track, vote) (path, youth) … … highly related unrelated Figure 3: Test-sets for discrimination rate calculation. (percentage) of pairs (w1, w2) whose degree of association between two words w1, w2 is successfully discriminated by the similarity derived by the method under evaluation. Kojima et al. 
dealt with three-level discrimination of a pair of words, that is, highly related (synonyms or nearly synonymous), moderately related (a certain degree of association), and unrelated (irrelevant). However, we omitted the moderately related level and limited the discrimination to two-level: high or none, because of the difficulty of preparing a test set that consists of moderately related pairs. The calculation of DR follows these steps: first, two test sets, one of which consists of highly related word pairs and the other of unrelated ones, are prepared, as shown in Figure 3. The similarity between w1 and w2 is then calculated for each pair (w1, w2) in both test sets via the method under evaluation, and the pair is labeled highly related when similarity exceeds a given threshold t and unrelated when the similarity is lower than t. The number of pairs labeled highly related in the highly related test set and unrelated in the unrelated test set are denoted na and nb, respectively. 356 DR is then given by: 1 2 µ na Na + nb Nb ¶ , (6) where Na and Nb are the numbers of pairs in highly related and unrelated test sets, respectively. Since DR changes depending on threshold t, maximum value is adopted by varying t. We used the reference similarity to create these two test sets. Firstly, Np = 100, 000 pairs of words are randomly created using the target vocabulary set for synonym acquisition. Proper nouns are omitted from the choice here because of their high ambiguity. The two testsets are then created extracting n = 2, 000 most related (with high reference similarity) and unrelated (with low reference similarity) pairs. 4.3 Correlation coefficient The second evaluation measure is correlation coefficient (CC) between the obtained similarity and the reference similarity. The higher CC value is, the more similar the obtained similarities are to WordNet, thus more accurate the synonym acquisition result is. The value of CC is calculated as follows. Let the set of the sample pairs be Ps, the sequence of the reference similarities calculated for the pairs in Ps be r = (r1, r2, ..., rn), the corresponding sequence of the target similarity to be evaluated be r = (s1, s2, ..., sn), respectively. Correlation coefficient ρ is then defined by: ρ = 1 n Pn i=1(ri −¯r)(si −¯s) σrσs , (7) where ¯r, ¯s, σr, and σs represent the average of r and s and the standard deviation of r and s, respectively. The set of the sample pairs Ps is created in a similar way to the preparation of highly related test set used in DR calculation, except that we employed Np = 4, 000, n = 2, 000 to avoid extreme nonuniformity. 5 Experiments Now we desribe the experimental conditions and results of contextual information selection. 5.1 Condition We used the following three corpora for the experiment: (1) Wall Street Journal (WSJ) corpus (approx. 68,000 sentences, 1.4 million tokens), (2) Brown Corpus (BROWN) (approx. 60,000 sentences, 1.3 million tokens), both of which are contained in Treebank 3 (Marcus, 1994), and (3) written sentences in WordBank (WB) (approx. 190,000 sentences, 3.5 million words) (HyperCollins, 2002). No additional annotation such as POS tags provided for Treebank was used, which means that we gave the plain texts stripped off any additional information to RASP as input. To distinguish nouns, using POS tags annotated by RASP, any words with POS tags APP, ND, NN, NP, PN, PP were labeled as nouns. The window radius for proximity is set to 3. 
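As a rough sketch of the two measures defined in Sections 4.2 and 4.3, assuming the similarities for the test pairs have already been computed, the DR threshold below is simply swept over the observed similarity values; the code is illustrative rather than the implementation used here.

import numpy as np

def discrimination_rate(sim_related, sim_unrelated):
    """sim_related / sim_unrelated: similarity scores produced by the method
    under evaluation for the highly related and unrelated test pairs.
    DR is maximised over the threshold t (Section 4.2)."""
    best = 0.0
    for t in np.unique(np.concatenate([sim_related, sim_unrelated])):
        na = np.sum(sim_related >= t)     # related pairs labelled highly related
        nb = np.sum(sim_unrelated < t)    # unrelated pairs labelled unrelated
        dr = 0.5 * (na / len(sim_related) + nb / len(sim_unrelated))
        best = max(best, dr)
    return best

def correlation_coefficient(reference, obtained):
    """Correlation between the WordNet-based reference similarities and the
    obtained similarities over the sample pairs (Section 4.3)."""
    r, s = np.asarray(reference, float), np.asarray(obtained, float)
    return ((r - r.mean()) * (s - s.mean())).mean() / (r.std() * s.std())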
We also set a threshold tf on occurrence frequency in order to filter out any words or contexts with low frequency and to reduce computational cost. More specifically, any words w such that P c tf(w, c) < tf and any contexts c such that P w tf(w, c) < tf were removed from the co-occurrence data. tf was set to tf = 5 for WSJ and BROWN, and tf = 10 for WB in Sections 5.2 and 5.3, and tf = 2 for WSJ and BROWN and tf = 5 for WB in Section 5.4. 5.2 Contextual Information Selection In this section, we experimented to discover what kind of contextual information extracted in Section 2 is useful for synonym extraction. The performances, i.e. DR and CC are evaluated for each of the three categories and their combinations. The evaluation result for three corpora is shown in Figure 4. Notice that the range and scale of the vertical axes of the graphs vary according to corpus. The result shows that dependency and proximity perform relatively well alone, while sentence co-occurrence has almost no contributions to performance. However, when combined with other kinds of context information, every category, even sentence co-occurrence, serves to “stabilize” the overall performance, although in some cases combination itself decreases individual measures slightly. It is no surprise that the combination of all categories achieves the best performance. Therefore, in choosing combination of different kinds of context information, one should take into consideration the economical efficiency and trade-off between computational complexity and overall performance stability. 5.3 Dependency Selection We then focused on the contribution of individual categories of dependency relation, i.e. groups of grammatical relations. The following four groups 357 65.0% 65.5% 66.0% 66.5% 67.0% 67.5% 68.0% 68.5% discrimination rate (DR)a 0.09 0.10 0.11 0.12 0.13 correlation coefficient (CC)) DR CC dep sent prox dep sent dep prox sent prox all (1) WSJ DR = 52.8% CC = -0.0029 sent: 65.0% 65.5% 66.0% 66.5% 67.0% 67.5% 68.0% 68.5% 69.0% discrimination rate (DR)a 0.13 0.14 0.15 correlation coefficient (CC)) DR CC dep sent prox dep sent dep prox sent prox all (2) BROWN DR = 53.8% CC = 0.060 sent: 66.0% 66.5% 67.0% 67.5% 68.0% 68.5% 69.0% discrimination rate (DR)a 0.16 0.17 0.18 0.19 correlation coefficient (CC)) DR CC dep sent prox dep sent dep prox sent prox all (3) WB DR = 52.2% CC = 0.0066 sent: Figure 4: Contextual information selection performances Discrimination rate (DR) and correlation coefficient (CC) for (1) Wall Street Journal corpus, (2) Brown Corpus, and (3) WordBank. of GRs are considered for comparison convenience: (1) subj group (“subj”, “ncsubj”, “xsubj”, and “csubj”), (2) obj group (“obj”, “dobj”, “obj2”, and “iobj”), (3) mod group (“mod”, “ncmod”, “xmod”, “cmod”, and “detmod”), and (4) etc group (others), as shown in the circles in Figure 1. This is because distinction between relations in a group is sometimes unclear, and is considered to strongly depend on the parser implementation. The final target is seven kinds of combinations of the above four groups: subj, obj, mod, etc, subj+obj, subj+obj+mod, and all. The two evaluation measures are similarly calculated for each group and combination, and shown in Figure 5. Although subjects, objects, and their combination are widely used contextual information, the performances for subj and obj categories, as well as their combination subj+obj, were relatively poor. 
On the contrary, the result clearly shows the importance of modification, which alone is even better than widely adopted subj+obj. The “stabilization effect” of combinations observed in the previous experiment is also confirmed here as well. Because the size of the co-occurrence data varies from one category to another, we conducted another experiment to verify that the superiority of the modification category is simply due to the difference in the quality (content) of the group, not the quantity (size). We randomly extracted 100,000 pairs from each of mod and subj+obj categories to cancel out the quantity difference and compared the performance by calculating averaged DR and CC of ten trials. The result showed that, while the overall performances substantially decreased due to the size reduction, the relation between groups was preserved before and after the extraction throughout all of the three corpora, although the detailed result is not shown due to the space limitation. This means that what essentially contributes to the performance is not the size of the modification category but its content. 5.4 Modification Selection As the previous experiment shows that modifications have the biggest significance of all the dependency relationship, we further investigated what kind of modifications is useful for the purpose. To do this, we broke down the mod group into these five categories according to modifying word’s category: (1) detmod, when the GR label is “det358 54.0% 56.0% 58.0% 60.0% 62.0% 64.0% 66.0% 68.0% discrimination rate (DR)a 0.00 0.02 0.04 0.06 0.08 0.10 0.12 0.14 correlation coefficient (CC)) DR CC subj obj mod etc subj obj subj obj mod all (1) WSJ 54.0% 56.0% 58.0% 60.0% 62.0% 64.0% 66.0% 68.0% discrimination rate (DR)a 0.00 0.02 0.04 0.06 0.08 0.10 0.12 0.14 0.16 correlation coefficient (CC)) DR CC subj obj mod etc subj obj subj obj mod all (2) BROWN 54.0% 56.0% 58.0% 60.0% 62.0% 64.0% 66.0% 68.0% 70.0% discrimination rate (DR)a 0.00 0.02 0.04 0.06 0.08 0.10 0.12 0.14 0.16 0.18 0.20 correlation coefficient (CC)) DR CC subj obj mod etc subj obj subj obj mod all (3) WB Figure 5: Dependency selection performances Discrimination rate (DR) and correlation coefficient (CC) for (1) Wall Street Journal corpus, (2) Brown Corpus, and (3) WordBank. 50.0% 52.0% 54.0% 56.0% 58.0% 60.0% 62.0% 64.0% 66.0% discrimination rate (DR)a 0.00 0.02 0.04 0.06 0.08 0.10 0.12 correlation coefficient (CC)) DR CC detmod ncmod-n ncmod-j ncmod-p etc all (1) WSJ 50.0% 52.0% 54.0% 56.0% 58.0% 60.0% 62.0% 64.0% 66.0% discrimination rate (DR)a 0.00 0.02 0.04 0.06 0.08 0.10 0.12 0.14 correlation coefficient (CC)) DR CC detmod ncmod-n ncmod-j ncmod-p etc all (2) BROWN CC = -0.018 57.0% 59.0% 61.0% 63.0% 65.0% 67.0% discrimination rate (DR)a 0.04 0.06 0.08 0.10 0.12 0.14 0.16 0.18 correlation coefficient (CC)) DR CC detmod ncmod-n ncmod-j ncmod-p etc all (3) WB Figure 6: Modification selection performances Discrimination rate (DR) and correlation coefficient (CC) for (1) Wall Street Journal corpus, (2) Brown Corpus, and (3) WordBank. 359 mod”, i.e., the modifying word is a determiner, (2) ncmod-n, when the GR label is “ncmod” and the modifying word is a noun, (3) ncmod-j, when the GR label is “ncmod” and the modifying word is an adjective or number, (4) ncmod-p, when the GR label is “ncmod” and the modification is through a preposition (e.g. “state” and “affairs” in “state of affairs”), and (5) etc (others). The performances for each modification category are evaluated and shown in Figure 6. 
Although some individual modification categories such as detmod and ncmod-j outperform other categories in some cases, the overall observation is that all the modification categories contribute to synonym acquisition to some extent, and the effect of individual categories are accumulative. We therefore conclude that the main contributing factor on utilizing modification relationship in synonym acquisition isn’t the type of modification, but the diversity of the relations. 6 Conclusion In this study, we experimentally investigated the impact of contextual information selection, by extracting three kinds of contextual information — dependency, sentence co-occurrence, and proximity — from three different corpora. The acquisition result was evaluated using two evaluation measures, DR and CC using the existing thesaurus WordNet. We showed that while dependency and proximity perform relatively well by themselves, combination of two or more kinds of contextual information, even with the poorly performing sentence co-occurrence, gives more stable result. The selection should be chosen considering the tradeoff between computational complexity and overall performance stability. We also showed that modification has the greatest contribution to the acquisition of all the dependency relations, even greater than the widely adopted subject-object combination. It is also shown that all the modification categories contribute to the acquisition to some extent. Because we limited the target to nouns, the result might be specific to nouns, but the same experimental framework is applicable to any other categories of words. Although the result also shows the possibility that the bigger the corpus is, the better the performance will be, the contents and size of the corpora we used are diverse, so their relationship, including the effect of the window radius, should be examined as the future work. References Marco Baroni and Sabrina Bisi 2004. Using cooccurrence statistics and the web to discover synonyms in a technical language. Proc. of the Fourth International Conference on Language Resources and Evaluation (LREC 2004). Ted Briscoe and John Carroll. 2002. Robust Accurate Statistical Annotation of General Text. Proc. of the Third International Conference on Language Resources and Evaluation (LREC 2002), 1499–1504. Ted Briscoe, John Carroll, Jonathan Graham and Ann Copestake 2002. Relational evaluation schemes. Proc. of the Beyond PARSEVAL Workshop at the Third International Conference on Language Resources and Evaluation, 4–8. Scott Deerwester, et al. 1990. Indexing by Latent Semantic Analysis. Journal of the American Society for Information Science, 41(6):391–407. Christiane Fellbaum. 1998. WordNet: an electronic lexical database. MIT Press. Masato Hagiwara, Yasuhiro Ogawa, Katsuhiko Toyama. 2005. PLSI Utilization for Automatic Thesaurus Construction. Proc. of The Second International Joint Conference on Natural Language Processing (IJCNLP-05), 334–345. Zellig Harris. 1985. Distributional Structure. Jerrold J. Katz (ed.) The Philosophy of Linguistics. Oxford University Press. 26–47. Donald Hindle. 1990. Noun classification from predicate-argument structures. Proc. of the 28th Annual Meeting of the ACL, 268–275. Thomas Hofmann. 1999. Probabilistic Latent Semantic Indexing. Proc. of the 22nd International Conference on Research and Development in Information Retrieval (SIGIR ’99), 50–57. Kazuhide Kojima, Hirokazu Watabe, and Tsukasa Kawaoka. 2004. 
Existence and Application of Common Threshold of the Degree of Association. Proc. of the Forum on Information Technology (FIT2004) F-003. Collins. 2002. Collins Cobuild Mld Major New Edition CD-ROM. HarperCollins Publishers. Dekang Lin. 1998. Automatic retrieval and clustering of similar words. Proc. of the 36th Annual Meeting of the Association for Computational Linguistics and 17th International Conference on Computational linguistics (COLING-ACL ’98), 786–774. Mitchell P. Marcus, Beatrice Santorini, and Mary Ann Marcinkiewicz. 1994. Building a large annotated corpus of English: The Penn treebank. Computational Linguistics, 19(2):313–330. Makoto Nagao (ed.). 1996. Shizengengoshori. The Iwanami Software Science Series 15, Iwanami Shoten Publishers. 360
Proceedings of the 21st International Conference on Computational Linguistics and 44th Annual Meeting of the ACL, pages 361–368, Sydney, July 2006. c⃝2006 Association for Computational Linguistics Scaling Distributional Similarity to Large Corpora James Gorman and James R. Curran School of Information Technologies University of Sydney NSW 2006, Australia {jgorman2,james}@it.usyd.edu.au Abstract Accurately representing synonymy using distributional similarity requires large volumes of data to reliably represent infrequent words. However, the na¨ıve nearestneighbour approach to comparing context vectors extracted from large corpora scales poorly (O(n2) in the vocabulary size). In this paper, we compare several existing approaches to approximating the nearestneighbour search for distributional similarity. We investigate the trade-off between efficiency and accuracy, and find that SASH (Houle and Sakuma, 2005) provides the best balance. 1 Introduction It is a general property of Machine Learning that increasing the volume of training data increases the accuracy of results. This is no more evident than in Natural Language Processing (NLP), where massive quantities of text are required to model rare language events. Despite the rapid increase in computational power available for NLP systems, the volume of raw data available still outweighs our ability to process it. Unsupervised learning, which does not require the expensive and timeconsuming human annotation of data, offers an opportunity to use this wealth of data. Curran and Moens (2002) show that synonymy extraction for lexical semantic resources using distributional similarity produces continuing gains in accuracy as the volume of input data increases. Extracting synonymy relations using distributional similarity is based on the distributional hypothesis that similar words appear in similar contexts. Terms are described by collating information about their occurrence in a corpus into vectors. These context vectors are then compared for similarity. Existing approaches differ primarily in their definition of “context”, e.g. the surrounding words or the entire document, and their choice of distance metric for calculating similarity between the context vectors representing each term. Manual creation of lexical semantic resources is open to the problems of bias, inconsistency and limited coverage. It is difficult to account for the needs of the many domains in which NLP techniques are now being applied and for the rapid change in language use. The assisted or automatic creation and maintenance of these resources would be of great advantage. Finding synonyms using distributional similarity requires a nearest-neighbour search over the context vectors of each term. This is computationally intensive, scaling to O(n2m) for the number of terms n and the size of their context vectors m. Increasing the volume of input data will increase the size of both n and m, decreasing the efficiency of a na¨ıve nearest-neighbour approach. Many approaches to reduce this complexity have been suggested. In this paper we evaluate state-of-the-art techniques proposed to solve this problem. We find that the Spatial Approximation Sample Hierarchy (Houle and Sakuma, 2005) provides the best accuracy/efficiency trade-off. 2 Distributional Similarity Measuring distributional similarity first requires the extraction of context information for each of the vocabulary terms from raw text. 
These terms are then compared for similarity using a nearestneighbour search or clustering based on distance calculations between the statistical descriptions of their contexts. 361 2.1 Extraction A context relation is defined as a tuple (w, r, w′) where w is a term, which occurs in some grammatical relation r with another word w′ in some sentence. We refer to the tuple (r, w′) as an attribute of w. For example, (dog, direct-obj, walk) indicates that dog was the direct object of walk in a sentence. In our experiments context extraction begins with a Maximum Entropy POS tagger and chunker. The SEXTANT relation extractor (Grefenstette, 1994) produces context relations that are then lemmatised. The relations for each term are collected together and counted, producing a vector of attributes and their frequencies in the corpus. 2.2 Measures and Weights Both nearest-neighbour and cluster analysis methods require a distance measure to calculate the similarity between context vectors. Curran (2004) decomposes this into measure and weight functions. The measure calculates the similarity between two weighted context vectors and the weight calculates the informativeness of each context relation from the raw frequencies. For these experiments we use the Jaccard (1) measure and the TTest (2) weight functions, found by Curran (2004) to have the best performance. P (r,w′) min(w(wm, r, w′), w(wn, r, w′)) P (r,w′) max(w(wm, r, w′), w(wn, r, w′)) (1) p(w, r, w′) −p(∗, r, w′)p(w, ∗, ∗) p p(∗, r, w′)p(w, ∗, ∗) (2) 2.3 Nearest-neighbour Search The simplest algorithm for finding synonyms is a k-nearest-neighbour (k-NN) search, which involves pair-wise vector comparison of the target term with every term in the vocabulary. Given an n term vocabulary and up to m attributes for each term, the asymptotic time complexity of nearestneighbour search is O(n2m). This is very expensive, with even a moderate vocabulary making the use of huge datasets infeasible. Our largest experiments used a vocabulary of over 184,000 words. 3 Dimensionality Reduction Using a cut-off to remove low frequency terms can significantly reduce the value of n. Unfortunately, reducing m by eliminating low frequency contexts has a significant impact on the quality of the results. There are many techniques to reduce dimensionality while avoiding this problem. The simplest methods use feature selection techniques, such as information gain, to remove the attributes that are less informative. Other techniques smooth the data while reducing dimensionality. Latent Semantic Analysis (LSA, Landauer and Dumais, 1997) is a smoothing and dimensionality reduction technique based on the intuition that the true dimensionality of data is latent in the surface dimensionality. Landauer and Dumais admit that, from a pragmatic perspective, the same effect as LSA can be generated by using large volumes of data with very long attribute vectors. Experiments with LSA typically use attribute vectors of a dimensionality of around 1000. Our experiments have a dimensionality of 500,000 to 1,500,000. Decompositions on data this size are computationally difficult. Dimensionality reduction is often used before using LSA to improve its scalability. 3.1 Heuristics Another technique is to use an initial heuristic comparison to reduce the number of full O(m) vector comparisons that are performed. If the heuristic comparison is sufficiently fast and a sufficient number of full comparisons are avoided, the cost of an additional check will be easily absorbed by the savings made. 
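As an illustration of the measure and weight functions of Section 2.2, the sketch below computes the TTest weight (2) from raw frequencies and the weighted Jaccard measure (1) over two context vectors stored as attribute-to-weight dictionaries; the argument names are ours and no smoothing or filtering is applied.

from math import sqrt

def ttest_weight(freq, total, attr_freq, word_freq):
    """TTest weight for one (word, attribute) co-occurrence.

    freq      -- f(w, r, w'): joint frequency of the word and attribute
    total     -- total number of extracted context relations
    attr_freq -- f(*, r, w'): frequency of the attribute over all words
    word_freq -- f(w, *, *): frequency of the word over all attributes
    """
    p_joint = freq / total
    p_attr = attr_freq / total
    p_word = word_freq / total
    return (p_joint - p_attr * p_word) / sqrt(p_attr * p_word)

def jaccard(weights_m, weights_n):
    """Weighted Jaccard measure over two {attribute: weight} dicts."""
    attrs = set(weights_m) | set(weights_n)
    num = sum(min(weights_m.get(a, 0.0), weights_n.get(a, 0.0)) for a in attrs)
    den = sum(max(weights_m.get(a, 0.0), weights_n.get(a, 0.0)) for a in attrs)
    return num / den if den else 0.0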
Curran and Moens (2002) introduces a vector of canonical attributes (of bounded length k ≪m), selected from the full vector, to represent the term. These attributes are the most strongly weighted verb attributes, chosen because they constrain the semantics of the term more and partake in fewer idiomatic collocations. If a pair of terms share at least one canonical attribute then a full similarity comparison is performed, otherwise the terms are not compared. They show an 89% reduction in search time, with only a 3.9% loss in accuracy. There is a significant improvement in the computational complexity. If a maximum of p positive results are returned, our complexity becomes O(n2k + npm). When p ≪n, the system will be faster as many fewer full comparisons will be made, but at the cost of accuracy as more possibly near results will be discarded out of hand. 4 Randomised Techniques Conventional dimensionality reduction techniques can be computationally expensive: a more scal362 able solution is required to handle the volumes of data we propose to use. Randomised techniques provide a possible solution to this. We present two techniques that have been used recently for distributional similarity: Random Indexing (Kanerva et al., 2000) and Locality Sensitive Hashing (LSH, Broder, 1997). 4.1 Random Indexing Random Indexing (RI) is a hashing technique based on Sparse Distributed Memory (Kanerva, 1993). Karlgren and Sahlgren (2001) showed RI produces results similar to LSA using the Test of English as a Foreign Language (TOEFL) evaluation. Sahlgren and Karlgren (2005) showed the technique to be successful in generating bilingual lexicons from parallel corpora. In RI, we first allocate a d length index vector to each unique attribute. The vectors consist of a large number of 0s and small number (ϵ) number of randomly distributed ±1s. Context vectors, identifying terms, are generated by summing the index vectors of the attributes for each non-unique context in which a term appears. The context vector for a term t appearing in contexts c1 = [1, 0, 0, −1] and c2 = [0, 1, 0, −1] would be [1, 1, 0, −2]. The distance between these context vectors is then measured using the cosine measure: cos(θ(u, v)) = ⃗u · ⃗v |⃗u| |⃗v| (3) This technique allows for incremental sampling, where the index vector for an attribute is only generated when the attribute is encountered. Construction complexity is O(nmd) and search complexity is O(n2d). 4.2 Locality Sensitive Hashing LSH is a probabilistic technique that allows the approximation of a similarity function. Broder (1997) proposed an approximation of the Jaccard similarity function using min-wise independent functions. Charikar (2002) proposed an approximation of the cosine measure using random hyperplanes Ravichandran et al. (2005) used this cosine variant and showed it to produce over 70% accuracy in extracting synonyms when compared against Pantel and Lin (2002). Given we have n terms in an m′ dimensional space, we create d ≪m′ unit random vectors also of m′ dimensions, labelled {⃗r1, ⃗r2, ..., ⃗rd}. Each vector is created by sampling a Gaussian function m′ times, with a mean of 0 and a variance of 1. For each term w we construct its bit signature using the function h⃗r(⃗w) = ( 1 : ⃗r.⃗w ≥0 0 : ⃗r.⃗w < 0 where ⃗r is a spherically symmetric random vector of length d. The signature, ¯w, is the d length bit vector: ¯w = {h ⃗r1(⃗w), h ⃗r2(⃗w), . . . , h ⃗rd(⃗w)} The cost to build all n signatures is O(nm′d). 
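A minimal sketch of the random hyperplane signatures just described, together with the Hamming-based cosine approximation derived below, is the following; it assumes dense numpy context vectors and is illustrative only.

import numpy as np

def lsh_signatures(vectors, d, seed=0):
    """Random hyperplane bit signatures.

    vectors : array (n, m') of context vectors
    d       : number of random hyperplanes (signature length)
    """
    rng = np.random.default_rng(seed)
    hyperplanes = rng.standard_normal((d, vectors.shape[1]))  # d random vectors
    return (vectors @ hyperplanes.T >= 0).astype(np.uint8)    # h_r(w) bits

def approx_cosine(sig_u, sig_v):
    """Approximate cos(u, v) from the normalised Hamming distance."""
    hamming = np.count_nonzero(sig_u != sig_v)
    return np.cos(hamming / len(sig_u) * np.pi)

With d much smaller than the number of attributes, each pairwise comparison then costs O(d) bit operations rather than an O(m) vector comparison.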
For terms u and v, Goemans and Williamson (1995) approximate the angular similarity by p(h⃗r(⃗u) = h⃗r(⃗v)) = 1 −θ(⃗u,⃗u) π (4) where θ(⃗u,⃗u) is the angle between ⃗u and ⃗u. The angular similarity gives the cosine by cos(θ(⃗u,⃗u)) = cos((1 −p(h⃗r(⃗u) = h⃗r(⃗v)))π) (5) The probability can be derived from the Hamming distance: p(hr(u) = hr(v)) = 1 −H(¯u, ¯v) d (6) By combining equations 5 and 6 we get the following approximation of the cosine distance: cos(θ(⃗u,⃗u)) = cos H(¯u, ¯v) d  π  (7) That is, the cosine of two context vectors is approximated by the cosine of the Hamming distance between their two signatures normalised by the size of the signatures. Search is performed using Equation 7 and scales to O(n2d). 5 Data Structures The methods presented above fail to address the n2 component of the search complexity. Many data structures have been proposed that can be used to address this problem in similarity searching. We present three data structures: the vantage point tree (VPT, Yianilos, 1993), which indexes points in a metric space, Point Location in Equal 363 Balls (PLEB, Indyk and Motwani, 1998), a probabilistic structure that uses the bit signatures generated by LSH, and the Spatial Approximation Sample Hierarchy (SASH, Houle and Sakuma, 2005), which approximates a k-NN search. Another option inspired by IR is attribute indexing (INDEX). In this technique, in addition to each term having a reference to its attributes, each attribute has a reference to the terms referencing it. Each term is then only compared with the terms with which it shares attributes. We will give a theoretically comparison against other techniques. 5.1 Vantage Point Tree Metric space data structures provide a solution to near-neighbour searches in very high dimensions. These rely solely on the existence of a comparison function that satisfies the conditions of metricality: non-negativity, equality, symmetry and the triangle inequality. VPT is typical of these structures and has been used successfully in many applications. The VPT is a binary tree designed for range searches. These are searches limited to some distance from the target term but can be modified for k-NN search. VPT is constructed recursively. Beginning with a set of U terms, we take any term to be our vantage point p. This becomes our root. We now find the median distance mp of all other terms to p: mp = median{dist(p, u)|u ∈U}. Those terms u such that dist(p, u) ≤mp are inserted into the left sub-tree, and the remainder into the right subtree. Each sub-tree is then constructed as a new VPT, choosing a new vantage point from within its terms, until all terms are exhausted. Searching a VPT is also recursive. Given a term q and radius r, we begin by measuring the distance to the root term p. If dist(q, p) ≤r we enter p into our list of near terms. If dist(q, p) −r ≤mp we enter the left sub-tree and if dist(q, p) + r > mp we enter the right sub-tree. Both sub-trees may be entered. The process is repeated for each entered subtree, taking the vantage point of the sub-tree to be the new root term. To perform a k-NN search we use a backtracking decreasing radius search (Burkhard and Keller, 1973). The search begins with r = ∞, and terms are added to a list of the closest k terms. When the kth closest term is found, the radius is set to the distance between this term and the target. Each time a new, closer element is added to the list, the radius is updated to the distance from the target to the new kth closest term. 
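A simplified sketch of VPT construction and the backtracking decreasing-radius k-NN search just described is given below; it assumes an arbitrary metric dist over terms, takes the first term as the vantage point, and is not tuned for efficiency.

import heapq

def build_vpt(terms, dist):
    """Recursively build a vantage point tree as nested dicts."""
    if not terms:
        return None
    vp, rest = terms[0], terms[1:]
    if not rest:
        return {'vp': vp, 'mu': 0.0, 'left': None, 'right': None}
    dists = sorted(dist(vp, u) for u in rest)
    mu = dists[len(dists) // 2]                       # median distance
    left = [u for u in rest if dist(vp, u) <= mu]
    right = [u for u in rest if dist(vp, u) > mu]
    return {'vp': vp, 'mu': mu,
            'left': build_vpt(left, dist), 'right': build_vpt(right, dist)}

def knn_vpt(node, q, k, dist, best=None):
    """Backtracking decreasing-radius k-NN search; best is a max-heap of the
    current k closest terms, stored as (-distance, term) pairs."""
    if best is None:
        best = []
    if node is None:
        return best
    d = dist(q, node['vp'])
    if len(best) < k:
        heapq.heappush(best, (-d, node['vp']))
    elif d < -best[0][0]:
        heapq.heapreplace(best, (-d, node['vp']))
    r = float('inf') if len(best) < k else -best[0][0]    # current radius
    if d - r <= node['mu']:
        knn_vpt(node['left'], q, k, dist, best)
        r = float('inf') if len(best) < k else -best[0][0]
    if d + r > node['mu']:
        knn_vpt(node['right'], q, k, dist, best)
    return best

Sorting the returned heap by distance gives the k nearest terms; the same structure supports the plain range searches mentioned above.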
Construction complexity is O(n log n). Search complexity is claimed to be O(log n) for small radius searches. This does not hold for our decreasing radius search, whose worst case complexity is O(n). 5.2 Point Location in Equal Balls PLEB is a randomised structure that uses the bit signatures generated by LSH. It was used by Ravichandran et al. (2005) to improve the efficiency of distributional similarity calculations. Having generated our d length bit signatures for each of our n terms, we take these signatures and randomly permute the bits. Each vector has the same permutation applied. This is equivalent to a column reordering in a matrix where the rows are the terms and the columns the bits. After applying the permutation, the list of terms is sorted lexicographically based on the bit signatures. The list is scanned sequentially, and each term is compared to its B nearest neighbours in the list. The choice of B will effect the accuracy/efficiency trade-off, and need not be related to the choice of k. This is performed q times, using a different random permutation function each time. After each iteration, the current closest k terms are stored. For a fixed d, the complexity for the permutation step is O(qn), the sorting O(qn log n) and the search O(qBn). 5.3 Spatial Approximation Sample Hierarchy SASH approximates a k-NN search by precomputing some near neighbours for each node (terms in our case). This produces multiple paths between terms, allowing SASH to shape itself to the data set (Houle, 2003). The following description is adapted from Houle and Sakuma (2005). The SASH is a directed, edge-weighted graph with the following properties (see Figure 1): • Each term corresponds to a unique node. • The nodes are arranged into a hierarchy of levels, with the bottom level containing n 2 nodes and the top containing a single root node. Each level, except the top, will contain half as many nodes as the level below. • Edges between nodes are linked to consecutive levels. Each node will have at most p parent nodes in the level above, and c child nodes in the level below. 364 A B C D E F G H I J K L 1 2 3 4 5 Figure 1: A SASH, where p = 2, c = 3 and k = 2 • Every node must have at least one parent so that all nodes are reachable from the root. Construction begins with the nodes being randomly distributed between the levels. SASH is then constructed iteratively by each node finding its closest p parents in the level above. The parent will keep the closest c of these children, forming edges in the graph, and reject the rest. Any nodes without parents after being rejected are then assigned as children of the nearest node in the previous level with fewer than c children. Searching is performed by finding the k nearest nodes at each level, which are added to a set of near nodes. To limit the search, only those nodes whose parents were found to be nearest at the previous level are searched. The k closest nodes from the set of near nodes are then returned. The search complexity is O(ck log n). In Figure 1, the filled nodes demonstrate a search for the near-neighbours of some node q, using k = 2. Our search begins with the root node A. As we are using k = 2, we must find the two nearest children of A using our similarity measure. In this case, C and D are closer than B. We now find the closest two children of C and D. E is not checked as it is only a child of B. All other nodes are checked, including F and G, which are shared as children by B and C. From this level we chose G and H. 
The final levels are considered similarly. At this point we now have the list of near nodes A, C, D, G, H, I, J, K and L. From this we chose the two nodes nearest q, H and I marked in black, which are then returned. k can be varied at each level to force a larger number of elements to be tested at the base of the SASH using, for instance, the equation: ki = max{ k1−h−i log n , 1 2pc } (8) This changes our search complexity to: k1+ 1 log n k 1 log n−1 + pc2 2 log n (9) We use this geometric function in our experiments. Gorman and Curran (2005a; 2005b) found the performance of SASH for distributional similarity could be improved by replacing the initial random ordering with a frequency based ordering. In accordance with Zipf’s law, the majority of terms have low frequencies. Comparisons made with these low frequency terms are unreliable (Curran and Moens, 2002). Creating SASH with high frequency terms near the root produces more reliable initial paths, but comparisons against these terms are more expensive. The best accuracy/efficiency trade-off was found when using more reliable initial paths rather than the most reliable. This is done by folding the data around some mean number of relations. For each term, if its number of relations mi is greater than some chosen number of relations M, it is given a new ranking based on the score M2 mi . Otherwise its ranking based on its number of relations. This has the effect of pushing very high and very low frequency terms away from the root. 6 Evaluation Measures The simplest method for evaluation is the direct comparison of extracted synonyms with a manually created gold standard (Grefenstette, 1994). To reduce the problem of limited coverage, our evaluation combines three electronic thesauri: the Macquarie, Roget’s and Moby thesauri. We follow Curran (2004) and use two performance measures: direct matches (DIRECT) and inverse rank (INVR). DIRECT is the percentage of returned synonyms found in the gold standard. INVR is the sum of the inverse rank of each matching synonym, e.g. matches at ranks 3, 5 and 28 365 CORPUS CUT-OFF TERMS AVERAGE RELATIONS PER TERM BNC 0 246,067 43 5 88,926 116 100 14,862 617 LARGE 0 541,722 97 5 184,494 281 100 35,618 1,400 Table 1: Extracted Context Information give an inverse rank score of 1 3 + 1 5 + 1 28. With at most 100 matching synonyms, the maximum INVR is 5.187. This more fine grained as it incorporates the both the number of matches and their ranking. The same 300 single word nouns were used for evaluation as used by Curran (2004) for his large scale evaluation. These were chosen randomly from WordNet such that they covered a range over the following properties: frequency, number of senses, specificity and concreteness. For each of these terms, the closest 100 terms and their similarity scores were extracted. 7 Experiments We use two corpora in our experiments: the smaller is the non-speech portion of the British National Corpus (BNC), 90 million words covering a wide range of domains and formats; the larger consists of the BNC, the Reuters Corpus Volume 1 and most of the English news holdings of the LDC in 2003, representing over 2 billion words of text (LARGE, Curran, 2004). The semantic similarity system implemented by Curran (2004) provides our baseline. This performs a brute-force k-NN search (NAIVE). We present results for the canonical attribute heuristic (HEURISTIC), RI, LSH, PLEB, VPT and SASH. We take the optimal canonical attribute vector length of 30 for HEURISTIC from Curran (2004). 
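Before completing the remaining parameter settings, the evaluation measures of Section 6 can be made concrete. The sketch below is an illustration of DIRECT and INVR as defined above, not the evaluation code actually used; it assumes the system returns a ranked list of candidate synonyms and the gold standard is a set.

```python
def direct_and_invr(ranked_synonyms, gold):
    """ranked_synonyms: system output, best first; gold: set of gold-standard synonyms.
    DIRECT is the percentage of returned synonyms found in the gold standard;
    INVR is the sum of the inverse rank of each matching synonym."""
    matches = [rank for rank, syn in enumerate(ranked_synonyms, start=1) if syn in gold]
    direct = 100.0 * len(matches) / len(ranked_synonyms) if ranked_synonyms else 0.0
    invr = sum(1.0 / rank for rank in matches)
    return direct, invr

# Matches at ranks 3, 5 and 28 give INVR = 1/3 + 1/5 + 1/28 ≈ 0.569; with 100 returned
# terms all matching, INVR = sum(1/i for i in 1..100) ≈ 5.187, the maximum quoted above.
```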
For SASH we take optimal values of p = 4 and c = 16 and use the folded ordering taking M = 1000 from Gorman and Curran (2005b). For RI, LSH and PLEB we found optimal values experimentally using the BNC. For LSH we chose d = 3, 000 (LSH3,000) and 10, 000 (LSH10,000), showing the effect of changing the dimensionality. The frequency statistics were weighted using mutual information, as in Ravichandran et al. (2005): log( p(w, r, w′) p(w, ∗, ∗)p(∗, r, w′)) (10) PLEB used the values q = 500 and B = 100. CUT-OFF 5 100 NAIVE 1.72 1.71 HEURISTIC 1.65 1.66 RI 0.80 0.93 LSH10,000 1.26 1.31 SASH 1.73 1.71 Table 2: INVR vs frequency cut-off The initial experiments on RI produced quite poor results. The intuition was that this was caused by the lack of smoothing in the algorithm. Experiments were performed using the weights given in Curran (2004). Of these, mutual information (10), evaluated with an extra log2(f(w, r, w′) + 1) factor and limited to positive values, produced the best results (RIMI). The values d = 1000 and ϵ = 5 were found to produce the best results. All experiments were performed on 3.2GHz Xeon P4 machines with 4GB of RAM. 8 Results As the accuracy of comparisons between terms increases with frequency (Curran, 2004), applying a frequency cut-off will both reduce the size of the vocabulary (n) and increase the average accuracy of comparisons. Table 1 shows the reduction in vocabulary and increase in average context relations per term as cut-off increases. For LARGE, the initial 541,722 word vocabulary is reduced by 66% when a cut-off of 5 is applied and by 86% when the cut-off is increased to 100. The average number of relations increases from 97 to 1400. The work by Curran (2004) largely uses a frequency cut-off of 5. When this cut-off was used with the randomised techniques RI and LSH, it produced quite poor results. When the cut-off was increased to 100, as used by Ravichandran et al. (2005), the results improved significantly. Table 2 shows the INVR scores for our various techniques using the BNC with cut-offs of 5 and 100. Table 3 shows the results of a full thesaurus extraction using the BNC and LARGE corpora using a cut-off of 100. The average DIRECT score and INVR are from the 300 test words. The total execution time is extrapolated from the average search time of these test words and includes the setup time. For LARGE, extraction using NAIVE takes 444 hours: over 18 days. If the 184,494 word vocabulary were used, it would take over 7000 hours, or nearly 300 days. This gives some indication of 366 BNC LARGE DIRECT INVR Time DIRECT INVR Time NAIVE 5.23 1.71 38.0hr 5.70 1.93 444.3hr HEURISTIC 4.94 1.66 2.0hr 5.51 1.93 30.2hr RI 2.97 0.93 0.4hr 2.42 0.85 1.9hr RIMI 3.49 1.41 0.4hr 4.58 1.75 1.9hr LSH3,000 2.00 0.76 0.7hr 2.92 1.07 3.6hr LSH10,000 3.68 1.31 2.3hr 3.77 1.40 8.4hr PLEB3,000 2.00 0.76 1.2hr 2.85 1.07 4.1hr PLEB10,000 3.66 1.30 3.9hr 3.63 1.37 11.8hr VPT 5.23 1.71 15.9hr 5.70 1.93 336.1hr SASH 5.17 1.71 2.0hr 5.29 1.89 23.7hr Table 3: Full thesaurus extraction the scale of the problem. The only technique to become less accurate when the corpus size is increased is RI; it is likely that RI is sensitive to high frequency, low information contexts that are more prevalent in LARGE. Weighting reduces this effect, improving accuracy. The importance of the choice of d can be seen in the results for LSH. While much slower, LSH10,000 is also much more accurate than LSH3,000, while still being much faster than NAIVE. 
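For reference, the mutual information weighting of Equation (10), together with the extra log2(f + 1) factor and the restriction to positive values used for RIMI, can be sketched as follows. The triple-count input format is an assumption, and the exact order in which the factor and the clipping are applied is our reading of the description above.

```python
import math
from collections import defaultdict

def mi_weights(counts, rimi_variant=False):
    """counts: dict mapping (w, r, w2) relation triples to their frequencies.
    Returns log( p(w,r,w2) / (p(w,*,*) * p(*,r,w2)) ) per triple, optionally
    clipped at zero and scaled by log2(f + 1) as in the RIMI setting."""
    total = float(sum(counts.values()))
    p_w = defaultdict(float)      # p(w, *, *)
    p_rw2 = defaultdict(float)    # p(*, r, w2)
    for (w, r, w2), f in counts.items():
        p_w[w] += f / total
        p_rw2[(r, w2)] += f / total
    weights = {}
    for (w, r, w2), f in counts.items():
        mi = math.log((f / total) / (p_w[w] * p_rw2[(r, w2)]))
        if rimi_variant:
            mi = max(0.0, mi) * math.log2(f + 1)
        weights[(w, r, w2)] = mi
    return weights
```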
Introducing the PLEB data structure does not improve the efficiency while incurring a small cost on accuracy. We are not using large enough datasets to show the improved time complexity using PLEB. VPT is only slightly faster slightly faster than NAIVE. This is not surprising in light of the original design of the data structure: decreasing radius search does not guarantee search efficiency. A significant influence in the speed of the randomised techniques, RI and LSH, is the fixed dimensionality. The randomised techniques use a fixed length vector which is not influenced by the size of m. The drawback of this is that the size of the vector needs to be tuned to the dataset. It would seem at first glance that HEURISTIC and SASH provide very similar results, with HEURISTIC slightly slower, but more accurate. This misses the difference in time complexity between the methods: HEURISTIC is n2 and SASH n log n. The improvement in execution time over NAIVE decreases as corpus size increases and this would be expected to continue. Further tuning of SASH parameters may improve its accuracy. RIMI produces similar result using LARGE to SASH using BNC. This does not include the cost of extracting context relations from the raw text, so the true comparison is much worse. SASH allows the free use of weight and measure functions, but RI is constrained by having to transform any context space into a RI space. This is important when LARGE CUT-OFF 0 5 100 NAIVE 541,721 184,493 35,617 SASH 10,599 8,796 6,231 INDEX 5,844 13,187 32,663 Table 4: Average number of comparisons per term considering that different tasks may require different weights and measures (Weeds and Weir, 2005). RI also suffers n2 complexity, where as SASH is n log n. Taking these into account, and that the improvements are barely significant, SASH is a better choice. The results for LSH are disappointing. It performs consistently worse than the other methods except VPT. This could be improved by using larger bit vectors, but there is a limit to the size of these as they represent a significant memory overhead, particularly as the vocabulary increases. Table 4 presents the theoretical analysis of attribute indexing. The average number of comparisons made for various cut-offs of LARGE are shown. NAIVE and INDEX are the actual values for those techniques. The values for SASH are worst case, where the maximum number of terms are compared at each level. The actual number of comparisons made will be much less. The efficiency of INDEX is sensitive to the density of attributes and increasing the cut-off increases the density. This is seen in the dramatic drop in performance as the cut-off increases. This problem of density will increase as volume of raw input data increases, further reducing its effectiveness. SASH is only dependent on the number of terms, not the density. Where the need for computationally efficiency out-weighs the need for accuracy, RIMI provides better results. SASH is the most balanced of the techniques tested and provides the most scalable, high quality results. 367 9 Conclusion We have evaluated several state-of-the-art techniques for improving the efficiency of distributional similarity measurements. We found that, in terms of raw efficiency, Random Indexing (RI) was significantly faster than any other technique, but at the cost of accuracy. Even after our modifications to the RI algorithm to significantly improve its accuracy, SASH still provides a better accuracy/efficiency trade-off. 
This is more evident when considering the time to extract context information from the raw text. SASH, unlike RI, also allows us to choose both the weight and the measure used. LSH and PLEB could not match either the efficiency of RI or the accuracy of SASH. We intend to use this knowledge to process even larger corpora to produce more accurate results. Having set out to improve the efficiency of distributional similarity searches while limiting any loss in accuracy, we are producing full nearestneighbour searches 18 times faster, with only a 2% loss in accuracy. Acknowledgements We would like to thank our reviewers for their helpful feedback and corrections. This work has been supported by the Australian Research Council under Discovery Project DP0453131. References Andrei Broder. 1997. On the resemblance and containment of documents. In Proceedings of the Compression and Complexity of Sequences, pages 21–29, Salerno, Italy. Walter A. Burkhard and Robert M. Keller. 1973. Some approaches to best-match file searching. Communications of the ACM, 16(4):230–236, April. Moses S. Charikar. 2002. Similarity estimation techniques from rounding algorithms. In Proceedings of the 34th Annual ACM Symposium on Theory of Computing, pages 380–388, Montreal, Quebec, Canada, 19–21 May. James Curran and Marc Moens. 2002. Improvements in automatic thesaurus extraction. In Proceedings of the Workshop of the ACL Special Interest Group on the Lexicon, pages 59–66, Philadelphia, PA, USA, 12 July. James Curran. 2004. From Distributional to Semantic Similarity. Ph.D. thesis, University of Edinburgh. Michel X. Goemans and David P. Williamson. 1995. Improved approximation algorithms for maximum cut and satisfiability problems using semidefinite programming. Journal of Association for Computing Machinery, 42(6):1115–1145, November. James Gorman and James Curran. 2005a. Approximate searching for distributional similarity. In ACL-SIGLEX 2005 Workshop on Deep Lexical Acquisition, Ann Arbor, MI, USA, 30 June. James Gorman and James Curran. 2005b. Augmenting approximate similarity searching with lexical information. In Australasian Language Technology Workshop, Sydney, Australia, 9–11 November. Gregory Grefenstette. 1994. Explorations in Automatic Thesaurus Discovery. Kluwer Academic Publishers, Boston. Michael E. Houle and Jun Sakuma. 2005. Fast approximate similarity search in extremely high-dimensional data sets. In Proceedings of the 21st International Conference on Data Engineering, pages 619–630, Tokyo, Japan. Michael E. Houle. 2003. Navigating massive data sets via local clustering. In Proceedings of the 9th ACM SIGKDD International Conference on Knowledge Discovery and Data Mining, pages 547–552, Washington, DC, USA. Piotr Indyk and Rajeev Motwani. 1998. Approximate nearest neighbors: towards removing the curse of dimensionality. In Proceedings of the 30th annual ACM Symposium on Theory of Computing, pages 604–613, New York, NY, USA, 24–26 May. ACM Press. Pentti Kanerva, Jan Kristoferson, and Anders Holst. 2000. Random indexing of text samples for latent semantic analysis. In Proceedings of the 22nd Annual Conference of the Cognitive Science Society, page 1036, Mahwah, NJ, USA. Pentti Kanerva. 1993. Sparse distributed memory and related models. In M.H. Hassoun, editor, Associative Neural Memories: Theory and Implementation, pages 50–76. Oxford University Press, New York, NY, USA. Jussi Karlgren and Magnus Sahlgren. 2001. From words to understanding. In Y. Uesaka, P. 
Kanerva, and H Asoh, editors, Foundations of Real-World Intelligence, pages 294– 308. CSLI Publications, Stanford, CA, USA. Thomas K. Landauer and Susan T. Dumais. 1997. A solution to plato’s problem: The latent semantic analysis theory of acquisition, induction, and representation of knowledge. Psychological Review, 104(2):211–240, April. Patrick Pantel and Dekang Lin. 2002. Discovering word senses from text. In Proceedings of ACM SIGKDD-02, pages 613–619, 23–26 July. Deepak Ravichandran, Patrick Pantel, and Eduard Hovy. 2005. Randomized algorithms and NLP: Using locality sensitive hash functions for high speed noun clustering. In Proceedings of the 43rd Annual Meeting of the ACL, pages 622–629, Ann Arbor, USA. Mangus Sahlgren and Jussi Karlgren. 2005. Automatic bilingual lexicon acquisition using random indexing of parallel corpora. Journal of Natural Language Engineering, Special Issue on Parallel Texts, 11(3), June. Julie Weeds and David Weir. 2005. Co-occurance retrieval: A flexible framework for lexical distributional similarity. Computational Linguistics, 31(4):439–475, December. Peter N. Yianilos. 1993. Data structures and algorithms for nearest neighbor search in general metric spaces. In Proceedings of the fourth annual ACM-SIAM Symposium on Discrete algorithms, pages 311–321, Philadelphia. 368
Proceedings of the 21st International Conference on Computational Linguistics and 44th Annual Meeting of the ACL, pages 369–376, Sydney, July 2006. c⃝2006 Association for Computational Linguistics Extractive Summarization using Inter- and Intra- Event Relevance Wenjie Li, Mingli Wu and Qin Lu Department of Computing The Hong Kong Polytechnic University {cswjli,csmlwu,csluqin}@comp .polyu.edu.hk Wei Xu and Chunfa Yuan Department of Computer Science and Technology, Tsinghua University {vivian00,cfyuan}@mail.ts inghua.edu.cn Abstract Event-based summarization attempts to select and organize the sentences in a summary with respect to the events or the sub-events that the sentences describe. Each event has its own internal structure, and meanwhile often relates to other events semantically, temporally, spatially, causally or conditionally. In this paper, we define an event as one or more event terms along with the named entities associated, and present a novel approach to derive intra- and inter- event relevance using the information of internal association, semantic relatedness, distributional similarity and named entity clustering. We then apply PageRank ranking algorithm to estimate the significance of an event for inclusion in a summary from the event relevance derived. Experiments on the DUC 2001 test data shows that the relevance of the named entities involved in events achieves better result when their relevance is derived from the event terms they associate. It also reveals that the topic-specific relevance from documents themselves outperforms the semantic relevance from a general purpose knowledge base like Word-Net. 1. Introduction Extractive summarization selects sentences which contain the most salient concepts in documents. Two important issues with it are how the concepts are defined and what criteria should be used to judge the salience of the concepts. Existing work has typically been based on techniques that extract key textual elements, such as keywords (also known as significant terms) as weighed by their tf*idf score, or concepts (such as events or entities) with linguistic and/or statistical analysis. Then, sentences are selected according to either the important textual units they contain or certain types of intersentence relations they hold. Event-based summarization which has emerged recently attempts to select and organize sentences in a summary with respect to events or sub-events that the sentences describe. With regard to the concept of events, people do not have the same definition when introducing it in different domains. While traditional linguistics work on semantic theory of events and the semantic structures of verbs, studies in information retrieval (IR) within topic detection and tracking framework look at events as narrowly defined topics which can be categorized or clustered as a set of related documents (TDT). IR events are broader (or to say complex) events in the sense that they may include happenings and their causes, consequences or even more extended effects. In the information extraction (IE) community, events are defined as the pre-specified and structured templates that relate an action to its participants, times, locations and other entities involved (MUC-7). IE defines what people call atomic events. Regardless of their distinct perspectives, people all agree that events are collections of activities together with associated entities. 
To apply the concept of events in the context of text summarization, we believe it is more appropriate to consider events at the sentence level, rather than at the document level. To avoid the complexity of deep semantic and syntactic processing, we complement the advantages of statistical techniques from the IR community and structured information provided by the IE community. 369 We propose to extract semi-structured events with shallow natural language processing (NLP) techniques and estimate their importance for inclusion in a summary with IR techniques. Though it is most likely that documents narrate more than one similar or related event, most event-based summarization techniques reported so far explore the importance of the events independently. Motivated by this observation, this paper addresses the task of event-relevance based summarization and explores what sorts of relevance make a contribution. To this end, we investigate intra-event relevance, that is actionentity relevance, and inter-event relevance, that is event-event relevance. While intra-event relevance is measured with frequencies of the associated events and entities directly, inter-event relevance is derived indirectly from a general WordNet similarity utility, distributional similarity in the documents to be summarized, named entity clustering and so on. Pagerank ranking algorithm is then applied to estimate the event importance for inclusion in a summary using the aforesaid relevance. The remainder of this paper is organized as follows. Section 2 introduces related work. Sections 3 introduces our proposed event-based summarization approaches which make use of intra- and inter- event relevance. Section 4 presents experiments and evaluates different approaches. Finally, Section 5 concludes the paper. 2. Related Work Event-based summarization has been investigated in recent research. It was first presented in (Daniel, Radev and Allison, 2003), who treated a news topic in multi-document summarization as a series of sub-events according to human understanding of the topic. They determined the degree of sentence relevance to each sub-event through human judgment and evaluated six extractive approaches. Their paper concluded that recognizing the sub-events that comprise a single news event is essential for producing better summaries. However, it is difficult to automatically break a news topic into sub-events. Later, atomic events were defined as the relationships between the important named entities (Filatova and Hatzivassiloglou, 2004), such as participants, locations and times (which are called relations) through the verbs or action nouns labeling the events themselves (which are called connectors). They evaluated sentences based on co-occurrence statistics of the named entity relations and the event connectors involved. The proposed approach claimed to outperform conventional tf*idf approach. Apparently, named entities are key elements in their model. However, the constraints defining events seemed quite stringent. The application of dependency parsing, anaphora and co-reference resolution in recognizing events were presented involving NLP and IE techniques more or less (Yoshioka and Haraguchi, 2004), (Vanderwende, Banko and Menezes, 2004) and (Leskovec, Grobelnik and Fraling, 2004). Rather than pre-specifying events, these efforts extracted (verb)-(dependent relation)-(noun) triples as events and took the triples to form a graph merged by relations. As a matter of fact, events in documents are related in some ways. 
Judging whether the sentences are salient or not and organizing them in a coherent summary can take advantage from event relevance. Unfortunately, this was neglected in most previous work. Barzilay and Lapata (2005) exploited the use of the distributional and referential information of discourse entities to improve summary coherence. While they captured text relatedness with entity transition sequences, i.e. entity-based summarization, we are particularly interested in relevance between events in event-based summarization. Extractive summarization requires ranking sentences with respect to their importance. Successfully used in Web-link analysis and more recently in text summarization, Google’s PageRank (Brin and Page, 1998) is one of the most popular ranking algorithms. It is a kind of graph-based ranking algorithm deciding on the importance of a node within a graph by taking into account the global information recursively computed from the entire graph, rather than relying on only the local node-specific information. A graph can be constructed by adding a node for each sentence, phrase or word. Edges between nodes are established using intersentence similarity relations as a function of content overlap or grammatically relations between words or phrases. The application of PageRank in sentence extraction was first reported in (Erkan and Radev, 2004). The similarity between two sentence nodes according to their term vectors was used to generate links and define link strength. The same idea was followed and investigated exten370 sively (Mihalcea, 2005). Yoshioka and Haraguchi (2004) went one step further toward eventbased summarization. Two sentences were linked if they shared similar events. When tested on TSC-3, the approach favoured longer summaries. In contrast, the importance of the verbs and nouns constructing events was evaluated with PageRank as individual nodes aligned by their dependence relations (Vanderwende, 2004; Leskovec, 2004). Although we agree that the fabric of event constitutions constructed by their syntactic relations can help dig out the important events, we have two comments. First, not all verbs denote event happenings. Second, semantic similarity or relatedness between action words should be taken into account. 3. Event-based Summarization 3.1. Event Definition and Event Map Events can be broadly defined as “Who did What to Whom When and Where”. Both linguistic and empirical studies acknowledge that event arguments help characterize the effects of a verb’s event structure even though verbs or other words denoting event determine the semantics of an event. In this paper, we choose verbs (such as “elect”) and action nouns (such as “supervision”) as event terms that can characterize or partially characterize actions or incident occurrences. They roughly relate to “did What”. One or more associated named entities are considered as what are denoted by linguists as event arguments. Four types of named entities are currently under the consideration. These are <Person>, <Organization>, <Location> and <Date>. They convey the information of “Who”, “Whom”, “When” and “Where”. A verb or an action noun is deemed as an event term only when it presents itself at least once between two named entities. Events are commonly related with one another semantically, temporally, spatially, causally or conditionally, especially when the documents to be summarized are about the same or very similar topics. 
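Before describing the event map itself, the extraction step implied by this definition can be sketched. The input format (token–tag pairs from a POS and named entity tagger) and the treatment of every noun as a potential action noun are simplifying assumptions; a real system would also filter action nouns, for example with a lexical resource.

```python
def extract_events(tagged_sentence):
    """tagged_sentence: list of (token, tag) pairs, where tag is a POS tag such as
    'VB' or 'NN', or one of the entity labels 'PERSON', 'ORGANIZATION', 'LOCATION',
    'DATE'.  Returns the event terms, the named entities and the term-entity
    associations found in the sentence."""
    ne_labels = {"PERSON", "ORGANIZATION", "LOCATION", "DATE"}
    named_entities = {tok for tok, tag in tagged_sentence if tag in ne_labels}
    ne_positions = [i for i, (_, tag) in enumerate(tagged_sentence) if tag in ne_labels]
    event_terms, associations = set(), []
    for i, (tok, tag) in enumerate(tagged_sentence):
        is_action = tag.startswith("VB") or tag.startswith("NN")   # verbs / action nouns (approximate)
        # a verb or action noun only counts as an event term when it occurs
        # at least once between two named entities
        if is_action and any(p < i for p in ne_positions) and any(p > i for p in ne_positions):
            event_terms.add(tok)
            left = max(p for p in ne_positions if p < i)
            right = min(p for p in ne_positions if p > i)
            associations.append((tok, tagged_sentence[left][0], tagged_sentence[right][0]))
    return event_terms, named_entities, associations
```

As noted, such event terms and their arguments are rarely independent of one another.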
Therefore, all event terms and named entities involved can be explicitly connected or implicitly related and weave a document or a set of documents into an event fabric, i.e. an event graphical representation (see Figure 1). The nodes in the graph are of two types. Event terms (ET) are indicated by rectangles and named entities (NE) are indicated by ellipses. They represent concepts rather than instances. Words in either their original form or morphological variations are represented with a single node in the graph regardless of how many times they appear in documents. We call this representation an event map, from which the most important concepts can be pick out in the summary. Figure 1 Sample sentences and their graphical representation The advantage of representing with separated action and entity nodes over simply combining them into one event or sentence node is to provide a convenient way for analyzing the relevance among event terms and named entities either by their semantic or distributional similarity. More importantly, this favors extraction of concepts and brings the conceptual compression available. We then integrate the strength of the connections between nodes into this graphical model in terms of the relevance defined from different perspectives. The relevance is indicated by ) , ( j i node node r , where i node and j node represent two nodes, and are either event terms ( i et ) or named entities ( j ne ). Then, the significance of each node, indicated by ) ( i node w , is calcu<Organization> America Online </Organization> was to buy <Organization> Netscape </Organization> and forge a partnership with <Organization> Sun </Organization>, benefiting all three and giving technological independence from <Organization> Microsoft </Organization>. 371 lated with PageRank ranking algorithm. Sections 3.2 and 3.3 address the issues of deriving ) , ( j i node node r according to intra- or/and inter- event relevance and calculating ) ( i node w in detail. 3.2 Intra- and Inter- Event Relevance We consider both intra-event and inter-event relevance for summarization. Intra-event relevance measures how an action itself is associated with its associated arguments. It is indicated as ) , ( NE ET R and ) , ( ET NE R in Table 1 below. This is a kind of direct relevance as the connections between actions and arguments are established from the text surface directly. No inference or background knowledge is required. We consider that when the connection between an event term i et and a named entity j ne is symmetry, then T NE ET R ET NE R ) , ( ) , ( = . Events are related as explained in Section 2. By means of inter-event relevance, we consider how an event term (or a named entity involved in an event) associate to another event term (or another named entity involved in the same or different events) syntactically, semantically and distributionally. It is indicated by ) , ( ET ET R or ) , ( NE NE R in Table 1 and measures an indirect connection which is not explicit in the event map needing to be derived from the external resource or overall event distribution. Event Term (ET) Named Entity (NE) Event Term (ET) ) , ( ET ET R ) , ( NE ET R Named Entity (NE) ) , ( ET NE R ) , ( NE NE R Table 1 Relevance Matrix The complete relevance matrix is: ⎥⎦ ⎤ ⎢⎣ ⎡ = ) , ( ) , ( ) , ( ) , ( NE NE R ET NE R NE ET R ET ET R R The intra-event relevance ) , ( NE ET R can be simply established by counting how many times i et and j ne are associated, i.e. 
) , ( ) , ( j i j i Document ne et freq ne et r = (E1) One way to measure the term relevance is to make use of a general language knowledge base, such as WordNet (Fellbaum 1998). WordNet::Similarity is a freely available software package that makes it possible to measure the semantic relatedness between a pair of concepts, or in our case event terms, based on WordNet (Pedersen, Patwardhan and Michelizzi, 2004). It supports three measures. The one we choose is the function lesk. ) , ( ) , ( ) , ( j i j i j i WordNet et et lesk et et similarity et et r = = (E2) Alternatively, term relevance can be measured according to their distributions in the specified documents. We believe that if two events are concerned with the same participants, occur at same location, or at the same time, these two events are interrelated with each other in some ways. This observation motivates us to try deriving event term relevance from the number of name entities they share. |) ( ) ( | ) , ( j i j i Document et NE et NE et et r ∩ = (E3) Where ) ( i et NE is the set of named entities i et associate. | | indicates the number of the elements in the set. The relevance of named entities can be derived in a similar way. |) ( ) ( | ) , ( j i j i Document ne ET ne ET ne ne r ∩ = (E4) The relevance derived with (E3) and (E4) are indirect relevance. In previous work, a clustering algorithm, shown in Figure 2, has been proposed (Xu et al, 2006) to merge the named entity that refer to the same person (such as Ranariddh, Prince Norodom Ranariddh and President Prince Norodom Ranariddh). It is used for co-reference resolution and aims at joining the same concept into a single node in the event map. The experimental result suggests that merging named entity improves performance in some extend but not evidently. When applying the same algorithm for clustering all four types of name entities in DUC data, we observe that the name entities in the same cluster do not always refer to the same objects, even when they are indeed related in some way. For example, “Mississippi” is a state in the southeast United States, while “Mississippi River” is the secondlongest rever in the United States and flows through “Mississippi”. Step1: Each name entity is represented by ik i i i w w w ne ... 2 1 = , where i w is the ith word in it. The cluster it belongs to, indicated by ) ( i ne C , is initialled by ik i i w w w ... 2 1 itself. Step2: For each name entity ik i i i w w w ne ... 2 1 = For each name entity 372 jl j j j w w w ne ... 2 1 = , if ) ( i ne C is a sub-string of ) ( j ne C , then ) ( ) ( j i ne C ne C = . Continue Step 2 until no change occurs. Figure 2 The algorithm proposed to merge the named entities Location Person Date Organization Mississippi Professor Sir Richard Southwood first six months of last year Long Beach City Council Sir Richard Southwood San Jose City Council Mississippi River Richard Southwood last year City Council Table 2 Some results of the named entity merged It therefore provides a second way to measure named entity relevance based on the clusters found. It is actually a kind of measure of lexical similarity. ⎩ ⎨ ⎧ = otherwise ,0 cluster same in the are , ,1 ) , ( j i j i Cluster ne ne ne ne r (E5) In addition, the relevance of the named entities can be sometimes revealed by sentence context. 
Take the following most frequently used sentence patterns as examples: Figure 3 The example patterns Considering that two neighbouring name entities in a sentence are usually relevant, the following window-based relevance is also experimented with. ⎩ ⎨ ⎧ = otherwise ,0 size window specified pre a within are , 1, ) , ( j i j i Pattern ne ne ne ne r (E6) 3.3 Significance of Concepts The significance score, i.e. the weight ) ( i node w of each i node , is then estimated recursively with PageRank ranking algorithm which assigns the significance score to each node according to the number of nodes connecting to it as well as the strength of their connections. The equation calculating ) ( i node w using PageRank of a certain i node is shown as follows. )) , ( ) ( ... ) , ( ) ( ... ) , ( ) ( ( ) 1( ) ( 1 1 t i t j i j i i node node r node w node node r node w node node r node w d d node w + + + + + − = (E7) In (E7), j node ( t j ,... 2,1 = , i j ≠ ) are the nodes linking to i node . d is the factor used to avoid the limitation of loop in the map structure. It is set to 0.85 experimentally. The significance of each sentence to be included in the summary is then obtained from the significance of the events it contains. The sentences with higher significance are picked up into the summary as long as they are not exactly the same sentences. We are aware of the important roles of information fusion and sentence compression in summary generation. However, the focus of this paper is to evaluate event-based approaches in extracting the most important sentences. Conceptual extraction based on event relevance is our future direction. 4. Experiments and Discussions To evaluate the event based summarization approaches proposed, we conduct a set of experiments on 30 English document sets provide by the DUC 2001 multi-document summarization task. The documents are pre-processed with GATE to recognize the previously mentioned four types of name entities. On average, each set contains 10.3 documents, 602 sentences, 216 event terms and 148.5 name entities. To evaluate the quality of the generated summaries, we choose an automatic summary evaluation metric ROUGE, which has been used in DUCs. ROUGE is a recall-based metric for fixed length summaries. It bases on N-gram cooccurrence and compares the system generated summaries to human judges (Lin and Hovy, 2003). For each DUC document set, the system creates a summary of 200 word length and present three of the ROUGE metrics: ROUGE-1 (unigram-based), ROUGE-2 (bigram-based), and ROUGE-W (based on longest common subsequence weighed by the length) in the following experiments and evaluations. We first evaluate the summaries generated based on ) , ( NE ET R itself. In the pre-evaluation experiments, we have observed that some fre<Person>, a-position-name of <Organization>, does something. <Person> and another <Person> do something. 373 quently occurring nouns, such as “doctors” and “hospitals”, by themselves are not marked by general NE taggers. But they indicate persons, organizations or locations. We compare the ROUGE scores of adding frequent nouns or not to the set of named entities in Table 3. A noun is considered as a frequent noun when its frequency is larger than 10. Roughly 5% improvement is achieved when high frequent nouns are taken into the consideration. Hereafter, when we mention NE in latter experiments, the high frequent nouns are included. 
) , ( NE ET R NE Without High Frequency Nouns NE With High Frequency Nouns ROUGE-1 0.33320 0.34859 ROUGE-2 0.06260 0.07157 ROUGE-W 0.12965 0.13471 Table 3 ROUGE scores using ) , ( NE ET R itself Table 4 below then presents the summarization results by using ) , ( ET ET R itself. It compares two relevance derivation approaches, WordNet R and Document R . The topic-specific relevance derived from the documents to be summarized outperforms the general purpose Word-Net relevance by about 4%. This result is reasonable as WordNet may introduce the word relatedness which is not necessary in the topic-specific documents. When we examine the relevance matrix from the event term pairs with the highest relevant, we find that the pairs, like “abort” and “confirm”, “vote” and confirm”, do reflect semantics (antonymous) and associated (causal) relations to some degree. ) , ( ET ET R Semantic Relevance from Word-Net Topic-Specific Relevance from Documents ROUGE-1 0.32917 0.34178 ROUGE-2 0.05737 0.06852 ROUGE-W 0.11959 0.13262 Table 4 ROUGE scores using ) , ( ET ET R itself Surprisingly, the best individual result is from document distributional similarity Document R ) , ( NE NE in Table 5. Looking more closely, we conclude that compared to event terms, named entities are more representative of the documents in which they are included. In other words, event terms are more likely to be distributed around all the document sets, whereas named entities are more topic-specific and therefore cluster in a particular document set more. Examples of high related named entities in relevance matrix are “Andrew” and “Florida”, “Louisiana” and “Florida”. Although their relevance is not as explicit as the same of event terms (their relevance is more contextual than semantic), we can still deduce that some events may happen in both Louisiana and Florida, or about Andrew in Florida. In addition, it also shows that the relevance we would have expected to be derived from patterns and clustering can also be discovered by ) , ( NE NE RDocument . The window size is set to 5 experimentally in window-based practice. ) , ( NE NE R Relevance from Documents Relevance from Clustering Relevance from Windowbased Context ROUGE-1 0.35212 0.33561 0.34466 ROUGE-2 0.07107 0.07286 0.07508 ROUGE-W 0.13603 0.13109 0.13523 Table 5 ROUGE scores using ) , ( NE NE R itself Next, we evaluate the integration of ) , ( NE ET R , ) , ( ET ET R and ) , ( NE NE R . As DUC 2001 provides 4 different summary sizes for evaluation, it satisfies our desire to test the sensibility of the proposed event-based summarization techniques to the length of summaries. While the previously presented results are evaluated on 200 word summaries, now we move to check the results in four different sizes, i.e. 50, 100, 200 and 400 words. The experiments results show that the event-based approaches indeed prefer longer summaries. This is coincident with what we have hypothesized. For this set of experiments, we choose to integrate the best method from each individual evaluation presented previously. It appears that using the named entities relevance which is derived from the event terms gives the best ROUGE scores in almost all the summery sizes. Compared with the results provided in (Filatova and Hatzivassiloglou, 2004) whose average ROUGE-1 score is below 0.3 on the same data set, the significant improvement is revealed. Of course, we need to test on more data in the future. 
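For completeness, the concept significance computation of Equation (E7) used throughout these experiments can be sketched as follows. The relevance input is assumed to be a symmetric node-to-node weight mapping built from any of the measures in Section 3.2; the iteration count stands in for a convergence test, and in practice the relevance weights would typically be normalised to keep the scores bounded.

```python
def pagerank_significance(relevance, d=0.85, iterations=50):
    """relevance: dict mapping each node to a dict {neighbour: r(neighbour, node)}.
    Returns a significance score w(node) per Equation (E7), computed iteratively."""
    nodes = list(relevance)
    w = {n: 1.0 for n in nodes}
    for _ in range(iterations):
        new_w = {}
        for n in nodes:
            incoming = sum(w[m] * r for m, r in relevance[n].items() if m != n)
            new_w[n] = (1.0 - d) + d * incoming
        w = new_w
    return w
```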
) , ( NE NE R 50 100 200 400 ROUGE-1 0.22383 0.28584 0.35212 0.41612 ROUGE-2 0.03376 0.05489 0.07107 0.10275 ROUGE-W 0.10203 0.11610 0.13603 0.13877 ) , ( NE ET R 50 100 200 400 ROUGE-1 0.22224 0.27947 0.34859 0.41644 ROUGE-2 0.03310 0.05073 0.07157 0.10369 ROUGE-W 0.10229 0.11497 0.13471 0.13850 ) , ( ET ET R 50 100 200 400 374 ROUGE-1 0.20616 0.26923 0.34178 0.41201 ROUGE-2 0.02347 0.04575 0.06852 0.10263 ROUGE-W 0.09212 0.11081 0.13262 0.13742 ) , ( NE ET R + ) , ( ET ET R + ) , ( NE NE R 50 100 200 400 ROUGE-1 0.21311 0.27939 0.34630 0.41639 ROUGE-2 0.03068 0.05127 0.07057 0.10579 ROUGE-W 0.09532 0.11371 0.13416 0.13913 Table 6 ROUGE scores using complete R matrix and with different summary lengths As discussed in Section 3.2, the named entities in the same cluster may often be relevant but not always be co-referred. In the following last set of experiments, we evaluate the two ways to use the clustering results. One is to consider them as related as if they are in the same cluster and derive the NE-NE relevance with (E5). The other is to merge the entities in one cluster as one reprehensive named entity and then use it in ET-NE with (E1). The rationality of the former approach is validated. Clustering is used to derive NE-NE Clustering is used to merge entities and then to derive ET-NE ROUGE-1 0.34072 0.33006 ROUGE-2 0.06727 0.06154 ROUGE-W 0.13229 0.12845 Table 7 ROUGE scores with regard to how to use the clustering information 5. Conclusion In this paper, we propose to integrate eventbased approaches to extractive summarization. Both inter-event and intra-event relevance are investigated and PageRank algorithm is used to evaluate the significance of each concept (including both event terms and named entities). The sentences containing more concepts and highest significance scores are chosen in the summary as long as they are not the same sentences. To derive event relevance, we consider the associations at the syntactic, semantic and contextual levels. An important finding on the DUC 2001 data set is that making use of named entity relevance derived from the event terms they associate with achieves the best result. The result of 0.35212 significantly outperforms the one reported in the closely related work whose average is below 0.3. We are interested in the issue of how to improve an event representation in order to build a more powerful event-based summarization system. This would be one of our future directions. We also want to see how concepts rather than sentences are selected into the summary in order to develop a more flexible compression technique and to know what characteristics of a document set is appropriate for applying event-based summarization techniques. Acknowledgements The work presented in this paper is supported partially by Research Grants Council on Hong Kong (reference number CERG PolyU5181/03E) and partially by National Natural Science Foundation of China (reference number: NSFC 60573186). References Chin-Yew Lin and Eduard Hovy. 2003. Automatic Evaluation of Summaries using N-gram Cooccurrence Statistics. In Proceedings of HLTNAACL 2003, pp71-78. Christiane Fellbaum. 1998, WordNet: An Electronic Lexical Database. MIT Press. Elena Filatova and Vasileios Hatzivassiloglou. 2004. Event-based Extractive summarization. In Proceedings of ACL 2004 Workshop on Summarization, pp104-111. Gunes Erkan and Dragomir Radev. 2004. LexRank: Graph-based Centrality as Salience in Text Summarization. Journal of Artificial Intelligence Research. 
Jure Leskovec, Marko Grobelnik and Natasa MilicFrayling. 2004. Learning Sub-structures of Document Semantic Graphs for Document Summarization. In LinkKDD 2004. Lucy Vanderwende, Michele Banko and Arul Menezes. 2004. Event-Centric Summary Generation. In Working Notes of DUC 2004. Masaharu Yoshioka and Makoto Haraguchi. 2004. Multiple News Articles Summarization based on Event Reference Information. In Working Notes of NTCIR-4, Tokyo. MUC-7. http://www-nlpir.nist.gov/related_projects/ muc/proceeings/ muc_7_toc.html Naomi Daniel, Dragomir Radev and Timothy Allison. 2003. Sub-event based Multi-document Summarization. In Proceedings of the HLT-NAACL 2003 Workshop on Text Summarization, pp9-16. 375 Page Lawrence, Brin Sergey, Motwani Rajeev and Winograd Terry. 1998. The PageRank Citation Ranking: Bring Order to the Web. Technical Report, Stanford University. Rada Mihalcea. 2005. Language Independent Extractive Summarization. ACL 2005 poster. Regina Barzilay and Michael Elhadad. 2005. Modelling Local Coherence: An Entity-based Approach. In Proceedings of ACL, pp141-148. TDT. http://projects.ldc.upenn.edu/TDT. Ted Pedersen, Siddharth Patwardhan and Jason Michelizzi. 2004. WordNet::Similarity – Measuring the Relatedness of Concepts. In Proceedings of AAAI, pp25-29. Wei Xu, Wenjie Li, Mingli Wu, Wei Li and Chunfa Yuan. 2006. Deriving Event Relevance from the Ontology Constructed with Formal Concept Analysis, in Proceedings of CiCling’06, pp480489. 376
Proceedings of the 21st International Conference on Computational Linguistics and 44th Annual Meeting of the ACL, pages 377–384, Sydney, July 2006. c⃝2006 Association for Computational Linguistics Models for Sentence Compression: A Comparison across Domains, Training Requirements and Evaluation Measures James Clarke and Mirella Lapata School of Informatics, University of Edinburgh 2 Bucclecuch Place, Edinburgh EH8 9LW, UK [email protected], [email protected] Abstract Sentence compression is the task of producing a summary at the sentence level. This paper focuses on three aspects of this task which have not received detailed treatment in the literature: training requirements, scalability, and automatic evaluation. We provide a novel comparison between a supervised constituentbased and an weakly supervised wordbased compression algorithm and examine how these models port to different domains (written vs. spoken text). To achieve this, a human-authored compression corpus has been created and our study highlights potential problems with the automatically gathered compression corpora currently used. Finally, we assess whether automatic evaluation measures can be used to determine compression quality. 1 Introduction Automatic sentence compression has recently attracted much attention, in part because of its affinity with summarisation. The task can be viewed as producing a summary of a single sentence that retains the most important information while remaining grammatically correct. An ideal compression algorithm will involve complex text rewriting operations such as word reordering, paraphrasing, substitution, deletion, and insertion. In default of a more sophisticated compression algorithm, current approaches have simplified the problem to a single rewriting operation, namely word deletion. More formally, given an input sentence of words W = w1,w2,...,wn, a compression is formed by dropping any subset of these words. Viewing the task as word removal reduces the number of possible compressions to 2n; naturally, many of these compressions will not be reasonable or grammatical (Knight and Marcu 2002). Sentence compression could be usefully employed in wide range of applications. For example, to automatically generate subtitles for television programs; the transcripts cannot usually be used verbatim due to the rate of speech being too high (Vandeghinste and Pan 2004). Other applications include compressing text to be displayed on small screens (Corston-Oliver 2001) such as mobile phones or PDAs, and producing audio scanning devices for the blind (Grefenstette 1998). Algorithms for sentence compression fall into two broad classes depending on their training requirements. Many algorithms exploit parallel corpora (Jing 2000; Knight and Marcu 2002; Riezler et al. 2003; Nguyen et al. 2004a; Turner and Charniak 2005; McDonald 2006) to learn the correspondences between long and short sentences in a supervised manner, typically using a rich feature space induced from parse trees. The learnt rules effectively describe which constituents should be deleted in a given context. Approaches that do not employ parallel corpora require minimal or no supervision. They operationalise compression in terms of word deletion without learning specific rules and can therefore rely on little linguistic knowledge such as part-of-speech tags or merely the lexical items alone (Hori and Furui 2004). 
Alternatively, the rules of compression are approximated from a non-parallel corpus (e.g., the Penn Treebank) by considering context-free grammar derivations with matching expansions (Turner and Charniak 2005). Previous approaches have been developed and tested almost exclusively on written text, a notable exception being Hori and Furui (2004) who focus on spoken language. While parallel corpora of original-compressed sentences are not naturally available in the way multilingual corpora are, researchers have obtained such corpora automatically by exploiting documents accompanied by abstracts. Automatic corpus creation affords the opportunity to study compression mechanisms 377 cheaply, yet these mechanisms may not be representative of human performance. It is unlikely that authors routinely carry out sentence compression while creating abstracts for their articles. Collecting human judgements is the method of choice for evaluating sentence compression models. However, human evaluations tend to be expensive and cannot be repeated frequently; furthermore, comparisons across different studies can be difficult, particularly if subjects employ different scales, or are given different instructions. In this paper we examine some aspects of the sentence compression task that have received little attention in the literature. First, we provide a novel comparison of supervised and weakly supervised approaches. Specifically, we study how constituent-based and word-based methods port to different domains and show that the latter tend to be more robust. Second, we create a corpus of human-authored compressions, and discuss some potential problems with currently used compression corpora. Finally, we present automatic evaluation measures for sentence compression and examine whether they correlate reliably with behavioural data. 2 Algorithms for Sentence Compression In this section we give a brief overview of the algorithms we employed in our comparative study. We focus on two representative methods, Knight and Marcu’s (2002) decision-based model and Hori and Furui’s (2004) word-based model. The decision-tree model operates over parallel corpora and offers an intuitive formulation of sentence compression in terms of tree rewriting. It has inspired many discriminative approaches to the compression task (Riezler et al. 2003; Nguyen et al. 2004b; McDonald 2006) and has been extended to languages other than English (see Nguyen et al. 2004a). We opted for the decisiontree model instead of the also well-known noisychannel model (Knight and Marcu 2002; Turner and Charniak 2005). Although both models yield comparable performance, Turner and Charniak (2005) show that the latter is not an appropriate compression model since it favours uncompressed sentences over compressed ones.1 Hori and Furui’s (2004) model was originally developed for Japanese with spoken text in mind, 1The noisy-channel model uses a source model trained on uncompressed sentences. This means that the most likely compressed sentence will be identical to the original sentence as the likelihood of a constituent deletion is typically far lower than that of leaving it in. SHIFT transfers the first word from the input list onto the stack. REDUCE pops the syntactic trees located at the top of the stack, combines them into a new tree and then pushes the new tree onto the top of the stack. DROP deletes from the input list subsequences of words that correspond to a syntactic constituent. 
ASSIGNTYPE changes the label of the trees at the top of the stack (i.e., the POS tag of words). Table 1: Stack rewriting operations it requires minimal supervision, and little linguistic knowledge. It therefor holds promise for languages and domains for which text processing tools (e.g., taggers, parsers) are not readily available. Furthermore, to our knowledge, its performance on written text has not been assessed. 2.1 Decision-based Sentence Compression In the decision-based model, sentence compression is treated as a deterministic rewriting process of converting a long parse tree, l, into a shorter parse tree s. The rewriting process is decomposed into a sequence of shift-reduce-drop actions that follow an extended shift-reduce parsing paradigm. The compression process starts with an empty stack and an input list that is built from the original sentence’s parse tree. Words in the input list are labelled with the name of all the syntactic constituents in the original sentence that start with it. Each stage of the rewriting process is an operation that aims to reconstruct the compressed tree. There are four types of operations that can be performed on the stack, they are illustrated in Table 1. Learning cases are automatically generated from a parallel corpus. Each learning case is expressed by a set of features and represents one of the four possible operations for a given stack and input list. Using the C4.5 program (Quinlan 1993) a decision-tree model is automatically learnt. The model is applied to a parsed original sentence in a deterministic fashion. Features for the current state of the input list and stack are extracted and the classifier is queried for the next operation to perform. This is repeated until the input list is empty and the stack contains only one item (this corresponds to the parse for the compressed tree). The compressed sentence is recovered by traversing the leaves of the tree in order. 2.2 Word-based Sentence Compression The decision-based method relies exclusively on parallel corpora; the caveat here is that appropriate training data may be scarce when porting this model to different text domains (where abstracts 378 are not available for automatic corpus creation) or languages. To alleviate the problems inherent with using a parallel corpus, we have modified a weakly supervised algorithm originally proposed by Hori and Furui (2004). Their method is based on word deletion; given a prespecified compression length, a compression is formed by preserving the words which maximise a scoring function. To make Hori and Furui’s (2004) algorithm more comparable to the decision-based model, we have eliminated the compression length parameter. Instead, we search over all lengths to find the compression that gives the maximum score. This process yields more natural compressions with varying lengths. The original score measures the significance of each word (I) in the compression and the linguistic likelihood (L) of the resulting word combinations.2 We add some linguistic knowledge to this formulation through a function (SOV) that captures information about subjects, objects and verbs. The compression score is given in Equation (1). The lambdas (λI, λSOV, λL) weight the contribution of the individual scores: S(V) = M ∑ i=1 λII(vi)+λsovSOV(vi) +λLL(vi|vi−1,vi−2) (1) The sentence V = v1,v2,...,vm (of M words) that maximises the score S(V) is the best compression for an original sentence consisting of N words (M < N). The best compression can be found using dynamic programming. 
The λ’s in Equation (1) can be either optimised using a small amount of training data or set manually (e.g., if short compressions are preferred to longer ones, then the language model should be given a higher weight). Alternatively, weighting could be dispensed with by including a normalising factor in the language model. Here, we follow Hori and Furui’s (2004) original formulation and leave the normalisation to future work. We next introduce each measure individually. Word significance score The word significance score I measures the relative importance of a word in a document. It is similar to tf-idf, a term weighting score commonly used in information retrieval: I(wi) = fi log FA Fi (2) 2Hori and Furui (2004) also have a confidence score based upon how reliable the output of an automatic speech recognition system is. However, we need not consider this score when working with written text and manual transcripts. Where wi is the topic word of interest (topic words are either nouns or verbs), fi is the frequency of wi in the document, Fi is the corpus frequency of wi and FA is the sum of all topic word occurrences in the corpus (∑i Fi). Linguistic score The linguistic score’s L(vi|vi−1,vi−2) responsibility is to select some function words, thus ensuring that compressions remain grammatical. It also controls which topic words can be placed together. The score measures the n-gram probability of the compressed sentence. SOV Score The SOV score is based on the intuition that subjects, objects and verbs should not be dropped while words in other syntactic roles can be considered for removal. This score is based solely on the contents of the sentence considered for compression without taking into account the distribution of subjects, objects or verbs, across documents. It is defined in (3) where fi is the document frequency of a verb, or word bearing the subject/object role and λdefault is a constant weight assigned to all other words. SOV(wi) =    fi if wi in subject, object or verb role λdefault otherwise (3) The SOV score is only applied to the head word of subjects and objects. 3 Corpora Our intent was to assess the performance of the two models just described on written and spoken text. The appeal of written text is understandable since most summarisation work today focuses on this domain. Speech data not only provides a natural test-bed for compression applications (e.g., subtitle generation) but also poses additional challenges. Spoken utterances can be ungrammatical, incomplete, and often contain artefacts such as false starts, interjections, hesitations, and disfluencies. Rather than focusing on spontaneous speech which is abundant in these artefacts, we conduct our study on the less ambitious domain of broadcast news transcripts. This lies inbetween the extremes of written text and spontaneous speech as it has been scripted beforehand and is usually read off an autocue. One stumbling block to performing a comparative study between written data and speech data is that there are no naturally occurring parallel 379 speech corpora for studying compression. Automatic corpus creation is not a viable option either, speakers do not normally create summaries of their own utterances. We thus gathered our own corpus by asking humans to generate compressions for speech transcripts. In what follows we describe how the manual compressions were performed. We also briefly present the written corpus we used for our experiments. 
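As an illustration of the two corpus-based scores above, the following sketch computes Equation (2) from raw counts and Equation (3) from grammatical-role labels. The topic-word lists, corpus counts and role labels are assumed to come from external preprocessing (e.g., a tagger or parser such as RASP), and the default weight is illustrative.

```python
import math
from collections import Counter

def significance_scores(doc_topic_words, corpus_freq):
    """Word significance (Equation 2): I(w) = f_w * log(F_A / F_w), where f_w
    is the in-document frequency, F_w the corpus frequency and F_A the total
    number of topic-word occurrences in the corpus."""
    f = Counter(doc_topic_words)
    F_A = sum(corpus_freq.values())
    return {w: f[w] * math.log(F_A / corpus_freq[w])
            for w in f if corpus_freq.get(w, 0) > 0}

def sov_score(word, role, doc_freq, lam_default=1.0):
    """SOV score (Equation 3): the document frequency for verbs and for head
    words of subjects/objects, a constant weight for everything else."""
    if role in ("subject", "object", "verb"):
        return doc_freq.get(word, 0)
    return lam_default
```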
The latter was automatically constructed and offers an interesting point of comparison with our manually created corpus. Broadcast News Corpus Three annotators were asked to compress 50 broadcast news stories (1,370 sentences) taken from the HUB-4 1996 English Broadcast News corpus provided by the LDC. The HUB-4 corpus contains broadcast news from a variety of networks (CNN, ABC, CSPAN and NPR) which have been manually transcribed and split at the story and sentence level. Each document contains 27 sentences on average and the whole corpus consists of 26,151 tokens.3 The Robust Accurate Statistical Parsing (RASP) toolkit (Briscoe and Carroll 2002) was used to automatically tokenise the corpus. Each annotator was asked to perform sentence compression by removing tokens from the original transcript. Annotators were asked to remove words while: (a) preserving the most important information in the original sentence, and (b) ensuring the compressed sentence remained grammatical. If they wished they could leave a sentence uncompressed by marking it as inappropriate for compression. They were not allowed to delete whole sentences even if they believed they contained no information content with respect to the story as this would blur the task with abstracting. Ziff-Davis Corpus Most previous work (Jing 2000; Knight and Marcu 2002; Riezler et al. 2003; Nguyen et al. 2004a; Turner and Charniak 2005; McDonald 2006) has relied on automatically constructed parallel corpora for training and evaluation purposes. The most popular compression corpus originates from the Ziff-Davis corpus — a collection of news articles on computer products. The corpus was created by matching sentences that occur in an article with sentences that occur in an abstract (Knight and Marcu 2002). The abstract sentences had to contain a subset of the original sentence’s words and the word order had to remain the same. 3The compression corpus is available at http:// homepages.inf.ed.ac.uk/s0460084/data/. A1 A2 A3 Av. Ziff-Davis Comp% 88.0 79.0 87.0 84.4 97.0 CompR 73.1 79.0 70.0 73.0 47.0 Table 2: Compression Rates (Comp% measures the percentage of sentences compressed; CompR is the mean compression rate of all sentences) 1 2 3 4 5 6 7 8 9 10 Length of word span dropped 0 0.1 0.2 0.3 0.4 0.5 Relative number of drops Annotator 1 Annotator 2 Annotator 3 Ziff-Davis + Figure 1: Distribution of span of words dropped Comparisons Following the classification scheme adopted in the British National Corpus (Burnard 2000), we assume throughout this paper that Broadcast News and Ziff-Davis belong to different domains (spoken vs. written text) whereas they represent the same genre (i.e., news). Table 2 shows the percentage of sentences which were compressed (Comp%) and the mean compression rate (CompR) for the two corpora. The annotators compress the Broadcast News corpus to a similar degree. In contrast, the Ziff-Davis corpus is compressed much more aggressively with a compression rate of 47%, compared to 73% for Broadcast News. This suggests that the Ziff-Davis corpus may not be a true reflection of human compression performance and that humans tend to compress sentences more conservatively than the compressions found in abstracts. We also examined whether the two corpora differ with regard to the length of word spans being removed. Figure 1 shows how frequently word spans of varying lengths are being dropped. As can be seen, a higher percentage of long spans (five or more words) are dropped in the Ziff-Davis corpus. 
This suggests that the annotators are removing words rather than syntactic constituents, which provides support for a model that can act on the word level. There is no statistically significant difference between the length of spans dropped between the annotators, whereas there is a significant difference (p < 0.01) between the annotators’ spans and the Ziff-Davis’ spans (using the 380 Wilcoxon Test). The compressions produced for the Broadcast News corpus may differ slightly to the Ziff-Davis corpus. Our annotators were asked to perform sentence compression explicitly as an isolated task rather than indirectly (and possibly subconsciously) as part of the broader task of abstracting, which we can assume is the case with the ZiffDavis corpus. 4 Automatic Evaluation Measures Previous studies relied almost exclusively on human judgements for assessing the wellformedness of automatically derived compressions. Although human evaluations of compression systems are not as large-scale as in other fields (e.g., machine translation), they are typically performed once, at the end of the development cycle. Automatic evaluation measures would allow more extensive parameter tuning and crucially experimentation with larger data sets. Most human studies to date are conducted on a small compression sample, the test portion of the Ziff-Davis corpus (32 sentences). Larger sample sizes would expectedly render human evaluations time consuming and generally more difficult to conduct frequently. Here, we review two automatic evaluation measures that hold promise for the compression task. Simple String Accuracy (SSA, Bangalore et al. 2000) has been proposed as a baseline evaluation metric for natural language generation. It is based on the string edit distance between the generated output and a gold standard. It is a measure of the number of insertion (I), deletion (D) and substitution (S) errors between two strings. It is defined in (4) where R is the length of the gold standard string. Simple String Accuracy = (1−I +D+S R ) (4) The SSA score will assess whether appropriate words have been included in the compression. Another stricter automatic evaluation method is to compare the grammatical relations found in the system compressions against those found in a gold standard. This allows us “to measure the semantic aspects of summarisation quality in terms of grammatical-functional information” (Riezler et al. 2003). The standard metrics of precision, recall and F-score can then be used to measure the quality of a system against a gold standard. Our implementation of the F-score measure used the grammatical relations annotations provided by RASP (Briscoe and Carroll 2002). This parser is particularly appropriate for the compression task since it provides parses for both full sentences and sentence fragments and is generally robust enough to analyse semi-grammatical compressions. We calculated F-score over all the relations provided by RASP (e.g., subject, direct/indirect object, modifier; 15 in total). Correlation with human judgements is an important prerequisite for the wider use of automatic evaluation measures. In the following section we describe an evaluation study examining whether the measures just presented indeed correlate with human ratings of compression quality. 5 Experimental Set-up In this section we present our experimental setup for assessing the performance of the two algorithms discussed above. We explain how different model parameters were estimated. 
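As a concrete reference point for the string-accuracy measure in Equation (4) above, a word-level implementation can be sketched as follows; treating words as the edit units is an assumption of this illustration, and the standard dynamic-programming edit distance supplies the I + D + S term.

```python
def simple_string_accuracy(candidate, reference):
    """Simple String Accuracy (Equation 4): 1 - (I + D + S) / R, where the
    numerator is the word-level edit distance to the gold standard and R is
    the length of the gold-standard string."""
    c, r = candidate.split(), reference.split()
    prev = list(range(len(r) + 1))
    for i, cw in enumerate(c, 1):
        cur = [i]
        for j, rw in enumerate(r, 1):
            cur.append(min(prev[j] + 1,                 # deletion
                           cur[j - 1] + 1,              # insertion
                           prev[j - 1] + (cw != rw)))   # substitution
        prev = cur
    return 1.0 - prev[len(r)] / len(r)
```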
We also describe a judgement elicitation study on automatic and human-authored compressions. Parameter Estimation We created two variants of the decision-tree model, one trained on the Ziff-Davis corpus and one on the Broadcast News corpus. We used 1,035 sentences from the Ziff-Davis corpus for training; the same sentences were previously used in related work (Knight and Marcu 2002). The second variant was trained on 1,237 sentences from the Broadcast News corpus. The training data for both models was parsed using Charniak’s (2000) parser. Learning cases were automatically generated using a set of 90 features similar to Knight and Marcu (2002). For the word-based method, we randomly selected 50 sentences from each training set to optimise the lambda weighting parameters4. Optimisation was performed using Powell’s method (Press et al. 1992). Recall from Section 2.2 that the compression score has three main parameters: the significance, linguistic, and SOV scores. The significance score was calculated using 25 million tokens from the Broadcast News corpus (spoken variant) and 25 million tokens from the North American News Text Corpus (written variant). The linguistic score was estimated using a trigram language model. The language model was trained on the North Ameri4To treat both models on an equal footing, we attempted to train the decision-tree model solely on 50 sentences. However, it was unable to produce any reasonable compressions, presumably due to insufficient learning instances. 381 can corpus (25 million tokens) using the CMUCambridge Language Modeling Toolkit (Clarkson and Rosenfeld 1997) with a vocabulary size of 50,000 tokens and Good-Turing discounting. Subjects, objects, and verbs for the SOV score were obtained from RASP (Briscoe and Carroll 2002). All our experiments were conducted on sentences for which we obtained syntactic analyses. RASP failed on 17 sentences from the Broadcast news corpus and 33 from the Ziff-Davis corpus; Charniak’s (2000) parser successfully parsed the Broadcast News corpus but failed on three sentences from the Ziff-Davis corpus. Evaluation Data We randomly selected 40 sentences for evaluation purposes, 20 from the testing portion of the Ziff-Davis corpus (32 sentences) and 20 sentences from the Broadcast News corpus (133 sentences were set aside for testing). This is comparable to previous studies which have used the 32 test sentences from the Ziff-Davis corpus. None of the 20 Broadcast News sentences were used for optimisation. We ran the decision-tree system and the word-based system on these 40 sentences. One annotator was randomly selected to act as the gold standard for the Broadcast News corpus; the gold standard for the Ziff-Davis corpus was the sentence that occurred in the abstract. For each original sentence we had three compressions; two generated automatically by our systems and a human authored gold standard. Thus, the total number of compressions was 120 (3x40). Human Evaluation The 120 compressions were rated by human subjects. Their judgements were also used to examine whether the automatic evaluation measures discussed in Section 4 correlate reliably with behavioural data. Sixty unpaid volunteers participated in our elicitation study, all were self reported native English speakers. The study was conducted remotely over the Internet. Participants were presented with a set of instructions that explained the task and defined sentence compression with the aid of examples. 
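Returning briefly to the parameter estimation above, the Powell search over the three lambda weights can be sketched in a few lines. The objective actually optimised is not stated here, so the sketch assumes some quality function (for instance SSA against the gold compressions) and an illustrative starting point; compress_fn and quality_fn are assumed helpers.

```python
import numpy as np
from scipy.optimize import minimize

def tune_lambdas(dev_sents, dev_gold, compress_fn, quality_fn,
                 start=(1.0, 1.0, 1.0)):
    """Tune (lambda_I, lambda_SOV, lambda_L) on a small development set with
    Powell's method. compress_fn(sentence, lambdas) produces a compression and
    quality_fn(compression, gold) scores it against the gold standard."""
    def neg_quality(lams):
        scores = [quality_fn(compress_fn(s, lams), g)
                  for s, g in zip(dev_sents, dev_gold)]
        return -float(np.mean(scores))
    return minimize(neg_quality, x0=np.asarray(start), method="Powell").x
```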
They first read the original sentence with the compression hidden. Then the compression was revealed by pressing a button. Each participant saw 40 compressions. A Latin square design prevented subjects from seeing two different compressions of the same sentence. The order of the sentences was randomised. Participants were asked to rate each compression they saw on a five point scale taking into account the information retained by the compression and its grammaticality. They were told all o: Apparently Fergie very much wants to have a career in television. d: A career in television. w: Fergie wants to have a career in television. g: Fergie wants a career in television. o: Many debugging features, including user-defined break points and variable-watching and message-watching windows, have been added. d: Many debugging features. w: Debugging features, and windows, have been added. g: Many debugging features have been added. o: As you said, the president has just left for a busy three days of speeches and fundraising in Nevada, California and New Mexico. d: As you said, the president has just left for a busy three days. w: You said, the president has left for three days of speeches and fundraising in Nevada, California and New Mexico. g: The president left for three days of speeches and fundraising in Nevada, California and New Mexico. Table 3: Compression examples (o: original sentence, d: decision-tree compression, w: wordbased compression, g: gold standard) compressions were automatically generated. Examples of the compressions our participants saw are given in Table 3. 6 Results Our experiments were designed to answer three questions: (1) Is there a significant difference between the compressions produced by supervised (constituent-based) and weakly unsupervised (word-based) approaches? (2) How well do the two models port across domains (written vs. spoken text) and corpora types (human vs. automatically created)? (3) Do automatic evaluation measures correlate with human judgements? One of our first findings is that the the decisiontree model is rather sensitive to the style of training data. The model cannot capture and generalise single word drops as effectively as constituent drops. When the decision-tree is trained on the Broadcast News corpus, it is unable to create suitable compressions. On the evaluation data set, 75% of the compressions produced are the original sentence or the original sentence with one word removed. It is possible that the Broadcast News compression corpus contains more varied compressions than those of the Ziff-Davis and therefore a larger amount of training data would be required to learn a reliable decision-tree model. We thus used the Ziff-Davis trained decision-tree model to obtain compressions for both corpora. Our results are summarised in Tables 4 and 5. Table 4 lists the average compression rates for 382 Broadcast News CompR SSA F-score Decision-tree 0.55 0.34 0.40 Word-based 0.72 0.51 0.54 gold standard 0.71 – – Ziff-Davis CompR SSA F-score Decision-tree 0.58 0.20 0.34 Word-based 0.60 0.19 0.39 gold standard 0.54 – – Table 4: Results using automatic evaluation measures Compression Broadcast News Ziff-Davis Decision-tree 2.04 2.34 Word-based 2.78 2.43 gold standard 3.87 3.53 Table 5: Mean ratings from human evaluation each model as well as the models’ performance according to the two automatic evaluation measures discussed in Section 4. The row ‘gold standard’ displays human-produced compression rates. 
Table 5 shows the results of our judgement elicitation study. The compression rates (CompR, Table 4) indicate that the decision-tree model compresses more aggressively than the word-based model. This is due to the fact that it mostly removes entire constituents rather than individual words. The wordbased model is closer to the human compression rate. According to our automatic evaluation measures, the decision-tree model is significantly worse than the word-based model (using the Student t test, SSA p < 0.05, F-score p < 0.05) on the Broadcast News corpus. Both models are significantly worse than humans (SSA p < 0.05, Fscore p < 0.01). There is no significant difference between the two systems using the Ziff-Davis corpus on both simple string accuracy and relation F-score, whereas humans significantly outperform the two systems. We have performed an Analysis of Variance (ANOVA) to examine whether similar results are obtained when using human judgements. Statistical tests were done using the mean of the ratings (see Table 5). The ANOVA revealed a reliable effect of compression type by subjects and by items (p < 0.01). Post-hoc Tukey tests confirmed that the word-based model outperforms the decisiontree model (α < 0.05) on the Broadcast news corpus; however, the two models are not significantly Measure Ziff-Davis Broadcast News SSA 0.171 0.348* F-score 0.575** 0.532** *p < 0.05 **p < 0.01 Table 6: Correlation (Pearson’s r) between evaluation measures and human ratings. Stars indicate level of statistical significance. different when using the Ziff-Davis corpus. Both systems perform significantly worse than the gold standard (α < 0.05). We next examine the degree to which the automatic evaluation measures correlate with human ratings. Table 6 shows the results of correlating the simple string accuracy (SSA) and relation Fscore against compression judgements. The SSA does not correlate on both corpora with human judgements; it thus seems to be an unreliable measure of compression performance. However, the Fscore correlates significantly with human ratings, yielding a correlation coefficient of r = 0.575 on the Ziff-Davis corpus and r = 0.532 on the Broadcast news. To get a feeling for the difficulty of the task, we assessed how well our participants agreed in their ratings using leave-one-out resampling (Weiss and Kulikowski 1991). The technique correlates the ratings of each participant with the mean ratings of all the other participants. The average agreement is r = 0.679 on the Ziff-Davis corpus and r = 0.746 on the Broadcast News corpus. This result indicates that F-score’s agreement with the human data is not far from the human upper bound. 7 Conclusions and Future Work In this paper we have provided a comparison between a supervised (constituent-based) and a minimally supervised (word-based) approach to sentence compression. Our results demonstrate that the word-based model performs equally well on spoken and written text. Since it does not rely heavily on training data, it can be easily extended to languages or domains for which parallel compression corpora are scarce. When no parallel corpora are available the parameters can be manually tuned to produce compressions. In contrast, the supervised decision-tree model is not particularly robust on spoken text, it is sensitive to the nature of the training data, and did not produce adequate compressions when trained on the humanauthored Broadcast News corpus. 
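As a brief aside on the agreement figures reported above, the leave-one-out resampling can be sketched in a few lines of NumPy; the sketch assumes a complete participants-by-items rating matrix, which simplifies the Latin-square design actually used.

```python
import numpy as np

def pearson(x, y):
    """Pearson's r between two rating vectors."""
    return float(np.corrcoef(x, y)[0, 1])

def leave_one_out_agreement(ratings):
    """Inter-judge agreement by leave-one-out resampling: correlate each
    participant's ratings with the mean ratings of all the other participants
    and average the resulting coefficients."""
    ratings = np.asarray(ratings, dtype=float)
    coeffs = []
    for i in range(ratings.shape[0]):
        others = np.delete(ratings, i, axis=0).mean(axis=0)
        coeffs.append(pearson(ratings[i], others))
    return float(np.mean(coeffs))
```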
A comparison of the automatically gathered Ziff-Davis corpus 383 with the Broadcast News corpus revealed important differences between the two corpora and thus suggests that automatically created corpora may not reflect human compression performance. We have also assessed whether automatic evaluation measures can be used for the compression task. Our results show that grammatical relationsbased F-score (Riezler et al. 2003) correlates reliably with human judgements and could thus be used to measure compression performance automatically. For example, it could be used to assess progress during system development or for comparing across different systems and system configurations with much larger test sets than currently employed. In its current formulation, the only function driving compression in the word-based model is the language model. The word significance and SOV scores are designed to single out important words that the model should not drop. We have not yet considered any functions that encourage compression. Ideally these functions should be inspired from the underlying compression process. Finding such a mechanism is an avenue of future work. We would also like to enhance the wordbased model with more linguistic knowledge; we plan to experiment with syntax-based language models and more richly annotated corpora. Another important future direction lies in applying the unsupervised model presented here to languages with more flexible word order and richer morphology than English (e.g., German, Czech). We suspect that these languages will prove challenging for creating grammatically acceptable compressions. Finally, our automatic evaluation experiments motivate the use of relations-based Fscore as a means of directly optimising compression quality, much in the same way MT systems optimise model parameters using BLEU as a measure of translation quality. Acknowledgements We are grateful to our annotators Vasilis Karaiskos, Beata Kouchnir, and Sarah Luger. Thanks to Jean Carletta, Frank Keller, Steve Renals, and Sebastian Riedel for helpful comments and suggestions. Lapata acknowledges the support of EPSRC (grant GR/T04540/01). References Bangalore, Srinivas, Owen Rambow, and Steve Whittaker. 2000. Evaluation metrics for generation. In Proceedings of the 1st INLG. Mitzpe Ramon, Israel, pages 1–8. Briscoe, E. J. and J. Carroll. 2002. Robust accurate statistical annotation of general text. In Proceedings of the 3rd LREC. Las Palmas, Spain, pages 1499–1504. Burnard, Lou. 2000. The Users Reference Guide for the British National Corpus (World Edition). British National Corpus Consortium, Oxford University Computing Service. Charniak, Eugene. 2000. A maximum-entropy-inspired parser. In Proceedings of the 1st NAACL. San Francisco, CA, pages 132–139. Clarkson, Philip and Ronald Rosenfeld. 1997. Statistical language modeling using the CMU–cambridge toolkit. In Proceedings of Eurospeech. Rhodes, Greece, pages 2707– 2710. Corston-Oliver, Simon. 2001. Text Compaction for Display on Very Small Screens. In Proceedings of the NAACL Workshop on Automatic Summarization. Pittsburgh, PA, pages 89–98. Grefenstette, Gregory. 1998. Producing Intelligent Telegraphic Text Reduction to Provide an Audio Scanning Service for the Blind. In Proceedings of the AAAI Symposium on Intelligent Text Summarization. Stanford, CA, pages 111–117. Hori, Chiori and Sadaoki Furui. 2004. Speech summarization: an approach through word extraction and a method for evaluation. 
IEICE Transactions on Information and Systems E87-D(1):15–25. Jing, Hongyan. 2000. Sentence Reduction for Automatic Text Summarization. In Proceedings of the 6th ANLP. Seattle,WA, pages 310–315. Knight, Kevin and Daniel Marcu. 2002. Summarization beyond sentence extraction: a probabilistic approach to sentence compression. Artificial Intelligence 139(1):91–107. McDonald, Ryan. 2006. Discriminative sentence compression with soft syntactic constraints. In Proceedings of the 11th EACL. Trento, Italy, pages 297–304. Nguyen, Minh Le, Susumu Horiguchi, Akira Shimazu, and Bao Tu Ho. 2004a. Example-based sentence reduction using the hidden Markov model. ACM TALIP 3(2):146–158. Nguyen, Minh Le, Akira Shimazu, Susumu Horiguchi, Tu Bao Ho, and Masaru Fukushi. 2004b. Probabilistic sentence reduction using support vector machines. In Proceedings of the 20th COLING. Geneva, Switzerland, pages 743–749. Press, William H., Saul A. Teukolsky, William T. Vetterling, and Brian P. Flannery. 1992. Numerical Recipes in C: The Art of Scientific Computing. Cambridge University Press, New York, NY, USA. Quinlan, J. R. 1993. C4.5 – Programs for Machine Learning. The Morgan Kaufmann series in machine learning. Morgan Kaufman Publishers. Riezler, Stefan, Tracy H. King, Richard Crouch, and Annie Zaenen. 2003. Statistical sentence condensation using ambiguity packing and stochastic disambiguation methods for lexical-functional grammar. In Proceedings of the HLT/NAACL. Edmonton, Canada, pages 118–125. Turner, Jenine and Eugene Charniak. 2005. Supervised and unsupervised learning for sentence compression. In Proceedings of the 43rd ACL. Ann Arbor, MI, pages 290–297. Vandeghinste, Vincent and Yi Pan. 2004. Sentence compression for automated subtitling: A hybrid approach. In Proceedings of the ACL Workshop on Text Summarization. Barcelona, Spain, pages 89–95. Weiss, Sholom M. and Casimir A. Kulikowski. 1991. Computer systems that learn: classification and prediction methods from statistics, neural nets, machine learning, and expert systems. Morgan Kaufmann Publishers Inc., San Francisco, CA, USA. 384
Proceedings of the 21st International Conference on Computational Linguistics and 44th Annual Meeting of the ACL, pages 385–392, Sydney, July 2006. c⃝2006 Association for Computational Linguistics A Bottom-up Approach to Sentence Ordering for Multi-document Summarization Danushka Bollegala Naoaki Okazaki ∗ Graduate School of Information Science and Technology The University of Tokyo 7-3-1, Hongo, Bunkyo-ku, Tokyo, 113-8656, Japan {danushka,okazaki}@mi.ci.i.u-tokyo.ac.jp [email protected] Mitsuru Ishizuka Abstract Ordering information is a difficult but important task for applications generating natural-language text. We present a bottom-up approach to arranging sentences extracted for multi-document summarization. To capture the association and order of two textual segments (eg, sentences), we define four criteria, chronology, topical-closeness, precedence, and succession. These criteria are integrated into a criterion by a supervised learning approach. We repeatedly concatenate two textual segments into one segment based on the criterion until we obtain the overall segment with all sentences arranged. Our experimental results show a significant improvement over existing sentence ordering strategies. 1 Introduction Multi-document summarization (MDS) (Radev and McKeown, 1999) tackles the information overload problem by providing a condensed version of a set of documents. Among a number of sub-tasks involved in MDS, eg, sentence extraction, topic detection, sentence ordering, information extraction, sentence generation, etc., most MDS systems have been based on an extraction method, which identifies important textual segments (eg, sentences or paragraphs) in source documents. It is important for such MDS systems to determine a coherent arrangement of the textual segments extracted from multi-documents in order to reconstruct the text structure for summarization. Ordering information is also essential for ∗Research Fellow of the Japan Society for the Promotion of Science (JSPS) other text-generation applications such as Question Answering. A summary with improperly ordered sentences confuses the reader and degrades the quality/reliability of the summary itself. Barzilay (2002) has provided empirical evidence that proper order of extracted sentences improves their readability significantly. However, ordering a set of sentences into a coherent text is a nontrivial task. For example, identifying rhetorical relations (Mann and Thompson, 1988) in an ordered text has been a difficult task for computers, whereas our task is even more complicated: to reconstruct such relations from unordered sets of sentences. Source documents for a summary may have been written by different authors, by different writing styles, on different dates, and based on different background knowledge. We cannot expect that a set of extracted sentences from such diverse documents will be coherent on their own. Several strategies to determine sentence ordering have been proposed as described in section 2. However, the appropriate way to combine these strategies to achieve more coherent summaries remains unsolved. In this paper, we propose four criteria to capture the association of sentences in the context of multi-document summarization for newspaper articles. These criteria are integrated into one criterion by a supervised learning approach. We also propose a bottom-up approach in arranging sentences, which repeatedly concatenates textual segments until the overall segment with all sentences arranged, is achieved. 
2 Related Work Existing methods for sentence ordering are divided into two approaches: making use of chronological information (McKeown et al., 1999; Lin 385 and Hovy, 2001; Barzilay et al., 2002; Okazaki et al., 2004); and learning the natural order of sentences from large corpora not necessarily based on chronological information (Lapata, 2003; Barzilay and Lee, 2004). A newspaper usually disseminates descriptions of novel events that have occurred since the last publication. For this reason, ordering sentences according to their publication date is an effective heuristic for multidocument summarization (Lin and Hovy, 2001; McKeown et al., 1999). Barzilay et al. (2002) have proposed an improved version of chronological ordering by first grouping sentences into sub-topics discussed in the source documents and then arranging the sentences in each group chronologically. Okazaki et al. (2004) have proposed an algorithm to improve chronological ordering by resolving the presuppositional information of extracted sentences. They assume that each sentence in newspaper articles is written on the basis that presuppositional information should be transferred to the reader before the sentence is interpreted. The proposed algorithm first arranges sentences in a chronological order and then estimates the presuppositional information for each sentence by using the content of the sentences placed before each sentence in its original article. The evaluation results show that the proposed algorithm improves the chronological ordering significantly. Lapata (2003) has suggested a probabilistic model of text structuring and its application to the sentence ordering. Her method calculates the transition probability from one sentence to the next from a corpus based on the Cartesian product between two sentences defined using the following features: verbs (precedent relationships of verbs in the corpus); nouns (entity-based coherence by keeping track of the nouns); and dependencies (structure of sentences). Although she has not compared her method with chronological ordering, it could be applied to generic domains, not relying on the chronological clue provided by newspaper articles. Barzilay and Lee (2004) have proposed content models to deal with topic transition in domain specific text. The content models are formalized by Hidden Markov Models (HMMs) in which the hidden state corresponds to a topic in the domain of interest (eg, earthquake magnitude or previous earthquake occurrences), and the state transitions capture possible information-presentation orderings. The evaluation results showed that their method outperformed Lapata’s approach by a wide margin. They did not compare their method with chronological ordering as an application of multi-document summarization. As described above, several good strategies/heuristics to deal with the sentence ordering problem have been proposed. In order to integrate multiple strategies/heuristics, we have formalized them in a machine learning framework and have considered an algorithm to arrange sentences using the integrated strategy. 3 Method We define notation a ≻b to represent that sentence a precedes sentence b. We use the term segment to describe a sequence of ordered sentences. When segment A consists of sentences a1, a2, ..., am in this order, we denote as: A = (a1 ≻a2 ≻... ≻am). (1) The two segments A and B can be ordered either B after A or A after B. We define the notation A ≻B to show that segment A precedes segment B. 
Let us consider a bottom-up approach in arranging sentences. Starting with a set of segments initialized with a sentence for each, we concatenate two segments, with the strongest association (discussed later) of all possible segment pairs, into one segment. Repeating the concatenating will eventually yield a segment with all sentences arranged. The algorithm is considered as a variation of agglomerative hierarchical clustering with the ordering information retained at each concatenating process. The underlying idea of the algorithm, a bottomup approach to text planning, was proposed by Marcu (1997). Assuming that the semantic units (sentences) and their rhetorical relations (eg, sentence a is an elaboration of sentence d) are given, he transcribed a text structuring task into the problem of finding the best discourse tree that satisfied the set of rhetorical relations. He stated that global coherence could be achieved by satisfying local coherence constraints in ordering and clustering, thereby ensuring that the resultant discourse tree was well-formed. Unfortunately, identifying the rhetorical relation between two sentences has been a difficult 386 a A B C D b c d E = (b a) G = (b a c d) F = (c d) Segments Sentences f (association strength) Figure 1: Arranging four sentences A, B, C, and D with a bottom-up approach. task for computers. However, the bottom-up algorithm for arranging sentences can still be applied only if the direction and strength of the association of the two segments (sentences) are defined. Hence, we introduce a function f(A ≻B) to represent the direction and strength of the association of two segments A and B, f(A ≻B) = ½ p (if A precedes B) 0 (if B precedes A) , (2) where p (0 ≤p ≤1) denotes the association strength of the segments A and B. The association strengths of the two segments with different directions, eg, f(A ≻B) and f(B ≻A), are not always identical in our definition, f(A ≻B) ̸= f(B ≻A). (3) Figure 1 shows the process of arranging four sentences a, b, c, and d. Firstly, we initialize four segments with a sentence for each, A = (a), B = (b), C = (c), D = (d). (4) Suppose that f(B ≻A) has the highest value of all possible pairs, eg, f(A ≻B), f(C ≻D), etc, we concatenate B and A to obtain a new segment, E = (b ≻a). (5) Then we search for the segment pair with the strongest association. Supposing that f(C ≻D) has the highest value, we concatenate C and D to obtain a new segment, F = (c ≻d). (6) Finally, comparing f(E ≻F) and f(F ≻E), we obtain the global sentence ordering, G = (b ≻a ≻c ≻d). (7) In the above description, we have not defined the association of the two segments. The previous work described in Section 2 has addressed the association of textual segments (sentences) to obtain coherent orderings. We define four criteria to capture the association of two segments: chronology; topical-closeness; precedence; and succession. These criteria are integrated into a function f(A ≻B) by using a machine learning approach. The rest of this section explains the four criteria and an integration method with a Support Vector Machine (SVM) (Vapnik, 1998) classifier. 3.1 Chronology criterion Chronology criterion reflects the chronological ordering (Lin and Hovy, 2001; McKeown et al., 1999), which arranges sentences in a chronological order of the publication date. 
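Before turning to the individual criteria, the concatenation procedure illustrated in Figure 1 can be summarised as the following greedy loop, where f stands for the integrated association function of Formula 2 (a sketch, ignoring efficiency concerns):

```python
def arrange(sentences, f):
    """Greedy bottom-up ordering: start from singleton segments and repeatedly
    concatenate the ordered pair of segments with the strongest association
    f(A >> B) until a single segment containing all sentences remains."""
    segments = [[s] for s in sentences]
    while len(segments) > 1:
        best = None
        for i, A in enumerate(segments):
            for j, B in enumerate(segments):
                if i == j:
                    continue
                strength = f(A, B)            # association strength of A >> B
                if best is None or strength > best[0]:
                    best = (strength, i, j)
        _, i, j = best
        merged = segments[i] + segments[j]     # order: A before B
        segments = [seg for k, seg in enumerate(segments) if k not in (i, j)]
        segments.append(merged)
    return segments[0]
```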
We define the association strength of arranging segments B after A measured by a chronology criterion fchro(A ≻ B) in the following formula,

fchro(A ≻ B) =
  1,   if T(am) < T(b1)
  1,   if [D(am) = D(b1)] ∧ [N(am) < N(b1)]
  0.5, if [T(am) = T(b1)] ∧ [D(am) ≠ D(b1)]
  0,   otherwise.   (8)

Here, am represents the last sentence in segment A; b1 represents the first sentence in segment B; T(s) is the publication date of the sentence s; D(s) is the unique identifier of the document to which sentence s belongs; and N(s) denotes the line number of sentence s in the original document. The chronological order of arranging segment B after A is determined by the comparison between the last sentence in segment A and the first sentence in segment B. The chronology criterion assesses the appropriateness of arranging segment B after A if: sentence am is published earlier than b1; or sentence am appears before b1 in the same article. If sentences am and b1 are published on the same day but appear in different articles, the criterion assumes the order to be undefined. If none of the above conditions are satisfied, the criterion estimates that segment B will precede A.

3.2 Topical-closeness criterion The topical-closeness criterion deals with the association, based on the topical similarity, of two segments.

[Figure 2: Precedence criterion; the figure shows segments A and B together with the original articles of sentences b1, b2, b3 and their preceding sentences Pb1, Pb2, Pb3, with similarities aggregated by max and average.]

The criterion reflects the ordering strategy proposed by Barzilay et al. (2002), which groups sentences referring to the same topic. To measure the topical closeness of two sentences, we represent each sentence with a vector whose elements correspond to the occurrence1 of the nouns and verbs in the sentence. We define the topical closeness of two segments A and B as follows,

ftopic(A ≻ B) = (1 / |B|) Σb∈B maxa∈A sim(a, b).   (9)

Here, sim(a, b) denotes the similarity of sentences a and b, which is calculated by the cosine similarity of the two vectors corresponding to the sentences. For sentence b ∈ B, maxa∈A sim(a, b) chooses the sentence a ∈ A most similar to sentence b and yields the similarity. The topical-closeness criterion ftopic(A ≻ B) assigns a higher value when the topic referred to by segment B is the same as that of segment A.
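The two criteria defined so far translate directly into code; in the sketch below each sentence is assumed to carry its publication date T, document identifier D and line number N, and sim is the cosine similarity over the boolean noun/verb vectors described above.

```python
def f_chro(A, B):
    """Chronology criterion (Formula 8), comparing the last sentence of A
    with the first sentence of B."""
    a, b = A[-1], B[0]
    if a.T < b.T:
        return 1.0
    if a.D == b.D and a.N < b.N:
        return 1.0
    if a.T == b.T and a.D != b.D:
        return 0.5
    return 0.0

def f_topic(A, B, sim):
    """Topical-closeness criterion (Formula 9): for every sentence b in B take
    its best similarity to any sentence in A, then average over B."""
    return sum(max(sim(a, b) for a in A) for b in B) / len(B)
```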
3.3 Precedence criterion Let us think of the case where we arrange segment A before B. Each sentence in segment B has the presuppositional information that should be conveyed to a reader in advance. Given sentence b ∈B, such presuppositional information may be presented by the sentences appearing before the sentence b in the original article. However, we cannot guarantee whether a sentenceextraction method for multi-document summarization chooses any sentences before b for a summary because the extraction method usually deter1The vector values are represented by boolean values, i.e., 1 if the sentence contains a word, otherwise 0. a1 a2 .... .. .. .... .. ....... ...... . ... ...... .. .., .... ... .... .... .. .. .... .. ....... ...... . ... ...... .. .., .... ... .... a3 .... .. .. .... .. ....... ...... . ... ...... .. .., .... ... .... b b2 b3 a3 a2 a1 Sa1 Sa2 Sa3 . ... ...... .. .., .... ... .... .... .. .. .... .. ....... ...... . ... ...... .. .., .... ... .... .... .. .. .... .. ....... ...... . ... ...... .. .., .... ... .... Segment A ? Segment B Original article for sentence a1 Original article for sentence a2 Original article for sentence a3 Original article for sentence for sentence max average max max .... .. .. .... .. ....... ...... . ... ...... .. .., .... ... .... .... .. .. .... .. ....... ...... b1 Original article .... .. .. .... .. ....... ...... Original article for sentence .... .. .. .... .. ....... ...... .... .. .. .... .. ....... ...... .... .. .. .... .. ....... ...... .... .. .. .... .. ....... ...... . ... ...... .. .., .... ... .... 1 b for sentence Original article Figure 3: Succession criterion mines a set of sentences, within the constraint of summary length, that maximizes information coverage and excludes redundant information. Precedence criterion measures the substitutability of the presuppositional information of segment B (eg, the sentences appearing before sentence b) as segment A. This criterion is a formalization of the sentence-ordering algorithm proposed by Okazaki et al, (2004). We define the precedence criterion in the following formula, fpre(A ≻B) = 1 |B| X b∈B max a∈A,p∈Pb sim(a, p). (10) Here, Pb is a set of sentences appearing before sentence b in the original article; and sim(a, b) denotes the cosine similarity of sentences a and b (defined as in the topical-closeness criterion). Figure 2 shows an example of calculating the precedence criterion for arranging segment B after A. We approximate the presuppositional information for sentence b by sentences Pb, ie, sentences appearing before the sentence b in the original article. Calculating the similarity among sentences in Pb and A by the maximum similarity of the possible sentence combinations, Formula 10 is interpreted as the average similarity of the precedent sentences ∀Pb(b ∈B) to the segment A. 3.4 Succession criterion The idea of succession criterion is the exact opposite of the precedence criterion. The succession criterion assesses the coverage of the succedent information for segment A by arranging segment B 388 a b c d Partitioning point segment before the partitioning point segment after the partitioning point Partitioning window Figure 4: Partitioning a human-ordered extract into pairs of segments after A: fsucc(A ≻B) = 1 |A| X a∈A max s∈Sa,b∈B sim(s, b). (11) Here, Sa is a set of sentences appearing after sentence a in the original article; and sim(a, b) denotes the cosine similarity of sentences a and b (defined as in the topical-closeness criterion). 
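Formulas 10 and 11 can be sketched analogously; before(b) and after(a) are assumed accessors returning Pb and Sa, and an association of 0 is assumed when these sets are empty (a case the formulas leave unspecified).

```python
def f_pre(A, B, sim, before):
    """Precedence criterion (Formula 10). before(b) returns P_b, the sentences
    preceding b in its original article."""
    total = 0.0
    for b in B:
        pb = before(b)
        total += max((sim(a, p) for a in A for p in pb), default=0.0)
    return total / len(B)

def f_succ(A, B, sim, after):
    """Succession criterion (Formula 11). after(a) returns S_a, the sentences
    following a in its original article."""
    total = 0.0
    for a in A:
        sa = after(a)
        total += max((sim(s, b) for s in sa for b in B), default=0.0)
    return total / len(A)
```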
Figure 3 shows an example of calculating the succession criterion to arrange segments B after A. The succession criterion measures the substitutability of the succedent information (eg, the sentences appearing after the sentence a ∈A) as segment B. 3.5 SVM classifier to assess the integrated criterion We integrate the four criteria described above to define the function f(A ≻B) to represent the association direction and strength of the two segments A and B (Formula 2). More specifically, given the two segments A and B, function f(A ≻B) is defined to yield the integrated association strength from four values, fchro(A ≻B), ftopic(A ≻B), fpre(A ≻B), and fsucc(A ≻B). We formalize the integration task as a binary classification problem and employ a Support Vector Machine (SVM) as the classifier. We conducted a supervised learning as follows. We partition a human-ordered extract into pairs each of which consists of two non-overlapping segments. Let us explain the partitioning process taking four human-ordered sentences, a ≻b ≻ c ≻d shown in Figure 4. Firstly, we place the partitioning point just after the first sentence a. Focusing on sentence a arranged just before the partition point and sentence b arranged just after we identify the pair {(a), (b)} of two segments (a) and (b). Enumerating all possible pairs of two segments facing just before/after the partitioning point, we obtain the following pairs, {(a), (b ≻ c)} and {(a), (b ≻c ≻d)}. Similarly, segment +1 : [fchro(A ≻B), ftopic(A ≻B), fpre(A ≻B), fsucc(A ≻B)] −1 : [fchro(B ≻A), ftopic(B ≻A), fpre(B ≻A), fsucc(B ≻A)] Figure 5: Two vectors in a training data generated from two ordered segments A ≻B pairs, {(b), (c)}, {(a ≻b), (c)}, {(b), (c ≻d)}, {(a ≻b), (c ≻d)}, are obtained from the partitioning point between sentence b and c. Collecting the segment pairs from the partitioning point between sentences c and d (i.e., {(c), (d)}, {(b ≻ c), (d)} and {(a ≻b ≻c), (d)}), we identify ten pairs in total form the four ordered sentences. In general, this process yields n(n2−1)/6 pairs from ordered n sentences. From each pair of segments, we generate one positive and one negative training instance as follows. Given a pair of two segments A and B arranged in an order A ≻B, we calculate four values, fchro(A ≻B), ftopic(A ≻B), fpre(A ≻B), and fsucc(A ≻B) to obtain the instance with the four-dimensional vector (Figure 5). We label the instance (corresponding to A ≻B) as a positive class (ie, +1). Simultaneously, we obtain another instance with a four-dimensional vector corresponding to B ≻A. We label it as a negative class (ie, −1). Accumulating these instances as training data, we obtain a binary classifier by using a Support Vector Machine with a quadratic kernel. The SVM classifier yields the association direction of two segments (eg, A ≻B or B ≻A) with the class information (ie, +1 or −1). We assign the association strength of two segments by using the class probability estimate that the instance belongs to a positive (+1) class. When an instance is classified into a negative (−1) class, we set the association strength as zero (see the definition of Formula 2). 4 Evaluation We evaluated the proposed method by using the 3rd Text Summarization Challenge (TSC-3) corpus2. The TSC-3 corpus contains 30 sets of extracts, each of which consists of unordered sentences3 extracted from Japanese newspaper articles relevant to a topic (query). 
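Pulling Section 3.5 together, the pair-generation step can be sketched as follows, with features(A, B) assumed to return the four-dimensional vector [fchro, ftopic, fpre, fsucc].

```python
def training_instances(ordered, features):
    """Enumerate segment pairs around every partitioning point of a
    human-ordered extract (cf. Figure 4) and emit one positive and one
    negative instance per pair. An extract of n sentences yields
    n(n^2 - 1)/6 pairs."""
    data = []
    n = len(ordered)
    for p in range(1, n):                    # partitioning point
        for i in range(p):                   # A = ordered[i:p], ends at the point
            for j in range(p + 1, n + 1):    # B = ordered[p:j], starts at the point
                A, B = ordered[i:p], ordered[p:j]
                data.append((features(A, B), +1))   # observed order A >> B
                data.append((features(B, A), -1))   # reversed order
    return data
```

A classifier with a quadratic kernel and class-probability estimates could then be fitted with, for instance, scikit-learn's SVC(kernel="poly", degree=2, probability=True); the estimated probability of the positive class would serve as the association strength in Formula 2. This is one possible realisation, not necessarily the toolkit used here.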
We arrange the extracts by using different algorithms and evaluate 2http://lr-www.pi.titech.ac.jp/tsc/tsc3-en.html 3Each extract consists of ca. 15 sentences on average. 389 Table 1: Correlation between two sets of humanordered extracts Metric Mean Std. Dev Min Max Spearman 0.739 0.304 -0.2 1 Kendall 0.694 0.290 0 1 Average Continuity 0.401 0.404 0.001 1 the readability of the ordered extracts by a subjective grading and several metrics. In order to construct training data applicable to the proposed method, we asked two human subjects to arrange the extracts and obtained 30(topics) × 2(humans) = 60 sets of ordered extracts. Table 1 shows the agreement of the ordered extracts between the two subjects. The correlation is measured by three metrics, Spearman’s rank correlation, Kendall’s rank correlation, and average continuity (described later). The mean correlation values (0.74 for Spearman’s rank correlation and 0.69 for Kendall’s rank correlation) indicate a certain level of agreement in sentence orderings made by the two subjects. 8 out of 30 extracts were actually identical. We applied the leave-one-out method to the proposed method to produce a set of sentence orderings. In this experiment, the leave-out-out method arranges an extract by using an SVM model trained from the rest of the 29 extracts. Repeating this process 30 times with a different topic for each iteration, we generated a set of 30 extracts for evaluation. In addition to the proposed method, we prepared six sets of sentence orderings produced by different algorithms for comparison. We describe briefly the seven algorithms (including the proposed method): Agglomerative ordering (AGL) is an ordering arranged by the proposed method; Random ordering (RND) is the lowest anchor, in which sentences are arranged randomly; Human-made ordering (HUM) is the highest anchor, in which sentences are arranged by a human subject; Chronological ordering (CHR) arranges sentences with the chronology criterion defined in Formula 8. Sentences are arranged in chronological order of their publication date; Topical-closeness ordering (TOP) arranges sentences with the topical-closeness criterion defined in Formula 9; 0 20 40 60 80 100 Unacceptable Poor Acceptable Perfect HUM AGL CHR RND % Figure 6: Subjective grading Precedence ordering (PRE) arranges sentences with the precedence criterion defined in Formula 10; Suceedence ordering (SUC) arranges sentences with the succession criterion defined in Formula 11. The last four algorithms (CHR, TOP, PRE, and SUC) arrange sentences by the corresponding criterion alone, each of which uses the association strength directly to arrange sentences without the integration of other criteria. These orderings are expected to show the performance of each expert independently and their contribution to solving the sentence ordering problem. 4.1 Subjective grading Evaluating a sentence ordering is a challenging task. Intrinsic evaluation that involves human judges to rank a set of sentence orderings is a necessary approach to this task (Barzilay et al., 2002; Okazaki et al., 2004). We asked two human judges to rate sentence orderings according to the following criteria. A perfect summary is a text that we cannot improve any further by re-ordering. An acceptable summary is one that makes sense and is unnecessary to revise even though there is some room for improvement in terms of readability. 
A poor summary is one that loses a thread of the story at some places and requires minor amendment to bring it up to an acceptable level. An unacceptable summary is one that leaves much to be improved and requires overall restructuring rather than partial revision. To avoid any disturbance in rating, we inform the judges that the summaries were made from a same set of extracted sentences and only the ordering of sentences is different. Figure 6 shows the distribution of the subjective grading made by two judges to four sets of orderings, RND, CHR, AGL and HUM. Each set of or390 Teval = (e ≻a ≻b ≻c ≻d) Tref = (a ≻b ≻c ≻d ≻e) Figure 7: An example of an ordering under evaluation Teval and its reference Tref. derings has 30(topics) × 2(judges) = 60 ratings. Most RND orderings are rated as unacceptable. Although CHR and AGL orderings have roughly the same number of perfect orderings (ca. 25%), the AGL algorithm gained more acceptable orderings (47%) than the CHR alghrotihm (30%). This fact shows that integration of CHR experts with other experts worked well by pushing poor ordering to an acceptable level. However, a huge gap between AGL and HUM orderings was also found. The judges rated 28% AGL orderings as perfect while the figure rose as high as 82% for HUM orderings. Kendall’s coefficient of concordance (Kendall’s W), which asses the inter-judge agreement of overall ratings, reported a higher agreement between the two judges (W = 0.939). 4.2 Metrics for semi-automatic evaluation We also evaluated sentence orderings by reusing two sets of gold-standard orderings made for the training data. In general, subjective grading consumes much time and effort, even though we cannot reproduce the evaluation afterwards. The previous studies (Barzilay et al., 2002; Lapata, 2003) employ rank correlation coefficients such as Spearman’s rank correlation and Kendall’s rank correlation, assuming a sentence ordering to be a rank. Okazaki et al. (2004) propose a metric that assess continuity of pairwise sentences compared with the gold standard. In addition to Spearman’s and Kendall’s rank correlation coefficients, we propose an average continuity metric, which extends the idea of the continuity metric to continuous k sentences. A text with sentences arranged in proper order does not interrupt a human’s reading while moving from one sentence to the next. Hence, the quality of a sentence ordering can be estimated by the number of continuous sentences that are also reproduced in the reference sentence ordering. This is equivalent to measuring a precision of continuous sentences in an ordering against the reference ordering. We define Pn to measure the precision of Table 2: Comparison with human-made ordering Method Spearman Kendall Average coefficient coefficient Continuity RND -0.127 -0.069 0.011 TOP 0.414 0.400 0.197 PRE 0.415 0.428 0.293 SUC 0.473 0.476 0.291 CHR 0.583 0.587 0.356 AGL 0.603 0.612 0.459 n continuous sentences in an ordering to be evaluated as, Pn = m N −n + 1. (12) Here, N is the number of sentences in the reference ordering; n is the length of continuous sentences on which we are evaluating; m is the number of continuous sentences that appear in both the evaluation and reference orderings. In Figure 7, the precision of 3 continuous sentences P3 is calculated as: P3 = 2 5 −3 + 1 = 0.67. (13) The Average Continuity (AC) is defined as the logarithmic average of Pn over 2 to k: AC = exp à 1 k −1 k X n=2 log(Pn + α) ! . 
(14) Here, k is a parameter to control the range of the logarithmic average; and α is a small value in case if Pn is zero. We set k = 4 (ie, more than five continuous sentences are not included for evaluation) and α = 0.01. Average Continuity becomes 0 when evaluation and reference orderings share no continuous sentences and 1 when the two orderings are identical. In Figure 7, Average Continuity is calculated as 0.63. The underlying idea of Formula 14 was proposed by Papineni et al. (2002) as the BLEU metric for the semi-automatic evaluation of machine-translation systems. The original definition of the BLEU metric is to compare a machine-translated text with its reference translation by using the word n-grams. 4.3 Results of semi-automatic evaluation Table 2 reports the resemblance of orderings produced by six algorithms to the human-made ones with three metrics, Spearman’s rank correlation, Kendall’s rank correlation, and Average Continuity. The proposed method (AGL) outperforms the 391 0.0 0.1 0.2 0.3 0.4 0.5 0.6 0.7 0.8 AGL CHR SUC PRE TOP RND 8 7 6 5 4 3 2 Precision Pn Length n Figure 8: Precision vs unit of measuring continuity. rest in all evaluation metrics, although the chronological ordering (CHR) appeared to play the major role. The one-way analysis of variance (ANOVA) verified the effects of different algorithms for sentence orderings with all metrics (p < 0.01). We performed Tukey Honest Significant Differences (HSD) test to compare differences among these algorithms. The Tukey test revealed that AGL was significantly better than the rest. Even though we could not compare our experiment with the probabilistic approach (Lapata, 2003) directly due to the difference of the text corpora, the Kendall coefficient reported higher agreement than Lapata’s experiment (Kendall=0.48 with lemmatized nouns and Kendall=0.56 with verb-noun dependencies). Figure 8 shows precision Pn with different length values of continuous sentence n for the six methods compared in Table 2. The number of continuous sentences becomes sparse for a higher value of length n. Therefore, the precision values decrease as the length n increases. Although RND ordering reported some continuous sentences for lower n values, no continuous sentences could be observed for the higher n values. Four criteria described in Section 3 (ie, CHR, TOP, PRE, SUC) produce segments of continuous sentences at all values of n. 5 Conclusion We present a bottom-up approach to arrange sentences extracted for multi-document summarization. Our experimental results showed a significant improvement over existing sentence ordering strategies. However, the results also implied that chronological ordering played the major role in arranging sentences. A future direction of this study would be to explore the application of the proposed framework to more generic texts, such as documents without chronological information. Acknowledgment We used Mainichi Shinbun and Yomiuri Shinbun newspaper articles, and the TSC-3 test collection. References Regina Barzilay and Lillian Lee. 2004. Catching the drift: Probabilistic content models, with applications to generation and summarization. In HLT-NAACL 2004: Proceedings of the Main Conference, pages 113–120. Regina Barzilay, Noemie Elhadad, and Kathleen McKeown. 2002. Inferring strategies for sentence ordering in multidocument news summarization. Journal of Artificial Intelligence Research, 17:35–55. Mirella Lapata. 2003. Probabilistic text structuring: Experiments with sentence ordering. 
Proceedings of the 41st Annual Meeting of the Association for Computational Linguistics, pages 545–552.

C.Y. Lin and E. Hovy. 2001. NeATS: A multidocument summarizer. In Proceedings of the Document Understanding Workshop (DUC).

W. Mann and S. Thompson. 1988. Rhetorical structure theory: Toward a functional theory of text organization. Text, 8:243–281.

Daniel Marcu. 1997. From local to global coherence: A bottom-up approach to text planning. In Proceedings of the 14th National Conference on Artificial Intelligence, pages 629–635, Providence, Rhode Island.

Kathleen McKeown, Judith Klavans, Vasileios Hatzivassiloglou, Regina Barzilay, and Eleazar Eskin. 1999. Towards multidocument summarization by reformulation: Progress and prospects. In AAAI/IAAI, pages 453–460.

Naoaki Okazaki, Yutaka Matsuo, and Mitsuru Ishizuka. 2004. Improving chronological sentence ordering by precedence relation. In Proceedings of the 20th International Conference on Computational Linguistics (COLING 2004), pages 750–756.

Kishore Papineni, Salim Roukos, Todd Ward, and Wei-Jing Zhu. 2002. BLEU: A method for automatic evaluation of machine translation. In Proceedings of the 40th Annual Meeting of the Association for Computational Linguistics (ACL), pages 311–318.

Dragomir R. Radev and Kathy McKeown. 1999. Generating natural language summaries from multiple on-line sources. Computational Linguistics, 24:469–500.

V. Vapnik. 1998. Statistical Learning Theory. Wiley, Chichester, GB.
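A minimal Python sketch of the Average Continuity metric defined in Section 4.2 above (not the authors' implementation). It assumes an ordering is represented as a list of sentence identifiers; the settings k = 4 and α = 0.01 follow the text, and the function names are illustrative.

import math

def precision_n(evaluated, reference, n):
    # P_n: number of length-n continuous runs shared by both orderings,
    # divided by N - n + 1 (Formula 12).
    N = len(reference)
    ref_runs = {tuple(reference[i:i + n]) for i in range(N - n + 1)}
    eval_runs = {tuple(evaluated[i:i + n]) for i in range(len(evaluated) - n + 1)}
    m = len(ref_runs & eval_runs)
    return m / (N - n + 1)

def average_continuity(evaluated, reference, k=4, alpha=0.01):
    # AC = exp( (1/(k-1)) * sum_{n=2..k} log(P_n + alpha) )  (Formula 14).
    log_sum = sum(math.log(precision_n(evaluated, reference, n) + alpha)
                  for n in range(2, k + 1))
    return math.exp(log_sum / (k - 1))

# The orderings of Figure 7: T_eval = (e, a, b, c, d), T_ref = (a, b, c, d, e).
t_eval = ["e", "a", "b", "c", "d"]
t_ref = ["a", "b", "c", "d", "e"]
print(precision_n(t_eval, t_ref, 3))      # 2/3, as in Formula 13
print(average_continuity(t_eval, t_ref))  # ~0.64 with alpha = 0.01; ~0.63 as alpha tends to 0

The smoothing term α slightly inflates the value relative to the 0.63 quoted in the text, which corresponds to a negligible α.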
Proceedings of the 21st International Conference on Computational Linguistics and 44th Annual Meeting of the ACL, pages 33–40, Sydney, July 2006. c⃝2006 Association for Computational Linguistics Bootstrapping Path-Based Pronoun Resolution Shane Bergsma Department of Computing Science University of Alberta Edmonton, Alberta, Canada, T6G 2E8 [email protected] Dekang Lin Google, Inc. 1600 Amphitheatre Parkway, Mountain View, California, 94301 [email protected] Abstract We present an approach to pronoun resolution based on syntactic paths. Through a simple bootstrapping procedure, we learn the likelihood of coreference between a pronoun and a candidate noun based on the path in the parse tree between the two entities. This path information enables us to handle previously challenging resolution instances, and also robustly addresses traditional syntactic coreference constraints. Highly coreferent paths also allow mining of precise probabilistic gender/number information. We combine statistical knowledge with well known features in a Support Vector Machine pronoun resolution classifier. Significant gains in performance are observed on several datasets. 1 Introduction Pronoun resolution is a difficult but vital part of the overall coreference resolution task. In each of the following sentences, a pronoun resolution system must determine what the pronoun his refers to: (1) John needs his friend. (2) John needs his support. In (1), John and his corefer. In (2), his refers to some other, perhaps previously evoked entity. Traditional pronoun resolution systems are not designed to distinguish between these cases. They lack the specific world knowledge required in the second instance – the knowledge that a person does not usually explicitly need his own support. We collect statistical path-coreference information from a large, automatically-parsed corpus to address this limitation. A dependency path is defined as the sequence of dependency links between two potentially coreferent entities in a parse tree. A path does not include the terminal entities; for example, “John needs his support” and “He needs their support” have the same syntactic path. Our algorithm determines that the dependency path linking the Noun and pronoun is very likely to connect coreferent entities for the path “Noun needs pronoun’s friend,” while it is rarely coreferent for the path “Noun needs pronoun’s support.” This likelihood can be learned by simply counting how often we see a given path in text with an initial Noun and a final pronoun that are from the same/different gender/number classes. Cases such as “John needs her support” or “They need his support” are much more frequent in text than cases where the subject noun and pronoun terminals agree in gender/number. When there is agreement, the terminal nouns are likely to be coreferent. When they disagree, they refer to different entities. After a sufficient number of occurrences of agreement or disagreement, there is a strong statistical indication of whether the path is coreferent (terminal nouns tend to refer to the same entity) or non-coreferent (nouns refer to different entities). We show that including path coreference information enables significant performance gains on three third-person pronoun resolution experiments. We also show that coreferent paths can provide the seed information for bootstrapping other, even more important information, such as the gender/number of noun phrases. 
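As an illustration of the dependency-path representation described in the introduction, here is a small Python sketch (not the authors' code). It assumes a dependency tree encoded as child -> (head, relation) links and returns a terminal-free path in which the two end words are replaced by Noun/pronoun placeholders; the tree encoding and the linearization are simplifications made for the example.

def ancestors(node, heads):
    # Chain [node, parent, ..., root] in a child -> (head, relation) map.
    chain = [node]
    while node in heads:
        node = heads[node][0]
        chain.append(node)
    return chain

def path_signature(noun, pronoun, heads, words):
    # Walk from each terminal to their lowest common ancestor and join the
    # nodes along the way, replacing the terminals by placeholders so that
    # "John needs his support" and "He needs their support" coincide.
    up = ancestors(noun, heads)
    down = ancestors(pronoun, heads)
    down_set = set(down)
    common = next(n for n in up if n in down_set)
    left = up[:up.index(common) + 1]
    right = list(reversed(down[:down.index(common)]))
    labels = {n: words[n] for n in left + right}
    labels[noun], labels[pronoun] = "Noun", "pronoun"
    return " ".join(labels[n] for n in left + right)

# "John needs his support": "needs" is the root, "John" (subj) and
# "support" (obj) attach to it, and "his" (gen) attaches to "support".
words = {0: "John", 1: "needs", 2: "his", 3: "support"}
heads = {0: (1, "subj"), 3: (1, "obj"), 2: (3, "gen")}
print(path_signature(0, 2, heads, words))  # "Noun needs support pronoun"

The output is a tree-order rendering of the path that the paper writes in surface order as "Noun needs pronoun's support".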
2 Related Work Coreference resolution is generally conducted as a pairwise classification task, using various constraints and preferences to determine whether two 33 expressions corefer. Coreference is typically only allowed between nouns matching in gender and number, and not violating any intrasentential syntactic principles. Constraints can be applied as a preprocessing step to scoring candidates based on distance, grammatical role, etc., with scores developed either manually (Lappin and Leass, 1994), or through a machine-learning algorithm (Kehler et al., 2004). Constraints and preferences have also been applied together as decision nodes on a decision tree (Aone and Bennett, 1995). When previous resolution systems handle cases like (1) and (2), where no disagreement or syntactic violation occurs, coreference is therefore determined by the weighting of features or learned decisions of the resolution classifier. Without path coreference knowledge, a resolution process would resolve the pronouns in (1) and (2) the same way. Indeed, coreference resolution research has focused on the importance of the strategy for combining well known constraints and preferences (Mitkov, 1997; Ng and Cardie, 2002), devoting little attention to the development of new features for these difficult cases. The application of world knowledge to pronoun resolution has been limited to the semantic compatibility between a candidate noun and the pronoun’s context (Yang et al., 2005). We show semantic compatibility can be effectively combined with path coreference information in our experiments below. Our method for determining path coreference is similar to an algorithm for discovering paraphrases in text (Lin and Pantel, 2001). In that work, the beginning and end nodes in the paths are collected, and two paths are said to be similar (and thus likely paraphrases of each other) if they have similar terminals (i.e. the paths occur with a similar distribution). Our work does not need to store the terminals themselves, only whether they are from the same pronoun group. Different paths are not compared in any way; each path is individually assigned a coreference likelihood. 3 Path Coreference We define a dependency path as the sequence of nodes and dependency labels between two potentially coreferent entities in a dependency parse tree. We use the structure induced by the minimalist parser Minipar (Lin, 1998) on sentences from the news corpus described in Section 4. Figure 1 gives the parse tree of (2). As a short-form, we John needs his support subj gen obj Figure 1: Example dependency tree. write the dependency path in this case as “Noun needs pronoun’s support.” The path itself does not include the terminal nouns “John” and “his.” Our algorithm finds the likelihood of coreference along dependency paths by counting the number of times they occur with terminals that are either likely coreferent or non-coreferent. In the simplest version, we count paths with terminals that are both pronouns. We partition pronouns into seven groups of matching gender, number, and person; for example, the first person singular group contains I, me, my, mine, and myself. If the two terminal pronouns are from the same group, coreference along the path is likely. If they are from different groups, like I and his, then they are non-coreferent. Let NS(p) be the number of times the two terminal pronouns of a path, p, are from the same pronoun group, and let ND(p) be the number of times they are from different groups. 
We define the coreference of p as: C(p) = NS(p) NS(p) + ND(p) Our statistics indicate the example path, “Noun needs pronoun’s support,” has a low C(p) value. We could use this fact to prevent us from resolving “his” to “John” when “John needs his support” is presented to a pronoun resolution system. To mitigate data sparsity, we represent the path with the root form of the verbs and nouns. Also, we use Minipar’s named-entity recognition to replace named-entity nouns by the semantic category of their named-entity, when available. All modifiers not on the direct path, such as adjectives, determiners and adverbs, are not considered. We limit the maximum path length to eight nodes. Tables 1 and 2 give examples of coreferent and non-coreferent paths learned by our algorithm and identified in our test sets. Coreferent paths are defined as paths with a C(p) value (and overall number of occurrences) above a certain threshold, indicating the terminal entities are highly likely 34 Table 1: Example coreferent paths: Italicized entities generally corefer. Pattern Example 1. Noun left ... to pronoun’s wife Buffett will leave the stock to his wife. 2. Noun says pronoun intends... The newspaper says it intends to file a lawsuit. 3. Noun was punished for pronoun’s crime. The criminal was punished for his crime. 4. ... left Noun to fend for pronoun-self They left Jane to fend for herself. 5. Noun lost pronoun’s job. Dick lost his job. 6. ... created Noun and populated pronoun. Nzame created the earth and populated it 7. Noun consolidated pronoun’s power. The revolutionaries consolidated their power. 8. Noun suffered ... in pronoun’s knee ligament. The leopard suffered pain in its knee ligament. to corefer. Non-coreferent paths have a C(p) below a certain cutoff; the terminals are highly unlikely to corefer. Especially note the challenge of resolving most of the examples in Table 2 without path coreference information. Although these paths encompass some cases previously covered by Binding Theory (e.g. “Mary suspended her,” her cannot refer to Mary by Principle B (Haegeman, 1994)), most have no syntactic justification for non-coreference per se. Likewise, although Binding Theory (Principle A) could identify the reflexive pronominal relationship of Example 4 in Table 1, most cases cannot be resolved through syntax alone. Our analysis shows that successfully handling cases that may have been handled with Binding Theory constitutes only a small portion of the total performance gain using path coreference. In any case, Binding Theory remains a challenge with a noisy parser. Consider: “Alex gave her money.” Minipar parses her as a possessive, when it is more likely an object, “Alex gave money to her.” Without a correct parse, we cannot rule out the link between her and Alex through Binding Theory. Our algorithm, however, learns that the path “Noun gave pronoun’s money,” is noncoreferent. In a sense, it corrects for parser errors by learning when coreference should be blocked, given any consistent parse of the sentence. We obtain path coreference for millions of paths from our parsed news corpus (Section 4). While Tables 1 and 2 give test set examples, many other interesting paths are obtained. We learn coreference is unlikely between the nouns in “Bob married his mother,” or “Sue wrote her obituary.” The fact you don’t marry your own mother or write your own obituary is perhaps obvious, but this is the first time this kind of knowledge has been made available computationally. 
Naturally, exceptions to the coreference or non-coreference of some of these paths can be found; our patterns represent general trends only. And, as mentioned above, reliable path coreference is somewhat dependent on consistent parsing. Paths connecting pronouns to pronouns are different than paths connecting both nouns and pronouns to pronouns – the case we are ultimately interested in resolving. Consider “Company A gave its data on its website.” The pronoun-pronoun path coreference algorithm described above would learn the terminals in “Noun’s data on pronoun’s website” are often coreferent. But if we see the phrase “Company A gave Company B’s data on its website,” then “its” is not likely to refer to “Company B,” even though we identified this as a coreferent path! We address this problem with a two-stage extraction procedure. We first bootstrap gender/number information using the pronounpronoun paths as described in Section 4.1. We then use this gender/number information to count paths where an initial noun (with probabilisticallyassigned gender/number) and following pronoun are connected by the dependency path, recording the agreement or disagreement of their gender/number category.1 These superior paths are then used to re-bootstrap our final gender/number information used in the evaluation (Section 6). We also bootstrap paths where the nodes in the path are replaced by their grammatical category. This allows us to learn general syntactic constraints not dependent on the surface forms of the words (including, but not limited to, the Binding Theory principles). A separate set of these noncoreferent paths is also used as a feature in our sys1As desired, this modification allows the first example to provide two instances of noun-pronoun paths with terminals from the same gender/number group, linking each “its” to the subject noun “Company A”, rather than to each other. 35 Table 2: Example non-coreferent paths: Italicized entities do not generally corefer Pattern Example 1. Noun thanked ... for pronoun’s assistance John thanked him for his assistance. 2. Noun wanted pronoun to lie. The president wanted her to lie. 3. ... Noun into pronoun’s pool Max put the floaties into their pool. 4. ... use Noun to pronoun’s advantage The company used the delay to its advantage. 5. Noun suspended pronoun Mary suspended her. 6. Noun was pronoun’s relative. The Smiths were their relatives. 7. Noun met pronoun’s demands The players’ association met its demands. 8. ... put Noun at the top of pronoun’s list. The government put safety at the top of its list. tem. We also tried expanding our coverage by using paths similar to paths with known path coreference (based on distributionally similar words), but this did not generally increase performance. 4 Bootstrapping in Pronoun Resolution Our determination of path coreference can be considered a bootstrapping procedure. Furthermore, the coreferent paths themselves can serve as the seed for bootstrapping additional coreference information. In this section, we sketch previous approaches to bootstrapping in coreference resolution and explain our new ideas. Coreference bootstrapping works by assuming resolutions in unlabelled text, acquiring information from the putative resolutions, and then making inferences from the aggregate statistical data. For example, we assumed two pronouns from the same pronoun group were coreferent, and deduced path coreference from the accumulated counts. 
The potential of the bootstrapping approach can best be appreciated by imagining millions of documents with coreference annotations. With such a set, we could extract fine-grained features, perhaps tied to individual words or paths. For example, we could estimate the likelihood each noun belongs to a particular gender/number class by the proportion of times this noun was labelled as the antecedent for a pronoun of this particular gender/number. Since no such corpus exists, researchers have used coarser features learned from smaller sets through supervised learning (Soon et al., 2001; Ng and Cardie, 2002), manually-defined coreference patterns to mine specific kinds of data (Bean and Riloff, 2004; Bergsma, 2005), or accepted the noise inherent in unsupervised schemes (Ge et al., 1998; Cherry and Bergsma, 2005). We address the drawbacks of these approaches Table 3: Gender classification performance (%) Classifier F-Score Bergsma (2005) Corpus-based 85.4 Bergsma (2005) Web-based 90.4 Bergsma (2005) Combined 92.2 Duplicated Corpus-based 88.0 Coreferent Path-based 90.3 by using coreferent paths as the assumed resolutions in the bootstrapping. Because we can vary the threshold for defining a coreferent path, we can trade-off coverage for precision. We now outline two potential uses of bootstrapping with coreferent paths: learning gender/number information (Section 4.1) and augmenting a semantic compatibility model (Section 4.2). We bootstrap this data on our automatically-parsed news corpus. The corpus comprises 85 GB of news articles taken from the world wide web over a 1-year period. 4.1 Probabilistic Gender/Number Bergsma (2005) learns noun gender (and number) from two principal sources: 1) mining it from manually-defined lexico-syntactic patterns in parsed corpora, and 2) acquiring it on the fly by counting the number of pages returned for various gender-indicating patterns by the Google search engine. The web-based approach outperformed the corpus-based approach, while a system that combined the two sets of information resulted in the highest performance (Table 3). The combined gender-classifying system is a machine-learned classifier with 20 features. The time delay of using an Internet search engine within a large-scale anaphora resolution effort is currently impractical. Thus we attempted 36 Table 4: Example gender/number probability (%) Word masc fem neut plur company 0.6 0.1 98.1 1.2 condoleeza rice 4.0 92.7 0.0 3.2 pat 58.3 30.6 6.2 4.9 president 94.1 3.0 1.5 1.4 wife 9.9 83.3 0.8 6.1 to duplicate Bergsma’s corpus-based extraction of gender and number, where the information can be stored in advance in a table, but using a much larger data set. Bergsma ran his extraction on roughly 6 GB of text; we used roughly 85 GB. Using the test set from Bergsma (2005), we were only able to boost performance from an FScore of 85.4% to one of 88.0% (Table 3). This result led us to re-examine the high performance of Bergsma’s web-based approach. We realized that the corpus-based and web-based approaches are not exactly symmetric. The corpus-based approaches, for example, would not pick out gender from a pattern such as “John and his friends...” because “Noun and pronoun’s NP” is not one of the manually-defined gender extraction patterns. The web-based approach, however, would catch this instance with the “John * his/her/its/their” template, where “*” is the Google wild-card operator. 
Clearly, there are patterns useful for capturing gender and number information beyond the predefined set used in the corpus-based extraction. We thus decided to capture gender/number information from coreferent paths. If a noun is connected to a pronoun of a particular gender along a coreferent path, we count this as an instance of that noun being that gender. In the end, the probability that the noun is a particular gender is the proportion of times it was connected to a pronoun of that gender along a coreferent path. Gender information becomes a single intuitive, accessible feature (i.e. the probability of the noun being that gender) rather than Bergsma’s 20-dimensional feature vector requiring search-engine queries to instantiate. We acquire gender and number data for over 3 million nouns. We use add-one smoothing for data sparsity. Some example gender/number probabilities are given in Table 4 (cf. (Ge et al., 1998; Cherry and Bergsma, 2005)). We get a performance of 90.3% (Table 3), again meeting our requirements of high performance and allowing for a fast, practical implementation. This is lower than Bergsma’s top score of 92.2% (Figure 3), but again, Bergsma’s top system relies on Google search queries for each new word, while ours are all pre-stored in a table for fast access. We are pleased to be able to share our gender and number data with the NLP community.2 In Section 6, we show the benefit of this data as a probabilistic feature in our pronoun resolution system. Probabilistic data is useful because it allows us to rapidly prototype resolution systems without incurring the overhead of large-scale lexical databases such as WordNet (Miller et al., 1990). 4.2 Semantic Compatibility Researchers since Dagan and Itai (1990) have variously argued for and against the utility of collocation statistics between nouns and parents for improving the performance of pronoun resolution. For example, can the verb parent of a pronoun be used to select antecedents that satisfy the verb’s selectional restrictions? If the verb phrase was shatter it, we would expect it to refer to some kind of brittle entity. Like path coreference, semantic compatibility can be considered a form of world knowledge needed for more challenging pronoun resolution instances. We encode the semantic compatibility between a noun and its parse tree parent (and grammatical relationship with the parent) using mutual information (MI) (Church and Hanks, 1989). Suppose we are determining whether ham is a suitable antecedent for the pronoun it in eat it. We calculate the MI as: MI(eat:obj, ham) = log Pr(eat:obj:ham) Pr(eat:obj)Pr(ham) Although semantic compatibility is usually only computed for possessive-noun, subject-verb, and verb-object relationships, we include 121 different kinds of syntactic relationships as parsed in our news corpus.3 We collected 4.88 billion parent:rel:node triples, including over 327 million possessive-noun values, 1.29 billion subject-verb and 877 million verb-direct object. We use small probability values for unseen Pr(parent:rel:node), Pr(parent:rel), and Pr(node) cases, as well as a default MI when no relationship is parsed, roughly optimized for performance on the training set. We 2Available at http://www.cs.ualberta.ca/˜bergsma/Gender/ 3We convert prepositions to relationships to enhance our model’s semantics, e.g. 
Joan:of:Arc rather than Joan:prep:of 37 include both the MI between the noun and the pronoun’s parent as well as the MI between the pronoun and the noun’s parent as features in our pronoun resolution classifier. Kehler et al. (2004) saw no apparent gain from using semantic compatibility information, while Yang et al. (2005) saw about a 3% improvement with compatibility data acquired by searching on the world wide web. Section 6 analyzes the contribution of MI to our system. Bean and Riloff (2004) used bootstrapping to extend their semantic compatibility model, which they called contextual-role knowledge, by identifying certain cases of easily-resolved anaphors and antecedents. They give the example “Mr. Bush disclosed the policy by reading it.” Once we identify that it and policy are coreferent, we include read:obj:policy as part of the compatibility model. Rather than using manually-defined heuristics to bootstrap additional semantic compatibility information, we wanted to enhance our MI statistics automatically with coreferent paths. Consider the phrase, “Saddam’s wife got a Jordanian lawyer for her husband.” It is unlikely we would see “wife’s husband” in text; in other words, we would not know that husband:gen:wife is, in fact, semantically compatible and thereby we would discourage selection of “wife” as the antecedent at resolution time. However, because “Noun gets ... for pronoun’s husband” is a coreferent path, we could capture the above relationship by adding a parent:rel:node for every pronoun connected to a noun phrase along a coreferent path in text. We developed context models with and without these path enhancements, but ultimately we could find no subset of coreferent paths that improve the semantic compatibility’s contribution to training set accuracy. A mutual information model trained on 85 GB of text is fairly robust on its own, and any kind of bootstrapped extension seems to cause more damage by increased noise than can be compensated by increased coverage. Although we like knowing audiences have noses, e.g. “the audience turned up its nose at the performance,” such phrases are apparently quite rare in actual test sets. 5 Experimental Design The noun-pronoun path coreference can be used directly as a feature in a pronoun resolution system. However, path coreference is undefined for cases where there is no path between the pronoun and the candidate noun – for example, when the candidate is in the previous sentence. Therefore, rather than using path coreference directly, we have features that are true if C(p) is above or below certain thresholds. The features are thus set when coreference between the pronoun and candidate noun is likely (a coreferent path) or unlikely (a non-coreferent path). We now evaluate the utility of path coreference within a state-of-the-art machine-learned resolution system for third-person pronouns with nominal antecedents. A standard set of features is used along with the bootstrapped gender/number, semantic compatibility, and path coreference information. We refer to these features as our “probabilistic features” (Prob. Features) and run experiments using the full system trained and tested with each absent, in turn (Table 5). We have 29 features in total, including measures of candidate distance, frequency, grammatical role, and different kinds of parallelism between the pronoun and the candidate noun. Several reliable features are used as hard constraints, removing candidates before consideration by the scoring algorithm. 
All of the parsing, noun-phrase identification, and named-entity recognition are done automatically with Minipar. Candidate antecedents are considered in the current and previous sentence only. We use SVMlight (Joachims, 1999) to learn a linear-kernel classifier on pairwise examples in the training set. When resolving pronouns, we select the candidate with the farthest positive distance from the SVM classification hyperplane. Our training set is the anaphora-annotated portion of the American National Corpus (ANC) used in Bergsma (2005), containing 1270 anaphoric pronouns4. We test on the ANC Test set (1291 instances) also used in Bergsma (2005) (highest resolution accuracy reported: 73.3%), the anaphoralabelled portion of AQUAINT used in Cherry and Bergsma (2005) (1078 instances, highest accuracy: 71.4%), and the anaphoric pronoun subset of the MUC7 (1997) coreference evaluation formal test set (169 instances, highest precision of 62.1 reported on all pronouns in (Ng and Cardie, 2002)). These particular corpora were chosen so we could test our approach using the same data as comparable machine-learned systems exploiting probabilistic information sources. Parameters 4See http://www.cs.ualberta.ca/˜bergsma/CorefTags/ for instructions on acquiring annotations 38 Table 5: Resolution accuracy (%) Dataset ANC AQT MUC 1 Previous noun 36.7 34.5 30.8 2 No Prob. Features 58.1 60.9 49.7 3 No Prob. Gender 65.8 71.0 68.6 4 No MI 71.3 73.5 69.2 5 No C(p) 72.3 73.7 69.8 6 Full System 73.9 75.0 71.6 7 Upper Bound 93.2 92.3 91.1 were set using cross-validation on the training set; test sets were used only once to obtain the final performance values. Evaluation Metric: We report results in terms of accuracy: Of all the anaphoric pronouns in the test set, the proportion we resolve correctly. 6 Results and Discussion We compare the accuracy of various configurations of our system on the ANC, AQT and MUC datasets (Table 5). We include the score from picking the noun immediately preceding the pronoun (after our hard filters are applied). Due to the hard filters and limited search window, it is not possible for our system to resolve every noun to a correct antecedent. We thus provide the performance upper bound (i.e. the proportion of cases with a correct answer in the filtered candidate list). On ANC and AQT, each of the probabilistic features results in a statistically significant gain in performance over a model trained and tested with that feature absent.5 On the smaller MUC set, none of the differences in 3-6 are statistically significant, however, the relative contribution of the various features remains reassuringly constant. Aside from missing antecedents due to the hard filters, the main sources of error include inaccurate statistical data and a classifier bias toward preceding pronouns of the same gender/number. It would be interesting to see whether performance could be improved by adding WordNet and web-mined features. Path coreference itself could conceivably be determined with a search engine. Gender is our most powerful probabilistic feature. In fact, inspecting our system’s decisions, gender often rules out coreference regardless of path coreference. This is not surprising, since we based the acquisition of C(p) on gender. That is, 5We calculate significance with McNemar’s test, p=0.05. 
0.7 0.75 0.8 0.85 0.9 0.95 1 0 0.1 0.2 0.3 0.4 0.5 0.6 0.7 0.8 0.9 1 Recall Precision Top-1 0.7 0.75 0.8 0.85 0.9 0.95 1 0 0.1 0.2 0.3 0.4 0.5 0.6 0.7 0.8 0.9 1 Recall Precision Top-2 0.7 0.75 0.8 0.85 0.9 0.95 1 0 0.1 0.2 0.3 0.4 0.5 0.6 0.7 0.8 0.9 1 Recall Precision Top-3 Figure 2: ANC pronoun resolution accuracy for varying SVM-thresholds. our bootstrapping assumption was that the majority of times these paths occur, gender indicates coreference or lack thereof. Thus when they occur in our test sets, gender should often sufficiently indicate coreference. Improving the orthogonality of our features remains a future challenge. Nevertheless, note the decrease in performance on each of the datasets when C(p) is excluded (#5). This is compelling evidence that path coreference is valuable in its own right, beyond its ability to bootstrap extensive and reliable gender data. Finally, we can add ourselves to the camp of people claiming semantic compatibility is useful for pronoun resolution. Both the MI from the pronoun in the antecedent’s context and vice-versa result in improvement. Building a model from enough text may be the key. The primary goal of our evaluation was to assess the benefit of path coreference within a competitive pronoun resolution system. Our system does, however, outperform previously published results on these datasets. Direct comparison of our scoring system to other current top approaches is made difficult by differences in preprocessing. Ideally we would assess the benefit of our probabilistic features using the same state-of-the-art preprocessing modules employed by others such as (Yang et al., 2005) (who additionally use a search engine for compatibility scoring). Clearly, promoting competitive evaluation of pronoun resolution scoring systems by giving competitors equivalent real-world preprocessing output along the lines of (Barbu and Mitkov, 2001) remains the best way to isolate areas for system improvement. Our pronoun resolution system is part of a larger information retrieval project where resolution ac39 curacy is not necessarily the most pertinent measure of classifier performance. More than one candidate can be useful in ambiguous cases, and not every resolution need be used. Since the SVM ranks antecedent candidates, we can test this ranking by selecting more than the top candidate (Topn) and evaluating coverage of the true antecedents. We can also resolve only those instances where the most likely candidate is above a certain distance from the SVM threshold. Varying this distance varies the precision-recall (PR) of the overall resolution. A representative PR curve for the Top-n classifiers is provided (Figure 2). The corresponding information retrieval performance can now be evaluated along the Top-n / PR configurations. 7 Conclusion We have introduced a novel feature for pronoun resolution called path coreference, and demonstrated its significant contribution to a state-of-theart pronoun resolution system. This feature aids coreference decisions in many situations not handled by traditional coreference systems. Also, by bootstrapping with the coreferent paths, we are able to build the most complete and accurate table of probabilistic gender information yet available. 
Preliminary experiments show path coreference bootstrapping can also provide a means of identifying pleonastic pronouns, where pleonastic neutral pronouns are often followed in a dependency path by a terminal noun of different gender, and cataphoric constructions, where the pronouns are often followed by nouns of matching gender. References Chinatsu Aone and Scott William Bennett. 1995. Evaluating automated and manual acquisition of anaphora resolution strategies. In Proceedings of the 33rd Annual Meeting of the Association for Computational Linguistics, pages 122– 129. Catalina Barbu and Ruslan Mitkov. 2001. Evaluation tool for rule-based anaphora resolution methods. In Proceedings of the 39th Annual Meeting of the Association for Computational Linguistics, pages 34–41. David L. Bean and Ellen Riloff. 2004. Unsupervised learning of contextual role knowledge for coreference resolution. In HLT-NAACL, pages 297–304. Shane Bergsma. 2005. Automatic acquisition of gender information for anaphora resolution. In Proceedings of the Eighteenth Canadian Conference on Artificial Intelligence (Canadian AI’2005), pages 342–353. Colin Cherry and Shane Bergsma. 2005. An expectation maximization approach to pronoun resolution. In Proceedings of the Ninth Conference on Natural Language Learning (CoNLL-2005), pages 88–95. Kenneth Ward Church and Patrick Hanks. 1989. Word association norms, mutual information, and lexicography. In Proceedings of the 27th Annual Meeting of the Association for Computational Linguistics (ACL’89), pages 76–83. Ido Dagan and Alan Itai. 1990. Automatic processing of large corpora for the resolution of anaphora references. In Proceedings of the 13th International Conference on Computational Linguistics (COLING-90), volume 3, pages 330–332, Helsinki, Finland. Niyu Ge, John Hale, and Eugene Charniak. 1998. A statistical approach to anaphora resolution. In Proceedings of the Sixth Workshop on Very Large Corpora, pages 161–171. Liliane Haegeman. 1994. Introduction to Government & Binding theory: Second Edition. Basil Blackwell, Cambridge, UK. Thorsten Joachims. 1999. Making large-scale SVM learning practical. In B. Sch¨olkopf and C. Burges, editors, Advances in Kernel Methods. MIT-Press. Andrew Kehler, Douglas Appelt, Lara Taylor, and Aleksandr Simma. 2004. The (non)utility of predicate-argument frequencies for pronoun interpretation. In Proceedings of HLT/NAACL-04, pages 289–296. Shalom Lappin and Herbert J. Leass. 1994. An algorithm for pronominal anaphora resolution. Computational Linguistics, 20(4):535–561. Dekang Lin and Patrick Pantel. 2001. Discovery of inference rules for question answering. Natural Language Engineering, 7(4):343–360. Dekang Lin. 1998. Dependency-based evaluation of MINIPAR. In Proceedings of the Workshop on the Evaluation of Parsing Systems, First International Conference on Language Resources and Evaluation. George A. Miller, Richard Beckwith, Christiane Fellbaum, Derek Gross, and Katherine J. Miller. 1990. Introduction to WordNet: an on-line lexical database. International Journal of Lexicography, 3(4):235–244. Ruslan Mitkov. 1997. Factors in anaphora resolution: they are not the only things that matter. a case study based on two different approaches. In Proceedings of the ACL ’97 / EACL ’97 Workshop on Operational Factors in Practical, Robust Anaphora Resolution, pages 14–21. MUC-7. 1997. Coreference task definition (v3.0, 13 Jul 97). In Proceedings of the Seventh Message Understanding Conference (MUC-7). Vincent Ng and Claire Cardie. 2002. 
Improving machine learning approaches to coreference resolution. In Proceedings of the 40th Annual Meeting of the Association for Computational Linguistics, pages 104–111. Wee Meng Soon, Hwee Tou Ng, and Daniel Chung Yong Lim. 2001. A machine learning approach to coreference resolution of noun phrases. Computational Linguistics, 27(4):521–544. Xiaofeng Yang, Jian Su, and Chew Lim Tan. 2005. Improving pronoun resolution using statistics-based semantic compatibility information. In Proceedings of the 43rd Annual Meeting of the Association for Computational Linguistics (ACL’05), pages 165–172, June. 40
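A small Python sketch of the mutual-information compatibility score described in Section 4.2 above (not the authors' implementation): it estimates MI(parent:rel, node) = log( Pr(parent:rel:node) / (Pr(parent:rel) Pr(node)) ) from counts of parsed triples. The floor value standing in for the paper's unseen-case probabilities and the toy corpus are invented.

import math
from collections import Counter

def mutual_information(triples):
    # triples: list of (parent, rel, node) tuples from parsed text.
    # Returns a scoring function MI(parent, rel, node).
    total = len(triples)
    triple_c = Counter(triples)
    pair_c = Counter((p, r) for p, r, _ in triples)
    node_c = Counter(n for _, _, n in triples)

    def mi(parent, rel, node, floor=1e-7):
        p_triple = triple_c[(parent, rel, node)] / total or floor
        p_pair = pair_c[(parent, rel)] / total or floor
        p_node = node_c[node] / total or floor
        return math.log(p_triple / (p_pair * p_node))

    return mi

# Toy counts: "eat ham" is attested, "eat policy" is not.
corpus = ([("eat", "obj", "ham")] * 5 + [("eat", "obj", "bread")] * 5
          + [("read", "obj", "policy")] * 10)
mi = mutual_information(corpus)
print(mi("eat", "obj", "ham") > mi("eat", "obj", "policy"))  # True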
Proceedings of the 21st International Conference on Computational Linguistics and 44th Annual Meeting of the ACL, pages 393–400, Sydney, July 2006. c⃝2006 Association for Computational Linguistics Learning Event Durations from Event Descriptions Feng Pan, Rutu Mulkar, and Jerry R. Hobbs Information Sciences Institute (ISI), University of Southern California 4676 Admiralty Way, Marina del Rey, CA 90292, USA {pan, rutu, hobbs}@isi.edu Abstract We have constructed a corpus of news articles in which events are annotated for estimated bounds on their duration. Here we describe a method for measuring inter-annotator agreement for these event duration distributions. We then show that machine learning techniques applied to this data yield coarse-grained event duration information, considerably outperforming a baseline and approaching human performance. 1 Introduction Consider the sentence from a news article: George W. Bush met with Vladimir Putin in Moscow. How long was the meeting? Our first reaction to this question might be that we have no idea. But in fact we do have an idea. We know the meeting was longer than 10 seconds and less than a year. How much tighter can we get the bounds to be? Most people would say the meeting lasted between an hour and three days. There is much temporal information in text that has hitherto been largely unexploited, encoded in the descriptions of events and relying on our knowledge of the range of usual durations of types of events. This paper describes one part of an exploration into how this information can be captured automatically. Specifically, we have developed annotation guidelines to minimize discrepant judgments and annotated 58 articles, comprising 2288 events; we have developed a method for measuring inter-annotator agreement when the judgments are intervals on a scale; and we have shown that machine learning techniques applied to the annotated data considerably outperform a baseline and approach human performance. This research is potentially very important in applications in which the time course of events is to be extracted from news. For example, whether two events overlap or are in sequence often depends very much on their durations. If a war started yesterday, we can be pretty sure it is still going on today. If a hurricane started last year, we can be sure it is over by now. The corpus that we have annotated currently contains all the 48 non-Wall-Street-Journal (nonWSJ) news articles (a total of 2132 event instances), as well as 10 WSJ articles (156 event instances), from the TimeBank corpus annotated in TimeML (Pustejovky et al., 2003). The nonWSJ articles (mainly political and disaster news) include both print and broadcast news that are from a variety of news sources, such as ABC, AP, and VOA. In the corpus, every event to be annotated was already identified in TimeBank. Annotators were instructed to provide lower and upper bounds on the duration of the event, encompassing 80% of the possibilities, excluding anomalous cases, and taking the entire context of the article into account. For example, here is the graphical output of the annotations (3 annotators) for the “finished” event (underlined) in the sentence After the victim, Linda Sanders, 35, had finished her cleaning and was waiting for her clothes to dry,... 393 This graph shows that the first annotator believes that the event lasts for minutes whereas the second annotator believes it could only last for several seconds. 
The third annotator takes the event to range from a few seconds to a few minutes. A logarithmic scale is used for the output because of the intuition that the difference between 1 second and 20 seconds is significant, while the difference between 1 year 1 second and 1 year 20 seconds is negligible.

A preliminary exercise in annotation revealed about a dozen classes of systematic discrepancies among annotators' judgments. We thus developed guidelines to make annotators aware of these cases and to guide them in making the judgments. For example, many occurrences of verbs and other event descriptors refer to multiple events, especially but not exclusively if the subject or object of the verb is plural. In "Iraq has destroyed its long-range missiles", there is the time it takes to destroy one missile and the duration of the interval in which all the individual events are situated – the time it takes to destroy all its missiles. Initially, there were wide discrepancies because some annotators would annotate one value, others the other. Annotators are now instructed to make judgments on both values in this case. The use of the annotation guidelines resulted in about a 10% improvement in inter-annotator agreement (Pan et al., 2006), measured as described in Section 2. There is a residual of gross discrepancies in annotators' judgments that result from differences of opinion, for example, about how long a government policy is typically in effect. But the number of these discrepancies was surprisingly small.

The method and guidelines for annotation are described in much greater detail in (Pan et al., 2006). In the current paper, we focus on how inter-annotator agreement is measured (Section 2) and on the machine learning experiments (Sections 3-5). Because the annotated corpus is still fairly small, we cannot hope to learn to make the fine-grained judgments of event durations that are currently annotated in the corpus, but as we demonstrate, it is possible to learn useful coarse-grained judgments.

Although there has been much work on temporal anchoring and event ordering in text (Hitzeman et al., 1995; Mani and Wilson, 2000; Filatova and Hovy, 2001; Boguraev and Ando, 2005), to our knowledge, there has been no serious published empirical effort to model and learn vague and implicit duration information in natural language, such as the typical durations of events, and to perform reasoning over this information. (Cyc apparently has some fuzzy duration information, although it is not generally available; Rieger (1974) discusses the issue for less than a page; there has been work in fuzzy logic on representing and reasoning with imprecise durations (Godo and Vila, 1995; Fortemps, 1997), but these make no attempt to collect human judgments on such durations or learn to extract them automatically from texts.)

2 Inter-Annotator Agreement

Although the graphical output of the annotations enables us to visualize quickly the level of agreement among different annotators for each event, a quantitative measurement of the agreement is needed. The kappa statistic (Krippendorff, 1980; Carletta, 1996) has become the de facto standard to assess inter-annotator agreement. It is computed as:

κ = (P(A) − P(E)) / (1 − P(E))

P(A) is the observed agreement among the annotators, and P(E) is the expected agreement, which is the probability that the annotators agree by chance. In order to compute the kappa statistic for our task, we have to compute P(A) and P(E), but those computations are not straightforward.
P(A): What should count as agreement among annotators for our task? P(E): What is the probability that the annotators agree by chance for our task? 2.1 What Should Count as Agreement? Determining what should count as agreement is not only important for assessing inter-annotator agreement, but is also crucial for later evaluation of machine learning experiments. For example, for a given event with a known gold standard duration range from 1 hour to 4 hours, if a machine learning program outputs a duration of 3 hours to 5 hours, how should we evaluate this result? In the literature on the kappa statistic, most authors address only category data; some can handle more general data, such as data in interval scales or ratio scales. However, none of the techniques directly apply to our data, which are ranges of durations from a lower bound to an upper bound. 394 Figure 1: Overlap of Judgments of [10 minutes, 30 minutes] and [10 minutes, 2 hours]. In fact, what coders were instructed to annotate for a given event is not just a range, but a duration distribution for the event, where the area between the lower bound and the upper bound covers about 80% of the entire distribution area. Since it’s natural to assume the most likely duration for such distribution is its mean (average) duration, and the distribution flattens out toward the upper and lower bounds, we use the normal or Gaussian distribution to model our duration distributions. If the area between lower and upper bounds covers 80% of the entire distribution area, the bounds are each 1.28 standard deviations from the mean. Figure 1 shows the overlap in distributions for judgments of [10 minutes, 30 minutes] and [10 minutes, 2 hours], and the overlap or agreement is 0.508706. 2.2 Expected Agreement What is the probability that the annotators agree by chance for our task? The first quick response to this question may be 0, if we consider all the possible durations from 1 second to 1000 years or even positive infinity. However, not all the durations are equally possible. As in (Krippendorff, 1980), we assume there exists one global distribution for our task (i.e., the duration ranges for all the events), and “chance” annotations would be consistent with this distribution. Thus, the baseline will be an annotator who knows the global distribution and annotates in accordance with it, but does not read the specific article being annotated. Therefore, we must compute the global distribution of the durations, in particular, of their means and their widths. This will be of interest not only in determining expected agreement, but also in terms of -5 0 5 10 15 20 25 30 0 20 40 60 80 100 120 140 160 180 Means of Annotated Durations Number of Annotated Durations Figure 2: Distribution of Means of Annotated Durations. what it says about the genre of news articles and about fuzzy judgments in general. We first compute the distribution of the means of all the annotated durations. Its histogram is shown in Figure 2, where the horizontal axis represents the mean values in the natural logarithmic scale and the vertical axis represents the number of annotated durations with that mean. There are two peaks in this distribution. One is from 5 to 7 in the natural logarithmic scale, which corresponds to about 1.5 minutes to 30 minutes. The other is from 14 to 17 in the natural logarithmic scale, which corresponds to about 8 days to 6 months. 
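A minimal sketch (not the authors' code) of the agreement computation of Section 2.1: an annotation's [lower, upper] bounds are modeled as a normal distribution over log(seconds) whose bounds lie 1.28 standard deviations from the mean, and the agreement between two judgments is the area under the minimum of the two densities. The unit constants and the numerical integration step are assumptions made for the example.

import math

def normal_from_bounds(lower_secs, upper_secs, z=1.28):
    # The [lower, upper] bounds cover 80% of the distribution's area,
    # i.e. they lie 1.28 standard deviations from the mean, in log space.
    lo, hi = math.log(lower_secs), math.log(upper_secs)
    return (lo + hi) / 2.0, (hi - lo) / (2.0 * z)

def pdf(x, mean, std):
    return math.exp(-((x - mean) ** 2) / (2 * std ** 2)) / (std * math.sqrt(2 * math.pi))

def overlap(d1, d2, step=0.001):
    # Area under min(pdf1, pdf2): the agreement between two judgments.
    lo = min(d1[0], d2[0]) - 6 * max(d1[1], d2[1])
    hi = max(d1[0], d2[0]) + 6 * max(d1[1], d2[1])
    x, area = lo, 0.0
    while x < hi:
        area += min(pdf(x, *d1), pdf(x, *d2)) * step
        x += step
    return area

# The example from Figure 1: [10 minutes, 30 minutes] vs. [10 minutes, 2 hours].
MIN, HOUR = 60, 3600
d1 = normal_from_bounds(10 * MIN, 30 * MIN)
d2 = normal_from_bounds(10 * MIN, 2 * HOUR)
print(round(overlap(d1, d2), 3))  # ~0.509, close to the 0.508706 reported in the text

def kappa(p_a, p_e):
    # kappa = (P(A) - P(E)) / (1 - P(E)), as defined in Section 2.
    return (p_a - p_e) / (1 - p_e)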
One could speculate that this bimodal distribution is because daily newspapers report short events that happened the day before and place them in the context of larger trends. We also compute the distribution of the widths (i.e., Xupper – Xlower) of all the annotated durations, and its histogram is shown in Figure 3, where the horizontal axis represents the width in the natural logarithmic scale and the vertical axis represents the number of annotated durations with that width. Note that it peaks at about a half order of magnitude (Hobbs and Kreinovich, 2001). Since the global distribution is determined by the above mean and width distributions, we can then compute the expected agreement, i.e., the probability that the annotators agree by chance, where the chance is actually based on this global distribution. Two different methods were used to compute the expected agreement (baseline), both yielding nearly equal results. These are described in detail in (Pan et al., 2006). For both, P(E) is about 0.15. 395 -5 0 5 10 15 20 25 0 50 100 150 200 250 300 350 400 Widths of Annotated Durations Number of Annotated Durations Figure 3: Distribution of Widths of Annotated Durations. 3 Features In this section, we describe the lexical, syntactic, and semantic features that we considered in learning event durations. 3.1 Local Context For a given event, the local context features include a window of n tokens to its left and n tokens to its right, as well as the event itself, for n = {0, 1, 2, 3}. The best n determined via cross validation turned out to be 0, i.e., the event itself with no local context. But we also present results for n = 2 in Section 4.3 to evaluate the utility of local context. A token can be a word or a punctuation mark. Punctuation marks are not removed, because they can be indicative features for learning event durations. For example, the quotation mark is a good indication of quoted reporting events, and the duration of such events most likely lasts for seconds or minutes, depending on the length of the quoted content. However, there are also cases where quotation marks are used for other purposes, such as emphasis of quoted words and titles of artistic works. For each token in the local context, including the event itself, three features are included: the original form of the token, its lemma (or root form), and its part-of-speech (POS) tag. The lemma of the token is extracted from parse trees generated by the CONTEX parser (Hermjakob and Mooney, 1997) which includes rich context information in parse trees, and the Brill tagger (Brill, 1992) is used for POS tagging. The context window doesn’t cross the boundaries of sentences. When there are not enough tokens on either side of the event within the window, “NULL” is used for the feature values. Features Original Lemma POS Event signed sign VBD 1token-after the the DT 2token-after plan plan NN 1token-before Friday Friday NNP 2token-before on on IN Table 1: Local context features for the “signed” event in sentence (1) with n = 2. The local context features extracted for the “signed” event in sentence (1) is shown in Table 1 (with a window size n = 2). The feature vector is [signed, sign, VBD, the, the, DT, plan, plan, NN, Friday, Friday, NNP, on, on, IN]. (1) The two presidents on Friday signed the plan. 3.2 Syntactic Relations The information in the event’s syntactic environment is very important in deciding the durations of events. 
For example, there is a difference in the durations of the “watch” events in the phrases “watch a movie” and “watch a bird fly”. For a given event, both the head of its subject and the head of its object are extracted from the parse trees generated by the CONTEX parser. Similarly to the local context features, for both the subject head and the object head, their original form, lemma, and POS tags are extracted as features. When there is no subject or object for an event, “NULL” is used for the feature values. For the “signed” event in sentence (1), the head of its subject is “presidents” and the head of its object is “plan”. The extracted syntactic relation features are shown in Table 2, and the feature vector is [presidents, president, NNS, plan, plan, NN]. 3.3 WordNet Hypernyms Events with the same hypernyms may have similar durations. For example, events “ask” and “talk” both have a direct WordNet (Miller, 1990) hypernym of “communicate”, and most of the time they do have very similar durations in the corpus. However, closely related events don’t always have the same direct hypernyms. For example, “see” has a direct hypernym of “perceive”, whereas “observe” needs two steps up through the hypernym hierarchy before reaching “perceive”. Such correlation between events may be lost if only the direct hypernyms of the words are extracted. 396 Features Original Lemma POS Subject presidents president NNS Object plan plan NN Table 2: Syntactic relation features for the “signed” event in sentence (1). Feature 1-hyper 2-hyper 3-hyper Event write communicate interact Subject corporate executive executive administrator Object idea content cognition Table 3: WordNet hypernym features for the event (“signed”), its subject (“presidents”), and its object (“plan”) in sentence (1). It is useful to extract the hypernyms not only for the event itself, but also for the subject and object of the event. For example, events related to a group of people or an organization usually last longer than those involving individuals, and the hypernyms can help distinguish such concepts. For example, “society” has a “group” hypernym (2 steps up in the hierarchy), and “school” has an “organization” hypernym (3 steps up). The direct hypernyms of nouns are always not general enough for such purpose, but a hypernym at too high a level can be too general to be useful. For our learning experiments, we extract the first 3 levels of hypernyms from WordNet. Hypernyms are only extracted for the events and their subjects and objects, not for the local context words. For each level of hypernyms in the hierarchy, it’s possible to have more than one hypernym, for example, “see” has two direct hypernyms, “perceive” and “comprehend”. For a given word, it may also have more than one sense in WordNet. In such cases, as in (Gildea and Jurafsky, 2002), we only take the first sense of the word and the first hypernym listed for each level of the hierarchy. A word disambiguation module might improve the learning performance. But since the features we need are the hypernyms, not the word sense itself, even if the first word sense is not the correct one, its hypernyms can still be good enough in many cases. For example, in one news article, the word “controller” refers to an air traffic controller, which corresponds to the second sense in WordNet, but its first sense (business controller) has the same hypernym of “person” (3 levels up) as the second sense (direct hypernym). 
Since we take the first 3 levels of hypernyms, the correct hypernym is still extracted. P(A) P(E) Kappa 0.528 0.740 0.877 0.500 0.755 Table 4: Inter-Annotator Agreement for Binary Event Durations. When there are less than 3 levels of hypernyms for a given word, its hypernym on the previous level is used. When there is no hypernym for a given word (e.g., “go”), the word itself will be used as its hypernyms. Since WordNet only provides hypernyms for nouns and verbs, “NULL” is used for the feature values for a word that is not a noun or a verb. For the “signed” event in sentence (1), the extracted WordNet hypernym features for the event (“signed”), its subject (“presidents”), and its object (“plan”) are shown in Table 3, and the feature vector is [write, communicate, interact, corporate_executive, executive, administrator, idea, content, cognition]. 4 Experiments The distribution of the means of the annotated durations in Figure 2 is bimodal, dividing the events into those that take less than a day and those that take more than a day. Thus, in our first machine learning experiment, we have tried to learn this coarse-grained event duration information as a binary classification task. 4.1 Inter-Annotator Agreement, Baseline, and Upper Bound Before evaluating the performance of different learning algorithms, the inter-annotator agreement, the baseline and the upper bound for the learning task are assessed first. Table 4 shows the inter-annotator agreement results among 3 annotators for binary event durations. The experiments were conducted on the same data sets as in (Pan et al., 2006). Two kappa values are reported with different ways of measuring expected agreement (P(E)), i.e., whether or not the annotators have prior knowledge of the global distribution of the task. The human agreement before reading the guidelines (0.877) is a good estimate of the upper bound performance for this binary classification task. The baseline for the learning task is always taking the most probable class. Since 59.0% of the total data is “long” events, the baseline performance is 59.0%. 397 Class Algor. Prec. Recall F-Score SVM 0.707 0.606 0.653 NB 0.567 0.768 0.652 Short C4.5 0.571 0.600 0.585 SVM 0.793 0.857 0.823 NB 0.834 0.665 0.740 Long C4.5 0.765 0.743 0.754 Table 5: Test Performance of Three Algorithms. 4.2 Data The original annotated data can be straightforwardly transformed for this binary classification task. For each event annotation, the most likely (mean) duration is calculated first by averaging (the logs of) its lower and upper bound durations. If its most likely (mean) duration is less than a day (about 11.4 in the natural logarithmic scale), it is assigned to the “short” event class, otherwise it is assigned to the “long” event class. (Note that these labels are strictly a convenience and not an analysis of the meanings of “short” and “long”.) We divide the total annotated non-WSJ data (2132 event instances) into two data sets: a training data set with 1705 event instances (about 80% of the total non-WSJ data) and a held-out test data set with 427 event instances (about 20% of the total non-WSJ data). The WSJ data (156 event instances) is kept for further test purposes (see Section 4.4). 4.3 Experimental Results (non-WSJ) Learning Algorithms. Three supervised learning algorithms were evaluated for our binary classification task, namely, Support Vector Machines (SVM) (Vapnik, 1995), Naïve Bayes (NB) (Duda and Hart, 1973), and Decision Trees C4.5 (Quinlan, 1993). 
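As a rough illustration of how the Section 3 features can be assembled and fed to the classifiers just listed, here is a compact sketch; scikit-learn is used purely as a stand-in (the actual experiments use the Weka package described below), and the training examples, their labels, and the hypernym lists are invented.

from sklearn.feature_extraction import DictVectorizer
from sklearn.svm import LinearSVC

def features(event, lemma, pos, subj, obj, hypernyms):
    # String-valued features; DictVectorizer turns them into binary features,
    # as done for SVM in the experiments.
    feats = {"event": event, "event_lemma": lemma, "event_pos": pos,
             "subj": subj or "NULL", "obj": obj or "NULL"}
    for i, h in enumerate(hypernyms[:3], start=1):
        feats[f"hyper{i}"] = h
    return feats

train = [
    (features("signed", "sign", "VBD", "presidents", "plan",
              ["write", "communicate", "interact"]), "short"),
    (features("war", "war", "NN", "country", None,
              ["military_action", "group_action", "act"]), "long"),
]

vec = DictVectorizer()
X = vec.fit_transform([f for f, _ in train])
y = [label for _, label in train]
clf = LinearSVC().fit(X, y)
print(clf.predict(vec.transform([train[0][0]])))  # label of the first training example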
The Weka (Witten and Frank, 2005) machine learning package was used for the implementation of these learning algorithms. Linear kernel is used for SVM in our experiments. Each event instance has a total of 18 feature values, as described in Section 3, for the event only condition, and 30 feature values for the local context condition when n = 2. For SVM and C4.5, all features are converted into binary features (6665 and 12502 features). Results. 10-fold cross validation was used to train the learning models, which were then tested on the unseen held-out test set, and the performance (including the precision, recall, and F-score1 1 F-score is computed as the harmonic mean of the precision and recall: F = (2*Prec*Rec)/(Prec+Rec). Algorithm Precision Baseline 59.0% C4.5 69.1% NB 70.3% SVM 76.6% Human Agreement 87.7% Table 6: Overall Test Precision on non-WSJ Data. for each class) of the three learning algorithms is shown in Table 5. The significant measure is overall precision, and this is shown for the three algorithms in Table 6, together with human agreement (the upper bound of the learning task) and the baseline. We can see that among all three learning algorithms, SVM achieves the best F-score for each class and also the best overall precision (76.6%). Compared with the baseline (59.0%) and human agreement (87.7%), this level of performance is very encouraging, especially as the learning is from such limited training data. Feature Evaluation. The best performing learning algorithm, SVM, was then used to examine the utility of combinations of four different feature sets (i.e., event, local context, syntactic, and WordNet hypernym features). The detailed comparison is shown in Table 7. We can see that most of the performance comes from event word or phrase itself. A significant improvement above that is due to the addition of information about the subject and object. Local context does not help and in fact may hurt, and hypernym information also does not seem to help2. It is of interest that the most important information is that from the predicate and arguments describing the event, as our linguistic intuitions would lead us to expect. 4.4 Test on WSJ Data Section 4.3 shows the experimental results with the learned model trained and tested on the data with the same genre, i.e., non-WSJ articles. In order to evaluate whether the learned model can perform well on data from different news genres, we tested it on the unseen WSJ data (156 event instances). The performance (including the precision, recall, and F-score for each class) is shown in Table 8. The precision (75.0%) is very close to the test performance on the non-WSJ 2 In the “Syn+Hyper” cases, the learning algorithm with and without local context gives identical results, probably because the other features dominate. 398 Event Only (n = 0) Event Only + Syntactic Event + Syn + Hyper Class Prec. Rec. F Prec. Rec. F Prec. Rec. F Short 0.742 0.465 0.571 0.758 0.587 0.662 0.707 0.606 0.653 Long 0.748 0.908 0.821 0.792 0.893 0.839 0.793 0.857 0.823 Overall Prec. 74.7% 78.2% 76.6% Local Context (n = 2) Context + Syntactic Context + Syn + Hyper Short 0.672 0.568 0.615 0.710 0.600 0.650 0.707 0.606 0.653 Long 0.774 0.842 0.806 0.791 0.860 0.824 0.793 0.857 0.823 Overall Prec. 74.2% 76.6% 76.6% Table 7: Feature Evaluation with Different Feature Sets using SVM. Class Prec. Rec. F Short 0.692 0.610 0.649 Long 0.779 0.835 0.806 Overall Prec. 75.0% Table 8: Test Performance on WSJ data. 
P(A) P(E) Kappa 0.151 0.762 0.798 0.143 0.764 Table 9: Inter-Annotator Agreement for Most Likely Temporal Unit. data, and indicates the significant generalization capacity of the learned model. 5 Learning the Most Likely Temporal Unit These encouraging results have prompted us to try to learn more fine-grained event duration information, viz., the most likely temporal units of event durations (cf. (Rieger 1974)’s ORDERHOURS, ORDERDAYS). For each original event annotation, we can obtain the most likely (mean) duration by averaging its lower and upper bound durations, and assigning it to one of seven classes (i.e., second, minute, hour, day, week, month, and year) based on the temporal unit of its most likely duration. However, human agreement on this more finegrained task is low (44.4%). Based on this observation, instead of evaluating the exact agreement between annotators, an “approximate agreement” is computed for the most likely temporal unit of events. In “approximate agreement”, temporal units are considered to match if they are the same temporal unit or an adjacent one. For example, “second” and “minute” match, but “minute” and “day” do not. Some preliminary experiments have been conducted for learning this multi-classification task. The same data sets as in the binary classification task were used. The only difference is that the class for each instance is now labeled with one Algorithm Precision Baseline 51.5% C4.5 56.4% NB 65.8% SVM 67.9% Human Agreement 79.8% Table 10: Overall Test Precisions. of the seven temporal unit classes. The baseline for this multi-classification task is always taking the temporal unit which with its two neighbors spans the greatest amount of data. Since the “week”, “month”, and “year” classes together take up largest portion (51.5%) of the data, the baseline is always taking the “month” class, where both “week” and “year” are also considered a match. Table 9 shows the interannotator agreement results for most likely temporal unit when using “approximate agreement”. Human agreement (the upper bound) for this learning task increases from 44.4% to 79.8%. 10-fold cross validation was also used to train the learning models, which were then tested on the unseen held-out test set. The performance of the three algorithms is shown in Table 10. The best performing learning algorithm is again SVM with 67.9% test precision. Compared with the baseline (51.5%) and human agreement (79.8%), this again is a very promising result, especially for a multi-classification task with such limited training data. It is reasonable to expect that when more annotated data becomes available, the learning algorithm will achieve higher performance when learning this and more fine-grained event duration information. Although the coarse-grained duration information may look too coarse to be useful, computers have no idea at all whether a meeting event takes seconds or centuries, so even coarse-grained estimates would give it a useful rough sense of how long each event may take. More fine-grained duration information is definitely more desirable for temporal reasoning tasks. But coarse-grained 399 durations to a level of temporal units can already be very useful. 6 Conclusion In the research described in this paper, we have addressed a problem -- extracting information about event durations encoded in event descriptions -- that has heretofore received very little attention in the field. 
It is information that can have a substantial impact on applications where the temporal placement of events is important. Moreover, it is representative of a set of problems – making use of the vague information in text – that has largely eluded empirical approaches in the past. In (Pan et al., 2006), we explicate the linguistic categories of the phenomena that give rise to grossly discrepant judgments among annotators, and give guidelines on resolving these discrepancies. In the present paper, we describe a method for measuring inter-annotator agreement when the judgments are intervals on a scale; this should extend from time to other scalar judgments. Inter-annotator agreement is too low on fine-grained judgments. However, for the coarse-grained judgments of more than or less than a day, and of approximate agreement on temporal unit, human agreement is acceptably high. For these cases, we have shown that machine-learning techniques achieve impressive results. Acknowledgments This work was supported by the Advanced Research and Development Activity (ARDA), now the Disruptive Technology Office (DTO), under DOD/DOI/ARDA Contract No. NBCHC040027. The authors have profited from discussions with Hoa Trang Dang, Donghui Feng, Kevin Knight, Daniel Marcu, James Pustejovsky, Deepak Ravichandran, and Nathan Sobo. References B. Boguraev and R. K. Ando. 2005. TimeMLCompliant Text Analysis for Temporal Reasoning. In Proceedings of International Joint Conference on Artificial Intelligence (IJCAI). E. Brill. 1992. A simple rule-based part of speech tagger. In Proceedings of the Third Conference on Applied Natural Language Processing. J. Carletta. 1996. Assessing agreement on classification tasks: the kappa statistic. Computational Lingustics, 22(2):249–254. R. O. Duda and P. E. Hart. 1973. Pattern Classification and Scene Analysis. Wiley, New York. E. Filatova and E. Hovy. 2001. Assigning TimeStamps to Event-Clauses. Proceedings of ACL Workshop on Temporal and Spatial Reasoning. P. Fortemps. 1997. Jobshop Scheduling with Imprecise Durations: A Fuzzy Approach. IEEE Transactions on Fuzzy Systems Vol. 5 No. 4. D. Gildea and D. Jurafsky. 2002. Automatic Labeling of Semantic Roles. Computational Linguistics, 28(3):245-288. L. Godo and L. Vila. 1995. Possibilistic Temporal Reasoning based on Fuzzy Temporal Constraints. In Proceedings of International Joint Conference on Artificial Intelligence (IJCAI). U. Hermjakob and R. J. Mooney. 1997. Learning Parse and Translation Decisions from Examples with Rich Context. In Proceedings of the 35th Annual Meeting of the Association for Computational Linguistics (ACL). J. Hitzeman, M. Moens, and C. Grover. 1995. Algorithms for Analyzing the Temporal Structure of Discourse. In Proceedings of EACL. Dublin, Ireland. J. R. Hobbs and V. Kreinovich. 2001. Optimal Choice of Granularity in Commonsense Estimation: Why Half Orders of Magnitude, In Proceedings of Joint 9th IFSA World Congress and 20th NAFIPS International Conference, Vacouver, British Columbia. K. Krippendorf. 1980. Content Analysis: An introduction to its methodology. Sage Publications. I. Mani and G. Wilson. 2000. Robust Temporal Processing of News. In Proceedings of the 38th Annual Meeting of the Association for Computational Linguistics (ACL). G. A. Miller. 1990. WordNet: an On-line Lexical Database. International Journal of Lexicography 3(4). F. Pan, R. Mulkar, and J. R. Hobbs. 2006. An Annotated Corpus of Typical Durations of Events. 
In Proceedings of the Fifth International Conference on Language Resources and Evaluation (LREC), Genoa, Italy. J. Pustejovsky, P. Hanks, R. Saurí, A. See, R. Gaizauskas, A. Setzer, D. Radev, B. Sundheim, D. Day, L. Ferro and M. Lazo. 2003. The timebank corpus. In Corpus Linguistics, Lancaster, U.K. J. R. Quinlan. 1993. C4.5: Programs for Machine Learning. Morgan Kaufmann, San Francisco. C. J. Rieger. 1974. Conceptual memory: A theory and computer program for processing and meaning content of natural language utterances. Stanford AIM-233. V. Vapnik. 1995. The Nature of Statistical Learning Theory. Springer-Verlag, New York. I. H. Witten and E. Frank. 2005. Data Mining: Practical machine learning tools and techniques, 2nd Edition, Morgan Kaufmann, San Francisco. 400
2006
50
Proceedings of the 21st International Conference on Computational Linguistics and 44th Annual Meeting of the ACL, pages 401–408, Sydney, July 2006. c⃝2006 Association for Computational Linguistics Automatic learning of textual entailments with cross-pair similarities Fabio Massimo Zanzotto DISCo University of Milano-Bicocca Milan, Italy [email protected] Alessandro Moschitti Department of Computer Science University of Rome “Tor Vergata” Rome, Italy [email protected] Abstract In this paper we define a novel similarity measure between examples of textual entailments and we use it as a kernel function in Support Vector Machines (SVMs). This allows us to automatically learn the rewrite rules that describe a non trivial set of entailment cases. The experiments with the data sets of the RTE 2005 challenge show an improvement of 4.4% over the state-of-the-art methods. 1 Introduction Recently, textual entailment recognition has been receiving a lot of attention. The main reason is that the understanding of the basic entailment processes will allow us to model more accurate semantic theories of natural languages (Chierchia and McConnell-Ginet, 2001) and design important applications (Dagan and Glickman, 2004), e.g., Question Answering and Information Extraction. However, previous work (e.g., (Zaenen et al., 2005)) suggests that determining whether or not a text T entails a hypothesis H is quite complex even when all the needed information is explicitly asserted. For example, the sentence T1: “At the end of the year, all solid companies pay dividends.” entails the hypothesis H1: “At the end of the year, all solid insurance companies pay dividends.” but it does not entail the hypothesis H2: “At the end of the year, all solid companies pay cash dividends.” Although these implications are uncontroversial, their automatic recognition is complex if we rely on models based on lexical distance (or similarity) between hypothesis and text, e.g., (Corley and Mihalcea, 2005). Indeed, according to such approaches, the hypotheses H1 and H2 are very similar and seem to be similarly related to T1. This suggests that we should study the properties and differences of such two examples (negative and positive) to derive more accurate entailment models. For example, if we consider the following entailment: T3 ⇒H3? T3 “All wild animals eat plants that have scientifically proven medicinal properties.” H3 “All wild mountain animals eat plants that have scientifically proven medicinal properties.” we note that T3 is structurally (and somehow lexically similar) to T1 and H3 is more similar to H1 than to H2. Thus, from T1 ⇒H1 we may extract rules to derive that T3 ⇒H3. The above example suggests that we should rely not only on a intra-pair similarity between T and H but also on a cross-pair similarity between two pairs (T ′, H′) and (T ′′, H′′). The latter similarity measure along with a set of annotated examples allows a learning algorithm to automatically derive syntactic and lexical rules that can solve complex entailment cases. In this paper, we define a new cross-pair similarity measure based on text and hypothesis syntactic trees and we use such similarity with traditional intra-pair similarities to define a novel semantic kernel function. We experimented with such kernel using Support Vector Machines (Vapnik, 1995) on the test tests of the Recognizing Textual Entailment (RTE) challenges (Dagan et al., 2005; Bar Haim et al., 2006). 
The comparative results show that (a) we have designed an effective way to automatically learn entailment rules from examples and (b) our approach is highly accurate and exceeds the accuracy of the current state-of-the-art 401 models (Glickman et al., 2005; Bayer et al., 2005) by about 4.4% (i.e. 63% vs. 58.6%) on the RTE 1 test set (Dagan et al., 2005). In the remainder of this paper, Sec. 2 illustrates the related work, Sec. 3 introduces the complexity of learning entailments from examples, Sec. 4 describes our models, Sec. 6 shows the experimental results and finally Sec. 7 derives the conclusions. 2 Related work Although the textual entailment recognition problem is not new, most of the automatic approaches have been proposed only recently. This has been mainly due to the RTE challenge events (Dagan et al., 2005; Bar Haim et al., 2006). In the following we report some of such researches. A first class of methods defines measures of the distance or similarity between T and H either assuming the independence between words (Corley and Mihalcea, 2005; Glickman et al., 2005) in a bag-of-word fashion or exploiting syntactic interpretations (Kouylekov and Magnini, 2005). A pair (T, H) is then in entailment when sim(T, H) > α. These approaches can hardly determine whether the entailment holds in the examples of the previous section. From the point of view of bag-of-word methods, the pairs (T1, H1) and (T1, H2) have both the same intra-pair similarity since the sentences of T1 and H1 as well as those of T1 and H2 differ by a noun, insurance and cash, respectively. At syntactic level, also, we cannot capture the required information as such nouns are both noun modifiers: insurance modifies companies and cash modifies dividends. A second class of methods can give a solution to the previous problem. These methods generally combine a similarity measure with a set of possible transformations T applied over syntactic and semantic interpretations. The entailment between T and H is detected when there is a transformation r ∈T so that sim(r(T), H) > α. These transformations are logical rules in (Bos and Markert, 2005) or sequences of allowed rewrite rules in (de Salvo Braz et al., 2005). The disadvantage is that such rules have to be manually designed. Moreover, they generally model better positive implications than negative ones and they do not consider errors in syntactic parsing and semantic analysis. 3 Challenges in learning from examples In the introductory section, we have shown that, to carry out automatic learning from examples, we need to define a cross-pair similarity measure. Its definition is not straightforward as it should detect whether two pairs (T ′, H′) and (T ′′, H′′) realize the same rewrite rules. This measure should consider pairs similar when: (1) T ′ and H′ are structurally similar to T ′′ and H′′, respectively and (2) the lexical relations within the pair (T ′, H′) are compatible with those in (T ′′, H′′). Typically, T and H show a certain degree of overlapping, thus, lexical relations (e.g., between the same words) determine word movements from T to H (or vice versa). This is important to model the syntactic/lexical similarity between example pairs. Indeed, if we encode such movements in the syntactic parse trees of texts and hypotheses, we can use interesting similarity measures defined for syntactic parsing, e.g., the tree kernel devised in (Collins and Duffy, 2002). 
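For readers unfamiliar with that kernel, here is a minimal self-contained sketch of the subset-tree kernel recursion. The tuple-based tree encoding, the toy trees and the default decay value are our own illustrative choices; the exact fragment weighting adopted in this paper is spelled out in Section 4.2.

def delta(n1, n2, lam=0.4):
    # Common fragments rooted at n1 and n2 (Collins and Duffy, 2002), with a
    # decay factor lam that down-weights larger fragments (weighting
    # conventions differ slightly across implementations).  A node is a pair
    # (label, children); a word is a node with an empty child list.
    lab1, ch1 = n1
    lab2, ch2 = n2
    if not ch1 or not ch2:
        return 0.0                                   # words are not fragment roots
    if lab1 != lab2 or [c[0] for c in ch1] != [c[0] for c in ch2]:
        return 0.0                                   # different productions share nothing
    prod = lam
    for c1, c2 in zip(ch1, ch2):
        prod *= 1.0 + delta(c1, c2, lam)
    return prod

def nodes(tree):
    yield tree
    for child in tree[1]:
        yield from nodes(child)

def tree_kernel(t1, t2, lam=0.4):
    return sum(delta(n1, n2, lam) for n1 in nodes(t1) for n2 in nodes(t2))

# simplified fragments loosely based on the trees in Fig. 1
t1 = ("S", [("NP", [("NNS", [("companies", [])])]),
            ("VP", [("VBP", [("pay", [])])])])
t2 = ("S", [("NP", [("NNS", [("animals", [])])]),
            ("VP", [("VBP", [("eat", [])])])])
print(tree_kernel(t1, t2))                           # counts the shared subtrees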
To consider structural and lexical relation similarity, we augment syntactic trees with placeholders which identify linked words. More in detail: - We detect links between words wt in T that are equal, similar, or semantically dependent on words wh in H. We call anchors the pairs (wt, wh) and we associate them with placeholders. For example, in Fig. 1, the placeholder 2” indicates the (companies,companies) anchor between T1 and H1. This allows us to derive the word movements between text and hypothesis. - We align the trees of the two texts T ′ and T ′′ as well as the tree of the two hypotheses H′ and H′′ by considering the word movements. We find a correct mapping between placeholders of the two hypothesis H′ and H′′ and apply it to the tree of H′′ to substitute its placeholders. The same mapping is used to substitute the placeholders in T ′′. This mapping should maximize the structural similarity between the four trees by considering that placeholders augment the node labels. Hence, the cross-pair similarity computation is reduced to the tree similarity computation. The above steps define an effective cross-pair similarity that can be applied to the example in Fig. 1: T1 and T3 share the subtree in bold starting with S →NP VP. The lexicals in T3 and H3 are quite different from those T1 and H1, but we can rely on the structural properties expressed by their bold subtrees. These are more similar to the subtrees of T1 and H1 than those of T1 and H2, respectively. Indeed, H1 and H3 share the production NP →DT JJ NN NNS while H2 and H3 do 402 T1 T3 S PP IN At NP 0 NP 0 DT the NN 0 end 0 PP IN of NP 1 DT the NN 1 year 1 , , NP 2 DT all JJ 2 solid 2’ NNS 2 companies 2” VP 3 VBP 3 pay 3 NP 4 NNS 4 dividends 4 S NP a DT All JJ a wild a’ NNS a animals a” VP b VBP b eat b NP c plants c ... properties H1 H3 S PP IN At NP 0 NP 0 DT the NN 0 end 0 PP IN of NP 1 DT the NN 1 year 1 , , NP 2 DT all JJ 2 solid 2’ NN insurance NNS 2 companies 2” VP 3 VBP 3 pay 3 NP 4 NNS 4 dividends 4 S NP a DT All JJ a wild a’ NN mountain NNS a animals a” VP b VBP b eat b NP c plants c ... properties H2 H3 S PP At ... year NP 2 DT all JJ 2 solid 2’ NNS 2 companies 2” VP 3 VBP 3 pay 3 NP 4 NN cash NNS 4 dividends 4 S NP a DT All JJ a wild a’ NN mountain NNS a animals a” VP b VBP b eat b NP c plants c ... properties Figure 1: Relations between (T1, H1), (T1, H2), and (T3, H3). not. Consequently, to decide if (T3,H3) is a valid entailment, we should rely on the decision made for (T1, H1). Note also that the dashed lines connecting placeholders of two texts (hypotheses) indicate structurally equivalent nodes. For instance, the dashed line between 3 and b links the main verbs both in the texts T1 and T3 and in the hypotheses H1 and H3. After substituting 3 with b and 2 with a, we can detect if T1 and T3 share the bold subtree S →NP 2 VP 3. As such subtree is shared also by H1 and H3, the words within the pair (T1, H1) are correlated similarly to the words in (T3, H3). The above example emphasizes that we need to derive the best mapping between placeholder sets. It can be obtained as follows: let A′ and A′′ be the placeholders of (T ′, H′) and (T ′′, H′′), respectively, without loss of generality, we consider |A′| ≥|A′′| and we align a subset of A′ to A′′. The best alignment is the one that maximizes the syntactic and lexical overlapping of the two subtrees induced by the aligned set of anchors. 
More precisely, let C be the set of all bijective mappings from a′ ⊆A′ : |a′| = |A′′| to A′′, an element c ∈C is a substitution function. We define as the best alignment the one determined by cmax = argmaxc∈C(KT (t(H′, c), t(H′′, i))+ KT (t(T ′, c), t(T ′′, i)) (1) where (a) t(S, c) returns the syntactic tree of the hypothesis (text) S with placeholders replaced by means of the substitution c, (b) i is the identity substitution and (c) KT (t1, t2) is a function that measures the similarity between the two trees t1 and t2 (for more details see Sec. 4.2). For example, the cmax between (T1, H1) and (T3, H3) is {( 2’ , a’ ), ( 2” , a” ), ( 3 , b ), ( 4 , c )}. 4 Similarity Models In this section we describe how anchors are found at the level of a single pair (T, H) (Sec. 4.1). The anchoring process gives the direct possibility of 403 implementing an inter-pair similarity that can be used as a baseline approach or in combination with the cross-pair similarity. This latter will be implemented with tree kernel functions over syntactic structures (Sec. 4.2). 4.1 Anchoring and Lexical Similarity The algorithm that we design to find the anchors is based on similarity functions between words or more complex expressions. Our approach is in line with many other researches (e.g., (Corley and Mihalcea, 2005; Glickman et al., 2005)). Given the set of content words (verbs, nouns, adjectives, and adverbs) WT and WH of the two sentences T and H, respectively, the set of anchors A ⊂WT ×WH is built using a similarity measure between two words simw(wt, wh). Each element wh ∈WH will be part of a pair (wt, wh) ∈A if: 1) simw(wt, wh) ̸= 0 2) simw(wt, wh) = maxw′ t∈WT simw(w′ t, wh) According to these properties, elements in WH can participate in more than one anchor and conversely more than one element in WH can be linked to a single element w ∈WT . The similarity simw(wt, wh) can be defined using different indicators and resources. First of all, two words are maximally similar if these have the same surface form wt = wh. Second, we can use one of the WordNet (Miller, 1995) similarities indicated with d(lw, lw′) (in line with what was done in (Corley and Mihalcea, 2005)) and different relation between words such as the lexical entailment between verbs (Ent) and derivationally relation between words (Der). Finally, we use the edit distance measure lev(wt, wh) to capture the similarity between words that are missed by the previous analysis for misspelling errors or for the lack of derivationally forms not coded in WordNet. As result, given the syntactic category cw ∈ {noun, verb, adjective, adverb} and the lemmatized form lw of a word w, the similarity measure between two words w and w′ is defined as follows: simw(w, w′) =                  1 if w = w′∨ lw = lw′ ∧cw = cw′ ∨ ((lw, cw), (lw′ , cw′ )) ∈Ent∨ ((lw, cw), (lw′ , cw′ )) ∈Der∨ lev(w, w′) = 1 d(lw, lw′ ) if cw = cw′ ∧d(lw, lw′ ) > 0.2 0 otherwise (2) It is worth noticing that, the above measure is not a pure similarity measure as it includes the entailment relation that does not represent synonymy or similarity between verbs. To emphasize the contribution of each used resource, in the experimental section, we will compare Eq. 2 with some versions that exclude some word relations. The above word similarity measure can be used to compute the similarity between T and H. 
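A rough sketch of this anchoring step follows. It implements only part of Eq. 2: WordNet path similarity stands in for the J&C measure, the Ent and Der relations are omitted, and ties between equally similar text words are broken arbitrarily; all function names and the tuple encoding of words are our own.

from nltk.corpus import wordnet as wn
from nltk.metrics.distance import edit_distance

def sim_w(w1, lemma1, pos1, w2, lemma2, pos2):
    # Simplified Eq. 2: 1 for surface/lemma matches and near-misspellings,
    # a WordNet similarity above 0.2 for words of the same category, else 0.
    if w1 == w2 or (lemma1 == lemma2 and pos1 == pos2) or edit_distance(w1, w2) == 1:
        return 1.0
    if pos1 == pos2 and pos1 in ("n", "v"):
        s1, s2 = wn.synsets(lemma1, pos1), wn.synsets(lemma2, pos2)
        if s1 and s2:
            d = s1[0].path_similarity(s2[0]) or 0.0
            if d > 0.2:
                return d
    return 0.0

def anchors(text_words, hyp_words):
    # Link every hypothesis word to its most similar content word of the text;
    # words are (surface, lemma, pos) triples, e.g. ("companies", "company", "n").
    A = []
    for wh in hyp_words:
        best, wt_best = 0.0, None
        for wt in text_words:
            s = sim_w(*wt, *wh)
            if s > best:
                best, wt_best = s, wt
        if wt_best is not None:
            A.append((wt_best, wh))
    return A

The anchor set produced this way feeds directly into the intra-pair measures defined next.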
In line with (Corley and Mihalcea, 2005), we define it as: s1(T, H) = X (wt,wh)∈A simw(wt, wh) × idf(wh) X wh∈WH idf(wh) (3) where idf(w) is the inverse document frequency of the word w. For sake of comparison, we consider also the corresponding more classical version that does not apply the inverse document frequency s2(T, H) = X (wt,wh)∈A simw(wt, wh)/|WH| (4) ¿From the above intra-pair similarities, s1 and s2, we can obtain the baseline cross-pair similarities based on only lexical information: Ki((T ′, H′), (T ′′, H′′)) = si(T ′, H′) × si(T ′′, H′′), (5) where i ∈{1, 2}. In the next section we define a novel cross-pair similarity that takes into account syntactic evidence by means of tree kernel functions. 4.2 Cross-pair syntactic kernels Section 3 has shown that to measure the syntactic similarity between two pairs, (T ′, H′) and (T ′′, H′′), we should capture the number of common subtrees between texts and hypotheses that share the same anchoring scheme. The best alignment between anchor sets, i.e. the best substitution cmax, can be found with Eq. 1. As the corresponding maximum quantifies the alignment degree, we could define a cross-pair similarity as follows: Ks((T ′, H′), (T ′′, H′′)) = max c∈C  KT (t(H′, c), t(H′′, i)) +KT (t(T ′, c), t(T ′′, i)  , (6) where as KT (t1, t2) we use the tree kernel function defined in (Collins and Duffy, 2002). This evaluates the number of subtrees shared by t1 and t2, thus defining an implicit substructure space. Formally, given a subtree space F = {f1, f2, . . . , f|F|}, the indicator function Ii(n) is equal to 1 if the target fi is rooted at node n and equal to 0 otherwise. A treekernel function over t1 and t2 is KT (t1, t2) = P n1∈Nt1 P n2∈Nt2 ∆(n1, n2), where Nt1 and Nt2 are the sets of the t1’s and t2’s nodes, respectively. In turn ∆(n1, n2) = P|F| i=1 λl(fi)Ii(n1)Ii(n2), 404 where 0 ≤λ ≤1 and l(fi) is the number of levels of the subtree fi. Thus λl(fi) assigns a lower weight to larger fragments. When λ = 1, ∆is equal to the number of common fragments rooted at nodes n1 and n2. As described in (Collins and Duffy, 2002), ∆can be computed in O(|Nt1| × |Nt2|). The KT function has been proven to be a valid kernel, i.e. its associated Gram matrix is positivesemidefinite. Some basic operations on kernel functions, e.g. the sum, are closed with respect to the set of valid kernels. Thus, if the maximum held such property, Eq. 6 would be a valid kernel and we could use it in kernel based machines like SVMs. Unfortunately, a counterexample illustrated in (Boughorbel et al., 2004) shows that the max function does not produce valid kernels in general. However, we observe that: (1) Ks((T ′, H′), (T ′′, H′′)) is a symmetric function since the set of transformation C are always computed with respect to the pair that has the largest anchor set; (2) in (Haasdonk, 2005), it is shown that when kernel functions are not positive semidefinite, SVMs still solve a data separation problem in pseudo Euclidean spaces. The drawback is that the solution may be only a local optimum. Therefore, we can experiment Eq. 6 with SVMs and observe if the empirical results are satisfactory. Section 6 shows that the solutions found by Eq. 6 produce accuracy higher than those evaluated on previous automatic textual entailment recognition approaches. 5 Refining cross-pair syntactic similarity In the previous section we have defined the intra and the cross pair similarity. 
The former does not show relevant implementation issues whereas the latter should be optimized to favor its applicability with SVMs. The Eq. 6 improvement depends on three factors: (1) its computation complexity; (2) a correct marking of tree nodes with placeholders; and, (3) the pruning of irrelevant information in large syntactic trees. 5.1 Controlling the computational cost The computational cost of cross-pair similarity between two tree pairs (Eq. 6) depends on the size of C. This is combinatorial in the size of A′ and A′′, i.e. |C| = (|A′| −|A′′|)!|A′′|! if |A′| ≥|A′′|. Thus we should keep the sizes of A′ and A′′ reasonably small. To reduce the number of placeholders, we consider the notion of chunk defined in (Abney, 1996), i.e., not recursive kernels of noun, verb, adjective, and adverb phrases. When placeholders are in a single chunk both in the text and hypothesis we assign them the same name. For example, Fig. 1 shows the placeholders 2’ and 2” that are substituted by the placeholder 2. The placeholder reduction procedure also gives the possibility of resolving the ambiguity still present in the anchor set A (see Sec. 4.1). A way to eliminate the ambiguous anchors is to select the ones that reduce the final number of placeholders. 5.2 Augmenting tree nodes with placeholders Anchors are mainly used to extract relevant syntactic subtrees between pairs of text and hypothesis. We also use them to characterize the syntactic information expressed by such subtrees. Indeed, Eq. 6 depends on the number of common subtrees between two pairs. Such subtrees are matched when they have the same node labels. Thus, to keep track of the argument movements, we augment the node labels with placeholders. The larger number of placeholders two hypotheses (texts) match the larger the number of their common substructures is (i.e. higher similarity). Thus, it is really important where placeholders are inserted. For example, the sentences in the pair (T1, H1) have related subjects 2 and related main verbs 3. The same occurs in the sentences of the pair (T3, H3), respectively a and b . To obtain such node marking, the placeholders are propagated in the syntactic tree, from the leaves1 to the target nodes according to the head of constituents. The example of Fig. 1 shows that the placeholder 0 climbs up to the node governing all the NPs. 5.3 Pruning irrelevant information in large text trees Often only a portion of the parse trees is relevant to detect entailments. For instance, let us consider the following pair from the RTE 2005 corpus: 1To increase the generalization capacity of the tree kernel function we choose not to assign any placeholder to the leaves. 405 T ⇒H (id: 929) T “Ron Gainsford, chief executive of the TSI, said: ”It is a major concern to us that parents could be unwittingly exposing their children to the risk of sun damage, thinking they are better protected than they actually are.” H “Ron Gainsford is the chief executive of the TSI.” Only the bold part of T supports the implication; the rest is useless and also misleading: if we used it to compute the similarity it would reduce the importance of the relevant part. Moreover, as we normalize the syntactic tree kernel (KT ) with respect to the size of the two trees, we need to focus only on the part relevant to the implication. The anchored leaves are good indicators of relevant parts but also some other parts may be very relevant. For example, the function word not plays an important role. 
Another example is given by the word insurance in H1 and mountain in H3 (see Fig. 1). They support the implication T1 ⇒H1 and T1 ⇒H3 as well as cash supports T1 ⇏H2. By removing these words and the related structures, we cannot determine the correct implications of the first two and the incorrect implication of the second one. Thus, we keep all the words that are immediately related to relevant constituents. The reduction procedure can be formally expressed as follows: given a syntactic tree t, the set of its nodes N(t), and a set of anchors, we build a tree t′ with all the nodes N ′ that are anchors or ancestors of any anchor. Moreover, we add to t′ the leaf nodes of the original tree t that are direct children of the nodes in N ′. We apply such procedure only to the syntactic trees of texts before the computation of the kernel function. 6 Experimental investigation The aim of the experiments is twofold: we show that (a) entailment recognition rules can be learned from examples and (b) our kernel functions over syntactic structures are effective to derive syntactic properties. The above goals can be achieved by comparing the different intra and cross pair similarity measures. 6.1 Experimental settings For the experiments, we used the Recognizing Textual Entailment Challenge data sets, which we name as follows: - D1, T1 and D2, T2, are the development and the test sets of the first (Dagan et al., 2005) and second (Bar Haim et al., 2006) challenges, respectively. D1 contains 567 examples whereas T1, D2 and T2 have all the same size, i.e. 800 training/testing instances. The positive examples constitute the 50% of the data. - ALL is the union of D1, D2, and T1, which we also split in 70%-30%. This set is useful to test if we can learn entailments from the data prepared in the two different challenges. - D2(50%)′ and D2(50%)′′ is a random split of D2. It is possible that the data sets of the two competitions are quite different thus we created this homogeneous split. We also used the following resources: - The Charniak parser (Charniak, 2000) and the morpha lemmatiser (Minnen et al., 2001) to carry out the syntactic and morphological analysis. - WordNet 2.0 (Miller, 1995) to extract both the verbs in entailment, Ent set, and the derivationally related words, Der set. - The wn::similarity package (Pedersen et al., 2004) to compute the Jiang&Conrath (J&C) distance (Jiang and Conrath, 1997) as in (Corley and Mihalcea, 2005). This is one of the best figure method which provides a similarity score in the [0, 1] interval. We used it to implement the d(lw, lw′) function. - A selected portion of the British National Corpus2 to compute the inverse document frequency (idf). We assigned the maximum idf to words not found in the BNC. - SVM-light-TK3 (Moschitti, 2006) which encodes the basic tree kernel function, KT , in SVMlight (Joachims, 1999). We used such software to implement Ks (Eq. 6), K1, K2 (Eq. 5) and Ks + Ki kernels. The latter combines our new kernel with traditional approaches (i ∈{1, 2}). 6.2 Results and analysis Table 1 reports the results of different similarity kernels on the different training and test splits described in the previous section. The table is organized as follows: The first 5 rows (Experiment settings) report the intra-pair similarity measures defined in Section 4.1, the 6th row refers to only the idf similarity metric whereas the following two rows report the cross-pair similarity carried out with Eq. 
6 with (Synt Trees with placeholders) and without (Only Synt Trees) augmenting the trees with placeholders, respectively. Each column in the Experiment 2http://www.natcorp.ox.ac.uk/ 3SVM-light-TK is available at http://ai-nlp.info .uniroma2.it/moschitti/ 406 Experiment Settings w = w′ ∨lw = lw′ ∧cw = cw′ √ √ √ √ √ √ √ √ cw = cw′ ∧d(lw, lw′ ) > 0.2 √ √ √ √ √ √ ((lw, cw), (lw′ , cw′ )) ∈Der √ √ √ √ ((lw, cw), (lw′ , cw′ )) ∈Ent √ √ √ √ lev(w, w′) = 1 √ √ √ idf √ √ √ √ √ √ Only Synt Trees √ Synt Trees with placeholders √ Datasets “Train:D1-Test:T 1” 0.5388 0.5813 0.5500 0.5788 0.5900 0.5888 0.6213 0.6300 “Train:T 1-Test:D1” 0.5714 0.5538 0.5767 0.5450 0.5591 0.5644 0.5732 0.5838 “Train:D2(50%)′-Test:D2(50%)′′” 0.6034 0.5961 0.6083 0.6010 0.6083 0.6083 0.6156 0.6350 “Train:D2(50%)′′-Test:D2(50%)′” 0.6452 0.6375 0.6427 0.6350 0.6324 0.6272 0.5861 0.6607 “Train:D2-Test:T 2” 0.6000 0.5950 0.6025 0.6050 0.6050 0.6038 0.6238 0.6388 Mean 0.5918 0.5927 0.5960 0.5930 0.5990 0.5985 0.6040 0.6297 (± 0.0396 ) (± 0.0303 ) (± 0.0349 ) (± 0.0335 ) (± 0.0270 ) (± 0.0235 ) (± 0.0229 ) (± 0.0282 ) “Train:ALL(70%)-Test:ALL(30%)” 0.5902 0.6024 0.6009 0.6131 0.6193 0.6086 0.6376 “Train:ALL-Test:T 2” 0.5863 0.5975 0.5975 0.6038 0.6213 0.6250 Table 1: Experimental results of the different methods over different test settings settings indicates a different intra-pair similarity measure built by means of a combination of basic similarity approaches. These are specified with the check sign √. For example, Column 5 refers to a model using: the surface word form similarity, the d(lw, lw′) similarity and the idf. The next 5 rows show the accuracy on the data sets and splits used for the experiments and the next row reports the average and Std. Dev. over the previous 5 results. Finally, the last two rows report the accuracy on ALL dataset split in 70/30% and on the whole ALL dataset used for training and T2 for testing. ¿From the table we note the following aspects: - First, the lexical-based distance kernels K1 and K2 (Eq. 5) show accuracy significantly higher than the random baseline, i.e. 50%. In all the datasets (except for the first one), the simw(T, H) similarity based on the lexical overlap (first column) provides an accuracy essentially similar to the best lexical-based distance method. - Second, the dataset “Train:D1-Test:T1” allows us to compare our models with the ones of the first RTE challenge (Dagan et al., 2005). The accuracy reported for the best systems, i.e. 58.6% (Glickman et al., 2005; Bayer et al., 2005), is not significantly different from the result obtained with K1 that uses the idf. - Third, the dramatic improvement observed in (Corley and Mihalcea, 2005) on the dataset “Train:D1-Test:T1” is given by the idf rather than the use of the J&C similarity (second vs. third columns). The use of J&C with the idf decreases the accuracy of the idf alone. - Next, our approach (last column) is significantly better than all the other methods as it provides the best result for each combination of training and test sets. On the “Train:D1-Test:T1” test set, it exceeds the accuracy of the current state-of-theart models (Glickman et al., 2005; Bayer et al., 2005) by about 4.4 absolute percent points (63% vs. 58.6%) and 4% over our best lexical similarity measure. By comparing the average on all datasets, our system improves on all the methods by at least 3 absolute percent points. - Finally, the accuracy produced by Synt Trees with placeholders is higher than the one obtained with Only Synt Trees. 
Thus, the use of placeholders is fundamental to automatically learn entailments from examples. 6.2.1 Qualitative analysis Hereafter we show some instances selected from the first experiment “Train:T1-Test:D1”. They were correctly classified by our overall model (last column) and miss-classified by the models in the seventh and in the eighth columns. The first is an example in entailment: T ⇒H (id: 35) T “Saudi Arabia, the biggest oil producer in the world, was once a supporter of Osama bin Laden and his associates who led attacks against the United States.” H “Saudi Arabia is the world’s biggest oil exporter.” It was correctly classified by exploiting examples like these two: T ⇒H (id: 929) T “Ron Gainsford, chief executive of the TSI, said: ...” H “Ron Gainsford is the chief executive of the TSI.” T ⇒H (id: 976) T “Harvey Weinstein, the co-chairman of Miramax, who was instrumental in popularizing both independent and foreign films with broad audiences, agrees.” H “Harvey Weinstein is the co-chairman of Miramax.” 407 The rewrite rule is: ”X, Y, ...” implies ”X is Y”. This rule is also described in (Hearst, 1992). A more interesting rule relates the following two sentences which are not in entailment: T ⇏H (id: 2045) T “Mrs. Lane, who has been a Director since 1989, is Special Assistant to the Board of Trustees and to the President of Stanford University.” H “Mrs. Lane is the president of Stanford University.” It was correctly classified using instances like the following: T ⇏H (id: 2044) T “Jacqueline B. Wender is Assistant to the President of Stanford University.” H “Jacqueline B. Wender is the President of Stanford University.” T ⇏H (id: 2069) T “Grieving father Christopher Yavelow hopes to deliver one million letters to the queen of Holland to bring his children home.” H “Christopher Yavelow is the queen of Holland.” Here, the implicit rule is: ”X (VP (V ...) (NP (to Y) ...)” does not imply ”X is Y”. 7 Conclusions We have presented a model for the automatic learning of rewrite rules for textual entailments from examples. For this purpose, we devised a novel powerful kernel based on cross-pair similarities. We experimented with such kernel using Support Vector Machines on the RTE test sets. The results show that (1) learning entailments from positive and negative examples is a viable approach and (2) our model based on kernel methods is highly accurate and improves on the current state-of-the-art entailment systems. In the future, we would like to study approaches to improve the computational complexity of our kernel function and to design approximated versions that are valid Mercer’s kernels. References Steven Abney. 1996. Part-of-speech tagging and partial parsing. In G.Bloothooft K.Church, S.Young, editor, Corpusbased methods in language and speech. Kluwer academic publishers, Dordrecht. Roy Bar Haim, Ido Dagan, Bill Dolan, Lisa Ferro, Danilo Giampiccolo, Bernardo Magnini, and Idan Szpektor. 2006. The II PASCAL RTE challenge. In RTE Workshop, Venice, Italy. Samuel Bayer, John Burger, Lisa Ferro, John Henderson, and Alexander Yeh. 2005. MITRE’s submissions to the eu PASCAL RTE challenge. In Proceedings of the 1st RTE Workshop, Southampton, UK. Johan Bos and Katja Markert. 2005. Recognising textual entailment with logical inference. In Proc. of HLT-EMNLP Conference, Canada. S. Boughorbel, J-P. Tarel, and F. Fleuret. 2004. Non-mercer kernel for svm object recognition. In Proceedings of BMVC 2004. Eugene Charniak. 2000. A maximum-entropy-inspired parser. In Proc. 
of the 1st NAACL,Seattle, Washington. Gennaro Chierchia and Sally McConnell-Ginet. 2001. Meaning and Grammar: An introduction to Semantics. MIT press, Cambridge, MA. Michael Collins and Nigel Duffy. 2002. New ranking algorithms for parsing and tagging: Kernels over discrete structures, and the voted perceptron. In Proceedings of ACL02. Courtney Corley and Rada Mihalcea. 2005. Measuring the semantic similarity of texts. In Proc. of the ACL Workshop on Empirical Modeling of Semantic Equivalence and Entailment, Ann Arbor, Michigan. Ido Dagan and Oren Glickman. 2004. Probabilistic textual entailment: Generic applied modeling of language variability. In Proceedings of the Workshop on Learning Methods for Text Understanding and Mining, Grenoble, France. Ido Dagan, Oren Glickman, and Bernardo Magnini. 2005. The PASCAL RTE challenge. In RTE Workshop, Southampton, U.K. Rodrigo de Salvo Braz, Roxana Girju, Vasin Punyakanok, Dan Roth, and Mark Sammons. 2005. An inference model for semantic entailment in natural language. In Proc. of the RTE Workshop, Southampton, U.K. Oren Glickman, Ido Dagan, and Moshe Koppel. 2005. Web based probabilistic textual entailment. In Proceedings of the 1st RTE Workshop, Southampton, UK. Bernard Haasdonk. 2005. Feature space interpretation of SVMs with indefinite kernels. IEEE Trans Pattern Anal Mach Intell. Marti A. Hearst. 1992. Automatic acquisition of hyponyms from large text corpora. In Proc. of the 15th CoLing, Nantes, France. Jay J. Jiang and David W. Conrath. 1997. Semantic similarity based on corpus statistics and lexical taxonomy. In Proc. of the 10th ROCLING, Tapei, Taiwan. Thorsten Joachims. 1999. Making large-scale svm learning practical. In Advances in Kernel Methods-Support Vector Learning. MIT Press. Milen Kouylekov and Bernardo Magnini. 2005. Tree edit distance for textual entailment. In Proc. of the RANLP2005, Borovets, Bulgaria. George A. Miller. 1995. WordNet: A lexical database for English. Communications of the ACM, November. Guido Minnen, John Carroll, and Darren Pearce. 2001. Applied morphological processing of English. Natural Language Engineering. Alessandro Moschitti. 2006. Making tree kernels practical for natural language learning. In Proceedings of EACL’06, Trento, Italy. Ted Pedersen, Siddharth Patwardhan, and Jason Michelizzi. 2004. Wordnet::similarity - measuring the relatedness of concepts. In Proc. of 5th NAACL, Boston, MA. Vladimir Vapnik. 1995. The Nature of Statistical Learning Theory. Springer. Annie Zaenen, Lauri Karttunen, and Richard Crouch. 2005. Local textual inference: Can it be defined or circumscribed? In Proc. of the ACL Workshop on Empirical Modeling of Semantic Equivalence and Entailment, Ann Arbor, Michigan. 408
2006
51
Proceedings of the 21st International Conference on Computational Linguistics and 44th Annual Meeting of the ACL, pages 409–416, Sydney, July 2006. c⃝2006 Association for Computational Linguistics An Improved Redundancy Elimination Algorithm for Underspecified Representations Alexander Koller and Stefan Thater Dept. of Computational Linguistics Universität des Saarlandes, Saarbrücken, Germany {koller,stth}@coli.uni-sb.de Abstract We present an efficient algorithm for the redundancy elimination problem: Given an underspecified semantic representation (USR) of a scope ambiguity, compute an USR with fewer mutually equivalent readings. The algorithm operates on underspecified chart representations which are derived from dominance graphs; it can be applied to the USRs computed by large-scale grammars. We evaluate the algorithm on a corpus, and show that it reduces the degree of ambiguity significantly while taking negligible runtime. 1 Introduction Underspecification is nowadays the standard approach to dealing with scope ambiguities in computational semantics (van Deemter and Peters, 1996; Copestake et al., 2004; Egg et al., 2001; Blackburn and Bos, 2005). The basic idea behind it is to not enumerate all possible semantic representations for each syntactic analysis, but to derive a single compact underspecified representation (USR). This simplifies semantics construction, and current algorithms support the efficient enumeration of the individual semantic representations from an USR (Koller and Thater, 2005b). A major promise of underspecification is that it makes it possible, in principle, to rule out entire subsets of readings that we are not interested in wholesale, without even enumerating them. For instance, real-world sentences with scope ambiguities often have many readings that are semantically equivalent. Subsequent modules (e.g. for doing inference) will typically only be interested in one reading from each equivalence class, and all others could be deleted. This situation is illustrated by the following two (out of many) sentences from the Rondane treebank, which is distributed with the English Resource Grammar (ERG; Flickinger (2002)), a large-scale HPSG grammar of English. (1) For travellers going to Finnmark there is a bus service from Oslo to Alta through Sweden. (Rondane 1262) (2) We quickly put up the tents in the lee of a small hillside and cook for the first time in the open. (Rondane 892) For the annotated syntactic analysis of (1), the ERG derives an USR with eight scope bearing operators, which results in a total of 3960 readings. These readings are all semantically equivalent to each other. On the other hand, the USR for (2) has 480 readings, which fall into two classes of mutually equivalent readings, characterised by the relative scope of “the lee of” and “a small hillside.” In this paper, we present an algorithm for the redundancy elimination problem: Given an USR, compute an USR which has fewer readings, but still describes at least one representative of each equivalence class – without enumerating any readings. This algorithm makes it possible to compute the one or two representatives of the semantic equivalence classes in the examples, so subsequent modules don’t have to deal with all the other equivalent readings. It also closes the gap between the large number of readings predicted by the grammar and the intuitively perceived much lower degree of ambiguity of these sentences. 
Finally, it can be helpful for a grammar designer because it is much more feasible to check whether two readings are linguistically reasonable than 480. Our algorithm is applicable to arbitrary USRs (not just those computed by the ERG). While its effect is particularly significant on the ERG, which uniformly treats all kinds of noun phrases, including proper names and pronouns, as generalised quantifiers, it will generally help deal with spurious ambiguities (such as scope ambiguities between indef409 inites), which have been a ubiquitous problem in most theories of scope since Montague Grammar. We model equivalence in terms of rewrite rules that permute quantifiers without changing the semantics of the readings. The particular USRs we work with are underspecified chart representations, which can be computed from dominance graphs (or USRs in some other underspecification formalisms) efficiently (Koller and Thater, 2005b). We evaluate the performance of the algorithm on the Rondane treebank and show that it reduces the median number of readings from 56 to 4, by up to a factor of 666.240 for individual USRs, while running in negligible time. To our knowledge, our algorithm and its less powerful predecessor (Koller and Thater, 2006) are the first redundancy elimination algorithms in the literature that operate on the level of USRs. There has been previous research on enumerating only some representatives of each equivalence class (Vestre, 1991; Chaves, 2003), but these approaches don’t maintain underspecification: After running their algorithms, they are left with a set of readings rather than an underspecified representation, i.e. we could no longer run other algorithms on an USR. The paper is structured as follows. We will first define dominance graphs and review the necessary background theory in Section 2. We will then introduce our notion of equivalence in Section 3, and present the redundancy elimination algorithm in Section 4. In Section 5, we describe the evaluation of the algorithm on the Rondane corpus. Finally, Section 6 concludes and points to further work. 2 Dominance graphs The basic underspecification formalism we assume here is that of (labelled) dominance graphs (Althaus et al., 2003). Dominance graphs are equivalent to leaf-labelled normal dominance constraints (Egg et al., 2001), which have been discussed extensively in previous literature. Definition 1. A (compact) dominance graph is a directed graph (V,E ⊎D) with two kinds of edges, tree edges E and dominance edges D, such that: 1. The graph (V,E) defines a collection of node disjoint trees of height 0 or 1. We call the trees in (V,E) the fragments of the graph. 2. If (v,v′) is a dominance edge in D, then v is a hole and v′ is a root. A node v is a root if v does not have incoming tree edges; otherwise, v is a hole. A labelled dominance graph over a ranked signature Σ is a triple G = (V,E ⊎D,L) such that (V,E ⊎D) is a dominance graph and L : V ⇝Σ is a partial labelling function which assigns a node v a label with arity n iff v is a root with n outgoing tree edges. Nodes without labels (i.e. holes) must have outgoing dominance edges. We will write R(F) for the root of the fragment F, and we will typically just say “graph” instead of “labelled dominance graph”. An example of a labelled dominance graph is shown to the left of Fig. 1. Tree edges are drawn as solid lines, and dominance edges as dotted lines, directed from top to bottom. 
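Before turning to the example, a small sketch may help to fix the data structure behind Definition 1. The encoding (a dictionary mapping labelled roots to their ordered holes, plus a set of hole-to-root dominance edges) and all identifiers are our own, and the toy graph below is a three-fragment miniature, not the graph of Fig. 1.

from dataclasses import dataclass

@dataclass
class DominanceGraph:
    # Compact labelled dominance graph in the sense of Definition 1.
    labels: dict       # root -> (label, arity)
    tree_edges: dict   # root -> ordered list of its holes (trees of height <= 1)
    dom_edges: set     # (hole, root) pairs

    def holes(self):
        return {h for hs in self.tree_edges.values() for h in hs}

    def check(self):
        for root, hs in self.tree_edges.items():
            assert len(hs) == self.labels[root][1]         # arity = number of holes
        for h, r in self.dom_edges:
            assert h in self.holes() and r in self.labels  # holes dominate roots

# one two-hole fragment whose holes each dominate a leaf fragment
g = DominanceGraph(
    labels={"f1": ("f", 2), "f2": ("a", 0), "f3": ("b", 0)},
    tree_edges={"f1": ["h1", "h2"], "f2": [], "f3": []},
    dom_edges={("h1", "f2"), ("h2", "f3")},
)
g.check()

The graph of Fig. 1 consists of seven such fragments: three quantifier fragments with two holes each and four leaf fragments without holes.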
This graph can serve as an USR for the sentence “a representative of a company saw a sample” if we demand that the holes are “plugged” by roots while realising the dominance edges as dominance, as in the two configurations (of five) shown to the right. These configurations are trees that encode semantic representations of the sentence. We will freely read configurations as ground terms over the signature Σ. 2.1 Hypernormally connected graphs Throughout this paper, we will only consider hypernormally connected (hnc) dominance graphs. Hnc graphs are equivalent to chain-connected dominance constraints (Koller et al., 2003), and are closely related to dominance nets (Niehren and Thater, 2003). Fuchss et al. (2004) have presented a corpus study that strongly suggests that all dominance graphs that are generated by current largescale grammars are (or should be) hnc. Technically, a graph G is hypernormally connected iff each pair of nodes is connected by a simple hypernormal path in G. A hypernormal path (Althaus et al., 2003) in G is a path in the undirected version Gu of G that does not use two dominance edges that are incident to the same hole. Hnc graphs have a number of very useful structural properties on which this paper rests. One which is particularly relevant here is that we can predict in which way different fragments can dominate each other. Definition 2. Let G be a hnc dominance graph. A fragment F1 in G is called a possible dominator of another fragment F2 in G iff it has exactly one hole h which is connected to R(F2) by a simple hy410 ay sampley seex,y ax repr-ofx,z az compz 1 2 3 4 5 6 7 ay ax az 1 2 3 sampley seex,y repr-ofx,z compz ay ax sampley seex,y repr-ofx,z az compz 1 2 3 Figure 1: A dominance graph that represents the five readings of the sentence “a representative of a company saw a sample” (left) and two of its five configurations. {1,2,3,4,5,6,7} :⟨1,h1 7→{4},h2 7→{2,3,5,6,7}⟩ ⟨2,h3 7→{1,4,5},h4 7→{3,6,7}⟩ ⟨3,h5 7→{5},h6 7→{1,2,4,5,7}⟩ {2,3,5,6,7} :⟨2,h3 7→{5},h4 7→{3,6,7}⟩ ⟨3,h5 7→{6},h6 7→{2,5,7}⟩ {3,6,7} :⟨3,h5 7→{6},h6 7→{7}⟩ {2,5,7} :⟨2,h3 7→{5},h4 7→{7}⟩ {1,4,5} :⟨1,h1 7→{4},h2 7→{5}⟩ {1,2,4,5,7} :⟨1,h1 7→{4},h2 7→{2,5,7}⟩ ⟨2,h3 7→{1,4,5},h4 7→{7}⟩ Figure 2: The chart for the graph in Fig. 1. pernormal path which doesn’t use R(F1). We write ch(F1,F2) for this unique h. Lemma 1 (Koller and Thater (2006)). Let F1, F2 be fragments in a hnc dominance graph G. If there is a configurationC of G in which R(F1) dominates R(F2), then F1 is a possible dominator of F2, and in particular ch(F1,F2) dominates R(F2) in C. By applying this rather abstract result, we can derive a number of interesting facts about the example graph in Fig. 1. The fragments 1, 2, and 3 are possible dominators of all other fragments (and of each other), while the fragments 4 through 7 aren’t possible dominators of anything (they have no holes); so 4 through 7 must be leaves in any configuration of the graph. In addition, if fragment 2 dominates fragment 3 in any configuration, then in particular the right hole of 2 will dominate the root of 3; and so on. 2.2 Dominance charts Below we will not work with dominance graphs directly. Rather, we will use dominance charts (Koller and Thater, 2005b) as our USRs: they are more explicit USRs, which support a more finegrained deletion of reading sets than graphs. A dominance chart for the graph G is a mapping of weakly connected subgraphs of G to sets of splits (see Fig. 2), which describe possible ways of constructing configurations of the subgraph. 
A subgraph G′ is assigned one split for each fragment F in G′ which can be at the root of a configuration of G′. If the graph is hnc, removing F from the graph splits G′ into a set of weakly connected components (wccs), each of which is connected to exactly one hole of F. We also record the wccs, and the hole to which each wcc belongs, in the split. In order to compute all configurations represented by a split, we can first compute recursively the configurations of each component; then we plug each combination of these subconfigurations into the appropriate holes of the root fragment. We define the configurations associated with a subgraph as the union over its splits, and those of the entire chart as the configurations associated with the complete graph. Fig. 2 shows the dominance chart corresponding to the graph in Fig. 1. The chart represents exactly the configuration set of the graph, and is minimal in the sense that every subgraph and every split in the chart can be used in constructing some configuration. Such charts can be computed efficiently (Koller and Thater, 2005b) from a dominance graph, and can also be used to compute the configurations of a graph efficiently. The example chart expresses that three fragments can be at the root of a configuration of the complete graph: 1, 2, and 3. The entry for the split with root fragment 2 tells us that removing 2 splits the graph into the subgraphs {1,4,5} and {3,6,7} (see Fig. 3). If we configure these two subgraphs recursively, we obtain the configurations shown in the third column of Fig. 3; we can then plug these sub-configurations into the appropriate holes of 2 and obtain a configuration for the entire graph. Notice that charts can be exponentially larger than the original graph, but they are still exponentially smaller than the entire set of readings because common subgraphs (such as the graph {2,5,7} in the example) are represented only once, 411 1 2 3 4 5 6 7 h2 h1 h4 h3 h6 h5 1 3 4 5 6 7 h2 h1 h6 h5 → → 1 3 4 5 6 7 2 1 3 4 5 6 7 → Figure 3: Extracting a configuration from a chart. and are small in practice (see (Koller and Thater, 2005b) for an analysis). Thus the chart can still serve as an underspecified representation. 3 Equivalence Now let’s define equivalence of readings more precisely. Equivalence of semantic representations is traditionally defined as the relation between formulas (say, of first-order logic) which have the same interpretation. However, even first-order equivalence is an undecidable problem, and broadcoverage semantic representations such as those computed by the ERG usually have no welldefined model-theoretic semantics and therefore no concept of semantic equivalence. On the other hand, we do not need to solve the full semantic equivalence problem, as we only want to compare formulas that are readings of the same sentence, i.e. different configurations of the same USR. Such formulas only differ in the way that the fragments are combined. We can therefore approximate equivalence by using a rewrite system that permutes fragments and defining equivalence of configurations as mutual rewritability as usual. By way of example, consider again the two configurations shown in Fig. 1. We can obtain the second configuration from the (semantically equivalent) first one by applying the following rewrite rule, which rotates the fragments 1 and 2: ax(az(P,Q),R) →az(P,ax(Q,R)) (3) Thus we take these two configurations to be equivalent with respect to the rewrite rule. 
(We could also have argued that the second configuration can be rewritten into the first by using the inverted rule.) We formalise this rewriting-based notion of equivalence as follows. The definition uses the abbreviation x[1,k) for the sequence x1,...,xk−1, and x(k,n] for xk+1,...,xn. Definition 3. A permutation system R is a system of rewrite rules over the signature Σ of the following form: f1(x[1,i), f2(y[1,k),z,y(k,m]),x(i,n]) → f2(y[1,k), f1(x[1,i),z,x(i,n]),y(k,m]) The permutability relation P(R) is the binary relation P(R) ⊆(Σ × N)2 which contains exactly the tuples ((f1,i),(f2,k)) and ((f2,k),(f1,i)) for each such rewrite rule. Two terms are equivalent with respect to R, s ≈R t, iff there is a sequence of rewrite steps and inverse rewrite steps that rewrite s into t. If G is a graph over Σ and R a permutation system, then we write SCR(G) for the set of equivalence classes Conf(G)/≈R, where Conf(G) is the set of configurations of G. The rewrite rule (3) above is an instance of this schema, as are the other three permutations of existential quantifiers. These rules approximate classical semantic equivalence of first-order logic, as they rewrite formulas into classically equivalent ones. Indeed, all five configurations of the graph in Fig. 1 are rewriting-equivalent to each other. In the case of the semantic representations generated by the ERG, we don’t have access to an underlying interpretation. But we can capture linguistic intuitions about the equivalence of readings in permutation rules. For instance, proper names and pronouns (which the ERG analyses as scopebearers, although they can be reduced to constants without scope) can be permuted with anything. Indefinites and definites permute with each other if they occur in each other’s scope, but not if they occur in each other’s restriction; and so on. 4 Redundancy elimination Given a permutation system, we can now try to get rid of readings that are equivalent to other readings. One way to formalise this is to enumerate exactly one representative of each equivalence class. However, after such a step we would be left with a collection of semantic representations rather than an USR, and could not use the USR for ruling out further readings. Besides, a naive algorithm which 412 first enumerates all configurations would be prohibitively slow. We will instead tackle the following underspecified redundancy elimination problem: Given an USR G, compute an USR G′ with Conf(G′) ⊆ Conf(G) and SCR(G) = SCR(G′). We want Conf(G′) to be as small as possible. Ideally, it would contain no two equivalent readings, but in practice we won’t always achieve this kind of completeness. Our redundancy elimination algorithm will operate on a dominance chart and successively delete splits and subgraphs from the chart. 4.1 Permutable fragments Because the algorithm must operate on USRs rather than configurations, it needs a way to predict from the USR alone which fragments can be permuted in configurations. This is not generally possible in unrestricted graphs, but for hnc graphs it is captured by the following criterion. Definition 4. Let R be a permutation system. Two fragments F1 and F2 with root labels f1 and f2 in a hnc graph G are called R-permutable iff they are possible dominators of each other and ((f1,ch(F1,F2)),(f2,ch(F2,F1))) ∈P(R). For example, in Fig. 1, the fragments 1 and 2 are permutable, and indeed they can be permuted in any configuration in which one is the parent of the other. 
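The permutability test of Definition 4 can be sketched as follows; `connecting_hole(F1, F2)` (returning ch(F1, F2), or None when F1 is not a possible dominator of F2), `label`, and the set `perm_rel` standing for P(R) are assumed helpers rather than part of the published implementation.

```python
def r_permutable(F1, F2, label, connecting_hole, perm_rel):
    """Definition 4: F1 and F2 are R-permutable iff each is a possible
    dominator of the other and the pair of (root label, connecting hole)
    entries is licensed by the permutability relation P(R).  Since P(R) is
    symmetric by construction, checking one ordering is enough."""
    h12 = connecting_hole(F1, F2)     # ch(F1, F2), or None
    h21 = connecting_hole(F2, F1)     # ch(F2, F1), or None
    if h12 is None or h21 is None:
        return False
    return ((label(F1), h12), (label(F2), h21)) in perm_rel
```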
This is true more generally: Lemma 2 (Koller and Thater (2006)). Let G be a hnc graph, F1 and F2 be R-permutable fragments with root labels f1 and f2, and C1 any configuration of G of the form C(f1(..., f2(...),...)) (where C is the context of the subterm). Then C1 can be R-rewritten into a tree C2 of the form C(f2(..., f1(...),...)) which is also a configuration of G. The proof uses the hn connectedness of G in two ways: in order to ensure that C2 is still a configuration of G, and to make sure that F2 is plugged into the correct hole of F1 for a rule application (cf. Lemma 1). Note that C2 ≈R C1 by definition. 4.2 The redundancy elimination algorithm Now we can use permutability of fragments to define eliminable splits. Intuitively, a split of a subgraph G is eliminable if each of its configurations is equivalent to a configuration of some other split of G. Removing such a split from the chart will rule out some configurations; but it does not change the set of equivalence classes. Definition 5. Let R be a permutation system. A split S = (F,...,hi 7→Gi,...) of a graph G is called eliminable in a chart Ch if some Gi contains a fragment F′ such that (a) Ch contains a split S′ of G with root fragment F′, and (b) F′ is R-permutable with F and all possible dominators of F′ in Gi. In Fig. 1, each of the three splits is eliminable. For example, the split with root fragment 1 is eliminable because the fragment 3 permutes both with 2 (which is the only possible dominator of 3 in the same wcc) and with 1 itself. Proposition 3. Let Ch be a dominance chart, and let S be an eliminable split of a hnc subgraph. Then SC(Ch) = SC(Ch−S). Proof. Let C be an arbitrary configuration of S = (F,h1 7→G1,...,hn 7→Gn), and let F′ ∈Gi be the root fragment of the assumed second split S′. Let F1,...,Fn be those fragments in C that are properly dominated by F and properly dominate F′. All of these fragments must be possible dominators of F′, and all of them must be in Gi as well, so F′ is permutable with each of them. F′ must also be permutable with F. This means that we can apply Lemma 2 repeatedly to move F′ to the root of the configuration, obtaining a configuration of S′ which is equivalent to C. Notice that we didn’t require that Ch must be the complete chart of a dominance graph. This means we can remove eliminable splits from a chart repeatedly, i.e. we can apply the following redundancy elimination algorithm: REDUNDANCY-ELIMINATION(Ch,R) 1 for each split S in Ch 2 do if S is eliminable with respect to R 3 then remove S from Ch Prop. 3 shows that the algorithm is a correct algorithm for the underspecified redundancy elimination problem. The particular order in which eliminable splits are removed doesn’t affect the correctness of the algorithm, but it may change the number of remaining configurations. The algorithm generalises an earlier elimination algorithm (Koller and Thater, 2006) in that the earlier algorithm required the existence of a single split which could be used to establish eliminability of all other splits of the same subgraph. We can further optimise this algorithm by keeping track of how often each subgraph is referenced 413 everyz Dx,y,z ay ax 1 2 3 Ax By Cz 4 5 6 7 Figure 4: A graph for which the algorithm is not complete. by the splits in the chart. Once a reference count drops to zero, we can remove the entry for this subgraph and all of its splits from the chart. This doesn’t change the set of configurations of the chart, but may further reduce the chart size. 
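The elimination procedure itself reduces to a few lines over the chart sketched earlier. In the sketch below, `permutable(F1, F2)` is assumed to wrap the Definition 4 test above, and `possible_dominators(F, wcc)` (returning the fragments of the wcc that are possible dominators of F) is an assumed helper; the reference-counting clean-up just described is omitted.

```python
def eliminable(splits_of_G, split, permutable, possible_dominators):
    """Definition 5: `split` is eliminable if some wcc of it contains a fragment
    F' that (a) is the root of another split of the same subgraph and (b) is
    permutable with the split's root and with all of F''s possible dominators
    inside that wcc."""
    other_roots = {s.root for s in splits_of_G} - {split.root}
    for wcc in split.plugs.values():
        for Fp in wcc & other_roots:                              # condition (a)
            if permutable(Fp, split.root) and \
               all(permutable(Fp, D) for D in possible_dominators(Fp, wcc)):
                return True                                       # condition (b)
    return False

def redundancy_elimination(chart, permutable, possible_dominators):
    """REDUNDANCY-ELIMINATION: delete eliminable splits from the chart."""
    for subgraph, splits in chart.items():
        for split in list(splits):
            if eliminable(splits, split, permutable, possible_dominators):
                splits.remove(split)
```

Because condition (a) requires another split of the same subgraph, the last remaining split of a subgraph is never eliminable, so at least one configuration per equivalence class survives.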
The overall runtime for the algorithm is O(n2S), where S is the number of splits in Ch and n is the number of nodes in the graph. This is asymptotically not much slower than the runtime O((n + m)S) it takes to compute the chart in the first place (where m is the number of edges in the graph). 4.3 Examples and discussion Let’s look at a run of the algorithm on the chart in Fig. 2. The algorithm can first delete the eliminable split with root 1 for the entire graph G. After this deletion, the splits for G with root fragments 2 and 3 are still eliminable; so we can e.g. delete the split for 3. At this point, only one split is left for G. The last split for a subgraph can never be eliminable, so we are finished with the splits for G. This reduces the reference count of some subgraphs (e.g. {2,3,5,6,7}) to 0, so we can remove these subgraphs too. The output of the algorithm is the chart shown below, which represents a single configuration (the one shown in Fig. 3). {1,2,3,4,5,6,7} :⟨2,h2 7→{1,4},h4 7→{3,6,7}⟩ {1,4} :⟨1,h1 7→{4}⟩ {3,6,7} :⟨3,h5 7→{6},h6 7→{7}⟩ In this case, the algorithm achieves complete reduction, in the sense that the final chart has no two equivalent configurations. It remains complete for all variations of the graph in Fig. 1 in which some or all existential quantifiers are replaces by universal quantifiers. This is an improvement over our earlier algorithm (Koller and Thater, 2006), which computed a chart with four configurations for the graph in which 1 and 2 are existential and 3 is universal, as opposed to the three equivalence classes of this graph’s configurations. However, the present algorithm still doesn’t achieve complete reduction for all USRs. One example is shown in Fig. 4. This graph has six configurations in four equivalence classes, but no split of the whole graph is eliminable. The algorithm will delete a split for the subgraph {1,2,4,5,7}, but the final chart will still have five, rather than four, configurations. A complete algorithm would have to recognise that {1,3,4,6,7} and {2,3,5,6,7} have splits (for 1 and 2, respectively) that lead to equivalent configurations and delete one of them. But it is far from obvious how such a non-local decision could be made efficiently, and we leave this for future work. 5 Evaluation In this final section, we evaluate the the effectiveness and efficiency of the elimination algorithm: We run it on USRs from a treebank and measure how many readings are redundant, to what extent the algorithm eliminates this redundancy, and how much time it takes to do this. Resources. The experiments are based on the Rondane corpus, a Redwoods (Oepen et al., 2002) style corpus which is distributed with the English Resource Grammar (Flickinger, 2002). The corpus contains analyses for 1076 sentences from the tourism domain, which are associated with USRs based upon Minimal Recursion Semantics (MRS). The MRS representations are translated into dominance graphs using the open-source utool tool (Koller and Thater, 2005a), which is restricted to MRS representations whose translations are hnc. By restricting ourselves to such MRSs, we end up with a data set of 999 dominance graphs. The average number of scope bearing operators in the data set is 6.5, and the median number of readings is 56. We then defined a (rather conservative) rewrite system RERG for capturing the permutability relation of the quantifiers in the ERG. This amounted to 34 rule schemata, which are automatically expanded to 494 rewrite rules. Experiment: Reduction. 
We first analysed the extent to which our algorithm eliminated the redundancy of the USRs in the corpus. We computed dominance charts for all USRs, ran the algorithm on them, and counted the number of configurations of the reduced charts. We then compared these numbers against a baseline and an upper bound. The upper bound is the true number of 414 1 10 100 1000 10000 100000 0 1 2 3 4 5 6 7 8 9 10 11 12 13 log(#configurations) Factor Algorithm Baseline Classes Figure 5: Mean reduction factor on Rondane. equivalence classes with respect to RERG; for efficiency reasons we could only compute this number for USRs with up to 500.000 configurations (95 % of the data set). The baseline is given by the number of readings that remain if we replace proper names and pronouns by constants and variables, respectively. This simple heuristic is easy to compute, and still achieves nontrivial redundancy elimination because proper names and pronouns are quite frequent (28% of the noun phrase occurrences in the data set). It also shows the degree of non-trivial scope ambiguity in the corpus. For each measurement, we sorted the USRs according to the number N of configurations, and grouped USRs according to the natural logarithm of N (rounded down) to obtain a logarithmic scale. First, we measured the mean reduction factor for each log(N) class, i.e. the ratio of the number of all configurations to the number of remaining configurations after redundancy elimination (Fig. 5). The upper-bound line in the figure shows that there is a great deal of redundancy in the USRs in the data set. The average performance of our algorithm is close to the upper bound and much 0% 20% 40% 60% 80% 100% 0 1 2 3 4 5 6 7 8 9 10 11 12 13 log(#configurations) Algorithm Baseline Figure 6: Percentage of USRs for which the algorithm and the baseline achieve complete reduction. 0 1 10 100 1000 10000 0 1 2 3 4 5 6 7 8 9 10 11 12 13 log(#configurations) time (ms) Full Chart Reduced Chart Enumeration Figure 7: Mean runtimes. better than the baseline. For USRs with fewer than e8 = 2980 configurations (83 % of the data set), the mean reduction factor of our algorithm is above 86 % of the upper bound. The median number of configurations for the USRs in the whole data set is 56, and the median number of equivalence classes is 3; again, the median number of configurations of the reduced charts is very close to the upper bound, at 4 (baseline: 8). The highest reduction factor for an individual USR is 666.240. We also measured the ratio of USRs for which the algorithm achieves complete reduction (Fig. 6): The algorithm is complete for 56 % of the USRs in the data set. It is complete for 78 % of the USRs with fewer than e5 = 148 configurations (64 % of the data set), and still complete for 66 % of the USRs with fewer than e8 configurations. Experiment: Efficiency. Finally, we measured the runtime of the elimination algorithm. The runtime of the elimination algorithm is generally comparable to the runtime for computing the chart in the first place. However, in our experiments we used an optimised version of the elimination algorithm, which computes the reduced chart directly from a dominance graph by checking each split for eliminability before it is added to the chart. We compare the performance of this algorithm to the baseline of computing the complete chart. 
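The optimised variant used in the experiments can be sketched roughly as follows. Here `candidate_splits(graph, subgraph)` (enumerating the splits of a subgraph, as in ordinary chart computation) and the two-argument test `is_eliminable`, assumed to wrap the Definition 5 check from the earlier sketch, are stand-ins for the real utool internals; as with the batch algorithm, the order in which candidate splits are examined can affect how many configurations remain.

```python
def fill_reduced_chart(graph, subgraph, chart, candidate_splits, is_eliminable):
    """Compute the reduced chart directly: test each split for eliminability
    against the splits already kept for this subgraph, and only recurse into
    the wccs of splits that survive."""
    if subgraph in chart:                          # subgraph already filled
        return
    kept = []
    for split in candidate_splits(graph, subgraph):
        if kept and is_eliminable(kept + [split], split):
            continue                               # redundant: never store it
        kept.append(split)
    chart[subgraph] = kept
    for split in kept:
        for wcc in split.plugs.values():
            if len(wcc) > 1:
                fill_reduced_chart(graph, wcc, chart,
                                   candidate_splits, is_eliminable)
```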
For comparison, we have also added the time it takes to enumerate all configurations of the graph, as a lower bound for any algorithm that computes the equivalence classes based on the full set of configurations. Fig. 7 shows the mean runtimes for each log(N) class, on the USRs with less than one million configurations (958 USRs). As the figure shows, the asymptotic runtimes for computing the complete chart and the reduced chart are about the same, whereas the time for 415 enumerating all configurations grows much faster. (Note that the runtime is reported on a logarithmic scale.) For USRs with many configurations, computing the reduced chart actually takes less time on average than computing the complete chart because the chart-filling algorithm is called on fewer subgraphs. While the reduced-chart algorithm seems to be slower than the complete-chart one for USRs with less than e5 configurations, these runtimes remain below 20 milliseconds on average, and the measurements are thus quite unreliable. In summary, we can say that there is no overhead for redundancy elimination in practice. 6 Conclusion We presented an algorithm for redundancy elimination on underspecified chart representations. This algorithm successively deletes eliminable splits from the chart, which reduces the set of described readings while making sure that at least one representative of each original equivalence class remains. Equivalence is defined with respect to a certain class of rewriting systems; this definition approximates semantic equivalence of the described formulas and fits well with the underspecification setting. The algorithm runs in polynomial time in the size of the chart. We then evaluated the algorithm on the Rondane corpus and showed that it is useful in practice: the median number of readings drops from 56 to 4, and the maximum individual reduction factor is 666.240. The algorithm achieves complete reduction for 56% of all sentences. It does this in negligible runtime; even the most difficult sentences in the corpus are reduced in a matter of seconds, whereas the enumeration of all readings would take about a year. This is the first corpus evaluation of a redundancy elimination in the literature. The algorithm improves upon previous work (Koller and Thater, 2006) in that it eliminates more splits from the chart. It is an improvement over earlier algorithms for enumerating irredundant readings (Vestre, 1991; Chaves, 2003) in that it maintains underspecifiedness; note that these earlier papers never made any claims with respect to, or evaluated, completeness. There are a number of directions in which the present algorithm could be improved. We are currently pursuing some ideas on how to improve the completeness of the algorithm further. It would also be worthwhile to explore heuristics for the order in which splits of the same subgraph are eliminated. The present work could be extended to allow equivalence with respect to arbitrary rewrite systems. Most generally, we hope that the methods developed here will be useful for defining other elimination algorithms, which take e.g. full world knowledge into account. References E. Althaus, D. Duchier, A. Koller, K. Mehlhorn, J. Niehren, and S. Thiel. 2003. An efficient graph algorithm for dominance constraints. Journal of Algorithms, 48:194–219. P. Blackburn and J. Bos. 2005. Representation and Inference for Natural Language. A First Course in Computational Semantics. CSLI Publications. R. P. Chaves. 2003. 
Non-redundant scope disambiguation in underspecified semantics. In Proc. 8th ESSLLI Student Session.
A. Copestake, D. Flickinger, C. Pollard, and I. Sag. 2004. Minimal recursion semantics: An introduction. Journal of Language and Computation. To appear.
M. Egg, A. Koller, and J. Niehren. 2001. The Constraint Language for Lambda Structures. Logic, Language, and Information, 10.
D. Flickinger. 2002. On building a more efficient grammar by exploiting types. In S. Oepen, D. Flickinger, J. Tsujii, and H. Uszkoreit, editors, Collaborative Language Engineering. CSLI Publications, Stanford.
R. Fuchss, A. Koller, J. Niehren, and S. Thater. 2004. Minimal recursion semantics as dominance constraints: Translation, evaluation, and analysis. In Proc. of the 42nd ACL.
A. Koller and S. Thater. 2005a. Efficient solving and exploration of scope ambiguities. In ACL-05 Demonstration Notes, Ann Arbor.
A. Koller and S. Thater. 2005b. The evolution of dominance constraint solvers. In Proceedings of the ACL-05 Workshop on Software, Ann Arbor.
A. Koller and S. Thater. 2006. Towards a redundancy elimination algorithm for underspecified descriptions. In Proc. 5th Intl. Workshop on Inference in Computational Semantics (ICoS-5).
A. Koller, J. Niehren, and S. Thater. 2003. Bridging the gap between underspecification formalisms: Hole semantics as dominance constraints. In Proc. 10th EACL.
J. Niehren and S. Thater. 2003. Bridging the gap between underspecification formalisms: Minimal recursion semantics as dominance constraints. In Proc. of the 41st ACL.
S. Oepen, K. Toutanova, S. Shieber, C. Manning, D. Flickinger, and T. Brants. 2002. The LinGO Redwoods treebank: Motivation and preliminary applications. In Proceedings of COLING’02.
K. van Deemter and S. Peters. 1996. Semantic Ambiguity and Underspecification. CSLI, Stanford.
E. Vestre. 1991. An algorithm for generating non-redundant quantifier scopings. In Proc. of the Fifth EACL, Berlin.
Proceedings of the 21st International Conference on Computational Linguistics and 44th Annual Meeting of the ACL, pages 417–424, Sydney, July 2006. c⃝2006 Association for Computational Linguistics Integrating Syntactic Priming into an Incremental Probabilistic Parser, with an Application to Psycholinguistic Modeling Amit Dubey and Frank Keller and Patrick Sturt Human Communication Research Centre, University of Edinburgh 2 Buccleuch Place, Edinburgh EH8 9LW, UK {amit.dubey,patrick.sturt,frank.keller}@ed.ac.uk Abstract The psycholinguistic literature provides evidence for syntactic priming, i.e., the tendency to repeat structures. This paper describes a method for incorporating priming into an incremental probabilistic parser. Three models are compared, which involve priming of rules between sentences, within sentences, and within coordinate structures. These models simulate the reading time advantage for parallel structures found in human data, and also yield a small increase in overall parsing accuracy. 1 Introduction Over the last two decades, the psycholinguistic literature has provided a wealth of experimental evidence for syntactic priming, i.e., the tendency to repeat syntactic structures (e.g., Bock, 1986). Most work on syntactic priming has been concerned with sentence production; however, recent studies also demonstrate a preference for structural repetition in human parsing. This includes the so-called parallelism effect demonstrated by Frazier et al. (2000): speakers processes coordinated structures more quickly when the second conjunct repeats the syntactic structure of the first conjunct. Two alternative accounts of the parallelism effect have been proposed. Dubey et al. (2005) argue that the effect is simply an instance of a pervasive syntactic priming mechanism in human parsing. They provide evidence from a series of corpus studies which show that parallelism is not limited to co-ordination, but occurs in a wide range of syntactic structures, both within and between sentences, as predicted if a general priming mechanism is assumed. (They also show this effect is stronger in coordinate structures, which could explain Frazier et al.’s (2000) results.) Frazier and Clifton (2001) propose an alternative account of the parallelism effect in terms of a copying mechanism. Unlike priming, this mechanism is highly specialized and only applies to coordinate structures: if the second conjunct is encountered, then instead of building new structure, the language processor simply copies the structure of the first conjunct; this explains why a speedup is observed if the two conjuncts are parallel. If the copying account is correct, then we would expect parallelism effects to be restricted to coordinate structures and not to apply in other contexts. This paper presents a parsing model which implements both the priming mechanism and the copying mechanism, making it possible to compare their predictions on human reading time data. Our model also simulates other important aspects of human parsing: (i) it is broad-coverage, i.e., it yields accurate parses for unrestricted input, and (ii) it processes sentences incrementally, i.e., on a word-by-word basis. This general modeling framework builds on probabilistic accounts of human parsing as proposed by Jurafsky (1996) and Crocker and Brants (2000). A priming-based parser is also interesting from an engineering point of view. 
To avoid sparse data problems, probabilistic parsing models make strong independence assumptions; in particular, they generally assume that sentences are independent of each other, in spite of corpus evidence for structural repetition between sentences. We therefore expect a parsing model that includes structural repetition to provide a better fit with real corpus data, resulting in better parsing performance. A simple and principled approach to handling structure re-use would be to use adaptation probabilities for probabilistic grammar rules (Church, 2000), analogous to cache probabilities used in caching language models (Kuhn and de Mori, 1990). This is the approach we will pursue in this paper. Dubey et al. (2005) present a corpus study that demonstrates the existence of parallelism in corpus data. This is an important precondition for understanding the parallelism effect; however, they 417 do not develop a parsing model that accounts for the effect, which means they are unable to evaluate their claims against experimental data. The present paper overcomes this limitation. In Section 2, we present a formalization of the priming and copying models of parallelism and integrate them into an incremental probabilistic parser. In Section 3, we evaluate this parser against reading time data taken from Frazier et al.’s (2000) parallelism experiments. In Section 4, we test the engineering aspects of our model by demonstrating that a small increase in parsing accuracy can be obtained with a parallelism-based model. Section 5 provides an analysis of the performance of our model, focusing on the role of the distance between prime and target. 2 Priming Models We propose three models designed to capture the different theories of structural repetition discussed above. To keep our model as simple as possible, each formulation is based on an unlexicalized probabilistic context free grammar (PCFG). In this section, we introduce the models and discuss the novel techniques used to model structural similarity. We also discuss the design of the probabilistic parser used to evaluate the models. 2.1 Baseline Model The unmodified PCFG model serves as the Baseline. A PCFG assigns trees probabilities by treating each rule expansion as conditionally independent given the parent node. The probability of a rule LHS →RHS is estimated as: P(RHS|LHS) = c(LHS →RHS) c(LHS) 2.2 Copy Model The first model we introduce is a probabilistic variant of Frazier and Clifton’s (2001) copying mechanism: it models parallelism in coordination and nothing else. This is achieved by assuming that the default operation upon observing a coordinator (assumed to be anything with a CC tag, e.g., ‘and’) is to copy the full subtree of the preceding coordinate sister. Copying impacts on how the parser works (see Section 2.5), and in a probabilistic setting, it also changes the probability of trees with parallel coordinated structures. If coordination is present, the structure of the second item is either identical to the first, or it is not.1 Let us call 1The model only considers two-item coordination or the last two sisters of multiple-item coordination. the probability of having a copied tree as pident. This value may be estimated directly from a corpus using the formula ˆpident = cident ctotal Here, cident is the number of coordinate structures in which the two conjuncts have the same internal structure and ctotal is the total number of coordinate structures. 
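The two estimates just given are plain relative frequencies and can be computed in a few lines. The following sketch assumes the training data has already been read off into (lhs, rhs) rule occurrences and into pairs of coordinate conjunct subtrees; these input formats are assumptions made for illustration, not the paper's actual preprocessing.

```python
from collections import Counter

def estimate_pcfg(rule_occurrences):
    """Baseline PCFG: relative-frequency estimates P(RHS | LHS) from an
    iterable of (lhs, rhs) pairs read off the training trees."""
    rule_counts, lhs_counts = Counter(), Counter()
    for lhs, rhs in rule_occurrences:
        rule_counts[(lhs, rhs)] += 1
        lhs_counts[lhs] += 1
    return {(lhs, rhs): c / lhs_counts[lhs]
            for (lhs, rhs), c in rule_counts.items()}

def estimate_p_ident(coordinations):
    """p_ident = c_ident / c_total over coordinate structures, where
    `coordinations` yields (first_conjunct, second_conjunct) subtree pairs."""
    pairs = list(coordinations)
    ident = sum(1 for t1, t2 in pairs if t1 == t2)
    return ident / len(pairs) if pairs else 0.0
```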
Note we assume there is only one parameter pident applicable everywhere (i.e., it has the same value for all rules). How is this used in a PCFG parser? Let t1 and t2 represent, respectively, the first and second coordinate sisters and let PPCFG(t) be the PCFG probability of an arbitrary subtree t. Because of the independence assumptions of the PCFG, we know that pident ≫PPCFG(t). One way to proceed would be to assign a probability of pident when structures match, and (1 −pident) · PPCFG(t2) when structures do not match. However, some probability mass is lost this way: there is a nonzero PCFG probability (namely, PPCFG(t1)) that the structures match. In other words, we may have identical subtrees in two different ways: either due to a copy operation, or due to a PCFG derivation. If pcopy is the probability of a copy operation, we can write this fact more formally as: pident = PPCFG(t1)+ pcopy. Thus, if the structures do match, we assign the second sister a probability of: pcopy +PPCFG(t1) If they do not match, we assign the second conjunct the following probability: 1−PPCFG(t1)−pcopy 1−PPCFG(t1) ·PPCFG(t2) This accounts for both a copy mismatch and a PCFG derivation mismatch, and assures the probabilities still sum to one. These probabilities for parallel and non-parallel coordinate sisters, therefore, gives us the basis of the Copy model. This leaves us with the problem of finding an estimate for pcopy. This value is approximated as: ˆpcopy = ˆpident −1 |T2| ∑ t∈T2 PPCFG(t) In this equation, T2 is the set of all second conjuncts. 2.3 Between Model While the Copy model limits itself to parallelism in coordination, the next two models simulate structural priming in general. Both are similar in design, and are based on a simple insight: we may 418 condition a PCFG rule expansion on whether the rule occurred in some previous context. If Prime is a binary-valued random variable denoting if a rule occurred in the context, then we define: P(RHS|LHS,Prime) = c(LHS →RHS,Prime) c(LHS,Prime) This is essentially an instantiation of Church’s (2000) adaptation probability, albeit with PCFG rules instead of words. For our first model, this context is the previous sentence. Thus, the model can be said to capture the degree to which rule use is primed between sentences. We henceforth refer to this as the Between model. Following the convention in the psycholinguistic literature, we refer to a rule use in the previous sentence as a ‘prime’, and a rule use in the current sentence as the ‘target’. Each rule acts once as a target (i.e., the event of interest) and once as a prime. We may classify such adapted probabilities into ‘positive adaptation’, i.e., the probability of a rule given the rule occurred in the preceding sentence, and ‘negative adaptation’, i.e., the probability of a rule given that the rule did not occur in the preceding sentence. 2.4 Within Model Just as the Between model conditions on rules from the previous sentence, the Within sentence model conditions on rules from earlier in the current sentence. Each rule acts once as a target, and possibly several times as a prime (for each subsequent rule in the sentence). A rule is considered ‘used’ once the parser passes the word on the leftmost corner of the rule. Because the Within model is finer grained than the Between model, it can be used to capture the parallelism effect in coordination. In other words, this model could explain parallelism in coordination as an instance of a more general priming effect. 
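Both the copy-model probability for a second conjunct and the adaptation-conditioned rule probability defined above reduce to a few lines. The sketch below assumes the PCFG inside probabilities of the two conjuncts and the counts split by priming context have already been computed; the function and argument names are illustrative rather than taken from the authors' implementation.

```python
def second_conjunct_prob(p_copy, p_t1, p_t2, is_identical):
    """Copy model: probability assigned to the second coordinate sister t2,
    given the PCFG probabilities of the two sisters and the copy parameter."""
    if is_identical:
        return p_copy + p_t1          # copy, or an identical PCFG derivation
    return (1.0 - p_t1 - p_copy) / (1.0 - p_t1) * p_t2

def primed_rule_prob(rule_counts, lhs_counts, lhs, rhs, primed):
    """Between/Within models: P(RHS | LHS, Prime) by relative frequency,
    with counts collected separately for primed and unprimed contexts,
    e.g. rule_counts[(lhs, rhs, primed)] and lhs_counts[(lhs, primed)]."""
    return rule_counts[(lhs, rhs, primed)] / lhs_counts[(lhs, primed)]
```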
2.5 Parser As our main purpose is to build a psycholinguistic model of structure repetition, the most important feature of the parsing model is to build structures incrementally.2 Reading time experiments, including the parallelism studies of Frazier et al. (2000), make wordby-word measurements of the time taken to read 2In addition to incremental parsing, a characteristic some of psycholinguistic models of sentence comprehension is to parse deterministically. While we can compute the best incremental analysis at any point, ours models do not parse deterministically. However, following the principles of rational analysis (Anderson, 1991), our goal is not to mimic the human parsing mechanism, but rather to create a model of human parsing behavior. a novel and a book wrote 0 3 Terry 4 5 6 1 2 7 NP NP NP a novel and a book Terry wrote 0 3 1 4 5 6 2 7 NP NP NP Figure 1: Upon encountering a coordinator, the copy model copies the most likely first conjunct. sentences. Slower reading times are known to be correlated with processing difficulty, and faster reading times (as is the case with parallel structures) are correlated with processing ease. A probabilistic parser may be considered to be a sentence processing model via a ‘linking hypothesis’, which links the parser’s word-by-word behavior to human reading behavior. We discuss this topic in more detail in Section 3. At this point, it suffices to say that we require a parser which has the prefix property, i.e., which parses incrementally, from left to right. Therefore, we use an Earley-style probabilistic parser, which outputs Viterbi parses (Stolcke, 1995). We have two versions of the parser: one which parses exhaustively, and a second which uses a variable width beam, pruning any edges whose merit is 1 2000 of the best edge. The merit of an edge is its inside probability times a prior P(LHS) times a lookahead probability (Roark and Johnson, 1999). To speed up parsing time, we right binarize the grammar,3 remove empty nodes, coindexation and grammatical functions. As our goal is to create the simplest possible model which can nonetheless model experimental data, we do not make any tree modification designed to improve accuracy (as, e.g., Klein and Manning 2003). The approach used to implement the Copy model is to have the parser copy the subtree of the first conjunct whenever it comes across a CC tag. Before copying, though, the parser looks ahead to check if the part-of-speech tags after the CC are equivalent to those inside the first conjunct. The copying model is visualized in Figure 1: the top panel depicts a partially completed edge upon seeing a CC tag, and the second panel shows the completed copying operation. It should be clear that 3We found that using an unbinarized grammar did not alter the results, at least in the exhaustive parsing case. 419 the copy operation gives the most probable subtree in a given span. To illustrate this, consider Figure 1. If the most likely NP between spans 2 and 7 does not involve copying (i.e. only standard PCFG rule derivations), the parser will find it using normal rule derivations. If it does involve copying, for this particular rule, it must involve the most likely NP subtree from spans 2 to 3. As we parse incrementally, we are guaranteed to have found this edge, and can use it to construct the copied conjunct over spans 5 to 7 and therefore the whole co-ordinated NP from spans 2 to 7. 
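The lookahead check that guards the copy operation can be sketched as follows. `chart.best_subtree` and `copy_over_span` stand in for parser internals that are not described in detail in the paper, so those names are assumptions; the tag comparison itself follows the description above.

```python
def try_copy_conjunct(chart, first_span, cc_index, pos_tags):
    """On seeing a CC tag, check whether the POS tags following the coordinator
    match those inside the first conjunct; if so, copy the most likely
    first-conjunct subtree over the corresponding span after the CC."""
    i, j = first_span                            # first conjunct covers words i..j
    length = j - i
    if pos_tags[cc_index + 1: cc_index + 1 + length] != pos_tags[i:j]:
        return None                              # tags differ: no copy is attempted
    best = chart.best_subtree(i, j)              # most likely first conjunct so far
    return copy_over_span(best, cc_index + 1, cc_index + 1 + length)
```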
To simplify the implementation of the copying operation, we turn off right binarization so that the constituent before and after a coordinator are part of the same rule, and therefore accessible from the same edge. This makes it simple to calculate the new probability: construct the copied subtree, and decide where to place the resulting edge on the chart. The Between and Within models require a cache of recently used rules. This raises two dilemmas. First, in the Within model, keeping track of full contextual history is incompatible with chart parsing. Second, whenever a parsing error occurs, the accuracy of the contextual history is compromised. As we are using a simple unlexicalized parser, such parsing errors are probably quite frequent. We handle the first problem by using one single parse as an approximation of the history. The more realistic choice for this single parse is the best parse so far according to the parser. Indeed, this is the approach we use for our main results in Section 3. However, because of the second problem noted above, in Section 4, we simulated the context by filling the cache with rules from the correct tree. In the Between model, these are the rules of the correct parse of the previous tree; in the Within model, these are the rules used in the correct parse at points up to (but not including) the current word. 3 Human Reading Time Experiment In this section, we test our models by applying them to experimental reading time data. Frazier et al. (2000) reported a series of experiments that examined the parallelism preference in reading. In one of their experiments, they monitored subjects’ eye-movements while they read sentences like (1): (1) a. Hilda noticed a strange man and a tall woman when she entered the house. b. Hilda noticed a man and a tall woman when she entered the house. They found that total reading times were faster on the phrase tall woman in (1a), where the coordinated noun phrases are parallel in structure, compared with in (1b), where they are not. There are various approaches to modeling processing difficulty using a probabilistic approach. One possibility is to use an incremental parser with a beam search or an n-best approach. Processing difficulty is predicted at points in the input string where the current best parse is replaced by an alternative derivation (Jurafsky, 1996; Crocker and Brants, 2000). An alternative is to keep track of all derivations, and predict difficulty at points where there is a large change in the shape of the probability distribution across adjacent parsing states (Hale, 2001). A third approach is to calculate the forward probability (Stolcke, 1995) of the sentence using a PCFG. Low probabilities are then predicted to correspond to high processing difficulty. A variant of this third approach is to assume that processing difficulty is correlated with the (log) probability of the best parse (Keller, 2003). This final formulation is the one used for the experiments presented in this paper. 3.1 Method The item set was adapted from that of Frazier et al. (2000). The original two relevant conditions of their experiment (1a,b) differ in terms of length. This results in a confound in the PCFG framework, because longer sentences tend to result in lower probabilities (as the parses tend to involve more rules). To control for such length differences, we adapted the materials by adding two extra conditions in which the relation between syntactic parallelism and length was reversed. 
This resulted in the following four conditions: (2) a. DT JJ NN and DT JJ NN (parallel) Hilda noticed a tall man and a strange woman when she entered the house. b. DT NN and DT JJ NN (non-parallel) Hilda noticed a man and a strange woman when she entered the house. c. DT JJ NN and DT NN (non-parallel) Hilda noticed a tall man and a woman when she entered the house. d. DT NN and DT NN (parallel) Hilda noticed a man and a woman when she entered the house. 420 In order to account for Frazier et al.’s parallelism effect a probabilistic model should predict a greater difference in probability between (2a) and (2b) than between (2c) and (2d) (i.e., (2a)−(2b) > (2c)−(2d)). This effect will not be confounded with length, because the relation between length and parallelism is reversed between (2a,b) and (2c,d). We added 8 items to the original Frazier et al. materials, resulting in a new set of 24 items similar to (2). We tested three of our PCFG-based models on all 24 sets of 4 conditions. The models were the Baseline, the Within and the Copy models, trained exactly as described above. The Between model was not tested as the experimental stimuli were presented without context. Each experimental sentence was input as a sequence of correct POS tags, and the log probability estimate of the best parse was recorded. 3.2 Results and Discussion Table 1 shows the mean log probabilities estimated by the models for the four conditions, along with the relevant differences between parallel and nonparallel conditions. Both the Within and the Copy models show a parallelism advantage, with this effect being much more pronounced for the Copy model than the Within model. To evaluate statistical significance, the two differences for each item were compared using a Wilcoxon signed ranks test. Significant results were obtained both for the Within model (N = 24, Z = 1.67, p < .05, one-tailed) and for the Copy model (N = 24, Z = 4.27, p < .001, onetailed). However, the effect was much larger for the Copy model, a conclusion which is confirmed by comparing the differences of differences between the two models (N = 24, Z = 4.27, p < .001, one-tailed). The Baseline model was not evaluated statistically, because by definition it predicts a constant value for (2a)−(2b) and (2c)−(2d) across all items. This is simply a consequence of the PCFG independence assumption, coupled with the fact that the four conditions of each experimental item differ only in the occurrences of two NP rules. The results show that the approach taken here can be successfully applied to the modeling of experimental data. In particular, both the Within and the Copy models show statistically reliable parallelism effects. It is not surprising that the copy model shows a large parallelism effect for the Frazier et al. (2000) items, as it was explicitly designed to prefer structurally parallel conjuncts. The more interesting result is the parallelism effect found for the Within model, which shows that such an effect can arise from a more general probabilistic priming mechanism. 4 Parsing Experiment In the previous section, we were able to show that the Copy and Within models are able to account for human reading-time performance for parallel coordinate structures. While this result alone is sufficient to claim success as a psycholinguistic model, it has been argued that more realistic psycholinguistic models ought to also exhibit high accuracy and broad-coverage, both crucial properties of the human parsing mechanism (e.g., Crocker and Brants, 2000). 
This should not be difficult: our starting point was a PCFG, which already has broad coverage behavior (albeit with only moderate accuracy). However, in this section we explore what effects our modifications have to overall coverage, and, perhaps more interestingly, to parsing accuracy. 4.1 Method The models used here were the ones introduced in Section 2 (which also contains a detailed description of the parser that we used to apply the models). The corpus used for both training and evaluation is the Wall Street Journal part of the Penn Treebank. We use sections 1–22 for training, section 0 for development and section 23 for testing. Because the Copy model posits coordinated structures whenever POS tags match, parsing efficiency decreases if POS tags are not predetermined. Therefore, we assume POS tags as input, using the gold-standard tags from the treebank (following, e.g., Roark and Johnson 1999). 4.2 Results and Discussion Table 2 lists the results in terms of F-score on the test set.4 Using exhaustive search, the baseline model achieves an F-score of 73.3, which is comparable to results reported for unlexicalized incremental parsers in the literature (e.g. the RB1 model of Roark and Johnson, 1999). All models exhibit a small decline in performance when beam search is used. For the Within model we observe a slight improvement in performance over the baseline, both for the exhaustive search and the beam 4Based on a χ2 test on precision and recall, all results are statistically different from each other. The Copy model actually performs slightly better than the Baseline in the exhaustive case. 421 Model para: (2a) non-para: (2b) non-para: (2c) para: (2d) (2a)−(2b) (2c)−(2d) Baseline −33.47 −32.37 −32.37 −31.27 −1.10 −1.10 Within −33.28 −31.67 −31.70 −29.92 −1.61 −1.78 Copy −16.18 −27.22 −26.91 −15.87 11.04 −11.04 Table 1: Mean log probability estimates for Frazier et al (2000) items Exhaustive Search Beam Search Beam + Coord Fixed Coverage Model F-score Coverage F-score Coverage F-score Coverage F-score Coverage Baseline 73.3 100 73.0 98.0 73.1 98.1 73.0 97.5 Within 73.6 100 73.4 98.4 73.0 98.5 73.4 97.5 Between 71.6 100 71.7 98.7 71.5 99.0 71.8 97.5 Copy 73.3 100 – – 73.0 98.1 73.1 97.5 Table 2: Parsing results for the Within, Between, and Copy model compared to a PCFG baseline. search conditions. The Between model, however, resulted in a decrease in performance. We also find that the Copy model performs at the baseline level. Recall that in order to simplify the implementation of the copying, we had to disable binarization for coordinate constituents. This means that quaternary rules were used for coordination (X →X1 CC X2 X′), while normal binary rules (X →Y X′) were used everywhere else. It is conceivable that this difference in binarization explains the difference in performance between the Between and Within models and the Copy model when beam search was used. We therefore also state the performance for Between and Within models with binarization limited to noncoordinate structures in the column labeled ‘Beam + Coord’ in Table 2. The pattern of results, however, remains the same. The fact that coverage differs between models poses a problem in that it makes it difficult to compare the F-scores directly. We therefore compute separate F-scores for just those sentences that were covered by all four models. The results are reported in the ‘Fixed Coverage’ column of Table 2. 
Again, we observe that the copy model performs at baseline level, while the Within model slightly outperforms the baseline, and the Between model performs worse than the baseline. In Section 5 below we will present an error analysis that tries to investigate why the adaptation models do not perform as well as expected. Overall, we find that the modifications we introduced to model the parallelism effect in humans have a positive, but small, effect on parsing accuracy. Nonetheless, the results also indicate the success of both the Copy and Within approaches to parallelism as psycholinguistic models: a modification primarily useful for modeling human behavior has no negative effects on computational measures of coverage or accuracy. 5 Distance Between Rule Uses Although both the Within and Copy models succeed at the main task of modeling the parallelism effect, the parsing experiments in Section 4 showed mixed results with respect to F-scores: a slight increase in F-score was observed for the Within model, but the Between model performed below the baseline. We therefore turn to an error analysis, focusing on these two models. Recall that the Within and Between models estimate two probabilities for a rule, which we have been calling the positive adaptation (the probability of a rule when the rule is also in the history), and the negative adaptation (the probability of a rule when the rule is not in the history). While the effect is not always strong, we expect positive adaptation to be higher than negative adaptation (Dubey et al., 2005). However, this is not always the case. In the Within model, for example, the rule NP →DT JJ NN has a higher negative than positive adaptation (we will refer to such rules as ‘negatively adapted’). The more common rule NP → DT NN has a higher positive adaptation (‘positively adapted’). Since the latter is three times more common, this raises a concern: what if adaptation is an artifact of frequency? This ‘frequency’ hypothesis posits that a rule recurring in a sentence is simply an artifact of the its higher frequency. The frequency hypothesis could explain an interesting fact: while the majority of rules tokens have positive adaptation, the majority of rule types have negative adaptation. An important corollary of the frequency hypothesis is that we would not expect to find a bias towards local rule re-uses. 422 Iterate through the treebank Remember how many words each constituent spans Iterate through the treebank Iterate through each tree Upon finding a constituent spanning 1-4 words Swap it with a randomly chosen constituent of 1-4 words Update the remembered size of the swapped constituents and their subtrees Iterate through the treebank 4 more times Swap constituents of size 5-9, 10-19, 20-35 and 35+ words, respectively Figure 2: The treebank randomization algorithm Nevertheless, the NP →DT JJ NN rule is an exception: most negatively adapted rules have very low frequencies. This raises the possibility that sparse data is the cause of the negatively adapted rules. This makes intuitive sense: we need many rule occurrences to accurately estimate positive or negative adaptation. We measure the distribution of rule use to explore if negatively adapted rules owe more to frequency effects or to sparse data. This distributional analysis also serves to measure ‘decay’ effects in structural repetition. The decay effect in priming has been observed elsewhere (Szmrecsanyi, 2005), and suggests that positive adaptation is higher the closer together two rules are. 
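A condensed sketch of the randomisation procedure of Fig. 2 is given below. `span_length` and `swap_subtrees` (which also updates the recorded sizes) are assumed helpers over whatever tree encoding is used, and the original formulation interleaves the size bookkeeping with the passes rather than precomputing each band, so this is an approximation of the algorithm rather than a faithful reimplementation.

```python
import random

SIZE_BANDS = [(1, 4), (5, 9), (10, 19), (20, 35), (36, float("inf"))]

def randomize_treebank(treebank, span_length, swap_subtrees):
    """Swap each constituent with a randomly chosen constituent of similar
    size, smallest spans first, destroying local repetition while keeping
    overall rule frequencies intact."""
    for lo, hi in SIZE_BANDS:                    # one pass per size band
        band = [node for tree in treebank
                     for node in tree.subtrees()
                     if lo <= span_length(node) <= hi]
        for node in band:
            other = random.choice(band)          # self-swaps are harmless no-ops
            swap_subtrees(node, other)
```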
5.1 Method We investigate the dispersion of rules by plotting histograms of the distance between subsequent rule uses. The basic premise is to look for evidence of an early peak or skew, which suggests rule re-use. To ensure that the histogram itself is not sensitive to sparse data problems, we group all rules into two categories: those which are positively adapted, and those which are negatively adapted. If adaptation is not due to frequency alone, we would expect the histograms for both positively and negatively adapted rules to be skewed towards local rule repetition. Detecting a skew requires a baseline without repetition. We propose the concept of ‘randomizing’ the treebank to create such a baseline. The randomization algorithm is described in Figure 2. The algorithm entails swapping subtrees, taking care that small subtrees are swapped first (otherwise large chunks would be swapped at once, preserving a great deal of context). This removes local effects, giving a distribution due frequency alone. After applying the randomization algorithm to the treebank, we may construct the distance his0 5 10 Logarithm of Word Distance 0 0.005 0.01 0.015 0.02 Normalized Frequency of Rule Occurance + Adapt, Untouched Corpus + Adapt, Randomized Corpus - Adapt, Untouched Corpus - Adapt, Randomized Corpus Figure 3: Log of number of words between rule invocations togram for both the non-randomized and randomized treebanks. The distance between two occurrences of a rule is calculated as the number of words between the first word on the left corner of each rule. A special case occurs if a rule expansion invokes another use of the same rule. When this happens, we do not count the distance between the first and second expansion. However, the second expansion is still remembered as the most recent. We group rules into those that have a higher positive adaptation and those that have a higher negative adaptation. We then plot a histogram of rule re-occurrence distance for both groups, in both the non-randomized and randomized corpora. 5.2 Results and Discussion The resulting plot for the Within model is shown in Figure 3. For both the positive and negatively adapted rules, we find that randomization results in a lower, less skewed peak, and a longer tail. We conclude that rules tend to be repeated close to one another more than we expect by chance, even for negatively adapted rules. This is evidence against the frequency hypothesis, and in favor of the sparse data hypothesis. This means that the small size of the increase in F-score we found in Section 4 is not due to the fact that the adaption is just an artifact of rule frequency. Rather, it can probably be attributed to data sparseness. Note also that the shape of the histogram provides a decay curve. Speculatively, we suggest that this shape could be used to parameterize the decay effect and therefore provide an estimate for adaptation which is more robust to sparse data. However, we leave the development of such a smoothing function to future research. 423 6 Conclusions and Future Work The main contribution of this paper has been to show that an incremental parser can simulate syntactic priming effects in human parsing by incorporating probability models that take account of previous rule use. Frazier et al. (2000) argued that the best account of their observed parallelism advantage was a model in which structure is copied from one coordinate sister to another. 
Here, we explored a probabilistic variant of the copy mechanism, along with two more general models based on within- and between-sentence priming. Although the copy mechanism provided the strongest parallelism effect in simulating the human reading time data, the effect was also successfully simulated by a general within-sentence priming model. On the basis of simplicity, we therefore argue that it is preferable to assume a simpler and more general mechanism, and that the copy mechanism is not needed. This conclusion is strengthened when we turn to consider the performance of the parser on the standard Penn Treebank test set: the Within model showed a small increase in F-score over the PCFG baseline, while the copy model showed no such advantage.5 All the models we proposed offer a broadcoverage account of human parsing, not just a limited model on a hand-selected set of examples, such as the models proposed by Jurafsky (1996) and Hale (2001) (but see Crocker and Brants 2000). A further contribution of the present paper has been to develop a methodology for analyzing the (re-)use of syntactic rules over time in a corpus. In particular, we have defined an algorithm for randomizing the constituents of a treebank, yielding a baseline estimate of chance repetition. In the research reported in this paper, we have adopted a very simple model based on an unlexicalized PCFG. In the future, we intend to explore the consequences of introducing lexicalization into the parser. This is particularly interesting from the point of view of psycholinguistic modeling, because there are well known interactions between lexical repetition and syntactic priming, which require lexicalization for a proper treatment. Future work will also involve the use of smoothing to increase the benefit of priming for parsing accuracy. The investigations reported 5The broad-coverage parsing experiment speaks against a ‘facilitation’ hypothesis, i.e., that the copying and priming mechanisms work together. However, a full test of this (e.g., by combining the two models) is left to future research. in Section 5 provide a basis for estimating the smoothing parameters. References Anderson, John. 1991. Cognitive architectures in a rational analysis. In K. VanLehn, editor, Architectures for Intelligence, Lawrence Erlbaum Associates, Hillsdale, N.J., pages 1–24. Bock, J. Kathryn. 1986. Syntactic persistence in language production. Cognitive Psychology 18:355–387. Church, Kenneth W. 2000. Empirical estimates of adaptation: the chance of two Noriegas is closer to p/2 than p2. In Proceedings of the 17th Conference on Computational Linguistics. Saarbr¨ucken, Germany, pages 180–186. Crocker, Matthew W. and Thorsten Brants. 2000. Widecoverage probabilistic sentence processing. Journal of Psycholinguistic Research 29(6):647–669. Dubey, Amit, Patrick Sturt, and Frank Keller. 2005. Parallelism in coordination as an instance of syntactic priming: Evidence from corpus-based modeling. In Proceedings of the Human Language Technology Conference and the Conference on Empirical Methods in Natural Language Processing. Vancouver, pages 827–834. Frazier, Lyn, Alan Munn, and Chuck Clifton. 2000. Processing coordinate structures. Journal of Psycholinguistic Research 29(4):343–370. Frazier, Lynn and Charles Clifton. 2001. Parsing coordinates and ellipsis: Copy α. Syntax 4(1):1–22. Hale, John. 2001. A probabilistic Earley parser as a psycholinguistic model. 
In Proceedings of the 2nd Conference of the North American Chapter of the Association for Computational Linguistics. Pittsburgh, PA. Jurafsky, Daniel. 1996. A probabilistic model of lexical and syntactic access and disambiguation. Cognitive Science 20(2):137–194. Keller, Frank. 2003. A probabilistic parser as a model of global processing difficulty. In R. Alterman and D. Kirsh, editors, Proceedings of the 25th Annual Conference of the Cognitive Science Society. Boston, pages 646–651. Klein, Dan and Christopher D. Manning. 2003. Accurate Unlexicalized Parsing. In Proceedings of the 41st Annual Meeting of the Association for Computational Linguistics. Sapporo, Japan, pages 423–430. Kuhn, Roland and Renate de Mori. 1990. A cache-based natural language model for speech recognition. IEEE Transanctions on Pattern Analysis and Machine Intelligence 12(6):570–583. Roark, Brian and Mark Johnson. 1999. Efficient probabilistic top-down and left-corner parsing. In Proceedings of the 37th Annual Meeting of the Association for Computational Linguistics. pages 421–428. Stolcke, Andreas. 1995. An efficient probabilistic contextfree parsing algorithm that computes prefix probabilities. Computational Linguistics 21(2):165–201. Szmrecsanyi, Benedikt. 2005. Creatures of habit: A corpuslinguistic analysis of persistence in spoken English. Corpus Linguistics and Linguistic Theory 1(1):113–149. 424
Proceedings of the 21st International Conference on Computational Linguistics and 44th Annual Meeting of the ACL, pages 425–432, Sydney, July 2006. c⃝2006 Association for Computational Linguistics A Fast, Accurate Deterministic Parser for Chinese Mengqiu Wang Kenji Sagae Teruko Mitamura Language Technologies Institute School of Computer Science Carnegie Mellon University {mengqiu,sagae,teruko}@cs.cmu.edu Abstract We present a novel classifier-based deterministic parser for Chinese constituency parsing. Our parser computes parse trees from bottom up in one pass, and uses classifiers to make shift-reduce decisions. Trained and evaluated on the standard training and test sets, our best model (using stacked classifiers) runs in linear time and has labeled precision and recall above 88% using gold-standard part-of-speech tags, surpassing the best published results. Our SVM parser is 2-13 times faster than state-of-the-art parsers, while producing more accurate results. Our Maxent and DTree parsers run at speeds 40-270 times faster than state-of-the-art parsers, but with 5-6% losses in accuracy. 1 Introduction and Background Syntactic parsing is one of the most fundamental tasks in Natural Language Processing (NLP). In recent years, Chinese syntactic parsing has also received a lot of attention in the NLP community, especially since the release of large collections of annotated data such as the Penn Chinese Treebank (Xue et al., 2005). Corpus-based parsing techniques that are successful for English have been applied extensively to Chinese. Traditional statistical approaches build models which assign probabilities to every possible parse tree for a sentence. Techniques such as dynamic programming, beam-search, and best-first-search are then employed to find the parse tree with the highest probability. The massively ambiguous nature of wide-coverage statistical parsing,coupled with cubic-time (or worse) algorithms makes this approach too slow for many practical applications. Deterministic parsing has emerged as an attractive alternative to probabilistic parsing, offering accuracy just below the state-of-the-art in syntactic analysis of English, but running in linear time (Sagae and Lavie, 2005; Yamada and Matsumoto, 2003; Nivre and Scholz, 2004). Encouraging results have also been shown recently by Cheng et al. (2004; 2005) in applying deterministic models to Chinese dependency parsing. We present a novel classifier-based deterministic parser for Chinese constituency parsing. In our approach, which is based on the shift-reduce parser for English reported in (Sagae and Lavie, 2005), the parsing task is transformed into a succession of classification tasks. The parser makes one pass through the input sentence. At each parse state, it consults a classifier to make shift/reduce decisions. The parser then commits to a decision and enters the next parse state. Shift/reduce decisions are made deterministically based on the local context of each parse state, and no backtracking is involved. This process can be viewed as a greedy search where only one path in the whole search space is considered. Our parser produces both dependency and constituent structures, but in this paper we will focus on constituent parsing. By separating the classification task from the parsing process, we can take advantage of many machine learning techniques such as classifier ensemble. 
We conducted experiments with four different classifiers: support vector machines (SVM), Maximum-Entropy (Maxent), Decision Tree (DTree) and memory-based learning (MBL). We also compared the performance of three different classifier ensemble approaches (simple voting, classifier stacking and meta-classifier). Our best model (using stacked classifiers) runs in linear time and has labeled precision and recall above 88% using gold-standard part-of-speech tags, surpassing the best published results (see Section 5). Our SVM parser is 2-13 times faster than state-of-the-art parsers, while producing more accurate results. Our Maxent and DTree parsers are 40-270 times faster than state-of-the-art parsers, but with 5-6% losses in accuracy.

2 Deterministic parsing model

Like other deterministic parsers, our parser assumes the input has already been segmented and tagged with part-of-speech (POS) information during a preprocessing step (we constructed our own POS tagger based on SVM; see Section 3.3). The main data structures used in the parsing algorithm are a queue and a stack. The input word-POS pairs to be processed are stored in the queue. The stack holds the partial parse trees that are built during parsing. A parse state is represented by the content of the stack and queue. The classifier makes shift/reduce decisions based on contextual features that represent the parse state. A shift action removes the first item on the queue and puts it onto the stack. A reduce action is in the form of Reduce-{Binary|Unary}-X, where {Binary|Unary} denotes whether one or two items are to be removed from the stack, and X is the label of a new tree node that will dominate the removed items. Because a reduction is either unary or binary, the resulting parse tree will only have binary and/or unary branching nodes. Parse trees are also lexicalized to produce dependency structures. For lexicalization, we used the same head-finding rules reported in (Bikel, 2004). With this additional information, reduce actions are now in the form of Reduce-{Binary|Unary}-X-Direction. The "Direction" tag indicates whether to take the head-node of the left subtree or of the right subtree to be the head of the new tree, in the case of binary reduction. A simple transformation process as described in (Sagae and Lavie, 2005) is employed to convert between arbitrary branching trees and binary trees. This transformation breaks multi-branching nodes down into binary-branching nodes by inserting temporary nodes; temporary nodes are collapsed and removed when we transform a binary tree back into a multi-branching tree.
The parsing process succeeds when all the items in the queue have been processed and there is only one item (the final parse tree) left on the stack. If the classifier returns a shift action when there are no items left on the queue, or a reduce action when there are no items on the stack, the parser fails. In this case, the parser simply combines all the items on the stack into one IP node, and outputs this as a partial parse. Sagae and Lavie (2005) have shown that this algorithm has linear time complexity, assuming that classification takes constant time.
The next example illustrates the process for the input "布朗 (Brown) 访问 (visits) 上海 (Shanghai)", tagged with the POS sequence "NR (Proper Noun) VV (Verb) NR (Proper Noun)".
1. In the initial parsing state, the stack (S) is empty, and the queue (Q) holds the word and POS tag pairs for the input sentence.
(S): Empty
(Q): NR 布朗 VV 访问 NR 上海
2. The first action given by the classifier is shift.
(S): [NR 布朗]
(Q): VV 访问 NR 上海
3. The next action is reduce-Unary-NP, which reduces the first item on the stack to an NP node. Node (NR 布朗) becomes the head of the new NP node, and this information is marked by brackets. The new parse state is:
(S): [NP (NR 布朗) NR 布朗]
(Q): VV 访问 NR 上海
4. The next action is shift.
(S): [NP (NR 布朗) NR 布朗] [VV 访问]
(Q): NR 上海
5. The next action is again shift.
(S): [NP (NR 布朗) NR 布朗] [VV 访问] [NR 上海]
(Q): Empty
6. The next action is reduce-Unary-NP.
(S): [NP (NR 布朗) NR 布朗] [VV 访问] [NP (NR 上海) NR 上海]
(Q): Empty
7. The next action is reduce-Binary-VP-Left. The node (VV 访问) will be the head of the new VP node.
(S): [NP (NR 布朗) NR 布朗] [VP (VV 访问) VV 访问 NP (NR 上海) NR 上海]
(Q): Empty
8. The next action is reduce-Binary-IP-Right. Since, after the action is performed, there will be only one tree node (IP) left on the stack and no items on the queue, this is the final action. The final state is:
(S): [IP (VV 访问) NP (NR 布朗) NR 布朗 VP (VV 访问) VV 访问 NP (NR 上海) NR 上海]
(Q): Empty

3 Classifiers and Feature Selection

Classification is the key component of our parsing model. We conducted experiments with four different types of classifiers.

3.1 Classifiers

Support Vector Machine: The Support Vector Machine is a discriminative classification technique which solves the binary classification problem by finding a hyperplane in a high-dimensional space that gives the maximum soft margin, based on the Structural Risk Minimization Principle. We used the TinySVM toolkit (Kudo and Matsumoto, 2000), with a degree 2 polynomial kernel. To train a multi-class classifier, we used the one-against-all scheme.
Maximum-Entropy Classifier: In a maximum-entropy model, the goal is to estimate a set of parameters that would maximize the entropy over distributions that satisfy certain constraints. These constraints force the model to best account for the training data (Ratnaparkhi, 1999). Maximum-entropy models have been used for Chinese character-based parsing (Fung et al., 2004; Luo, 2003) and POS tagging (Ng and Low, 2004). In our experiments, we used Le's Maxent toolkit (Zhang, 2004). This implementation uses the Limited-Memory Variable Metric method for parameter estimation. We trained all our models using 300 iterations with no event cut-off, and a Gaussian prior smoothing value of 2. Maxent classifiers output not only a single class label, but also a number of possible class labels and their associated probability estimates.
Decision Tree Classifier: The statistical decision tree is a classic machine learning technique that has been extensively applied to NLP. For example, decision trees were used in the SPATTER system (Magerman, 1994) to assign a probability distribution over the space of possible parse trees. In our experiment, we used the C4.5 decision tree classifier, and ignored lexical features whose counts were less than 7.
Memory-Based Learning: Memory-Based Learning approaches the classification problem by storing training examples explicitly in memory, and classifying the current case by finding the most similar stored cases (using k-nearest neighbors). We used the TiMBL toolkit (Daelemans et al., 2004) in our experiment, with k = 5.
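Putting any of these classifiers into the deterministic loop of Section 2, the parser as a whole can be sketched as below. This is a minimal illustration rather than the authors' implementation: the `classify` callback, the `Tree` record, and the action-string format are assumptions chosen to mirror the description in the text (shift, Reduce-Unary-X, Reduce-Binary-X-Direction), and failure simply backs off to a flat IP node as described above; binarization, head finding, and temporary-node bookkeeping are omitted.

```python
from collections import namedtuple

# A lexicalized partial parse: `head` is the (word, POS) pair chosen by the head rules.
Tree = namedtuple("Tree", "label head children")

def parse(tagged_words, classify):
    """Deterministic shift-reduce parsing of a list of (word, POS) pairs.

    `classify(stack, queue)` is assumed to return an action string such as
    'shift', 'reduce-unary-NP', or 'reduce-binary-VP-left'.
    """
    queue = list(tagged_words)      # word-POS pairs still to be processed
    stack = []                      # partial parse trees built so far
    while not (len(stack) == 1 and not queue):
        action = classify(stack, queue)
        if action == "shift":
            if not queue:           # classifier asked to shift from an empty queue
                break               # -> fail and return a partial parse below
            word, pos = queue.pop(0)
            stack.append(Tree(pos, (word, pos), [word]))
            continue
        _, arity, label, *direction = action.split("-")
        if arity == "unary" and stack:
            child = stack.pop()
            stack.append(Tree(label, child.head, [child]))
        elif arity == "binary" and len(stack) >= 2:
            right, left = stack.pop(), stack.pop()
            head = left.head if direction == ["left"] else right.head
            stack.append(Tree(label, head, [left, right]))
        else:                       # reduce with too few items on the stack
            break
    if len(stack) == 1 and not queue:
        return stack[0]             # the finished parse tree
    return Tree("IP", None, stack)  # back off to a flat partial parse
```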
3.2 Feature selection

For each parse state, a set of features is extracted and fed to each classifier. Features are distributionally derived or linguistically based, and carry the context of a particular parse state. When input to the classifier, each feature is treated as a contextual predicate which maps an outcome and a context to a true/false value. The specific features used with the classifiers are listed in Table 1.

Table 1: Features for classification
1. A Boolean feature indicates if a closing punctuation is expected or not.
2. A Boolean value indicates if the queue is empty or not.
3. A Boolean feature indicates whether there is a comma separating S(1) and S(2) or not.
4. Last action given by the classifier, and number of words in S(1) and S(2).
5. Headword and its POS of S(1), S(2), S(3) and S(4), and word and POS of Q(1), Q(2), Q(3) and Q(4).
6. Nonterminal label of the root of S(1) and S(2), and number of punctuations in S(1) and S(2).
7. Rhythmic features and the linear distance between the head-words of S(1) and S(2).
8. Number of words found so far to be dependents of the head-words of S(1) and S(2).
9. Nonterminal label, POS and headword of the immediate left and right child of the root of S(1) and S(2).
10. Most recently found word and POS pair that is to the left of the head-word of S(1) and S(2).
11. Most recently found word and POS pair that is to the right of the head-word of S(1) and S(2).

Sun and Jurafsky (2003) studied the distributional property of rhythm in Chinese, and used the rhythmic feature to augment a PCFG model for a practical shallow parsing task. This feature has the value 1, 2 or 3 for monosyllabic, bi-syllabic or multi-syllabic nouns or verbs. For noun and verb phrases, the feature is defined as the number of words in the phrase. Sun and Jurafsky found that in NP and VP constructions there are strong constraints on the word length for verbs and nouns (a kind of rhythm), and on the number of words in a constituent. We employed these same rhythmic features to see whether this property holds for the Penn Chinese Treebank data, and whether it helps in the disambiguation of phrase types. Experiments show that this feature does increase the classification accuracy of the SVM model by about 1%.
In both Chinese and English, there are punctuation characters that come in pairs (e.g., parentheses). In Chinese, such pairs are more frequent (quotes, single quotes, and book-name marks). During parsing, we note how many opening punctuations we have seen on the stack. If the number is odd, this feature has value 1, otherwise 0. A Boolean feature thus indicates whether an odd number of opening punctuations have been seen and a closing punctuation is expected; in this case the feature gives a strong hint to the parser that all the items in the queue before the closing punctuation, and the items on the stack after the opening punctuation, should be under a common constituent node which begins and ends with the two punctuations.

3.3 POS tagging

In our parsing model, POS tagging is treated as a separate problem and it is assumed that the input has already been tagged with POS. To compare with previously published work, we evaluated the parser performance on automatically tagged data. We constructed a simple POS tagger using an SVM classifier. The tagger makes two passes over the input sentence. The first pass extracts features from the two words and POS tags that come before the current word, the two words following the current word, and the current word itself (the length of the word, and whether the word contains numbers, special symbols that separate foreign first and last names, common Chinese family names, western alphabets or dates).
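As a rough illustration of this first-pass feature window, the sketch below builds a feature dictionary for one position. The exact feature inventory and encodings are only loosely those described above; the function name, boundary symbols, and regular expressions are assumptions made for the example.

```python
import re

def first_pass_features(words, tags, i):
    """Word-window features for tagging position i (first pass of the SVM tagger).

    `tags` holds the POS tags already assigned to the left of position i;
    right-context POS tags only become available in the second pass.
    """
    w = words[i]
    return {
        "w0": w,
        "w-1": words[i - 1] if i > 0 else "<s>",
        "w-2": words[i - 2] if i > 1 else "<s>",
        "w+1": words[i + 1] if i + 1 < len(words) else "</s>",
        "w+2": words[i + 2] if i + 2 < len(words) else "</s>",
        "t-1": tags[i - 1] if i > 0 else "<s>",
        "t-2": tags[i - 2] if i > 1 else "<s>",
        "len": len(w),
        "has_digit": bool(re.search(r"[0-9０-９]", w)),
        "has_latin": bool(re.search(r"[A-Za-z]", w)),
        "name_dot": "·" in w,   # middle dot separating foreign first and last names (illustrative)
    }
```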
Then the tag is assigned to the word according to the SVM classifier's output. In the second pass, additional features such as the POS tags of the two words following the current word, and the POS tag of the current word (assigned in the first pass), are used. This tagger had a measured precision of 92.5% for sentences ≤40 words.

4 Experiments

We performed experiments using the Penn Chinese Treebank. Sections 001-270 (3484 sentences, 84,873 words) were used for training, sections 301-325 (350 sentences, 6,776 words) for development, and sections 271-300 (348 sentences, 7,980 words) for testing. The whole dataset contains 99,629 words, which is about 1/10 of the size of the English Penn Treebank. Standard corpus preparation steps were done prior to parsing: empty nodes were removed, and the resulting A-over-A unary rewrite nodes were collapsed. Functional labels of the nonterminal nodes were also removed, but we did not relabel the punctuations, unlike in (Jiang, 2004). Bracket scoring was done by the EVALB program (http://nlp.cs.nyu.edu/evalb/), and preterminals were not counted as constituents. In all our experiments, we used labeled recall (LR), labeled precision (LP) and the F1 score (the harmonic mean of LR and LP) as our evaluation metrics.

4.1 Results of different classifiers

Table 2 shows the classification accuracy and parsing accuracy of the four different classifiers on the development set for sentences ≤40 words, with gold-standard POS tagging. The runtime (Time) of each model and the number of failed parses (Fail) are also shown.

Table 2: Comparison of different classifier models' parsing accuracies on the development set for sentences ≤40 words, with gold-standard POS
Model  | Classification accuracy | LR    | LP    | F1    | Fail | Time
SVM    | 94.3%                   | 86.9% | 87.9% | 87.4% | 0    | 3m 19s
Maxent | 92.6%                   | 84.1% | 85.2% | 84.6% | 5    | 0m 21s
DTree1 | 92.0%                   | 78.8% | 80.3% | 79.5% | 42   | 0m 12s
DTree2 | N/A                     | 81.6% | 83.6% | 82.6% | 30   | 0m 18s
MBL    | 90.6%                   | 74.3% | 75.2% | 74.7% | 2    | 16m 11s

For the DTree learner, we experimented with two different classification strategies. In our first approach, the classification is done in a single stage (DTree1). The learner is trained for a multi-class classification problem where the class labels include shift and all possible reduce actions. But this approach yielded a lot of parse failures (42 out of 350 sentences failed during parsing, and a partial parse tree was returned). These failures were mostly due to false shift actions in cases where the queue is empty. To alleviate this problem, we broke the classification process down into two stages (DTree2). A first-stage classifier makes a binary decision on whether the action is shift or reduce. If the output is reduce, a second-stage classifier decides which reduce action to take. Results showed that breaking the classification task down into two stages increased overall accuracy, and the number of failures was reduced to 30.
The SVM model achieved the highest classification accuracy and the best parsing results. It also successfully parsed all sentences. The Maxent model's classification error rate (7.4%) was 30% higher than the error rate of the SVM model (5.7%), and its F1 (84.6%) was 3.2% lower than the SVM model's F1 (87.4%). But the Maxent model was about 9.5 times faster than the SVM model. The DTree classifier achieved 81.6% LR and 83.6% LP. The MBL model did not perform well; although MBL and SVM differed in accuracy by only about 3 percent, the parsing results showed a difference of more than 10 percent.
One possible explanation for the poor performance of the MBL model is that all the features we used were binary features, and memory-based learner is known to work better with multivalue features than binary features in natural language learning tasks (van den Bosch and Zavrel, 2000). In terms of speed and accuracy trade-off, there is a 5.5% trade-off in F1 (relative to SVM’s F1) for a roughly 14 times speed-up between SVM and two-stage DTree. Maxent is more balanced in the sense that its accuracy was slightly lower (3.2%) than SVM, and was just about as fast as the two-stage DTree on the development set. The high speed of the DTree and Maxent models make them very attractive in applications where speed is more critical than accuracy. While the SVM model takes more CPU time, we show in Section 5 that when compared to existing parsers, SVM achieves about the same or higher accuracy but is at least twice as fast. Using gold-standard POS tagging, the best classifier model (SVM) achieved LR of 87.2% and LP of 88.3%, as shown in Table 4. Both measures surpass the previously known best results on parsing using gold-standard tagging. We also tested the SVM model using data automatically tagged by our POS tagger, and it achieved LR of 78.1% and LP of 81.1% for sentences ≤40 words, as shown in Table 3. 4.2 Classifier Ensemble Experiments Classifier ensemble by itself has been a fruitful research direction in machine learning in recent years. The basic idea in classifier ensemble is that combining multiple classifiers can often give significantly better results than any single classifier alone. We experimented with three different classifier ensemble strategies: classifier stacking, meta-classifier, and simple voting. Using the SVM classifier’s results as a baseline, we tested these approaches on the development set. In classifier stacking, we collect the outputs from Maxent, DTree and TiMBL, which are all trained on a separate dataset from the training set (section 400-650 of the Penn Chinese Treebank, smaller than the original training set). We use their classification output as features, in addition to the original feature set, to train a new SVM model on the original training set. We achieved LR of 90.3% and LP of 90.5% on the development set, a 3.4% and 2.6% improvement in LR and LP, respectively. When tested on the test set, we gained 1% improvement in F1 when gold-standard POS tagging is used. When tested with automatic tagging, we achieved a 0.5% improvement in F1. Using Bikel’s significant tester with 10000 times random shuffle, the p-value for LR and LP are 0.008 and 0.457, respectively. The increase in recall is statistically significant, and it shows classifier stacking can improve performance. On the other hand, we did not find metaclassification and simple voting very effective. In simple voting, we make the classifiers to vote in each step for every parse action. The F1 of simple voting method is downgraded by 5.9% relative to SVM model’s F1. By analyzing the interagreement among classifiers, we found that there were no cases where Maxent’s top output and DTree’s output were both correct and SVM’s output was wrong. Using the top output from Maxent and DTree directly does not seem to be complementary to SVM. In the meta-classifier approach, we first collect the output from each classifier trained on sec429 MODEL ≤40 words ≤100 words Unlimited LR LP F1 POS LR LP F1 POS LR LP F1 POS Bikel & Chiang 2000 76.8% 77.8% 77.3% 73.3% 74.6% 74.0% Levy & Manning 2003 79.2% 78.4% 78.8% Xiong et al. 
2005 78.7% 80.1% 79.4% Bikel’s Thesis 2004 78.0% 81.2% 79.6% 74.4% 78.5% 76.4% Chiang & Bikel 2002 78.8% 81.1% 79.9% 75.2% 78.0% 76.6% Jiang’s Thesis 2004 80.1% 82.0% 81.1% 92.4% Sun & Jurafsky 2004 85.5% 86.4% 85.9% 83.3% 82.2% 82.7% DTree model 71.8% 76.9% 74.4% 92.5% 69.2% 74.5% 71.9% 92.2% 68.7% 74.2% 71.5% 92.1% SVM model 78.1% 81.1% 79.6% 92.5% 75.5% 78.5% 77.0% 92.2% 75.0% 78.0% 76.5% 92.1% Stacked classifier model 79.2% 81.1% 80.1% 92.5% 76.7% 78.4% 77.5% 92.2% 76.2% 78.0% 77.1% 92.1% Table 3: Comparison with related work on the test set using automatically generated POS tion 1-210 (roughly 3/4 of the entire training set). Then specifically for Maxent, we collected the top output as well as its associated probability estimate. Then we used the outputs and probability estimate as features to train an SVM classifier that makes a decision on which classifier to pick. Meta-classifier results did not change at all from our baseline. In fact, the meta-classifier always picked SVM as its output. This agrees with our observation for the simple voting case. 5 Comparison with Related Work Bikel and Chiang (2000) constructed two parsers using a lexicalized PCFG model that is based on Collins’ model 2 (Collins, 1999), and a statistical Tree-adjoining Grammar(TAG) model. They used the same train/development/test split, and achieved LR/LP of 76.8%/77.8%. In Bikel’s thesis (2004), the same Collins emulation model was used, but with tweaked head-finding rules. Also a POS tagger was used for assigning tags for unseen words. The refined model achieved LR/LP of 78.0%/81.2%. Chiang and Bikel (2002) used inside-outside unsupervised learning algorithm to augment the rules for finding heads, and achieved an improved LR/LP of 78.8%/81.1%. Levy and Manning (2003) used a factored model that combines an unlexicalized PCFG model with a dependency model. They achieved LR/LP of 79.2%/78.4% on a different test/development split. Xiong et al. (2005) used a similar model to the BBN’s model in (Bikel and Chiang, 2000), and augmented the model by semantic categorical information and heuristic rules. They achieved LR/LP of 78.7%/80.1%. Hearne and Way (2004) used a Data-Oriented Parsing (DOP) approach that was optimized for top-down computation. They achieved F1 of 71.3 on a different test and training set. Jiang (2004) reported LR/LP of 80.1%/82.0% on sentences ≤40 words (results not available for sentences ≤100 words) by applying Collins’ parser to Chinese. In Sun and Jurafsky (2004)’s work on Chinese shallow semantic parsing, they also applied Collin’s parser to Chinese. They reported up-to-date the best parsing performance on Chinese Treebank. They achieved LR/LP of 85.5%/86.4% on sentences ≤ 40 words, and LR/LP of 83.3%/82.2% on sentences ≤100 words, far surpassing all other previously reported results. Luo (2003) and Fung et al. (2004) addressed the issue of Chinese text segmentation in their work by constructing characterbased parsers. Luo integrated segmentation, POS tagging and parsing into one maximum-entropy framework. He achieved a F1 score of 81.4% in parsing. But the score was achieved using 90% of the 250K-CTB (roughly 2.5 times bigger than our training set) for training and 10% for testing. Fung et al.(2004) also took the maximum-entropy modeling approach, but augmented by transformationbased learning. They used the standard training and testing split. When tested with gold-standard segmentation, they achieved a F1 score of 79.56%, but POS-tagged words were treated as constituents in their evaluation. 
In comparison with previous work, our parser’s accuracy is very competitive. Compared to Jiang’s work and Sun and Jurafsky’s work, the classifier ensemble model of our parser is lagging behind by 1% and 5.8% in F1, respectively. But compared to all other works, our classifier stacking model gave better or equal results for all three measures. In particular, the classifier ensemble model and SVM model of our parser achieved second and third highest LP, LR and F1 for sentences ≤100 words as shown in Table 3. (Sun and Jurafsky did not report results on sentences ≤100 words, but it is worth noting that out of all the test sentences, 430 only 2 sentences have length > 100). Jiang (2004) and Bikel (2004)3 also evaluated their parsers on the test set for sentences ≤40 words, using gold-standard POS tagged input. Our parser gives significantly better results as shown in Table 4. The implication of this result is twofold. On one hand, it shows that if POS tagging accuracy can be increased, our parser is likely to benefit more than the other two models; on the other hand, it also indicates that our deterministic model is less resilient to POS errors. Further detailed analysis is called for, to study the extent to which POS tagging errors affects the deterministic parsing model. Model LR LP F1 Bikel’s Thesis 2004 80.9% 84.5% 82.7% Jiang’s Thesis 2004 84.5% 88.0% 86.2% DTree model 80.5% 83.9% 82.2% Maxent model 81.4% 82.8% 82.1% SVM model 87.2% 88.3% 87.8% Stacked classifier model 88.3% 88.1% 88.2% Table 4: Comparison with related work on the test set for sentence ≤40 words, using gold-standard POS To measure efficiency, we ran two publicly available parsers (Levy and Manning’s PCFG parser (2003) and Bikel’s parser (2004)) on the standard test set and compared the runtime4. The runtime of these parsers are shown in minute:second format in Table 5. Our SVM model is more than 2 times faster than Levy and Manning’s parser, and more than 13 times faster than Bikel’s parser. Our DTree model is 40 times faster than Levy and Manning’s parser, and 270 times faster than Bikel’s parser. Another advantage of our parser is that it does not take as much memory as these other parsers do. In fact, none of the models except MBL takes more than 60 megabytes of memory at runtime. In comparison, Levy and Manning’s PCFG parser requires more than 400 mega-bytes of memory when parsing long sentences (70 words or longer). 6 Discussion and future work One unique attraction of this deterministic parsing framework is that advances in machine learning field can be directly applied to parsing, which 3Bikel’s parser used gold-standard POS tags for unseen words only. Also, the results are obtained from a parser trained on 250K-CTB, about 2.5 times bigger than CTB 1.0. 4All the experiments were conducted on a Pentium IV 2.4GHz machine with 2GB of RAM. Model runtime Bikel 54m 6s Levy & Manning 8m 12s Our DTree model 0m 14s Our Maxent model 0m 24s Our SVM model 3m 50s Table 5: Comparison of parsing speed opens up lots of possibilities for continuous improvements, both in terms of accuracy and efficiency. For example, in this paper we experimented with one method of simple voting. An alternative way of doing simple voting is to let the parsers vote on membership of constituents after each parser has produced its own parse tree (Henderson and Brill, 1999), instead of voting at each step during parsing. Our initial attempt to increase the accuracy of the DTree model by applying boosting techniques did not yield satisfactory results. 
In our experiment, we implemented the AdaBoost.M1 (Freund and Schapire, 1996) algorithm using resampling to vary the training set distribution. Results showed AdaBoost suffered severe overfitting problems and hurts accuracy greatly, even with a small number of samples. One possible reason for this is that our sample space is very unbalanced across the different classes. A few classes have lots of training examples while a large number of classes are rare, which could raise the chance of overfitting. In our experiments, SVM model gave better results than the Maxent model. But it is important to note that although the same set of features were used in both models, a degree 2 polynomial kernel was used in the SVM classifier while Maxent only has degree 1 features. In our future work, we will experiment with degree 2 features and L1 regularization in the Maxent model, which may give us closer performance to the SVM model with a much faster speed. 7 Conclusion In this paper, we presented a novel deterministic parser for Chinese constituent parsing. Using gold-standard POS tags, our best model (using stacked classifiers) runs in linear time and has labeled recall and precision of 88.3% and 88.1%, respectively, surpassing the best published results. And with a trade-off of 5-6% in accuracy, our DTree and Maxent parsers run at speeds 40-270 times faster than state-of-the-art parsers. Our re431 sults have shown that the deterministic parsing framework is a viable and effective approach to Chinese parsing. For future work, we will further improve the speed and accuracy of our models, and apply them to more Chinese and multilingual natural language applications that require high speed and accurate parsing. Acknowledgment This work was supported in part by ARDA’s AQUAINT Program. We thank Eric Nyberg for his help during the final preparation of this paper. References Daniel M. Bikel and David Chiang. 2000. Two statistical parsing models applied to the Chinese Treebank. In Proceedings of the Second Chinese Language Processing Workshop, ACL ’00. Daniel M. Bikel. 2004. On the Parameter Space of Generative Lexicalized Statistical Parsing Models. Ph.D. thesis, University of Pennsylvania. Yuchang Cheng, Masayuki Asahara, and Yuji Matsumoto. 2004. Deterministic dependency structure analyzer for Chinese. In Proceedings of IJCNLP ’04. Yuchang Cheng, Masayuki Asahara, and Yuji Matsumoto. 2005. Machine learning-based dependency analyzer for Chinese. In Proceedings of ICCC ’05. David Chiang and Daniel M. Bikel. 2002. Recovering latent information in treebanks. In Proceedings of COLING ’02. Michael John Collins. 1999. Head-driven Statistical Models for Natural Language Parsing. Ph.D. thesis, University of Pennsylvania. Walter Daelemans, Jakub Zavrel, Ko van der Sloot, and Antal van den Bosch. 2004. Timbl version 5.1 reference guide. Technical report, Tilburg University. Yoav Freund and Robert E. Schapire. 1996. Experiments with a new boosting algorithm. In Proceedings of ICML ’96. Pascale Fung, Grace Ngai, Yongsheng Yang, and Benfeng Chen. 2004. A maximum-entropy Chinese parser augmented by transformation-based learning. ACM Transactions on Asian Language Information Processing, 3(2):159–168. Mary Hearne and Andy Way. 2004. Data-oriented parsing and the Penn Chinese Treebank. In Proceedings of IJCNLP ’04. John Henderson and Eric Brill. 1999. Exploiting diversity in natural language processing: Combining parsers. In Proceedings of EMNLP ’99. Zhengping Jiang. 2004. Statistical Chinese parsing. 
Honours thesis, National University of Singapore. Taku Kudo and Yuji Matsumoto. 2000. Use of support vector learning for chunk identification. In Proceedings of CoNLL and LLL ’00. Roger Levy and Christopher D. Manning. 2003. Is it harder to parse Chinese, or the Chinese Treebank? In Proceedings of ACL ’03. Xiaoqiang Luo. 2003. A maximum entropy Chinese character-based parser. In Proceedings of EMNLP ’03. David M. Magerman. 1994. Natural Language Parsing as Statistical Pattern Recognition. Ph.D. thesis, Stanford University. Hwee Tou Ng and Jin Kiat Low. 2004. Chinese partof-speech tagging: One-at-a-time or all-at-once? word-based or character-based? In Proceedings of EMNLP ’04. Joakim Nivre and Mario Scholz. 2004. Deterministic dependency parsing of English text. In Proceedings of COLING ’04. Adwait Ratnaparkhi. 1999. Learning to parse natural language with maximum entropy models. Machine Learning, 34(1-3):151–175. Kenji Sagae and Alon Lavie. 2005. A classifier-based parser with linear run-time complexity. In Proceedings of the IWPT ’05. Honglin Sun and Daniel Jurafsky. 2003. The effect of rhythm on structural disambiguation in Chinese. In Proceedings of SIGHAN Workshop ’03. Honglin Sun and Daniel Jurafsky. 2004. Shallow semantic parsing of Chinese. In Proceedings of the HLT/NAACL ’04. Antal van den Bosch and Jakub Zavrel. 2000. Unpacking multi-valued symbolic features and classes in memory-based language learning. In Proceedings of ICML ’00. Deyi Xiong, Shuanglong Li, Qun Liu, Shouxun Lin, and Yueliang Qian. 2005. Parsing the Penn Chinese Treebank with semantic knowledge. In Proceedings of IJCNLP ’05. Nianwen Xue, Fei Xia, Fu-Dong Chiou, and Martha Palmer. 2005. The Penn Chinese Treebank: Phrase structure annotation of a large corpus. Natural Language Engineering, 11(2):207–238. Hiroyasu Yamada and Yuji Matsumoto. 2003. Statistical dependency analysis with support vector machines. In Proceedings of IWPT ’03. Le Zhang, 2004. Maximum Entropy Modeling Toolkit for Python and C++. Reference Manual. 432
Proceedings of the 21st International Conference on Computational Linguistics and 44th Annual Meeting of the ACL, pages 433–440, Sydney, July 2006. c⃝2006 Association for Computational Linguistics Learning Accurate, Compact, and Interpretable Tree Annotation Slav Petrov Leon Barrett Romain Thibaux Dan Klein Computer Science Division, EECS Department University of California at Berkeley Berkeley, CA 94720 {petrov, lbarrett, thibaux, klein}@eecs.berkeley.edu Abstract We present an automatic approach to tree annotation in which basic nonterminal symbols are alternately split and merged to maximize the likelihood of a training treebank. Starting with a simple Xbar grammar, we learn a new grammar whose nonterminals are subsymbols of the original nonterminals. In contrast with previous work, we are able to split various terminals to different degrees, as appropriate to the actual complexity in the data. Our grammars automatically learn the kinds of linguistic distinctions exhibited in previous work on manual tree annotation. On the other hand, our grammars are much more compact and substantially more accurate than previous work on automatic annotation. Despite its simplicity, our best grammar achieves an F1 of 90.2% on the Penn Treebank, higher than fully lexicalized systems. 1 Introduction Probabilistic context-free grammars (PCFGs) underlie most high-performance parsers in one way or another (Collins, 1999; Charniak, 2000; Charniak and Johnson, 2005). However, as demonstrated in Charniak (1996) and Klein and Manning (2003), a PCFG which simply takes the empirical rules and probabilities off of a treebank does not perform well. This naive grammar is a poor one because its context-freedom assumptions are too strong in some places (e.g. it assumes that subject and object NPs share the same distribution) and too weak in others (e.g. it assumes that long rewrites are not decomposable into smaller steps). Therefore, a variety of techniques have been developed to both enrich and generalize the naive grammar, ranging from simple tree annotation and symbol splitting (Johnson, 1998; Klein and Manning, 2003) to full lexicalization and intricate smoothing (Collins, 1999; Charniak, 2000). In this paper, we investigate the learning of a grammar consistent with a treebank at the level of evaluation symbols (such as NP, VP, etc.) but split based on the likelihood of the training trees. Klein and Manning (2003) addressed this question from a linguistic perspective, starting with a Markov grammar and manually splitting symbols in response to observed linguistic trends in the data. For example, the symbol NP might be split into the subsymbol NPˆS in subject position and the subsymbol NPˆVP in object position. Recently, Matsuzaki et al. (2005) and also Prescher (2005) exhibited an automatic approach in which each symbol is split into a fixed number of subsymbols. For example, NP would be split into NP-1 through NP-8. Their exciting result was that, while grammars quickly grew too large to be managed, a 16-subsymbol induced grammar reached the parsing performance of Klein and Manning (2003)’s manual grammar. Other work has also investigated aspects of automatic grammar refinement; for example, Chiang and Bikel (2002) learn annotations such as head rules in a constrained declarative language for tree-adjoining grammars. We present a method that combines the strengths of both manual and automatic approaches while addressing some of their common shortcomings. Like Matsuzaki et al. 
(2005) and Prescher (2005), we induce splits in a fully automatic fashion. However, we use a more sophisticated split-and-merge approach that allocates subsymbols adaptively where they are most effective, like a linguist would. The grammars recover patterns like those discussed in Klein and Manning (2003), heavily articulating complex and frequent categories like NP and VP while barely splitting rare or simple ones (see Section 3 for an empirical analysis). Empirically, hierarchical splitting increases the accuracy and lowers the variance of the learned grammars. Another contribution is that, unlike previous work, we investigate smoothed models, allowing us to split grammars more heavily before running into the oversplitting effect discussed in Klein and Manning (2003), where data fragmentation outweighs increased expressivity.
Our method is capable of learning grammars of substantially smaller size and higher accuracy than previous grammar refinement work, starting from a simpler initial grammar. For example, even beginning with an X-bar grammar (see Section 1.1) with 98 symbols, our best grammar, using 1043 symbols, achieves a test set F1 of 90.2%. This is a 27% reduction in error and a significant reduction in size over the most accurate grammar in Matsuzaki et al. (2005). (The reduction in size is a 97.5% reduction in the number of symbols; Matsuzaki et al. (2005) do not report a number of rules, but our small number of symbols and our hierarchical training, which encourages sparsity, suggest a large reduction.) Our grammar's accuracy was higher than fully lexicalized systems, including the maximum-entropy inspired parser of Charniak and Johnson (2005).

1.1 Experimental Setup

We ran our experiments on the Wall Street Journal (WSJ) portion of the Penn Treebank using the standard setup: we trained on sections 2 to 21, and we used section 1 as a validation set for tuning model hyperparameters. Section 22 was used as a development set for intermediate results. All of section 23 was reserved for the final test. We used the EVALB parseval reference implementation, available from Sekine and Collins (1997), for scoring. All reported development set results are averages over four runs. For the final test we selected the grammar that performed best on the development set.
Our experiments are based on a completely unannotated X-bar style grammar, obtained directly from the Penn Treebank by the binarization procedure shown in Figure 1. For each local tree rooted at an evaluation nonterminal X, we introduce a cascade of new nodes labeled X so that each has two children. Rather than experiment with head-outward binarization as in Klein and Manning (2003), we simply used a left-branching binarization; Matsuzaki et al. (2005) contains a comparison showing that the differences between binarizations are small.

Figure 1: (a) The original tree; (b) the X-bar tree (the example is the fragment "Not this year.").

2 Learning

To obtain a grammar from the training trees, we want to learn a set of rule probabilities β on latent annotations that maximize the likelihood of the training trees, despite the fact that the original trees lack the latent annotations. The Expectation-Maximization (EM) algorithm allows us to do exactly that. (Other techniques are also possible; Henderson (2004) uses neural networks to induce latent left-corner parser states.) Given a sentence w and its unannotated tree T, consider a nonterminal A spanning (r, t) and its children B and C spanning (r, s) and (s, t). Let A_x be a subsymbol of A, B_y of B, and C_z of C.
Then the inside and outside probabilities, defined as

\[ P_{\mathrm{IN}}(r,t,A_x) \;\stackrel{\mathrm{def}}{=}\; P(w_{r:t} \mid A_x) \qquad\text{and}\qquad P_{\mathrm{OUT}}(r,t,A_x) \;\stackrel{\mathrm{def}}{=}\; P(w_{1:r}\, A_x\, w_{t:n}), \]

can be computed recursively:

\[
\begin{aligned}
P_{\mathrm{IN}}(r,t,A_x)  &= \sum_{y,z} \beta(A_x \to B_y C_z)\, P_{\mathrm{IN}}(r,s,B_y)\, P_{\mathrm{IN}}(s,t,C_z) \\
P_{\mathrm{OUT}}(r,s,B_y) &= \sum_{x,z} \beta(A_x \to B_y C_z)\, P_{\mathrm{OUT}}(r,t,A_x)\, P_{\mathrm{IN}}(s,t,C_z) \\
P_{\mathrm{OUT}}(s,t,C_z) &= \sum_{x,y} \beta(A_x \to B_y C_z)\, P_{\mathrm{OUT}}(r,t,A_x)\, P_{\mathrm{IN}}(r,s,B_y)
\end{aligned}
\]

Although we show only the binary component here, of course there are both binary and unary productions that are included. In the Expectation step, one computes the posterior probability of each annotated rule and position in each training set tree T:

\[ P\big((r,s,t,A_x \to B_y C_z) \mid w, T\big) \;\propto\; P_{\mathrm{OUT}}(r,t,A_x)\, \beta(A_x \to B_y C_z)\, P_{\mathrm{IN}}(r,s,B_y)\, P_{\mathrm{IN}}(s,t,C_z) \tag{1} \]

In the Maximization step, one uses the above probabilities as weighted observations to update the rule probabilities:

\[ \beta(A_x \to B_y C_z) := \frac{\#\{A_x \to B_y C_z\}}{\sum_{y',z'} \#\{A_x \to B_{y'} C_{z'}\}} \]

Note that, because there is no uncertainty about the location of the brackets, this formulation of the inside-outside algorithm is linear in the length of the sentence rather than cubic (Pereira and Schabes, 1992).
For our lexicon, we used a simple yet robust method for dealing with unknown and rare words by extracting a small number of features from the word and then computing approximate tagging probabilities: a word is classified into one of 50 unknown word categories based on the presence of features such as capital letters, digits, and certain suffixes, and its tagging probability is given by $P'(\mathrm{word}\mid\mathrm{tag}) = k\,\hat{P}(\mathrm{class}\mid\mathrm{tag})$, where k is a constant representing P(word | class) and can simply be dropped. Rare words are modeled using a combination of their known and unknown distributions.
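The E-step just described can be made concrete with the small sketch below. It is an illustrative reading of the recursions above, not the authors' code: the `Node`, `beta`, and `lex` data structures are assumptions, unary productions are omitted, and the M-step is only indicated afterwards.

```python
import numpy as np

class Node:
    """A node of an observed (binarized) training tree: `label` is the treebank
    symbol; `children` holds two Nodes, or a single terminal word string."""
    def __init__(self, label, children):
        self.label, self.children = label, children
        self.inside = self.outside = None

def e_step(root, beta, lex, counts):
    """One tree's contribution to the E-step.

    beta[(A, B, C)] is an array of shape (|A|, |B|, |C|) holding beta(A_x -> B_y C_z);
    lex[(A, word)] is a vector of emission probabilities for the subsymbols of A.
    Expected rule counts are accumulated into `counts` (same keys and shapes as beta).
    """
    def inside(n):
        if isinstance(n.children[0], str):                  # preterminal
            n.inside = lex[(n.label, n.children[0])]
        else:
            l, r = n.children
            inside(l); inside(r)
            b = beta[(n.label, l.label, r.label)]
            n.inside = np.einsum("abc,b,c->a", b, l.inside, r.inside)

    def outside(n):
        if isinstance(n.children[0], str):
            return
        l, r = n.children
        b = beta[(n.label, l.label, r.label)]
        l.outside = np.einsum("abc,a,c->b", b, n.outside, r.inside)
        r.outside = np.einsum("abc,a,b->c", b, n.outside, l.inside)
        outside(l); outside(r)

    inside(root)
    root.outside = np.ones_like(root.inside)                # assumes an unsplit ROOT
    outside(root)
    z = float(root.inside @ root.outside)                   # likelihood of the tree

    def accumulate(n):
        if isinstance(n.children[0], str):
            return
        l, r = n.children
        key = (n.label, l.label, r.label)
        counts[key] += np.einsum("a,abc,b,c->abc",
                                 n.outside, beta[key], l.inside, r.inside) / z
        accumulate(l); accumulate(r)
    accumulate(root)
```

After accumulating counts over all training trees, the M-step renormalizes, for each parent subsymbol A_x, the counts of all rules A_x → B_y C_z, which is exactly the update for β shown above.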
2.1 Initialization

EM is only guaranteed to find a local maximum of the likelihood, and, indeed, in practice it often gets stuck in a suboptimal configuration. If the search space is very large, even restarting may not be sufficient to alleviate this problem. One workaround is to manually specify some of the annotations. For instance, Matsuzaki et al. (2005) start by annotating their grammar with the identity of the parent and sibling, which are observed (i.e. not latent), before adding latent annotations. (In other words, in the terminology of Klein and Manning (2003), they begin with a (vertical order = 2, horizontal order = 1) baseline grammar.) If these manual annotations are good, they reduce the search space for EM by constraining it to a smaller region. On the other hand, this pre-splitting defeats some of the purpose of automatically learning latent annotations, leaving to the user the task of guessing what a good starting annotation might be.
We take a different, fully automated approach. We start with a completely unannotated X-bar style grammar as described in Section 1.1. Since we will evaluate our grammar on its ability to recover the Penn Treebank nonterminals, we must include them in our grammar. Therefore, this initialization is the absolute minimum starting grammar that includes the evaluation nonterminals (and maintains separate grammar symbols for each of them). (If our purpose was only to model language, as measured for instance by perplexity on new text, it could make sense to erase even the labels of the Penn Treebank to let EM find better labels by itself, giving an experiment similar to that of Pereira and Schabes (1992).) It is a very compact grammar: 98 symbols (45 part-of-speech tags, 27 phrasal categories, and the 26 intermediate symbols which were added during binarization), 236 unary rules, and 3840 binary rules. However, it also has a very low parsing performance: 65.8/59.8 LP/LR on the development set.

2.2 Splitting

Beginning with this baseline grammar, we repeatedly split and re-train the grammar. In each iteration we initialize EM with the results of the smaller grammar, splitting every previous annotation symbol in two and adding a small amount of randomness (1%) to break the symmetry. The results are shown in Figure 3. Hierarchical splitting leads to better parameter estimates than directly estimating a grammar with 2^k subsymbols per symbol. While the two procedures are identical for only two subsymbols (F1: 76.1%), the hierarchical training performs better for four subsymbols (83.7% vs. 83.2%). This advantage grows as the number of subsymbols increases (88.4% vs. 87.3% for 16 subsymbols). This trend is to be expected, as the possible interactions between the subsymbols grow as their number grows. As an example of how staged training proceeds, Figure 2 shows the evolution of the subsymbols of the determiner (DT) tag, which first splits demonstratives from determiners, then splits quantificational elements from demonstratives along one branch and definites from indefinites along the other.

Figure 2: Evolution of the DT tag during hierarchical splitting and merging. Shown are the top three words for each subcategory and their respective probability.

Because EM is a local search method, it is likely to converge to different local maxima for different runs. In our case, the variance is higher for models with few subcategories; because not all dependencies can be expressed with the limited number of subcategories, the results vary depending on which one EM selects first. As the grammar size increases, the important dependencies can be modeled, so the variance decreases.
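A minimal sketch of this split step, reusing the `beta` dictionary assumed in the E-step sketch above (so again an illustration rather than the authors' implementation): every subsymbol is duplicated along the parent and child axes, and a little multiplicative noise breaks the symmetry before EM is rerun.

```python
import numpy as np

def split_grammar(beta, noise=0.01, rng=None):
    """Split every subsymbol in two (Section 2.2).

    Each rule array doubles along the parent axis and both child axes; the mass of
    each (B_y, C_z) pair is shared among its four copies, and ~1% noise is added so
    that EM can pull the copies apart.  Exact renormalization over all rules with
    the same parent subsymbol is left to the next M-step.
    """
    rng = rng or np.random.default_rng(0)
    new_beta = {}
    for key, b in beta.items():
        b2 = np.repeat(np.repeat(np.repeat(b, 2, axis=0), 2, axis=1), 2, axis=2)
        new_beta[key] = b2 / 4.0 * (1.0 + noise * (rng.random(b2.shape) - 0.5))
    return new_beta
```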
2.3 Merging

It is clear from all previous work that creating more latent annotations can increase accuracy. On the other hand, oversplitting the grammar can be a serious problem, as detailed in Klein and Manning (2003). Adding subsymbols divides grammar statistics into many bins, resulting in a tighter fit to the training data. At the same time, each bin gives a less robust estimate of the grammar probabilities, leading to overfitting. Therefore, it would be to our advantage to split the latent annotations only where needed, rather than splitting them all as in Matsuzaki et al. (2005). In addition, if all symbols are split equally often, one quickly (4 split cycles) reaches the limits of what is computationally feasible in terms of training time and memory usage.
Consider the comma POS tag. We would like to see only one sort of this tag because, despite its frequency, it always produces the terminal comma (barring a few annotation errors in the treebank). On the other hand, we would expect to find an advantage in distinguishing between various verbal categories and NP types. Additionally, splitting symbols like the comma is not only unnecessary, but potentially harmful, since it needlessly fragments observations of other symbols' behavior.
It should be noted that simple frequency statistics are not sufficient for determining how often to split each symbol. Consider the closed part-of-speech classes (e.g. DT, CC, IN) or the nonterminal ADJP. These symbols are very common, and certainly do contain subcategories, but there is little to be gained from exhaustively splitting them before even beginning to model the rarer symbols that describe the complex inner correlations inside verb phrases.
Our solution is to use a split-and-merge approach broadly reminiscent of ISODATA, a classic clustering procedure (Ball and Hall, 1967). To prevent oversplitting, we could measure the utility of splitting each latent annotation individually and then split the best ones first. However, not only is this impractical, requiring an entire training phase for each new split, but it assumes the contributions of multiple splits are independent. In fact, extra subsymbols may need to be added to several nonterminals before they can cooperate to pass information along the parse tree. Therefore, we go in the opposite direction; that is, we split every symbol in two, train, and then measure for each annotation the loss in likelihood incurred when removing it. If this loss is small, the new annotation does not carry enough useful information and can be removed. What is more, contrary to the gain in likelihood for splitting, the loss in likelihood for merging can be efficiently approximated. (The idea of merging complex hypotheses to encourage generalization is also examined in Stolcke and Omohundro (1994), who used a chunking approach to propose new productions in fully unsupervised grammar induction; they also found it necessary to make local choices to guide their likelihood search.)
Let T be a training tree generating a sentence w. Consider a node n of T spanning (r, t) with the label A; that is, the subtree rooted at n generates w_{r:t} and has the label A. In the latent model, its label A is split up into several latent labels, A_x. The likelihood of the data can be recovered from the inside and outside probabilities at n:

\[ P(w, T) = \sum_x P_{\mathrm{IN}}(r,t,A_x)\, P_{\mathrm{OUT}}(r,t,A_x) \tag{2} \]

Consider merging, at n only, two annotations A_1 and A_2. Since A now combines the statistics of A_1 and A_2, its production probabilities are the sum of those of A_1 and A_2, weighted by their relative frequencies p_1 and p_2 in the training data. Therefore the inside score of A is:

\[ P_{\mathrm{IN}}(r,t,A) = p_1\, P_{\mathrm{IN}}(r,t,A_1) + p_2\, P_{\mathrm{IN}}(r,t,A_2) \]

Since A can be produced as A_1 or A_2 by its parents, its outside score is:

\[ P_{\mathrm{OUT}}(r,t,A) = P_{\mathrm{OUT}}(r,t,A_1) + P_{\mathrm{OUT}}(r,t,A_2) \]

Replacing these quantities in (2) gives us the likelihood P_n(w, T) where these two annotations and their corresponding rules have been merged, around only node n. We approximate the overall loss in data likelihood due to merging A_1 and A_2 everywhere in all sentences w_i by the product of this loss for each local change:

\[ \Delta_{\mathrm{ANNOTATION}}(A_1, A_2) = \prod_i \prod_{n \in T_i} \frac{P_n(w_i, T_i)}{P(w_i, T_i)} \]

This expression is an approximation because it neglects interactions between instances of a symbol at multiple places in the same tree. These instances, however, are often far apart and are likely to interact only weakly, and this simplification avoids the prohibitive cost of running an inference algorithm for each tree and annotation.
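Continuing the earlier sketches (with the same caveat that the data structures are assumptions, not the paper's code), the approximate loss for one candidate merge can be read directly off the stored inside and outside vectors:

```python
import numpy as np

def merge_loss(nodes_labelled_A, x1, x2, p1, p2):
    """Approximate log-likelihood loss for merging subsymbols x1 and x2 of a
    symbol A, following Delta_ANNOTATION above.

    `nodes_labelled_A` iterates over every tree node labelled A, each carrying the
    `inside` and `outside` vectors filled in by the E-step sketch; p1 and p2 are
    the relative frequencies of the two subsymbols in the training data.
    """
    loss = 0.0
    for n in nodes_labelled_A:
        p_node = float(n.inside @ n.outside)                # Eq. (2) at this node
        merged_in = p1 * n.inside[x1] + p2 * n.inside[x2]
        merged_out = n.outside[x1] + n.outside[x2]
        p_merged = (p_node
                    - n.inside[x1] * n.outside[x1]
                    - n.inside[x2] * n.outside[x2]
                    + merged_in * merged_out)
        loss += np.log(p_node) - np.log(p_merged)
    return loss    # a small value means the x1/x2 distinction is barely used
```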
We refer to the operation of splitting annotations and re-merging some of them based on likelihood loss as a split-merge (SM) cycle. SM cycles allow us to progressively increase the complexity of our grammar, giving priority to the most useful extensions.
In our experiments, merging was quite valuable. Depending on how many splits were reversed, we could reduce the grammar size at the cost of little or no loss of performance, or even a gain. We found that merging 50% of the newly split symbols dramatically reduced the grammar size after each splitting round, so that after 6 SM cycles, the grammar was only 17% of the size it would otherwise have been (1043 vs. 6273 subcategories), while at the same time there was no loss in accuracy (Figure 3). Actually, the accuracy even increases, by 1.1% at 5 SM cycles. The numbers of splits learned turned out not to be a direct function of symbol frequency; the numbers of symbols for both lexical and nonlexical tags after 4 SM cycles are given in Table 2. Furthermore, merging makes large amounts of splitting possible. It allows us to go from 4 splits, equivalent to the 2^4 = 16 substates of Matsuzaki et al. (2005), to 6 SM iterations, which take a few days to run on the Penn Treebank.

2.4 Smoothing

Splitting nonterminals leads to a better fit to the data by allowing each annotation to specialize in representing only a fraction of the data. The smaller this fraction, the higher the risk of overfitting. Merging, by allowing only the most beneficial annotations, helps mitigate this risk, but it is not the only way. We can further minimize overfitting by forcing the production probabilities from annotations of the same nonterminal to be similar. For example, a noun phrase in subject position certainly has a distinct distribution, but it may benefit from being smoothed with counts from all other noun phrases. Smoothing the productions of each subsymbol by shrinking them towards their common base symbol gives us a more reliable estimate, allowing them to share statistical strength. We perform smoothing in a linear way. The estimated probability of a production p_x = P(A_x → B_y C_z) is interpolated with the average over all subsymbols of A:

\[ p'_x = (1-\alpha)\, p_x + \alpha\, \bar{p} \qquad\text{where}\qquad \bar{p} = \frac{1}{n} \sum_x p_x \]

Here, α is a small constant: we found 0.01 to be a good value, but the actual quantity was surprisingly unimportant. Because smoothing is most necessary when production statistics are least reliable, we expect smoothing to help more with larger numbers of subsymbols. This is exactly what we observe in Figure 3, where smoothing initially hurts (subsymbols are quite distinct and do not need their estimates pooled) but eventually helps (as symbols have finer distinctions in behavior and smaller data support).
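In the array representation assumed in the earlier sketches, this interpolation is a one-liner (again an illustration, with α and the dictionary layout as stated assumptions):

```python
import numpy as np

def smooth_toward_base(beta, alpha=0.01):
    """Section 2.4: interpolate each parent subsymbol's rule probabilities with the
    average over all subsymbols of the same base symbol,
    p'_x = (1 - alpha) * p_x + alpha * mean_x(p_x)."""
    return {key: (1.0 - alpha) * b + alpha * b.mean(axis=0, keepdims=True)
            for key, b in beta.items()}
```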
2.5 Parsing

When parsing new sentences with an annotated grammar, returning the most likely (unannotated) tree is intractable: to obtain the probability of an unannotated tree, one must sum over combinatorially many annotation trees (derivations) for each tree (Sima'an, 1992). Matsuzaki et al. (2005) discuss two approximations. The first is settling for the most probable derivation rather than the most probable parse, i.e. returning the single most likely (Viterbi) annotated tree (derivation). This approximation is justified if the sum is dominated by one particular annotated tree. The second approximation that Matsuzaki et al. (2005) present is the Viterbi parse under a new sentence-specific PCFG, whose rule probabilities are given as the solution of a variational approximation of the original grammar. However, their rule probabilities turn out to be the posterior probability, given the sentence, of each rule being used at each position in the tree. Their algorithm is therefore the labelled recall algorithm of Goodman (1996) but applied to rules. That is, it returns the tree whose expected number of correct rules is maximal. Thus, assuming one is interested in a per-position score like F1 (which is its own debate), this method of parsing is actually more appropriate than finding the most likely parse, not simply a cheap approximation of it, and it need not be derived by a variational argument. We refer to this method of parsing as the max-rule parser. Since this method is not a contribution of this paper, we refer the reader to the fuller presentations in Goodman (1996) and Matsuzaki et al. (2005). Note that, contrary to the original labelled recall algorithm, which maximizes the number of correct symbols, this tree only contains rules allowed by the grammar. As a result, the percentage of complete matches with the max-rule parser is typically higher than with the Viterbi parser (37.5% vs. 35.8% for our best grammar).
These posterior rule probabilities are still given by (1), but, since the structure of the tree is no longer known, we must sum over it when computing the inside and outside probabilities:

\[
\begin{aligned}
P_{\mathrm{IN}}(r,t,A_x)  &= \sum_{B,C,s}\, \sum_{y,z} \beta(A_x \to B_y C_z)\, P_{\mathrm{IN}}(r,s,B_y)\, P_{\mathrm{IN}}(s,t,C_z) \\
P_{\mathrm{OUT}}(r,s,B_y) &= \sum_{A,C,t}\, \sum_{x,z} \beta(A_x \to B_y C_z)\, P_{\mathrm{OUT}}(r,t,A_x)\, P_{\mathrm{IN}}(s,t,C_z) \\
P_{\mathrm{OUT}}(s,t,C_z) &= \sum_{A,B,r}\, \sum_{x,y} \beta(A_x \to B_y C_z)\, P_{\mathrm{OUT}}(r,t,A_x)\, P_{\mathrm{IN}}(r,s,B_y)
\end{aligned}
\]

For efficiency reasons, we use a coarse-to-fine pruning scheme like that of Caraballo and Charniak (1998). For a given sentence, we first run the inside-outside algorithm using the baseline (unannotated) grammar, producing a packed forest representation of the posterior symbol probabilities for each span. For example, one span might have a posterior probability of 0.8 for the symbol NP, but e^{-10} for PP. Then, we parse with the larger annotated grammar, but, at each span, we prune away any symbols whose posterior probability under the baseline grammar falls below a certain threshold (e^{-8} in our experiments). Even though our baseline grammar has a very low accuracy, we found that this pruning barely impacts the performance of our better grammars, while significantly reducing the computational cost. For a grammar with 479 subcategories (4 SM cycles), lowering the threshold to e^{-15} led to an F1 improvement of 0.13% (89.03 vs. 89.16) on the development set but increased the parsing time by a factor of 16.

Figure 3 (F1 against the total number of grammar symbols, for flat training, splitting without merging, 50% merging, and 50% merging with smoothing): Hierarchical training leads to better parameter estimates. Merging reduces the grammar size significantly, while preserving the accuracy and enabling us to do more SM cycles. Parameter smoothing leads to even better accuracy for grammars with high complexity.

3 Analysis

So far, we have presented a split-merge method for learning to iteratively subcategorize basic symbols like NP and VP into automatically induced subsymbols (subcategories in the original sense of Chomsky (1965)).
This approach gives parsing accuracies of up to 90.7% on the development set, substantially higher than previous symbol-splitting approaches, while starting from an extremely simple base grammar. However, in general, any automatic induction system is in danger of being entirely uninterpretable. In this section, we examine the learned grammars, discussing what is learned. We focus particularly on connections with the linguistically motivated annotations of Klein and Manning (2003), which we do generally recover. Inspecting a large grammar by hand is difficult, but fortunately, our baseline grammar has less than 100 nonterminal symbols, and even our most complicated grammar has only 1043 total (sub)symbols. It is there437 VBZ VBZ-0 gives sells takes VBZ-1 comes goes works VBZ-2 includes owns is VBZ-3 puts provides takes VBZ-4 says adds Says VBZ-5 believes means thinks VBZ-6 expects makes calls VBZ-7 plans expects wants VBZ-8 is ’s gets VBZ-9 ’s is remains VBZ-10 has ’s is VBZ-11 does Is Does NNP NNP-0 Jr. Goldman INC. NNP-1 Bush Noriega Peters NNP-2 J. E. L. NNP-3 York Francisco Street NNP-4 Inc Exchange Co NNP-5 Inc. Corp. Co. NNP-6 Stock Exchange York NNP-7 Corp. Inc. Group NNP-8 Congress Japan IBM NNP-9 Friday September August NNP-10 Shearson D. Ford NNP-11 U.S. Treasury Senate NNP-12 John Robert James NNP-13 Mr. Ms. President NNP-14 Oct. Nov. Sept. NNP-15 New San Wall JJS JJS-0 largest latest biggest JJS-1 least best worst JJS-2 most Most least DT DT-0 the The a DT-1 A An Another DT-2 The No This DT-3 The Some These DT-4 all those some DT-5 some these both DT-6 That This each DT-7 this that each DT-8 the The a DT-9 no any some DT-10 an a the DT-11 a this the CD CD-0 1 50 100 CD-1 8.50 15 1.2 CD-2 8 10 20 CD-3 1 30 31 CD-4 1989 1990 1988 CD-5 1988 1987 1990 CD-6 two three five CD-7 one One Three CD-8 12 34 14 CD-9 78 58 34 CD-10 one two three CD-11 million billion trillion PRP PRP-0 It He I PRP-1 it he they PRP-2 it them him RBR RBR-0 further lower higher RBR-1 more less More RBR-2 earlier Earlier later IN IN-0 In With After IN-1 In For At IN-2 in for on IN-3 of for on IN-4 from on with IN-5 at for by IN-6 by in with IN-7 for with on IN-8 If While As IN-9 because if while IN-10 whether if That IN-11 that like whether IN-12 about over between IN-13 as de Up IN-14 than ago until IN-15 out up down RB RB-0 recently previously still RB-1 here back now RB-2 very highly relatively RB-3 so too as RB-4 also now still RB-5 however Now However RB-6 much far enough RB-7 even well then RB-8 as about nearly RB-9 only just almost RB-10 ago earlier later RB-11 rather instead because RB-12 back close ahead RB-13 up down off RB-14 not Not maybe RB-15 n’t not also Table 1: The most frequent three words in the subcategories of several part-of-speech tags. fore relatively straightforward to review the broad behavior of a grammar. In this section, we review a randomly-selected grammar after 4 SM cycles that produced an F1 score on the development set of 89.11. We feel it is reasonable to present only a single grammar because all the grammars are very similar. For example, after 4 SM cycles, the F1 scores of the 4 trained grammars have a variance of only 0.024, which is tiny compared to the deviation of 0.43 obtained by Matsuzaki et al. (2005)). Furthermore, these grammars allocate splits to nonterminals with a variance of only 0.32, so they agree to within a single latent state. 
3.1 Lexical Splits

One of the original motivations for lexicalization of parsers is the fact that part-of-speech (POS) tags are usually far too general to encapsulate a word's syntactic behavior. In the limit, each word may well have its own unique syntactic behavior, especially when, as in modern parsers, semantic selectional preferences are lumped in with traditional syntactic trends. However, in practice, and given limited data, the relationship between specific words and their syntactic contexts may be best modeled at a level more fine than POS tag but less fine than lexical identity. In our model, POS tags are split just like any other grammar symbol: the subsymbols for several tags are shown in Table 1, along with their most frequent members. In most cases, the categories are recognizable as either classic subcategories or an interpretable division of some other kind.

Nominal categories are the most heavily split (see Table 2), and have the splits which are most semantic in nature (though not without syntactic correlations). For example, plural common nouns (NNS) divide into the maximum number of categories (16). One category consists primarily of dates, whose typical parent is an NP subsymbol whose typical parent is a root S, essentially modeling the temporal noun annotation discussed in Klein and Manning (2003). Another category specializes in capitalized words, preferring as a parent an NP with an S parent (i.e. subject position). A third category specializes in monetary units, and so on. These kinds of syntactico-semantic categories are typical, and, given distributional clustering results like those of Schuetze (1998), unsurprising. The singular nouns are broadly similar, if slightly more homogeneous, being dominated by categories for stocks and trading. The proper noun category (NNP, shown) also splits into the maximum 16 categories, including months, countries, variants of Co. and Inc., first names, last names, initials, and so on.

Verbal categories are also heavily split. Verbal subcategories sometimes reflect syntactic selectional preferences, sometimes reflect semantic selectional preferences, and sometimes reflect other aspects of verbal syntax. For example, the present tense third person verb subsymbols (VBZ) are shown. The auxiliaries get three clear categories: do, have, and be (this pattern repeats in other tenses), as well as a fourth category for the ambiguous 's. Verbs of communication (says) and propositional attitudes (believes) that tend to take inflected sentential complements dominate two classes, while control verbs (wants) fill out another.

Table 2: Number of latent annotations determined by our split-merge procedure after 6 SM cycles.
NNP 62   CC 7     WP$ 2    NP 37    CONJP 2
JJ 58    JJR 5    WDT 2    VP 32    FRAG 2
NNS 57   JJS 5    -RRB- 2  PP 28    NAC 2
NN 56    : 5      '' 1     ADVP 22  UCP 2
VBN 49   PRP 4    FW 1     S 21     WHADVP 2
RB 47    PRP$ 4   RBS 1    ADJP 19  INTJ 1
VBG 40   MD 3     TO 1     SBAR 15  SBARQ 1
VB 37    RBR 3    $ 1      QP 9     RRC 1
VBD 36   WP 2     UH 1     WHNP 5   WHADJP 1
CD 32    POS 2    , 1      PRN 4    X 1
IN 27    PDT 2    `` 1     NX 4     ROOT 1
VBZ 25   WRB 2    SYM 1    SINV 3   LST 1
VBP 19   -LRB- 2  RP 1     PRT 2
DT 17    . 2      LS 1     WHPP 2
NNPS 11  EX 2     # 1      SQ 2

As an example of a less-split category, the superlative adjectives (JJS) are split into three categories, corresponding principally to most, least, and largest, with most frequent parents NP, QP, and ADVP, respectively. The relative adjectives (JJR) are split in the same way.
Relative adverbs (RBR) are split into three different categories, corresponding to (usually metaphorical) distance (further), degree (more), and time (earlier). Personal pronouns (PRP) are well-divided into three categories, roughly: nominative case, accusative case, and sentence-initial nominative case, which each correlate very strongly with syntactic position. As another example of a specific trend which was mentioned by Klein and Manning (2003), adverbs (RB) do contain splits for adverbs under ADVPs (also), NPs (only), and VPs (not).

Functional categories generally show fewer splits, but those splits that they do exhibit are known to be strongly correlated with syntactic behavior. For example, determiners (DT) divide along several axes: definite (the), indefinite (a), demonstrative (this), quantificational (some), negative polarity (no, any), and various upper- and lower-case distinctions inside these types. Here, it is interesting to note that these distinctions emerge in a predictable order (see Figure 2 for DT splits), beginning with the distinction between demonstratives and non-demonstratives, with the other distinctions emerging subsequently; this echoes the result of Klein and Manning (2003), where the authors chose to distinguish the demonstrative contrast, but not the additional ones learned here. Another very important distinction, as shown in Klein and Manning (2003), is the various subdivisions in the preposition class (IN). Learned first is the split between subordinating conjunctions like that and proper prepositions. Then, subdivisions of each emerge: wh-subordinators like if, noun-modifying prepositions like of, predominantly verb-modifying ones like from, and so on.

Many other interesting patterns emerge, including many classical distinctions not specifically mentioned or modeled in previous work. For example, the wh-determiners (WDT) split into one class for that and another for which, while the wh-adverbs align by reference type: event-based how and why vs. entity-based when and where. The possessive particle (POS) has one class for the standard 's, but another for the plural-only apostrophe. As a final example, the cardinal number nonterminal (CD) induces various categories for dates, fractions, spelled-out numbers, large (usually financial) digit sequences, and others.

3.2 Phrasal Splits

Analyzing the splits of phrasal nonterminals is more difficult than for lexical categories, and we can merely give illustrations. We show some of the top productions of two categories in Table 3. A nonterminal split can be used to model an otherwise uncaptured correlation between that symbol's external context (e.g. its parent symbol) and its internal context (e.g. its child symbols).

Table 3: The most frequent three productions of some latent annotations.
ADVP-0: RB-13 NP-2 RB-13 PP-3 IN-15 NP-2
ADVP-1: NP-3 RB-10 NP-3 RBR-2 NP-3 IN-14
ADVP-2: IN-5 JJS-1 RB-8 RB-6 RB-6 RBR-1
ADVP-3: RBR-0 RB-12 PP-0 RP-0
ADVP-4: RB-3 RB-6 ADVP-2 SBAR-8 ADVP-2 PP-5
ADVP-5: RB-5 NP-3 RB-10 RB-0
ADVP-6: RB-4 RB-0 RB-3 RB-6
ADVP-7: RB-7 IN-5 JJS-1 RB-6
ADVP-8: RB-0 RBS-0 RBR-1 IN-14
ADVP-9: RB-1 IN-15 RBR-0
SINV-0: VP-14 NP-7 VP-14 VP-15 NP-7 NP-9 VP-14 NP-7 .-0
SINV-1: S-6 ,-0 VP-14 NP-7 .-0 S-11 VP-14 NP-7 .-0
A particularly clean example of a split correlating external with internal contexts is the inverted sentence category (SINV), which has only two subsymbols, one which usually has the ROOT symbol as its parent (and which has sentence-final punctuation as its last child), and a second subsymbol which occurs in embedded contexts (and does not end in punctuation). Such patterns are common, but often less easy to predict. For example, possessive NPs get two subsymbols, depending on whether their possessor is a person / country or an organization. The external correlation turns out to be that people and countries are more likely to possess a subject NP, while organizations are more likely to possess an object NP.

Nonterminal splits can also be used to relay information between distant tree nodes, though untangling this kind of propagation and distilling it into clean examples is not trivial. As one example, the subsymbol S-12 (matrix clauses) occurs only under the ROOT symbol. S-12's children usually include NP-8, which in turn usually includes PRP-0, the capitalized nominative pronouns, DT-{1,2,6} (the capitalized determiners), and so on. This same propagation occurs even more frequently in the intermediate symbols, with, for example, one subsymbol of the NP symbol specializing in propagating proper noun sequences.

Verb phrases, unsurprisingly, also receive a full set of subsymbols, including categories for infinitive VPs, passive VPs, several for intransitive VPs, several for transitive VPs with NP and PP objects, and one for sentential complements. As an example of how lexical splits can interact with phrasal splits, the two most frequent rewrites involving intransitive past tense verbs (VBD) involve two different VPs and VBDs: VP-14 → VBD-13 and VP-15 → VBD-12. The difference is that VP-14s are main clause VPs, while VP-15s are subordinate clause VPs. Correspondingly, VBD-13s are verbs of communication (said, reported), while VBD-12s are an assortment of verbs which often appear in subordinate contexts (did, began).

Other interesting phenomena also emerge. For example, intermediate symbols, which in previous work were heavily and manually split using a Markov process, end up encoding processes which are largely Markov, but more complex. For example, some classes of adverb phrases (those with RB-4 as their head) are 'forgotten' by the VP intermediate grammar. The relevant rule is the very probable VP-2 → VP-2 ADVP-6; adding this ADVP to a growing VP does not change the VP subsymbol. In essence, at least a partial distinction between verbal arguments and verbal adjuncts has been learned (as exploited in Collins (1999), for example).

4 Conclusions

By using a split-and-merge strategy and beginning with the barest possible initial structure, our method reliably learns a PCFG that is remarkably good at parsing. Hierarchical split/merge training enables us to learn compact but accurate grammars, ranging from extremely compact (an F1 of 78% with only 147 symbols) to extremely accurate (an F1 of 90.2% for our largest grammar with only 1043 symbols). Splitting provides a tight fit to the training data, while merging improves generalization and controls grammar size. In order to overcome data fragmentation and overfitting, we smooth our parameters. Smoothing allows us to add a larger number of annotations, each specializing in only a fraction of the data, without overfitting our training set.
As one can see in Table 4, the resulting parser ranks among the best lexicalized parsers, beating those of Collins (1999) and Charniak and Johnson (2005).8 Its F1 performance is a 27% reduction in error over Matsuzaki et al. (2005) and Klein and Manning (2003). Not only is our parser more accurate, but the learned grammar is also significantly smaller than that of previous work. While this all is accomplished with only automatic learning, the resulting grammar is human-interpretable. It shows most of the manually introduced annotations discussed by Klein and Manning (2003), but also learns other linguistic phenomena.

[Footnote 8: Even with the Viterbi parser our best grammar achieves 88.7/88.9 LP/LR.]

Table 4: Comparison of our results with those of others.
≤40 words                      LP    LR    CB    0CB
Klein and Manning (2003)       86.9  85.7  1.10  60.3
Matsuzaki et al. (2005)        86.6  86.7  1.19  61.1
Collins (1999)                 88.7  88.5  0.92  66.7
Charniak and Johnson (2005)    90.1  90.1  0.74  70.1
This Paper                     90.3  90.0  0.78  68.5
all sentences                  LP    LR    CB    0CB
Klein and Manning (2003)       86.3  85.1  1.31  57.2
Matsuzaki et al. (2005)        86.1  86.0  1.39  58.3
Collins (1999)                 88.3  88.1  1.06  64.0
Charniak and Johnson (2005)    89.5  89.6  0.88  67.6
This Paper                     89.8  89.6  0.92  66.3

References
G. Ball and D. Hall. 1967. A clustering technique for summarizing multivariate data. Behavioral Science.
S. Caraballo and E. Charniak. 1998. New figures of merit for best-first probabilistic chart parsing. Computational Linguistics, p. 275-298.
E. Charniak and M. Johnson. 2005. Coarse-to-fine n-best parsing and maxent discriminative reranking. In ACL '05, p. 173-180.
E. Charniak. 1996. Tree-bank grammars. In AAAI '96, p. 1031-1036.
E. Charniak. 2000. A maximum-entropy-inspired parser. In NAACL '00, p. 132-139.
D. Chiang and D. Bikel. 2002. Recovering latent information in treebanks. In Computational Linguistics.
N. Chomsky. 1965. Aspects of the Theory of Syntax. MIT Press.
M. Collins. 1999. Head-Driven Statistical Models for Natural Language Parsing. Ph.D. thesis, U. of Pennsylvania.
J. Goodman. 1996. Parsing algorithms and metrics. In ACL '96, p. 177-183.
J. Henderson. 2004. Discriminative training of a neural network statistical parser. In ACL '04.
M. Johnson. 1998. PCFG models of linguistic tree representations. Computational Linguistics, 24:613-632.
D. Klein and C. Manning. 2003. Accurate unlexicalized parsing. In ACL '03, p. 423-430.
T. Matsuzaki, Y. Miyao, and J. Tsujii. 2005. Probabilistic CFG with latent annotations. In ACL '05, p. 75-82.
F. Pereira and Y. Schabes. 1992. Inside-outside reestimation from partially bracketed corpora. In ACL '92, p. 128-135.
D. Prescher. 2005. Inducing head-driven PCFGs with latent heads: Refining a tree-bank grammar for parsing. In ECML '05.
H. Schuetze. 1998. Automatic word sense discrimination. Computational Linguistics, 24(1):97-124.
S. Sekine and M. J. Collins. 1997. EVALB bracket scoring program. http://nlp.cs.nyu.edu/evalb/.
K. Sima'an. 1992. Computational complexity of probabilistic disambiguation. Grammars, 5:125-151.
A. Stolcke and S. Omohundro. 1994. Inducing probabilistic grammars by Bayesian model merging. In Grammatical Inference and Applications, p. 106-118.
Proceedings of the 21st International Conference on Computational Linguistics and 44th Annual Meeting of the ACL, pages 441–448, Sydney, July 2006. c⃝2006 Association for Computational Linguistics Semi-Supervised Learning of Partial Cognates using Bilingual Bootstrapping Oana Frunza and Diana Inkpen School of Information Technology and Engineering University of Ottawa Ottawa, ON, Canada, K1N 6N5 {ofrunza,diana}@site.uottawa.ca Abstract Partial cognates are pairs of words in two languages that have the same meaning in some, but not all contexts. Detecting the actual meaning of a partial cognate in context can be useful for Machine Translation tools and for Computer-Assisted Language Learning tools. In this paper we propose a supervised and a semisupervised method to disambiguate partial cognates between two languages: French and English. The methods use only automatically-labeled data; therefore they can be applied for other pairs of languages as well. We also show that our methods perform well when using corpora from different domains. 1 Introduction When learning a second language, a student can benefit from knowledge in his / her first language (Gass, 1987), (Ringbom, 1987), (LeBlanc et al. 1989). Cognates – words that have similar spelling and meaning – can accelerate vocabulary acquisition and facilitate the reading comprehension task. On the other hand, a student has to pay attention to the pairs of words that look and sound similar but have different meanings – false friends pairs, and especially to pairs of words that share meaning in some but not all contexts – the partial cognates. Carroll (1992) claims that false friends can be a hindrance in second language learning. She suggests that a cognate pairing process between two words that look alike happens faster in the learner’s mind than a false-friend pairing. Experiments with second language learners of different stages conducted by Van et al. (1998) suggest that missing false-friend recognition can be corrected when cross-language activation is used – sounds, pictures, additional explanation, feedback. Machine Translation (MT) systems can benefit from extra information when translating a certain word in context. Knowing if a word in the source language is a cognate or a false friend with a word in the target language can improve the translation results. Cross-Language Information Retrieval systems can use the knowledge of the sense of certain words in a query in order to retrieve desired documents in the target language. Our task, disambiguating partial cognates, is in a way equivalent to coarse grain cross-language Word-Sense Discrimination. Our focus is disambiguating French partial cognates in context: deciding if they are used as cognates with an English word, or if they are used as false friends. There is a lot of work done on monolingual Word Sense Disambiguation (WSD) systems that use supervised and unsupervised methods and report good results on Senseval data, but there is less work done to disambiguate cross-language words. The results of this process can be useful in many NLP tasks. Although French and English belong to different branches of the Indo-European family of languages, their vocabulary share a great number of similarities. Some are words of Latin and Greek origin: e.g., education and theory. A small number of very old, “genetic" cognates go back all the way to Proto-Indo-European, e.g., mére - mother and pied - foot. 
The majority of these pairs of words penetrated the French and English language due to the geographical, historical, and cultural contact between the two countries over many centuries (borrowings). Most of the borrowings have changed their orthography, following different orthographic rules (LeBlanc and Séguin, 1996) and most likely their meaning as well. Some of the adopted words replaced the original word in the language, while others were used together but with slightly or completely different meanings.

In this paper we describe a supervised and also a semi-supervised method to discriminate the senses of partial cognates between French and English. In the following sections we present some definitions, the way we collected the data, the methods that we used, and evaluation experiments with results for both methods.

2 Definitions

We adopt the following definitions. The definitions are language-independent, but the examples are pairs of French and English words, respectively.

Cognates, or True Friends (Vrais Amis), are pairs of words that are perceived as similar and are mutual translations. The spelling can be identical or not, e.g., nature - nature, reconnaissance - recognition.

False Friends (Faux Amis) are pairs of words in two languages that are perceived as similar but have different meanings, e.g., main (= hand) - main (= principal or essential), blesser (= to injure) - bless (= bénir).

Partial Cognates are pairs of words that have the same meaning in both languages in some but not all contexts. They behave as cognates or as false friends, depending on the sense that is used in each context. For example, in French, facteur means not only factor, but also mailman, while étiquette can also mean label or sticker, in addition to the cognate sense.

Genetic Cognates are word pairs in related languages that derive directly from the same word in the ancestor (proto-)language. Because of gradual phonetic and semantic changes over long periods of time, genetic cognates often differ in form and/or meaning, e.g., père - father, chef - head. This category excludes lexical borrowings, i.e., words transferred from one language to another at some point of time, such as concierge.

3 Related Work

As far as we know, no work has been done to disambiguate partial cognates between two languages. Ide (2000) has shown on a small scale that cross-lingual lexicalization can be used to define and structure sense distinctions. Tufis et al. (2004) used cross-lingual lexicalization, alignment of wordnets for several languages, and a clustering algorithm to perform WSD on a set of polysemous English words. They report an accuracy of 74%.

One of the most active researchers in identifying cognates between pairs of languages is Kondrak (2001; 2004). His work is more related to the phonetic aspect of cognate identification. He used algorithms that combine different orthographic and phonetic measures, recurrent sound correspondences, and some semantic similarity based on gloss overlap. Guy (1994) identified letter correspondences between words and estimated the likelihood of relatedness. No semantic component is present in the system; the words are assumed to be already matched by their meanings. Hewson (1993) and Lowe and Mazadon (1994) used systematic sound correspondences to determine proto-projections for identifying cognate sets.

WSD is a task that has attracted researchers since 1950 and it is still a topic of high interest.
Determining the sense of an ambiguous word using bootstrapping and texts from a different language was done by Yarowsky (1995), Hearst (1991), Diab (2002), and Li and Li (2004).

Yarowsky (1995) used a few seeds and untagged sentences in a bootstrapping algorithm based on decision lists. He added two constraints – words tend to have one sense per discourse and one sense per collocation. He reported high accuracy scores for a set of 10 words. The monolingual bootstrapping approach was also used by Hearst (1991), who used a small set of hand-labeled data to bootstrap from a larger corpus for training a noun disambiguation system for English. Unlike Yarowsky (1995), we use automatic collection of seeds. Besides our monolingual bootstrapping technique, we also use bilingual bootstrapping.

Diab (2002) has shown that unsupervised WSD systems that use parallel corpora can achieve results that are close to the results of a supervised approach. She used parallel corpora in French, English, and Spanish, automatically produced with MT tools, to determine cross-language lexicalization sets of target words. The major goal of her work was to perform monolingual English WSD. Evaluation was performed on the nouns from the English all-words data in Senseval-2. Additional knowledge was added to the system from WordNet in order to improve the results. In our experiments we use the parallel data in a different way: we use words from parallel sentences as features for Machine Learning (ML).

Li and Li (2004) have shown that word translation and bilingual bootstrapping is a good combination for disambiguation. They used a set of 7 pairs of Chinese and English words. The two senses of the words were highly distinctive: e.g. bass as fish or music; palm as tree or hand.

Our work described in this paper shows that monolingual and bilingual bootstrapping can be successfully used to disambiguate partial cognates between two languages. Our approach differs from the ones mentioned above not only in the amount of human effort needed to annotate data – we require almost none – and in the way we use the parallel data to automatically collect training examples for machine learning, but also in the fact that we use only off-the-shelf tools and resources: free MT and ML tools, and parallel corpora. We show that a combination of these resources can be used with success in a task that would otherwise require a lot of time and human effort.

4 Data for Partial Cognates

We performed experiments with ten pairs of partial cognates. We list them in Table 1. For a French partial cognate we list its English cognate and several false friends in English. Often the French partial cognate has two senses (one for cognate, one for false friend), but sometimes it has more than two senses: one for cognate and several for false friends (nonetheless, we treat them together). For example, the false friend words for note have one sense for grades and one for bills.

The partial cognate (PC), the cognate (COG) and false-friend (FF) words were collected from a web resource1. The resource contained a list of 400 false friends with 64 partial cognates. All partial cognates are words frequently used in the language. We selected the ten partial cognates presented in Table 1 according to the number of extracted sentences (a balance between the two meanings), to evaluate and experiment with our proposed methods. The human effort that we required for our methods was to add more false-friend English words than the ones we found in the web resource.
We wanted to be able to distinguish the senses of cognates and false friends for a wider variety of senses. This task was done using a bilingual dictionary2.

[Footnote 1: http://french.about.com/library/fauxamis/blfauxam_a.htm]
[Footnote 2: http://www.wordreference.com]

Table 1. The ten pairs of partial cognates.
French partial cognate | English cognate | English false friends
blanc | blank | white, livid
circulation | circulation | traffic
client | client | customer, patron, patient, spectator, user, shopper
corps | corps | body, corpse
détail | detail | retail
mode | mode | fashion, trend, style, vogue
note | note | mark, grade, bill, check, account
police | police | policy, insurance, font, face
responsable | responsible | in charge, responsible party, official, representative, person in charge, executive, officer
route | route | road, roadside

4.1 Seed Set Collection

Both the supervised and the semi-supervised methods that we describe in Section 5 use a set of seeds. The seeds are parallel sentences, French and English, which contain the partial cognate. For each partial-cognate word, a part of the set contains the cognate sense and another part the false-friend sense. As we mentioned in Section 3, the seed sentences that we use are not hand-tagged with the sense (the cognate sense or the false-friend sense); they are automatically annotated by the way we collect them.

To collect the set of seed sentences we use parallel corpora from Hansard3 and EuroParl4, and the manually aligned BAF corpus5. The cognate sense sentences were created by extracting parallel sentences that had on the French side the French cognate and on the English side the English cognate. See the upper part of Table 2 for an example. The same approach was used to extract sentences with the false-friend sense of the partial cognate, only this time we used the false-friend English words. See the lower part of Table 2.

[Footnote 3: http://www.isi.edu/natural-language/download/hansard/ and http://www.tsrali.com/]
[Footnote 4: http://people.csail.mit.edu/koehn/publications/europarl/]
[Footnote 5: http://rali.iro.umontreal.ca/Ressources/BAF/]

Table 2. Example sentences from the parallel corpus.
Fr (PC:COG): Je note, par exemple, que l'accusé a fait une autre déclaration très incriminante à Hall environ deux mois plus tard.
En (COG): I note, for instance, that he made another highly incriminating statement to Hall two months later.
Fr (PC:FF): S'il gèle les gens ne sont pas capables de régler leur note de chauffage
En (FF): If there is a hard frost, people are unable to pay their bills.

To keep the methods simple and language-independent, no lemmatization was used. We took only sentences that had the exact form of the French and English word as described in Table 1. Some improvement might be achieved when using lemmatization. We wanted to see how well we can do by using sentences as they are extracted from the parallel corpus, with no additional pre-processing and without removing any noise that might be introduced during the collection process.

From the extracted sentences, we used 2/3 of the sentences for training (seeds) and 1/3 for testing when applying both the supervised and semi-supervised approach. In Table 3 we present the number of seeds used for training and testing. We will show in Section 6 that even though we started with a small amount of seeds from a certain domain – the nature of the parallel corpus that we had – an improvement can be obtained in discriminating the senses of partial cognates using free text from other domains.

Table 3. Number of parallel sentences used as seeds.
Partial Cognates | Train CG | Train FF | Test CG | Test FF
Blanc        54   78   28   39
Circulation  213  75   107  38
Client       105  88   53   45
Corps        88   82   44   42
Détail       120  80   60   41
Mode         76   104  126  53
Note         250  138  126  68
Police       154  94   78   48
Responsable  200  162  100  81
Route        69   90   35   46
AVERAGE      132.9 99.1 66.9 50.1

5 Methods

In this section we describe the supervised and the semi-supervised methods that we use in our experiments. We also describe the data sets that we used for the monolingual and bilingual bootstrapping techniques. For both methods we have the same goal: to determine which of the two senses (the cognate or the false-friend sense) of a partial-cognate word is present in a test sentence. The classes in which we classify a sentence that contains a partial cognate are: COG (cognate) and FF (false friend).

5.1 Supervised Method

For both the supervised and semi-supervised methods we used the bag-of-words (BOW) approach to modeling context, with binary values for the features. The features were words from the training corpus that appeared at least 3 times in the training sentences. We removed the stopwords from the features. A list of stopwords for English and one for French was used. We ran experiments in which we kept the stopwords as features, but the results did not improve. Since we wanted to learn the contexts in which a partial cognate has a cognate sense and the contexts in which it has a false-friend sense, the cognate and false-friend words were not taken into account as features. Leaving them in would indicate the classes when applying the methods to the English sentences, since all the sentences with the cognate sense contain the cognate word and all the false-friend sentences do not contain it. For the French side all collected sentences contain the partial cognate word, the same for both senses.

As a baseline for the experiments that we present we used the ZeroR classifier from WEKA6, which predicts the class that is the most frequent in the training corpus. The classifiers for which we report results are: Naïve Bayes with a kernel estimator, Decision Trees - J48, and a Support Vector Machine implementation - SMO. All the classifiers can be found in the WEKA package. We used these classifiers because we wanted to have a probabilistic, a decision-based and a functional classifier. The decision tree classifier allows us to see which features are most discriminative. Experiments were performed with other classifiers and with different levels of tuning, on a 10-fold cross validation approach as well; the classifiers we mentioned above were consistently the ones that obtained the best accuracy results. The supervised method used in our experiments consists in training the classifiers on the automatically-collected training seed sentences, for each partial cognate, and then testing their performance on the test set. Results for this method are presented later, in Table 5.

[Footnote 6: http://www.cs.waikato.ac.nz/ml/weka/]

5.2 Semi-Supervised Method

For the semi-supervised method we add unlabelled examples from monolingual corpora: the French newspaper LeMonde7 1994, 1995 (LM), and the BNC8 corpus - corpora from different domains than the seeds. The procedure of adding and using this unlabeled data is described in the Monolingual Bootstrapping (MB) and Bilingual Bootstrapping (BB) sections.

5.2.1 Monolingual Bootstrapping

The monolingual bootstrapping algorithm that we used for experiments on French sentences (MB-F) and on English sentences (MB-E) is:

For each pair of partial cognates (PC):
1. Train a classifier on the training seeds, using the BOW approach and a NB-K classifier with attribute selection on the features.
2. Apply the classifier to unlabeled data - sentences that contain the PC word, extracted from LeMonde (MB-F) or from the BNC (MB-E).
3. Take the first k newly classified sentences, from both the COG and the FF class, and add them to the training seeds (the most confident ones - those whose prediction probability is greater than or equal to a threshold of 0.85).
4. Rerun the experiments, training on the new training set.
5. Repeat steps 2 and 3 t times.
endFor

For the first step of the algorithm we used the NB-K classifier because it was the classifier that consistently performed better. We chose to perform attribute selection on the features after we tried the method without attribute selection; we obtained better results when using attribute selection. This sub-step was performed with the WEKA tool; Chi-Square attribute selection was chosen.

In the second step of the MB algorithm, the classifier that was trained on the training seeds was used to classify the unlabeled data that was collected from the two additional resources. For the MB algorithm on the French side we trained the classifier on the French side of the training seeds and then applied the classifier to the sentences that were extracted from LeMonde and contained the partial cognate. The same approach was used for MB on the English side, only this time we used the English side of the training seeds for training the classifier and the BNC corpus to extract new examples. In fact, the MB-E step is needed only for the BB method. Only the sentences that were classified with a probability greater than 0.85 were selected for later use in the bootstrapping algorithm. The numbers of sentences that were chosen from the new corpora and used in the first step of MB and BB are presented in Table 4.

[Footnote 7: http://www.lemonde.fr/]
[Footnote 8: http://www.natcorp.ox.ac.uk/]

Table 4. Number of sentences selected from the LeMonde and BNC corpora.
PC          | LM COG | LM FF | BNC COG | BNC FF
Blanc         45      250     0         241
Circulation   250     250     70        180
Client        250     250     77        250
Corps         250     250     131       188
Détail        250     163     158       136
Mode          151     250     176       262
Note          250     250     178       281
Police        250     250     186       200
Responsable   250     250     177       225
Route         250     250     217       118

For the partial cognate Blanc with the cognate sense, the number of sentences that had a probability greater than or equal to the threshold was low. For the rest of the partial cognates the number of selected sentences was limited by the value of the parameter k in the algorithm.

5.2.2 Bilingual Bootstrapping

The algorithm for bilingual bootstrapping that we propose and tried in our experiments is:

1. Translate the English sentences that were collected in the MB-E step into French using an online MT9 tool and add them to the French seed training data.
2. Repeat the MB-F and MB-E steps T times.

For both the monolingual and bilingual bootstrapping techniques the value of the parameters t and T is 1 in our experiments.

[Footnote 9: http://www.freetranslation.com/free/web.asp]

6 Evaluation and Results

In this section we present the results that we obtained with the supervised and semi-supervised methods that we applied to disambiguate partial cognates. Due to space limitations we show results only for testing on the test sets and not for the 10-fold cross validation experiments on the training data.
For the same reason, we present the results that we obtained only with the French side of the parallel corpus, even though we trained classifiers on the English sentences as well. The results for the 10-fold cross validation and for the English sentences are not much different than the ones from Table 5 that describe the supervised method results on French sentences. Table 5. Results for the Supervised Method. PC ZeroR NB-K Trees SMO Blanc 58% 95.52% 98.5% 98.5% Circulation 74% 91.03% 80% 89.65% Client 54.08% 67.34% 66.32% 61.22% Corps 51.16% 62% 61.62% 69.76% Détail 59.4% 85.14% 85.14% 87.12% Mode 58.24% 89.01% 89.01% 90% Note 64.94% 89.17% 77.83% 85.05% Police 61.41% 79.52% 93.7% 94.48% Responsable 55.24% 85.08% 70.71% 75.69% Route 56.79% 54.32% 56.79% 56.79% AVERAGE 59.33% 80.17% 77.96% 80.59% Table 6 and Table 7 present results for the MB and BB. More experiments that combined MB and BB techniques were also performed. The results are presented in Table 9. Our goal is to disambiguate partial cognates in general, not only in the particular domain of Hansard and EuroParl. For this reason we used another set of automatically determined sentences from a multi-domain parallel corpus. The set of new sentences (multi-domain) was extracted in the same manner as the seeds from Hansard and EuroParl. The new parallel corpus is a small one, approximately 1.5 million words, but contains texts from different domains: magazine articles, modern fiction, texts from international organizations and academic textbooks. We are using this set of sentences in our experiments to show that our methods perform well on multidomain corpora and also because our aim is to be able to disambiguate PC in different domains. From this parallel corpus we were able to extract the number of sentences shown in Table 8. With this new set of sentences we performed different experiments both for MB and BB. All results are described in Table 9. Due to space issue we report the results only on the average that we obtained for all the 10 pairs of partial cognates. The symbols that we use in Table 9 represent: S – the seed training corpus, TS – the seed test set, BNC and LM – sentences extracted from LeMonde and BNC (Table 4), and NC – the sentences that were extracted from the multi-domain new corpus. When we use the + symbol we put together all the sentences extracted from the respective corpora. Table 6. Monolingual Bootstrapping on the French side. PC ZeroR NB-K Dec.Tree SMO Blanc 58.20% 97.01% 97.01% 98.5% Circulation 73.79% 90.34% 70.34% 84.13% Client 54.08% 71.42% 54.08% 64.28% Corps 51.16% 78% 56.97% 69.76% Détail 59.4% 88.11% 85.14% 82.17% Mode 58.24% 89.01% 90.10% 85% Note 64.94% 85.05% 71.64% 80.41% Police 61.41% 71.65% 92.91% 71.65% Responsable 55.24% 87.29% 77.34% 81.76% Route 56.79% 51.85% 56.79% 56.79% AVERAGE 59.33% 80.96% 75.23% 77.41% Table 7. Bilingual Bootstrapping. PC ZeroR NB-K Dec.Tree SMO Blanc 58.2% 95.52% 97.01% 98.50% Circulation 73.79% 92.41% 63.44% 87.58% Client 45.91% 70.4% 45.91% 63.26% Corps 48.83% 83% 67.44% 82.55% Détail 59% 91.08% 85.14% 86.13% Mode 58.24% 87.91% 90.1% 87% Note 64.94% 85.56% 77.31% 79.38% Police 61.41% 80.31% 96.06% 96.06% Responsable 44.75% 87.84% 74.03% 79.55% Route 43.2% 60.49% 45.67% 64.19% AVERAGE 55.87% 83.41% 74.21% 82.4% 446 Table 8. New Corpus (NC) sentences. 
PC COG FF Blanc 18 222 Circulation 26 10 Client 70 44 Corps 4 288 Détail 50 0 Mode 166 12 Note 214 20 Police 216 6 Responsable 104 66 Route 6 100 6.1 Discussion of the Results The results of the experiments and the methods that we propose show that we can use with success unlabeled data to learn from, and that the noise that is introduced due to the seed set collection is tolerable by the ML techniques that we use. Some results of the experiments we present in Table 9 are not as good as others. What is important to notice is that every time we used MB or BB or both, there was an improvement. For some experiments MB did better, for others BB was the method that improved the performance; nonetheless for some combinations MB together with BB was the method that worked best. In Tables 5 and 7 we show that BB improved the results on the NB-K classifier with 3.24%, compared with the supervised method (no bootstrapping), when we tested only on the test set (TS), the one that represents 1/3 of the initiallycollected parallel sentences. This improvement is not statistically significant, according to a t-test. In Table 9 we show that our proposed methods bring improvements for different combinations of training and testing sets. Table 9, lines 1 and 2 show that BB with NB-K brought an improvement of 1.95% from no bootstrapping, when we tested on the multi-domain corpus NC. For the same setting, there was an improvement of 1.55% when we tested on TS (Table 9, lines 6 and 8). When we tested on the combination TS+NC, again BB brought an improvement of 2.63% from no bootstrapping (Table 9, lines 10 and 12). The difference between MB and BB with this setting is 6.86% (Table 9, lines 11 and 12). According to a t-test the 1.95% and 6.86% improvements are statistically significant. Table 9. Results for different experiments with monolingual and bilingual bootstrapping (MB and BB). Train Test ZeroR NB-K Trees SMO S (no bootstrapping) NC 67% 71.97% 73.75% 76.75% S+BNC (BB) NC 64% 73.92% 60.49% 74.80% S+LM (MB) NC 67.85% 67.03% 64.65% 65.57% S +LM+BNC (MB+BB) NC 64.19% 70.57% 57.03% 66.84% S+LM+BNC (MB+BB) TS 55.87% 81.98% 74.37% 78.76% S+NC (no bootstr.) TS 57.44% 82.03% 76.91% 80.71% S+NC+LM (MB) TS 57.44% 82.02% 73.78% 77.03% S+NC+BNC (BB) TS 56.63% 83.58% 68.36% 82.34% S+NC+LM+ BNC(MB+BB) TS 58% 83.10% 75.61% 79.05% S (no bootstrapping) TS+NC 62.70% 77.20% 77.23% 79.26% S+LM (MB) TS+NC 62.70% 72.97% 70.33% 71.97% S+BNC (BB) TS+NC 61.27% 79.83% 67.06% 78.80% S+LM+BNC (MB+BB) TS+NC 61.27% 77.28% 65.75% 73.87% The number of features that were extracted from the seeds was more than double at each MB and BB experiment, showing that even though we started with seeds from a language restricted domain, the method is able to capture knowledge form different domains as well. Besides the change in the number of features, the domain of the features has also changed form the parliamentary one to others, more general, showing that the method will be able to disambiguate sentences where the partial cognates cover different types of context. Unlike previous work that has done with monolingual or bilingual bootstrapping, we tried to disambiguate not only words that have senses that are very different e.g. plant – with a sense of biological plant or with the sense of factory. In our set of partial cognates the French word route is a difficult word to disambiguate even for humans: it has a cognate sense when it refers to a maritime or trade route and a false-friend sense when it is used as road. 
The same observation applies to client (the cognate sense is client, and the false friend sense is customer, patron, or patient) and to circulation (cognate in air or blood circulation, false friend in street traffic). 447 7 Conclusion and Future Work We showed that with simple methods and using available tools we can achieve good results in the task of partial cognate disambiguation. The accuracy might be increased by using dependencies relations, lemmatization, part-ofspeech tagging – extract sentences where the partial cognate has the same POS, and other types of data representation combined with different semantic tools (e.g. decision lists, rule based systems). In our experiments we use a machine language representation – binary feature values, and we show that nonetheless machines are capable of learning from new information, using an iterative approach, similar to the learning process of humans. New information was collected and extracted by classifiers when additional corpora were used for training. In addition to the applications that we mentioned in Section 1, partial cognates can also be useful in Computer-Assisted Language Learning (CALL) tools. Search engines for E-Learning can find useful a partial cognate annotator. A teacher that prepares a test to be integrated into a CALL tool can save time by using our methods to automatically disambiguate partial cognates, even though the automatic classifications need to be checked by the teacher. In future work we plan to try different representations of the data, to use knowledge of the relations that exists between the partial cognate and the context words, and to run experiments when we iterate the MB and BB steps more than once. References Susane Carroll 1992. On Cognates. Second Language Research, 8(2):93-119 Mona Diab and Philip Resnik. 2002. An unsupervised method for word sense tagging using parallel corpora. In Proceedings of the 40th Meeting of the Association for Computational Linguistics (ACL 2002), Philadelphia, pp. 255-262. S. M. Gass. 1987. The use and acquisition of the second language lexicon (Special issue). Studies in Second Language Acquisition, 9 (2). Jacques B. M. Guy. 1994. An algorithm for identifying cognates in bilingual word lists and its applicability to machine translation. Journal of Quantitative Linguistics, 1(1):35-42. Marti Hearst 1991. Noun homograph disambiguation using local context in large text corpora. 7th Annual Conference of the University of Waterloo Center for the new OED and Text Research, Oxford. W.J.B Van Heuven, A. Dijkstra, and J. Grainger. 1998. Orthographic neighborhood effects in bilingual word recognition. Journal of Memory and Language 39: 458-483. John Hewson 1993. A Computer-Generated Dictionary of Proto-Algonquian. Ottawa: Canadian Museum of Civilization. Nancy Ide. 2000 Cross-lingual sense determination: Can it work? Computers and the Humanities, 34:12, Special Issue on the Proceedings of the SIGLEX SENSEVAL Workshop, pp.223-234. Grzegorz Kondrak. 2004. Combining Evidence in Cognate Identification. Proceedings of Canadian AI 2004: 17th Conference of the Canadian Society for Computational Studies of Intelligence, pp.4459. Grzegorz Kondrak. 2001. Identifying Cognates by Phonetic and Semantic Similarity. Proceedings of NAACL 2001: 2nd Meeting of the North American Chapter of the Association for Computational Linguistics, pp.103-110. Raymond LeBlanc and Hubert Séguin. 1996. Les congénères homographes et parographes anglaisfrançais. 
Twenty-Five Years of Second Language Teaching at the University of Ottawa, pp.69-91. Hang Li and Cong Li. 2004. Word translation disambiguation using bilingual bootstrap. Computational Linguistics, 30(1):1-22. John B. Lowe and Martine Mauzaudon. 1994. The reconstruction engine: a computer implementation of the comparative method. Computational Linguistics, 20:381-417. Hakan Ringbom. 1987. The Role of the First Language in Foreign Language Learning. Multilingual Matters Ltd., Clevedon, England. Dan Tufis, Ion Radu, Nancy Ide 2004. Fine-Grained Word Sense Disambiguation Based on Parallel Corpora, Word Alignment, Word Clustering and Aligned WordNets. Proceedings of the 20th International Conference on Computational Linguistics, COLING 2004, Geneva, pp. 1312-1318. David Yarowsky. 1995. Unsupervised Word Sense Disambiguation Rivaling Supervised Methods. In Proceedings of the 33th Annual Meeting of the Association for Computational Linguistics, Cambridge, MA, pp 189-196. 448
Proceedings of the 21st International Conference on Computational Linguistics and 44th Annual Meeting of the ACL, pages 449–456, Sydney, July 2006. c⃝2006 Association for Computational Linguistics Direct Word Sense Matching for Lexical Substitution Ido Dagan1, Oren Glickman1, Alfio Gliozzo2, Efrat Marmorshtein1, Carlo Strapparava2 1Department of Computer Science, Bar Ilan University, Ramat Gan, 52900, Israel 2ITC-Irst, via Sommarive, I-38050, Trento, Italy Abstract This paper investigates conceptually and empirically the novel sense matching task, which requires to recognize whether the senses of two synonymous words match in context. We suggest direct approaches to the problem, which avoid the intermediate step of explicit word sense disambiguation, and demonstrate their appealing advantages and stimulating potential for future research. 1 Introduction In many language processing settings it is needed to recognize that a given word or term may be substituted by a synonymous one. In a typical information seeking scenario, an information need is specified by some given source words. When looking for texts that match the specified need the source words might be substituted with synonymous target words. For example, given the source word ‘weapon’ a system may substitute it with the target synonym ‘arm’. This scenario, which is generally referred here as lexical substitution, is a common technique for increasing recall in Natural Language Processing (NLP) applications. In Information Retrieval (IR) and Question Answering (QA) it is typically termed query/question expansion (Moldovan and Mihalcea, 2000; Negri, 2004). Lexical Substitution is also commonly applied to identify synonyms in text summarization, for paraphrasing in text generation, or is integrated into the features of supervised tasks such as Text Categorization and Information Extraction. Naturally, lexical substitution is a very common first step in textual entailment recognition, which models semantic inference between a pair of texts in a generalized application independent setting (Dagan et al., 2005). To perform lexical substitution NLP applications typically utilize a knowledge source of synonymous word pairs. The most commonly used resource for lexical substitution is the manually constructed WordNet (Fellbaum, 1998). Another option is to use statistical word similarities, such as in the database constructed by Dekang Lin (Lin, 1998). We generically refer to such resources as substitution lexicons. When using a substitution lexicon it is assumed that there are some contexts in which the given synonymous words share the same meaning. Yet, due to polysemy, it is needed to verify that the senses of the two words do indeed match in a given context. For example, there are contexts in which the source word ‘weapon’ may be substituted by the target word ‘arm’; however one should recognize that ‘arm’ has a different sense than ‘weapon’ in sentences such as “repetitive movements could cause injuries to hands, wrists and arms.” A commonly proposed approach to address sense matching in lexical substitution is applying Word Sense Disambiguation (WSD) to identify the senses of the source and target words. Then, substitution is applied only if the words have the same sense (or synset, in WordNet terminology). 
In settings in which the source is given as a single term without context, sense disambiguation is performed only for the target word; substitution is then applied only if the target word's sense matches at least one of the possible senses of the source word.

One might observe that such application of WSD addresses the task at hand in a somewhat indirect manner. In fact, lexical substitution only requires knowing that the source and target senses do match, but it does not require that the matching senses be explicitly identified. Selecting explicitly the right sense in context, which is then followed by verifying the desired matching, might be solving a harder intermediate problem than required. Instead, we can define the sense matching problem directly as a binary classification task for a pair of synonymous source and target words. This task requires deciding whether the senses of the two words do or do not match in a given context (but it does not require explicitly identifying the matching senses).

A highly related task was proposed in (McCarthy, 2002). McCarthy's proposal was to ask systems to suggest possible "semantically similar replacements" of a target word in context, where alternative replacements should be grouped together. While this task is somewhat more complicated as an evaluation setting than our binary recognition task, it was motivated by similar observations and applied goals. From another perspective, sense matching may be viewed as a lexical sub-case of the general textual entailment recognition setting, where we need to recognize whether the meaning of the target word "entails" the meaning of the source word in a given context.

This paper provides a first investigation of the sense matching problem. To allow comparison with the classical WSD setting we derived an evaluation dataset for the new problem from the Senseval-3 English lexical sample dataset (Mihalcea and Edmonds, 2004). We then evaluated alternative supervised and unsupervised methods that perform sense matching either indirectly or directly (i.e. with or without the intermediate sense identification step). Our findings suggest that in the supervised setting the results of the direct and indirect approaches are comparable. However, directly addressing the binary classification task has practical advantages and can yield high precision values, as desired in precision-oriented applications such as IR and QA.

More importantly, direct sense matching sets the ground for implicit unsupervised approaches that may utilize practically unlimited volumes of unlabeled training data. Furthermore, such approaches circumvent the Sisyphean need for specifying explicitly a set of stipulated senses. We present an initial implementation of such an approach using a one-class classifier, which is trained on unlabeled occurrences of the source word and applied to occurrences of the target word. Our current results outperform the unsupervised baseline and put forth a whole new direction for future research.

2 WSD and Lexical Expansion

Despite certain initial skepticism about the usefulness of WSD in practical tasks (Voorhees, 1993; Sanderson, 1994), there is some evidence that WSD can improve performance in typical NLP tasks such as IR and QA. For example, (Schütze and Pederson, 1995) gives clear indication of the potential for WSD to improve the precision of an IR system. They tested the use of WSD on a standard IR test collection (TREC-1B), improving precision by more than 4%.
The use of WSD has produced successful experiments for query expansion techniques. In particular, some attempts exploited WordNet to enrich queries with semantically-related terms. For instance, (Voorhees, 1994) manually expanded 50 queries over the TREC-1 collection using synonymy and other WordNet relations. She found that the expansion was useful with short and incomplete queries, leaving the task of proper automatic expansion as an open problem. (Gonzalo et al., 1998) demonstrates an improvement in performance over an IR test collection using the sense data contained in SemCor over a purely term-based model. In practice, they experimented with searching SemCor with disambiguated and expanded queries. Their work shows that a WSD system, even if not performing perfectly, combined with synonymy enrichment increases retrieval performance. (Moldovan and Mihalcea, 2000) introduces the idea of using WordNet to extend Web searches based on semantic similarity. Their results showed that WSD-based query expansion actually improves retrieval performance in a Web scenario. Recently (Negri, 2004) proposed a sense-based relevance feedback scheme for query enrichment in a QA scenario (TREC-2003 and ACQUAINT), demonstrating improvement in retrieval performance.

While all these works clearly show the potential usefulness of WSD in practical tasks, they do not necessarily justify the efforts for refining fine-grained sense repositories and for building large sense-tagged corpora. We suggest that the sense matching task, as presented in the introduction, may relieve major drawbacks of applying WSD in practical scenarios.

3 Problem Setting and Dataset

To investigate the direct sense matching problem it is necessary to obtain an appropriate dataset of examples for this binary classification task, along with gold standard annotation. While there is no such standard (application independent) dataset available, it is possible to derive it automatically from existing WSD evaluation datasets, as described below. This methodology also allows comparing direct approaches for sense matching with classical indirect approaches, which apply an intermediate step of identifying the most likely WordNet sense.

We derived our dataset from the Senseval-3 English lexical sample dataset (Mihalcea and Edmonds, 2004), taking all 25 nouns, adjectives and adverbs in this sample. Verbs were excluded since their sense annotation in Senseval-3 is not based on WordNet senses. The Senseval dataset includes a set of example occurrences in context for each word, split into training and test sets, where each example is manually annotated with the corresponding WordNet synset.

For the sense matching setting we need examples of pairs of source-target synonymous words, where at least one of these words should occur in a given context. Following an applicative motivation, we mimic an IR setting in which a single source word query is expanded (substituted) by a synonymous target word. Then, it is necessary to identify contexts in which the target word appears in a sense that matches the source word. Accordingly, we considered each of the 25 words in the Senseval sample as a target word for the sense matching task. Next, we had to pick for each target word a corresponding synonym to play the role of the source word. This was done by creating a list of all WordNet synonyms of the target word, under all its possible senses, and picking randomly one of the synonyms as the source word.
For example, the word ‘disc’ is one of the words in the Senseval lexical sample. For this target word the synonym ‘record’ was picked, which matches ‘disc’ in its musical sense. Overall, 59% of all possible synsets of our target words included an additional synonym, which could play the role of the source word (that is, 41% of the synsets consisted of the target word only). Similarly, 62% of the test examples of the target words were annotated by a synset that included an additional synonym.

While creating source-target synonym pairs it was evident that many WordNet synonyms correspond to very infrequent senses or word usages, such as the WordNet synonyms germ and source. Such source synonyms are useless for evaluating sense matching with the target word since the senses of the two words would rarely match in perceivable contexts. In fact, considering our motivation for lexical substitution, it is usually desired to exclude such obscure synonym pairs from substitution lexicons in practical applications, since they would mostly introduce noise to the system. To avoid this problem the list of WordNet synonyms for each target word was filtered by a lexicographer, who manually excluded obscure synonyms that seemed worthless in practice. The source synonym for each target word was then picked randomly from the filtered list. Table 1 shows the 25 source-target pairs created for our experiments. In future work it may be possible to apply automatic methods for filtering infrequent sense correspondences in the dataset, by adopting algorithms such as in (McCarthy et al., 2004).

Having source-target synonym pairs, a classification instance for the sense matching task is created from each example occurrence of the target word in the Senseval dataset. A classification instance is thus defined by a pair of source and target words and a given occurrence of the target word in context. The instance should be classified as positive if the sense of the target word in the given context matches one of the possible senses of the source word, and as negative otherwise. Table 2 illustrates positive and negative example instances for the source-target synonym pair ‘record-disc’, where only occurrences of ‘disc’ in the musical sense are considered positive.

The gold standard annotation for the binary sense matching task can be derived automatically from the Senseval annotations and the corresponding WordNet synsets. An example occurrence of the target word is considered positive if the annotated synset for that example also includes the source word, and negative otherwise. Notice that different positive examples might correspond to different senses of the target word. This happens when the source and target share several senses, and hence they appear together in several synsets. Finally, since in Senseval an example may be annotated with more than one sense, it was considered positive if any of the annotated synsets for the target word includes the source word.
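A minimal sketch of this label derivation is given below, using NLTK's WordNet interface. The mapping from Senseval's WordNet 1.7.1 sense annotations to synset objects is assumed to have been done already, and the function name is ours, not part of the original system.

```python
from nltk.corpus import wordnet as wn

def sense_match_label(annotated_synsets, source_word):
    """Binary gold label for one occurrence of the target word:
    positive iff any gold-standard synset also lists the source synonym."""
    source = source_word.replace(' ', '_').lower()
    for synset in annotated_synsets:
        lemmas = {lemma.name().lower() for lemma in synset.lemmas()}
        if source in lemmas:
            return 'positive'
    return 'negative'

# Example: an occurrence of 'disc' annotated with its musical-recording synset
# is positive for the source word 'record'.
example_synsets = [wn.synset('phonograph_record.n.01')]
print(sense_match_label(example_synsets, 'record'))  # -> positive
```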
positive He said computer networks would not be affected and copies of information should be made on floppy discs. negative Before the dead soldier was placed in the ditch his personal possessions were removed, leaving one disc on the body for identification purposes negative Table 2: positive and negative examples for the source-target synonym pair ‘record-disc’ notated with more than one sense, it was considered positive if any of the annotated synsets for the target word includes the source word. Using this procedure we derived gold standard annotations for all the examples in the Senseval3 training section for our 25 target words. For the test set we took up to 40 test examples for each target word (some words had fewer test examples), yielding 913 test examples in total, out of which 239 were positive. This test set was used to evaluate the sense matching methods described in the next section. 4 Investigated Methods As explained in the introduction, the sense matching task may be addressed by two general approaches. The traditional indirect approach would first disambiguate the target word relative to a predefined set of senses, using standard WSD methods, and would then verify that the selected sense matches the source word. On the other hand, a direct approach would address the binary sense matching task directly, without selecting explicitly a concrete sense for the target word. This section describes the alternative methods we investigated under supervised and unsupervised settings. The supervised methods utilize manual sense annotations for the given source and target words while unsupervised methods do not require any annotated sense examples. For the indirect approach we assume the standard WordNet sense repository and corresponding annotations of the target words with WordNet synsets. 4.1 Feature set and classifier As a vehicle for investigating different classification approaches we implemented a “vanilla” state of the art architecture for WSD. Following common practice in feature extraction (e.g. (Yarowsky, 1994)), and using the mxpost1 part of speech tagger and WordNet’s lemmatization, the following feature set was used: bag of word lemmas for the context words in the preceding, current and following sentence; unigrams of lemmas and parts of speech in a window of +/- three words, where each position provides a distinct feature; and bigrams of lemmas in the same window. The SVMLight (Joachims, 1999) classifier was used in the supervised settings with its default parameters. To obtain a multi-class classifier we used a standard one-vs-all approach of training a binary SVM for each possible sense and then selecting the highest scoring sense for a test example. To verify that our implementation provides a reasonable replication of state of the art WSD we applied it to the standard Senseval-3 Lexical Sample WSD task. The obtained accuracy2 was 66.7%, which compares reasonably with the mid-range of systems in the Senseval-3 benchmark (Mihalcea and Edmonds, 2004). This figure is just a few percent lower than the (quite complicated) best Senseval-3 system, which achieved about 73% accuracy, and it is much higher than the standard Senseval baselines. We thus regard our classifier as a fair vehicle for comparing the alternative approaches for sense matching on equal grounds. 
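The feature set of Section 4.1 can be pictured as a small extraction routine. The sketch below assumes the input is already tokenized, lemmatized and POS-tagged (the paper used the mxpost tagger and WordNet lemmatization); the feature-name strings are our own convention, not the paper's:

```python
def extract_features(lemmas, tags, i, context_lemmas):
    """Feature sketch for a target occurrence at position i.
    `lemmas`/`tags` cover the current sentence; `context_lemmas` is the bag of
    lemmas from the preceding, current and following sentences."""
    feats = {}
    for lem in context_lemmas:                      # bag-of-lemmas context
        feats['bow=%s' % lem] = 1
    for off in range(-3, 4):                        # positional unigrams in a +/-3 window
        j = i + off
        if off != 0 and 0 <= j < len(lemmas):
            feats['lem[%+d]=%s' % (off, lemmas[j])] = 1
            feats['pos[%+d]=%s' % (off, tags[j])] = 1
    for off in range(-3, 3):                        # lemma bigrams in the same window
        j = i + off
        if 0 <= j and j + 1 < len(lemmas):
            feats['big[%+d]=%s_%s' % (off, lemmas[j], lemmas[j + 1])] = 1
    return feats
```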
1ftp://ftp.cis.upenn.edu/pub/adwait/jmx/jmx.tar.gz 2The standard classification accuracy measure equals precision and recall as defined in the Senseval terminology when the system classifies all examples, with no abstentions. 452 4.2 Supervised Methods 4.2.1 Indirect approach The indirect approach for sense matching follows the traditional scheme of performing WSD for lexical substitution. First, the WSD classifier described above was trained for the target words of our dataset, using the Senseval-3 sense annotated training data for these words. Then, the classifier was applied to the test examples of the target words, selecting the most likely sense for each example. Finally, an example was classified as positive if the selected synset for the target word includes the source word, and as negative otherwise. 4.2.2 Direct approach As explained above, the direct approach addresses the binary sense matching task directly, without selecting explicitly a sense for the target word. In the supervised setting it is easy to obtain such a binary classifier using the annotation scheme described in Section 3. Under this scheme an example was annotated as positive (for the binary sense matching task) if the source word is included in the Senseval gold standard synset of the target word. We trained the classifier using the set of Senseval-3 training examples for each target word, considering their derived binary annotations. Finally, the trained classifier was applied to the test examples of the target words, yielding directly a binary positive-negative classification. 4.3 Unsupervised Methods It is well known that obtaining annotated training examples for WSD tasks is very expensive, and is often considered infeasible in unrestricted domains. Therefore, many researchers investigated unsupervised methods, which do not require annotated examples. Unsupervised approaches have usually been investigated within Senseval using the “All Words” dataset, which does not include training examples. In this paper we preferred using the same test set which was used for the supervised setting (created from the Senseval-3 “Lexical Sample” dataset, as described above), in order to enable comparison between the two settings. Naturally, in the unsupervised setting the sense labels in the training set were not utilized. 4.3.1 Indirect approach State-of-the-art unsupervised WSD systems are quite complex and they are not easy to be replicated. Thus, we implemented the unsupervised version of the Lesk algorithm (Lesk, 1986) as a reference system, since it is considered a standard simple baseline for unsupervised approaches. The Lesk algorithm is one of the first algorithms developed for semantic disambiguation of all-words in unrestricted text. In its original unsupervised version, the only resource required by the algorithm is a machine readable dictionary with one definition for each possible word sense. The algorithm looks for words in the sense definitions that overlap with context words in the given sentence, and chooses the sense that yields maximal word overlap. We implemented a version of this algorithm using WordNet sense-definitions with context length of ±10 words before and after the target word. 4.3.2 The direct approach: one-class learning The unsupervised settings for the direct method are more problematic because most of unsupervised WSD algorithms (such as the Lesk algorithm) rely on dictionary definitions. 
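For concreteness, the Lesk baseline of Section 4.3.1, which relies on exactly such dictionary definitions, can be sketched as follows. NLTK's WordNet glosses stand in for the machine-readable dictionary, and the final check mirrors the indirect sense-matching decision; both choices are ours:

```python
from nltk.corpus import wordnet as wn

def lesk_sense(target, context_tokens, pos=None):
    """Simplified Lesk: pick the synset whose gloss overlaps most with the
    +/-10-token context around the target occurrence."""
    context = set(t.lower() for t in context_tokens)
    best, best_overlap = None, -1
    for synset in wn.synsets(target, pos=pos):
        gloss = set(synset.definition().lower().split())
        overlap = len(gloss & context)
        if overlap > best_overlap:
            best, best_overlap = synset, overlap
    return best

def lesk_sense_match(target, source, context_tokens, pos=None):
    """Indirect unsupervised sense matching: positive iff the Lesk-selected
    synset also lists the source word as a synonym."""
    synset = lesk_sense(target, context_tokens, pos)
    return synset is not None and source in synset.lemma_names()
```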
For this reason, standard unsupervised techniques cannot be applied in a direct approach for sense matching, in which the only external information is a substitution lexicon. In this subsection we present a direct unsupervised method for sense matching. It is based on the assumption that typical contexts in which both the source and target words appear correspond to their matching senses. Unlabeled occurrences of the source word can then be used to provide evidence for lexical substitution because they allow us to recognize whether the sense of the target word matches that of the source. Our strategy is to represent in a learning model the typical contexts of the source word in unlabeled training data. Then, we exploit such model to match the contexts of the target word, providing a decision criterion for sense matching. In other words, we expect that under a matching sense the target word would occur in prototypical contexts of the source word. To implement such approach we need a learning technique that does not rely on the availability of negative evidence, that is, a one-class learning algorithm. In general, the classification performance of one-class approaches is usually quite poor, if compared to supervised approaches for the same tasks. However, in many practical settings oneclass learning is the only available solution. For our experiments we adopted the one-class SVM learning algorithm (Sch¨olkopf et al., 2001) 453 implemented in the LIBSVM package,3 and represented the unlabeled training examples by adopting the feature set described in Subsection 4.1. Roughly speaking, a one-class SVM estimates the smallest hypersphere enclosing most of the training data. New test instances are then classified positively if they lie inside the sphere, while outliers are regarded as negatives. The ratio between the width of the enclosed region and the number of misclassified training examples can be varied by setting the parameter ν ∈(0, 1). Smaller values of ν will produce larger positive regions, with the effect of increasing recall. The appealing advantage of adopting one-class learning for sense matching is that it allows us to define a very elegant learning scenario, in which it is possible to train “off-line” a different classifier for each (source) word in the lexicon. Such a classifier can then be used to match the sense of any possible target word for the source which is given in the substitution lexicon. This is in contrast to the direct supervised method proposed in Subsection 4.2, where a different classifier for each pair of source - target words has to be defined. 5 Evaluation 5.1 Evaluation measures and baselines In the lexical substitution (and expansion) setting, the standard WSD metrics (Mihalcea and Edmonds, 2004) are not suitable, because we are interested in the binary decision of whether the target word matches the sense of a given source word. In analogy to IR, we are more interested in positive assignments, while the opposite case (i.e. when the two words cannot be substituted) is less interesting. Accordingly, we utilize the standard definitions of precision, recall and F1 typically used in IR benchmarks. In the rest of this section we will report micro averages for these measures on the test set described in Section 3. Following the Senseval methodology, we evaluated two different baselines for unsupervised and supervised methods. 
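Before turning to the baselines and results, here is a minimal sketch of the one-class matcher of Section 4.3.2. It substitutes scikit-learn's OneClassSVM for the LIBSVM implementation used in the paper, assumes a linear kernel, and reuses feature dictionaries of the kind extracted in Section 4.1; all three choices are assumptions of ours:

```python
from sklearn.feature_extraction import DictVectorizer
from sklearn.svm import OneClassSVM

def train_source_model(source_feature_dicts, nu=0.3):
    """Fit a one-class model on unlabeled contexts of the source word.
    Smaller nu enlarges the learned positive region (raising recall)."""
    vec = DictVectorizer()
    X = vec.fit_transform(source_feature_dicts)
    model = OneClassSVM(kernel='linear', nu=nu).fit(X)
    return vec, model

def matches_source_sense(vec, model, target_feature_dict):
    """Positive iff the target occurrence falls inside the learned region."""
    x = vec.transform([target_feature_dict])
    return model.predict(x)[0] == 1
```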
The random baseline, used for the unsupervised algorithms, was obtained by choosing either the positive or the negative class at random resulting in P = 0.262, R = 0.5, F1 = 0.344. The Most Frequent baseline has been used for the supervised algorithms and is obtained by assigning the positive class when the 3Freely available from www.csie.ntu.edu.tw/ /∼cjlin/libsvm. percentage of positive examples in the training set is above 50%, resulting in P = 0.65, R = 0.41, F1 = 0.51. 5.2 Supervised Methods Both the indirect and the direct supervised methods presented in Subsection 4.2 have been tested and compared to the most frequent baseline. Indirect. For the indirect methodology we trained the supervised WSD system for each target word on the sense-tagged training sample. As described in Subsection 4.2, we implemented a simple SVM-based WSD system (see Section 4.2) and applied it to the sense-matching task. Results are reported in Table 3. The direct strategy surpasses the most frequent baseline F1 score, but the achieved precision is still below it. We note that in this multi-class setting it is less straightforward to tradeoff recall for precision, as all senses compete with each other. Direct. In the direct supervised setting, sense matching is performed by training a binary classifier, as described in Subsection 4.2. The advantage of adopting a binary classification strategy is that the precision/recall tradeoff can be tuned in a meaningful way. In SVM learning, such tuning is achieved by varying the parameter J, that allows us to modify the cost function of the SVM learning algorithm. If J = 1 (default), the weight for the positive examples is equal to the weight for the negatives. When J > 1, negative examples are penalized (increasing recall), while, whenever 0 < J < 1, positive examples are penalized (increasing precision). Results obtained by varying this parameter are reported in Figure 1. Figure 1: Direct supervised results varying J 454 Supervised P R F1 Unsupervised P R F1 Most Frequent Baseline 0.65 0.41 0.51 Random Baseline 0.26 0.50 0.34 Multiclass SVM Indirect 0.59 0.63 0.61 Lesk Indirect 0.24 0.19 0.21 Binary SVM (J = 0.5) Direct 0.80 0.26 0.39 One-Class ν = 0.3 Direct 0.26 0.72 0.39 Binary SVM (J = 1) Direct 0.76 0.46 0.57 One-Class ν = 0.5 Direct 0.29 0.56 0.38 Binary SVM (J = 2) Direct 0.68 0.53 0.60 One-Class ν = 0.7 Direct 0.28 0.36 0.32 Binary SVM (J = 3) Direct 0.69 0.55 0.61 One-Class ν = 0.9 Direct 0.23 0.10 0.14 Table 3: Classification results on the sense matching task Adopting the standard parameter settings (i.e. J = 1, see Table 3), the F1 of the system is slightly lower than for the indirect approach, while it reaches the indirect figures when J increases. More importantly, reducing J allows us to boost precision towards 100%. This feature is of great interest for lexical substitution, particularly in precision oriented applications like IR and QA, for filtering irrelevant candidate answers or documents. 5.3 Unsupervised methods Indirect. To evaluate the indirect unsupervised settings we implemented the Lesk algorithm, described in Subsection 4.3.1, and evaluated it on the sense matching task. The obtained figures, reported in Table 3, are clearly below the baseline, suggesting that simple unsupervised indirect strategies cannot be used for this task. In fact, the error of the first step, due to low WSD accuracy of the unsupervised technique, is propagated in the second step, producing poor sense matching. 
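The cost-sensitive tuning described above for the direct supervised classifier can be reproduced with most SVM packages. In the sketch below, scikit-learn's class_weight plays a role analogous to SVMLight's J parameter; this substitution, and the binary feature dictionaries, are our assumptions rather than the paper's setup:

```python
from sklearn.feature_extraction import DictVectorizer
from sklearn.svm import LinearSVC

def train_direct_matcher(feature_dicts, labels, j=1.0):
    """Binary direct sense-matching classifier. Weighting the positive class
    by j mimics SVMLight's J cost factor: j > 1 penalizes missed positives
    (raising recall), while j < 1 penalizes false positives (raising precision)."""
    vec = DictVectorizer()
    X = vec.fit_transform(feature_dicts)
    clf = LinearSVC(class_weight={True: j, False: 1.0})
    clf.fit(X, labels)
    return vec, clf
```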
Unfortunately, state-of-the-art unsupervised systems are actually not much better than Lesk on allwords task (Mihalcea and Edmonds, 2004), discouraging the use of unsupervised indirect methods for the sense matching task. Direct. Conceptually, the most appealing solution for the sense matching task is the one-class approach proposed for the direct method (Section 4.3.2). To perform our experiments, we trained a different one-class SVM for each source word, using a sample of its unlabeled occurrences in the BNC corpus as training set. To avoid huge training sets and to speed up the learning process, we fixed the maximum number of training examples to 10000 occurrences per word, collecting on average about 6500 occurrences per word. For each target word in the test sample, we applied the classifier of the corresponding source word. Results for different values of ν are reported in Figure 2 and summarized in Table 3. Figure 2: One-class evaluation varying ν While the results are somewhat above the baseline, just small improvements in precision are reported, and recall is higher than the baseline for ν < 0.6. Such small improvements may suggest that we are following a relevant direction, even though they may not be useful yet for an applied sense-matching setting. Further analysis of the classification results for each word revealed that optimal F1 values are obtained by adopting different values of ν for different words. In the optimal (in retrospect) parameter settings for each word, performance for the test set is noticeably boosted, achieving P = 0.40, R = 0.85 and F1 = 0.54. Finding a principled unsupervised way to automatically tune the ν parameter is thus a promising direction for future work. Investigating further the results per word, we found that the correlation coefficient between the optimal ν values and the degree of polysemy of the corresponding source words is 0.35. More interestingly, we noticed a negative correlation (r = -0.30) between the achieved F1 and the degree of polysemy of the word, suggesting that polysemous source words provide poor training models for sense matching. This can be explained by observing that polysemous source words can be substituted with the target words only for a strict sub455 set of their senses. On the other hand, our oneclass algorithm was trained on all the examples of the source word, which include irrelevant examples that yield noisy training sets. A possible solution may be obtained using clustering-based word sense discrimination methods (Pedersen and Bruce, 1997; Sch¨utze, 1998), in order to train different one-class models from different sense clusters. Overall, the analysis suggests that future research may obtain better binary classifiers based just on unlabeled examples of the source word. 6 Conclusion This paper investigated the sense matching task, which captures directly the polysemy problem in lexical substitution. We proposed a direct approach for the task, suggesting the advantages of natural control of precision/recall tradeoff, avoiding the need in an explicitly defined sense repository, and, most appealing, the potential for novel completely unsupervised learning schemes. We speculate that there is a great potential for such approaches, and suggest that sense matching may become an appealing problem and possible track in lexical semantic evaluations. Acknowledgments This work was partly developed under the collaboration ITC-irst/University of Haifa. References Ido Dagan, Oren Glickman, and Bernardo Magnini. 2005. 
The pascal recognising textual entailment challenge. Proceedings of the PASCAL Challenges Workshop on Recognising Textual Entailment. C. Fellbaum. 1998. WordNet. An Electronic Lexical Database. MIT Press. J. Gonzalo, F. Verdejo, I. Chugur, and J. Cigarran. 1998. Indexing with wordnet synsets can improve text retrieval. In ACL, Montreal, Canada. T. Joachims. 1999. Making large-scale SVM learning practical. In B. Sch¨olkopf, C. Burges, and A. Smola, editors, Advances in kernel methods: support vector learning, chapter 11, pages 169 – 184. MIT Press. M. Lesk. 1986. Automatic sense disambiguation using machine readable dictionaries: How to tell a pine cone from an ice cream cone. In Proceedings of the ACM-SIGDOC Conference, Toronto, Canada. Dekang Lin. 1998. Automatic retrieval and clustering of similar words. In Proceedings of the 17th international conference on Computational linguistics, pages 768–774, Morristown, NJ, USA. Association for Computational Linguistics. Diana McCarthy, Rob Koeling, Julie Weeds, and John Carroll. 2004. Automatic identification of infrequent word senses. In Proceedings of COLING, pages 1220–1226. Diana McCarthy. 2002. Lexical substitution as a task for wsd evaluation. In Proceedings of the ACL02 workshop on Word sense disambiguation, pages 109–115, Morristown, NJ, USA. Association for Computational Linguistics. R. Mihalcea and P. Edmonds, editors. 2004. Proceedings of SENSEVAL-3: Third International Workshop on the Evaluation of Systems for the Semantic Analysis of Text, Barcelona, Spain, July. D. Moldovan and R. Mihalcea. 2000. Using wordnet and lexical operators to improve internet searches. IEEE Internet Computing, 4(1):34–43, January. M. Negri. 2004. Sense-based blind relevance feedback for question answering. In SIGIR-2004 Workshop on Information Retrieval For Question Answering (IR4QA), Sheffield, UK, July. T. Pedersen and R. Bruce. 1997. Distinguishing word sense in untagged text. In EMNLP, Providence, August. M. Sanderson. 1994. Word sense disambiguation and information retrieval. In SIGIR, Dublin, Ireland, June. B. Sch¨olkopf, J. Platt, J. Shawe-Taylor, A. J. Smola, and R. C. Williamson. 2001. Estimating the support of a high-dimensional distribution. Neural Computation, 13:1443–1471. H. Sch¨utze. 1998. Automatic word sense discrimination. Computational Linguistics, 24(1). H. Sh¨utze and J. Pederson. 1995. Information retrieval based on word senses. In Proceedings of the 4th Annual Symposium on Document Analysis and Information Retrieval, Las Vegas. E. Voorhees. 1993. Using WordNet to disambiguate word sense for text retrieval. In SIGIR, Pittsburgh, PA. E. Voorhees. 1994. Query expansion using lexicalsemantic relations. In Proceedings of the 17th ACM SIGIR Conference, Dublin, Ireland, June. D. Yarowsky. 1994. Decision lists for lexical ambiguity resolution: Application to accent restoration in spanish and french. In ACL, pages 88–95, Las Cruces, New Mexico. 456
2006
57
Proceedings of the 21st International Conference on Computational Linguistics and 44th Annual Meeting of the ACL, pages 457–464, Sydney, July 2006. c⃝2006 Association for Computational Linguistics An Equivalent Pseudoword Solution to Chinese Word Sense Disambiguation Zhimao Lu+ Haifeng Wang++ Jianmin Yao+++ Ting Liu+ Sheng Li+ + Information Retrieval Laboratory, School of Computer Science and Technology, Harbin Institute of Technology, Harbin, 150001, China {lzm, tliu, lisheng}@ir-lab.org ++ Toshiba (China) Research and Development Center 5/F., Tower W2, Oriental Plaza, No. 1, East Chang An Ave., Beijing, 100738, China [email protected] +++ School of Computer Science and Technology Soochow University, Suzhou, 215006, China [email protected] Abstract This paper presents a new approach based on Equivalent Pseudowords (EPs) to tackle Word Sense Disambiguation (WSD) in Chinese language. EPs are particular artificial ambiguous words, which can be used to realize unsupervised WSD. A Bayesian classifier is implemented to test the efficacy of the EP solution on Senseval-3 Chinese test set. The performance is better than state-of-the-art results with an average F-measure of 0.80. The experiment verifies the value of EP for unsupervised WSD. 1 Introduction Word sense disambiguation (WSD) has been a hot topic in natural language processing, which is to determine the sense of an ambiguous word in a specific context. It is an important technique for applications such as information retrieval, text mining, machine translation, text classification, automatic text summarization, and so on. Statistical solutions to WSD acquire linguistic knowledge from the training corpus using machine learning technologies, and apply the knowledge to disambiguation. The first statistical model of WSD was built by Brown et al. (1991). Since then, most machine learning methods have been applied to WSD, including decision tree, Bayesian model, neural network, SVM, maximum entropy, genetic algorithms, and so on. For different learning methods, supervised methods usually achieve good performance at a cost of human tagging of training corpus. The precision improves with larger size of training corpus. Compared with supervised methods, unsupervised methods do not require tagged corpus, but the precision is usually lower than that of the supervised methods. Thus, knowledge acquisition is critical to WSD methods. This paper proposes an unsupervised method based on equivalent pseudowords, which acquires WSD knowledge from raw corpus. This method first determines equivalent pseudowords for each ambiguous word, and then uses the equivalent pseudowords to replace the ambiguous word in the corpus. The advantage of this method is that it does not need parallel corpus or seed corpus for training. Thus, it can use a largescale monolingual corpus for training to solve the data-sparseness problem. Experimental results show that our unsupervised method performs better than the supervised method. The remainder of the paper is organized as follows. Section 2 summarizes the related work. Section 3 describes the conception of Equivalent Pseudoword. Section 4 describes EP-based Unsupervised WSD Method and the evaluation result. The last section concludes our approach. 2 Related Work For supervised WSD methods, a knowledge acquisition bottleneck is to prepare the manually 457 tagged corpus. Unsupervised method is an alternative, which often involves automatic generation of tagged corpus, bilingual corpus alignment, etc. 
The value of unsupervised methods lies in the knowledge acquisition solutions they adopt. 2.1 Automatic Generation of Training Corpus Automatic corpus tagging is a solution to WSD, which generates large-scale corpus from a small seed corpus. This is a weakly supervised learning or semi-supervised learning method. This reinforcement algorithm dates back to Gale et al. (1992a). Their investigation was based on a 6word test set with 2 senses for each word. Yarowsky (1994 and 1995), Mihalcea and Moldovan (2000), and Mihalcea (2002) have made further research to obtain large corpus of higher quality from an initial seed corpus. A semi-supervised method proposed by Niu et al. (2005) clustered untagged instances with tagged ones starting from a small seed corpus, which assumes that similar instances should have similar tags. Clustering was used instead of bootstrapping and was proved more efficient. 2.2 Method Based on Parallel Corpus Parallel corpus is a solution to the bottleneck of knowledge acquisition. Ide et al. (2001 and 2002), Ng et al. (2003), and Diab (2003, 2004a, and 2004b) made research on the use of alignment for WSD. Diab and Resnik (2002) investigated the feasibility of automatically annotating large amounts of data in parallel corpora using an unsupervised algorithm, making use of two languages simultaneously, only one of which has an available sense inventory. The results showed that wordlevel translation correspondences are a valuable source of information for sense disambiguation. The method by Li and Li (2002) does not require parallel corpus. It avoids the alignment work and takes advantage of bilingual corpus. In short, technology of automatic corpus tagging is based on the manually labeled corpus. That is to say, it still need human intervention and is not a completely unsupervised method. Large-scale parallel corpus; especially wordaligned corpus is highly unobtainable, which has limited the WSD methods based on parallel corpus. 3 Equivalent Pseudoword This section describes how to obtain equivalent pseudowords without a seed corpus. Monosemous words are unambiguous priori knowledge. According to our statistics, they account for 86%~89% of the instances in a dictionary and 50% of the items in running corpus, they are potential knowledge source for WSD. A monosemous word is usually synonymous to some polysemous words. For example the words "信守, 严守, 恪守 遵照遵从遵循 , , , , 遵守" has similar meaning as one of the senses of the ambiguous word "保守", while "康健, 强 健, 健旺健壮壮健 , , 强壮精壮壮实敦实 , , , , , 硬朗康泰健朗健硕 , , , " are the same for "健康". This is quite common in Chinese, which can be used as a knowledge source for WSD. 3.1 Definition of Equivalent Pseudoword If the ambiguous words in the corpus are replaced with its synonymous monosemous word, then is it convenient to acquire knowledge from raw corpus? For example in table 1, the ambiguous word "把握" has three senses, whose synonymous monosemous words are listed on the right column. These synonyms contain some information for disambiguation task. An artificial ambiguous word can be coined with the monosemous words in table 1. This process is similar to the use of general pseudowords (Gale et al., 1992b; Gaustad, 2001; Nakov and Hearst, 2003), but has some essential differences. This artificial ambiguous word need to simulate the function of the real ambiguous word, and to acquire semantic knowledge as the real ambiguous word does. Thus, we call it an equivalent pseudoword (EP) for its equivalence with the real ambiguous word. 
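Concretely, an EP can be represented as a mapping from each sense of the ambiguous word to its monosemous morpheme words, and every raw-corpus occurrence of a morpheme can then be harvested as a pseudo-labeled training instance of that sense. A sketch, using the "把握" example of Table 1 below (the data structures and names are ours):

```python
# Hypothetical EP for the ambiguous word "把握", following Table 1.
EP = {
    'S1': ['信心', '自信心'],
    'S2': ['握住', '在握', '把住', '抓住', '控制'],
    'S3': ['领会', '理解', '领悟', '深谙', '体会'],
}

def harvest_instances(sentences, ep):
    """Treat every occurrence of a morpheme word in the raw corpus as a
    labeled instance of the corresponding sense of the original word."""
    instances = []
    for tokens in sentences:                 # each sentence is a list of words
        for i, tok in enumerate(tokens):
            for sense, morphemes in ep.items():
                if tok in morphemes:
                    instances.append((sense, tokens[:i] + tokens[i + 1:]))
    return instances
```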
It's apparent that the equivalent pseudoword has provided a new way to unsupervised WSD. S1 信心/自信心 S2 握住/在握/把住/抓住/控制 把握(ba3 wo4) S3 领会/理解/领悟/深谙/体会 Table 1. Synonymous Monosemous Words for the Ambiguous Word "把握" The equivalence of the EP with the real ambiguous word is a kind of semantic synonym or similarity, which demands a maximum similarity between the two words. An ambiguous word has the same number of EPs as of senses. Each EP's sense maps to a sense of ambiguous word. The semantic equivalence demands further equivalence at each sense level. Every corre458 sponding sense should have the maximum similarity, which is the strictest limit to the construction of an EP. The starting point of unsupervised WSD based on EP is that EP can substitute the original word for knowledge acquisition in model training. Every instance of each morpheme of the EP can be viewed as an instance of the ambiguous word, thus the training set can be enlarged easily. EP is a solution to data sparseness for lack of human tagging in WSD. 3.2 Basic Assumption for EP-based WSD It is based on the following assumptions that EPs can substitute the original ambiguous word for knowledge acquisition in WSD model training. Assumption 1: Words of the same meaning play the same role in a language. The sense is an important attribute of a word. This plays as the basic assumption in this paper. Assumption 2: Words of the same meaning occur in similar context. This assumption is widely used in semantic analysis and plays as a basis for much related research. For example, some researchers cluster the contexts of ambiguous words for WSD, which shows good performance (Schutze, 1998). Because an EP has a higher similarity with the ambiguous word in syntax and semantics, it is a useful knowledge source for WSD. 3.3 Design and Construction of EPs Because of the special characteristics of EPs, it's more difficult to construct an EP than a general pseudo word. To ensure the maximum similarity between the EP and the original ambiguous word, the following principles should be followed. 1) Every EP should map to one and only one original ambiguous word. 2) The morphemes of an EP should map one by one to those of the original ambiguous word. 3) The sense of the EP should be the same as the corresponding ambiguous word, or has the maximum similarity with the word. 4) The morpheme of a pseudoword stands for a sense, while the sense should consist of one or more morphemes. 5) The morpheme should be a monosemous word. The fourth principle above is the biggest difference between the EP and a general pseudo word. The sense of an EP is composed of one or several morphemes. This is a remarkable feature of the EP, which originates from its equivalent linguistic function with the original word. To construct the EP, it must be ensured that the sense of the EP maps to that of the original word. Usually, a candidate monosemous word for a morpheme stands for part of the linguistic function of the ambiguous word, thus we need to choose several morphemes to stand for one sense. The relatedness of the senses refers to the similarity of the contexts of the original ambiguous word and its EP. The similarity between the words means that they serve as synonyms for each other. This principle demands that both semantic and pragmatic information should be taken into account in choosing a morpheme word. 3.4 Implementation of the EP-based Solution An appropriate machine-readable dictionary is needed for construction of the EPs. 
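The construction principles above can be phrased as a small validity check; the is_monosemous predicate is a placeholder for the dictionary resource introduced next, and the whole routine is only an illustration of the constraints, not part of the paper's system:

```python
def is_valid_ep(senses, ep, is_monosemous):
    """Check the construction principles sketched above: the EP covers exactly
    the senses of the original word, every sense has at least one morpheme,
    and every morpheme is a monosemous word."""
    if set(ep) != set(senses):
        return False
    if any(len(morphemes) == 0 for morphemes in ep.values()):
        return False
    return all(is_monosemous(w) for ws in ep.values() for w in ws)
```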
A Chinese thesaurus is adopted and revised to meet this demand. Extended Version of TongYiCiCiLin To extend the TongYiCiCiLin (Cilin) to hold more words, several linguistic resources are adopted for manually adding new words. An extended version of the Cilin is achieved, which includes 77,343 items. A hierarchy of three levels is organized in the extended Cilin for all items. Each node in the lowest level, called a minor class, contains several words of the same class. The words in one minor class are divided into several groups according to their sense similarity and relatedness, and each group is further divided into several lines, which can be viewed as the fifth level of the thesaurus. The 5-level hierarchy of the extended Cilin is shown in figure 1. The lower the level is, the more specific the sense is. The fifth level often contains a few words or only one word, which is called an atom word group, an atom class or an atom node. The words in the same atom node hold the smallest semantic distance. From the root node to the leaf node, the sense is described more and more detailed, and the words in the same node are more and more related. Words in the same fifth level node have the same sense and linguistic function, which ensures that they can substitute for each other without leading to any change in the meaning of a sentence. 459 … … … …… …… … … … … … … … … … … … Level 1 Level 2 Level 3 Level 4 Level 5 … … Figure 1. Organization of Cilin (extended) The extended version of extended Cilin is freely downloadable from the Internet and has been used by over 20 organizations in the world1. Construction of EPs According to the position of the ambiguous word, a proper word is selected as the morpheme of the EP. Almost every ambiguous word has its corresponding EP constructed in this way. The first step is to decide the position of the ambiguous word starting from the leaf node of the tree structure. Words in the same leaf node are identical or similar in the linguistic function and word sense. Other words in the leaf node of the ambiguous word are called brother words of it. If there is a monosemous brother word, it can be taken as a candidate morpheme for the EP. If there does not exist such a brother word, trace to the fourth level. If there is still no monosemous brother word in the fourth level, trace to the third level. Because every node in the third level contains many words, candidate morpheme for the ambiguous can usually be found. In most cases, candidate morphemes can be found at the fifth level. It is not often necessary to search to the fourth level, less to the third. According to our statistics, the extended Cilin contains about monosemous words for 93% of the ambiguous words in the fifth level, and 97% in the fourth level. There are only 112 ambiguous words left, which account for the other 3% and mainly are functional words. Some of the 3% words are rarely used, which cannot be found in even a large corpus. And words that lead to semantic misunderstanding are usually content words. In WSD research for English, only nouns, verbs, adjectives and adverbs are considered. 1 It is located at http://www.ir-lab.org/. From this aspect, the extended version of Cilin meets our demand for the construction of EPs. If many monosemous brother words are found in the fourth or third level, there are many candidate morphemes to choose from. A further selection is made based on calculation of sense similarity. More similar brother words are chosen. 
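The back-off search for candidate morphemes can be sketched as follows; cilin_path, members and is_monosemous are hypothetical accessors over the extended Cilin rather than an existing API:

```python
def candidate_morphemes(word, cilin_path, members, is_monosemous):
    """Search monosemous 'brother' words for `word`, starting from its
    level-5 (atom) node and backing off to level 4 and then level 3.
    `cilin_path(word)` returns the node codes of the word at levels 5, 4, 3;
    `members(node)` returns all words filed under that node."""
    level5, level4, level3 = cilin_path(word)
    for node in (level5, level4, level3):
        brothers = [w for w in members(node)
                    if w != word and is_monosemous(w)]
        if brothers:
            return brothers          # found at the most specific level available
    return []                        # rare case: no usable brother word
```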
Computing of EPs
Generally, several morpheme words are needed for a better construction of an EP. We assume that every morpheme word stands for a specific sense and that the morphemes do not influence each other. It is more complex to construct an EP than a common pseudoword, and the formulation and statistical information are also different. An EP is described as follows:

  WEP:  S1: W11, W12, W13, ..., W1k
        S2: W21, W22, W23, ..., W2k
        ...
        Si: Wi1, Wi2, Wi3, ..., Wik

where WEP is the EP word, Si is a sense of the ambiguous word, and Wik is a morpheme word of the EP. The statistical information of the EP is calculated as follows:
1) C(Si) stands for the frequency of Si: C(Si) = Σk C(Wik)
2) C(Si, Wf) stands for the co-occurrence frequency of Si and the contextual word Wf: C(Si, Wf) = Σk C(Wik, Wf)

Table 2. The F-measure for the supervised WSD ("citation" gives the results of Qin and Wang (2005), "ours" those of this paper; the last row is the average over the 20 words):
Ambiguous word     citation  ours     Ambiguous word     citation  ours
把握(ba3 wo4)       0.56     0.87     没有(mei2 you3)     0.75     0.68
包(bao1)            0.59     0.75     起来(qi3 lai2)      0.82     0.54
材料(cai2 liao4)    0.67     0.79     钱(qian2)           0.75     0.62
冲击(chong1 ji1)    0.62     0.69     日子(ri4 zi3)       0.75     0.68
穿(chuan1)          0.80     0.57     少(shao3)           0.69     0.56
地方(di4 fang1)     0.65     0.65     突出(tu1 chu1)      0.82     0.86
分子(fen1 zi3)      0.91     0.81     研究(yan2 jiu1)     0.69     0.63
运动(yun4 dong4)    0.61     0.82     活动(huo2 dong4)    0.79     0.88
老(lao3)            0.59     0.50     走(zou3)            0.72     0.60
路(lu4)             0.74     0.64     坐(zuo4)            0.90     0.73
Average             0.72     0.69

4 EP-based Unsupervised WSD Method
EP is a solution to the semantic knowledge acquisition problem, and it does not limit the choice of statistical learning methods. All mathematical modeling methods can be applied to EP-based WSD. This section focuses on the application of the EP concept to WSD and chooses the Bayesian method for the classifier construction.
4.1 A Sense Classifier Based on the Bayesian Model
Because the model acquires knowledge from the EPs rather than from the original ambiguous word, the method introduced here does not need a manually tagged training corpus. In the training stage, statistics of EPs and context words are obtained and stored in a database. The Senseval-3 data set plus an unsupervised learning method are adopted to investigate the value of EPs in WSD. To ensure the comparability of the experimental results, a Bayesian classifier is used in the experiments.
Bayesian Classifier
Although the Bayesian classifier is simple, it is quite efficient, and it shows good performance on WSD. The Bayesian classifier used in this paper is described in (1):

  S(wi) = argmax_Sk [ log P(Sk) + Σ_{vj ∈ ci} log P(vj | Sk) ]    (1)

where wi is the ambiguous word, P(Sk) is the occurrence probability of the sense Sk, P(vj | Sk) is the conditional probability of the context word vj given Sk, and ci is the set of the context words. To simplify the experiment process, naive Bayesian modeling is adopted for the sense classifier. Feature selection and ensemble classification are not applied, both to simplify the calculation and to isolate the effect of EPs in WSD.
Experiment Setup and Results
The Senseval-3 Chinese ambiguous words are taken as the test set, which includes 20 words, each with 2-8 senses. The data for each ambiguous word are divided into a training set and a test set by a ratio of 2:1. There are 15-20 training instances for each sense of a word, and each sense occurs with the same frequency in the training and test sets.
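Formula (1) together with the EP statistics above amounts to simple counting over the harvested instances. A minimal sketch follows; the add-one smoothing is our own addition, since the paper does not specify a smoothing scheme:

```python
import math
from collections import defaultdict

def train_ep_bayes(instances):
    """Collect C(S_i) and C(S_i, w_f) from EP-harvested instances,
    given as (sense label, context word list) pairs."""
    sense_count = defaultdict(float)
    cooc = defaultdict(lambda: defaultdict(float))
    vocab = set()
    for sense, context in instances:
        sense_count[sense] += 1
        for w in context:
            cooc[sense][w] += 1
            vocab.add(w)
    return sense_count, cooc, vocab

def classify(context, sense_count, cooc, vocab, alpha=1.0):
    """Implement S(w_i) = argmax_k [ log P(S_k) + sum_j log P(v_j | S_k) ]."""
    total = sum(sense_count.values())
    best_sense, best_score = None, float('-inf')
    for sense, c_s in sense_count.items():
        denom = sum(cooc[sense].values()) + alpha * len(vocab)
        score = math.log(c_s / total)
        for w in context:
            score += math.log((cooc[sense][w] + alpha) / denom)
        if score > best_score:
            best_sense, best_score = sense, score
    return best_sense
```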
Supervised WSD is first implemented using the Bayesian model on the Senseval-3 data set. With a context window of (-10, +10), the open-test results are shown in Table 2. The F-measure in Table 2 is defined in (2):

  F = 2 × P × R / (P + R)    (2)

where P and R refer to the precision and recall of the sense tagging respectively, which are calculated as shown in (3) and (4):

  P = C(correct) / C(tagged)    (3)
  R = C(correct) / C(all)       (4)

where C(tagged) is the number of tagged instances of senses, C(correct) is the number of correct tags, and C(all) is the number of tags in the gold standard set. Every sense of the ambiguous word has a P value, an R value and an F value; the F value in Table 2 is a weighted average over all the senses.
In the EP-based unsupervised WSD experiment, a 100M corpus (People's Daily for the year 1998) is used for the EP training instances, and the Senseval-3 data are used for the test. In our experiments, a context window of (-10, +10) is taken. The detailed results are shown in Table 3.
4.2 Experiment Analysis and Discussion
Experiment Evaluation Method
Two evaluation criteria are used in the experiments: the F-measure and precision. Precision is the usual criterion in WSD performance analysis; only in recent years have precision, recall, and F-measure all been taken to evaluate WSD performance. In this paper we report only the F-measure, because it is a combined score of precision and recall.
Result Analysis of the Bayesian Supervised WSD Experiment
The experimental results in Table 2 reveal that the results of our supervised WSD and those of (Qin and Wang, 2005) are different. Although both are based on the Bayesian model, Qin and Wang (2005) used an ensemble classifier. However, the difference in the average value is not remarkable. As introduced above, in the supervised WSD experiment the senses of the instances are evenly distributed. The lower bound, as Gale et al. (1992c) suggested, should therefore be very low, and it is more difficult to disambiguate when there are more senses. The experiment verifies this reasoning: the highest F-measure is less than 90%, the lowest is less than 60%, and the average is about 70%. With the same number of senses and the same scale of training data, there is still a big difference between the WSD results. This shows that factors other than the number of senses and the training data size influence the performance. For example, the discriminability among the senses is an important factor: the WSD task becomes more difficult if the senses of the ambiguous word are more similar to each other.
Experiment Analysis of the EP-based WSD
The EP-based unsupervised method takes the same open test set as the supervised method. The unsupervised method shows a better performance, with the highest F-measure at 100%, the lowest at 59% and an average of 80%. The results show that EPs are useful in unsupervised WSD.

Ambiguous word     F-measure    Ambiguous word     F-measure
把握(ba3 wo4)       0.93        没有(mei2 you3)     1.00
包(bao1)            0.74        起来(qi3 lai2)      0.59
材料(cai2 liao4)    0.80        钱(qian2)           0.71
冲击(chong1 ji1)    0.85        日子(ri4 zi3)       0.62
穿(chuan1)          0.79        少(shao3)           0.82
地方(di4 fang1)     0.78        突出(tu1 chu1)      0.93
分子(fen1 zi3)      0.94        研究(yan2 jiu1)     0.71
运动(yun4 dong4)    0.94        活动(huo2 dong4)    0.89
老(lao3)            0.85        走(zou3)            0.68
路(lu4)             0.81        坐(zuo4)            0.67
Average (over the 20 words)     0.80
Table 3.
The Results for Unsupervised WSD based on EPs 462 From the results in table 2 and table 3, it can be seen that 16 among the 20 ambiguous words show better WSD performance in unsupervised SWD than in supervised WSD, while only 2 of them shows similar results and 2 performs worse . The average F-measure of the unsupervised method is higher by more than 10%. The reason lies in the following aspects: 1) Because there are several morpheme words for every sense of the word in construction of the EP, rich semantic information can be acquired in the training step and is an advantage for sense disambiguation. 2) Senseval-3 has provided a small-scale training set, with 15-20 training instances for each sense, which is not enough for the WSD modeling. The lack of training information leads to a low performance of the supervised methods. 3) With a large-scale training corpus, the unsupervised WSD method has got plenty of training instances for a high performance in disambiguation. 4) The discriminability of some ambiguous word may be low, but the corresponding EPs could be easier to disambiguate. For example, the ambiguous word "穿" has two senses which are difficult to distinguish from each other, but its Eps' senses of "越过/穿过/穿越" and "戳/捅/ 通/扎"can be easily disambiguated. It is the same for the word "冲击", whose Eps' senses are "撞 击/磕碰/碰撞" and "损害/伤害". EP-based knowledge acquisition of these ambiguous words for WSD has helped a lot to achieve high performance. 5 Conclusion As discussed above, the supervised WSD method shows a low performance because of its dependency on the size of the training data. This reveals its weakness in knowledge acquisition bottleneck. EP-based unsupervised method has overcame this weakness. It requires no manually tagged corpus to achieve a satisfactory performance on WSD. Experimental results show that EP-based method is a promising solution to the large-scale WSD task. In future work, we will examine the effectiveness of EP-based method in other WSD techniques. References Peter F. Brown, Stephen A. Della Pietra, Vincent J. Della Pietra, and Robert L. Mercer. 1991. WordSense Disambiguation Using Statistical Methods. In Proc. of the 29th Annual Meeting of the Association for Computational Linguistics (ACL-1991), pages 264-270. Mona Talat Diab. 2003. Word Sense Disambiguation Within a Multilingual Framework. PhD thesis, University of Maryland College Park. Mona Diab. 2004a. Relieving the Data Acquisition Bottleneck in Word Sense Disambiguation. In Proc. of the 42nd Annual Meeting of the Association for Computational Linguistics (ACL-2004), pages 303310. Mona T. Diab. 2004b. An Unsupervised Approach for Bootstrapping Arabic Sense Tagging. In Proc. of Arabic Script Based Languages Workshop at COLING 2004, pages 43-50. Mona Diab and Philip Resnik. 2002. An Unsupervised Method for Word Sense Tagging Using Parallel Corpora. In Proc. of the 40th Annual Meeting of the Association for Computational Linguistics (ACL-2002), pages 255-262. William Gale, Kenneth Church, and David Yarowsky. 1992a. Using Bilingual Materials to Develop Word Sense Disambiguation Methods. In Proc. of the 4th International Conference on Theoretical and Methodolgical Issues in Machine Translation(TMI-92), pages 101-112. William Gale, Kenneth Church, and David Yarowsky. 1992b. Work on Statistical Methods for Word Sense Disambiguation. In Proc. of AAAI Fall Symposium on Probabilistic Approaches to Natural Language, pages 54-60. William Gale, Kenneth Ward Church, and David Yarowsky. 1992c. 
Estimating Upper and Lower Bounds on the Performance of Word Sense Disambiguation Programs. In Proc. of the 30th Annual Meeting of the Association for Computational Linguistics (ACL-1992), pages 249-256. Tanja Gaustad. 2001. Statistical Corpus-Based Word Sense Disambiguation: Pseudowords vs. Real Ambiguous Words. In Proc. of the 39th ACL/EACL, Student Research Workshop, pages 61-66. Nancy Ide, Tomaz Erjavec, and Dan Tufiş. 2001. Automatic Sense Tagging Using Parallel Corpora. In Proc. of the Sixth Natural Language Processing Pacific Rim Symposium, pages 83-89. Nancy Ide, Tomaz Erjavec, and Dan Tufis. 2002. Sense Discrimination with Parallel Corpora. In Workshop on Word Sense Disambiguation: Recent Successes and Future Directions, pages 54-60. Cong Li and Hang Li. 2002. Word Translation Disambiguation Using Bilingual Bootstrapping. In Proc. of the 40th Annual Meeting of the Association 463 for Computational Linguistics (ACL-2002), pages 343-351. Rada Mihalcea and Dan Moldovan. 2000. An Iterative Approach to Word Sense Disambiguation. In Proc. of Florida Artificial Intelligence Research Society Conference (FLAIRS 2000), pages 219-223. Rada F. Mihalcea. 2002. Bootstrapping Large Sense Tagged Corpora. In Proc. of the 3rd International Conference on Languages Resources and Evaluations (LREC 2002), pages 1407-1411. Preslav I. Nakov and Marti A. Hearst. 2003. Category-based Pseudowords. In Companion Volume to the Proceedings of HLT-NAACL 2003, Short Papers, pages 67-69. Hwee Tou. Ng, Bin Wang, and Yee Seng Chan. 2003. Exploiting Parallel Texts for Word Sense Disambiguation: An Empirical Study. In Proc. of the 41st Annual Meeting of the Association for Computational Linguistics (ACL-2003), pages 455-462. Zheng-Yu Niu, Dong-Hong Ji, and Chew-Lim Tan. 2005. Word Sense Disambiguation Using Label Propagation Based Semi-Supervised Learning. In Proc. of the 43th Annual Meeting of the Association for Computational Linguistics (ACL-2005), pages 395-402. Ying Qin and Xiaojie Wang. 2005. A Track-based Method on Chinese WSD. In Proc. of Joint Symposium of Computational Linguistics of China (JSCL2005), pages 127-133. Hinrich. Schutze. 1998. Automatic Word Sense Discrimination. Computational Linguistics, 24(1): 97123. David Yarowsky. 1994. Decision Lists for Lexical Ambiguity Resolution: Application to Accent Restoration in Spanish and French. In Proc. of the 32nd Annual Meeting of the Association for Computational Linguistics(ACL-1994), pages 88-95. David Yarowsky. 1995. Unsupervised Word Sense Disambiguation Rivaling Supervised Methods. In Proc. of the 33rd Annual Meeting of the Association for Computational Linguistics (ACL-1995), pages 189-196. 464
2006
58
Proceedings of the 21st International Conference on Computational Linguistics and 44th Annual Meeting of the ACL, pages 465–472, Sydney, July 2006. c⃝2006 Association for Computational Linguistics Improving the Scalability of Semi-Markov Conditional Random Fields for Named Entity Recognition Daisuke Okanohara† Yusuke Miyao† Yoshimasa Tsuruoka ‡ Jun’ichi Tsujii†‡§ †Department of Computer Science, University of Tokyo Hongo 7-3-1, Bunkyo-ku, Tokyo, Japan ‡School of Informatics, University of Manchester POBox 88, Sackville St, MANCHESTER M60 1QD, UK §SORST, Solution Oriented Research for Science and Technology Honcho 4-1-8, Kawaguchi-shi, Saitama, Japan {hillbig,yusuke,tsuruoka,tsujii}@is.s.u-tokyo.ac.jp Abstract This paper presents techniques to apply semi-CRFs to Named Entity Recognition tasks with a tractable computational cost. Our framework can handle an NER task that has long named entities and many labels which increase the computational cost. To reduce the computational cost, we propose two techniques: the first is the use of feature forests, which enables us to pack feature-equivalent states, and the second is the introduction of a filtering process which significantly reduces the number of candidate states. This framework allows us to use a rich set of features extracted from the chunk-based representation that can capture informative characteristics of entities. We also introduce a simple trick to transfer information about distant entities by embedding label information into non-entity labels. Experimental results show that our model achieves an F-score of 71.48% on the JNLPBA 2004 shared task without using any external resources or post-processing techniques. 1 Introduction The rapid increase of information in the biomedical domain has emphasized the need for automated information extraction techniques. In this paper we focus on the Named Entity Recognition (NER) task, which is the first step in tackling more complex tasks such as relation extraction and knowledge mining. Biomedical NER (Bio-NER) tasks are, in general, more difficult than ones in the news domain. For example, the best F-score in the shared task of Bio-NER in COLING 2004 JNLPBA (Kim et al., 2004) was 72.55% (Zhou and Su, 2004) 1, whereas the best performance at MUC-6, in which systems tried to identify general named entities such as person or organization names, was an accuracy of 95% (Sundheim, 1995). Many of the previous studies of Bio-NER tasks have been based on machine learning techniques including Hidden Markov Models (HMMs) (Bikel et al., 1997), the dictionary HMM model (Kou et al., 2005) and Maximum Entropy Markov Models (MEMMs) (Finkel et al., 2004). Among these methods, conditional random fields (CRFs) (Lafferty et al., 2001) have achieved good results (Kim et al., 2005; Settles, 2004), presumably because they are free from the so-called label bias problem by using a global normalization. Sarawagi and Cohen (2004) have recently introduced semi-Markov conditional random fields (semi-CRFs). They are defined on semi-Markov chains and attach labels to the subsequences of a sentence, rather than to the tokens2. The semiMarkov formulation allows one to easily construct entity-level features. Since the features can capture all the characteristics of a subsequence, we can use, for example, a dictionary feature which measures the similarity between a candidate segment and the closest element in the dictionary. Kou et al. 
(2005) have recently showed that semiCRFs perform better than CRFs in the task of recognition of protein entities. The main difficulty of applying semi-CRFs to Bio-NER lies in the computational cost at training 1Krauthammer (2004) reported that the inter-annotator agreement rate of human experts was 77.6% for bio-NLP, which suggests that the upper bound of the F-score in a BioNER task may be around 80%. 2Assuming that non-entity words are placed in unit-length segments. 465 Table 1: Length distribution of entities in the training set of the shared task in 2004 JNLPBA Length # entity Ratio 1 21646 42.19 2 15442 30.10 3 7530 14.68 4 3505 6.83 5 1379 2.69 6 732 1.43 7 409 0.80 8 252 0.49 >8 406 0.79 total 51301 100.00 because the number of named entity classes tends to be large, and the training data typically contain many long entities, which makes it difficult to enumerate all the entity candidates in training. Table 1 shows the length distribution of entities in the training set of the shared task in 2004 JNLPBA. Formally, the computational cost of training semiCRFs is O(KLN), where L is the upper bound length of entities, N is the length of sentence and K is the size of label set. And that of training in first order semi-CRFs is O(K2LN). The increase of the cost is used to transfer non-adjacent entity information. To improve the scalability of semi-CRFs, we propose two techniques: the first is to introduce a filtering process that significantly reduces the number of candidate entities by using a “lightweight” classifier, and the second is to use feature forest (Miyao and Tsujii, 2002), with which we pack the feature equivalent states. These enable us to construct semi-CRF models for the tasks where entity names may be long and many class-labels exist at the same time. We also present an extended version of semi-CRFs in which we can make use of information about a preceding named entity in defining features within the framework of first order semi-CRFs. Since the preceding entity is not necessarily adjacent to the current entity, we achieve this by embedding the information on preceding labels for named entities into the labels for non-named entities. 2 CRFs and Semi-CRFs CRFs are undirected graphical models that encode a conditional probability distribution using a given set of features. CRFs allow both discriminative training and bi-directional flow of probabilistic information along the sequence. In NER, we often use linear-chain CRFs, which define the conditional probability of a state sequence y = y1, ..., yn given the observed sequence x = x1,...,xn by: p(y|x, λ) = 1 Z(x) exp(Σn i=1Σjλjfj(yi−1, yi, x, i)), (1) where fj(yi−1, yi, x, i) is a feature function and Z(x) is the normalization factor over all the state sequences for the sequence x. The model parameters are a set of real-valued weights λ = {λj}, each of which represents the weight of a feature. All the feature functions are real-valued and can use adjacent label information. Semi-CRFs are actually a restricted version of order-L CRFs in which all the labels in a chunk are the same. We follow the definitions in (Sarawagi and Cohen, 2004). Let s = ⟨s1, ..., sp⟩denote a segmentation of x, where a segment sj = ⟨tj, uj, yj⟩consists of a start position tj, an end position uj, and a label yj. 
We assume that segments have a positive length bounded above by the pre-defined upper bound L (tj ≤uj, uj −tj + 1 ≤L) and completely cover the sequence x without overlapping, that is, s satisfies t1 = 1, up = |x|, and tj+1 = uj + 1 for j = 1, ..., p −1. Semi-CRFs define a conditional probability of a state sequence y given an observed sequence x by: p(y|x, λ) = 1 Z(x) exp(ΣjΣiλifi(sj)), (2) where fi(sj) := fi(yj−1, yj, x, tj, uj) is a feature function and Z(x) is the normalization factor as defined for CRFs. The inference problem for semi-CRFs can be solved by using a semi-Markov analog of the usual Viterbi algorithm. The computational cost for semi-CRFs is O(KLN) where L is the upper bound length of entities, N is the length of sentence and K is the number of label set. If we use previous label information, the cost becomes O(K2LN). 3 Using Non-Local Information in Semi-CRFs In conventional CRFs and semi-CRFs, one can only use the information on the adjacent previous label when defining the features on a certain state or entity. In NER tasks, however, information about a distant entity is often more useful than 466 O protein O O DNA O protein O-protein O-protein DNA Figure 1: Modification of “O” (other labels) to transfer information on a preceding named entity. information about the previous state (Finkel et al., 2005). For example, consider the sentence “... including Sp1 and CP1.” where the correct labels of “Sp1” and “CP1” are both “protein”. It would be useful if the model could utilize the (non-adjacent) information about “Sp1” being “protein” to classify “CP1” as “protein”. On the other hand, information about adjacent labels does not necessarily provide useful information because, in many cases, the previous label of a named entity is “O”, which indicates a non-named entity. For 98.0% of the named entities in the training data of the shared task in the 2004 JNLPBA, the label of the preceding entity was “O”. In order to incorporate such non-local information into semi-CRFs, we take a simple approach. We divide the label of “O” into “O-protein” and “O” so that they convey the information on the preceding named entity. Figure 1 shows an example of this conversion, in which the two labels for the third and fourth states are converted from “O” to “O-protein”. When we define the features for the fifth state, we can use the information on the preceding entity “protein” by looking at the fourth state. Since this modification changes only the label set, we can do this within the framework of semi-CRF models. This idea is originally proposed in (Peshkin and Pfeffer, 2003). However, they used a dynamic Bayesian network (DBNs) rather than a semi-CRF, and semi-CRFs are likely to have significantly better performance than DBNs. In previous work, such non-local information has usually been employed at a post-processing stage. This is because the use of long distance dependency violates the locality of the model and prevents us from using dynamic programming techniques in training and inference. Skip-CRFs (Sutton and McCallum, 2004) are a direct implementation of long distance effects to the model. However, they need to determine the structure for propagating non-local information in advance. In a recent study by Finkel et al., (2005), nonlocal information is encoded using an independence model, and the inference is performed by Gibbs sampling, which enables us to use a stateof-the-art factored model and carry out training efficiently, but inference still incurs a considerable computational cost. 
Since our model handles limited type of non-local information, i.e. the label of the preceding entity, the model can be solved without approximation. 4 Reduction of Training/Inference Cost The straightforward implementation of this modeling in semi-CRFs often results in a prohibitive computational cost. In biomedical documents, there are quite a few entity names which consist of many words (names of 8 words in length are not rare). This makes it difficult for us to use semi-CRFs for biomedical NER, because we have to set L to be eight or larger, where L is the upper bound of the length of possible chunks in semi-CRFs. Moreover, in order to take into account the dependency between named entities of different classes appearing in a sentence, we need to incorporate multiple labels into a single probabilistic model. For example, in the shared task in COLING 2004 JNLPBA (Kim et al., 2004) the number of labels is six (“protein”, “DNA”, “RNA”, “cell line”, “cell type” and “other”). This also increases the computational cost of a semi-CRF model. To reduce the computational cost, we propose two methods (see Figure 2). The first is employing a filtering process using a lightweight classifier to remove unnecessary state candidates beforehand (Figure 2 (2)), and the second is the using the feature forest model (Miyao and Tsujii, 2002) (Figure 2 (3)), which employs dynamic programming at training “as much as possible”. 4.1 Filtering with a naive Bayes classifier We introduce a filtering process to remove low probability candidate states. This is the first step of our NER system. After this filtering step, we construct semi-CRFs on the remaining candidate states using a feature forest. Therefore the aim of this filtering is to reduce the number of candidate states, without removing correct entities. This idea 467 (1) Enumerate Candidate States (2) Filtering by Naïve Bayes (3) Construct feature forest Training/ Inference : other : entity : other with preceding entity information Figure 2: The framework of our system. We first enumerate all possible candidate states, and then filter out low probability states by using a light-weight classifier, and represent them by using feature forest. Table 2: Features used in the naive Bayes Classifier for the entity candidate: ws, ws+1, ..., we. spi is the result of shallow parsing at wi. Feature Name Example of Features Start/End Word ws, we Inside Word ws, ws+1, ... , we Context Word ws−1, we+1 Start/End SP sps, spe Inside SP sps, sps+1, ..., spe Context SP sps−1, spe+1 is similar to the method proposed by Tsuruoka and Tsujii (2005) for chunk parsing, in which implausible phrase candidates are removed beforehand. We construct a binary naive Bayes classifier using the same training data as those for semi-CRFs. In training and inference, we enumerate all possible chunks (the max length of a chunk is L as for semi-CRFs) and then classify those into “entity” or “other”. Table 2 lists the features used in the naive Bayes classifier. This process can be performed independently of semi-CRFs Since the purpose of the filtering is to reduce the computational cost, rather than to achieve a good F-score by itself, we chose the threshold probability of filtering so that the recall of filtering results would be near 100 %. 4.2 Feature Forest In estimating semi-CRFs, we can use an efficient dynamic programming algorithm, which is similar to the forward-backward algorithm (Sarawagi and Cohen, 2004). 
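For illustration, the semi-Markov analog of the Viterbi algorithm mentioned earlier can be sketched as follows. This is a minimal, non-optimized sketch rather than the authors' implementation: the function `score` is a hypothetical stand-in for the weighted feature sum Σ_i λ_i f_i(y_{j−1}, y_j, x, t_j, u_j) of a trained model, and segments are indexed here as half-open 0-based spans rather than the inclusive (t_j, u_j) pairs of the text.

```python
# Minimal sketch of semi-Markov Viterbi decoding for a first-order semi-CRF.
# `score(prev_label, label, x, start, end)` is a hypothetical stand-in for the
# weighted feature sum of a trained model; it is not part of any library.

def semi_markov_viterbi(x, labels, L, score):
    """Best segmentation/labeling of token sequence x.

    Runs in O(K^2 * L * N) time, matching the cost quoted in the text
    when previous-label information is used.
    """
    N = len(x)
    # best[j][y]: best score of a segmentation of x[0:j] whose last segment
    # is labeled y; back[j][y] stores (segment start, previous label).
    best = [{y: float("-inf") for y in labels} for _ in range(N + 1)]
    back = [{y: None for y in labels} for _ in range(N + 1)]
    best[0] = {y: 0.0 for y in labels}          # empty prefix

    for j in range(1, N + 1):                   # segment end (exclusive)
        for d in range(1, min(L, j) + 1):       # segment length, bounded by L
            i = j - d                           # segment start
            for y in labels:
                for y_prev in labels:
                    s = best[i][y_prev] + score(y_prev, y, x, i, j)
                    if s > best[j][y]:
                        best[j][y] = s
                        back[j][y] = (i, y_prev)

    # Follow back-pointers to recover the best segmentation.
    y = max(best[N], key=best[N].get)
    j, segments = N, []
    while j > 0:
        i, y_prev = back[j][y]
        segments.append((i, j, y))
        j, y = i, y_prev
    return list(reversed(segments))
```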
The proposal here is a more general framework for estimating sequential conditional random fields. This framework is based on the feature forest DNA protein Other DNA protein Other : or node (disjunctive node) : and node (conjunctive node) pos i i+1 … … Figure 3: Example of feature forest representation of linear chain CRFs. Feature functions are assigned to “and” nodes. protein O-protein protein uj =8 prev-entity:protein uj = 8 prev-entity: protein packed pos 8 7 9 Figure 4: Example of packed representation of semi-CRFs. The states that have the same end position and prev-entity label are packed. model, which was originally proposed for disambiguation models for parsing (Miyao and Tsujii, 2002). A feature forest model is a maximum entropy model defined over feature forests, which are abstract representations of an exponential number of sequence/tree structures. A feature forest is an “and/or” graph: in Figure 3, circles represent 468 “and” nodes (conjunctive nodes), while boxes denote “or” nodes (disjunctive nodes). Feature functions are assigned to “and” nodes. We can use the information of the previous “and” node for designing the feature functions through the previous “or” node. Each sequence in a feature forest is obtained by choosing a conjunctive node for each disjunctive node. For example, Figure 3 represents 3 × 3 = 9 sequences, since each disjunctive node has three candidates. It should be noted that feature forests can represent an exponential number of sequences with a polynomial number of conjunctive/disjunctive nodes. One can estimate a maximum entropy model for the whole sequence with dynamic programming by representing the probabilistic events, i.e. sequence of named entity tags, by feature forests (Miyao and Tsujii, 2002). In the previous work (Lafferty et al., 2001; Sarawagi and Cohen, 2004), “or” nodes are considered implicitly in the dynamic programming framework. In feature forest models, “or” nodes are packed when they have same conditions. For example, “or” nodes are packed when they have same end positions and same labels in the first order semi-CRFs, In general, we can pack different “or” nodes that yield equivalent feature functions in the following nodes. In other words, “or” nodes are packed when the following states use partial information on the preceding states. Consider the task of tagging entity and O-entity, where the latter tag is actually O tags that distinguish the preceding named entity tags. When we simply apply first-order semi-CRFs, we must distinguish states that have different previous states. However, when we want to distinguish only the preceding named entity tags rather than the immediate previous states, feature forests can represent these events more compactly (Figure 4). We can implement this as follows. In each “or” node, we generate the following “and” nodes and their feature functions. Then we check whether there exist “or” node which has same conditions by using its information about “end position” and “previous entity”. If so, we connect the “and” node to the corresponding “or” node. If not, we generate a new “or” node and continue the process. Since the states with label O-entity and entity are packed, the computational cost of training in our model (First order semi-CRFs) becomes the half of the original one. 5 Experiments 5.1 Experimental Setting Our experiments were performed on the training and evaluation set provided by the shared task in COLING 2004 JNLPBA (Kim et al., 2004). 
The training data used in this shared task came from the GENIA version 3.02 corpus. In the task there are five semantic labels: protein, DNA, RNA, cell line and cell type. The training set consists of 2000 abstracts from MEDLINE, and the evaluation set consists of 404 abstracts. We divided the original training set into 1800 abstracts and 200 abstracts, and the former was used as the training data and the latter as the development data. For semi-CRFs, we used amis3 for training the semiCRF with feature-forest. We used GENIA taggar4 for POS-tagging and shallow parsing. We set L = 10 for training and evaluation when we do not state L explicitly , where L is the upper bound of the length of possible chunks in semiCRFs. 5.2 Features Table 3 lists the features used in our semi-CRFs. We describe the chunk-dependent features in detail, which cannot be encoded in token-level features. “Whole chunk” is the normalized names attached to a chunk, which performs like the closed dictionary. “Length” and “Length and EndWord” capture the tendency of the length of a named entity. “Count feature” captures the tendency for named entities to appear repeatedly in the same sentence. “Preceding Entity and Prev Word” are features that capture specifically words for conjunctions such as “and” or “, (comma)”, e.g., for the phrase “OCIM1 and K562”, both “OCIM1” and “K562” are assigned cell line labels. Even if the model can determine only that “OCIM1” is a cell line , this feature helps “K562” to be assigned the label cell line. 5.3 Results We first evaluated the filtering performance. Table 4 shows the result of the filtering on the training 3http://www-tsujii.is.s.u-tokyo.ac.jp/amis/ 4http://www-tsujii.is.s.u-tokyo.ac.jp/GENIA/tagger/ Note that the evaluation data are not used for training the GENIA tagger. 469 Table 3: Feature templates used for the chunk s := ws ws+1 ... we where ws and we represent the words at the beginning and ending of the target chunk respectively. pi is the part of speech tag of wi and sci is the shallow parse result of wi. Feature Name description of features Non-Chunk Features Word/POS/SC with Position BEGIN + ws, END + we, IN + ws+1, ..., IN + we−1, BEGIN + ps,... Context Uni-gram/Bi-gram ws−1, we+1, ws−2 + ws−1, we+1 + we+2, ws−1 + we+1 Prefix/Suffix of Chunk 2/3-gram character prefix of ws, 2/3/4-gram character suffix of we Orthography capitalization and word formation of ws...we Chunk Features Whole chunk ws + ws+1 + ... + we Word/POS/SC End Bi-grams we−1 + we, pe−1 + pe, sce−1 + sce Length, Length and End Word |s|, |s|+we Count Feature the frequency of wsws+1..we in a sentence is greater than one Preceding Entity Features Preceding Entity /and Prev Word PrevState, PrevState + ws−1 Table 4: Filtering results using the naive Bayes classifier. The number of entity candidates for the training set was 4179662, and that of the development set was 418628. Training set Threshold probability reduction ratio recall 1.0 × 10−12 0.14 0.984 1.0 × 10−15 0.20 0.993 Development set Threshold probability reduction ratio recall 1.0 × 10−12 0.14 0.985 1.0 × 10−15 0.20 0.994 and evaluation data. The naive Bayes classifiers effectively reduced the number of candidate states with very few falsely removed correct entities. We then examined the effect of filtering on the final performance. 
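To make the filtering step of Section 4.1 concrete, the following is a minimal sketch, not the actual implementation: `nb_entity_prob` is a hypothetical stand-in for the trained binary naive Bayes classifier, only the main feature templates of Table 2 are shown, and the default threshold mirrors one of the settings reported in Table 4.

```python
# Sketch of candidate enumeration and naive-Bayes filtering (Section 4.1).
# `nb_entity_prob(features)` is a hypothetical stand-in returning
# P(entity | features) from a trained binary naive Bayes model.

def chunk_features(tokens, sp_tags, s, e):
    """Main Table 2 features for the candidate chunk tokens[s:e+1]."""
    return {
        "start_word": tokens[s], "end_word": tokens[e],
        "inside_words": tuple(tokens[s:e + 1]),
        "left_word": tokens[s - 1] if s > 0 else "<BOS>",
        "right_word": tokens[e + 1] if e + 1 < len(tokens) else "<EOS>",
        "start_sp": sp_tags[s], "end_sp": sp_tags[e],
        "inside_sp": tuple(sp_tags[s:e + 1]),
        "left_sp": sp_tags[s - 1] if s > 0 else "<BOS>",
        "right_sp": sp_tags[e + 1] if e + 1 < len(sp_tags) else "<EOS>",
    }

def filter_candidates(tokens, sp_tags, L, nb_entity_prob, threshold=1e-15):
    """Enumerate all chunks up to length L and keep those whose naive Bayes
    entity probability clears the threshold (chosen for near-100% recall)."""
    kept = []
    for s in range(len(tokens)):
        for e in range(s, min(s + L, len(tokens))):
            p = nb_entity_prob(chunk_features(tokens, sp_tags, s, e))
            if p >= threshold:
                kept.append((s, e, p))
    return kept
```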
In this experiment, we could not examine the performance without filtering using all the training data, because training on all the training data without filtering required much larger memory resources (estimated to be about 80G Byte) than was possible for our experimental setup. We thus compared the result of the recognizers with and without filtering using only 2000 sentences as the training data. Table 5 shows the result of the total system with different filtering thresholds. The result indicates that the filtering method achieved very well without decreasing the overall performance. We next evaluate the effect of filtering, chunk information and non-local information on final performance. Table 6 shows the performance result for the recognition task. L means the upper bound of the length of possible chunks in semiCRFs. We note that we cannot examine the result of L = 10 without filtering because of the intractable computational cost. The row “w/o Chunk Feature” shows the result of the system which does not employ Chunk-Features in Table 3 at training and inference. The row “Preceding Entity” shows the result of a system which uses Preceding Entity and Preceding Entity and Prev Word features. The results indicate that the chunk features contributed to the performance, and the filtering process enables us to use full chunk representation (L = 10). The results of McNemar’s test suggest that the system with chunk features is significantly better than the system without it (the p-value is less than 1.0 < 10−4). The result of the preceding entity information improves the performance. On the other hand, the system with preceding information is not significantly better than the system without it5. Other non-local information may improve performance with our framework and this is a topic for future work. Table 7 shows the result of the overall performance in our best setting, which uses the information about the preceding entity and 1.0×10−15 threshold probability for filtering. We note that the result of our system is similar to those of other sys5The result of the classifier on development data is 74.64 (without preceding information) and 75.14 (with preceding information). 470 Table 5: Performance with filtering on the development data. (< 1.0 × 10−12) means the threshold probability of the filtering is 1.0 × 10−12. Recall Precision F-score Memory Usage (MB) Training Time (s) Small Training Data = 2000 sentences Without filtering 65.77 72.80 69.10 4238 7463 Filtering (< 1.0 × 10.0−12) 64.22 70.62 67.27 600 1080 Filtering (< 1.0 × 10.0−15) 65.34 72.52 68.74 870 2154 All Training Data = 16713 sentences Without filtering Not available Not available Filtering (< 1.0 × 10.0−12) 70.05 76.06 72.93 10444 14661 Filtering (< 1.0 × 10.0−15) 72.09 78.47 75.14 15257 31636 Table 6: Overall performance on the evaluation set. L is the upper bound of the length of possible chunks in semi-CRFs. Recall Precision F-score L < 5 64.33 65.51 64.92 L = 10 + Filtering (< 1.0 × 10.0−12) 70.87 68.33 69.58 L = 10 + Filtering (< 1.0 × 10.0−15) 72.59 70.16 71.36 w/o Chunk Feature 70.53 69.92 70.22 + Preceding Entity 72.65 70.35 71.48 tems in several respects, that is, the performance of cell line is not good, and the performance of the right boundary identification (78.91% in F-score) is better than that of the left boundary identification (75.19% in F-score). Table 8 shows a comparison between our system and other state-of-the-art systems. 
Our system has achieved a comparable performance to these systems and would be still improved by using external resources or conducting pre/post processing. For example, Zhou et. al (2004) used post processing, abbreviation resolution and external dictionary, and reported that they improved Fscore by 3.1%, 2.1% and 1.2% respectively. Kim et. al (2005) used the original GENIA corpus to employ the information about other semantic classes for identifying term boundaries. Finkel et. al (2004) used gazetteers, web-querying, surrounding abstracts, and frequency counts from the BNC corpus. Settles (2004) used semantic domain knowledge of 17 types of lexicon. Since our approach and the use of external resources/knowledge do not conflict but are complementary, examining the combination of those techniques should be an interesting research topic. Table 7: Performance of our system on the evaluation set Class Recall Precision F-score protein 77.74 68.92 73.07 DNA 69.03 70.16 69.59 RNA 69.49 67.21 68.33 cell type 65.33 82.19 72.80 cell line 57.60 53.14 55.28 overall 72.65 70.35 71.48 Table 8: Comparison with other systems System Recall Precision F-score Zhou et. al (2004) 75.99 69.42 72.55 Our system 72.65 70.35 71.48 Kim et.al (2005) 72.77 69.68 71.19 Finkel et. al (2004) 68.56 71.62 70.06 Settles (2004) 70.3 69.3 69.8 471 6 Conclusion In this paper, we have proposed a single probabilistic model that can capture important characteristics of biomedical named entities. To overcome the prohibitive computational cost, we have presented an efficient training framework and a filtering method which enabled us to apply first order semi-CRF models to sentences having many labels and entities with long names. Our results showed that our filtering method works very well without decreasing the overall performance. Our system achieved an F-score of 71.48% without the use of gazetteers, post-processing or external resources. The performance of our system came close to that of the current best performing system which makes extensive use of external resources and rule based post-processing. The contribution of the non-local information introduced by our method was not significant in the experiments. However, other types of nonlocal information have also been shown to be effective (Finkel et al., 2005) and we will examine the effectiveness of other non-local information which can be embedded into label information. As the next stage of our research, we hope to apply our method to shallow parsing, in which segments tend to be long and non-local information is important. References Daniel M. Bikel, Richard Schwartz, and Ralph Weischedel. 1997. Nymble: a high-performance learning name-finder. In Proc. of the Fifth Conference on Applied Natural Language Processing. Jenny Finkel, Shipra Dingare, Huy Nguyen, Malvina Nissim, Gail Sinclair, and Christopher Manning. 2004. Exploiting context for biomedical entity recognition: From syntax to the web. In Proc. of JNLPBA-04. Jenny Rose Finkel, Trond Grenager, and Christopher Manning. 2005. Incorporating non-local information into information extraction systems by Gibbs sampling. In Proc. of ACL 2005, pages 363–370. Jin-Dong Kim, Tomoko Ohta, Yoshimasa Tsuruoka, Yuka Tateisi, and Nigel Collier. 2004. Introduction to the bio-entity recognition task at JNLPBA. In Proc. of JNLPBA-04, pages 70–75. Seonho Kim, Juntae Yoon, Kyung-Mi Park, and HaeChang Rim. 2005. Two-phase biomedical named entity recognition using a hybrid method. In Proc. 
of the Second International Joint Conference on Natural Language Processing (IJCNLP-05). Zhenzhen Kou, William W. Cohen, and Robert F. Murphy. 2005. High-recall protein entity recognition using a dictionary. Bioinformatics 2005 21. Micahel Krauthammer and Goran Nenadic. 2004. Term identification in the biomedical literature. Jornal of Biomedical Informatics. John Lafferty, Andrew McCallum, and Fernando Pereira. 2001. Conditional random fields: Probabilistic models for segmenting and labeling sequence data. In Proc. of ICML 2001. Yusuke Miyao and Jun’ichi Tsujii. 2002. Maximum entropy estimation for feature forests. In Proc. of HLT 2002. Peshkin and Pfeffer. 2003. Bayesian information extraction network. In IJCAI. Sunita Sarawagi and William W. Cohen. 2004. Semimarkov conditional random fields for information extraction. In NIPS 2004. Burr Settles. 2004. Biomedical named entity recognition using conditional random fields and rich feature sets. In Proc. of JNLPBA-04. Beth M. Sundheim. 1995. Overview of results of the MUC-6 evaluation. In Sixth Message Understanding Conference (MUC-6), pages 13–32. Charles Sutton and Andrew McCallum. 2004. Collective segmentation and labeling of distant entities in information extraction. In ICML workshop on Statistical Relational Learning. Yoshimasa Tsuruoka and Jun’ichi Tsujii. 2005. Chunk parsing revisited. In Proceedings of the 9th International Workshop on Parsing Technologies (IWPT 2005). GuoDong Zhou and Jian Su. 2004. Exploring deep knowledge resources in biomedical name recognition. In Proc. of JNLPBA-04. 472
Proceedings of the 21st International Conference on Computational Linguistics and 44th Annual Meeting of the ACL, pages 41–48, Sydney, July 2006. c⃝2006 Association for Computational Linguistics Kernel-Based Pronoun Resolution with Structured Syntactic Knowledge Xiaofeng Yang† Jian Su† Chew Lim Tan‡ †Institute for Infocomm Research 21 Heng Mui Keng Terrace, Singapore, 119613 {xiaofengy,sujian}@i2r.a-star.edu.sg ‡ Department of Computer Science National University of Singapore, Singapore, 117543 [email protected] Abstract Syntactic knowledge is important for pronoun resolution. Traditionally, the syntactic information for pronoun resolution is represented in terms of features that have to be selected and defined heuristically. In the paper, we propose a kernel-based method that can automatically mine the syntactic information from the parse trees for pronoun resolution. Specifically, we utilize the parse trees directly as a structured feature and apply kernel functions to this feature, as well as other normal features, to learn the resolution classifier. In this way, our approach avoids the efforts of decoding the parse trees into the set of flat syntactic features. The experimental results show that our approach can bring significant performance improvement and is reliably effective for the pronoun resolution task. 1 Introduction Pronoun resolution is the task of finding the correct antecedent for a given pronominal anaphor in a document. Prior studies have suggested that syntactic knowledge plays an important role in pronoun resolution. For a practical pronoun resolution system, the syntactic knowledge usually comes from the parse trees of the text. The issue that arises is how to effectively incorporate the syntactic information embedded in the parse trees to help resolution. One common solution seen in previous work is to define a set of features that represent particular syntactic knowledge, such as the grammatical role of the antecedent candidates, the governing relations between the candidate and the pronoun, and so on. These features are calculated by mining the parse trees, and then could be used for resolution by using manually designed rules (Lappin and Leass, 1994; Kennedy and Boguraev, 1996; Mitkov, 1998), or using machine-learning methods (Aone and Bennett, 1995; Yang et al., 2004; Luo and Zitouni, 2005). However, such a solution has its limitation. The syntactic features have to be selected and defined manually, usually by linguistic intuition. Unfortunately, what kinds of syntactic information are effective for pronoun resolution still remains an open question in this research community. The heuristically selected feature set may be insufficient to represent all the information necessary for pronoun resolution contained in the parse trees. In this paper we will explore how to utilize the syntactic parse trees to help learning-based pronoun resolution. Specifically, we directly utilize the parse trees as a structured feature, and then use a kernel-based method to automatically mine the knowledge embedded in the parse trees. The structured syntactic feature, together with other normal features, is incorporated in a trainable model based on Support Vector Machine (SVM) (Vapnik, 1995) to learn the decision classifier for resolution. Indeed, using kernel methods to mine structural knowledge has shown success in some NLP applications like parsing (Collins and Duffy, 2002; Moschitti, 2004) and relation extraction (Zelenko et al., 2003; Zhao and Grishman, 2005). 
However, to our knowledge, the application of such a technique to the pronoun resolution task still remains unexplored. Compared with previous work, our approach has several advantages: (1) The approach utilizes the parse trees as a structured feature, which avoids the efforts of decoding the parse trees into a set of syntactic features in a heuristic manner. (2) The approach is able to put together the structured feature and the normal flat features in a trainable model, which allows different types of 41 information to be considered in combination for both learning and resolution. (3) The approach is applicable for practical pronoun resolution as the syntactic information can be automatically obtained from machine-generated parse trees. And our study shows that the approach works well under the commonly available parsers. We evaluate our approach on the ACE data set. The experimental results over the different domains indicate that the structured syntactic feature incorporated with kernels can significantly improve the resolution performance (by 5%∼8% in the success rates), and is reliably effective for the pronoun resolution task. The remainder of the paper is organized as follows. Section 2 gives some related work that utilizes the structured syntactic knowledge to do pronoun resolution. Section 3 introduces the framework for the pronoun resolution, as well as the baseline feature space and the SVM classifier. Section 4 presents in detail the structured feature and the kernel functions to incorporate such a feature in the resolution. Section 5 shows the experimental results and has some discussion. Finally, Section 6 concludes the paper. 2 Related Work One of the early work on pronoun resolution relying on parse trees was proposed by Hobbs (1978). For a pronoun to be resolved, Hobbs’ algorithm works by searching the parse trees of the current text. Specifically, the algorithm processes one sentence at a time, using a left-to-right breadth-first searching strategy. It first checks the current sentence where the pronoun occurs. The first NP that satisfies constraints, like number and gender agreements, would be selected as the antecedent. If the antecedent is not found in the current sentence, the algorithm would traverse the trees of previous sentences in the text. As the searching processing is completely done on the parse trees, the performance of the algorithm would rely heavily on the accuracy of the parsing results. Lappin and Leass (1994) reported a pronoun resolution algorithm which uses the syntactic representation output by McCord’s Slot Grammar parser. A set of salience measures (e.g. Subject, Object or Accusative emphasis) is derived from the syntactic structure. The candidate with the highest salience score would be selected as the antecedent. In their algorithm, the weights of Category: whether the candidate is a definite noun phrase, indefinite noun phrase, pronoun, named-entity or others. Reflexiveness: whether the pronominal anaphor is a reflexive pronoun. Type: whether the pronominal anaphor is a male-person pronoun (like he), female-person pronoun (like she), single gender-neuter pronoun (like it), or plural gender-neuter pronoun (like they) Subject: whether the candidate is a subject of a sentence, a subject of a clause, or not. Object: whether the candidate is an object of a verb, an object of a preposition, or not. Distance: the sentence distance between the candidate and the pronominal anaphor. 
Closeness: whether the candidate is the candidate closest to the pronominal anaphor. FirstNP: whether the candidate is the first noun phrase in the current sentence. Parallelism: whether the candidate has an identical collocation pattern with the pronominal anaphor. Table 1: Feature set for the baseline pronoun resolution system salience measures have to be assigned manually. Luo and Zitouni (2005) proposed a coreference resolution approach which also explores the information from the syntactic parse trees. Different from Lappin and Leass (1994)’s algorithm, they employed a maximum entropy based model to automatically compute the importance (in terms of weights) of the features extracted from the trees. In their work, the selection of their features is mainly inspired by the government and binding theory, aiming to capture the c-command relationships between the pronoun and its antecedent candidate. By contrast, our approach simply utilizes the parse trees as a structured feature, and lets the learning algorithm discover all possible embedded information that is necessary for pronoun resolution. 3 The Resolution Framework Our pronoun resolution system adopts the common learning-based framework similar to those by Soon et al. (2001) and Ng and Cardie (2002). In the learning framework, a training or testing instance is formed by a pronoun and one of its antecedent candidate. During training, for each pronominal anaphor encountered, a positive instance is created by paring the anaphor and its closest antecedent. Also a set of negative instances is formed by paring the anaphor with each of the 42 non-coreferential candidates. Based on the training instances, a binary classifier is generated using a particular learning algorithm. During resolution, a pronominal anaphor to be resolved is paired in turn with each preceding antecedent candidate to form a testing instance. This instance is presented to the classifier which then returns a class label with a confidence value indicating the likelihood that the candidate is the antecedent. The candidate with the highest confidence value will be selected as the antecedent of the pronominal anaphor. 3.1 Feature Space As with many other learning-based approaches, the knowledge for the reference determination is represented as a set of features associated with the training or test instances. In our baseline system, the features adopted include lexical property, morphologic type, distance, salience, parallelism, grammatical role and so on. Listed in Table 1, all these features have been proved effective for pronoun resolution in previous work. 3.2 Support Vector Machine In theory, any discriminative learning algorithm is applicable to learn the classifier for pronoun resolution. In our study, we use Support Vector Machine (Vapnik, 1995) to allow the use of kernels to incorporate the structured feature. Suppose the training set S consists of labelled vectors {(xi, yi)}, where xi is the feature vector of a training instance and yi is its class label. The classifier learned by SVM is f(x) = sgn( X i=1 yiaix ∗xi + b) (1) where ai is the learned parameter for a support vector xi. An instance x is classified as positive (negative) if f(x) > 0 (f(x) < 0)1. One advantage of SVM is that we can use kernel methods to map a feature space to a particular high-dimension space, in case that the current problem could not be separated in a linear way. Thus the dot-product x1 ∗x2 is replaced by a kernel function (or kernel) between two vectors, that is K(x1, x2). 
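For concreteness, the following minimal sketch (not the actual SVM-Light model) shows how the kernelized decision value of Equation (1) can be computed and used to rank antecedent candidates as in the resolution framework above. The polynomial kernel, the support-vector triples, and the feature vectors are placeholders for what a trained model would supply.

```python
# Sketch of the kernelized decision value of Equation (1) and of candidate
# ranking. `support_vectors` is a list of (alpha_i, y_i, x_i) triples assumed
# to come from a trained SVM; all names here are illustrative placeholders.

def poly_kernel(x1, x2, degree=2, c=1.0):
    """Polynomial kernel over plain feature vectors (lists of floats)."""
    dot = sum(a * b for a, b in zip(x1, x2))
    return (dot + c) ** degree

def decision_value(x, support_vectors, b, kernel=poly_kernel):
    """Sum_i y_i * alpha_i * K(x, x_i) + b; the sign gives the class label and
    the raw value serves as the confidence that the candidate is the antecedent."""
    return sum(y_i * a_i * kernel(x, x_i)
               for a_i, y_i, x_i in support_vectors) + b

def resolve(candidate_vectors, support_vectors, b):
    """Pick the candidate whose instance receives the highest decision value.
    `candidate_vectors` maps each antecedent candidate to its feature vector."""
    return max(candidate_vectors,
               key=lambda cand: decision_value(candidate_vectors[cand],
                                               support_vectors, b))
```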
For the learning with the normal features listed in Table 1, we can just employ the well-known polynomial or radial basis kernels that can be computed efficiently. In the next section we 1For our task, the result of f(x) is used as the confidence value of the candidate to be the antecedent of the pronoun described by x. will discuss how to use kernels to incorporate the more complex structured feature. 4 Incorporating Structured Syntactic Information 4.1 Main Idea A parse tree that covers a pronoun and its antecedent candidate could provide us much syntactic information related to the pair. The commonly used syntactic knowledge for pronoun resolution, such as grammatical roles or the governing relations, can be directly described by the tree structure. Other syntactic knowledge that may be helpful for resolution could also be implicitly represented in the tree. Therefore, by comparing the common substructures between two trees we can find out to what degree two trees contain similar syntactic information, which can be done using a convolution tree kernel. The value returned from the tree kernel reflects the similarity between two instances in syntax. Such syntactic similarity can be further combined with other knowledge to compute the overall similarity between two instances, through a composite kernel. And thus a SVM classifier can be learned and then used for resolution. This is just the main idea of our approach. 4.2 Structured Syntactic Feature Normally, parsing is done on the sentence level. However, in many cases a pronoun and an antecedent candidate do not occur in the same sentence. To present their syntactic properties and relations in a single tree structure, we construct a syntax tree for an entire text, by attaching the parse trees of all its sentences to an upper node. Having obtained the parse tree of a text, we shall consider how to select the appropriate portion of the tree as the structured feature for a given instance. As each instance is related to a pronoun and a candidate, the structured feature at least should be able to cover both of these two expressions. Generally, the more substructure of the tree is included, the more syntactic information would be provided, but at the same time the more noisy information that comes from parsing errors would likely be introduced. In our study, we examine three possible structured features that contain different substructures of the parse tree: Min-Expansion This feature records the minimal structure covering both the pronoun and 43 Min-Expansion Simple-Expansion Full-Expansion Figure 1: structured-features for the instance i{“him”, “the man”} the candidate in the parse tree. It only includes the nodes occurring in the shortest path connecting the pronoun and the candidate, via the nearest commonly commanding node. For example, considering the sentence “The man in the room saw him.”, the structured feature for the instance i{“him”,“the man”} is circled with dash lines as shown in the leftmost picture of Figure 1. Simple-Expansion Min-Expansion could, to some degree, describe the syntactic relationships between the candidate and pronoun. However, it is incapable of capturing the syntactic properties of the candidate or the pronoun, because the tree structure surrounding the expression is not taken into consideration. To incorporate such information, feature Simple-Expansion not only contains all the nodes in Min-Expansion, but also includes the first-level children of these nodes2. 
The middle of Figure 1 shows such a feature for i{“him”, ”the man”}. We can see that the nodes “PP” (for “in the room”) and “VB” (for “saw”) are included in the feature, which provides clues that the candidate is modified by a prepositional phrase and the pronoun is the object of a verb. Full-Expansion This feature focusses on the whole tree structure between the candidate and pronoun. It not only includes all the nodes in Simple-Expansion, but also the nodes (beneath the nearest commanding parent) that cover the words between the candidate and the pronoun3. Such a feature keeps the most information related to the pronoun 2If the pronoun and the candidate are not in the same sentence, we will not include the nodes denoting the sentences before the candidate or after the pronoun. 3We will not expand the nodes denoting the sentences other than where the pronoun and the candidate occur. and candidate pair. The rightmost picture of Figure 1 shows the structure for feature FullExpansion of i{“him”, ”the man”}. As illustrated, different from in Simple-Expansion, the subtree of “PP” (for “in the room”) is fully expanded and all its children nodes are included in Full-Expansion. Note that to distinguish from other words, we explicitly mark up in the structured feature the pronoun and the antecedent candidate under consideration, by appending a string tag “ANA” and “CANDI” in their respective nodes (e.g.,“NNCANDI” for “man” and “PRP-ANA” for “him” as shown in Figure 1). 4.3 Structural Kernel and Composite Kernel To calculate the similarity between two structured features, we use the convolution tree kernel that is defined by Collins and Duffy (2002) and Moschitti (2004). Given two trees, the kernel will enumerate all their subtrees and use the number of common subtrees as the measure of the similarity between the trees. As has been proved, the convolution kernel can be efficiently computed in polynomial time. The above tree kernel only aims for the structured feature. We also need a composite kernel to combine together the structured feature and the normal features described in Section 3.1. In our study we define the composite kernel as follows: Kc(x1, x2) = Kn(x1, x2) |Kn(x1, x2)| ∗Kt(x1, x2) |Kt(x1, x2)|(2) where Kt is the convolution tree kernel defined for the structured feature, and Kn is the kernel applied on the normal features. Both kernels are divided by their respective length4 for normalization. The new composite kernel Kc, defined as the 4The length of a kernel K is defined as |K(x1, x2)| = p K(x1, x1) ∗K(x2, x2) 44 multiplier of normalized Kt and Kn, will return a value close to 1 only if both the structured features and the normal features from the two vectors have high similarity under their respective kernels. 5 Experiments and Discussions 5.1 Experimental Setup In our study we focussed on the third-person pronominal anaphora resolution. All the experiments were done on the ACE-2 V1.0 corpus (NIST, 2003), which contain two data sets, training and devtest, used for training and testing respectively. Each of these sets is further divided into three domains: newswire (NWire), newspaper (NPaper), and broadcast news (BNews). An input raw text was preprocessed automatically by a pipeline of NLP components, including sentence boundary detection, POS-tagging, Text Chunking and Named-Entity Recognition. The texts were parsed using the maximum-entropybased Charniak parser (Charniak, 2000), based on which the structured features were computed automatically. 
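For concreteness, the composite kernel of Equation (2) that combines these automatically computed structured features with the flat features can be sketched as follows. This is only an illustration: the two kernels `k_norm` and `k_tree` are assumed to be supplied elsewhere, and the convolution tree kernel of Collins and Duffy (2002) is not re-implemented here.

```python
# Sketch of the composite kernel of Equation (2), assuming two callables:
# `k_norm` over the flat feature vectors and `k_tree`, an existing
# implementation of the convolution tree kernel, over the structured features.
import math

def normalized(k, x1, x2):
    """K(x1, x2) divided by its length sqrt(K(x1, x1) * K(x2, x2))."""
    denom = math.sqrt(k(x1, x1) * k(x2, x2))
    return k(x1, x2) / denom if denom > 0 else 0.0

def composite_kernel(inst1, inst2, k_norm, k_tree):
    """Kc = (Kn / |Kn|) * (Kt / |Kt|); each instance is assumed to carry a
    `flat` feature vector and a `tree` structured feature."""
    return (normalized(k_norm, inst1.flat, inst2.flat) *
            normalized(k_tree, inst1.tree, inst2.tree))
```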
For learning, the SVM-Light software (Joachims, 1999) was employed with the convolution tree kernel implemented by Moschitti (2004). All classifiers were trained with default learning parameters. The performance was evaluated based on the metric success, the ratio of the number of correctly resolved5 anaphor over the number of all anaphors. For each anaphor, the NPs occurring within the current and previous two sentences were taken as the initial antecedent candidates. Those with mismatched number and gender agreements were filtered from the candidate set. Also, pronouns or NEs that disagreed in person with the anaphor were removed in advance. For training, there were 1207, 1440, and 1260 pronouns with non-empty candidate set found pronouns in the three domains respectively, while for testing, the number was 313, 399 and 271. On average, a pronoun anaphor had 6∼9 antecedent candidates ahead. Totally, we got around 10k, 13k and 8k training instances for the three domains. 5.2 Baseline Systems Table 2 lists the performance of different systems. We first tested Hobbs’ algorithm (Hobbs, 1978). 5An anaphor was deemed correctly resolved if the found antecedent is in the same coreference chain of the anaphor. NWire NPaper BNews Hobbs (1978) 66.1 66.4 72.7 NORM 74.4 77.4 74.2 NORM MaxEnt 72.8 77.9 75.3 NORM C5 71.9 75.9 71.6 S Min 76.4 81.0 76.8 S Simple 73.2 82.7 82.3 S Full 73.2 80.5 79.0 NORM+S Min 77.6 82.5 82.3 NORM+S Simple 79.2 82.7 82.3 NORM+S Full 81.5 83.2 81.5 Table 2: Results of the syntactic structured features Described in Section 2, the algorithm uses heuristic rules to search the parse tree for the antecedent, and will act as a good baseline to compare with the learned-based approach with the structured feature. As shown in the first line of Table 2, Hobbs’ algorithm obtains 66%∼72% success rates on the three domains. The second block of Table 2 shows the baseline system (NORM) that uses only the normal features listed in Table 1. Throughout our experiments, we applied the polynomial kernel on the normal features to learn the SVM classifiers. In the table we also compared the SVM-based results with those using other learning algorithms, i.e., Maximum Entropy (Maxent) and C5 decision tree, which are more commonly used in the anaphora resolution task. As shown in the table, the system with normal features (NORM) obtains 74%∼77% success rates for the three domains. The performance is similar to other published results like those by Keller and Lapata (2003), who adopted a similar feature set and reported around 75% success rates on the ACE data set. The comparison between different learning algorithms indicates that SVM can work as well as or even better than Maxent (NORM MaxEnt) or C5 (NORM C5). 5.3 Systems with Structured Features The last two blocks of Table 2 summarize the results using the three syntactic structured features, i.e, Min Expansion (S MIN), Simple Expansion (S SIMPLE) and Full Expansion (S FULL). Between them, the third block is for the systems using the individual structured feature alone. 
We can see that all the three structured features per45 NWire NPaper BNews Sentence Distance 0 1 2 0 1 2 0 1 2 (Number of Prons) (192) (102) (19) (237) (147) (15) (175) (82) (14) NORM 80.2 72.5 26.3 81.4 75.5 33.3 80.0 65.9 50.0 S Simple 79.7 70.6 21.1 87.3 81.0 26.7 89.7 70.7 57.1 NORM+S Simple 85.4 76.5 31.6 87.3 79.6 40.0 88.6 74.4 50.0 Table 3: The resolution results for pronouns with antecedent in different sentences apart NWire NPaper BNews Type person neuter person neuter person neuter (Number of Prons) (171) (142) (250) (149) (153) (118) NORM 81.9 65.5 80.0 73.2 74.5 73.7 S Simple 81.9 62.7 83.2 81.9 82.4 82.2 NORM+S Simple 87.1 69.7 83.6 81.2 86.9 76.3 Table 4: The resolution results for different types of pronouns form better than the normal features for NPaper (up to 5.3% success) and BNews (up to 8.1% success), or equally well (±1 ∼2% in success) for NWire. When used together with the normal features, as shown in the last block, the three structured features all outperform the baselines. Especially, the combinations of NORM+S SIMPLE and NORM+S FULL can achieve significantly6 better results than NORM, with the success rate increasing by (4.8%, 5.3% and 8.1%) and (7.1%, 5.8%, 7.2%) respectively. All these results prove that the structured syntactic feature is effective for pronoun resolution. We further compare the performance of the three different structured features. As shown in Table 2, when used together with the normal features, Full Expansion gives the highest success rates in NWire and NPaper, but nevertheless the lowest in BNews. This should be because feature Full-Expansion captures a larger portion of the parse trees, and thus can provide more syntactic information than Min Expansion or Simple Expansion. However, if the texts are less-formally structured as those in BNews, FullExpansion would inevitably involve more noises and thus adversely affect the resolution performance. By contrast, feature Simple Expansion would achieve balance between the information and the noises to be introduced: from Table 2 we can find that compared with the other two features, Simple Expansion is capable of producing average results for all the three domains. And for this 6p < 0.05 by a 2-tailed t test. reason, our subsequent reports will focus on Simple Expansion, unless otherwise specified. As described, to compute the structured feature, parse trees for different sentences are connected to form a large tree for the text. It would be interesting to find how the structured feature works for pronouns whose antecedents reside in different sentences. For this purpose we tested the success rates for the pronouns with the closest antecedent occurring in the same sentence, one-sentence apart, and two-sentence apart. Table 3 compares the learning systems with/without the structured feature present. From the table, for all the systems, the success rates drop with the increase of the distances between the pronoun and the antecedent. However, in most cases, adding the structured feature would bring consistent improvement against the baselines regardless of the number of sentence distance. This observation suggests that the structured syntactic information is helpful for both intra-sentential and intersentential pronoun resolution. We were also concerned about how the structured feature works for different types of pronouns. Table 4 lists the resolution results for two types of pronouns: person pronouns (i.e., “he”, “she”) and neuter-gender pronouns (i.e., “it” and “they”). 
As shown, with the structured feature incorporated, the system NORM+S Simple can significantly boost the performance of the baseline (NORM), for both personal pronoun and neutergender pronoun resolution. 46 1 2 3 4 5 6 7 8 9 10 0.65 0.7 0.75 0.8 Number of Training Documents Success NORM S_Simple NORM+S_Simple 2 4 6 8 10 12 0.65 0.7 0.75 0.8 Number of Training Documents Success NORM S_Simple NORM+S_Simple 1 2 3 4 5 6 7 8 0.65 0.7 0.75 0.8 Number of Training Documents Success NORM S_Simple NORM+S_Simple NWire NPaper BNews Figure 2: Learning curves of systems with different features 5.4 Learning Curves Figure 2 plots the learning curves for the systems with three feature sets, i.e, normal features (NORM), structured feature alone (S Simple), and combined features (NORM+S Simple). We trained each system with different number of instances from 1k, 2k, 3k, .. ., till the full size. Each point in the figures was the average over two trails with instances selected forwards and backwards respectively. From the figures we can find that (1) Used in combination (NORM+S Simple), the structured feature shows superiority over NORM, achieving results consistently better than the normal features (NORM) do in all the three domains. (2) With training instances above 3k, the structured feature, used either in isolation (S Simple) or in combination (NORM+S Simple), leads to steady increase in the success rates and exhibit smoother learning curves than the normal features (NORM). These observations further prove the reliability of the structured feature in pronoun resolution. 5.5 Feature Analysis In our experiment we were also interested to compare the structured feature with the normal flat features extracted from the parse tree, like feature Subject and Object. For this purpose we took out these two grammatical features from the normal feature set, and then trained the systems again. As shown in Table 5, the two grammaticalrole features are important for the pronoun resolution: removing these features results in up to 5.7% (NWire) decrease in success. However, when the structured feature is included, the loss in success reduces to 1.9% and 1.1% for NWire and BNews, and a slight improvement can even be achieved for NPaper. This indicates that the structured feature can effectively provide the syntactic information NWire NPaper BNews NORM 74.4 77.4 74.2 NORM - subj/obj 68.7 76.2 72.7 NORM + S Simple 79.2 82.7 82.3 NORM + S Simple - subj/obj 77.3 83.0 81.2 NORM + Luo05 75.7 77.9 74.9 Table 5: Comparison of the structured feature and the flat features extracted from parse trees Feature Parser NWire NPaper BNews Charniak00 73.2 82.7 82.3 S Simple Collins99 75.1 83.2 80.4 NORM+ Charniak00 79.2 82.7 82.3 S Simple Collins99 80.8 81.5 82.3 Table 6: Results using different parsers important for pronoun resolution. We also tested the flat syntactic feature set proposed in Luo and Zitouni (2005)’s work. As described in Section 2, the feature set is inspired the binding theory, including those features like whether the candidate is c commanding the pronoun, and the counts of “NP”, “VP”, “S” nodes in the commanding path. The last line of Table 5 shows the results by adding these features into the normal feature set. In line with the reports in (Luo and Zitouni, 2005) we do observe the performance improvement against the baseline (NORM) for all the domains. However, the increase in the success rates (up to 1.3%) is not so large as by adding the structured feature (NORM+S Simple) instead. 
5.6 Comparison with Different Parsers As mentioned, the above reported results were based on Charniak (2000)’s parser. It would be interesting to examine the influence of different parsers on the resolution performance. For this purpose, we also tried the parser by Collins (1999) 47 (Mode II)7, and the results are shown in Table 6. We can see that Charniak (2000)’s parser leads to higher success rates for NPaper and BNews, while Collins (1999)’s achieves better results for NWire. However, the difference between the results of the two parsers is not significant (less than 2% success) for the three domains, no matter whether the structured feature is used alone or in combination. 6 Conclusion The purpose of this paper is to explore how to make use of the structured syntactic knowledge to do pronoun resolution. Traditionally, syntactic information from parse trees is represented as a set of flat features. However, the features are usually selected and defined by heuristics and may not necessarily capture all the syntactic information provided by the parse trees. In the paper, we propose a kernel-based method to incorporate the information from parse trees. Specifically, we directly utilize the syntactic parse tree as a structured feature, and then apply kernels to such a feature, together with other normal features, to learn the decision classifier and do the resolution. Our experimental results on ACE data set show that the system with the structured feature included can achieve significant increase in the success rate by around 5%∼8%, for all the different domains. The deeper analysis on various factors like training size, feature set or parsers further proves that the structured feature incorporated with our kernelbased method is reliably effective for the pronoun resolution task. References C. Aone and S. W. Bennett. 1995. Evaluating automated and manual acquisition of anaphora resolution strategies. In Proceedings of the 33rd Annual Meeting of the Association for Compuational Linguistics, pages 122–129. E. Charniak. 2000. A maximum-entropy-inspired parser. In Proceedings of North American chapter of the Association for Computational Linguistics annual meeting, pages 132–139. M. Collins and N. Duffy. 2002. New ranking algorithms for parsing and tagging: kernels over discrete structures and the voted perceptron. In Proceedings of the 40th Annual Meeting of the Association 7As in their pulic reports on Section 23 of WSJ TreeBank, Charniak (2000)’s parser achieves 89.6% recall and 89.5% precision with 0.88 crossing brackets (words ≤100), against Collins (1999)’s 88.1% recall and 88.3% precision with 1.06 crossing brackets. for Computational Linguistics (ACL’02), pages 263– 270. M. Collins. 1999. Head-Driven Statistical Models for Natural Language Parsing. Ph.D. thesis, University of Pennsylvania. J. Hobbs. 1978. Resolving pronoun references. Lingua, 44:339–352. T. Joachims. 1999. Making large-scale svm learning practical. In Advances in Kernel Methods - Support Vector Learning. MIT Press. F. Keller and M. Lapata. 2003. Using the web to obtain freqencies for unseen bigrams. Computational Linguistics, 29(3):459–484. C. Kennedy and B. Boguraev. 1996. Anaphora for everyone: pronominal anaphra resolution without a parser. In Proceedings of the 16th International Conference on Computational Linguistics, pages 113–118, Copenhagen, Denmark. S. Lappin and H. Leass. 1994. An algorithm for pronominal anaphora resolution. Computational Linguistics, 20(4):525–561. X. Luo and I. Zitouni. 2005. 
Milti-lingual coreference resolution with syntactic features. In Proceedings of Human Language Techonology conference and Conference on Empirical Methods in Natural Language Processing (HLT/EMNLP), pages 660–667. R. Mitkov. 1998. Robust pronoun resolution with limited knowledge. In Proceedings of the 17th Int. Conference on Computational Linguistics, pages 869– 875. A. Moschitti. 2004. A study on convolution kernels for shallow semantic parsing. In Proceedings of the 42nd Annual Meeting of the Association for Computational Linguistics (ACL’04), pages 335–342. V. Ng and C. Cardie. 2002. Improving machine learning approaches to coreference resolution. In Proceedings of the 40th Annual Meeting of the Association for Computational Linguistics, pages 104–111, Philadelphia. W. Soon, H. Ng, and D. Lim. 2001. A machine learning approach to coreference resolution of noun phrases. Computational Linguistics, 27(4):521– 544. V. Vapnik. 1995. The Nature of Statistical Learning Theory. Springer. X. Yang, J. Su, G. Zhou, and C. Tan. 2004. Improving pronoun resolution by incorporating coreferential information of candidates. In Proceedings of 42th Annual Meeting of the Association for Computational Linguistics, pages 127–134, Barcelona. D. Zelenko, C. Aone, and A. Richardella. 2003. Kernel methods for relation extraction. Journal of Machine Learning Research, 3(6):1083 – 1106. S. Zhao and R. Grishman. 2005. Extracting relations with integrated information using kernel methods. In Proceedings of 43rd Annual Meeting of the Association for Computational Linguistics (ACL05), pages 419–426. 48
Proceedings of the 21st International Conference on Computational Linguistics and 44th Annual Meeting of the ACL, pages 473–480, Sydney, July 2006. c⃝2006 Association for Computational Linguistics Factorizing Complex Models: A Case Study in Mention Detection Radu Florian, Hongyan Jing, Nanda Kambhatla and Imed Zitouni IBM TJ Watson Research Center Yorktown Heights, NY 10598 {raduf,hjing,nanda,izitouni}@us.ibm.com Abstract As natural language understanding research advances towards deeper knowledge modeling, the tasks become more and more complex: we are interested in more nuanced word characteristics, more linguistic properties, deeper semantic and syntactic features. One such example, explored in this article, is the mention detection and recognition task in the Automatic Content Extraction project, with the goal of identifying named, nominal or pronominal references to real-world entities—mentions— and labeling them with three types of information: entity type, entity subtype and mention type. In this article, we investigate three methods of assigning these related tags and compare them on several data sets. A system based on the methods presented in this article participated and ranked very competitively in the ACE’04 evaluation. 1 Introduction Information extraction is a crucial step toward understanding and processing natural language data, its goal being to identify and categorize important information conveyed in a discourse. Examples of information extraction tasks are identification of the actors and the objects in written text, the detection and classification of the relations among them, and the events they participate in. These tasks have applications in, among other fields, summarization, information retrieval, data mining, question answering, and language understanding. One of the basic tasks of information extraction is the mention detection task. This task is very similar to named entity recognition (NER), as the objects of interest represent very similar concepts. The main difference is that the latter will identify, however, only named references, while mention detection seeks named, nominal and pronominal references. In this paper, we will call the identified references mentions – using the ACE (NIST, 2003) nomenclature – to differentiate them from entities which are the real-world objects (the actual person, location, etc) to which the mentions are referring to1. Historically, the goal of the NER task was to find named references to entities and quantity references – time, money (MUC-6, 1995; MUC-7, 1997). In recent years, Automatic Content Extraction evaluation (NIST, 2003; NIST, 2004) expanded the task to also identify nominal and pronominal references, and to group the mentions into sets referring to the same entity, making the task more complicated, as it requires a co-reference module. The set of identified properties has also been extended to include the mention type of a reference (whether it is named, nominal or pronominal), its subtype (a more specific type dependent on the main entity type), and its genericity (whether the entity points to a specific entity, or a generic one2), besides the customary main entity type. To our knowledge, little research has been done in the natural language processing context or otherwise on investigating the specific problem of how such multiple labels are best assigned. This article compares three methods for such an assignment. 
The simplest model which can be considered for the task is to create an atomic tag by “gluing” together the sub-task labels and considering the new label atomic. This method transforms the problem into a regular sequence classification task, similar to part-of-speech tagging, text chunking, and named entity recognition tasks. We call this model the all-in-one model. The immediate drawback of this model is that it creates a large classification space (the cross-product of the sub-task classification spaces) and that, during decoding, partially similar classifications will compete instead of cooperate - more details are presented in Section 3.1. Despite (or maybe due to) its relative simplicity, this model obtained good results in several instances in the past, for POS tagging in morphologically rich languages (Hajic and Hladk´a, 1998) 1In a pragmatic sense, entities are sets of mentions which co-refer. 2This last attribute, genericity, depends only loosely on local context. As such, it should be assigned while examining all mentions in an entity, and for this reason is beyond the scope of this article. 473 and mention detection (Jing et al., 2003; Florian et al., 2004). At the opposite end of classification methodology space, one can use a cascade model, which performs the sub-tasks sequentially in a predefined order. Under such a model, described in Section 3.3, the user will build separate models for each subtask. For instance, it could first identify the mention boundaries, then assign the entity type, subtype, and mention level information. Such a model has the immediate advantage of having smaller classification spaces, with the drawback that it requires a specific model invocation path. In between the two extremes, one can use a joint model, which models the classification space in the same way as the all-in-one model, but where the classifications are not atomic. This system incorporates information about sub-model parts, such as whether the current word starts an entity (of any type), or whether the word is part of a nominal mention. The paper presents a novel contrastive analysis of these three models, comparing them on several datasets in three languages selected from the ACE 2003 and 2004 evaluations. The methods described here are independent of the underlying classifiers, and can be used with any sequence classifiers. All experiments in this article use our in-house implementation of a maximum entropy classifier (Florian et al., 2004), which we selected because of its flexibility of integrating arbitrary types of features. While we agree that the particular choice of classifier will undoubtedly introduce some classifier bias, we want to point out that the described procedures have more to do with the organization of the search space, and will have an impact, one way or another, on most sequence classifiers, including conditional random field classifiers.3 The paper is organized as follows: Section 2 describes the multi-task classification problem and prior work, Section 3.3 presents and contrasts the three meta-classification models. Section 4 outlines the experimental setup and the obtained results, and Section 5 concludes the paper. 2 Multi-Task Classification Many tasks in Natural Language Processing involve labeling a word or sequence of words with a specific property; classic examples are part-ofspeech tagging, text chunking, word sense disambiguation and sentiment classification. 
Most of the time, the word labels are atomic labels, containing a very specific piece of information (e.g. the word 3While not wishing to delve too deep into the issue of label bias, we would also like to point out (as it was done, for instance, in (Klein, 2003)) that the label bias of MEMM classifiers can be significantly reduced by allowing them to examine the right context of the classification point - as we have done with our model. is noun plural, or starts a noun phrase, etc). There are cases, though, where the labels consist of several related, but not entirely correlated, properties; examples include mention detection—the task we are interested in—, syntactic parsing with functional tag assignment (besides identifying the syntactic parse, also label the constituent nodes with their functional category, as defined in the Penn Treebank (Marcus et al., 1993)), and, to a lesser extent, part-of-speech tagging in highly inflected languages.4 The particular type of mention detection that we are examining in this paper follows the ACE general definition: each mention in the text (a reference to a real-world entity) is assigned three types of information:5 • An entity type, describing the type of the entity it points to (e.g. person, location, organization, etc) • An entity subtype, further detailing the type (e.g. organizations can be commercial, governmental and non-profit, while locations can be a nation, population center, or an international region) • A mention type, specifying the way the entity is realized – a mention can be named (e.g. John Smith), nominal (e.g. professor), or pronominal (e.g. she). Such a problem – where the classification consists of several subtasks or attributes – presents additional challenges, when compared to a standard sequence classification task. Specifically, there are inter-dependencies between the subtasks that need to be modeled explicitly; predicting the tags independently of each other will likely result in inconsistent classifications. For instance, in our running example of mention detection, the subtype task is dependent on the entity type; one could not have a person with the subtype non-profit. On the other hand, the mention type is relatively independent of the entity type and/or subtype: each entity type could be realized under any mention type and viceversa. The multi-task classification problem has been subject to investigation in the past. Caruana et al. (1997) analyzed the multi-task learning 4The goal there is to also identify word properties such as gender, number, and case (for nouns), mood and tense (for verbs), etc, besides the main POS tag. The task is slightly different, though, as these properties tend to have a stronger dependency on the lexical form of the classified word. 5There is a fourth assigned type – a flag specifying whether a mention is specific (i.e. it refers at a clear entity), generic (refers to a generic type, e.g. “the scientists believe ..”), unspecified (cannot be determined from the text), or negative (e.g. “ no person would do this”). The classification of this type is beyond the goal of this paper. 474 (MTL) paradigm, where individual related tasks are trained together by sharing a common representation of knowledge, and demonstrated that this strategy yields better results than one-task-ata-time learning strategy. The authors used a backpropagation neural network, and the paradigm was tested on several machine learning tasks. 
It also contains an excellent discussion on how and why the MTL paradigm is superior to single-task learning. Florian and Ngai (2001) used the same multitask learning strategy with a transformation-based learner to show that usually disjointly handled tasks perform slightly better under a joint model; the experiments there were run on POS tagging and text chunking, Chinese word segmentation and POS tagging. Sutton et al. (2004) investigated the multitask classification problem and used a dynamic conditional random fields method, a generalization of linear-chain conditional random fields, which can be viewed as a probabilistic generalization of cascaded, weighted finite-state transducers. The subtasks were represented in a single graphical model that explicitly modeled the sub-task dependence and the uncertainty between them. The system, evaluated on POS tagging and base-noun phrase segmentation, improved on the sequential learning strategy. In a similar spirit to the approach presented in this article, Florian (2002) considers the task of named entity recognition as a two-step process: the first is the identification of mention boundaries and the second is the classification of the identified chunks, therefore considering a label for each word being formed from two sub-labels: one that specifies the position of the current word relative in a mention (outside any mentions, starts a mention, is inside a mention) and a label specifying the mention type . Experiments on the CoNLL’02 data show that the two-process model yields considerably higher performance. Hacioglu et al. (2005) explore the same task, investigating the performance of the AIO and the cascade model, and find that the two models have similar performance, with the AIO model having a slight advantage. We expand their study by adding the hybrid joint model to the mix, and further investigate different scenarios, showing that the cascade model leads to superior performance most of the time, with a few ties, and show that the cascade model is especially beneficial in cases where partially-labeled data (only some of the component labels are given) is available. It turns out though, (Hacioglu, 2005) that the cascade model in (Hacioglu et al., 2005) did not change to a “mention view” sequence classification6 (as we did in Section 3.3) in the tasks following the entity detection, to allow the system to use longer range features. 6As opposed to a “word view”. 3 Classification Models This section presents the three multi-task classification models, which we will experimentally contrast in Section 4. We are interested in performing sequence classification (e.g. assigning a label to each word in a sentence, otherwise known as tagging). Let X denote the space of sequence elements (words) and Y denote the space of classifications (labels), both of them being finite spaces. Our goal is to build a classifier h : X + →Y+ which has the property that |h (¯x)| = |¯x| , ∀¯x ∈X + (i.e. the size of the input sequence is preserved). This classifier will select the a posteriori most likely label sequence ¯y = arg max ¯y′ p ¡ ¯y′|¯x ¢ ; in our case p (¯y|¯x) is computed through the standard Markov assumption: p (y1,m| ¯x) = Y i p (yi|¯x, yi−n+1,i−1) (1) where yi,j denotes the sequence of labels yi..yj. Furthermore, we will assume that each label y is composed of a number of sub-labels y = ¡ y1y2 . . . yk¢7; in other words, we will assume the factorization of the label space into k subspaces Y = Y1 × Y2 × . . . × Yk. 
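As a purely illustrative rendering of this setup, the sketch below encodes a factored label, the "gluing" used by the all-in-one model described next, and the chain-rule scoring of Equation (1); the label values and the conditional model are assumptions, not the paper's implementation:

```python
# Sketch of the factored label space Y = Y1 x Y2 x ... x Yk, of the "gluing"
# used by the all-in-one model, and of the chain-rule scoring in Equation (1).

# Toy sub-label spaces: entity type, entity subtype, mention type.
Y1 = ["O", "PER", "ORG"]
Y2 = ["O", "Individual", "Commercial", "Governmental"]
Y3 = ["O", "NAM", "NOM", "PRO"]

def glue(sub_labels):
    """Map a factored label (y1, ..., yk) onto one atomic tag in Z."""
    return "-".join(sub_labels)

atomic_tag = glue(("PER", "Individual", "NAM"))   # -> "PER-Individual-NAM"

def sequence_log_prob(x, y, cond_log_prob, n=3):
    """log p(y_1..m | x) = sum_i log p(y_i | x, y_{i-n+1..i-1})   (Equation 1).

    cond_log_prob(x, i, history, label) is an assumed interface to the
    underlying classifier (a maximum entropy model in this paper)."""
    total = 0.0
    for i, label in enumerate(y):
        history = tuple(y[max(0, i - n + 1):i])
        total += cond_log_prob(x, i, history, label)
    return total
```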
The classifier we used in the experimental section is a maximum entropy classifier (similar to (McCallum et al., 2000))—which can integrate several sources of information in a rigorous manner. It is our empirical observation that, from a performance point of view, being able to use a diverse and abundant feature set is more important than classifier choice, and the maximum entropy framework provides such a utility. 3.1 The All-In-One Model As the simplest model among those presented here, the all-in-one model ignores the natural factorization of the output space and considers all labels as atomic, and then performs regular sequence classification. One way to look at this process is the following: the classification space Y = Y1 × Y2 × . . . × Yk is first mapped onto a same-dimensional space Z through a one-to-one mapping o : Y →Z; then the features of the system are defined on the space X + × Z, instead of X + × Y. While having the advantage of being simple, it suffers from some theoretical disadvantages: • The classification space can be very large, being the product of the dimensions of sub-task spaces. In the case of the 2004 ACE data there are 7 entity types, 4 mention types and many subtypes; the observed number of actual 7We can assume, without any loss of generality, that all labels have the same number of sub-labels. 475 All-In-One Model Joint Model B-PER B-LOC B-ORG BB-MISC Table 1: Features predicting start of an entity in the all-in-one and joint models sub-label combinations on the training data is 401. Since the dynamic programing (Viterbi) search’s runtime dependency on the classification space is O (|Z|n) (n is the Markov dependency size), using larger spaces will negatively impact the decoding run time.8 • The probabilities p (zi|¯x, zi−n,i−1) require large data sets to be computed properly. If the training data is limited, the probabilities might be poorly estimated. • The model is not friendly to partial evaluation or weighted sub-task evaluation: different, but partially similar, labels will compete against each other (because the system will return a probability distribution over the classification space), sometimes resulting in wrong partial classification.9 • The model cannot directly use data that is only partially labeled (i.e. not all sub-labels are specified). Despite the above disadvantages, this model has performed well in practice: Hajic and Hladk´a (1998) applied it successfully to find POS sequences for Czech and Florian et al. (2004) reports good results on the 2003 ACE task. Most systems that participated in the CoNLL 2002 and 2003 shared tasks on named entity recognition (Tjong Kim Sang, 2002; Tjong Kim Sang and De Meulder, 2003) applied this model, as they modeled the identification of mention boundaries and the assignment of mention type at the same time. 3.2 The Joint Model The joint model differs from the all-in-one model in the fact that the labels are no longer atomic: the features of the system can inspect the constituent sub-labels. This change helps alleviate the data 8From a practical point of view, it might not be very important, as the search is pruned in most cases to only a few hypotheses (beam-search); in our case, pruning the beam only introduced an insignificant model search error (0.1 F-measure). 
9To exemplify, consider that the system outputs the following classifications and probabilities: O (0.2), BPER-NAM (0.15), B-PER-NOM (0.15); even the latter 2 suggest that the word is the start of a person mention, the O label will win because the two labels competed against each other. Detect Boundaries & Entity Types Assemble full tag Detect Entity Subtype Detect Mention Type Figure 1: Cascade flow example for mention detection. sparsity encountered by the previous model by allowing sub-label modeling. The joint model theoretically compares favorably with the all-in-one model: • The probabilities p (yi|¯x, yi−n,i−1) = p µ¡ y1 i , . . . , yk i ¢ |¯x, ³ yj i−n,i−1 ´ j=1,k ¶ might require less training data to be properly estimated, as different sub-labels can be modeled separately. • The joint model can use features that predict just one or a subset of the sub-labels. Table 1 presents the set of basic features that predict the start of a mention for the CoNLL shared tasks for the two models. While the joint model can encode the start of a mention in one feature, the all-in-one model needs to use four features, resulting in fewer counts per feature and, therefore, yielding less reliably estimated features (or, conversely, it needs more data for the same estimation confidence). • The model can predict some of the sub-tags ahead of the others (i.e. create a dependency structure on the sub-labels). The model used in the experimental section predicts the sublabels by using only sub-labels for the previous words, though. • It is possible, though computationally expensive, for the model to use additional data that is only partially labeled, with the model change presented later in Section 3.4. 3.3 The Cascade Model For some tasks, there might already exist a natural hierarchy among the sub-labels: some sub-labels could benefit from knowing the value of other, primitive, sub-labels. For example, • For mention detection, identifying the mention boundaries can be considered as a primitive task. Then, knowing the mention boundaries, one can assign an entity type, subtype, and mention type to each mention. • In the case of parsing with functional tags, one can perform syntactic parsing, then assign the functional tags to the internal constituents. 476 Words Since Donna Karan International went public in 1996 ... Labels O B-ORG I-ORG I-ORG O O O O ... Figure 2: Sequence tagging for mention detection: the case for a cascade model. • For POS tagging, one can detect the main POS first, then detect the other specific properties, making use of the fact that one knows the main tag. The cascade model is essentially a factorization of individual classifiers for the sub-tasks; in this framework, we will assume that there is a more or less natural dependency structure among subtasks, and that models for each of the subtasks will be built and applied in the order defined by the dependency structure. For example, as shown in Figure 1, one can detect mention boundaries and entity type (at the same time), then detect mention type and subtype in “parallel” (i.e. no dependency exists between these last 2 sub-tags). A very important advantage of the cascade model is apparent in classification cases where identifying chunks is involved (as is the case with mention detection), similar to advantages that rescoring hypotheses models have: in the second stage, the chunk classification stage, it can switch to a mention view, where the classification units are entire mentions and words outside of mentions. 
This allows the system to make use of aggregate features over the mention words (e.g. all the words are capitalized), and to also effectively use a larger Markov window (instead of 2-3 words, it will use 23 chunks/words around the word of interest). Figure 2 contains an example of such a case: the cascade model will have to predict the type of the entire phrase Donna Karan International, in the context ’Since <chunk> went public in ..’, which will give it a better opportunity to classify it as an organization. In contrast, because the joint model and AIO have a word view of the sentence, will lack the benefit of examining the larger region, and will not have access at features that involve partial future classifications (such as the fact that another mention of a particular type follows). Compared with the other two models, this classification method has the following advantages: • The classification spaces for each subtask are considerably smaller; this fact enables the creation of better estimated models • The problem of partially-agreeing competing labels is completely eliminated • One can easily use different/additional data to train any of the sub-task models. 3.4 Adding Partially Labeled Data Annotated data can be sometimes expensive to come by, especially if the label set is complex. But not all sub-tasks were created equal: some of them might be easier to predict than others and, therefore, require less data to train effectively in a cascade setup. Additionally, in realistic situations, some sub-tasks might be considered to have more informational content than others, and have precedence in evaluation. In such a scenario, one might decide to invest resources in annotating additional data only for the particularly interesting sub-task, which could reduce this effort significantly. To test this hypothesis, we annotated additional data with the entity type only. The cascade model can incorporate this data easily: it just adds it to the training data for the entity type classifier model. While it is not immediately apparent how to incorporate this new data into the all-in-one and joint models, in order to maintain fairness in comparing the models, we modified the procedures to allow for the inclusion. Let T denote the original training data, and T ′ denote the additional training data. For the all-in-one model, the additional training data cannot be incorporated directly; this is an inherent deficiency of the AIO model. To facilitate a fair comparison, we will incorporate it in an indirect way: we train a classifier C on the additional training data T ′, which we then use to classify the original training data T . Then we train the allin-one classifier on the original training data T , adding the features defined on the output of applying the classifier C on T . The situation is better for the joint model: the new training data T ′ can be incorporated directly into the training data T .10 The maximum entropy model estimates the model parameters by maximizing the data log-likelihood L = X (x,y) ˆp (x, y) log qλ (y|x) where ˆp (x, y) is the observed probability distribution of the pair (x, y) and qλ (y|x) = 1 Z Q j exp (λj · fj (x, y)) is the conditional ME probability distribution as computed by the model. In the case where some of the data is partially annotated, the log-likelihood becomes L = X (x,y)∈T ∪T ′ ˆp (x, y) log qλ (y|x) 10The solution we present here is particular for MEMM models (though similar solutions may exist for other models as well). 
We also assume the reader is familiar with the normal MaxEnt training procedure; we present here only the differences to the standard algorithm. See (Manning and Sch¨utze, 1999) for a good description. 477 = X (x,y)∈T ˆp (x, y) log qλ (y|x) + X (x,y)∈T ′ ˆp (x, y) log qλ (y|x) (2) The only technical problem that we are faced with here is that we cannot directly estimate the observed probability ˆp (x, y) for examples in T ′, since they are only partially labeled. Borrowing the idea from the expectation-maximization algorithm (Dempster et al., 1977), we can replace this probability by the re-normalized system proposed probability: for (x, yx) ∈T ′, we define ˆq (x, y) = ˆp (x) δ (y ∈yx) qλ (y|x) P y′∈yx qλ (y′|x) | {z } =ˆqλ(y|x) where yx is the subset of labels from Y which are consistent with the partial classification of x in T ′. δ (y ∈yx) is 1 if and only if y is consistent with the partial classification yx.11 The log-likelihood computation in Equation (2) becomes L = X (x,y)∈T ˆp (x, y) log qλ (y|x) + X (x,y)∈T ′ ˆq (x, y) log qλ (y|x) To further simplify the evaluation, the quantities ˆq (x, y) are recomputed every few steps, and are considered constant as far as finding the optimum λ values is concerned (the partial derivative computations and numerical updates otherwise become quite complicated, and the solution is no longer unique). Given this new evaluation function, the training algorithm will proceed exactly the same way as in the normal case where all the data is fully labeled. 4 Experiments All the experiments in this section are run on the ACE 2003 and 2004 data sets, in all the three languages covered: Arabic, Chinese, and English. Since the evaluation test set is not publicly available, we have split the publicly available data into a 80%/20% data split. To facilitate future comparisons with work presented here, and to simulate a realistic scenario, the splits are created based on article dates: the test data is selected as the last 20% of the data in chronological order. This way, the documents in the training and test data sets do not overlap in time, and the ones in the test data are posterior to the ones in the training data. Table 2 presents the number of documents in the training/test datasets for the three languages. 11For instance, the full label B-PER is consistent with the partial label B, but not with O or I. Language Training Test Arabic 511 178 Chinese 480 166 English 2003 658 139 English 2004 337 114 Table 2: Datasets size (number of documents) Each word in the training data is labeled with one of the following properties:12 • if it is not part of any entity, it’s labeled as O • if it is part of an entity, it contains a tag specifying whether it starts a mention (B-) or is inside a mention (I -). It is also labeled with the entity type of the mention (seven possible types: person, organization, location, facility, geo-political entity, weapon, and vehicle), the mention type (named, nominal, pronominal, or premodifier13), and the entity subtype (depends on the main entity type). The underlying classifier used to run the experiments in this article is a maximum entropy model with a Gaussian prior (Chen and Rosenfeld, 1999), making use of a large range of features, including lexical (words and morphs in a 3-word window, prefixes and suffixes of length up to 4, WordNet (Miller, 1995) for English), syntactic (POS tags, text chunks), gazetteers, and the output of other information extraction models. 
These features were described in (Florian et al., 2004), and are not discussed here. All three methods (AIO, joint, and cascade) instantiate classifiers based on the same feature types whenever possible. In terms of language-specific processing, the Arabic system uses as input morphological segments, while the Chinese system is a character-based model (the input elements x ∈X are characters), but it has access to word segments as features. Performance in the ACE task is officially evaluated using a special-purpose measure, the ACE value metric (NIST, 2003; NIST, 2004). This metric assigns a score based on the similarity between the system’s output and the gold-standard at both mention and entity level, and assigns different weights to different entity types (e.g. the person entity weights considerably more than a facility entity, at least in the 2003 and 2004 evaluations). Since this article focuses on the mention detection task, we decided to use the more intuitive (unweighted) F-measure: the harmonic mean of precision and recall. 12The mention encoding is the IOB2 encoding presented in (Tjong Kim Sang and Veenstra, 1999) and introduced by (Ramshaw and Marcus, 1994) for the task of base noun phrase chunking. 13This is a special class, used for mentions that modify other labeled mentions; e.g. French in “French wine”. This tag is specific only to ACE’04. 478 For the cascade model, the sub-task flow is presented in Figure 1. In the first step, we identify the mention boundaries together with their entity type (e.g. person, organization, etc). In preliminary experiments, we tried to “cascade” this task. The performance was similar on both strategies; the separated model would yield higher recall at the expense of precision, while the combined model would have higher precision, but lower recall. We decided to use in the system with higher precision. Once the mentions are identified and classified with the entity type property, the data is passed, in parallel, to the mention type detector and the subtype detector. For English and Arabic, we spent three personweeks to annotate additional data labeled with only the entity type information: 550k words for English and 200k words for Arabic. As mentioned earlier, adding this data to the cascade model is a trivial task: the data just gets added to the training data, and the model is retrained. For the AIO model, we have build another mention classifier on the additional training data, and labeled the original ACE training data with it. It is important to note here that the ACE training data (called T in Section 3.4) is consistent with the additional training data T ′: the annotation guidelines for T ′ are the same as for the original ACE data, but we only labeled entity type information. The resulting classifications are then used as features in the final AIO classifier. The joint model uses the additional partially-labeled data in the way described in Section 3.4; the probabilities ˆq (x, y) are updated every 5 iterations. Table 3 presents the results: overall, the cascade model performs significantly better than the allin-one model in four out the six tested cases - the numbers presented in bold reflect that the difference in performance to the AIO model is statistically significant.14 The joint model, while managing to recover some ground, falls in between the AIO and the cascade models. 
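For concreteness, here is a minimal sketch of the re-normalized distribution q̂ from Section 3.4, which the joint model recomputes every few iterations when partially-labeled data is added; `model.prob` and the consistency test are assumed interfaces rather than the actual implementation:

```python
# Minimal sketch (assumed interfaces) of the re-normalized target distribution
# q_hat used for examples that carry only a partial label, e.g. only the entity type.
def q_hat(x, partial_label, model, label_space, p_hat_x=1.0):
    """Distribute the observed mass p_hat(x) over the full labels consistent with
    the partial annotation, proportionally to the current model probabilities q_lambda(y|x)."""
    consistent = [y for y in label_space if is_consistent(y, partial_label)]
    z = sum(model.prob(y, x) for y in consistent)        # renormalization constant
    return {y: p_hat_x * model.prob(y, x) / z for y in consistent}

def is_consistent(full_label, partial_label):
    """E.g. the full label 'B-PER-NAM' is consistent with the partial label 'B-PER'
    (or just 'B'), but not with 'O'."""
    if partial_label == "O":
        return full_label == "O"
    return full_label.startswith(partial_label)
```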
When additional partially-labeled data was available, the cascade and joint models receive a statistically significant boost in performance, while the all-in-one model’s performance barely changes. This fact can be explained by the fact that the entity type-only model is in itself errorful; measuring the performance of the model on the training data yields a performance of 82 F-measure;15 therefore the AIO model will only access partially-correct 14To assert the statistical significance of the results, we ran a paired Wilcoxon test over the series obtained by computing F-measure on each document in the test set. The results are significant at a level of at least 0.009. 15Since the additional training data is consistent in the labeling of the entity type, such a comparison is indeed possible. The above mentioned score is on entity types only. Language Data+ A-I-O Joint Cascade Arabic’04 no 59.2 59.1 59.7 yes 59.4 60.0 60.7 English’04 no 72.1 72.3 73.7 yes 72.5 74.1 75.2 Chinese’04 no 71.2 71.7 71.7 English ’03 no 79.5 79.5 79.7 Table 3: Experimental results: F-measure on the full label Language Data+ A-I-O Joint Cascade Arabic’04 no 66.3 66.5 67.5 yes 66.4 67.9 68.9 English’04 no 77.9 78.1 79.2 yes 78.3 80.5 82.6 Chinese’04 no 75.4 76.1 76.8 English ’03 no 80.4 80.4 81.1 Table 4: F-measure results on entity type only data, and is unable to make effective use of it. In contrast, the training data for the entity type in the cascade model effectively triples, and this change is reflected positively in the 1.5 increase in F-measure. Not all properties are equally valuable: the entity type is arguably more interesting than the other properties. If we restrict ourselves to evaluating the entity type output only (by projecting the output label to the entity type only), the difference in performance between the all-in-one model and cascade is even more pronounced, as shown in Table 4. The cascade model outperforms here both the all-in-one and joint models in all cases except English’03, where the difference is not statistically significant. As far as run-time speed is concerned, the AIO and cascade models behave similarly: our implementation tags approximately 500 tokens per second (averaged over the three languages, on a Pentium 3, 1.2Ghz, 2Gb of memory). Since a MaxEnt implementation is mostly dependent on the number of features that fire on average on a example, and not on the total number of features, the joint model runs twice as slow: the average number of features firing on a particular example is considerably higher. On average, the joint system can tag approximately 240 words per second. The train time is also considerably longer; it takes 15 times as long to train the joint model as it takes to train the all-in-one model (60 mins/iteration compared to 4 mins/iteration); the cascade model trains faster than the AIO model. One last important fact that is worth mentioning is that a system based on the cascade model participated in the ACE’04 competition, yielding very competitive results in all three languages. 479 5 Conclusion As natural language processing becomes more sophisticated and powerful, we start focus our attention on more and more properties associated with the objects we are seeking, as they allow for a deeper and more complex representation of the real world. With this focus comes the question of how this goal should be accomplished – either detect all properties at once, one at a time through a pipeline, or a hybrid model. 
This paper presents three methods through which multi-label sequence classification can be achieved, and evaluates and contrasts them on the Automatic Content Extraction task. On the ACE mention detection task, the cascade model which predicts first the mention boundaries and entity types, followed by mention type and entity subtype outperforms the simple allin-one model in most cases, and the joint model in a few cases. Among the proposed models, the cascade approach has the definite advantage that it can easily and productively incorporate additional partiallylabeled data. We also presented a novel modification of the joint system training that allows for the direct incorporation of additional data, which increased the system performance significantly. The all-in-one model can only incorporate additional data in an indirect way, resulting in little to no overall improvement. Finally, the performance obtained by the cascade model is very competitive: when paired with a coreference module, it ranked very well in the “Entity Detection and Tracking” task in the ACE’04 evaluation. References R. Caruana, L. Pratt, and S. Thrun. 1997. Multitask learning. Machine Learning, 28:41. Stanley F. Chen and Ronald Rosenfeld. 1999. A gaussian prior for smoothing maximum entropy models. Technical Report CMU-CS-99-108, Computer Science Department, Carnegie Mellon University. A. P. Dempster, N. M. Laird, , and D. B. Rubin. 1977. Maximum likelihood from incomplete data via the EM algorithm. Journal of the Royal statistical Society, 39(1):1–38. R. Florian and G. Ngai. 2001. Multidimensional transformation-based learning. In Proceedings of CoNLL’01, pages 1–8. R. Florian, H. Hassan, A. Ittycheriah, H. Jing, N. Kambhatla, X. Luo, N Nicolov, and S Roukos. 2004. A statistical model for multilingual entity detection and tracking. In Proceedings of the Human Language Technology Conference of the North American Chapter of the Association for Computational Linguistics: HLT-NAACL 2004, pages 1–8. R. Florian. 2002. Named entity recognition as a house of cards: Classifier stacking. In Proceedings of CoNLL-2002, pages 175–178. Kadri Hacioglu, Benjamin Douglas, and Ying Chen. 2005. Detection of entity mentions occuring in english and chinese text. In Proceedings of Human Language Technology Conference and Conference on Empirical Methods in Natural Language Processing, pages 379–386, Vancouver, British Columbia, Canada, October. Association for Computational Linguistics. Kadri Hacioglu. 2005. Private communication. J. Hajic and Hladk´a. 1998. Tagging inflective languages: Prediction of morphological categories for a rich, structured tagset. In Proceedings of the 36th Annual Meeting of the ACL and the 17th ICCL, pages 483–490, Montr´eal, Canada. H. Jing, R. Florian, X. Luo, T. Zhang, and A. Ittycheriah. 2003. HowtogetaChineseName(Entity): Segmentation and combination issues. In Proceedings of EMNLP’03, pages 200–207. Dan Klein. 2003. Maxent models, conditional estimation, and optimization, without the magic. Tutorial presented at NAACL-03 and ACL-03. C. D. Manning and H. Sch¨utze. 1999. Foundations of Statistical Natural Language Processing. MIT Press. M. P. Marcus, B. Santorini, and M. A. Marcinkiewicz. 1993. Building a large annotated corpus of english: The penn treebank. Computational Linguistics, 19:313–330. Andrew McCallum, Dayne Freitag, and Fernando Pereira. 2000. Maximum entropy markov models for information extraction and segmentation. In Proceedings of ICML-2000. G. A. Miller. 1995. WordNet: A lexical database. 
Communications of the ACM, 38(11). MUC-6. 1995. The sixth message understanding conference. www.cs.nyu.edu/cs/faculty/grishman/muc6.html. MUC-7. 1997. The seventh message understanding conference. www.itl.nist.gov/iad/894.02/related projects/ muc/proceedings/muc 7 toc.html. NIST. 2003. The ACE evaluation plan. www.nist.gov/speech/tests/ace/index.htm. NIST. 2004. The ACE evaluation plan. www.nist.gov/speech/tests/ace/index.htm. L. Ramshaw and M. Marcus. 1994. Exploring the statistical derivation of transformational rule sequences for part-of-speech tagging. In Proceedings of the ACL Workshop on Combining Symbolic and Statistical Approaches to Language, pages 128–135. C. Sutton, K. Rohanimanesh, and A. McCallum. 2004. Dynamic conditional random fields: Factorized probabilistic models for labeling and segmenting sequence data. In In Proceedings of the TwentyFirst International Conference on Machine Learning (ICML-2004). Erik F. Tjong Kim Sang and Fien De Meulder. 2003. Introduction to the conll-2003 shared task: Language-independent named entity recognition. In Walter Daelemans and Miles Osborne, editors, Proceedings of CoNLL-2003, pages 142–147. Edmonton, Canada. E. F. Tjong Kim Sang and J. Veenstra. 1999. Representing text chunks. In Proceedings of EACL’99. E. F. Tjong Kim Sang. 2002. Introduction to the conll2002 shared task: Language-independent named entity recognition. In Proceedings of CoNLL-2002, pages 155–158. 480
Proceedings of the 21st International Conference on Computational Linguistics and 44th Annual Meeting of the ACL, pages 481–488, Sydney, July 2006. c⃝2006 Association for Computational Linguistics Segment-based Hidden Markov Models for Information Extraction Zhenmei Gu David R. Cheriton School of Computer Science University of Waterloo Waterloo, Ontario, Canada N2l 3G1 [email protected] Nick Cercone Faculty of Computer Science Dalhousie University Halifax, Nova Scotia, Canada B3H 1W5 [email protected] Abstract Hidden Markov models (HMMs) are powerful statistical models that have found successful applications in Information Extraction (IE). In current approaches to applying HMMs to IE, an HMM is used to model text at the document level. This modelling might cause undesired redundancy in extraction in the sense that more than one filler is identified and extracted. We propose to use HMMs to model text at the segment level, in which the extraction process consists of two steps: a segment retrieval step followed by an extraction step. In order to retrieve extractionrelevant segments from documents, we introduce a method to use HMMs to model and retrieve segments. Our experimental results show that the resulting segment HMM IE system not only achieves near zero extraction redundancy, but also has better overall extraction performance than traditional document HMM IE systems. 1 Introduction A Hidden Markov Model (HMM) is a finite state automaton with stochastic state transitions and symbol emissions (Rabiner, 1989). The automaton models a random process that can produce a sequence of symbols by starting from some state, transferring from one state to another state with a symbol being emitted at each state, until a final state is reached. Formally, a hidden Markov model (HMM) is specified by a five-tuple (S, K, Π, A, B), where S is a set of states; K is the alphabet of observation symbols; Π is the initial state distribution; A is the probability distribution of state transitions; and B is the probability distribution of symbol emissions. When the structure of an HMM is determined, the complete model parameters can be represented as λ = (A, B, Π). HMMs are particularly useful in modelling sequential data. They have been applied in several areas within natural language processing (NLP), with one of the most successful efforts in speech recognition. HMMs have also been applied in information extraction. An early work of using HMMs for IE is (Leek, 1997) in which HMMs are trained to extract gene name-location facts from a collection of scientific abstracts. Another related work is (Bikel et al., 1997) which used HMMs as part of its modelling for the name finding problem in information extraction. A more recent work on applying HMMs to IE is (Freitag and McCallum, 1999), in which a separate HMM is built for extracting fillers for each slot. To train an HMM for extracting fillers for a specific slot, maximum likelihood estimation is used to determine the probabilities (i.e., the initial state probabilities, the state transition probabilities, and the symbol emission probabilities) associated with each HMM from labelled texts. One characteristic of current HMM-based IE systems is that an HMM models the entire document. Each document is viewed as a long sequence of tokens (i.e., words, punctuation marks etc.), which is the observation generated from the given HMM. 
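For reference, a minimal sketch of such a model as a five-tuple container with Viterbi decoding is given below; the dense-array conventions are assumptions and this is not the implementation of any system discussed here:

```python
# A minimal container for the five-tuple (S, K, Pi, A, B) defined above,
# with a generic Viterbi decoder; a sketch with assumed numpy conventions.
import numpy as np

class HMM:
    def __init__(self, states, symbols, pi, A, B):
        self.states = states        # S: list of state names
        self.symbols = symbols      # K: emission alphabet
        self.pi = np.asarray(pi)    # Pi: initial state distribution, shape (|S|,)
        self.A = np.asarray(A)      # A: state transition probabilities, shape (|S|, |S|)
        self.B = np.asarray(B)      # B: symbol emission probabilities, shape (|S|, |K|)
        self.sym_index = {s: i for i, s in enumerate(symbols)}

    def viterbi_log(self, obs):
        """Best state sequence and its log probability for an observation sequence."""
        idx = [self.sym_index[o] for o in obs]
        logpi, logA, logB = (np.log(m + 1e-300) for m in (self.pi, self.A, self.B))
        delta = logpi + logB[:, idx[0]]
        back = []
        for t in idx[1:]:
            scores = delta[:, None] + logA          # scores[i, j]: best path ending i -> j
            back.append(scores.argmax(axis=0))
            delta = scores.max(axis=0) + logB[:, t]
        path = [int(delta.argmax())]
        for bp in reversed(back):
            path.append(int(bp[path[-1]]))
        return list(reversed(path)), float(delta.max())
```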
The extraction is performed by finding the best state sequence for this observed long token sequence constituting the whole document, and the subsequences of tokens that pass through the target filler state are extracted as fillers. We call such approaches to applying HMMs to IE at the document level as document-based HMM IE or document HMM IE for brevity. 481 In addition to HMMs, there are other Markovian sequence models that have been applied to IE. Examples of these models include maximum entropy Markov models (McCallum et al., 2000), Bayesian information extraction network (Peshkin and Pfeffer, 2003), and conditional random fields (McCallum, 2003) (Peng and McCallum, 2004). In the IE systems using these models, extraction is performed by sequential tag labelling. Similar to HMM IE, each document is considered to be a single steam of tokens in these IE models as well. In this paper, we introduce the concept of extraction redundancy, and show that current document HMM IE systems often produce undesired redundant extractions. In order to address this extraction redundancy issue, we propose a segmentbased two-step extraction approach in which a segment retrieval step is imposed before the extraction step. Our experimental results show that the resulting segment-based HMM IE system not only achieves near-zero extraction redundancy but also improves the overall extraction performance. This paper is organized as follows. In section 2, we describe our document HMM IE system in which the Simple Good-Turning (SGT) smoothing is applied for probability estimation. We also evaluate our document HMM IE system, and compare it to the related work. In Section 3, we point out the extraction redundancy issue in a document HMM IE system. The definition of the extraction redundancy is introduced for better evaluation of an IE system with possible redundant extraction. In order to address this extraction redundancy issue, we propose our segment-based HMM IE method in Section 4, in which a segment retrieval step is applied before the extraction is performed. Section 5 presents a segment retrieval algorithm by using HMMs to model and retrieve segments. We compare the performance between the segment HMM IE system and the document HMM IE system in Section 6. Finally, conclusions are made and some future work is mentioned in Section 7. 2 Document-based HMM IE with the SGT smoothing 2.1 HMM structure We use a similar HMM structure (named as HMM Context) as in (Freitag and McCallum, 1999) for our document HMM IE system. An example of such an HMM is shown in Figure 1, in which the number of pre-context states, postcontext states, and the number of parallel filler paths are all set to 4, the default model parameter setting in our system.                                                                                      Figure 1: An example of HMM Context structure HMM Context consists of the following four kinds of states in addition to the special start and end states. Filler states Fillermn, m = 1, 2, 3, 4 and n = 1, · · · , m states, correspond to the occurrences of filler tokens. Background state This state corresponds to the occurrences of the tokens that are not related to fillers or their contexts. Pre context states Pre4, Pre3, Pre2, Pre1 states correspond to the events present when context tokens occur before the fillers at the specific positions relative to the fillers, respectively. 
Post context states Post1, Post2, Post3, Post4 states correspond to the events present when context tokens occur after the fillers at the specific positions relative to the fillers, respectively. Our HMM structure differs from the one used in (Freitag and McCallum, 1999) in that we have added the transitions from the last post context state to every pre context state as well as every first filler state. This handles the situation where two filler occurrences in the document are so close to each other that the text segment between these two 482 fillers is shorter than the sum of the pre context and the post context sizes. 2.2 Smoothing in HMM IE There are many probabilities that need to be estimated to train an HMM for information extraction from a limited number of labelled documents. The data sparseness problem commonly occurring in probabilistic learning would also be an issue in the training for an HMM IE system, especially when more advanced HMM Context models are used. Since the emission vocabulary is usually large with respect to the number of training examples, maximum likelihood estimation of emission probabilities will lead to inappropriate zero probabilities for many words in the alphabet. The Simple Good-Turning (SGT) smoothing (Gale and Sampson, 1995) is a simple version of Good-Turning approach, which is a population frequency estimator used to adjust the observed term frequencies to estimate the real population term frequencies. The observed frequency distribution from the sample can be represented as a vector of (r, nr) pairs, r = 1, 2, · · · . r values are the observed term frequencies from the training data, and nr refers to the number of different terms that occur with frequency r in the sample. For each r observed in the sample, the GoodTurning method gives an estimation for its real population frequency as r∗= (r + 1)E(nr+1) E(nr) , where E(nr) is the expected number of terms with frequency r. For unseen events, an amount of probability P0 is assigned to all these unseen events, P0 = E(n1) N ≈n1 N , where N is the total number of term occurrences in the sample. The SGT smoothing has been successfully applied to naive Bayes IE systems in (Gu and Cercone, 2006) for more robust probability estimation. We apply the SGT smoothing method to our HMM IE systems to alleviate the data sparseness problem in HMM training. In particular, the emission probability distribution for each state is smoothed using the SGT method. The number of unseen emission terms is estimated, as the observed alphabet size difference between the specific state emission term distribution and the all term distribution, for each state before assigning the total unseen probability obtained from the SGT smoothing among all these unseen terms. The data sparseness problem in probability estimation for HMMs has been addressed to some extent in previous HMM based IE systems (e.g., (Leek, 1997) and (Freitag and McCallum, 1999)). Smoothing methods such as absolute discounting have been used for this purpose. Moreover, (Freitag and McCallum, 1999) uses a shrinkage technique for estimating word emission probabilities of HMMs in the face of sparse training data. It first defines a shrinkage topology over HMM states, then learns the mixture weights for producing interpolated emission probabilities by using a separate data set that is “held-out” from the labelled data. This technique is called deleted interpolation in speech recognition (Jelinek and Mercer, 1980). 
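A simplified sketch of the Good-Turing quantities used above follows; the full Simple Good-Turing procedure of Gale and Sampson (1995) additionally smooths the nr counts with a log-log regression, a step omitted here:

```python
# Simplified sketch: the adjusted frequency r* and the total unseen mass P0,
# computed directly from the observed frequency-of-frequencies counts.
from collections import Counter

def good_turing(term_counts):
    """term_counts: dict term -> observed frequency r in the training sample."""
    N = sum(term_counts.values())            # total number of term occurrences
    n = Counter(term_counts.values())        # n[r] = number of distinct terms seen r times
    p0 = n[1] / N                            # probability mass reserved for all unseen terms
    r_star = {r: (r + 1) * n[r + 1] / n[r]   # adjusted frequency r* = (r+1) n_{r+1} / n_r
              for r in n if n.get(r + 1, 0) > 0}
    return p0, r_star
```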
2.3 Experimental results on document HMM IE and comparison to related work We evaluated our document HMM IE system on the seminar announcements IE domain using tenfold cross validation evaluation. The data set consists of 485 annotated seminar announcements, with the fillers for the following four slots specified for each seminar: location (the location of a seminar), speaker (the speaker of a seminar), stime (the starting time of a seminar) and etime (the ending time of a seminar). In our HMM IE experiments, the structure parameters are set to system default values, i.e., 4 for both pre-context and postcontext size, and 4 for the number of parallel filler paths. Table 1 shows F1 scores (95% confidence intervals) of our Document HMM IE system (Doc HMM). The performance numbers from other HMM IE systems (Freitag and McCallum, 1999) are also listed in Table 1 for comparison, where HMM None is their HMM IE system that uses absolute discounting but with no shrinkage, and HMM Global is the representative version of their HMM IE system with shrinkage. By using the same structure parameters (i.e., the same context size) as in (Freitag and McCallum, 1999), our Doc HMM system performs consistently better on all slots than their HMM IE system using absolute discounting. Even compared to their much more complex version of HMM IE with shrinkage, our system has achieved comparable results on location, speaker and stime, but obtained significantly better performance on the etime slot. It is noted that our smoothing method is much simpler to apply, and does not require any extra effort such as specifying shrinkage topology or any extra labelled data for a held-out set. 483 Table 1: F1 of Document HMM IE systems on seminar announcements Learner location speaker stime etime Doc HMM 0.8220±0.022 0.7135±0.025 1.0000±0.0 0.9488±0.012 HMM None 0.735 0.513 0.991 0.814 HMM Global 0.839 0.711 0.991 0.595 3 Document extraction redundancy in HMM IE 3.1 Issue with document-based HMM IE In existing HMM based IE systems, an HMM is used to model the entire document as one long observation sequence emitted from the HMM. The extracted fillers are identified by any part of the sequence in which tokens in it are labelled as one of the filler states. The commonly used structure of the hidden Markov models in IE allows multiple passes through the paths of the filler states. So it is possible for the labelled state sequences to present multiple filler extractions. It is not known from the performance reports from previous works (e.g., (Freitag and McCallum, 1999)) that how exactly a correct extraction for one document is defined in HMM IE evaluation. One way to define a correct extraction for a document is to require that at least one of the text segments that pass the filler states is the same as a labelled filler. Alternatively, we can define the correctness by requiring that all the text segments that pass the filler states are same as the labelled fillers. In this case, it is actually required an exact match between the HMM state sequence determined by the system and the originally labelled one for that document. Very likely, the former correctness criterion was used in evaluating these document-based HMM IE systems. We used the same criterion for evaluating our document HMM IE systems in Section 2. 
Although it might be reasonable to define that a document is correctly extracted if any one of the identified fillers from the state sequence labelled by the system is a correct filler, certain issues exist when a document HMM IE system returns multiple extractions for the same slot for one document. For example, it is possible that some of the fillers found by the system are not correct extractions. In this situation, such document-wise extraction evaluation alone would not be sufficient to measure the performance of an HMM IE system. Document HMM IE modelling does not provide any guidelines for selecting the most likely filler from the ones identified by the state sequence matching over the whole document. For the template filling IE problem that is of our interest in this paper, the ideal extraction result is one slot filler per document. Otherwise, some further postprocessing would be required to choose only one extraction, from the multiple fillers possibly extracted by a document HMM IE system, for filling in the slot template for that document.

3.2 Concept of document extraction redundancy in HMM IE In order to make a more complete extraction performance evaluation in an HMM-based IE system, we introduce another performance measure, document extraction redundancy as defined in Definition 1, to be used with the document-wise extraction correctness measure (see the short sketch below). Definition 1. Document extraction redundancy is defined over the documents that contain correct extraction(s), as the ratio of the incorrectly extracted fillers to all returned fillers from the document HMM IE system. For example, when the document HMM IE system issues more than one slot extraction for a document, if all the issued extractions are correct ones, then the extraction redundancy for that document is 0. Among all the issued extractions, the larger the number of incorrect extractions is, the closer the extraction redundancy for that document is to 1. However, the extraction redundancy can never be 1 according to our definition, since this measure is only defined over the documents that contain at least one correct extraction. Now let us have a look at the extraction redundancy in the document HMM IE system from Section 2. We calculate the average document extraction redundancy over all the documents that are judged as correctly extracted. The evaluation results for the document extraction redundancy (shown in column R) are listed in Table 2, paired with their corresponding F1 scores from the document-wise extraction evaluation.

Table 2: F1 / redundancy in document HMM IE on SA domain
Slot       F1       R
location   0.8220   0.0543
speaker    0.7135   0.0952
stime      1.0000   0.1312
etime      0.9488   0.0630

Generally speaking, the HMM IE systems based on document modelling have exhibited a certain extraction redundancy for every slot in this IE domain, and in some cases, such as for speaker and stime, the average extraction redundancy is by no means negligible.

4 Segment-based HMM IE Modelling In order to make the IE system capable of producing the ideal extraction result that issues only one slot filler for each document, we propose a segment-based HMM IE framework in the following sections of this paper. We expect this framework to dramatically reduce the document extraction redundancy and to let the resulting IE system produce extraction results for the template-filling IE task with the least post-processing requirement.
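A short illustrative helper for the redundancy measure of Definition 1 is sketched below; the function names are hypothetical and this is not the evaluation code used here:

```python
# Illustrative only: document extraction redundancy (Definition 1) and its average
# over the documents that contain at least one correct extraction.
def document_redundancy(returned_fillers, gold_fillers):
    """Ratio of incorrect fillers to all returned fillers, for one document."""
    if not returned_fillers:
        return 0.0
    incorrect = [f for f in returned_fillers if f not in gold_fillers]
    return len(incorrect) / len(returned_fillers)

def average_redundancy(per_document):
    """per_document: list of (returned_fillers, gold_fillers) pairs.
    Averaged only over documents with at least one correct extraction."""
    scores = [document_redundancy(r, g) for r, g in per_document
              if any(f in g for f in r)]
    return sum(scores) / len(scores) if scores else 0.0
```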
The basic idea of our approach is to use HMMs to extract fillers from only extraction-relevant part of text instead of the entire document. We refer to this modelling as segment-based HMM IE, or segment HMM IE for brevity. The unit of the extraction-relevant text segments is definable according to the nature of the texts. For most texts, one sentence in the text can be regarded as a text segment. For some texts that are not written in a grammatical style and sentence boundaries are hard to identify, we can define a extractionrelevant text segment be the part of text that includes a filler occurrence and its contexts. 4.1 Segment-based HMM IE modelling: the procedure By imposing an extraction-relevant text segment retrieval in the segment HMM IE modelling, we perform an extraction on a document by completing the following two successive sub-tasks. Step 1: Identify from the entire documents the text segments that are relevant to a specific slot extraction. In other words, the document is filtered by locating text segments that might contain a filler. Step 2: Extraction is performed by applying the segment HMM only on the extractionrelevant text segments that are obtained from the first step. Each retrieved segment is labelled with the most probable state sequence by the HMM, and all these segments are sorted according to their normalized likelihoods of their best state sequences. The filler(s) identified by the segment having the largest likelihood is/are returned as the extraction result. 4.2 Extraction from relevant segments Since it is usual that more than one segment have been retrieved at Step 1, these segments need to compete at step 2 for issuing extraction(s) from their best state sequences found with regard to the HMM λ used for extraction. For each segment s with token length of n, its normalized best state sequence likelihood is defined as follows. l(s) = log ¡ max all Q P(Q, s|λ) ¢ × 1 n, (1) where λ is the HMM and Q is any possible state sequence associated with s. All the retrieved segments are then ranked according to their l(s), and the segment with the highest l(s) number is selected and the extraction is identified from its labelled state sequence by the segment HMM. This proposed two-step HMM based extraction procedure requires that the training of the IE models follows the same style. First, we need to learn an extraction-relevance segment retrieval system from the labelled texts which will be described in detail in Section 5. Then, an HMM is trained for each slot extraction by only using the extractionrelevant text segments instead of the whole documents. By limiting the HMM training to a much smaller part of the texts, basically including the fillers and their surrounding contexts, the alphabet size of all emission symbols associated with the HMM would be significantly reduced. Compared to the common document-based HMM IE modelling, our proposed segment-based HMM IE modelling would also ease the HMM training difficulty caused by the data sparseness problem since we are working on a smaller alphabet. 485 5 Extraction-relevant segment retrieval using HMMs We propose a segment retrieval approach for performing the first subtask by also using HMMs. In particular, it trains an HMM from labelled segments in texts, and then use the learned HMM to determine whether a segment is relevant or not with regard to a specific extraction task. 
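As a concrete rendering of the extraction step of Section 4.2, the sketch below ranks retrieved segments by the length-normalized likelihood l(s) of Equation (1) and reads the filler off the winning segment; `viterbi_log` and `filler_states` are assumed interfaces rather than the system's actual code:

```python
# Sketch of the competition among retrieved segments: score each segment by its
# length-normalized best-state-sequence log likelihood l(s), then extract from the winner.
def extract_from_segments(segments, viterbi_log, filler_states):
    """segments: list of token lists; viterbi_log(seg) is assumed to return
    (best_state_path, log P(Q*, seg | lambda)) for the extractor HMM."""
    scored = []
    for seg in segments:
        path, log_p = viterbi_log(seg)
        scored.append((log_p / len(seg), seg, path))   # l(s): normalized by segment length
    _, best_seg, best_path = max(scored, key=lambda t: t[0])
    # The tokens labelled with filler states in the winning segment are the extracted filler(s).
    return [tok for tok, state in zip(best_seg, best_path) if state in filler_states]
```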
In order to distinguish the HMM used for segment retrieval in the first step from the HMM used for the extraction in the second step, we call the former one as the retrieval HMM and the later one as the extractor HMM. 5.1 Training HMMs for segment retrieval To train a retrieval HMM, it requires each training segment to be labelled in the same way as in the annotated training document. After the training texts are segmented into sentences (we are using sentence as the segment unit), the obtained segments that carry the original slot filler tags are used directly as the training examples for the retrieval HMM. An HMM with the same IE specific structure is trained from the prepared training segments in exactly the same way as we train an HMM in the document HMM IE system from a set of training documents. The difference is that much shorter labelled observation sequences are used. 5.2 Segment retrieval using HMMs After a retrieval HMM is trained from the labelled segments, we use this HMM to determine whether an unseen segment is relevant or not to a specific extraction task. This is done by estimating, from the HMM, how likely the associated state sequence of the given segment passes the target filler states. The HMM λ trained from labelled segments has the structure as shown in Figure 1. So for a segment s, all the possible state sequences can be categorized into two kinds: the state sequences passing through one of the target filler path, and the state sequences not passing through any target filler states. Because of the structure constraints of the specified HMM in IE, we can see that the second kind of state sequences actually have only one possible path, denoted as Qbg in which the whole observation sequence of s starts at the background state qbg and continues staying in the background state until the end. Let s = O1O2 · · · OT , where T is the length of s in tokens. The probability of s following this particular background state path Qbg can be easily calculated with respect to the HMM λ as follows: P(s, Qbg|λ) =πqbgbqbg(O1)aqbgqbgbqbg(O2) · · · aqbgqbgbqbg(OT ), where πi is the initial state probability for state i, bi(Ot) is the emission probability of symbol Ot at state i, and aij is the state transition probability from state i to state j. We know that the probability of observing s given the HMM λ actually sums over the probabilities of observing s on all the possible state sequences given the HMM, i.e., P(s|λ) = X all Q P(s, Q|λ) Let Qfiller denote the set of state sequences that pass through any filler states. We have {all Q} = Qbg∪Qfiller. P(s|λ) can be calculated efficiently using the forward-backward procedure which makes the estimate for the total probability of all state paths that go through filler states straightforward to be: P(s, Qfiller|λ) ∆= X allQ∈Qfiller P(s, Q|λ) = P(s|λ) −P(s, Qbg|λ). Now it is clear to see that, if the calculated P(s, Qfiller|λ) > P(s, Qbg|λ), then segment s is considered more likely to have filler occurrence(s). Therefore in this case we classify s as an extraction relevant segment and it will be retrieved. 5.3 Document-wise retrieval performance Since the purpose of our segment retrieval is to identify relevant segments from each document, we need to define how to determine whether a document is correctly filtered (i.e., with extraction relevant segments retrieved) by a given segment retrieval system. We consider two criteria, first a loose correctness definition as follows: Definition 2. 
A document is least correctly filtered by the segment retrieval system when at least one of the extraction relevant segments in that document has been retrieved by the system; otherwise, we say the system fails on that document. Then we define a stricter correctness measure as follows: 486 Definition 3. A document is most correctly filtered by the segment retrieval system only when all the extraction relevant segments in that document have been retrieved by the system; otherwise, we say the system fails on that document. The overall segment retrieval performance is measured by retrieval precision (i.e., ratio of the number of correctly filtered documents to the number of documents from which the system has retrieved at least one segments) and retrieval recall (i.e., ratio of the number of correctly filtered documents to the number of documents that contain relevant segments). According to the just defined two correctness measures, the overall retrieval performance for the all testing documents can be evaluated under both the least correctly filtered and the least correctly filtered measures. We also evaluate average document-wise segment retrieval redundancy, as defined in Definition 4 to measure the segment retrieval accuracy. Definition 4. Document-wise segment retrieval redundancy is defined over the documents which are least correctly filtered by the segment retrieval system, as the ratio of the retrieved irrelevant segments to all retrieved segments for that document. 5.4 Experimental results on segment retrieval Table 3 shows the document-wise segment retrieval performance evaluation results under both least correctly filtered and most correctly filtered measures, as well as the related average number of retrieved segments for each document (as in Column nSeg) and the average retrieval redundancy. Shown from Table 3, the segment retrieval results have achieved high recall especially with the least correctly filtered correctness criterion. In addition, the system has produced the retrieval results with relatively small redundancy which means most of the segments that are fed to the segment HMM extractor from the retrieval step are actually extraction-related segments. 6 Segment vs. document HMM IE We conducted experiments to evaluate our segment-based HMM IE model, using the proposed segment retrieval approach, and comparing their final extraction performance to the document-based HMM IE model. Table 4 shows the overall performance comparison between the document HMM IE system (Doc HMM) and the segment HMM IE system (Seg HMM). Compared to the document-based HMM IE modelling, the extraction performance on location is significantly improved by our segment HMM IE system. The important improvement from the segment HMM IE system that it has achieved zero extraction redundancy for all the slots in this experiment. 7 Conclusions and future work In current HMM based IE systems, an HMM is used to model at the document level which causes certain redundancy in the extraction. We propose a segment-based HMM IE modelling method in order to achieve near-zero redundancy extraction. In our segment HMM IE approach, a segment retrieval step is first applied so that the HMM extractor identifies fillers from a smaller set of extraction-relevant segments. The resulting segment HMM IE system using the segment retrieval method has not only achieved nearly zero extraction redundancy, but also improved the overall extraction performance. 
5.4 Experimental results on segment retrieval
Table 3 shows the document-wise segment retrieval performance under both the least correctly filtered and the most correctly filtered measures, together with the average number of retrieved segments per document (column nSeg) and the average retrieval redundancy. As Table 3 shows, segment retrieval achieves high recall, especially under the least correctly filtered criterion. In addition, the retrieval results have relatively small redundancy, which means that most of the segments fed to the segment HMM extractor in the second step are in fact extraction-related segments.

Table 3: Segment retrieval results
Slot       least correctly (Precision / Recall)   most correctly (Precision / Recall)   nSeg     Redundancy
location   0.8948 / 0.9177                        0.8758 / 0.8982                       2.6064   0.4569
speaker    0.8791 / 0.7633                        0.6969 / 0.6042                       1.6082   0.1664
stime      1.0000 / 1.0000                        0.9464 / 0.9464                       2.6576   0.1961
etime      0.4717 / 0.9952                        0.4570 / 0.9609                       1.7896   0.1050

6 Segment vs. document HMM IE
We conducted experiments to evaluate our segment-based HMM IE model, using the proposed segment retrieval approach, and compared its final extraction performance to that of the document-based HMM IE model. Table 4 shows the overall performance comparison between the document HMM IE system (Doc HMM) and the segment HMM IE system (Seg HMM). Compared to document-based HMM IE modelling, the extraction performance on location is significantly improved by our segment HMM IE system. An equally important improvement is that the segment HMM IE system achieves zero extraction redundancy for all the slots in this experiment.

Table 4: F1 comparison on seminar announcements (document HMM IE vs. segment HMM IE); each slot reports F1 / redundancy R
Learner    location                 speaker                  stime                  etime
Doc HMM    0.822±0.022 / 0.0543     0.7135±0.025 / 0.0952    1.0000±0.0 / 0.131     0.9488±0.012 / 0.063
Seg HMM    0.8798±0.018 / 0         0.7162±0.025 / 0         0.998±0.003 / 0        0.9611±0.011 / 0

7 Conclusions and future work
In current HMM-based IE systems, an HMM models text at the document level, which causes a certain amount of redundancy in the extraction. We propose a segment-based HMM IE modelling method in order to achieve near-zero-redundancy extraction. In our segment HMM IE approach, a segment retrieval step is applied first, so that the HMM extractor identifies fillers from a smaller set of extraction-relevant segments. The resulting segment HMM IE system using the segment retrieval method has not only achieved nearly zero extraction redundancy, but also improved the overall extraction performance. The effect of segment-based HMM extraction goes beyond applying a post-processing step to document-based HMM extraction, since post-processing alone can only reduce the redundancy, not improve the F1 scores.

For template-filling style IE problems, it is more reasonable to perform extraction by HMM state labelling on segments rather than on the entire document. When the observation sequence to be labelled becomes longer, finding its best single state sequence becomes harder, because changing a small part of a very long state sequence affects the state path probability much less than changing the same subsequence in a much shorter state sequence. In fact, this perspective applies not only to HMM IE modelling, but to any IE modelling in which extraction is performed by sequential state labelling. We are working on extending this segment-based framework to other Markovian sequence models used for IE.

Segment retrieval for extraction is an important step in segment HMM IE, since it filters out irrelevant segments from the document. The HMM for extraction is supposed to model extraction-relevant segments, so irrelevant segments fed to the second step would make the extraction more difficult by adding noise to the competition among relevant segments. We have presented and evaluated our segment retrieval method. Document-wise retrieval performance gives more insight into the suitability of a particular segment retrieval method for our purpose: the document-wise retrieval recall under the least correctly filtered measure provides an upper bound on the final extraction performance.

Our current segment retrieval method requires the training documents to be segmented in advance. Although sentence segmentation is a relatively easy task in NLP, some segmentation errors are still unavoidable, especially for ungrammatical online texts. For example, an improper segmentation could place a segment boundary in the middle of a filler, which would directly harm the final extraction performance of the segment HMM IE system. In the future, we intend to design segment retrieval methods that do not require documents to be segmented before retrieval, hence avoiding early-stage errors introduced by the text segmentation step. A very promising idea is to adapt a naive Bayes IE model to perform redundant extractions directly on an entire document in order to retrieve filler-containing text segments for a segment HMM IE system.

References
[Bikel et al.1997] D. M. Bikel, S. Miller, R. Schwartz, and R. Weischedel. 1997. Nymble: a high-performance learning name-finder. In Proceedings of ANLP-97, pages 194–201.
[Freitag and McCallum1999] D. Freitag and A. McCallum. 1999. Information extraction with HMMs and shrinkage. In Proceedings of the AAAI-99 Workshop on Machine Learning for Information Extraction.
[Gale and Sampson1995] W. Gale and G. Sampson. 1995. Good-turning smoothing without tears. Journal of Quantitative Linguistics, 2:217–37. [Gu and Cercone2006] Z. Gu and N. Cercone. 2006. Naive bayes modeling with proper smoothing for information extraction. In Proceedings of the 2006 IEEE International Conference on Fuzzy Systems. [Jelinek and Mercer1980] F. Jelinek and R. L. Mercer. 1980. Intepolated estimation of markov source parameters from sparse data. In E. S. Gelesma and L. N. Kanal, editors, Proceedings of the Wrokshop on Pattern Recognition in Practice, pages 381–397, Amsterdam, The Netherlands: North-Holland, May. [Leek1997] T. R. Leek. 1997. Information extraction using hidden markov models. Master’s thesis, UC San Diego. [McCallum et al.2000] A. McCallum, D. Freitag, and F. Pereira. 2000. Maximum entropy Markov models for informaion extraction and segmentation. In Proceedings of ICML-2000. [McCallum2003] Andrew McCallum. 2003. Efficiently inducing features of conditional random fields. In Nineteenth Conference on Uncertainty in Artificial Intelligence (UAI03). [Peng and McCallum2004] F. Peng and A. McCallum. 2004. Accurate information extraction from research papers using conditional random fields. In Proceedings of Human Language Technology Conference and North American Chapter of the Association for Computational Linguistics. [Peshkin and Pfeffer2003] L. Peshkin and A. Pfeffer. 2003. Bayesian information extraction network. In Proceedings of the Eighteenth International Joint Conf. on Artificial Intelligence. [Rabiner1989] L. Rabiner. 1989. A tutorial on hidden Markov models and selected applications in speech recognition. In Proceedings of the IEEE, volume 77(2). 488
2006
61
Proceedings of the 21st International Conference on Computational Linguistics and 44th Annual Meeting of the ACL, pages 489–496, Sydney, July 2006. c⃝2006 Association for Computational Linguistics A DOM Tree Alignment Model for Mining Parallel Data from the Web Lei Shi1, Cheng Niu1, Ming Zhou1, and Jianfeng Gao2 1Microsoft Research Asia, 5F Sigma Center, 49 Zhichun Road, Beijing 10080, P. R. China 2Microsoft Research, One Microsoft Way, Redmond, WA 98052, USA {leishi,chengniu,mingzhou,jfgao}@microsoft.com Abstract This paper presents a new web mining scheme for parallel data acquisition. Based on the Document Object Model (DOM), a web page is represented as a DOM tree. Then a DOM tree alignment model is proposed to identify the translationally equivalent texts and hyperlinks between two parallel DOM trees. By tracing the identified parallel hyperlinks, parallel web documents are recursively mined. Compared with previous mining schemes, the benchmarks show that this new mining scheme improves the mining coverage, reduces mining bandwidth, and enhances the quality of mined parallel sentences. 1 Introduction Parallel bilingual corpora are critical resources for statistical machine translation (Brown 1993), and cross-lingual information retrieval (Nie 1999). Additionally, parallel corpora have been exploited for various monolingual natural language processing (NLP) tasks, such as wordsense disambiguation (Ng 2003) and paraphrase acquisition (Callison 2005). However, large scale parallel corpora are not readily available for most language pairs. Even where resources are available, such as for English-French, the data are usually restricted to government documents (e.g., the Hansard corpus, which consists of French-English translations of debates in the Canadian parliament) or newswire texts. The "governmentese" that characterizes these document collections cannot be used on its own to train data-driven machine translation systems for a range of domains and language pairs. With a sharply increasing number of bilingual web sites, web mining for parallel data becomes a promising solution to this knowledge acquisition problem. In an effort to estimate the amount of bilingual data on the web, (Ma and Liberman 1999) surveyed web pages in the de (German web site) domain, showing that of 150,000 websites in the .de domain, 10% are German-English bilingual. Based on such observations, some web mining systems have been developed to automatically obtain parallel corpora from the web (Nie et al 1999; Ma and Liberman 1999; Chen, Chau and Yeh 2004; Resnik and Smith 2003 Zhang et al 2006 ). These systems mine parallel web documents within bilingual web sites, exploiting the fact that URLs of many parallel web pages are named with apparent patterns to facilitate website maintenance. Hence given a bilingual website, the mining systems use pre-defined URL patterns to discover candidate parallel documents within the site. Then content-based features will be used to verify the translational equivalence of the candidate pairs. However, due to the diversity of web page styles and website maintenance mechanisms, bilingual websites use varied naming schemes for parallel documents. For example, the United Nation’s website, which contains thousands of parallel pages, simply names the majority of its web pages with some computer generated ad-hoc URLs. Such a website then cannot be mined by the URL pattern-based mining scheme. 
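For context, a minimal sketch of the URL-pattern heuristic used by these earlier mining systems; the language markers and the helper name below are illustrative stand-ins, not the patterns of any particular system:

```python
import re

# Illustrative language markers; real systems rely on site-specific, pre-defined patterns.
LANG_MARKERS = [("en", "zh"), ("english", "chinese"), ("e", "c")]

def candidate_counterpart_urls(url):
    """Propose URLs of the translated counterpart page by swapping language markers."""
    candidates = set()
    for a, b in LANG_MARKERS:
        for src, tgt in ((a, b), (b, a)):
            # Only swap the marker when it appears as a standalone path segment or affix.
            pattern = r"(?<![a-z])%s(?![a-z])" % re.escape(src)
            swapped = re.sub(pattern, tgt, url)
            if swapped != url:
                candidates.add(swapped)
    return candidates

# Example: candidate_counterpart_urls("http://site.org/en/news/page1.html")
# proposes "http://site.org/zh/news/page1.html" as a candidate parallel page.
```

As the surrounding discussion notes, such patterns fail on sites whose parallel pages carry unrelated, machine-generated URLs, which motivates the DOM-tree-based scheme below.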
To further improve the coverage of web mining, other patterns associated with translational parallelism are called for. Besides, URL pattern-based mining may raise concerns on high bandwidth cost and slow download speed. Based on descriptions of (Nie et al 1999; Ma and Liberman 1999; Chen, Chau and Yeh 2004), the mining process requires a full host crawling to collect URLs before using URL patterns to discover the parallel documents. Since in many bilingual web sites, parallel documents are much sparser than comparable documents, a significant portion of internet bandwidth is wasted on downloading web pages without translational counterparts. Furthermore, there is a lack of discussion on the quality of mined data. To support machine translation, parallel sentences should be extracted from the mined parallel documents. However, current sentence alignment models, (Brown et al 1991; Gale & Church 1991; Wu 1994; Chen 489 1993; Zhao and Vogel, 2002; etc.) are targeted on traditional textual documents. Due to the noisy nature of the web documents, parallel web pages may consist of non-translational content and many out-of-vocabulary words, both of which reduce sentence alignment accuracy. To improve sentence alignment performance on the web data, the similarity of the HTML tag structures between the parallel web documents should be leveraged properly in the sentence alignment model. In order to improve the quality of mined data and increase the mining coverage and speed, this paper proposes a new web parallel data mining scheme. Given a pair of parallel web pages as seeds, the Document Object Model1 (DOM) is used to represent the web pages as a pair of DOM trees. Then a stochastic DOM tree alignment model is used to align translationally equivalent content, including both textual chunks and hyperlinks, between the DOM tree pairs. The parallel hyperlinks discovered are regarded as anchors to new parallel data. This makes the mining scheme an iterative process. The new mining scheme has three advantages: (i) Mining coverage is increased. Parallel hyperlinks referring to parallel web page is a general and reliable pattern for parallel data mining. Many bilingual websites not supporting URL pattern-based mining scheme support this new mining scheme. Our mining experiment shows that, using the new web mining scheme, the web mining throughput is increased by 32%; (ii) The quality of the mined data is improved. By leveraging the web pages’ HTML structures, the sentence aligner supported by the DOM tree alignment model outperforms conventional ones by 7% in both precision and recall; (iii) The bandwidth cost is reduced by restricting web page downloads to the links that are very likely to be parallel. The rest of the paper is organized as follows: In the next section, we introduce the related work. In Section 3, a new web parallel data mining scheme is presented. Three component technologies, the DOM tree alignment model, the sentence aligner, and the candidate parallel page verification model are presented in Section 4, 5, and 6. Section 7 presents experiments and benchmarks. The paper is finally concluded in Section 8. 1 See http://www.w3.org/DOM/ 2 Related Work The parallel data available on the web have been an important knowledge source for machine translation. For example, Hong Kong Laws, an English-Chinese Parallel corpus released by Linguistic Data Consortium (LDC) is downloaded from the Department of Justice of the Hong Kong Special Administrative Region website. 
Recently, web mining systems have been built to automatically acquire parallel data from the web. Exemplary systems include PTMiner (Nie et al 1999), STRAND (Resnik and Smith, 2003), BITS (Ma and Liberman, 1999), and PTI (Chen, Chau and Yeh, 2004). Given a bilingual website, these systems identify candidate parallel documents using pre-defined URL patterns. Then content-based features are employed for candidate verification. Particularly, HTML tag similarities have been exploited to verify parallelism between pages. But it is done by simplifying HTML tags as a string sequence instead of a hierarchical DOM tree. Tens of thousands parallel documents have been acquired with accuracy over 90%. To support machine translation, parallel sentence pairs should be extracted from the parallel web documents. A number of techniques for aligning sentences in parallel corpora have been proposed. (Gale & Church 1991; Brown et al. 1991; Wu 1994) used sentence length as the basic feature for alignment. (Kay & Roscheisen 1993; and Chen 1993) used lexical information for sentence alignment. Models combining length and lexicon information were proposed in (Zhao and Vogel, 2002; Moore 2002). Signal processing techniques is also employed in sentence alignment by (Church 1993; Fung & McKeown 1994). Recently, much research attention has been paid to aligning sentences in comparable documents (Utiyama et al 2003, Munteanu et al 2004). The DOM tree alignment model is the key technique of our mining approach. Although, to our knowledge, this is the first work discussing DOM tree alignments, there is substantial research focusing on syntactic tree alignment model for machine translation. For example, (Wu 1997; Alshawi, Bangalore, and Douglas, 2000; Yamada and Knight, 2001) have studied synchronous context free grammar. This formalism requires isomorphic syntax trees for the source sentence and its translation. (Shieber and Schabes 1990) presents a synchronous tree adjoining grammar (STAG) which is able to align two syn490 tactic trees at the linguistic minimal units. The synchronous tree substitution grammar (STSG) presented in (Hajic etc. 2004) is a simplified version of STAG which allows tree substitution operation, but prohibits the operation of tree adjunction. 3 A New Parallel Data Mining Scheme Supported by DOM Tree Alignment Our new web parallel data mining scheme consists of the following steps: (1) Given a web site, the root page and web pages directly linked from the root page are downloaded. Then for each of the downloaded web page, all of its anchor texts (i.e. the hyperlinked words on a web page) are compared with a list of predefined strings known to reflect translational equivalence among web pages (Nie et al 1999). Examples of such predefined trigger strings include: (i) trigger words for English translation {English, English Version,  ,   , etc.}; and (ii) trigger words for Chinese translation {Chinese, Chinese Version, Simplified Chinese, Traditional Chinese,   ,   , etc.}. If both categories of trigger words are found, the web site is considered bilingual, and every web page pair are sent to Step 2 for parallelism verification. (2) Given a pair of the plausible parallel web pages, a verification module is called to determine if the page pair is truly translationally equivalent. (3) For each verified pair of parallel web pages, a DOM tree alignment model is called to extract parallel text chunks and hyperlinks. 
(4) Sentence alignment is performed on each pair of the parallel text chunks, and the resulting parallel sentences are saved in an output file. (5) For each pair of parallel hyperlinks, the corresponding pair of web pages is downloaded, and then goes to Step 2 for parallelism verification. If no more parallel hyperlinks are found, stop the mining process. Our new mining scheme is iterative in nature. It fully exploits the information contained in the parallel data and effectively uses it to pinpoint the location holding more parallel data. This approach is based on our observation that parallel pages share similar structures holding parallel content, and parallel hyperlinks refer to new parallel pages. By exploiting both the HTML tag similarity and the content-based translational equivalences, the DOM tree alignment model extracts parallel text chunks. Working on the parallel text chunks instead of the text of the whole web page, the sentence alignment accuracy can be improved by a large margin. In the next three sections, three component techniques, the DOM tree alignment model, sentence alignment model, and candidate web page pair verification model are introduced. 4 DOM Tree Alignment Model The Document Object Model (DOM) is an application programming interface for valid HTML documents. Using DOM, the logical structure of a HTML document is represented as a tree where each node belongs to some pre-defined node types (e.g. Document, DocumentType, Element, Text, Comment, ProcessingInstruction etc.). Among all these types of nodes, the nodes most relevant to our purpose are Element nodes (corresponding to the HTML tags) and Text nodes (corresponding to the texts). To simplify the description of the alignment model, minor modifications of the standard DOM tree are made: (i) Only the Element nodes and Text nodes are kept in our document tree model. (ii) The ALT attribute is represented as Text node in our document tree model. The ALT text are textual alternative when images cannot be displayed, hence is helpful to align images and hyperlinks. (iii) the Text node (which must be a leaf) and its parent Element node are combined into one node in order to concise the representation of the alignment model. The above three modifications are exemplified in Fig. 1. Fig. 1 Difference between Standard DOM and Our Document Tree Despite these minor differences, our document tree is still referred as DOM tree throughout this paper. 491 4.1 DOM Tree Alignment Similar to STSG, our DOM tree alignment model supports node deletion, insertion and substitution. Besides, both STSG and our DOM tree alignment model define the alignment as a tree hierarchical invariance process, i.e. if node A is aligned with node B, then the children of A are either deleted or aligned with the children of B. But two major differences exist between STSG and our DOM tree alignment model: (i) Our DOM tree alignment model requires the alignment a sequential order invariant process, i.e. if node A is aligned with node B, then the sibling nodes following A have to be either deleted or aligned with the sibling nodes following B. (ii) (Hajic etc. 2004) presents STSG in the context of language generation, while we search for the best alignment on the condition that both trees are given. 
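Returning to the document-tree simplification described at the start of this section (Element and Text nodes only, ALT text treated as node text, text leaves merged into their parent element), the following is a rough sketch using Python's standard html.parser; it is an approximation of that preprocessing, not the authors' code, and it ignores details such as comment stripping and hyperlink bookkeeping.

```python
from html.parser import HTMLParser

VOID_TAGS = {"img", "br", "hr", "meta", "link", "input"}

class Node:
    def __init__(self, tag):
        self.tag, self.text, self.children = tag, "", []

class DocTreeBuilder(HTMLParser):
    """Builds a simplified document tree: element nodes only, with any text
    (including ALT text) merged into the enclosing element node."""
    def __init__(self):
        super().__init__()
        self.root = Node("document")
        self.stack = [self.root]

    def handle_starttag(self, tag, attrs):
        node = Node(tag)
        alt = dict(attrs).get("alt")
        if alt:                                   # ALT text behaves like a Text node
            node.text = alt
        self.stack[-1].children.append(node)
        if tag not in VOID_TAGS:                  # void elements have no end tag
            self.stack.append(node)

    def handle_endtag(self, tag):
        if len(self.stack) > 1 and self.stack[-1].tag == tag:
            self.stack.pop()

    def handle_data(self, data):
        if data.strip():                          # merge text leaves into the parent element
            self.stack[-1].text += data.strip()

builder = DocTreeBuilder()
builder.feed("<body><p>Hello world</p><img src='x.png' alt='a small logo'></body>")
tree = builder.root                               # root of the simplified document tree
```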
To facilitate the presentation of the tree alignment model, the following symbols are introduced. Given an HTML document $D$, $T^D$ refers to the corresponding DOM tree; $N^D_i$ refers to the $i$-th node of $T^D$ (nodes are indexed in breadth-first order), and $T^D_i$ refers to the sub-tree rooted at $N^D_i$, so $N^D_1$ is the root of $T^D$ and $T^D = T^D_1$; $T^D_{[i,j]}$ refers to the forest consisting of the sub-trees rooted at the nodes $N^D_i$ through $N^D_j$. $N^D_i.t$ refers to the text of node $N^D_i$; $N^D_i.l$ refers to the HTML tag of $N^D_i$; $N^D_i.C_j$ refers to the $j$-th child of $N^D_i$; $N^D_i.C_{[m,n]}$ refers to the consecutive sequence of $N^D_i$'s children from $N^D_i.C_m$ to $N^D_i.C_n$; the sub-tree rooted at $N^D_i.C_j$ is written $N^D_i.TC_j$, and the forest rooted at $N^D_i.C_{[m,n]}$ is written $N^D_i.TC_{[m,n]}$. Finally, NULL refers to the empty node introduced for node deletion.

To accommodate the hierarchical structure of the DOM tree, two different translation probabilities are defined: $\Pr(T^F_m \mid T^E_i)$, the probability of translating sub-tree $T^E_i$ into sub-tree $T^F_m$, and $\Pr(N^F_m \mid N^E_i)$, the probability of translating node $N^E_i$ into node $N^F_m$. In addition, $\Pr(T^F_{[m,n]} \mid T^E_{[i,j]}, A)$ represents the probability of translating the forest $T^E_{[i,j]}$ into $T^F_{[m,n]}$ based on the alignment $A$. The tree alignment $A$ is defined as a mapping from target nodes onto source nodes or the null node. Given two HTML documents $F$ (in French) and $E$ (in English), the tree alignment task is defined as searching for the $A$ which maximizes the following probability:

$\Pr(A \mid T^F, T^E) \propto \Pr(T^F \mid T^E, A)\,\Pr(A \mid T^E)$   (1)

where $\Pr(A \mid T^E)$ represents the prior knowledge of the alignment configuration. Introducing $p_d$, the probability of a source or target node deletion occurring in an alignment configuration, the alignment prior $\Pr(A \mid T^E)$ is assumed to follow the binomial distribution

$\Pr(A \mid T^E) \propto (1 - p_d)^L\, p_d^M$

where $L$ is the count of non-empty alignments in $A$, and $M$ is the count of source and target node deletions in $A$. As to $\Pr(T^F \mid T^E, A)$, we estimate $\Pr(T^F \mid T^E, A) = \Pr(T^F_1 \mid T^E_1, A)$, and $\Pr(T^F_l \mid T^E_i, A)$ can be calculated recursively depending on the alignment configuration of $A$:

(1) If $N^F_l$ is aligned with $N^E_i$, and the children of $N^F_l$ are aligned with the children of $N^E_i$, then
$\Pr(T^F_l \mid T^E_i, A) = \Pr(N^F_l \mid N^E_i)\,\Pr(N^F_l.TC_{[1,K]} \mid N^E_i.TC_{[1,K']}, A)$,
where $K$ and $K'$ are the degrees of $N^F_l$ and $N^E_i$.

(2) If $N^F_l$ is deleted, and the children of $N^F_l$ are aligned with $T^E_i$, then
$\Pr(T^F_l \mid T^E_i, A) = \Pr(N^F_l \mid \mathrm{NULL})\,\Pr(N^F_l.TC_{[1,K]} \mid T^E_i, A)$,
where $K$ is the degree of $N^F_l$.

(3) If $N^E_i$ is deleted, and $N^F_l$ is aligned with the children of $N^E_i$, then
$\Pr(T^F_l \mid T^E_i, A) = \Pr(T^F_l \mid N^E_i.TC_{[1,K]}, A)$,
where $K$ is the degree of $N^E_i$.

To complete the alignment model, $\Pr(T^F_{[m,n]} \mid T^E_{[i,j]}, A)$ has to be estimated. As mentioned before, only alignment configurations that preserve the sequential order of nodes are considered valid. $\Pr(T^F_{[m,n]} \mid T^E_{[i,j]}, A)$ is therefore estimated recursively according to the following five alignment configurations of $A$:

(4) If $T^F_m$ is aligned with $T^E_i$, and $T^F_{[m+1,n]}$ is aligned with $T^E_{[i+1,j]}$, then
$\Pr(T^F_{[m,n]} \mid T^E_{[i,j]}, A) = \Pr(N^F_m \mid N^E_i)\,\Pr(T^F_{[m+1,n]} \mid T^E_{[i+1,j]}, A)$.

(5) If $T^F_m$ is deleted, and $T^F_{[m+1,n]}$ is aligned with $T^E_{[i,j]}$, then
$\Pr(T^F_{[m,n]} \mid T^E_{[i,j]}, A) = \Pr(N^F_m \mid \mathrm{NULL})\,\Pr(T^F_{[m+1,n]} \mid T^E_{[i,j]}, A)$.

(6) If $T^E_i$ is deleted, and $T^F_{[m,n]}$ is aligned with $T^E_{[i+1,j]}$, then
$\Pr(T^F_{[m,n]} \mid T^E_{[i,j]}, A) = \Pr(T^F_{[m,n]} \mid T^E_{[i+1,j]}, A)$.

(7) If $N^F_m$ is deleted, and $N^F_m$'s children $N^F_m.C_{[1,K]}$ are combined with $T^F_{[m+1,n]}$ to be aligned with $T^E_{[i,j]}$, then
$\Pr(T^F_{[m,n]} \mid T^E_{[i,j]}, A) = \Pr(N^F_m \mid \mathrm{NULL})\,\Pr(N^F_m.TC_{[1,K]} \oplus T^F_{[m+1,n]} \mid T^E_{[i,j]}, A)$,
where $K$ is the degree of $N^F_m$ and $\oplus$ denotes forest concatenation.

(8) If $N^E_i$ is deleted, and $N^E_i$'s children $N^E_i.C_{[1,K]}$ are combined with $T^E_{[i+1,j]}$ to be aligned with $T^F_{[m,n]}$, then
$\Pr(T^F_{[m,n]} \mid T^E_{[i,j]}, A) = \Pr(T^F_{[m,n]} \mid N^E_i.TC_{[1,K]} \oplus T^E_{[i+1,j]}, A)$,
where $K$ is the degree of $N^E_i$.

Finally, the node translation probability is modelled as
$\Pr(N^F_l \mid N^E_i) \approx \Pr(N^F_l.l \mid N^E_i.l)\,\Pr(N^F_l.t \mid N^E_i.t)$,
and the text translation probability $\Pr(t^F \mid t^E)$ is modelled using IBM Model I (Brown et al 1993).

4.2 Parameter Estimation Using Expectation-Maximization
Our tree alignment model involves three categories of parameters: the text translation probability $\Pr(t^F \mid t^E)$, the tag mapping probability $\Pr(l \mid l')$, and the node deletion probability $p_d$. Conventional parallel data released by LDC are used to train IBM Model I for estimating the text translation probability $\Pr(t^F \mid t^E)$. One way to estimate $\Pr(l \mid l')$ and $p_d$ is to manually align nodes between parallel DOM trees and use these alignments as a training corpus for maximum likelihood estimation. However, this is a very time-consuming and error-prone procedure. In this paper, the inside-outside algorithm presented in (Lari and Young, 1990) is extended to train the parameters $\Pr(l \mid l')$ and $p_d$ by optimally fitting the existing parallel DOM trees.

4.3 Dynamic Programming for Decoding
It is observed that if two trees are optimally aligned, the alignments of their sub-trees must be optimal as well. In the decoding process, dynamic programming techniques can therefore be applied to find the optimal tree alignment from the optimal alignments of the sub-trees in a bottom-up manner. In pseudo-code, the decoding algorithm is: for $i = |T^F|$ down to 1 (bottom-up), and for $j = |T^E|$ down to 1 (bottom-up), derive the best alignments among the child forests $N^F_i.TC_{[1,K_i]}$ and $N^E_j.TC_{[1,K_j]}$, and then compute the best alignment between $N^F_i$ and $N^E_j$, where $|T^F|$ and $|T^E|$ are the numbers of nodes in $T^F$ and $T^E$, and $K_i$ and $K_j$ are the degrees of $N^F_i$ and $N^E_j$. The time complexity of the decoding algorithm is $O\big(|T^F| \times |T^E| \times (\mathrm{degree}(T^F) + \mathrm{degree}(T^E))^2\big)$, where the degree of a tree is defined as the largest degree of its nodes.
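The following is a minimal sketch of the bottom-up dynamic programming idea just described. The scoring functions and data layout are illustrative stand-ins (log-probability scores passed in by the caller), not the authors' implementation, and the forest step is a simplified order-preserving sequence alignment rather than the full eight-case model.

```python
def align_trees(tf, te, node_score, delete_score):
    """tf, te: node lists in breadth-first order; each node is {"children": [child indices]}.
    Returns best[(i, j)] = best alignment score of the sub-trees rooted at node i of T^F
    and node j of T^E. node_score(i, j) and delete_score(i) stand in for
    log Pr(N_i^F | N_j^E) and log Pr(N_i^F | NULL)."""
    best = {}
    for i in reversed(range(len(tf))):          # bottom-up over T^F
        for j in reversed(range(len(te))):      # bottom-up over T^E
            best[i, j] = node_score(i, j) + align_forests(
                tf[i]["children"], te[j]["children"], best, delete_score)
    return best

def align_forests(fs, es, best, delete_score):
    """Order-preserving alignment of two child sequences, reusing memoised sub-tree scores."""
    dp = [[0.0] * (len(es) + 1) for _ in range(len(fs) + 1)]
    for a in reversed(range(len(fs))):          # remaining target children must be deleted
        dp[a][len(es)] = dp[a + 1][len(es)] + delete_score(fs[a])
    for a in reversed(range(len(fs))):
        for b in reversed(range(len(es))):
            dp[a][b] = max(
                best[fs[a], es[b]] + dp[a + 1][b + 1],   # align the two head sub-trees
                delete_score(fs[a]) + dp[a + 1][b],      # delete the target head
                dp[a][b + 1],                            # skip (delete) the source head
            )
    return dp[0][0]
```

Because children always carry higher breadth-first indices than their parents, every memoised sub-tree score needed by `align_forests` is already available when it is requested.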
5 Aligning Sentences Using Tree Alignment Model
To exploit the HTML structure similarities between parallel web documents, a cascaded approach is used in our sentence aligner implementation. First, text chunks associated with DOM tree nodes are aligned using the DOM tree alignment model. Then, for each pair of parallel text chunks, the sentence aligner described in (Zhao et al 2002), which combines IBM Model I and the length model of (Gale & Church 1991) under a maximum likelihood criterion, is used to align parallel sentences.

6 Web Document Pair Verification Model
To verify whether a candidate web document pair is truly parallel, a binary maximum entropy based classifier is used. Following (Nie et al 1999) and (Resnik and Smith, 2003), three features are used: (i) file length ratio; (ii) HTML tag similarity; (iii) sentence alignment score. The HTML tag similarity feature is computed as follows: all of the HTML tags of a given web page are extracted and concatenated into a string. Then a minimum edit distance between the two tag strings associated with the candidate pair is computed, and the HTML tag similarity score is defined as the ratio of the number of match operations to the total number of operations. The sentence alignment score is defined as the ratio of the number of aligned sentences to the total number of sentences in both files. Using these three features, the maximum entropy model is trained on 1,000 pairs of web pages manually labeled as parallel or non-parallel. The Iterative Scaling algorithm (Pietra, Pietra and Lafferty 1995) is used for the training.
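A small sketch of the tag-similarity feature under the definition above (unit edit costs; the tie-breaking for the match tally is a simplification, not the authors' exact procedure):

```python
def tag_similarity(tags_a, tags_b):
    """Edit-distance-based HTML tag similarity: matches / (matches + edit operations).
    tags_a, tags_b are the concatenated tag sequences of the two pages, e.g. ["html", "body", "p"]."""
    n, m = len(tags_a), len(tags_b)
    dist = [[0] * (m + 1) for _ in range(n + 1)]    # minimum edit distance
    match = [[0] * (m + 1) for _ in range(n + 1)]   # matches along the chosen path
    for i in range(1, n + 1):
        dist[i][0] = i
    for j in range(1, m + 1):
        dist[0][j] = j
    for i in range(1, n + 1):
        for j in range(1, m + 1):
            if tags_a[i - 1] == tags_b[j - 1]:
                dist[i][j] = dist[i - 1][j - 1]
                match[i][j] = match[i - 1][j - 1] + 1
            else:
                # substitution, deletion, insertion; ties broken by lower match count
                dist[i][j], match[i][j] = min(
                    (dist[i - 1][j - 1] + 1, match[i - 1][j - 1]),
                    (dist[i - 1][j] + 1, match[i - 1][j]),
                    (dist[i][j - 1] + 1, match[i][j - 1]),
                )
    total_ops = dist[n][m] + match[n][m]
    return match[n][m] / total_ops if total_ops else 1.0
```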
7 Experimental Results
The DOM tree alignment based mining system is used to acquire English-Chinese parallel data from the web. The mining procedure is initiated by acquiring a list of Chinese websites. We downloaded about 300,000 URLs of Chinese websites from the web directories at cn.yahoo.com, hk.yahoo.com and tw.yahoo.com, and each website is sent to the mining system for English-Chinese parallel data acquisition. To ensure that the whole mining experiment finishes on schedule, we stipulate that mining each website takes at most 10 hours. In total, 11,000 English-Chinese websites are discovered, from which 63,214 pairs of English-Chinese parallel web documents are mined. After sentence alignment, 1,069,423 pairs of English-Chinese parallel sentences are extracted. In order to compare system performance, 100 English-Chinese bilingual websites are also mined using the URL pattern based mining scheme. Following (Nie et al 1999; Ma and Liberman 1999; Chen, Chau and Yeh 2004), the URL pattern-based mining consists of three steps: (i) host crawling for URL collection; (ii) candidate pair identification by pre-defined URL pattern matching; (iii) candidate pair verification. Based on these mining results, the quality of the mined data, the mining coverage and the mining efficiency are measured.

First, we benchmarked the precision of the mined parallel documents. 3,000 pairs of English-Chinese candidate documents are randomly selected from the output of each mining system and reviewed by human annotators. The document level precision is shown in Table 1.

Table 1: Precision of Mined Parallel Documents
             URL pattern   DOM Tree Alignment
Precision    93.5%         97.2%

The document-level mining precision solely depends on the candidate document pair verification module. The verification modules of both mining systems use the same features; the only difference is that in the new mining system the sentence alignment score is computed with DOM tree alignment support. So the 3.7% improvement in document-level precision indirectly confirms the enhancement of sentence alignment.

Secondly, the accuracy of the sentence alignment model is benchmarked as follows: 150 English-Chinese parallel document pairs are randomly taken from our mining results. All parallel sentence pairs in these document pairs are manually annotated by two annotators with cross-validation. We compared sentence alignment accuracy with and without DOM tree alignment support. In the case of no tree alignment support, all the text in the web pages is extracted and sent to the sentence aligner for alignment. The benchmarks are shown in Table 2.

Table 2: Sentence alignment accuracy
Alignment Method           Number Right   Number Wrong   Number Missed   Precision   Recall
Eng-Chi (no DOM tree)      2172           285            563             86.9%       79.4%
Eng-Chi (with DOM tree)    2369           156            366             93.4%       86.6%

Table 2 shows that with DOM tree alignment support, the sentence alignment accuracy is greatly improved, by 7% in both precision and recall. We also observed that the recall is lower than the precision. This is because web pages tend to contain many short sentences (one or two words only) whose alignment is hard to identify due to the lack of content information. Although Table 2 benchmarks the accuracy of the sentence aligner, the quality of the final sentence pair output depends on many other modules as well, e.g. the document-level parallelism verification, the sentence breaker and the Chinese word breaker. To further measure the quality of the mined data, 2,000 sentence pairs are randomly picked from the final output and manually classified into three categories: (i) exact parallel; (ii) roughly parallel: two parallel sentences involving missing words or erroneous additions; (iii) not parallel. Two annotators are assigned to this task with cross-validation. As shown in Table 3, 93.5% of the output sentence pairs are either exactly or roughly parallel.

Table 3: Quality of Mined Parallel Sentences
Corpus   Exact Parallel   Roughly Parallel   Not Parallel
Mined    1703             167                130

The absolute recall of a mining system is hard to estimate, because it is impractical to evaluate all the parallel data held by a bilingual website. Instead, we compare mining coverage and efficiency between the two systems. 100 English-Chinese bilingual websites are mined by both systems, and the mining efficiency comparison is reported in Table 4.

Table 4: Mining Efficiency Comparison on 100 Bilingual Websites
Mining System                      Parallel Page Pairs found & verified   # of page downloads   # of downloads per pair
URL pattern-based Mining           4383                                   84942                 19.38
DOM Tree Alignment-based Mining    5785                                   13074                 2.26

Although it downloads less data, the DOM tree based mining scheme increases the parallel data acquisition throughput by 32%. Furthermore, the ratio of downloaded page count per parallel pair is 2.26, which means the bandwidth usage is almost optimal. Another interesting question is the complementarity between the two mining systems. As reported in Table 5, 1797 pairs of parallel documents mined by the new scheme are not covered by the URL pattern-based scheme. So if both systems are used, the throughput can be further increased by 41%.

Table 5: Mining Results Complementarity on 100 Bilingual Websites
# of Parallel Page Pairs Mined by Both Systems   # of Parallel Page Pairs Mined by URL Patterns only   # of Parallel Page Pairs Mined by Tree Alignment only
3988                                             395                                                   1797

8 Discussion and Conclusion
Mining parallel data from the web is a promising method to overcome the knowledge bottleneck faced by machine translation.
To build a practical mining system, three research issues should be fully studied: (i) the quality of mined data, (ii) the mining coverage, and (iii) the mining speed. Exploiting DOM tree similarities helps in all the three issues. Motivated by this observation, this paper presents a new web mining scheme for parallel data acquisition. A DOM tree alignment model is proposed to identify translationally equivalent text chunks and hyperlinks between two HTML documents. Parallel hyperlinks are used to pinpoint new parallel data, and make parallel data mining a recursive process. Parallel text chunks are fed into sentence aligner to extract parallel sentences. Benchmarks show that sentence aligner supported by DOM tree alignment achieves performance enhancement by 7% in both precision and recall. Besides, the new mining scheme reduce the bandwidth cost by 8~9 times on average compared with the URL pattern-based mining scheme. In addition, the new mining scheme is more general and reliable, and is able to mine more data. Using the new mining scheme alone, the mining throughput is increased by 32%, and when combined with URL pattern-based scheme, the mining throughput is increased by 41%. References Alshawi, H., S. Bangalore, and S. Douglas. 2000. Learning Dependency Translation Models as Collections of Finite State Head Transducers. Computational Linguistics, 26(1). Brown, P. F., J. C. Lai and R. L. Mercer. 1991. Aligning Sentences in Parallel Corpora. In Proceedings of 29th Annual Meeting of the Association for Computational Linguistics. Brown, P. E., S. A. D. Pietra, V. J. D. Pietra, and R. L. Mercer. 1993. The Mathematics of Statistical Machine Translation: Parameter Estimation. Computational Linguistics, V19(2). Callison-Burch, C. and C. Bannard. 2005. Paraphrasing with Bilingual Parallel Corpora. In Proceedings of 43th Annual Meeting of the Association for Computational Linguistics. Chen, J., R. Chau, and C.-H. Yeh. 1991. Discovering Parallel Text from the World Wide Web. In Proceedings of the second workshop on Australasian Information Security, Data Mining and Web Intelligence, and Software Internationalization. Chen, S. 1993. Aligning Sentences in Bilingual Corpora Using Lexical Information. In Proceedings of 31st Annual Meeting of the Association for Computational Linguistics. Church, K. W. 1993. Char_align: A Program for Aligning Parallel Texts at the Character Level. In 495 Proceedings of 31st Annual Meeting of the Association for Computational Linguistics. Fung, P. and K. Mckeown. 1994. Aligning Noisy Parallel Corpora across Language Groups: Word Pair Feature Matching by Dynamic Time Warping. In Proceedings of the First Conference of the Association for Machine Translation in the Americas. Gale W. A. and K. Church. 1991. A Program for Aligning Sentences in Parallel Corpora. In Proceedings of 29th Annual Meeting of the Association for Computational Linguistics. Hajic J., et al. 2004. Final Report: Natural Language Generation in the Context of Machine Translation. Kay M. and M. Roscheisen. 1993. Text-Translation Alignment. Computational Linguistics, 19(1). Lari K. and S. J. Young. 1990. The Estimation of Stochastic Context Free Grammars using the InsideOutside Algorithm. Computer Speech and Language, 4:35—56, 1990. Ma, X. and M. Liberman. 1999. Bits: A Method for Bilingual Text Search over the Web. In Proceedings of Machine Translation Summit VII. Ng, H. T., B. Wang, and Y. S. Chan. 2003. Exploiting Parallel Texts for Word Sense Disambiguation: An Empirical Study. 
In Proceedings of 41st Annual Meeting of the Association for Computational Linguistics. Nie, J. Y., M. S. P. Isabelle, and R. Durand. 1999. Cross-language Information Retrieval based on Parallel Texts and Automatic Mining of Parallel Texts from the Web. In Proceedings of the 22nd Annual International ACM SIGIR Conference on Research and Development. Moore, R. C. 2002. Fast and Accurate Sentence Alignment of Bilingual Corpora. In Proceedings of 5th Conference of the Association for Machine Translation in the Americas. Munteanu D. S, A. Fraser, and D. Marcu. D., 2002. Improved Machine Translation Performance via Parallel Sentence Extraction from Comparable Corpora. In Proceedings of the Human Language Technology Conference of the North American Chapter of the Association for Computational Linguistics: HLT-NAACL 2004. Pietra, S. D., V. D. Pietra, and J. Lafferty. 1995. Inducing Features Of Random Fields. In IEEE Transactions on Pattern Analysis and Machine Intelligence. Resnik, P. and N. A. Smith. 2003. The Web as a Parallel Corpus. Computational Linguistics, 29(3) Shieber, S. M. and Y. Schabes. 1990. Synchronous tree-adjoining grammars. In Proceedings of the 13th International Conference on Computational linguistics. Utiyama, M. and H. Isahara 2003. Reliable Measures for Aligning Japanese-English News Articles and Sentences. In Proceedings of 41st Annual Meeting of the Association for Computational Linguistics.ACL 2003. Wu, D. 1994. Aligning a parallel English-Chinese corpus statistically with lexical criterias. In Proceedings of of 32nd Annual Meeting of the Association for Computational Linguistics. Wu, D. 1997. Stochastic inversion transduction grammars and bilingual parsing of parallel corpora. Computational Linguistics, 23(3). Yamada K. and K. Knight. 2001. A Syntax Based Statistical Translation Model. In Proceedings of 39th Annual Meeting of the Association for Computational Linguistics. Zhao B. and S. Vogel. 2002. Adaptive Parallel Sentences Mining From Web Bilingual News Collection. In 2002 IEEE International Conference on Data Mining. Zhang, Y., K. Wu, J. Gao, and Phil Vines. 2006. Automatic Acquisition of Chinese-English Parallel Corpus from the Web. In Proceedings of 28th European Conference on Information Retrieval. 496
2006
62
Proceedings of the 21st International Conference on Computational Linguistics and 44th Annual Meeting of the ACL, pages 497–504, Sydney, July 2006. c⃝2006 Association for Computational Linguistics QuestionBank: Creating a Corpus of Parse-Annotated Questions John Judge1, Aoife Cahill1, and Josef van Genabith1,2 1National Centre for Language Technology and School of Computing, Dublin City University, Dublin, Ireland 2IBM Dublin Center for Advanced Studies, IBM Dublin, Ireland {jjudge,acahill,josef}@computing.dcu.ie Abstract This paper describes the development of QuestionBank, a corpus of 4000 parseannotated questions for (i) use in training parsers employed in QA, and (ii) evaluation of question parsing. We present a series of experiments to investigate the effectiveness of QuestionBank as both an exclusive and supplementary training resource for a state-of-the-art parser in parsing both question and non-question test sets. We introduce a new method for recovering empty nodes and their antecedents (capturing long distance dependencies) from parser output in CFG trees using LFG f-structure reentrancies. Our main findings are (i) using QuestionBank training data improves parser performance to 89.75% labelled bracketing f-score, an increase of almost 11% over the baseline; (ii) back-testing experiments on nonquestion data (Penn-II WSJ Section 23) shows that the retrained parser does not suffer a performance drop on non-question material; (iii) ablation experiments show that the size of training material provided by QuestionBank is sufficient to achieve optimal results; (iv) our method for recovering empty nodes captures long distance dependencies in questions from the ATIS corpus with high precision (96.82%) and low recall (39.38%). In summary, QuestionBank provides a useful new resource in parser-based QA research. 1 Introduction Parse-annotated corpora (treebanks) are crucial for developing machine learning and statistics-based parsing resources for a given language or task. Large treebanks are available for major languages, however these are often based on a specific text type or genre, e.g. financial newspaper text (the Penn-II Treebank (Marcus et al., 1993)). This can limit the applicability of grammatical resources induced from treebanks in that such resources underperform when used on a different type of text or for a specific task. In this paper we present work on creating QuestionBank, a treebank of parse-annotated questions, which can be used as a supplementary training resource to allow parsers to accurately parse questions (as well as other text). Alternatively, the resource can be used as a stand-alone training corpus to train a parser specifically for questions. Either scenario will be useful in training parsers for use in question answering (QA) tasks, and it also provides a suitable resource to evaluate the accuracy of these parsers on questions. We use a semi-automatic “bootstrapping” method to create the question treebank from raw text. We show that a parser trained on the question treebank alone can accurately parse questions. Training on a combined corpus consisting of the question treebank and an established training set (Sections 02-21 of the Penn-II Treebank), the parser gives state-of-the-art performance on both questions and a non-question test set (Section 23 of the Penn-II Treebank). Section 2 describes background work and motivation for the research presented in this paper. Section 3 describes the data we used to create the corpus. 
In Section 4 we describe the semiautomatic method to “bootstrap” the question corpus, discuss some interesting and problematic phenomena, and show how the manual vs. automatic workload distribution changed as work progressed. Two sets of experiments using our new question corpus are presented in Section 5. In Section 6 we introduce a new method for recovering empty nodes and their antecedents using Lexical Functional Grammar (LFG) f-structure reen497 trancies. Section 7 concludes and outlines future work. 2 Background and Motivation High quality probabilistic, treebank-based parsing resources can be rapidly induced from appropriate treebank material. However, treebank- and machine learning-based grammatical resources reflect the characteristics of the training data. They generally underperform on test data substantially different from the training data. Previous work on parser performance and domain variation by Gildea (2001) showed that by training a parser on the Penn-II Treebank and testing on the Brown corpus, parser accuracy drops by 5.7% compared to parsing the Wall Street Journal (WSJ) based Penn-II Treebank Section 23. This shows a negative effect on parser performance even when the test data is not radically different from the training data (both the Penn II and Brown corpora consist primarily of written texts of American English, the main difference is the considerably more varied nature of the text in the Brown corpus). Gildea also shows how to resolve this problem by adding appropriate data to the training corpus, but notes that a large amount of additional data has little impact if it is not matched to the test material. Work on more radical domain variance and on adapting treebank-induced LFG resources to analyse ATIS (Hemphill et al., 1990) question material is described in Judge et al. (2005). The research established that even a small amount of additional training data can give a substantial improvement in question analysis in terms of both CFG parse accuracy and LFG grammatical functional analysis, with no significant negative effects on non-question analysis. Judge et al. (2005) suggest, however, that further improvements are possible given a larger question training corpus. Clark et al. (2004) worked specifically with question parsing to generate dependencies for QA with Penn-II treebank-based Combinatory Categorial Grammars (CCG’s). They use “what” questions taken from the TREC QA datasets as the basis for a What-Question corpus with CCG annotation. 3 Data Sources The raw question data for QuestionBank comes from two sources, the TREC 8-11 QA track test sets1, and a question classifier training set produced by the Cognitive Computation Group (CCG2) at the University of Illinois at UrbanaChampaign.3 We use equal amounts of data from each source so as not to bias the corpus to either data source. 3.1 TREC Questions The TREC evaluations have become the standard evaluation for QA systems. Their test sets consist primarily of fact seeking questions with some imperative statements which request information, e.g. “List the names of cell phone manufacturers.” We included 2000 TREC questions in the raw data from which we created the question treebank. These 2000 questions consist of the test questions for the first three years of the TREC QA track (1893 questions) and 107 questions from the 2003 TREC test set. 3.2 CCG Group Questions The CCG provide a number of resources for developing QA systems. 
One of these resources is a set of 5500 questions and their answer types for use in training question classifiers. The 5500 questions were stripped of answer type annotation, duplicated TREC questions were removed and 2000 questions were used for the question treebank. The CCG 5500 questions come from a number of sources (Li and Roth, 2002) and some of these questions contain minor grammatical mistakes so that, in essence, this corpus is more representative of genuine questions that would be put to a working QA system. A number of changes in tokenisation were corrected (eg. separating contractions), but the minor grammatical errors were left unchanged because we believe that it is necessary for a parser for question analysis to be able to cope with this sort of data if it is to be used in a working QA system. 4 Creating the Treebank 4.1 Bootstrapping a Question Treebank The algorithm used to generate the question treebank is an iterative process of parsing, manual correction, retraining, and parsing. 1http://trec.nist.gov/data/qa.html 2Note that the acronym CCG here refers to Cognitive Computation Group, rather than Combinatory Categorial Grammar mentioned in Section 2. 3http://l2r.cs.uiuc.edu/ cogcomp/tools.php 498 Algorithm 1 Induce a parse-annotated treebank from raw data repeat Parse a new section of raw data Manually correct errors in the parser output Add the corrected data to the training set Extract a new grammar for the parser until All the data has been processed Algorithm 1 summarises the bootstrapping algorithm. A section of raw data is parsed. The parser output is then manually corrected, and added to the parser’s training corpus. A new grammar is then extracted, and the next section of raw data is parsed. This process continues until all the data has been parsed and hand corrected. 4.2 Parser The parser used to process the raw questions prior to manual correction was that of Bikel (2002)4, a retrainable emulation of Collins (1999) model 2 parser. Bikel’s parser is a history-based parser which uses a lexicalised generative model to parse sentences. We used WSJ Sections 02-21 of the Penn-II Treebank to train the parser for the first iteration of the algorithm. The training corpus for subsequent iterations consisted of the WSJ material and increasing amounts of processed questions. 4.3 Basic Corpus Development Statistics Our question treebank was created over a period of three months at an average annotation speed of about 60 questions per day. This is quite rapid for treebank development. The speed of the process was helped by two main factors: the questions are generally quite short (typically about 10 words long), and, due to retraining on the continually increasing training set, the quality of the parses output by the parser improved dramatically during the development of the treebank, with the effect that corrections during the later stages were generally quite small and not as time consuming as during the initial phases of the bootstrapping process. For example, in the first week of the project the trees from the parser were of relatively poor quality and over 78% of the trees needed to be corrected manually. This slowed the annotation process considerably and parse-annotated questions 4Downloaded from http://www.cis.upenn.edu/∼dbikel /software.html#stat-parser were being produced at an average rate of 40 trees per day. During the later stages of the project this had changed dramatically. 
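As an aside, Algorithm 1 above can be read as the following minimal loop. This is only a sketch: train_parser and hand_correct are placeholders passed in by the caller, since the correction step is carried out by human annotators rather than code.

```python
def bootstrap_treebank(raw_batches, seed_treebank, train_parser, hand_correct):
    """Iteratively parse, hand-correct and retrain, as in Algorithm 1."""
    training_set = list(seed_treebank)            # start from WSJ Sections 02-21
    treebank = []
    parser = train_parser(training_set)
    for batch in raw_batches:                     # one section of raw questions at a time
        parsed = [parser(sentence) for sentence in batch]
        corrected = hand_correct(parsed)          # manual correction step (human in the loop)
        treebank.extend(corrected)
        training_set.extend(corrected)
        parser = train_parser(training_set)       # retrain before the next section
    return treebank
```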
The quality of trees from the parser was much improved with less than 20% of the trees requiring manual correction. At this stage parse-annotated questions were being produced at an average rate of 90 trees per day. 4.4 Corpus Development Error Analysis Some of the more frequent errors in the parser output pertain to the syntactic analysis of WHphrases (WHNP, WHPP, etc). In Sections 02-21 of the Penn-II Treebank, these are used more often in relative clause constructions than in questions. As a result many of the corpus questions were given syntactic analyses corresponding to relative clauses (SBAR with an embedded S) instead of as questions (SBARQ with an embedded SQ). Figure 1 provides an example. SBAR WHNP WP Who S VP VBD created NP DT the NN Muppets (a) SBARQ WHNP WP Who SQ VP VBD created NP DT the NNPS Muppets (b) Figure 1: Example tree before (a) and after correction (b) Because the questions are typically short, an error like this has quite a large effect on the accuracy for the overall tree; in this case the f-score for the parser output (Figure 1(a)) would be only 60%. Errors of this nature were quite frequent in the first section of questions analysed by the parser, but with increased training material becoming available during successive iterations, this error became less frequent and towards the end of 499 the project it was only seen in rare cases. WH-XP marking was the source of a number of consistent (though infrequent) errors during annotation. This occurred mostly in PP constructions containing WHNPs. The parser would output a structure like Figure 2(a), where the PP mother of the WHNP is not correctly labelled as a WHPP as in Figure 2(b). PP IN by WHNP WP$ whose NN authority WHPP IN by WHNP WP$ whose NN authority (a) (b) Figure 2: WH-XP assignment The parser output often had to be rearranged structurally to varying degrees. This was common in the longer questions. A recurring error in the parser output was failing to identify VPs in SQs with a single object NP. In these cases the verb and the object NP were left as daughters of the SQ node. Figure 3(a) illustrates this, and Figure 3(b) shows the corrected tree with the VP node inserted. SBARQ WHNP WP Who SQ VBD killed NP Ghandi SBARQ WHNP WP Who SQ VP VBD killed NP Ghandi (a) (b) Figure 3: VP missing inside SQ with a single NP On inspection, we found that the problem was caused by copular constructions, which, according to the Penn-II annotation guidelines, do not feature VP constituents. Since almost half of the question data contain copular constructions, the parser trained on this data would sometimes misanalyse non-copular constructions or, conversely, incorrectly bracket copular constructions using a VP constituent (Figure 4(a)). The predictable nature of these errors meant that they were simple to correct. This is due to the particular context in which they occur and the finite number of forms of the copular verb. SBARQ WHNP WP What SQ VP VBZ is NP a fear of shadows SBARQ WHNP WP What SQ VBZ is NP a fear of shadows (a) (b) Figure 4: Erroneous VP in copular constructions 5 Experiments with QuestionBank In order to test the effect training on the question corpus has on parser performance, we carried out a number of experiments. In cross-validation experiments with 90%/10% splits we use all 4000 trees in the completed QuestionBank as the test set. We performed ablation experiments to investigate the effect of varying the amount of question and non-question training data on the parser’s performance. 
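Since both the error analysis above (e.g. the 60% figure for Figure 1(a)) and the experiments below are reported in terms of labelled bracketing f-scores, the following is a small, generic sketch of that PARSEVAL-style computation over constituent spans; it is not code from the paper, and for simplicity it also counts pre-terminal brackets.

```python
def labelled_brackets(tree):
    """Collect (label, start, end) spans for all constituents.
    A tree is (label, children), where children are sub-trees or token strings."""
    spans, pos = [], 0

    def walk(node):
        nonlocal pos
        label, children = node
        start = pos
        for child in children:
            if isinstance(child, str):
                pos += 1
            else:
                walk(child)
        spans.append((label, start, pos))

    walk(tree)
    return spans

def bracketing_fscore(gold_tree, test_tree):
    gold, test = labelled_brackets(gold_tree), labelled_brackets(test_tree)
    remaining, matched = list(gold), 0
    for span in test:                     # multiset matching of labelled spans
        if span in remaining:
            remaining.remove(span)
            matched += 1
    precision, recall = matched / len(test), matched / len(gold)
    return 2 * precision * recall / (precision + recall) if (precision + recall) else 0.0
```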
For these experiments we divided the 4000 questions into two sets. We randomly selected 400 trees to be held out as a gold standard test set against which to evaluate, the remaining 3600 trees were then used as a training corpus. 5.1 Establishing the Baseline The baseline we use for our experiments is provided by Bikel’s parser trained on WSJ Sections 02-21 of the Penn-II Treebank. We test on all 4000 questions in our question treebank, and also Section 23 of the Penn-II Treebank. QuestionBank Coverage 100 F-Score 78.77 WSJ Section 23 Coverage 100 F-Score 82.97 Table 1: Baseline parsing results Table 1 shows the results for our baseline evaluations on question and non-question test sets. While the coverage for both tests is high, the parser underperforms significantly on the question test set with a labelled bracketing f-score of 78.77 compared to 82.97 on Section 23 of the Penn-II Treebank. Note that unlike the published results for Bikel’s parser in our evaluations we test on Section 23 and include punctuation. 5.2 Cross-Validation Experiments We carried out two cross-validation experiments. In the first experiment we perform a 10-fold crossvalidation experiment using our 4000 question 500 treebank. In each case a randomly selected set of 10% of the questions in QuestionBank was held out during training and used as a test set. In this way parses from unseen data were generated for all 4000 questions and evaluated against the QuestionBank trees. The second cross-validation experiment was similar to the first, but in each of the 10 folds we train on 90% of the 4000 questions in QuestionBank and on all of Sections 02-21 of the Penn-II Treebank. In both experiments we also backtest each of the ten grammars on Section 23 of the Penn-II Treebank and report the average scores. QuestionBank Coverage 100 F-Score 88.82 Backtest on Sect 23 Coverage 98.79 F-Score 59.79 Table 2: Cross-validation experiment using the 4000 question treebank Table 2 shows the results for the first crossvalidation experiment, using only the 4000 sentence QuestionBank. Compared to Table 1, the results show a significant improvement of over 10% on the baseline f-score for questions. However, the tests on the non-question Section 23 data show not only a significant drop in accuracy but also a drop in coverage. Questions Coverage 100 F-Score 89.75 Backtest on Sect 23 Coverage 100 F-Score 82.39 Table 3: Cross-validation experiment using PennII Treebank Sections 02-21 and 4000 questions Table 3 shows the results for the second crossvalidation experiment using Sections 02-21 of the Penn-II Treebank and the 4000 questions in QuestionBank. The results show an even greater increase on the baseline f-score than the experiments using only the question training set (Table 2). The non-question results are also better and are comparable to the baseline (Table 1). 5.3 Ablation Runs In a further set of experiments we investigated the effect of varying the amount of data in the parser’s training corpus. We experiment with varying both the amount of QuestionBank and Penn-II Treebank data that the parser is trained on. In each experiment we use the 400 question test set and Section 23 of the Penn-II Treebank to evaluate against, and the 3600 question training set described above and Sections 02-21 of the Penn-II Treebank as the basis for the parser’s training corpus. We report on three experiments: In the first experiment we train the parser using only the 3600 question training set. 
We performed ten training and parsing runs in this experiment, incrementally reducing the size of the QuestionBank training corpus by 10% of the whole on each run. The second experiment is similar to the first, but in each run we add Sections 02-21 of the Penn-II Treebank to the (shrinking) training set of questions. The third experiment is the converse of the second: the amount of questions in the training set remains fixed (all 3600) and the amount of Penn-II Treebank material is incrementally reduced by 10% on each run.

[Figure 5: Results for ablation experiment reducing 3600 training questions in steps of 10% (coverage and f-score plotted against the percentage of the 3600 questions in the training corpus, for the question test set and Section 23).]

Figure 5 graphs the coverage and f-score for the parser in tests on the 400 question test set and Section 23 of the Penn-II Treebank over ten parsing runs, with the amount of data in the 3600 question training corpus reduced incrementally on each run. The results show that, trained on only a small amount of questions, the parser can parse questions with high accuracy. For example, when trained on only 10% of the 3600 questions used in this experiment, the parser successfully parses all of the 400 question test set and achieves an f-score of 85.59. However, the results for the tests on WSJ Section 23 are considerably worse. The parser never manages to parse the full test set, and the best score, at 59.61, is very low.

[Figure 6: Results for ablation experiment using PTB Sections 02-21 (fixed) and reducing 3600 questions in steps of 10% (coverage and f-score against the percentage of the 3600 questions in the training corpus).]

[Figure 7: Results for ablation experiment using 3600 questions (fixed) and reducing PTB Sections 02-21 in steps of 10% (coverage and f-score against the percentage of PTB Sections 02-21 in the training corpus).]

Figure 6 graphs the results for the second ablation experiment. The training set for the parser consists of a fixed amount of Penn-II Treebank data (Sections 02-21) and a reducing amount of question data from the 3600 question training set. Each grammar is tested on both the 400 question test set and WSJ Section 23. The results here are significantly better than in the previous experiment. In all of the runs the coverage for both test sets is 100%; f-scores for the question test set decrease as the amount of question data in the training set is reduced (though they are still quite high). There is little change in the f-scores for the tests on Section 23: the results all fall in the range 82.36 to 82.46, which is comparable to the baseline score.

Figure 7 graphs the results for the third ablation experiment. In this case the training set is a fixed amount of the question training set described above (all 3600 questions) and a reducing amount of data from Sections 02-21 of the Penn Treebank. The graph shows that the parser performs consistently well on the question test set in terms of both coverage and accuracy. The tests on Section 23, however, show that as the amount of Penn-II Treebank material in the training set decreases, the f-score also decreases.

6 Long Distance Dependencies
Long distance dependencies are crucial in the proper analysis of question material.
In English wh-questions, the fronted wh-constituent refers to an argument position of a verb inside the interrogative construction. Compare the superficially similar 1. Who1 [t1] killed Harvey Oswald? 2. Who1 did Harvey Oswald kill [t1]? (1) queries the agent (syntactic subject) of the described eventuality, while (2) queries the patient (syntactic object). In the Penn-II and ATIS treebanks, dependencies such as these are represented in terms of empty productions, traces and coindexation in CFG tree representations (Figure 8). SBARQ WHNP-1 WP Who SQ NP *T*-1 VP VBD killed NP Harvey Oswald (a) SBARQ WHNP-1 WP Who SQ AUX did NP Harvey Oswald VP VB kill NP *T*-1 (b) Figure 8: LDD resolved treebank style trees With few exceptions5 the trees produced by current treebank-based probabilistic parsers do not represent long distance dependencies (Figure 9). Johnson (2002) presents a tree-based method for reconstructing LDD dependencies in PennII trained parser output trees. Cahill et al. (2004) present a method for resolving LDDs 5Collins’ Model 3 computes a limited number of whdependencies in relative clause constructions. 502 SBARQ WHNP WP Who SQ VP VBD killed NP Harvey Oswald (a) SBARQ WHNP WP Who SQ AUX did NP Harvey Oswald VP VB kill (b) Figure 9: Parser output trees at the level of Lexical-Functional Grammar fstructure (attribute-value structure encodings of basic predicate-argument structure or dependency relations) without the need for empty productions and coindexation in parse trees. Their method is based on learning finite approximations of functional uncertainty equations (regular expressions over paths in f-structure) from an automatically fstructure annotated version of the Penn-II treebank and resolves LDDs at f-structure. In our work we use the f-structure-based method of Cahill et al. (2004) to “reverse engineer” empty productions, traces and coindexation in parser output trees. We explain the process by way of a worked example. We use the parser output tree in Figure 9(a) (without empty productions and coindexation) and automatically annotate the tree with f-structure information and compute LDD-resolution at the level of f-structure using the resources of Cahill et al. (2004). This generates the f-structure annotated tree6 and the LDD resolved f-structure in Figure 10. Note that the LDD is indicated in terms of a reentrancy 1 between the question FOCUS and the SUBJ function in the resolved f-structure. Given the correspondence between the f-structure and fstructure annotated nodes in the parse tree, we compute that the SUBJ function newly introduced and reentrant with the FOCUS function is an argument of the PRED ‘kill’ and the verb form ‘killed’ in the tree. In order to reconstruct the corresponding empty subject NP node in the parser output tree, we need to determine candidate anchor sites 6Lexical annotations are suppressed to aid readability. SBARQ WHNP ↑FOCUS =↓ WP ↑=↓ Who SQ ↑=↓ VP ↑=↓ VBD ↑=↓ killed NP ↑OBJ =↓ Harvey Oswald (a)   FOCUS  PRED who  1 PRED ’kill⟨SUBJ OBJ⟩’ OBJ  PRED ’Harvey Oswald’  SUBJ  PRED ’who’  1   (b) Figure 10: Annotated tree and f-structure for the empty node. These anchor sites can only be realised along the path up to the maximal projection of the governing verb indicated by ↑=↓annotations in LFG. This establishes three anchor sites: VP, SQ and the top level SBARQ. 
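The climb from the governing verb to its maximal projection can be made concrete with a small helper. The tree representation assumed below (nodes carrying parent and annotation attributes, with the annotation holding the LFG functional label such as up=down) is purely illustrative and is not the data structure used by the authors.

    def candidate_anchor_sites(verb_preterminal):
        """Collect candidate anchor sites for the empty node to be inserted.

        As described above, the anchor sites lie on the path from the
        governing verb up to its maximal projection, i.e. the chain of
        nodes linked by head (up=down) annotations.  For the worked
        example ('killed' in Figure 10) this yields [VP, SQ, SBARQ].
        """
        sites = []
        node = verb_preterminal.parent      # the verb projects at least this far
        while node is not None:
            sites.append(node)
            # keep climbing only while the current node is itself the
            # head (annotated up=down) of its mother
            if getattr(node, "annotation", None) != "up=down":
                break
            node = node.parent
        return sites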
From the automatically f-structure annotated Penn-II treebank, we extract f-structure annotated PCFG rules for each of the three anchor sites whose RHSs contain exactly the information (daughter categories plus LFG annotations) in the tree in Figure 10 (in the same order) plus an additional node (of whatever CFG category) annotated ↑SUBJ=↓, located anywhere within the RHSs. This will retrieve rules of the form VP →NP[↑SUBJ =↓] V BD[↑=↓] NP[↑OBJ =↓] V P →. . . . . . SQ →NP[↑SUBJ =↓] V P[↑=↓] SQ →. . . . . . SBARQ →. . . . . . each with their associated probabilities. We select the rule with the highest probability and cut the rule into the tree in Figure 10 at the appropriate anchor site (as determined by the rule LHS). In our case this selects SQ →NP[↑SUBJ=↓]V P[↑=↓] and the resulting tree is given in Figure 11. From this tree, it is now easy to compute the tree with the coindexed trace in Figure 8 (a). In order to evaluate our empty node and coindexation recovery method, we conducted two experiments, one using 146 gold-standard ATIS question trees and one using parser output on the corresponding strings for the 146 ATIS question trees. 503 SBARQ WHNP-1 ↑FOCUS =↓ WP ↑=↓ Who SQ ↑=↓ NP ↑SUBJ =↓ -NONE*T*-1 VP ↑=↓ VBD ↑=↓ killed NP ↑OBJ =↓ Harvey Oswald Figure 11: Resolved tree In the first experiment, we delete empty nodes and coindexation from the ATIS gold standard trees and and reconstruct them using our method and the preprocessed ATIS trees. In the second experiment, we parse the strings corresponding to the ATIS trees with Bikel’s parser and reconstruct the empty productions and coindexation. In both cases we evaluate against the original (unreduced) ATIS trees and score if and only if all of insertion site, inserted CFG category and coindexation match. Parser Output Gold Standard Trees Precision 96.77 96.82 Recall 38.75 39.38 Table 4: Scores for LDD recovery (empty nodes and antecedents) Table 4 shows that currently the recall of our method is quite low at 39.38% while the accuracy is very high with precision at 96.82% on the ATIS trees. Encouragingly, evaluating parser output for the same sentences shows little change in the scores with recall at 38.75% and precision at 96.77%. 7 Conclusions The data represented in Figure 5 show that training a parser on 50% of QuestionBank achieves an f-score of 88.56% as against 89.24% for training on all of QuestionBank. This implies that while we have not reached an absolute upper bound, the question corpus is sufficiently large that the gain in accuracy from adding more data is so small that it does not justify the effort. We will evaluate grammars learned from QuestionBank as part of a working QA system. A beta-release of the non-LDD-resolved QuestionBank is available for download at http://www.computing.dcu.ie/∼ jjudge/qtreebank/4000qs.txt. The final, hand-corrected, LDD-resolved version will be available in October 2006. Acknowledgments We are grateful to the anonymous reviewers for their comments and suggestions. This research was supported by Science Foundation Ireland (SFI) grant 04/BR/CS0370 and an Irish Research Council for Science Engineering and Technology (IRCSET) PhD scholarship 2002-05. References Daniel M. Bikel. 2002. Design of a multi-lingual, parallelprocessing statistical parsing engine. In Proceedings of HLT 2002, pages 24–27, San Diego, CA. Aoife Cahill, Michael Burke, Ruth O’Donovan, Josef van Genabith, and Andy Way. 2004. 
Long-Distance Dependency Resolution in Automatically Acquired WideCoverage PCFG-Based LFG Approximations. In Proceedings of ACL-04, pages 320–327, Barcelona, Spain. Stephen Clark, Mark Steedman, and James R. Curran. 2004. Object-extraction and question-parsing using ccg. In Dekang Lin and Dekai Wu, editors, Proceedings of EMNLP-04, pages 111–118, Barcelona, Spain. Michael Collins. 1999. Head-Driven Statistical Models for Natural Language Parsing. Ph.D. thesis, University of Pennsylvania, Philadelphia, PA. Daniel Gildea. 2001. Corpus variation and parser performance. In Lillian Lee and Donna Harman, editors, Proceedings of EMNLP, pages 167–202, Pittsburgh, PA. Charles T. Hemphill, John J. Godfrey, and George R. Doddington. 1990. The ATIS Spoken Language Systems pilot corpus. In Proceedings of DARPA Speech and Natural Language Workshop, pages 96–101, Hidden Valley, PA. Mark Johnson. 2002. A simple pattern-matching algorithm for recovering empty nodes and their antecedents. In Proceedings ACL-02, University of Pennsylvania, Philadelphia, PA. John Judge, Aoife Cahill, Michael Burke, Ruth O’Donovan, Josef van Genabith, and Andy Way. 2005. Strong Domain Variation and Treebank-Induced LFG Resources. In Proceedings LFG-05, pages 186–204, Bergen, Norway, July. Xin Li and Dan Roth. 2002. Learning question classifiers. In Proceedings of COLING-02, pages 556–562, Taipei, Taiwan. Mitchell P. Marcus, Beatrice Santorini, and Mary Ann Marcinkiewicz. 1993. Building a Large Annotated Corpus of English: The Penn Treebank. Computational Linguistics, 19(2):313–330. 504
Proceedings of the 21st International Conference on Computational Linguistics and 44th Annual Meeting of the ACL, pages 505–512, Sydney, July 2006. c⃝2006 Association for Computational Linguistics Creating a CCGbank and a wide-coverage CCG lexicon for German Julia Hockenmaier Institute for Research in Cognitive Science University of Pennsylvania Philadelphia, PA 19104, USA [email protected] Abstract We present an algorithm which creates a German CCGbank by translating the syntax graphs in the German Tiger corpus into CCG derivation trees. The resulting corpus contains 46,628 derivations, covering 95% of all complete sentences in Tiger. Lexicons extracted from this corpus contain correct lexical entries for 94% of all known tokens in unseen text. 1 Introduction A number of wide-coverage TAG, CCG, LFG and HPSG grammars (Xia, 1999; Chen et al., 2005; Hockenmaier and Steedman, 2002a; O’Donovan et al., 2005; Miyao et al., 2004) have been extracted from the Penn Treebank (Marcus et al., 1993), and have enabled the creation of widecoverage parsers for English which recover local and non-local dependencies that approximate the underlying predicate-argument structure (Hockenmaier and Steedman, 2002b; Clark and Curran, 2004; Miyao and Tsujii, 2005; Shen and Joshi, 2005). However, many corpora (B¨ohomv´a et al., 2003; Skut et al., 1997; Brants et al., 2002) use dependency graphs or other representations, and the extraction algorithms that have been developed for Penn Treebank style corpora may not be immediately applicable to this representation. As a consequence, research on statistical parsing with “deep” grammars has largely been confined to English. Free-word order languages typically pose greater challenges for syntactic theories (Rambow, 1994), and the richer inflectional morphology of these languages creates additional problems both for the coverage of lexicalized formalisms such as CCG or TAG, and for the usefulness of dependency counts extracted from the training data. On the other hand, formalisms such as CCG and TAG are particularly suited to capture the crossing dependencies that arise in languages such as Dutch or German, and by choosing an appropriate linguistic representation, some of these problems may be mitigated. Here, we present an algorithm which translates the German Tiger corpus (Brants et al., 2002) into CCG derivations. Similar algorithms have been developed by Hockenmaier and Steedman (2002a) to create CCGbank, a corpus of CCG derivations (Hockenmaier and Steedman, 2005) from the Penn Treebank, by C¸ akıcı (2005) to extract a CCG lexicon from a Turkish dependency corpus, and by Moortgat and Moot (2002) to induce a type-logical grammar for Dutch. The annotation scheme used in Tiger is an extension of that used in the earlier, and smaller, German Negra corpus (Skut et al., 1997). Tiger is better suited for the extraction of subcategorization information (and thus the translation into “deep” grammars of any kind), since it distinguishes between PP complements and modifiers, and includes “secondary” edges to indicate shared arguments in coordinate constructions. Tiger also includes morphology and lemma information. Negra is also provided with a “Penn Treebank”style representation, which uses flat phrase structure trees instead of the crossing dependency structures in the original corpus. This version has been used by Cahill et al. (2005) to extract a German LFG. 
However, Dubey and Keller (2003) have demonstrated that lexicalization does not help a Collins-style parser that is trained on this corpus, and Levy and Manning (2004) have shown that its context-free representation is a poor approximation to the underlying dependency structure. The resource presented here will enable future research to address the question whether “deep” grammars such as CCG, which capture the underlying dependencies directly, are better suited to parsing German than linguistically inadequate context-free approximations. 505 1. Standard main clause Peter gibt Maria das Buch                 2. Main clause with fronted adjunct 3. Main clause with fronted complement dann gibt Peter Maria das Buch              Maria gibt Peter das Buch                 Figure 1: CCG uses topicalization (1.), a type-changing rule (2.), and type-raising (3.) to capture the different variants of German main clause order with the same lexical category for the verb. 2 German syntax and morphology Morphology German verbs are inflected for person, number, tense and mood. German nouns and adjectives are inflected for number, case and gender, and noun compounding is very productive. Word order German has three different word orders that depend on the clause type. Main clauses (1) are verb-second. Imperatives and questions are verb-initial (2). If a modifier or one of the objects is moved to the front, the word order becomes verb-initial (2). Subordinate and relative clauses are verb-final (3): (1) a. Peter gibt Maria das Buch. Peter gives Mary the book. b. ein Buch gibt Peter Maria. c. dann gibt Peter Maria das Buch. (2) a. Gibt Peter Maria das Buch? b. Gib Maria das Buch! (3) a. dass Peter Maria das Buch gibt. b. das Buch, das Peter Maria gibt. Local Scrambling In the so-called “Mittelfeld” all orders of arguments and adjuncts are potentially possible. In the following example, all 5! permutations are grammatical (Rambow, 1994): (4) dass [eine Firma] [meinem Onkel] [die M¨obel] [vor drei Tagen] [ohne Voranmeldung] zugestellt hat. that [a company] [to my uncle] [the furniture] [three days ago] [without notice] delivered has. Long-distance scrambling Objects of embedded verbs can also be extraposed unboundedly within the same sentence (Rambow, 1994): (5) dass [den Schrank] [niemand] [zu reparieren] versprochen hat. that [the wardrobe] [nobody] [to repair] promised has. 3 A CCG for German 3.1 Combinatory Categorial Grammar CCG (Steedman (1996; 2000)) is a lexicalized grammar formalism with a completely transparent syntax-semantics interface. Since CCG is mildly context-sensitive, it can capture the crossing dependencies that arise in Dutch or German, yet is efficiently parseable. In categorial grammar, words are associated with syntactic categories, such as  or  for English intransitive and transitive verbs. Categories of the form  or  are functors, which take an argument  to their left or right (depending on the the direction of the slash) and yield a result . Every syntactic category is paired with a semantic interpretation (usually a -term). Like all variants of categorial grammar, CCG uses function application to combine constituents, but it also uses a set of combinatory rules such as composition () and type-raising (). Non-orderpreserving type-raising is used for topicalization: Application:         Composition:                     Type-raising:      Topicalization:     Hockenmaier and Steedman (2005) advocate the use of additional “type-changing” rules to deal with complex adjunct categories (e.g. 
   for ing-VPs that act as noun phrase modifiers). Here, we also use a small number of such rules to deal with similar adjunct cases. 506 3.2 Capturing German word order We follow Steedman (2000) in assuming that the underlying word order in main clauses is always verb-initial, and that the sententce-initial subject is in fact topicalized. This enables us to capture different word orders with the same lexical category (Figure 1). We use the features   and   to distinguish verbs in main and subordinate clauses. Main clauses have the feature  , requiring either a sentential modifier with category   , a topicalized subject (  ), or a type-raised argument (  ), where  can be any argument category, such as a noun phrase, prepositional phrase, or a non-finite VP. Here is the CCG derivation for the subordinate clause () example: dass Peter Maria das Buch gibt                        For simplicity’s sake our extraction algorithm ignores the issues that arise through local scrambling, and assumes that there are different lexical category for each permutation.1 Type-raising and composition are also used to deal with wh-extraction and with long-distance scrambling (Figure 2). 4 Translating Tiger graphs into CCG 4.1 The Tiger corpus The Tiger corpus (Brants et al., 2002) is a publicly available2 corpus of ca. 50,000 sentences (almost 900,000 tokens) taken from the Frankfurter Rundschau newspaper. The annotation is based on a hybrid framework which contains features of phrase-structure and dependency grammar. Each sentence is represented as a graph whose nodes are labeled with syntactic categories (NP, VP, S, PP, etc.) and POS tags. Edges are directed and labeled with syntactic functions (e.g. head, subject, accusative object, conjunct, appositive). The edge labels are similar to the Penn Treebank function tags, but provide richer and more explicit information. Only 72.5% of the graphs have no crossing edges; the remaining 27.5% are marked as dis1Variants of CCG, such as Set-CCG (Hoffman, 1995) and Multimodal-CCG (Baldridge, 2002), allow a more compact lexicon for free word order languages. 2http://www.ims.uni-stuttgart.de/projekte/TIGER continuous. 7.3% of the sentences have one or more “secondary” edges, which are used to indicate double dependencies that arise in coordinated structures which are difficult to bracket, such as right node raising, argument cluster coordination or gapping. There are no traces or null elements to indicate non-local dependencies or wh-movement. Figure 2 shows the Tiger graph for a PP whose NP argument is modified by a relative clause. There is no NP level inside PPs (and no noun level inside NPs). Punctuation marks are often attached at the so-called “virtual” root (VROOT) of the entire graph. The relative pronoun is a dative object (edge label DA) of the embedded infinitive, and is therefore attached at the VP level. The relative clause itself has the category S; the incoming edge is labeled RC (relative clause). 4.2 The translation algorithm Our translation algorithm has the following steps: translate(TigerGraph g): TigerTree t = createTree(g); preprocess(t); if (t  null) CCGderiv d = translateToCCG(t); if (d  null); if (isCCGderivation(d)) return d; else fail; else fail; else fail; 1. Creating a planar tree: After an initial preprocessing step which inserts punctuation that is attached to the “virtual” root (VROOT) of the graph in the appropriate locations, discontinuous graphs are transformed into planar trees. 
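The inline pseudocode above lost its comparison operators in this rendering (the two tests are presumably non-null checks on t and d). Restated as a small Python driver, with the named steps passed in as functions; the helper names simply mirror the pseudocode and add no machinery of their own:

    def translate(tiger_graph, create_tree, preprocess, translate_to_ccg,
                  is_ccg_derivation):
        """Top-level translation driver, following the pseudocode above.

        create_tree       -- step 1: build a planar tree (or None on failure)
        preprocess        -- step 2: insert NP/noun levels, SBARs, etc.
        translate_to_ccg  -- step 3: recursive category assignment (or None)
        is_ccg_derivation -- final well-formedness check on the derivation
        Returns a CCG derivation, or None if any step fails.
        """
        tree = create_tree(tiger_graph)
        if tree is None:
            return None
        preprocess(tree)
        derivation = translate_to_ccg(tree)
        if derivation is None:
            return None
        return derivation if is_ccg_derivation(derivation) else None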
Starting at the lowest nonterminal nodes, this step turns the Tiger graph into a planar tree without crossing edges, where every node spans a contiguous substring. This is required as input to the actual translation step, since CCG derivations are planar binary trees. If the first to the th child of a node  span a contiguous substring that ends in the th word, and the    th child spans a substring starting at     , we attempt to move the first  children of  to its parent  (if the head position of  is greater than ). Punctuation marks and adjuncts are simply moved up the tree and treated as if they were originally attached to . This changes the syntactic scope of adjuncts, but typically only VP modifiers are affected which could also be attached at a higher VP or S node without a change in meaning. The main exception 507 1. The original Tiger graph: an in APPR einem a ART Höchsten Highest NN dem whom PRELS sich refl. PRF fraglos without questions ADJD habe have VAFIN HD HD MO DA SB OC NK NK AC RC PP VP der the ART Mensch human NN kleine small ADJA NK NK NK NP S zu to PTKZU unterwerfen submit VVVIN PM HD VZ OA , $, 2. After transformation into a planar tree and preprocessing: PP APPR-AC an NP-ARG ART-HD einem NOUN-ARG NN-NK H¨ochsten PKT , SBAR-RC PRELS-EXTRA-DA dem S-ARG NP-SB ART-NK der NOUN-ARG ADJA-NK kleine NN-HD Mensch VP-OC PRF-ADJ sich ADJD-MO fraglos VZ-HD PTKZU-PM zu VVINF unterwerfen VAFIN-HD habe 3. The resulting CCG derivation   an     einem  H¨ochsten  ,    dem            der   kleine   Mensch      sich   fraglos    zu   unterwerfen    habe Figure 2: From Tiger graphs to CCG derivations are extraposed relative clauses, which CCG treats as sentential modifiers with an anaphoric dependency. Arguments that are moved up are marked as extracted, and an additional “extraction” edge (explained below) from the original head is introduced to capture the correct dependencies in the CCG derivation. Discontinuous dependencies between resumptive pronouns (“place holders”, PH) and their antecedents (“repeated elements”, RE) are also dissolved. 2. Additional preprocessing: In order to obtain the desired CCG analysis, a certain amount of preprocessing is required. We insert NPs into PPs, nouns into NPs3, and change sentences whose first element is a complementizer (dass, ob, etc.) into an SBAR (a category which does not exist in the original Tiger annotation) with S argu3The span of nouns is given by the NK edge label. ment. This is necessary to obtain the desired CCG derivations where complementizers and prepositions take a sentential or nominal argument to their right, whereas they appear at the same level as their arguments in the Tiger corpus. Further preprocessing is required to create the required structures for wh-extraction and certain coordination phenomena (see below). In figure 2, preprocessing of the original Tiger graph (top) yields the tree shown in the middle (edge labels are shown as Penn Treebank-style function tags).4 We will first present the basic translation algorithm before we explain how we obtain a derivation which captures the dependency between the relative pronoun and the embedded verb. 4We treat reflexive pronouns as modifiers. 508 3. The basic translation step Our basic translation algorithm is very similar to Hockenmaier and Steedman (2005). It requires a planar tree without crossing edges, where each node is marked as head, complement or adjunct. 
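The sentence describing the child-raising operation in step 1 lost its variable names in this rendering (the child index and the word positions are missing). The sketch below reconstructs the apparent intent under explicit assumptions: each node records the span of word positions it covers and the index of its head child, and a leading block of children is moved to the parent when a gap opens before the next child and the head child stays behind. It illustrates the idea only and is not the authors' implementation; in particular, marking raised arguments as extracted and adding the extra extraction edge is left out.

    def raise_leading_children(node):
        """Try to remove a discontinuity at `node` by moving children up.

        Assumed representation: node.children is ordered left to right,
        child.span = (first_word_index, last_word_index), node.head_index
        is the position of the head child, and node.parent is the mother.
        Returns True if any children were moved.
        """
        kids = node.children
        for k in range(1, len(kids)):
            gap = kids[k].span[0] > kids[k - 1].span[1] + 1
            if gap and node.head_index >= k:     # the head child stays behind
                moved, node.children = kids[:k], kids[k:]
                node.head_index -= k
                parent = node.parent
                pos = parent.children.index(node)
                parent.children[pos:pos] = moved  # keep surface order
                for child in moved:
                    child.parent = parent
                return True
        return False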
The latter information is represented in the Tiger edge labels, and only a small number of additional head rules is required. Each individual translation step operates on local trees, which are typically flat. N C C  ... C  ... C C  Assuming the CCG category of  is , and its head position is , the algorithm traverses first the left nodes ...  from left to right to create a right-branching derivation tree, and then the right nodes ( ...  ) from right to left to create a left-branching tree. The algorithm starts at the root category and recursively traverses the tree. N C L C  L  ... R R R H  ... C C  The CCG category of complements and of the root of the graph is determined from their Tiger label. VPs are , where the feature  distinguishes bare infinitives, zu-infinitives, passives, and (active) past participles. With the exception of passives, these features can be determined from the POS tags alone.5 Embedded sentences (under an SBAR-node) are always  . NPs and nouns ( and ) have a case feature, e.g. .6 Like the English CCGbank, our grammar ignores number and person agreement. Special cases: Wh-extraction and extraposition In Tiger, wh-extraction is not explicitly marked. Relative clauses, wh-questions and free relatives are all annotated as S-nodes,and the wh-word is a normal argument of the verb. After turning the graph into a planar tree, we can identify these constructions by searching for a relative pronoun in the leftmost child of an S node (which may be marked as extraposed in the case of extraction from an embedded verb). As shown in figure 2, we turn this S into an SBAR (a category which does not exist in Tiger) with the first edge as complementizer and move the remaining chil5Eventive (“werden”) passive is easily identified by context; however, we found that not all stative (“sein”) passives seem to be annotated as such. 6In some contexts, measure nouns (e.g. Mark, Kilometer) lack case annotation. dren under a new S node which becomes the second daughter of the SBAR. The relative pronoun is the head of this SBAR and takes the S-node as argument. Its category is  , since all clauses with a complementizer are verb-final. In order to capture the long-range dependency, a “trace” is introduced, and percolated down the tree, much like in the algorithm of Hockenmaier and Steedman (2005), and similar to GPSG’s slash-passing (Gazdar et al., 1985). These trace categories are appended to the category of the head node (and other arguments are type-raised as necessary). In our case, the trace is also associated with the verb whose argument it is. If the span of this verb is within the span of a complement, the trace is percolated down this complement. When the VP that is headed by this verb is reached, we assume a canonical order of arguments in order to “discharge” the trace. If a complement node is marked as extraposed, it is also percolated down the head tree until the constituent whose argument it is is found. When another complement is found whose span includes the span of the constituent whose argument the extraposed edge is, the extraposed category is percolated down this tree (we assume extraction out of adjuncts is impossible).7 In order to capture the topicalization analysis, main clause subjects also introduce a trace. Fronted complements or subjects, and the first adjunct in main clauses are analyzed as described in figure 1. 
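The traversal order of the basic translation step amounts to a simple binarisation scheme: the daughters to the right of the head are attached first, giving a left-branching tree over them, and the daughters to the left are then attached so that the leftmost one ends up outermost, giving a right-branching tree. The structural part of that step can be sketched as follows; the combine callback, which performs the actual category bookkeeping for complements and adjuncts, is a hypothetical stand-in rather than the paper's code.

    def binarize_local_tree(children, head_pos, combine):
        """Binarise one flat local tree into a CCG (sub)derivation.

        children : sub-derivations for the daughters, in surface order
        head_pos : index of the head daughter
        combine  : callback joining two adjacent sub-derivations and
                   returning the combined one (it decides, from the Tiger
                   edge label, whether the new daughter is a complement or
                   an adjunct and assigns categories accordingly)
        """
        node = children[head_pos]
        # attach right daughters first: ((head R1) R2) ...  (left-branching)
        for right in children[head_pos + 1:]:
            node = combine(node, right)
        # then attach left daughters, innermost first, so the leftmost
        # daughter ends up outermost: L1 (L2 (... (head+rights)))
        for left in reversed(children[:head_pos]):
            node = combine(left, node)
        return node

For the verb-final dass-clause shown earlier, where the finite verb is the head and all other daughters stand to its left, this yields exactly the right-branching derivation given above.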
Special case: coordination – secondary edges Tiger uses “secondary edges” to represent the dependencies that arise in coordinate constructions such as gapping, argument cluster coordination and right (or left) node raising (Figure 3). In right (left) node raising, the shared elements are arguments or adjuncts that appear on the right periphery of the last, (or left periphery of the first) conjunct. CCG uses type-raising and composition to combine the incomplete conjuncts into one constituent which combines with the shared element: liest immer und beantwortet gerne jeden Brief. always reads and gladly replies to every letter.       7In our current implementation, each node cannot have more than one forward and one backward extraposed element and one forward and one backward trace. It may be preferable to use list structures instead, especially for extraposition. 509 Complex coordinations: a Tiger graph with secondary edges MO während while KOUS 78 78 CARD Prozent percent NN und and KON sich refl. PRF aussprachen argued VVFIN HD SB CP für for APPR Bush Bush NE S OA vier vier CARD Prozent percent NN für for APPR Clinton Clinton NE NK AC PP NK AC PP NK NK NP NK NK NP SB MO S CD CJ CJ CS The planar tree after preprocessing: SBAR KOUS-HD w¨ahrend S-ARG ARGCLUSTER S-CJ NP-SB 78 Prozent PRF-MO sich PP-MO f¨ur Bush KON-CD und S-CJ NP-SB vier Prozent PP-MO f¨ur Clinton VVFIN-HD aussprachen The resulting CCG derivation:    w¨ahrend                     78 Prozent  sich  f¨ur Bush       und           vier Prozent  f¨ur Clinton    aussprachen Figure 3: Processing secondary edges in Tiger In order to obtain this analysis, we lift such shared peripheral constituents inside the conjuncts of conjoined sentences CS (or verb phrases, CVP) to new S (VP) level that we insert in between the CS and its parent. In argument cluster coordination (Figure 3), the shared peripheral element (aussprachen) is the head.8 In CCG, the remaining arguments and adjuncts combine via composition and typeraising into a functor category which takes the category of the head as argument (e.g. a ditransitive verb), and returns the same category that would result from a non-coordinated structure (e.g. a VP). The result category of the furthest element in each conjunct is equal to the category of the entire VP (or sentence), and all other elements are type-raised and composed with this to yield a category which takes as argument a verb with the required subcat frame and returns a verb phrase (sentence). Tiger assumes instead that there are two conjuncts (one of which is headless), and uses secondary edges 8W¨ahrend has scope over the entire coordinated structure. to indicate the dependencies between the head and the elements in the distant conjunct. Coordinated sentences and VPs (CS and CVP) that have this annotation are rebracketed to obtain the CCG constituent structure, and the conjuncts are marked as argument clusters. Since the edges in the argument cluster are labeled with their correct syntactic functions, we are able to mimic the derivation during category assignment. In sentential gapping, the main verb is shared and appears in the middle of the first conjunct: (6) Er trinkt Bier und sie Wein. He drinks beer and she wine. As in the English CCGbank, we ignore this construction, which requires a non-combinatory “decomposition” rule (Steedman, 1990). 5 Evaluation Translation coverage The algorithm can fail at several stages. If the graph cannot be turned into a tree, it cannot be translated. 
This happens in 1.3% (647) of all sentences. In many cases, this is due 510 to coordinated NPs or PPs where one or more conjuncts are extraposed. We believe that these are anaphoric, and further preprocessing could take care of this. In other cases, this is due to verb topicalization (gegeben hat Peter Maria das Buch), which our algorithm cannot currently deal with.9 For 1.9% of the sentences, the algorithm cannot obtain a correct CCG derivation. Mostly this is the case because some traces and extraposed elements cannot be discharged properly. Typically this happens either in local scrambling, where an object of the main verb appears between the auxiliary and the subject (hat das Buch Peter...)10, or when an argument of a noun that appears in a relative clause is extraposed to the right. There are also a small number of constituents whose head is not annotated. We ignore any gapping construction or argument cluster coordination that we cannot get into the right shape (1.5%), 732 sentences). There are also a number of other constructions that we do not currently deal with. We do not process sentences if the root of the graph is a “virtual root” that does not expand into a sentence (1.7%, 869). This is mostly the case for strings such as Frankfurt (Reuters)), or if we cannot identify a head child of the root node (1.3%, 648; mostly fragments or elliptical constructions). Overall, we obtain CCG derivations for 92.4% (46,628) of all 54,0474 sentences, including 88.4% (12,122) of those whose Tiger graphs are marked as discontinuous (13,717), and 95.2% of all 48,957 full sentences (excluding headless roots, and fragments, but counting coordinate structures such as gapping). Lexicon size There are 2,506 lexical category types, but 1,018 of these appear only once. 933 category types appear more than 5 times. Lexical coverage In order to evaluate coverage of the extracted lexicon on unseen data, we split the corpus into segments of 5,000 sentences (ignoring the last 474), and perform 10-fold crossvalidation, using 9 segments to extract a lexicon and the 10th to test its coverage. Average coverage is 86.7% (by token) of all lexical categories. Coverage varies between 84.4% and 87.6%. On average, 92% (90.3%-92.6%) of the lexical tokens 9The corresponding CCG derivation combines the remnant complements as in argument cluster coordination. 10This problem arises because Tiger annotates subjects as arguments of the auxiliary. We believe this problem could be avoided if they were instead arguments of the non-finite verb. that appear in the held-out data also appear in the training data. On these seen tokens, coverage is 94.2% (93.5%-92.6%). More than half of all missing lexical entries are nouns. In the English CCGbank, a lexicon extracted from section 02-21 (930,000 tokens) has 94% coverage on all tokens in section 00, and 97.7% coverage on all seen tokens (Hockenmaier and Steedman, 2005). In the English data set, the proportion of seen tokens (96.2%) is much higher, most likely because of the relative lack of derivational and inflectional morphology. The better lexical coverage on seen tokens is also to be expected, given that the flexible word order of German requires case markings on all nouns as well as at least two different categories for each tensed verb, and more in order to account for local scrambling. 6 Conclusion and future work We have presented an algorithm which converts the syntax graphs in the German Tiger corpus (Brants et al., 2002) into Combinatory Categorial Grammar derivation trees. 
This algorithm is currently able to translate 92.4% of all graphs in Tiger, or 95.2% of all full sentences. Lexicons extracted from this corpus contain the correct entries for 86.7% of all and 94.2% of all seen tokens. Good lexical coverage is essential for the performance of statistical CCG parsers (Hockenmaier and Steedman, 2002a). Since the Tiger corpus contains complete morphological and lemma information for all words, future work will address the question of how to identify and apply a set of (non-recursive) lexical rules (Carpenter, 1992) to the extracted CCG lexicon to create a much larger lexicon. The number of lexical category types is almost twice as large as that of the English CCGbank. This is to be expected, since our grammar includes case features, and German verbs require different categories for main and subordinate clauses. We currently perform only the most essential preprocessing steps, although there are a number of constructions that might benefit from additional changes (e.g. comparatives, parentheticals, or fragments), both to increase coverage and accuracy of the extracted grammar. Since Tiger corpus is of comparable size to the Penn Treebank, we hope that the work presented here will stimulate research into statistical widecoverage parsing of free word order languages such as German with deep grammars like CCG. 511 Acknowledgments I would like to thank Mark Steedman and Aravind Joshi for many helpful discussions. This research is supported by NSF ITR grant 0205456. References Jason Baldridge. 2002. Lexically Specified Derivational Control in Combinatory Categorial Grammar. Ph.D. thesis, School of Informatics, University of Edinburgh. Alena B¨ohomv´a, Jan Hajiˇc, Eva Hajiˇcov´a, and Barbora Hladk´a. 2003. The Prague Dependency Treebank: Threelevel annotation scenario. In Anne Abeill´e, editor, Treebanks: Building and Using Syntactially Annotated Corpora. Kluwer. Sabine Brants, Stefanie Dipper, Silvia Hansen, Wolfgang Lexius, and George Smith. 2002. The TIGER treebank. In Workshop on Treebanks and Linguistic Theories, Sozpol. Aoife Cahill, Martin Forst, Mairead McCarthy, Ruth O’Donovan, Christian Rohrer, Josef van Genabith, and Andy Way. 2005. Treebank-based acquisition of multilingual unification-grammar resources. Journal of Research on Language and Computation. Ruken C¸ akıcı. 2005. Automatic induction of a CCG grammar for Turkish. In ACL Student Research Workshop, pages 73–78, Ann Arbor, MI, June. Bob Carpenter. 1992. Categorial grammars, lexical rules, and the English predicative. In Robert Levine, editor, Formal Grammar: Theory and Implementation, chapter 3. Oxford University Press. John Chen, Srinivas Bangalore, and K. Vijay-Shanker. 2005. Automated extraction of Tree-Adjoining Grammars from treebanks. Natural Language Engineering. Stephen Clark and James R. Curran. 2004. Parsing the WSJ using CCG and log-linear models. In Proceedings of the 42nd Annual Meeting of the Association for Computational Linguistics, Barcelona, Spain. Amit Dubey and Frank Keller. 2003. Probabilistic parsing for German using Sister-Head dependencies. In Erhard Hinrichs and Dan Roth, editors, Proceedings of the 41st Annual Meeting of the Association for Computational Linguistics, pages 96–103, Sapporo, Japan. Gerald Gazdar, Ewan Klein, Geoffrey K. Pullum, and Ivan A. Sag. 1985. Generalised Phrase Structure Grammar. Blackwell, Oxford. Julia Hockenmaier and Mark Steedman. 2002a. Acquiring compact lexicalized grammars from a cleaner Treebank. 
In Proceedings of the Third International Conference on Language Resources and Evaluation (LREC), pages 1974–1981, Las Palmas, Spain, May. Julia Hockenmaier and Mark Steedman. 2002b. Generative models for statistical parsing with Combinatory Categorial Grammar. In Proceedings of the 40th Annual Meeting of the Association for Computational Linguistics, pages 335– 342, Philadelphia, PA. Julia Hockenmaier and Mark Steedman. 2005. CCGbank: Users’ manual. Technical Report MS-CIS-05-09, Computer and Information Science, University of Pennsylvania. Beryl Hoffman. 1995. Computational Analysis of the Syntax and Interpretation of ‘Free’ Word-order in Turkish. Ph.D. thesis, University of Pennsylvania. IRCS Report 95-17. Roger Levy and Christopher Manning. 2004. Deep dependencies from context-free statistical parsers: correcting the surface dependency approximation. In Proceedings of the 42nd Annual Meeting of the Association for Computational Linguistics. Mitchell P. Marcus, Beatrice Santorini, and Mary Ann Marcinkiewicz. 1993. Building a large annotated corpus of English: the Penn Treebank. Computational Linguistics, 19:313–330. Yusuke Miyao and Jun’ichi Tsujii. 2005. Probabilistic disambiguation models for wide-coverage HPSG parsing. In Proceedings of the 43rd Annual Meeting of the Association for Computational Linguistics, pages 83–90, Ann Arbor, MI. Yusuke Miyao, Takashi Ninomiya, and Jun’ichi Tsujii. 2004. Corpus-oriented grammar development for acquiring a Head-driven Phrase Structure Grammar from the Penn Treebank. In Proceedings of the First International Joint Conference on Natural Language Processing (IJCNLP04). Michael Moortgat and Richard Moot. 2002. Using the Spoken Dutch Corpus for type-logical grammar induction. In Proceedings of the Third International Conference on Language Resources and Evaluation (LREC). Ruth O’Donovan, Michael Burke, Aoife Cahill, Josef van Genabith, and Andy Way. 2005. Large-scale induction and evaluation of lexical resources from the PennII and Penn-III Treebanks. Computational Linguistics, 31(3):329 – 365, September. Owen Rambow. 1994. Formal and Computational Aspects of Natural Language Syntax. Ph.D. thesis, University of Pennsylvania, Philadelphia PA. Libin Shen and Aravind K. Joshi. 2005. Incremental LTAG parsing. In Proceedings of the Human Language Technology Conference / Conference of Empirical Methods in Natural Language Processing (HLT/EMNLP). Wojciech Skut, Brigitte Krenn, Thorsten Brants, and Hans Uszkoreit. 1997. An annotation scheme for free word order languages. In Fifth Conference on Applied Natural Language Processing. Mark Steedman. 1990. Gapping as constituent coordination. Linguistics and Philosophy, 13:207–263. Mark Steedman. 1996. Surface Structure and Interpretation. MIT Press, Cambridge, MA. Linguistic Inquiry Monograph, 30. Mark Steedman. 2000. The Syntactic Process. MIT Press, Cambridge, MA. Fei Xia. 1999. Extracting Tree Adjoining Grammars from bracketed corpora. In Proceedings of the 5th Natural Language Processing Pacific Rim Symposium (NLPRS-99). 512
Proceedings of the 21st International Conference on Computational Linguistics and 44th Annual Meeting of the ACL, pages 513–520, Sydney, July 2006. c⃝2006 Association for Computational Linguistics Improved Discriminative Bilingual Word Alignment Robert C. Moore Wen-tau Yih Andreas Bode Microsoft Research Redmond, WA 98052, USA {bobmoore,scottyhi,abode}@microsoft.com Abstract For many years, statistical machine translation relied on generative models to provide bilingual word alignments. In 2005, several independent efforts showed that discriminative models could be used to enhance or replace the standard generative approach. Building on this work, we demonstrate substantial improvement in word-alignment accuracy, partly though improved training methods, but predominantly through selection of more and better features. Our best model produces the lowest alignment error rate yet reported on Canadian Hansards bilingual data. 1 Introduction Until recently, almost all work in statistical machine translation was based on word alignments obtained from combinations of generative probabalistic models developed at IBM by Brown et al. (1993), sometimes augmented by an HMMbased model or Och and Ney’s “Model 6” (Och and Ney, 2003). In 2005, however, several independent efforts (Liu et al., 2005; Fraser and Marcu, 2005; Ayan et al., 2005; Taskar et al., 2005; Moore, 2005; Ittycheriah and Roukos, 2005) demonstrated that discriminatively trained models can equal or surpass the alignment accuracy of the standard models, if the usual unlabeled bilingual training corpus is supplemented with human-annotated word alignments for only a small subset of the training data. The work cited above makes use of various training procedures and a wide variety of features. Indeed, whereas it can be difficult to design a factorization of a generative model that incorporates all the desired information, it is relatively easy to add arbitrary features to a discriminative model. We take advantage of this, building on our existing framework (Moore, 2005), to substantially reduce the alignment error rate (AER) we previously reported, given the same training and test data. Through a careful choice of features, and modest improvements in training procedures, we obtain the lowest error rate yet reported for word alignment of Canadian Hansards data. 2 Overall Approach As in our previous work (Moore, 2005), we train two models we call stage 1 and stage 2, both in the form of a weighted linear combination of feature values extracted from a pair of sentences and a proposed word alignment of them. The possible alignment having the highest overall score is selected for each sentence pair. Thus, for a sentence pair (e, f) we seek the alignment ˆa such that ˆa = argmaxa n X i=1 λifi(a, e, f) where the fi are features and the λi are weights. The models are trained on a large number of bilingual sentence pairs, a small number of which have hand-created word alignments provided to the training procedure. A set of hand alignments of a different subset of the overall training corpus is used to evaluate the models. In the stage 1 model, all the features are based on surface statistics of the training data, plus the hypothesized alignment. The entire training corpus is then automatically aligned using this model. The stage 2 model uses features based not only on the parallel sentences themselves but also on statistics of the alignments produced by the stage 513 1 model. 
The stage 1 model is discussed in Section 3, and the stage 2 model, in Section 4. After experimenting with many features and combinations of features, we made the final selection based on minimizing training set AER. For alignment search, we use a method nearly identical to our previous beam search procedure, which we do not discuss in detail. We made two minor modifications to handle the possiblity that more than one alignment may have the same score, which we previously did not take into account. First, we modified the beam search so that the beam size dynamically expands if needed to accomodate all the possible alignments that have the same score. Second we implemented a structural tie breaker, so that the same alignment will always be chosen as the one-best from a set of alignments having the same score. Neither of these changes significantly affected the alignment results. The principal training method is an adaptation of averaged perceptron learning as described by Collins (2002). The differences between our current and earlier training methods mainly address the observation that perceptron training is very sensitive to the order in which data is presented to the learner. We also investigated the large-margin training technique described by Tsochantaridis et al. (2004). The training procedures are described in Sections 5 and 6. 3 Stage 1 Model In our previous stage 1 model, we used five features. The most informative feature was the sum of bilingual word-association scores for all linked word pairs, computed as a log likelihood ratio. We used two features to measure the degree of nonmonotonicity of alignments, based on traversing the alignment in the order of the source sentence tokens, and noting the instances where the corresponding target sentence tokens were not in leftto-right order. One feature counted the number of times there was a backwards jump in the order of the target sentence tokens, and the other summed the magnitudes of these jumps. In order to model the trade-off between one-to-one and many-to-one alignments, we included a feature that counted the number of alignment links such that one of the linked words participated in another link. Our fifth feature was the count of the number of words in the sentence pair left unaligned. In addition to these five features, we employed two hard constraints. One constraint was that the only alignment patterns allowed were 1–1, 1–2, 1– 3, 2–1, and 3–1. Thus, many-to-many link patterns were disallowed, and a single word could be linked to at most three other words. The second constraint was that a possible link was considered only if it involved the strongest degree of association within the sentence pair for at least one of the words to be linked. If both words had stronger associations with other words in the sentence pair, then the link was disallowed. Our new stage 1 model includes all the features we used previously, plus the constraint on alignment patterns. The constraint involving strongest association is not used. In addition, our new stage 1 model employs the following features: association score rank features We define the rank of an association with respect to a word in a sentence pair to be the number of association types (word-type to word-type) for that word that have higher association scores, such that words of both types occur in the sentence pair. 
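To make this definition concrete: for a candidate link, the rank with respect to each of its two words can be computed directly from the association-score table, restricted to word types present in the sentence pair. The sketch below assumes the scores are available as a dictionary keyed on word pairs (an assumed representation, not the authors' code); the two rank-based features described below are then just sums of these values over the links of an alignment.

    def association_ranks(e_word, f_word, e_words, f_words, assoc_score):
        """Rank of the (e_word, f_word) association w.r.t. each of its words.

        assoc_score maps (source word, target word) pairs to association
        scores (e.g. the log-likelihood-ratio statistic); unseen pairs are
        treated as having no association.  The rank w.r.t. e_word is the
        number of word types in the target sentence whose association with
        e_word scores higher than the (e_word, f_word) association, and
        symmetrically for f_word.
        """
        none = float("-inf")
        score = assoc_score.get((e_word, f_word), none)
        e_rank = sum(1 for f in set(f_words)
                     if assoc_score.get((e_word, f), none) > score)
        f_rank = sum(1 for e in set(e_words)
                     if assoc_score.get((e, f_word), none) > score)
        return e_rank, f_rank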
The contraint on strength of association we previously used can be stated as a requirement that no link be considered unless the corresponding association is of rank 0 for at least one of the words. We replace this hard constraint with two features based on association rank. One feature totals the sum of the association ranks with respect to both words involved in each link. The second feature sums the minimum of association ranks with respect to both words involved in each link. For alignments that obey the previous hard constraint, the value of this second feature would always be 0. jump distance difference feature In our original models, the only features relating to word order were those measuring nonmonotonicity. The likelihoods of various forward jump distances were not modeled. If alignments are dense enough, measuring nonmonotonicity gets at this indirectly; if every word is aligned, it is impossible to have large forward jumps without correspondingly large backwards jumps, because something has to link to the words that are jumped over. If word alignments are sparse, however, due to free translation, it is possible to have alignments with very different forward jumps, but the same backwards jumps. To differentiate such alignments, we introduce a feature that sums the differences between the distance between consecutive aligned 514 source words and the distance between the closest target words they are aligned to. many-to-one jump distance features It seems intuitive that the likelihood of a large forward jump on either the source or target side of an alignment is much less if the jump is between words that are both linked to the same word of the other language. This motivates the distinction between the d1 and d>1 parameters in IBM Models 4 and 5. We model this by including two features. One feature sums, for each word w, the number of words not linked to w that fall between the first and last words linked to w. The other features counts only such words that are linked to some word other than w. The intuition here is that it is not so bad to have a function word not linked to anything, between two words linked to the same word. exact match feature We have a feature that sums the number of words linked to identical words. This is motivated by the fact that proper names or specialized terms are often the same in both languages, and we want to take advantage of this to link such words even when they are too rare to have a high association score. lexical features Taskar et al. (2005) gain considerable benefit by including features counting the links between particular high frequency words. They use 25 such features, covering all pairs of the five most frequent non-punctuation words in each language. We adopt this type of feature but do so more agressively. We include features for all bilingual word pairs that have at least two cooccurrences in the labeled training data. In addition, we include features counting the number of unlinked occurrences of each word having at least two occurrences in the labeled training data. In training our new stage 1 model, we were concerned that using so many lexical features might result in overfitting to the training data. To try to prevent this, we train the stage 1 model by first optimizing the weights for all other features, then optimizing the weights for the lexical features, with the other weights held fixed to their optimium values without lexical features. 
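The remaining count-style features are straightforward to read off a candidate alignment, represented here as a set of (source position, target position) links; this representation is assumed for illustration only. The fragment below computes the exact-match feature, the unaligned-word count, and the two many-to-one jump-distance features described above, shown for one direction only (the symmetric case is analogous).

    def count_features(e_words, f_words, links):
        """A few of the stage 1 count features for one candidate alignment.

        links : set of (i, j) pairs linking e_words[i] to f_words[j]
        """
        feats = {}

        # exact match: linked tokens that are string-identical
        feats["exact_match"] = sum(1 for i, j in links
                                   if e_words[i] == f_words[j])

        # unaligned words on either side
        aligned_e = {i for i, _ in links}
        aligned_f = {j for _, j in links}
        feats["unaligned"] = ((len(e_words) - len(aligned_e)) +
                              (len(f_words) - len(aligned_f)))

        # many-to-one jump features: for each target word, count source
        # words not linked to it that fall between the first and last
        # source words linked to it; the second variant counts only those
        # intervening words that are themselves linked to some other word
        gap_all = gap_linked = 0
        for j in aligned_f:
            linked_to_j = sorted(i for i, jj in links if jj == j)
            lo, hi = linked_to_j[0], linked_to_j[-1]
            for i in range(lo + 1, hi):
                if (i, j) not in links:
                    gap_all += 1
                    if i in aligned_e:
                        gap_linked += 1
        feats["m21_gap_all"] = gap_all
        feats["m21_gap_linked"] = gap_linked
        return feats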
4 Stage 2 Model In our original stage 2 model, we replaced the loglikelihood-based word association statistic with the logarithm of the estimated conditional probability of a cluster of words being linked by the stage 1 model, given that they co-occur in a pair of aligned sentences, computed over the full (500,000 sentence pairs) training data. We estimated these probabilities using a discounted maximum likelihood estimate, in which a small fixed amount was subtracted from each link count: LPd(w1, . . . , wk) = links1(w1, . . . , wk) −d cooc(w1, . . . , wk) LPd(w1, . . . , wk) represents the estimated conditional link probability for the cluster of words w1, . . . , wk; links1(w1, . . . , wk) is the number of times they are linked by the stage 1 model, d is the discount; and cooc(w1, . . . , wk) is the number of times they co-occur. We found that d = 0.4 seemed to minimize training set AER. An important difference between our stage 1 and stage 2 models is that the stage 1 model considers each word-to-word link separately, but allows multiple links per word, as long as they lead to an alignment consisting only of one-to-one and one-to-many links (in either direction). The stage 2 model, however, uses conditional probabilities for both one-to-one and one-to-many clusters, but requires all clusters to be disjoint. Our original stage 2 model incorporated the same addtional features as our original stage 1 model, except that the feature that counts the number of links involved in non-one-to-one link clusters was omitted. Our new stage 2 model differs in a number of ways from the original version. First we replace the estimated conditional probability of a cluster of words being linked with the estimated conditional odds of a cluster of words being linked: LO(w1, . . . , wk) = links1(w1, . . . , wk) + 1 (cooc(w1, . . . , wk) −links1(w1, . . . , wk)) + 1 LO(w1, . . . , wk) represents the estimated conditional link odds for the cluster of words w1, . . . , wk. Note that we use “add-one” smoothing in place of a discount. Additional features in our new stage 2 model include the unaligned word feature used previously, plus the following features: symmetrized nonmonotonicity feature We symmetrize the previous nonmonontonicity feature that sums the magnitude of backwards jumps, by averaging the sum of of backwards jumps in the target sentence order relative to the source 515 sentence order, with the sum of the backwards jumps in the source sentence order relative to the target sentence order. We omit the feature that counts the number of backwards jumps. multi-link feature This feature counts the number of link clusters that are not one-to-one. This enables us to model whether the link scores for these clusters are more or less reliable than the link scores for one-to-one clusters. empirically parameterized jump distance feature We take advantage of the stage 1 alignment to incorporate a feature measuring the jump distances between alignment links that are more sophisticated than simply measuring the difference in source and target distances, as in our stage 1 model. We measure the (signed) source and target distances between all pairs of links in the stage 1 alignment of the full training data. From this, we estimate the odds of each possible target distance given the corresponding source distance: JO(dt|ds) = C(target dist = dt ∧source dist = ds) + 1 C(target dist ̸= dt ∧source dist = ds) + 1 We similarly estimate the odds of each possible source distance given the corresponding target distance. 
The feature values consist of the sum of the scaled log odds of the jumps between consecutive links in a hypothesized alignment, computed in both source sentence and target sentence order. This feature is applied only when both the source and target jump distances are non-zero, so that it applies only to jumps between clusters, not to jumps on the “many” side of a many-to-one cluster. We found it necessary to linearly scale these feature values in order to get good results (in terms of training set AER) when using perceptron training.1 We found empirically that we could get good results in terms of training set AER by dividing each log odds estimate by the largest absolute value of any such estimate computed. 5 Perceptron Training We optimize feature weights using a modification of averaged perceptron learning as described by Collins (2002). Given an initial set of feature weight values, the algorithm iterates through the 1Note that this is purely for effective training, since after training, one could adjust the feature weights according to the scale factor, and use the original feature values. labeled training data multiple times, comparing, for each sentence pair, the best alignment ahyp according to the current model with the reference alignment aref. At each sentence pair, the weight for each feature is is incremented by a multiple of the difference between the value of the feature for the best alignment according to the model and the value of the feature for the reference alignment: λi ←λi + η(fi(aref, e, f) −fi(ahyp, e, f)) The updated feature weights are used to compute ahyp for the next sentence pair. The multiplier η is called the learning rate. In the averaged perceptron, the feature weights for the final model are the average of the weight values over all the data rather than simply the values after the final sentence pair of the final iteration. Differences between our approach and Collins’s include averaging feature weights over each pass through the data, rather than over all passes; randomizing the order of the data for each learning pass; and performing an evaluation pass after each learning pass, with feature weights fixed to their average values for the preceding learning pass, during which training set AER is measured. This procedure is iterated until a local minimum on training set AER is found. We initialize the weight of the anticipated mostinformative feature (word-association scores in stage 1; conditional link probabilities or odds in stage 2) to 1.0, with other feature weights intialized to 0. The weight for the most informative feature is not updated. Allowing all weights to vary allows many equivalent sets of weights that differ only by a constant scale factor. Fixing one weight eliminates a spurious apparent degree of freedom. Previously, we set the learning rate η differently in training his stage 1 and stage 2 models. For the stage 2 model, we used a single learning rate of 0.01. For the stage 1 model, we used a sequence of learning rates: 1000, 100, 10, and 1.0. At each transition between learning rates, we re-initialized the feature weights to the optimum values found with the previous learning rate. In our current work, we make a number of modifications to this procedure. We reset the feature weights to the best averaged values we have yet seen at the begining of each learning pass through the data. Anecdotally, this seems to result in faster convergence to a local AER minimum. 
We also use multiple learning rates for both the stage 1 and 516 stage 2 models, setting the learning rates automatically. The initial learning rate is the maximum absolute value (for one word pair/cluster) of the word association, link probability, or link odds feature, divided by the number of labeled training sentence pairs. Since many of the feature values are simple counts, this allows a minimal difference of 1 in the feature value, if repeated in every training example, to permit a count feature to have as large a weighted value as the most informative feature, after a single pass through the data. After the learning search terminates for a given learning rate, we reduce the learning rate by a factor of 10, and iterate until we judge that we are at a local minimum for this learning rate. We continue with progressively smaller learning rates until an entire pass through the data produces feature weights that differ so little from their values at the beginning of the pass that the training set AER does not change. Two final modifications are inspired by the realization that the results of perceptron training are very sensitive to the order in which the data is presented. Since we randomize the order of the data on every pass, if we make a pass through the training data, and the training set AER increases, it may be that we simply encountered an unfortunate ordering of the data. Therefore, when training set AER increases, we retry two additional times with the same initial weights, but different random orderings of the data, before giving up and trying a smaller learning rate. Finally, we repeat the entire training process multiple times, and average the feature weights resulting from each of these runs. We currently use 10 runs of each model. This final averaging is inspired by the idea of “Bayes-point machines” (Herbrich and Graepel, 2001). 6 SVM Training After extensive experiments with perceptron training, we wanted to see if we could improve the results obtained with our best stage 2 model by using a more sophisticated training method. Perceptron training has been shown to obtain good results for some problems, but occasionally very poor results are reported, notably by Taskar et al. (2005) for the word-alignment problem. We adopted the support vector machine (SVM) method for structured output spaces of Tsochantaridis et al. (2005), using Joachims’ SV Mstruct package. Like standard SVM learning, this method tries to find the hyperplane that separates the training examples with the largest margin. Despite a very large number of possible output labels (e.g., all possible alignments of a given pair of sentences), the optimal hyperplane can be efficiently approximated given the desired error rate, using a cutting plane algorithm. In each iteration of the algorithm, it adds the “best” incorrect predictions given the current model as constraints, and optimizes the weight vector subject only to them. The main advantage of this algorithm is that it does not pose special restrictions on the output structure, as long as “decoding” can be done efficiently. This is crucial to us because several features we found very effective in this task are difficult to incorporate into structured learning methods that require decomposable features. This method also allows a variety of loss functions, but we use only simple 0-1 loss, which in our case means whether or not the alignment of a sentence pair is completely correct, since this worked as well as anything else we tried. 
Our SVM method has a number of free parameters, which we tried tuning in two different ways. One way is minimizing training set AER, which is how we chose the stopping points in perceptron training. The other is five-fold cross validation. In this method, we train five times on 80% of the training data and test on the other 20%, with five disjoint subsets used for testing. The parameter values yielding the best averaged AER on the five test subsets of the training set are used to train the final model on the entire training set. 7 Evaluation We used the same training and test data as in our previous work, a subset of the Canadian Hansards bilingual corpus supplied for the bilingual word alignment workshop held at HLT-NAACL 2003 (Mihalcea and Pedersen, 2003). This subset comprised 500,000 English-French sentences pairs, including 224 manually word-aligned sentence pairs for labeled training data, and 223 labeled sentences pairs as test data. Automatic sentence alignment of the training data was provided by Ulrich Germann, and the hand alignments of the labeled data were created by Franz Och and Hermann Ney (Och and Ney, 2003). For baselines, Table 1 shows the test set results we previously reported, along with results for IBM Model 4, trained with Och’s Giza++ software 517 Alignment Recall Precision AER Prev LLR 0.829 0.848 0.160 CLP1 0.889 0.934 0.086 CLP2 0.898 0.947 0.075 Giza E →F 0.870 0.890 0.118 Giza F →E 0.876 0.907 0.106 Giza union 0.929 0.845 0.124 Giza intersection 0.817 0.981 0.097 Giza refined 0.908 0.929 0.079 Table 1: Baseline Results. package, using the default configuration file (Och and Ney, 2003).2 “Prev LLR” is our earlier stage 1 model, and CLP1 and CLP2 are two versions of our earlier stage 2 model. For CLP1, conditional link probabilities were estimated from the alignments produced by our “Prev LLR” model, and for CLP2, they were obtained from a yet earlier, heuristic alignment model. Results for IBM Model 4 are reported for models trained in both directions, English-to-French and French-toEnglish, and for the union, intersection, and what Och and Ney (2003) call the “refined” combination of the those two alignments. Results for our new stage 1 model are presented in Table 2. The first line is for the model described in Section 3, optimizing non-lexical features before lexical features. The second line gives results for optimizing all features simultaneously. The next line omits lexical features entirely. The last line is for our original stage 1 model, but trained using our improved perceptron training method. As we can see, our best stage 1 model reduces the error rate of previous stage 1 model by almost half. Comparing the first two lines shows that twophase training of non-lexical and lexical features produces a 0.7% reduction in test set error. Although the purpose of the two-phase training was to mitigate overfitting to the training data, we also found training set AER was reduced (7.3% vs. 8.8%). Taken all together, the results show a 7.9% total reduction in error rate: 4.0% from new nonlexical features, 3.3% from lexical features with two-phase training, and 0.6% from other improvements in perceptron training. Table 3 presents results for perceptron training of our new stage 2 model. The first line is for the model as described in Section 4. Since the use of log odds is somewhat unusual, in the second line 2Thanks to Chris Quirk for providing Giza++ alignments. 
Alignment Recall Precision AER Two-phase train 0.907 0.928 0.081 One-phase train 0.911 0.912 0.088 No lex feats 0.889 0.885 0.114 Prev LLR (new train) 0.834 0.855 0.154 Table 2: Stage 1 Model Results. Alignment Recall Precision AER Log odds 0.935 0.964 0.049 Log probs 0.934 0.962 0.051 CLP1 (new A & T) 0.925 0.952 0.060 CLP1 (new A) 0.917 0.955 0.063 Table 3: Stage 2 Model Results. we show results for a similiar model, but using log probabilities instead of log odds for both the link model and the jump model. This result is 0.2% worse than the log-odds-based model, but the difference is small enough to warrant testing its significance. Comparing the errors on each test sentence pair with a 2-tailed paired t test, the results were suggestive, but not significant (p = 0.28) The third line of Table 3 shows results for our earlier CLP1 model with probabilities estimated from our new stage 1 model alignments (“new A”), using our recent modifications to perceptron training (“new T”). These results are significantly worse than either of the two preceding models (p < 0.0008). The fourth line is for the same model and stage 1 alignments, but with our earlier perceptron training method. While the results are 0.3% worse than with our new training method, the difference is not significant (p = 0.62). Table 4 shows the results of SVM training of the model that was best under perceptron training, tuning free parameters either by minimizing error on the entire training set or by 5-fold cross validation on the training set. The cross-validation method produced slightly lower test-set AER, but both results rounded to 4.7%. While these results are somewhat better than with perceptron training, the differences are not significant (p ≥0.47). 8 Comparisons to Other Work At the time we carried out the experiments described above, our sub-5% AER results were the best we were aware of for word alignment of Canadian Hansards bilingual data, although direct comparisons are problematic due to differences in 518 Alignment Recall Precision AER Min train err 0.941 0.962 0.047 5 × CV 0.942 0.962 0.047 Table 4: SVM Training Results. total training data, labeled training data, and test data. The best previously reported result was by Och and Ney (2003), who obtained 5.2% AER for a combination including all the IBM models except Model 2, plus the HMM model and their Model 6, together with a bilingual dictionary, for the refined alignment combination, trained on three times as much data as we used. Cherry and Lin’s (2003) method obtained an AER of 5.7% as reported by Mihalcea and Pedersen (2003), the previous lowest reported error rate for a method that makes no use of the IBM models. Cherry and Lin’s method is similar to ours in using explicit estimates of the probability of a link given the co-occurence of the linked words; but it is generative rather than discriminative, it requires a parser for the English side of the corpus, and it does not model many-to-one links. Taskar et al. (2005) reported 5.4% AER for a discriminative model that includes predictions from the intersection of IBM Model 4 alignments as a feature. Their best result without using information from the IBM models was 10.7% AER. After completing the experiments described in Section 7, we became aware further developments in the line of research reported by Taskar et al. (Lacoste-Julien et al., 2006). By modifying their previous approach to allow many-to-one alignments and first-order interactions between alignments, Lacoste-Julien et al. 
have improved their best AER without using information from the more complex IBM models to 6.2%. Their best result, however, is obtained from a model that includes both a feature recording intersected IBM Model 4 predictions, plus a feature whose values are the alignment probabilities obtained from a pair of HMM alignment models trained in both directions in such a way that they agree on the alignment probabilities (Liang et al., 2006). With this model, they obtained a much lower 3.8% AER. Lacoste-Julien very graciously provided both the IBM Model 4 predictions and the probabilities estimated by the bidirectional HMM models that they had used to compute these additional feature values. We then added features based on this information to see how much we could improve our best model. We also eliminated one other difference between our results and those of LacosteJulien et al., by training on all 1.1 million EnglishFrench sentence pairs from the 2003 word alignment workshop, rather than the 500,000 sentence pairs we had been using. Since all our other feature values derived from probabilities are expressed as log odds, we also converted the HMM probabilities estimated by Liang et al. to log odds. To make this well defined in all cases, we thresholded high probabilities (including 1.0) at 0.999999, and low probabilities (including 0.0) at 0.1 (which we found produced lower training set error than using a very small non-zero probability, although we have not searched systematically for the optimal value). In our latest experiments, we first established that simply increasing the unlabled training data to 1.1 million sentence pairs made very little difference, reducing the test-set AER of our stage 2 model under perceptron training only from 4.9% to 4.8%. Combining our stage 2 model features with the HMM log odds feature using SVM training with 5-fold cross validation yielded a substantial reduction in test-set AER to 3.9% (96.9% precision, 95.1% recall). We found it somewhat difficult to improve these results further by including IBM Model 4 intersection feature. We finally obtained our best results, however, for both trainingset and test-set AER, by holding the stage 2 model feature weights at the values obtained by SVM training with the HMM log odds feature, and optimizing the HMM log odds feature weight and IBM Model 4 intersection feature weight with perceptron training.3 This produced a test-set AER of 3.7% (96.9% precision, 95.5% recall). 9 Conclusions For Canadian Hansards data, the test-set AER of 4.7% for our stage 2 model is one of the lowest yet reported for an aligner that makes no use of the expensive IBM models, and our test-set AER of 3.7% for the stage 2 model in combination with the HMM log odds and Model 4 intersection features is the lowest yet reported for any aligner.4 Perhaps if any general conclusion is to be drawn from our results, it is that in creating a discrim3At this writing we have not yet had time to try this with SVM training. 4However, the difference between our result and the 3.8% of Lacoste-Julien et al. is almost certainly not significant. 519 inative word alignment model, the model structure and features matter the most, with the discriminative training method of secondary importance. While we obtained a small improvements by varying the training method, few of the differences were statistically significant. Having better features was much more important. References Necip Fazil Ayan, Bonnie J. Dorr, and Christof Monz. 2005. 
NeurAlign: Combining Word Alignments Using Neural Networks. In Proceedings of the Human Language Technology Conference and Conference on Empirical Methods in Natural Language Processing, pp. 65–72, Vancouver, British Columbia. Peter F. Brown, Stephen A. Della Pietra, Vincent J. Della Pietra, and Robert L. Mercer. 1993. The Mathematics of Statistical Machine Translation: Parameter Estimation. Computational Linguistics, 19(2):263–311. Colin Cherry and Dekang Lin. 2003. A Probability Model to Improve Word Alignment. In Proceedings of the 41st Annual Meeting of the ACL, pp. 88–95, Sapporo, Japan. Michael Collins. 2002. Discriminative Training Methods for Hidden Markov Models: Theory and Experiments with Perceptron Algorithms. In Proceedings of the Conference on Empirical Methods in Natural Language Processing, pp. 1–8, Philadelphia, Pennsylvania. Alexander Fraser and Daniel Marcu. 2005. ISI’s Participation in the Romanian-English Alignment Task. In Proceedings of the ACL Workshop on Building and Using Parallel Texts, pp. 91–94, Ann Arbor, Michigan. Ralf Herbrich and Thore Graepel. 2001. Large Scale Bayes Point Machines Advances. In Neural Information Processing Systems 13, pp. 528–534. Abraham Ittycheriah and Salim Roukos. 2005. A Maximum Entropy Word Aligner for ArabicEnglish Machine Translation. In Proceedings of the Human Language Technology Conference and Conference on Empirical Methods in Natural Language Processing, pp. 89–96, Vancouver, British Columbia. Simon Lacoste-Julien, Ben Taskar, Dan Klein, and Michael Jordan. 2006. Word Alignment via Quadratic Assignment. In Proceedings of the Human Language Technology Conference of the North American Chapter of the Association for Computational Linguistics, pp. 112–119, New York City. Percy Liang, Ben Taskar, and Dan Klein. 2006. Alignment by Agreement. In Proceedings of the Human Language Technology Conference of the North American Chapter of the Association for Computational Linguistics, pp. 104–111, New York City. Yang Liu, Qun Liu, and Shouxun Lin. 2005. Loglinear Models for Word Alignment. In Proceedings of the 43rd Annual Meeting of the ACL, pp. 459–466, Ann Arbor, Michigan. Rada Mihalcea and Ted Pedersen. 2003. An Evaluation Exercise for Word Alignment. In Proceedings of the HLT-NAACL 2003 Workshop, Building and Using Parallel Texts: Data Driven Machine Translation and Beyond, pp. 1–6, Edmonton, Alberta. Robert C. Moore. 2005. A Discriminative Framework for Bilingual Word Alignment. In Proceedings of the Human Language Technology Conference and Conference on Empirical Methods in Natural Language Processing, pp. 81– 88, Vancouver, British Columbia. Franz Joseph Och and Hermann Ney. 2003. A Systematic Comparison of Various Statistical Alignment Models. Computational Linguistics, 29(1):19–51. Ben Taskar, Simon Lacoste-Julien, and Dan Klein. 2005. A Discriminative Matching Approach to Word Alignment. In Proceedings of the Human Language Technology Conference and Conference on Empirical Methods in Natural Language Processing, pp. 73–80, Vancouver, British Columbia. Ioannis Tsochantaridis, Thomas Hofmann, Thorsten Joachims, and Yasemin Altun. 2005. Large Margin Methods for Structured and Interdependent Output Variables. Journal of Machine Learning Research (JMLR), pp. 1453–1484. 520
Proceedings of the 21st International Conference on Computational Linguistics and 44th Annual Meeting of the ACL, pages 521–528, Sydney, July 2006. c⃝2006 Association for Computational Linguistics Maximum Entropy Based Phrase Reordering Model for Statistical Machine Translation Deyi Xiong Institute of Computing Technology Chinese Academy of Sciences Beijing, China, 100080 Graduate School of Chinese Academy of Sciences [email protected] Qun Liu and Shouxun Lin Institute of Computing Technology Chinese Academy of Sciences Beijing, China, 100080 {liuqun, sxlin}@ict.ac.cn Abstract We propose a novel reordering model for phrase-based statistical machine translation (SMT) that uses a maximum entropy (MaxEnt) model to predicate reorderings of neighbor blocks (phrase pairs). The model provides content-dependent, hierarchical phrasal reordering with generalization based on features automatically learned from a real-world bitext. We present an algorithm to extract all reordering events of neighbor blocks from bilingual data. In our experiments on Chineseto-English translation, this MaxEnt-based reordering model obtains significant improvements in BLEU score on the NIST MT-05 and IWSLT-04 tasks. 1 Introduction Phrase reordering is of great importance for phrase-based SMT systems and becoming an active area of research recently. Compared with word-based SMT systems, phrase-based systems can easily address reorderings of words within phrases. However, at the phrase level, reordering is still a computationally expensive problem just like reordering at the word level (Knight, 1999). Many systems use very simple models to reorder phrases 1. One is distortion model (Och and Ney, 2004; Koehn et al., 2003) which penalizes translations according to their jump distance instead of their content. For example, if N words are skipped, a penalty of N will be paid regardless of which words are reordered. This model takes the risk of penalizing long distance jumps 1In this paper, we focus our discussions on phrases that are not necessarily aligned to syntactic constituent boundary. which are common between two languages with very different orders. Another simple model is flat reordering model (Wu, 1996; Zens et al., 2004; Kumar et al., 2005) which is not content dependent either. Flat model assigns constant probabilities for monotone order and non-monotone order. The two probabilities can be set to prefer monotone or non-monotone orientations depending on the language pairs. In view of content-independency of the distortion and flat reordering models, several researchers (Och et al., 2004; Tillmann, 2004; Kumar et al., 2005; Koehn et al., 2005) proposed a more powerful model called lexicalized reordering model that is phrase dependent. Lexicalized reordering model learns local orientations (monotone or non-monotone) with probabilities for each bilingual phrase from training data. During decoding, the model attempts to finding a Viterbi local orientation sequence. Performance gains have been reported for systems with lexicalized reordering model. However, since reorderings are related to concrete phrases, researchers have to design their systems carefully in order not to cause other problems, e.g. the data sparseness problem. Another smart reordering model was proposed by Chiang (2005). In his approach, phrases are reorganized into hierarchical ones by reducing subphrases to variables. 
This template-based scheme not only captures the reorderings of phrases, but also integrates some phrasal generalizations into the global model. In this paper, we propose a novel solution for phrasal reordering. Here, under the ITG constraint (Wu, 1997; Zens et al., 2004), we need to consider just two kinds of reorderings, straight and inverted between two consecutive blocks. Therefore reordering can be modelled as a problem of 521 classification with only two labels, straight and inverted. In this paper, we build a maximum entropy based classification model as the reordering model. Different from lexicalized reordering, we do not use the whole block as reordering evidence, but only features extracted from blocks. This is more flexible. It makes our model reorder any blocks, observed in training or not. The whole maximum entropy based reordering model is embedded inside a log-linear phrase-based model of translation. Following the Bracketing Transduction Grammar (BTG) (Wu, 1996), we built a CKY-style decoder for our system, which makes it possible to reorder phrases hierarchically. To create a maximum entropy based reordering model, the first step is learning reordering examples from training data, similar to the lexicalized reordering model. But in our way, any evidences of reorderings will be extracted, not limited to reorderings of bilingual phrases of length less than a predefined number of words. Secondly, features will be extracted from reordering examples according to feature templates. Finally, a maximum entropy classifier will be trained on the features. In this paper we describe our system and the MaxEnt-based reordering model with the associated algorithm. We also present experiments that indicate that the MaxEnt-based reordering model improves translation significantly compared with other reordering approaches and a state-of-the-art distortion-based system (Koehn, 2004). 2 System Overview 2.1 Model Under the BTG scheme, translation is more like monolingual parsing through derivations. Throughout the translation procedure, three rules are used to derive the translation A [ ] →(A1, A2) (1) A ⟨⟩ →(A1, A2) (2) A →(x, y) (3) During decoding, the source sentence is segmented into a sequence of phrases as in a standard phrase-based model. Then the lexical rule (3) 2 is 2Currently, we restrict phrases x and y not to be null. Therefore neither deletion nor insertion is carried out during decoding. However, these operations are to be considered in our future version of model. used to translate source phrase y into target phrase x and generate a block A. Later, the straight rule (1) merges two consecutive blocks into a single larger block in the straight order; while the inverted rule (2) merges them in the inverted order. These two merging rules will be used continuously until the whole source sentence is covered. When the translation is finished, a tree indicating the hierarchical segmentation of the source sentence is also produced. In the following, we will define the model in a straight way, not in the dynamic programming recursion way used by (Wu, 1996; Zens et al., 2004). We focus on defining the probabilities of different rules by separating different features (including the language model) out from the rule probabilities and organizing them in a log-linear form. This straight way makes it clear how rules are used and what they depend on. 
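To make the use of rules (1)-(3) concrete, the following schematic (not the system's actual data structures) shows how a lexical block is created and the two ways of merging adjacent blocks; the probabilities attached to these operations are defined in Equations (4) and (5) below.

```python
from dataclasses import dataclass

@dataclass
class Block:
    i: int        # start position on the source side (inclusive)
    j: int        # end position on the source side (exclusive)
    target: str   # English yield of this block

def lexical(i, j, target_phrase):
    """Rule (3): translate the source phrase covering [i, j) into a target phrase."""
    return Block(i, j, target_phrase)

def straight(a1: Block, a2: Block) -> Block:
    """Rule (1): merge two consecutive blocks, keeping the target in source order."""
    assert a1.j == a2.i
    return Block(a1.i, a2.j, a1.target + " " + a2.target)

def inverted(a1: Block, a2: Block) -> Block:
    """Rule (2): merge two consecutive blocks, swapping the target order."""
    assert a1.j == a2.i
    return Block(a1.i, a2.j, a2.target + " " + a1.target)
```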
For the two merging rules straight and inverted, applying them on two consecutive blocks A1 and A2 is assigned a probability Prm(A) Prm(A) = ΩλΩ· △λLM pLM(A1,A2) (4) where the Ωis the reordering score of block A1 and A2, λΩis its weight, and △pLM(A1,A2) is the increment of the language model score of the two blocks according to their final order, λLM is its weight. For the lexical rule, applying it is assigned a probability Prl(A) Prl(A) = p(x|y)λ1 · p(y|x)λ2 · plex(x|y)λ3 ·plex(y|x)λ4 · exp(1)λ5 · exp(|x|)λ6 ·pλLM LM (x) (5) where p(·) are the phrase translation probabilities in both directions, plex(·) are the lexical translation probabilities in both directions, and exp(1) and exp(|x|) are the phrase penalty and word penalty, respectively. These features are very common in state-of-the-art systems (Koehn et al., 2005; Chiang, 2005) and λs are weights of features. For the reordering model Ω, we define it on the two consecutive blocks A1 and A2 and their order o ∈{straight, inverted} Ω= f(o, A1, A2) (6) Under this framework, different reordering models can be designed. In fact, we defined four reordering models in our experiments. The first one 522 is NONE, meaning no explicit reordering features at all. We set Ωto 1 for all different pairs of blocks and their orders. So the phrasal reordering is totally dependent on the language model. This model is obviously different from the monotone search, which does not use the inverted rule at all. The second one is a distortion style reordering model, which is formulated as Ω= ( exp(0), o = straight exp(|A1|) + (|A2|), o = inverted where |Ai| denotes the number of words on the source side of blocks. When λΩ< 0, this design will penalize those non-monotone translations. The third one is a flat reordering model, which assigns probabilities for the straight and inverted order. It is formulated as Ω= ( pm, o = straight 1 −pm, o = inverted In our experiments on Chinese-English tasks, the probability for the straight order is set at pm = 0.95. This is because word order in Chinese and English is usually similar. The last one is the maximum entropy based reordering model proposed by us, which will be described in the next section. We define a derivation D as a sequence of applications of rules (1) −(3), and let c(D) and e(D) be the Chinese and English yields of D. The probability of a derivation D is Pr(D) = Y i Pr(i) (7) where Pr(i) is the probability of the ith application of rules. Given an input sentence c, the final translation e∗is derived from the best derivation D∗ D∗ = argmax c(D)=c Pr(D) e∗ = e(D∗) (8) 2.2 Decoder We developed a CKY style decoder that employs a beam search algorithm, similar to the one by Chiang (2005). The decoder finds the best derivation that generates the input sentence and its translation. From the best derivation, the best English e∗ is produced. Given a source sentence c, firstly we initiate the chart with phrases from phrase translation table by applying the lexical rule. Then for each cell that spans from i to j on the source side, all possible derivations spanning from i to j are generated. Our algorithm guarantees that any sub-cells within (i, j) have been expanded before cell (i, j) is expanded. Therefore the way to generate derivations in cell (i, j) is to merge derivations from any two neighbor sub-cells. This combination is done by applying the straight and inverted rules. Each application of these two rules will generate a new derivation covering cell (i, j). 
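A sketch of the cell-expansion step just described, reusing the Block helpers sketched above; score_merge is a hypothetical stand-in for the incremental score of Equation (4), and the pruning and k-best machinery discussed next are left out.

```python
def decode(n, chart, score_merge):
    """Fill a CKY chart over source spans [i, j).

    n           : number of source words
    chart       : dict (i, j) -> list of (Block, score) derivations; spans covered
                  by phrase-table entries are assumed pre-seeded by the lexical rule
    score_merge : callable(order, a1, a2) -> incremental score of Equation (4),
                  with order in {"straight", "inverted"}   (assumed helper)
    """
    for span in range(2, n + 1):
        for i in range(0, n - span + 1):
            j = i + span
            cell = chart.setdefault((i, j), [])
            for k in range(i + 1, j):                      # split point
                for a1, s1 in chart.get((i, k), []):
                    for a2, s2 in chart.get((k, j), []):
                        cell.append((straight(a1, a2),
                                     s1 + s2 + score_merge("straight", a1, a2)))
                        cell.append((inverted(a1, a2),
                                     s1 + s2 + score_merge("inverted", a1, a2)))
    # best derivation covering the whole source sentence
    return max(chart[(0, n)], key=lambda d: d[1])
```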
The score of the new generated derivation is derived from the scores of its two sub-derivations, reordering model score and the increment of the language model score according to the Equation (4). When the whole input sentence is covered, the decoding is over. Pruning of the search space is very important for the decoder. We use three pruning ways. The first one is recombination. When two derivations in the same cell have the same w leftmost/rightmost words on the English yields, where w depends on the order of the language model, they will be recombined by discarding the derivation with lower score. The second one is the threshold pruning which discards derivations that have a score worse than α times the best score in the same cell. The last one is the histogram pruning which only keeps the top n best derivations for each cell. In all our experiments, we set n = 40, α = 0.5 to get a tradeoff between speed and performance in the development set. Another feature of our decoder is the k-best list generation. The k-best list is very important for the minimum error rate training (Och, 2003a) which is used for tuning the weights λ for our model. We use a very lazy algorithm for the k-best list generation, which runs two phases similarly to the one by Huang et al. (2005). In the first phase, the decoder runs as usual except that it keeps some information of weaker derivations which are to be discarded during recombination. This will generate not only the first-best of final derivation but also a shared forest. In the second phase, the lazy algorithm runs recursively on the shared forest. It finds the second-best of the final derivation, which makes its children to find their secondbest, and children’s children’s second-best, until the leaf node’s second-best. Then it finds the thirdbest, forth-best, and so on. In all our experiments, we set k = 200. 523 The decoder is implemented in C++. Using the pruning settings described above, without the kbest list generation, it takes about 6 seconds to translate a sentence of average length 28.3 words on a 2GHz Linux system with 4G RAM memory. 3 Maximum Entropy Based Reordering Model In this section, we discuss how to create a maximum entropy based reordering model. As described above, we defined the reordering model Ω on the three factors: order o, block A1 and block A2. The central problem is, given two neighbor blocks A1 and A2, how to predicate their order o ∈{straight, inverted}. This is a typical problem of two-class classification. To be consistent with the whole model, the conditional probability p(o|A1, A2) is calculated. A simple way to compute this probability is to take counts from the training data and then to use the maximum likelihood estimate (MLE) p(o|A1, A2) = Count(o, A1, A2) Count(A1, A2) (9) The similar way is used by lexicalized reordering model. However, in our model this way can’t work because blocks become larger and larger due to using the merging rules, and finally unseen in the training data. This means we can not use blocks as direct reordering evidences. A good way to this problem is to use features of blocks as reordering evidences. Good features can not only capture reorderings, avoid sparseness, but also integrate generalizations. It is very straight to use maximum entropy model to integrate features to predicate reorderings of blocks. 
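Equation (9) is relative-frequency estimation over extracted reordering examples. The following sketch, using hypothetical block identifiers, makes the sparseness problem visible: a block pair created by merging during decoding simply has no entry in the table, which is what motivates backing off to features of blocks in the MaxEnt model below.

```python
from collections import Counter

def mle_reordering_table(examples):
    """examples: iterable of (order, block1, block2) triples, where blocks are
    hashable identifiers.  Returns the MLE estimate p(order | b1, b2)."""
    joint = Counter()          # counts of (order, b1, b2)
    pair = Counter()           # counts of (b1, b2)
    for o, b1, b2 in examples:
        joint[(o, b1, b2)] += 1
        pair[(b1, b2)] += 1
    return {key: joint[key] / pair[(key[1], key[2])] for key in joint}

table = mle_reordering_table([
    ("straight", "block-x", "block-y"),
    ("inverted", "block-x", "block-z"),
    ("straight", "block-x", "block-y"),
])
# table[("straight", "block-x", "block-y")] == 1.0, but any larger block produced
# by merging during decoding is unseen here and has no entry at all.
```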
Under the MaxEnt model, we have Ω= pθ(o|A1, A2) = exp(P i θihi(o, A1, A2)) P o exp(P i θihi(o, A1, A2)) (10) where the functions hi ∈{0, 1} are model features and the θi are weights of the model features which can be trained by different algorithms (Malouf, 2002). 3.1 Reordering Example Extraction Algorithm The input for the algorithm is a bilingual corpus with high-precision word alignments. We obtain the word alignments using the way of Koehn et al. (2005). After running GIZA++ (Och and Ney, target source b1 b2 b3 b4 c1 c2 Figure 1: The bold dots are corners. The arrows from the corners are their links. Corner c1 is shared by block b1 and b2, which in turn are linked by the STRAIGHT links, bottomleft and topright of c1. Similarly, block b3 and b4 are linked by the INVERTED links, topleft and bottomright of c2. 2000) in both directions, we apply the “growdiag-final” refinement rule on the intersection alignments for each sentence pair. Before we introduce this algorithm, we introduce some formal definitions. The first one is block which is a pair of source and target contiguous sequences of words b = (si2 i1, tj2 j1) b must be consistent with the word alignment M ∀(i, j) ∈M, i1 ≤i ≤i2 ↔j1 ≤j ≤j2 This definition is similar to that of bilingual phrase except that there is no length limitation over block. A reordering example is a triple of (o, b1, b2) where b1 and b2 are two neighbor blocks and o is the order between them. We define each vertex of block as corner. Each corner has four links in four directions: topright, topleft, bottomright, bottomleft, and each link links a set of blocks which have the corner as their vertex. The topright and bottomleft link blocks with the straight order, so we call them STRAIGHT links. Similarly, we call the topleft and bottomright INVERTED links since they link blocks with the inverted order. For convenience, we use b ←- L to denote that block b is linked by the link L. Note that the STRAIGHT links can not coexist with the INVERTED links. These definitions are illustrated in Figure 1. The reordering example extraction algorithm is shown in Figure 2. The basic idea behind this algorithm is to register all neighbor blocks to the associated links of corners which are shared by them. To do this, we keep an array to record link 524 1: Input: sentence pair (s, t) and their alignment M 2: ℜ:= ∅ 3: for each span (i1, i2) ∈s do 4: find block b = (si2 i1, tj2 j1) that is consistent with M 5: Extend block b on the target boundary with one possible non-aligned word to get blocks E(b) 6: for each block b∗∈b S E(b) do 7: Register b∗to the links of four corners of it 8: end for 9: end for 10: for each corner C in the matrix M do 11: if STRAIGHT links exist then 12: ℜ:= ℜS {(straight, b1, b2)}, b1 ←- C.bottomleft, b2 ←- C.topright 13: else if INVERTED links exist then 14: ℜ:= ℜS {(inverted, b1, b2)}, b1 ←- C.topleft, b2 ←- C.bottomright 15: end if 16: end for 17: Output: reordering examples ℜ Figure 2: Reordering Example Extraction Algorithm. information of corners when extracting blocks. Line 4 and 5 are similar to the phrase extraction algorithm by Och (2003b). Different from Och, we just extend one word which is aligned to null on the boundary of target side. If we put some length limitation over the extracted blocks and output them, we get bilingual phrases used in standard phrase-based SMT systems and also in our system. Line 7 updates all links associated with the current block. You can attach the current block to each of these links. 
However this will increase reordering examples greatly, especially those with the straight order. In our Experiments, we just attach the smallest blocks to the STRAIGHT links, and the largest blocks to the INVERTED links. This will keep the number of reordering examples acceptable but without performance degradation. Line 12 and 14 extract reordering examples. 3.2 Features With the extracted reordering examples, we can obtain features for our MaxEnt-based reordering model. We design two kinds of features, lexical features and collocation features. For a block b = (s, t), we use s1 to denote the first word of the source s, t1 to denote the first word of the target t. Lexical features are defined on the single word s1 or t1. Collocation features are defined on the combination s1 or t1 between two blocks b1 and b2. Three kinds of combinations are used. The first one is source collocation, b1.s1&b2.s1. The second is target collocation, b1.t1&b2.t1. The last one hi(o, b1, b2) = n 1, b1.t1 = E1, o = O 0, otherwise hj(o, b1, b2) = n 1, b1.t1 = E1, b2.t1 = E2, o = O 0, otherwise Figure 3: MaxEnt-based reordering feature templates. The first one is a lexical feature, and the second one is a target collocation feature, where Ei are English words, O ∈{straight, inverted}. is block collocation, b1.s1&b1.t1 and b2.s1&b2.t1. The templates for the lexical feature and the collocation feature are shown in Figure 3. Why do we use the first words as features? These words are nicely at the boundary of blocks. One of assumptions of phrase-based SMT is that phrase cohere across two languages (Fox, 2002), which means phrases in one language tend to be moved together during translation. This indicates that boundary words of blocks may keep information for their movements/reorderings. To test this hypothesis, we calculate the information gain ratio (IGR) for boundary words as well as the whole blocks against the order on the reordering examples extracted by the algorithm described above. The IGR is the measure used in the decision tree learning to select features (Quinlan, 1993). It represents how precisely the feature predicate the class. For feature f and class c, the IGR(f, c) IGR(f, c) = En(c) −En(c|f) En(f) (11) where En(·) is the entropy and En(·|·) is the conditional entropy. To our surprise, the IGR for the four boundary words (IGR(⟨b1.s1, b2.s1, b1.t1, b2.t1⟩, order) = 0.2637) is very close to that for the two blocks together (IGR(⟨b1, b2⟩, order) = 0.2655). Although our reordering examples do not cover all reordering events in the training data, this result shows that boundary words do provide some clues for predicating reorderings. 4 Experiments We carried out experiments to compare against various reordering models and systems to demonstrate the competitiveness of MaxEnt-based reordering: 1. Monotone search: the inverted rule is not used. 525 2. Reordering variants: the NONE, distortion and flat reordering models described in Section 2.1. 3. Pharaoh: A state-of-the-art distortion-based decoder (Koehn, 2004). 4.1 Corpus Our experiments were made on two Chinese-toEnglish translation tasks: NIST MT-05 (news domain) and IWSLT-04 (travel dialogue domain). NIST MT-05. In this task, the bilingual training data comes from the FBIS corpus with 7.06M Chinese words and 9.15M English words. The trigram language model training data consists of English texts mostly derived from the English side of the UN corpus (catalog number LDC2004E12), which totally contains 81M English words. 
For the efficiency of minimum error rate training, we built our development set using sentences of length at most 50 characters from the NIST MT-02 evaluation test data. IWSLT-04. For this task, our experiments were carried out on the small data track. Both the bilingual training data and the trigram language model training data are restricted to the supplied corpus, which contains 20k sentences, 179k Chinese words and 157k English words. We used the CSTAR 2003 test set consisting of 506 sentence pairs as development set. 4.2 Training We obtained high-precision word alignments using the way described in Section 3.1. Then we ran our reordering example extraction algorithm to output blocks of length at most 7 words on the Chinese side together with their internal alignments. We also limited the length ratio between the target and source language (max(|s|, |t|)/min(|s|, |t|)) to 3. After extracting phrases, we calculated the phrase translation probabilities and lexical translation probabilities in both directions for each bilingual phrase. For the minimum-error-rate training, we reimplemented Venugopal’s trainer 3 (Venugopal et al., 2005) in C++. For all experiments, we ran this trainer with the decoder iteratively to tune the weights λs to maximize the BLEU score on the development set. 3See http://www.cs.cmu.edu/ ashishv/mer.html. This is a Matlab implementation. Pharaoh We shared the same phrase translation tables between Pharaoh and our system since the two systems use the same features of phrases. In fact, we extracted more phrases than Pharaoh’s trainer with its default settings. And we also used our reimplemented trainer to tune lambdas of Pharaoh to maximize its BLEU score. During decoding, we pruned the phrase table with b = 100 (default 20), pruned the chart with n = 100, α = 10−5 (default setting), and limited distortions to 4 (default 0). MaxEnt-based Reordering Model We firstly ran our reordering example extraction algorithm on the bilingual training data without any length limitations to obtain reordering examples and then extracted features from these examples. In the task of NIST MT-05, we obtained about 2.7M reordering examples with the straight order, and 367K with the inverted order, from which 112K lexical features and 1.7M collocation features after deleting those with one occurrence were extracted. In the task of IWSLT-04, we obtained 79.5k reordering examples with the straight order, 9.3k with the inverted order, from which 16.9K lexical features and 89.6K collocation features after deleting those with one occurrence were extracted. Finally, we ran the MaxEnt toolkit by Zhang 4 to tune the feature weights. We set iteration number to 100 and Gaussian prior to 1 for avoiding overfitting. 4.3 Results We dropped unknown words (Koehn et al., 2005) of translations for both tasks before evaluating their BLEU scores. To be consistent with the official evaluation criterions of both tasks, casesensitive BLEU-4 scores were computed For the NIST MT-05 task and case-insensitive BLEU-4 scores were computed for the IWSLT-04 task 5. Experimental results on both tasks are shown in Table 1. Italic numbers refer to results for which the difference to the best result (indicated in bold) is not statistically significant. For all scores, we also show the 95% confidence intervals computed using Zhang’s significant tester (Zhang et al., 2004) which was modified to conform to NIST’s 4See http://homepages.inf.ed.ac.uk/s0450736 /maxent toolkit.html. 
5Note that the evaluation criterion of IWSLT-04 is not totally matched since we didn’t remove punctuation marks. 526 definition of the BLEU brevity penalty. We observe that if phrasal reordering is totally dependent on the language model (NONE) we get the worst performance, even worse than the monotone search. This indicates that our language models were not strong to discriminate between straight orders and inverted orders. The flat and distortion reordering models (Row 3 and 4) show similar performance with Pharaoh. Although they are not dependent on phrases, they really reorder phrases with penalties to wrong orders supported by the language model and therefore outperform the monotone search. In row 6, only lexical features are used for the MaxEnt-based reordering model; while row 7 uses lexical features and collocation features. On both tasks, we observe that various reordering approaches show similar and stable performance ranks in different domains and the MaxEnt-based reordering models achieve the best performance among them. Using all features for the MaxEnt model (lex + col) is marginally better than using only lex features (lex). 4.4 Scaling to Large Bitexts In the experiments described above, collocation features do not make great contributions to the performance improvement but make the total number of features increase greatly. This is a problem for MaxEnt parameter estimation if it is scaled to large bitexts. Therefore, for the integration of MaxEnt-based phrase reordering model in the system trained on large bitexts, we remove collocation features and only use lexical features from the last words of blocks (similar to those from the first words of blocks with similar performance). This time the bilingual training data contain 2.4M sentence pairs (68.1M Chinese words and 73.8M English words) and two trigram language models are used. One is trained on the English side of the bilingual training data. The other is trained on the Xinhua portion of the Gigaword corpus with 181.1M words. We also use some rules to translate numbers, time expressions and Chinese person names. The new Bleu score on NIST MT-05 is 0.291 which is very promising. 5 Discussion and Future Work In this paper we presented a MaxEnt-based phrase reordering model for SMT. We used lexical features and collocation features from boundary words of blocks to predicate reorderings of neighSystems NIST MT-05 IWSLT-04 monotone 20.1 ± 0.8 37.8 ± 3.2 NONE 19.6 ± 0.8 36.3 ± 2.9 Distortion 20.9 ± 0.8 38.8 ± 3.0 Flat 20.5 ± 0.8 38.7 ± 2.8 Pharaoh 20.8 ± 0.8 38.9 ± 3.3 MaxEnt (lex) 22.0 ± 0.8 42.4 ± 3.3 MaxEnt (lex + col) 22.2 ± 0.8 42.8 ± 3.3 Table 1: BLEU-4 scores (%) with the 95% confidence intervals. Italic numbers refer to results for which the difference to the best result (indicated in bold) is not statistically significant. bor blocks. Experiments on standard ChineseEnglish translation tasks from two different domains showed that our method achieves a significant improvement over the distortion/flat reordering models. Traditional distortion/flat-based SMT translation systems are good for learning phrase translation pairs, but learn nothing for phrasal reorderings from real-world data. This is our original motivation for designing a new reordering model, which can learn reorderings from training data just like learning phrasal translations. 
Lexicalized reordering model learns reorderings from training data, but it binds reorderings to individual concrete phrases, which restricts the model to reorderings of phrases seen in training data. On the contrary, the MaxEnt-based reordering model is not limited by this constraint since it is based on features of phrase, not phrase itself. It can be easily generalized to reorder unseen phrases provided that some features are fired on these phrases. Another advantage of the MaxEnt-based reordering model is that it can take more features into reordering, even though they are nonindependent. Tillmann et. al (2005) also use a MaxEnt model to integrate various features. The difference is that they use the MaxEnt model to predict not only orders but also blocks. To do that, it is necessary for the MaxEnt model to incorporate real-valued features such as the block translation probability and the language model probability. Due to the expensive computation, a local model is built. However, our MaxEnt model is just a module of the whole log-linear model of translation which uses its score as a real-valued feature. The modularity afforded by this design does not incur any computation problems, and make it eas527 ier to update one sub-model with other modules unchanged. Beyond the MaxEnt-based reordering model, another feature deserving attention in our system is the CKY style decoder which observes the ITG. This is different from the work of Zens et. al. (2004). In their approach, translation is generated linearly, word by word and phrase by phrase in a traditional way with respect to the incorporation of the language model. It can be said that their decoder did not violate the ITG constraints but not that it observed the ITG. The ITG not only decreases reorderings greatly but also makes reordering hierarchical. Hierarchical reordering is more meaningful for languages which are organized hierarchically. From this point, our decoder is similar to the work by Chiang (2005). The future work is to investigate other valuable features, e.g. binary features that explain blocks from the syntactical view. We think that there is still room for improvement if more contributing features are used. Acknowledgements This work was supported in part by National High Technology Research and Development Program under grant #2005AA114140 and National Natural Science Foundation of China under grant #60573188. Special thanks to Yajuan L¨u for discussions of the manuscript of this paper and three anonymous reviewers who provided valuable comments. References Ashish Venugopal, Stephan Vogel. 2005. Considerations in Maximum Mutual Information and Minimum Classification Error training for Statistical Machine Translation. In the Proceedings of EAMT-05, Budapest, Hungary May 3031. Christoph Tillmann. 2004. A block orientation model for statistical machine translation. In HLT-NAACL, Boston, MA, USA. Christoph Tillmann and Tong Zhang. 2005. A Localized Prediction Model for statistical machine translation. In Proceedings of ACL 2005, pages 557–564. David Chiang. 2005. A hierarchical phrase-based model for statistical machine translation. In Proceedings of ACL 2005, pages 263–270. Dekai Wu. 1996. A Polynomial-Time Algorithm for Statistical Machine Translation. In Proceedings of ACL 1996. Dekai Wu. 1997. Stochastic inversion transduction grammars and bilingual parsing of parallel corpora. Computational Linguistics, 23:377–404. Franz Josef Och and Hermann Ney. 2000. Improved statistical alignment models. 
In Proceedings of ACL 2000, pages 440–447. Franz Josef Och. 2003a. Minimum error rate training in statistical machine translation. In Proceedings of ACL 2003, pages 160–167. Franz Josef Och. 2003b. Statistical Machine Translation: From Single-Word Models to Alignment Templates Thesis. Franz Josef Och and Hermann Ney. 2004. The alignment template approach to statistical machine translation. Computational Linguistics, 30:417–449. Franz Josef Och, Ignacio Thayer, Daniel Marcu, Kevin Knight, Dragos Stefan Munteanu, Quamrul Tipu, Michel Galley, and Mark Hopkins. 2004. Arabic and Chinese MT at USC/ISI. Presentation given at NIST Machine Translation Evaluation Workshop. Heidi J. Fox. 2002. Phrasal cohesion and statistical machine translation. In Proceedings of EMNLP 2002. J. R. Quinlan. 1993. C4.5: progarms for machine learning. Morgan Kaufmann Publishers. Kevin Knight. 1999. Decoding complexity in wordreplacement translation models. Computational Linguistics, Squibs & Discussion, 25(4). Liang Huang and David Chiang. 2005. Better k-best parsing. In Proceedings of the Ninth International Workshop on Parsing Technology, Vancouver, October, pages 53–64. Philipp Koehn, Franz Joseph Och, and Daniel Marcu. 2003. Statistical Phrase-Based Translation. In Proceedings of HLT/NAACL. Philipp Koehn. 2004. Pharaoh: a beam search decoder for phrase-based statistical machine translation models. In Proceedings of the Sixth Conference of the Association for Machine Translation in the Americas, pages 115–124. Philipp Koehn, Amittai Axelrod, Alexandra Birch Mayne, Chris Callison-Burch, Miles Osborne and David Talbot. 2005. Edinburgh System Description for the 2005 IWSLT Speech Translation Evaluation. In International Workshop on Spoken Language Translation. R. Zens, H. Ney, T. Watanabe, and E. Sumita. 2004. Reordering Constraints for Phrase-Based Statistical Machine Translation. In Proceedings of CoLing 2004, Geneva, Switzerland, pp. 205-211. Robert Malouf. 2002. A comparison of algorithms for maximum entropy parameter estimation. In Proceedings of the Sixth Conference on Natural Language Learning (CoNLL2002). Shankar Kumar and William Byrne. 2005. Local phrase reordering models for statistical machine translation. In Proceedings of HLT-EMNLP. Ying Zhang, Stephan Vogel, and Alex Waibel. 2004. Interpreting BLEU/NIST scores: How much improvement do we need to have a better system? In Proceedings of LREC 2004, pages 2051– 2054. 528
Proceedings of the 21st International Conference on Computational Linguistics and 44th Annual Meeting of the ACL, pages 529–536, Sydney, July 2006. c⃝2006 Association for Computational Linguistics Distortion Models For Statistical Machine Translation Yaser Al-Onaizan and Kishore Papineni IBM T.J. Watson Research Center 1101 Kitchawan Road Yorktown Heights, NY 10598, USA {onaizan, papineni}@us.ibm.com Abstract In this paper, we argue that n-gram language models are not sufficient to address word reordering required for Machine Translation. We propose a new distortion model that can be used with existing phrase-based SMT decoders to address those n-gram language model limitations. We present empirical results in Arabic to English Machine Translation that show statistically significant improvements when our proposed model is used. We also propose a novel metric to measure word order similarity (or difference) between any pair of languages based on word alignments. 1 Introduction A language model is a statistical model that gives a probability distribution over possible sequences of words. It computes the probability of producing a given word w1 given all the words that precede it in the sentence. An n-gram language model is an n-th order Markov model where the probability of generating a given word depends only on the last n −1 words immediately preceding it and is given by the following equation: P(wk 1) = P(w1)P(w2|w1) · · · P(wn|wn−1 1 ) (1) where k >= n. N-gram language models have been successfully used in Automatic Speech Recognition (ASR) as was first proposed by (Bahl et al., 1983). They play an important role in selecting among several candidate word realization of a given acoustic signal. N-gram language models have also been used in Statistical Machine Translation (SMT) as proposed by (Brown et al., 1990; Brown et al., 1993). The run-time search procedure used to find the most likely translation (or transcription in the case of Speech Recognition) is typically referred to as decoding. There is a fundamental difference between decoding for machine translation and decoding for speech recognition. When decoding a speech signal, words are generated in the same order in which their corresponding acoustic signal is consumed. However, that is not necessarily the case in MT due to the fact that different languages have different word order requirements. For example, in Spanish and Arabic adjectives are mainly noun post-modifiers, whereas in English adjectives are noun pre-modifiers. Therefore, when translating between Spanish and English, words must usually be reordered. Existing statistical machine translation decoders have mostly relied on language models to select the proper word order among many possible choices when translating between two languages. In this paper, we argue that a language model is not sufficient to adequately address this issue, especially when translating between languages that have very different word orders as suggested by our experimental results in Section 5. We propose a new distortion model that can be used as an additional component in SMT decoders. This new model leads to significant improvements in MT quality as measured by BLEU (Papineni et al., 2002). The experimental results we report in this paper are for Arabic-English machine translation of news stories. We also present a novel method for measuring word order similarity (or differences) between any given pair of languages based on word alignments as described in Section 3. 
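To make the Markov factorization of Equation (1) concrete, the following is a minimal maximum-likelihood trigram sketch; real language models add smoothing, which is omitted here.

```python
from collections import Counter

def train_trigram(corpus):
    """corpus: list of token lists.  Returns MLE trigram and bigram counts."""
    tri, bi = Counter(), Counter()
    for sent in corpus:
        padded = ["<s>", "<s>"] + sent + ["</s>"]
        for a, b, c in zip(padded, padded[1:], padded[2:]):
            tri[(a, b, c)] += 1
            bi[(a, b)] += 1
    return tri, bi

def trigram_prob(sent, tri, bi):
    """P(w_1..w_k) under a second-order Markov model (Equation (1) with n = 3)."""
    p = 1.0
    padded = ["<s>", "<s>"] + sent + ["</s>"]
    for a, b, c in zip(padded, padded[1:], padded[2:]):
        p *= tri[(a, b, c)] / bi[(a, b)] if bi[(a, b)] else 0.0
    return p
```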
The rest of this paper is organized as follows. Section 2 presents a review of related work. In Section 3 we propose a method for measuring the distortion between any given pair of languages. In Section 4, we present our proposed distortion model. In Section 5, we present some empirical results that show the utility of our distortion model for statistical machine translation systems. Then, we conclude this paper with a discussion in Section 6. 2 Related Work Different languages have different word order requirements. SMT decoders attempt to generate translations in the proper word order by attempting many possible 529 word reorderings during the translation process. Trying all possible word reordering is an NP-Complete problem as shown in (Knight, 1999), which makes searching for the optimal solution among all possible permutations computationally intractable. Therefore, SMT decoders typically limit the number of permutations considered for efficiency reasons by placing reordering restrictions. Reordering restrictions for word-based SMT decoders were introduced by (Berger et al., 1996) and (Wu, 1996). (Berger et al., 1996) allow only reordering of at most n words at any given time. (Wu, 1996) propose using contiguity restrictions on the reordering. For a comparison and a more detailed discussion of the two approaches see (Zens and Ney, 2003). A different approach to allow for a limited reordering is to reorder the input sentence such that the source and the target sentences have similar word order and then proceed to monotonically decode the reordered source sentence. Monotone decoding translates words in the same order they appear in the source language. Hence, the input and output sentences have the same word order. Monotone decoding is very efficient since the optimal decoding can be found in polynomial time. (Tillmann et al., 1997) proposed a DP-based monotone search algorithm for SMT. Their proposed solution to address the necessary word reordering is to rewrite the input sentence such that it has a similar word order to the desired target sentence. The paper suggests that reordering the input reduces the translation error rate. However, it does not provide a methodology on how to perform this reordering. (Xia and McCord, 2004) propose a method to automatically acquire rewrite patterns that can be applied to any given input sentence so that the rewritten source and target sentences have similar word order. These rewrite patterns are automatically extracted by parsing the source and target sides of the training parallel corpus. Their approach show a statistically-significant improvement over a phrase-based monotone decoder. Their experiments also suggest that allowing the decoder to consider some word order permutations in addition to the rewrite patterns already applied to the source sentence actually decreases the BLEU score. Rewriting the input sentence whether using syntactic rules or heuristics makes hard decisions that can not be undone by the decoder. Hence, reordering is better handled during the search algorithm and as part of the optimization function. Phrase-based monotone decoding does not directly address word order issues. Indirectly, however, the phrase dictionary1 in phrase-based decoders typically captures local reorderings that were seen in the training data. However, it fails to generalize to word reorderings that were never seen in the training data. 
For example, a phrase-based decoder might translate the Ara1Also referred to in the literature as the set of blocks or clumps. bic phrase AlwlAyAt AlmtHdp2 correctly into English as the United States if it was seen in its training data, was aligned correctly, and was added to the phrase dictionary. However, if the phrase Almmlkp AlmtHdp is not in the phrase dictionary, it will not be translated correctly by a monotone phrase decoder even if the individual units of the phrase Almmlkp and AlmtHdp, and their translations (Kingdom and United, respectively) are in the phrase dictionary since that would require swapping the order of the two words. (Och et al., 1999; Tillmann and Ney, 2003) relax the monotonicity restriction in their phrase-based decoder by allowing a restricted set of word reorderings. For their translation task, word reordering is done only for words belonging to the verb group. The context in which they report their results is a Speech-to-Speech translation from German to English. (Yamada and Knight, 2002) propose a syntax-based decoder that restrict word reordering based on reordering operations on syntactic parse-trees of the input sentence. They reported results that are better than word-based IBM4-like decoder. However, their decoder is outperformed by phrase-based decoders such as (Koehn, 2004), (Och et al., 1999), and (Tillmann and Ney, 2003) . Phrase-based SMT decoders mostly rely on the language model to select among possible word order choices. However, in our experiments we show that the language model is not reliable enough to make the choices that lead to a better MT quality. This observation is also reported by (Xia and McCord, 2004).We argue that the distortion model we propose leads to a better translation as measured by BLEU. Distortion models were first proposed by (Brown et al., 1993) in the so-called IBM Models. IBM Models 2 and 3 define the distortion parameters in terms of the word positions in the sentence pair, not the actual words at those positions. Distortion probability is also conditioned on the source and target sentence lengths. These models do not generalize well since their parameters are tied to absolute word position within sentences which tend to be different for the same words across sentences. IBM Models 4 and 5 alleviate this limitation by replacing absolute word positions with relative positions. The latter models define the distortion parameters for a cept (one or more words). This models phrasal movement better since words tend to move in blocks and not independently. The distortion is conditioned on classes of the aligned source and target words. The entire source and target vocabularies are reduced to a small number of classes (e.g., 50) for the purpose of estimating those parameters. Similarly, (Koehn et al., 2003) propose a relative distortion model to be used with a phrase decoder. The model is defined in terms of the difference between the position of the current phrase and the position of the previous phrase in the source sentence. It does not con2Arabic text appears throughout this paper in Tim Buckwalter’s Romanization. 530 Arabic Ezp1 AbrAhym2 ystqbl3 ms&wlA4 AqtSAdyA5 sEwdyA6 fy7 bgdAd8 English Izzet1 Ibrahim2 Meets3 Saudi4 Trade5 official6 in7 Baghdad8 Word Alignment (Ezp1,Izzet1) (AbrAhym2,Ibrahim2) (ystqbl3,Meets3) ( ms&wlA4,official6) (AqtSAdyA5,Trade5) (sEwdyA6,Saudi4) (fy7,in7) (bgdAd8,Baghdad8) Reordered English Izzet1 Ibrahim2 Meets3 official6 Trade5 Saudi4 in7 Baghdad8 Table 1: Alignment-based word reordering. 
The indices are not part of the sentence pair, they are only used to illustrate word positions in the sentence. The indices in the reordered English denote word position in the original English order. sider the words in those positions. The distortion model we propose assigns a probability distribution over possible relative jumps conditioned on source words. Conditioning on the source words allows for a much more fine-grained model. For instance, words that tend to act as modifers (e.g., adjectives) would have a different distribution than verbs or nouns. Our model’s parameters are directly estimated from word alignments as we will further explain in Section 4. We will also show how to generalize this word distortion model to a phrase-based model. (Och et al., 2004; Tillman, 2004) propose orientation-based distortion models lexicalized on the phrase level. There are two important distinctions between their models and ours. First, they lexicalize their model on the phrases, which have many more parameters and hence would require much more data to estimate reliably. Second, their models consider only the direction (i.e., orientation) and not the relative jump. We are not aware of any work on measuring word order differences between a given language pair in the context of statistical machine translation. 3 Measuring Word Order Similarity Between Two Language In this section, we propose a simple, novel method for measuring word order similarity (or differences) between any given language pair. This method is based on word-alignments and the BLEU metric. We assume that we have word-alignments for a set of sentence pairs. We first reorder words in the target sentence (e.g., English when translating from Arabic to English) according to the order in which they are aligned to the source words as shown in Table 1. If a target word is not aligned, then, we assume that it is aligned to the same source word that the preceding aligned target word is aligned to. Once the reordered target (here English) sentences are generated, we measure the distortion between the language pair by computing the BLEU3 score between the original target and reordered target, treating the original target as the reference. Table 2 shows these scores for Arabic-English and 3the BLEU scores reported throughout this paper are for case-sensitive BLEU. The number of references used is also reported (e.g., BLEUr1n4c: r1 means 1 reference, n4 means upto 4-gram are considred, c means case sensitive). Chinese-English. The word alignments we use are both annotated manually by human annotators. The ArabicEnglish test set is the NIST MT Evaluation 2003 test set. It contains 663 segments (i.e., sentences). The Arabic side consists of 16,652 tokens and the English consists of 19,908 tokens. The Chinese-English test set contains 260 segments. The Chinese side is word segmented and consists of 4,319 tokens and the English consists of 5,525 tokens. As suggested by the BLEU scores reported in Table 2, Arabic-English has more word order differences than Chinese-English. The difference in n-gPrec is bigger for smaller values of n, which suggests that ArabicEnglish has more local word order differences than in Chinese-English. 4 Proposed Distortion Model The distortion model we are proposing consists of three components: outbound, inbound, and pair distortion. Intuitively our distortion models attempt to capture the order in which source words need to be translated. 
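The reordering step of Section 3 can be sketched in a few lines of Python. This is our own reconstruction rather than code released with the paper: indices are 0-based, a target word with several alignment links keeps the last one listed, and target words aligned to the same source position keep their original relative order; the text does not spell out these details. BLEU between the original and the reordered sentences can then be computed with any standard implementation.

```python
def reorder_target_by_alignment(target_words, alignment):
    """Reorder target words into source order, as in Table 1.

    alignment: list of (source_pos, target_pos) pairs, 0-based.  Unaligned
    target words inherit the source position of the preceding aligned
    target word (position 0 if none precedes)."""
    tgt2src = dict((t, s) for s, t in alignment)
    src_pos, last = [], 0
    for t in range(len(target_words)):
        if t in tgt2src:
            last = tgt2src[t]
        src_pos.append(last)
    # Stable sort: ties keep their original target-side order.
    order = sorted(range(len(target_words)), key=lambda t: src_pos[t])
    return [target_words[t] for t in order]

# The example of Table 1, with indices made 0-based:
english = ["Izzet", "Ibrahim", "Meets", "Saudi", "Trade", "official", "in", "Baghdad"]
links = [(0, 0), (1, 1), (2, 2), (3, 5), (4, 4), (5, 3), (6, 6), (7, 7)]
print(reorder_target_by_alignment(english, links))
# ['Izzet', 'Ibrahim', 'Meets', 'official', 'Trade', 'Saudi', 'in', 'Baghdad']
```

The same alignment links supply the relative jumps from which the three distortion components are estimated.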
For instance, the outbound distortion component attempts to capture what is typically translated immediately after the word that has just been translated. Do we tend to translate words that precede it or succeed it? Which word position to translate next? Our distortion parameters are directly estimated from word alignments by simple counting over alignment links in the training data. Any aligner such as (Al-Onaizan et al., 1999) or (Vogel et al., 1996) can be used to obtain word alignments. For the results reported in this paper word alignments were obtained using a maximum-posterior word aligner4 described in (Ge, 2004). We will illustrate the components of our model with a partial word alignment. Let us assume that our source sentence5 is (f10, f250, f300)6, and our target sentence is (e410, e20), and their word alignment is a = ((f10, e410), (f300, e20)). Word Alignment a can 4We also estimated distortion parameters using a Maximum Entropy aligner and the differences were negligible. 5In practice, we add special symbols at the start and end of the source and target sentences, we also assume that the start symbols in the source and target are aligned, and similarly for the end symbols. Those special symbols are omitted in our example for ease of presentation. 6The indices here represent source and target vocabulary ids. 531 N-gram Precision Arabic-English Chinese-English 1-gPrec 1 1 2-gPrec 0.6192 0.7378 3-gPrec 0.4547 0.5382 4-gPrec 0.3535 0.3990 5-gPrec 0.2878 0.3075 6-gPrec 0.2378 0.2406 7-gPrec 0.1977 0.1930 8-gPrec 0.1653 0.1614 9-gPrec 0.1380 0.1416 BLEUr1n4c 0.3152 0.3340 95% Confidence σ 0.0180 0.0370 Table 2: Word order similarity for two language pairs: Arabic-English and Chinese-English. n-gPrec is the n-gram precision as defined in BLEU. be rewritten as a1 = 1 and a2 = 3 (i.e., the second target word is aligned to the third source word). From this partial alignment we increase the counts for the following outbound, inbound, and pair distortions: Po(δ = +2|f10), Pi(δ = +2|f300). and Pp(δ = +2|f10, f300). Formally, our distortion model components are defined as follows: Outbound Distortion: Po(δ|fi) = C(δ|fi) P k C(δk |fi) (2) where fi is a foreign word (i.e., Arabic in our case), δ is the step size, and C(δ|fi) is the observed count of this parameter over all word alignments in the training data. The value for δ, in theory, ranges from −max to +max (where max is the maximum source sentence length observed), but in practice only a small number of those step sizes are observed in the training data, and hence, have non-zero value). Inbound Distortion: Pi(δ|fj) = C(δ|fj) P k C(δk|fj) (3) Pairwise Distortion: Pp(δ|fi, fj) = C(δ|fi, fj) P k C(δk|fi, fj) (4) In order to use these probability distributions in our decoder, they are then turned into costs. The outbound distortion cost is defined as: Co(δ|fi) = log {αPo(δ|fi) + (1 −α)Ps(δ)} (5) where Ps(δ) is a smoothing distribution 7 and α is a linear-mixture parameter 8. 7The smoothing we use is a geometrically decreasing distribution as the step size increases. 8For the experiments reported here we use α = 0.1, which is set empirically. The inbound and pair costs (Ci(δ|fi) and Cp(δ|fi, fj)) can be defined in a similar fashion. So far, our distortion cost is defined in terms of words, not phrases. Therefore, we need to generalize the distortion cost in order to use it in a phrasebased decoder. This generalization is defined in terms of the internal word alignment within phrases (we used the Viterbi word alignment). 
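A short sketch of how these parameters can be estimated and turned into costs is given below. It is illustrative only: positions are 0-based, α = 0.1 follows the footnote, and the geometric smoothing distribution is one plausible realization of the form the footnote describes, since its exact parameterization is not given.

```python
import math
from collections import defaultdict

def count_distortions(corpus):
    """Collect the outbound, inbound and pair jump counts of Eqs. (2)-(4).

    corpus: iterable of (source_words, a), where a[j] is the source position
    (0-based) aligned to the j-th aligned target word, in target order."""
    out_c = defaultdict(lambda: defaultdict(int))
    in_c = defaultdict(lambda: defaultdict(int))
    pair_c = defaultdict(lambda: defaultdict(int))
    for src, a in corpus:
        for j in range(len(a) - 1):
            delta = a[j + 1] - a[j]                # relative jump in the source
            f_from, f_to = src[a[j]], src[a[j + 1]]
            out_c[f_from][delta] += 1              # numerator of Eq. (2)
            in_c[f_to][delta] += 1                 # numerator of Eq. (3)
            pair_c[(f_from, f_to)][delta] += 1     # numerator of Eq. (4)
    return out_c, in_c, pair_c

def outbound_cost(out_c, f, delta, alpha=0.1):
    """Eq. (5): mix the relative-frequency estimate with a smoothing
    distribution that decays geometrically with the jump size."""
    smoothed = 0.5 ** abs(delta)
    total = sum(out_c[f].values())
    p = out_c[f][delta] / total if total else 0.0
    return math.log(alpha * p + (1 - alpha) * smoothed)

# The running example: source (f10, f250, f300), alignment a1=1, a2=3
# (0-based [0, 2]) yields one +2 jump, counted for Po(+2|f10), Pi(+2|f300)
# and Pp(+2|f10, f300).
out_c, in_c, pair_c = count_distortions([(("f10", "f250", "f300"), [0, 2])])
print(round(outbound_cost(out_c, "f10", 2), 3))    # log(0.1*1 + 0.9*0.25), about -1.124
```

The inbound and pair costs follow the same pattern, and the phrase-level generalization chains exactly these word-level costs through the internal word alignment of each phrase.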
We illustrate this with an example: Suppose the last position translated in the source sentence so far is n and we are to cover a source phrase p=wlAyp wA$nTn that begins at position m in the source sentence. Also, suppose that our phrase dictionary provided the translation Washington State, with internal word alignment a = (a1 = 2, a2 = 1) (i.e., a=(<Washington,wA$nTn>,<State,wlAyp>), then the outbound phrase cost is defined as: Co(p, n, m, a) =Co(δ = (m −n)|fn)+ l−1 X i=1 Co(δ = (ai+1 −ai) |fai) (6) where l is the length of the target phrase, a is the internal word alignment, fn is source word at position n (in the sentence), and fai is the source word that is aligned to the i-th word in the target side of the phrase (not the sentence). The inbound and pair distortion costs (i..e, Ci(p, n, m, a) and Cp(p, n, m, a)) can be defined in a similar fashion. The above distortion costs are used in conjunction with other cost components used in our decoder. The ultimate word order choice made is influenced by both the language model cost as well as the distortion cost. 5 Experimental Results The phrase-based decoder we use is inspired by the decoder described in (Tillmann and Ney, 2003) and similar to that described in (Koehn, 2004). It is a multistack, multi-beam search decoder with n stacks (where n is the length of the source sentence being decoded) 532 s 0 1 1 1 1 1 2 2 2 2 w 0 4 6 8 10 12 4 6 8 10 BLEUr1n4c 0.5617 0.6507 0.6443 0.6430 0.6461 0.6456 0.6831 0.6706 0.6609 0.6596 2 3 3 3 3 3 4 4 4 4 4 12 4 6 8 10 12 4 6 8 10 12 0.6626 0.6919 0.6751 0.6580 0.6505 0.6490 0.6851 0.6592 0.6317 0.6237 0.6081 Table 3: BLEU scores for the word order restoration task. The BLEU scores reported here are with 1 reference. The input is the reordered English in the reference. The 95% Confidence σ ranges from 0.011 to 0.016 and a beam associated with each stack as described in (Al-Onaizan, 2005). The search is done in n time steps. In time step i, only hypotheses that cover exactly i source words are extended. The beam search algorithm attempts to find the translation (i.e., hypothesis that covers all source words) with the minimum cost as in (Tillmann and Ney, 2003) and (Koehn, 2004) . The distortion cost is added to the log-linear mixture of the hypothesis extension in a fashion similar to the language model cost. A hypothesis covers a subset of the source words. The final translation is a hypothesis that covers all source words and has the minimum cost among all possible 9 hypotheses that cover all source words. A hypothesis h is extended by matching the phrase dictionary against source word sequences in the input sentence that are not covered in h. The cost of the new hypothesis C(hnew) = C(h) + C(e), where C(e) is the cost of this extension. The main components of the cost of extension e can be defined by the following equation: C(e) = λ1CLM(e) + λ2CT M(e) + λ3CD(e) where CLM(e) is the language model cost, CT M(e) is the translation model cost, and CD(e) is the distortion cost. The extension cost depends on the hypothesis being extended, the phrase being used in the extension, and the source word positions being covered. The word reorderings that are explored by the search algorithm are controlled by two parameters s and w as described in (Tillmann and Ney, 2003). The first parameter s denotes the number of source words that are temporarily skipped (i.e., temporarily left uncovered) during the search to cover a source word to the right of the skipped words. 
The second parameter is the window width w, which is defined as the distance (in number of source words) between the left-most uncovered source word and the right-most covered source word. To illustrate these restrictions, let us assume the input sentence consists of the following sequence (f1, f2, f3, f4). For s=1 and w=2, the permissible permutations are (f1, f2, f3, f4), (f2, f1, f3, f4), 9Exploring all possible hypothesis with all possible word permutations is computationally intractable. Therefore, the search algorithm gives an approximation to the optimal solution. All possible hypotheses refers to all hypotheses that were explored by the decoder. (f2, f3, f1, f4), (f1, f3, f2, f4),(f1, f3, f4, f2), and (f1, f2, f4, f3). 5.1 Experimental Setup The experiments reported in this section are in the context of SMT from Arabic into English. The training data is a 500K sentence-pairs subsample of the 2005 Large Track Arabic-English Data for NIST MT Evaluation. The language model used is an interpolated trigram model described in (Bahl et al., 1983). The language model is trained on the LDC English GigaWord Corpus. The test set used in the experiments in this section is the 2003 NIST MT Evaluation test set (which is not part of the training data). 5.2 Reordering with Perfect Translations In the experiments in this section, we show the utility of a trigram language model in restoring the correct word order for English. The task is a simplified translation task, where the input is reordered English (English written in Arabic word order) and the output is English in the correct order. The source sentence is a reordered English sentence in the same manner we described in Section 3. The objective of the decoder is to recover the correct English order. We use the same phrase-based decoder we use for our SMT experiments, except that only the language model cost is used here. Also, the phrase dictionary used is a one-to-one function that maps every English word in our vocabulary to itself. The language model we use for the experiments reported here is the same as the one used for other experiments reported in this paper. The results in Table 3 illustrate how the language model performs reasonably well for local reorderings (e.g., for s = 3 and w = 4), but its perfromance deteriorates as we relax the reordering restrictions by increasing the reordering window size (w). Table 4 shows some examples of original English, English in Arabic order, and the decoder output for two different sets of reordering parameters. 5.3 SMT Experiments The phrases in the phrase dictionary we use in the experiments reported here are a combination 533 Eng Ar Opposition Iraqi Prepares for Meeting mid - January in Kurdistan Orig. Eng. Iraqi Opposition Prepares for mid - January Meeting in Kurdistan Output1 Iraqi Opposition Meeting Prepares for mid - January in Kurdistan Output2 Opposition Meeting Prepares for Iraqi Kurdistan in mid - January Eng Ar Head of Congress National Iraqi Visits Kurdistan Iraqi Orig. Eng. Head of Iraqi National Congress Visits Iraqi Kurdistan Output1 Head of Iraqi National Congress Visits Iraqi Kurdistan Output2 Head Visits Iraqi National Congress of Iraqi Kurdistan Eng Ar House White Confirms Presence of Tape New Bin Laden Orig. Eng. White House Confirms Presence of New Bin Laden Tape Output1 White House Confirms Presence of Bin Laden Tape New Output2 White House of Bin Laden Tape Confirms Presence New Table 4: Examples of reordering with perfect translations. 
The examples show English in Arabic order (Eng Ar.), English in its original order (Orig. Eng.) and decoding with two different parameter settings. Output1 is decoding with (s=3,w=4). Output2 is decoding with (s=4,w=12). The sentence lengths of the examples presented here are much shorter than the average in our test set (∼28.5). s w Distortion Used? BLEUr4n4c 0 0 NO 0.4468 1 8 NO 0.4346 1 8 YES 0.4715 2 8 NO 0.4309 2 8 YES 0.4775 3 8 NO 0.4283 3 8 YES 0.4792 4 8 NO 0.4104 4 8 YES 0.4782 Table 5: BLEU scores for the Arabic-English machine translation task. The 95% Confidence σ ranges from 0.0158 to 0.0176. s is the number of words temporarily skipped, and w is the word permutation window size. of phrases automatically extracted from maximumposterior alignments and maximum entropy alignments. Only phrases that conform to the so-called consistent alignment restrictions (Och et al., 1999) are extracted. Table 5 shows BLEU scores for our SMT decoder with different parameter settings for skip s, window width w, with and without our distortion model. The BLEU scores reported in this table are based on 4 reference translations. The language model, phrase dictionary, and other decoder tuning parameters remain the same in all experiments reported in this table. Table 5 clearly shows that as we open the search and consider wider range of word reorderings, the BLEU score decreases in the absence of our distortion model when we rely solely on the language model. Wrong reorderings look attractive to the decoder via the language model which suggests that we need a richer model with more parameter. In the absence of richer models such as the proposed distortion model, our results suggest that it is best to decode monotonically and only allow local reorderings that are captured in our phrase dictionary. However, when the distortion model is used, we see statistically significant increases in the BLEU score as we consider more word reorderings. The best BLEU score achieved when using the distortion model is 0.4792 , compared to a best BLEU score of 0.4468 when the distortion model is not used. Our results on the 2004 and 2005 NIST MT Evaluation test sets using the distortion model are 0.4497 and 0.464610, respectively. Table 6 shows some Arabic-English translation examples using our decoder with and without the distortion model. 6 Conclusion and Future Work We presented a new distortion model that can be integrated with existing phrase-based SMT decoders. The proposed model shows statistically significant improvement over a state-of-the-art phrase-based SMT decoder. We also showed that n-gram language mod10The MT05 BLEU score is the from the official NIST evaluation. The MT04 BLEU score is only our second run on MT04. 534 Input (Ar) kwryA Al$mAlyp mstEdp llsmAH lwA$nTn bAltHqq mn AnhA lA tSnE AslHp nwwyp Ref. (En) North Korea Prepared to allow Washington to check it is not Manufacturing Nuclear Weapons Out1 North Korea to Verify Washington That It Was Not Prepared to Make Nuclear Weapons Out2 North Korea Is Willing to Allow Washington to Verify It Does Not Make Nuclear Weapons Input (Ar) wAkd AldblwmAsy An ”AnsHAb (kwryA Al$mAlyp mn AlmEAhdp) ybd> AEtbArA mn Alywm”. Ref. (En) The diplomat confirmed that ”North Korea’s withdrawal from the treaty starts as of today.” Out1 The diplomat said that ” the withdrawal of the Treaty (start) North Korea as of today. ” Out2 The diplomat said that the ” withdrawal of (North Korea of the treaty) will start as of today ”. Input (Ar) snrfE *lk AmAm Almjls Aldstwry”. Ref. 
(En) We will bring this before the Constitutional Assembly.” Out1 The Constitutional Council to lift it. ” Out2 This lift before the Constitutional Council ”. Input (Ar) wAkd AlbrAdEy An mjls AlAmn ”ytfhm” An 27 kAnwn AlvAny/ynAyr lys mhlp nhA}yp. Ref. (En) Baradei stressed that the Security Council ”appreciates” that January 27 is not a final ultimatum. Out1 Elbaradei said that the Security Council ” understand ” that is not a final period January 27. Out2 Elbaradei said that the Security Council ” understand ” that 27 January is not a final period. Table 6: Selected examples of our Arabic-English SMT output. The English is one of the human reference translations. Output 1 is decoding without the distortion model and (s=4, w=8), which corresponds to 0.4104 BLEU score. Output 2 is decoding with the distortion model and (s=3, w=8), which corresponds to 0.4792 BLEU score. The sentences presented here are much shorter than the average in our test set. The average length of the arabic sentence in the MT03 test set is ∼24.7. els are not sufficient to model word movement in translation. Our proposed distortion model addresses this weakness of the n-gram language model. We also propose a novel metric to measure word order similarity (or differences) between any pair of languages based on word alignments. Our metric shows that Chinese-English have a closer word order than Arabic-English. Our proposed distortion model relies solely on word alignments and is conditioned on the source words. The majority of word movement in translation is mainly due to syntactic differences between the source and target language. For example, Arabic is verb-initial for the most part. So, when translating into English, one needs to move the verb after the subject, which is often a long compounded phrase. Therefore, we would like to incorporate syntactic or part-of-speech information in our distortion model. Acknowledgment This work was partially supported by DARPA GALE program under contract number HR0011-06-2-0001. It was also partially supported by DARPA TIDES program monitored by SPAWAR under contract number N66001-99-2-8916. References Yaser Al-Onaizan, Jan Curin, Michael Jahr, Kevin Knight, John Lafferty, Dan Melamed, FranzJosef Och, David Purdy, Noah Smith, and David Yarowsky. 1999. Statistical Machine Translation: Final Report, Johns Hopkins University Summer Workshop (WS 99) on Language Engineering, Center for Language and Speech Processing, Baltimore, MD. Yaser Al-Onaizan. 2005. IBM Arabic-to-English MT Submission. Presentation given at DARPA/TIDES NIST MT Evaluation workshop. Lalit R. Bahl, Frederick Jelinek, and Robert L. Mercer. 1983. A Maximum Likelihood Approach to Continuous Speech Recognition. IEEE Transactions on Pattern Analysis and Machine Intelligence, PAMI5(2):179–190. Adam L. Berger, Peter F. Brown, Stephen A. Della Pietra, Vincent J. Della Pietra, Andrew S. Kehler, and Robert L. Mercer. 1996. Language Translation Apparatus and Method of Using Context-Based Translation Models. United States Patent, Patent Number 5510981, April. Peter F Brown, John Cocke, Stephen A Della Pietra, Vincent J Della Pietra, Frederick Jelinek, John D Lafferty, Robert L Mercer, and Paul S Roossin. 1990. A Statistical Approach to Machine Translation. Computational Linguistics, 16(2):79–85. 535 Peter F. Brown, Vincent J. Della Pietra, Stephen A. Della Pietra, and Robert L. Mercer. 1993. The Mathematics of Statistical Machine Translation: Parameter Estimation. Computational Linguistics, 19(2):263–311. Niyu Ge. 2004. 
Improvements in Word Alignments. Presentation given at DARPA/TIDES NIST MT Evaluation workshop. Kevin Knight. 1999. Decoding Complexity in WordReplacement Translation Models. Computational Linguistics, 25(4):607–615. Philipp Koehn, Franz J. Och, and Daniel Marcu. 2003. Statistical phrase-based translation. In Marti Hearst and Mari Ostendorf, editors, HLT-NAACL 2003: Main Proceedings, pages 127–133, Edmonton, Alberta, Canada, May 27 – June 1. Association for Computational Linguistics. Philipp Koehn. 2004. Pharaoh: a Beam Search Decoder for Phrase-Based Statistical Machine Translation Models. In Proceedings of the 6th Conference of the Association for Machine Translation in the Americas, pages 115–124, Washington DC, September-October. The Association for Machine Translation in the Americas (AMTA). Franz Josef Och, Christoph Tillmann, and Hermann Ney. 1999. Improved Alignment Models for Statistical Machine Translation. In Joint Conf. of Empirical Methods in Natural Language Processing and Very Large Corpora, pages 20–28, College Park, Maryland. Franz Josef Och, Daniel Gildea, Sanjeev Khudanpur, Anoop Sarkar, Kenji Yamada, Alex Fraser, Shankar Kumar, Libin Shen, David Smith, Katherine Eng, Viren Jain, Zhen Jin, and Dragomir Radev. 2004. A Smorgasbord of Features for Statistical Machine Translation. In Daniel Marcu Susan Dumais and Salim Roukos, editors, HLT-NAACL 2004: Main Proceedings, pages 161–168, Boston, Massachusetts, USA, May 2 - May 7. Association for Computational Linguistics. Kishore Papineni, Salim Roukos, Todd Ward, and WeiJing Zhu. 2002. BLEU: a Method for Automatic Evaluation of machine translation. In 40th Annual Meeting of the Association for Computational Linguistics (ACL 02), pages 311–318, Philadelphia, PA, July. Christoph Tillman. 2004. A unigram orientation model for statistical machine translation. In Daniel Marcu Susan Dumais and Salim Roukos, editors, HLT-NAACL 2004: Short Papers, pages 101– 104, Boston, Massachusetts, USA, May 2 - May 7. Association for Computational Linguistics. Christoph Tillmann and Hermann Ney. 2003. Word Re-ordering and a DP Beam Search Algorithm for Statistical Machine Translation. Computational Linguistics, 29(1):97–133. Christoph Tillmann, Stephan Vogel, Hermann Ney, and Alex Zubiaga. 1997. A DP-Based Search Using Monotone Alignments in Statistical Translation. In Proceedings of the 35th Annual Meeting of the Association for Computational Linguistics and 8th Conference of the European Chapter of the Association for Computational Linguistics, pages 289–296, Madrid. Association for Computational Linguistics. Stefan Vogel, Hermann Ney, and Christoph Tillmann. 1996. HMM-BasedWord Alignment in Statistical Machine Translation. In Proc. of the 16th Int. Conf. on Computational Linguistics (COLING 1996), pages 836–841, Copenhagen, Denmark, August. Dekai Wu. 1996. A Polynomial-Time Algorithm for Statistical Machine Translation. In Proc. of the 34th Annual Conf. of the Association for Computational Linguistics (ACL 96), pages 152–158, Santa Cruz, CA, June. Fei Xia and Michael McCord. 2004. Improving a Statistical MT System with Automatically Learned Rewrite Patterns. In Proc. of the 20th International Conference on Computational Linguistics (COLING 2004), Geneva, Switzerland. Kenji Yamada and Kevin Knight. 2002. A Decoder for Syntax-based Statistical MT. In Proc. of the 40th Annual Conf. of the Association for Computational Linguistics (ACL 02), pages 303–310, Philadelphia, PA, July. Richard Zens and Hermann Ney. 2003. 
A Comparative Study on Reordering Constraints in Statistical Machine Translation. In Erhard Hinrichs and Dan Roth, editors, Proceedings of the 41st Annual Meeting of the Association for Computational Linguistics, pages 144–151, Sapporo, Japan.
Proceedings of the 21st International Conference on Computational Linguistics and 44th Annual Meeting of the ACL, pages 537–544, Sydney, July 2006. c⃝2006 Association for Computational Linguistics A Study on Automatically Extracted Keywords in Text Categorization Anette Hulth and Be´ata B. Megyesi Department of Linguistics and Philology Uppsala University, Sweden [email protected] [email protected] Abstract This paper presents a study on if and how automatically extracted keywords can be used to improve text categorization. In summary we show that a higher performance — as measured by micro-averaged F-measure on a standard text categorization collection — is achieved when the full-text representation is combined with the automatically extracted keywords. The combination is obtained by giving higher weights to words in the full-texts that are also extracted as keywords. We also present results for experiments in which the keywords are the only input to the categorizer, either represented as unigrams or intact. Of these two experiments, the unigrams have the best performance, although neither performs as well as headlines only. 1 Introduction Automatic text categorization is the task of assigning any of a set of predefined categories to a document. The prevailing approach is that of supervised machine learning, in which an algorithm is trained on documents with known categories. Before any learning can take place, the documents must be represented in a form that is understandable to the learning algorithm. A trained prediction model is subsequently applied to previously unseen documents, to assign the categories. In order to perform a text categorization task, there are two major decisions to make: how to represent the text, and what learning algorithm to use to create the prediction model. The decision about the representation is in turn divided into two subquestions: what features to select as input and which type of value to assign to these features. In most studies, the best performing representation consists of the full length text, keeping the tokens in the document separate, that is as unigrams. In recent years, however, a number of experiments have been performed in which richer representations have been evaluated. For example, Caropreso et al. (2001) compare unigrams and bigrams; Moschitti et al. (2004) add complex nominals to their bag-of-words representation, while Kotcz et al. (2001), and Mihalcea and Hassan (2005) present experiments where automatically extracted sentences constitute the input to the representation. Of these three examples, only the sentence extraction seems to have had any positive impact on the performance of the automatic text categorization. In this paper, we present experiments in which keywords, that have been automatically extracted, are used as input to the learning, both on their own and in combination with a full-text representation. That the keywords are extracted means that the selected terms are present verbatim in the document. A keyword may consist of one or several tokens. In addition, a keyword may well be a whole expression or phrase, such as snakes and ladders. The main goal of the study presented in this paper is to investigate if automatically extracted keywords can improve automatic text categorization. We investigate what impact keywords have on the task by predicting text categories on the basis of keywords only, and by combining full-text representations with automatically extracted keywords. 
We also experiment with different ways of representing keywords, either as unigrams or intact. In addition, we investigate the effect of using the headlines — represented as unigrams — as input, 537 to compare their performance to that of the keywords. The outline of the paper is as follows: in Section 2, we present the algorithm used to automatically extract the keywords. In Section 3, we present the corpus, the learning algorithm, and the experimental setup for the performed text categorization experiments. In Section 4, the results are described. An overview of related studies is given in Section 5, and Section 6 concludes the paper. 2 Selecting the Keywords This section describes the method that was used to extract the keywords for the text categorization experiments discussed in this paper. One reason why this method, developed by Hulth (2003; 2004), was chosen is because it is tuned for short texts (more specifically for scientific journal abstracts). It was thus suitable for the corpus used in the described text categorization experiments. The approach taken to the automatic keyword extraction is that of supervised machine learning, and the prediction models were trained on manually annotated data. No new training was done on the text categorization documents, but models trained on other data were used. As a first step to extract keywords from a document, candidate terms are selected from the document in three different manners. One term selection approach is statistically oriented. This approach extracts all uni-, bi-, and trigrams from a document. The two other approaches are of a more linguistic character, utilizing the words’ parts-of-speech (PoS), that is, the word class assigned to a word. One approach extracts all noun phrase (NP) chunks, and the other all terms matching any of a set of empirically defined PoS patterns (frequently occurring patterns of manual keywords). All candidate terms are stemmed. Four features are calculated for each candidate term: term frequency; inverse document frequency; relative position of the first occurrence; and the PoS tag or tags assigned to the candidate term. To make the final selection of keywords, the three predictions models are combined. Terms that are subsumed by another keyword selected for the document are removed. For each selected stem, the most frequently occurring unstemmed form in the document is presented as a keyword. Each document is assigned at the most twelve keywords, provided that the added regression value Assign. Corr. mean mean P R F 8.6 3.6 41.5 46.9 44.0 Table 1: The number of assigned (Assign.) keywords in mean per document; the number of correct (Corr.) keywords in mean per document; precision (P); recall (R); and F-measure (F), when 3– 12 keywords are extracted per document. (given by the prediction models) is higher than an empirically defined threshold value. To avoid that a document gets no keywords, at least three keywords are assigned although the added regression value is below the threshold (provided that there are at least three candidate terms). In Hulth (2004) an evaluation on 500 abstracts in English is presented. For the evaluation, keywords assigned to the test documents by professional indexers are used as a gold standard, that is, the manual keywords are treated as the one and only truth. 
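Before turning to the evaluation figures, the statistically oriented branch of the candidate selection, together with three of the four features, can be sketched as follows. The sketch is ours and deliberately simplified: stemming, the PoS feature, the NP-chunk and PoS-pattern selectors, and the combination of the three prediction models are all omitted, and the document frequencies would in practice be collected from a reference collection.

```python
import math
from collections import Counter

def ngram_candidates(tokens, max_n=3):
    """Statistically oriented candidate selection: all uni-, bi- and trigrams."""
    return [tuple(tokens[i:i + n])
            for n in range(1, max_n + 1)
            for i in range(len(tokens) - n + 1)]

def candidate_features(tokens, doc_freq, n_docs):
    """Three of the four features per candidate term: term frequency, inverse
    document frequency, and relative position of the first occurrence."""
    tf = Counter(ngram_candidates(tokens))
    feats = {}
    for cand in tf:
        first = next(i for i in range(len(tokens))
                     if tuple(tokens[i:i + len(cand)]) == cand)
        feats[cand] = {
            "tf": tf[cand],
            "idf": math.log(n_docs / (1 + doc_freq.get(cand, 0))),
            "first_pos": first / len(tokens),      # relative position in [0, 1)
        }
    return feats

doc = "automatic keyword extraction helps automatic text categorization".split()
feats = candidate_features(doc, doc_freq={}, n_docs=1000)
print(feats[("keyword", "extraction")])
# {'tf': 1, 'idf': 6.907..., 'first_pos': 0.142...}
```

Evaluation against the manually assigned keywords then proceeds as follows.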
The evaluation measures are precision (how many of the automatically assigned keywords that are also manually assigned keywords) and recall (how many of the manually assigned keywords that are found by the automatic indexer). The third measure used for the evaluations is the F-measure (the harmonic mean of precision and recall). Table 1 shows the result on that particular test set. This result may be considered to be state-of-the-art. 3 Text Categorization Experiments This section describes in detail the four experimental settings for the text categorization experiments. 3.1 Corpus For the text categorization experiments we used the Reuters-21578 corpus, which contains 20 000 newswire articles in English with multiple categories (Lewis, 1997). More specifically, we used the ModApte split, containing 9 603 documents for training and 3 299 documents in the fixed test set, and the 90 categories that are present in both training and test sets. As a first pre-processing step, we extracted the texts contained in the TITLE and BODY tags. The pre-processed documents were then given as input to the keyword extraction algorithm. In Table 2, the number of keywords assigned to the doc538 uments in the training set and the test set are displayed. As can be seen in this table, three is the number of keywords that is most often extracted. In the training data set, 9 549 documents are assigned keywords, while 54 are empty, as they have no text in the TITLE or BODY tags. Of the 3 299 documents in the test set, 3 285 are assigned keywords, and the remaining fourteen are those that are empty. The empty documents are included in the result calculations for the fixed test set, in order to enable comparisons with other experiments. The mean number of keyword extracted per document in the training set is 6.4 and in the test set 6.1 (not counting the empty documents). Keywords Training docs Test docs 0 54 14 1 68 36 2 829 272 3 2 016 838 4 868 328 5 813 259 6 770 252 7 640 184 8 527 184 9 486 177 10 688 206 11 975 310 12 869 239 Table 2: Number of automatically extracted keywords per document in training set and test set respectively. 3.2 Learning Method The focus of the experiments described in this paper was the text representation. For this reason, we used only one learning algorithm, namely an implementation of Linear Support Vector Machines (Joachims, 1999). This is the learning method that has obtained the best results in text categorization experiments (Dumais et al., 1998; Yang and Liu, 1999). 3.3 Representations This section describes in detail the input representations that we experimented with. An important step for the feature selection is the dimensionality reduction, that is reducing the number of features. This can be done by removing words that are rare (that occur in too few documents or have too low term frequency), or very common (by applying a stop-word list). Also, terms may be stemmed, meaning that they are merged into a common form. In addition, any of a number of feature selection metrics may be applied to further reduce the space, for example chi-square, or information gain (see for example Forman (2003) for a survey). Once that the features have been set, the final decision to make is what feature value to assign. There are to this end three common possibilities: a boolean representation (that is, the term exists in the document or not), term frequency, or tf*idf. Two sets of experiments were run in which the automatically extracted keywords were the only input to the representation. 
In the first set, keywords that contained several tokens were kept intact. For example a keyword such as paradise fruit was represented as paradise fruit and was — from the point of view of the classifier — just as distinct from the single token fruit as from meatpackers. No stemming was performed in this set of experiments. In the second set of keywords-only experiments, the keywords were split up into unigrams, and also stemmed. For this purpose, we used Porter’s stemmer (Porter, 1980). Thereafter the experiments were performed identically for the two keyword representations. In a third set of experiments, we extracted only the content in the TITLE tags, that is, the headlines. The tokens in the headlines were stemmed and represented as unigrams. The main motivation for the title experiments was to compare their performance to that of the keywords. For all of these three feature inputs, we first evaluated which one of the three possible feature values to use (boolean, tf, or tf*idf). Thereafter, we reduced the space by varying the minimum number of occurrences in the training data, for a feature to be kept. The starting point for the fourth set of experiments was a full-text representation, where all stemmed unigrams occurring three or more times in the training data were selected, with the feature value tf*idf. Assuming that extracted keywords convey information about a document’s gist, the feature values in the full-text representation were given higher weights if the feature was identical to a keyword token. This was achieved by adding the term frequency of a full-text unigram to the term 539 frequency of an identical keyword unigram. Note that this does not mean that the term frequency value was necessarily doubled, as a keyword often contains more than one token, and it was the term frequency of the whole keyword that was added. 3.4 Training and Validation This section describes the parameter tuning, for which we used the training data set. This set was divided into five equally sized folds, to decide which setting of the following two parameters that resulted in the best performing classifier: what feature value to use, and the threshold for the minimum number of occurrence in the training data (in this particular order). To obtain a baseline, we made a full-text unigram run with boolean as well as with tf*idf feature values, setting the occurrence threshold to three. As stated previously, in this study, we were concerned only with the representation, and more specifically with the feature input. As we did not tune any other parameters than the two mentioned above, the results can be expected to be lower than the state-of-the art, even for the full-text run with unigrams. The number of input features for the full-text unigram representation for the whole training set was 10 676, after stemming and removing all tokens that contained only digits, as well as those tokens that occurred less than three times. The total number of keywords assigned to the 9 603 documents in the training data was 61 034. Of these were 29 393 unique. When splitting up the keywords into unigrams, the number of unique stemmed tokens was 11 273. 3.5 Test As a last step, we tested the best performing representations in the four different experimental settings on the independent test set. The number of input features for the full-text unigram representation was 10 676. 
The total number of features for the intact keyword representation was 4 450 with the occurrence threshold set to three, while the number of stemmed keyword unigrams was 6 478, with an occurrence threshold of two. The total number of keywords extracted from the 3 299 documents in the test set was 19 904. Next, we present the results for the validation and test procedures. 4 Results To evaluate the performance, we used precision, recall, and micro-averaged F-measure, and we let the F-measure be decisive. The results for the 5fold cross validation runs are shown in Table 3, where the values given are the average of the five runs made for each experiment. As can be seen in this table, the full-text run with a boolean feature value gave 92.3% precision, 69.4% recall, and 79.2% F-measure. The full-text run with tf*idf gave a better result as it yielded 92.9% precision, 71.3% recall, and 80.7% F-measure. Therefore we defined the latter as baseline. In the first type of the experiment where each keyword was treated as a feature independently of the number of tokens contained, the recall rates were considerably lower (between 32.0% and 42.3%) and the precision rates were somewhat lower (between 85.8% and 90.5%) compared to the baseline. The best performance was obtained when using a boolean feature value, and setting the minimum number of occurrence in training data to three (giving an F-measure of 56.9%). In the second type of experiments, where the keywords were split up into unigrams and stemmed, recall was higher but still low (between 60.2% and 64.8%) and precision was somewhat lower (88.9–90.2%) when compared to the baseline. The best results were achieved with a boolean representation (similar to the first experiment) and the minimum number of occurrence in the training data set to two (giving an F-measure of 75.0%) In the third type of experiments, where only the text in the TITLE tags was used and was represented as unigrams and stemmed, precision rates increased above the baseline to 93.3–94.5%. Here, the best representation was tf*idf with a token occurring at least four times in the training data (with an F-measure of 79.9%). In the fourth and last set of experiments, we gave higher weights to full-text tokens if the same token was present in an automatically extracted keyword. Here we obtained the best results. In these experiments, the term frequency of a keyword unigram was added to the term frequency for the full-text features, whenever the stems were identical. For this representation, we experimented with setting the minimum number of occurrence in training data both before and after that the term frequency for the keyword token was added to the term frequency of the unigram. The 540 Input feature Feature value Min. 
occurrence Precision Recall F-measure full-text unigram bool 3 92.31 69.40 79.22 full-text unigram tf*idf 3 92.89 71.30 80.67 keywords-only intact bool 1 90.54 36.64 52.16 keywords-only intact tf 1 88.68 33.74 48.86 keywords-only intact tf*idf 1 89.41 32.05 47.18 keywords-only intact bool 2 89.27 40.43 55.64 keywords-only intact bool 3 87.11 42.28 56.90 keywords-only intact bool 4 85.81 41.97 56.35 keywords-only unigram bool 1 89.12 64.61 74.91 keywords-only unigram tf 1 89.89 60.23 72.13 keywords-only unigram tf*idf 1 90.17 60.36 72.31 keywords-only unigram bool 2 89.02 64.83 75.02 keywords-only unigram bool 3 88.90 64.82 74.97 title bool 1 94.17 68.17 79.08 title tf 1 94.37 67.89 78.96 title tf*idf 1 94.46 68.49 79.40 title tf*idf 2 93.92 69.19 79.67 title tf*idf 3 93.75 69.65 79.91 title tf*idf 4 93.60 69.74 79.92 title tf*idf 5 93.31 69.40 79.59 keywords+full tf*idf 3 (before adding) 92.73 72.02 81.07 keywords+full tf*idf 3 (after adding) 92.75 71.94 81.02 Table 3: The average results from 5-fold cross validations for the baseline candidates and the four types of experiments, with various parameter settings. highest recall (72.0%) and F-measure (81.1%) for all validation runs were achieved when the occurrence threshold was set before the addition of the keywords. Next, the results on the fixed test data set for the four experimental settings with the best performance on the validation runs are presented. Table 4 shows the results obtained on the fixed test data set for the baseline and for those experiments that obtained the highest F-measure for each one of the four experiment types. We can see that the baseline — where the fulltext is represented as unigrams with tf*idf as feature value — yields 93.0% precision, 71.7% recall, and 81.0% F-measure. When the intact keywords are used as feature input with a boolean feature value and at least three occurrences in training data, the performance decreases greatly both considering the correctness of predicted categories and the number of categories that are found. When the keywords are represented as unigrams, a better performance is achieved than when they are kept intact. This is in line with the findings on n-grams by Caropreso et al. (2001). However, the results are still not satisfactory since both the precision and recall rates are lower than the baseline. Titles, on the other hand, represented as unigrams and stemmed, are shown to be a useful information source when it comes to correctly predicting the text categories. Here, we achieve the highest precision rate of 94.2% although the recall rate and the F-measure are lower than the baseline. Full-texts combined with keywords result in the highest recall value, 72.9%, as well as the highest F-measure, 81.7%, both above the baseline. Our results clearly show that automatically extracted keywords can be a valuable supplement to full-text representations and that the combination of them yields the best performance, measured as both recall and micro-averaged F-measure. Our experiments also show that it is possible to do a satisfactory categorization having only keywords, given that we treat them as unigrams. Lastly, for higher precision in text classification, we can use the stemmed tokens in the headlines as features 541 Input feature Feature value Min. 
occurrence Precision Recall F-measure full-text unigram tf*idf 3 93.03 71.69 80.98 keywords-only intact bool 3 89.56 41.48 56.70 keywords-only unigram bool 2 90.23 64.16 74.99 title tf*idf 4 94.23 68.43 79.28 keywords+full tf*idf 3 92.89 72.94 81.72 Table 4: Results on the fixed test set. with tf*idf values. As discussed in Section 2 and also presented in Table 2, the number of keywords assigned per document varies from zero to twelve. In Figure 1, we have plotted how the precision, the recall, and the F-measure for the test set vary with the number of assigned keywords for the keywords-only unigram representation. 100 90 80 70 60 50 40 30 12(239) 11(310) 10(206) 9(177) 8(184) 7(184) 6(252) 5(259) 4(328) 3(838) 2(272) 1(36) Per cent Number of assigned keywords (number of documents) Precision F-measure Recall Figure 1: Precision, recall, and F-measure for each number of assigned keywords. The values in brackets denote the number of documents. We can see that the F-measure and the recall reach their highest points when three keywords are extracted. The highest precision (100%) is obtained when the classification is performed on a single extracted keyword, but then there are only 36 documents present in this group, and the recall is low. Further experiments are needed in order to establish the optimal number of keywords to extract. 5 Related Work For the work presented in this paper, there are two aspects that are of interest in previous work. These are in how the alternative input features (that is, alternative from unigrams) are selected and in how this alternative representation is used in combination with a bag-of-words representation (if it is). An early work on linguistic phrases is done by F¨urnkranz et al. (1998), where all noun phrases matching any of a number of syntactic heuristics are used as features. This approach leads to a higher precision at the low recall end, when evaluated on a corpus of Web pages. Aizawa (2001) extracts PoS-tagged compounds, matching predefined PoS patterns. The representation contains both the compounds and their constituents, and a small improvement is shown in the results on Reuters-21578. Moschitti and Basili (2004) add complex nominals as input features to their bagof-words representation. The phrases are extracted by a system for terminology extraction1. The more complex representation leads to a small decrease on the Reuters corpus. In these studies, it is unclear how many phrases that are extracted and added to the representations. Li et al. (2003) map documents (e-mail messages) that are to be classified into a vector space of keywords with associated probabilities. The mapping is based on a training phase requiring both texts and their corresponding summaries. Another approach to combine different representations is taken by Sahlgren and C¨oster (2004), where the full-text representation is combined with a concept-based representation by selecting one or the other for each category. They show that concept-based representations can outperform traditional word-based representations, and that a combination of the two different types of representations improves the performance of the classifier over all categories. Keywords assigned to a particular text can be seen as a dense summary of the same. Some reports on how automatic summarization can be used to improve text categorization exist. For ex1In terminology extraction all terms describing a domain are to be extracted. 
The aim of automatic keyword indexing, on the other hand, is to find a small set of terms that describes a specific document, independently of the domain it belongs to. Thus, the set of terms must be limited to contain only the most salient ones. 542 ample, Ko et al. (2004) use methods from text summarization to find the sentences containing the important words. The words in these sentences are then given a higher weight in the feature vectors, by modifying the term frequency value with the sentence’s score. The F-measure increases from 85.8 to 86.3 on the Newsgroups data set using Support vector machines. Mihalcea and Hassan (2004) use an unsupervised method2 to extract summaries, which in turn are used to categorize the documents. In their experiments on a sub-set of Reuters-21578 (among others), Mihalcea and Hassan show that the precision is increased when using the summaries rather than the full length documents. ¨Ozg¨ur et al. (2005) have shown that limiting the representation to 2 000 features leads to a better performance, as evaluated on Reuters-21578. There is thus evidence that using only a sub-set of a document can give a more accurate classification. The question, though, is which sub-set to use. In summary, the work presented in this paper has the most resemblance with the work by Ko et al. (2004), who also use a more dense version of a document to alter the feature values of a bag-ofwords representation of a full-length document. 6 Concluding Remarks In the experiments described in this paper, we investigated if automatically extracted keywords can improve automatic text categorization. More specifically, we investigated what impact keywords have on the task of text categorization by making predictions on the basis of keywords only, represented either as unigrams or intact, and by combining the full-text representation with automatically extracted keywords. The combination was obtained by giving higher weights to words in the full-texts that were also extracted as keywords. Throughout the study, we were concerned with the data representation and feature selection procedure. We investigated what feature value should be used (boolean, tf, or tf*idf) and the minimum number of occurrence of the tokens in the training data. We showed that keywords can improve the performance of the text categorization. When keywords were used as a complement to the full-text representation an F-measure of 81.7% was ob2This method has also been used to extract keywords (Mihalcea and Tarau, 2004). tained, higher than without the keywords (81.0%). Our results also clearly indicate that keywords alone can be used for the text categorization task when treated as unigrams, obtaining an F-measure of 75.0%. Lastly, for higher precision (94.2%) in text classification, we can use the stemmed tokens in the headlines. The results presented in this study are lower than the state-of-the-art, even for the full-text run with unigrams, as we did not tune any other parameters than the feature values (boolean, term frequency, or tf*idf) and the threshold for the minimum number of occurrence in the training data. There are, of course, possibilities for further improvements. One possibility could be to combine the tokens in the headlines and keywords in the same way as the full-text representation was combined with the keywords. Another possible improvement concerns the automatic keyword extraction process. 
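Before turning to that improvement, the combination scheme used in the fourth set of experiments can be made concrete. The sketch below is our own illustration, not code from the paper, and it takes one simple reading of the scheme: the count added to a full-text unigram is the number of times the identical (stemmed) token occurs among the extracted keywords; stemming itself and the occurrence thresholding are left out.

```python
import math
from collections import Counter

def boosted_tfidf(doc_tokens, keywords, doc_freq, n_docs):
    """Full-text unigram term frequencies, boosted by the frequency of
    identical keyword unigrams, then weighted by idf.  Tokens are assumed
    to be stemmed already; doc_freq maps a token to the number of training
    documents it occurs in."""
    tf = Counter(doc_tokens)
    kw_tf = Counter(tok for kw in keywords for tok in kw.split())
    vector = {}
    for term, freq in tf.items():
        boosted = freq + kw_tf.get(term, 0)              # the keyword boost
        idf = math.log(n_docs / max(doc_freq.get(term, 0), 1))
        vector[term] = boosted * idf
    return vector

# Hypothetical document, keywords and document frequencies:
doc = "oil price rise hits asian stock market".split()
kws = ["oil price", "stock market"]
print(boosted_tfidf(doc, kws, {"oil": 120, "price": 300}, n_docs=9603))
```

A natural refinement of this sketch is to scale the added counts by the estimated quality of each keyword, which is the direction taken up next.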
The keywords are presented in order of their estimated “keywordness”, based on the added regression value given by the three prediction models. This means that one alternative experiment would be to give different weights depending on which rank the keyword has achieved from the keyword extraction system. Another alternative would be to use the actual regression value. We would like to emphasize that the automatically extracted keywords used in our experiments are not statistical phrases, such as bigrams or trigrams, but meaningful phrases selected by including linguistic analysis in the extraction procedure. One insight that we can get from these experiments is that the automatically extracted keywords, which themselves have an F-measure of 44.0, can yield an F-measure of 75.0 in the categorization task. One reason for this is that the keywords have been evaluated using manually assigned keywords as the gold standard, meaning that paraphrasing and synonyms are severely punished. Kotcz et al. (2001) propose to use text categorization as a way to more objectively judge automatic text summarization techniques, by comparing how well an automatic summary fares on the task compared to other automatic summaries (that is, as an extrinsic evaluation method). The same would be valuable for automatic keyword indexing. Also, such an approach would facilitate comparisons between different systems, as common test-beds are lacking. 543 In this study, we showed that automatic text categorization can benefit from automatically extracted keywords, although the bag-of-words representation is competitive with the best performance. Automatic keyword extraction as well as automatic text categorization are research areas where further improvements are needed in order to be useful for more efficient information retrieval. Acknowledgments The authors are grateful to the anonymous reviewers for their valuable suggestions on how to improve the paper. References Akiko Aizawa. 2001. Linguistic techniques to improve the performance of automatic text categorization. In Proceedings of NLPRS-01, 6th Natural Language Processing Pacific Rim Symposium, pages 307–314. Maria Fernanda Caropreso, Stan Matwin, and Fabrizio Sebastiani. 2001. A learner-independent evaluation of the usefulness of statistical phrases for automated text categorization. In Text Databases and Document Management: Theory and Practice, pages 78– 102. Susan Dumais, John Platt, David Heckerman, and Mehran Sahami. 1998. Inductive learning algorithms and representations for text categorization. In Proceedings of the Seventh International Conference on Information and Knowledge Management (CIKM’98), pages 148–155. George Forman. 2003. An extensive empirical study of feature selection metrics for text classification. Journal of Machine Learning Research, 3:1289– 1305, March. Johannes F¨urnkranz, Tom Mitchell, and Ellen Riloff. 1998. A case study using linguistic phrases for text categorization on the WWW. In AAAI-98 Workshop on Learning for Text Categorization. Anette Hulth. 2003. Improved automatic keyword extraction given more linguistic knowledge. In Proceedings of the Conference on Empirical Methods in Natural Language Processing (EMNLP 2003), pages 216–223. Anette Hulth. 2004. Combining Machine Learning and Natural Language Processing for Automatic Keyword Extraction. Ph.D. thesis, Department of Computer and Systems Sciences, Stockholm University. Thorsten Joachims. 1999. Making large-scale SVM learning practical. In B. Sch¨olkopf, C. Burges, and A. 
Smola, editors, Advances in Kernel Methods: Support Vector Learning. MIT-Press. Youngjoong Ko, Jinwoo Park, and Jungyun Seo. 2004. Improving text categorization using the importance of sentences. Information Processing and Management, 40(1):65–79. Aleksander Kolcz, Vidya Prabakarmurthi, and Jugal Kalita. 2001. Summarization as feature selection for text categorization. In Proceedings of the Tenth International Conference on Information and Knowledge Management (CIKM’01), pages 365– 370. David D. Lewis. 1997. Reuters-21578 text categorization test collection, Distribution 1.0. AT&T Labs Research. Cong Li, Ji-Rong Wen, and Hang Li. 2003. Text classification using stochastic keyword generation. In Proceedings of the 20th International Conference on Machine Learning (ICML-2003). Rada Mihalcea and Samer Hassan. 2005. Using the essence of texts to improve document classification. In Proceedings of the Conference on Recent Advances in Natural Language Processing (RANLP 2005). Rada Mihalcea and Paul Tarau. 2004. TextRank: bringing order into texts. In Proceedings of the Conference on Empirical Methods in Natural Language Processing (EMNLP 2004). Alessandro Moschitti and Roberto Basili. 2004. Complex linguistic features for text classification: A comprehensive study. In Sharon McDonald and John Tait, editors, Proceedings of ECIR-04, 26th European Conference on Information Retrieval Research, pages 181–196. Springer-Verlag. Arzucan ¨Ozg¨ur, Levent ¨Ozg¨ur, and Tunga G¨ung¨or. 2005. Text categorization with class-based and corpus-based keyword selection. In Proceedings of the 20th International Symposium on Computer and Information Sciences, volume 3733 of Lecture Notes in Computer Science, pages 607–616. Springer-Verlag. Martin Porter. 1980. An algorithm for suffix stripping. Program, 14(3):130–137. Magnus Sahlgren and Rickard C¨oster. 2004. Using bag-of-concepts to improve the performance of support vector machines in text categorization. In Proceedings of the 20th International Conference on Computational Linguistics (COLING 2004), pages 487–493. Yiming Yang and Xin Liu. 1999. A re-examination of text categorization methods. In Proceedings of the 22nd Annual International ACM SIGIR Conference on Research and Development in Information Retrieval, pages 42–49. 544
Proceedings of the 21st International Conference on Computational Linguistics and 44th Annual Meeting of the ACL, pages 545–552, Sydney, July 2006. c⃝2006 Association for Computational Linguistics A Comparison and Semi-Quantitative Analysis of Words and Character-Bigrams as Features in Chinese Text Categorization Jingyang Li Maosong Sun Xian Zhang National Lab. of Intelligent Technology & Systems, Department of Computer Sci. & Tech. Tsinghua University, Beijing 100084, China [email protected] [email protected] [email protected] Abstract Words and character-bigrams are both used as features in Chinese text processing tasks, but no systematic comparison or analysis of their values as features for Chinese text categorization has been reported heretofore. We carry out here a full performance comparison between them by experiments on various document collections (including a manually word-segmented corpus as a golden standard), and a semi-quantitative analysis to elucidate the characteristics of their behavior; and try to provide some preliminary clue for feature term choice (in most cases, character-bigrams are better than words) and dimensionality setting in text categorization systems. 1 Introduction1 Because of the popularity of the Vector Space Model (VSM) in text information processing, document indexing (term extraction) acts as a pre-requisite step in most text information processing tasks such as Information Retrieval (Baeza-Yates and Ribeiro-Neto, 1999) and Text Categorization (Sebastiani, 2002). It is empirically known that the indexing scheme is a nontrivial complication to system performance, especially for some Asian languages in which there are no explicit word margins and even no natural semantic unit. Concretely, in Chinese Text Categorization tasks, the two most important index 1 This research is supported by the National Natural Science Foundation of China under grant number 60573187 and 60321002, and the Tsinghua-ALVIS Project co-sponsored by the National Natural Science Foundation of China under grant number 60520130299 and EU FP6. ing units (feature terms) are word and characterbigram, so the problem is: which kind of terms2 should be chosen as the feature terms, words or character-bigrams? To obtain an all-sided idea about feature choice beforehand, we review here the possible feature variants (or, options). First, at the word level, we can do stemming, do stop-word pruning, include POS (Part of Speech) information, etc. Second, term combinations (such as “wordbigram”, “word + word-bigram”, “characterbigram + character-trigram”3, etc.) can also be used as features (Nie et al., 2000). But, for Chinese Text Categorization, the “word or bigram” question is fundamental. They have quite different characteristics (e.g. bigrams overlap each other in text, but words do not) and influence the classification performance in different ways. In Information Retrieval, it is reported that bigram indexing schemes outperforms word schemes to some or little extent (Luk and Kwok, 1997; Leong and Zhou 1998; Nie et al., 2000). Few similar comparative studies have been reported for Text Categorization (Li et al., 2003) so far in literature. Text categorization and Information Retrieval are tasks that sometimes share identical aspects (Sebastiani, 2002) apart from term extraction (document indexing), such as tfidf term weighting and performance evaluation. Nevertheless, they are different tasks. 
One of the generally accepted connections between Information Retrieval and Text Categorization is that an information retrieval task could be partially taken as a binary classification problem with the query as the only positive training document. From this 2 The terminology “term” stands for both word and character-bigram. Term or combination of terms (in word-bigram or other forms) might be chosen as “feature”. 3 The terminology “character” stands for Chinese character, and “bigram” stands for character-bigram in this paper. 545 viewpoint, an IR task and a general TC task have a large difference in granularity. To better illustrate this difference, an example is present here. The words “制片人(film producer)” and “译制 片(dubbed film)” should be taken as different terms in an IR task because a document with one would not necessarily be a good match for a query with the other, so the bigram “制片(film production)” is semantically not a shared part of these two words, i.e. not an appropriate feature term. But in a Text Categorization task, both words might have a similar meaning at the category level (“film” category, generally), which enables us to regard the bigram “制片” as a semantically acceptable representative word snippet for them, or for the category. There are also differences in some other aspects of IR and TC. So it is significant to make a detailed comparison and analysis here on the relative value of words and bigrams as features in Text Categorization. The organization of this paper is as follows: Section 2 shows some experiments on different document collections to observe the common trends in the performance curves of the word-scheme and bigram-scheme; Section 3 qualitatively analyses these trends; Section 4 makes some statistical analysis to corroborate the issues addressed in Section 3; Section 5 summarizes the results and concludes. 2 Performance Comparison Three document collections in Chinese language are used in this study. The electronic version of Chinese Encyclopedia (“CE”): It has 55 subject categories and 71674 single-labeled documents (entries). It is randomly split by a proportion of 9:1 into a training set with 64533 documents and a test set with 7141 documents. Every document has the fulltext. This data collection does not have much of a sparseness problem. The training data from a national Chinese text categorization evaluation4 (“CTC”): It has 36 subject categories and 3600 single-labeled5 documents. It is randomly split by a proportion of 4:1 into a training set with 2800 documents and a test set with 720 documents. Documents in this data collection are from various sources including news websites, and some documents 4 The Annual Evaluation of Chinese Text Categorization 2004, by 863 National Natural Science Foundation. 5 In the original document collection, a document might have a secondary category label. In this study, only the primary category label is reserved. may be very short. This data collection has a moderate sparseness problem. A manually word-segmented corpus from the State Language Affairs Commission (“LC”): It has more than 100 categories and more than 20000 single-labeled documents6. In this study, we choose a subset of 12 categories with the most documents (totally 2022 documents). It is randomly split by a proportion of 2:1 into a training set and a test set. Every document has the full-text and has been entirely wordsegmented7 by hand (which could be regarded as a golden standard of segmentation). 
All experiments in this study are carried out at various feature space dimensionalities to show the scalability. Classifiers used in this study are Rocchio and SVM. All experiments here are multi-class tasks and each document is assigned a single category label. The outline of this section is as follows: Subsection 2.1 shows experiments based on the Rocchio classifier, feature selection schemes besides Chi and term weighting schemes besides tfidf to compare the automatic segmented word features with bigram features on CE and CTC, and both document collections lead to similar behaviors; Subsection 2.2 shows experiments on CE by a SVM classifier, in which, unlike with the Rocchio method, Chi feature selection scheme and tfidf term weighting scheme outperform other schemes; Subsection 2.3 shows experiments by a SVM classifier with Chi feature selection and tfidf term weighting on LC (manual word segmentation) to compare the best word features with bigram features. 2.1 The Rocchio Method and Various Settings The Rocchio method is rooted in the IR tradition, and is very different from machine learning ones (such as SVM) (Joachims, 1997; Sebastiani, 2002). Therefore, we choose it here as one of the representative classifiers to be examined. In the experiment, the control parameter of negative examples is set to 0, so this Rocchio based classifier is in fact a centroid-based classifier. Chimax is a state-of-the-art feature selection criterion for dimensionality reduction (Yang and Peterson, 1997; Rogati and Yang, 2002). Chimax*CIG (Xue and Sun, 2003a) is reported to be better in Chinese text categorization by a cen 6 Not completed. 7 And POS (part-of-speech) tagged as well. But POS tags are not used in this study. 546 troid based classifier, so we choose it as another representative feature selection criterion besides Chimax. Likewise, as for term weighting schemes, in addition to tfidf, the state of the art (Baeza-Yates and Ribeiro-Neto, 1999), we also choose tfidf*CIG (Xue and Sun, 2003b). Two word segmentation schemes are used for the word-indexing of documents. One is the maximum match algorithm (“mmword” in the figures), which is a representative of simple and fast word segmentation algorithms. The other is ICTCLAS8 (“lqword” in the figures). ICTCLAS is one of the best word segmentation systems (SIGHAN 2003) and reaches a segmentation precision of more than 97%, so we choose it as a representative of state-of-the-art schemes for automatic word-indexing of document). For evaluation of single-label classifications, F1-measure, precision, recall and accuracy (Baeza-Yates and Ribeiro-Neto, 1999; Sebastiani, 2002) have the same value by microaveraging9, and are labeled with “performance” in the following figures. 1 2 3 4 5 6 7 8 x 10 4 0.5 0.6 0.7 0.8 performance mmword chi-tfidf chicig-tfidfcig 1 2 3 4 5 6 7 8 x 10 4 0.5 0.6 0.7 0.8 lqword performance chi-tfidf chicig-tfidfcig 1 2 3 4 5 6 7 8 x 10 4 0.5 0.6 0.7 0.8 bigram performance dimensionality chi-tfidf chicig-tfidfcig Figure 1. chi-tfidf and chicig-tfidfcig on CE Figure 1 shows the performancedimensionality curves of the chi-tfidf approach and the approach with CIG, by mmword, lqword and bigram document indexing, on the CE document collection. 
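For concreteness, the Chimax ranking that determines which terms enter the feature space at a given dimensionality can be sketched as follows. This is a minimal illustration rather than the authors' implementation: documents are assumed to be given as lists of terms (words or bigrams), the standard 2x2 term/category contingency table is used, and all function and variable names are our own.

```python
from collections import defaultdict

def chi_max_ranking(docs, labels):
    """Rank terms by the Chimax criterion: for every term, compute the
    chi-square statistic of its 2x2 term/category contingency table and
    keep the maximum over categories (a sketch; names are ours)."""
    n = len(docs)
    df = defaultdict(int)            # term -> document frequency
    df_cat = defaultdict(int)        # (term, category) -> document frequency
    cat_size = defaultdict(int)      # category -> number of documents
    for terms, cat in zip(docs, labels):
        cat_size[cat] += 1
        for t in set(terms):
            df[t] += 1
            df_cat[(t, cat)] += 1
    score = {}
    for t in df:
        best = 0.0
        for c, nc in cat_size.items():
            a = df_cat.get((t, c), 0)    # t present, document in c
            b = df[t] - a                # t present, document not in c
            c_ = nc - a                  # t absent, document in c
            d = n - nc - b               # t absent, document not in c
            denom = (a + b) * (c_ + d) * (a + c_) * (b + d)
            if denom:
                best = max(best, n * (a * d - b * c_) ** 2 / denom)
        score[t] = best
    return sorted(score, key=score.get, reverse=True)

# The feature space at dimensionality k is simply the top-k ranked terms, e.g.
#   vocabulary = chi_max_ranking(train_docs, train_labels)[:40000]
```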
We can see that the original chi-tfidf approach is better at low dimensionalities (less than 10000 dimensions), while the CIG version is better at high dimensionalities and reaches a higher limit.10 8 http://www.nlp.org.cn/project/project.php?proj_id=6 9 Microaveraging is more prefered in most cases than macroaveraging (Sebastiani 2002). 10 In all figures in this paper, curves might be truncated due to the large scale of dimensionality, especially the curves of 1 2 3 4 5 6 7 8 x 10 4 0.5 0.6 0.7 0.8 performance mmword chi-tfidf chicig-tfidfcig 1 2 3 4 5 6 7 8 x 10 4 0.5 0.6 0.7 0.8 lqword performance chi-tfidf chicig-tfidfcig 1 2 3 4 5 6 7 8 x 10 4 0.5 0.6 0.7 0.8 bigram performance dimensionality chi-tfidf chicig-tfidfcig Figure 2. chi-tfidf and chicig-tfidfcig on CTC Figure 2 shows the same group of curves for the CTC document collection. The curves fluctuate more than the curves for the CE collection because of sparseness; The CE collection is more sensitive to the additions of terms that come with the increase of dimensionality. The CE curves in the following figures show similar fluctuations for the same reason. For a parallel comparison among mmword, lqword and bigram schemes, the curves in Figure 1 and Figure 2 are regrouped and shown in Figure 3 and Figure 4. 2 4 6 8 x 10 4 0.5 0.55 0.6 0.65 0.7 0.75 0.8 0.85 performance dimensionality chi-tfidf mmword lqword bigram 2 4 6 8 x 10 4 0.5 0.55 0.6 0.65 0.7 0.75 0.8 0.85 dimensionality chicig-tfidfcig mmword lqword bigram Figure 3. mmword, lqword and bigram on CE 1 2 3 4 5 x 10 4 0.5 0.55 0.6 0.65 0.7 0.75 0.8 0.85 performance dimensionality chi-tfidf mmword lqword bigram 1 2 3 4 5 x 10 4 0.5 0.55 0.6 0.65 0.7 0.75 0.8 0.85 dimensionality chicig-tfidfcig mmword lqword bigram Figure 4. mmword, lqword and bigram on CTC bigram scheme. For these kinds of figures, at least one of the following is satisfied: (a) every curve has shown its zenith; (b) only one curve is not complete and has shown a higher zenith than other curves; (c) a margin line is shown to indicate the limit of the incomplete curve. 547 We can see that the lqword scheme outperforms the mmword scheme at almost any dimensionality, which means the more precise the word segmentation the better the classification performance. At the same time, the bigram scheme outperforms both of the word schemes on a high dimensionality, wherea the word schemes might outperform the bigram scheme on a low dimensionality. Till now, the experiments on CE and CTC show the same characteristics despite the performance fluctuation on CTC caused by sparseness. Hence in the next subsections CE is used instead of both of them because its curves are smoother. 2.2 SVM on Words and Bigrams As stated in the previous subsection, the lqword scheme always outperforms the mmword scheme; we compare here only the lqword scheme with the bigram scheme. Support Vector Machine (SVM) is one of the best classifiers at present (Vapnik, 1995; Joachims, 1998), so we choose it as the main classifier in this study. The SVM implementation used here is LIBSVM (Chang, 2001); the type of SVM is set to “C-SVC” and the kernel type is set to linear, which means a one-with-one scheme is used in the multi-class classification. Because the CIG’s effectiveness on a SVM classifier is not examined in Xue and Sun (2003a, 2003b)’s report, we make here the four combinations of schemes with and without CIG in feature selection and term weighting. The experiment results are shown in Figure 5. The collection used is CE. 
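As a rough functional equivalent of the chi-tfidf plus linear SVM configuration just described, the following sketch uses scikit-learn purely as a stand-in for LIBSVM (scikit-learn's SVC wraps LIBSVM and likewise decomposes multi-class problems one-against-one). The bigram analyzer, the choice of k = 60000, and the application of the chi selection to already tfidf-weighted features are illustrative assumptions of ours, not details taken from the paper.

```python
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.feature_selection import SelectKBest, chi2
from sklearn.pipeline import Pipeline
from sklearn.svm import SVC

def char_bigrams(text):
    """Overlapping character-bigram analyzer (the 'bigram' indexing scheme)."""
    return [text[i:i + 2] for i in range(len(text) - 1)]

# chi feature selection + tfidf weighting + linear SVM; SVC wraps LIBSVM and
# handles multi-class problems with a one-against-one scheme.
clf = Pipeline([
    ("tfidf", TfidfVectorizer(analyzer=char_bigrams)),
    ("chi", SelectKBest(chi2, k=60000)),   # the feature-space dimensionality
    ("svm", SVC(kernel="linear")),
])
# clf.fit(train_texts, train_labels)
# predicted = clf.predict(test_texts)
```

For the word schemes, the analyzer would be replaced by the output of a word segmenter such as the mmword or lqword systems described above.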
1 2 3 4 5 6 7 x 10 4 0.6 0.65 0.7 0.75 0.8 0.85 0.9 performance dimensionality lqword chi-tfidf chi-tfidfcig chicig-tfidf chicig-tfidfcig 1 2 3 4 5 6 7 x 10 4 0.6 0.65 0.7 0.75 0.8 0.85 0.9 dimensionality bigram chi-tfidf chi-tfidfcig chicig-tfidf chicig-tfidfcig Figure 5. chi-tfidf and cig-involved approaches on lqword and bigram Here we find that the chi-tfidf combination outperforms any approach with CIG, which is the opposite of the results with the Rocchio method. And the results with SVM are all better than the results with the Rocchio method. So we find that the feature selection scheme and the term weighting scheme are related to the classifier, which is worth noting. In other words, no feature selection scheme or term weighting scheme is absolutely the best for all classifiers. Therefore, a reasonable choice is to select the best performing combination of feature selection scheme, term weighting scheme and classifier, i.e. chi-tfidf and SVM. The curves for the lqword scheme and the bigram scheme are redrawn in Figure 6 to make them clearer. 1 2 3 4 5 6 7 x 10 4 0.75 0.8 0.85 0.9 performance dimensionality lqword bigram Figure 6. lqword and bigram on CE The curves shown in Figure 6 are similar to those in Figure 3. The differences are: (a) a larger dimensionality is needed for the bigram scheme to start outperforming the lqword scheme; (b) the two schemes have a smaller performance gap. The lqword scheme reaches its top performance at a dimensionality of around 40000, and the bigram scheme reaches its top performance at a dimensionality of around 60000 to 70000, after which both schemes’ performances slowly decrease. The reason is that the low ranked terms in feature selection are in fact noise and do not help to classification, which is why the feature selection phase is necessary. 2.3 Comparing Manually Segmented Words and Bigrams 0 1 2 3 4 5 6 7 8 9 10 x 10 4 72 74 76 78 80 82 84 86 88 dimansionality performance word bigram bigram limit Figure 7. word and bigram on LC 548 Up to now, bigram features seem to be better than word ones for fairly large dimensionalities. But it appears that word segmentation precision impacts classification performance. So we choose here a fully manually segmented document collection to detect the best performance a word scheme could reach and compare it with the bigram scheme. Figure 7 shows such an experiment result on the LC document collection (the circles indicate the maximums and the dash-dot lines indicate the superior limit and the asymptotic interior limit of the bigram scheme). The word scheme reaches a top performance around the dimensionality of 20000, which is a little higher than the bigram scheme’s zenith around 70000. Besides this experiment on 12 categories of the LC document collection, some experiments on fewer (2 to 6) categories of this subset were also done, and showed similar behaviors. The word scheme shows a better performance than the bigram scheme and needs a much lower dimensionality. The simpler the classification task is, the more distinct this behavior is. 3 Qualitative Analysis To analyze the performance of words and bigrams as feature terms in Chinese text categorization, we need to investigate two aspects as follows. 3.1 An Individual Feature Perspective The word is a natural semantic unit in Chinese language and expresses a complete meaning in text. The bigram is not a natural semantic unit and might not express a complete meaning in text, but there are also reasons for the bigram to be a good feature term. 
First, two-character words and three-character words account for most of all multi-character Chinese words (Liu and Liang, 1986). A twocharacter word can be substituted by the same bigram. At the granularity of most categorization tasks, a three-character words can often be substituted by one of its sub-bigrams (namely the “intraword bigram” in the next section) without a change of meaning. For instance, “标赛” is a sub-bigram of the word “锦标赛(tournament)” and could represent it without ambiguity. Second, a bigram may overlap on two successive words (namely the “interword bigram” in the next section), and thus to some extent fills the role of a word-bigram. The word-bigram as a more definite (although more sparse) feature surely helps the classification. For instance, “气 预” is a bigram overlapping on the two successive words “ 天气(weather)” and “ 预报 (forecast)”, and could almost replace the wordbigram (also a phrase) “天气预报(weather forecast)”, which is more likely to be a representative feature of the category “气象学(meteorology)” than either word. Third, due to the first issue, bigram features have some capability of identifying OOV (outof-vocabulary) words 11, and help improve the recall of classification. The above issues state the advantages of bigrams compared with words. But in the first and second issue, the equivalence between bigram and word or word-bigram is not perfect. For instance, the word “文学(literature)” is a also subbigram of the word “天文学(astronomy)”, but their meanings are completely different. So the loss and distortion of semantic information is a disadvantage of bigram features over word features. Furthermore, one-character words cover about 7% of words and more than 30% of word occurrences in the Chinese language; they are effevtive in the word scheme and are not involved in the above issues. Note that the impact of effective one-character words on the classification is not as large as their total frequency, because the high frequency ones are often too common to have a good classification power, for instance, the word “的 (of, ‘s)”. 3.2 A Mass Feature Perspective Features are not independently acting in text classification. They are assembled together to constitute a feature space. Except for a few models such as Latent Semantic Indexing (LSI) (Deerwester et al., 1990), most models assume the feature space to be orthogonal. This assumption might not affect the effectiveness of the models, but the semantic redundancy and complementation among the feature terms do impact on the classification efficiency at a given dimensionality. According to the first issue addressed in the previous subsection, a bigram might cover for more than one word. For instance, the bigram “织物” is a sub-bigram of the words “织物 (fabric)”, “ 棉织物(cotton fabric)”, “ 针织物 (knitted fabric)”, and also a good substitute of 11 The “OOV words” in this paper stand for the words that occur in the test documents but not in the training document. 549 them. So, to a certain extent, word features are redundant with regard to the bigram features associated to them. Similarly, according to the second issue addressed, a bigram might cover for more than one word-bigram. For instance, the bigram “篇小” is a sub-bigram of the wordbigrams (phrases) “短篇小说(short story)”, “中 篇小说(novelette)”, “长篇小说(novel)” and also a good substitute for them. So, as an addition to the second issue stated in the previous subsection, a bigram feature might even cover for more than one word-bigram. 
On the other hand, bigram features are also redundant with regard to the word features associated with them. For instance, "锦标" and "标赛" are both sub-bigrams of the previously mentioned word "锦标赛". In some cases, more than one sub-bigram can be a good representative of a word. We make a word list and a bigram list, each sorted by the feature selection criterion in descending order, and try to find how the relative redundancy of the two lists varies with the dimensionality. The following observations emerge from an inspection of the two lists (not shown here due to space limitations). The relative redundancy rate in the word list stays roughly even as the dimensionality varies, because words that share a common sub-bigram need not have similar statistics and are thus scattered through the word feature list. Note that these words are likely to be ranked lower in the list than the shared sub-bigram, because feature selection criteria (such as Chi) tend to prefer higher frequency terms to lower frequency ones, and every word containing the bigram necessarily has a lower frequency than the bigram itself. The relative redundancy in the bigram list is not as even as in the word list. Good (representative) sub-bigrams of a word are quite likely to be ranked close to the word itself. For instance, "作曲" and "曲家" are sub-bigrams of the word "作曲家(music composer)", and both bigrams, like the word, appear at the top of their list. Therefore, the bigram list has a relatively large redundancy rate at low dimensionalities. The redundancy rate should decrease as the dimensionality increases, because: (a) the relative redundancy in the word list counteracts the redundancy in the bigram list, since the words that contain a given bigram are gradually included as the dimensionality increases; and (b) the proportion of interword bigrams increases in the bigram list, and there is generally no redundancy between interword bigrams and intraword bigrams. Last, there are more bigram features than word features, because bigrams can overlap each other in the text but words cannot. Thus the bigrams as a whole should theoretically contain more information than the words as a whole. From the above analysis and observations, bigram features are expected to outperform word features at high dimensionalities, and word features are expected to outperform bigram features at low dimensionalities.

4 Semi-Quantitative Analysis

In this section, a preliminary statistical analysis is presented to corroborate the statements of the qualitative analysis above; it is expected to be consistent with the experimental results shown in Section 2. All statistics in this section are based on the CE document collection and the lqword segmentation scheme (because the CE document collection is large enough to provide good statistical characteristics).

4.1 Intraword Bigrams and Interword Bigrams

In the previous section, only the intraword bigrams were discussed together with the words. But every bigram may have both intraword occurrences and interword occurrences, so we need to distinguish these two kinds of bigrams at a statistical level. For every bigram, the number of intraword occurrences and the number of interword occurrences are counted, and we use log((intraword# + 1) / (interword# + 1)) as a metric of its natural propensity to be an intraword bigram. The probability density of bigrams over this metric is shown in Figure 8.
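A short sketch may make this bookkeeping concrete. It assumes a word-segmented corpus in which each document is a list of words, and computes the add-one smoothed log ratio above for every character-bigram; the function and variable names are ours, not the authors'.

```python
import math
from collections import Counter

def intraword_propensity(segmented_docs):
    """For every character-bigram in a word-segmented corpus, return
    log((intraword# + 1) / (interword# + 1)); each document is a list
    of words (a sketch with our own names, not the authors' code)."""
    intra, inter = Counter(), Counter()
    for words in segmented_docs:
        for w in words:                          # bigrams inside a word
            for i in range(len(w) - 1):
                intra[w[i:i + 2]] += 1
        for w1, w2 in zip(words, words[1:]):     # bigrams across a word break
            if w1 and w2:
                inter[w1[-1] + w2[0]] += 1
    return {b: math.log((intra[b] + 1) / (inter[b] + 1))
            for b in set(intra) | set(inter)}

# Bigrams scoring above the division point chosen below (-1.4) would be
# treated as natural intraword bigrams, the rest as natural interword bigrams.
```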
-12 -10 -8 -6 -4 -2 0 2 4 6 8 10 0 0.05 0.1 0.15 0.2 0.25 log(intraword#/interword#) probability density Figure 8. Bigram Probability Density on log(intraword#/interword#) 550 The figure shows a mixture of two Gaussian distributions, the left one for “natural interword bigrams” and the right one for “natural intraword bigrams”. We can moderately distinguish these two kinds of bigrams by a division at -1.4. 4.2 Overall Information Quantity of a Feature Space The performance limit of a classification is related to the quantity of information used. So a quantitative metric of the information a feature space can provide is need. Feature Quantity (Aizawa, 2000) is suitable for this purpose because it comes from information theory and is additive; tfidf was also reported as an appropriate metric of feature quantity (defined as “probability ⋅information”). Because of the probability involved as a factor, the overall information provided by a feature space can be calculated on training data by summation. The redundancy and complementation mentioned in Subsection 3.2 must be taken into account in the calculation of overall information quantity. For bigrams, the redundancy with regard to words associated with them between two intraword bigrams is given by { } 1,2 1 2 ( ) min ( ), ( ) b w tf w idf b idf b ⊂ ⋅ ∑ in which b1 and b2 stand for the two bigrams and w stands for any word containing both of them. The overall information quantity is obtained by subtracting the redundancy between each pair of bigrams from the sum of all features’ feature quantity (tfidf). Redundancy among more than two bigrams is ignored. For words, there is only complementation among words but not redundancy, the complementation with regard to bigrams associated with them is given by { } if exists; if does not exists. ( ) min ( ) , ( ) ( ), b w b b tf w idf b tf w idf w ⊂ ⋅ ⎧⎪⎨ ⋅ ⎪⎩ in which b is an intraword bigram contained by w. The overall information is calculated by summing the complementations of all words. 4.3 Statistics and Discussion Figure 9 shows the variation of these overall information metrics on the CE document collection. It corroborates the characteristics analyzed in Section 3 and corresponds with the performance curves in Section 2. Figure 10 shows the proportion of interword bigrams at different dimensionalities, which also corresponds with the analysis in Section 3. 0 2 4 6 8 10 12 14 16 x 10 4 0 2 4 6 8 10 12 14 16 x 10 7 dimensionality overall information quantity word bigram Figure 9. Overall Information Quantity on CE The curves do not cross at exactly the same dimensionality as in the figures in Section 1, because other complications impact on the classification performance: (a) OOV word identifying capability, as stated in Subsection 3.1; (b) word segmentation precision; (c) granularity of the categories (words have more definite semantic meaning than bigrams and lead to a better performance for small category granularities); (d) noise terms, introduced in the feature space during the increase of dimensionality. With these factors, the actual curves would not keep increasing as they do in Figure 9. 0 2 4 6 8 10 12 14 16 x 10 4 0 0.1 0.2 0.3 0.4 0.5 0.6 0.7 0.8 0.9 1 dimensionality interword bigram proportion Figure 10. Interword Bigram Proportion on CE 5 Conclusion In this paper, we aimed to thoroughly compare the value of words and bigrams as feature terms in text categorization, and make the implicit mechanism explicit. 
Experimental comparison showed that the Chi feature selection scheme and the tfidf term weighting scheme are still the best choices for (Chinese) text categorization on a SVM classifier. In most cases, the bigram scheme outperforms the word scheme at high dimensionalities and usually reaches its top performance at a dimen551 sionality of around 70000. The word scheme often outperforms the bigram scheme at low dimensionalities and reaches its top performance at a dimensionality of less than 40000. Whether the best performance of the word scheme is higher than the best performance scheme depends considerably on the word segmentation precision and the number of categories. The word scheme performs better with a higher word segmentation precision and fewer (<10) categories. A word scheme costs more document indexing time than a bigram scheme does; however a bigram scheme costs more training time and classification time than a word scheme does at the same performance level due to its higher dimensionality. Considering that the document indexing is needed in both the training phase and the classification phase, a high precision word scheme is more time consuming as a whole than a bigram scheme. As a concluding suggestion: a word scheme is more fit for small-scale tasks (with no more than 10 categories and no strict classification speed requirements) and needs a high precision word segmentation system; a bigram scheme is more fit for large-scale tasks (with dozens of categories or even more) without too strict training speed requirements (because a high dimensionality and a large number of categories lead to a long training time). Reference Akiko Aizawa. 2000. The Feature Quantity: An Information Theoretic Perspective of Tfidf-like Measures, Proceedings of ACM SIGIR 2000, 104111. Ricardo Baeza-Yates, Berthier Ribeiro-Neto. 1999. Modern Information Retrieval, Addison-Wesley Chih-Chung Chang, Chih-Jen Lin. 2001. LIBSVM: A Library for Support Vector Machines, Software available at http://www.csie.ntu.edu.tw/~cjlin/ libsvm Steve Deerwester, Sue T. Dumais, George W. Furnas, Richard Harshman. 1990. Indexing by Latent Semantic Analysis, Journal of the American Society for Information Science, 41:391-407. Thorsten Joachims. 1997. A Probabilistic Analysis of the Rocchio Algorithm with TFIDF for Text Categorization, Proceedings of 14th International Conference on Machine Learning (Nashville, TN, 1997), 143-151. Thorsten Joachims. 1998. Text Categorization with Support Vector Machine: Learning with Many Relevant Features, Proceedings of the 10th European Conference on Machine Learning, 137-142. Mun-Kew Leong, Hong Zhou. 1998. Preliminary Qualitative Analysis of Segmented vs. Bigram Indexing in Chinese, The 6th Text Retrieval Conference (TREC-6), NIST Special Publication 500-240, 551-557. Baoli Li, Yuzhong Chen, Xiaojing Bai, Shiwen Yu. 2003. Experimental Study on Representing Units in Chinese Text Categorization, Proceedings of the 4th International Conference on Computational Linguistics and Intelligent Text Processing (CICLing 2003), 602-614. Yuan Liu, Nanyuan Liang. 1986. Basic Engineering for Chinese Processing – Contemporary Chinese Words Frequency Count, Journal of Chinese Information Processing, 1(1):17-25. Robert W.P. Luk, K.L. Kwok. 1997. Comparing representations in Chinese information retrieval. Proceedings of ACM SIGIR 1997, 34-41. Jianyun Nie, Fuji Ren. 1999. Chinese Information Retrieval: Using Characters or Words? Information Processing and Management, 35:443-462. 
Jianyun Nie, Jianfeng Gao, Jian Zhang, Ming Zhou. 2000. On the Use of Words and N-grams for Chinese Information Retrieval, Proceedings of 5th International Workshop on Information Retrieval with Asian Languages Monica Rogati, Yiming Yang. 2002. High-performing Feature Selection for Text Classification, Proceedings of ACM Conference on Information and Knowledge Management 2002, 659-661. Gerard Salton, Christopher Buckley. 1988. Term Weighting Approaches in Automatic Text Retrieval, Information Processing and Management, 24(5):513-523. Fabrizio Sebastiani. 2002. Machine Learning in Automated Text Categorization, ACM Computing Surveys, 34(1):1-47 Dejun Xue, Maosong Sun. 2003a. Select Strong Information Features to Improve Text Categorization Effectiveness, Journal of Intelligent Systems, Special Issue. Dejun Xue, Maosong Sun. 2003b. A Study on Feature Weighting in Chinese Text Categorization, Proceedings of the 4th International Conference on Computational Linguistics and Intelligent Text Processing (CICLing 2003), 594-604. Vladimir Vapnik. 1995. The Nature of Statistical Learning Theory, Springer. Yiming Yang, Jan O. Pederson. 1997. A Comparative Study on Feature Selection in Text Categorization, Proceedings of ICML 1997, 412-420. 552
Proceedings of the 21st International Conference on Computational Linguistics and 44th Annual Meeting of the ACL, pages 49–56, Sydney, July 2006. c⃝2006 Association for Computational Linguistics A Finite-State Model of Human Sentence Processing Jihyun Park and Chris Brew Department of Linguisitcs The Ohio State University Columbus, OH, USA {park|cbrew}@ling.ohio-state.edu Abstract It has previously been assumed in the psycholinguistic literature that finite-state models of language are crucially limited in their explanatory power by the locality of the probability distribution and the narrow scope of information used by the model. We show that a simple computational model (a bigram part-of-speech tagger based on the design used by Corley and Crocker (2000)) makes correct predictions on processing difficulty observed in a wide range of empirical sentence processing data. We use two modes of evaluation: one that relies on comparison with a control sentence, paralleling practice in human studies; another that measures probability drop in the disambiguating region of the sentence. Both are surprisingly good indicators of the processing difficulty of garden-path sentences. The sentences tested are drawn from published sources and systematically explore five different types of ambiguity: previous studies have been narrower in scope and smaller in scale. We do not deny the limitations of finite-state models, but argue that our results show that their usefulness has been underestimated. 1 Introduction The main purpose of the current study is to investigate the extent to which a probabilistic part-ofspeech (POS) tagger can correctly model human sentence processing data. Syntactically ambiguous sentences have been studied in great depth in psycholinguistics because the pattern of ambiguity resolution provides a window onto the human sentence processing mechanism (HSPM). Prima facie it seems unlikely that such a tagger will be adequate, because almost all previous researchers have assumed, following standard linguistic theory, that a formally adequate account of recursive syntactic structure is an essential component of any model of the behaviour. In this study, we tested a bigram POS tagger on different types of structural ambiguities and (as a sanity check) to the well-known asymmetry of subject and object relative clause processing. Theoretically, the garden-path effect is defined as processing difficulty caused by reanalysis. Empirically, it is attested as comparatively slower reading time or longer eye fixation at a disambiguating region in an ambiguous sentence compared to its control sentences (Frazier and Rayner, 1982; Trueswell, 1996). That is, the garden-path effect detected in many human studies, in fact, is measured through a “comparative” method. This characteristic of the sentence processing research design is reconstructed in the current study using a probabilistic POS tagging system. Under the assumption that larger probability decrease indicates slower reading time, the test results suggest that the probabilistic POS tagging system can predict reading time penalties at the disambiguating region of garden-path sentences compared to that of non-garden-path sentences (i.e. control sentences). 2 Previous Work Corley and Crocker (2000) present a probabilistic model of lexical category disambiguation based on a bigram statistical POS tagger. Kim et al. 
(2002) suggest the feasibility of modeling human syntactic processing as lexical ambiguity resolution using a syntactic tagging system called Super-Tagger 49 (Joshi and Srinivas, 1994; Bangalore and Joshi, 1999). Probabilistic parsing techniques also have been used for sentence processing modeling (Jurafsky, 1996; Narayanan and Jurafsky, 2002; Hale, 2001; Crocker and Brants, 2000). Jurafsky (1996) proposed a probabilistic model of HSPM using a parallel beam-search parsing technique based on the stochastic context-free grammar (SCFG) and subcategorization probabilities. Crocker and Brants (2000) used broad coverage statistical parsing techniques in their modeling of human syntactic parsing. Hale (2001) reported that a probabilistic Earley parser can make correct predictions of garden-path effects and the subject/object relative asymmetry. These previous studies have used small numbers of examples of, for example, the Reduced-relative clause ambiguity and the DirectObject/Sentential-Complement ambiguity. The current study is closest in spirit to a previous attempt to use the technology of partof-speech tagging (Corley and Crocker, 2000). Among the computational models of the HSPM mentioned above, theirs is the simplest. They tested a statistical bigram POS tagger on lexically ambiguous sentences to investigate whether the POS tagger correctly predicted reading-time penalty. When a previously preferred POS sequence is less favored later, the tagger makes a repair. They claimed that the tagger’s reanalysis can model the processing difficulty in human’s disambiguating lexical categories when there exists a discrepancy between lexical bias and resolution. 3 Experiments In the current study, Corley and Crocker’s model is further tested on a wider range of so-called structural ambiguity types. A Hidden Markov Model POS tagger based on bigrams was used. We made our own implementation to be sure of getting as close as possible to the design of Corley and Crocker (2000). Given a word string, w0, w1, · · · , wn, the tagger calculates the probability of every possible tag path, t0, · · · , tn. Under the Markov assumption, the joint probability of the given word sequence and each possible POS sequence can be approximated as a product of conditional probability and transition probability as shown in (1). (1) P(w0, w1, · · · , wn, t0, t1, · · · , tn) ≈Πn i=1P(wi|ti) · P(ti|ti−1), where n ≥1. Using the Viterbi algorithm (Viterbi, 1967), the tagger finds the most likely POS sequence for a given word string as shown in (2). (2) arg max P(t0, t1, · · · , tn|w0, w1, · · · , wn, µ). This is known technology, see Manning and Sch¨utze (1999), but the particular use we make of it is unusual. The tagger takes a word string as an input, outputs the most likely POS sequence and the final probability. Additionally, it presents accumulated probability at each word break and probability re-ranking, if any. Note that the running probability at the beginning of a sentence will be 1, and will keep decreasing at each word break since it is a product of conditional probabilities. We tested the predictability of the model on empirical reading data with the probability decrease and the presence or absence of probability reranking. Adopting the standard experimental design used in human sentence processing studies, where word-by-word reading time or eye-fixation time is compared between an experimental sentence and its control sentence, this study compares probability at each word break between a pair of sentences. 
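To make this procedure concrete, a minimal sketch of such a bigram tagger is given below. It assumes that log emission probabilities log P(w|t) and log transition probabilities log P(t|t′), including a sentence-start state "<s>", have already been estimated from the tagged BNC training section; unseen events simply receive a large negative constant here instead of a proper smoothing scheme, and the data structures and names are our own.

```python
def viterbi_running_logprob(words, tags, log_emit, log_trans):
    """Bigram HMM tagging in the spirit of equations (1)-(2): return the
    Viterbi tag sequence and the accumulated log probability at each
    word break.  log_emit[t][w] ~ log P(w|t); log_trans[p][t] ~ log P(t|p)."""
    NEG = -1e9   # stand-in for unseen events (a real model would smooth)
    # delta[t] = best log probability of any tag path ending in tag t
    delta = {t: log_trans["<s>"].get(t, NEG) + log_emit[t].get(words[0], NEG)
             for t in tags}
    backptrs, running = [], [max(delta.values())]
    for w in words[1:]:
        new_delta, pointers = {}, {}
        for t in tags:
            prev, score = max(((p, delta[p] + log_trans[p].get(t, NEG))
                               for p in tags), key=lambda x: x[1])
            new_delta[t] = score + log_emit[t].get(w, NEG)
            pointers[t] = prev
        backptrs.append(pointers)
        delta = new_delta
        running.append(max(delta.values()))   # log probability at this word break
    best = max(delta, key=delta.get)          # most likely final tag
    path = [best]
    for pointers in reversed(backptrs):       # follow back-pointers
        path.append(pointers[path[-1]])
    return list(reversed(path)), running
```

Given estimated tables, running[i] is the accumulated log probability after the (i+1)-th word, which is the quantity compared at word breaks in what follows.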
Comparatively faster or larger drop of probability is expected to be a good indicator of comparative processing difficulty. Probability reranking, which is a simplified model of the reanalysis process assumed in many human studies, is also tested as another indicator of garden-path effect. Given a word string, all the possible POS sequences compete with each other based on their probability. Probability re-ranking occurs when an initially dispreferred POS sub-sequence becomes the preferred candidate later in the parse, because it fits in better with later words. The model parameters, P(wi|ti) and P(ti|ti−1), are estimated from a small section (970,995 tokens,47,831 distinct words) of the British National Corpus (BNC), which is a 100 million-word collection of British English, both written and spoken, developed by Oxford University Press (Burnard, 1995). The BNC was chosen for training the model because it is a POS-annotated corpus, which allows supervised training. In the implementation we use log probabilities to avoid underflow, and we report log probabilities in the sequel. 3.1 Hypotheses If the HSPM is affected by frequency information, we can assume that it will be easier to process 50 events with higher frequency or probability compared to those with lower frequency or probability. Under this general assumption, the overall difficulty of a sentence is expected to be measured or predicted by the mean size of probability decrease. That is, probability will drop faster in garden-path sentences than in control sentences (e.g. unambiguous sentences or ambiguous but non-gardenpath sentences). More importantly, the probability decrease pattern at disambiguating regions will predict the trends in the reading time data. All other things being equal, we might expect a reading time penalty when the size of the probability decrease at the disambiguating region in garden-path sentences is greater compared to the control sentences. This is a simple and intuitive assumption that can be easily tested. We could have formed the sum over all possible POS sequences in association with the word strings, but for the present study we simply used the Viterbi path: justifying this because this is the best single-path approximation to the joint probability. Lastly, re-ranking of POS sequences is expected to predict reanalysis of lexical categories. This is because re-ranking in the tagger is parallel to reanalysis in human subjects, which is known to be cognitively costly. 3.2 Materials In this study, five different types of ambiguity were tested including Lexical Category ambiguity, Reduced Relative ambiguity (RR ambiguity), Prepositional Phrase Attachment ambiguity (PP ambiguity), Direct-Object/Sentential-Complement ambiguity (DO/SC ambiguity), and Clausal Boundary ambiguity. The following are example sentences for each ambiguity type, shown with the ambiguous region italicized and the disambiguating region bolded. All of the example sentences are garden-path sentneces. (3) Lexical Category ambiguity The foreman knows that the warehouse prices the beer very modestly. (4) RR ambiguity The horse raced past the barn fell. (5) PP ambiguity Katie laid the dress on the floor onto the bed. (6) DO/SC ambiguity He forgot Pam needed a ride with him. (7) Clausal Boundary ambiguity Though George kept on reading the story really bothered him. There are two types of control sentences: unambiguous sentences and ambiguous but non-gardenpath sentences as shown in the examples below. 
Again, the ambiguous region is italicized and the disambiguating region is bolded. (8) Garden-Path Sentence The horse raced past the barn fell. (9) Ambiguous but Non-Garden-Path Control The horse raced past the barn and fell. (10) Unambiguous Control The horse that was raced past the barn fell. Note that the garden-path sentence (8) and its ambiguous control sentence (9) share exactly the same word sequence except for the disambiguating region. This allows direct comparison of probability at the critical region (i.e. disambiguating region) between the two sentences. Test materials used in experimental studies are constructed in this way in order to control extraneous variables such as word frequency. We use these sentences in the same form as the experimentalists so we inherit their careful design. In this study, a total of 76 sentences were tested: 10 for lexical category ambiguity, 12 for RR ambiguity, 20 for PP ambiguity, 16 for DO/SC ambiguity, and 18 for clausal boundary ambiguity. This set of materials is, to our knowledge, the most comprehensive yet subjected to this type of study. The sentences are directly adopted from various psycholinguistic studies (Frazier, 1978; Trueswell, 1996; Frazier and Clifton, 1996; Ferreira and Clifton, 1986; Ferreira and Henderson, 1986). As a baseline test case of the tagger, the well-established asymmetry between subject- and object-relative clauses was tested as shown in (11). (11) a. The editor who kicked the writer fired the entire staff. (Subject-relative) b. The editor who the writer kicked fired the entire staff. (Object-relative) The reading time advantage of subject-relative clauses over object-relative clauses is robust in English (Traxler et al., 2002) as well as other languages (Mak et al., 2002; Homes et al., 1981). For this test, materials from Traxler et al. (2002) (96 sentences) are used. 51 4 Results 4.1 The Probability Decrease per Word Unambiguous sentences are usually longer than garden-path sentences. To compare sentences of different lengths, the joint probability of the whole sentence and tags was divided by the number of words in the sentence. The result showed that the average probability decrease was greater in garden-path sentences compared to their unambiguous control sentences. This indicates that garden-path sentences are more difficult than unambiguous sentences, which is consistent with empirical findings. Probability decreased faster in object-relative sentences than in subject relatives as predicted. In the psycholinguistics literature, the comparative difficulty of object-relative clauses has been explained in terms of verbal working memory (King and Just, 1991), distance between the gap and the filler (Bever and McElree, 1988), or perspective shifting (MacWhinney, 1982). However, the test results in this study provide a simpler account for the effect. That is, the comparative difficulty of an object-relative clause might be attributed to its less frequent POS sequence. This account is particularly convincing since each pair of sentences in the experiment share the exactly same set of words except their order. 4.2 Probability Decrease at the Disambiguating Region A total of 30 pairs of a garden-path sentence and its ambiguous, non-garden-path control were tested for a comparison of the probability decrease at the disambiguating region. In 80% of the cases, the probability drops more sharply in garden-path sentences than in control sentences at the critical word. 
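Given a list of accumulated log probabilities of the kind returned by the earlier viterbi_running_logprob sketch (one value per word break), the two comparisons used in this section reduce to a few lines; again, the names are illustrative rather than the authors' own.

```python
def mean_drop_per_word(running):
    """Average log-probability decrease per word: the joint log probability
    divided by sentence length, negated so larger values mean a faster drop."""
    return -running[-1] / len(running)

def drop_at(running, i):
    """Log-probability decrease contributed by word i (i > 0), e.g. the
    drop at the disambiguating word."""
    return running[i - 1] - running[i]

# A reading-time penalty is predicted when the garden-path sentence shows the
# larger drop at the aligned disambiguating word k, e.g.
#   drop_at(run_gardenpath, k) > drop_at(run_control, k)
```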
The test results are presented in (12) with the number of test sets for each ambiguous type and the number of cases where the model correctly predicted reading-time penalty of garden-path sentences. (12) Ambiguity Type (Correct Predictions/Test Sets) a. Lexical Category Ambiguity (4/4) b. PP Ambiguity (10/10) c. RR Ambiguity (3/4) d. DO/SC Ambiguity (4/6) e. Clausal Boundary Ambiguity (3/6) −60 −55 −50 −45 −40 −35 Log Probability (a) PP Attachment Ambiguity Katie put the dress on the floor and / onto the ... −35 −30 −25 −20 −15 Log Probability (b) DO / SC Ambiguity (DO Bias) He forgot Susan but / remembered ... the and the floor the onto Susan but remembered forgot Figure 1: Probability Transition (Garden-Path vs. Non Garden-Path) (a) −◦−: Non-Garden-Path (Adjunct PP), −∗−: Garden -Path (Complement PP) (b) −◦−: Non-Garden-Path (DO-Biased, DO-Resolved), −∗−: Garden-Path (DO-Biased, SC-Resolved) The two graphs in Figure 1 illustrate the comparison of probability decrease between a pair of sentence. The y-axis of both graphs in Figure 1 is log probability. The first graph compares the probability drop for the prepositional phrase (PP) attachment ambiguity (Katie put the dress on the floor and/onto the bed....) The empirical result for this type of ambiguity shows that reading time penalty is observed when the second PP, onto the bed, is introduced, and there is no such effect for the other sentence. Indeed, the sharper probability drop indicates that the additional PP is less likely, which makes a prediction of a comparative processing difficulty. The second graph exhibits the probability comparison for the DO/SC ambiguity. The verb forget is a DO-biased verb and thus processing difficulty is observed when it has a sentential complement. Again, this effect was replicated here. The results showed that the disambiguating word given the previous context is more difficult in garden-path sentences compared to control sentences. There are two possible explanations for the processing difficulty. One is that the POS sequence of a garden-path sentence is less probable than that of its control sentence. The other account is that the disambiguating word in a garden-path 52 sentence is a lower frequency word compared to that of its control sentence. For example, slower reading time was observed in (13a) and (14a) compared to (13b) and (14b) at the disambiguating region that is bolded. (13) Different POS at the Disambiguating Region a. Katie laid the dress on the floor onto (−57.80) the bed. b. Katie laid the dress on the floor after (−55.77) her mother yelled at her. (14) Same POS at the Disambiguating Region a. The umpire helped the child on (−42.77) third base. b. The umpire helped the child to (−42.23) third base. The log probability for each disambiguating word is given at the end of each sentence. As expected, the probability at the disambiguating region in (13a) and (14a) is lower than in (13b) and (14b) respectively. The disambiguating words in (13) have different POS’s; Preposition in (13a) and Conjunction (13b). This suggests that the probabilities of different POS sequences can account for different reading time at the region. In (14), however, both disambiguating words are the same POS (i.e. Preposition) and the POS sequences for both sentences are identical. Instead, “on” and “to”, have different frequencies and this information is reflected in the conditional probability P(wordi|state). 
Therefore, the slower reading time in (14b) might be attributable to the lower frequency of the disambiguating word, “to” compared to “on”. 4.3 Probability Re-ranking The probability re-ranking reported in Corley and Crocker (2000) was replicated. The tagger successfully resolved the ambiguity by reanalysis when the ambiguous word was immediately followed by the disambiguating word (e.g. Without her he was lost.). If the disambiguating word did not immediately follow the ambiguous region, (e.g. Without her contributions would be very inadequate.) the ambiguity is sometimes incorrectly resolved. When revision occurred, probability dropped more sharply at the revision point and at the disambiguation region compared to the control sen−41 −36 −31 −26 −21 (b) " The woman told the joke did not ... " −30 −25 −20 −15 −10 −5 (a) " The woman chased by ... " the woman chased (MV) chased (PP) by the told the joke did but Figure 2: Probability Transition in the RR Ambiguity (a) −◦−: Non-Garden-Path (Past Tense Verb), −∗−: Garden-Path (Past Participle) (b) −◦−: Non-Garden-Path (Past Tense Verb), −∗−: Garden-Path, (Past Participle) tences. When the ambiguity was not correctly resolved, the probability comparison correctly modeled the comparative difficulty of the garden-path sentences Of particular interest in this study is RR ambiguity resolution. The tagger predicted the processing difficulty of the RR ambiguity with probability re-ranking. That is, the tagger initially favors the main-verb interpretation for the ambiguous -ed form, and later it makes a repair when the ambiguity is resolved as a past-participle. In the first graph of Figure 2, “chased” is resolved as a past participle also with a revision since the disambiguating word “by” is immediately following. When revision occurred, probability dropped more sharply at the revision point and at the disambiguation region compared to the control sentences. When the disambiguating word is not immediately followed by the ambiguous word as in the second graph of Figure 2, the ambiguity was not resolved correctly, but the probababiltiy decrease at the disambiguating regions correctly predict that the garden-path sentence would be harder. The RR ambiguity is often categorized as a syntactic ambiguity, but the results suggest that the ambiguity can be resolved locally and its processing difficulty can be detected by a finite state model. This suggests that we should be cautious 53 in assuming that a structural explanation is needed for the RR ambiguity resolution, and it could be that similar cautions are in order for other ambiguities usually seen as syntactic. Although the probability re-ranking reported in the previous studies (Corley and Crocker, 2000; Frazier, 1978) is correctly replicated, the tagger sometimes made undesired revisions. For example, the tagger did not make a repair for the sentence The friend accepted by the man was very impressed (Trueswell, 1996) because accepted is biased as a past participle. This result is compatible with the findings of Trueswell (1996). However, the bias towards past-participle produces a repair in the control sentence, which is unexpected. For the sentence, The friend accepted the man who was very impressed, the tagger showed a repair since it initially preferred a past-participle analysis for accepted and later it had to reanalyze. This is a limitation of our model, and does not match any previous empirical finding. 
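The repair behaviour discussed here can be detected mechanically by re-tagging successive prefixes of a sentence and checking whether the preferred tag of an earlier word changes. The sketch below does this in the most naive way, reusing the viterbi_running_logprob function sketched earlier; the quadratic re-tagging of prefixes and all names are our own devices, intended only to make the notion of a repair explicit, not a claim about the authors' implementation.

```python
def reanalysis_points(words, tags, log_emit, log_trans):
    """Return (trigger, position, old_tag, new_tag) tuples: reading the word
    at index `trigger` caused the best tag at `position` to be revised."""
    repairs, prev_path = [], None
    for n in range(1, len(words) + 1):
        path, _ = viterbi_running_logprob(words[:n], tags, log_emit, log_trans)
        if prev_path is not None:
            for i, (old, new) in enumerate(zip(prev_path, path)):
                if old != new:
                    repairs.append((n - 1, i, old, new))
        prev_path = path
    return repairs

# e.g. for "The woman chased by ..." (Figure 2a), a repair at "chased"
# (past-tense verb -> past participle) would be expected once "by" is read.
```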
5 Discussion The current study explores Corley and Crocker’s model(2000) further on the model’s account of human sentence processing data seen in empirical studies. Although there have been studies on a POS tagger evaluating it as a potential cognitive module of lexical category disambiguation, there has been little work that tests it as a modeling tool of syntactically ambiguous sentence processing. The findings here suggest that a statistical POS tagging system is more informative than Crocker and Corley demonstrated. It has a predictive power of processing delay not only for lexically ambiguous sentences but also for structurally garden-pathed sentences. This model is attractive since it is computationally simpler and requires few statistical parameters. More importantly, it is clearly defined what predictions can be and cannot be made by this model. This allows systematic testability and refutability of the model unlike some other probabilistic frameworks. Also, the model training and testing is transparent and observable, and true probability rather than transformed weights are used, all of which makes it easy to understand the mechanism of the proposed model. Although the model we used in the current study is not a novelty, the current work largely differs from the previous study in its scope of data used and the interpretation of the model for human sentence processing. Corley and Crocker clearly state that their model is strictly limited to lexical ambiguity resolution, and their test of the model was bounded to the noun-verb ambiguity. However, the findings in the current study play out differently. The experiments conducted in this study are parallel to empirical studies with regard to the design of experimental method and the test material. The garden-path sentences used in this study are authentic, most of them are selected from the cited literature, not conveniently coined by the authors. The word-by-word probability comparison between garden-path sentences and their controls is parallel to the experimental design widely adopted in empirical studies in the form of regionby-region reading or eye-gaze time comparison. In the word-by-word probability comparison, the model is tested whether or not it correctly predicts the comparative processing difficulty at the garden-path region. Contrary to the major claim made in previous empirical studies, which is that the garden-path phenomena are either modeled by syntactic principles or by structural frequency, the findings here show that the same phenomena can be predicted without such structural information. Therefore, the work is neither a mere extended application of Corley and Crocker’s work to a broader range of data, nor does it simply confirm earlier observations that finite state machines might accurately account for psycholinguistic results to some degree. The current study provides more concrete answers to what finite state machine is relevant to what kinds of processing difficulty and to what extent. 6 Future Work Even though comparative analysis is a widely adopted research design in experimental studies, a sound scientific model should be independent of this comparative nature and should be able to make systematic predictions. Currently, probability re-ranking is one way to make systematic module-internal predictions about the garden-path effect. 
This brings up the issue of encoding more information in lexical entries and increasing ambiguity so that other ambiguity types also can be disambiguated in a similar way via lexical category disambiguation. This idea has been explored as one of the lexicalist approaches to sentence processing (Kim et al., 2002; Bangalore and Joshi, 54 1999). Kim et al. (2002) suggest the feasibility of modeling structural analysis as lexical ambiguity resolution. They developed a connectionist neural network model of word recognition, which takes orthographic information, semantic information, and the previous two words as its input and outputs a SuperTag for the current word. A SuperTag is an elementary syntactic tree, or simply a structural description composed of features like POS, the number of complements, category of each complement, and the position of complements. In their view, structural disambiguation is simply another type of lexical category disambiguation, i.e. SuperTag disambiguation. When applied to DO/SC ambiguous fragments, such as “The economist decided ...”, their model showed a general bias toward the NP-complement structure. This NP-complement bias was overcome by lexical information from high-frequency S-biased verbs, meaning that if the S-biased verb was a high frequency word, it was correctly tagged, but if the verb had low frequency, then it was more likely to be tagged as NP-complement verb. This result is also reported in other constraint-based model studies (e.g. Juliano and Tanenhaus (1994)), but the difference between the previous constraint-based studies and Kim et. al is that the result of the latter is based on training of the model on noisier data (sentences that were not tailored to the specific research purpose). The implementation of SuperTag advances the formal specification of the constraint-based lexicalist theory. However, the scope of their sentence processing model is limited to the DO/SC ambiguity, and the description of their model is not clear. In addition, their model is far beyond a simple statistical model: the interaction of different sources of information is not transparent. Nevertheless, Kim et al. (2002) provides a future direction for the current study and a starting point for considering what information should be included in the lexicon. The fundamental goal of the current research is to explore a model that takes the most restrictive position on the size of parameters until additional parameters are demanded by data. Equally important, the quality of architectural simplicity should be maintained. Among the different sources of information manipulated by Kim et. al., the socalled elementary structural information is considered as a reasonable and ideal parameter for addition to the current model. The implementation and the evaluation of the model will be exactly the same as a statistical POS tagger provided with a large parsed corpus from which elementary trees can be extracted. 7 Conclusion Our studies show that, at least for the sample of test materials that we culled from the standard literature, a statistical POS tagging system can predict processing difficulty in structurally ambiguous garden-path sentences. The statistical POS tagger was surprisingly effective in modeling sentence processing data, given the locality of the probability distribution. 
The findings in this study provide an alternative account for the garden-path effect observed in empirical studies, specifically, that the slower processing times associated with garden-path sentences are due in part to their relatively unlikely POS sequences in comparison with those of non-garden-path sentences and in part to differences in the emission probabilities that the tagger learns. One attractive future direction is to carry out simulations that compare the evolution of probabilities in the tagger with that in a theoretically more powerful model trained on the same data, such as an incremental statistical parser (Kim et al., 2002; Roark, 2001). In so doing we can find the places where the prediction problem faced both by the HSPM and the machines that aspire to emulate it actually warrants the greater power of structurally sensitive models, using this knowledge to mine large corpora for future experiments with human subjects. We have not necessarily cast doubt on the hypothesis that the HSPM makes crucial use of structural information, but we have demonstrated that much of the relevant behavior can be captured in a simple model. The ’structural’ regularities that we observe are reasonably well encoded into this model. For purposes of initial real-time processing it could be that the HSPM is using a similar encoding of structural regularities into convenient probabilistic or neural form. It is as yet unclear what the final form of a cognitively accurate model along these lines would be, but it is clear from our study that it is worthwhile, for the sake of clarity and explicit testability, to consider models that are simpler and more precisely specified than those assumed by dominant theories of human sentence processing. 55 Acknowledgments This project was supported by the Cognitive Science Summer 2004 Research Award at the Ohio State University. We acknowledge support from NSF grant IIS 0347799. References S. Bangalore and A. K. Joshi. Supertagging: an approach to almost parsing. Computational Linguistics, 25(2):237–266, 1999. T. G. Bever and B. McElree. Empty categories access their antecedents during comprehension. Linguistic Inquiry, 19:35–43, 1988. L Burnard. Users Guide for the British National Corpus. British National Corpus Consortium, Oxford University Computing Service, 1995. S. Corley and M. W Crocker. The Modular Statistical Hypothesis: Exploring Lexical Category Ambiguity. Architectures and Mechanisms for Language Processing, M. Crocker, M. Pickering. and C. Charles (Eds.) Cambridge University Press, 2000. W. C. Crocker and T. Brants. Wide-coverage probabilistic sentence processing, 2000. F. Ferreira and C. Clifton. The independence of syntactic processing. Journal of Memory and Language, 25:348–368, 1986. F. Ferreira and J. Henderson. Use of verb information in syntactic parsing: Evidence from eye movements and word-by-word self-paced reading. Journal of Experimental Psychology, 16: 555–568, 1986. L. Frazier. On comprehending sentences: Syntactic parsing strategies. Ph.D. dissertation, University of Massachusetts, Amherst, MA, 1978. L. Frazier and C. Clifton. Construal. Cambridge, MA: MIT Press, 1996. L. Frazier and K. Rayner. Making and correcting errors during sentence comprehension: Eye movements in the analysis of structurally ambiguous sentences. Cognitive Psychology, 14: 178–210, 1982. J. Hale. A probabilistic earley parser as a psycholinguistic model. Proceedings of NAACL2001, 2001. V. M. Homes, J. O’Regan, and K.G. Evensen. 
Eye fixation patterns during the reading of relative clause sentences. Journal of Verbal Learning and Verbal Behavior, 20:417–430, 1981. A. K. Joshi and B. Srinivas. Disambiguation of super parts of speech (or supertags): almost parsing. The Proceedings of the 15th International Confer-ence on Computational Lingusitics (COLING ′94), pages 154–160, 1994. C. Juliano and M.K. Tanenhaus. A constraintbased lexicalist account of the subject-object attachment preference. Journal of Psycholinguistic Research, 23:459–471, 1994. D Jurafsky. A probabilistic model of lexical and syntactic access and disambiguation. Cognitive Science, 20:137–194, 1996. A. E. Kim, Bangalore S., and J. Trueswell. A computational model of the grammatical aspects of word recognition as supertagging. paola merlo and suzanne stevenson (eds.). The Lexical Basis of Sentence Processing: Formal, computational and experimental issues, University of Geneva University of Toronto:109–135, 2002. J. King and M. A. Just. Individual differences in syntactic processing: The role of working memory. Journal of Memory and Language, 30:580– 602, 1991. B. MacWhinney. Basic syntactic processes. Language acquisition; Syntax and semantics, S. Kuczaj (Ed.), 1:73–136, 1982. W. M. Mak, Vonk W., and H. Schriefers. The influence of animacy on relative clause processing. Journal of Memory and Language,, 47:50–68, 2002. C.D. Manning and H. Sch¨utze. Foundations of Statistical Natural Language Processing. The MIT Press, Cambridge, Massachusetts, 1999. S. Narayanan and D Jurafsky. A bayesian model predicts human parse preference and reading times in sentence processing. Proceedings of Advances in Neural Information Processing Systems, 2002. B. Roark. Probabilistic top-down parsing and language modeling. Computational Linguistics, 27 (2):249–276, 2001. M. J. Traxler, R. K. Morris, and R. E. Seely. Processing subject and object relative clauses: evidence from eye movements. Journal of Memory and Language, 47:69–90, 2002. J. C. Trueswell. The role of lexical frequency in syntactic ambiguity resolution. Journal of Memory and Language, 35:556–585, 1996. A. Viterbi. Error bounds for convolution codes and an asymptotically optimal decoding algorithm. IEEE Transactions of Information Theory, 13: 260–269, 1967. 56
2006
7
Proceedings of the 21st International Conference on Computational Linguistics and 44th Annual Meeting of the ACL, pages 553–560, Sydney, July 2006. c⃝2006 Association for Computational Linguistics Exploiting Comparable Corpora and Bilingual Dictionaries for Cross-Language Text Categorization Alfio Gliozzo and Carlo Strapparava ITC-Irst via Sommarive, I-38050, Trento, ITALY {gliozzo,strappa}@itc.it Abstract Cross-language Text Categorization is the task of assigning semantic classes to documents written in a target language (e.g. English) while the system is trained using labeled documents in a source language (e.g. Italian). In this work we present many solutions according to the availability of bilingual resources, and we show that it is possible to deal with the problem even when no such resources are accessible. The core technique relies on the automatic acquisition of Multilingual Domain Models from comparable corpora. Experiments show the effectiveness of our approach, providing a low cost solution for the Cross Language Text Categorization task. In particular, when bilingual dictionaries are available the performance of the categorization gets close to that of monolingual text categorization. 1 Introduction In the worldwide scenario of the Web age, multilinguality is a crucial issue to deal with and to investigate, leading us to reformulate most of the classical Natural Language Processing (NLP) problems into a multilingual setting. For instance the classical monolingual Text Categorization (TC) problem can be reformulated as a Cross Language Text Categorization (CLTC) task, in which the system is trained using labeled examples in a source language (e.g. English), and it classifies documents in a different target language (e.g. Italian). The applicative interest for the CLTC is immediately clear in the globalized Web scenario. For example, in the community based trade (e.g. eBay) it is often necessary to archive texts in different languages by adopting common merceological categories, very often defined by collections of documents in a source language (e.g. English). Another application along this direction is Cross Lingual Question Answering, in which it would be very useful to filter out the candidate answers according to their topics. In the literature, this task has been proposed quite recently (Bel et al., 2003; Gliozzo and Strapparava, 2005). In those works, authors exploited comparable corpora showing promising results. A more recent work (Rigutini et al., 2005) proposed the use of Machine Translation techniques to approach the same task. Classical approaches for multilingual problems have been conceived by following two main directions: (i) knowledge based approaches, mostly implemented by rule based systems and (ii) empirical approaches, in general relying on statistical learning from parallel corpora. Knowledge based approaches are often affected by low accuracy. Such limitation is mainly due to the problem of tuning large scale multilingual lexical resources (e.g. MultiWordNet, EuroWordNet) for the specific application task (e.g. discarding irrelevant senses, extending the lexicon with domain specific terms and their translations). On the other hand, empirical approaches are in general more accurate, because they can be trained from domain specific collections of parallel text to represent the application needs. 
There exist many interesting works about using parallel corpora for multilingual applications (Melamed, 2001), such as Machine Translation (Callison-Burch et al., 2004), Cross Lingual 553 Information Retrieval (Littman et al., 1998), and so on. However it is not always easy to find or build parallel corpora. This is the main reason why the “weaker” notion of comparable corpora is a matter of recent interest in the field of Computational Linguistics (Gaussier et al., 2004). In fact, comparable corpora are easier to collect for most languages (e.g. collections of international news agencies), providing a low cost knowledge source for multilingual applications. The main problem of adopting comparable corpora for multilingual knowledge acquisition is that only weaker statistical evidence can be captured. In fact, while parallel corpora provide stronger (text-based) statistical evidence to detect translation pairs by analyzing term co-occurrences in translated documents, comparable corpora provides weaker (term-based) evidence, because text alignments are not available. In this paper we present some solutions to deal with CLTC according to the availability of bilingual resources, and we show that it is possible to deal with the problem even when no such resources are accessible. The core technique relies on the automatic acquisition of Multilingual Domain Models (MDMs) from comparable corpora. This allows us to define a kernel function (i.e. a similarity function among documents in different languages) that is then exploited inside a Support Vector Machines classification framework. We also investigate this problem exploiting synsetaligned multilingual WordNets and standard bilingual dictionaries (e.g. Collins). Experiments show the effectiveness of our approach, providing a simple and low cost solution for the Cross-Language Text Categorization task. In particular, when bilingual dictionaries/repositories are available, the performance of the categorization gets close to that of monolingual TC. The paper is structured as follows. Section 2 briefly discusses the notion of comparable corpora. Section 3 shows how to perform crosslingual TC when no bilingual dictionaries are available and it is possible to rely on a comparability assumption. Section 4 present a more elaborated technique to acquire MDMs exploiting bilingual resources, such as MultiWordNet (i.e. a synset-aligned WordNet) and Collins bilingual dictionary. Section 5 evaluates our methodologies and Section 6 concludes the paper suggesting some future developments. 2 Comparable Corpora Comparable corpora are collections of texts in different languages regarding similar topics (e.g. a collection of news published by agencies in the same period). More restrictive requirements are expected for parallel corpora (i.e. corpora composed of texts which are mutual translations), while the class of the multilingual corpora (i.e. collection of texts expressed in different languages without any additional requirement) is the more general. Obviously parallel corpora are also comparable, while comparable corpora are also multilingual. In a more precise way, let L = {L1, L2, . . . , Ll} be a set of languages, let T i = {ti 1, ti 2, . . . , ti n} be a collection of texts expressed in the language Li ∈L, and let ψ(tj h, ti z) be a function that returns 1 if ti z is the translation of tj h and 0 otherwise. A multilingual corpus is the collection of texts defined by T ∗= S i T i. 
If the function ψ exists for every text ti z ∈T ∗and for every language Lj, and is known, then the corpus is parallel and aligned at document level. For the purpose of this paper it is enough to assume that two corpora are comparable, i.e. they are composed of documents about the same topics and produced in the same period (e.g. possibly from different news agencies), and it is not known if a function ψ exists, even if in principle it could exist and return 1 for a strict subset of document pairs. The texts inside comparable corpora, being about the same topics, should refer to the same concepts by using various expressions in different languages. On the other hand, most of the proper nouns, relevant entities and words that are not yet lexicalized in the language, are expressed by using their original terms. As a consequence the same entities will be denoted with the same words in different languages, allowing us to automatically detect couples of translation pairs just by looking at the word shape (Koehn and Knight, 2002). Our hypothesis is that comparable corpora contain a large amount of such words, just because texts, referring to the same topics in different languages, will often adopt the same terms to denote the same entities1. 1According to our assumption, a possible additional cri554 However, the simple presence of these shared words is not enough to get significant results in CLTC tasks. As we will see, we need to exploit these common words to induce a second-order similarity for the other words in the lexicons. 2.1 The Multilingual Vector Space Model Let T = {t1, t2, . . . , tn} be a corpus, and V = {w1, w2, . . . , wk} be its vocabulary. In the monolingual settings, the Vector Space Model (VSM) is a k-dimensional space Rk, in which the text tj ∈T is represented by means of the vector ⃗tj such that the zth component of ⃗tj is the frequency of wz in tj. The similarity among two texts in the VSM is then estimated by computing the cosine of their vectors in the VSM. Unfortunately, such a model cannot be adopted in the multilingual settings, because the VSMs of different languages are mainly disjoint, and the similarity between two texts in different languages would always turn out to be zero. This situation is represented in Figure 1, in which both the leftbottom and the rigth-upper regions of the matrix are totally filled by zeros. On the other hand, the assumption of corpora comparability seen in Section 2, implies the presence of a number of common words, represented by the central rows of the matrix in Figure 1. As we will show in Section 5, this model is rather poor because of its sparseness. In the next section, we will show how to use such words as seeds to induce a Multilingual Domain VSM, in which second order relations among terms and documents in different languages are considered to improve the similarity estimation. 3 Exploiting Comparable Corpora Looking at the multilingual term-by-document matrix in Figure 1, a first attempt to merge the subspaces associated to each language is to exploit the information provided by external knowledge sources, such as bilingual dictionaries, e.g. collapsing all the rows representing translation pairs. In this setting, the similarity among texts in different languages could be estimated by exploiting the classical VSM just described. 
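The following sketch makes the point concrete: with a plain bag-of-words representation over the joint vocabulary, two documents in different languages can only overlap through shared surface forms (mostly proper nouns), so their cosine similarity is driven entirely by those few rows. The toy sentences are invented for illustration.

```python
import math
from collections import Counter

def cosine_bow(text_a, text_b):
    """Bag-of-words cosine in the joint (multilingual) vocabulary space."""
    va, vb = Counter(text_a.lower().split()), Counter(text_b.lower().split())
    dot = sum(va[w] * vb[w] for w in set(va) & set(vb))
    norm = (math.sqrt(sum(c * c for c in va.values()))
            * math.sqrt(sum(c * c for c in vb.values())))
    return dot / norm if norm else 0.0

# The only overlap is the shared named entity "microsoft".
print(round(cosine_bow("microsoft releases a new operating system",
                       "microsoft presenta un nuovo sistema operativo"), 3))
```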
However, the main disadvantage of this approach to estimate inter-lingual text similarity is that it strongly terion to decide whether two corpora are comparable is to estimate the percentage of terms in the intersection of their vocabularies. relies on the availability of a multilingual lexical resource. For languages with scarce resources a bilingual dictionary could be not easily available. Secondly, an important requirement of such a resource is its coverage (i.e. the amount of possible translation pairs that are actually contained in it). Finally, another problem is that ambiguous terms could be translated in different ways, leading us to collapse together rows describing terms with very different meanings. In Section 4 we will see how the availability of bilingual dictionaries influences the techniques and the performance. In the present Section we want to explore the case in which such resources are supposed not available. 3.1 Multilingual Domain Model A MDM is a multilingual extension of the concept of Domain Model. In the literature, Domain Models have been introduced to represent ambiguity and variability (Gliozzo et al., 2004) and successfully exploited in many NLP applications, such as Word Sense Disambiguation (Strapparava et al., 2004), Text Categorization and Term Categorization. A Domain Model is composed of soft clusters of terms. Each cluster represents a semantic domain, i.e. a set of terms that often co-occur in texts having similar topics. Such clusters identify groups of words belonging to the same semantic field, and thus highly paradigmatically related. MDMs are Domain Models containing terms in more than one language. A MDM is represented by a matrix D, containing the degree of association among terms in all the languages and domains, as illustrated in Table 1. For example the term virus is associated to both MEDICINE COMPUTER SCIENCE HIV e/i 1 0 AIDSe/i 1 0 viruse/i 0.5 0.5 hospitale 1 0 laptope 0 1 Microsofte/i 0 1 clinicai 1 0 Table 1: Example of Domain Matrix. we denotes English terms, wi Italian terms and we/i the common terms to both languages. the domain COMPUTER SCIENCE and the domain MEDICINE while the domain MEDICINE is associated to both the terms AIDS and HIV. Inter-lingual 555                                  English texts Italian texts te 1 te 2 · · · te n−1 te n ti 1 ti 2 · · · ti m−1 ti m we 1 0 1 · · · 0 1 0 0 · · · English Lexicon we 2 1 1 · · · 1 0 0 ... ... . . . . . . . . . . . . . . . . . . . . . . . ... 0 ... we p−1 0 1 · · · 0 0 ... 0 we p 0 1 · · · 0 0 · · · 0 0 common wi we/i 1 0 1 · · · 0 0 0 0 · · · 1 0 ... . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . wi 1 0 0 · · · 0 1 · · · 1 1 Italian Lexicon wi 2 0 ... 1 1 · · · 0 1 ... ... 0 ... . . . . . . . . . . . . . . . . . . . . . . . . wi q−1 ... 0 0 1 · · · 0 1 wi q · · · 0 0 0 1 · · · 1 0                                  Figure 1: Multilingual term-by-document matrix domain relations are captured by placing different terms of different languages in the same semantic field (as for example HIV e/i, AIDSe/i, hospitale, and clinicai). Most of the named entities, such as Microsoft and HIV are expressed using the same string in both languages. Formally, let V i = {wi 1, wi 2, . . . 
, wi ki} be the vocabulary of the corpus T i composed of document expressed in the language Li, let V ∗= S i V i be the set of all the terms in all the languages, and let k∗= |V ∗| be the cardinality of this set. Let D = {D1, D2, ..., Dd} be a set of domains. A DM is fully defined by a k∗× d domain matrix D representing in each cell di,z the domain relevance of the ith term of V ∗with respect to the domain Dz. The domain matrix D is used to define a function D : Rk∗→Rd, that maps the document vectors ⃗tj expressed into the multilingual classical VSM (see Section 2.1), into the vectors ⃗t′ j in the multilingual domain VSM. The function D is defined by2 D(⃗tj) = ⃗tj(IIDFD) = ⃗t′ j (1) where IIDF is a diagonal matrix such that iIDF i,l = IDF(wl i), ⃗tj is represented as a row vector, and IDF(wl i) is the Inverse Document Frequency of 2In (Wong et al., 1985) the formula 1 is used to define a Generalized Vector Space Model, of which the Domain VSM is a particular instance. wl i evaluated in the corpus T l. In this work we exploit Latent Semantic Analysis (LSA) (Deerwester et al., 1990) to automatically acquire a MDM from comparable corpora. LSA is an unsupervised technique for estimating the similarity among texts and terms in a large corpus. In the monolingual settings LSA is performed by means of a Singular Value Decomposition (SVD) of the term-by-document matrix T describing the corpus. SVD decomposes the term-by-document matrix T into three matrixes T ≃VΣk′UT where Σk′ is the diagonal k × k matrix containing the highest k′ ≪k eigenvalues of T, and all the remaining elements are set to 0. The parameter k′ is the dimensionality of the Domain VSM and can be fixed in advance (i.e. k′ = d). In the literature (Littman et al., 1998) LSA has been used in multilingual settings to define a multilingual space in which texts in different languages can be represented and compared. In that work LSA strongly relied on the availability of aligned parallel corpora: documents in all the languages are represented in a term-by-document matrix (see Figure 1) and then the columns corresponding to sets of translated documents are collapsed (i.e. they are substituted by their sum) before starting the LSA process. The effect of this step is to merge the subspaces (i.e. the right and the left sectors of the matrix in Figure 1) in which 556 the documents have been originally represented. In this paper we propose a variation of this strategy, performing a multilingual LSA in the case in which an aligned parallel corpus is not available. It exploits the presence of common words among different languages in the term-by-document matrix. The SVD process has the effect of creating a LSA space in which documents in both languages are represented. Of course, the higher the number of common words, the more information will be provided to the SVD algorithm to find common LSA dimension for the two languages. The resulting LSA dimensions can be perceived as multilingual clusters of terms and document. LSA can then be used to define a Multilingual Domain Matrix DLSA. For further details see (Gliozzo and Strapparava, 2005). As Kernel Methods are the state-of-the-art supervised framework for learning and they have been successfully adopted to approach the TC task (Joachims, 2002), we chose this framework to perform all our experiments, in particular Support Vector Machines3. 
Taking into account the external knowledge provided by a MDM it is possible estimate the topic similarity among two texts expressed in different languages, with the following kernel: KD(ti, tj) = ⟨D(ti), D(tj)⟩ q ⟨D(tj), D(tj)⟩⟨D(ti), D(ti)⟩ (2) where D is defined as in equation 1. Note that when we want to estimate the similarity in the standard Multilingual VSM, as described in Section 2.1, we can use a simple bag of words kernel. The BoW kernel is a particular case of the Domain Kernel, in which D = I, and I is the identity matrix. In the evaluation typically we consider the BoW Kernel as a baseline. 4 Exploiting Bilingual Dictionaries When bilingual resources are available it is possible to augment the the “common” portion of the matrix in Figure 1. In our experiments we exploit two alternative multilingual resources: MultiWordNet and the Collins English-Italian bilingual dictionary. 3We adopted the efficient implementation freely available at http://svmlight.joachims.org/. MultiWordNet4. It is a multilingual computational lexicon, conceived to be strictly aligned with the Princeton WordNet. The available languages are Italian, Spanish, Hebrew and Romanian. In our experiment we used the English and the Italian components. The last version of the Italian WordNet contains around 58,000 Italian word senses and 41,500 lemmas organized into 32,700 synsets aligned whenever possible with WordNet English synsets. The Italian synsets are created in correspondence with the Princeton WordNet synsets, whenever possible, and semantic relations are imported from the corresponding English synsets. This implies that the synset index structure is the same for the two languages. Thus for the all the monosemic words, we augment each text in the dataset with the corresponding synset-id, which act as an expansion of the “common” terms of the matrix in Figure 1. Adopting the methodology described in Section 3.1, we exploit these common sense-indexing to induce a second-order similarity for the other terms in the lexicons. We evaluate the performance of the cross-lingual text categorization, using both the BoW Kernel and the Multilingual Domain Kernel, observing that also in this case the leverage of the external knowledge brought by the MDM is effective. It is also possible to augment each text with all the synset-ids of all the words (i.e. monosemic and polysemic) present in the dataset, hoping that the SVM machine learning device cut off the noise due to the inevitable spurious senses introduced in the training examples. Obviously in this case, differently from the “monosemic” enrichment seen above, it does not make sense to apply any dimensionality reduction supplied by the Multilingual Domain Model (i.e. the resulting second-order relations among terms and documents produced on a such “extended” corpus should not be meaningful)5. Collins. The Collins machine-readable bilingual dictionary is a medium size dictionary including 37,727 headwords in the English Section and 32,602 headwords in the Italian Section. This is a traditional dictionary, without sense indexing like the WordNet repository. In this case 4Available at http://multiwordnet.itc.it. 5The use of a WSD system would help in this issue. However the rationale of this paper is to see how far it is possible to go with very few resources. And we suppose that a multilingual all-words WSD system is not easily available. 
557 English Italian Categories Training Test Total Training Test Total Quality of Life 5759 1989 7748 5781 1901 7682 Made in Italy 5711 1864 7575 6111 2068 8179 Tourism 5731 1857 7588 6090 2015 8105 Culture and School 3665 1245 4910 6284 2104 8388 Total 20866 6955 27821 24266 8088 32354 Table 2: Number of documents in the data set partitions we follow the way, for each text of one language, to augment all the present words with the translation words found in the dictionary. For the same reason, we chose not to exploit the MDM, while experimenting along this way. 5 Evaluation The CLTC task has been rarely attempted in the literature, and standard evaluation benchmark are not available. For this reason, we developed an evaluation task by adopting a news corpus kindly put at our disposal by AdnKronos, an important Italian news provider. The corpus consists of 32,354 Italian and 27,821 English news partitioned by AdnKronos into four fixed categories: QUALITY OF LIFE, MADE IN ITALY, TOURISM, CULTURE AND SCHOOL. The English and the Italian corpora are comparable, in the sense stated in Section 2, i.e. they cover the same topics and the same period of time. Some news stories are translated in the other language (but no alignment indication is given), some others are present only in the English set, and some others only in the Italian. The average length of the news stories is about 300 words. We randomly split both the English and Italian part into 75% training and 25% test (see Table 2). We processed the corpus with PoS taggers, keeping only nouns, verbs, adjectives and adverbs. Table 3 reports the vocabulary dimensions of the English and Italian training partitions, the vocabulary of the merged training, and how many common lemmata are present (about 14% of the total). Among the common lemmata, 97% are nouns and most of them are proper nouns. Thus the initial term-by-document matrix is a 43,384 × 45,132 matrix, while the DLSA was acquired using 400 dimensions. As far as the CLTC task is concerned, we tried the many possible options. In all the cases we trained on the English part and we classified the Italian part, and we trained on the Italian and clas# lemmata English training 22,704 Italian training 26,404 English + Italian 43,384 common lemmata 5,724 Table 3: Number of lemmata in the training parts of the corpus sified on the English part. When used, the MDM was acquired running the SVD only on the joint (English and Italian) training parts. Using only comparable corpora. Figure 2 reports the performance without any use of bilingual dictionaries. Each graph show the learning curves respectively using a BoW kernel (that is considered here as a baseline) and the multilingual domain kernel. We can observe that the latter largely outperform a standard BoW approach. Analyzing the learning curves, it is worth noting that when the quantity of training increases, the performance becomes better and better for the Multilingual Domain Kernel, suggesting that with more available training it could be possible to improve the results. Using bilingual dictionaries. Figure 3 reports the learning curves exploiting the addition of the synset-ids of the monosemic words in the corpus. As expected the use of a multilingual repository improves the classification results. Note that the MDM outperforms the BoW kernel. Figure 4 shows the results adding in the English and Italian parts of the corpus all the synset-ids (i.e. 
monosemic and polisemic) and all the translations found in the Collins dictionary respectively. These are the best results we get in our experiments. In these figures we report also the performance of the corresponding monolingual TC (we used the SVM with the BoW kernel), which can be considered as an upper bound. We can observe that the CLTC results are quite close to the performance obtained in the monolingual classification tasks. 558 0.2 0.3 0.4 0.5 0.6 0.7 0 0.1 0.2 0.3 0.4 0.5 0.6 0.7 0.8 0.9 1 F1 measure Fraction of training data (train on English, test on Italian) Multilingual Domain Kernel Bow Kernel 0.2 0.3 0.4 0.5 0.6 0.7 0 0.1 0.2 0.3 0.4 0.5 0.6 0.7 0.8 0.9 1 F1 measure Fraction of training data (train on Italian, test on English) Multilingual Domain Kernel Bow Kernel Figure 2: Cross-language learning curves: no use of bilingual dictionaries 0.2 0.3 0.4 0.5 0.6 0.7 0 0.1 0.2 0.3 0.4 0.5 0.6 0.7 0.8 0.9 1 F1 measure Fraction of training data (train on English, test on Italian) Multilingual Domain Kernel Bow Kernel 0.2 0.3 0.4 0.5 0.6 0.7 0 0.1 0.2 0.3 0.4 0.5 0.6 0.7 0.8 0.9 1 F1 measure Fraction of training data (train on Italian, test on English) Multilingual Domain Kernel Bow Kernel Figure 3: Cross-language learning curves: monosemic synsets from MultiWordNet 0.2 0.3 0.4 0.5 0.6 0.7 0.8 0.9 1 0 0.1 0.2 0.3 0.4 0.5 0.6 0.7 0.8 0.9 1 F1 measure Fraction of training data (train on English, test on Italian) Monolingual (Italian) TC Collins MultiWordNet 0.2 0.3 0.4 0.5 0.6 0.7 0.8 0.9 1 0 0.1 0.2 0.3 0.4 0.5 0.6 0.7 0.8 0.9 1 F1 measure Fraction of training data (train on Italian, test on English) Monolingual (English) TC Collins MultiWordNet Figure 4: Cross-language learning curves: all synsets from MultiWordNet // All translations from Collins 559 6 Conclusion and Future Work In this paper we have shown that the problem of cross-language text categorization on comparable corpora is a feasible task. In particular, it is possible to deal with it even when no bilingual resources are available. On the other hand when it is possible to exploit bilingual repositories, such as a synset-aligned WordNet or a bilingual dictionary, the obtained performance is close to that achieved for the monolingual task. In any case we think that our methodology is low-cost and simple, and it can represent a technologically viable solution for multilingual problems. For the future we try to explore also the use of a word sense disambiguation all-words system. We are confident that even with the actual state-of-the-art WSD performance, we can improve the actual results. Acknowledgments This work has been partially supported by the ONTOTEXT (From Text to Knowledge for the Semantic Web) project, funded by the Autonomous Province of Trento under the FUP-2004 program. References N. Bel, C. Koster, and M. Villegas. 2003. Crosslingual text categorization. In Proceedings of European Conference on Digital Libraries (ECDL), Trondheim, August. C. Callison-Burch, D. Talbot, and M. Osborne. 2004. Statistical machine translation with word-and sentence-aligned parallel corpora. In Proceedings of ACL-04, Barcelona, Spain, July. S. Deerwester, S. T. Dumais, G. W. Furnas, T.K. Landauer, and R. Harshman. 1990. Indexing by latent semantic analysis. Journal of the American Society for Information Science, 41(6):391–407. E. Gaussier, J. M. Renders, I. Matveeva, C. Goutte, and H. Dejean. 2004. A geometric view on bilingual lexicon extraction from comparable corpora. In Proceedings of ACL-04, Barcelona, Spain, July. A. 
Gliozzo and C. Strapparava. 2005. Cross language text categorization by acquiring multilingual domain models from comparable corpora. In Proc. of the ACL Workshop on Building and Using Parallel Texts (in conjunction of ACL-05), University of Michigan, Ann Arbor, June. A. Gliozzo, C. Strapparava, and I. Dagan. 2004. Unsupervised and supervised exploitation of semantic domains in lexical disambiguation. Computer Speech and Language, 18:275–299. T. Joachims. 2002. Learning to Classify Text using Support Vector Machines. Kluwer Academic Publishers. P. Koehn and K. Knight. 2002. Learning a translation lexicon from monolingual corpora. In Proceedings of ACL Workshop on Unsupervised Lexical Acquisition, Philadelphia, July. M. Littman, S. Dumais, and T. Landauer. 1998. Automatic cross-language information retrieval using latent semantic indexing. In G. Grefenstette, editor, Cross Language Information Retrieval, pages 51– 62. Kluwer Academic Publishers. D. Melamed. 2001. Empirical Methods for Exploiting Parallel Texts. The MIT Press. L. Rigutini, M. Maggini, and B. Liu. 2005. An EM based training algorithm for cross-language text categorizaton. In Proceedings of Web Intelligence Conference (WI-2005), Compi`egne, France, September. C. Strapparava, A. Gliozzo, and C. Giuliano. 2004. Pattern abstraction and term similarity for word sense disambiguation. In Proceedings of SENSEVAL-3, Barcelona, Spain, July. S.K.M. Wong, W. Ziarko, and P.C.N. Wong. 1985. Generalized vector space model in information retrieval. In Proceedings of the 8th ACM SIGIR Conference. 560
2006
70
Proceedings of the 21st International Conference on Computational Linguistics and 44th Annual Meeting of the ACL, pages 561–568, Sydney, July 2006. c⃝2006 Association for Computational Linguistics A Progressive Feature Selection Algorithm for Ultra Large Feature Spaces Qi Zhang Computer Science Department Fudan University Shanghai 200433, P.R. China [email protected] Fuliang Weng Research and Technology Center Robert Bosch Corp. Palo Alto, CA 94304, USA [email protected] Zhe Feng Research and Technology Center Robert Bosch Corp. Palo Alto, CA 94304, USA [email protected] Abstract Recent developments in statistical modeling of various linguistic phenomena have shown that additional features give consistent performance improvements. Quite often, improvements are limited by the number of features a system is able to explore. This paper describes a novel progressive training algorithm that selects features from virtually unlimited feature spaces for conditional maximum entropy (CME) modeling. Experimental results in edit region identification demonstrate the benefits of the progressive feature selection (PFS) algorithm: the PFS algorithm maintains the same accuracy performance as previous CME feature selection algorithms (e.g., Zhou et al., 2003) when the same feature spaces are used. When additional features and their combinations are used, the PFS gives 17.66% relative improvement over the previously reported best result in edit region identification on Switchboard corpus (Kahn et al., 2005), which leads to a 20% relative error reduction in parsing the Switchboard corpus when gold edits are used as the upper bound. 1 Introduction Conditional Maximum Entropy (CME) modeling has received a great amount of attention within natural language processing community for the past decade (e.g., Berger et al., 1996; Reynar and Ratnaparkhi, 1997; Koeling, 2000; Malouf, 2002; Zhou et al., 2003; Riezler and Vasserman, 2004). One of the main advantages of CME modeling is the ability to incorporate a variety of features in a uniform framework with a sound mathematical foundation. Recent improvements on the original incremental feature selection (IFS) algorithm, such as Malouf (2002) and Zhou et al. (2003), greatly speed up the feature selection process. However, like many other statistical modeling algorithms, such as boosting (Schapire and Singer, 1999) and support vector machine (Vapnik 1995), the algorithm is limited by the size of the defined feature space. Past results show that larger feature spaces tend to give better results. However, finding a way to include an unlimited amount of features is still an open research problem. In this paper, we propose a novel progressive feature selection (PFS) algorithm that addresses the feature space size limitation. The algorithm is implemented on top of the Selective Gain Computation (SGC) algorithm (Zhou et al., 2003), which offers fast training and high quality models. Theoretically, the new algorithm is able to explore an unlimited amount of features. Because of the improved capability of the CME algorithm, we are able to consider many new features and feature combinations during model construction. To demonstrate the effectiveness of our new algorithm, we conducted a number of experiments on the task of identifying edit regions, a practical task in spoken language processing. 
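Anticipating the algorithm detailed in Section 3, the sketch below shows the overall split / select / merge loop in schematic form; sgc_select stands in for the SGC feature selector and is assumed rather than implemented, and the grouping schedule and the final weight-optimization pass are simplified relative to the paper's description.

```python
import random

def progressive_feature_selection(features, sgc_select, rounds=3,
                                  n_splits=10, select_factor=0.1):
    """Schematic PFS loop: split the feature space into tractable sub-spaces,
    run the (assumed) SGC selector on each, merge the survivors, and repeat;
    a final weight-optimization pass is omitted here."""
    pool = list(features)
    for _ in range(rounds - 1):
        random.shuffle(pool)                     # random-split variant
        subspaces = [pool[i::n_splits] for i in range(n_splits)]
        pool = [f for sub in subspaces for f in sgc_select(sub, select_factor)]
        n_splits = max(1, n_splits // 2)         # fewer, larger groups per round
    return sgc_select(pool, select_factor)       # final selection
```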
Based on the convention from Shriberg (1994) and Charniak and Johnson (2001), a disfluent spoken utterance is divided into three parts: the reparandum, the part that is repaired; the inter561 regnum, which can be filler words or empty; and the repair/repeat, the part that replaces or repeats the reparandum. The first two parts combined are called an edit or edit region. An example is shown below: interregnum It is, you know, this is a tough problem. reparandum repair In section 2, we briefly review the CME modeling and SGC algorithm. Then, section 3 gives a detailed description of the PFS algorithm. In section 4, we describe the Switchboard corpus, features used in the experiments, and the effectiveness of the PFS with different feature spaces. Section 5 concludes the paper. 2 Background Before presenting the PFS algorithm, we first give a brief review of the conditional maximum entropy modeling, its training process, and the SGC algorithm. This is to provide the background and motivation for our PFS algorithm. 2.1 Conditional Maximum Entropy Model The goal of CME is to find the most uniform conditional distribution of y given observation x, ( ) x y p , subject to constraints specified by a set of features ( ) y x fi , , where features typically take the value of either 0 or 1 (Berger et al., 1996). More precisely, we want to maximize ( ) ( ) ( ) ( ) ( ) x y p x y p x p p H y x log ~ ,∑ − = (1) given the constraints: ( ) ( ) i i f E f E ~ = (2) where ( ) ( ) ( ) ∑ = y x i i y x f y x p f E , , , ~ ~ is the empirical expected feature count from the training data and ( ) ( ) ( ) ( ) ∑ = y x i i y x f x y p x p f E , , ~ is the feature expectation from the conditional model ( ) x y p . This results in the following exponential model: ( ) ( ) ( )⎟ ⎟ ⎠ ⎞ ⎜ ⎜ ⎝ ⎛ = ∑ j j j y x f x Z x y p , exp 1 λ (3) where λj is the weight corresponding to the feature fj, and Z(x) is a normalization factor. A variety of different phenomena, including lexical, structural, and semantic aspects, in natural language processing tasks can be expressed in terms of features. For example, a feature can be whether the word in the current position is a verb, or the word is a particular lexical item. A feature can also be about a particular syntactic subtree, or a dependency relation (e.g., Charniak and Johnson, 2005). 2.2 Selective Gain Computation Algorithm In real world applications, the number of possible features can be in the millions or beyond. Including all the features in a model may lead to data over-fitting, as well as poor efficiency and memory overflow. Good feature selection algorithms are required to produce efficient and high quality models. This leads to a good amount of work in this area (Ratnaparkhi et al., 1994; Berger et al., 1996; Pietra et al, 1997; Zhou et al., 2003; Riezler and Vasserman, 2004) In the most basic approach, such as Ratnaparkhi et al. (1994) and Berger et al. (1996), training starts with a uniform distribution over all values of y and an empty feature set. For each candidate feature in a predefined feature space, it computes the likelihood gain achieved by including the feature in the model. The feature that maximizes the gain is selected and added to the current model. This process is repeated until the gain from the best candidate feature only gives marginal improvement. 
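For reference, the basic incremental loop just described can be written as below; gain_of(model, f), the likelihood gain of adding feature f to the current model, is an assumed callback, since its implementation is precisely the expensive part that the SGC and PFS algorithms address.

```python
def incremental_feature_selection(candidates, gain_of, min_gain=1e-4):
    """Basic IFS loop: at every stage, re-compute the gain of *each* remaining
    candidate under the current model, add the best one, and stop once the
    best available gain is only marginal."""
    model, remaining = [], set(candidates)
    while remaining:
        gains = {f: gain_of(model, f) for f in remaining}   # the costly step
        best = max(gains, key=gains.get)
        if gains[best] < min_gain:
            break
        model.append(best)
        remaining.remove(best)
    return model
```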
The process is very slow, because it has to re-compute the gain for every feature at each selection stage, and the computation of a parameter using Newton’s method becomes expensive, considering that it has to be repeated many times. The idea behind the SGC algorithm (Zhou et al., 2003) is to use the gains computed in the previous step as approximate upper bounds for the subsequent steps. The gain for a feature needs to be re-computed only when the feature reaches the top of a priority queue ordered by gain. In other words, this happens when the feature is the top candidate for inclusion in the model. If the re-computed gain is smaller than that of the next candidate in the list, the feature is re-ranked according to its newly computed gain, and the feature now at the top of the list goes through the same gain re-computing process. This heuristics comes from evidences that the gains become smaller and smaller as more and more good features are added to the model. This can be explained as follows: assume that the Maximum Likelihood (ML) estimation lead to the best model that reaches a ML value. The ML value is the upper bound. Since the gains need to be positive to proceed the process, the difference 562 between the Likelihood of the current and the ML value becomes smaller and smaller. In other words, the possible gain each feature may add to the model gets smaller. Experiments in Zhou et al. (2003) also confirm the prediction that the gains become smaller when more and more features are added to the model, and the gains do not get unexpectively bigger or smaller as the model grows. Furthermore, the experiments in Zhou et al. (2003) show no significant advantage for looking ahead beyond the first element in the feature list. The SGC algorithm runs hundreds to thousands of times faster than the original IFS algorithm without degrading classification performance. We used this algorithm for it enables us to find high quality CME models quickly. The original SGC algorithm uses a technique proposed by Darroch and Ratcliff (1972) and elaborated by Goodman (2002): when considering a feature fi, the algorithm only modifies those un-normalized conditional probabilities: ( ) ( ) ∑j j j y x f , exp λ for (x, y) that satisfy fi (x, y)=1, and subsequently adjusts the corresponding normalizing factors Z(x) in (3). An implementation often uses a mapping table, which maps features to the training instance pairs (x, y). 3 Progressive Feature Selection Algorithm In general, the more contextual information is used, the better a system performs. However, richer context can lead to combinatorial explosion of the feature space. When the feature space is huge (e.g., in the order of tens of millions of features or even more), the SGC algorithm exceeds the memory limitation on commonly available computing platforms with gigabytes of memory. To address the limitation of the SGC algorithm, we propose a progressive feature selection algorithm that selects features in multiple rounds. The main idea of the PFS algorithm is to split the feature space into tractable disjoint sub-spaces such that the SGC algorithm can be performed on each one of them. In the merge step, the features that SGC selects from different sub-spaces are merged into groups. Instead of re-generating the feature-to-instance mapping table for each sub-space during the time of splitting and merging, we create the new mapping table from the previous round’s tables by collecting those entries that correspond to the selected features. 
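The gain-caching heuristic can be pictured as below: previously computed gains, kept in a max-heap, serve as upper bounds, and a gain is re-computed only when its feature reaches the top of the heap. As before, gain_of is an assumed callback, and the model update and mapping-table bookkeeping of the real implementation are omitted.

```python
import heapq

def sgc_select(gain_of, initial_gains, n_select):
    """Schematic SGC selection: pop the feature with the largest cached gain,
    re-compute its gain once, and select it only if the fresh value still beats
    the next candidate's cached upper bound; otherwise push it back re-ranked."""
    heap = [(-g, f) for f, g in initial_gains.items()]   # negate for a max-heap
    heapq.heapify(heap)
    model = []
    while heap and len(model) < n_select:
        _, f = heapq.heappop(heap)
        fresh = gain_of(model, f)                        # re-compute only now
        if not heap or fresh >= -heap[0][0]:             # still the best choice
            model.append(f)
        else:
            heapq.heappush(heap, (-fresh, f))            # re-rank and continue
    return model
```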
Then, the SGC algorithm is performed on each of the feature groups and new features are selected from each of them. In other words, the feature space splitting and subspace merging are performed mainly on the feature-to-instance mapping tables. This is a key step that leads to this very efficient PFS algorithm. At the beginning of each round for feature selection, a uniform prior distribution is always assumed for the new CME model. A more precise description of the PFS algorithm is given in Table 1, and it is also graphically illustrated in Figure 1. Given: Feature space F(0) = {f1 (0), f2 (0), …, fN (0)}, step_num = m, select_factor = s 1. Split the feature space into N1 parts {F1 (1), F2 (1), …, FN1 (1)} = split(F(0)) 2. for k=1 to m-1 do //2.1 Feature selection for each feature space Fi (k) do FSi (k) = SGC(Fi (k), s) //2.2 Combine selected features {F1 (k+1), …, FNk+1 (k+1)} = merge(FS1 (k), …, FSNk (k)) 3. Final feature selection & optimization F(m) = merge(FS1 (m-1), …, FSNm-1 (m-1)) FS(m) = SGC(F(m), s) Mfinal = Opt(FS(m)) Table 1. The PFS algorithm. M ) 2 ( 1 F )1( 1 FS )1( 1i FS M M )1( 2i FS M )1( 1 N FS L select Step 1 Step m )1( 1 F )1( 1iF M M )1( 2iF M )1( 1 N F ) 2 ( 1 FS ) 2 ( 2 N FS ) (m F M merge Step 2 ) 0 ( F Split select merge select ) 2 ( 2 N F Mfinal ) (m FS optimize Figure 1. Graphic illustration of PFS algorithm. In Table 1, SGC() invokes the SGC algorithm, and Opt() optimizes feature weights. The functions split() and merge() are used to split and merge the feature space respectively. Two variations of the split() function are investigated in the paper and they are described below: 1. random-split: randomly split a feature space into n- disjoint subspaces, and select an equal amount of features for each feature subspace. 2. dimension-based-split: split a feature space into disjoint subspaces based on fea563 ture dimensions/variables, and select the number of features for each feature subspace with a certain distribution. We use a simple method for merge() in the experiments reported here, i.e., adding together the features from a set of selected feature subspaces. One may image other variations of the split() function, such as allowing overlapping subspaces. Other alternatives for merge() are also possible, such as randomly grouping the selected feature subspaces in the dimension-based split. Due to the limitation of the space, they are not discussed here. This approach can in principle be applied to other machine learning algorithms as well. 4 Experiments with PFS for Edit Region Identification In this section, we will demonstrate the benefits of the PFS algorithm for identifying edit regions. The main reason that we use this task is that the edit region detection task uses features from several levels, including prosodic, lexical, and syntactic ones. It presents a big challenge to find a set of good features from a huge feature space. First we will present the additional features that the PFS algorithm allows us to include. Then, we will briefly introduce the variant of the Switchboard corpus used in the experiments. Finally, we will compare results from two variants of the PFS algorithm. 4.1 Edit Region Identification Task In spoken utterances, disfluencies, such as selfediting, pauses and repairs, are common phenomena. Charniak and Johnson (2001) and Kahn et al. 
(2005) have shown that improved edit region identification leads to better parsing accuracy – they observe a relative reduction in parsing f-score error of 14% (2% absolute) between automatic and oracle edit removal. The focus of our work is to show that our new PFS algorithm enables the exploration of much larger feature spaces for edit identification – including prosodic features, their confidence scores, and various feature combinations – and consequently, it further improves edit region identification. Memory limitation prevents us from including all of these features in experiments using the boosting method described in Johnson and Charniak (2004) and Zhang and Weng (2005). We couldn’t use the new features with the SGC algorithm either for the same reason. The features used here are grouped according to variables, which define feature sub-spaces as in Charniak and Johnson (2001) and Zhang and Weng (2005). In this work, we use a total of 62 variables, which include 16 1 variables from Charniak and Johnson (2001) and Johnson and Charniak (2004), an additional 29 variables from Zhang and Weng (2005), 11 hierarchical POS tag variables, and 8 prosody variables (labels and their confidence scores). Furthermore, we explore 377 combinations of these 62 variables, which include 40 combinations from Zhang and Weng (2005). The complete list of the variables is given in Table 2, and the combinations used in the experiments are given in Table 3. One additional note is that some features are obtained after the rough copy procedure is performed, where we used the same procedure as the one by Zhang and Weng (2005). For a fair comparison with the work by Kahn et al. (2005), word fragment information is retained. 4.2 The Re-segmented Switchboard Data In order to include prosodic features and be able to compare with the state-oft-art, we use the University of Washington re-segmented Switchboard corpus, described in Kahn et al. (2005). In this corpus, the Switchboard sentences were segmented into V5-style sentence-like units (SUs) (LDC, 2004). The resulting sentences fit more closely with the boundaries that can be detected through automatic procedures (e.g., Liu et al., 2005). Because the edit region identification results on the original Switchboard are not directly comparable with the results on the newly segmented data, the state-of-art results reported by Charniak and Johnson (2001) and Johnson and Charniak (2004) are repeated on this new corpus by Kahn et al. (2005). The re-segmented UW Switchboard corpus is labeled with a simplified subset of the ToBI prosodic system (Ostendorf et al., 2001). The three simplified labels in the subset are p, 1 and 4, where p refers to a general class of disfluent boundaries (e.g., word fragments, abruptly shortened words, and hesitation); 4 refers to break level 4, which describes a boundary that has a boundary tone and phrase-final lengthening; 1 Among the original 18 variables, two variables, Pf and Tf are not used in our experiments, because they are mostly covered by the other variables. Partial word flags only contribute to 3 features in the final selected feature list. 564 Categories Variable Name Short Description Orthographic Words W-5, … , W+5 Words at the current position and the left and right 5 positions. 
Partial Word Flags P-3, …, P+3 Partial word flags at the current position and the left and right 3 positions Words Distance DINTJ, DW, DBigram, DTrigram Distance features POS Tags T-5, …, T+5 POS tags at the current position and the left and right 5 positions. Tags Hierarchical POS Tags (HTag) HT-5, …, HT+5 Hierarchical POS tags at the current position and the left and right 5 positions. HTag Rough Copy Nm, Nn, Ni, Nl, Nr, Ti Hierarchical POS rough copy features. Rough Copy Word Rough Copy WNm, WNi, WNl, WNr Word rough copy features. Prosody Labels PL0, …, PL3 Prosody label with largest post possibility at the current position and the right 3 positions. Prosody Prosody Scores PC0, …, PC3 Prosody confidence at the current position and the right 3 positions. Table 2. A complete list of variables used in the experiments. Categories Short Description Number of Combinations Tags HTagComb Combinations among Hierarchical POS Tags 55 Words OrthWordComb Combinations among Orthographic Words 55 Tags WTComb WTTComb Combinations of Orthographic Words and POS Tags; Combination among POS Tags 176 Rough Copy RCComb Combinations of HTag Rough Copy and Word Rough Copy 55 Prosody PComb Combinations among Prosody, and with Words 36 Table 3. All the variable combinations used in the experiments. and 1 is used to include the break index levels BL 0, 1, 2, and 3. Since the majority of the corpus is labeled via automatic methods, the fscores for the prosodic labels are not high. In particular, 4 and p have f-scores of about 70% and 60% respectively (Wong et al., 2005). Therefore, in our experiments, we also take prosody confidence scores into consideration. Besides the symbolic prosody labels, the corpus preserves the majority of the previously annotated syntactic information as well as edit region labels. In following experiments, to make the results comparable, the same data subsets described in Kahn et al. (2005) are used for training, developing and testing. 4.3 Experiments The best result on the UW Switchboard for edit region identification uses a TAG-based approach (Kahn et al., 2005). On the original Switchboard corpus, Zhang and Weng (2005) reported nearly 20% better results using the boosting method with a much larger feature space 2 . To allow comparison with the best past results, we create a new CME baseline with the same set of features as that used in Zhang and Weng (2005). We design a number of experiments to test the following hypotheses: 1. PFS can include a huge number of new features, which leads to an overall performance improvement. 2. Richer context, represented by the combinations of different variables, has a positive impact on performance. 3. When the same feature space is used, PFS performs equally well as the original SGC algorithm. The new models from the PFS algorithm are trained on the training data and tuned on the development data. The results of our experiments on the test data are summarized in Table 4. The first three lines show that the TAG-based approach is outperformed by the new CME baseline (line 3) using all the features in Zhang and Weng (2005). However, the improvement from 2 PFS is not applied to the boosting algorithm at this time because it would require significant changes to the available algorithm. 565 Results on test data Feature Space Codes number of features Precision Recall F-Value TAG-based result on UW-SWBD reported in Kahn et al. 
(2005) 78.20 CME with all the variables from Zhang and Weng (2005) 2412382 89.42 71.22 79.29 CME with all the variables from Zhang and Weng (2005) + post 2412382 87.15 73.78 79.91 +HTag +HTagComb +WTComb +RCComb 17116957 90.44 72.53 80.50 +HTag +HTagComb +WTComb +RCComb +PL0 … PL3 17116981 88.69 74.01 80.69 +HTag +HTagComb +WTComb +RCComb +PComb: without cut 20445375 89.43 73.78 80.86 +HTag +HTagComb +WTComb +RCComb +PComb: cut2 19294583 88.95 74.66 81.18 +HTag +HTagComb +WTComb +RCComb +PComb: cut2 +Gau 19294583 90.37 74.40 81.61 +HTag +HTagComb +WTComb +RCComb +PComb: cut2 +post 19294583 86.88 77.29 81.80 +HTag +HTagComb +WTComb +RCComb +PComb: cut2 +Gau +post 19294583 87.79 77.02 82.05 Table 4. Summary of experimental results with PFS. CME is significantly smaller than the reported results using the boosting method. In other words, using CME instead of boosting incurs a performance hit. The next four lines in Table 4 show that additional combinations of the feature variables used in Zhang and Weng (2005) give an absolute improvement of more than 1%. This improvement is realized through increasing the search space to more than 20 million features, 8 times the maximum size that the original boosting and CME algorithms are able to handle. Table 4 shows that prosody labels alone make no difference in performance. Instead, for each position in the sentence, we compute the entropy of the distribution of the labels’ confidence scores. We normalize the entropy to the range [0, 1], according to the formula below: ( ) ( ) Uniform H p H score − = 1 (4) Including this feature does result in a good improvement. In the table, cut2 means that we equally divide the feature scores into 10 buckets and any number below 0.2 is ignored. The total contribution from the combined feature variables leads to a 1.9% absolute improvement. This confirms the first two hypotheses. When Gaussian smoothing (Chen and Rosenfeld, 1999), labeled as +Gau, and postprocessing (Zhang and Weng, 2005), labeled as +post, are added, we observe 17.66% relative improvement (or 3.85% absolute) over the previous best f-score of 78.2 from Kahn et al. (2005). To test hypothesis 3, we are constrained to the feature spaces that both PFS and SGC algorithms can process. Therefore, we take all the variables from Zhang and Weng (2005) as the feature space for the experiments. The results are listed in Table 5. We observed no f-score degradation with PFS. Surprisingly, the total amount of time PFS spends on selecting its best features is smaller than the time SGC uses in selecting its best features. This confirms our hypothesis 3. Results on test data Split / Non-split Precision Recall F-Value non-split 89.42 71.22 79.29 split by 4 parts 89.67 71.68 79.67 split by 10 parts 89.65 71.29 79.42 Table 5. Comparison between PFS and SGC with all the variables from Zhang and Weng (2005). The last set of experiments for edit identification is designed to find out what split strategies PFS algorithm should adopt in order to obtain good results. Two different split strategies are tested here. In all the experiments reported so far, we use 10 random splits, i.e., all the features are randomly assigned to 10 subsets of equal size. We may also envision a split strategy that divides the features based on feature variables (or dimensions), such as word-based, tag-based, etc. The four dimensions used in the experiments are listed as the top categories in Tables 2 and 3, and the results are given in Table 6. 
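Before turning to the split-strategy comparison in Table 6, the fragment below spells out the prosody-confidence feature of equation (4): the entropy of the label posteriors at a position is normalized by the entropy of the uniform distribution, and the resulting score is bucketed into tenths, with values below 0.2 discarded in the cut2 setting. The exact bucketing and the way a discarded score is represented are assumptions made for illustration.

```python
import math

def prosody_confidence(probs, cutoff=0.2, n_buckets=10):
    """Equation (4): score = 1 - H(p)/H(uniform), computed over the posterior
    probabilities of the prosody labels at one position, then bucketed;
    scores under the cutoff are dropped (the 'cut2' setting)."""
    probs = [p for p in probs if p > 0.0]
    h = -sum(p * math.log(p) for p in probs)
    h_uniform = math.log(len(probs)) if len(probs) > 1 else 1.0
    score = 1.0 - h / h_uniform
    bucket = math.floor(score * n_buckets) / n_buckets   # e.g. 0.64 -> 0.6
    return bucket if bucket >= cutoff else None          # None: feature unset

# Invented posteriors: a confident labelling vs. a near-uniform one.
print(prosody_confidence([0.90, 0.05, 0.05]))   # 0.6
print(prosody_confidence([0.34, 0.33, 0.33]))   # None (below the cutoff)
```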
Results on test data Split Criteria Allocation Criteria Precision Recall F-Value Random Uniform 88.95 74.66 81.18 Dimension Uniform 89.78 73.42 80.78 Dimension Prior 89.78 74.01 81.14 Table 6. Comparison of split strategies using feature space +HTag+HTagComb+WTComb+RCComb+PComb: cut2 In Table 6, the first two columns show criteria for splitting feature spaces and the number of features to be allocated for each group. Random and Dimension mean random-split and dimension-based-split, respectively. When the criterion 566 is Random, the features are allocated to different groups randomly, and each group gets the same number of features. In the case of dimensionbased split, we determine the number of features allocated for each dimension in two ways. When the split is Uniform, the same number of features is allocated for each dimension. When the split is Prior, the number of features to be allocated in each dimension is determined in proportion to the importance of each dimension. To determine the importance, we use the distribution of the selected features from each dimension in the model “+ HTag + HTagComb + WTComb + RCComb + PComb: cut2”, namely: Word-based 15%, Tag-based 70%, RoughCopy-based 7.5% and Prosody-based 7.5%3. From the results, we can see no significant difference between the random-split and the dimension-based-split. To see whether the improvements are translated into parsing results, we have conducted one more set of experiments on the UW Switchboard corpus. We apply the latest version of Charniak’s parser (2005-08-16) and the same procedure as Charniak and Johnson (2001) and Kahn et al. (2005) to the output from our best edit detector in this paper. To make it more comparable with the results in Kahn et al. (2005), we repeat the same experiment with the gold edits, using the latest parser. Both results are listed in Table 7. The difference between our best detector and the gold edits in parsing (1.51%) is smaller than the difference between the TAG-based detector and the gold edits (1.9%). In other words, if we use the gold edits as the upper bound, we see a relative error reduction of 20.5%. Parsing F-score Methods Edit F-score Reported in Kahn et al. (2005) Latest Charniak Parser Diff. with Oracle Oracle 100 86.9 87.92 -- Kahn et al. (2005) 78.2 85.0 -- 1.90 PFS best results 82.05 -- 86.41 1.51 Table 7. Parsing F-score various different edit region identification results. 3 It is a bit of cheating to use the distribution from the selected model. However, even with this distribution, we do not see any improvement over the version with randomsplit. 5 Conclusion This paper presents our progressive feature selection algorithm that greatly extends the feature space for conditional maximum entropy modeling. The new algorithm is able to select features from feature space in the order of tens of millions in practice, i.e., 8 times the maximal size previous algorithms are able to process, and unlimited space size in theory. Experiments on edit region identification task have shown that the increased feature space leads to 17.66% relative improvement (or 3.85% absolute) over the best result reported by Kahn et al. (2005), and 10.65% relative improvement (or 2.14% absolute) over the new baseline SGC algorithm with all the variables from Zhang and Weng (2005). We also show that symbolic prosody labels together with confidence scores are useful in edit region identification task. 
In addition, the improvements in the edit identification lead to a relative 20% error reduction in parsing disfluent sentences when gold edits are used as the upper bound. Acknowledgement This work is partly sponsored by a NIST ATP funding. The authors would like to express their many thanks to Mari Ostendorf and Jeremy Kahn for providing us with the re-segmented UW Switchboard Treebank and the corresponding prosodic labels. Our thanks also go to Jeff Russell for his careful proof reading, and the anonymous reviewers for their useful comments. All the remaining errors are ours. References Adam L. Berger, Stephen A. Della Pietra, and Vincent J. Della Pietra. 1996. A Maximum Entropy Approach to Natural Language Processing. Computational Linguistics, 22 (1): 39-71. Eugene Charniak and Mark Johnson. 2001. Edit Detection and Parsing for Transcribed Speech. In Proceedings of the 2nd Meeting of the North American Chapter of the Association for Computational Linguistics, 118-126, Pittsburgh, PA, USA. Eugene Charniak and Mark Johnson. 2005. Coarse-tofine n-best Parsing and MaxEnt Discriminative Reranking. In Proceedings of the 43rd Annual Meeting of Association for Computational Linguistics, 173-180, Ann Arbor, MI, USA. Stanley Chen and Ronald Rosenfeld. 1999. A Gaussian Prior for Smoothing Maximum Entropy Mod567 els. Technical Report CMUCS-99-108, Carnegie Mellon University. John N. Darroch and D. Ratcliff. 1972. Generalized Iterative Scaling for Log-Linear Models. In Annals of Mathematical Statistics, 43(5): 1470-1480. Stephen A. Della Pietra, Vincent J. Della Pietra, and John Lafferty. 1997. Inducing Features of Random Fields. In IEEE Transactions on Pattern Analysis and Machine Intelligence, 19(4): 380-393. Joshua Goodman. 2002. Sequential Conditional Generalized Iterative Scaling. In Proceedings of the 40th Annual Meeting of Association for Computational Linguistics, 9-16, Philadelphia, PA, USA. Mark Johnson, and Eugene Charniak. 2004. A TAGbased noisy-channel model of speech repairs. In Proceedings of the 42nd Annual Meeting of the Association for Computational Linguistics, 33-39, Barcelona, Spain. Jeremy G. Kahn, Matthew Lease, Eugene Charniak, Mark Johnson, and Mari Ostendorf. 2005. Effective Use of Prosody in Parsing Conversational Speech. In Proceedings of the 2005 Conference on Empirical Methods in Natural Language Processing, 233-240, Vancouver, Canada. Rob Koeling. 2000. Chunking with Maximum Entropy Models. In Proceedings of the CoNLL-2000 and LLL-2000, 139-141, Lisbon, Portugal. LDC. 2004. Simple MetaData Annotation Specification. Technical Report of Linguistic Data Consortium. (http://www.ldc.upenn.edu/Projects/MDE). Yang Liu, Elizabeth Shriberg, Andreas Stolcke, Barbara Peskin, Jeremy Ang, Dustin Hillard, Mari Ostendorf, Marcus Tomalin, Phil Woodland and Mary Harper. 2005. Structural Metadata Research in the EARS Program. In Proceedings of the 30th ICASSP, volume V, 957-960, Philadelphia, PA, USA. Robert Malouf. 2002. A Comparison of Algorithms for Maximum Entropy Parameter Estimation. In Proceedings of the 6th Conference on Natural Language Learning (CoNLL-2002), 49-55, Taibei, Taiwan. Mari Ostendorf, Izhak Shafran, Stefanie ShattuckHufnagel, Leslie Charmichael, and William Byrne. 2001. A Prosodically Labeled Database of Spontaneous Speech. In Proceedings of the ISCA Workshop of Prosody in Speech Recognition and Understanding, 119-121, Red Bank, NJ, USA. Adwait Ratnaparkhi, Jeff Reynar and Salim Roukos. 1994. A Maximum Entropy Model for Prepositional Phrase Attachment. 
In Proceedings of the ARPA Workshop on Human Language Technology, 250-255, Plainsboro, NJ, USA. Jeffrey C. Reynar and Adwait Ratnaparkhi. 1997. A Maximum Entropy Approach to Identifying Sentence Boundaries. In Proceedings of the 5th Conference on Applied Natural Language Processing, 16-19, Washington D.C., USA. Stefan Riezler and Alexander Vasserman. 2004. Incremental Feature Selection and L1 Regularization for Relaxed Maximum-entropy Modeling. In Proceedings of the 2004 Conference on Empirical Methods in Natural Language Processing, 174181, Barcelona, Spain. Robert E. Schapire and Yoram Singer, 1999. Improved Boosting Algorithms Using Confidencerated Predictions. Machine Learning, 37(3): 297336. Elizabeth Shriberg. 1994. Preliminaries to a Theory of Speech Disfluencies. Ph.D. Thesis, University of California, Berkeley. Vladimir Vapnik. 1995. The Nature of Statistical Learning Theory. Springer, New York, NY, USA. Darby Wong, Mari Ostendorf, Jeremy G. Kahn. 2005. Using Weakly Supervised Learning to Improve Prosody Labeling. Technical Report UWEETR2005-0003, University of Washington. Qi Zhang and Fuliang Weng. 2005. Exploring Features for Identifying Edited Regions in Disfluent Sentences. In Proc. of the 9th International Workshop on Parsing Technologies, 179-185, Vancouver, Canada. Yaqian Zhou, Fuliang Weng, Lide Wu, and Hauke Schmidt. 2003. A Fast Algorithm for Feature Selection in Conditional Maximum Entropy Modeling. In Proceedings of the 2003 Conference on Empirical Methods in Natural Language Processing, 153-159, Sapporo, Japan. 568
Proceedings of the 21st International Conference on Computational Linguistics and 44th Annual Meeting of the ACL, pages 569–576, Sydney, July 2006. c⃝2006 Association for Computational Linguistics Annealing Structural Bias in Multilingual Weighted Grammar Induction∗ Noah A. Smith and Jason Eisner Department of Computer Science / Center for Language and Speech Processing Johns Hopkins University, Baltimore, MD 21218 USA {nasmith,jason}@cs.jhu.edu Abstract We first show how a structural locality bias can improve the accuracy of state-of-the-art dependency grammar induction models trained by EM from unannotated examples (Klein and Manning, 2004). Next, by annealing the free parameter that controls this bias, we achieve further improvements. We then describe an alternative kind of structural bias, toward “broken” hypotheses consisting of partial structures over segmented sentences, and show a similar pattern of improvement. We relate this approach to contrastive estimation (Smith and Eisner, 2005a), apply the latter to grammar induction in six languages, and show that our new approach improves accuracy by 1–17% (absolute) over CE (and 8–30% over EM), achieving to our knowledge the best results on this task to date. Our method, structural annealing, is a general technique with broad applicability to hidden-structure discovery problems. 1 Introduction Inducing a weighted context-free grammar from flat text is a hard problem. A common starting point for weighted grammar induction is the Expectation-Maximization (EM) algorithm (Dempster et al., 1977; Baker, 1979). EM’s mediocre performance (Table 1) reflects two problems. First, it seeks to maximize likelihood, but a grammar that makes the training data likely does not necessarily assign a linguistically defensible syntactic structure. Second, the likelihood surface is not globally concave, and learners such as the EM algorithm can get trapped on local maxima (Charniak, 1993). We seek here to capitalize on the intuition that, at least early in learning, the learner should search primarily for string-local structure, because most structure is local.1 By penalizing dependencies between two words that are farther apart in the string, we obtain consistent improvements in accuracy of the learned model (§3). We then explore how gradually changing δ over time affects learning (§4): we start out with a ∗This work was supported by a Fannie and John Hertz Foundation fellowship to the first author and NSF ITR grant IIS-0313193 to the second author. The views expressed are not necessarily endorsed by the sponsors. We thank three anonymous COLING-ACL reviewers for comments. 1To be concrete, in the corpora tested here, 95% of dependency links cover ≤4 words (English, Bulgarian, Portuguese), ≤5 words (German, Turkish), ≤6 words (Mandarin). model selection among values of λ and Θ(0) worst unsup. sup. oracle German 19.8 19.8 54.4 54.4 English 21.8 41.6 41.6 42.0 Bulgarian 24.7 44.6 45.6 45.6 Mandarin 31.8 37.2 50.0 50.0 Turkish 32.1 41.2 48.0 51.4 Portuguese 35.4 37.4 42.3 43.0 Table 1: Baseline performance of EM-trained dependency parsing models: F1 on non-$ attachments in test data, with various model selection conditions (3 initializers × 6 smoothing values). The languages are listed in decreasing order by the training set size. Experimental details can be found in the appendix. strong preference for short dependencies, then relax the preference. The new approach, structural annealing, often gives superior performance. 
An alternative structural bias is explored in §5. This approach views a sentence as a sequence of one or more yields of separate, independent trees. The points of segmentation are a hidden variable, and during learning all possible segmentations are entertained probabilistically. This allows the learner to accept hypotheses that explain the sentences as independent pieces. In §6 we briefly review contrastive estimation (Smith and Eisner, 2005a), relating it to the new method, and show its performance alone and when augmented with structural bias. 2 Task and Model In this paper we use a simple unlexicalized dependency model due to Klein and Manning (2004). The model is a probabilistic head automaton grammar (Alshawi, 1996) with a “split” form that renders it parseable in cubic time (Eisner, 1997). Let x = ⟨x1, x2, ..., xn⟩be the sentence. x0 is a special “wall” symbol, $, on the left of every sentence. A tree y is defined by a pair of functions yleft and yright (both {0, 1, 2, ..., n} →2{1,2,...,n}) that map each word to its sets of left and right dependents, respectively. The graph is constrained to be a projective tree rooted at $: each word except $ has a single parent, and there are no cycles 569 or crossing dependencies.2 yleft(0) is taken to be empty, and yright(0) contains the sentence’s single head. Let yi denote the subtree rooted at position i. The probability P(yi | xi) of generating this subtree, given its head word xi, is defined recursively: Y D∈{left,right} pstop(stop | xi, D, [yD(i) = ∅]) (1) × Y j∈yD(i) pstop(¬stop | xi, D, firsty(j)) ×pchild(xj | xi, D) × P(yj | xj) where firsty(j) is a predicate defined to be true iff xj is the closest child (on either side) to its parent xi. The probability of the entire tree is given by pΘ(x, y) = P(y0 | $). The parameters Θ are the conditional distributions pstop and pchild. Experimental baseline: EM. Following common practice, we always replace words by part-ofspeech (POS) tags before training or testing. We used the EM algorithm to train this model on POS sequences in six languages. Complete experimental details are given in the appendix. Performance with unsupervised and supervised model selection across different λ values in add-λ smoothing and three initializers Θ(0) is reported in Table 1. The supervised-selected model is in the 40–55% F1-accuracy range on directed dependency attachments. (Here F1 ≈precision ≈recall; see appendix.) Supervised model selection, which uses a small annotated development set, performs almost as well as the oracle, but unsupervised model selection, which selects the model that maximizes likelihood on an unannotated development set, is often much worse. 3 Locality Bias among Trees Hidden-variable estimation algorithms— including EM—typically work by iteratively manipulating the model parameters Θ to improve an objective function F(Θ). EM explicitly alternates between the computation of a posterior distribution over hypotheses, pΘ(y | x) (where y is any tree with yield x), and computing a new parameter estimate Θ.3 2A projective parser could achieve perfect accuracy on our English and Mandarin datasets, > 99% on Bulgarian, Turkish, and Portuguese, and > 98% on German. 3For weighted grammar-based models, the posterior does not need to be explicitly represented; instead expectations under pΘ are used to compute updates to Θ. 
0 0.1 0.2 0.3 0.4 0.5 0.6 0.7 -1 -0.8 -0.6 -0.4 -0.2 0 0.2 δ F (EM baseline) German English Bulgarian Mandarin Turkish Portuguese Figure 1: Test-set F1 performance of models trained by EM with a locality bias at varying δ. Each curve corresponds to a different language and shows performance of supervised model selection within a given δ, across λ and Θ(0) values. (See Table 3 for performance of models selected across δs.) We decode with δ = 0, though we found that keeping the training-time value of δ would have had almost no effect. The EM baseline corresponds to δ = 0. One way to bias a learner toward local explanations is to penalize longer attachments. This was done for supervised parsing in different ways by Collins (1997), Klein and Manning (2003), and McDonald et al. (2005), all of whom considered intervening material or coarse distance classes when predicting children in a tree. Eisner and Smith (2005) achieved speed and accuracy improvements by modeling distance directly in a ML-estimated (deficient) generative model. Here we use string distance to measure the length of a dependency link and consider the inclusion of a sum-of-lengths feature in the probabilistic model, for learning only. Keeping our original model, we will simply multiply into the probability of each tree another factor that penalizes long dependencies, giving: p′ Θ(x, y) ∝pΘ(x, y)·e    δ n X i=1 X j∈y(i) |i −j|     (2) where y(i) = yleft(i) ∪yright(i). Note that if δ = 0, we have the original model. As δ →−∞, the new model p′ Θ will favor parses with shorter dependencies. The dynamic programming algorithms remain the same as before, with the appropriate eδ|i−j| factor multiplied in at each attachment between xi and xj. Note that when δ = 0, p′ Θ ≡pΘ. Experiment. We applied a locality bias to the same dependency model by setting δ to different 570 0 0.1 0.2 0.3 0.4 0.5 0.6 0.7 -1 -0.5 0 0.5 1 1.5 δ F German Bulgarian Turkish Figure 2: Test-set F1 performance of models trained by EM with structural annealing on the distance weight δ. Here we show performance with add-10 smoothing, the all-zero initializer, for three languages with three different initial values δ0. Time progresses from left to right. Note that it is generally best to start at δ0 ≪0; note also the importance of picking the right point on the curve to stop. See Table 3 for performance of models selected across smoothing, initialization, starting, and stopping choices, in all six languages. values in [−1, 0.2] (see Eq. 2). The same initializers Θ(0) and smoothing conditions were tested. Performance of supervised model selection among models trained at different δ values is plotted in Fig. 1. When a model is selected across all conditions (3 initializers × 6 smoothing values × 7 δs) using annotated development data, performance is notably better than the EM baseline using the same selection procedure (see Table 3, second column). 4 Structural Annealing The central idea of this paper is to gradually change (anneal) the bias δ. Early in learning, local dependencies are emphasized by setting δ ≪0. Then δ is iteratively increased and training repeated, using the last learned model to initialize. This idea bears a strong similarity to deterministic annealing (DA), a technique used in clustering and classification to smooth out objective functions that are piecewise constant (hence discontinuous) or bumpy (non-concave) (Rose, 1998; Ueda and Nakano, 1998). 
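Concretely, Eq. (2) multiplies the probability of each candidate tree by exp(δ Σ|i−j|), and structural annealing retrains the model at successively larger δ, seeding each run with the previously learned parameters. The sketch below illustrates both pieces; em_train, model.tree_prob, and the parent-array encoding of trees are assumptions made only for this illustration, not the actual implementation.

import math

def length_penalty(parents, delta):
    # parents[i] = position of the head of word i; position 0 is the wall symbol $,
    # whose single attachment carries no penalty (Eq. 2 sums only over heads x_1..x_n)
    total_len = sum(abs(i - parents[i])
                    for i in range(1, len(parents)) if parents[i] != 0)
    return math.exp(delta * total_len)

def biased_score(model, sentence, parents, delta):
    # unnormalized p'_Theta(x, y); in the dynamic program the same exp(delta*|i-j|)
    # factor is multiplied in at each attachment between x_i and x_j
    return model.tree_prob(sentence, parents) * length_penalty(parents, delta)

def anneal_delta(model, corpus, delta0=-0.4, step=0.05, delta_f=3.0):
    # one of the schedules from Section 4: train to convergence at each delta epoch,
    # then relax the locality bias; the stopping point delta_f is itself subject to
    # model selection in the paper
    delta = delta0
    while delta <= delta_f:
        model = em_train(model, corpus, delta)   # hypothetical EM trainer, not shown
        delta += step
    return model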
In unsupervised learning, DA iteratively re-estimates parameters like EM, but begins by requiring that the entropy of the posterior pΘ(y | x) be maximal, then gradually relaxes this entropy constraint. Since entropy is concave in Θ, the initial task is easy (maximize a concave, continuous function). At each step the optimization task becomes more difficult, but the initializer is given by the previous step and, in practice, tends to be close to a good local maximum of the more difficult objective. By the last iteration the objective is the same as in EM, but the annealed search process has acted like a good initializer. This method was applied with some success to grammar induction models by Smith and Eisner (2004). In this work, instead of imposing constraints on the entropy of the model, we manipulate bias toward local hypotheses. As δ increases, we penalize long dependencies less. We call this structural annealing, since we are varying the strength of a soft constraint (bias) on structural hypotheses. In structural annealing, the final objective would be the same as EM if our final δ, δf = 0, but we found that annealing farther (δf > 0) works much better.4 Experiment: Annealing δ. We experimented with annealing schedules for δ. We initialized at δ0 ∈{−1, −0.4, −0.2}, and increased δ by 0.1 (in the first case) or 0.05 (in the others) up to δf = 3. Models were trained to convergence at each δepoch. Model selection was applied over the same initialization and regularization conditions as before, δ0, and also over the choice of δf, with stopping allowed at any stage along the δ trajectory. Trajectories for three languages with three different δ0 values are plotted in Fig. 2. Generally speaking, δ0 ≪0 performs better. There is consistently an early increase in performance as δ increases, but the stopping δf matters tremendously. Selected annealed-δ models surpass EM in all six languages; see the third column of Table 3. Note that structural annealing does not always outperform fixed-δ training (English and Portuguese). This is because we only tested a few values of δ0, since annealing requires longer runtime. 5 Structural Bias via Segmentation A related way to focus on local structure early in learning is to broaden the set of hypotheses to include partial parse structures. If x = ⟨x1, x2, ..., xn⟩, the standard approach assumes that x corresponds to the vertices of a single dependency tree. Instead, we entertain every hypothesis in which x is a sequence of yields from separate, independently-generated trees. For example, ⟨x1, x2, x3⟩is the yield of one tree, ⟨x4, x5⟩is the 4The reader may note that δf > 0 actually corresponds to a bias toward longer attachments. A more apt description in the context of annealing is to say that during early stages the learner starts liking local attachments too much, and we need to exaggerate δ to “coax” it to new hypotheses. See Fig. 2. 571 0 0.1 0.2 0.3 0.4 0.5 0.6 0.7 -1.5 -1 -0.5 0 0.5 β F German Bulgarian Turkish Figure 3: Test-set F1 performance of models trained by EM with structural annealing on the breakage weight β. Here we show performance with add-10 smoothing, the all-zero initializer, for three languages with three different initial values β0. Time progresses from left (large β) to right. See Table 3 for performance of models selected across smoothing, initialization, and stopping choices, in all six languages. yield of a second, and ⟨x6, ..., xn⟩is the yield of a third. One extreme hypothesis is that x is n singlenode trees. 
At the other end of the spectrum is the original set of hypotheses—full trees on x. Each has a nonzero probability. Segmented analyses are intermediate representations that may be helpful for a learner to use to formulate notions of probable local structure, without committing to full trees.5 We only allow unobserved breaks, never positing a hard segmentation of the training sentences. Over time, we increase the bias against broken structures, forcing the learner to commit most of its probability mass to full trees. 5.1 Vine Parsing At first glance broadening the hypothesis space to entertain all 2n−1 possible segmentations may seem expensive. In fact the dynamic programming computation is almost the same as summing or maximizing over connected dependency trees. For the latter, we use an inside-outside algorithm that computes a score for every parse tree by computing the scores of items, or partial structures, through a bottom-up process. Smaller items are built first, then assembled using a set of rules defining how larger items can be built.6 Now note that any sequence of partial trees over x can be constructed by combining the same items into trees. The only difference is that we 5See also work on partial parsing as a task in its own right: Hindle (1990) inter alia. 6See Eisner and Satta (1999) for the relevant algorithm used in the experiments. are willing to consider unassembled sequences of these partial trees as hypotheses, in addition to the fully connected trees. One way to accomplish this in terms of yright(0) is to say that the root, $, is allowed to have multiple children, instead of just one. Here, these children are independent of each other (e.g., generated by a unigram Markov model). In supervised dependency parsing, Eisner and Smith (2005) showed that imposing a hard constraint on the whole structure— specifically that each non-$ dependency arc cross fewer than k words—can give guaranteed O(nk2) runtime with little to no loss in accuracy (for simple models). This constraint could lead to highly contrived parse trees, or none at all, for some sentences—both are avoided by the allowance of segmentation into a sequence of trees (each attached to $). The construction of the “vine” (sequence of $’s children) takes only O(n) time once the chart has been assembled. Our broadened hypothesis model is a probabilistic vine grammar with a unigram model over $’s children. We allow (but do not require) segmentation of sentences, where each independent child of $ is the root of one of the segments. We do not impose any constraints on dependency length. 5.2 Modeling Segmentation Now the total probability of an n-length sentence x, marginalizing over its hidden structures, sums up not only over trees, but over segmentations of x. For completeness, we must include a probability model over the number of trees generated, which could be anywhere from 1 to n. The model over the number T of trees given a sentence of length n will take the following log-linear form: P(T = t | n) = etβ , n X i=1 eiβ where β ∈R is the sole parameter. When β = 0, every value of T is equally likely. For β ≪0, the model prefers larger structures with few breaks. At the limit (β →−∞), we achieve the standard learning setting, where the model must explain x using a single tree. We start however at β ≫0, where the model prefers smaller trees with more breaks, in the limit preferring each word in x to be its own tree. 
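For reference, this distribution over the number of trees is P(T = t | n) = e^{tβ} / Σ_{i=1}^{n} e^{iβ}. A small numerical sketch, included only to show the effect of β:

import math

def p_num_trees(t, n, beta):
    # probability of segmenting an n-word sentence into t independent trees
    z = sum(math.exp(i * beta) for i in range(1, n + 1))
    return math.exp(t * beta) / z

# beta >> 0 prefers one tree per word, beta = 0 is uniform over 1..n trees,
# and beta -> -infinity forces a single connected tree (the standard setting):
print([round(p_num_trees(t, 5, 0.5), 3) for t in range(1, 6)])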
We could describe “brokenness” as a feature in the model whose weight, β, is chosen extrinsically (and time-dependently), rather than empirically—just as was done with δ. 572 model selection among values of σ2 and Θ(0) worst unsup. sup. oracle DORT1 32.5 59.3 63.4 63.4 Ger. LENGTH 30.5 56.4 57.3 57.8 DORT1 20.9 56.6 57.4 57.4 Eng. LENGTH 29.1 37.2 46.2 46.2 DORT1 19.4 26.0 40.5 43.1 Bul. LENGTH 25.1 35.3 38.3 38.3 DORT1 9.4 24.2 41.1 41.1 Man. LENGTH 13.7 17.9 26.2 26.2 DORT1 7.3 38.6 58.2 58.2 Tur. LENGTH 21.5 34.1 55.5 55.5 DORT1 35.0 59.8 71.8 71.8 Por. LENGTH 30.8 33.6 33.6 33.6 Table 2: Performance of CE on test data, for different neighborhoods and with different levels of regularization. Boldface marks scores better than EM-trained models selected the same way (Table 1). The score is the F1 measure on non-$ attachments. Annealing β resembles the popular bootstrapping technique (Yarowsky, 1995), which starts out aiming for high precision, and gradually improves coverage over time. With strong bias (β ≫0), we seek a model that maintains high dependency precision on (non-$) attachments by attaching most tags to $. Over time, as this is iteratively weakened (β →−∞), we hope to improve coverage (dependency recall). Bootstrapping was applied to syntax learning by Steedman et al. (2003). Our approach differs in being able to remain partly agnostic about each tag’s true parent (e.g., by giving 50% probability to attaching to $), whereas Steedman et al. make a hard decision to retrain on a whole sentence fully or leave it out fully. In earlier work, Brill and Marcus (1992) adopted a “local first” iterative merge strategy for discovering phrase structure. Experiment: Annealing β. We experimented with different annealing schedules for β. The initial value of β, β0, was one of {−1 2, 0, 1 2}. After EM training, β was diminished by 1 10; this was repeated down to a value of βf = −3. Performance after training at each β value is shown in Fig. 3.7 We see that, typically, there is a sharp increase in performance somewhere during training, which typically lessens as β →−∞. Starting β too high can also damage performance. This method, then, 7Performance measures are given using a full parser that finds the single best parse of the sentence with the learned parsing parameters. Had we decoded with a vine parser, we would see a precision↘, recall↗curve as β decreased. is not robust to the choice of λ, β0, or βf, nor does it always do as well as annealing δ, although considerable gains are possible; see the fifth column of Table 3. By testing models trained with a fixed value of β (for values in [−1, 1]), we ascertained that the performance improvement is due largely to annealing, not just the injection of segmentation bias (fourth vs. fifth column of Table 3).8 6 Comparison and Combination with Contrastive Estimation Contrastive estimation (CE) was recently introduced (Smith and Eisner, 2005a) as a class of alternatives to the likelihood objective function locally maximized by EM. CE was found to outperform EM on the task of focus in this paper, when applied to English data (Smith and Eisner, 2005b). Here we review the method briefly, show how it performs across languages, and demonstrate that it can be combined effectively with structural bias. 
Contrastive training defines for each example xi a class of presumably poor, but similar, instances called the “neighborhood,” N(xi), and seeks to maximize CN(Θ) = X i log pΘ(xi | N(xi)) = X i log P y pΘ(xi, y) P x′∈N(xi) P y pΘ(x′, y) At this point we switch to a log-linear (rather than stochastic) parameterization of the same weighted grammar, for ease of numerical optimization. All this means is that Θ (specifically, pstop and pchild in Eq. 1) is now a set of nonnegative weights rather than probabilities. Neighborhoods that can be expressed as finitestate lattices built from xi were shown to give significant improvements in dependency parser quality over EM. Performance of CE using two of those neighborhoods on the current model and datasets is shown in Table 2.9 0-mean diagonal Gaussian smoothing was applied, with different variances, and model selection was applied over smoothing conditions and the same initializers as 8In principle, segmentation can be combined with the locality bias in §3 (δ). In practice, we found that this usually under-performed the EM baseline. 9We experimented with DELETE1, TRANSPOSE1, DELETEORTRANSPOSE1, and LENGTH. To conserve space we show only the latter two, which tend to perform best. 573 EM fixed δ annealed δ fixed β annealed β CE fixed δ + CE δ δ0 →δf β β0 →βf N N, δ German 54.4 61.3 0.2 70.0 -0.4 →0.4 66.2 0.4 68.9 0.5 →-2.4 63.4 DORT1 63.8 DORT1, -0.2 English 41.6 61.8 -0.6 53.8 -0.4 →0.3 55.6 0.2 58.4 0.5 →0.0 57.4 DORT1 63.5 DORT1, -0.4 Bulgarian 45.6 49.2 -0.2 58.3 -0.4 →0.2 47.3 -0.2 56.5 0 →-1.7 40.5 DORT1 – Mandarin 50.0 51.1 -0.4 58.0 -1.0 →0.2 38.0 0.2 57.2 0.5 →-1.4 43.4 DEL1 – Turkish 48.0 62.3 -0.2 62.4 -0.2 →-0.15 53.6 -0.2 59.4 0.5 →-0.7 58.2 DORT1 61.8 DORT1, -0.6 Portuguese 42.3 50.4 -0.4 50.2 -0.4 →-0.1 51.5 0.2 62.7 0.5 →-0.5 71.8 DORT1 72.6 DORT1, -0.2 Table 3: Summary comparing models trained in a variety of ways with some relevant hyperparameters. Supervised model selection was applied in all cases, including EM (see the appendix). Boldface marks the best performance overall and trials that this performance did not significantly surpass under a sign test (i.e., p ̸< 0.05). The score is the F1 measure on non-$ attachments. The fixed δ + CE condition was tested only for languages where CE improved over EM. before. Four of the languages have at least one effective CE condition, supporting our previous English results (Smith and Eisner, 2005b), but CE was harmful for Bulgarian and Mandarin. Perhaps better neighborhoods exist for these languages, or there is some ideal neighborhood that would perform well for all languages. Our approach of allowing broken trees (§5) is a natural extension of the CE framework. Contrastive estimation views learning as a process of moving posterior probability mass from (implicit) negative examples to (explicit) positive examples. The positive evidence, as in MLE, is taken to be the observed data. As originally proposed, CE allowed a redefinition of the implicit negative evidence from “all other sentences” (as in MLE) to “sentences like xi, but perturbed.” Allowing segmentation of the training sentences redefines the positive and negative evidence. Rather than moving probability mass only to full analyses of the training example xi, we also allow probability mass to go to partial analyses of xi. By injecting a bias (δ ̸= 0 or β > −∞) among tree hypotheses, however, we have gone beyond the CE framework. 
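For reference, the contrastive objective C_N(Θ) defined above amounts, for each training sentence, to the log of its total tree score divided by the total tree score of its neighborhood. The sketch below assumes an inside_score routine returning Σ_y p_Θ(x, y) via the inside algorithm and a neighborhood generator for N(x); both names are stand-ins, not the actual implementation.

import math

def contrastive_log_likelihood(model, corpus, neighborhood, inside_score):
    total = 0.0
    for x in corpus:
        numerator = inside_score(model, x)                   # sum over all trees of x
        denominator = sum(inside_score(model, xp) for xp in neighborhood(x))
        total += math.log(numerator) - math.log(denominator)
    return total

In practice the neighborhoods are represented as finite-state lattices, so the denominator need not be computed by explicit enumeration as in this sketch.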
We have added features to the tree model (dependency length-sum, number of breaks), whose weights we extrinsically manipulate over time to impose locality bias CN and improve search on CN. Another idea, not explored here, is to change the contents of the neighborhood N over time. Experiment: Locality Bias within CE. We combined CE with a fixed-δ locality bias for neighborhoods that were successful in the earlier CE experiment, namely DELETEORTRANSPOSE1 for German, English, Turkish, and Portuguese. Our results, shown in the seventh column of Table 3, show that, in all cases except Turkish, the combination improves over either technique on its own. We leave exploration of structural annealing with CE to future work. Experiment: Segmentation Bias within CE. For (language, N) pairs where CE was effective, we trained models using CE with a fixedβ segmentation model. Across conditions (β ∈ [−1, 1]), these models performed very badly, hypothesizing extremely local parse trees: typically over 90% of dependencies were length 1 and pointed in the same direction, compared with the 60–70% length-1 rate seen in gold standards. To understand why, consider that the CE goal is to maximize the score of a sentence and all its segmentations while minimizing the scores of neighborhood sentences and their segmentations. An ngram model can accomplish this, since the same n-grams are present in all segmentations of x, and (some) different n-grams appear in N(x) (for LENGTH and DELETEORTRANSPOSE1). A bigram-like model that favors monotone branching, then, is not a bad choice for a CE learner that must account for segmentations of x and N(x). Why doesn’t CE without segmentation resort to n-gram-like models? Inspection of models trained using the standard CE method (no segmentation) with transposition-based neighborhoods TRANSPOSE1 and DELETEORTRANSPOSE1 did have high rates of length-1 dependencies, while the poorly-performing DELETE1 models found low length-1 rates. This suggests that a bias toward locality (“n-gram-ness”) is built into the former neighborhoods, and may partly explain why CE works when it does. We achieved a similar locality bias in the likelihood framework when we broadened the hypothesis space, but doing so under CE over-focuses the model on local structures. 574 7 Error Analysis We compared errors made by the selected EM condition with the best overall condition, for each language. We found that the number of corrected attachments always outnumbered the number of new errors by a factor of two or more. Further, the new models are not getting better by merely reversing the direction of links made by EM; undirected accuracy also improved significantly under a sign test (p < 10−6), across all six languages. While the most common corrections were to nouns, these account for only 25–41% of corrections, indicating that corrections are not “all of the same kind.” Finally, since more than half of corrections in every language involved reattachment to a noun or a verb (content word), we believe the improved models to be getting closer than EM to the deeper semantic relations between words that, ideally, syntactic models should uncover. 8 Future Work One weakness of all recent weighted grammar induction work—including Klein and Manning (2004), Smith and Eisner (2005b), and the present paper—is a sensitivity to hyperparameters, including smoothing values, choice of N (for CE), and annealing schedules—not to mention initialization. This is quite observable in the results we have presented. 
An obstacle for unsupervised learning in general is the need for automatic, efficient methods for model selection. For annealing, inspiration may be drawn from continuation methods; see, e.g., Elidan and Friedman (2005). Ideally one would like to select values simultaneously for many hyperparameters, perhaps using a small annotated corpus (as done here), extrinsic figures of merit on successful learning trajectories, or plausibility criteria (Eisner and Karakos, 2005). Grammar induction serves as a tidy example for structural annealing. In future work, we envision that other kinds of structural bias and annealing will be useful in other difficult learning problems where hidden structure is required, including machine translation, where the structure can consist of word correspondences or phrasal or recursive syntax with correspondences. The technique bears some similarity to the estimation methods described by Brown et al. (1993), which started by estimating simple models, using each model to seed the next. 9 Conclusion We have presented a new unsupervised parameter estimation method, structural annealing, for learning hidden structure that biases toward simplicity and gradually weakens (anneals) the bias over time. We applied the technique to weighted dependency grammar induction and achieved a significant gain in accuracy over EM and CE, raising the state-of-the-art across six languages from 42– 54% to 58–73% accuracy. References S. Afonso, E. Bick, R. Haber, and D. Santos. 2002. Floresta sint´a(c)tica: a treebank for Portuguese. In Proc. of LREC. H. Alshawi. 1996. Head automata and bilingual tiling: Translation with minimal representations. In Proc. of ACL. N. B. Atalay, K. Oflazer, and B. Say. 2003. The annotation process in the Turkish treebank. In Proc. of LINC. J. K. Baker. 1979. Trainable grammars for speech recognition. In Proc. of the Acoustical Society of America. S. Brants, S. Dipper, S. Hansen, W. Lezius, and G. Smith. 2002. The TIGER Treebank. In Proc. of Workshop on Treebanks and Linguistic Theories. E. Brill and M. Marcus. 1992. Automatically acquiring phrase structure using distributional analysis. In Proc. of DARPA Workshop on Speech and Natural Language. P. F. Brown, S. A. Della Pietra, V. J. Della Pietra, and R. L. Mercer. 1993. The mathematics of statistical machine translation: Parameter estimation. Computational Linguistics, 19(2):263–311. S. Buchholz and E. Marsi. 2006. CoNLL-X shared task on multilingual dependency parsing. In Proc. of CoNLL. E. Charniak. 1993. Statistical Language Learning. MIT Press. M. Collins. 1997. Three generative, lexicalised models for statistical parsing. In Proc. of ACL. A. Dempster, N. Laird, and D. Rubin. 1977. Maximum likelihood estimation from incomplete data via the EM algorithm. Journal of the Royal Statistical Society B, 39:1–38. J. Eisner and D. Karakos. 2005. Bootstrapping without the boot. In Proc. of HLT-EMNLP. J. Eisner and G. Satta. 1999. Efficient parsing for bilexical context-free grammars and head automaton grammars. In Proc. of ACL. J. Eisner and N. A. Smith. 2005. Parsing with soft and hard constraints on dependency length. In Proc. of IWPT. J. Eisner. 1997. Bilexical grammars and a cubic-time probabilistic parser. In Proc. of IWPT. G. Elidan and N. Friedman. 2005. Learning hidden variable networks: the information bottleneck approach. Journal of Machine Learning Research, 6:81–127. D. Hindle. 1990. Noun classification from predicateargument structure. In Proc. of ACL. D. Klein and C. D. Manning. 2002. 
A generative constituentcontext model for improved grammar induction. In Proc. of ACL. D. Klein and C. D. Manning. 2003. Fast exact inference with a factored model for natural language parsing. In NIPS 15. D. Klein and C. D. Manning. 2004. Corpus-based induction of syntactic structure: Models of dependency and constituency. In Proc. of ACL. 575 M. P. Marcus, B. Santorini, and M. A. Marcinkiewicz. 1993. Building a large annotated corpus of English: The Penn Treebank. Computational Linguistics, 19:313–330. R. McDonald, K. Crammer, and F. Pereira. 2005. Online large-margin training of dependency parsers. In Proc. of ACL. K. Oflazer, B. Say, D. Z. Hakkani-T¨ur, and G. T¨ur. 2003. Building a Turkish treebank. In A. Abeille, editor, Building and Exploiting Syntactically-Annotated Corpora. Kluwer. K. Rose. 1998. Deterministic annealing for clustering, compression, classification, regression, and related optimization problems. Proc. of the IEEE, 86(11):2210–2239. K. Simov and P. Osenova. 2003. Practical annotation scheme for an HPSG treebank of Bulgarian. In Proc. of LINC. K. Simov, G. Popova, and P. Osenova. 2002. HPSGbased syntactic treebank of Bulgarian (BulTreeBank). In A. Wilson, P. Rayson, and T. McEnery, editors, A Rainbow of Corpora: Corpus Linguistics and the Languages of the World, pages 135–42. Lincom-Europa. K. Simov, P. Osenova, A. Simov, and M. Kouylekov. 2004. Design and implementation of the Bulgarian HPSG-based Treebank. Journal of Research on Language and Computation, 2(4):495–522. N. A. Smith and J. Eisner. 2004. Annealing techniques for unsupervised statistical language learning. In Proc. of ACL. N. A. Smith and J. Eisner. 2005a. Contrastive estimation: Training log-linear models on unlabeled data. In Proc. of ACL. N. A. Smith and J. Eisner. 2005b. Guiding unsupervised grammar induction using contrastive estimation. In Proc. of IJCAI Workshop on Grammatical Inference Applications. M. Steedman, M. Osborne, A. Sarkar, S. Clark, R. Hwa, J. Hockenmaier, P. Ruhlen, S. Baker, and J. Crim. 2003. Bootstrapping statistical parsers from small datasets. In Proc. of EACL. N. Ueda and R. Nakano. 1998. Deterministic annealing EM algorithm. Neural Networks, 11(2):271–282. N. Xue, F. Xia, F.-D. Chiou, and M. Palmer. 2004. The Penn Chinese Treebank: Phrase structure annotation of a large corpus. Natural Language Engineering, 10(4):1–30. D. Yarowsky. 1995. Unsupervised word sense disambiguation rivaling supervised methods. In Proc. of ACL. A Experimental Setup Following the usual conventions (Klein and Manning, 2002), our experiments use treebank POS sequences of length ≤10, stripped of words and punctuation. For smoothing, we apply add-λ, with six values of λ (in CE trials, we use a 0-mean diagonal Gaussian prior with five different values of σ2). Our training datasets are: • 8,227 German sentences from the TIGER Treebank (Brants et al., 2002), • 5,301 English sentences from the WSJ Penn Treebank (Marcus et al., 1993), • 4,929 Bulgarian sentences from the BulTreeBank (Simov et al., 2002; Simov and Osenova, 2003; Simov et al., 2004), • 2,775 Mandarin sentences from the Penn Chinese Treebank (Xue et al., 2004), • 2,576 Turkish sentences from the METUSabanci Treebank (Atalay et al., 2003; Oflazer et al., 2003), and • 1,676 Portuguese sentences from the Bosque portion of the Floresta Sint´a(c)tica Treebank (Afonso et al., 2002). The Bulgarian, Turkish, and Portuguese datasets come from the CoNLL-X shared task (Buchholz and Marsi, 2006); we thank the organizers. 
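The add-λ smoothing mentioned above amounts to the following adjustment of each conditional multinomial (pstop and pchild), presumably applied to the expected counts at each M step; the helper below is only an illustration, not the actual code.

def add_lambda(counts, lam):
    # counts: expected count for every outcome of one conditional distribution,
    # e.g. p_child(. | head tag, direction), including zero-count outcomes
    total = sum(counts.values()) + lam * len(counts)
    return {outcome: (c + lam) / total for outcome, c in counts.items()}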
When comparing a hypothesized tree y to a gold standard y∗, precision and recall measures are available. If every tree in the gold standard and every hypothesis tree is such that |yright(0)| = 1, then precision = recall = F1, since |y| = |y∗|. |yright(0)| = 1 for all hypothesized trees in this paper, but not all treebank trees; hence we report the F1 measure. The test set consists of around 500 sentences (in each language). Iterative training proceeds until either 100 iterations have passed, or the objective converges within a relative tolerance of ϵ = 10−5, whichever occurs first. Models trained at different hyperparameter settings and with different initializers are selected using a 500-sentence development set. Unsupervised model selection means the model with the highest training objective value on the development set was chosen. Supervised model selection chooses the model that performs best on the annotated development set. (Oracle and worst model selection are chosen based on performance on the test data.) We use three initialization methods. We run a single special E step (to get expected counts of model events) then a single M step that renormalizes to get a probabilistic model Θ(0). In initializer 1, the E step scores each tree as follows (only connected trees are scored): u(x, yleft, yright) = n Y i=1 Y j∈y(i)  1 + 1 |i −j|  (Proper) expectations under these scores are computed using an inside-outside algorithm. Initializer 2 computes expected counts directly, without dynamic programming. For an n-length sentence, p(yright(0) = {i}) = 1 n and p(j ∈y(i)) ∝ 1 |i−j|. These are scaled by an appropriate constant for each sentence, then summed across sentences to compute expected event counts. Initializer 3 assumes a uniform distribution over hidden structures in the special E step by setting all log probabilities to zero. 576
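For concreteness, initializer 1's scoring function can be written as follows for a candidate tree in parent-array form; the encoding is an assumption of this sketch, and expectations under these scores are then computed with the inside-outside algorithm as stated above.

def initializer1_score(parents):
    # parents[j] = head position of word j; position 0 is the wall symbol $, whose
    # single attachment contributes no factor; only connected trees are scored
    score = 1.0
    for j in range(1, len(parents)):
        if parents[j] != 0:
            score *= 1.0 + 1.0 / abs(j - parents[j])
    return score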
Proceedings of the 21st International Conference on Computational Linguistics and 44th Annual Meeting of the ACL, pages 577–584, Sydney, July 2006. c⃝2006 Association for Computational Linguistics Maximum Entropy Based Restoration of Arabic Diacritics Imed Zitouni, Jeffrey S. Sorensen, Ruhi Sarikaya IBM T.J. Watson Research Center 1101 Kitchawan Rd, Yorktown Heights, NY 10598 {izitouni, sorenj, sarikaya}@us.ibm.com Abstract Short vowels and other diacritics are not part of written Arabic scripts. Exceptions are made for important political and religious texts and in scripts for beginning students of Arabic. Script without diacritics have considerable ambiguity because many words with different diacritic patterns appear identical in a diacritic-less setting. We propose in this paper a maximum entropy approach for restoring diacritics in a document. The approach can easily integrate and make effective use of diverse types of information; the model we propose integrates a wide array of lexical, segmentbased and part-of-speech tag features. The combination of these feature types leads to a state-of-the-art diacritization model. Using a publicly available corpus (LDC’s Arabic Treebank Part 3), we achieve a diacritic error rate of 5.1%, a segment error rate 8.5%, and a word error rate of 17.3%. In case-ending-less setting, we obtain a diacritic error rate of 2.2%, a segment error rate 4.0%, and a word error rate of 7.2%. 1 Introduction Modern Arabic written texts are composed of scripts without short vowels and other diacritic marks. This often leads to considerable ambiguity since several words that have different diacritic patterns may appear identical in a diacritic-less setting. Educated modern Arabic speakers are able to accurately restore diacritics in a document. This is based on the context and their knowledge of the grammar and the lexicon of Arabic. However, a text without diacritics becomes a source of confusion for beginning readers and people with learning disabilities. A text without diacritics is also problematic for applications such as text-to-speech or speech-to-text, where the lack of diacritics adds another layer of ambiguity when processing the data. As an example, full vocalization of text is required for text-to-speech applications, where the mapping from graphemes to phonemes is simple compared to languages such as English and French; where there is, in most cases, one-to-one relationship. Also, using data with diacritics shows an improvement in the accuracy of speech-recognition applications (Afify et al., 2004). Currently, text-tospeech, speech-to-text, and other applications use data where diacritics are placed manually, which is a tedious and time consuming excercise. A diacritization system that restores the diacritics of scripts, i.e. supply the full diacritical markings, would be of interest to these applications. It also would greatly benefit nonnative speakers, sufferers of dyslexia and could assist in restoring diacritics of children’s and poetry books, a task that is currently done manually. We propose in this paper a statistical approach that restores diacritics in a text document. The proposed approach is based on the maximum entropy framework where several diverse sources of information are employed. The model implicitly learns the correlation between these types of information and the output diacritics. In the next section, we present the set of diacritics to be restored and the ambiguity we face when processing a non-diacritized text. 
Section 3 gives a brief summary of previous related works. Section 4 presents our diacritization model; we explain the training and decoding process as well as the different feature categories employed to restore the diacritics. Section 5 describes a clearly defined and replicable split of the LDC’s Arabic Treebank Part 3 corpus, used to built and evaluate the system, so that the reproduction of the results and future comparison can accurately be established. Section 6 presents the experimental results. Section 7 reports a comparison of our approach to the finite state machine modeling technique that showed promissing results in (Nelken and Shieber, 2005). Finally, section 8 concludes the paper and discusses future directions. 2 Arabic Diacritics The Arabic alphabet consists of 28 letters that can be extended to a set of 90 by additional shapes, marks, and vowels (Tayli and Al-Salamah, 1990). The 28 letters represent the consonants and long 577 vowels such as A , ù  (both pronounced as /a:/), ù  (pronounced as /i:/), and ñ (pronounced as /u:/). Long vowels are constructed by combining A , ù , ù , and ñ with the short vowels. The short vowels and certain other phonetic information such as consonant doubling (shadda) are not represented by letters, but by diacritics. A diacritic is a short stroke placed above or below the consonant. Table 1 shows the complete set of AraDiacritic Name Meaning/ on è Pronunciation Short vowels è fatha /a/ è damma /u/ è kasra /i/ Doubled case ending (“tanween”) è tanween al-fatha /an/ è tanween al-damma /un/ è tanween al-kasra /in/ Syllabification marks è shadda consonant doubling è sukuun vowel absence Table 1: Arabic diacritics on the letter – consonant – è (pronounced as /t/). bic diacritics. We split the Arabic diacritics into three sets: short vowels, doubled case endings, and syllabification marks. Short vowels are written as symbols either above or below the letter in text with diacritics, and dropped all together in text without diacritics. We find three short vowels: • fatha: it represents the /a/ sound and is an oblique dash over a consonant as in è (c.f. fourth row of Table 1). • damma: it represents the /u/ sound and is a loop over a consonant that resembles the shape of a comma (c.f. fifth row of Table 1). • kasra: it represents the /i/ sound and is an oblique dash under a consonant (c.f. sixth row of Table 1). The doubled case ending diacritics are vowels used at the end of the words to mark case distinction, which can be considered as a double short vowels; the term “tanween” is used to express this phenomenon. Similar to short vowels, there are three different diacritics for tanween: tanween al-fatha, tanween al-damma, and tanween al-kasra. They are placed on the last letter of the word and have the phonetic effect of placing an “N” at the end of the word. Text with diacritics contains also two syllabification marks: • shadda: it is a gemination mark placed above the Arabic letters as in è. It denotes the doubling of the consonant. The shadda is usually combined with a short vowel such as in è. • sukuun: written as a small circle as in è. It is used to indicate that the letter doesn’t contain vowels. Figure 1 shows an Arabic sentence transcribed with and without diacritics. In modern Arabic, writing scripts without diacritics is the most natural way. Because many words with different vowel patterns may appear identical in a diacritic-less setting, considerable ambiguity exists at the word level. The word I. 
J», for example, has 21 possible forms that have valid interpretations when adding diacritics (Kirchhoffand Vergyri, 2005). It may have the interpretation of the verb “to write” in I. J » (pronounced /kataba/). Also, it can be interpreted as “books” in the noun form I. J » (pronounced /kutubun/). A study made by (Debili et al., 2002) shows that there is an average of 11.6 possible diacritizations for every non-diacritized word when analyzing a text of 23,000 script forms. .èQ» YÖÏ@  K QË@ I. J» . è Q » Y ÜÏ@  K QË@ I. J » Figure 1: The same Arabic sentence without (upper row) and with (lower row) diacritics. The English translation is “the president wrote the document.” Arabic diacritic restoration is a non-trivial task as expressed in (El-Imam, 2003). Native speakers of Arabic are able, in most cases, to accurately vocalize words in text based on their context, the speaker’s knowledge of the grammar, and the lexicon of Arabic. Our goal is to convert knowledge used by native speakers into features and incorporate them into a maximum entropy model. We assume that the input text does not contain any diacritics. 3 Previous Work Diacritic restoration has been receiving increasing attention and has been the focus of several studies. In (El-Sadany and Hashish, 1988), a rule based method that uses morphological analyzer for 578 vowelization was proposed. Another, rule-based grapheme to sound conversion approach was appeared in 2003 by Y. El-Imam (El-Imam, 2003). The main drawbacks of these rule based methods is that it is difficult to maintain the rules up-to-date and extend them to other Arabic dialects. Also, new rules are required due to the changing nature of any “living” language. More recently, there have been several new studies that use alternative approaches for the diacritization problem. In (Emam and Fisher, 2004) an example based hierarchical top-down approach is proposed. First, the training data is searched hierarchically for a matching sentence. If there is a matching sentence, the whole utterance is used. Otherwise they search for matching phrases, then words to restore diacritics. If there is no match at all, character n-gram models are used to diacritize each word in the utterance. In (Vergyri and Kirchhoff, 2004), diacritics in conversational Arabic are restored by combining morphological and contextual information with an acoustic signal. Diacritization is treated as an unsupervised tagging problem where each word is tagged as one of the many possible forms provided by the Buckwalter’s morphological analyzer (Buckwalter, 2002). The Expectation Maximization (EM) algorithm is used to learn the tag sequences. Y. Gal in (Gal, 2002) used a HMM-based diacritization approach. This method is a white-space delimited word based approach that restores only vowels (a subset of all diacritics). Most recently, a weighted finite state machine based algorithm is proposed (Nelken and Shieber, 2005). This method employs characters and larger morphological units in addition to words. Among all the previous studies this one is more sophisticated in terms of integrating multiple information sources and formulating the problem as a search task within a unified framework. This approach also shows competitive results in terms of accuracy when compared to previous studies. In their algorithm, a character based generative diacritization scheme is enabled only for words that do not occur in the training data. 
It is not clearly stated in the paper whether their method predict the diacritics shedda and sukuun. Even though the methods proposed for diacritic restoration have been maturing and improving over time, they are still limited in terms of coverage and accuracy. In the approach we present in this paper, we propose to restore the most comprehensive list of the diacritics that are used in any Arabic text. Our method differs from the previous approaches in the way the diacritization problem is formulated and because multiple information sources are integrated. We view the diacritic restoration problem as sequence classification, where given a sequence of characters our goal is to assign diacritics to each character. Our appoach is based on Maximum Entropy (MaxEnt henceforth) technique (Berger et al., 1996). MaxEnt can be used for sequence classification, by converting the activation scores into probabilities (through the soft-max function, for instance) and using the standard dynamic programming search algorithm (also known as Viterbi search). We find in the literature several other approaches of sequence classification such as (McCallum et al., 2000) and (Lafferty et al., 2001). The conditional random fields method presented in (Lafferty et al., 2001) is essentially a MaxEnt model over the entire sequence: it differs from the Maxent in that it models the sequence information, whereas the Maxent makes a decision for each state independently of the other states. The approach presented in (McCallum et al., 2000) combines Maxent with Hidden Markov models to allow observations to be presented as arbitrary overlapping features, and define the probability of state sequences given observation sequences. We report in section 7 a comparative study between our approach and the most competitive diacritic restoration method that uses finite state machine algorithm (Nelken and Shieber, 2005). The MaxEnt framework was successfully used to combine a diverse collection of information sources and yielded a highly competitive model that achieves a 5.1% DER. 4 Automatic Diacritization The performance of many natural language processing tasks, such as shallow parsing (Zhang et al., 2002) and named entity recognition (Florian et al., 2004), has been shown to depend on integrating many sources of information. Given the stated focus of integrating many feature types, we selected the MaxEnt classifier. MaxEnt has the ability to integrate arbitrary types of information and make a classification decision by aggregating all information available for a given classification. 4.1 Maximum Entropy Classifiers We formulate the task of restoring diacritics as a classification problem, where we assign to each character in the text a label (i.e., diacritic). Before formally describing the method1, we introduce some notations: let Y = {y1, . . . , yn} be the set of diacritics to predict or restore, X be the example space and F = {0, 1}m be a feature space. Each example x ∈X has associated a vector of binary features f (x) = (f1 (x) , . . . , fm (x)). In a supervised framework, like the one we are considering here, we have access to a set of training examples together with their classifications: {(x1, y1) , . . . , (xk, yk)}. 1This is not meant to be an in-depth introduction to the method, but a brief overview to familiarize the reader with them. 
The MaxEnt algorithm associates a set of weights (α_{ij})_{i=1...n, j=1...m} with the features, which are estimated during the training phase to maximize the likelihood of the data (Berger et al., 1996). Given these weights, the model computes the probability distribution over labels for a particular example x as follows:

P(y_i | x) = \frac{1}{Z(x)} \prod_{j=1}^{m} \alpha_{ij}^{f_j(x)}, \qquad Z(x) = \sum_{i} \prod_{j} \alpha_{ij}^{f_j(x)}

where Z(x) is a normalization factor. To estimate the optimal α_{ij} values, we train our MaxEnt model using the sequential conditional generalized iterative scaling (SCGIS) technique (Goodman, 2002). While the MaxEnt method can integrate multiple feature types seamlessly, in certain cases it is known to overestimate its confidence in especially low-frequency features. To overcome this problem, we use the regularization method based on adding Gaussian priors described in (Chen and Rosenfeld, 2000). After computing the class probability distribution, the chosen diacritic is the one with the highest a posteriori probability. The decoding algorithm, described in section 4.2, performs sequence classification through dynamic programming.

4.2 Search to Restore Diacritics

We are interested in finding the diacritics of all characters in a script or a sentence. These diacritics have strong interdependencies which cannot be properly modeled if the classification is performed independently for each character. We view this problem as sequence classification, as contrasted with an example-based classification problem: given a sequence of characters in a sentence x_1 x_2 ... x_L, our goal is to assign diacritics (labels) to each character, resulting in a sequence of diacritics y_1 y_2 ... y_L. We make the assumption that diacritics can be modeled as a limited-order Markov sequence: the diacritic associated with character i depends only on the k previous diacritics, where k is usually equal to 3. Given this assumption, and the notation x_1^L = x_1 ... x_L, the conditional probability of assigning the diacritic sequence y_1^L to the character sequence x_1^L becomes

p(y_1^L | x_1^L) = p(y_1 | x_1^L) \, p(y_2 | x_1^L, y_1) \cdots p(y_L | x_1^L, y_{L-k+1}^{L-1})   (1)

and our goal is to find the sequence that maximizes this conditional probability

\hat{y}_1^L = \arg\max_{y_1^L} p(y_1^L | x_1^L)   (2)

While we restricted the conditioning on the classification tag sequence to the previous k diacritics, we do not impose any restrictions on the conditioning on the characters: the probability is computed using the entire character sequence x_1^L. To obtain the sequence in Equation (2), we create a classification tag lattice (also called a trellis), as follows:

• Let x_1^L be the input sequence of characters and S = {s_1, s_2, ..., s_m} be an enumeration of Y^k (m = |Y|^k); we will call an element s_j a state. Every such state corresponds to the labeling of k successive characters. We find it useful to think of an element s_i as a vector with k elements. We use the notation s_i[j] for the jth element of such a vector (the label associated with the token x_{i-k+j+1}) and s_i[j_1..j_2] for the sequence of elements between indices j_1 and j_2.

• We conceptually associate every character x_i, i = 1, ..., L, with a copy of S, S^i = {s_1^i, ..., s_m^i}; this set represents all the possible labelings of the characters x_{i-k+1}^i at the stage where x_i is examined.

• We then create links from the set S^i to the set S^{i+1}, for all i = 1, ...,
L − 1, with the property that

w(s_{j_1}^i, s_{j_2}^{i+1}) = \begin{cases} p(s_{j_2}^{i+1}[k] \,|\, x_1^L, s_{j_2}^{i+1}[1..k-1]) & \text{if } s_{j_1}^i[2..k] = s_{j_2}^{i+1}[1..k-1] \\ 0 & \text{otherwise} \end{cases}

These weights correspond to the probability of a transition from the state s_{j_1}^i to the state s_{j_2}^{i+1}.

• For every character x_i, we compute recursively

\beta_0(s_j) = 0, \quad j = 1, ..., m
\beta_i(s_j) = \max_{j_1 = 1, ..., m} [\, \beta_{i-1}(s_{j_1}) + \log w(s_{j_1}^{i-1}, s_j^i) \,]
\gamma_i(s_j) = \arg\max_{j_1 = 1, ..., m} [\, \beta_{i-1}(s_{j_1}) + \log w(s_{j_1}^{i-1}, s_j^i) \,]

Intuitively, β_i(s_j) represents the log-probability of the most probable path through the lattice that ends in state s_j after i steps, and γ_i(s_j) represents the state just before s_j on that particular path. (For convenience, the index i associated with state s_j^i is moved to β; the function β_i(s_j) is in fact β(s_j^i).)

• Having computed the (β_i) values, the algorithm for finding the best path, which corresponds to the solution of Equation (2), is:

1. Identify \hat{s}_L^L = \arg\max_{j=1,...,m} \beta_L(s_j)
2. For i = L − 1, ..., 1, compute \hat{s}_i^i = \gamma_{i+1}(\hat{s}_{i+1}^{i+1})
3. The solution for Equation (2) is given by \hat{y} = (\hat{s}_1^1[k], \hat{s}_2^2[k], ..., \hat{s}_L^L[k])

The runtime of the algorithm is Θ(|Y|^k · L), linear in the size of the sentence L but exponential in the size of the Markov dependency k. To reduce the search space, we use beam search.
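The search just described can be sketched as follows. This is a simplified, illustrative implementation under our own assumptions (a first-order Markov dependency, i.e. k = 1, a toy stand-in for the MaxEnt scorer, and no beam pruning); it is not the authors' implementation.

```python
import math

# Minimal Viterbi sketch for the lattice search described above.
# `log_prob` stands in for the MaxEnt model p(y_i | x, y_{i-1}).

def log_prob(label, prev_label, chars, i):
    # Toy stand-in: a real system would score the features of position i
    # (and the previous label) with the trained MaxEnt weights.
    return math.log(0.5 if label == "no-diacritic" else 0.25)

def viterbi(chars, labels):
    # beta[j] = best log-score of a label path ending in labels[j];
    # back[i][j] = index of the best predecessor label at position i.
    beta = [0.0] * len(labels)
    back = []
    for i in range(len(chars)):
        new_beta, ptr = [], []
        for j, y in enumerate(labels):
            scores = [beta[p] + log_prob(y, labels[p], chars, i)
                      for p in range(len(labels))]
            best = max(range(len(labels)), key=lambda p: scores[p])
            new_beta.append(scores[best])
            ptr.append(best)
        beta, back = new_beta, back + [ptr]
    # Trace back the best path.
    j = max(range(len(labels)), key=lambda j: beta[j])
    path = [j]
    for ptr in reversed(back[1:]):
        j = ptr[j]
        path.append(j)
    return [labels[j] for j in reversed(path)]

print(viterbi(list("ktb"), ["fatha", "damma", "no-diacritic"]))
```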
4.3 Features Employed

Within the MaxEnt framework, any type of feature can be used, enabling the system designer to experiment with interesting feature types, rather than worry about specific feature interactions. In contrast, with a rule-based system, the system designer would have to consider how, for instance, lexically derived information for a particular example interacts with character context information. That is not to say, ultimately, that rule-based systems are in some way inferior to statistical models; they are built using valuable insight which is hard to obtain from a statistical-model-only approach. Instead, we are merely suggesting that the output of such a rule-based system can be easily integrated into the MaxEnt framework as one of the input features, most likely leading to improved performance. Features employed in our system can be divided into three different categories: lexical, segment-based, and part-of-speech tag (POS) features. We also use the two previously assigned diacritics as additional features. In the following, we briefly describe the different categories of features:

• Lexical Features: we include the character n-grams spanning the current character x_i, both preceding and following it, in a window of 7: {x_{i-3}, ..., x_{i+3}}. We use the current word w_i and its word context in a window of 5 (forward and backward trigram): {w_{i-2}, ..., w_{i+2}}. We specify whether the character of analysis is at the beginning or at the end of a word. We also add joint features between the above sources of information.

• Segment-Based Features: Arabic blank-delimited words are composed of zero or more prefixes, followed by a stem and zero or more suffixes. Each prefix, stem or suffix will be called a segment in this paper. Segments are often the subject of analysis when processing Arabic (Zitouni et al., 2005). Syntactic information such as POS or parse information is usually computed on segments rather than words. As an example, the Arabic white-space delimited word قابلتهم contains a verb (قابل), a third-person feminine singular subject-marker (ت, “she”), and a pronoun suffix (هم, “them”); it is also a complete sentence meaning “she met them.” To separate the Arabic white-space delimited words into segments, we use a segmentation model similar to the one presented by (Lee et al., 2003). The model obtains an accuracy of about 98%. In order to simulate real applications, we only use segments generated by the model rather than true segments. In the diacritization system, we include the current segment a_i and its word segment context in a window of 5 (forward and backward trigram): {a_{i-2}, ..., a_{i+2}}. We specify whether the character of analysis is at the beginning or at the end of a segment. We also add joint information with lexical features.

• POS Features: we attach to the segment a_i of the current character its POS tag, POS(a_i). This is combined with joint features that include the lexical and segment-based information. We use a statistical POS tagging system built on Arabic Treebank data with the MaxEnt framework (Ratnaparkhi, 1996). The model has an accuracy of about 96%. We did not want to use the true POS tags because we would not have access to such information in real applications.

5 Data

The diacritization system we present here is trained and evaluated on the LDC's Arabic Treebank of diacritized news stories – Part 3 v1.0: catalog number LDC2004T11 and ISBN 1-58563-298-8. The corpus includes complete vocalization (including case endings). We introduce here a clearly defined and replicable split of the corpus, so that the reproduction of the results or future investigations can accurately and correctly be established. This corpus includes 600 documents from the An Nahar News Text. There are a total of 340,281 words. We split the corpus into two sets: training data and development test (devtest) data. The training data contains approximately 288,000 words, whereas the devtest contains close to 52,000 words. The 90 documents of the devtest data are created by taking the last (in chronological order) 15% of documents, dating from “20021015 0101” (i.e., October 15, 2002) to “20021215 0045” (i.e., December 15, 2002). The time span of the devtest is intentionally non-overlapping with that of the training set, as this models how the system will perform in the real world. Previously published papers use a proprietary corpus or lack a clear description of the training/devtest data split, which makes comparison to other techniques difficult. By clearly reporting the split of the publicly available LDC Arabic Treebank corpus in this section, we want future comparisons to be correctly established.

6 Experiments

Experiments are reported in terms of word error rate (WER), segment error rate (SER), and diacritization error rate (DER). The DER is the proportion of incorrectly restored diacritics. The WER is the percentage of incorrectly diacritized white-space delimited words: in order to be counted as incorrect, at least one character in the word must have a diacritization error. The SER is similar to WER but indicates the proportion of incorrectly diacritized segments. A segment can be a prefix, a stem, or a suffix.
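Under these definitions, the three error rates can be computed as in the following sketch. This is our own illustrative code; it assumes that gold and predicted diacritics are aligned character by character and that word and segment boundaries are given.

```python
# Illustrative computation of DER, WER and SER from aligned gold/predicted
# diacritics.  `words` and `segments` group character indices into
# white-space delimited words and into prefix/stem/suffix segments.

def error_rates(gold, pred, words, segments):
    assert len(gold) == len(pred)
    wrong = [g != p for g, p in zip(gold, pred)]
    der = sum(wrong) / len(gold)
    wer = sum(any(wrong[i] for i in w) for w in words) / len(words)
    ser = sum(any(wrong[i] for i in s) for s in segments) / len(segments)
    return der, wer, ser

# Toy example: 6 characters forming 2 words; the second word has
# 2 segments (e.g. stem + suffix).
gold = ["a", "u", "-", "i", "a", "-"]
pred = ["a", "a", "-", "i", "a", "-"]
words = [[0, 1, 2], [3, 4, 5]]
segments = [[0, 1, 2], [3, 4], [5]]
print(error_rates(gold, pred, words, segments))
```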
Segments are often the subject of analysis when processing Arabic (Zitouni et al., 2005). Syntactic information such as POS or parse information is based on segments rather than words. Consequently, it is important to know the SER in cases where the diacritization system may be used to help disambiguate syntactic information. Several modern Arabic scripts contain the consonant doubling “shadda”; it is common for native speakers to write without diacritics except for the shadda. In this case the role of the diacritization system will be to restore the short vowels, the doubled case ending, and the vowel absence “sukuun”. We run two batches of experiments: a first experiment where documents contain the original shadda, and a second one where documents do not contain any diacritics, including the shadda. The diacritization system proceeds in two steps when it has to predict the shadda: a first step where only the shadda is restored, and a second step where the other diacritics (excluding shadda) are predicted. To assess the performance of the system under different conditions, we consider three cases based on the kind of features employed:

1. a system that has access to lexical features only;
2. a system that has access to lexical and segment-based features;
3. a system that has access to lexical, segment-based and POS features.

The different system types described above use the two previously assigned diacritics as additional features. The DER of the shadda restoration step is equal to 5% when we use lexical features only, 0.4% when we add segment-based information, and 0.3% when we employ lexical, POS, and segment-based features. Table 2 reports experimental results of the diacritization system with different feature sets.

Features | True shadda (WER / SER / DER) | Predicted shadda (WER / SER / DER)
Lexical features | 24.8 / 12.6 / 7.9 | 25.1 / 13.0 / 8.2
Lexical + segment-based features | 18.2 / 9.0 / 5.5 | 18.8 / 9.4 / 5.8
Lexical + segment-based + POS features | 17.3 / 8.5 / 5.1 | 18.0 / 8.9 / 5.5

Table 2: The impact of features on the diacritization system performance. The columns marked with “True shadda” represent results on documents containing the original consonant doubling “shadda”, while columns marked with “Predicted shadda” represent results where the system restored all diacritics, including shadda.

Using only lexical features, we observe a DER of 8.2% and a WER of 25.1%, which is competitive with a state-of-the-art system evaluated on Arabic Treebank Part 2: in (Nelken and Shieber, 2005) a DER of 12.79% and a WER of 23.61% are reported. The system described in (Nelken and Shieber, 2005) uses lexical, segment-based, and morphological information. Table 2 also shows that, when segment-based information is added to our system, a significant improvement is achieved: 25% for WER (18.8 vs. 25.1), 38% for SER (9.4 vs. 13.0), and 41% for DER (5.8 vs. 8.2). Similar behavior is observed when the documents contain the original shadda. POS features are also helpful in improving the performance of the system. They improved the WER by 4% (18.0 vs. 18.8), the SER by 5% (8.9 vs. 9.4), and the DER by 5% (5.5 vs. 5.8). Case endings in Arabic documents consist of the diacritic attributed to the last character in a white-space delimited word. Restoring them is the most difficult part of the diacritization of a document. Case endings are only present in formal or highly literary scripts. Only educated speakers of modern standard Arabic master their use. Technically, every noun has such an ending, although at the end of a sentence no inflection is pronounced, even in formal speech, because of the rules of ‘pause’. For this reason, we conduct another experiment in which case endings were stripped throughout the training and testing data, without attempting to restore them.
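For clarity, the improvement figures quoted here and in the remainder of the evaluation appear to be relative error reductions; for example, the 25% WER improvement from adding segment-based features corresponds to

\frac{25.1 - 18.8}{25.1} \approx 0.25 = 25\%.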
We present in Table 3 the performance of the diacritization system on documents without case endings.

Features | True shadda (WER / SER / DER) | Predicted shadda (WER / SER / DER)
Lexical features | 11.8 / 6.6 / 3.6 | 12.4 / 7.0 / 3.9
Lexical + segment-based features | 7.8 / 4.4 / 2.4 | 8.6 / 4.8 / 2.7
Lexical + segment-based + POS features | 7.2 / 4.0 / 2.2 | 7.9 / 4.4 / 2.5

Table 3: Performance of the diacritization system based on the features employed. The system is trained and evaluated on documents without case endings. Columns marked with “True shadda” represent results on documents containing the original consonant doubling “shadda”, while columns marked with “Predicted shadda” represent results where the system restored all diacritics, including shadda.

Results clearly show that when case endings are omitted, the WER declines by 58% (7.2% vs. 17.3%), the SER decreases by 52% (4.0% vs. 8.5%), and the DER is reduced by 56% (2.2% vs. 5.1%). Also, Table 3 shows again that a richer set of features results in better performance; compared to a system using lexical features only, adding POS and segment-based features improved the WER by 38% (7.2% vs. 11.8%), the SER by 39% (4.0% vs. 6.6%), and the DER by 38% (2.2% vs. 3.6%). Similar to the results reported in Table 2, we show that the performance of the system is similar whether the document contains the original shadda or not. A system like this trained on documents without case endings can be of interest to applications such as speech recognition, where the last state of a word HMM model can be defined to absorb all possible vowels (Afify et al., 2004).

7 Comparison to other approaches

As stated in section 3, the most recent and advanced approach to diacritic restoration is the one presented in (Nelken and Shieber, 2005): they showed a DER of 12.79% and a WER of 23.61% on the Arabic Treebank corpus using finite state transducers (FST) with Katz language modeling (LM) as described in (Chen and Goodman, 1999). Because they did not describe how they split their corpus into training/test sets, we were not able to use the same data for comparison purposes. In this section, we essentially want to duplicate the aforementioned FST result for comparison, using the identical training and testing sets we use for our experiments. We also propose some new variations on the finite state machine modeling technique which improve performance considerably. The algorithm for FST-based vowel restoration could not be simpler: between every pair of characters we insert diacritics if doing so improves the likelihood of the sequence as scored by a statistical n-gram model trained upon the training corpus. Thus, in between every pair of characters we propose and score all possible diacritical insertions. Results reported in Table 4 indicate the error rates of diacritic restoration (including shadda). We show performance using both Kneser-Ney and Katz LMs (Chen and Goodman, 1999) with increasingly large n-grams. It is our opinion that large n-grams effectively duplicate the use of a lexicon. It is unfortunate but true that, even for a rich resource like the Arabic Treebank, the choice of modeling heuristic and the effects of small sample size are considerable. Using the finite state machine modeling technique, we obtain results similar to those reported in (Nelken and Shieber, 2005): a WER of 23% and a DER of 15%. Better performance is reached with the use of the Kneser-Ney LM. These results still under-perform those obtained by the MaxEnt approach presented in Table 2.
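The insertion-based baseline just described can be sketched as follows. This is our own simplified illustration, with a toy scoring function standing in for the trained Katz or Kneser-Ney character language model; it is not the FST implementation evaluated above.

```python
# Simplified sketch of insertion-based diacritic restoration: after every
# character we either insert one of the candidate diacritics or insert
# nothing, keeping the hypotheses that an n-gram character model scores
# best.  `ngram_logprob` is a toy stand-in for the trained LM.

DIACRITICS = ["a", "u", "i", ""]  # toy candidate set; "" = no insertion

def ngram_logprob(seq, n=3):
    # Toy stand-in: a real system would sum log-probabilities of each
    # character given its n-1 predecessors under the trained LM.
    return -0.1 * len(seq)

def restore(word, beam=4):
    hyps = [""]
    for ch in word:
        expanded = [h + ch + d for h in hyps for d in DIACRITICS]
        expanded.sort(key=ngram_logprob, reverse=True)
        hyps = expanded[:beam]
    return hyps[0]

print(restore("ktb"))
```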
When all sources of information are included, the MaxEnt technique outperforms the FST model by 21% (22% vs. 18%) in terms of WER and 39% (9% vs. 5.5%) in terms of DER. The SERs reported in Table 2 and Table 3 are based on the Arabic segmentation system we use in the MaxEnt approach. Since the FST model does not use such a system, we found it inappropriate to report SER in this section.

n-gram size | Katz LM (WER / DER) | Kneser-Ney LM (WER / DER)
3 | 63 / 31 | 55 / 28
4 | 54 / 25 | 38 / 19
5 | 51 / 21 | 28 / 13
6 | 44 / 18 | 24 / 11
7 | 39 / 16 | 23 / 11
8 | 37 / 15 | 23 / 10

Table 4: Error rate in % for n-gram diacritic restoration using FST.

We propose in the following an extension to the aforementioned FST model, which jointly determines not only diacritics but also segmentation into affixes as described in (Lee et al., 2003). Table 5 gives the performance of the extended FST model where the Kneser-Ney LM is used, since it produces better results. This should be a much more difficult task, as there are more than twice as many possible insertions. However, the choice of diacritics is related to and dependent upon the choice of segmentation. Thus, we demonstrate that a richer internal representation produces a more powerful model.

n-gram size | True shadda, Kneser-Ney (WER / DER) | Predicted shadda, Kneser-Ney (WER / DER)
3 | 49 / 23 | 52 / 27
4 | 34 / 14 | 35 / 17
5 | 26 / 11 | 26 / 12
6 | 23 / 10 | 23 / 10
7 | 23 / 9 | 22 / 10
8 | 23 / 9 | 22 / 10

Table 5: Error rate in % for n-gram diacritic restoration and segmentation using FST and the Kneser-Ney LM. Columns marked with “True shadda” represent results on documents containing the original consonant doubling “shadda”, while columns marked with “Predicted shadda” represent results where the system restored all diacritics, including shadda.

8 Conclusion

We presented in this paper a statistical model for Arabic diacritic restoration. The approach we propose is based on the Maximum Entropy framework, which gives the system the ability to integrate different sources of knowledge. Our model has the advantage of successfully combining diverse sources of information: lexical, segment-based and POS features. Both POS and segment-based features are generated by separate statistical systems – not extracted manually – in order to simulate real-world applications. The segment-based features are extracted from a statistical morphological analysis system using a WFST approach, and the POS features are generated by a parsing model that also uses the Maximum Entropy framework. Evaluation results show that combining these sources of information leads to state-of-the-art performance. As future work, we plan to incorporate Buckwalter morphological analyzer information to extract new features that reduce the search space. One idea would be to reduce the search to the number of hypotheses, if any, proposed by the morphological analyzer. We also plan to investigate additional conjunction features to improve the accuracy of the model.

Acknowledgments

Grateful thanks are extended to Radu Florian for his constructive comments regarding the maximum entropy classifier.

References

M. Afify, S. Abdou, J. Makhoul, L. Nguyen, and B. Xiang. 2004. The BBN RT04 BN Arabic System. In RT04 Workshop, Palisades, NY.

A. Berger, S. Della Pietra, and V. Della Pietra. 1996. A maximum entropy approach to natural language processing. Computational Linguistics, 22(1):39–71.

T. Buckwalter. 2002. Buckwalter Arabic morphological analyzer version 1.0. Technical report, Linguistic Data Consortium, LDC2002L49 and ISBN 1-58563-257-0.

Stanley F. Chen and Joshua Goodman. 1999.
An empirical study of smoothing techniques for language modeling. Computer Speech and Language, 4(13):359–393.

Stanley Chen and Ronald Rosenfeld. 2000. A survey of smoothing techniques for ME models. IEEE Transactions on Speech and Audio Processing.

F. Debili, H. Achour, and E. Souissi. 2002. De l'étiquetage grammatical à la voyellation automatique de l'arabe. Technical report, Correspondances de l'Institut de Recherche sur le Maghreb Contemporain 17.

Y. El-Imam. 2003. Phonetization of Arabic: rules and algorithms. Computer Speech and Language, 18:339–373.

T. El-Sadany and M. Hashish. 1988. Semi-automatic vowelization of Arabic verbs. In 10th NC Conference, Jeddah, Saudi Arabia.

O. Emam and V. Fisher. 2004. A hierarchical approach for the statistical vowelization of Arabic text. Technical report, IBM patent filed, DE9-2004-0006, US patent application US2005/0192809 A1.

R. Florian, H. Hassan, A. Ittycheriah, H. Jing, N. Kambhatla, X. Luo, N. Nicolov, and S. Roukos. 2004. A statistical model for multilingual entity detection and tracking. In Proceedings of HLT-NAACL 2004, pages 1–8.

Y. Gal. 2002. An HMM approach to vowel restoration in Arabic and Hebrew. In ACL-02 Workshop on Computational Approaches to Semitic Languages.

Joshua Goodman. 2002. Sequential conditional generalized iterative scaling. In Proceedings of ACL'02.

K. Kirchhoff and D. Vergyri. 2005. Cross-dialectal data sharing for acoustic modeling in Arabic speech recognition. Speech Communication, 46(1):37–51, May.

John Lafferty, Andrew McCallum, and Fernando Pereira. 2001. Conditional random fields: Probabilistic models for segmenting and labeling sequence data. In ICML.

Y.-S. Lee, K. Papineni, S. Roukos, O. Emam, and H. Hassan. 2003. Language model based Arabic word segmentation. In Proceedings of ACL'03, pages 399–406.

Andrew McCallum, Dayne Freitag, and Fernando Pereira. 2000. Maximum entropy Markov models for information extraction and segmentation. In ICML.

Rani Nelken and Stuart M. Shieber. 2005. Arabic diacritization using weighted finite-state transducers. In ACL-05 Workshop on Computational Approaches to Semitic Languages, pages 79–86, Ann Arbor, Michigan.

Adwait Ratnaparkhi. 1996. A maximum entropy model for part-of-speech tagging. In Conference on Empirical Methods in Natural Language Processing.

M. Tayli and A. Al-Salamah. 1990. Building bilingual microcomputer systems. Communications of the ACM, 33(5):495–505.

D. Vergyri and K. Kirchhoff. 2004. Automatic diacritization of Arabic for acoustic modeling in speech recognition. In COLING Workshop on Arabic-script Based Languages, Geneva, Switzerland.

Tong Zhang, Fred Damerau, and David E. Johnson. 2002. Text chunking based on a generalization of Winnow. Journal of Machine Learning Research, 2:615–637.

Imed Zitouni, Jeff Sorensen, Xiaoqiang Luo, and Radu Florian. 2005. The impact of morphological stemming on Arabic mention detection and coreference resolution. In Proceedings of the ACL Workshop on Computational Approaches to Semitic Languages, pages 63–70, Ann Arbor, June.
2006
73
Proceedings of the 21st International Conference on Computational Linguistics and 44th Annual Meeting of the ACL, pages 585–592, Sydney, July 2006. © 2006 Association for Computational Linguistics

An Iterative Implicit Feedback Approach to Personalized Search

Yuanhua Lv 1, Le Sun 2, Junlin Zhang 2, Jian-Yun Nie 3, Wan Chen 4, and Wei Zhang 2
1, 2 Institute of Software, Chinese Academy of Sciences, Beijing, 100080, China
3 University of Montreal, Canada
1 [email protected] 2 {sunle, junlin01, zhangwei04}@iscas.cn 3 [email protected] 4 [email protected]

Abstract

General information retrieval systems are designed to serve all users without considering individual needs. In this paper, we propose a novel approach to personalized search. It can, in a unified way, exploit and utilize implicit feedback information, such as query logs and immediately viewed documents. Moreover, our approach can implement result re-ranking and query expansion simultaneously and collaboratively. Based on this approach, we develop a client-side personalized web search agent PAIR (Personalized Assistant for Information Retrieval), which supports both English and Chinese. Our experiments on TREC and HTRDP collections clearly show that the new approach is both effective and efficient.

1 Introduction

Analysis suggests that, while current information retrieval systems, e.g., web search engines, do a good job of retrieving results to satisfy the range of intents people have, they do not do as well at discerning individuals' search goals (J. Teevan et al., 2005). Search engines encounter problems such as query ambiguity and results ordered by popularity rather than relevance to the user's individual needs. To overcome the above problems, there have been many attempts to improve retrieval accuracy based on personalized information. Relevance Feedback (G. Salton and C. Buckley, 1990) is the main post-query method for automatically improving a system's accuracy with respect to a user's individual need. The technique relies on explicit relevance assessments (i.e., indications of which documents contain relevant information). Relevance feedback has been proved to be quite effective for improving retrieval accuracy (G. Salton and C. Buckley, 1990; J. J. Rocchio, 1971). However, searchers may be unwilling to provide relevance information by explicitly marking relevant documents (M. Beaulieu and S. Jones, 1998). Implicit Feedback, in which an IR system unobtrusively monitors search behavior, removes the need for the searcher to explicitly indicate which documents are relevant (M. Morita and Y. Shinoda, 1994). The technique uses implicit relevance indications which, although not as accurate as explicit feedback, have proved to be an effective substitute for explicit feedback in interactive information-seeking environments (R. White et al., 2002). In this paper, we utilize the immediately viewed documents, which are the clicked results in the same query, as one type of implicit feedback information. Research shows that relative preferences derived from immediately viewed documents are reasonably accurate on average (T. Joachims et al., 2005). Another type of implicit feedback information that we exploit is users' query logs. Anyone who uses search engines accumulates a lot of click-through data, from which we can know what the queries were, when the queries occurred, and which search results were selected for viewing. These query logs provide valuable information for capturing users' interests and preferences.
Both types of implicit feedback information above can be utilized for result re-ranking and query expansion (J. Teevan et al., 2005; Xuehua Shen et al., 2005), which are the two general approaches to personalized search (J. Pitkow et al., 2002). However, to the best of our knowledge, how to exploit these two types of implicit feedback in a unified way, which not only brings collaboration between query expansion and result re-ranking but also makes the whole system more concise, has so far not been well studied in previous work. In this paper, we adopt the HITS algorithm (J. Kleinberg, 1998) and propose a HITS-like iterative approach addressing this problem. Our work differs from existing work in several aspects: (1) We propose a HITS-like iterative approach to personalized search, based on which implicit feedback information, including immediately viewed documents and query logs, can be utilized in a unified way. (2) We implement result re-ranking and query expansion simultaneously and collaboratively, triggered by every click. (3) We develop and evaluate a client-side personalized web search agent PAIR, which supports both English and Chinese. The remainder of this paper is organized as follows. Section 2 describes our novel approach for personalized search. Section 3 presents the architecture of the PAIR system and some specific techniques. Section 4 presents the details of the experiment. Section 5 discusses the previous work related to our approach. Section 6 draws some conclusions from our work.

2 Iterative Implicit Feedback Approach

We propose a HITS-like iterative approach for personalized search. The HITS (Hyperlink-Induced Topic Search) algorithm, first described by (J. Kleinberg, 1998), was originally used for the detection of high-score hub and authority web pages. Authority pages are the central web pages in the context of particular query topics. The strongest authority pages consciously do not link to one another (for instance, there is hardly any other company's Web page linked from "http://www.microsoft.com/"); they can only be linked by some relatively anonymous hub pages. The mutual reinforcement principle of HITS states that a web page is a good authority page if it is linked by many good hub pages, and that a web page is a good hub page if it links to many good authority pages. A directed graph is constructed, of which the nodes represent web pages and the directed edges represent hyperlinks. After iterative computation based on the reinforcement principle, each node gets an authority score and a hub score. In our approach, we exploit the relationships between documents and terms in a similar way to HITS. Unseen search results, i.e., those results which have been retrieved from the search engine but not yet presented to the user, are considered as "authority pages". Representative terms are considered as "hub pages". Here the representative terms are the terms extracted from, and best representing, the implicit feedback information. Representative terms confer a relevance score to the unseen search results: specifically, the unseen search results which contain more good representative terms have a higher possibility of being relevant, and the representative terms should be more representative if they occur in unseen search results that are more likely to be relevant. Thus, there is also a mutual reinforcement principle between representative terms and unseen search results.
By the same token, we construct a directed graph, of which the nodes indicate unseen search results and representative terms, and the directed edges represent the occurrence of the representative terms in the unseen search results. Table 1 shows how our approach corresponds to HITS.

Approach | Authority nodes | Hub nodes | Edges
HITS | Authority pages | Hub pages | Hyperlinks
Our approach | Unseen search results | Representative terms | Occurrence (of the representative terms in the unseen search results)

Table 1. Our approach versus HITS.

Since we already know that the representative terms are "hub pages" and that the unseen search results are "authority pages", with respect to the former only hub scores need to be computed, and with respect to the latter only authority scores need to be computed. Finally, after iterative computation based on the mutual reinforcement principle, we can re-rank the unseen search results according to their authority scores, as well as select the representative terms with the highest hub scores to expand the query. Below we present how to construct a directed graph to begin with.

2.1 Constructing a Directed Graph

We can view the unseen search results and the representative terms as a directed graph G = (V, E). A sample directed graph is shown in Figure 1.

Figure 1. A sample directed graph.

The nodes V correspond to the unseen search results (the rectangles in Figure 1) and the representative terms (the circles in Figure 1); a directed edge p→q ∈ E is weighted by the frequency of occurrence of a representative term p in an unseen search result q (e.g., the number put on the edge t1→r2 indicates that t1 occurs twice in r2). We say that each representative term only has an out-degree, which is the number of the unseen search results it occurs in, and that each unseen search result only has an in-degree, which is the count of the representative terms it contains. Based on this, we assume that the unseen search results and the representative terms respectively correspond to the authority pages and the hub pages; this assumption is used throughout the proposed algorithm.

2.2 A HITS-like Iterative Algorithm

In this section, we present how to initialize the directed graph and how to iteratively compute the authority scores and the hub scores. Then, according to these scores, we show how to re-rank the unseen search results and expand the initial query. Initially, each unseen search result of the query is considered equally authoritative, that is,

y_1^{(0)} = y_2^{(0)} = ... = y_{|Y|}^{(0)} = \frac{1}{|Y|}   (1)

where the vector Y indicates the authority scores of all the unseen search results, and |Y| is the size of this vector. Meanwhile, each representative term, with term frequency tf_j in the history query logs that have been judged related to the current query, obtains its hub score according to the following formulation:

x_j^{(0)} = \frac{tf_j}{\sum_{i=1}^{|X|} tf_i}   (2)

where the vector X indicates the hub scores of all the representative terms, and |X| is the size of the vector X. The nodes of the directed graph are initialized in this way. Next, we associate each edge with a weight:

w(t_i \to r_j) = tf_{i,j}   (3)

where tf_{i,j} indicates the term frequency of the representative term t_i occurring in the unseen search result r_j, and w(t_i→r_j) is the weight of the edge that links from t_i to r_j. For instance, in Figure 1, w(t1→r2) = 2. After initialization, the iterative computation of hub scores and authority scores starts.
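Before turning to the iteration itself, here is a minimal sketch of the graph construction and initialization just described (equations (1)-(3)). The edge weights and query-log term frequencies below are toy values of our own choosing.

```python
# Minimal sketch of the graph construction and initialization of
# equations (1)-(3).  edge_tf[t][r] is the term frequency of term t in
# unseen result r (the edge weight); log_tf[t] is the frequency of t in
# the related query logs.  All counts here are toy values.

def initialize(edge_tf, log_tf):
    results = sorted({r for t in edge_tf for r in edge_tf[t]})
    y = {r: 1.0 / len(results) for r in results}      # eq. (1): uniform authority
    total = sum(log_tf.values())
    x = {t: log_tf[t] / total for t in log_tf}        # eq. (2): hub from log tf
    return x, y

edge_tf = {"t1": {"r1": 1, "r2": 2}, "t2": {"r2": 1, "r3": 1}}   # eq. (3)
log_tf = {"t1": 3, "t2": 1}
x0, y0 = initialize(edge_tf, log_tf)
print(x0)
print(y0)
```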
The hub score of each representative term is re-computed based on three factors: the authority score of each unseen search result where this term occurs; the frequency of this term in each unseen search result; and the total occurrence of every representative term in each unseen search result. The formulation for re-computing hub scores is as follows:

x_i'^{(k+1)} = \sum_{\forall j: t_i \to r_j} \frac{w(t_i \to r_j)}{\sum_{\forall n: t_n \to r_j} w(t_n \to r_j)} \, y_j^{(k)}   (4)

where x_i'^{(k+1)} is the hub score of a representative term t_i after the (k+1)th iteration; y_j^{(k)} is the authority score of an unseen search result r_j after the kth iteration; "∀j: t_i→r_j" indicates the set of all unseen search results that t_i occurs in; and "∀n: t_n→r_j" indicates the set of all representative terms that r_j contains. The authority score of each unseen search result is also re-computed based on three factors: the hub score of each representative term that this search result contains; the frequency of each representative term in this search result; and the total occurrence of each representative term in all unseen search results. The formulation for re-computing authority scores is as follows:

y_j'^{(k+1)} = \sum_{\forall i: t_i \to r_j} \frac{w(t_i \to r_j)}{\sum_{\forall m: t_i \to r_m} w(t_i \to r_m)} \, x_i^{(k)}   (5)

where y_j'^{(k+1)} is the authority score of an unseen search result r_j after the (k+1)th iteration; x_i^{(k)} is the hub score of a representative term t_i after the kth iteration; "∀i: t_i→r_j" indicates the set of all representative terms that r_j contains; and "∀m: t_i→r_m" indicates the set of all unseen search results that t_i occurs in. After re-computation, the hub scores and the authority scores are normalized to sum to 1. The formulation for normalization is as follows:

y_j^{(k)} = \frac{y_j'^{(k)}}{\sum_{j=1}^{|Y|} y_j'^{(k)}} \quad \text{and} \quad x_i^{(k)} = \frac{x_i'^{(k)}}{\sum_{i=1}^{|X|} x_i'^{(k)}}   (6)

The iteration, including re-computation and normalization, is repeated until the changes in the hub scores and the authority scores are smaller than some predefined threshold θ (e.g., 10^{-6}). Specifically, after each repetition, the changes in authority scores and hub scores are computed using the following formulation:

c = \sum_{j=1}^{|Y|} (y_j^{(k+1)} - y_j^{(k)})^2 + \sum_{i=1}^{|X|} (x_i^{(k+1)} - x_i^{(k)})^2   (7)

The iteration stops if c < θ. Moreover, the iteration will also stop if the number of repetitions has reached a predefined limit k (e.g., 30). The procedure of the iteration is shown in Figure 2.

Figure 2. The HITS-like iterative algorithm.
  Iterate(T, R, k, θ)
    T: a collection of m terms
    R: a collection of n search results
    k: a natural number
    θ: a predefined threshold
    Apply (1) to initialize Y.
    Apply (2) to initialize X.
    Apply (3) to initialize W.
    For i = 1, 2, ..., k
      Apply (4) to (X_{i-1}, Y_{i-1}) and obtain X'_i.
      Apply (5) to (X_{i-1}, Y_{i-1}) and obtain Y'_i.
      Apply (6) to normalize X'_i and Y'_i, obtaining X_i and Y_i.
      Apply (7) and obtain c.
      If c < θ, then break.
    End
    Return (X, Y).

As soon as the iteration stops, the top n unseen search results with the highest authority scores are selected and recommended to the user, and the top m representative terms with the highest hub scores are selected to expand the original query. Here n is a predefined number (in the PAIR system we set n = 3; n is given a small value because using implicit feedback information is sometimes risky). m is determined according to the position of the biggest gap, that is, if t_i − t_{i+1} (the difference between the scores of the ith and (i+1)th ranked terms) is bigger than the gap between any other two neighboring terms in the top half of the representative terms, then m is set to i. Furthermore, some of these representative terms (e.g., the top 50% highest-scoring terms) will be used again the next time the iterative algorithm is run, together with new terms extracted from the latest click.
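For concreteness, the following is a small runnable sketch of the iteration above (equations (4)-(7)), written by us with toy edge weights and initial scores; it mirrors Figure 2 but is not the PAIR source code.

```python
# HITS-like mutual reinforcement between representative terms (hub scores x)
# and unseen search results (authority scores y), following eqs. (4)-(7).

def iterate(w, x, y, max_iter=30, theta=1e-6):
    for _ in range(max_iter):
        in_w = {r: sum(w[t].get(r, 0) for t in w) for r in y}   # weight into r
        out_w = {t: sum(w[t].values()) for t in w}              # weight out of t
        new_x = {t: sum(w[t][r] / in_w[r] * y[r] for r in w[t])
                 for t in w}                                     # eq. (4)
        new_y = {r: sum(w[t][r] / out_w[t] * x[t] for t in w if r in w[t])
                 for r in y}                                     # eq. (5)
        sx, sy = sum(new_x.values()), sum(new_y.values())
        new_x = {t: v / sx for t, v in new_x.items()}            # eq. (6)
        new_y = {r: v / sy for r, v in new_y.items()}
        c = sum((new_y[r] - y[r]) ** 2 for r in y) + \
            sum((new_x[t] - x[t]) ** 2 for t in x)               # eq. (7)
        x, y = new_x, new_y
        if c < theta:
            break
    return x, y

w = {"t1": {"r1": 1, "r2": 2}, "t2": {"r2": 1, "r3": 1}}   # toy edge weights
x = {"t1": 0.75, "t2": 0.25}
y = {"r1": 1 / 3, "r2": 1 / 3, "r3": 1 / 3}
x, y = iterate(w, x, y)
print(sorted(y, key=y.get, reverse=True))   # results ranked by authority score
print(sorted(x, key=x.get, reverse=True))   # terms ranked by hub score
```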
3 Implementation

3.1 System Design

In this section, we present our experimental system PAIR, which is an IE Browser Helper Object (BHO) based on the popular Web search engine Google. PAIR has three main modules: the Result Retrieval module, the User Interactions module, and the Iterative Algorithm module. The architecture is shown in Figure 3. The Result Retrieval module runs in the background and retrieves results from the search engine. When the query has been expanded, this module will use the new keywords to continue retrieving. The User Interactions module can handle three types of basic user actions: (1) submitting a query; (2) clicking to view a search result; (3) clicking the "Next Page" link. For each of these actions, the system responds by: (a) exploiting and extracting representative terms from the implicit feedback information; (b) fetching the unseen search results via the Result Retrieval module; (c) sending the representative terms and the unseen search results to the Iterative Algorithm module.

Figure 3. The architecture of PAIR.

The Iterative Algorithm module implements the HITS-like algorithm described in section 2. When this module receives data from the User Interactions module, it responds by: (a) iteratively computing the hub scores and authority scores; (b) re-ranking the unseen search results and expanding the original query. Some specific techniques for capturing and exploiting implicit feedback information are described in the following sections.

3.2 Extract Representative Terms from Query Logs

We judge whether a query log is related to the current query based on the similarity between the query log and the current query text. Here the query log is associated with all documents that the user has selected to view. The form of each query log is as follows:

<query text> <query time> [clicked documents]*

The "clicked documents" consist of the URL, title and snippet of every clicked document. The reason why we use the query text of the current query, rather than its search results (including title, snippet, etc.), to compute the similarity is efficiency. If we had used the search results to determine the similarity, the computation could only start once the search engine had returned the search results. In our method, instead, we can exploit query logs while the search engine is still retrieving. Notice that although our system only utilizes the query logs of the last 24 hours, in practice we could exploit many more, because the computation cost is low relative to the retrieval process performed in parallel.
This formulation means that a term is more representative if it has a higher frequency as well as a broader distribution in the related query log. 3.3 Extract Representative Terms from Immediately Viewed Documents The representative terms extracted from immediately viewed documents are determined based on three factors: term frequency in the immediately viewed document, inverse document frequency in the entire seen search results, and a discriminant value. The formulation is as follows: ( ) ( ) N i i i i r d d w d x x tf idf x x = × × (9) Where tfxi dr is the term frequency of term xi in the viewed results set dr; tfxi dr is the inverse document frequency of xi in the entire seen results set dN. And the discriminant value d(xi) of xi is computed using the weighting schemes F2 (S. E. Robertson and K. Sparck Jones, 1976) as follows: ( ) ln ( ) ( ) i r R d n r N R x = − − (10) Where r is the number of the immediately viewed documents containing term xi; n is the number of the seen results containing term xi; R is the number of the immediately viewed documents in the query; N is the number of the entire seen results. 3.4 Sample Results Unlike other systems which do result re-ranking and query expansion respectively in different ways, our system implements these two functions simultaneously and collaboratively — Query expansion provides diversified search results which must rely on the use of re-ranking to be moved forward and recommended to the user. Figure 4. A screen shot for query expansion. After iteratively computing using our approach, the system selects some search results with top highest authority scores and recommends them to the user. In Table 2, we show that PAIR successfully re-ranks the unseen search results of “jaguar” respectively using the immediately Google result PAIR result query = “jaguar” query = “jaguar” After the 4th result being clicked query = “jaguar” “car” ∈ query logs 1 Jaguar www.jaguar.com/ Jaguar www.jaguar.com/ Jaguar UK - Jaguar Cars www.jaguar.co.uk/ 2 Jaguar CA - Jaguar Cars www.jaguar.com/ca/en/ Jaguar CA - Jaguar Cars www.jaguar.com/ca/en/ Jaguar UK - R is for… www.jaguar-racing.com/ 3 Jaguar Cars www.jaguarcars.com/ Jaguar Cars www.jaguarcars.com/ Jaguar www.jaguar.com/ 4 Apple - Mac OS X www.apple.com/macosx/ Apple - Mac OS X www.apple.com/macosx/ Jaguar CA - Jaguar Cars www.jaguar.com/ca/en/ -2 5 Apple - Support … www.apple.com/support/... Amazon.com: Mac OS X 10.2… www.amazon.com/exec/obidos/... Jaguar Cars www.jaguarcars.com/ -2 6 Jaguar UK - Jaguar Cars www.jaguar.co.uk/ Mac OS X 10.2 Jaguar… arstechnica.com/reviews/os… Apple - Mac OS X www.apple.com/macosx/ -2 7 Jaguar UK - R is for… www.jaguar-racing.com/ Macworld: News: Macworld… maccentral.macworld.com/news/… Apple - Support … www.apple.com/support/... -2 8 Jaguar dspace.dial.pipex.com/… Apple - Support… www.apple.com/support/... -3 Jaguar dspace.dial.pipex.com/… 9 Schrödinger -> Home www.schrodinger.com/ Jaguar UK - Jaguar Cars www.jaguar.co.uk/ -3 Schrödinger -> Home www.schrodinger.com/ 10 Schrödinger -> Site Map www.schrodinger.com/... Jaguar UK - R is for… www.jaguar-racing.com/ -3 Schrödinger -> Site Map www.schrodinger.com/... 589 viewed documents and the query logs. Simultaneously, some representative terms are selected to expand the original query. 
3.4 Sample Results

Unlike other systems, which do result re-ranking and query expansion separately in different ways, our system implements these two functions simultaneously and collaboratively: query expansion provides diversified search results, which must rely on re-ranking to be moved forward and recommended to the user. After the iterative computation of our approach, the system selects the search results with the highest authority scores and recommends them to the user. In Table 2, we show that PAIR successfully re-ranks the unseen search results of "jaguar" using the immediately viewed documents and the query logs, respectively. Simultaneously, some representative terms are selected to expand the original query.

Rank | Google result (query = "jaguar") | PAIR result (query = "jaguar", after the 4th result is clicked) | PAIR result (query = "jaguar", "car" ∈ query logs)
1 | Jaguar, www.jaguar.com/ | Jaguar, www.jaguar.com/ | Jaguar UK - Jaguar Cars, www.jaguar.co.uk/
2 | Jaguar CA - Jaguar Cars, www.jaguar.com/ca/en/ | Jaguar CA - Jaguar Cars, www.jaguar.com/ca/en/ | Jaguar UK - R is for…, www.jaguar-racing.com/
3 | Jaguar Cars, www.jaguarcars.com/ | Jaguar Cars, www.jaguarcars.com/ | Jaguar, www.jaguar.com/
4 | Apple - Mac OS X, www.apple.com/macosx/ | Apple - Mac OS X, www.apple.com/macosx/ | Jaguar CA - Jaguar Cars, www.jaguar.com/ca/en/ (-2)
5 | Apple - Support…, www.apple.com/support/... | Amazon.com: Mac OS X 10.2…, www.amazon.com/exec/obidos/... | Jaguar Cars, www.jaguarcars.com/ (-2)
6 | Jaguar UK - Jaguar Cars, www.jaguar.co.uk/ | Mac OS X 10.2 Jaguar…, arstechnica.com/reviews/os… | Apple - Mac OS X, www.apple.com/macosx/ (-2)
7 | Jaguar UK - R is for…, www.jaguar-racing.com/ | Macworld: News: Macworld…, maccentral.macworld.com/news/… | Apple - Support…, www.apple.com/support/... (-2)
8 | Jaguar, dspace.dial.pipex.com/… | Apple - Support…, www.apple.com/support/... (-3) | Jaguar, dspace.dial.pipex.com/…
9 | Schrödinger -> Home, www.schrodinger.com/ | Jaguar UK - Jaguar Cars, www.jaguar.co.uk/ (-3) | Schrödinger -> Home, www.schrodinger.com/
10 | Schrödinger -> Site Map, www.schrodinger.com/... | Jaguar UK - R is for…, www.jaguar-racing.com/ (-3) | Schrödinger -> Site Map, www.schrodinger.com/...

Table 2. Sample results of re-ranking. The search results in boldface are the ones that our system recommends to the user; "-3" and "-2" to the right of some results indicate how far their ranks have dropped.

In the query "jaguar" (without query logs), we click some results about "Mac OS", and then we see that the term "Mac" has been selected to expand the original query, and some results of the new query "jaguar Mac" are recommended to the user with the help of re-ranking, as shown in Figure 4.

Figure 4. A screen shot for query expansion.

4 Experiment

4.1 Experimental Methodology

It is a challenge to quantitatively evaluate the potential performance improvement of the proposed approach over Google in an unbiased way (D. Hawking et al., 1999; Xuehua Shen et al., 2005). Here, we adopt a quantitative evaluation similar to that of Xuehua Shen et al. (2005) to evaluate our system PAIR, and recruit 9 students with different backgrounds to participate in our experiment. We use query topics from the TREC 2005 and 2004 HARD tracks and the TREC 2004 Terabyte track for English information retrieval (Text REtrieval Conference, http://trec.nist.gov/), and query topics from the HTRDP 2005 Evaluation for Chinese information retrieval (http://www.863data.org.cn/). The reason why we utilize multiple TREC tasks rather than a single one is that more queries are more likely to cover the most interesting topics for each participant. Initially, each participant freely chooses some topics (typically 5 TREC topics and 5 HTRDP topics). Each query of the TREC topics is submitted to three systems: UCAIR (Xuehua Shen et al., 2005; the latest version, released on November 11, 2005, http://sifaka.cs.uiuc.edu/ir/ucair/), "PAIR No QE" (the PAIR system with the query expansion function disabled), and PAIR. Each query of the HTRDP topics needs only to be submitted to "PAIR No QE" and PAIR. We do not evaluate UCAIR using HTRDP topics, since it does not support Chinese. For each query topic, the participants use the title of the topic as the initial keyword to begin with. They can also form some other keywords by themselves if the title alone fails to describe some details of the topic. There is no limit on how many queries they must submit. During each query process, the participant may click to view some results, just as in normal web search. Then, at the end of each query, search results from these different systems are randomly and anonymously mixed together so that no participant knows where a result comes from. The participants then judge which of these results are relevant. Finally, we measure precision at the top 5, top 10, top 20 and top 30 documents of these systems.

4.2 Results and Analysis

Altogether, 45 TREC topics (62 queries in all) are chosen for English information retrieval. 712 documents are judged as relevant from the Google search results. The corresponding numbers of relevant documents from UCAIR, "PAIR No QE" and PAIR are 921, 891 and 1040, respectively. Figure 5 shows the average precision of these four systems at top n documents over the 45 TREC topics.

Figure 5. Average precision for TREC topics.

45 HTRDP topics (66 queries in all) are chosen for Chinese information retrieval. 809 documents are judged as relevant from the Google search results. The corresponding numbers of relevant documents from "PAIR No QE" and PAIR are 1198 and 1416, respectively. Figure 6 shows the average precision of these three systems at top n documents over the 45 HTRDP topics.

Figure 6. Average precision for HTRDP topics.

PAIR and "PAIR No QE" versus Google

We can see clearly from Figure 5 and Figure 6 that the precision of PAIR improves considerably over that of Google in all measurements.
Moreover, the improvement scale increases from precision at top 10 to that of top 30. One explanation for this is that the more implicit feedback information generated, the more representative terms can be obtained, and thus, the iterative algorithm can perform better, leading to more precise search results. “PAIR No QE” also significantly outperforms Google in these measurements, however, with query expansion, PAIR can perform even better. Thus, we say that result re-ranking and query expansion both play an important role in PAIR. Comparing Figure 5 with Figure 6, one can see that the improvement of PAIR versus Google in Chinese IR is even larger than that of English IR. One explanation for this is that: before implementing the iterative algorithm, each Chinese search result, including title and snippet, is segmented into words (or phrases). And only the noun, verb and adjective of these words (or phrases) are used in next stages, whereas, we only remove the stop words for English search result. Another explanation is that there are some Chinese web pages with the same content. If one of such pages is clicked, then, occasionally some repetition pages are recommended to the user. However, since PAIR is based on the search results of Google and the information concerning the result pages that PAIR can obtained is limited, which leads to it difficult to avoid the replications. PAIR and “PAIR No QE” versus UCAIR In Figure 5, we can see that the precision of “PAIR No QE” is better than that of UCAIR among top 5 and top 10 documents, and is almost the same as that of UCAIR among top 20 and top 30 documents. However, PAIR is much better than UCAIR in all measurements. This indicates that result re-ranking fails to do its best without query expansion, since the relevant documents in original query are limited, and only the re-ranking method alone cannot solve the “relevant documents sparseness” problem. Thus, the query expansion method, which can provide fresh and relevant documents, can help the re-ranking method to reach an even better performance. Efficiency of PAIR The iteration statistic in evaluation indicates that the average iteration times of our approach is 22 before convergence on condition that we set the threshold θ = 10-6. The experiment shows that the computation time of the proposed approach is imperceptible for users (less than 1ms.) 5 Related Work There have been many prior attempts to personalized search. In this paper, we focus on the related work doing personalized search based on implicit feedback information. Some of the existing studies capture users’ information need by exploiting query logs. For example, M. Speretta and S. Gauch (2005) build user profiles based on activity at the search site and study the use of these profiles to provide personalized search results. F. Liu et al. (2002) learn user's favorite categories from his query history. Their system maps the input query to a set of interesting categories based on the user profile and confines the search domain to these categories. Some studies improve retrieval performance by exploiting users’ browsing history (F. Tanudjaja and L. Mu, 2002; M. Morita and Y. Shinoda, 1994) or Web communities (A. Kritikopoulos and M. Sideri, 2003; K. Sugiyama et al., 2004) Some studies utilize client side interactions, for example, K. Bharat (2000) automatically discovers related material on behalf of the user by serving as an intermediary between the user and information retrieval systems. 
His system observes users interacting with everyday applications and then anticipates their information needs using a model of the task at hand. Some of the latest studies combine several types of implicit feedback information. J. Teevan et al. (2005) explore rich models of user interests, which are built from both search-related information, such as previously issued queries and previously visited Web pages, and other information about the user, such as documents and email the user has read and created. This information is used to re-rank Web search results within a relevance feedback framework. Our work is partly inspired by the study of Xuehua Shen et al. (2005), which is closely related to ours in that they also exploit immediately viewed documents and short-term history queries, implement query expansion and re-ranking, and develop a client-side web search agent that performs eager implicit feedback. However, their work differs from ours in three ways. First, they use cosine similarity to implement query expansion and the Rocchio formulation (J. J. Rocchio, 1971) to re-rank the search results; thus, their query expansion and re-ranking are computed separately and are not as concise and collaborative. Secondly, their query expansion is based only on past queries and is carried out before the query, which means that their query expansion does not benefit from the user's click-through data. Thirdly, they do not compute the relevance of search results and the representativeness of expanded terms in an iterative fashion; thus, their approach does not utilize the relations among search results, among expanded terms, and between search results and expanded terms.

6 Conclusions

In this paper, we studied how to exploit implicit feedback information to improve retrieval accuracy. Unlike most previous work, we propose a novel HITS-like iterative algorithm that can make use of query logs and immediately viewed documents in a unified way, which not only brings collaboration between query expansion and result re-ranking but also makes the whole system more concise. We further propose some specific techniques to capture and exploit these two types of implicit feedback information. Using these techniques, we develop a client-side web search agent PAIR. Experiments on English and Chinese collections show that our approach is both effective and efficient. However, there is still room to improve the performance of the proposed approach, such as exploiting other types of personalized information, choosing more effective strategies to extract representative terms, studying the effects of the parameters used in the approach, etc.

Acknowledgement

We would like to thank the anonymous reviewers for their helpful feedback and corrections, and the nine participants in our evaluation experiments. Additionally, this work is supported by the National Science Fund of China under contract 60203007.

References

A. Kritikopoulos and M. Sideri, 2003. The Compass Filter: Search engine result personalization using Web communities. In Proceedings of ITWP, pages 229-240.

D. Hawking, N. Craswell, P.B. Thistlewaite, and D. Harman, 1999. Results and challenges in web search evaluation. Computer Networks, 31(11-16):1321–1330.

F. Liu, C. Yu, and W. Meng, 2002. Personalized web search by mapping user queries to categories. In Proceedings of CIKM, pages 558-565.

F. Tanudjaja and L. Mu, 2002. Persona: a contextualized and personalized web search. HICSS.

G. Salton and M. J. McGill, 1983.
Introduction to Modern Information Retrieval. McGraw-Hill. G. Salton and C. Buckley, 1990. Improving retrieval performance by relevance feedback. Journal of the American Society for Information Science, 41(4):288-297. J. J. Rocchio, 1971. Relevance feedback in information retrieval. In The SMART Retrieval System : Experiments in Automatic Document Processing, pages 313–323. Prentice-Hall Inc. J. Kleinberg, 1998. Authoritative sources in a hyperlinked environment. ACM, 46(5):604–632. J. Pitkow, H. Schutze, T. Cass, R. Cooley, D. Turnbull, A. Edmonds, E. Adar, and T. Breuel, 2002. Personalized search. Communications of the ACM, 45(9):50-55. J. Teevan, S. T. Dumais, and E. Horvitz, 2005. Personalizing search via automated analysis of interests and activities. In Proceedings of SIGIR, pages 449-456. K. Bharat, 2000. SearchPad: Explicit capture of search context to support Web search. Computer Networks, 33(1-6): 493-501. K. Sugiyama, K. Hatano, and M. Yoshikawa, 2004. Adaptive Web search based on user profile constructed without any effort from user. In Proceedings of WWW, pages 675-684. M. Beaulieu and S. Jones, 1998. Interactive searching and interface issues in the okapi best match retrieval system. Interacting with Computers, 10(3):237-248. M. Morita and Y. Shinoda, 1994. Information filtering based on user behavior analysis and best match text retrieval. In Proceedings of SIGIR, pages 272–281. M. Speretta and S. Gauch, 2005. Personalizing search based on user search history. Web Intelligence, pages 622-628. R. White, I. Ruthven, and J. M. Jose, 2002. The use of implicit evidence for relevance feedback in web retrieval. In Proceedings of ECIR, pages 93–109. S. E. Robertson and K. Sparck Jones, 1976. Relevance weighting of search terms. Journal of the American Society for Information Science, 27(3):129-146. T. Joachims, L. Granka, B. Pang, H. Hembrooke, and G. Gay, 2005. Accurately Interpreting Clickthrough Data as Implicit Feedback, In Proceedings of SIGIR, pages 154-161. Xuehua Shen, Bin Tan, and Chengxiang Zhai, 2005. Implicit User Modeling for Personalized Search. In Proceedings of CIKM, pages 824-831. 592
Proceedings of the 21st International Conference on Computational Linguistics and 44th Annual Meeting of the ACL, pages 593–600, Sydney, July 2006. c⃝2006 Association for Computational Linguistics The Effect of Translation Quality in MT-Based Cross-Language Information Retrieval Jiang Zhu Haifeng Wang Toshiba (China) Research and Development Center 5/F., Tower W2, Oriental Plaza, No.1, East Chang An Ave., Dong Cheng District Beijing, 100738, China {zhujiang, wanghaifeng}@rdc.toshiba.com.cn Abstract This paper explores the relationship between the translation quality and the retrieval effectiveness in Machine Translation (MT) based Cross-Language Information Retrieval (CLIR). To obtain MT systems of different translation quality, we degrade a rule-based MT system by decreasing the size of the rule base and the size of the dictionary. We use the degraded MT systems to translate queries and submit the translated queries of varying quality to the IR system. Retrieval effectiveness is found to correlate highly with the translation quality of the queries. We further analyze the factors that affect the retrieval effectiveness. Title queries are found to be preferred in MT-based CLIR. In addition, dictionary-based degradation is shown to have stronger impact than rule-based degradation in MT-based CLIR. 1 Introduction Cross-Language Information Retrieval (CLIR) enables users to construct queries in one language and search the documents in another language. CLIR requires that either the queries or the documents be translated from a language into another, using available translation resources. Previous studies have concentrated on query translation because it is computationally less expensive than document translation, which requires a lot of processing time and storage costs (Hull & Grefenstette, 1996). There are three kinds of methods to perform query translation, namely Machine Translation (MT) based methods, dictionary-based methods and corpus-based methods. Corresponding to these methods, three types of translation resources are required: MT systems, bilingual wordlists and parallel or comparable corpora. CLIR effectiveness depends on both the design of the retrieval system and the quality of the translation resources that are used. In this paper, we explore the relationship between the translation quality of the MT system and the retrieval effectiveness. The MT system involved in this research is a rule-based Englishto-Chinese MT (ECMT) system. We degrade the MT system in two ways. One is to degrade the rule base of the system by progressively removing rules from it. The other is to degrade the dictionary by gradually removing word entries from it. In both methods, we observe successive changes on translation quality of the MT system. We conduct query translation with the degraded MT systems and obtain translated queries of varying quality. Then we submit the translated queries to the IR system and evaluate the performance. Retrieval effectiveness is found to be strongly influenced by the translation quality of the queries. We further analyze the factors that affect the retrieval effectiveness. Title queries are found to be preferred in MT-based query translation. In addition, the size of the dictionary is shown to have stronger impact on retrieval effectiveness than the size of the rule base in MTbased query translation. The remainder of this paper is organized as follows. In section 2, we briefly review related work. 
In section 3, we introduce two systems involved in this research: the rule-based ECMT system and the KIDS IR system. In section 4, we describe our experimental method. Section 5 and section 6 reports and discusses the experimental results. Finally we present our conclusion and future work in section 7. 593 2 Related Work 2.1 Effect of Translation Resources Previous studies have explored the effect of translation resources such as bilingual wordlists or parallel corpora on CLIR performance. Xu and Weischedel (2000) measured CLIR performance as a function of bilingual dictionary size. Their English-Chinese CLIR experiments on TREC 5&6 Chinese collections showed that the initial retrieval performance increased sharply with lexicon size but the performance was not improved after the lexicon exceeded 20,000 terms. Demner-Fushman and Oard (2003) identified eight types of terms that affected retrieval effectiveness in CLIR applications through their coverage by general-purpose bilingual term lists. They reported results from an evaluation of the coverage of 35 bilingual term lists in news retrieval application. Retrieval effectiveness was found to be strongly influenced by term list size for lists that contain between 3,000 and 30,000 unique terms per language. Franz et al. (2001) investigated the CLIR performance as a function of training corpus size for three different training corpora and observed approximately logarithmically increased performance with corpus size for all the three corpora. Kraaij (2001) compared three types of translation resources for bilingual retrieval based on query translation: a bilingual machine-readable dictionary, a statistical dictionary based on a parallel web corpus and the Babelfish MT service. He drew a conclusion that the mean average precision of a run was proportional to the lexical coverage. McNamee and Mayfield (2002) examined the effectiveness of query expansion techniques by using parallel corpora and bilingual wordlists of varying quality. They confirmed that retrieval performance dropped off as the lexical coverage of translation resources decreased and the relationship was approximately linear. Previous research mainly focused on studying the effectiveness of bilingual wordlists or parallel corpora from two aspects: size and lexical coverage. Kraaij (2001) examined the effectiveness of MT system, but also from the aspect of lexical coverage. Why lack research on analyzing effect of translation quality of MT system on CLIR performance? The possible reason might be the problem on how to control the translation quality of the MT system as what has been done to bilingual wordlists or parallel corpora. MT systems are usually used as black boxes in CLIR applications. It is not very clear how to degrade MT software because MT systems are usually optimized for grammatically correct sentences rather than word-by-word translation. 2.2 MT-Based Query Translation MT-based query translation is perhaps the most straightforward approach to CLIR. Compared with dictionary or corpus based methods, the advantage of MT-based query translation lies in that technologies integrated in MT systems, such as syntactic and semantic analysis, could help to improve the translation accuracy (Jones et al., 1999). However, in a very long time, fewer experiments with MT-based methods have been reported than with dictionary-based methods or corpus-based methods. 
The main reasons include: (1) MT systems of high quality are not easy to obtain; (2) MT systems are not available for some language pairs; (3) queries are usually short or even terms, which limits the effectiveness of MT-based methods. However, recent research work on CLIR shows a trend to adopt MT-based query translation. At the fifth NTCIR workshop, almost all the groups participating in Bilingual CLIR and Multilingual CLIR tasks adopt the query translation method using MT systems or machine-readable dictionaries (Kishida et al., 2005). Recent research work also proves that MT-based query translation could achieve comparable performance to other methods (Kishida et al., 2005; Nunzio et al., 2005). Considering more and more MT systems are being used in CLIR, it is of significance to carefully analyze how the performance of MT system may influence the retrieval effectiveness. 3 System Description 3.1 The Rule-Based ECMT System The MT system used in this research is a rulebased ECMT system. The translation quality of this ECMT system is comparable to the best commercial ECMT systems. The basis of the system is semantic transfer (Amano et al., 1989). Translation resources comprised in this system include a large dictionary and a rule base. The rule base consists of rules of different functions such as analysis, transfer and generation. 3.2 KIDS IR System KIDS is an information retrieval engine that is based on morphological analysis (Sakai et al., 2003). It employs the Okapi/BM25 term weighting scheme, as fully described in (Robertson & Walker, 1999; Robertson & Sparck Jones, 1997). 594 To focus our study on the relationship between MT performance and retrieval effectiveness, we do not use techniques such as pseudo-relevance feedback although they are available and are known to improve IR performance. 4 Experimental Method To obtain MT systems of varying quality, we degrade the rule-based ECMT system by impairing the translation resources comprised in the system. Then we use the degraded MT systems to translate the queries and evaluate the translation quality. Next, we submit the translated queries to the KIDS system and evaluate the retrieval performance. Finally we calculate the correlation between the variation of translation quality and the variation of retrieval effectiveness to analyze the relationship between MT performance and CLIR performance. 4.1 Degradation of MT System In this research, we degrade the MT system in two ways. One is rule-based degradation, which is to decrease the size of the rule base by randomly removing rules from the rule base. For sake of simplicity, in this research we only consider transfer rules that are used for transferring the source language to the target language and keep other kinds of rules untouched. That is, we only consider the influence of transfer rules on translation quality1. We first randomly divide the rules into segments of equal size. Then we remove the segments from the rule base, one at each time and obtain a group of degraded rule bases. Afterwards, we use MT systems with the degraded rule bases to translate the queries and get groups of translated queries, which are of different translation quality. The other is dictionary-based degradation, which is to decrease the size of the dictionary by randomly removing a certain number of word entries from the dictionary iteratively. Function words are not removed from the dictionary. 
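As an illustration of this cumulative degradation scheme, the sketch below randomly partitions the removable resource (transfer rules or non-function-word dictionary entries) into equal segments and yields one progressively degraded resource per step. This is only a sketch of the sampling procedure described above, not the authors' implementation; the names build_degraded_resources, entries, and translate_queries_with are hypothetical.

```python
import random

def build_degraded_resources(entries, n_segments=36, seed=0):
    """Split removable entries into equal segments and yield progressively
    smaller resources: the full resource first, then one more segment
    removed at each step, down to complete degradation.
    Any remainder after equal splitting is ignored in this sketch."""
    rng = random.Random(seed)
    shuffled = entries[:]
    rng.shuffle(shuffled)
    size = len(shuffled) // n_segments
    segments = [shuffled[i * size:(i + 1) * size] for i in range(n_segments)]
    for removed in range(n_segments + 1):
        kept = [e for seg in segments[removed:] for e in seg]
        yield removed, set(kept)

# Example: 43,200 removable dictionary entries (function words excluded),
# as in the dictionary-based degradation described above.
# for removed, kept_entries in build_degraded_resources(word_entries):
#     translate_queries_with(kept_entries)   # hypothetical MT call
```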
Using MT systems with the degraded dictionaries, we also obtain groups of translated queries of different translation quality.

4.2 Evaluation of Performance We measure the performance of the MT system by translation quality and use the NIST score as the evaluation measure (Doddington, 2002). The NIST scores reported in this paper are generated by the NIST scoring toolkit [2]. For retrieval performance, we use Mean Average Precision (MAP) as the evaluation measure (Voorhees, 2003). The MAP values reported in this paper are generated by the trec_eval toolkit [3], which is the standard tool used by TREC for evaluating an ad hoc retrieval run.
[1] In the following part of this paper, rules refer to transfer rules unless explicitly stated.
[2] The toolkit could be downloaded from: http://www.nist.gov/speech/tests/mt/resources/scoring.htm
[3] The toolkit could be downloaded from: http://trec.nist.gov/trec_eval/trec_eval.7.3.tar.gz

5 Experiments 5.1 Data The experiments are conducted on the TREC 5&6 Chinese collection. The collection consists of a document set, a topic set and a relevance judgment file. The document set contains articles published in People's Daily from 1991 to 1993, and news articles released by the Xinhua News Agency in 1994 and 1995. It includes 164,789 documents in total. The topic set contains 54 topics. In the relevance judgment file, a binary indication of relevant (1) or non-relevant (0) is given.

<top> <num> Number: CH41 <C-title> 京九铁路的桥梁隧道工程 <E-title> Bridge and Tunnel Construction for the Beijing-Kowloon Railroad <C-desc> Description: 京九铁路,桥梁,隧道,贯通,特大桥 <E-desc> Description: Beijing-Kowloon Railroad, bridge, tunnel, connection, very large bridge <C-narr> Narrative: 相关文件必须提到京九铁路的桥梁隧道工程,包括地点、施工阶段、长度. <E-narr> Narrative: A relevant document discusses bridge and tunnel construction for the Beijing-Kowloon Railroad, including location, construction status, span or length. </top>
Figure 1. Example of TREC Topic

5.2 Query Formulation & Evaluation For each TREC topic, three fields are provided: title, description and narrative, both in Chinese and English, as shown in figure 1. The title field is the statement of the topic. The description field lists some terms that describe the topic. The narrative field provides a complete description of document relevance for the assessors. In our experiments, we use two kinds of queries: title queries (using only the title field) and desc queries (using only the description field). We do not use the narrative field because it is the criterion used by the assessors to judge whether a document is relevant or not, so it usually contains quite a number of unrelated words. Title queries are one-sentence queries. When using the NIST scoring tool to evaluate the translation quality of the MT system, reference translations of the source language sentences are required. The NIST scoring tool supports multiple references. In our experiments, we introduce two reference translations for each title query. One is the Chinese title (C-title) in the title field of the original TREC topic (reference translation 1); the other is the translation of the title query given by a human translator (reference translation 2). This is to alleviate the bias on translation evaluation introduced by having only one reference translation. An example of a title query and its reference translations is shown in figure 2. Reference 1 is the Chinese title provided in the original TREC topic. Reference 2 is the human translation of the query. For this query, the translation output generated by the MT system is "在中国的机器人技术研究".
If only reference 1 is used as the reference translation, the system output will not be regarded as a good translation. But in fact, it is a good translation for the query. Introducing reference 2 helps to alleviate this unfair evaluation.

Title Query: CH27 <query> Robotics Research in China <reference 1> 中国在机器人方面的研制 <reference 2> 中国的机器人技术
Figure 2. Example of Title Query

A desc query is not a sentence but a string of terms that describes the topic. A term in a desc query is either a word, a phrase or a string of words. A desc query is not a proper input for the MT system, but the MT system still works: it translates the desc query term by term. When the term is a word or a phrase that exists in the dictionary, the MT system looks up the dictionary and takes the first translation in the entry as the translation of the term without any further analysis. When the term is a string of words such as "number(数量) of(的) infections(感染)", the system translates the term into "感染数量". Besides using the Chinese description (C-desc) in the description field of the original TREC topic as the reference translation of each desc query, we also have the human translator give another reference translation for each desc query. Comparison of the two references shows that they are very similar to each other. So in our final experiments, we use only one reference for each desc query, which is the Chinese description (C-desc) provided in the original TREC topic. An example of a desc query and its reference translation is shown in figure 3.

Desc Query: CH22 <query> malaria, number of deaths, number of infections <reference> 疟疾,死亡人数,感染病例
Figure 3. Example of Desc Query

5.3 Runs Previous studies (Kwok, 1997; Nie et al., 2000) proved that using word and n-gram indexes leads to comparable performance for Chinese IR. So in our experiments, we use bi-grams as index units. We conduct the following runs to analyze the relationship between MT performance and CLIR performance: • rule-title: MT-based title query translation with degraded rule base • rule-desc: MT-based desc query translation with degraded rule base • dic-title: MT-based title query translation with degraded dictionary • dic-desc: MT-based desc query translation with degraded dictionary For baseline comparison, we conduct Chinese monolingual runs with title queries and desc queries.

5.4 Monolingual Performance The results of the Chinese monolingual runs are shown in Table 1.

Run        MAP
title-cn1  0.3143
title-cn2  0.3001
desc-cn    0.3514
Table 1. Monolingual Results

Here, title-cn1 uses reference translation 1 of each title query as the Chinese query, title-cn2 uses reference translation 2 of each title query as the Chinese query, and desc-cn uses the reference translation of each desc query as the Chinese query. Among the three monolingual runs, desc-cn achieves the best performance. Title-cn1 achieves better performance than title-cn2, which indicates that directly using the Chinese title as the Chinese query performs better than using the human translation of the title query.

5.5 Results on Rule-Based Degradation There are 27,000 transfer rules in the rule base in total, and we use all of them in the experiment of rule-based degradation. The 27,000 rules are randomly divided into 36 segments, each of which contains 750 rules. To degrade the rule base, we start with no degradation and then remove one segment at a time, up to a complete degradation with all segments removed.
Each time a segment is removed from the rule base, the MT system based on the degraded rule base produces a group of translations for the input queries. The completely degraded system, with all segments removed, produces a group of rough translations for the input queries. Figure 4 and figure 5 show the experimental results on title queries (rule-title) and desc queries (rule-desc) respectively. Figure 4(a) shows the changes in translation quality of the degraded MT systems on title queries. From the result, we observe a successive change in MT performance: the fewer rules, the worse the translation quality. The NIST score varies from 7.3548 at no degradation to 5.9155 at complete degradation. Figure 4(b) shows the changes in retrieval performance when the translations generated by the degraded MT systems are used as queries. The MAP varies from 0.3126 at no degradation to 0.2810 at complete degradation. Comparison of figures 4(a) and 4(b) indicates similar variations between translation quality and retrieval performance: the better the translation quality, the better the retrieval performance. Figure 5(a) shows the changes in translation quality of the degraded MT systems on desc queries. Figure 5(b) shows the corresponding changes in retrieval performance. We observe a relationship between MT performance and retrieval performance similar to that in the results based on title queries. The NIST score varies from 5.0297 at no degradation to 4.8497 at complete degradation. The MAP varies from 0.2877 at no degradation to 0.2759 at complete degradation.

[Figure 4(a). MT Performance on Rule-based Degradation with Title Query]
[Figure 4(b). Retrieval Effectiveness on Rule-based Degradation with Title Query]
[Figure 5(a). MT Performance on Rule-based Degradation with Desc Query]
[Figure 5(b). Retrieval Effectiveness on Rule-based Degradation with Desc Query]
[Figure 6(a). MT Performance on Dictionary-based Degradation with Title Query]
[Figure 6(b). Retrieval Effectiveness on Dictionary-based Degradation with Title Query]
[Figure 7(a). MT Performance on Dictionary-based Degradation with Desc Query]
[Figure 7(b). Retrieval Effectiveness on Dictionary-based Degradation with Desc Query]

5.6 Results on Dictionary-Based Degradation The dictionary contains 169,000 word entries.
To make the results on dictionary-based degradation comparable to the results on rule-based degradation, we degrade the dictionary so that the variation interval in translation quality is similar to that of the rule-based degradation. We randomly select 43,200 word entries for degradation. These word entries do not include function words. We equally split these word entries into 36 segments. Then we remove one segment from the dictionary at a time until all the segments are removed, and obtain 36 degraded dictionaries. We use the MT systems with the degraded dictionaries to translate the queries and observe the changes in translation quality and retrieval performance. The experimental results on title queries (dic-title) and desc queries (dic-desc) are shown in figure 6 and figure 7 respectively. From the results, we observe a relationship between translation quality and retrieval performance similar to what we observed in the rule-based degradation. For both title queries and desc queries, the larger the dictionary size, the better the NIST score and MAP are. For title queries, the NIST score varies from 7.3548 at no degradation to 6.0067 at complete degradation. The MAP varies from 0.3126 at no degradation to 0.1894 at complete degradation. For desc queries, the NIST score varies from 5.0297 at no degradation to 4.4879 at complete degradation. The MAP varies from 0.2877 at no degradation to 0.2471 at complete degradation.

5.7 Summary of the Results Here we summarize the results of the four runs in Table 2.

Run                    NIST Score  MAP
title queries
  No degradation       7.3548      0.3126
  Complete: rule-title 5.9155      0.2810
  Complete: dic-title  6.0067      0.1894
desc queries
  No degradation       5.0297      0.2877
  Complete: rule-desc  4.8497      0.2759
  Complete: dic-desc   4.4879      0.2471
Table 2. Summary of Runs

6 Discussion Based on our observations, we analyze the correlations between NIST scores and MAPs, as listed in Table 3. In general, there is a strong correlation between translation quality and retrieval effectiveness. The correlations are above 95% for all four runs, which means that, in general, better MT performance leads to better retrieval performance.

Run        Correlation
rule-title 0.9728
rule-desc  0.9500
dic-title  0.9521
dic-desc   0.9582
Table 3. Correlation Between Translation Quality & Retrieval Effectiveness

6.1 Impacts of Query Format For Chinese monolingual runs, retrieval based on desc queries achieves better performance than the runs based on title queries. This is because a desc query consists of terms that relate to the topic, i.e., all the terms in a desc query are precise query terms. But a title query is a sentence, which usually introduces words that are unrelated to the topic. Results on bilingual retrieval are just the contrary: title queries perform better than desc queries. Moreover, the MAP at no degradation for title queries is 0.3126, which is about 99.46% of the performance of the monolingual run title-cn1, and outperforms the title-cn2 run. But the MAP at no degradation for desc queries is 0.2877, which is just 81.87% of the performance of the monolingual run desc-cn. Comparison of the results shows that the MT system performs better on title queries than on desc queries. This is reasonable because desc queries are strings of terms, whereas the MT system is optimized for grammatically correct sentences rather than word-by-word translation.
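The correlations reported in Table 3 can be reproduced by pairing, for each degradation step, the NIST score of the degraded MT system with the MAP of the corresponding retrieval run and computing a Pearson correlation. The sketch below is illustrative only; the numbers in the example lists are placeholders, not the values behind Table 3.

```python
def pearson(xs, ys):
    """Pearson correlation between two equal-length sequences."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = sum((x - mx) ** 2 for x in xs) ** 0.5
    sy = sum((y - my) ** 2 for y in ys) ** 0.5
    return cov / (sx * sy)

# nist[i] and map_[i] would hold the scores of the i-th degraded system,
# e.g. 37 points for the 36-segment rule-base degradation plus no degradation.
nist = [7.3548, 7.1, 6.8, 6.3, 5.9155]        # placeholder values
map_ = [0.3126, 0.308, 0.301, 0.292, 0.2810]  # placeholder values
print(round(pearson(nist, map_), 4))
```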
Considering the correlation between translation quality and retrieval effectiveness, it is rational that title queries achieve better results on retrieval than desc queries. 6.2 Impacts of Rules and Dictionary Table 4 shows the fall of NIST score and MAP at complete degradation compared with NIST score and MAP achieved at no degradation. Comparison on the results of title queries shows that similar variation of translation quality leads to quite different variation on retrieval effectiveness. For rule-title run, 19.57% reduction in translation quality results in 10.11% reduction in retrieval effectiveness. But for dic-title run, 18.33% reduction in translation quality results in 39.41% reduction in retrieval effectiveness. This indicates that retrieval effectiveness is more sensitive to the size of the dictionary than to the size of the rule base for title queries. Why dictionarybased degradation has stronger impact on retrieval effectiveness than rule-based degradation? This is because retrieval systems are typically more tolerant of syntactic than semantic translation errors (Fluhr, 1997). Therefore although syntactic errors caused by the degradation of the rule base result in a decrease of translation quality, they have smaller impacts on retrieval effectiveness than the word translation errors caused by the degradation of dictionary. For desc queries, there is no big difference between dictionary-based degradation and rulebased degradation. This is because the MT system translates the desc queries term by term, so degradation of rule base mainly results in word translation errors instead of syntactic errors. Thus, degradation of dictionary and rule base has similar effect on retrieval effectiveness. Run NIST Score Fall MAP Fall title queries rule-title 19.57% 10.11% dic-title 18.33% 39.41% desc queries rule-desc 3.58% 4.10% dic-desc 10.77% 14.11% Table 4. Fall on Translation Quality & Retrieval Effectiveness 7 Conclusion and Future Work In this paper, we investigated the effect of translation quality in MT-based CLIR. Our study showed that the performance of MT system and IR system correlates highly with each other. We further analyzed two main factors in MT-based CLIR. One factor is the query format. We concluded that title queries are preferred for MTbased CLIR because MT system is usually optimized for translating sentences rather than words. The other factor is the translation resources comprised in the MT system. Our observation showed that the size of the dictionary has a stronger effect on retrieval effectiveness than the size of the rule base in MT-based CLIR. Therefore in order to improve the retrieval effectiveness of a MT-based CLIR application, it is more 599 effective to develop a larger dictionary than to develop more rules. This introduces another interesting question relating to MT-based CLIR. That is how CLIR can benefit further from MT. Directly using the translations generated by the MT system may not be the best choice for the IR system. There are rich features generated during the translation procedure. Will such features be helpful to CLIR? This question is what we would like to answer in our future work. References Shin-ya Amano, Hideki Hirakawa, Hirosysu Nogami, and Akira Kumano. 1989. The Toshiba Machine Translation System. Future Computing System, 2(3):227-246. Dina Demner-Fushman, and Douglas W. Oard. 2003. The Effect of Bilingual Term List Size on Dictionary-Based Cross-Language Information Retrieval. In Proc. 
of the 36th Hawaii International Conference on System Sciences (HICSS-36), pages 108117. George Doddington. 2002. Automatic Evaluation of Machine Translation Quality Using N-gram Cooccurrence Statistics. In Proc. of the Second International Conference on Human Language Technology (HLT-2002), pages 138-145. Christian Fluhr. 1997. Multilingual Information Retrieval. In Ronald A Cole, Joseph Mariani, Hans Uszkoreit, Annie Zaenen, and Victor Zue (Eds.), Survey of the State of the Art in Human Language Technology, pages 261-266, Cambridge University Press, New York. Martin Franz, J. Scott McCarley, Todd Ward, and Wei-Jing Zhu. 2001. Quantifying the Utility of Parallel Corpora. In Proc. of the 24th Annual ACM Conference on Research and Development in Information Retrieval (SIGIR-2001), pages 398-399. David A. Hull and Gregory Grefenstette. 1996. Querying Across Languages: A Dictionary-Based Approach to Multilingual Information Retrieval. In Proc. of the 19th Annual ACM Conference on Research and Development in Information Retrieval (SIGIR-1996), pages 49-57. Gareth Jones, Tetsuya Sakai, Nigel Collier, Akira Kumano and Kazuo Sumita. 1999. Exploring the Use of Machine Translation Resources for EnglishJapanese Cross-Language Infromation Retrieval. In Proc. of MT Summit VII Workshop on Machine Translation for Cross Language Information Retrieval, pages 15-22. Kazuaki Kishida, Kuang-hua Chen, Sukhoon Lee, Kazuko Kuriyama, Noriko Kando, Hsin-Hsi Chen, and Sung Hyon Myaeng. 2005. Overview of CLIR Task at the Fifth NTCIR Workshop. In Proc. of the NTCIR-5 Workshop Meeting, pages 1-38. Wessel Kraaij. 2001. TNO at CLEF-2001: Comparing Translation Resources. In Proc. of the CLEF-2001 Workshop, pages 78-93. Kui-Lam Kwok. 1997. Comparing Representation in Chinese Information Retrieval. In Proc. of the 20th Annual ACM Conference on Research and Development in Information Retrieval (SIGIR-1997), pages 34-41. Paul McNamee and James Mayfield. 2002. Comparing Cross-Language Query Expansion Techniques by Degrading Translation Resources. In Proc. of the 25th Annual ACM Conference on Research and Development in Information Retrieval (SIGIR2002), pages 159-166. Jian-Yun Nie, Jianfeng Gao, Jian Zhang, and Ming Zhou. 2000. On the Use of Words and N-grams for Chinese Information Retrieval. In Proc. of the Fifth International Workshop on Information Retrieval with Asian Languages (IRAL-2000), pages 141-148. Giorgio M. Di Nunzio, Nicola Ferro, Gareth J. F. Jones, and Carol Peters. 2005. CLEF 2005: Ad Hoc Track Overview. In C. Peters (Ed.), Working Notes for the CLEF 2005 Workshop. Stephen E. Robertson and Stephen Walker. 1999. Okapi/Keenbow at TREC-8. In Proc. of the Eighth Text Retrieval Conference (TREC-8), pages 151162. Stephen E. Robertson and Karen Sparck Jones. 1997. Simple, Proven Approaches to Text Retrieval. Technical Report 356, Computer Laboratory, University of Cambridge, United Kingdom. Tetsuya Sakai, Makoto Koyama, Masaru Suzuki, and Toshihiko Manabe. 2003. Toshiba KIDS at NTCIR-3: Japanese and English-Japanese IR. In Proc. of the Third NTCIR Workshop on Research in Information Retrieval, Automatic Text Summarization and Question Answering (NTCIR-3), pages 51-58. Ellen M. Voorhees. 2003. Overview of TREC 2003. In Proc. of the Twelfth Text Retrieval Conference (TREC 2003), pages 1-13. Jinxi Xu and Ralph Weischedel. 2000. Cross-lingual Information Retrieval Using Hidden Markov Models. In Proc. 
of the 2000 Joint SIGDAT Conference on Empirical Methods in Natural Language Processing and Very Large Corpora (EMNLP/VLC2000), pages 95-103. 600
Proceedings of the 21st International Conference on Computational Linguistics and 44th Annual Meeting of the ACL, pages 601–608, Sydney, July 2006. c⃝2006 Association for Computational Linguistics A Comparison of Document, Sentence, and Term Event Spaces Catherine Blake School of Information and Library Science University of North Carolina at Chapel Hill North Carolina, NC 27599-3360 [email protected] Abstract The trend in information retrieval systems is from document to sub-document retrieval, such as sentences in a summarization system and words or phrases in question-answering system. Despite this trend, systems continue to model language at a document level using the inverse document frequency (IDF). In this paper, we compare and contrast IDF with inverse sentence frequency (ISF) and inverse term frequency (ITF). A direct comparison reveals that all language models are highly correlated; however, the average ISF and ITF values are 5.5 and 10.4 higher than IDF. All language models appeared to follow a power law distribution with a slope coefficient of 1.6 for documents and 1.7 for sentences and terms. We conclude with an analysis of IDF stability with respect to random, journal, and section partitions of the 100,830 full-text scientific articles in our experimental corpus. 1 Introduction The vector based information retrieval model identifies relevant documents by comparing query terms with terms from a document corpus. The most common corpus weighting scheme is the term frequency (TF) x inverse document frequency (IDF), where TF is the number of times a term appears in a document, and IDF reflects the distribution of terms within the corpus (Salton and Buckley, 1988). Ideally, the system should assign the highest weights to terms with the most discriminative power. One component of the corpus weight is the language model used. The most common language model is the Inverse Document Frequency (IDF), which considers the distribution of terms between documents (see equation (1)). IDF has played a central role in retrieval systems since it was first introduced more than thirty years ago (Sparck Jones, 1972). IDF(ti)=log2(N)–log2(ni)+1 (1) N is the total number of corpus documents; ni is the number of documents that contain at least one occurrence of the term ti; and ti is a term, which is typically stemmed. Although information retrieval systems are trending from document to sub-document retrieval, such as sentences for summarization and words, or phrases for question answering, systems continue to calculate corpus weights on a language model of documents. Logic suggests that if a system identifies sentences rather than documents, it should use a corpus weighting scheme based on the number of sentences rather than the number documents. That is, the system should replace IDF with the Inverse Sentence Frequency (ISF), where N in (1) is the total number of sentences and ni is the number of sentences with term i. Similarly, if the system retrieves terms or phrases then IDF should be replaced with the Inverse Term Frequency (ITF), where N in (1) is the vocabulary size, and ni is the number of times a term or phrases appears in the corpus. The challenge is that although document language models have had unprecedented empirical success, language models based on a sentence or term do not appear to work well (Robertson, 2004). Our goal is to explore the transition from the document to sentence and term spaces, such that we may uncover where the language models start 601 to break down. 
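Equation (1) and its sentence- and term-level variants translate directly into code. The sketch below computes IDF, ISF and ITF for a toy corpus; it is illustrative only, the whitespace tokenizer is a deliberately crude stand-in for the stemming pipeline described later, and it assumes one reading of the ITF definition in which N is the total number of term occurrences in the corpus.

```python
import math
from collections import Counter

def idf_like(N, n_i):
    """Equation (1): log2(N) - log2(n_i) + 1."""
    return math.log2(N) - math.log2(n_i) + 1

def corpus_weights(documents):
    """documents: list of documents, each a list of sentence strings.
    Tokenization is a naive lowercase whitespace split, for illustration only."""
    doc_freq, sent_freq, term_freq = Counter(), Counter(), Counter()
    n_sentences = 0
    for doc in documents:
        doc_terms = set()
        for sentence in doc:
            n_sentences += 1
            tokens = sentence.lower().split()
            term_freq.update(tokens)          # every occurrence counts
            sent_freq.update(set(tokens))     # once per sentence
            doc_terms.update(tokens)
        doc_freq.update(doc_terms)            # once per document
    n_docs = len(documents)
    n_tokens = sum(term_freq.values())        # assumed reading of "vocabulary size" for ITF
    idf = {t: idf_like(n_docs, n) for t, n in doc_freq.items()}
    isf = {t: idf_like(n_sentences, n) for t, n in sent_freq.items()}
    itf = {t: idf_like(n_tokens, n) for t, n in term_freq.items()}
    return idf, isf, itf
```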
In this paper, we explore this goal by answering the following questions: How correlated are the raw document, sentence, and term spaces? How correlated are the IDF, ISF, and ITF values? How well does each language models conform to Zipf’s Law and what are the slope coefficients? How sensitive is IDF with respect to sub-sets of a corpus selected at random, from journals, or from document sections including the abstract and body of an article? This paper is organized as follows: Section 2 provides the theoretical and practical implications of this study; Section 3 describes the experimental design we used to study document, sentence, and term, spaces in our corpora of more than one-hundred thousand full-text documents; Section 4 discusses the results; and Section 5 draws conclusions from this study. 2 Background and Motivation The transition from document to sentence to term spaces has both theoretical and practical ramifications. From a theoretical standpoint, the success of TFxIDF is problematic because the model combines two different event spaces – the space of terms in TF and of documents in IDF. In addition to resolving the discrepancy between event spaces, the foundational theories in information science, such as Zipf’s Law (Zipf, 1949) and Shannon’s Theory (Shannon, 1948) consider only a term event space. Thus, establishing a direct connection between the empirically successful IDF and the theoretically based ITF may enable a connection to previously adopted information theories. 0 5 10 15 20 25 0 5 10 15 20 25 log(Vocababulary Size (n)) log(Corpus Size (N)) SS SM SL MS MM ML LS LM LL first IDF paper this paper Document space dominates Vocabulary space dominates the web over time ↑ Figure 1. Synthetic data showing IDF trends for different sized corpora and vocabulary. Understanding the relationship among document, sentence and term spaces also has practical importance. The size and nature of text corpora has changed dramatically since the first IDF experiments. Consider the synthetic data shown in Figure 1, which reflects the increase in both vocabulary and corpora size from small (S), to medium (M), to large (L). The small vocabulary size is from the Cranfield corpus used in Sparck Jones (1972), medium is from the 0.9 million terms in the Heritage Dictionary (Pickett 2000) and large is the 1.3 million terms in our corpus. The small number of documents is from the Cranfield corpus in Sparck Jones (1972), medium is 100,000 from our corpus, and large is 1 million As a document corpus becomes sufficiently large, the rate of new terms in the vocabulary decreases. Thus, in practice the rate of growth on the x-axis of Figure 1 will slow as the corpus size increases. In contrast, the number of documents (shown on the y-axis in Figure 1) remains unbounded. It is not clear which of the two components in equation (1), the log2(N), which reflects the number of documents, or the log2(ni),which reflects the distribution of terms between documents within the corpus will dominate the equation. Our strategy is to explore these differences empirically. In addition to changes in the vocabulary size and the number of documents, the average number of terms per document has increased from 7.9, 12.2 and 32 in Sparck Jones (1972), to 20 and 32 in Salton and Buckley (1988), to 4,981 in our corpus. 
The transition from abstracts to fulltext documents explains the dramatic difference in document length; however, the impact with respect to the distribution of terms and motivates us to explore differences between the language used in an abstract, and that used in the body of a document. One last change from the initial experiments is a trend towards an on-line environment, where calculating IDF is prohibitively expensive. This suggests a need to explore the stability of IDF so that system designers can make an informed decision regarding how many documents should be included in the IDF calculations. We explore the stability of IDF in random, journal, and document section sub-sets of the corpus. 3 Experimental Design Our goal in this paper is to compare and contrast language models based on a document with those based on a sentence and term event spaces. We considered several of the corpora from the Text Retrieval Conferences (TREC, trec.nist.gov); however, those collections were primarily news 602 articles. One exception was the recently added genomics track, which considered full-text scientific articles, but did not provide relevance judgments at a sentence or term level. We also considered the sentence level judgments from the novelty track and the phrase level judgments from the question-answering track, but those were news and web documents respectively and we had wanted to explore the event spaces in the context of scientific literature. Table 1 shows the corpus that we developed for these experiments. The American Chemistry Society provided 103,262 full-text documents, which were published in 27 journals from 200020041. We processed the headings, text, and tables using Java BreakIterator class to identify sentences and a Java implementation of the Porter Stemming algorithm (Porter, 1980) to identify terms. The inverted index was stored in an Oracle 10i database. Docs Avg Tokens Journal # % Length Million % ACHRE4 548 0.5 4923 2.7 1 ANCHAM 4012 4.0 4860 19.5 4 BICHAW 8799 8.7 6674 58.7 11 BIPRET 1067 1.1 4552 4.9 1 BOMAF6 1068 1.1 4847 5.2 1 CGDEFU 566 0.5 3741 2.1 <1 CMATEX 3598 3.6 4807 17.3 3 ESTHAG 4120 4.1 5248 21.6 4 IECRED 3975 3.9 5329 21.2 4 INOCAJ 5422 5.4 6292 34.1 6 JACSAT 14400 14.3 4349 62.6 12 JAFCAU 5884 5.8 4185 24.6 5 JCCHFF 500 0.5 5526 2.8 1 JCISD8 1092 1.1 4931 5.4 1 JMCMAR 3202 3.2 8809 28.2 5 JNPRDF 2291 2.2 4144 9.5 2 JOCEAH 7307 7.2 6605 48.3 9 JPCAFH 7654 7.6 6181 47.3 9 JPCBFK 9990 9.9 5750 57.4 11 JPROBS 268 0.3 4917 1.3 <1 MAMOBX 6887 6.8 5283 36.4 7 MPOHBP 58 0.1 4868 0.3 <1 NALEFD 1272 1.3 2609 3.3 1 OPRDFK 858 0.8 3616 3.1 1 ORLEF7 5992 5.9 1477 8.8 2 Total 100,830 526.6 Average 4,033 4.0 4,981 21.1 Std Dev 3,659 3.6 1,411 20.3 Table 1. Corpus summary. 1 Formatting inconsistencies precluded two journals and reduced the number of documents by 2,432. We made the following comparisons between the document, sentence, and term event spaces. (1) Raw term comparison A set of well-correlated spaces would enable an accurate prediction from one space to the next. We will plot pair-wise correlations between each space to reveal similarities and differences. This comparison reflects a previous analysis comprising a random sample of 193 words from a 50 million word corpus of 85,432 news articles (Church and Gale 1999). Church and Gale’s analysis of term and document spaces resulted in a p value of -0.994. 
Our work complements their approach by considering full-text scientific articles rather than news documents, and we consider the entire stemmed term vocabulary in a 526 million-term corpus. (2) Zipf Law comparison Information theory tells us that the frequency of terms in a corpus conforms to the power law distribution K/jθ (Baeza-Yates and Ribeiro-Neto 1999). Zipf’s Law is a special case of the power law, where θ is close to 1 (Zipf, 1949). To provide another perspective of the alternative spaces, we calculated the parameters of Zipf’s Law, K and θ for each event space and journal using the binning method proposed in (Adamic 2000). By accounting for K, the slope as defined by θ will provide another way to characterize differences between the document, sentence and term spaces. We expect that all event spaces will conform to Zipf’s Law. (3) Direct IDF, ISF, and ITF comparison The log2(N) and log2(ni) should allow a direct comparison between IDF, ISF and ITF. Our third experiment was to provide pair-wise comparisons among these the event spaces. (4) Abstract versus full-text comparison Language models of scientific articles often consider only abstracts because they are easier to obtain than full-text documents. Although historically difficult to obtain, the increased availability of full-text articles motivates us to understand the nature of language within the body of a document. For example, one study found that full-text articles require weighting schemes that consider document length (Kamps, et al, 2005). However, controlling the weights for document lengths may hide a systematic difference between the language used in abstracts and the language used in the body of a document. For example, authors may use general language in an 603 abstract and technical language within a document. Transitioning from abstracts to full-text documents presents several challenges including how to weigh terms within the headings, figures, captions, and tables. Our forth experiment was to compare IDF between the abstract and full text of the document. We did not consider text from headings, figures, captions, or tables. (5) IDF Sensitivity In a dynamic environment such as the Web, it would be desirable to have a corpus-based weight that did not change dramatically with the addition of new documents. An increased understanding of IDF stability may enable us to make specific system recommendations such as if the collection increases by more than n% then update the IDF values. To explore the sensitivity we compared the amount of change in IDF values for various subsets of the corpus. IDF values were calculated using samples of 10%, 20%, …, 90% and compared with the global IDF. We stratified sampling such that the 10% sample used term frequencies in 10% of the ACHRE4 articles, 10% of the BICHAW articles, etc. To control for variations in the corpus, we repeated each sample 10 times and took the average from the 10 runs. To explore the sensitivity we compared the global IDF in Equation 1 with the local sample, where N was the average number of documents in the sample and ni was the average term frequency for each stemmed term in the sample. In addition to exploring sensitivity with respect to a random subset, we were interested in learning more about the relationship between the global IDF and the IDF calculated on a journal sub-set. 
To explore these differences, we compared the global IDF with a local IDF where N was the number of documents in each journal and ni was the number of times the stemmed term appears in the text of that journal.

4 Results and Discussion The 100,830 full-text documents comprised 2,001,730 distinct unstemmed terms and 1,391,763 stemmed terms. All experiments reported in this paper consider stemmed terms.

4.1 Raw frequency comparison The dimensionality of the document, sentence, and term spaces varied greatly, with 100,830 documents, 16.5 million sentences, and 2.0 million distinct unstemmed terms (526.0 million in total), and 1.39 million distinct stemmed terms. Figure 2A shows the correlation between the frequency of a term in the document space (x) and the average frequency of the same set of terms in the sentence space (y). For example, the average number of sentences for the set of terms that appear in 30 documents is 74.6. Figure 2B compares the document (x) and average term frequency (y). These figures suggest that the document space differs substantially from the sentence and term spaces. Figure 2C shows the sentence frequency (x) and average term frequency (y), demonstrating that the sentence and term spaces are highly correlated. Luhn proposed that if terms were ranked by the number of times they occurred in a corpus, then the terms of interest would lie within the center of the ranked list (Luhn 1958). Figures 2D, E and F show the standard deviation between the document and sentence space, the document and term space, and the sentence and term space respectively. These figures suggest that the greatest variation occurs for important terms.

[Figure 2. Raw frequency correlation between document, sentence, and term spaces. Panels A-C plot document vs. sentence, document vs. term, and sentence vs. term frequencies; panels D-F plot the corresponding standard deviations (log-log scale).]

4.2 Zipf's Law comparison Zipf's Law states that the frequency of terms in a corpus conforms to a power law distribution K/j^θ where θ is close to 1 (Zipf, 1949). We calculated the K and θ coefficients for each journal and language model combination using the binning method proposed in (Adamic, 2000). Figures 3A-C show the actual frequencies and the power law fit for each language model in just one of the 25 journals (JACSAT). These and the remaining 72 figures (not shown) suggest that Zipf's Law holds in all event spaces. Zipf's Law states that θ should be close to -1. In our corpus, the average θ in the document space was -1.65, while the average θ in both the sentence and term spaces was -1.73. Figure 3D compares the document slope coefficient (x) for each of the 25 journals with the sentence and term space coefficients (y). These findings are consistent with a recent study that suggested θ should be closer to 2 (Cancho 2005). Another study found that the term frequency rank distribution was a better fit to Zipf's Law when the term space comprised both words and phrases (Ha et al, 2002). We considered only stemmed terms. Other studies suggest that a Poisson mixture model would better capture the frequency rank distribution than the power model (Church and Gale, 1995). A comprehensive overview of using Zipf's Law to model language can be found in (Guiter and Arapov, 1982).

[Figure 3. Zipf's Law comparison. A through C show the power law distribution for the journal JACSAT in the document (A, fitted K=89283, m=1.6362), sentence (B, K=185818, m=1.7138), and term (C, K=185502, m=1.7061) event spaces. D shows the document, sentence, and term slope coefficients for each of the 25 journals when fit to the power law K=j^m, where j is the rank.]

4.3 Direct IDF, ISF, and ITF comparison Our third experiment was to compare the three language models directly. Figure 4A shows the average, minimum and maximum ISF value for each rounded IDF value. After fitting a regression line, we found that ISF correlates well with IDF, but that the average ISF values are 5.57 greater than the corresponding IDF. Similarly, ITF correlates well with IDF, but the ITF values are 10.45 greater than the corresponding IDF.

[Figure 4. Pair-wise IDF, ISF, and ITF comparisons. The fitted regression lines are: (A) ISF vs. IDF, y = 1.0662x + 5.5724, R2 = 0.9974; (B) ITF vs. IDF, y = 1.0721x + 10.452, R2 = 0.9972; (C) ITF vs. ISF, y = 1.0144x + 4.6937, R2 = 0.9996.]

It is little surprise that Figure 4C reveals a strong correlation between ITF and ISF, given the correlation between raw frequencies reported in section 4.1. Again, we see a high correlation between the ISF and ITF spaces, but the ITF values are on average 4.69 greater than the equivalent ISF value.
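The regression lines in Figure 4 can be recovered with an ordinary least-squares fit of one corpus weight against another. The sketch below fits per-term values (rather than the per-rounded-IDF averages plotted in Figure 4) and is illustrative only; it assumes idf and isf/itf dictionaries such as those produced by the earlier IDF/ISF/ITF sketch.

```python
import numpy as np

def fit_against_idf(idf, other):
    """Fit other = a * idf + b over the shared vocabulary and report R^2."""
    terms = sorted(set(idf) & set(other))
    x = np.array([idf[t] for t in terms])
    y = np.array([other[t] for t in terms])
    a, b = np.polyfit(x, y, 1)            # slope, intercept
    y_hat = a * x + b
    ss_res = np.sum((y - y_hat) ** 2)
    ss_tot = np.sum((y - y.mean()) ** 2)
    return a, b, 1 - ss_res / ss_tot

# On the paper's corpus this kind of fit yields roughly
# ISF ~ 1.07 * IDF + 5.57 and ITF ~ 1.07 * IDF + 10.45 (Figure 4).
```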
These findings suggest that simply substituting ISF or ITF for IDF would result in a weighting scheme where the corpus weights would dominate the weights assigned to query terms in the vector-based retrieval model. The variation appears to increase at higher IDF values. Table 2 provides example stemmed terms with varying frequencies and their corresponding IDF, ISF and ITF weights. The most frequent term, "the", appears in 100,717 documents, 12,771,805 sentences and 31,920,853 times. In contrast, the stemmed term "electrochem" appears only six times in the corpus, in six different documents and six different sentences. Note also the differences between the abstract and the full-text IDF (see section 4.4).

Word         IDF (Abs / Non-Abs / All)   ISF (Abs / Non-Abs / All)   ITF (Abs / Non-Abs / All)
the          1.014 / 1.004 / 1.001       1.342 / 1.364 / 1.373       4.604 / 9.404 / 5.164
chemist      11.074 / 5.957 / 5.734      13.635 / 12.820 / 12.553    22.838 / 17.592 / 17.615
synthesis    14.331 / 11.197 / 10.827    17.123 / 18.000 / 17.604    26.382 / 22.632 / 22.545
electrochem  17.501 / 15.251 / 15.036    20.293 / 22.561 / 22.394    29.552 / 26.965 / 27.507
Table 2. Examples of IDF, ISF and ITF for terms with increasing IDF.

4.4 Abstract vs full text comparison Although abstracts are often easier to obtain, the availability of full-text documents continues to increase. In our fourth experiment, we compared the language used in abstracts with the language used in the full text of a document. We compared the abstract and non-abstract terms in each of the three language models. Not all of the documents distinguished the abstract from the body. Of the 100,830 documents, 92,723 had abstracts and 97,455 had sections other than an abstract. We considered only those documents that differentiated between sections. Although the number of documents did not differ greatly, the vocabulary size did. There were 214,994 terms in the abstract vocabulary and 1,337,897 terms in the document body, suggesting a possible difference in the distribution of terms, the log(ni) component of IDF. Figure 5 suggests that the language used in an abstract differs from the language used in the body of a document. On average, the weights assigned to stemmed terms in the abstract were higher than the weights assigned to terms in the body of a document (space limitations preclude the inclusion of the ISF and ITF figures).

[Figure 5. Abstract and full-text IDF compared with global IDF.]

4.5 IDF sensitivity The stability of the corpus weighting scheme is particularly important in a dynamic environment such as the web. Without an understanding of how IDF behaves, we are unable to make a principled decision regarding how often a system should update the corpus weights. To measure the sensitivity of IDF we sampled at 10% intervals from the global corpus as outlined in section 3. Figure 6 compares the global IDF with the IDF from each of the samples. The sampled IDF values are almost indiscernible from the global IDF, which suggests that IDF values are very stable with respect to a random subset of articles. Only the 10% sample shows any visible difference from the global IDF values, and even then, the difference is only noticeable at higher global IDF values (greater than 17 in our corpus).
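A minimal sketch of the stability check described above: draw the same fraction of documents from each journal, recompute equation (1) on the sample, and repeat several times. It simplifies the paper's procedure slightly by averaging the sampled IDF values over runs rather than averaging N and ni first; docs_by_journal and the per-document term sets are assumed inputs, not part of the original system.

```python
import math
import random
from collections import Counter

def sample_idf(docs_by_journal, fraction, repeats=10, seed=0):
    """docs_by_journal: journal -> list of documents, each a set of stemmed terms.
    Returns term -> IDF averaged over `repeats` stratified samples."""
    rng = random.Random(seed)
    totals = Counter()
    for _ in range(repeats):
        sample = []
        for docs in docs_by_journal.values():
            k = max(1, int(len(docs) * fraction))   # same fraction per journal
            sample.extend(rng.sample(docs, k))
        df = Counter()
        for terms in sample:
            df.update(terms)
        N = len(sample)
        for t, n in df.items():
            totals[t] += math.log2(N) - math.log2(n) + 1
    return {t: v / repeats for t, v in totals.items()}

# Comparing sample_idf(docs_by_journal, 0.2) with the global IDF would
# reproduce the kind of stability plot discussed above.
```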
[Figure 6. Global IDF vs. random sample IDF, for samples of 10% to 90% of the total corpus.]

In addition to a random sample, we compared the global IDF with IDF values generated from each journal (in an on-line environment, it may be pertinent to partition pages into academic or corporate URLs, or to calculate term frequencies for web pages separately from blogs and wikis). In this case, N in equation (1) was the number of documents in the journal and ni was the distribution of terms within a journal. If the journal vocabularies were independent, the vocabulary size would be 4.1 million for unstemmed terms and 2.6 million for stemmed terms. Thus, the journals shared 48% and 52% of their vocabulary for unstemmed and stemmed terms respectively. Figure 7 shows the result of this comparison and suggests that the average IDF within a journal differed greatly from the global IDF value, particularly when the global IDF value exceeds five. This contrasts sharply with the random samples shown in Figure 6.

[Figure 7. Global IDF vs. local journal IDF, with one series per journal.]

At first glance, the journals with more articles appear to correlate more with the global IDF than journals with fewer articles. For example, JACSAT has 14,400 documents and is most correlated, while MPOHBP with 58 documents is least correlated. We plotted the number of articles in each journal against the mean squared error (figure not shown) and found that journals with fewer than 2,000 articles behave differently from journals with more than 2,000 articles; however, the relationship between the number of articles in the journal and the degree to which the language in that journal reflects the language used in the entire collection was not clear.

5 Conclusions We have compared the document, sentence, and term spaces along several dimensions. Results from our corpus of 100,830 full-text scientific articles suggest that the difference between these alternative spaces is both theoretical and practical in nature. As users continue to demand information systems that provide sub-document retrieval, the need to model language at the sub-document level becomes increasingly important. The key findings from this study are: (1) The raw document frequencies are considerably different from the sentence and term frequencies. The lack of a direct correlation between the document and sub-document raw spaces, in particular around the areas of important terms, suggests that it would be difficult to perform a linear transformation from the document to a sub-document space. In contrast, the raw term frequencies correlate well with the sentence frequencies. (2) IDF, ISF and ITF are highly correlated; however, simply replacing IDF with ISF or ITF would result in a weighting scheme where the corpus weight dominated the weights assigned to query and document terms. (3) IDF was surprisingly stable with respect to random samples at 10% of the total corpus. The average IDF values based on only a 20% random stratified sample correlated almost perfectly with IDF values that considered frequencies in the entire corpus.
This finding suggests that systems in a dynamic environment, such as the Web, need not update the global IDF values regularly (see (4)). (4) In contrast to the random sample, the journal based IDF samples did not correlate well to the global IDF. Further research is required to understand these factors that influence language usage. (5) All three models (IDF, ISF and ITF) suggest that the language used in abstracts is systematically different from the language used in the body of a full-text scientific document. Further research is required to understand how well the abstract tested corpus-weighting schemes will perform in a full-text environment. References Lada A. Adamic 2000 Zipf, Power-laws, and Pareto - a ranking tutorial. [Available from http://www.parc.xerox.com/istl/groups/iea/papers/r anking/ranking.html] Ricardo Baeza-Yates, and Berthier Ribeiro-Neto 1999 Modern Information Retrieval: Addison Wesley. Cancho, R. Ferrer 2005 The variation of Zipfs Law in human language. The European Physical Journal B 44 (2):249-57. Kenneth W Church and William A. Gale 1999 Inverse document frequency: a measure of deviations from Poisson. NLP using very large corpora, Kluwer Academic Publishers. Kenneth W Church.and William A. Gale 1995 Poisson mixtures. Natural Language Engineering, 1 (2):163-90. H. Guiter and M Arapov 1982. Editors Studies on Zipf's Law. Brochmeyer, Bochum. Jaap Kamps, Maarten De Rijke, and Borkur Sigurbjornsson 2005 The Importance of lenght normalization for XML retrieval. Information Retrieval 8:631-54. Le Quan Ha, E.I. Sicilia-Garcia, Ji Ming, and F.J. Smith 2002 Extension of Zipf's Law to words and phrases. 19th International Conference on Computational linguistics. Hans P. Luhn 1958 The automatic creation of literature abstracts IBM Journal of Research and Development 2 (1):155-64. Joseph P Pickett et al. 2000 The American Heritage® Dictionary of the English Language. Fourth edition. Edited by H. Mifflin. Martin F. Porter 1980 An Algorithm for Suffix Stripping. Program, 14 (3). 130-137. Stephen Robertson 2004 Understanding inverse document frequency: on theoretical arguments for IDF. Journal of Documentation 60 (5):503-520. Gerard Salton and Christopher Buckley 1988 Termweighting approaches in automatic text retrieval. Information Processing & Management, 24 (5):513-23. Claude E. Shannon 1948 A Mathematical Theory of Communication Bell System Technical Journal. 27 379–423 & 623–656. Karen Sparck Jones, Steve Walker, and Stephen Robertson 2000 A probabilistic model of information retrieval: development and comparative experiments Part 1. Information Processing & Management, 36:779-808. Karen Sparck Jones 1972 A statistical interpretation of term specificity and its application in retrieval. Journal of Documentation, 28:11-21. George Kingsley Zipf 1949 Human behaviour and the principle of least effort. An introduction to human ecology, 1st edn. Edited by Addison-Wesley. Cambridge, MA. 608
Proceedings of the 21st International Conference on Computational Linguistics and 44th Annual Meeting of the ACL, pages 609–616, Sydney, July 2006. c⃝2006 Association for Computational Linguistics Tree-to-String Alignment Template for Statistical Machine Translation Yang Liu , Qun Liu , and Shouxun Lin Institute of Computing Technology Chinese Academy of Sciences No.6 Kexueyuan South Road, Haidian District P. O. Box 2704, Beijing, 100080, China {yliu,liuqun,sxlin}@ict.ac.cn Abstract We present a novel translation model based on tree-to-string alignment template (TAT) which describes the alignment between a source parse tree and a target string. A TAT is capable of generating both terminals and non-terminals and performing reordering at both low and high levels. The model is linguistically syntaxbased because TATs are extracted automatically from word-aligned, source side parsed parallel texts. To translate a source sentence, we first employ a parser to produce a source parse tree and then apply TATs to transform the tree into a target string. Our experiments show that the TAT-based model significantly outperforms Pharaoh, a state-of-the-art decoder for phrase-based models. 1 Introduction Phrase-based translation models (Marcu and Wong, 2002; Koehn et al., 2003; Och and Ney, 2004), which go beyond the original IBM translation models (Brown et al., 1993) 1 by modeling translations of phrases rather than individual words, have been suggested to be the state-of-theart in statistical machine translation by empirical evaluations. In phrase-based models, phrases are usually strings of adjacent words instead of syntactic constituents, excelling at capturing local reordering and performing translations that are localized to 1The mathematical notation we use in this paper is taken from that paper: a source string f J 1 = f1, . . . , fj, . . . , fJ is to be translated into a target string eI 1 = e1, . . . , ei, . . . , eI. Here, I is the length of the target string, and J is the length of the source string. substrings that are common enough to be observed on training data. However, a key limitation of phrase-based models is that they fail to model reordering at the phrase level robustly. Typically, phrase reordering is modeled in terms of offset positions at the word level (Koehn, 2004; Och and Ney, 2004), making little or no direct use of syntactic information. Recent research on statistical machine translation has lead to the development of syntax-based models. Wu (1997) proposes Inversion Transduction Grammars, treating translation as a process of parallel parsing of the source and target language via a synchronized grammar. Alshawi et al. (2000) represent each production in parallel dependency tree as a finite transducer. Melamed (2004) formalizes machine translation problem as synchronous parsing based on multitext grammars. Graehl and Knight (2004) describe training and decoding algorithms for both generalized tree-to-tree and tree-to-string transducers. Chiang (2005) presents a hierarchical phrasebased model that uses hierarchical phrase pairs, which are formally productions of a synchronous context-free grammar. Ding and Palmer (2005) propose a syntax-based translation model based on a probabilistic synchronous dependency insert grammar, a version of synchronous grammars defined on dependency trees. All these approaches, though different in formalism, make use of synchronous grammars or tree-based transduction rules to model both source and target languages. 
Another class of approaches make use of syntactic information in the target language alone, treating the translation problem as a parsing problem. Yamada and Knight (2001) use a parser in the target language to train probabilities on a set of 609 operations that transform a target parse tree into a source string. Paying more attention to source language analysis, Quirk et al. (2005) employ a source language dependency parser, a target language word segmentation component, and an unsupervised word alignment component to learn treelet translations from parallel corpus. In this paper, we propose a statistical translation model based on tree-to-string alignment template which describes the alignment between a source parse tree and a target string. A TAT is capable of generating both terminals and non-terminals and performing reordering at both low and high levels. The model is linguistically syntax-based because TATs are extracted automatically from word-aligned, source side parsed parallel texts. To translate a source sentence, we first employ a parser to produce a source parse tree and then apply TATs to transform the tree into a target string. One advantage of our model is that TATs can be automatically acquired to capture linguistically motivated reordering at both low (word) and high (phrase, clause) levels. In addition, the training of TAT-based model is less computationally expensive than tree-to-tree models. Similarly to (Galley et al., 2004), the tree-to-string alignment templates discussed in this paper are actually transformation rules. The major difference is that we model the syntax of the source language instead of the target side. As a result, the task of our decoder is to find the best target string while Galley’s is to seek the most likely target tree. 2 Tree-to-String Alignment Template A tree-to-string alignment template z is a triple ⟨˜T, ˜S, ˜A⟩, which describes the alignment ˜A between a source parse tree ˜T = T(F J′ 1 ) 2 and a target string ˜S = EI′ 1 . A source string F J′ 1 , which is the sequence of leaf nodes of T(F J′ 1 ), consists of both terminals (source words) and nonterminals (phrasal categories). A target string EI′ 1 is also composed of both terminals (target words) and non-terminals (placeholders). An alignment ˜A is defined as a subset of the Cartesian product of source and target symbol positions: ˜A ⊆{(j, i) : j = 1, . . . , J′; i = 1, . . . , I′} (1) 2We use T(·) to denote a parse tree. To reduce notational overhead, we use T(z) to represent the parse tree in z. Similarly, S(z) denotes the string in z. Figure 1 shows three TATs automatically learned from training data. Note that when demonstrating a TAT graphically, we represent non-terminals in the target strings by blanks. NP NR 布什 NN 总统 LCP NP NR 美国 CC 和 NR LC 间 NP DNP NP DEG NP President Bush between United States and Figure 1: Examples of tree-to-string alignment templates obtained in training In the following, we formally describe how to introduce tree-to-string alignment templates into probabilistic dependencies to model Pr(eI 1|fJ 1 ) 3. In a first step, we introduce the hidden variable T(fJ 1 ) that denotes a parse tree of the source sentence fJ 1 : Pr(eI 1|fJ 1 ) = X T(fJ 1 ) Pr(eI 1, T(fJ 1 )|fJ 1 ) (2) = X T(fJ 1 ) Pr(T(fJ 1 )|fJ 1 )Pr(eI 1|T(fJ 1 ), fJ 1 ) (3) Next, another hidden variable D is introduced to detach the source parse tree T(fJ 1 ) into a sequence of K subtrees ˜T K 1 with a preorder transversal. We assume that each subtree ˜Tk produces a target string ˜Sk. 
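As a concrete illustration of the TAT triple ⟨T̃, S̃, Ã⟩ and of the alignment set defined in equation (1), the following minimal sketch encodes two of the templates of the kind shown in Figure 1. The class names and the use of None for a target-side placeholder are our own conventions, not the paper's implementation.

from dataclasses import dataclass, field

@dataclass
class ParseNode:
    """A source-side tree node; a leaf is either a terminal (source word)
    or a non-terminal phrasal category with no children."""
    label: str
    children: list = field(default_factory=list)

    def frontier(self):
        """Sequence of leaf labels (terminals and uncovered phrasal categories)."""
        if not self.children:
            return [self.label]
        leaves = []
        for child in self.children:
            leaves.extend(child.frontier())
        return leaves

@dataclass
class TAT:
    tree: ParseNode     # T~: source parse-tree fragment
    target: list        # S~: target symbols; None marks a non-terminal placeholder
    alignment: set      # A~: set of (source position, target position) pairs, 1-based

# A fully lexicalized TAT: (NP (NR 布什) (NN 总统)) -> "President Bush".
lexical = TAT(
    tree=ParseNode("NP", [ParseNode("NR", [ParseNode("布什")]),
                          ParseNode("NN", [ParseNode("总统")])]),
    target=["President", "Bush"],
    alignment={(1, 2), (2, 1)},   # 布什 <-> Bush, 总统 <-> President
)
# A TAT whose target contains a placeholder: (NP (NR 布什) (NN)) -> "X Bush".
partial = TAT(
    tree=ParseNode("NP", [ParseNode("NR", [ParseNode("布什")]), ParseNode("NN")]),
    target=[None, "Bush"],
    alignment={(1, 2), (2, 1)},
)
print(lexical.tree.frontier(), partial.tree.frontier())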
As a result, the sequence of subtrees ˜T K 1 produces a sequence of target strings ˜SK 1 , which can be combined serially to generate the target sentence eI 1. We assume that Pr(eI 1|D, T(fJ 1 ), fJ 1 ) ≡Pr( ˜SK 1 | ˜T K 1 ) because eI 1 is actually generated by the derivation of ˜SK 1 . Note that we omit an explicit dependence on the detachment D to avoid notational overhead. Pr(eI 1|T(fJ 1 ), fJ 1 ) = X D Pr(eI 1, D|T(fJ 1 ), fJ 1 ) (4) = X D Pr(D|T(fJ 1 ), fJ 1 )Pr(eI 1|D, T(fJ 1 ), fJ 1 ) (5) = X D Pr(D|T(fJ 1 ), fJ 1 )Pr( ˜SK 1 | ˜T K 1 ) (6) = X D Pr(D|T(fJ 1 ), fJ 1 ) K Y k=1 Pr( ˜Sk| ˜Tk) (7) 3The notational convention will be as follows. We use the symbol Pr(·) to denote general probability distribution with no specific assumptions. In contrast, for model-based probability distributions, we use generic symbol p(·). 610 NP DNP NP NR 中国 DEG 的 NP NN 经济 NN 发展 NP DNP NP DEG 的 NP NP NR 中国 NP NN NN NN 经济 NN 发展 中国的经济发展 parsing detachment production of China economic development combination economic development of China Figure 2: Graphic illustration for translation process To further decompose Pr( ˜S| ˜T), the tree-tostring alignment template, denoted by the variable z, is introduced as a hidden variable. Pr( ˜S| ˜T) = X z Pr( ˜S, z| ˜T) (8) = X z Pr(z| ˜T)Pr( ˜S|z, ˜T) (9) Therefore, the TAT-based translation model can be decomposed into four sub-models: 1. parse model: Pr(T(fJ 1 )|fJ 1 ) 2. detachment model: Pr(D|T(fJ 1 ), fJ 1 ) 3. TAT selection model: Pr(z| ˜T) 4. TAT application model: Pr( ˜S|z, ˜T) Figure 2 shows how TATs work to perform translation. First, the input source sentence is parsed. Next, the parse tree is detached into five subtrees with a preorder transversal. For each subtree, a TAT is selected and applied to produce a string. Finally, these strings are combined serially to generate the translation (we use X to denote the non-terminal): X1 ⇒X2 of X3 ⇒X2 of China ⇒X3 X4 of China ⇒economic X4 of China ⇒economic development of China Following Och and Ney (2002), we base our model on log-linear framework. Hence, all knowledge sources are described as feature functions that include the given source string fJ 1 , the target string eI 1, and hidden variables. The hidden variable T(fJ 1 ) is omitted because we usually make use of only single best output of a parser. As we assume that all detachment have the same probability, the hidden variable D is also omitted. As a result, the model we actually adopt for experiments is limited because the parse, detachment, and TAT application sub-models are simplified. Pr(eI 1, zK 1 |fJ 1 ) = exp[PM m=1 λmhm(eI 1, fJ 1 , zK 1 )] P e′I 1,z′K 1 exp[PM m=1 λmhm(e′I 1, fJ 1 , z′K 1 )] ˆeI 1 = argmax eI 1,zK 1 ½ M X m=1 λmhm(eI 1, fJ 1 , zK 1 ) ¾ For our experiments we use the following seven feature functions 4 that are analogous to default feature set of Pharaoh (Koehn, 2004). To simplify the notation, we omit the dependence on the hidden variables of the model. h1(eI 1, fJ 1 ) = log K Y k=1 N(z) · δ(T(z), ˜Tk) N(T(z)) h2(eI 1, fJ 1 ) = log K Y k=1 N(z) · δ(T(z), ˜Tk) N(S(z)) h3(eI 1, fJ 1 ) = log K Y k=1 lex(T(z)|S(z)) · δ(T(z), ˜Tk) h4(eI 1, fJ 1 ) = log K Y k=1 lex(S(z)|T(z)) · δ(T(z), ˜Tk) h5(eI 1, fJ 1 ) = K h6(eI 1, fJ 1 ) = log IY i=1 p(ei|ei−2, ei−1) h7(eI 1, fJ 1 ) = I 4When computing lexical weighting features (Koehn et al., 2003), we take only terminals into account. If there are no terminals, we set the feature value to 1. We use lex(·) to denote lexical weighting. 
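The decision rule above reduces to an argmax over the weighted feature sum, since the normalization term is constant for a given source sentence. A minimal sketch of this scoring step follows; the feature values and weights are purely illustrative (the weights actually learned are reported later in Table 3), and h1-h4 stand for the accumulated translation and lexical-weighting log-probabilities, h5 for the number of TATs used, h6 for the trigram language-model log-probability, and h7 for the target length.

def loglinear_score(features, weights):
    """Weighted feature sum: sum_m lambda_m * h_m(e, f, z)."""
    return sum(weights[name] * value for name, value in features.items())

def best_candidate(candidates, weights):
    """argmax over (translation, feature vector) candidates, as in the decision rule above."""
    return max(candidates, key=lambda c: loglinear_score(c["features"], weights))

weights = {"h1": 0.1, "h2": 0.1, "h3": 0.15, "h4": 0.15, "h5": 0.05, "h6": 0.3, "h7": 0.15}
candidates = [
    {"translation": "economic development of China",
     "features": {"h1": -2.1, "h2": -2.5, "h3": -3.0, "h4": -2.8, "h5": 4, "h6": -9.2, "h7": 4}},
    {"translation": "the economy grow of China",
     "features": {"h1": -1.9, "h2": -2.4, "h3": -3.6, "h4": -3.3, "h5": 5, "h6": -14.0, "h7": 5}},
]
print(best_candidate(candidates, weights)["translation"])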
We denote the number of TATs used for decoding by K and the length of target string by I. 611 Tree String Alignment ( NR 布什) Bush 1:1 ( NN 总统) President 1:1 ( VV 发表) made 1:1 ( NN 演讲) speech 1:1 ( NP ( NR ) ( NN ) ) X1 | X2 1:2 2:1 ( NP ( NR 布什) ( NN ) ) X | Bush 1:2 2:1 ( NP ( NR ) ( NN 总统) ) President | X 1:2 2:1 ( NP ( NR 布什) ( NN 总统) ) President | Bush 1:2 2:1 ( VP ( VV ) ( NN ) ) X1 | a | X2 1:1 2:3 ( VP ( VV 发表) ( NN ) ) made | a | X 1:1 2:3 ( VP ( VV ) ( NN 演讲) ) X | a | speech 1:1 2:3 ( VP ( VV 发表) ( NN 演讲) ) made | a | speech 1:1 2:3 ( IP ( NP ) ( VP ) ) X1 | X2 1:1 2:2 Table 1: Examples of TATs extracted from the TSA in Figure 3 with h = 2 and c = 2 3 Training To extract tree-to-string alignment templates from a word-aligned, source side parsed sentence pair ⟨T(fJ 1 ), eI 1, A⟩, we need first identify TSAs (TreeString-Alignment) using similar criterion as suggested in (Och and Ney, 2004). A TSA is a triple ⟨T(fj2 j1 ), ei2 i1, ¯A)⟩that is in accordance with the following constraints: 1. ∀(i, j) ∈A : i1 ≤i ≤i2 ↔j1 ≤j ≤j2 2. T(fj2 j1 ) is a subtree of T(fJ 1 ) Given a TSA ⟨T(fj2 j1 ), ei2 i1, ¯A⟩, a triple ⟨T(fj4 j3 ), ei4 i3, ˆA⟩is its sub TSA if and only if: 1. T(fj4 j3 ), ei4 i3, ˆA⟩is a TSA 2. T(fj4 j3 ) is rooted at the direct descendant of the root node of T(fj1 j2 ) 3. i1 ≤i3 ≤i4 ≤i2 4. ∀(i, j) ∈¯A : i3 ≤i ≤i4 ↔j3 ≤j ≤j4 Basically, we extract TATs from a TSA ⟨T(fj2 j1 ), ei2 i1, ¯A⟩using the following two rules: 1. If T(fj2 j1 ) contains only one node, then ⟨T(fj2 j1 ), ei2 i1, ¯A⟩is a TAT 2. If the height of T(fj2 j1 ) is greater than one, then build TATs using those extracted from sub TSAs of ⟨T(fj2 j1 ), ei2 i1, ¯A⟩. IP NP NR 布什 NN 总统 VP VV 发表 NN 演讲 President Bush made a speech Figure 3: An example of TSA Usually, we can extract a very large amount of TATs from training data using the above rules, making both training and decoding very slow. Therefore, we impose three restrictions to reduce the magnitude of extracted TATs: 1. A third constraint is added to the definition of TSA: ∃j′, j′′ : j1 ≤j′ ≤j2 and j1 ≤j′′ ≤j2 and (i1, j′) ∈¯A and (i2, j′′) ∈¯A This constraint requires that both the first and last symbols in the target string must be aligned to some source symbols. 2. The height of T(z) is limited to no greater than h. 3. The number of direct descendants of a node of T(z) is limited to no greater than c. Table 1 shows the TATs extracted from the TSA in Figure 3 with h = 2 and c = 2. As we restrict that T(fj2 j1 ) must be a subtree of T(fJ 1 ), TATs may be treated as syntactic hierar612 chical phrase pairs (Chiang, 2005) with tree structure on the source side. At the same time, we face the risk of losing some useful non-syntactic phrase pairs. For example, the phrase pair 布什总统发表←→President Bush made can never be obtained in form of TAT from the TSA in Figure 3 because there is no subtree for that source string. 4 Decoding We approach the decoding problem as a bottom-up beam search. To translate a source sentence, we employ a parser to produce a parse tree. Moving bottomup through the source parse tree, we compute a list of candidate translations for the input subtree rooted at each node with a postorder transversal. Candidate translations of subtrees are placed in stacks. Figure 4 shows the organization of candidate translation stacks. NP DNP NP NR 中国 DEG 的 NP NN 经济 NN 发展 8 4 7 2 3 5 6 1 ... 1 ... 2 ... 3 ... 4 ... 5 ... 6 ... 7 ... 
8 Figure 4: Candidate translations of subtrees are placed in stacks according to the root index set by postorder transversal A candidate translation contains the following information: 1. the partial translation 2. the accumulated feature values 3. the accumulated probability A TAT z is usable to a parse tree T if and only if T(z) is rooted at the root of T and covers part of nodes of T. Given a parse tree T, we find all usable TATs. Given a usable TAT z, if T(z) is equal to T, then S(z) is a candidate translation of T. If T(z) covers only a portion of T, we have to compute a list of candidate translations for T by replacing the non-terminals of S(z) with candidate translations of the corresponding uncovered subtrees. NP DNP NP DEG 的 NP 8 4 7 2 3 of ... 1 ... 2 ... 3 ... 4 ... 5 ... 6 ... 7 ... 8 Figure 5: Candidate translation construction For example, when computing the candidate translations for the tree rooted at node 8, the TAT used in Figure 5 covers only a portion of the parse tree in Figure 4. There are two uncovered subtrees that are rooted at node 2 and node 7 respectively. Hence, we replace the third symbol with the candidate translations in stack 2 and the first symbol with the candidate translations in stack 7. At the same time, the feature values and probabilities are also accumulated for the new candidate translations. To speed up the decoder, we limit the search space by reducing the number of TATs used for each input node. There are two ways to limit the TAT table size: by a fixed limit (tatTable-limit) of how many TATs are retrieved for each input node, and by a probability threshold (tatTable-threshold) that specify that the TAT probability has to be above some value. On the other hand, instead of keeping the full list of candidates for a given node, we keep a top-scoring subset of the candidates. This can also be done by a fixed limit (stack-limit) or a threshold (stack-threshold). To perform recombination, we combine candidate translations that share the same leading and trailing bigrams in each stack. 5 Experiments Our experiments were on Chinese-to-English translation. The training corpus consists of 31, 149 sentence pairs with 843, 256 Chinese words and 613 System Features BLEU4 d + φ(e|f) 0.0573 ± 0.0033 Pharaoh d + lm + φ(e|f) + wp 0.2019 ± 0.0083 d + lm + φ(f|e) + lex(f|e) + φ(e|f) + lex(e|f) + pp + wp 0.2089 ± 0.0089 h1 0.1639 ± 0.0077 Lynx h1 + h6 + h7 0.2100 ± 0.0089 h1 + h2 + h3 + h4 + h5 + h6 + h7 0.2178 ± 0.0080 Table 2: Comparison of Pharaoh and Lynx with different feature settings on the test corpus 949, 583 English words. For the language model, we used SRI Language Modeling Toolkit (Stolcke, 2002) to train a trigram model with modified Kneser-Ney smoothing (Chen and Goodman, 1998) on the 31, 149 English sentences. We selected 571 short sentences from the 2002 NIST MT Evaluation test set as our development corpus, and used the 2005 NIST MT Evaluation test set as our test corpus. We evaluated the translation quality using the BLEU metric (Papineni et al., 2002), as calculated by mteval-v11b.pl with its default setting except that we used case-sensitive matching of n-grams. 
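Returning to the decoding procedure of Section 4, the following sketch illustrates only the per-node stack pruning and the recombination of candidates that share the same leading and trailing bigrams; it does not cover TAT selection or candidate construction. Whether stack-threshold is applied relative to the best candidate or as an absolute cut-off is not stated in this excerpt, so the sketch assumes a relative threshold, and all candidate scores are invented.

def prune_stack(candidates, stack_limit=100, stack_threshold=1e-5):
    """Keep a top-scoring subset of candidate translations for one tree node.
    Each candidate is (partial_translation_words, accumulated_probability)."""
    # Recombination: among candidates sharing the same leading and trailing
    # bigrams, keep only the highest-scoring one.
    best_by_signature = {}
    for words, prob in candidates:
        signature = (tuple(words[:2]), tuple(words[-2:]))
        if signature not in best_by_signature or prob > best_by_signature[signature][1]:
            best_by_signature[signature] = (words, prob)
    survivors = sorted(best_by_signature.values(), key=lambda c: c[1], reverse=True)
    # Threshold pruning relative to the best candidate, then histogram pruning.
    if survivors:
        best_prob = survivors[0][1]
        survivors = [c for c in survivors if c[1] >= best_prob * stack_threshold]
    return survivors[:stack_limit]

# Hypothetical candidates for the node covering "中国 的 经济 发展".
candidates = [
    (["economic", "development", "of", "China"], 0.012),
    (["economic", "growth", "of", "China"], 0.008),
    (["the", "economic", "development", "of", "China"], 0.002),
    (["economic", "development", "in", "of", "China"], 0.0000001),
]
for words, prob in prune_stack(candidates):
    print(" ".join(words), prob)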
5.1 Pharaoh The baseline system we used for comparison was Pharaoh (Koehn et al., 2003; Koehn, 2004), a freely available decoder for phrase-based translation models: p(e|f) = pφ(f|e)λφ × pLM(e)λLM × pD(e, f)λD × ωlength(e)λW(e) (10) We ran GIZA++ (Och and Ney, 2000) on the training corpus in both directions using its default setting, and then applied the refinement rule “diagand” described in (Koehn et al., 2003) to obtain a single many-to-many word alignment for each sentence pair. After that, we used some heuristics, which including rule-based translation of numbers, dates, and person names, to further improve the alignment accuracy. Given the word-aligned bilingual corpus, we obtained 1, 231, 959 bilingual phrases (221, 453 used on test corpus) using the training toolkits publicly released by Philipp Koehn with its default setting. To perform minimum error rate training (Och, 2003) to tune the feature weights to maximize the system’s BLEU score on development set, we used optimizeV5IBMBLEU.m (Venugopal and Vogel, 2005). We used default pruning settings for Pharaoh except that we set the distortion limit to 4. 5.2 Lynx On the same word-aligned training data, it took us about one month to parse all the 31, 149 Chinese sentences using a Chinese parser written by Deyi Xiong (Xiong et al., 2005). The parser was trained on articles 1 −270 of Penn Chinese Treebank version 1.0 and achieved 79.4% (F1 measure) as well as a 4.4% relative decrease in error rate. Then, we performed TAT extraction described in section 3 with h = 3 and c = 5 and obtained 350, 575 TATs (88, 066 used on test corpus). To run our decoder Lynx on development and test corpus, we set tatTable-limit = 20, tatTable-threshold = 0, stack-limit = 100, and stack-threshold = 0.00001. 5.3 Results Table 2 shows the results on test set using Pharaoh and Lynx with different feature settings. The 95% confidence intervals were computed using Zhang’s significance tester (Zhang et al., 2004). We modified it to conform to NIST’s current definition of the BLEU brevity penalty. For Pharaoh, eight features were used: distortion model d, a trigram language model lm, phrase translation probabilities φ(f|e) and φ(e|f), lexical weightings lex(f|e) and lex(e|f), phrase penalty pp, and word penalty wp. For Lynx, seven features described in section 2 were used. We find that Lynx outperforms Pharaoh with all feature settings. With full features, Lynx achieves an absolute improvement of 0.006 over Pharaoh (3.1% relative). This difference is statistically significant (p < 0.01). Note that Lynx made use of only 88, 066 TATs on test corpus while 221, 453 bilingual phrases were used for Pharaoh. The feature weights obtained by minimum er614 Features System d lm φ(f|e) lex(f|e) φ(e|f) lex(e|f) pp wp Pharaoh 0.0476 0.1386 0.0611 0.0459 0.1723 0.0223 0.3122 -0.2000 Lynx 0.3735 0.0061 0.1081 0.1656 0.0022 0.0824 0.2620 Table 3: Feature weights obtained by minimum error rate training on the development corpus BLEU4 tat 0.2178 ± 0.0080 tat + bp 0.2240 ± 0.0083 Table 4: Effect of using bilingual phrases for Lynx ror rate training for both Pharaoh and Lynx are shown in Table 3. We find that φ(f|e) (i.e. h2) is not a helpful feature for Lynx. The reason is that we use only a single non-terminal symbol instead of assigning phrasal categories to the target string. In addition, we allow the target string consists of only non-terminals, making translation decisions not always based on lexical evidence. 
5.4 Using bilingual phrases It is interesting to use bilingual phrases to strengthen the TAT-based model. As we mentioned before, some useful non-syntactic phrase pairs can never be obtained in form of TAT because we restrict that there must be a corresponding parse tree for the source phrase. Moreover, it takes more time to obtain TATs than bilingual phrases on the same training data because parsing is usually very time-consuming. Given an input subtree T(F j2 j1 ), if F j2 j1 is a string of terminals, we find all bilingual phrases that the source phrase is equal to F j2 j1 . Then we build a TAT for each bilingual phrase ⟨fJ′ 1 , eI′ 1 , ˆA⟩: the tree of the TAT is T(F j2 j1 ), the string is eI′ 1 , and the alignment is ˆA. If a TAT built from a bilingual phrase is the same with a TAT in the TAT table, we prefer to the greater translation probabilities. Table 4 shows the effect of using bilingual phrases for Lynx. Note that these bilingual phrases are the same with those used for Pharaoh. 5.5 Results on large data We also conducted an experiment on large data to further examine our design philosophy. The training corpus contains 2.6 million sentence pairs. We used all the data to extract bilingual phrases and a portion of 800K pairs to obtain TATs. Two trigram language models were used for Lynx. One was trained on the 2.6 million English sentences and another was trained on the first 1/3 of the Xinhua portion of Gigaword corpus. We also included rule-based translations of named entities, dates, and numbers. By making use of these data, Lynx achieves a BLEU score of 0.2830 on the 2005 NIST Chinese-to-English MT evaluation test set, which is a very promising result for linguistically syntax-based models. 6 Conclusion In this paper, we introduce tree-to-string alignment templates, which can be automatically learned from syntactically-annotated training data. The TAT-based translation model improves translation quality significantly compared with a stateof-the-art phrase-based decoder. Treated as special TATs without tree on the source side, bilingual phrases can be utilized for the TAT-based model to get further improvement. It should be emphasized that the restrictions we impose on TAT extraction limit the expressive power of TAT. Preliminary experiments reveal that removing these restrictions does improve translation quality, but leads to large memory requirements. We feel that both parsing and word alignment qualities have important effects on the TATbased model. We will retrain the Chinese parser on Penn Chinese Treebank version 5.0 and try to improve word alignment quality using log-linear models as suggested in (Liu et al., 2005). Acknowledgement This work is supported by National High Technology Research and Development Program contract “Generally Technical Research and Basic Database Establishment of Chinese Platform”(Subject No. 2004AA114010). We are grateful to Deyi Xiong for providing the parser and Haitao Mi for making the parser more efficient and robust. Thanks to Dr. Yajuan Lv for many helpful comments on an earlier draft of this paper. 615 References Hiyan Alshawi, Srinivas Bangalore, and Shona Douglas. 2000. Learning dependency translation models as collections of finite-state head transducers. Computational Linguistics, 26(1):45-60. Peter F. Brown, Stephen A. Della Pietra, Vincent J. Della Pietra, and Robert L. Mercer. 1993. The mathematics of statistical machine translation: Parameter estimation. Computational Linguistics, 19(2):263-311. Stanley F. Chen and Joshua Goodman. 
1998. Am empirical study of smoothing techniques for language modeling. Technical Report TR-10-98, Harvard University Center for Research in Computing Technology. David Chiang. 2005. A hierarchical phrase-based model for statistical machine translation. In Proceedings of 43rd Annual Meeting of the ACL, pages 263-270. Yuan Ding and Martha Palmer. 2005. Machine translation using probabilistic synchronous dependency insert grammars. In Proceedings of 43rd Annual Meeting of the ACL, pages 541-548. Michel Galley, Mark Hopkins, Kevin Knight, and Daniel Marcu. 2004. What’s in a translation rule? In Proceedings of NAACL-HLT 2004, pages 273280. Jonathan Graehl and Kevin Knight. 2004. Training tree transducers. In Proceedings of NAACL-HLT 2004, pages 105-112. Philipp Koehn, Franz J. Och, and Daniel Marcu. 2003. Statistical phrase-based translation. In Proceedings of HLT-NAACL 2003, pages 127-133. Philipp Koehn. 2004. Pharaoh: a beam search decoder for phrase-based statistical machine trnaslation models. In Proceedings of the Sixth Conference of the Association for Machine Translation in the Americas, pages 115-124. Yang Liu, Qun Liu, and Shouxun Lin. 2005. Loglinear models for word alignment. In Proceedings of 43rd Annual Meeting of the ACL, pages 459-466. Daniel Marcu and William Wong. 2002. A phrasebased, joint probability model for statistical machine translation. In Proceedings of the 2002 Conference on Empirical Methods in Natural Language Processing (EMNLP), pages 133-139. Dan Melamed. 2004. Statistical machine translation by parsing. In Proceedings of 42nd Annual Meeting of the ACL, pages 653-660. Franz J. Och and Hermann Ney. 2000. Improved statistical alignment models. In Proceedings of 38th Annual Meeting of the ACL, pages 440-447. Franz J. Och and Hermann Ney. 2002. Discriminative training and maximum entropy models for statistical machine translation. In Proceedings of 40th Annual Meeting of the ACL, pages 295-302. Franz J. Och and Hermann Ney. 2004. The alignment template approach to statistical machine translation. Computational Linguistics, 30(4):417-449. Franz J. Och. 2003. Minimum error rate training in statistical machine translation. In Proceedings of 41st Annual Meeting of the ACL, pages 160-167. Kishore Papineni, Salim Roukos, Todd Ward, and WeiJing Zhu. 2002. BLEU: a method for automatic evaluation of machine translation. In Proceedings of 40th Annual Meeting of the ACL, pages 311-318. Chris Quirk, Arul Menezes, and Colin Cherry. 2005. Dependency treelet translation: Syntactically informed phrasal SMT. In Proceedings of 43rd Annual Meeting of the ACL, pages 271-279. Andreas Stolcke. 2002. SRILM - an extensible language modeling toolkit. In Proceedings of International Conference on Spoken Language Processing, volume 2, pages 901-904. Ashish Venugopal and Stephan Vogel. 2005. Considerations in maximum mutual information and minimum classification error training for statistical machine translation. In Proceedings of the Tenth Conference of the European Association for Machine Translation (EAMT-05). Dekai Wu. 1997. Stochastic inversion transduction grammars and bilingual parsing of parallel corpora. Computational Linguistics, 23(3):377-403. Deyi Xiong, Shuanglong Li, Qun Liu, Shouxun Lin, and Yueliang Qian. 2005. Parsing the Penn Chinese treebank with semantic knowledge. In Proceedings of IJCNLP 2005, pages 70-81. Kenji Yamada and Kevin Knight. 2001. A syntaxbased statistical translation model. In Proceedings of 39th Annual Meeting of the ACL, pages 523-530. 
Ying Zhang, Stephan Vogel, and Alex Waibel. 2004. Interpreting BLEU/NIST scores: How much improvement do we need to have a better system? In Proceedings of the Fourth International Conference on Language Resources and Evaluation (LREC), pages 2051-2054. 616
Proceedings of the 21st International Conference on Computational Linguistics and 44th Annual Meeting of the ACL, pages 617–624, Sydney, July 2006. c⃝2006 Association for Computational Linguistics Incorporating speech recognition confidence into discriminative named entity recognition of speech data Katsuhito Sudoh Hajime Tsukada Hideki Isozaki NTT Communication Science Laboratories Nippon Telegraph and Telephone Corporation 2-4 Hikaridai, Seika-cho, Keihanna Science City, Kyoto 619-0237, Japan {sudoh,tsukada,isozaki}@cslab.kecl.ntt.co.jp Abstract This paper proposes a named entity recognition (NER) method for speech recognition results that uses confidence on automatic speech recognition (ASR) as a feature. The ASR confidence feature indicates whether each word has been correctly recognized. The NER model is trained using ASR results with named entity (NE) labels as well as the corresponding transcriptions with NE labels. In experiments using support vector machines (SVMs) and speech data from Japanese newspaper articles, the proposed method outperformed a simple application of textbased NER to ASR results in NER Fmeasure by improving precision. These results show that the proposed method is effective in NER for noisy inputs. 1 Introduction As network bandwidths and storage capacities continue to grow, a large volume of speech data including broadcast news and PodCasts is becoming available. These data are important information sources as well as such text data as newspaper articles and WWW pages. Speech data as information sources are attracting a great deal of interest, such as DARPA’s global autonomous language exploitation (GALE) program. We also aim to use them for information extraction (IE), question answering, and indexing. Named entity recognition (NER) is a key technique for IE and other natural language processing tasks. Named entities (NEs) are the proper expressions for things such as peoples’ names, locations’ names, and dates, and NER identifies those expressions and their categories. Unlike text data, speech data introduce automatic speech recognition (ASR) error problems to NER. Although improvements to ASR are needed, developing a robust NER for noisy word sequences is also important. In this paper, we focus on the NER of ASR results and discuss the suppression of ASR error problems in NER. Most previous studies of the NER of speech data used generative models such as hidden Markov models (HMMs) (Miller et al., 1999; Palmer and Ostendorf, 2001; Horlock and King, 2003b; B´echet et al., 2004; Favre et al., 2005). On the other hand, in text-based NER, better results are obtained using discriminative schemes such as maximum entropy (ME) models (Borthwick, 1999; Chieu and Ng, 2003), support vector machines (SVMs) (Isozaki and Kazawa, 2002), and conditional random fields (CRFs) (McCallum and Li, 2003). Zhai et al. (2004) applied a text-level ME-based NER to ASR results. These models have an advantage in utilizing various features, such as part-of-speech information, character types, and surrounding words, which may be overlapped, while overlapping features are hard to use in HMM-based models. To deal with ASR error problems in NER, Palmer and Ostendorf (2001) proposed an HMMbased NER method that explicitly models ASR errors using ASR confidence and rejects erroneous word hypotheses in the ASR results. Such rejection is especially effective when ASR accuracy is relatively low because many misrecognized words may be extracted as NEs, which would decrease NER precision. 
Motivated by these issues, we extended their approach to discriminative models and propose an NER method that deals with ASR errors as fea617 tures. We use NE-labeled ASR results for training to incorporate the features into the NER model as well as the corresponding transcriptions with NE labels. In testing, ASR errors are identified by ASR confidence scores and are used for the NER. In experiments using SVM-based NER and speech data from Japanese newspaper articles, the proposed method increased the NER F-measure, especially in precision, compared to simply applying text-based NER to the ASR results. 2 SVM-based NER NER is a kind of chunking problem that can be solved by classifying words into NE classes that consist of name categories and such chunking states as PERSON-BEGIN (the beginning of a person’s name) and LOCATION-MIDDLE (the middle of a location’s name). Many discriminative methods have been applied to NER, such as decision trees (Sekine et al., 1998), ME models (Borthwick, 1999; Chieu and Ng, 2003), and CRFs (McCallum and Li, 2003). In this paper, we employ an SVM-based NER method in the following way that showed good NER performance in Japanese (Isozaki and Kazawa, 2002). We define three features for each word: the word itself, its part-of-speech tag, and its character type. We also use those features for the two preceding and succeeding words for context dependence and use 15 features when classifying a word. Each feature is represented by a binary value (1 or 0), for example, “whether the previous word is Japan,” and each word is classified based on a long binary vector where only 15 elements are 1. We have two problems when solving NER using SVMs. One, SVMs can solve only a two-class problem. We reduce multi-class problems of NER to a group of two-class problems using the one-against-all approach, where each SVM is trained to distinguish members of a class (e.g., PERSON-BEGIN) from non-members (PERSON-MIDDLE, MONEY-BEGIN, ... ). In this approach, two or more classes may be assigned to a word or no class may be assigned to a word. To avoid these situations, we choose class c that has the largest SVM output score gc(x) among all others. The other is that the NE label sequence must be consistent; for example, ARTIFACT-END must follow ARTIFACT-BEGIN or Speech data NE-labeled transcriptions Transcriptions ASR results ASR-based training data Text-based training data Manual transcription ASR NE labeling Setting ASR confidence feature to 1 Alignment & identifying ASR errors and NEs Figure 1: Procedure for preparing training data. ARTIFACT-MIDDLE. We use a Viterbi search to obtain the best and consistent NE label sequence after classifying all words in a sentence, based on probability-like values obtained by applying sigmoid function sn(x) = 1/(1 + exp(−βnx)) to SVM output score gc(x). 3 Proposed method 3.1 Incorporating ASR confidence into NER In the NER of ASR results, ASR errors cause NEs to be missed and erroneous NEs to be recognized. If one or more words constituting an NE are misrecognized, we cannot recognize the correct NE. Even if all words constituting an NE are correctly recognized, we may not recognize the correct NE due to ASR errors on context words. To avoid this problem, we model ASR errors using additional features that indicate whether each word is correctly recognized. Our NER model is trained using ASR results with a feature, where feature values are obtained through alignment to the corresponding transcriptions. 
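A minimal sketch of this alignment-based labeling follows: ASR words that align to the same word in the reference transcription receive ASR confidence feature value 1, and all other words receive 0 (cf. the example in Tables 1 and 2 below). difflib is used as a stand-in aligner, since the alignment procedure is not specified in detail, and the labeling of NEs that survive recognition intact is omitted.

import difflib

def asr_confidence_labels(asr_words, ref_words):
    """1 for each ASR word aligned to the identical reference word (correctly
    recognized), 0 otherwise."""
    labels = [0] * len(asr_words)
    matcher = difflib.SequenceMatcher(a=asr_words, b=ref_words, autojunk=False)
    for block in matcher.get_matching_blocks():
        for k in range(block.size):
            labels[block.a + k] = 1
    return labels

# "Murayama Tomiichi shusho wa nento" recognized as "Murayama shi ni ichi shiyo wa nento".
ref = ["Murayama", "Tomiichi", "shusho", "wa", "nento"]
asr = ["Murayama", "shi", "ni", "ichi", "shiyo", "wa", "nento"]
print(list(zip(asr, asr_confidence_labels(asr, ref))))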
In testing, we estimate feature values using ASR confidence scores. In this paper, this feature is called the ASR confidence feature. Note that we only aim to identify NEs that are correctly recognized by ASR, and NEs containing ASR errors are not regarded as NEs. Utilizing erroneous NEs is a more difficult problem that is beyond the scope of this paper. 3.2 Training NER model Figure 1 illustrates the procedure for preparing training data from speech data. First, the speech 618 data are manually transcribed and automatically recognized by the ASR. Second, we label NEs in the transcriptions and then set the ASR confidence feature values to 1 because the words in the transcriptions are regarded as correctly recognized words. Finally, we align the ASR results to the transcriptions to identify ASR errors for the ASR confidence feature values and to label correctly recognized NEs in the ASR results. Note that we label the NEs in the ASR results that exist in the same positions as the transcriptions. If a part of an NE is misrecognized, the NE is ignored, and all words for the NE are labeled as non-NE words (OTHER). Examples of text-based and ASR-based training data are shown in Tables 1 and 2. Since the name Murayama Tomiichi in Table 1 is misrecognized in ASR, the correctly recognized word Murayama is also labeled OTHER in Table 2. Another approach can be considered, where misrecognized words are replaced by word error symbols such as those shown in Table 3. In this case, those words are rejected, and those part-of-speech and character type features are not used in NER. 3.3 ASR confidence scoring for using the proposed NER model ASR confidence scoring is an important technique in many ASR applications, and many methods have been proposed including using word posterior probabilities on word graphs (Wessel et al., 2001), integrating several confidence measures using neural networks (Schaaf and Kemp, 1997), using linear discriminant analysis (Kamppari and Hazen, 2000), and using SVMs (Zhang and Rudnicky, 2001). Word posterior probability is a commonly used and effective ASR confidence measure. Word posterior probability p([w; τ, t]|X) of word w at time interval [τ, t] for speech signal X is calculated as follows (Wessel et al., 2001): p([w; τ, t]|X) = X W ∈W [w;τ,t] n p(X|W ) (p(W ))βoα p(X) , (1) where W is a sentence hypothesis, W [w; τ, t] is the set of sentence hypotheses that include w in [τ, t], p(X|W ) is a acoustic model score, p(W ) is a language model score, α is a scaling parameter (α<1), and β is a language model weight. α is used for scaling the large dynamic range of Word Confidence NE label Murayama 1 PERSON-BEGIN Tomiichi 1 PERSON-END shusho 1 OTHER wa 1 OTHER nento 1 DATE-SINGLE Table 1: An example of text-based training data. Word Confidence NE label Murayama 1 OTHER shi 0 OTHER ni 0 OTHER ichi 0 OTHER shiyo 0 OTHER wa 1 OTHER nento 1 DATE-SINGLE Table 2: An example of ASR-based training data. Word Confidence NE label Murayama 1 OTHER (error) 0 OTHER (error) 0 OTHER (error) 0 OTHER (error) 0 OTHER wa 1 OTHER nento 1 DATE-SINGLE Table 3: An example of ASR-based training data with word error symbols. p(X|W )(p(W ))β to avoid a few of the top hypotheses dominating posterior probabilities. p(X) is approximated by the sum over all sentence hypotheses and is denoted as p(X) = X W n p(X|W ) (p(W ))βoα . (2) p([w; τ, t]|X) can be efficiently calculated using a forward-backward algorithm. 
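The following sketch approximates equations (1) and (2) over an N-best list rather than a word graph, using a log-sum-exp to keep the scaled scores numerically stable. The hypotheses, time stamps and scores are invented, and the overlap test is one plausible reading of "w in [τ, t]"; the paper itself aggregates over the lattice with a forward-backward pass.

import math

def _logsumexp(xs):
    m = max(xs)
    return m + math.log(sum(math.exp(x - m) for x in xs))

def word_posteriors(nbest, alpha=0.01, beta=15.0):
    """Approximate word posterior probabilities from an N-best list.
    Each hypothesis is (words, log_acoustic, log_lm), where words is a list of
    (word, start_time, end_time) triples; the first hypothesis is taken as 1-best."""
    # Scaled joint log score of each hypothesis: alpha * (log p(X|W) + beta * log p(W)).
    hyp_scores = [alpha * (log_ac + beta * log_lm) for _, log_ac, log_lm in nbest]
    log_px = _logsumexp(hyp_scores)     # log of the denominator p(X)

    def overlaps(words, word, tau, t):
        return any(w == word and start < t and end > tau for w, start, end in words)

    posteriors = {}
    for word, tau, t in nbest[0][0]:    # words of the 1-best hypothesis
        supporting = [score for (words, _, _), score in zip(nbest, hyp_scores)
                      if overlaps(words, word, tau, t)]
        posteriors[(word, tau, t)] = math.exp(_logsumexp(supporting) - log_px)
    return posteriors

# Two hypothetical hypotheses over the same utterance (times in seconds).
nbest = [
    ([("Murayama", 0.0, 0.6), ("shusho", 0.6, 1.1), ("wa", 1.1, 1.2)], -120.0, -8.0),
    ([("Murayama", 0.0, 0.6), ("shusho", 0.6, 1.0), ("ga", 1.0, 1.2)], -123.0, -9.5),
]
for key, p in word_posteriors(nbest).items():
    print(key, round(p, 3))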
In this paper, we use SVMs for ASR confidence scoring to achieve a better performance than when using word posterior probabilities as ASR confidence scores. SVMs are trained using ASR results, whose errors are known through their alignment to their reference transcriptions. The following features are used for confidence scoring: the word itself, its part-of-speech tag, and its word posterior probability; those of the two preceding and succeeding words are also used. The word itself and its part-of-speech are also represented 619 by a set of binary values, the same as with an SVM-based NER. Since all other features are binary, we reduce real-valued word posterior probability p to ten binary features for simplicity: (if 0 < p ≤0.1, if 0.1 < p ≤0.2, ... , and if 0.9 < p ≤1.0). To normalize SVMs’ output scores for ASR confidence, we use a sigmoid function sw(x) = 1/(1 + exp(−βwx)). We use these normalized scores as ASR confidence scores. Although a large variety of features have been proposed in previous studies, we use only these simple features and reserve the other features for further studies. Using the ASR confidence scores, we estimate whether each word is correctly recognized. If the ASR confidence score of a word is greater than threshold tw, the word is estimated as correct, and we set the ASR confidence feature value to 1; otherwise we set it to 0. 3.4 Rejection at the NER level We use the ASR confidence feature to suppress ASR error problems; however, even text-based NERs sometimes make errors. NER performance is a trade-off between missing correct NEs and accepting erroneous NEs, and requirements differ by task. Although we can tune the parameters in training SVMs to control the trade-off, it seems very hard to find appropriate values for all the SVMs. We use a simple NER-level rejection by modifying the SVM output scores for the nonNE class (OTHER). We add constant offset value to to each SVM output score for OTHER. With a large to, OTHER becomes more desirable than the other NE classes, and many words are classified as nonNE words and vice versa. Therefore, to works as a parameter for NER-level rejection. This approach can also be applied to text-based NER. 4 Experiments We conducted the following experiments related to the NER of speech data to investigate the performance of the proposed method. 4.1 Setup In the experiment, we simulated the procedure shown in Figure 1 using speech data from the NE-labeled text corpus. We used the training data of the Information Retrieval and Extraction Exercise (IREX) workshop (Sekine and Eriguchi, 2000) as the text corpus, which consisted of 1,174 Japanese newspaper articles (10,718 sentences) and 18,200 NEs in eight categories (artifact, organization, location, person, date, time, money, and percent). The sentences were read by 106 speakers (about 100 sentences per speaker), and the recorded speech data were used for the experiments. The experiments were conducted with 5fold cross validation, using 80% of the 1,174 articles and the ASR results of the corresponding speech data for training SVMs (both for ASR confidence scoring and for NER) and the rest for the test. We tokenized the sentences into words and tagged the part-of-speech information using the Japanese morphological analyzer ChaSen 1 2.3.3 and then labeled the NEs. Unreadable tokens such as parentheses were removed in tokenization. After tokenization, the text corpus had 264,388 words of 60 part-of-speech types. 
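To make the confidence feature construction of Section 3.3 concrete (the binary bucketing of word posterior probabilities, the sigmoid s_w, and the threshold t_w varied in the experiments below), a minimal sketch follows; the raw SVM output values are placeholders for the output of a trained confidence classifier.

import math

def posterior_bins(p, n_bins=10):
    """Reduce a word posterior probability to binary bucket features:
    bin k is 1 iff k/10 < p <= (k+1)/10, as in Section 3.3."""
    return [1 if k / n_bins < p <= (k + 1) / n_bins else 0 for k in range(n_bins)]

def confidence_score(svm_output, beta_w=1.0):
    """Normalize the confidence SVM's raw output with the sigmoid s_w(x)."""
    return 1.0 / (1.0 + math.exp(-beta_w * svm_output))

def asr_confidence_feature(svm_output, t_w=0.4, beta_w=1.0):
    """1 if the word is estimated as correctly recognized, else 0."""
    return 1 if confidence_score(svm_output, beta_w) > t_w else 0

print(posterior_bins(0.37))            # [0, 0, 0, 1, 0, 0, 0, 0, 0, 0]
print(asr_confidence_feature(1.2))     # confidence ~0.77 > 0.4 -> 1
print(asr_confidence_feature(-2.0))    # confidence ~0.12 <= 0.4 -> 0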
Since three different kinds of characters are used in Japanese, the character types used as features included: single-kanji (words written in a single Chinese character), all-kanji (longer words written in Chinese characters), hiragana (words written in hiragana Japanese phonograms), katakana (words written in katakana Japanese phonograms), number, single-capital (words with a single capitalized letter), all-capital, capitalized (only the first letter is capitalized), roman (other roman character words), and others (all other words). We used all the features that appeared in each training set (no feature selection was performed). The chunking states included in the NE classes were: BEGIN (beginning of a NE), MIDDLE (middle of a NE), END (ending of a NE), and SINGLE (a single-word NE). There were 33 NE classes (eight categories * four chunking states + OTHER), and therefore we trained 33 SVMs to distinguish words of a class from words of other classes. For NER, we used an SVM-based chunk annotator YamCha 2 0.33 with a quadratic kernel (1 + ⃗x · ⃗y)2 and a soft margin parameter of SVMs C=0.1 for training and applied sigmoid function sn(x) with βn=1.0 and Viterbi search to the SVMs’ outputs. These parameters were experimentally chosen using the test set. We used an ASR engine (Hori et al., 2004) with a speaker-independent acoustic model. The lan1http://chasen.naist.jp/hiki/ChaSen/ (in Japanese) 2http://www.chasen.org/˜taku/software/yamcha/ 620 guage model was a word 3-gram model, trained using other Japanese newspaper articles (about 340 M words) that were also tokenized using ChaSen. The vocabulary size of the word 3-gram model was 426,023. The test-set perplexity over the text corpus was 76.928. The number of outof-vocabulary words was 1,551 (0.587%). 223 (1.23%) NEs in the text corpus contained such outof-vocabulary words, so those NEs could not be correctly recognized by ASR. The scaling parameter α was set to 0.01, which showed the best ASR error estimation results using word posterior probabilities in the test set in terms of receiver operator characteristic (ROC) curves. The language model weight β was set to 15, which is a commonly used value in our ASR system. The word accuracy obtained using our ASR engine for the overall dataset was 79.45%. In the ASR results, 82.00% of the NEs in the text corpus remained. Figure 2 shows the ROC curves of ASR error estimation for the overall five cross-validation test sets, using SVMbased ASR confidence scoring and word posterior probabilities as ASR confidence scores, where True positive rate = # correctly recognized words estimated as correct # correctly recognized words False positive rate = # misrecognized words estimated as correct # misrecognized words . In SVM-based ASR confidence scoring, we used the quadratic kernel and C=0.01. Parameter βw of sigmoid function sw(x) was set to 1.0. These parameters were also experimentally chosen. SVMbased ASR confidence scoring showed better performance in ASR error estimation than simple word posterior probabilities by integrating multiple features. Five values of ASR confidence threshold tw were tested in the following experiments: 0.2, 0.3, 0.4, 0.5, and 0.6 (shown by black dots in Figure 2). 4.2 Evaluation metrics Evaluation was based on an averaged NER Fmeasure, which is the harmonic mean of NER precision and recall: NER precision = # correctly recognized NEs # recognized NEs NER recall = # correctly recognized NEs # NEs in original text . 
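A small sketch of how one point on such a ROC curve is computed from per-word confidence scores and the alignment-derived correctness labels; the scores and labels below are invented.

def roc_point(confidences, labels, threshold):
    """True and false positive rates as defined above. confidences: ASR confidence
    score per word; labels: 1 if the word was correctly recognized (known from
    alignment with the reference transcription), else 0."""
    correct = [c for c, l in zip(confidences, labels) if l == 1]
    wrong = [c for c, l in zip(confidences, labels) if l == 0]
    tpr = sum(1 for c in correct if c > threshold) / len(correct)
    fpr = sum(1 for c in wrong if c > threshold) / len(wrong)
    return tpr, fpr

# Hypothetical scores for eight words, five correct and three misrecognized.
conf = [0.9, 0.8, 0.7, 0.55, 0.3, 0.6, 0.35, 0.1]
gold = [1,   1,   1,   1,    1,   0,   0,    0]
for t_w in (0.2, 0.4, 0.6):
    print(t_w, roc_point(conf, gold, t_w))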
0 20 40 60 80 100 0 20 40 60 80 100 True positve rate (%) False positive rate (%) =0.3 =0.4 SVM-based confidence scoring Word posterior probability tw t t t w =0.2 tw =0.6 w =0.5 w Figure 2: SVM-based confidence scoring outperforms word posterior probability for ASR error estimation. A recognized NE was accepted as correct if and only if it appeared in the same position as its reference NE through alignment, in addition to having the correct NE surface and category, because the same NEs might appear more than once. Comparisons of NE surfaces did not include differences in word segmentation because of the segmentation ambiguity in Japanese. Note that NER recall with ASR results could not exceed the rate of the remaining NEs after ASR (about 82%) because NEs containing ASR errors were always lost. In addition, we also evaluated the NER performance in NER precision and recall with NERlevel rejection using the procedure in Section 3.4, by modifying the non-NE class scores using offset value to. 4.3 Compared methods We compared several combinations of features and training conditions for evaluating the effect of incorporating the ASR confidence feature and investigating differences among training data: textbased, ASR-based, and both. Baseline does not use the ASR confidence feature and is trained using text-based training data only. NoConf-A does not use the ASR confidence feature and is trained using ASR-based training data only. 621 Method Confidence Training Test F-measure (%) Precision (%) Recall (%) Baseline Text ASR 67.00 70.67 63.70 NoConf-A Not used ASR ASR 65.52 78.86 56.05 NoConf-TA Text+ASR ASR 66.95 77.55 58.91 Conf-A ASR ASR∗ 67.69 76.69 60.59 Proposed Used Text+ASR ASR∗ 69.02 78.13 61.81 Conf-Reject Used† Text+ASR ASR∗ 68.77 77.57 61.78 Conf-UB Used Text+ASR ASR∗∗ 73.14 87.51 62.83 Transcription Not used Text Text 84.04 86.27 81.93 Table 4: NER results in averaged NER F-measure, precision, and recall without considering NER-level rejection (to = 0). ASR word accuracy was 79.45%, and 82.00% of NEs remained in ASR results. (†Unconfident words were rejected and replaced by word error symbols, ∗tw = 0.4, ∗∗ASR errors were known.) NoConf-TA does not use the ASR confidence feature and is trained using both text-based and ASR-based training data. Conf-A uses the ASR confidence feature and is trained using ASR-based training data only. Proposed uses the ASR confidence feature and is trained using both text-based and ASR-based training data. Conf-Reject is almost the same as Proposed, but misrecognized words are rejected and replaced with word error symbols, as described at the end of Section 3.2. The following two methods are for reference. Conf-UB assumes perfect ASR confidence scoring, so the ASR errors in the test set are known. The NER model, which is identical to Proposed, is regarded as the upper-boundary of Proposed. Transcription applies the same model as Baseline to reference transcriptions, assuming word accuracy is 100%. 4.4 NER Results In the NER experiments, Proposed achieved the best results among the above methods. Table 4 shows the NER results obtained by the methods without considering NER-level rejection (i.e., to = 0), using threshold tw = 0.4 for Conf-A, Proposed, and Conf-Reject, which resulted in the best NER F-measures (see Table 5). Proposed showed the best F-measure, 69.02%. It outperformed Baseline by 2.0%, with a 7.5% improvement in precision, instead of a recall decrease of 1.9%. 
Conf-Reject showed slightly worse results Method tw F (%) P (%) R (%) 0.2 66.72 71.28 62.71 0.3 67.32 73.68 61.98 Conf-A 0.4 67.69 76.69 60.59 0.5 67.04 79.64 57.89 0.6 64.48 81.90 53.14 0.2 68.08 72.54 64.14 0.3 68.70 75.11 63.31 Proposed 0.4 69.02 78.13 61.81 0.5 68.17 80.88 58.93 0.6 65.39 83.00 53.96 0.2 68.06 72.49 64.14 0.3 68.61 74.88 63.31 Conf-Reject 0.4 68.77 77.57 61.78 0.5 67.93 80.23 58.91 0.6 64.93 82.05 53.73 Table 5: NER results with varying ASR confidence score threshold tw for Conf-A, Proposed, and Conf-Reject. (F: F-measure, P: precision, R: recall) than Proposed. Conf-A resulted in 1.3% worse Fmeasure than Proposed. NoConf-A and NoConfTA achieved 7-8% higher precision than Baseline; however, their F-measure results were worse than Baseline because of the large drop of recall. The upper-bound results of the proposed method (Conf-UB) in F-measure was 73.14%, which was 4% higher than Proposed. Figure 3 shows NER precision and recall with NER-level rejection by to for Baseline, NoConfTA, Proposed, Conf-UB, and Transcription. In the figure, black dots represent results with to = 0, as shown in Table 4. By all five methods, we 622 0 20 40 60 80 100 50 60 70 80 90 100 Recall (%) Precision (%) Baseline NoConf-TA Proposed Conf-UB Transcription Figure 3: NER precision and recall with NERlevel rejection by to obtained higher precision with to > 0. Proposed achieved more than 5% higher precision than Baseline on most recall ranges and showed higher precision than NoConf-TA on recall ranges higher than about 35%. 5 Discussion The proposed method effectively improves NER performance, as shown by the difference between Proposed and Baseline in Tables 4 and 5. Improvement comes from two factors: using both textbased and ASR-based training data and incorporating ASR confidence feature. As shown by the difference between Baseline and the methods using ASR-based training data (NoConf-A, NoConfTA, Conf-A, Proposed, Conf-Reject), ASR-based training data increases precision and decreases recall. In ASR-based training data, all words constituting NEs that contain ASR errors are regarded as non-NE words, and those NE examples are lost in training, which emphasizes NER precision. When text-based training data are also available, they compensate for the loss of NE examples and recover NER recall, as shown by the difference between the methods without textbased training data (NoConf-A, Conf-A) and those with (NoConf-TA, Proposed). The ASR confidence feature also increases NER recall, as shown by the difference between the methods without it (NoConf-A, NoConf-TA) and with it (Conf-A, Proposed). This suggests that the ASR confidence feature helps distinguish whether ASR error influences NER and suppresses excessive rejection of NEs around ASR errors. With respect to the ASR confidence feature, the small difference between Conf-Reject and Proposed suggests that ASR confidence is a more dominant feature in misrecognized words than the other features: the word itself, its part-of-speech tag, and its character type. In addition, the difference between Conf-UB and Proposed indicated that there is room to improve NER performance with better ASR confidence scoring. NER-level rejection also increased precision, as shown in Figure 3. We can control the tradeoff between precision and recall with to according to the task requirements, even in text-based NER. In the NER of speech data, we can obtain much higher precision using both ASR-based training data and NER-level rejection than using either one. 
6 Related work Recent studies on the NER of speech data consider more than 1-best ASR results in the form of N-best lists and word lattices. Using many ASR hypotheses helps recover the ASR errors of NE words in 1-best ASR results and improves NER accuracy. Our method can be extended to multiple ASR hypotheses. Generative NER models were used for multipass ASR and NER searches using word lattices (Horlock and King, 2003b; B´echet et al., 2004; Favre et al., 2005). Horlock and King (2003a) also proposed discriminative training of their NER models. These studies showed the advantage of using multiple ASR hypotheses, but they do not use overlapping features. Discriminative NER models were also applied to multiple ASR hypotheses. Zhai et al. (2004) applied text-based NER to N-best ASR results, and merged the N-best NER results by weighted voting based on several sentence-level results such as ASR and NER scores. Using the ASR confidence feature does not depend on SVMs and can be used with their method and other discriminative models. 7 Conclusion We proposed a method for NER of speech data that incorporates ASR confidence as a feature of discriminative NER, where the NER model 623 is trained using both text-based and ASR-based training data. In experiments using SVMs, the proposed method showed a higher NER Fmeasure, especially in terms of improving precision, than simply applying text-based NER to ASR results. The method effectively rejected erroneous NEs due to ASR errors with a small drop of recall, thanks to both the ASR confidence feature and ASR-based training data. NER-level rejection also effectively increased precision. Our approach can also be used in other tasks in spoken language processing, and we expect it to be effective. Since confidence itself is not limited to speech, our approach can also be applied to other noisy inputs, such as optical character recognition (OCR). For further improvement, we will consider N-best ASR results or word lattices as inputs and introduce more speech-specific features such as word durations and prosodic features. Acknowledgments We would like to thank anonymous reviewers for their helpful comments. References Fr´ed´eric B´echet, Allen L. Gorin, Jeremy H. Wright, and Dilek Hakkani-T¨ur. 2004. Detecting and extracting named entities from spontaneous speech in a mixed-initiative spoken dialogue context: How May I Help You? Speech Communication, 42(2):207– 225. Andrew Borthwick. 1999. A Maximum Entropy Approach to Named Entity Recognition. Ph.D. thesis, New York University. Hai Leong Chieu and Hwee Tou Ng. 2003. Named entity recognition with a maximum entropy approach. In Proc. CoNLL, pages 160–163. Benoˆıt Favre, Fr´ed´eric B´echet, and Pascal Noc´era. 2005. Robust named entity extraction from large spoken archives. In Proc. HLT-EMNLP, pages 491– 498. Takaaki Hori, Chiori Hori, and Yasuhiro Minami. 2004. Fast on-the-fly composition for weighted finite-state transducers in 1.8 million-word vocabulary continuous-speech recognition. In Proc. ICSLP, volume 1, pages 289–292. James Horlock and Simon King. 2003a. Discriminative methods for improving named entity extraction on speech data. In Proc. EUROSPEECH, pages 2765–2768. James Horlock and Simon King. 2003b. Named entity extraction from word lattices. In Proc. EUROSPEECH, pages 1265–1268. Hideki Isozaki and Hideto Kazawa. 2002. Efficient support vector classifiers for named entity recognition. In Proc. COLING, pages 390–396. Simo O. Kamppari and Timothy J. Hazen. 2000. 
Word and phone level acoustic confidence scoring. In Proc. ICASSP, volume 3, pages 1799–1802. Andrew McCallum and Wei Li. 2003. Early results for named entity recognition with conditional random fields, feature induction and web-enhanced lexicons. In Proc. CoNLL, pages 188–191. David Miller, Richard Schwartz, Ralph Weischedel, and Rebecca Stone. 1999. Named entity extraction from broadcast news. In Proceedings of the DARPA Broadcast News Workshop, pages 37–40. David D. Palmer and Mari Ostendorf. 2001. Improving information extraction by modeling errors in speech recognizer output. In Proc. HLT, pages 156–160. Thomas Schaaf and Thomas Kemp. 1997. Confidence measures for spontaneous speech recognition. In Proc. ICASSP, volume II, pages 875–878. Satoshi Sekine and Yoshio Eriguchi. 2000. Japanese named entity extraction evaluation - analysis of results. In Proc. COLING, pages 25–30. Satoshi Sekine, Ralph Grishman, and Hiroyuki Shinnou. 1998. A decision tree method for finding and classifying names in Japanese texts. In Proc. the Sixth Workshop on Very Large Corpora, pages 171– 178. Frank Wessel, Ralf Schl¨uter, Klaus Macherey, and Hermann Ney. 2001. Confidence measures for large vocabulary continuous speech recognition. IEEE Transactions on Speech and Audio Processing, 9(3):288–298. Lufeng Zhai, Pascale Fung, Richard Schwartz, Marine Carpuat, and Dekai Wu. 2004. Using N-best lists for named entity recognition from chinese speech. In Proc. HLT-NAACL, pages 37–40. Rong Zhang and Alexander I. Rudnicky. 2001. Word level confidence annotation using combinations of features. In Proc. EUROSPEECH, pages 2105– 2108. 624
Proceedings of the 21st International Conference on Computational Linguistics and 44th Annual Meeting of the ACL, pages 625–632, Sydney, July 2006. c⃝2006 Association for Computational Linguistics Exploiting Syntactic Patterns as Clues in Zero-Anaphora Resolution Ryu Iida, Kentaro Inui and Yuji Matsumoto Graduate School of Information Science, Nara Institute of Science and Technology 8916-5 Takayama, Ikoma, Nara, 630-0192, Japan {ryu-i,inui,matsu}@is.naist.jp Abstract We approach the zero-anaphora resolution problem by decomposing it into intra-sentential and inter-sentential zeroanaphora resolution. For the former problem, syntactic patterns of the appearance of zero-pronouns and their antecedents are useful clues. Taking Japanese as a target language, we empirically demonstrate that incorporating rich syntactic pattern features in a state-of-the-art learning-based anaphora resolution model dramatically improves the accuracy of intra-sentential zero-anaphora, which consequently improves the overall performance of zeroanaphora resolution. 1 Introduction Zero-anaphora is a gap in a sentence that has an anaphoric function similar to a pro-form (e.g. pronoun) and is often described as “referring back” to an expression that supplies the information necessary for interpreting the sentence. For example, in the sentence “There are two roads to eternity, a straight and narrow, and a broad and crooked,” the gaps in “a straight and narrow (gap)” and “a broad and crooked (gap)” have a zero-anaphoric relationship to “two roads to eternity.” The task of identifying zero-anaphoric relations in a given discourse, zero-anaphora resolution, is essential in a wide range of NLP applications. This is the case particularly in such a language as Japanese, where even obligatory arguments of a predicate are often omitted when they are inferable from the context. In fact, in our Japanese newspaper corpus, for example, 45.5% of the nominative arguments of verbs are omitted. Since such gaps can not be interpreted only by shallow syntactic parsing, a model specialized for zero-anaphora resolution needs to be devised on the top of shallow syntactic and semantic processing. Recent work on zero-anaphora resolution can be located in two different research contexts. First, zero-anaphora resolution is studied in the context of anaphora resolution (AR), in which zeroanaphora is regarded as a subclass of anaphora. In AR, the research trend has been shifting from rulebased approaches (Baldwin, 1995; Lappin and Leass, 1994; Mitkov, 1997, etc.) to empirical, or corpus-based, approaches (McCarthy and Lehnert, 1995; Ng and Cardie, 2002a; Soon et al., 2001; Strube and M¨uller, 2003; Yang et al., 2003) because the latter are shown to be a cost-efficient solution achieving a performance that is comparable to best performing rule-based systems (see the Coreference task in MUC1 and the Entity Detection and Tracking task in the ACE program2). The same trend is observed also in Japanese zeroanaphora resolution, where the findings made in rule-based or theory-oriented work (Kameyama, 1986; Nakaiwa and Shirai, 1996; Okumura and Tamura, 1996, etc.) have been successfully incorporated in machine learning-based frameworks (Seki et al., 2002; Iida et al., 2003). Second, the task of zero-anaphora resolution has some overlap with Propbank3-style semantic role labeling (SRL), which has been intensively studied, for example, in the context of the CoNLL SRL task4. 
In this task, given a sentence “To attract younger listeners, Radio Free Europe intersperses the latest in Western rock groups”, an SRL 1http://www-nlpir.nist.gov/related projects/muc/ 2http://projects.ldc.upenn.edu/ace/ 3http://www.cis.upenn.edu/˜mpalmer/project pages/ACE.htm 4http://www.lsi.upc.edu/˜srlconll/ 625 model is asked to identify the NP Radio Free Europe as the A0 (Agent) argument of the verb attract. This can be seen as the task of finding the zero-anaphoric relationship between a nominal gap (the A0 argument of attract) and its antecedent (Radio Free Europe) under the condition that the gap and its antecedent appear in the same sentence. In spite of this overlap between AR and SRL, there are some important findings that are yet to be exchanged between them, partly because the two fields have been evolving somewhat independently. The AR community has recently made two important findings: • A model that identifies the antecedent of an anaphor by a series of comparisons between candidate antecedents has a remarkable advantage over a model that estimates the absolute likelihood of each candidate independently of other candidates (Iida et al., 2003; Yang et al., 2003). • An AR model that carries out antecedent identification before anaphoricity determination, the decision whether a given NP is anaphoric or not (i.e. discourse-new), significantly outperforms a model that executes those subtasks in the reverse order or simultaneously (Poesio et al., 2004; Iida et al., 2005). To our best knowledge, however, existing SRL models do not exploit these advantages. In SRL, on the other hand, it is common to use syntactic features derived from the parse tree of a given input sentence for argument identification. A typical syntactic feature is the path on a parse tree from a target predicate to a noun phrase in question (Gildea and Jurafsky, 2002; Carreras and Marquez, 2005). However, existing AR models deal with intra- and inter-sentential anaphoric relations in a uniform manner; that is, they do not use as rich syntactic features as state-of-the-art SRL models do, even in finding intra-sentential anaphoric relations. We believe that the AR and SRL communities can learn more from each other. Given this background, in this paper, we show that combining the aforementioned techniques derived from each research trend makes significant impact on zero-anaphora resolution, taking Japanese as a target language. More specifically, we demonstrate the following: • Incorporating rich syntactic features in a state-of-the-art AR model dramatically improves the accuracy of intra-sentential zeroanaphora resolution, which consequently improves the overall performance of zeroanaphora resolution. This is to be considered as a contribution to AR research. • Analogously to inter-sentential anaphora, decomposing the antecedent identification task into a series of comparisons between candidate antecedents works remarkably well also in intra-sentential zero-anaphora resolution. We hope this finding to be adopted in SRL. The rest of the paper is organized as follows. Section 2 describes the task definition of zeroanaphora resolution in Japanese. In Section 3, we review previous approaches to AR. Section 4 described how the proposed model incorporates effectively syntactic features into the machine learning-based approach. We then report the results of our experiments on Japanese zeroanaphora resolution in Section 5 and conclude in Section 6. 
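As an illustration of the preference-based antecedent identification mentioned above, the sketch below runs a tournament in which candidate antecedents compete in a series of pairwise matches and the winner of each match advances. The pairwise preference function is only a placeholder for the trained classifier over rich syntactic pattern features, and the candidate ordering is an assumption made for the example; this is not the authors' implementation.

from typing import Callable, List, Optional

Candidate = str  # stand-in for a candidate antecedent representation

def tournament_select(candidates: List[Candidate],
                      zero_pronoun: str,
                      prefer_right: Callable[[str, Candidate, Candidate], bool]
                      ) -> Optional[Candidate]:
    """Return the candidate that survives a series of pairwise matches."""
    if not candidates:
        return None
    winner = candidates[0]
    for challenger in candidates[1:]:
        # prefer_right returns True if the challenger is judged a better
        # antecedent for this zero-pronoun than the current winner.
        if prefer_right(zero_pronoun, winner, challenger):
            winner = challenger
    return winner

# Toy usage with a trivial preference (the later candidate always wins):
if __name__ == "__main__":
    toy_prefer = lambda zp, left, right: True
    print(tournament_select(["candidate_A", "candidate_B", "candidate_C"],
                            "phi", toy_prefer))

The point of the arrangement is that the model only ever learns relative preferences between two candidates, rather than an absolute likelihood for each candidate in isolation.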
2 Zero-anaphora resolution In this paper, we consider only zero-pronouns that function as an obligatory argument of a predicate for two reasons: • Providing a clear definition of zero-pronouns appearing in adjunctive argument positions involves awkward problems, which we believe should be postponed until obligatory zero-anaphora is well studied. • Resolving obligatory zero-anaphora tends to be more important than adjunctive zeropronouns in actual applications. A zero-pronoun may have its antecedent in the discourse; in this case, we say the zero-pronoun is anaphoric. On the other hand, a zero-pronoun whose referent does not explicitly appear in the discourse is called a non-anaphoric zero-pronoun. A zero-pronoun may be non-anaphoric typically when it refers to an extralinguistic entity (e.g. the first or second person) or its referent is unspecified in the context. The following are Japanese examples. In sentence (1), zero-pronoun φi is anaphoric as its antecedent, ‘shusho (prime minister)’, appears in the same sentence. In sentence (2), on the other hand, φj is considered non-anaphoric if its referent (i.e. the first person) does not appear in the discourse. (1) shushoi-wa houbeisi-te , prime ministeri-TOP visit-U.S.-CONJ PUNC 626 ryoukoku-no gaikou-o both countries-BETWEEN diplomacy-OBJ (φi-ga) suishinsuru (φi-NOM) promote-ADNOM houshin-o akirakanisi-ta . plan-OBJ unveil-PAST PUNC The prime minister visited the united states and unveiled the plan to push diplomacy between the two countries. (2) (φj-ga) ie-ni kaeri-tai . (φj-NOM) home-DAT want to go back PUNC (I) want to go home. Given this distinction, we consider the task of zero-anaphora resolution as the combination of two sub-problems, antecedent identification and anaphoricity determination, which is analogous to NP-anaphora resolution: For each zero-pronoun in a given discourse, find its antecedent if it is anaphoric; otherwise, conclude it to be non-anaphoric. 3 Previous work 3.1 Antecedent identification Previous machine learning-based approaches to antecedent identification can be classified as either the candidate-wise classification approach or the preference-based approach. In the former approach (Soon et al., 2001; Ng and Cardie, 2002a, etc.), given a target anaphor, TA, the model estimates the absolute likelihood of each of the candidate antecedents (i.e. the NPs preceding TA), and selects the best-scored candidate. If all the candidates are classified negative, TA is judged nonanaphoric. In contrast, the preference-based approach (Yang et al., 2003; Iida et al., 2003) decomposes the task into comparisons of the preference between candidates and selects the most preferred one as the antecedent. For example, Iida et al. (2003) proposes a method called the tournament model. This model conducts a tournament consisting of a series of matches in which candidate antecedents compete with each other for a given anaphor. While the candidate-wise classification model computes the score of each single candidate independently of others, the tournament model learns the relative preference between candidates, which is empirically proved to be a significant advantage over candidate-wise classification (Iida et al., 2003). 3.2 Anaphoricity determination There are two alternative ways for anaphoricity determination: the single-step model and the two-step model. The single-step model (Soon et al., 2001; Ng and Cardie, 2002a) determines the anaphoricity of a given anaphor indirectly as a by-product of the search for its antecedent. 
If an appropriate candidate antecedent is found, the anaphor is classified as anaphoric; otherwise, it is classified as non-anaphoric. One disadvantage of this model is that it cannot employ the preferencebased model because the preference-based model is not capable of identifying non-anaphoric cases. The two-step model (Ng, 2004; Poesio et al., 2004; Iida et al., 2005), on the other hand, carries out anaphoricity determination in a separate step from antecedent identification. Poesio et al. (2004) and Iida et al. (2005) claim that the latter subtask should be done before the former. For example, given a target anaphor (TA), Iida et al.’s selection-then-classification model: 1. selects the most likely candidate antecedent (CA) of TA using the tournament model, 2. classifies TA paired with CA as either anaphoric or non-anaphoric using an anaphoricity determination model. If the CA-TA pair is classified as anaphoric, CA is identified as the antecedent of TA; otherwise, TA is conclude to be non-anaphoric. The anaphoricity determination model learns the non-anaphoric class directly from non-anaphoric training instances whereas the single-step model cannot not use non-anaphoric cases in training. 4 Proposal 4.1 Task decomposition We approach the zero-anaphora resolution problem by decomposing it into two subtasks: intrasentential and inter-sentential zero-anaphora resolution. For the former problem, syntactic patterns in which zero-pronouns and their antecedents appear may well be useful clues, which, however, does not apply to the latter problem. We therefore build a separate component for each subtask, adopting Iida et al. (2005)’s selection-thenclassification model for each component: 1. Intra-sentential antecedent identification: For a given zero-pronoun ZP in a given sentence S, select the most-likely candidate antecedent C∗ 1 from the candidates appearing in S by the intra-sentential tournament model 627 2. Intra-sentential anaphoricity determination: Estimate plausibility p1 that C∗ 1 is the true antecedent, and return C∗ 1 if p1 ≥θintra (θintra is a preselected threshold) or go to 3 otherwise 3. Inter-sentential antecedent identification: Select the most-likely candidate antecedent C∗ 2 from the candidates appearing outside of S by the inter-sentential tournament model. 4. Inter-sentential anaphoricity determination: Estimate plausibility p2 that C∗ 2 is the true antecedent, and return C∗ 2 if p2 ≥θinter (θinter is a preselected threshold) or return non-anaphoric otherwise. 4.2 Representation of syntactic patterns In the first two of the above four steps, we use syntactic pattern features. Analogously to SRL, we extract the parse path between a zero-pronoun to its antecedent to capture the syntactic pattern of their occurrence. Among many alternative ways of representing a path, in the experiments reported in the next section, we adopted a method as we describe below, leaving the exploration of other alternatives as future work. Given a sentence, we first use a standard dependency parser to obtain the dependency parse tree, in which words are structured according to the dependency relation between them. Figure 1(a), for example, shows the dependency tree of sentence (1) given in Section 2. We then extract the path between a zero-pronoun and its antecedent as in Figure 1(b). 
Finally, to encode the order of siblings and reduce data sparseness, we further transform the extracted path as in Figure 1(c):
• A path is represented by a subtree consisting of backbone nodes: φ (zero-pronoun), Ant (antecedent), Node (the lowest common ancestor), LeftNode (left-branch node) and RightNode.
• Each backbone node has daughter nodes, each corresponding to a function word associated with it.
• Content words are deleted.

Figure 1: Representation of the path between a zero-pronoun and its antecedent ((a) dependency tree, (b) extracted path, (c) transformed path).

This way of encoding syntactic patterns is used in intra-sentential anaphoricity determination. In antecedent identification, on the other hand, the tournament model allows us to incorporate three paths, a path for each pair of a zero-pronoun and the left and right candidate antecedents, as shown in Figure 2 (to indicate which node belongs to which subtree, the label of each node is prefixed with either L, R or I).

Figure 2: Paths used in the tournament model.

4.3 Learning algorithm

As noted in Section 1, the use of zero-pronouns in Japanese is relatively less constrained by syntax compared, for example, with English. This forces the above way of encoding path information to produce an explosive number of different paths, which inevitably leads to serious data sparseness. This issue can be addressed in several ways. The SRL community has devised a range of variants of the standard path representation to reduce the complexity (Carreras and Marquez, 2005). Applying kernel methods such as Tree kernels (Collins and Duffy, 2001) and Hierarchical DAG kernels (Suzuki et al., 2003) is another strong option. The Boosting-based algorithm proposed by Kudo and Matsumoto (2004) is designed to learn subtrees useful for classification. Leaving the question of selecting learning algorithms open, in our experiments we have so far examined Kudo and Matsumoto (2004)'s algorithm, which is implemented as the BACT system6. Given a set of training instances, each of which is represented as a tree labeled either positive or negative, the BACT system learns a list of weighted decision stumps with a Boosting algorithm. Each decision stump is associated with a tuple ⟨t, l, w⟩, where t is a subtree appearing in the training set, l a label, and w a weight, indicating that if a given input includes t, it gives w votes to l. The strength of this algorithm is that it deals with structured features and allows us to analyze the utility of features.

Figure 4: Tree representation of features for the tournament model.

In antecedent identification, we train the tournament model by providing a set of labeled trees as a training set, where a label is either left or right. Each labeled tree has (i) path trees TL, TR and TI (as given in Figure 2) and (ii) a set of nodes corresponding to the binary features summarized in Table 3, each of which is linked to the root node as illustrated in Figure 4.
This way of organizing a labeled tree allows the model to learn, for example, the combination of a subtree of TL and some of the binary features. Analogously, for anaphoricity determination, we use trees (TC, f1, . . . , fn), where TC denotes a path subtree as in Figure 1(c). 5 Experiments We conducted an evaluation of our method using Japanese newspaper articles. The following four models were compared: 1. BM: Ng and Cardie (2002a)’s model, which identify antecedents by the candidatewise classification model, and determine anaphoricity using the one-step model. 6http://chasen.org/˜taku/software/bact/ 2. BM STR: BM with the syntactic features such as those in Figure 1(c). 3. SCM: The selection-then-classification model explained in Section 3. 4. SCM STR: SCM with all types of syntactic features shown in Figure 2. 5.1 Setting We created an anaphoric relation-tagged corpus consisting of 197 newspaper articles (1,803 sentences), 137 articles annotated by two annotators and 60 by one. The agreement ratio between two annotators on the 197 articles was 84.6%, which indicated that the annotation was sufficiently reliable. In the experiments, we removed from the above data set the zero-pronouns to which the two annotators did not agree. Consequently, the data set contained 995 intra-sentential anaphoric zero-pronouns, 754 inter-sentential anaphoric zero-pronouns, and 603 non-anaphoric zeropronouns (2,352 zero-pronouns in total), with each anaphoric zero-pronoun annotated to be linked to its antecedent. For each of the following experiments, we conducted five-fold cross-validation over 2,352 zero-pronouns so that the set of the zero-pronouns from a single text was not divided into the training and test sets. In the experiments, all the features were automatically acquired with the help of the following NLP tools: the Japanese morphological analyzer ChaSen7 and the Japanese dependency structure analyzer CaboCha8, which also carried out named-entity chunking. 5.2 Results on intra-sentential zero-anaphora resolution In both intra-anaphoricity determination and antecedent identification, we investigated the effect of introducing the syntactic features for improving the performance. First, the results of antecedent identification are shown in Table 1. The comparison between BM (SCM) with BM STR (SCM STR) indicates that introducing the structural information effectively contributes to this task. In addition, the large improvement from BM STR to SCM STR indicates that the use of the preference-based model has significant impact on intra-sentential antecedent identification. This 7http://chasen.naist.jp/hiki/ChaSen/ 8http://chasen.org/˜taku/software/cabocha/ 629 Figure 3: Feature set. Feature Type Feature Description Lexical HEAD BF characters of right-most morpheme in NP (PRED). Grammatical PRED IN MATRIX 1 if PRED exists in the matrix clause; otherwise 0. PRED IN EMBEDDED 1 if PRED exists in the relative clause; otherwise 0. PRED VOICE 1 if PRED contains auxiliaries such as ‘(ra)reru’; otherwise 0. PRED AUX 1 if PRED contains auxiliaries such as ‘(sa)seru’, ‘hosii’, ‘morau’, ‘itadaku’, ‘kudasaru’, ‘yaru’ and ‘ageru’. PRED ALT 1 if PRED VOICE is 1 or PRED AUX is 1; otherwise 0. POS Part-of-speech of NP followed by IPADIC (Asahara and Matsumoto, 2003). DEFINITE 1 if NP contains the article corresponding to DEFINITE ‘the’, such as ‘sore’ or ‘sono’; otherwise 0. DEMONSTRATIVE 1 if NP contains the article corresponding to DEMONSTRATIVE ‘that’ or ‘this’, such as ‘kono’, ‘ano’; otherwise 0. 
PARTICLE Particle followed by NP, such as ‘wa (topic)’, ‘ga (subject)’, ‘o (object)’. Semantic NE Named entity of NP: PERSON, ORGANIZATION, LOCATION, ARTIFACT, DATE, TIME, MONEY, PERCENT or N/A. EDR HUMAN 1 if NP is included among the concept ‘a human being’ or ‘atribute of a human being’ in EDR dictionary (Jap, 1995); otherwise 0. PRONOUN TYPE Pronoun type of NP. (e.g. ‘kare (he)’ →PERSON, ‘koko (here)’ →LOCATION, ‘sore (this)’ →OTHERS) SELECT REST 1 if NP satisfies selectional restrictions in Nihongo Goi Taikei (Japanese Lexicon) (Ikehara et al., 1997); otherwise 0. COOC the score of well-formedness model estimated from a large number of triplets ⟨Noun, Case, Predicate⟩proposed by Fujita et al. (2004) Positional SENTNUM Distance between NP and PRED. BEGINNING 1 if NP is located in the beggining of sentence; otherwise 0. END 1 if NP is located in the end of sentence; otherwise 0. PRED NP 1 if PRED precedes NP; otherwise 0. NP PRED 1 if NP precedes PRED; otherwise 0. DEP PRED 1 if NPi depends on PRED; otherwise 0. DEP NP 1 if PRED depends on NPi; otherwise 0. IN QUOTE 1 if NP exists in the quoted text; otherwise 0. Heuristic CL RANK a rank of NP in forward looking-center list based on Centering Theory (Grosz et al., 1995) CL ORDER a order of NP in forward looking-center list based on Centering Theory (Grosz et al., 1995) NP and PRED stand for a bunsetsu-chunk of a candidate antecedent and a bunsetsu-chunk of a predicate which has a target zero-pronoun respectively. finding may well contribute to semantic role labeling because these two tasks have a large overlap as discussed in Section 1. Second, to evaluate the performance of intrasentential zero-anaphora resolution, we plotted recall-precision curves altering threshold parameter and θinter for intra-anaphoricity determination as shown in Figure 5, where recall R and precision P were calculated by: R = # of detected antecedents correctly # of anaphoric zero-pronouns , P = # of detected antecedents correctly # of zero-pronouns classified as anaphoric. The curves indicate the upperbound of the performance of these models; in practical settings, the parameters have to be trained beforehand. Figure 5 shows that BM STR (SCM STR) outperforms BM (SCM), which indicates that incorporating syntactic pattern features works remarkably well for intra-sentential zero-anaphora Table 1: Accuracy of antecedent identification. BM BM STR SCM SCM STR 48.0% 63.5% 65.1% 70.5% (478/995) (632/995) (648/995) (701/995) resolution. Futhermore, SCM STR is significantly better than BM STR. This result supports that the former has an advantage of learning non-anaphoric zero-pronouns (181 instances) as negative training instances in intra-sentential anaphoricity determination, which enables it to reject non-anaphoric zero-pronouns more accurately than the others. 5.3 Discussion Our error analysis reveals that a majority of errors can be attributed to the current way of handling quoted phrases and sentences. Figure 6 shows the difference in resolution accuracy between zero-pronouns appearing in a quotation 630 0 0.2 0.4 0.6 0.8 1 0 0.1 0.2 0.3 0.4 0.5 0.6 0.7 0.8 precision recall BM BM_STR SCM SCM_STR BM BM_STR SCM SCM_STR Figure 5: Recall-precision curves of intrasentential zero-anaphora resolution. 0 0.2 0.4 0.6 0.8 1 0 0.1 0.2 0.3 0.4 0.5 0.6 0.7 0.8 precision recall SCM_STR IN_Q OUT_Q SCM_STR IN_Q OUT_Q Figure 6: Recall-precision curves of resolving inquote and out-quote zero-pronouns. 
(262 zero-pronouns) and the rest (733 zeropronouns), where “IN Q” denotes the former (inquote zero-pronouns) and “OUT Q” the latter. The accuracy on the IN Q problems is considerably lower than that on the OUT Q cases, which indicates that we should deal with in-quote cases with a separate model so that it can take into account the nested structure of discourse segments introduced by quotations. 5.4 Impact on overall zero-anaphora resolution We next evaluated the effects of introducing the proposed model on overall zero-anaphora resolution including inter-sentential cases. As a baseline model, we implemented the original SCM, designed to resolve intra-sentential zeroanaphora and inter-sentential zero-anaphora simultaneously with no syntactic pattern features. Here, we adopted Support Vector Machines (Vapnik, 1998) to train the classifier on the baseline 0 0.2 0.4 0.6 0.8 1 0 0.05 0.1 0.15 0.2 0.25 0.3 0.35 0.4 0.45 0.5 precision recall SCM SCM_STR θintra=0.022 0.013 0.009 0.005 -0.006 SCM SCM_STR Figure 7: Recall-precision curves of overall zeroanaphora resolution. 0 0.05 0.1 0.15 0.2 0.25 0.3 -0.05 -0.04 -0.03 -0.02 -0.01 0 0.01 0.02 0.03 0.04 0.05 AUC threshold θintra SCM SCM_STR SCM SCM_STR Figure 8: AUC curves plotted by altering θintra. model and the inter-sentential zero-anaphora resolution in the SCM using structural information. For the proposed model, we plotted several recall-precision curves by selecting different value for threshold parameters θintra and θinter. The results are shown in Figure 7, which indicates that the proposed model significantly outperforms the original SCM if θintra is appropriately chosen. We then investigated the feasibility of parameter selection for θintra by plotting the AUC values for different θintra values. Here, each AUC value is the area under a recall-precision curve. The results are shown in Figure 8. Since the original SCM does not use θintra, the AUC value of it is constant, depicted by the SCM. As shown in the Figure 8, the AUC-value curve of the proposed model is not peaky, which indicates the selection of parameter θintra is not difficult. 631 6 Conclusion In intra-sentential zero-anaphora resolution, syntactic patterns of the appearance of zero-pronouns and their antecedents are useful clues. Taking Japanese as a target language, we have empirically demonstrated that incorporating rich syntactic pattern features in a state-of-the-art learning-based anaphora resolution model dramatically improves the accuracy of intra-sentential zero-anaphora, which consequently improves the overall performance of zero-anaphora resolution. In our next step, we are going to address the issue of how to find zero-pronouns, which requires us to design a broader framework that allows zeroanaphora resolution to interact with predicateargument structure analysis. Another important issue is how to find a globally optimal solution to the set of zero-anaphora resolution problems in a given discourse, which leads us to explore methods as discussed by McCallum and Wellner (2003). References M. Asahara and Y. Matsumoto, 2003. IPADIC User Manual. Nara Institute of Science and Technology, Japan. B. Baldwin. 1995. CogNIAC: A Discourse Processing Engine. Ph.D. thesis, Department of Computer and Information Sciences, University of Pennsylvania. X. Carreras and L. Marquez. 2005. Introduction to the conll2005 shared task: Semantic role labeling. In Proceedings of the Ninth CoNll, pages 152–164. M. Collins and N.l Duffy. 2001. Convolution kernels for natural language. 
In Proceedings of the NIPS, pages 625– 632. A. Fujita, K. Inui, and Y. Matsumoto. 2004. Detection of incorrect case assignments in automatically generated paraphrases of japanese sentences. In Proceeding of the first IJCNLP, pages 14–21. D. Gildea and D. Jurafsky. 2002. Automatic labeling of semantic roles. In Computational Linguistics, pages 245– 288. B. J. Grosz, A. K. Joshi, and S. Weinstein. 1995. Centering: A framework for modeling the local coherence of discourse. Computational Linguistics, 21(2):203–226. R. Iida, K. Inui, H. Takamura, and Y. Matsumoto. 2003. Incorporating contextual cues in trainable models for coreference resolution. In Proceedings of the 10th EACL Workshop on The Computational Treatment of Anaphora, pages 23–30. R. Iida, K. Inui, and Y. Matsumoto. 2005. Anaphora resolution by antecedent identification followed by anaphoricity determination. ACM Transactions on Asian Language Information Processing (TALIP), 4:417–434. S. Ikehara, M. Miyazaki, S. Shirai A. Yokoo, H. Nakaiwa, K. Ogura, Y. Ooyama, and Y. Hayashi. 1997. Nihongo Goi Taikei (in Japanese). Iwanami Shoten. Japan Electronic Dictionary Research Institute, Ltd. Japan, 1995. EDR Electronic Dictionary Technical Guide. M. Kameyama. 1986. A property-sharing constraint in centering. In Proceedings of the 24th ACL, pages 200–206. T. Kudo and Y. Matsumoto. 2004. A boosting algorithm for classification of semi-structured text. In Proceedings of the 2004 EMNLP, pages 301–308. S. Lappin and H. J. Leass. 1994. An algorithm for pronominal anaphora resolution. Computational Linguistics, 20(4):535–561. A. McCallum and B. Wellner. 2003. Object consolidation by graph partitioning with a conditionally trained distance metric. In Proceedings of the KDD-2003 Workshop on Data Cleaning, Record Linkage, and Object Consolidation, pages 19–24. J. F. McCarthy and W. G. Lehnert. 1995. Using decision trees for coreference resolution. In Proceedings of the 14th IJCAI, pages 1050–1055. R. Mitkov. 1997. Factors in anaphora resolution: they are not the only things that matter. a case study based on two different approaches. In Proceedings of the ACL’97/EACL’97 Workshop on Operational Factors in Practical, Robust Anaphora Resolution. H. Nakaiwa and S. Shirai. 1996. Anaphora resolution of japanese zero pronouns with deictic reference. In Proceedings of the 16th COLING, pages 812–817. V. Ng. 2004. Learning noun phrase anaphoricity to improve coreference resolution: Issues in representation and optimization. In Proceedings of the 42nd ACL, pages 152– 159. V. Ng and C. Cardie. 2002a. Improving machine learning approaches to coreference resolution. In Proceedings of the 40th ACL, pages 104–111. M. Okumura and K. Tamura. 1996. Zero pronoun resolution in japanese discourse based on centering theory. In Proceedings of the 16th COLING, pages 871–876. M. Poesio, O. Uryupina, R. Vieira, M. Alexandrov-Kabadjov, and R. Goulart. 2004. Discourse-new detectors for definite description resolution: A survey and a preliminary proposal. In Proceedings of the 42nd ACL Workshop on Reference Resolution and its Applications, pages 47–54. K. Seki, A. Fujii, and T. Ishikawa. 2002. A probabilistic method for analyzing japanese anaphora integrating zero pronoun detection and resolution. In Proceedings of the 19th COLING, pages 911–917. W. M. Soon, H. T. Ng, and D. C. Y. Lim. 2001. A machine learning approach to coreference resolution of noun phrases. Computational Linguistics, 27(4):521–544. M. Strube and C. M¨uller. 2003. 
A machine learning approach to pronoun resolution in spoken dialogue. In Proceedings of the 41st ACL, pages 168–175. J. Suzuki, T. Hirao, Y. Sasaki, and E. Maeda. 2003. Hierarchical directed acyclic graph kernel: Methods for structured natural language data. In Proceeding of the 41st ACL, pages 32–39. V. N. Vapnik. 1998. Statistical Learning Theory. Adaptive and Learning Systems for Signal Processing Communications, and control. John Wiley & Sons. X. Yang, G. Zhou, J. Su, and C. L. Tan. 2003. Coreference resolution using competition learning approach. In Proceedings of the 41st ACL, pages 176–183. 632
Proceedings of the 21st International Conference on Computational Linguistics and 44th Annual Meeting of the ACL, pages 57–64, Sydney, July 2006. c⃝2006 Association for Computational Linguistics Acceptability Prediction by Means of Grammaticality Quantification Philippe Blache, Barbara Hemforth & St´ephane Rauzy Laboratoire Parole & Langage CNRS - Universit´e de Provence 29 Avenue Robert Schuman 13621 Aix-en-Provence, France {blache,hemforth,rauzy}@lpl.univ-aix.fr Abstract We propose in this paper a method for quantifying sentence grammaticality. The approach based on Property Grammars, a constraint-based syntactic formalism, makes it possible to evaluate a grammaticality index for any kind of sentence, including ill-formed ones. We compare on a sample of sentences the grammaticality indices obtained from PG formalism and the acceptability judgements measured by means of a psycholinguistic analysis. The results show that the derived grammaticality index is a fairly good tracer of acceptability scores. 1 Introduction Syntactic formalisms make it possible to describe precisely the question of grammaticality. When a syntactic structure can be associated to a sentence, according to a given grammar, we can decide whether or not the sentence is grammatical. In this conception, a language (be it natural or not) is produced (or generated) by a grammar by means of a specific mechanism, for example derivation. However, when no structure can be built, nothing can be said about the input to be parsed except, eventually, the origin of the failure. This is a problem when dealing with non canonical inputs such as spoken language, e-mails, non-native speaker productions, etc. From this perspective, we need robust approaches that are at the same time capable of describing precisely the form of the input, the source of the problem and to continue the parse. Such capabilities render it possible to arrive at a precise evaluation of the grammaticality of the input. In other words, instead of deciding on the grammaticality of the input, we can give an indication of its grammaticality, quantified on the basis of the description of the properties of the input. This paper addresses the problem of ranking the grammaticality of different sentences. This question is of central importance for the understanding of language processing, both from an automatic and from a cognitive perspective. As for NLP, ranking grammaticality makes it possible to control dynamically the parsing process (in choosing the most adequate structures) or to find the best structure among a set of solutions (in case of nondeterministic approaches). Likewise the description of cognitive processes involved in language processing by human has to explain how things work when faced with unexpected or non canonical material. In this case too, we have to explain why some productions are more acceptable and easier to process than others. The question of ranking grammaticality has been addressed from time to time in linguistics, without being a central concern. Chomsky, for example, mentioned this problem quite regularly (see for example (Chomsky75)). However he rephrases it in terms of “degrees of ’belongingness’ to the language”, a somewhat fuzzy notion both formally and linguistically. More recently, several approaches have been proposed illustrating the interest of describing these mechanisms in terms of constraint violations. 
The idea consists in associating weights to syntactic constraints and to evaluate, either during or after the parse, the weight of violated constraints. This approach is at the basis of Linear Optimality Theory (see (Keller00), and (Sorace05) for a more general perspective) in which grammaticality is judged on the basis of the total weights of violated constraints. It is then possible to rank different candidate struc57 tures. A similar idea is proposed in the framework of Constraint Dependency Grammar (see (Menzel98), (Schr¨oder02)). In this case too, acceptability is function of the violated constraints weights. However, constraint violation cannot in itself constitute a measure of grammaticality without taking into account other parameters as well. The type and the number of constraints that are satisfied are of central importance in acceptability judgment: a construction violating 1 constraint and satisfying 15 of them is more acceptable than one violating the same constraint but satisfying only 5 others. In the same way, other informations such as the position of the violation in the structure (whether it occurs in a deeply embedded constituent or higher one in the structure) plays an important role as well. In this paper, we propose an approach overcoming such limitations. It takes advantage of a fully constraint-based syntactic formalism (called Property Grammars, cf. (Blache05b)) that offers the possibility of calculating a grammaticality index, taking into account automatically derived parameters as well as empirically determined weights. This index is evaluated automatically and we present a psycholinguistic study showing how the parser predictions converge with acceptability judgments. 2 Constraint-based parsing Constraints are generally used in linguistics as a control process, verifying that a syntactic structure (e.g. a tree) verifies some well-formedness conditions. They can however play a more general role, making it possible to express syntactic information without using other mechanism (such as a generation function). Property Grammars (noted hereafter PG) are such a fully constraint-based formalism. In this approach, constraints stipulate different kinds of relation between categories such as linear precedence, imperative co-occurrence, dependency, repetition, etc. Each of these syntactic relations corresponds to a type of constraint (also called property): • Linear precedence: Det ≺N (a determiner precedes the noun) • Dependency: AP ; N (an adjectival phrase depends on the noun) • Requirement: V[inf] ⇒to (an infinitive comes with to) • Exclusion: seems ⇎ThatClause[subj] (the verb seems cannot have That clause subjects) • Uniqueness : UniqNP {Det} (the determiner is unique in a NP) • Obligation : ObligNP {N, Pro} (a pronoun or a noun is mandatory in a NP) • Constituency : ConstNP {Det, AP, N, Pro} (set of possible constituents of NP) In PG, each category of the grammar is described with a set of properties. A grammar is then made of a set of properties. Parsing an input consists in verifying for each category of description the set of corresponding properties in the grammar. More precisely, the idea consists in verifying, for each subset of constituents, the properties for which they are relevant (i.e. the constraints that can be evaluated). Some of these properties are satisfied, some others possibly violated. The result of a parse, for a given category, is the set of its relevant properties together with their evaluation. 
This result is called characterization and is formed by the subset of the satisfied properties, noted P +, and the set of the violated ones, noted P −. For example, the characterizations associated to the NPs “the book” and “book the” are respectively of the form: P +={Det ≺N; Det ; N; N ⇎Pro; Uniq(Det), Oblig(N), etc.}, P −=∅ P +={Det ; N; N ⇎Pro; Uniq(Det), Oblig(N), etc.}, P −={Det ≺N} This approach allows to characterize any kind of syntactic object. In PG, following the proposal made in Construction Grammar (see (Fillmore98), (Kay99)), all such objects are called constructions. They correspond to a phrase (NP, PP, etc.) as well as a syntactic turn (cleft, whquestions, etc.). All these objects are described by means of a set of properties (see (Blache05b)). In terms of parsing, the mechanism consists in exhibiting the potential constituents of a given construction. This stage corresponds, in constraint solving techniques, to the search of an assignment satisfying the constraint system. The particularity in PG comes from constraint relaxation. Here, the goal is not to find the assignment satisfying the constraint system, but the best assignment (i.e. the one satisfying as much as possible the system). In this way, the PG approach permits to deal with more or less grammatical sentences. Provided that 58 some control mechanisms are added to the process, PG parsing can be robust and efficient (see (Blache06)) and parse different material, including spoken language corpora. Using a constraint-based approach such as the one proposed here offers several advantages. First, constraint relaxation techniques make it possible to process any kind of input. When parsing non canonical sentences, the system identifies precisely, for each constituent, the satisfied constraints as well as those which are violated. It furnishes the possibility of parsing any kind of input, which is a pre-requisite for identifying a graded scale of grammaticality. The second important interest of constraints lies in the fact that syntactic information is represented in a nonholistic manner or, in other words, in a decentralized way. This characteristic allows to evaluate precisely the syntactic description associated with the input. As shown above, such a description is made of sets of satisfied and violated constraints. The idea is to take advantage of such a representation for proposing a quantitative evaluation of these descriptions, elaborated from different indicators such as the number of satisfied or violated constraints or the number of evaluated constraints. The hypothesis, in the perspective of a gradience account, is to exhibit a relation between a quantitative evaluation and the level of grammaticality: the higher the evaluation value, the more grammatical the construction. The value is then an indication of the quality of the input, according to a given grammar. In the next section we propose a method for computing this value. 3 Characterization evaluation The first idea that comes to mind when trying to quantify the quality of a characterization is to calculate the ratio of satisfied properties with respect to the total set of evaluated properties. 
This information is computed as follows: Let C a construction defined in the grammar by means of a set of properties SC, let AC an assignment for the construction C, • P + = set of satisfied properties for AC • P −= set of violated properties for AC • N+ : number of satisfied properties N+ = card(P +) • N−: number of violated properties N−= card(P −) • Satisfaction ratio (SR): the number of satisfied properties divided by the number of evaluated properties SR = N+ E The SR value varies between 0 and 1, the two extreme values indicating that no properties are satisfied (SR=0) or none of them are violated (SR=1). However, SR only relies on the evaluated properties. It is also necessary to indicate whether a characterization uses a small or a large subpart of the properties describing the construction in the grammar. For example, the VP in our grammar is described by means of 25 constraints whereas the PP only uses 7 of them. Let’s imagine the case where 7 constraints can be evaluated for both constructions, with an equal SR. However, the two constructions do not have the same quality: one relies on the evaluation of all the possible constraints (in the PP) whereas the other only uses a few of them (in the VP). The following formula takes these differences into account : • E : number of relevant (i.e. evaluated) properties E = N+ + N− • T= number of properties specifying construction C = card(SC) • Completeness coefficient (CC) : the number of evaluated properties divided by the number of properties describing the construction in the grammar CC = E T These purely quantitative aspects have to be contrasted according to the constraint types. Intuitively, some constraints, for a given construction, play a more important role than some others. For example, linear precedence in languages with poor morphology such as English or French may have a greater importance than obligation (i.e. the necessity of realizing the head). To its turn, obligation may be more important than uniqueness (i.e. impossible repetition). In this case, violating a property would have different consequences according to its relative importance. The following examples illustrate this aspect: (1) a. The the man who spoke with me is my brother. b. The who spoke with me man is my brother. In (1a), the determiner is repeated, violating a uniqueness constraint of the first NP, whereas (1c) violates a linearity constraint of the same NP. 59 Clearly, (1a) seems to be more grammatical than (1b) whereas in both cases, only one constraint is violated. This contrast has to be taken into account in the evaluation. Before detailing this aspect, it is important to note that this intuition does not mean that constraints have to be organized into a ranking scheme, as with the Optimality Theory (see (Prince93)). The parsing mechanism remains the same with or without this information and the hierarchization only plays the role of a process control. Identifying a relative importance of the types of constraints comes to associate them with a weight. Note that at this stage, we assign weights to constraint types, not directly to the constraints, differently from other approaches (cf. (Menzel98), (Foth05)). The experiment described in the next section will show that this weighting level seems to be efficient enough. However, in case of necessity, it remains possible to weight directly some constraints into a given construction, overriding thus the default weight assigned to the constraint types. 
The notations presented hereafter are used to describe constraint weighting. Remind that P + and P −indicate the set of satisfied and violated properties of a given construction. • p+ i : property belonging to P + • p− i : property belonging to P − • w(p) : weight of the property of type p • W + : sum of the satisfied properties weights W + = N+ X i=1 w(p+ i ) • W −: sum of the violated properties weights W −= N− X i=1 w(p− i ) One indication of the relative importance of the constraints involved in the characterization of a construction is given by the following formula: • QI: the quality index of a construction QI = W + −W − W + + W − The QI index varies then between -1 and 1. A negative value indicates that the set of violated constraints has a greater importance than the set of satisfied one. This does not mean that more constraints are violated than satisfied, but indicates the importance of the violated ones. We now have three different indicators that can be used in the evaluation of the characterization: the satisfaction ratio (noted SR) indicating the ratio of satisfied constraints, the completeness coefficient (noted CC) specifying the ratio of evaluated constraints, and the quality index (noted QI) associated to the quality of the characterization according to the respective degree of importance of evaluated constraints. These three indices are used to form a global precision index (noted PI). These three indicators do not have the same impact in the evaluation of the characterization, they are then balanced with coefficients in the normalized formula: • PI = (k×QI)+(l×SR)+(m×CC) 3 As such, PI constitutes an evaluation of the characterization for a given construction. However, it is necessary to take into account the “quality” of the constituents of the construction as well. A construction can satisfy all the constraints describing it, but can be made of embedded constituents more or less well formed. The overall indication of the quality of a construction has then to integrate in its evaluation the quality of each of its constituents. This evaluation depends finally on the presence or not of embedded constructions. In the case of a construction made of lexical constituents, no embedded construction is present and the final evaluation is the precision index PI as described above. We will call hereafter the evaluation of the quality of the construction the “grammaticality index” (noted GI). It is calculated as follows: • Let d the number of embedded constructions • If d = 0 then GI = PI, else GI = PI × Pd i=1 GI(Ci) d In this formula, we note GI(Ci) the grammaticality index of the construction Ci. The general formula for a construction C is then a function of its precision index and of the sum of the grammaticality indices of its embedded constituents. This 60 formula implements the propagation of the quality of each constituent. This means that the grammaticality index of a construction can be lowered when its constituents violate some properties. Reciprocally, this also means that violating a property at an embedded level can be partially compensated at the upper levels (provided they have a good grammaticality index). 4 Grammaticality index from PG We describe in the remainder of the paper predictions of the model as well as the results of a psycholinguistic evaluation of these predictions. 
The idea is to evaluate for a given set of sentences on the one hand the grammaticality index (done automatically), on the basis of a PG grammar, and on the other hand the acceptability judgment given by a set of subjects. This experiment has been done for French, a presentation of the data and the experiment itself will be given in the next section. We present in this section the evaluation of grammaticality index. Before describing the calculation of the different indicators, we have to specify the constraints weights and the balancing coefficients used in PI. These values are language-dependent, they are chosen intuitively and partly based on earlier analysis, this choice being evaluated by the experiment as described in the next section. In the remainder, the following values are used: Constraint type Weight Exclusion, Uniqueness, Requirement 2 Obligation 3 Linearity, Constituency 5 Concerning the balancing coefficients, we give a greater importance to the quality index (coefficient k=2), which seems to have important consequences on the acceptability, as shown in the previous section. The two other coefficients are significatively less important, the satisfaction ratio being at the middle position (coefficient l=1) and the completeness at the lowest (coefficient m=0,5). Let’s start with a first example, illustrating the process in the case of a sentence satisfying all constraints. (2) Marie a emprunt´e un tr`es long chemin pour le retour. Mary took a very long way for the return. The first NP contains one lexical constituent, Mary. Three constraints, among the 14 describing the NP, are evaluated and all satisfied: Oblig(N), stipulating that the head is realized, Const(N), indicating the category N as a possible constituent, and Excl(N, Pro), verifying that N is not realized together with a pronoun. The following values come from this characterization: N+ NE T W+ WQI SR CC PI GI 3 0 3 14 10 0 1 1 0.21 1.04 1.04 We can see that, according to the fact that all evaluated constraints are satisfied, QI and SR equal 1. However, the fact that only 3 constraints among 14 are evaluated lowers down the grammatical index. This last value, insofar as no constituents are embedded, is the same as PI. These results can be compared with another constituent of the same sentence, the VP. This construction also only contains satisfied properties. Its characterization is the following : Char(VP)=Const(Aux, V, NP, PP) ; Oblig(V) ; Uniq(V) ; Uniq(NP) ; Uniq(PP) ; Aux⇒V[part] ; V≺NP ; Aux≺V ; V≺PP. On top of this set of evaluated constraints (9 among the possible 25), the VP includes two embedded constructions : a PP and a NP. A grammaticality index has been calculated for each of them: GI(PP) = 1.24 GI(NP)=1.23. The following table indicates the different values involved in the calculation of the GI. N+ NE T W+ WQI SR CC PI 9 0 9 25 31 0 1 1 0.36 1.06 GI Emb Const GI 1.23 1.31 The final GI of the VP reaches a high value. It benefits on the one hand from its own quality (indicated by PI) and on another hand from that of its embedded constituents. In the end, the final GI obtained at the sentence level is function of its own PI (very good) and the NP and VP GIs, as shown in the table: N+ NE T W+ WQI SR CC PI 5 0 5 9 17 0 1 1 0.56 1.09 GI Emb Const GI 1.17 1.28 Let’s compare now these evaluations with those obtained for sentences with violated constraints, as in the following examples: (3) a. Marie a emprunt´e tr`es long chemin un pour le retour. Mary took very long way a for the return. b. 
Marie a emprunt´e un tr`es chemin pour le retour. Mary took a very way for the return. In (2a), 2 linear constraints are violated: a determiner follows a noun and an AP in “tr`es long chemin un”. Here are the figures calculated for this NP: N+ NE T W+ WQI SR CC PI GI 8 2 10 14 23 10 0.39 0.80 0.71 0.65 0.71 61 The QI indicator is very low, the violated constraints being of heavy weight. The grammaticality index is a little bit higher because a lot of constraints are also satisfied. The NP GI is then propagated to its dominating construction, the VP. This phrase is well formed and also contains a wellformed construction (PP) as sister of the NP. Note that in the following table summarizing the VP indicators, the GI product of the embedded constituents is higher than the GI of the NP. This is due to the well-formed PP constituent. In the end, the GI index of the VP is better than that of the ill-formed NP: N+ NE T W+ WQI SR CC PI 9 0 9 25 31 0 1 1 0.36 1.06 GI Emb Const GI 0.97 1.03 For the same reasons, the higher level construction S also compensates the bad score of the NP. However, in the end, the final GI of the sentence is much lower than that of the corresponding wellformed sentence (see above). N+ NE T W+ WQI SR CC PI 5 0 5 9 17 0 1 1 0.56 1.09 GI Emb Const GI 1.03 1.13 The different figures of the sentence (2b) show that the violation of a unique constraint (in this case the Oblig(Adj) indicating the absence of the head in the AP) can lead to a global lower GI than the violation of two heavy constraints as for (2a). In this case, this is due to the fact that the AP only contains one constituent (a modifier) that does not suffice to compensate the violated constraint. The following table indicates the indices of the different phrases. Note that in this table, each phrase is a constituent of the following (i.e. AP belongs to NP itself belonging to VP, and so on). N+ NE T W+ WQI SR CC PI AP 2 1 3 7 7 3 0.40 0.67 0.43 0.56 NP 10 0 10 14 33 0 1 1 0.71 1.12 VP 9 0 9 25 31 0 1 1 0.36 1.06 S 5 0 5 9 17 0 1 1 0.56 1.09 GI Emb Const GI AP 1 0.56 NP 0.56 0.63 VP 0.93 0.99 S 1.01 1.11 5 Judging acceptability of violations We ran a questionnaire study presenting participants with 60 experimental sentences like (11) to (55) below. 44 native speakers of French completed the questionnaire giving acceptability judgements following the Magnitude Estimation technique. 20 counterbalanced forms of the questionnaire were constructed. Three of the 60 experimental sentences appeared in each version in each form of the questionnaire, and across the 20 forms, each experimental sentence appeared once in each condition. Each sentence was followed by a question concerning its acceptability. These 60 sentences were combined with 36 sentences of various forms varying in complexity (simple main clauses, simple embeddings and doubly nested embeddings) and plausibility (from fully plausible to fairly implausible according to the intuitions of the experimenters). One randomization was made of each form. Procedure: The rating technique used was magnitude estimation (ME, see (Bard96)). Participants were instructed to provide a numeric score that indicates how much better (or worse) the current sentence was compared to a given reference sentence (Example: If the reference sentence was given the reference score of 100, judging a target sentence five times better would result in 500, judging it five times worse in 20). 
Judging the acceptability ratio of a sentence in this way results in a scale which is open-ended on both sides. It has been demonstrated that ME is therefore more sensitive than fixed rating-scales, especially for scores that would approach the ends of such rating scales (cf. (Bard96)). Each questionnaire began with a written instruction where the subject was made familiar with the task based on two examples. After that subjects were presented with a reference sentence for which they had to provide a reference score. All following sentences had to be judged in relation to the reference sentence. Individual judgements were logarithmized (to arrive at a linear scale) and normed (z-standardized) before statistical analyses. Global mean scores are presented figure 1. We tested the reliability of results for different randomly chosen subsets of the materials. Constructions for which the judgements remain highly stable across subsets of sentences are marked by an asterisk (rs > 0.90; p < 0.001). The mean reliability across subsets is rs > 0.65 (p < 0.001). What we can see in these data is that in particular violations within prepositional phrases are not judged in a very stable way. The way they are judged appears to be highly dependent on the preposition used and the syntactic/semantic context. This is actually a very plausible result, given that heads of prepositional phrases are closed class items that are much more predictable in many syntactic and semantic environments than heads of 62 noun phrases and verb phrases. We will therefore base our further analyses mainly on violations within noun phrases, verb phrases, and adjectival phrases. Results including prepositional phrases will be given in parentheses. Since the constraints described above do not make any predictions for semantic violations, we excluded examples 25, 34, 45, and 55 from further analyses. 6 Acceptability versus grammaticality index We compare in this section the results coming from the acceptability measurements described in section 5 and the values of grammaticality indices obtained as proposed section 4. From the sample of 20 sentences presented in figure 1, we have discarded 4 sentences, namely sentence 25, 34, 45 and 55, for which the property violation is of semantic order (see above). We are left with 16 sentences, the reference sentence satisfying all the constraints and 15 sentences violating one of the syntactic constraints. The results are presented figure 2. Acceptability judgment (ordinate) versus grammaticality index (abscissa) is plotted for each sentence. We observe a high coefficient of correlation (ρ = 0.76) between the two distributions, indicating that the grammaticality index derived from PG is a fairly good tracer of the observed acceptability measurements. The main contribution to the grammaticality index comes from the quality index QI (ρ = 0.69) while the satisfaction ratio SR and the completeNo violations 11. Marie a emprunt´e un tr`es long chemin pour le retour 0.465 NP-violations 21. Marie a emprunt´e tr`es long chemin un pour le retour -0.643 * 22. Marie a emprunt´e un tr`es long chemin chemin pour le retour -0.161 * 23. Marie a emprunt´e un tr`es long pour le retour -0.871 * 24. Marie a emprunt´e tr`es long chemin pour le retour -0.028 * 25. Marie a emprunt´e un tr`es heureux chemin pour le retour -0.196 * AP-violations 31. Marie a emprunt´e un long tr`es chemin pour le retour -0.41 * 32. Marie a emprunt´e un tr`es long long chemin pour le retour -0.216 33. 
Marie a emprunt´e un tr`es chemin pour le retour -0.619 34. Marie a emprunt´e un grossi`erement long chemin pour le retour -0.058 * PP-violations 41. Marie a emprunt´e un tr`es long chemin le retour pour -0.581 42. Marie a emprunt´e un tr`es long chemin pour pour le retour -0.078 43. Marie a emprunt´e un tr`es long chemin le retour -0.213 44. Marie a emprunt´e un tr`es long chemin pour -0.385 45. Marie a emprunt´e un tr`es long chemin dans le retour -0.415 VP-violations 51. Marie un tr`es long chemin a emprunt´e pour le retour -0.56 * 52.Marie a emprunt´e emprunt´e un tr`es long chemin pour le retour -0.194 * 53.Marie un tr`es long chemin pour le retour -0.905 * 54. Marie emprunt´e un tr`es long chemin pour le retour -0.322 * 55. Marie a persuad´e un tr`es long chemin pour le retour -0.394 * Figure 1: Acceptability results ness coefficient CC contributions, although significant, are more modest (ρ = 0.18 and ρ = 0.17 respectively). We present in figure 3 the correlation between acceptability judgements and grammaticality indices after the removal of the 4 sentences presenting PP violations. The analysis of the experiment described in section 5 shows indeed that acceptability measurements of the PP-violation sentences is less reliable than for others phrases. We thus expect that removing these data from the sample will strengthen the correlation between the two distributions. The coefficient of correlation of the 12 remaining data jumps to ρ = 0.87, as expected. Figure 2: Correlation between acceptability judgement and grammaticality index Figure 3: Correlation between acceptability judgement and grammaticality index removing PP violations Finally, the adequacy of the PG grammaticality indices to the measurements was investigated by means of resultant analysis. We adapted the parameters of the model in order to arrive at a good fit based on half of the sentences materials (randomly chosen from the full set), with a correlation of ρ = 0.85 (ρ = 0.76 including PPs) between the grammaticality index and acceptability judgements. Surprisingly, we arrived at the best fit with only two different weights: A weight of 2 for Exclusion, Uniqueness, and Requirement, and a weight of 5 for Obligation, Linearity, and Constituency. This result converges with the hard 63 and soft constraint repartition idea as proposed by (Keller00). The fact that the grammaticality index is based on these properties as well as on the number of constraints to be evaluated, the number of constraints to the satisfied, and the goodness of embedded constituents apparently results in a fined grained and highly adequate prediction even with this very basic distinction of constraints. Fixing these parameters, we validated the predictions of the model for the remaining half of the materials. Here we arrived at a highly reliable correlation of ρ = 0.86 (ρ = 0.67 including PPs) between PG grammaticality indices and acceptability judgements. 7 Conclusion The method described in this paper makes it possible to give a quantified indication of sentence grammaticality. This approach is direct and takes advantage of a constraint-based representation of syntactic information, making it possible to represent precisely the syntactic characteristics of an input in terms of satisfied and (if any) violated constraints. 
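Concretely, the index computation of Section 4 can be summarized by the short sketch below. It is a reconstruction from the indicator tables rather than the authors' implementation: it assumes QI = (W+ - W-)/(W+ + W-), SR = N+/T, CC = T/Ntot, PI as the coefficient-weighted mean of the three indicators (k = 2, l = 1, m = 0.5), and GI = PI times the mean GI of the embedded constituents (1 when there are none); under these assumptions it reproduces the values reported in the tables above.

def property_grammar_gi(n_sat, n_viol, n_constraints, w_sat, w_viol,
                        embedded_gis=(), k=2.0, l=1.0, m=0.5):
    # n_sat / n_viol: number of satisfied / violated constraints (N+ / NE)
    # n_constraints:  number of constraints describing the construction
    # w_sat / w_viol: summed weights of satisfied / violated constraints (W+ / W-)
    t = n_sat + n_viol                              # constraints actually evaluated (T)
    qi = (w_sat - w_viol) / (w_sat + w_viol)        # quality index
    sr = n_sat / t                                  # satisfaction ratio
    cc = t / n_constraints                          # completeness coefficient
    pi = (k * qi + l * sr + m * cc) / 3.0           # construction-level index PI
    emb = sum(embedded_gis) / len(embedded_gis) if embedded_gis else 1.0
    return pi * emb                                 # grammaticality index GI

# Well-formed VP of example (2): 9 satisfied constraints out of 25, weights 31/0,
# embedded NP and PP with GI 1.23 and 1.24 -> GI of about 1.31, as in the table.
# property_grammar_gi(9, 0, 25, 31, 0, embedded_gis=(1.23, 1.24))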
The notion of grammaticality index we have proposed here integrates different kind of information: the quality of the description (in terms of well-formedness degree), the density of information (the quantity of constraints describing an element) as well as the structure itself. These three parameters are the basic indicators of the grammaticality index. The relevance of this method has been experimentally shown, and the results described in this paper illustrate the correlation existing between the prediction (automatically calculated) expressed in terms of GI and the acceptability judgment given by subjects. This approach also presents a practical interest: it can be directly implemented into a parser. The next step of our work will be its validation on large corpora. Our parser will associate a grammatical index to each sentence. This information will be validated by means of acceptability judgments acquired on the basis of a sparse sampling strategy. References Bard E., D. Robertson & A. Sorace (1996) “Magnitude Estimation of Linguistic Acceptability”, Language 72:1. Blache P. & J.-P. Prost (2005) “Gradience, Constructions and Constraint Systems”, in H. Christiansen & al. (eds), Constraint Solving and NLP, Lecture Notes in Computer Science, Springer. Blache P. (2005) “Property Grammars: A Fully Constraint-Based Theory”, in H. Christiansen & al. (eds), Constraint Solving and NLP, Lecture Notes in Computer Science, Springer. Blache P. (2006) “A Robust and Efficient Parser for Non-Canonical Inputs”, in proceedings of Robust Methods in Analysis of Natural Language Data, EACL workshop. Chomsky N.. (1975) The Logical Structure of Linguistic Theory, Plenum Press Croft W. & D. Cruse (2003) Cognitive Linguistics, Cambridge University Press. Foth K., M. Daum & W. Menzel (2005) “Parsing Unrestricted German Text with Defeasible Constraints”, in H. Christiansen & al. (eds), Constraint Solving and NLP, Lecture Notes in Computer Science, Springer. Fillmore C. (1998) “Inversion and Contructional Inheritance”, in Lexical and Constructional Aspects of Linguistic Explanation, Stanford University. Kay P. & C. Fillmore (1999) “Grammatical Constructions and Linguistic Generalizations: the what’s x doing y construction”, Language. Keller F. (2000) Gradience in Grammar. Experimental and Computational Aspects of Degrees of Grammaticality, Phd Thesis, University of Edinburgh. Keller F. (2003) “A probabilistic Parser as a Model of Global Processing Difficulty”, in proceedings of ACCSS-03 Menzel W. & I. Schroder (1998) “Decision procedures for dependency parsing using graded constraints”, in S. Kahane & A. Polgu`ere (eds), Proc. ColingACL Workshop on Processing of Dependencybased Grammars. Prince A. & Smolensky P. (1993) Optimality Theory: Constraint Interaction in Generative Grammars, Technical Report RUCCS TR-2, Rutgers Center for Cognitive Science. Sag I., T. Wasow & E. Bender (2003) Syntactic Theory. A Formal Introduction, CSLI. Schr¨oder I. (2002) Natural Language Parsing with Graded Constraints. PhD Thesis, University of Hamburg. Sorace A. & F. Keller (2005) “Gradience in Linguistic Data”, in Lingua, 115. 64
Proceedings of the 21st International Conference on Computational Linguistics and 44th Annual Meeting of the ACL, pages 633–640, Sydney, July 2006. c⃝2006 Association for Computational Linguistics Self-Organizing -gram Model for Automatic Word Spacing Seong-Bae Park Yoon-Shik Tae Se-Young Park Department of Computer Engineering Kyungpook National University Daegu 702-701, Korea sbpark,ystae,sypark @sejong.knu.ac.kr Abstract An automatic word spacing is one of the important tasks in Korean language processing and information retrieval. Since there are a number of confusing cases in word spacing of Korean, there are some mistakes in many texts including news articles. This paper presents a high-accurate method for automatic word spacing based on self-organizing -gram model. This method is basically a variant of -gram model, but achieves high accuracy by automatically adapting context size. In order to find the optimal context size, the proposed method automatically increases the context size when the contextual distribution after increasing it dose not agree with that of the current context. It also decreases the context size when the distribution of reduced context is similar to that of the current context. This approach achieves high accuracy by considering higher dimensional data in case of necessity, and the increased computational cost are compensated by the reduced context size. The experimental results show that the self-organizing structure of -gram model enhances the basic model. 1 Introduction Even though Korean widely uses Chinese characters, the ideograms, it has a word spacing model unlike Chinese and Japanese. The word spacing of Korean, however, is not a simple task, though the basic rule for it is simple. The basic rule asserts that all content words should be spaced. However, there are a number of exceptions due to various postpositions and endings. For instance, it is difficult to distinguish some postpositions from incomplete nouns. Such exceptions induce many mistakes of word spacing even in news articles. The problem of the inaccurate word spacing is that they are fatal in language processing and information retrieval. The incorrect word spacing would result in the incorrect morphological analysis. For instance, let us consider a famous Korean sentence: “          .” The true word spacing for this sentence is “   #  #      .” whose meaning is that my father entered the room. If the sentence is written as “   #  #      .”, it means that my father entered the bag, which is totally different from the original meaning. That is, since the morphological analysis is the first-step in most NLP applications, the sentences with incorrect word spacing must be corrected for their further processing. In addition, the wrong word spacing would result in the incorrect index for terms in information retrieval. Thus, correcting the sentences with incorrect word spacing is a critical task in Korean information processing. One of the most simple and strong models for automatic word spacing is -gram model. In spite of the advantages of the -gram model, its problem should be also considered for achieving high performance. The main problem of the model is that it is usually modeled with fixed window size, . The small value for represents the narrow context in modeling, which results in poor performance in general. However, it is also difficult to increase for better performance due to data sparseness. 
Since the corpus size is physically limited, it is highly possible that many -grams which do not appear in the corpus exist in the real world. 633 The goal of this paper is to provide a new method for processing automatic word spacing with an -gram model. The proposed method automatically adapts the window size . That is, this method begins with a bigram model, and it shrinks to an unigram model when data sparseness occurs. It also grows up to a trigram, fourgram, and so on when it requires more specific information in determining word spacing. In a word, the proposed model organizes the windows size online, and achieves high accuracy by removing both data sparseness and information lack. The rest of the paper is organized as follows. Section 2 surveys the previous work on automatic word spacing and the smoothing methods for gram models. Section 3 describes the general way to automatic word spacing by an -gram model, and Section 4 proposes a self-organizing -gram model to overcome some drawbacks of -gram models. Section 5 presents the experimental results. Finally, Section 6 draws conclusions. 2 Previous Work Many previous work has explored the possibility of automatic word spacing. While most of them reported high accuracy, they can be categorized into two parts in methodology: analytic approach and statistical approach. The analytic approach is based on the results of morphological analysis. Kang used the fundamental morphological analysis techniques (Kang, 2000), and Kim et al. distinguished each word by the morphemic information of postpositions and endings (Kim et al., 1998). The main drawbacks of this approach are that (i) the analytic step is very complex, and (ii) it is expensive to construct and maintain the analytic knowledge. In the other hand, the statistical approach extracts from corpora the probability that a space is put between two syllables. Since this approach can obtain the necessary information automatically, it does require neither the linguistic knowledge on syllable composition nor the costs for knowledge construction and maintenance. In addition, the fact that it does not use a morphological analyzer produces solid results even for unknown words. Many previous studies using corpora are based on bigram information. According to (Kang, 2004), the number of syllables used in modern Korean is about  , which implies that the number of bigrams reaches  . In order to obtain stable statistics for all bigrams, a great large volume of corpora will be required. If higher order -gram is adopted for better accuracy, the volume of corpora required will be increased exponentially. The main drawback of -gram model is that it suffers from data sparseness however large the corpus is. That is, there are many -grams of which frequency is zero. To avoid this problem, many smoothing techniques have been proposed for construction of -gram models (Chen and Goodman, 1996). Most of them belongs to one of two categories. One is to pretend each -gram occurs once more than it actually did (Mitchell, 1996). The other is to interpolate -grams with lower dimensional data (Jelinek and Mercer, 1980; Katz, 1987). However, these methods artificially modify the original distribution of corpus. Thus, the final probabilities used in learning with grams are the ones distorted by a smoothing technique. A maximum entropy model can be considered as another way to avoid zero probability in -gram models (Rosenfeld, 1996). 
Instead of constructing separate models and then interpolate them, it builds a single, combined model to capture all the information provided by various knowledge sources. Even though a maximum entropy approach is simple, general, and strong, it is computationally very expensive. In addition, its performance is mainly dependent on the relevance of knowledge sources, since the prior knowledge on the target problem is very important (Park and Zhang, 2002). Thus, when prior knowledge is not clear and computational cost is an important factor, -gram models are more suitable than a maximum entropy model. Adapting features or contexts has been an important issue in language modeling (Siu and Ostendorf, 2000). In order to incorporate longdistance features into a language model, (Rosenfeld, 1996) adopted triggers, and (Mochihashi and Mastumoto, 2006) used a particle filter. However, these methods are restricted to a specific language model. Instead of long-distance features, some other researchers tried local context extension. For this purpose, (Sch¨utze and Singer, 1994) adopted a variable memory Markov model proposed by (Ron et al., 1996), (Kim et al., 2003) applied selective extension of features to POS tagging, and (Dickinson and Meurers, 2005) expanded context of -gram models to find errors in syntactic anno634 tation. In these methods, only neighbor words or features of the target -grams became candidates to be added into the context. Since they required more information for better performance or detecting errors, only the context extension was considered. 3 Automatic Word Spacing by -gram Model The problem of automatic word spacing can be regarded as a binary classification task. Let a sentence be given as           . If i.i.d. sampling is assumed, the data from this sentence are given as               where   and        . In this representation,  is a contextual representation of a syllable  . If a space should be put after  , then  , the class of , is true. It is false otherwise. Therefore, the automatic word spacing is to estimate a function          . That is, our task is to determine whether a space should be put after a syllable   expressed as  with its context. The probabilistic method is one of the strong and most widely used methods for estimating . That is, for each  ,                where      is rewritten as                   Since    is independent of finding the class of ,    is determined by multiplying      and    . That is,                   In -gram model,  is expressed with neighbor syllables around  . Typically, is taken to be two or three, corresponding to a bigram or trigram respectively.  corresponds to     when  . In the same way, it is       when  . The simple and easy way to estimate      is to use maximum likelihood estimate with a large corpus. For instance, consider the case  . Then, the probability      is represented as        , and is computed by                      (1)               0.7 0.75 0.8 0.85 0.9 0 1e+06 2e+06 3e+06 4e+06 5e+06 6e+06 7e+06 8e+06 Accuracy (%) No. of Training Examples unigram bigram trigram 4-gram 5-gram 6-gram 7-gram 8-gram 9-gram 10-gram Figure 1: The performance of -gram models according to the values of in automatic word spacing. where  is a counting function. Determining the context size, the value of , in -gram models is closely related with the corpus size. The larger is , the larger corpus is required to avoid data sparseness. 
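To make the estimate of Equation (1) concrete, the sketch below trains the count-based model for the bigram case (n = 2) and classifies a context by the argmax of P(x|c)P(c); the data layout and helper names are illustrative assumptions, not the authors' code.

from collections import Counter

def train_bigram_spacer(syllables, space_tags):
    # Maximum-likelihood counts for P(x | c) and P(c), where the context x is
    # the pair (previous syllable, current syllable) and c is the space tag.
    joint, prior = Counter(), Counter()
    for i, tag in enumerate(space_tags):
        prev = syllables[i - 1] if i > 0 else "<s>"
        joint[(prev, syllables[i], tag)] += 1
        prior[tag] += 1
    return joint, prior

def classify(joint, prior, prev, cur):
    # Pick the tag c maximizing P(x | c) * P(c) for the context x = (prev, cur).
    total = sum(prior.values())
    def score(tag):
        likelihood = joint[(prev, cur, tag)] / prior[tag] if prior[tag] else 0.0
        return likelihood * (prior[tag] / total)
    return max(prior, key=score)

How large the context should be, that is, which value of n to fix, is exactly the trade-off discussed next.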
In contrast, though loworder -grams do not suffer from data sparseness severely, they do not reflect the language characteristics well, either. Typically researchers have used  or  , and achieved high performance in many tasks (Bengio et al., 2003). Figure 1 supports that bigram and trigram outperform low-order ( ) and high-order ( ) -grams in automatic word spacing. All the experimental settings for this figure follows those in Section 5. In this figure, bigram model shows the best accuracy and trigram achieves the second best, whereas unigram model results in the worst accuracy. Since the bigram model is best, a selforganizing -gram model explained below starts from bigram. 4 Self-Organizing -gram Model To tackle the problem of fixed window size in gram models, we propose a self-organizing structure for them. 4.1 Expanding -grams When -grams are compared with  -grams, their performance in many tasks is lower than that of  -grams (Charniak, 1993). Simultaneously the computational cost for  -grams is far higher than that for -grams. Thus, it can be justified to use  -grams instead of 635 Function HowLargeExpand() Input: : -grams Output: an integer for expanding size 1. Retrieve  -grams  for . 2. Compute           3. If   EXP Then return 0. 4. return HowLargeExpand() + 1. Figure 2: A function that determines how large a window size should be. grams only when higher performance is expected. In other words,  -grams should be different from -grams. Otherwise, the performance would not be different. Since our task is attempted with a probabilistic method, the difference can be measured with conditional distributions. If the conditional distributions of -grams and  -grams are similar each other, there is no reason to adopt  -grams. Let      be a class-conditional probability by -grams and      that by  grams. Then, the difference     between them is measured by Kullback-Leibler divergence. That is,                which is computed by                (2)     that is larger than a predefined threshold EXP implies that      is different from     . In this case,  -grams is used instead of -grams. Figure 2 depicts an algorithm that determines how large -grams should be used. It recursively finds the optimal expanding window size. For instance, let bigrams ( ) be used at first. When the difference between bigrams and trigrams ( ) is larger than EXP, that between trigrams and fourgrams ( ) is checked again. If it is less than EXP, then this function returns 1 and trigrams are used instead of bigrams. Otherwise, it considers higher -grams again. Function HowSmallShrink() Input: : -grams Output: an integer for shrinking size 1. If  Then return 0. 2. Retrieve  -grams  for . 3. Compute           4. If  SHR Then return 0. 5. return HowSmallShrink() - 1. Figure 3: A function that determines how small a window size should be used. 4.2 Shrinking -grams Shrinking -grams is accomplished in the direction opposite to expanding -grams. After comparing -grams with  -grams,  -grams are used instead of -grams only when they are similar enough. The difference     between -grams and  -grams is, once again, measured by Kullback-Leibler divergence. That is,                If     is smaller than another predefined threshold SHR, then  -grams are used instead of -grams. Figure 3 shows an algorithm which determines how deeply the shrinking is occurred. The main stream of this algorithm is equivalent to that in Figure 2. 
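A minimal sketch of the two routines of Figures 2 and 3 is given below; conditional_dist(x, n), returning the class-conditional distribution under an n-sized context, and the two thresholds are assumed helpers and free parameters, and the direction of the divergence follows one possible reading of Equation (2).

import math

def kl(p, q, eps=1e-12):
    # Kullback-Leibler divergence D(p || q) over the two spacing classes.
    return sum(p[c] * math.log((p[c] + eps) / (q[c] + eps)) for c in p)

def how_large_expand(x, n, conditional_dist, threshold_exp):
    # Figure 2: widen the context while the (n+1)-gram distribution still
    # differs from the n-gram one by more than the expansion threshold.
    if kl(conditional_dist(x, n + 1), conditional_dist(x, n)) <= threshold_exp:
        return 0
    return how_large_expand(x, n + 1, conditional_dist, threshold_exp) + 1

def how_small_shrink(x, n, conditional_dist, threshold_shr):
    # Figure 3: narrow the context while the (n-1)-gram distribution stays
    # close to the n-gram one; a unigram cannot be reduced any further.
    if n <= 1:
        return 0
    if kl(conditional_dist(x, n - 1), conditional_dist(x, n)) >= threshold_shr:
        return 0
    return how_small_shrink(x, n - 1, conditional_dist, threshold_shr) - 1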
It also recursively finds the optimal shrinking window size, but can not be further reduced when the current model is an unigram. The merit of shrinking -grams is that it can construct a model with a lower dimensionality. Since the maximum likelihood estimate is used in calculating probabilities, this helps obtaining stable probabilities. According to the well-known curse of dimensionality, the data density required is reduced exponentially by reducing dimensions. Thus, if the lower dimensional model is not different so much from the higher dimensional one, it is highly possible that the probabilities from lower dimensional space are more stable than those from higher dimensional space. 636 Function ChangingWindowSize() Input: : -grams Output: an integer for changing window size 1. Set exp := HowLargeExpand(). 2. If exp  Then return exp. 3. Set shr := HowSmallShrink(). 4. If shr   Then return shr. 5. return 0. Figure 4: A function that determines the changing window size of -grams. 4.3 Overall Self-Organizing Structure For a given i.i.d. sample , there are three possibilities on changing -grams. First one is not to change -grams. It is obvious when -grams are not changed. This occurs when both      EXP and      SHR are met. This is when the expanding results in too similar distribution to that of the current -grams and the distribution after shrinking is too different from that of the current -grams. The remaining possibilities are then expanding and shrinking. The application order between them can affect the performance of the proposed method. In this paper, an expanding is checked prior to a shrinking as shown in Figure 4. The function ChangingWindowSize first calls HowLargeExpand. The non-zero return value of HowLargeExpand implies that the window size of the current -grams should be enlarged. Otherwise, ChangingWindowSize checks if the window size should be shrinked by calling HowSmallShrink. If HowSmallShrink returns a negative integer, the window size should be shrinked to ( + shr). If both functions return zero, the window size should not be changed. The reason why HowLargeExpand is called prior to HowSmallShrink is that the expanded grams handle more specific data. ( )-grams, in general, help obtaining higher accuracy than grams, since ( )-gram data are more specific than -gram ones. However, it is time-consuming to consider higher-order data, since the number of kinds of data increases. The time increased due to expanding is compensated by shrinking. After shrinking, only lower-oder data are considered, and then processing time for them decreases. 4.4 Sequence Tagging Since natural language sentences are sequential as their nature, the word spacing can be considered as a special POS tagging task (Lee et al., 2002) for which a hidden Markov model is usually adopted. The best sequence of word spacing for the sentence is defined as                                           by where  is a sentence length. If we assume that the syllables are independent of each other,      is given by              which can be computed using Equation (1). In addition, by Markov assumption, the probability of a current tag   conditionally depends on only the previous  tags. That is,               Thus, the best sequence is determined by                       (3) Since this equation follows Markov assumption, the best sequence is found by applying the Viterbi algorithm. 5 Experiments 5.1 Data Set The data set used in this paper is the HANTEC corpora version 2.0 distributed by KISTI1. 
From this corpus, we extracted only the HKIB94 part which consists of 22,000 news articles in 1994 from Hankook Ilbo. The reason why HKIB94 is chosen is that the word spacing of news articles is relatively more accurate than other texts. Even though this data set is composed of totally 12,523,688 Korean syllables, the number of unique syllables is just 1http://www.kisti.re.kr 637 Methods Accuracy (%) baseline 72.19 bigram 88.34 trigram 87.59 self-organizing bigram 91.31 decision tree 88.68 support vector machine 89.10 Table 1: The experimental results of various methods for automatic word spacing. 2,037 after removing all special symbols, digits, and English alphabets. The data set is divided into three parts: training (70%), held-out (20%), and test (10%). The held-out set is used only to estimate EXP and SHR. The number of instances in the training set is 8,766,578, that in the held-out set is 2,504,739, and that in test set is 1,252,371. Among the 1,252,371 test cases, the number of positive instances is 348,278, and that of negative instances is 904,093. Since about 72% of test cases are negative, this is the baseline of the automatic word spacing. 5.2 Experimental Results To evaluate the performance of the proposed method, two well-known machine learning algorithms are compared together. The tested machine learning algorithms are (i) decision tree and (ii) support vector machines. We use C4.5 release 8 (Quinlan, 1993) for decision tree induction and      (Joachims, 1998) for support vector machines. For all experiments with decision trees and support vector machines, the context size is set to two since the bigram shows the best performance in Figure 1. Table 1 gives the experimental results of various methods including machine learning algorithms and self-organizing -gram model. The ‘selforganizing bigram’ in this table is the one proposed in this paper. The normal -grams achieve an accuracy of around 88%, while decision tree and support vector machine produce that of around 89%. The self-organizing -gram model achieves 91.31%. The accuracy improvement by the selforganizing -gram model is about 19% over the baseline, about 3% over the normal -gram model, and 2% over decision trees and support vector machines. In order to organize the context size for -grams Order No. of Errors Expanding then Shrinking 108,831 Shrinking then Expanding 114,343 Table 2: The number of errors caused by the application order of context expanding and shrinking. online, two operations of expanding and shrinking were proposed. Table 2 shows how much the number of errors is affected by their application order. The number of errors made by expanding first is 108,831 while that by shrinking first is 114,343. That is, if shrinking is applied ahead of expanding, 5,512 additional errors are made. Thus, it is clear that expanding should be considered first. The errors by expanding can be explained with two reasons: (i) the expression power of the model and (ii) data sparseness. Since Korean is a partially-free word order language and the omission of words are very frequent, -gram model that captures local information could not express the target task sufficiently. In addition, the classconditional distribution after expanding could be very different from that before expanding due to data sparseness. In such cases, the expanding should not be applied since the distribution after expanding is not trustworthy. 
However, only the difference between two distributions is considered in the proposed method, and the errors could be made by data sparseness. Figure 5 shows that the number of training instances does not matter in computing probabilities of -grams. Even though the accuracy increases slightly, the accuracy difference after 900,000 instances is not significant. It implies that the errors made by the proposed method is not from the lack of training instance but from the lack of its expression power for the target task. This result also complies with Figure 1. 5.3 Effect of Right Context All the experiments above considered left context only. However, Kang reported that the probabilistic model using both left and right context outperforms the one that uses left context only (Kang, 2004). In his work, the word spacing probability        between two adjacent syllables   and   is given as                            (4)           638 0.7 0.75 0.8 0.85 0.9 0.95 1 0 1e+06 2e+06 3e+06 4e+06 5e+06 6e+06 7e+06 8e+06 Accuracy (%) No. of Training Examples Figure 5: The effect of the number of training examples in the self-organizing -gram model. Context Accuracy (%) Left Context Only 91.31 Right Context Only 88.26 Both Contexts 92.54 Table 3: The effect of using both left and right context. where         are computed respectively based on the syllable frequency. In order to reflect the idea of bidirectional context in the proposed model, the model is enhanced by modifying         in Equation (1). That is, the likelihood of         is expanded to be                                         Since the coefficients of Equation (4) were determined arbitrarily (Kang, 2004), they are replaced with parameters   of which values are determined using a held-out data. The change of accuracy by the context is shown in Table 3. When only the right context is used, the accuracy gets 88.26% which is worse than the left context only. That is, the original -gram is a relatively good model. However, when both left and right context are used, the accuracy becomes 92.54%. The accuracy improvement by using additional right context is 1.23%. This results coincide with the previous report (Lee et al., 2002). The  ’s to achieve this accuracy are         , and    . Method Accuracy(%) Normal HMM 92.37 Self-Organizing HMM 94.71 Table 4: The effect of considering a tag sequence. 5.4 Effect of Considering Tag Sequence The state-of-the-art performance on Korean word spacing is to use the hidden Markov model. According to the previous work (Lee et al., 2002), the hidden Markov model shows the best performance when it sees two previous tags and two previous syllables. For the simplicity in the experiments, the value for  in Equation (3) is set to be one. The performance comparison between normal HMM and the proposed method is given in Table 4. The proposed method considers the various number of previous syllables, whereas the normal HMM has the fixed context. Thus, the proposed method in Table 4 is specified as ‘self-organizing HMM.’ The accuracy of the self-organizing HMM is 94.71%, while that of the normal HMM is just 92.37%. Even though the normal HMM considers more previous tags (  ), the accuracy of the self-organizing model is 2.34% higher than that of the normal HMM. Therefore, the proposed method that considers the sequence of word spacing tags achieves higher accuracy than any other methods reported ever. 
6 Conclusions In this paper we have proposed a new method to learn word spacing in Korean by adaptively organizing context size. Our method is based on the simple -gram model, but the context size is changed as needed. When the increased context is much different from the current one, the context size is increased. In the same way, the context is decreased, if the decreased context is not so much different from the current one. The benefits of this method are that it can consider wider context by increasing context size as required, and save the computational cost due to the reduced context. The experiments on HANTEC corpora showed that the proposed method improves the accuracy of the trigram model by 3.72%. Even compared with some well-known machine learning algorithms, it achieved the improvement of 2.63% over decision trees and 2.21% over support vector machines. In addition, we showed two ways for improving the 639 proposed method: considering right context and word spacing sequence. By considering left and right context at the same time, the accuracy is improved by 1.23%, and the consideration of word spacing sequence gives the accuracy improvement of 2.34%. The -gram model is one of the most widely used methods in natural language processing and information retrieval. Especially, it is one of the successful language models, which is a key technique in language and speech processing. Therefore, the proposed method can be applied to not only word spacing but also many other tasks. Even though word spacing is one of the important tasks in Korean information processing, it is just a simple task in many other languages such as English, German, and French. However, due to its generality, the importance of the proposed method yet does hold in such languages. Acknowledgements This work was supported by the Korea Research Foundation Grant funded by the Korean Government (KRF-2005-202-D00465). References Y. Bengio, R. Ducharme, P. Vincent, and C. Jauvin. 2003. A Neural Probabilistic Language Model. Journal of Machine Learning Research, Vol. 3, pp. 1137–1155. E. Charniak. 1993. Statistical Language Learning. MIT Press. S. Chen and J. Goodman. 1996. An Empirical Study of Smoothing Techniques for Language Modeling. In Proceedings of the 34th Annual Meeting of the Association for Computational Linguistics, pp. 310–318. M. Dickinson and W. Meurers. 2005. Detecting Errors in Discontinuous Structural Annotation. In Proceedings of the 43rd Annual Meeting of the Association for Computational Linguistics, pp. 322–329. F. Jelinek and R. Mercer. 1980. Interpolated Estimation of Markov Source Parameters from Sparse Data. In Proceedings of the Workshop on Pattern Recognition in Practice. T. Joachims. 1998. Making Large-Scale SVM Learning Practical. LS8, Universit t Dortmund. S.-S. Kang, 2000. Eojeol-Block Bidirectional Algorithm for Automatic Word Spacing of Hangul Sentences. Journal of KISS, Vol. 27, No. 4, pp. 441– 447. (in Korean) S.-S. Kang. 2004. Improvement of Automatic Word Segmentation of Korean by Simplifying Syllable Bigram. In Proceedings of the 15th Conference on Korean Language and Information Processing, pp. 227–231. (in Korean) S. Katz. 1987. Estimation of Probabilities from Sparse Data for the Language Model Component of a Speech Recognizer. IEEE Transactions on Acoustics, Speech and Signal Processing. Vol. 35, No. 3, pp. 400–401. K.-S. Kim, H.-J. Lee, and S.-J. Lee. 1998. ThreeStage Spacing System for Korean in Sentence with No Word Boundaries. Journal of KISS, Vol. 25, No. 12, pp. 
1838–1844. (in Korean) J.-D. Kim, H.-C. Rim, and J. Tsujii. 2003. SelfOrganizing Markov Models and Their Application to Part-of-Speech Tagging. In Proceedings of the 41st Annual Meeting on Association for Computational Linguistics, pp. 296–302. D.-G. Lee, S.-Z. Lee, H.-C. Rim, and H.-S. Lim, 2002. Automatic Word Spacing Using Hidden Markov Model for Refining Korean Text Corpora. In Proceedings of the 3rd Workshop on Asian Language Resources and International Standardization, pp. 51–57. T. Mitchell. 1997. Machine Learning. McGraw Hill. D. Mochihashi and Y. Matsumoto. 2006. Context as Filtering. Advances in Neural Information Processing Systems 18, pp. 907–914. S.-B. Park and B.-T. Zhang. 2002. A Boosted Maximum Entropy Model for Learning Text Chunking. In Proceedings of the 19th International Conference on Machine Learning, pp. 482–489. R. Quinlan. 1993. C4.5: Program for Machine Learning. Morgan Kaufmann Publishers. D. Ron, Y. Singer, and N. Tishby. 1996. The Power of Amnesia: Learning Probabilistic Automata with Variable Memory Length. Machine Learning, Vol. 25, No. 2, pp. 117–149. R. Rosenfeld. 1996. A Maximum Entropy Approach to Adaptive Statistical Language Modeling. Computer, Speech and Language, Vol. 10, pp. 187– 228. H. Sch¨utze and Y. Singer. 1994. Part-of-Speech Tagging Using a Variable Memory Markov Model. In Proceedings of the 32nd Annual Meeting of the Association for Computational Linguistics, pp. 181– 187. M. Siu and M. Ostendorf. 2000. Variable N-Grams and Extensions for Conversational Speech Language Modeling. IEEE Transactions on Speech and Audio Processing, Vol. 8, No. 1, pp. 63–75. 640
Proceedings of the 21st International Conference on Computational Linguistics and 44th Annual Meeting of the ACL, pages 641–648, Sydney, July 2006. c⃝2006 Association for Computational Linguistics Concept Unification of Terms in Different Languages for IR Qing Li, Sung-Hyon Myaeng Information & Communications University, Korea {liqing,myaeng}@icu.ac.kr Yun Jin Chungnam National University, Korea [email protected] Bo-yeong Kang Seoul National University, Korea [email protected] Abstract Due to the historical and cultural reasons, English phases, especially the proper nouns and new words, frequently appear in Web pages written primarily in Asian languages such as Chinese and Korean. Although these English terms and their equivalences in the Asian languages refer to the same concept, they are erroneously treated as independent index units in traditional Information Retrieval (IR). This paper describes the degree to which the problem arises in IR and suggests a novel technique to solve it. Our method firstly extracts an English phrase from Asian language Web pages, and then unifies the extracted phrase and its equivalence(s) in the language as one index unit. Experimental results show that the high precision of our conceptual unification approach greatly improves the IR performance. 1 Introduction The mixed use of English and local languages presents a classical problem of vocabulary mismatch in monolingual information retrieval (MIR). The problem is significant especially in Asian language because words in the local languages are often mixed with English words. Although English terms and their equivalences in a local language refer to the same concept, they are erroneously treated as independent index units in traditional MIR. Such separation of semantically identical words in different languages may limit retrieval performance. For instance, as shown in Figure 1, there are three kinds of Chinese Web pages containing information related with “Viterbi Algorithm (韦特比算法)”. The first case contains “Viterbi Algorithm” but not its Chinese equivalence “韦特比算法”. The second Figure 1. Three Kinds of Web Pages contains “韦特比算法” but not “Viterbi Algorithm”. The third has both of them. A user would expect that a query with either “Viterbi Algorithm” or “韦特比算法” would retrieve all of these three groups of Chinese Web pages. Otherwise some potentially useful information will be ignored. Furthermore, one English term may have several corresponding terms in a different language. For instance, Korean words “디지탈”, “디지틀”, and “디지털” are found in local Web pages, which all correspond to the English word “digital” but are in different forms because of different phonetic interpretations. Establishing an equivalence class among the three Korean words and the English counterpart is indispensable. By doing so, although the query is “디지탈”, the Web pages containing “디지틀”, “디지털” or “digital” can be all retrieved. The same goes to Chinese terms. For example, two same semantic Chinese terms “维特比” and “韦特比” correspond to one English term “Viterbi”. There should be a semantic equivalence relation between them. Although tracing the original English term from a term in a native language by back transliteration (Jeong et al., 1999) is a good way to build such mapping, it is only applicable to the words that are amenable for transliteration based on the phoneme. It is difficult to expand the method to abbreviations and compound words. 
641 Since English abbreviations frequently appear in Korean and Chinese texts, such as “세계무역기구 (WTO)” in Korean, “世界贸易 组织 (WTO)” in Chinese, it is essential in IR to have a mapping between these English abbreviations and the corresponding words. The same applies to the compound words like “서울대 (Seoul National University)” in Korean, “疯牛病 (mad cow disease)” in Chinese. Realizing the limitation of the transliteration, we present a way to extract the key English phrases in local Web pages and conceptually unify them with their semantically identical terms in the local language. 2 Concept Unification The essence of the concept unification of terms in different languages is similar to that of the query translation for cross-language information retrieval (CLIR) which has been widely explored (Cheng et al., 2004; Cao and Li, 2002; Fung et al., 1998; Lee, 2004; Nagata et al., 2001; Rapp, 1999; Zhang et al., 2005; Zhang and Vine, 2004). For concept unification in index, firstly key English phrases should be extracted from local Web pages. After translating them into the local language, the English phrase and their translation(s) are treated as the same index units for IR. Different from previous work on query term translation that aims at finding relevant terms in another language for the target term in source language, conceptual unification requires a high translation precision. Although the fuzzy Chinese translations (e.g. “ 病毒(virus), 陈盈豪 (designer’s name), 电脑病毒 (computer virus)) of English term “CIH” can enhance the CLIR performance by the “query expansion” gain (Cheng et al., 2004), it does not work in the conceptual unification of terms in different languages for IR. While there are lots of additional sources to be utilized for phrase translation (e.g., anchor text, parallel or comparable corpus), we resort to the mixed language Web pages which are the local Web pages with some English words, because they are easily obtainable and frequently selfrefresh. Observing the fact that English words sometimes appear together with their equivalence in a local language in Web texts as shown in Figure 1, it is possible to mine the mixed language searchresult pages obtained from Web search engines and extract proper translations for these English words that are treated as queries. Due to the language nature of Chinese and Korean, we integrate the phoneme and semanteme instead of statistical information alone to pick out the right translation from the search-result pages. 3 Key Phrase Extraction Since our intention is to unify the semantically identical words in different languages and index them together, the primary task is to decide what kinds of key English phrases in local Web pages are necessary to be conceptually unified. In (Jeong et al., 1999), it extracts the Korean foreign words for concept unification based on statistical information. Some of the English equivalences of these Korean foreign words, however, may not exist in the Korean Web pages. Therefore, it is meaningless to do the crosslanguage concept unification for these words. The English equivalence would not benefit any retrieval performance since no local Web pages contain it, even if the search system builds a semantic class among both local language and English for these words. In addition, the method for detecting Korean foreign words may bring some noise. The Korean terms detected as foreign words sometimes are not meaningful. 
Therefore, we do it the other way around by choosing the English phrases from the local Web pages based on a certain selection criteria. Instead of extracting all the English phrases in the local Web pages, we only select the English phrases that occurred within the special marks including quotation marks and parenthesis. Because English phrases within these markers reveal their significance in information searching to some extent. In addition, if the phrase starts with some stemming words (e.g., for, as) or includes some special sign, it is excluded as the phrases to be translated. 4 Translation of English Phrases In order to translate the English phrases extracted, we query the search engine with English phrases to retrieve the local Web pages containing them. For each document returned, only the title and the query-biased summary are kept for further analysis. We dig out the translation(s) for the English phrases from these collected documents. 4.1 Extraction of Candidates for Selection After querying the search engine with the English phrase, we can get the snippets (title and summary) of Web texts in the returned searchresult pages as shown in Figure 1. The next step then is to extract translation candidates within a window of a limited size, which includes the 642 English phrase, in the snippets of Web texts in the returned search-result pages. Because of the agglutinative nature of the Chinese and Korean languages, we should group the words in the local language into proper units as translation candidates, instead of treating each individual word as candidates. There are two typical ways: one is to group the words based on their co-occurrence information in the corpus (Cheng et al., 2004), and the other is to employ all sequential combinations of the words as the candidates (Zhang and Vine, 2004). Although the first reduces the number of candidates, it risks losing the right combination of words as candidates. We adopt the second in our approach, so that, return to the aforementioned example in Figure 1, if there are three Chinese characters (韦特比) within the predefined window, the translation candidates for English phrases “Viterbi” are “韦”,“特”, “比”, “韦特 ”, “特比”, and “韦特比”. The number of candidates in the second method, however, is greatly increased by enlarging the window size k . Realizing that the number of words, n , available in the window size, k , is generally larger than the predefined maximum length of candidate, m , it is unreasonable to use all adjacent sequential combinations of available words within the window size k . Therefore, we tune the method as follows: 1. If n m ≤ , all adjacent sequential combinations of words within the window are treated as candidates 2. If n m > , only adjacent sequential combinations of which the word number is less than m are regarded as candidates. For example, if we set n to 4 and m to 2, the window “ 1 2 3 4 w w w w ” consists of four words. Therefore, only “ 1 2 w w ”, “ 2 3 w w ”, “ 3 4 w w ”, “ 1 w ”, “ 2 w ”, “ 3 w ”, “ 4 w ” are employed as the candidates for final translation selection. Based on our experiments, this tuning method achieves the same performance while reducing the candidate size greatly. 4.2 Selection of candidates The final step is to select the proper candidate(s) as the translation(s) of the key English phrase. We present a method that considers the statistical, phonetic and semantic features of the English candidates for selection. 
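Before the selection step, the candidate-generation rule of Section 4.1 can be sketched as follows; the segmentation of the local-language window into words is assumed to have been done already, and concatenation without separators merely mimics Chinese and Korean writing.

def candidate_combinations(window_words, m):
    # All adjacent sequential combinations of the words in the window; when the
    # window holds more than m words, only combinations of at most m words are kept.
    n = len(window_words)
    longest = n if n <= m else m
    candidates = []
    for length in range(1, longest + 1):
        for start in range(n - length + 1):
            candidates.append("".join(window_words[start:start + length]))
    return candidates

# candidate_combinations(["w1", "w2", "w3", "w4"], 2)
# -> ["w1", "w2", "w3", "w4", "w1w2", "w2w3", "w3w4"], as in the example above.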
Statistical information such as co-occurrence, Chi-square, mutual information between the English term and candidates helps distinguish the right translation(s). Using Cheng’s Chi-square method (Cheng et al., 2004), the probability to find the right translation for English specific term is around 30% in the top-1 case and 70% in the top-5 case. Since our goal is to find the corresponding counterpart(s) of the English phrase to treat them as one index unit in IR, the accuracy level is not satisfactory. Since it seems difficult to improve the precision solely through variant statistical methods, we also consider semantic and phonetic information of candidates besides the statistical information. For example, given the English Key phrase “Attack of the clones”, the right Korean translation “클론의습격” is far away from the top-10 selected by Chi-square method (Cheng et al., 2004). However, based on the semantic match of “습격” and “Attack”, and the phonetic match of “클론” and “clones”, we can safely infer they are the right translation. The same rule applies to the Chinese translation “克 隆人的进攻”, where “克隆人” is phonetically match for “clones” and “进攻” semantically corresponds to “attack”. In selection step, we first remove most of the noise candidates based on the statistical method and re-rank the candidates based on the semantic and phonetic similarity. 4.3 Statistical model There are several statistical models to rank the candidates. Nagata (2001) and Huang (2005) use the frequency of co-occurrence and the textual distance, the number of words between the Key phrase and candidates in texts to rank the candidates, respectively. Although the details of the methods are quite different, both of them share the same assumption that the higher cooccurrence between candidates and the Key phrase, the more possible they are the right translations for each other. In addition, they observed that most of the right translations for the Key phrase are close to it in the text, especially, right after or before the key phrase (e.g. “ … 연방수사국(FBI)이…”). Zhang (2004) suggested a statistical model based on the frequency of co-occurrence and the length of the candidates. In the model, since the distance between the key phrase and a candidate is not considered, the right translation located far away from the key phrase also has a chance to be selected. We observe, however, that such case is very rare in our study, and most of right translations are located within 5~8 words. The distance information is a valuable factor to be considered. 643 In our statistical model, we consider the frequency, length and location of candidates together. The intuition is that if the candidate is the right translation, it tends to co-occur with the key phrase frequently; its location tends to be close to the key phrase; and the longer the candidates’ length, the higher the chance to be the right translation. The formula to calculate the ranking score for a candidate is as follows: 1 ( ) ( , ) ( , ) (1 ) max max k i k i FL i len Freq len len c d q c w q c α α − = × + − × ∑ where ( , ) k i d q c is the word distance between the English phrase q and the candidate ic in the kth occurrence of candidate in the search-result pages. If q is adjacent to ic , the word distance is one. If there is one word between them, it is counted as two and so forth. α is the coefficient constant, and maxFreq len − is the max reciprocal of ( , ) k i d q c among all the candidates. ( ) i len c is the number of characters in the candidate ic . 
4.4 Phonetic and semantic model Phonetic and semantic match: There has been some related work on extracting term translation based on the transliteration model (Kang and Choi, 2002; Kang and Kim, 2000). Different from transliteration that attempts to generate English transliteration given a foreign word in local language, our approach is a kind a match problem since we already have the candidates and aim at selecting the right candidates as the final translation(s) for the English key phrase. While the transliteration method is partially successful, it suffers form the problem that transliteration rules are not applied consistently. The English key phrase for which we are looking for the translation sometimes contains several words that may appear in a dictionary as an independent unit. Therefore, it can only be partially matched based on the phonetic similarity, and the rest part may be matched by the semantic similarity in such situation. Returning to the above example, “clone” is matched with “클론” by phonetic similarity. “of” and “attack” are matched with “의” and “습격” respectively by semantic similarity. The objective is to find a set of mappings between the English word(s) in the key phrase and the local language word(s) in candidates, which maximize the sum of the semantic and phonetic mapping weights. We call the sum as SSP (Score of semanteme and phoneme). The higher SSP value is, the higher the probability of the candidate to be the right translation. The solution for a maximization problem can be found using an exhaustive search method. However, the complexity is very high in practice for a large number of pairs to be processed. As shown in Figure 2, the problem can be represented as a bipartite weighted graph matching problem. Let the English key phrase, E, be represented as a sequence of tokens 1,..., m ew ew < > , and the candidate in local language, C, be represented as a sequence of tokens 1,..., n cw cw < > . Each English and candidate token is represented as a graph vertex. An edge ( , ) i j ew cw is formed with the weight ( , ) i j ew cw ω calculated as the average of normalized semantic and phonetic values, whose calculation details are explained below. In order to balance the number of vertices on both sides, we add the virtual vertex (vertices) with zero weight on the side with less number of vertices. The SSP is calculated: n ( ) i=1 SSP=argmax ( , ) i i kw ewπ ω ∑ where π is a permutation of {1, 2, 3, …, n}. It can be solved by the Kuhn-Munkres algorithm (also known as Hungarian algorithm) with polynomial time complexity (Munkres, 1957). Figure 2. Matching based on the semanteme and phoneme Phonetic & Semantic Weights: If two languages have a close linguistic relationship such as English and French, cognate matching (Davis, 1997) is typically employed to translate the untranslatable terms. Interestingly, Buckley et al., (2000) points out that “English query words are treated as potentially misspelled French words” and attempts to treat English words as variations of French words according to lexicographical rules. However, when two languages are very distinct, e.g., English–Korean, English–Chinese, transliteration from English words is utilized for cognate matching. Phonetic weight is the transliteration probability between English and candidates in local language. We adopt the method in (Jeong et al., 1999) with some adjustments. 
In essence, we compute the probabilities of particular English 클론 습격 의 The of Clones Attack 644 key phrase EW given a candidate in the local language CW. 1 1 1 1 1 ( , ) ( ,..., , ,..., ) 1 ( ,..., , ,..., ) log ( | ) ( | ) phoneme phoneme m k phoneme n k j j j j j EW CW e e c c g g c c P g g P c g n ω ω ω − = = = ∑ where the English phrase consists of a string of English alphabets 1,..., m e e , and the candidate in the local language is comprised of a string of phonetic elements. 1,..., k c c . For Korean language, the phonetic element is the Korean alphabets such as “ㄱ”, “ㅣ”, “ㄹ” , “ㅎ” and etc. For Chinese language, the phonetic elements mean the elements of “pinying”. ig is a pronunciation unit comprised of one or more English alphabets ( e.g., ‘ss’ for ‘ㅅ’, a Korean alphabet ). The first term in the product corresponds to the transition probability between two states in HMM and the second term to the output probability for each possible output that could correspond to the state, where the states are all possible distinct English pronunciation units for the given Korean or Chinese word. Because the difference between Korean/Chinese and English phonetic systems makes the above uni-gram model almost impractical in terms of output quality, bi-grams are applied to substitute the single alphabet in the above equation. Therefore, the phonetic weight should be calculated as: 1 1 1 1 1 ( , ) log ( | ) ( | ) phoneme j j j j j j j j j E C P g g g g P c c g g n ω + − + + = ∑ where 1 1 ( | ) j j j j P c c g g + + is computed from the training corpus as the ratio between the frequency of 1 j j c c + in the candidates, which were originated from 1 j j g g + in English words, to the frequency of 1 j j g g + . If 1 j = or j n = , 1 j g − or 1 j g + , 1 jc + is substituted with a space marker. The semantic weight is calculated from the bilingual dictionary. The current bilingual dictionary we employed for the local languages are Korean-English WorldNet and LDC ChineseEnglish dictionary with additional entries inserted manually. The weight relies on the degree of overlaps between an English translation and the candidate semanteme No. of overlapping units w (E,C)= argmax total No. of units For example, given the English phrase “Inha University” and its candidate “인하대 (Inha University), “University” is translated into “대학교”, therefore, the semantic weight between “University” and “대” is about 0.33 because only one third of the full translation is available in the candidate. Due to the range difference between phonetic and semantic weights, we normalized them by dividing the maximum phonetic and semantic weights in each pair of the English phrase and a candidate if the maximum is larger than zero. The strategy for us to pick up the final translation(s) is distinct on two different aspects from the others. If the SSP values of all candidates are less than the threshold, the top one obtained by statistical model is selected as the final translation. Otherwise, we re-rank the candidates according to the SSP value. Then we look down through the new rank list and draw a “virtual” line if there is a big jump of SSP value. If there is no big jump of SSP values, the “virtual” line is drawn at the bottom of the new rank list. Instead of the top-1 candidate, the candidates above the “virtual” line are all selected as the final translations. It is because that an English phrase may have more than one correct translation in the local language. 
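The bipartite matching behind SSP and the "virtual line" selection just described can be sketched as follows; pairwise_weight (the normalized average of the phonetic and semantic weights), the 0.3 threshold mentioned in the example that follows, and the size of a "big jump" are stated assumptions, and SciPy's linear_sum_assignment stands in for the Kuhn-Munkres algorithm.

import numpy as np
from scipy.optimize import linear_sum_assignment

def ssp(english_tokens, candidate_tokens, pairwise_weight):
    # Maximum-weight bipartite matching between English tokens and candidate
    # tokens; padding with zero-weight virtual vertices balances the two sides.
    size = max(len(english_tokens), len(candidate_tokens))
    weights = np.zeros((size, size))
    for i, ew in enumerate(english_tokens):
        for j, cw in enumerate(candidate_tokens):
            weights[i, j] = pairwise_weight(ew, cw)
    rows, cols = linear_sum_assignment(-weights)   # negate to maximize
    return weights[rows, cols].sum()

def select_translations(ranked_candidates, ssp_scores, threshold=0.3, jump=0.2):
    # Fall back to the statistical top-1 when no candidate passes the threshold;
    # otherwise re-rank by SSP and keep everything above the first big score jump.
    if max(ssp_scores.values()) < threshold:
        return [ranked_candidates[0]]
    reranked = sorted(ssp_scores, key=ssp_scores.get, reverse=True)
    kept = [reranked[0]]
    for prev, cur in zip(reranked, reranked[1:]):
        if ssp_scores[prev] - ssp_scores[cur] > jump:   # the "virtual" line
            break
        kept.append(cur)
    return kept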
Return to the previous example, the English term “Viterbi” corresponds to two Chinese translations “维特比” and “韦特比”. The candidate list based on the statistical information is “编码, 算法, 译码, 维特比,…,韦特比”. We then calculate the SSP value of these candidates and re-rank the candidates whose SSP values are larger than the threshold which we set to 0.3. Since the SSP value of “维特比(0.91)” and “韦 特比(0.91)” are both larger than the threshold and there is no big jump, both of them are selected as the final translation. 5 Experimental Evaluation Although the technique we developed has values in their own right and can be applied for other language engineering fields such as query translation for CLIR, we intend to understand to what extent monolingual information retrieval effectiveness can be increased when relevant terms in different language are treated as one unit while indexing. We first examine the translation precision and then study the impact of our approach for monolingual IR. We crawls the web pages of a specific domain (university & research) by WIRE crawler provided by center of Web Research, university of Chile (http://www.cwr.cl/projects/WIRE/). Currently, we have downloaded 32 sites with 5,847 645 Korean Web pages and 74 sites with 13,765 Chinese Web pages. 232 and 746 English terms were extracted from Korean Web pages and Chinese Web pages, respectively. The accuracy of unifying semantically identical words in different languages is dependant on the translation performance. The translation results are shown in table 1. As it can be observed, 77% of English terms from Korean web pages and 83% of English terms from Chinese Web pages can be strictly translated into accurate Korean and Chinese, respectively. However, additional 15% and 14% translations contained at least one Korean and Chinese translations, respectively. The errors were brought in by containing additional related information or incomplete translation. For instance, the English term “blue chip” is translated into “蓝芯(blue chip)”, “蓝筹股 (a kind of stock)”. However, another acceptable translation “绩优股 (a kind of stock)” is ignored. An example for incomplete translation is English phrase “ SIGIR 2005” which only can be translate into “国际计算机检索年会 (international conference of computer information retrieval” ignoring the year. Korean Chinese No. % No. % Exactly correct 179 77% 618 83% At least one is correct but not all 35 15% 103 14% Wrong translation 18 8% 25 3% Total 232 100% 746 100% Table 1. Translation performance We also compare our approach with two wellknown translation systems. We selected 200 English words and translate them into Chinese and Korean by these systems. Table2 and Table 3 show the results in terms of the top 1, 3, 5 inclusion rates for Korean and Chinese translation, respectively. “Exactly and incomplete” translations are all regarded as the right translations. “LiveTrans” and “Google” represent the systems against which we compared the translation ability. Google provides a machine translation function to translate text such as Web pages. Although it works pretty well to translate sentences, it is ineligible for short terms where only a little contextual information is available for translation. LiveTrans (Cheng et al., 2004) provided by the WKD lab in Academia Sinica is the first unknown word translation system based on webmining. 
There are two ways in this system to translate words: the fast one with lower precision is based on the “chi-square” method ( 2 χ ) and the smart one with higher precision is based on “context-vector” method (CV) and “chi-square” method ( 2 χ ) together. “ST” and “ST+PS” represent our approaches based on statistic model and statistic model plus phonetic and semantic model, respectively. Top -1 Top-3 Top -5 Google 56% NA NA “Fast” 2 χ 37% 43% 53.5% Live Trans “Smart” 2 χ +CV 42% 49% 60% ST(dk=1) 28.5 % 41% 47% ST 39 % 46.5% 55.5% Our Methods ST+PS 93% 93% 93% Table 2. Comparison (Chinese case) Top -1 Top-3 Top -5 Google 44% NA NA “Fast” 2 χ 28% 37.5% 45% Live Trans “Smart” 2 χ +CV 24.5% 44% 50% ST(dk=1) 26.5 % 35.5% 41.5% ST 32 % 40% 46.5% Our Methods ST+PS 89% 89.5% 89.5% Table 3. Comparison (Korean case) Even though the overall performance of LiveTrans’ combined method ( 2 χ +CV) is better than the simple method ( 2 χ ) in both Table 2 and 3, the same doesn’t hold for each individual. For instance, “Jordan” is the English translation of Korean term “요르단”, which ranks 2nd and 5th in ( 2 χ ) and ( 2 χ +CV), respectively. The context-vector sometimes misguides the selection. In our two-step selection approach, the final selection would not be diverted by the false statistic information. In addition, in order to examine the contribution of distance information in the statistical method, we ran our experiments based on statistical method (ST) with two different conditions. In the first case, we set ( , ) k i d q c to 1, that is, the location information of all candidates is ignored. In the second case, ( , ) k i d q c is calculated based on the real textual distance of the candidates. As in both Table 2 and Table 3, the later case shows better performance. As shown in both Table 2 and Table 3, it can be observed that “ST+PS” shows the best performance, then followed by “LiveTrans (smart)”, “ST”, “LiveTrans(fast)”, and “Google”. The sta646 tistical methods seem to be able to give a rough estimate for potential translations without giving high precision. Considering the contextual words surrounding the candidates and the English phrase can further improve the precision but still less than the improvement made by the phonetic and semantic information in our approach. High precision is very important to the practical application of the translation results. The wrong translation sometimes leads to more damage to its later application than without any translation available. For instance, the Chinese translation of “viterbi” is “算法(algorithm)” by LiveTrans (fast). Obviously, treating “Viterbi” and “算法 (algorithm)”as one index unit is not acceptable. We ran monolingual retrieval experiment to examine the impact of our concept unification on IR. The retrieval system is based on the vector space model with our own indexing scheme to which the concept unification part was added. We employed the standard tf idf × scheme for index term weighting and idf for query term weighting. Our experiment is based on KT-SET test collection (Kim et al., 1994). It contains 934 documents and 30 queries together with relevance judgments for them. In our index scheme, we extracted the key English phrases in the Korean texts, and translated them. Each English phrases and its equivalence(s) in Korean is treated as one index unit. The baseline against which we compared our approach applied a relatively simple indexing technique. It uses a dictionary that is KoreanEnglish WordNet, to identify index terms. 
The effectiveness of the baseline scheme is comparable with other indexing methods (Lee and Ahn, 1999). While there is a possibility that an indexing method with a full morphological analysis may perform better than our rather simple method, it would also suffer from the same problem, which can be alleviated by concept unification approach. As shown in Figure 3, we obtained 14.9 % improvement based on mean average 11-pt precision. It should be also noted that this result was obtained even with the errors made by the unification of semantically identical terms in different languages. 6 Conclusion In this paper, we showed the importance of the unification of semantically identical terms in different languages for Asian monolingual information retrieval, especially Chinese and Korean. Taking the utilization of the high translation accuracy of our previous work, we successfully unified the most semantically identical terms in the corpus. This is along the line of work where researchers attempt to index documents with concepts rather than words. We would extend our work along this road in the future. Recall 0.0 .2 .4 .6 .8 1.0 Precision 0.0 .2 .4 .6 .8 1.0 Baseline Conceptual Unification Figure 3. Korean Monolingual IR Reference Buckley, C., Mitra, M., Janet, A. and Walz, C.C.. 2000. Using Clustering and Super Concepts within SMART: TREC 6. Information Processing & Management. 36(1): 109-131. Cao, Y. and Li., H.. 2002. Base Noun Phrase Translation Using Web Data and the EM Algorithm. In Proc. of. the 19th COLING. Cheng, P., Teng, J., Chen, R., Wang, J., Liu,W., Chen, L.. 2004. Translating Specific Queries with Web Corpora for Cross-language Information Retrieval. In Proc. of ACM SIGIR. Davis, M.. 1997. New Experiments in Cross-language Text Retrieval at NMSU's Computing Research Lab. In Proc. Of TREC-5. Fung, P. and Yee., L.Y.. 1998. An IR Approach for Translating New Words from Nonparallel, Comparable Texts. In Proc. of COLING/ACL-98. Huang, F., Zhang, Y. and Vogel, S.. 2005. Mining Key Phrase Translations from Web Corpora, In Proc. of the Human Language Technologies Conference (HLT-EMNLP). Jeong, K. S., Myaeng, S. H., Lee, J. S., Choi, K. S.. 1999. Automatic identification and backtransliteration of foreign words for information retrieval. Information Processing & Management. 35(4): 523-540. Kang, B. J., and Choi, K. S. 2002. Effective Foreign Word Extraction for Korean Information Retrieval. Information Processing & Management, 38(1): 91109. 647 Kang, I. H. and Kim, G. C.. 2000. English-to-Korean Transliteration using Multiple Unbounded Overlapping Phoneme Chunks. In Proc. of COLING . Kim, S.-H. et al.. 1994. Development of the Test Set for Testing Automatic Indexing. In Proc. of the 22nd KISS Spring Conference. (in Korean). Lee, J, H. and Ahn, J. S.. 1996. Using N-grams for Korean Test Retrieval. In Proc. of SIGIR. Lee, J. S.. 2004. Automatic Extraction of Translation Phrase Enclosed within Parentheses using Bilingual Alignment Method. In Proc. of the 5th ChinaKorea Joint Symposium on Oriental Language Processing and Pattern Recognition. Munkres, J.. 1957. Algorithms for the Assignment and Transportation Problems. J. Soc. Indust. Appl. Math., 5 (1957). Nagata, M., Saito, T., and Suzuki, K.. 2001. Using the Web as a Bilingual Dictionary. In Proc. of ACL '2001 DD-MT Workshop. Rapp, R.. 1999. Automatic Identification of Word Translations from Unrelated English and German corpora. In Proc. of ACL. Zhang, Y., Huang, F. and Vogel, S.. 2005. 
Mining Translations of OOV Terms from the Web through Cross-lingual Query Expansion. In Proc. of ACM SIGIR-05.
Zhang, Y. and Vines, P.. 2004. Using the Web for Automated Translation Extraction in Cross-Language Information Retrieval. In Proc. of ACM SIGIR-04.
Proceedings of the 21st International Conference on Computational Linguistics and 44th Annual Meeting of the ACL, pages 649–656, Sydney, July 2006. c⃝2006 Association for Computational Linguistics Word Alignment in English-Hindi Parallel Corpus Using Recency-Vector Approach: Some Studies Niladri Chatterjee Department of Mathematics Indian Institute of Technology Delhi Hauz Khas, New Delhi INDIA - 110016 niladri [email protected] Saumya Agrawal Department of Mathematics Indian Institute of Technology Kharagpur, West Bengal INDIA - 721302 saumya [email protected] Abstract Word alignment using recency-vector based approach has recently become popular. One major advantage of these techniques is that unlike other approaches they perform well even if the size of the parallel corpora is small. This makes these algorithms worth-studying for languages where resources are scarce. In this work we studied the performance of two very popular recency-vector based approaches, proposed in (Fung and McKeown, 1994) and (Somers, 1998), respectively, for word alignment in English-Hindi parallel corpus. But performance of the above algorithms was not found to be satisfactory. However, subsequent addition of some new constraints improved the performance of the recency-vector based alignment technique significantly for the said corpus. The present paper discusses the new version of the algorithm and its performance in detail. 1 Introduction Several approaches including statistical techniques (Gale and Church, 1991; Brown et al., 1993), lexical techniques (Huang and Choi, 2000; Tiedemann, 2003) and hybrid techniques (Ahrenberg et al., 2000), have been pursued to design schemes for word alignment which aims at establishing links between words of a source language and a target language in a parallel corpus. All these schemes rely heavily on rich linguistic resources, either in the form of huge data of parallel texts or various language/grammar related tools, such as parser, tagger, morphological analyser etc. Recency vector based approach has been proposed as an alternative strategy for word alignment. Approaches based on recency vectors typically consider the positions of the word in the corresponding texts rather than sentence boundaries. Two algorithms of this type can be found in (Fung and McKeown, 1994) and (Somers, 1998). The algorithms first compute the position vector Vw for the word w in the text. Typically, Vw is of the form ⟨p1p2 . . . pk⟩, where the pis indicate the positions of the word w in a text T. A new vector Rw, called the recency vector, is computed using the position vector Vw, and is defined as ⟨p2−p1, p3−p2, . . . , pk −pk−1⟩. In order to compute the alignment of a given word in the source language text, the recency vector of the word is compared with the recency vector of each target language word and the similarity between them is measured by computing a matching cost associated with the recency vectors using dynamic programming. The target language word having the least cost is selected as the aligned word. The results given in the above references show that the algorithms worked quite well in aligning words in parallel corpora of language pairs consisting of various European languages and Chinese, Japanese, taken pair-wise. Precision of about 70% could be achieved using these algorithms. The major advantage of this approach is that it can work even on a relatively small dataset and it does not rely on rich language resources. 
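A minimal sketch of how position and recency vectors can be built from a tokenised text (the tokenisation itself is assumed to be given):

```python
from collections import defaultdict

def recency_vectors(tokens):
    """Map every word type to its recency vector <p2-p1, ..., pk-p(k-1)>,
    where <p1, ..., pk> are the word's token positions in the text."""
    positions = defaultdict(list)
    for i, word in enumerate(tokens, start=1):
        positions[word].append(i)
    # Words occurring only once get an empty recency vector and would normally
    # be filtered out before matching.
    return {word: [q - p for p, q in zip(pos, pos[1:])]
            for word, pos in positions.items()}
```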
The above advantage motivated us to study the effectiveness of these algorithms for aligning words in English-Hindi parallel texts. The corpus used for this work is described in Table 1. It has been made manually from three different sources: children’s storybooks, English to Hindi translation book material, and advertisements. We shall call 649 the three corpora as Storybook corpus, Sentence corpus and Advertisement corpus, respectively. 2 Word Alignment Algorithm: Recency Vector Based Approach DK-vec algorithm given in (Fung and McKeown, 1994) uses the following dynamic programming based approach to compute the matching cost C(n, m) of two vectors v1 and v2 of lengths n and m, respectively. The cost is calculated recursively using the following formula, C(i, j) = |(v1(i) −v2(j)| + min{C(i −1, j), C(i −1, j −1), C(i, j −1)} where i and j have values from 2 to n and 2 to m respectively, n and m being the number of distinct words in source and target language corpus respectively. Note that vl(k) denotes the kth entry of the vector vl, for l = 1 and 2. The costs are initialised as follows. C(1, 1) = |v1(1) −v2(1)|; C(i, 1) = |v1(i) −v2(1)| + C(i −1, 1); C(1, j) = |v1(1) −v2(j)| + C(1, j −1); The word in the target language that has the minimum normalized cost (C(n, m)/(n + m)) is taken as the translation of the word considered in the source text. One major shortcoming of the above scheme is its high computational complexity i.e. O(mn). A variation of the above scheme has been proposed in (Somers, 1998) which has a much lower computational complexity O(min(m, n)). In this new scheme, a distance called Levenshtein distance(S) is successively measured using : S = S + min{|v1(i + 1) −v2(j)|, |v1(i+1)−v2(j+1)|, |v1(i)−v2(j+1)|} The word in the target text having the minimum value of S (Levenshtein difference) is considered to be the translation of the word in the source text. 2.1 Constraints Used in the Dynamic Programming Algorithms In order to reduce the complexity of the dynamic programming algorithm certain constraints have been proposed in (Fung and McKeown, 1994). 1. Starting Point Constraint: The constraint imposed is: |first-occurrence of source language word (w1) - first-occurrence of target language word w2| < 1 2∗(length of the text). 2. Euclidean distance constraint: The constraint imposed is: p (m1 −m2)2 + (s1 −s2)2 < T, where mj and sj are the mean and standard deviation, respectively, of the recency vector of wj, j = 1 or 2. Here, T is some predefined threshold: 3. Length Constraint: The constraint imposed is: 1 2 ∗f2 < f1 < 2 ∗f2, where f1 and f2 are the frequencies of occurrence of w1 and w2, in their respective texts. 2.2 Experiments with DK-vec Algorithm The results of the application of this algorithm have been very poor when applied on the three English to Hindi parallel corpora mentioned above without imposing any constraints. We then experimented by varying the values of the parameters in the constraints in order to observe their effects on the accuracy of alignment. As was suggested in (Somers, 1998), we also observed that the Euclidean distance constraint is not very beneficial when the corpus size is small. So this constraint has not been considered in our subsequent experiments. Starting point constraint imposes a range within which the search for the matching word is restricted. 
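For reference in the experiments that follow, the two matching scores defined above can be sketched as below. This is a minimal sketch: how Somers' indices i and j are advanced is not fully spelled out here, so the simultaneous stepping used below is an assumption, and the constraints of Section 2.1 are assumed to be applied separately before any scoring.

```python
def dkvec_cost(v1, v2):
    """DK-vec matching cost C(n, m), normalised by (n + m)."""
    n, m = len(v1), len(v2)
    C = [[0.0] * m for _ in range(n)]
    C[0][0] = abs(v1[0] - v2[0])
    for i in range(1, n):
        C[i][0] = abs(v1[i] - v2[0]) + C[i - 1][0]
    for j in range(1, m):
        C[0][j] = abs(v1[0] - v2[j]) + C[0][j - 1]
    for i in range(1, n):
        for j in range(1, m):
            C[i][j] = abs(v1[i] - v2[j]) + min(C[i - 1][j],
                                               C[i - 1][j - 1],
                                               C[i][j - 1])
    return C[n - 1][m - 1] / (n + m)


def somers_score(v1, v2):
    """Somers-style Levenshtein difference S, accumulated while stepping
    along both vectors, giving O(min(n, m)) cost."""
    s, i, j = 0, 0, 0
    while i + 1 < len(v1) and j + 1 < len(v2):
        s += min(abs(v1[i + 1] - v2[j]),
                 abs(v1[i + 1] - v2[j + 1]),
                 abs(v1[i] - v2[j + 1]))
        i += 1  # simultaneous stepping is an assumption, see the note above
        j += 1
    return s
```

In both cases the target language word with the smallest score is taken as the alignment of the source word.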
Although Fung and McKeown suggested the range to be half of the length of the text, we felt that the optimum value of this range will vary from text to text depending on the type of corpus, length ratio of the two texts etc. Table 2 shows the results obtained on applying the DK vec algorithm on Sentence corpus for different lower values of range. Similar results were obtained for the other two corpora. The maximum increase observed in the F-score is around 0.062 for the Sentence corpus, 0.03 for the Story book corpus and 0.05 for the Advertisement corpus. None of these improvements can be considered to be significant. 2.3 Experiments with Somers’ Algorithm The algorithm provided by Somers works by first finding all the minimum score word pairs using dynamic programming, and then applying three filters Multiple Alignment Selection filter, Best Alignment Score Selection filter and Frequency Range constraint to the raw results to increase the accuracy of alignment. The Multiple Alignment Selection(MAS) filter takes care of situations where a single target language word is aligned with the number of source 650 Corpora English corpus Hindi corpus Total words Distinct words Total words Distinct words Storybook corpus 6545 1079 7381 1587 Sentence corpus 8541 1186 9070 1461 Advertisement corpus 3709 1307 4009 1410 Table 1: Details of English-Hindi Parallel Corpora Range Available Proposed Correct P% R% F-score 50 516 430 34 7.91 6.59 0.077 150 516 481 51 10.60 09.88 0.102 250 516 506 98 19.37 18.99 0.192 500 516 514 100 19.46 19.38 0.194 700 516 515 94 18.25 18.22 0.182 800 516 515 108 20.97 20.93 0.209 900 516 515 88 17.09 17.05 0.171 1000 516 516 100 19.38 19.38 0.194 2000 516 516 81 15.70 15.70 0.157 4535 516 516 76 14.73 14.73 0.147 Table 2: Results of DK-vec Algorithm on Sentence Corpus for different range language words. Somers has suggested that in such cases only the word pair that has the minimum alignment score should be considered. Table 3 provides results (see column F-score old) when the raw output is passed through the MAS filters for the three corpora. Note that for all the three corpora a variety of frequency ranges have been considered, and we have observed that the results obtained are slightly better when the MAS filter has been used. The best F-score is obtained when frequency range is high i.e. 100-150, 100-200. But here the words are very few in number and are primarily pronoun, determiner or conjunction which are not significant from alignment perspective. Also, it was observed that when medium frequency ranges, such as 30-50, are used the best result, in terms of precision, is around 20-28% for the three corpora. However, since the corpus size is small, here too the available and proposed aligned word pairs are very few (below 25). Lower frequency ranges (viz. 2-20 and its sub-ranges) result in the highest number of aligned pairs. We noticd that these aligned word pairs are typically verb, adjective, noun and adverb. But here too the performance of the algorithm may be considered to be unsatisfactory. Although Somers has recommended words in the frequency ranges 1030 to be considered for alignment, we have considered lower frequency words too in our experiments. This is because the corpus size being small we would otherwise have effectively overlooked many small-frequency words (e.g. noun, verb, adjective) that are significant from the alignment point of view. 
Somers has further observed that if the Best Alignment Score Selection (BASS) filter is applied to yield the first few best results of alignment the overall quality of the result improves. Figure 1 shows the results of the experiments done for different alignment score cut-off without considering the Frequency Range constraint on the three corpora. However, it was observed that the performance of the algorithm reduced slightly on introducing this BASS filter. The above experiments suggest that the performance of the two algorithms is rather poor in the context of English-Hindi parallel texts as compared to other language pairs as shown by Fung and Somers. In the following section we discuss the reasons for the low recall and precision values. 2.4 Why Recall and Precision are Low We observed that the primary reason for the poor performance of the above algorithms in English - Hindi context is the presence of multiple Hindi equivalents for the same English word. This can happen primarily due to three reasons: 651 Figure 1: Results of Somers’ Algorithm and Improved approach for different score cut-off Declension of Adjective: Declensions of adjectives are not present in English grammar. No morphological variation in adjectives takes place along with the number and gender of the noun. But, in Hindi, adjectives may have such declensions. For example, the Hindi for “black” is kaalaa when the noun is masculine singular number (e.g. black horse ∼kaalaa ghodaa). But the Hindi translation of “black horses” is kaale ghode; whereas “black mare” is translated as kaalii ghodii. Thus the same English word “black” may have three Hindi equivalents kaalaa, kaalii, and kale which are to be used judiciously by considering the number and gender of the noun concerned. Declensions of Pronouns and Nouns: Nouns or pronouns may also have different declensions depending upon the case endings and/or the gender and number of the object. For example, the same English word “my” may have different forms (e.g. meraa, merii, mere) when translated in Hindi. For illustration, while “my book” is translated as ∼merii kitaab, the translation of “my name” is meraa naam. This happens because naam is masculine in Hindi, while kitaab is feminine. (Note that in Hindi there is no concept of Neuter gender). Similar declensions may be found with respect to nouns too. For example, the Hindi equivalent of the word “hour” is ghantaa. In plural form it becomes ghante (e.g. “two hours” ∼do ghante). But when used in a prepositional phrase, it becomes ghanto. Thus the Hindi translation for “in two hours” is do ghanto mein. Verb Morphology: Morphology of verbs in Hindi depends upon the gender, number and person of the subject. There are 11 possible suffixes (e.g taa, te, tii, egaa) in Hindi that may be attached to the root Verb to render morphological variations. For illustration, I read. →main padtaa hoon (Masculine) but main padtii hoon (Feminine) You read. →tum padte ho (Masculine) or tum padtii ho (Feminine) He will read. →wah padegaa. Due to the presence of multiple Hindi equivalents, the frequencies of word occurrences differ significantly, and thereby jeopardize the calculations. As a consequence, many English words are wrongly aligned. In the following section we describe certain measures that we propose for improving the efficiency of the recency vector based algorithms for word alignment in English - Hindi parallel texts. 
3 Improvements in Word Alignment In order to take care of morphological variations, we propose to use root words instead of various declensions of the word. For the present work this has been done manually for Hindi. However, algorithms similar to Porter’s algorithm may be developed for Hindi too for cleaning a Hindi text of morphological inflections (Ramanathan and Rao, 2003). The modified text, for both English and Hindi, are then subjected to word alignment. Table 4 gives the details about the root word corpus used to improve the result of word alignment. Here the total number of words for the three types of corpora is greater than the total number of words in the original corpus (Table 1). This is because of the presence of words like “I’ll” in the English corpus which have been taken as “I shall” in the root word corpus. Also words like Unkaa have been taken as Un kaa in the Hindi root word corpus, leading to an increase in the corpus size. 652 Since we observed (see Section 2.2) that Euclidean distance constraint does not add significantly to the performance, we propose not to use this constraint for English-Hindi word alignment. However, we propose to impose both frequency range constraint and length constraint (see Section 2.1 and Section 2.3). Instead of the starting point constraint, we have introduced a new constraint, viz. segment constraint, to localise the search for the matching words. The starting point constraint expresses range in terms of number of words. However, it has been observed (see section 2.2) that the optimum value of the range varies with the nature of text. Hence no value for range may be identified that applies uniformly on different corpora. Also for noisy corpora the segment constraint is expected to yield better results as the search here is localised better. The proposed segment constraint expresses range in terms of segments. In order to impose this constraint, first the parallel texts are aligned at sentence level. The search for a target language word is then restricted to few segments above and below the current one. Use of sententially aligned corpora for word alignment has already been recommended in (Brown et al., 1993). However, the requirement there is quite stringent – all the sentences are to be correctly aligned. The segment constraint proposed herein works well even if the text alignment is not perfect. Use of roughly aligned corpora has also been proposed in (Dagan and Gale, 1993) for word alignment in bilingual corpora, where statistical techniques have been used as the underlying alignment scheme. In this work, the sentence level alignment algorithm given in (Gale and Church, 1991) has been used for applying segment constraint. As shown in Table 5, the alignment obtained using this algorithm is not very good (only 70% precision for Storybook corpus). The three aligned root word corpora are then subjected to segment constraint in our experiments. Next important decision we need to take which dynamic programming algorithm should be used. Results shown in Section 2.2 and 2.3 demonstrate that the performance of DK-vec algorithm and Somers’ algorithm are almost at par. Hence keeping in view the improved computational complexity, we choose to use Levenshtein distance as used in Somers’ algorithm for comparing recency vectors. In the following subsection we discuss the experimental results of the proposed approach. 
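Before turning to the results, a minimal sketch of how the segment constraint can restrict the candidate set is given below; the segment-level (sentence) alignment is assumed to be available, for example from the Gale and Church aligner, and may be imperfect.

```python
def candidate_targets(source_word, source_segments, target_segments, i=1):
    """Words allowed by the segment constraint as match candidates for
    `source_word`.  `source_segments` and `target_segments` are parallel lists
    of aligned segments (each a list of tokens); `i` is the number of segments
    searched above and below the current one (i=0 keeps only the same segment).
    """
    allowed = set()
    for k, segment in enumerate(source_segments):
        if source_word in segment:
            lo, hi = max(0, k - i), min(len(target_segments), k + i + 1)
            for target_segment in target_segments[lo:hi]:
                allowed.update(target_segment)
    return allowed
```

Only the words in the returned set are then scored with the Levenshtein difference, which keeps the search local even when the underlying sentence alignment is noisy.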
3.1 Experimental Results and Comparison with Existing algorithms We have conducted experiments to determine the number of segments above and below the current segment that should be considered for searching the match of a word for each corpus. In this respect we define i-segment constraint in which the search is restricted to the segments k −i to k + i of the target language corpus when the word under consideration is in the segment k of the source language corpus. Evidently, the value of i depends on the accuracy of sentence alignment. Table 5 suggests that the quality of alignment is different for the three corpora that we considered. Due to the very high precision and recall for Sentence corpus we have restricted our search to the kth segment only, i.e. the value of i is 0. However, since the results are not so good for the Storybook and Advertisement corpora we found after experimenting that the best results were obtained when i was 1. During the experiments it was observed that as the number of segments was lowered or increased from the optimum segment the accuracy of alignment decreased continuously by around 10% for low frequency ranges for the three corpora and remained almost same for high frequency ranges. Table 3 shows the results obtained when segment constraint is applied on the three corpora at optimum segment range for various frequency ranges. A comparison between the F-score given by algorithm in (Somers, 1998) (the column Fscore old in the table) and the F-score obtained by applying the improved scheme (the column Fscore new in the table) indicate that the results have improved significantly for low frequency ranges. It is observed that the accuracy of alignment for almost 95% of the available words has increased significantly. This accounts for words within low frequency range of 2–40 for Sentence corpus, 2– 30 for Storybook corpus, and 2–20 for Advertisement corpus. Also, most of the correct word pairs given by the modified approach are verbs, adjectives or nouns. Also it was observed that as the noise in the corpus increased the results became poorer. This accounts for the lowest F-score values for advertisement corpus. The Sentence corpus, however, has been found to be the least noisy, and highest precision and recall values were obtained with this corpus. 653 Using Somers’ second filter on each corpus for the optimum segment we found that the results at low scores were better as shown in Figure 1. The word pairs obtained after applying the modified approach can be used as anchor points for further alignment as well as for vocabulary extraction. In case of the Sentence corpus, best result for anchor points for further alignment lies at the score cut off 1000 where precision and recall are 86.88% and 80.35% respectively. Hence F-score is 0.835 which is very high as compared to 0.173 obtained by Somers’ approach and indicates an improvement of 382.65%. Also, here the number of correct word pairs is 198, whereas the algorithms in (Fung and McKeown, 1994) and (Somers, 1998) gave only 62 and 61 correct word pairs, respectively. Hence the results are very useful for vocabulary extraction as well. Similarly, Figure 2 and Figure 3 show significant improvements for the other two corpora. At any score cut-off, the modified approach gives better results than the algorithms proposed in (Somers, 1998). 4 Conclusion This paper focuses on developing suitable word alignment schemes in parallel texts where the size of the corpus is not large. 
In languages, where rich linguistic tools are yet to be developed, or available freely, such an algorithm may prove to be beneficial for various NLP activities, such as, vocabulary extraction, alignment etc. This work considers word alignment in English - Hindi parallel corpus, where the size of the corpus used is about 18 thousand words for English and 20 thousand words for Hindi. The paucity of the resources suggests that statistical techniques are not suitable for the task. On the other hand, Lexicon-based approaches are highly resource-dependent. As a consequence, they could not be considered as suitable schemes. Recency vector based approaches provide a suitable alternative. Variations of this approach have already been used for word alignment in parallel texts involving European languages and Chinese, Japanese. However, our initial experiments with these algorithms on English-Hindi did not produce good results. In order to improve their performances certain measures have been taken. The proposed algorithm improved the performance manifold. This approach can be used for word alignment in language pairs like English-Hindi. Since the available corpus size is rather small we could not compare the results obtained with various other word alignment algorithms proposed in the literature. In particular we like to compare the proposed scheme with the famous IBM models. We hope that with a much larger corpus size we shall be able to make the necessary comparisons in near future. References L. Ahrenberg, M. Merkel, A. Sagvall Hein, and J.Tiedemann. 2000. Evaluation of word alignment systems. In Proc. 2nd International conference on Linguistic resources and Evaluation (LREC-2000), volume 3, pages 1255–1261, Athens, Greece. P. Brown, S. A. Della Pietra, V. J. Della Pietra, , and R. L. Mercer. 1993. The mathematics of statistical machine translation: parameter estimation. Computational Linguistics, 19(2):263–311. K. W. Church Dagan, I. and W. A. Gale. 1993. Robust bilingual word alignment for machine aided translation. In Proc. Workshop on Very Large Corpora: Academic and Industrial Perspectives, pages 1–8, Columbus, Ohio. P. Fung and K. McKeown. 1994. Aligning noisy parallel corpora across language groups: Word pair feature matching by dynamic time warping. In Technology Partnerships for Crossing the Language Barrier: Proc. First conference of the Association for Machine Translation in the Americas, pages 81–88, Columbia, Maryland. W. A. Gale and K. W. Church. 1991. Identifying word correspondences in parallel texts. In Proc. Fourth DARPA Workshop on Speech and Natural Language, pages 152–157. Morgan Kaufmann Publishers, Inc. Jin-Xia Huang and Key-Sun Choi. 2000. Chinese korean word alignment based on linguistic comparison. In Proc. 38th annual meeting of the association of computational linguistic, pages 392–399, Hong Kong. Ananthakrishnan Ramanathan and Durgesh D. Rao. 2003. A lightweight stemmer for hindi. In Proc. Workshop of Computational Linguistics for South Asian Languages -Expanding Synergies with Europe, EACL-2003, pages 42–48, Budapest, Hungary. H Somers. 1998. Further experiments in bilingual text alignment. International Journal of Corpus Linguistics, 3:115–150. J¨org Tiedemann. 2003. Combining clues word alignment. In Proc. 10th Conference of The European Chapter of the Association for Computational Linguistics, pages 339–346, Budapest, Hungary. 
Table 3: Comparison of experimental results with Segment Constraint on the three English-Hindi parallel corpora
(columns: frequency range, a (available), p (proposed), c (correct), P%, R%, F-score (new), F-score (old), % increase)
Segment Constraint: 0-segment (Sentence Corpus)
2-5 285 181 141 77.90 49.74 0.61 0.118 416.90
3-5 147 108 81 75.00 55.10 0.64 0.169 278.69
3-10 211 152 119 78.29 56.40 0.61 0.168 263.10
5-20 146 103 79 76.70 54.12 0.64 0.216 196.29
10-20 49 35 29 82.86 59.18 0.69 0.233 196.14
20-30 19 12 9 75.00 47.37 0.58 0.270 114.62
30-50 14 8 6 75.00 42.86 0.55 0.229 140.17
40-50 4 2 2 100.00 50.00 0.67 0.222 201.80
50-100 15 12 8 66.67 53.33 0.59 0.392 50.51
100-200 6 5 5 100.00 83.33 0.91 0.91
200-300 3 3 3 100.00 100.00 1.00 1.00
Segment Constraint: 1-segment (Story book Corpus)
2-5 281 184 89 48.37 31.67 0.38 0.039 874.35
3-5 143 108 52 48.15 36.36 0.41 0.042 876.19
5-10 125 89 35 39.39 28.00 0.33 0.090 266.67
10-20 75 50 25 50.00 33.33 0.40 0.115 247.83
10-30 117 76 39 51.32 33.33 0.41 0.114 259.65
20-30 32 23 11 47.83 34.38 0.37 0.041 802.43
30-40 14 8 2 25.00 14.29 0.18 0.100 80
40-50 7 7 2 28.57 28.57 0.29 0.200 45.00
50-100 11 10 2 20.00 18.18 0.19 0.110 72.72
100-200 5 5 2 40.00 40.00 0.40 0.444
Segment Constraint: 1-segment (Advertisement Corpus)
2-5 411 250 67 26.80 16.30 0.20 0.035 471.43
3-5 189 145 41 28.28 21.69 0.25 0.073 242.47
3-10 237 172 48 27.91 20.03 0.23 0.075 206.67
5-20 107 73 27 36.99 25.23 0.30 0.141 112.77
10-20 31 22 6 27.27 19.35 0.23 0.229 4.37
10-30 40 28 8 32.14 22.50 0.26 0.247 5.26
30-40 3 2 1 50.00 33.33 0.40 0.222 80.18
30-50 3 2 1 50.00 33.33 0.40 0.222 80.18
50-100 4 3 1 33.33 25.00 0.29 0.178 60.60
100-200 2 2 0 0 0 1.000

Table 4: Experimental root word parallel corpora of English-Hindi
(columns: corpus, English total words, English distinct words, Hindi total words, Hindi distinct words)
Storybook corpus 6609 895 7606 1100
Advertisement corpus 3795 1213 4057 1198
Sentence corpus 8540 1012 9159 1152

Table 5: Results of Church and Gale Algorithm for Sentence level Alignment
(columns: corpus, actual alignment in text, alignment given by system, correct alignment given by system, R%, P%)
Advertisement corpus 323 358 253 78.32 70.68
Storybook corpus 609 546 476 78.16 87.18
Sentence corpus 4548 4548 4458 98.02 98.02

Figure 2: Alignment Results for Sentence Corpus
Figure 3: Alignment Results for Story Book Corpus
Proceedings of the 21st International Conference on Computational Linguistics and 44th Annual Meeting of the ACL, pages 657–664, Sydney, July 2006. c⃝2006 Association for Computational Linguistics Extracting loanwords from Mongolian corpora and producing a Japanese-Mongolian bilingual dictionary Badam-Osor Khaltar Graduate School of Library, Information and Media Studies University of Tsukuba 1-2 Kasuga Tsukuba, 305-8550 Japan [email protected] Atsushi Fujii Graduate School of Library, Information and Media Studies University of Tsukuba 1-2 Kasuga Tsukuba, 305-8550 Japan [email protected] Tetsuya Ishikawa The Historiographical Institute The University of Tokyo 3-1 Hongo 7-chome, Bunkyo-ku Tokyo, 133-0033 Japan [email protected] Abstract This paper proposes methods for extracting loanwords from Cyrillic Mongolian corpora and producing a Japanese–Mongolian bilingual dictionary. We extract loanwords from Mongolian corpora using our own handcrafted rules. To complement the rule-based extraction, we also extract words in Mongolian corpora that are phonetically similar to Japanese Katakana words as loanwords. In addition, we correspond the extracted loanwords to Japanese words and produce a bilingual dictionary. We propose a stemming method for Mongolian to extract loanwords correctly. We verify the effectiveness of our methods experimentally. 1 Introduction Reflecting the rapid growth in science and technology, new words and technical terms are being progressively created, and these words and terms are often transliterated when imported as loanwords in another language. Loanwords are often not included in dictionaries, and decrease the quality of natural language processing, information retrieval, machine translation, and speech recognition. At the same time, compiling dictionaries is expensive, because it relies on human introspection and supervision. Thus, a number of automatic methods have been proposed to extract loanwords and their translations from corpora, targeting various languages. In this paper, we focus on extracting loanwords in Mongolian. The Mongolian language is divided into Traditional Mongolian, written using the Mongolian alphabet, and Modern Mongolian, written using the Cyrillic alphabet. We focused solely on Modern Mongolian, and use the word “Mongolian” to refer to Modern Mongolian in this paper. There are two major problems in extracting loanwords from Mongolian corpora. The first problem is that Mongolian uses the Cyrillic alphabet to represent both conventional words and loanwords, and so the automatic extraction of loanwords is difficult. This feature provides a salient contrast to Japanese, where the Katakana alphabet is mainly used for loanwords and proper nouns, but not used for conventional words. The second problem is that content words, such as nouns and verbs, are inflected in sentences in Mongolian. Each sentence in Mongolian is segmented on a phrase-by-phase basis. A phrase consists of a content word and one or more suffixes, such as postpositional particles. Because loanwords are content words, then to extract loanwords correctly, we have to identify the original form using stemming. In this paper, we propose methods for extracting loanwords from Cyrillic Mongolian and producing a Japanese–Mongolian bilingual dictionary. We also propose a stemming method to identify the original forms of content words in Mongolian phrases. 657 2 Related work To the best of our knowledge, no attempt has been made to extract loanwords and their translations targeting Mongolian. 
Thus, we will discuss existing methods targeting other languages. In Korean, both loanwords and conventional words are spelled out using the Korean alphabet, called Hangul. Thus, the automatic extraction of loanwords in Korean is difficult, as it is in Mongolian. Existing methods that are used to extract loanwords from Korean corpora (Myaeng and Jeong, 1999; Oh and Choi, 2001) use the phonetic differences between conventional Korean words and loanwords. However, these methods require manually tagged training corpora, and are expensive. A number of corpus-based methods are used to extract bilingual lexicons (Fung and McKeown, 1996; Smadja, 1996). These methods use statistics obtained from a parallel or comparable bilingual corpus, and extract word or phrase pairs that are strongly associated with each other. However, these methods cannot be applied to a language pair where a large parallel or comparable corpus is not available, such as Mongolian and Japanese. Fujii et al. (2004) proposed a method that does not require tagged corpora or parallel corpora to extract loanwords and their translations. They used a monolingual corpus in Korean and a dictionary consisting of Japanese Katakana words. They assumed that loanwords in multiple countries corresponding to the same source word are phonetically similar. For example, the English word “system” has been imported into Korean, Mongolian, and Japanese. In these languages, the romanized words are “siseutem”, “sistem”, and “shisutemu”, respectively. It is often the case that new terms have been imported into multiple languages simultaneously, because the source words are usually influential across cultures. It is feasible that a large number of loanwords in Korean can also be loanwords in Japanese. Additionally, Katakana words can be extracted from Japanese corpora with a high accuracy. Thus, Fujii et al. (2004) extracted the loanwords in Korean corpora that were phonetically similar to Japanese Katakana words. Because each of the extracted loanwords also corresponded to a Japanese word during the extraction process, a Japanese–Korean bilingual dictionary was produced in a single framework. However, a number of open questions remain from Fujii et al.’s research. First, their stemming method can only be used for Korean. Second, their accuracy in extracting loanwords was low, and thus, an additional extraction method was required. Third, they did not report on the accuracy of extracting translations, and finally, because they used Dynamic Programming (DP) matching for computing the phonetic similarities between Korean and Japanese words, the computational cost was prohibitive. In an attempt to extract Chinese–English translations from corpora, Lam et al. (2004) proposed a similar method to Fujii et al. (2004). However, they searched the Web for Chinese–English bilingual comparable corpora, and matched named entities in each language corpus if they were similar to each other. Thus, Lam et al.’s method cannot be used for a language pair where comparable corpora do not exist. In contrast, using Fujii et al.’s (2004) method, the Katakana dictionary and a Korean corpus can be independent. In addition, Lam et al.’s method requires Chinese–English named entity pairs to train the similarity computation. Because the accuracy of extracting named entities was not reported, it is not clear to what extent this method is effective in extracting loanwords from corpora. 
3 Methodology 3.1 Overview In view of the discussion outlined in Section 2, we enhanced the method proposed by Fujii et al. (2004) for our purpose. Figure 1 shows the method that we used to extract loanwords from a Mongolian corpus and to produce a Japanese–Mongolian bilingual dictionary. Although the basis of our method is similar to that used by Fujii et al. (2004), “Stemming”, “Extracting loanwords based on rules”, and “N-gram retrieval” are introduced in this paper. First, we perform stemming on a Mongolian corpus to segment phrases into a content word and one or more suffixes. 658 Second, we discard segmented content words if they are in an existing dictionary, and extract the remaining words as candidate loanwords. Third, we use our own handcrafted rules to extract loanwords from the candidate loanwords. While the rule-based method can extract loanwords with a high accuracy, a number of loanwords cannot be extracted using predefined rules. Fourth, as performed by Fujii et al. (2004), we use a Japanese Katakana dictionary and extract a candidate loanword that is phonetically similar to a Katakana word as a loanword. We romanize the candidate loanwords that were not extracted using the rules. We also romanize all words in the Katakana dictionary. However, unlike Fujii et al. (2004), we use N-gram retrieval to limit the number of Katakana words that are similar to the candidate loanwords. Then, we compute the phonetic similarities between each candidate loanword and each retrieved Katakana word using DP matching, and select a pair whose score is above a predefined threshold. As a result, we can extract loanwords in Mongolian and their translations in Japanese simultaneously. Finally, to identify Japanese translations for the loanwords extracted using the rules defined in the third step above, we perform N-gram retrieval and DP matching. We will elaborate further on each step in Sections 3.2–3.7. 3.2 Stemming A phrase in Mongolian consists of a content word and one or more suffixes. A content word can potentially be inflected in a phrase. Figure 2 shows Mongolian corpus Katakana dictionary Stemming Extracting candidate loanwords Romanization Japanese-Mongolian bilingual dictionary Extracting loanwords based on rules Romanization N-gram retrieval Mongolian loanword dictionary High Similarity Computing phonetic similarity Fig ure 1: Overview of our extraction method. Type Example (a) No inflection. ном + ын → номын Book + Genitive Case (b) Vowel elimination. ажил +аас+ аа→ ажлаасаа Work + Ablative Case +Reflexive (c) Vowel insertion. ах + д → ахад Brother + Dative Case (d) Consonant insertion. байшин + ийн→ байшингийн Building + Genitive Case (e) The letter “ь” is converted to “и”, and the vowel is eliminated. сургууль+ аас→ сургуулиас School + Ablative Case Figure 2: Inflection types of nouns in Mongolian. the inflection types of content words in phrases. In phrase (a), there is no inflection in the content word “ном (book)” concatenated with the suffix “ын (genitive case)”. However, in phrases (b)–(e) in Figure 2, the content words are inflected. Loanwords are also inflected in all of these types, except for phrase (b). Thus, we have to identify the original form of a content word using stemming. While most loanwords are nouns, a number of loanwords can also be verbs. In this paper, we propose a stemming method for nouns. Figure 3 shows our stemming method. We will explain our stemming method further, based on Figure 3. 
First, we consult a “Suffix dictionary” and perform backward partial matching to determine whether or not one or more suffixes are concatenated at the end of a target phrase. Second, if a suffix is detected, we use a “Suffix segmentation rule” to segment the suffix and extract 659 Figure 3: Overview of our noun stemming method. the noun. The inflection type in phrases (c)–(e) in Figure 2 is also determined. Third, we investigate whether or not the vowel elimination in phrase (b) in Figure 2 occurred in the extracted noun. Because the vowel elimination occurs only in the last vowel of a noun, we check the last two characters of the extracted noun. If both of the characters are consonants, the eliminated vowel is inserted using a “Vowel insertion rule” and the noun is converted into its original form. Existing Mongolian stemming methods (Ehara et al., 2004; Sanduijav et al., 2005) use noun dictionaries. Because we intend to extract loanwords that are not in existing dictionaries, the above methods cannot be used. Noun dictionaries have to be updated as new words are created. Our stemming method does not require a noun dictionary. Instead, we manually produced a suffix dictionary, suffix segmentation rule, and vowel insertion rule. However, once these resources are produced, almost no further compilation is required. The suffix dictionary consists of 37 suffixes that can concatenate with nouns. These suffixes are postpositional particles. Table 1 shows the dictionary entries, in which the inflection forms of the postpositional particles are shown in parentheses. The suffix segmentation rule consists of 173 rules. We show examples of these rules in Figure 4. Even if suffixes are identical in their phrases, the segmentation rules can be different, depending on the counterpart noun. In Figure 4, the suffix “ийн” matches both the noun phrases (a) and (b) by backward partial matching. However, each phrase is segmented by a Table 1: Entries of the suffix dictionary. detect a suffix in the phrase Suffix dictionary Suffix segmentation rule phrase noun segment a suffix and extract a noun Yes insert a vowel check if the last two characters of the noun are both consonants Vowel insertion rule No Case Suffix Genitive Accusative Dative Ablative Instrumental Cooperative Reflexive Plural н, ы, ын, ны, ий, ийн, ний ыг, ийг, г д, т аас (иас), оос (иос), ээс, өөс аар (иар), оор (иор), ээр, өөр тай, той, тэй аа (иа), оо (ио), ээ, өө ууд (иуд), үүд (иүд) Suffix Noun phrase Noun (a) Ээжийн mother’s ээж mother ийн Genitive (b) Хараагийн Haraa’(river name)s Хараа Haraa Figure 4: Examples of the suffix segmentation rule. deferent rule independently. The underlined suffixes are segmented in each phrase, respectively. In phrase (a), there is no inflection, and the suffix is easily segmented. However, in phrase (b), a consonant insertion has occurred. Thus, both the inserted consonant, “г”, and the suffix have to be removed. The vowel insertion rule consists of 12 rules. To insert an eliminated vowel and extract the original form of the noun, we check the last two characters of a target noun. If both of these are consonants, we determine that a vowel was eliminated. However, a number of nouns end with two consonants inherently, and therefore, we referred to a textbook on Mongolian grammar (Bayarmaa, 2002) to produce 12 rules to determine when to insert a vowel between two consecutive consonants. For example, if any of “м”, “г”, “л”, “б”, “в”, or “р” are at the end of a noun, a vowel is inserted. 
However, if any of “ц”, “ж”, “з”, “с”, “д”, “т”, “ш”, “ч”, or “х” are the second to last consonant in a noun, a vowel is not inserted. The Mongolian vowel harmony rule is a phonological rule in which female vowels and male vowels are prohibited from occurring in a single word together (with the exception of proper nouns). We used this rule to determine which vowel should be inserted. The appropriate vowel is determined by the first vowel of the first syllable in the target noun. 660 For example, if there are “а” and “у” in the first syllable, the vowel “а” is inserted between the last two consonants. 3.3 Extracting candidate loanwords After collecting nouns using our stemming method, we discard the conventional Mongolian nouns. We discard nouns defined in a noun dictionary (Sanduijav et al., 2005), which includes 1,926 nouns. We also discard proper nouns and abbreviations. The first characters of proper nouns, such as “Эрдэнэбат (Erdenebat)”, and all the characters of abbreviations, such as “ЦШНИ (Nuclear research centre)”, are written using capital letters in Mongolian. Thus, we discard words that are written using capital characters, except those occurring at the beginning of sentences. In addition, because “ө” and “ү” are not used to spell out Western languages, words including those characters are also discarded. 3.4 Extracting loanwords based on rules We manually produced seven rules to identify loanwords in Mongolian. Words that match with one of the following rules are extracted as loanwords. (a) A word including the consonants “к”, “п”, “ф”, or “щ”. These consonants are usually used to spell out foreign words. (b) A word that violated the Mongolian vowel harmony rule. Because of the vowel harmony rule, a word that includes female and male vowels, which is not based on the Mongolian phonetic system, is probably a loanword. (c) A word beginning with two consonants. A conventional Mongolian word does not begin with two consonants. (d) A word ending with two particular consonants. A word whose penultimate character is any of: “п”, ”б”, “т”, ”ц”, “ч”, ”з”, or “ш” and whose last character is a consonant violates Mongolian grammar, and is probably a loanword. (e) A word beginning with the consonant “в”. In a modern Mongolian dictionary (Ozawa, 2000), there are 54 words beginning with “в”, of which 31 are loanwords. Therefore, a word beginning with “в” is probably a loanword. (f) A word beginning with the consonant “р”. In a modern Mongolian dictionary (Ozawa, 2000), there are 49 words beginning with “р”, of which only four words are conventional Mongolian words. Therefore, a word beginning with “р” is probably a loanword. (g) A word ending with “<consonant> + и”. We discovered this rule empirically. 3.5 Romanization We manually aligned each Mongolian Cyrillic alphabet to its Roman representation1. In Japanese, the Hepburn and Kunrei systems are commonly used for romanization proposes. We used the Hepburn system, because its representation is similar to that used in Mongolian, compared to the Kunrei system. However, we adapted 11 Mongolian romanization expressions to the Japanese Hepburn romanization. For example, the sound of the letter “L” does not exist in Japanese, and thus, we converted “L” to “R” in Mongolian. 3.6 N-gram retrieval By using a document retrieval method, we efficiently identify Katakana words that are phonetically similar to a candidate loanword. In other words, we use a candidate loanword, and each Katakana word as a query and a document, respectively. 
We call this method “N-gram retrieval”. Because the N-gram retrieval method does not consider the order of the characters in a target word, the accuracy of matching two words is low, but the computation time is fast. On the other hand, because DP matching considers the order of the characters in a target word, the accuracy of matching two words is high, but the computation time is slow. We combined these two methods to achieve a high matching accuracy with a reasonable computation time. First, we extract Katakana words that are phonetically similar to a candidate loanword using N-gram retrieval. Second, we compute the similarity between the candidate loanword and each of the retrieved Katakana words using DP matching to improve the accuracy. We romanize all the Katakana words in the dictionary and index them using consecutive N 1 http://badaa.mngl.net/docs.php?p=trans_table (May, 2006) 661 characters. We also romanize each candidate loanword when use as a query. We experimentally set N = 2, and use the Okapi BM25 (Robertson et al., 1995) for the retrieval model. 3.7 Computing phonetic similarity Given the romanized Katakana words and the romanized candidate loanwords, we compute the similarity between the two strings, and select the pairs associated with a score above a predefined threshold as translations. We use DP matching to identify the number of differences (i.e., insertion, deletion, and substitution) between two strings on an alphabet-by-alphabet basis. While consonants in transliteration are usually the same across languages, vowels can vary depending on the language. The difference in consonants between two strings should be penalized more than the difference in vowels. We compute the similarity between two romanized words using Equation (1). v c dv dc + × + × × − α α ) ( 2 1 (1) Here, dc and dv denote the number of differences in consonants and vowels, respectively, and α is a parametric consonant used to control the importance of the consonants. We experimentally set α = 2. Additionally, c and v denote the number of all the consonants and vowels in the two strings, respectively. The similarity ranges from 0 to 1. 4 Experiments 4.1 Method We collected 1,118 technical reports published in Mongolian from the “Mongolian IT Park”2 and used them as a Mongolian corpus. The number of phrase types and phrase tokens in our corpus were 110,458 and 263,512, respectively. We collected 111,116 Katakana words from multiple Japanese dictionaries, most of which were technical term dictionaries. We evaluated our method from four perspectives: “stemming”, “loanword extraction”, “translation extraction”, and “computational cost.” We will discuss these further in Sections 4.2-4.5, respectively. 4.2 Evaluating stemming We randomly selected 50 Mongolian technical 2 http://www.itpark.mn/ (May, 2006) reports from our corpus, and used them to evaluate the accuracy of our stemming method. These technical reports were related to: medical science (17), geology (10), light industry (14), agriculture (6), and sociology (3). In these 50 reports, the number of phrase types including conventional Mongolian nouns and loanword nouns was 961 and 206, respectively. We also found six phrases including loanword verbs, which were not used in the evaluation. Table 2 shows the results of our stemming experiment, in which the accuracy for conventional Mongolian nouns was 98.7% and the accuracy for loanwords was 94.6%. Our stemming method is practical, and can also be used for morphological analysis of Mongolian corpora. 
We analyzed the reasons for any failures, and found that for 12 conventional nouns and 11 loanwords, the suffixes were incorrectly segmented. 4.3 Evaluating loanword extraction We used our stemming method on our corpus and selected the most frequently used 1,300 words. We used these words to evaluate the accuracy of our loanword extraction method. Of these 1,300 words, 165 were loanwords. We varied the threshold for the similarity, and investigated the relationship between precision and recall. Recall is the ratio of the number of correct loanwords extracted by our method to the total number of correct loanwords. Precision is the ratio of the number of correct loanwords extracted by our method to the total number of words extracted by our method. We extracted loanwords using rules (a)–(g) defined in Section 3.4. As a result, 139 words were extracted. Table 3 shows the precision and recall of each rule. The precision and recall showed high values using “All rules”, which combined the words extracted by rules (a)–(g) independently. We also extracted loanwords using the phonetic similarity, as discussed in Sections 3.6 and 3.7. Table 2: Results of our noun stemming method. No. of each phrase type Accuracy (%) Conventional nouns 961 98.7 Loanwords 206 94.6 662 We used the N-gram retrieval method to obtain up to the top 500 Katakana words that were similar to each candidate loanword. Then, we selected up to the top five pairs of a loanword and a Katakana word whose similarity computed using Equation (1) was greater than 0.6. Table 4 shows the results of our similarity-based extraction. Both the precision and the recall for the similarity-based loanword extraction were lower than those for the “All rules” data listed in Table 3. Table 4: Precision and recall for our similarity-based loanword extraction. Words extracted automatically Extracted correct loanwords Precision (%) Recall (%) 3,479 109 3.1 66.1 We also evaluated the effectiveness of a combination of the N-gram and DP matching methods. We performed similarity-based extraction after rule-based extraction. Table 5 shows the results, in which the data of the “Rule” are identical to those of the “All rules” data listed in Table 3. However, the “Similarity” data are not identical to those listed in Table 4, because we performed similarity-based extraction using only the words that were not extracted by rule-based extraction. When we combined the rule-based and similarity-based methods, the recall improved from 84.2% to 91.5%. The recall value should be high when a human expert modifies or verifies the resultant dictionary. Figure 5 shows example of extracted loanwords in Mongolian and their English glosses. 4.4 Evaluating Translation extraction In the row “Both” shown in Table 5, 151 loanwords were extracted, for each of which we selected up to the top five Katakana words whose similarity computed using Equation (1) was greater than 0.6 as Table 3: Precision and recall for rule-based loanword extraction. Rules (a) (b) (c) (d) (e) (f) (g) All rules Words extracted automatically 102 63 21 6 4 5 24 150 Extracted correct loanwords 101 60 20 5 4 5 19 139 Precision (%) 99.0 95.2 95.2 83.3 Table 5: Precision and recall of different loanword extraction methods. No. of words No. that were correct Precision (%) Recall (%) Rule 150 139 92.7 84.2 Similarity 60 12 20.0 46.2 Both 210 151 71.2 91.5 Mongolian English gloss альбумин лаборатор механизм митохондр albumin laboratory mechanism mitochondria Figure 5: Example of extracted loanwords. translations. 
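Before turning to the translation-extraction results, the two-stage matching and the rule/similarity combination described above can be sketched as follows. A character-bigram Dice prefilter stands in here for the Okapi BM25 retrieval the paper actually uses, rule_based_is_loanword stands in for rules (a)–(g) of Section 3.4, and similarity is any implementation of Equation (1) (for example, the earlier sketch); only the cut-offs (top 500 retrieved candidates, top five pairs, similarity threshold 0.6) are taken from the text.

```python
# Two-stage loanword/translation matching: fast order-free retrieval first,
# then DP-based reranking with Equation (1).

MAX_CANDIDATES = 500   # Katakana words kept by the N-gram retrieval stage
TOP_K = 5              # translation pairs kept per loanword
THRESHOLD = 0.6        # minimum Equation (1) similarity


def char_bigrams(s):
    return {s[i:i + 2] for i in range(len(s) - 1)}


def bigram_dice(query, target):
    """Cheap order-free prefilter (a stand-in for the BM25 retrieval model)."""
    q, t = char_bigrams(query), char_bigrams(target)
    return 2.0 * len(q & t) / (len(q) + len(t)) if q and t else 0.0


def retrieve_candidates(loanword, katakana_index):
    """Stage 1: rank the romanized Katakana dictionary, keep the top 500."""
    ranked = sorted(katakana_index,
                    key=lambda w: bigram_dice(loanword, w), reverse=True)
    return ranked[:MAX_CANDIDATES]


def match_loanword(loanword, katakana_index, similarity):
    """Stage 2: rerank the candidates with DP matching (Equation (1))."""
    scored = [(w, similarity(loanword, w))
              for w in retrieve_candidates(loanword, katakana_index)]
    scored = [(w, s) for w, s in scored if s > THRESHOLD]
    return sorted(scored, key=lambda p: p[1], reverse=True)[:TOP_K]


def extract_loanwords(words, rule_based_is_loanword, katakana_index, similarity):
    """Combine the two methods as in Table 5: rules first, then
    similarity-based extraction only for words the rules did not catch."""
    extracted = {}
    for w in words:
        if rule_based_is_loanword(w):
            extracted[w] = match_loanword(w, katakana_index, similarity)
        else:
            matches = match_loanword(w, katakana_index, similarity)
            if matches:            # phonetically close to some Katakana word
                extracted[w] = matches
    return extracted
```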
As a result, Japanese translations were extracted for 109 loanwords. Table 6 shows the results, in which the precision and recall of extracting Japanese–Mongolian translations were 56.2% and 72.2%, respectively. We analyzed the data and identified the reasons for any failures. For five loanwords, the N-gram retrieval failed to search for the similar Katakana words. For three loanwords, the phonetic similarity computed using Equation (1) was not high enough for a correct translation. For 27 loanwords, the Japanese translations did not exist inherently. For seven loanwords, the Japanese translations existed, but were not included in our Katakana dictionary. Figure 6 shows the Japanese translations extracted for the loanwords shown in Figure 5. Table 6: Precision and recall for translation extraction. No. of translations extracted automatically No. of extracted correct translations Precision (%) Recall (%) 194 109 56.2 72.2 100 100 79.2 92.7 Recall (%) 61.2 36.4 12.1 3.0 2.4 3.03 11.5 84.2 663 Japanese Mongolian English gloss アルブミン ラボラトリー メカニズム ミトコンドリア альбумин лаборатор механизм митохондр albumin laboratory mechanism mitochondria Figure 6: Japanese translations extracted for the loanwords shown in Figure 5. 4.5 Evaluating computational cost We randomly selected 100 loanwords from our corpus, and used them to evaluate the computational cost of the different extraction methods. We compared the computation time and the accuracy of “N-gram”, “DP matching”, and “N-gram + DP matching” methods. The experiments were performed using the same PC (CPU = Pentium III 1 GHz dual, Memory = 2 GB). Table 7 shows the improvement in computation time by “N-gram + DP matching” on “DP matching”, and the average rank of the correct translations for “N-gram”. We improved the efficiency, while maintaining the sorting accuracy of the translations. Table 7: Evaluation of the computational cost. Method N-gram DP N-gram + DP Loanwords 100 Computation time (sec.) 95 136,815 293 Extracted correct translations 66 66 66 Average rank of correct translations 44.8 2.7 2.7 5 Conclusion We proposed methods for extracting loanwords from Cyrillic Mongolian corpora and producing a Japanese–Mongolian bilingual dictionary. Our research is the first serious effort in producing dictionaries of loanwords and their translations targeting Mongolian. We devised our own rules to extract loanwords from Mongolian corpora. We also extracted words in Mongolian corpora that are phonetically similar to Japanese Katakana words as loanwords. We also corresponded the extracted loanwords to Japanese words, and produced a Japanese–Mongolian bilingual dictionary. A noun stemming method that does not require noun dictionaries was also proposed. Finally, we evaluated the effectiveness of the components experimentally. References Terumasa Ehara, Suzushi Hayata, and Nobuyuki Kimura. 2004. Mongolian morphological analysis using ChaSen. Proceedings of the 10th Annual Meeting of the Association for Natural Language Processing, pp. 709-712. (In Japanese). Atsushi Fujii, Tetsuya Ishikawa, and Jong-Hyeok Lee. 2004. Term extraction from Korean corpora via Japanese. Proceedings of the 3rd International Workshop on Computational Terminology, pp. 71-74. Pascal Fung and Kathleen McKeown. 1996. Finding terminology translations from non-parallel corpora. Proceedings of the 5th Annual Workshop on Very Large Corpora, pp. 53-87. Wai Lam, Ruizhang Huang, and Pik-Shan Cheung. 2004. 
Learning phonetic similarity for matching named entity translations and mining new translations. Proceedings of the 27th Annual International ACM SIGIR Conference on Research and Development in Information Retrieval, pp. 289-296.

Sung Hyun Myaeng and Kil-Soon Jeong. 1999. Back-transliteration of foreign words for information retrieval. Information Processing and Management, Vol. 35, No. 4, pp. 523-540.

Jong-Hoon Oh and Key-Sun Choi. 2001. Automatic extraction of transliterated foreign words using hidden Markov model. Proceedings of the International Conference on Computer Processing of Oriental Languages, pp. 433-438.

Shigeo Ozawa. 2000. Modern Mongolian Dictionary. Daigakushorin.

Stephen E. Robertson, Steve Walker, Susan Jones, Micheline Hancock-Beaulieu, and Mike Gatford. 1995. Okapi at TREC-3. Proceedings of the Third Text REtrieval Conference (TREC-3), NIST Special Publication 500-226, pp. 109-126.

Enkhbayar Sanduijav, Takehito Utsuro, and Satoshi Sato. 2005. Mongolian phrase generation and morphological analysis based on phonological and morphological constraints. Journal of Natural Language Processing, Vol. 12, No. 5, pp. 185-205. (In Japanese).

Frank Smadja, Vasileios Hatzivassiloglou, and Kathleen R. McKeown. 1996. Translating collocations for bilingual lexicons: A statistical approach. Computational Linguistics, Vol. 22, No. 1, pp. 1-38.

Bayarmaa Ts. 2002. Mongolian grammar in grades I-IV. (In Mongolian).
Proceedings of the 21st International Conference on Computational Linguistics and 44th Annual Meeting of the ACL, pages 665–672, Sydney, July 2006. c⃝2006 Association for Computational Linguistics An Unsupervised Morpheme-Based HMM for Hebrew Morphological Disambiguation Meni Adler Department of Computer Science Ben Gurion University of the Negev 84105 Beer Sheva, Israel [email protected] Michael Elhadad Department of Computer Science Ben Gurion University of the Negev 84105 Beer Sheva, Israel [email protected] Abstract Morphological disambiguation is the process of assigning one set of morphological features to each individual word in a text. When the word is ambiguous (there are several possible analyses for the word), a disambiguation procedure based on the word context must be applied. This paper deals with morphological disambiguation of the Hebrew language, which combines morphemes into a word in both agglutinative and fusional ways. We present an unsupervised stochastic model – the only resource we use is a morphological analyzer – which deals with the data sparseness problem caused by the affixational morphology of the Hebrew language. We present a text encoding method for languages with affixational morphology in which the knowledge of word formation rules (which are quite restricted in Hebrew) helps in the disambiguation. We adapt HMM algorithms for learning and searching this text representation, in such a way that segmentation and tagging can be learned in parallel in one step. Results on a large scale evaluation indicate that this learning improves disambiguation for complex tag sets. Our method is applicable to other languages with affix morphology. 1 Introduction Morphological disambiguation is the process of assigning one set of morphological features to each individual word in a text, according to the word context. In this work, we investigate morphological disambiguation in Modern Hebrew. We explore unsupervised learning method, which is more challenging than the supervised case. The main motivation for this approach is that despite the development ∗This work is supported by the Lynn and William Frankel Center for Computer Sciences, and by the Knowledge Center for Hebrew Processing, Israel Science Ministry. of annotated corpora in Hebrew1, there is still not enough data available for supervised training. The other reason, is that unsupervised methods can handle the dynamic nature of Modern Hebrew, as it evolves over time. In the case of English, because morphology is simpler, morphological disambiguation is generally covered under the task of part-of-speech tagging. The main morphological variations are embedded in the tag name (for example, Ns and Np for noun singular or plural). The tagging accuracy of supervised stochastic taggers is around 96%97% (Manning and Schutze, 1999, 10.6.1). Merialdo (1994) reports an accuracy of 86.6% for an unsupervised word-based HMM, trained on a corpus of 42,186 sentences (about 1M words), over a tag set of 159 different tags. Elworthy (1994), in contrast, reports an accuracy of 75.49%, 80.87% and 79.12% for unsupervised word-based HMM trained on parts of the LOB corpora, with a tagset of 134 tags. With good initial conditions, such as good approximation of the tag distribution for each word, Elworthy reports an improvement to 94.6%, 92.27% and 94.51% on the same data sets. Merialdo, on the other hand, reports an improvement to 92.6% and 94.4% for the case where 100 and 2000 sentences of the training corpus are manually tagged. 
Modern Hebrew is characterized by rich morphology, with a high level of ambiguity. On average, in our corpus, the number of possible analyses per word reached 2.4 (in contrast to 1.4 for English). In Hebrew, several morphemes combine into a single word in both agglutinative and fusional ways. This results in a potentially high number of tags for each word. In contrast to English tag sets whose sizes range from 48 to 195, the number of tags for Hebrew, based on all combinations of the morphological attributes (part-of-speech, gender, number, person, tense, status, and the affixes’ properties2), 1The Knowledge Center for Hebrew processing is developing such corpora: http://mila.cs.technion.ac.il/ 2The list of morphological attributes is described in (Yona and Wintner, 2005). An in-depth discussion of the Hebrew word form is provided in (Allon, 1995, pp. 665 can grow theoretically to about 300,000 tags. In practice, we found only 1,934 tags in a corpus of news stories we gathered, which contains about 6M words. The large size of such a tag set (about 10 times larger than the most comprehensive English tag set) is problematic in term of data sparseness. Each morphological combination appears rarely, and more samples are required in order to learn the probabilistic model. In this paper, we hypothesize that the large set of morphological features of Hebrew words, should be modeled by a compact morpheme model, based on the segmented words (into prefix, baseform, and suffix). Our main result is that best performance is obtained when learning segmentation and morpheme tagging in one step, which is made possible by an appropriate text representation. 2 Hebrew and Arabic Tagging Previous Work Several works have dealt with Hebrew tagging in the past decade. In Hebrew, morphological analysis requires complex processing according to the rules of Hebrew word formation. The task of a morphological analyzer is to produce all possible analyses for a given word. Recent analyzers provide good performance and documentation of this process (Yona and Wintner, 2005; Segal, 2000). Morphological analyzers rely on a dictionary, and their performance is, therefore, impacted by the occurrence of unknown words. The task of a morphological disambiguation system is to pick the most likely analysis produced by an analyzer in the context of a full sentence. Levinger et al. (1995) developed a context-free method in order to acquire the morpho-lexical probabilities, from an untagged corpus. Their method handles the data sparseness problem by using a set of similar words for each word, built according to a set of rules. The rules produce variations of the morphological properties of the word analyses. Their tests indicate an accuracy of about 88% for context-free analysis selection based on the approximated analysis distribution. In tests we reproduced on a larger data set (30K tagged words), the accuracy is only 78.2%. In order to improve the results, the authors recommend merging their method together with other morphological disambiguation methods – which is the approach we pursue in this work. Levinger’s morphological disambiguation system (Levinger, 1992) combines the above approximated probabilities with an expert system, based on a manual set of 16 syntactic constraints . In the first phase, the expert system is applied, dis24–86). ambiguating 35% of the ambiguous words with an accuracy of 99.6%. 
In order to increase the applicability of the disambiguation, approximated probabilities are used for words that were not disambiguated in the first stage. Finally, the expert system is used again over the new probabilities that were set in the previous stage. Levinger reports an accuracy of about 94% for disambiguation of 85% of the words in the text (overall 80% disambiguation). The system was also applied to prune out the least likely analyses in a corpus but without, necessarily, selecting a single analysis for each word. For this task, an accuracy of 94% was reported while reducing 92% of the ambiguous analyses. Carmel and Maarek (1999) use the fact that on average 45% of the Hebrew words are unambiguous, to rank analyses, based on the number of disambiguated occurrences in the text, normalized by the total number of occurrences for each word. Their application – indexing for an information retrieval system – does not require all of the morphological attributes but only the lemma and the PoS of each word. As a result, for this case, 75% of the words remain with one analysis with 95% accuracy, 20% with two analyses and 5% with three analyses. Segal (2000) built a transformation-based tagger in the spirit of Brill (1995). In the first phase, the analyses of each word are ranked according to the frequencies of the possible lemmas and tags in a training corpus of about 5,000 words. Selection of the highest ranked analysis for each word gives an accuracy of 83% of the test text – which consists of about 1,000 words. In the second stage, a transformation learning algorithm is applied (in contrast to Brill, the observed transformations are not applied, but used for re-estimation of the word couples probabilities). After this stage, the accuracy is about 93%. The last stage uses a bottomup parser over a hand-crafted grammar with 150 rules, in order to select the analysis which causes the parsing to be more accurate. Segal reports an accuracy of 95%. Testing his system over a larger test corpus, gives poorer results: Lembersky (2001) reports an accuracy of about 85%. Bar-Haim et al. (2005) developed a word segmenter and PoS tagger for Hebrew. In their architecture, words are first segmented into morphemes, and then, as a second stage, these morphemes are tagged with PoS. The method proceeds in two sequential steps: segmentation into morphemes, then tagging over morphemes. The segmentation is based on an HMM and trained over a set of 30K annotated words. The segmentation step reaches an accuracy of 96.74%. PoS tagging, based on unsupervised estimation which combines a small annotated corpus with an untagged corpus of 340K 666 Word Segmentation Tag Translation bclm bclm PNN name of a human rights association (Betselem) bclm bclm VB while taking a picture bclm bcl-m cons-NNM-suf their onion bclm b-cl-m P1-NNM-suf under their shadow bclm b-clm P1-NNM in a photographer bclm b-clm P1-cons-NNM in a photographer bclm b-clm P1-h-NNM in the photographer hn‘im h-n‘im P1-VBR that are moving hn‘im hn‘im P1-h-JJM the lovely hn‘im hn‘im VBP made pleasant Table 1: Possible analyses for the words bclm hn‘im words by using smoothing technique, gives an accuracy of 90.51%. As noted earlier, there is as yet no large scale Hebrew annotated corpus. We are in the process of developing such a corpus, and we have developed tagging guidelines (Elhadad et al., 2005) to define a comprehensive tag set, and assist human taggers achieve high agreement. 
The results discussed above should be taken as rough approximations of the real performance of the systems, until they can be re-evaluated on such a large scale corpus with a standard tag set. Arabic is a language with morphology quite similar to Hebrew. Theoretically, there might be 330,000 possible morphological tags, but in practice, Habash and Rambow (2005) extracted 2,200 different tags from their corpus, with an average number of 2 possible tags per word. As reported by Habash and Rambow, the first work on Arabic tagging which used a corpus for training and evaluation was the work of Diab et al. (2004). Habash and Rambow were the first to use a morphological analyzer as part of their tagger. They developed a supervised morphological disambiguator, based on training corpora of two sets of 120K words, which combines several classifiers of individual morphological features. The accuracy of their analyzer is 94.8% – 96.2% (depending on the test corpus). An unsupervised HMM model for dialectal Arabic (which is harder to be tagged than written Arabic), with accurracy of 69.83%, was presented by Duh and Kirchhoff(2005). Their supervised model, trained on a manually annotated corpus, reached an accuracy of 92.53%. Arabic morphology seems to be similar to Hebrew morphology, in term of complexity and data sparseness, but comparison of the performances of the baseline tagger used by Habash and Rambow – which selects the most frequent tag for a given word in the training corpus – for Hebrew and Arabic, shows some intriguing differences: 92.53% for Arabic and 71.85% for Hebrew. Furthermore, as mentioned above, even the use of a sophisticated context-free tagger, based on (Levinger et al., 1995), gives low accuracy of 78.2%. This might imply that, despite the similarities, morphological disambiguation in Hebrew might be harder than in Arabic. It could also mean that the tag set used for the Arabic corpora has not been adapted to the specific nature of Arabic morphology (a comment also made in (Habash and Rambow, 2005)). We propose an unsupervised morpheme-based HMM to address the data sparseness problem. In contrast to Bar-Haim et al., our model combines segmentation and morphological disambiguation, in parallel. The only resource we use in this work is a morphological analyzer. The analyzer itself can be generated from a word list and a morphological generation module, such as the HSpell wordlist (Har’el and Kenigsberg, 2004). 3 Morpheme-Based Model for Hebrew 3.1 Morpheme-Based HMM The lexical items of word-based models are the words of the language. The implication of this decision is that both lexical and syntagmatic relations of the model, are based on a word-oriented tagset. With such a tagset, it must be possible to tag any word of the language with at least one tag. Let us consider, for instance, the Hebrew phrase bclm hn‘im3, which contains two words. The word bclm has several possible morpheme segmentations and analyses4 as described in Table 1. In wordbased HMM, we consider such a phrase to be generated by a Markov process, based on the wordoriented tagset of N = 1934 tags/states and about M = 175K word types. Line W of Table 2 describes the size of a first-order word-based HMM, built over our corpus. 
In this model, we found 834 entries for the Π vector (which models the distribution of tags in first position in sentences) out of possibly N = 1934, about 250K entries for the A matrix (which models the transition probabilities from tag to tag) out of possibly N 2 ≈3.7M, and about 300K entries for the B matrix (which models 3Transcription according to Ornan (2002). 4The tagset we use for the annotation follows the guidelines we have developed (Elhadad et al., 2005). 667 States PI A A2 B B2 W 1934 834 250K 7M 300K 5M M 202 145 20K 700K 130K 1.7M Table 2: Model Sizes the emission probabilities from tag to word) out of possibly M · N ≈350M. For the case of a secondorder HMM, the size of the A2 matrix (which models the transition probabilities from two tags to the third one), grows to about 7M entries, where the size of the B2 matrix (which models the emission probabilities from two tags to a word) is about 5M. Despite the sparseness of these matrices, the number of their entries is still high, since we model the whole set of features of the complex word forms. Let us assume, that the right segmentation for the sentence is provided to us – for example: b clm hn‘im – as is the case for English text. In such a way, the observation is composed of morphemes, generated by a Markov process, based on a morpheme-based tagset. The size of such a tagset for Hebrew is about 200, where the size of the Π,A,B,A2 and B2 matrices is reduced to 145, 16K, 140K, 700K, and 1.7M correspondingly, as described in line M of Table 2 – a reduction of 90% when compared with the size of a word-based model. The problem in this approach, is that ”someone” along the way, agglutinates the morphemes of each word leaving the observed morphemes uncertain. For example, the word bclm can be segmented in four different ways in Table 1, as indicated by the placement of the ’-’ in the Segmentation column, while the word hn‘im can be segmented in two different ways. In the next section, we adapt the parameter estimation and the searching algorithms for such uncertain output observation. 3.2 Learning and Searching Algorithms for Uncertain Output Observation In contrast to standard HMM, the output observations of the above morpheme-based HMM are ambiguous. We adapted Baum-Welch (Baum, 1972) and Viterbi (Manning and Schutze, 1999, 9.3.2) algorithms for such uncertain observation. We first formalize the output representation and then describe the algorithms. Output Representation The learning and searching algorithms of HMM are based on the output sequence of the underlying Markov process. For the case of a morpheme-based model, the output sequence is uncertain – we don’t see the emitted morphemes but the words they form. If, for instance, the Markov process emitted the morphemes b clm h n‘im, we would see two words (bclm hn‘im) instead. In order to handle the output ambiguity, we use static knowledge of how morphemes are combined into a word, such as the four known combinations of the word bclm, the two possible combinations of the word hn‘im, and their possible tags within the original words. Based on this information, we encode the sentence into a structure that represents all the possible “readings” of the sentence, according to the possible morpheme combinations of the words, and their possible tags. The representation consists of a set of vectors, each vector containing the possible morphemes and their tags for each specific “time” (sequential position within the morpheme expansion of the words of the sentence). 
A morpheme is represented by a tuple (symbol, state, prev, next), where symbol denotes a morpheme, state is one possible tag for this morpheme, prev and next are sets of indexes, denoting the indexes of the morphemes (of the previous and the next vectors) that precede and follow the current morpheme in the overall lattice, representing the sentence. Fig. 2 describes the representation of the sentence bclm hn‘im. An emission is denoted in this figure by its symbol, its state index, directed edges from its previous emissions, and directed edges to its next emissions. In order to meet the condition of Baum-Eagon inequality (Baum, 1972) that the polynomial P(O|µ) – which represents the probability of an observed sequence O given a model µ – be homogeneous, we must add a sequence of special EOS (end of sentence) symbols at the end of each path up to the last vector, so that all the paths reach the same length. The above text representation can be used to model multi-word expressions (MWEs). Consider the Hebrew sentence: hw’ ‘wrk dyn gdwl, which can be interpreted as composed of 3 units (he lawyer great / he is a great lawyer) or as 4 units (he edits law big / he is editing an important legal decision). In order to select the correct interpretation, we must determine whether ‘wrk dyn is an MWE. This is another case of uncertain output observation, which can be represented by our text encoding, as done in Fig. 1. ‘wrk dyn 6 gdwl 19 EOS 17 EOS 17 dyn 6 gdwl 19 ‘wrk 18 hw’ 20 Figure 1: The sentence hw’ ‘wrk dyn gdwl This representation seems to be expensive in term of the number of emissions per sentence. However, we observe in our data that most of the words have only one or two possible segmentations, and most of the segmentations consist of at most one affix. In practice, we found the average number of emissions per sentence in our corpus (where each symbol is counted as the number of its predecessor emissions) to be 455, where the average number of words per sentence is about 18. That is, the 668 cost of operating over an ambiguous sentence representation increases the size of the sentence (from 18 to 455), but on the other hand, it reduces the probabilistic model by a factor of 10 (as discussed above). Morphological disambiguation over such a sequence of vectors of uncertain morphemes is similar to words extraction in automatic speech recognition (ASR)(Jurafsky and Martin, 2000, chp. 5,7). The states of the ASR model are phones, where each observation is a vector of spectral features. Given a sequence of observations for a sentence, the encoding – based on the lattice formed by the phones distribution of the observations, and the language model – searches for the set of words, made of phones, which maximizes the acoustic likelihood and the language model probabilities. In a similar manner, the supervised training of a speech recognizer combines a training corpus of speech wave files, together with word-transcription, and language model probabilities, in order to learn the phones model. There are two main differences between the typical ASR model and ours: (1) an ASR decoder deals with one aspect - segmentation of the observations into a set of words, where this segmentation can be modeled at several levels: subphones, phones and words. These levels can be trained individually (such as training a language model from a written corpus, and training the phones model for each word type, given transcripted wave file), and then combined together (in a hierarchical model). 
Morphological disambiguation over uncertain morphemes, on the other hand, deals with both morpheme segmentation and the tagging of each morpheme with its morphological features. Modeling morpheme segmentation, within a given word, without its morphology features would be insufficient. (2) The supervised resources of ASR are not available for morphological disambiguation: we don’t have a model of morphological features sequences (equivalent to the language model of ASR) nor a tagged corpus (equivalent to the transcripted wave files of ASR). These two differences require a design which combines the two dimensions of the problem, in order to support unsupervised learning (and searching) of morpheme sequences and their morphological features, simultaneously. Parameter Estimation We present a variation of the Baum-Welch algorithm (Baum, 1972) which operates over the lattice representation we have defined above. The algorithm starts with a probabilistic model µ (which can be chosen randomly or obtained from good initial conditions), and at each iteration, a new model ¯µ is derived in order to better explain the given output observations. For a given sentence, we define T as the number of words in the sentence, and ¯T as the number of vectors of the output representation O = {ot}, 1 ≤t ≤¯T, where each item in the output is denoted by ol t = (sym, state, prev, next), 1 ≤t ≤¯T, 1 ≤l ≤|ot|. We define α(t, l) as the probability to reach ol t at time t, and β(t, l) as the probability to end the sequence from ol t. Fig. 3 describes the expectation and the maximization steps of the learning algorithm for a first-order HMM. The algorithm works in O( ˙T ) time complexity, where ˙T is the total number of symbols in the output sequence encoding, where each symbol is counted as the size of its prev set. Searching for best state sequence The searching algorithm gets an observation sequence O and a probabilistic model µ, and looks for the best state sequence that generates the observation. We define δ(t, l) as the probability of the best state sequence that leads to emission ol t, and ψ(t, l) as the index of the emission at time t−1 that precedes ol t in the best state sequence that leads to it. Fig. 4 describes the adaptation of the Viterbi (Manning and Schutze, 1999, 9.3.2) algorithm to our text representation for first-order HMM, which works in O( ˙T ) time. 4 Experimental Results We ran a series of experiments on a Hebrew corpus to compare various approaches to the full morphological disambiguation and PoS tagging tasks. The training corpus is obtained from various newspaper sources and is characterized by the following statistics: 6M word occurrences, 178,580 distinct words, 64,541 distinct lemmas. Overall, the ambiguity level is 2.4 (average number of analyses per word). We tested the results on a test corpus, manually annotated by 2 taggers according to the guidelines we published and checked for agreement. The test corpus contains about 30K words. We compared two unsupervised models over this data set: Word model [W], and Morpheme model [M]. We also tested two different sets of initial conditions. Uniform distribution [Uniform]: For each word, each analysis provided by the analyzer is estimated with an equal likelihood. Context Free approximation [CF]: We applied the CF algorithm of Levinger et al.(1995) to estimate the likelihood of each analysis. Table 3 reports the results of full morphological disambiguation. 
For both the morpheme and word models, three types of models were tested: [1] first-order HMM, [2-] partial second-order HMM, in which only state transitions were modeled (excluding the B2 matrix), and [2] second-order HMM (including the B2 matrix).

Figure 2: Representation of the sentence bclm hn‘im. (The lattice diagram is not reproduced here.)

Expectation:

\alpha(1,l) = \pi_{o_1^l.state} \, b_{o_1^l.state,\,o_1^l.sym}    (1)
\alpha(t,l) = b_{o_t^l.state,\,o_t^l.sym} \sum_{l' \in o_t^l.prev} \alpha(t-1,l') \, a_{o_{t-1}^{l'}.state,\,o_t^l.state}

\beta(\bar{T},l) = 1    (2)
\beta(t,l) = \sum_{l' \in o_t^l.next} a_{o_t^l.state,\,o_{t+1}^{l'}.state} \, b_{o_{t+1}^{l'}.state,\,o_{t+1}^{l'}.sym} \, \beta(t+1,l')

Maximization:

\bar{\pi}_i = \frac{\sum_{l:\,o_1^l.state=i} \alpha(1,l)\beta(1,l)}{\sum_l \alpha(1,l)\beta(1,l)}    (3)

\bar{a}_{i,j} = \frac{\sum_{t=2}^{\bar{T}} \sum_{l:\,o_t^l.state=j} \sum_{l' \in o_t^l.prev:\,o_{t-1}^{l'}.state=i} \alpha(t-1,l') \, a_{i,j} \, b_{j,\,o_t^l.sym} \, \beta(t,l)}{\sum_{t=1}^{\bar{T}-1} \sum_{l:\,o_t^l.state=i} \alpha(t,l)\beta(t,l)}    (4)

\bar{b}_{i,k} = \frac{\sum_{t=1}^{\bar{T}} \sum_{l:\,o_t^l.sym=k,\,o_t^l.state=i} \alpha(t,l)\beta(t,l)}{\sum_{t=1}^{\bar{T}} \sum_{l:\,o_t^l.state=i} \alpha(t,l)\beta(t,l)}    (5)

Figure 3: The learning algorithm for the first-order model.

Initialization:

\delta(1,l) = \pi_{o_1^l.state} \, b_{o_1^l.state,\,o_1^l.sym}    (6)

Induction:

\delta(t,l) = \max_{l' \in o_t^l.prev} \delta(t-1,l') \, a_{o_{t-1}^{l'}.state,\,o_t^l.state} \, b_{o_t^l.state,\,o_t^l.sym}    (7)
\psi(t,l) = \arg\max_{l' \in o_t^l.prev} \delta(t-1,l') \, a_{o_{t-1}^{l'}.state,\,o_t^l.state} \, b_{o_t^l.state,\,o_t^l.sym}    (8)

Termination and path readout:

\bar{X}_{\bar{T}} = \arg\max_{1 \le l \le |o_{\bar{T}}|} \delta(\bar{T},l)    (9)
\bar{X}_t = \psi(t+1, \bar{X}_{t+1})
P(\bar{X}) = \max_{1 \le l \le |o_{\bar{T}}|} \delta(\bar{T},l)    (10)

Figure 4: The searching algorithm for the first-order model.

Order   Uniform   CF
W 1     82.01     84.08
W 2-    80.44     85.75
W 2     79.88     85.78
M 1     81.08     84.54
M 2-    81.53     88.5
M 2     83.39     85.83

Table 3: Morphological disambiguation accuracy (%).

Analysis

If we consider the tagger which selects the most probable morphological analysis for each word in the text, according to the Levinger et al. (1995) approximations, with an accuracy of 78.2%, as the baseline tagger, four steps of error reduction can be identified. (1) Contextual information: The simplest first-order word-based HMM with uniform initial conditions achieves an error reduction of 17.5% (78.2 – 82.01). (2) Initial conditions: Error reductions in the range 11.5% – 37.8% (82.01 – 84.08 for word model 1, and 81.53 – 88.5 for morpheme model 2-) were achieved by initializing the various models with the context-free approximations. While this observation confirms Elworthy (1994), the impact of the error reduction is much smaller than reported there for English - about 70% (79 – 94). The key difference (besides the unclear characteristics of Elworthy's initial conditions, since he made use of an annotated corpus) is the much higher quality of the uniform distribution for Hebrew. (3) Model order: The partial second-order HMM [2-] produced the best results for both the word (85.75%) and morpheme (88.5%) models with the context-free initial conditions. The full second-order HMM [2] did not improve on the accuracy of the partial second-order model, but achieved the best results for the uniform-distribution morpheme model. This is because the context-free approximation does not take into account the tag of the previous word, which is part of model 2. We believe that initializing the morpheme model from a small annotated corpus would set a much stronger initial condition for this model. (4) Model type: The main result of this paper is the error reduction of the morpheme model with respect to the word model: about 19.3% (85.75 – 88.5).
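As an illustration of the adapted search in Figure 4, here is a minimal Python sketch over the (symbol, state, prev, next) lattice of Section 3.2. The data structures (0-based time indexes, dictionaries for π, A and B, one list of emissions per position) and all names are our own; the sketch also assumes the lattice has already been padded with EOS emissions so that every path reaches the final vector, as the paper requires.

```python
from collections import namedtuple

# One lattice emission, as in Section 3.2: prev/next hold indexes into the
# neighbouring position vectors of the sentence lattice.
Emission = namedtuple("Emission", ["sym", "state", "prev", "next"])


def viterbi_lattice(lattice, pi, A, B):
    """Adapted Viterbi search (Figure 4) over an uncertain-output lattice.

    lattice  -- list of vectors; lattice[t] is the list of Emissions at time t
    pi[i]    -- initial probability of state i
    A[i][j]  -- transition probability from state i to state j
    B[i][k]  -- emission probability of symbol k from state i
    Returns the best path as a list of (symbol, state) pairs."""
    T = len(lattice)
    delta = [[0.0] * len(v) for v in lattice]  # best score reaching each emission
    psi = [[None] * len(v) for v in lattice]   # backpointer into time t-1

    # Initialization (Equation 6)
    for l, e in enumerate(lattice[0]):
        delta[0][l] = pi[e.state] * B[e.state][e.sym]

    # Induction (Equations 7-8): only predecessors licensed by the lattice,
    # i.e. belonging to a consistent segmentation, are considered.
    for t in range(1, T):
        for l, e in enumerate(lattice[t]):
            best_score, best_prev = -1.0, None
            for lp in e.prev:
                prev_e = lattice[t - 1][lp]
                score = delta[t - 1][lp] * A[prev_e.state][e.state]
                if score > best_score:
                    best_score, best_prev = score, lp
            if best_prev is None:              # emission with no predecessor
                best_score = 0.0
            delta[t][l] = best_score * B[e.state][e.sym]
            psi[t][l] = best_prev

    # Termination and path readout (Equations 9-10)
    l = max(range(len(lattice[-1])), key=lambda i: delta[-1][i])
    path = []
    for t in range(T - 1, -1, -1):
        e = lattice[t][l]
        path.append((e.sym, e.state))
        l = psi[t][l]
    path.reverse()
    return path
```

The learning algorithm of Figure 3 can be adapted in exactly the same way, replacing the max/argmax over e.prev with sums over e.prev for α and over e.next for β.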
In addition, we apply the above models for the simpler task of segmentation and PoS tagging, as reported in Table 4. The task requires picking the correct morphemes of each word with their correct PoS (excluding all other morphological features). The best result for this task is obtained with the morpheme model 2: 92.32%. For this simpler task, the improvement brought by the morpheme model over the word model is less significant, but still consists of a 5% error reduction. Unknown words account for a significant chunk of the errors. Table 5 shows the distribution of errors contributed by unknown words (words that cannot be analyzed by the morphological analyzer). 7.5% of the words in the test corpus are unknown: 4% are not recognized at all by the morphological analyzer (marked as [None] in the taOrder Uniform CF W 1 91.07 91.47 W 290.45 91.93 W 2 90.21 91.84 M 1 89.23 91.42 M 289.77 91.76 M 2 91.42 92.32 Table 4: Segmentation and PoS Tagging ble), and for 3.5%, the set of analyses proposed by the analyzer does not contain the correct analysis [Missing]. We extended the lexicon to include missing and none lexemes of the closed sets. In addition, we modified the analyzer to extract all possible segmentations of unknown words, with all the possible tags for the segmented affixes, where the remaining unknown baseforms are tagged as UK. The model was trained over this set. In the next phase, the corpus was automatically tagged, according to the trained model, in order to form a tag distribution for each unknown word, according to its context and its form. Finally, the tag for each unknown word were selected according to its tag distribution. This strategy accounts for about half of the 7.5% unknown words. None Missing % Proper name 26 36 62 Closed Set 8 5.6 13.6 Other 16.5 5.4 21.9 Junk 2.5 0 2.5 53 47 100 Table 5: Unknown Word Distribution Table 6 shows the confusion matrix for known words (5% and up). The key confusions can be attributed to linguistic properties of Modern Hebrew: most Hebrew proper names are also nouns (and they are not marked by capitalization) – which explains the PN/N confusion. The verb/noun and verb/adjective confusions are explained by the nature of the participle form in Hebrew (beinoni) – participles behave syntactically almost in an identical manner as nouns. Correct Error % proper name noun 17.9 noun verb 15.3 noun proper name 6.6 verb noun 6.3 adjective noun 5.4 adjective verb 5.0 Table 6: Confusion Matrix for Known Words 5 Conclusions and Future Work In this work, we have introduced a new text encoding method that captures rules of word formation in a language with affixational morphology such as Hebrew. This text encoding method allows us to 671 learn in parallel segmentation and tagging rules in an unsupervised manner, despite the high ambiguity level of the morphological data (average number of 2.4 analyses per word). Reported results on a large scale corpus (6M words) with fully unsupervised learning are 92.32% for PoS tagging and 88.5% for full morphological disambiguation. In this work, we used the backoffsmoothing method, suggested by Thede and Harper (1999), with an extension of additive smoothing (Chen, 1996, 2.2.1) for the lexical probabilities (B and B2 matrices). To complete this study, we are currently investigating several smoothing techniques (Chen, 1996), in order to check whether the morpheme model is critical for the data sparseness problem, or whether it can be handled with smoothing over a word model. 
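The additive smoothing mentioned above for the lexical probabilities can be illustrated with a small sketch; the λ value, the dictionary layout and the handling of unseen symbols are our own illustrative choices, not the backoff scheme of Thede and Harper (1999) that the authors extend.

```python
def smooth_emissions(counts, vocab_size, lam=0.01):
    """Add-lambda (additive) smoothing of the lexical matrix B.

    counts[i][k] -- number of times state i emitted symbol k in training
    vocab_size   -- number of distinct symbols
    Returns smoothed probabilities P(k | i), plus reserved mass for any
    symbol never seen with state i."""
    B = {}
    for state, sym_counts in counts.items():
        total = sum(sym_counts.values()) + lam * vocab_size
        B[state] = {sym: (c + lam) / total for sym, c in sym_counts.items()}
        B[state]["<UNSEEN>"] = lam / total
    return B
```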
We are currently investigating two major methods to improve our results: first, we have started gathering a larger corpus of manually tagged text and plan to perform semi-supervised learning on a corpus of 100K manually tagged words. Second, we plan to improve the unknown word model, such as integrating it with named entity recognition system (Ben-Mordechai, 2005). References Emmanuel Allon. 1995. Unvocalized Hebrew Writing. Ben Gurion University Press. (in Hebrew). Roy Bar-Haim, Khalil Sima’an, and Yoad Winter. 2005. Choosing an optimal architecture for segmentation and pos-tagging of modern Hebrew. In Proceedings of ACL-05 Workshop on Computational Approaches to Semitic Languages. Leonard E. Baum. 1972. An inequality and associated maximization technique in statistical estimation for probabilistic functions of a Markov process. Inequalities, 3:1–8. Na’ama Ben-Mordechai. 2005. Named entities recognition in Hebrew. Master’s thesis, Ben Gurion University of the Negev, Beer Sheva, Israel. (in Hebrew). Eric Brill. 1995. Transformation-based errordriven learning and natural languge processing: A case study in part-of-speech tagging. Computational Linguistics, 21:543–565. David Carmel and Yoelle S. Maarek. 1999. Morphological disambiguation for Hebrew search systems. In Proceeding of NGITS-99. Stanley F. Chen. 1996. Building Probabilistic Models for Natural Language. Ph.D. thesis, Harvard University, Cambridge, MA. Mona Diab, Kadri Hacioglu, and Daniel Jurafsky. 2004. Automatic tagging of Arabic text: From raw text to base phrase chunks. In Proceeding of HLT-NAACL-04. Kevin Duh and Katrin Kirchhoff. 2005. Pos tagging of dialectal Arabic: A minimally supervised approach. In Proceedings of ACL-05 Workshop on Computational Approaches to Semitic Languages. Michael Elhadad, Yael Netzer, David Gabay, and Meni Adler. 2005. Hebrew morphological tagging guidelines. Technical report, Ben Gurion University, Dept. of Computer Science. David Elworthy. 1994. Does Baum-Welch reestimation help taggers? In Proceeding of ANLP-94. Nizar Habash and Owen Rambow. 2005. Arabic tokenization, part-of-speech tagging and morphological disambiguation in one fell swoop. In Proceeding of ACL-05. Nadav Har’el and Dan Kenigsberg. 2004. HSpell - the free Hebrew spell checker and morphological analyzer. Israeli Seminar on Computational Linguistics, December 2004. Daniel Jurafsky and James H. Martin. 2000. Speech and language processing. Prentice-Hall. Gennady Lembersky. 2001. Named entities recognition; compounds: approaches and recognitions methods. Master’s thesis, Ben Gurion University of the Negev, Beer Sheva, Israel. (in Hebrew). Moshe Levinger, Uzi Ornan, and Alon Itai. 1995. Learning morpholexical probabilities from an untagged corpus with an application to Hebrew. Computational Linguistics, 21:383–404. Moshe Levinger. 1992. Morhphological disambiguation in hebrew. Master’s thesis, Technion, Haifa, Israel. (in Hebrew). Christopher D. Manning and Hinrich Schutze. 1999. Foundation of Statistical Language Processing. MIT Press. Bernard Merialdo. 1994. Tagging English text with probabilistic model. Computatinal Linguistics, 20:155–171. Uzi Ornan. 2002. Hebrew in latin script. L˘eˇson´enu, LXIV:137–151. (in Hebrew). Erel Segal. 2000. Hebrew morphological analyzer for Hebrew undotted texts. Master’s thesis, Technion, Haifa, Israel. (in Hebrew). Scott M. Thede and Mary P. Harper. 1999. A second-order hidden Markov model for part-ofspeech tagging. In Proceeding of ACL-99. Shlomo Yona and Shuly Wintner. 2005. 
A finite-state morphological grammar of Hebrew. In Proceedings of ACL-05 Workshop on Computational Approaches to Semitic Languages.
Proceedings of the 21st International Conference on Computational Linguistics and 44th Annual Meeting of the ACL, pages 673–680, Sydney, July 2006. c⃝2006 Association for Computational Linguistics Contextual Dependencies in Unsupervised Word Segmentation∗ Sharon Goldwater and Thomas L. Griffiths and Mark Johnson Department of Cognitive and Linguistic Sciences Brown University Providence, RI 02912 {Sharon Goldwater,Tom Griffiths,Mark Johnson}@brown.edu Abstract Developing better methods for segmenting continuous text into words is important for improving the processing of Asian languages, and may shed light on how humans learn to segment speech. We propose two new Bayesian word segmentation methods that assume unigram and bigram models of word dependencies respectively. The bigram model greatly outperforms the unigram model (and previous probabilistic models), demonstrating the importance of such dependencies for word segmentation. We also show that previous probabilistic models rely crucially on suboptimal search procedures. 1 Introduction Word segmentation, i.e., discovering word boundaries in continuous text or speech, is of interest for both practical and theoretical reasons. It is the first step of processing orthographies without explicit word boundaries, such as Chinese. It is also one of the key problems that human language learners must solve as they are learning language. Many previous methods for unsupervised word segmentation are based on the observation that transitions between units (characters, phonemes, or syllables) within words are generally more predictable than transitions across word boundaries. Statistics that have been proposed for measuring these differences include “successor frequency” (Harris, 1954), “transitional probabilities” (Saffran et al., 1996), mutual information (Sun et al., ∗This work was partially supported by the following grants: NIH 1R01-MH60922, NIH RO1-DC000314, NSF IGERT-DGE-9870676, and the DARPA CALO project. 1998), “accessor variety” (Feng et al., 2004), and boundary entropy (Cohen and Adams, 2001). While methods based on local statistics are quite successful, here we focus on approaches based on explicit probabilistic models. Formulating an explicit probabilistic model permits us to cleanly separate assumptions about the input and properties of likely segmentations from details of algorithms used to find such solutions. Specifically, this paper demonstrates the importance of contextual dependencies for word segmentation by comparing two probabilistic models that differ only in that the first assumes that the probability of a word is independent of its local context, while the second incorporates bigram dependencies between adjacent words. The algorithms we use to search for likely segmentations do differ, but so long as the segmentations they produce are close to optimal we can be confident that any differences in the segmentations reflect differences in the probabilistic models, i.e., in the kinds of dependencies between words. We are not the first to propose explicit probabilistic models of word segmentation. Two successful word segmentation systems based on explicit probabilistic models are those of Brent (1999) and Venkataraman (2001). Brent’s ModelBased Dynamic Programming (MBDP) system assumes a unigram word distribution. Venkataraman uses standard unigram, bigram, and trigram language models in three versions of his system, which we refer to as n-gram Segmentation (NGS). 
Despite their rather different generative structure, the MBDP and NGS segmentation accuracies are very similar. Moreover, the segmentation accuracy of the NGS unigram, bigram, and trigram models hardly differ, suggesting that contextual dependencies are irrelevant to word segmentation. How673 ever, the segmentations produced by both these methods depend crucially on properties of the search procedures they employ. We show this by exhibiting for each model a segmentation that is less accurate but more probable under that model. In this paper, we present an alternative framework for word segmentation based on the Dirichlet process, a distribution used in nonparametric Bayesian statistics. This framework allows us to develop extensible models that are amenable to standard inference procedures. We present two such models incorporating unigram and bigram word dependencies, respectively. We use Gibbs sampling to sample from the posterior distribution of possible segmentations under these models. The plan of the paper is as follows. In the next section, we describe MBDP and NGS in detail. In Section 3 we present the unigram version of our own model, the Gibbs sampling procedure we use for inference, and experimental results. Section 4 extends that model to incorporate bigram dependencies, and Section 5 concludes the paper. 2 NGS and MBDP The NGS and MBDP systems are similar in some ways: both are designed to find utterance boundaries in a corpus of phonemically transcribed utterances, with known utterance boundaries. Both also use approximate online search procedures, choosing and fixing a segmentation for each utterance before moving onto the next. In this section, we focus on the very different probabilistic models underlying the two systems. We show that the optimal solution under the NGS model is the unsegmented corpus, and suggest that this problem stems from the fact that the model assumes a uniform prior over hypotheses. We then present the MBDP model, which uses a non-uniform prior but is difficult to extend beyond the unigram case. 2.1 NGS NGS assumes that each utterance is generated independently via a standard n-gram model. For simplicity, we will discuss the unigram version of the model here, although our argument is equally applicable to the bigram and trigram versions. The unigram model generates an utterance u according to the grammar in Figure 1, so P(u) = p$(1 −p$)n−1 n Y j=1 P(wj) (1) 1 −p$ U →W U p$ U →W P(w) W→w ∀w ∈Σ∗ Figure 1: The unigram NGS grammar. where u consists of the words w1 . . . wn and p$ is the probability of the utterance boundary marker $. This model can be used to find the highest probability segmentation hypothesis h given the data d by using Bayes’ rule: P(h|d) ∝P(d|h)P(h) NGS assumes a uniform prior P(h) over hypotheses, so its goal is to find the solution that maximizes the likelihood P(d|h). Using this model, NGS’s approximate search technique delivers competitive results. However, the true maximum likelihood solution is not competitive, since it contains no utterance-internal word boundaries. To see why not, consider the solution in which p$ = 1 and each utterance is a single ‘word’, with probability equal to the empirical probability of that utterance. Any other solution will match the empirical distribution of the data less well. In particular, a solution with additional word boundaries must have 1 −p$ > 0, which means it wastes probability mass modeling unseen data (which can now be generated by concatenating observed utterances together). 
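The claim that the maximum-likelihood solution under this model is the unsegmented corpus is easy to check numerically. The small sketch below scores a segmentation with Equation (1), plugging in maximum-likelihood estimates for the word probabilities and for p$; the toy utterances and every name in the code are illustrative, not the authors' data or implementation.

```python
import math
from collections import Counter


def ngs_unigram_loglik(segmented_utterances):
    """Log of Equation (1) summed over utterances, with ML estimates for the
    word distribution and for the utterance-boundary probability p$."""
    words = [w for u in segmented_utterances for w in u]
    word_p = {w: c / len(words) for w, c in Counter(words).items()}
    p_end = len(segmented_utterances) / len(words)   # one $ per utterance
    ll = 0.0
    for u in segmented_utterances:
        ll += math.log(p_end)
        if len(u) > 1:
            ll += (len(u) - 1) * math.log(1.0 - p_end)
        ll += sum(math.log(word_p[w]) for w in u)
    return ll


utterances = ["yuwanttusi", "lUkD*r", "yuwanttusi"]        # toy "corpus"
segmented = [["yu", "want", "tu", "si"], ["lUk", "D*r"],
             ["yu", "want", "tu", "si"]]
unsegmented = [[u] for u in utterances]

# Per the argument above, the unsegmented analysis scores at least as high:
print(ngs_unigram_loglik(segmented) <= ngs_unigram_loglik(unsegmented))
```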
Intuitively, the NGS model considers the unsegmented solution to be optimal because it ranks all hypotheses equally probable a priori. We know, however, that hypotheses that memorize the input data are unlikely to generalize to unseen data, and are therefore poor solutions. To prevent memorization, we could restrict our hypothesis space to models with fewer parameters than the number of utterances in the data. A more general and mathematically satisfactory solution is to assume a nonuniform prior, assigning higher probability to hypotheses with fewer parameters. This is in fact the route taken by Brent in his MBDP model, as we shall see in the following section. 2.2 MBDP MBDP assumes a corpus of utterances is generated as a single probabilistic event with four steps: 1. Generate L, the number of lexical types. 2. Generate a phonemic representation for each type (except the utterance boundary type, $). 674 3. Generate a token frequency for each type. 4. Generate an ordering for the set of tokens. In a final deterministic step, the ordered tokens are concatenated to create an unsegmented corpus. This means that certain segmented corpora will produce the observed data with probability 1, and all others will produce it with probability 0. The posterior probability of a segmentation given the data is thus proportional to its prior probability under the generative model, and the best segmentation is that with the highest prior probability. There are two important points to note about the MBDP model. First, the distribution over L assigns higher probability to models with fewer lexical items. We have argued that this is necessary to avoid memorization, and indeed the unsegmented corpus is not the optimal solution under this model, as we will show in Section 3. Second, the factorization into four separate steps makes it theoretically possible to modify each step independently in order to investigate the effects of the various modeling assumptions. However, the mathematical statement of the model and the approximations necessary for the search procedure make it unclear how to modify the model in any interesting way. In particular, the fourth step uses a uniform distribution, which creates a unigram constraint that cannot easily be changed. Since our research aims to investigate the effects of different modeling assumptions on lexical acquisition, we develop in the following sections a far more flexible model that also incorporates a preference for sparse solutions. 3 Unigram Model 3.1 The Dirichlet Process Model Our goal is a model of language that prefers sparse solutions, allows independent modification of components, and is amenable to standard search procedures. We achieve this goal by basing our model on the Dirichlet process (DP), a distribution used in nonparametric Bayesian statistics. Our unigram model of word frequencies is defined as wi|G ∼G G|α0, P0 ∼DP(α0, P0) where the concentration parameter α0 and the base distribution P0 are parameters of the model. Each word wi in the corpus is drawn from a distribution G, which consists of a set of possible words (the lexicon) and probabilities associated with those words. G is generated from a DP(α0, P0) distribution, with the items in the lexicon being sampled from P0 and their probabilities being determined by α0, which acts like the parameter of an infinite-dimensional symmetric Dirichlet distribution. We provide some intuition for the roles of α0 and P0 below. 
Although the DP model makes the distribution G explicit, we never deal with G directly. We take a Bayesian approach and integrate over all possible values of G. The conditional probability of choosing to generate a word from a particular lexical entry is then given by a simple stochastic process known as the Chinese restaurant process (CRP) (Aldous, 1985). Imagine a restaurant with an infinite number of tables, each with infinite seating capacity. Customers enter the restaurant and seat themselves. Let zi be the table chosen by the ith customer. Then P(zi|z−i) =    n (z−i) k i−1+α0 0 ≤k < K(z−i) α0 i−1+α0 k = K(z−i) (2) where z−i = z1 . . . zi−1, n(z−i) k is the number of customers already sitting at table k, and K(z−i) is the total number of occupied tables. In our model, the tables correspond to (possibly repeated) lexical entries, having labels generated from the distribution P0. The seating arrangement thus specifies a distribution over word tokens, with each customer representing one token. This model is an instance of the two-stage modeling framework described by Goldwater et al. (2006), with P0 as the generator and the CRP as the adaptor. Our model can be viewed intuitively as a cache model: each word in the corpus is either retrieved from a cache or generated anew. Summing over all the tables labeled with the same word yields the probability distribution for the ith word given previously observed words w−i: P(wi|w−i) = n(w−i) wi i −1 + α0 + α0P0(wi) i −1 + α0 (3) where n(w−i) w is the number of instances of w observed in w−i. The first term is the probability of generating w from the cache (i.e., sitting at an occupied table), and the second term is the probability of generating it anew (sitting at an unoccupied table). The actual table assignments z−i only become important later, in the bigram model. 675 There are several important points to note about this model. First, the probability of generating a particular word from the cache increases as more instances of that word are observed. This richget-richer process creates a power-law distribution on word frequencies (Goldwater et al., 2006), the same sort of distribution found empirically in natural language. Second, the parameter α0 can be used to control how sparse the solutions found by the model are. This parameter determines the total probability of generating any novel word, a probability that decreases as more data is observed, but never disappears. Finally, the parameter P0 can be used to encode expectations about the nature of the lexicon, since it defines a probability distribution across different novel words. The fact that this distribution is defined separately from the distribution on word frequencies gives the model additional flexibility, since either distribution can be modified independently of the other. Since the goal of this paper is to investigate the role of context in word segmentation, we chose the simplest possible model for P0, i.e. a unigram phoneme distribution: P0(w) = p#(1 −p#)n−1 n Y i=1 P(mi) (4) where word w consists of the phonemes m1 . . . mn, and p# is the probability of the word boundary #. For simplicity we used a uniform distribution over phonemes, and experimented with different fixed values of p#.1 A final detail of our model is the distribution on utterance lengths, which is geometric. 
That is, we assume a grammar similar to the one shown in Figure 1, with the addition of a symmetric Beta(τ 2) prior over the probability of the U productions,2 and the substitution of the DP for the standard multinomial distribution over the W productions. 3.2 Gibbs Sampling Having defined our generative model, we are left with the problem of inference: we must determine the posterior distribution of hypotheses given our input corpus. To do so, we use Gibbs sampling, a standard Markov chain Monte Carlo method (Gilks et al., 1996). Gibbs sampling is an iterative procedure in which variables are repeatedly 1Note, however, that our model could be extended to learn both p# and the distribution over phonemes. 2The Beta distribution is a Dirichlet distribution over two outcomes. W U w1 = w2.w3 U W U W w3 w2 h1: h2: Figure 2: The two hypotheses considered by the unigram sampler. Dashed lines indicate possible additional structure. All rules except those in bold are part of h−. sampled from their conditional posterior distribution given the current values of all other variables in the model. The sampler defines a Markov chain whose stationary distribution is P(h|d), so after convergence samples are from this distribution. Our Gibbs sampler considers a single possible boundary point at a time, so each sample is from a set of two hypotheses, h1 and h2. These hypotheses contain all the same boundaries except at the one position under consideration, where h2 has a boundary and h1 does not. The structures are shown in Figure 2. In order to sample a hypothesis, we need only calculate the relative probabilities of h1 and h2. Since h1 and h2 are the same except for a few rules, this is straightforward. Let h− be all of the structure shared by the two hypotheses, including n−words, and let d be the observed data. Then P(h1|h−, d) = P(w1|h−, d) = n(h−) w1 + α0P0(w1) n−+ α0 (5) where the second line follows from Equation 3 and the properties of the CRP (in particular, that it is exchangeable, with the probability of a seating configuration not depending on the order in which customers arrive (Aldous, 1985)). Also, P(h2|h−, d) = P(r, w2, w3|h−, d) = P(r|h−, d)P(w2|h−, d)P(w3|w2, h−, d) = nr + τ 2 n−+ 1 + τ · n(h−) w2 + α0P0(w2) n−+ α0 ·n(h−) w3 + I(w2 = w3) + α0P0(w3) n−+ 1 + α0 (6) where nr is the number of branching rules r = U →W U in h−, and I(.) is an indicator function taking on the value 1 when its argument is 676 true, and 0 otherwise. The nr term is derived by integrating over all possible values of p$, and noting that the total number of U productions in h− is n−+ 1. Using these equations we can simply proceed through the data, sampling each potential boundary point in turn. Once the Gibbs sampler converges, these samples will be drawn from the posterior distribution P(h|d). 3.3 Experiments In our experiments, we used the same corpus that NGS and MBDP were tested on. The corpus, supplied to us by Brent, consists of 9790 transcribed utterances (33399 words) of childdirected speech from the Bernstein-Ratner corpus (Bernstein-Ratner, 1987) in the CHILDES database (MacWhinney and Snow, 1985). The utterances have been converted to a phonemic representation using a phonemic dictionary, so that each occurrence of a word has the same phonemic transcription. Utterance boundaries are given in the input to the system; other word boundaries are not. Because our Gibbs sampler is slow to converge, we used annealing to speed inference. 
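To make the boundary decision concrete, the sketch below scores the two hypotheses at a single potential boundary following Equations (5) and (6) and draws one of them, optionally raising the scores to the power 1/gamma for annealing. It is an illustrative sketch rather than the actual sampler: the names score_h1, score_h2 and sample_boundary are ours, p0 and alpha0 are passed in as in the previous sketch, and the bookkeeping that updates counts after each decision is omitted.

import random

TAU = 2.0  # symmetric Beta(tau/2) prior on the U productions (fixed to 2 in the experiments)

def score_h1(w1, word_counts, n_minus, p0, alpha0):
    # Equation (5): posterior of the un-split hypothesis h1 given h-.
    return (word_counts.get(w1, 0) + alpha0 * p0(w1)) / (n_minus + alpha0)

def score_h2(w2, w3, word_counts, n_branch, n_minus, p0, alpha0):
    # Equation (6): posterior of the split hypothesis h2 given h-.
    # n_branch = number of branching rules U -> W U in h-; n_minus = number of words in h-.
    p_split = (n_branch + TAU / 2) / (n_minus + 1 + TAU)
    p_w2 = (word_counts.get(w2, 0) + alpha0 * p0(w2)) / (n_minus + alpha0)
    same = 1 if w2 == w3 else 0
    p_w3 = (word_counts.get(w3, 0) + same + alpha0 * p0(w3)) / (n_minus + 1 + alpha0)
    return p_split * p_w2 * p_w3

def sample_boundary(s1, s2, word_counts, n_branch, n_minus, p0, alpha0, gamma=1.0):
    # Return True to place a boundary between substrings s1 and s2 (hypothesis h2),
    # sampling proportionally to the annealed scores of h1 (word s1+s2) and h2.
    a = score_h1(s1 + s2, word_counts, n_minus, p0, alpha0) ** (1.0 / gamma)
    b = score_h2(s1, s2, word_counts, n_branch, n_minus, p0, alpha0) ** (1.0 / gamma)
    return random.random() < b / (a + b)

In the full sampler, word_counts, n_branch and n_minus are updated after every decision, and the pass over all potential boundary positions is repeated for many iterations.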
We began with a temperature of γ = 10 and decreased γ in 10 increments to a final value of 1. A temperature of γ corresponds to raising the probabilities of h1 and h2 to the power of 1 γ prior to sampling. We ran our Gibbs sampler for 20,000 iterations through the corpus (with γ = 1 for the final 2000) and evaluated our results on a single sample at that point. We calculated precision (P), recall (R), and F-score (F) on the word tokens in the corpus, where both boundaries of a word must be correct to count the word as correct. The induced lexicon was also scored for accuracy using these metrics (LP, LR, LF). Recall that our DP model has three parameters: τ, p#, and α0. Given the large number of known utterance boundaries, we expect the value of τ to have little effect on our results, so we simply fixed τ = 2 for all experiments. Figure 3 shows the effects of varying of p# and α0.3 Lower values of p# cause longer words, which tends to improve recall (and thus F-score) in the lexicon, but decrease token accuracy. Higher values of α0 allow more novel words, which also improves lexicon recall, 3It is worth noting that all these parameters could be inferred. We leave this for future work. 0.1 0.3 0.5 0.7 0.9 50 55 60 (a) Varying P(#) 1 2 5 10 20 50 100 200 500 50 55 60 (b) Varying α0 LF F LF F Figure 3: Word (F) and lexicon (LF) F-score (a) as a function of p#, with α0 = 20 and (b) as a function of α0, with p# = .5. but begins to degrade precision after a point. Due to the negative correlation between token accuracy and lexicon accuracy, there is no single best value for either p# or α0; further discussion refers to the solution for p# = .5, α0 = 20 (though others are qualitatively similar). In Table 1(a), we compare the results of our system to those of MBDP and NGS.4 Although our system has higher lexicon accuracy than the others, its token accuracy is much worse. This result occurs because our system often mis-analyzes frequently occurring words. In particular, many of these words occur in common collocations such as what’s that and do you, which the system interprets as a single words. It turns out that a full 31% of the proposed lexicon and nearly 30% of tokens consist of these kinds of errors. Upon reflection, it is not surprising that a unigram language model would segment words in this way. Collocations violate the unigram assumption in the model, since they exhibit strong word-toword dependencies. The only way the model can capture these dependencies is by assuming that these collocations are in fact words themselves. Why don’t the MBDP and NGS unigram models exhibit these problems? We have already shown that NGS’s results are due to its search procedure rather than its model. The same turns out to be true for MBDP. Table 2 shows the probabili4We used the implementations of MBDP and NGS available at http://www.speech.sri.com/people/anand/ to obtain results for those systems. 677 (a) P R F LP LR LF NGS 67.7 70.2 68.9 52.9 51.3 52.0 MBDP 67.0 69.4 68.2 53.6 51.3 52.4 DP 61.9 47.6 53.8 57.0 57.5 57.2 (b) P R F LP LR LF NGS 76.6 85.8 81.0 60.0 52.4 55.9 MBDP 77.0 86.1 81.3 60.8 53.0 56.6 DP 94.2 97.1 95.6 86.5 62.2 72.4 Table 1: Accuracy of the various systems, with best scores in bold. The unigram version of NGS is shown. DP results are with p# = .5 and α0 = 20. (a) Results on the true corpus. (b) Results on the permuted corpus. 
Seg: True None MBDP NGS DP NGS 204.5 90.9 210.7 210.8 183.0 MBDP 208.2 321.7 217.0 218.0 189.8 DP 222.4 393.6 231.2 231.6 200.6 Table 2: Negative log probabilities (x 1000) under each model of the true solution, the solution with no utterance-internal boundaries, and the solutions found by each algorithm. Best solutions under each model are bold. ties under each model of various segmentations of the corpus. From these figures, we can see that the MBDP model assigns higher probability to the solution found by our Gibbs sampler than to the solution found by Brent’s own incremental search algorithm. In other words, Brent’s model does prefer the lower-accuracy collocation solution, but his search algorithm instead finds a higher-accuracy but lower-probability solution. We performed two experiments suggesting that our own inference procedure does not suffer from similar problems. First, we initialized our Gibbs sampler in three different ways: with no utteranceinternal boundaries, with a boundary after every character, and with random boundaries. Our results were virtually the same regardless of initialization. Second, we created an artificial corpus by randomly permuting the words in the true corpus, leaving the utterance lengths the same. The artificial corpus adheres to the unigram assumption of our model, so if our inference procedure works correctly, we should be able to correctly identify the words in the permuted corpus. This is exactly what we found, as shown in Table 1(b). While all three models perform better on the artificial corpus, the improvements of the DP model are by far the most striking. 4 Bigram Model 4.1 The Hierarchical Dirichlet Process Model The results of our unigram experiments suggested that word segmentation could be improved by taking into account dependencies between words. To test this hypothesis, we extended our model to incorporate bigram dependencies using a hierarchical Dirichlet process (HDP) (Teh et al., 2005). Our approach is similar to previous n-gram models using hierarchical Pitman-Yor processes (Goldwater et al., 2006; Teh, 2006). The HDP is appropriate for situations in which there are multiple distributions over similar sets of outcomes, and the distributions are believed to be similar. In our case, we define a bigram model by assuming each word has a different distribution over the words that follow it, but all these distributions are linked. The definition of our bigram language model as an HDP is wi|wi−1 = w, Hw ∼Hw ∀w Hw|α1, G ∼DP(α1, G) ∀w G|α0, P0 ∼DP(α0, P0) That is, P(wi|wi−1 = w) is distributed according to Hw, a DP specific to word w. Hw is linked to the DPs for all other words by the fact that they share a common base distribution G, which is generated from another DP.5 As in the unigram model, we never deal with Hw or G directly. By integrating over them, we get a distribution over bigram frequencies that can be understood in terms of the CRP. Now, each word type w is associated with its own restaurant, which represents the distribution over words that follow w. Different restaurants are not completely independent, however: the labels on the tables in the restaurants are all chosen from a common base distribution, which is another CRP. To understand the HDP model in terms of a grammar, we consider $ as a special word type, so that wi ranges over Σ∗∪{$}. After observing w−i, the HDP grammar is as shown in Figure 4, 5This HDP formulation is an oversimplification, since it does not account for utterance boundaries properly. 
The grammar formulation (see below) does. 678 P2(wi|w−i, z−i) Uwi−1→Wwi Uwi ∀wi ∈Σ∗, wi−1 ∈Σ∗∪{$} P2($|w−i, z−i) Uwi−1→$ ∀wi−1 ∈Σ∗ 1 Wwi →wi ∀wi ∈Σ∗ Figure 4: The HDP grammar after observing w−i. with P2(wi|h−i) =n(wi−1,wi) + α1P1(wi|h−i) nwi−1 + α1 (7) P1(wi|h−i) =    tΣ∗+ τ 2 t+τ · twi+α0P0(wi) tΣ∗+α0 wi ∈Σ∗ t$+ τ 2 t+τ wi = $ where h−i = (w−i, z−i); t$, tΣ∗, and twi are the total number of tables (across all words) labeled with $, non-$, and wi, respectively; t = t$ + tΣ∗ is the total number of tables; and n(wi−1,wi) is the number of occurrences of the bigram (wi−1, wi). We have suppressed the superscript (w−i) notation in all cases. The base distribution shared by all bigrams is given by P1, which can be viewed as a unigram backoff where the unigram probabilities are learned from the bigram table labels. We can perform inference on this HDP bigram model using a Gibbs sampler similar to our unigram sampler. Details appear in the Appendix. 4.2 Experiments We used the same basic setup for our experiments with the HDP model as we used for the DP model. We experimented with different values of α0 and α1, keeping p# = .5 throughout. Some results of these experiments are plotted in Figure 5. With appropriate parameter settings, both lexicon and token accuracy are higher than in the unigram model (dramatically so, for tokens), and there is no longer a negative correlation between the two. Only a few collocations remain in the lexicon, and most lexicon errors are on low-frequency words. The best values of α0 are much larger than in the unigram model, presumably because all unique word types must be generated via P0, but in the bigram model there is an additional level of discounting (the unigram process) before reaching P0. Smaller values of α0 lead to fewer word types with fewer characters on average. Table 3 compares the optimal results of the HDP model to the only previous model incorporating bigram dependencies, NGS. Due to search, the performance of the bigram NGS model is not much different from that of the unigram model. In 100 200 500 1000 2000 40 60 80 (a) Varying α0 F LF 5 10 20 50 100 200 500 40 60 80 (b) Varying α1 F LF Figure 5: Word (F) and lexicon (LF) F-score (a) as a function of α0, with α1 = 10 and (b) as a function of α1, with α0 = 1000. P R F LP LR LF NGS 68.1 68.6 68.3 54.5 57.0 55.7 HDP 79.4 74.0 76.6 67.9 58.9 63.1 Table 3: Bigram system accuracy, with best scores in bold. HDP results are with p# = .5, α0 = 1000, and α1 = 10. contrast, our HDP model performs far better than our DP model, leading to the highest published accuracy for this corpus on both tokens and lexical items. Overall, these results strongly support our hypothesis that modeling bigram dependencies is important for accurate word segmentation. 5 Conclusion In this paper, we have introduced a new modelbased approach to word segmentation that draws on techniques from Bayesian statistics, and we have developed models incorporating unigram and bigram dependencies. The use of the Dirichlet process as the basis of our approach yields sparse solutions and allows us the flexibility to modify individual components of the models. We have presented a method of inference using Gibbs sampling, which is guaranteed to converge to the posterior distribution over possible segmentations of a corpus. Our approach to word segmentation allows us to investigate questions that could not be addressed satisfactorily in earlier work. 
We have shown that the search algorithms used with previous models of word segmentation do not achieve their ob679 P(h1|h−, d) = n(wl,w1) + α1P1(w1|h−, d) nwl + α1 · n(w1,wr) + I(wl = w1 = wr) + α1P1(wr|h−, d) nw1 + 1 + α1 P(h2|h−, d) = n(wl,w2) + α1P1(w2|h−, d) nwl + α1 · n(w2,w3) + I(wl = w2 = w3) + α1P1(w3|h−, d) nw2 + 1 + α1 · n(w3,wr) + I(wl = w3, w2 = wr) + I(w2 = w3 = wr) + α1P1(wr|h−, d) nw3 + 1 + I(w2 = w4) + α1 Figure 6: Gibbs sampling equations for the bigram model. All counts are with respect to h−. jectives, which has led to misleading results. In particular, previous work suggested that the use of word-to-word dependencies has little effect on word segmentation. Our experiments indicate instead that bigram dependencies can be crucial for avoiding under-segmentation of frequent collocations. Incorporating these dependencies into our model greatly improved segmentation accuracy, and led to better performance than previous approaches on all measures. References D. Aldous. 1985. Exchangeability and related topics. In ´Ecole d’´et´e de probabilit´es de Saint-Flour, XIII—1983, pages 1–198. Springer, Berlin. C. Antoniak. 1974. Mixtures of Dirichlet processes with applications to Bayesian nonparametric problems. The Annals of Statistics, 2:1152–1174. N. Bernstein-Ratner. 1987. The phonology of parent-child speech. In K. Nelson and A. van Kleeck, editors, Children’s Language, volume 6. Erlbaum, Hillsdale, NJ. M. Brent. 1999. An efficient, probabilistically sound algorithm for segmentation and word discovery. Machine Learning, 34:71–105. P. Cohen and N. Adams. 2001. An algorithm for segmenting categorical timeseries into meaningful episodes. In Proceedings of the Fourth Symposium on Intelligent Data Analysis. H. Feng, K. Chen, X. Deng, and W. Zheng. 2004. Accessor variety criteria for chinese word extraction. Computational Lingustics, 30(1). W.R. Gilks, S. Richardson, and D. J. Spiegelhalter, editors. 1996. Markov Chain Monte Carlo in Practice. Chapman and Hall, Suffolk. S. Goldwater, T. Griffiths, and M. Johnson. 2006. Interpolating between types and tokens by estimating power-law generators. In Advances in Neural Information Processing Systems 18, Cambridge, MA. MIT Press. Z. Harris. 1954. Distributional structure. Word, 10:146–162. B. MacWhinney and C. Snow. 1985. The child language data exchange system. Journal of Child Language, 12:271– 296. J. Saffran, E. Newport, and R. Aslin. 1996. Word segmentation: The role of distributional cues. Journal of Memory and Language, 35:606–621. M. Sun, D. Shen, and B. Tsou. 1998. Chinese word segmentation without using lexicon and hand-crafted training data. In Proceedings of COLING-ACL. Y. Teh, M. Jordan, M. Beal, and D. Blei. 2005. Hierarchical Dirichlet processes. In Advances in Neural Information Processing Systems 17. MIT Press, Cambridge, MA. Y. Teh. 2006. A Bayesian interpretation of interpolated kneser-ney. Technical Report TRA2/06, National University of Singapore, School of Computing. A. Venkataraman. 2001. A statistical model for word discovery in transcribed speech. Computational Linguistics, 27(3):351–372. Appendix To sample from the posterior distribution over segmentations in the bigram model, we define h1 and h2 as we did in the unigram sampler so that for the corpus substring s, h1 has a single word (s = w1) where h2 has two (s = w2.w3). Let wl and wr be the words (or $) preceding and following s. Then the posterior probabilities of h1 and h2 are given in Figure 6. P1(.) 
can be calculated exactly using the equation in Section 4.1, but this requires explicitly tracking and sampling the assignment of words to tables. For easier and more efficient implementation, we use an approximation, replacing each table count t_wi by its expected value E[t_wi]. In a DP(α, P), the expected number of CRP tables for an item occurring n times is α log((n + α)/α) (Antoniak, 1974), so

E[t_wi] = α1 Σ_j log( (n(wj, wi) + α1) / α1 )

This approximation requires only the bigram counts, which we must track anyway.
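The sketch below shows how the bigram predictive probabilities of Equation (7) can be computed once table counts are replaced by the expected values above. It is a simplified illustration, not the authors' implementation: bigram_counts is assumed to be a mapping from each preceding word (including the boundary symbol $) to a Counter of following words, the helper names are ours, and quantities that a real sampler would cache are recomputed here for clarity. The parameter values are those reported for the HDP experiments.

import math
from collections import Counter, defaultdict

ALPHA0, ALPHA1, TAU = 1000.0, 10.0, 2.0   # settings reported for the HDP model
BOUND = "$"                                # utterance-boundary symbol

def expected_tables(word, bigram_counts):
    # Appendix approximation: E[t_w] = alpha1 * sum_j log((n(w_j, w) + alpha1) / alpha1),
    # summing over preceding words w_j with nonzero counts (zero counts contribute 0).
    return ALPHA1 * sum(math.log((c[word] + ALPHA1) / ALPHA1)
                        for c in bigram_counts.values() if c[word] > 0)

def p1(word, bigram_counts, p0):
    # Unigram backoff P1 of Equation (7), with table counts replaced by expectations.
    vocab = {w for c in bigram_counts.values() for w in c}
    t_bound = expected_tables(BOUND, bigram_counts)
    t_words = sum(expected_tables(w, bigram_counts) for w in vocab if w != BOUND)
    t = t_bound + t_words
    if word == BOUND:
        return (t_bound + TAU / 2) / (t + TAU)
    t_w = expected_tables(word, bigram_counts)
    return ((t_words + TAU / 2) / (t + TAU)) * (t_w + ALPHA0 * p0(word)) / (t_words + ALPHA0)

def p2(word, prev, bigram_counts, p0):
    # Bigram predictive probability P2(w_i | w_{i-1}) of Equation (7).
    n_bigram = bigram_counts[prev][word]
    n_prev = sum(bigram_counts[prev].values())
    return (n_bigram + ALPHA1 * p1(word, bigram_counts, p0)) / (n_prev + ALPHA1)

With bigram_counts built as a defaultdict(Counter) over the current segmentation (counting $ both as a conditioning context for utterance-initial words and as a following "word" at utterance ends), p2 can be queried directly inside the sampling equations of Figure 6; an actual sampler would update these counts incrementally rather than recomputing the table expectations from scratch.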
Proceedings of the 21st International Conference on Computational Linguistics and 44th Annual Meeting of the ACL, pages 681–688, Sydney, July 2006. c⃝2006 Association for Computational Linguistics MAGEAD: A Morphological Analyzer and Generator for the Arabic Dialects Nizar Habash and Owen Rambow Center for Computational Learning Systems Columbia University New York, NY 10115, USA habash,rambow  @cs.columbia.edu Abstract We present MAGEAD, a morphological analyzer and generator for the Arabic language family. Our work is novel in that it explicitly addresses the need for processing the morphology of the dialects. MAGEAD performs an on-line analysis to or generation from a root+pattern+features representation, it has separate phonological and orthographic representations, and it allows for combining morphemes from different dialects. We present a detailed evaluation of MAGEAD. 1 Introduction In this paper we present MAGEAD, a morphological analyzer and generator for the Arabic language family, by which we mean both Modern Standard Arabic (MSA) and the spoken dialects.1 Our work is novel in that it explicitly addresses the need for processing the morphology of the dialects as well. The principal theoretical contribution of this paper is an organization of morphological knowledge for processing multiple variants of one language family. The principal practical contribution is the first morphological analyzer and generator for an Arabic dialect that includes a rootand-pattern analysis (which is also the first widecoverage implementation of root-and-pattern morphology for any language using a multitape finitestate machine). We also provide a novel type of detailed evaluation in which we investigate how 1We would like to thank several anonymous reviewers for comments that helped us improve this paper. The work reported in this paper was supported by NSF Award 0329163, with additional work performed under the DARPA GALE program, contract HR0011-06-C-0023. The authors are listed in alphabetical order. different sources of lexical information affect performance of morphological analysis. This paper is organized as follows. In Section 2, we present the relevant facts about morphology in the Arabic language family. Previous work is summarized in Section 3. We present our design goals in Section 4, and then discuss our approach to representing linguistic knowledge for morphological analysis in Section 5. The implementation is sketched in Section 6. We outline the steps involved in creating a Levantine analyzer in Section 7. We evaluate our system in Section 8, and then conclude. 2 Arabic Morphology 2.1 Variants of Arabic The Arabic-speaking world is characterized by diglossia (Ferguson, 1959). Modern Standard Arabic (MSA) is the shared written language from Morocco to the Gulf, but it is not a native language of anyone. It is spoken only in formal, scripted contexts (news, speeches). In addition, there is a continuum of spoken dialects (varying geographically, but also by social class, gender, etc.) which are native languages, but rarely written (except in very informal contexts: collections of folk tales, newsgroups, email, etc). We will refer to MSA and the dialects as variants of Arabic. Variants differ phonologically, lexically, morphologically, and syntactically from one another; many pairs of variants are mutually unintelligible. 
In unscripted situations where spoken MSA would normally be required (such as talk shows on TV), speakers usually resort to repeated code-switching between their dialect and MSA, as nearly all native speakers of Arabic are unable to produce sustained spontaneous discourse in MSA. 681 In this paper, we discuss MSA and Levantine, the dialect spoken (roughly) in Syria, Lebanon, Jordan, Palestine, and Israel. Our Levantine data comes from Jordan. The discussion in this section uses only examples from MSA, but all variants show a combination of root-and-pattern and affixational morphology and similar examples could be found for Levantine. 2.2 Roots, Patterns and Vocalism Arabic morphemes fall into three categories: templatic morphemes, affixational morphemes, and non-templatic word stems (NTWSs). NTWSs are word stems that are not constructed from a root/pattern/vocalism combination. Verbs are never NTWSs. Templatic morphemes come in three types that are equally needed to create a word stem: roots, patterns and vocalisms. The root morpheme is a sequence of three, four, or five consonants (termed radicals) that signifies some abstract meaning shared by all its derivations. For example, the words   katab ‘to write’,   kaAtib ‘writer’, and     maktuwb ‘written’ all share the root morpheme ktb ( ) ‘writing-related’. The pattern morpheme is an abstract template in which roots and vocalisms are inserted. The vocalism morpheme specifies which short vowels to use with a pattern. We will represent the pattern as a string made up of numbers to indicate radical position, of the symbol V to indicate the position of the vocalism, and of pattern consonants (if needed). A word stem is constructed by interleaving the three types of templatic morphemes. For example, the word stem   katab ‘to write’ is constructed from the root ktb (  ), the pattern 1V2V3 and the vocalism aa. 2.3 Affixational Morphemes Arabic affixes can be prefixes such as sa+ (+  ) ‘will/[future]’, suffixes such as +uwna ( +) ‘[masculine plural]’ or circumfixes such as ta++na ( ++  ) ‘[imperfective subject 2nd person fem. plural]’. Multiple affixes can appear in a word. For example, the word     wasayaktubuwnahA ‘and they will write it’ has two prefixes, one circumfix and one suffix:2 2We analyze the imperfective word stem as including an initial short vowel, and leave a discussion of this analysis to future publications. (1) wasayaktubuwnahA wa+ and sa+ will y+ 3person aktub write +uwna masculine-plural +hA it 2.4 Morphological Rewrite Rules An Arabic word is constructed by first creating a word stem from templatic morphemes or by using a NTWS. Affixational morphemes are then added to this stem. The process of combining morphemes involves a number of phonological, morphemic and orthographic rules that modify the form of the created word so it is not a simple interleaving or concatenation of its morphemic components. An example of a phonological rewrite rule is the voicing of the /t/ of the verbal pattern V1tV2V3 (Form VIII) when the first root radical is /z/, /d/, or /*/ ( , ! , or " ): the verbal stem zhr+V1tV2V3+iaa is realized phonologically as /izdahar/ (orthographically: #%$&!' &( ) ‘flourish’ not /iztahar/ (orthographically: #%) *( ). An example of an orthographic rewrite rule is the deletion of the Alif ( ( ) of the definite article morpheme Al+ (++( ) in nouns when preceded by the preposition l+ (+ , ). 
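To illustrate how a word stem is built from the three templatic morphemes and how a phonological rewrite rule can then apply, here is a small sketch. The pattern notation (digits for radical slots, V for vocalism slots, other symbols copied as pattern consonants) follows the description above, but the string-based functions and their names are an illustrative simplification of what MAGEAD does with multi-tape transducers, working on the transliteration only; "*" stands for the third triggering radical as in the text.

def interdigitate(root, pattern, vocalism):
    # Interleave root radicals and vocalism vowels into the pattern template:
    # a digit k selects the k-th radical, 'V' consumes the next vocalism vowel,
    # and any other symbol is a pattern consonant copied as-is.
    vowels = iter(vocalism)
    out = []
    for symbol in pattern:
        if symbol.isdigit():
            out.append(root[int(symbol) - 1])
        elif symbol == "V":
            out.append(next(vowels))
        else:
            out.append(symbol)
    return "".join(out)

def form_viii_voicing(stem):
    # Simplified Form VIII rule: the pattern /t/ voices to /d/ when immediately
    # preceded by a first radical /z/, /d/ or /*/ (a sketch, not the full rule set).
    for radical in "zd*":
        if radical + "t" in stem:
            return stem.replace(radical + "t", radical + "d", 1)
    return stem

print(interdigitate("ktb", "1V2V3", "aa"))                         # katab   'to write'
print(form_viii_voicing(interdigitate("zhr", "V1tV2V3", "iaa")))   # izdahar 'flourish'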
3 Previous Work There has been a considerable amount of work on Arabic morphological analysis; for an overview, see (Al-Sughaiyer and Al-Kharashi, 2004). We summarize some of the most relevant work here. Kataja and Koskenniemi (1988) present a system for handling Akkadian root-and-pattern morphology by adding an additional lexicon component to Koskenniemi’s two-level morphology (1983). The first large scale implementation of Arabic morphology within the constraints of finite-state methods is that of Beesley et al. (1989) with a ‘detouring’ mechanism for access to multiple lexica, which gives rise to other works by Beesley (Beesley, 1998) and, independently, by Buckwalter (2004). The approach of McCarthy (1981) to describing root-and-pattern morphology in the framework of autosegmental phonology has given rise to a number of computational proposals. Kay (1987) proposes a framework with which each of the autosegmental tiers is assigned a tape in a multi-tape finite state machine, with an additional tape for the surface form. Kiraz (2000,2001) extends Kay’s 682 approach and implements a small working multitape system for MSA and Syriac. Other autosegmental approaches (described in more details in Kiraz 2001 (Chapter 4)) include those of Kornai (1995), Bird and Ellison (1994), Pulman and Hepple (1993), whose formalism Kiraz adopts, and others. 4 Design Goals for MAGEAD This work is aimed at a unified processing architecture for the morphology of all variants of Arabic, including the dialects. Three design goals follow from this overall goal: First, we want to be able to use the analyzer when we do not have a lexicon, or only a partial lexicon. This is because, despite the similarities between dialects at the morphological and lexical levels, we do cannot assume we have a complete lexicon for every dialect we wish to morphologically analyze. As a result, we want an on-line analyzer which performs full morphological analysis at run time. Second, we want to be able to exploit the existing regularities among the variants, in particular systematic sound changes which operate at the level of the radicals, and pattern changes. This requires an explicit analysis into root and pattern. Third, the dialects are mainly used in spoken communication and in the rare cases when they are written they do not have standard orthographies, and different (inconsistent) orthographies may be used even within a single written text. We thus need a representation of morphology that incorporates models of both phonology and orthography. In addition, we add two general requirements for morphological analyzers. First, we want both a morphological analyzer and a morphological generator. Second, we want to use a representation that is defined in terms of a lexeme and attributevalue pairs for morphological features such as aspect or person. This is because we want our component to be usable in natural language processing (NLP) applications such as natural language generation and machine translation, and the lexeme provides a usable lexicographic abstraction. Note that the second general requirement (an analysis to a lexemic representation) appears to clash with the first design desideratum (we may not have a lexicon). We tackle these requirements by doing a full analysis of templatic morphology, rather than “precompiling” the templatic morphology into stems and only analyzing affixational morphology on-line (as is done in (Buckwalter, 2004)). Our implementation uses the multitape approach of Kiraz (2000). 
This is the first large-scale implementation of that approach. We extend it by adding an additional tape for independently modeling phonology and orthography. The use of finite state technology makes MAGEAD usable as a generator as well as an analyzer, unlike some morphological analyzers which cannot be converted to generators in a straightforward manner (Buckwalter, 2004; Habash, 2004). 5 The MAGEAD System: Representation of Linguistic Knowledge MAGEAD relates (bidirectionally) a lexeme and a set of linguistic features to a surface word form through a sequence of transformations. In a generation perspective, the features are translated to abstract morphemes which are then ordered, and expressed as concrete morphemes. The concrete templatic morphemes are interdigitated and affixes added, and finally morphological and phonological rewrite rules are applied. In this section, we discuss our organization of linguistic knowledge, and give some examples; a more complete discussion of the organization of linguistic knowledge in MAGEAD can be found in (Habash et al., 2006). 5.1 Morphological Behavior Classes Morphological analyses are represented in terms of a lexeme and features. We define the lexeme to be a triple consisting of a root (or an NTWS), a meaning index, and a morphological behavior class (MBC). We do not deal with issues relating to word sense here and therefore do not further discuss the meaning index. It is through this view of the lexeme (which incorporates productive derivational morphology without making claims about semantic predictability) that we can both have a lexeme-based representation, and operate without a lexicon. In fact, because lexemes have internal structure, we can hypothesize lexemes on the fly without having to make wild guesses (we know the pattern, it is only the root that we are guessing). We will see in Section 8 that this approach does not wildly overgenerate. We use as our example the surface form #%$%!' *( Aizdaharat (Azdhrt without diacritics) 683 ‘she/it flourished’. The lexeme-and-features representation of this word form is as follows: (2) Root:zhr MBC:verb-VIII POS:V PER:3 GEN:F NUM:SG ASPECT:PERF An MBC maps sets of linguistic feature-value pairs to sets of abstract morphemes. For example, MBC verb-VIII maps the feature-value pair ASPECT:PERF to the abstract root morpheme [PAT PV:VIII], which in MSA corresponds to the concrete root morpheme AV1tV2V3, while the MBC verb-I maps ASPECT:PERF to the abstract root morpheme [PAT PV:I], which in MSA corresponds to the concrete root morpheme 1V2V3. We define MBCs using a hierarchical representation with non-monotonic inheritance. The hierarchy allows us to specify only once those feature-to-morpheme mappings for all MBCs which share them. For example, the root node of our MBC hierarchy is a word, and all Arabic words share certain mappings, such as that from the linguistic feature conj:w to the clitic w+. This means that all Arabic words can take a cliticized conjunction. Similarly, the object pronominal clitics are the same for all transitive verbs, no matter what their templatic pattern is. We have developed a specification language for expressing MBC hierarchies in a concise manner. Our hypothesis is that the MBC hierarchy is variantindependent, though as more variants are added, some modifications may be needed. 
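One way to picture the feature-to-morpheme mapping of an MBC hierarchy is as a set of classes that inherit mappings from their parents and may override them, as in the sketch below. The Python encoding and the exact label strings are illustrative only (MAGEAD uses its own specification language), and only mappings mentioned in the text are shown.

# Each MBC maps feature:value pairs to abstract morphemes; a class inherits its
# parent's mappings unless it overrides them (non-monotonic inheritance).
MBC_HIERARCHY = {
    "word":      {"parent": None,   "map": {("conj", "w"): "w+"}},      # all words take the clitic conjunction
    "verb":      {"parent": "word", "map": {}},                         # abstract class for shared verbal mappings
    "verb-I":    {"parent": "verb", "map": {("aspect", "perf"): "[PAT_PV:I]"}},
    "verb-VIII": {"parent": "verb", "map": {("aspect", "perf"): "[PAT_PV:VIII]"}},
}

def lookup(mbc, feature, value):
    # Walk up the hierarchy until some class defines a mapping for the feature.
    while mbc is not None:
        mapping = MBC_HIERARCHY[mbc]["map"]
        if (feature, value) in mapping:
            return mapping[(feature, value)]
        mbc = MBC_HIERARCHY[mbc]["parent"]
    return None

print(lookup("verb-VIII", "aspect", "perf"))   # [PAT_PV:VIII]
print(lookup("verb-I", "aspect", "perf"))      # [PAT_PV:I]
print(lookup("verb-VIII", "conj", "w"))        # w+, inherited from "word"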
Our current MBC hierarchy specification for both MSA and Levantine, which covers only the verbs, comprises 66 classes, of which 25 are abstract, i.e., only used for organizing the inheritance hierarchy and never instantiated in a lexeme. 5.2 Ordering and Mapping Abstract and Concrete Morphemes To keep the MBC hierarchy variant-independent, we have also chosen a variant-independent representation of the morphemes that the MBC hierarchy maps to. We refer to these morphemes as abstract morphemes (AMs). The AMs are then ordered into the surface order of the corresponding concrete morphemes. The ordering of AMs is specified in a variant-independent context-free grammar. At this point, our example (2) looks like this: (3) [Root:zhr][PAT PV:VIII] [VOC PV:VIII-act] + [SUBJSUF PV:3FS] Note that as the root, pattern, and vocalism are not ordered with respect to each other, they are simply juxtaposed. The ‘+’ sign indicates the ordering of affixational morphemes. Only now are the AMs translated to concrete morphemes (CMs), which are concatenated in the specified order. Our example becomes: (4) zhr,AV1tV2V3,iaa  +at The interdigitation of root, pattern and vocalism then yields the form Aiztahar+at. 5.3 Morphological, Phonological, and Orthographic Rules We have two types of rules. Morphophonemic/phonological rules map from the morphemic representation to the phonological and orthographic representations. This includes default rules which copy roots and vocalisms to the phonological and orthographic tiers, and specialized rules to handle hollow verbs (verbs with a glide as their middle radical), or more specialized rules for cases such as the pattern consonant change in Form VIII (the /t/ of the pattern changes to a /d/ if the first radical is /z/, /d/, or /*/; this rule operates in our example). For MSA, we have 69 rules of this type. Orthographic rules rewrite only the orthographic representation. These include, for examples, rules for using the shadda (consonant doubling diacritic). For MSA, we have 53 such rules. For our example, we get /izdaharat/ at the phonological level. Using standard MSA diacritized orthography, our example becomes Aizdaharat (in transliteration). Removing the diacritics turns this into the more familiar #%$&!' &( Azdhrt. Note that in analysis mode, we hypothesize all possible diacritics (a finite number, even in combination) and perform the analysis on the resulting multi-path automaton. 6 The MAGEAD System: Implementation We follow (Kiraz, 2000) in using a multitape representation. We extend the analysis of Kiraz by introducing a fifth tier. The five tiers are used as follows: Tier 1: pattern and affixational morphemes; Tier 2: root; Tier 3: vocalism; Tier 4: phonological representation; Tier 5: orthographic representation. In the generation direction, tiers 1 through 3 are always input tiers. Tier 4 is first an output tier, and subsequently an input tier. Tier 5 is always an output tier. 684 We have implemented multi-tape finite state automata as a layer on top of the AT&T twotape finite state transducers (Mohri et al., 1998). We have defined a specification language for the higher multitape level, the new Morphtools format. Specification in the Morphtools format of different types of information such as rules or context-free grammars for morpheme ordering are compiled to the appropriate Lextools format (an NLP-oriented extension of the AT&T toolkit for finite-state machines, (Sproat, 1995)). For reasons of space, we omit a further discussion of Morphtools. 
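As a summary of the generation direction described in this section, the following trace walks the running example through the pipeline, reusing the interdigitate and form_viii_voicing helpers sketched in Section 2. It is only an illustration of the data flow; the real system compiles all of these steps into multi-tape transducers rather than Python functions.

# Trace for the running example (root zhr, MBC verb-VIII, features POS:V PER:3 GEN:F NUM:SG ASPECT:PERF).
abstract = ["[Root:zhr]", "[PAT_PV:VIII]", "[VOC_PV:VIII-act]", "+", "[SUBJSUF_PV:3FS]"]  # steps 1-2: features -> ordered AMs (Example 3)
root, pattern, vocalism, suffix = "zhr", "AV1tV2V3", "iaa", "at"                          # step 3: AMs -> concrete morphemes (Example 4)
stem = interdigitate(root, pattern, vocalism)           # step 4: interdigitation -> "Aiztahar"
surface = form_viii_voicing(stem) + suffix              # step 5: rewrite rules + affixation -> "Aizdaharat"
undiacritized = "".join(c for c in surface if c not in "aiu")   # drop short vowels -> "Azdhrt"
print(surface, undiacritized)

In analysis mode the same relations are traversed in the opposite direction over the multi-path automaton that hypothesizes the missing diacritics.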
For details, see (Habash et al., 2005). 7 From MSA to Levantine We modified MAGEAD so that it accepts Levantine rather than MSA verbs. Our effort concentrated on the orthographic representation; to simplify our task, we used a diacritic-free orthography for Levantine developed at the Linguistic Data Consortium (Maamouri et al., 2006). Changes were done only to the representations of linguistic knowledge at the four levels discussed in Section 5, not to the processing engine. Morphological Behavior Classes: The MBCs are variant-independent, so in theory no changes needed to be implemented. However, as Levantine is our first dialect, we expand the MBCs to include two AMs not found in MSA: the aspectual particle and the postfix negation marker. Abstract Morpheme Ordering: The contextfree grammar representing the ordering of AMs needed to be extended to order the two new AMs, which was straightforward. Mapping Abstract to Concrete Morphemes: This step requires four types of changes to a table representing this mapping. In the first category, the new AMs require mapping to CMs. Second, those AMs which do not exist in Levantine need to be mapped to zero (or to an error value). These are dual number, and subjunctive and jussive moods. Third, in Levantine some AMs allow additional CMs in allomorphic variation with the same CMs as seen in MSA. This affects three object clitics; for example, the second person masculine plural, in addition to  +kum (also found in MSA), also can be (   +kuwA. Fourth, in five cases, the subject suffix in the imperfective is simply different for Levantine. For example, the second person feminine singular indicative imperfective suffix changes from  + +iyna in MSA to  + +iy in Levantine. Note that more changes in CMs would be required were we completely modeling Levantine phonology (i.e., including the short vowels). Morphological, Phonological, and Orthographic Rules. We needed to change one rule, and add one. In MSA, the vowel between the second and third radical is deleted when they are identical (“gemination”) only if the third radical is followed by a suffix starting with a vowel. In Levantine, in contrast, gemination always happens, independently of the suffix. If the suffix starts with a consonant, a long /e/ is inserted after the third radical. The new rule deletes the first person singular subject prefix for the imperfective, + ( A+, when it is preceded by the aspectual marker +  b+. We summarize now the expertise required to convert MSA resources to Levantine, and we comment on the amount of work needed for adding a further dialect. We modified the MBC hierarchy, but only minor changes were needed. We expect only one major further change to the MBCs, namely the addition of an indirect object clitic (since the indirect object in some dialects is sometimes represented as an orthographic clitic). The AM ordering can be read off from examples in a fairly straightforward manner; the introduction of an indirect object AM would, for example, require an extension of the ordering specification. The mapping from AMs to CMs, which is variantspecific, can be obtained easily from a linguistically trained (near-)native speaker or from a grammar handbook, and with a little more effort from an informant. Finally, the rules, which again can be variant-specific, require either a good morphophonological treatise for the dialect, a linguistically trained (near-)native speaker, or extensive access to an informant. 
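One way to picture the variant-specific part of the abstract-to-concrete morpheme mapping is as a base MSA table plus a small set of dialect overrides, as in the sketch below. The table keys and the Python encoding are our own shorthand (following the bracketed AM notation of Section 5), and only a few of the Levantine differences mentioned above are included.

MSA_CM = {
    "[OBJ:2MP]":        ["+kum"],     # 2nd masc. plural object clitic
    "[SUBJSUF_IV:2FS]": ["+iyna"],    # 2nd fem. sing. indicative imperfective subject suffix
}

# Levantine reuses the MSA table except where overridden.
LEV_OVERRIDES = {
    "[OBJ:2MP]":        ["+kum", "+kuwA"],   # additional allomorph in Levantine
    "[SUBJSUF_IV:2FS]": ["+iy"],             # different concrete morpheme
    "[ASP:b]":          ["b+"],              # new AM: the aspectual particle
}

def concrete_morphemes(am, dialect="msa"):
    # Look up the concrete realizations of an abstract morpheme for a given variant;
    # an empty list would mark an AM that does not exist in that variant.
    if dialect == "lev" and am in LEV_OVERRIDES:
        return LEV_OVERRIDES[am]
    return MSA_CM.get(am, [])

print(concrete_morphemes("[SUBJSUF_IV:2FS]", "msa"))  # ['+iyna']
print(concrete_morphemes("[SUBJSUF_IV:2FS]", "lev"))  # ['+iy']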
In our case, the entire conversion from MSA to Levantine was performed by a native speaker linguist in about six hours. 8 Evaluation The goal of the evaluation is primarily to investigate how reduced lexical resources affect the performance of morphological analysis, as we will not have complete lexicons for the dialects. A second goal is to validate MAGEAD in analysis mode by comparing it to the Buckwalter analyzer (Buckwalter, 2004) when MAGEAD has a full lexicon at its disposal. Because of the lack of resources for the dialects, we use primarily MSA for both goals, but we also discuss a more modest evaluation on a 685 Levantine corpus. We first discuss the different sources of lexical knowledge, and then present our evaluation metrics. We then separately evaluate MSA and Levantine morphological analysis. 8.1 Lexical Knowledge Sources We evaluate the following sources of lexical knowledge on what roots, i.e, combinations of radicals, are possible. Except for all, these are lists of attested verbal roots. It is not a trivial task to compile a list of verbal roots for MSA, and we compare different sources for these lists. all: All radical combinations are allowed, we use no lexical knowledge at all. dar: List of roots extracted by (Darwish, 2003) from Lisan Al’arab, a large Arabic dictionary. bwl: A list of roots appearing as comments in the Buckwalter lexicon (Buckwalter, 2004). lex: Roots extracted by us from the list of lexeme citation forms in the Buckwalter lexicon using surfacy heuristics for quick-and-dirty morphological analysis. mbc: This is the same list as lex, except that we pair each root with the MBCs with which it was seen in the Buckwalter lexicon (recall that for us, a lexeme is a root with an MBC). Note that mbc represents a full lexicon, though it was converted automatically from the Buckwalter lexicon and it has not been hand-checked. 8.2 Test Corpora and Metrics For development and testing purposes, we use MSA and Levantine. For MSA, we use the Penn Arabic Treebank (ATB) (Maamouri et al., 2004). The morphological annotation we use is the “before-file”, which lists the untokenized words (as they appear in the Arabic original text) and all possible analyses according to the Buckwalter analyzer (Buckwalter, 2004). The analysis which is correct for the given token in its context is marked; sometimes, it is also hand-corrected (or added by hand), while the contextually incorrect analyses are never hand-corrected. For development, we use ATB1 section 20000715, and for testing, Sections 20001015 and 20001115 (13,885 distinct verbal types). For Levantine, we use a similarly annotated corpus, the Levantine Arabic Treebank (LATB) from the Linguistic Data Consortium. However, there are three major differences: the text is transcribed speech, the corpus is much smaller, and, since, there is no morphological analyzer for Levantine currently, the before-files are the result of running the MSA Buckwalter analyzer on the Levantine token, with many of the analyses incorrect, and only the analysis chosen for the token in context usually hand-corrected. We use LATB files fsa 16* for development, and for testing, files fsa 17*, fsa 18* (14 conversations, 3,175 distinct verbal types). We evaluate using three different metrics. The token-based metrics are the corresponding typebased metric weighted by the number of occurrences of the type in the test corpus. Recall (TyR for type recall, ToR for token recall): what proportion of the analyses in the gold standard does MAGEAD get? 
Precision (TyP for type precision, ToP for token precision): what proportion of the analyses that MAGEAD gets are also in the gold standard? Context token recall (CToR): how often does MAGEAD get the contextually correct analysis for that token? We do not give context precision figures, as MAGEAD does not determine the contextually correct analysis (this is a tagging problem). Rather, we interpret the context recall figures as a measure of how often MAGEAD gets the most important of the analyses (i.e., the correct one) for each token. Roots TyR TyP ToR ToP CToR all 21952 98.5 44.8 98.6 36.9 97.9 dar 10377 98.1 50.5 98.3 43.3 97.7 bwl 6450 96.7 52.2 97.2 42.9 96.7 lex 3658 97.3 55.6 97.3 49.2 97.5 mbc 3658 96.1 63.5 95.8 59.4 96.4 Figure 1: Results comparing MAGEAD to the Buckwalter Analyzer on MSA for different root restrictions, and for different metrics; “Roots”indicates the number of possible roots for that restriction; all numbers are percent figures 8.3 Quantitative Analysis: MSA The results are summarized in Figure 1. We see that we get a (rough) recall-precision trade-off, both for types and for tokens: the more restrictive we are, the higher our precision, but recall declines. For all, we get excellent recall, and an overgeneration by a factor of only 2. This performance, assuming it is roughly indicative of dialect performance, allows us to conclude that we can use MAGEAD as a dialect morphological analyzer without a lexicon. For the root lists, we see that precision is al686 ways higher than for all, as many false analyses are eliminated. At the same time, some correct analyses are also eliminated. Furthermore, bwl under performs somewhat. The change from lex to mbc is interesting, as mbc is a true lexicon (since it does not only state which roots are possible, but also what their MBC is). Precision increases substantially, but not as much as we had hoped. We investigate the errors of mbc in the next subsection in more detail. 8.4 Qualitative Analysis: MSA The gold standard we are using has been generated automatically using the Buckwalter analyzer. Only the contextually correct analysis has been hand-checked. As a result, our quantitative analysis in Section 8.3 leaves open the question of how good the gold standard is in the first place. We analyzed all of the 2,536 false positives (types) produced by MAGEAD on our development set (analyses it suggested, but which the Test corpus did not have). In 75% of the errors, the Buckwalter analyzer does not provide a passive voice analysis which differs from the active voice one only in diacritics which are not written. 7% are cases where Buckwalter does not make distinctions that MAGEAD makes (e.g. mood variations that are not phonologically realized); in 4.4% of the errors a correct analysis was created but it was not produced by Buckwalter for various reasons. If we count these cases as true positives rather than as false positives (as in the case in Figure 1) and take type frequency into account, we obtain a token precision rate of 94.9% on the development set. The remaining cases are MAGEAD errors. 3.3% are missing rules to handle special cases such as jussive mood interaction with weak radicals; 5.4% are incorrect combinations of morphemes such as passive voice and object pronouns; 2.6% of the errors are cases of pragmatic overgeneration such as second person masculine subjects with a second person feminine plural object. 1.5% of the errors are errors of the mbc-root list and 1.2% are other errors. 
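The type- and token-level metrics used above can be restated as the short sketch below, where both the gold standard and the analyzer output map each surface type to its set of analyses and freq gives the corpus token count of each type. The data structures and names are ours; the sketch assumes every test type receives at least one analysis from both sources.

def pr_scores(gold, system, freq):
    # gold, system: surface type -> set of analyses; freq: surface type -> token count.
    # Type-level scores count each (type, analysis) pair once; token-level scores
    # weight every pair by the type's corpus frequency.
    ty_hit = ty_gold = ty_sys = to_hit = to_gold = to_sys = 0
    for t, gold_analyses in gold.items():
        sys_analyses = system.get(t, set())
        hit = len(gold_analyses & sys_analyses)
        ty_hit += hit
        ty_gold += len(gold_analyses)
        ty_sys += len(sys_analyses)
        to_hit += freq[t] * hit
        to_gold += freq[t] * len(gold_analyses)
        to_sys += freq[t] * len(sys_analyses)
    return {"TyR": ty_hit / ty_gold, "TyP": ty_hit / ty_sys,
            "ToR": to_hit / to_gold, "ToP": to_hit / to_sys}

Context token recall is computed analogously, but per token in context, checking only whether the single hand-marked analysis appears among the analyses the system returns.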
A large number of these errors are fixable errors. There were 162 false negatives (gold standard analyses MAGEAD did not get). 65.4% of these errors were a result of the use of the mbc list restriction. The rest of the errors are all a result of unhandled phenomena in MAGEAD: quadrilateral roots (13.6%), imperatives (8%), and specific missing rules/ rule failures (13%) (e.g., for handling some weak radicals/hamza cases, pattern IX gemination-like behavior, etc.). We conclude that we can claim that our precision numbers are actually much higher, and that we can further improve them by adding more rules and knowledge to MAGEAD. 8.5 Quantitative and Qualitative Analysis: Levantine For the Levantine, we do not have a list of all possible analyses for each word in the gold standard: only the contextually appropriate analysis is hand-checked. We therefore only report context recall in Figure 2. As a baseline, we report the MSA MAGEAD with the all restriction applied to the same Levantine test corpus. As we can see, the MSA system performs poorly on Levantine input. The Levantine system we use is the one described in Section 7. We use the resulting analyzer with the all option as we have no information on roots in Levantine. MAGEAD with Levantine knowledge does well, missing only one in 20 contextually correct analyses. We take this to mean that the architecture of MAGEAD allows us to port MAGEAD fairly rapidly to a new dialect and to perform adequately well on the most important analysis for each token, the contextually relevant one. System CTyR CToR MSA-all 52.9 60.4 LEV-all 95.4 94.2 Figure 2: Results on Levantine; MSA-all is a baseline For the Levantine MAGEAD, there were 25 errors, cases of contextually selected analyses that MAGEAD did not get (false negatives). Most of these are related to phenomena that MAGEAD doesn’t currently handle: imperatives (48%) (which are much more common in speech corpora) and quadrilateral roots (8%). There were four cases (16%) of an unhandled variant spelling of an object pronoun and 7 cases (28%) of hamza/weak radical rule errors. 9 Outlook We have described a morphological analyzer for Arabic and its dialects which decomposes word forms into the templatic morphemes and relates 687 morphemes to strings. We have evaluated the current state of the implementation both for MSA and for Levantine, both quantitatively and in a detailed error analysis, and have shown that we have met our design objectives of having a flexible analyzer which can be used on a new dialect in the absence of a lexicon and with a restrained amount of manual knowledge engineering needed. In ongoing work, we are populating MAGEAD with more knowledge (morphemes and rules) for MSA nouns and other parts of speech, for more of Levantine, and for more dialects. We intend to include a full phonological representation for Levantine (including short vowels). In future work, we will investigate the derivation of words with morphemes from more than one variant (code switching). We will also investigate ways of using morphologically tagged corpora to assign weights to the arcs in the transducer so that the analyses returned by MAGEAD are ranked. References Imad A. Al-Sughaiyer and Ibrahim A. Al-Kharashi. 2004. Arabic morphological analysis techniques: A comprehensive survey. Journal of the American Society for Information Science and Technology, 55(3):189–213. K. Beesley, T. Buckwalter, and S. Newton. 1989. Twolevel finite-state analysis of Arabic morphology. 
In Proceedings of the Seminar on Bilingual Computing in Arabic and English, page n.p. K. Beesley. 1998. Arabic morphology using only finite-state operations. In M. Rosner, editor, Proceedings of the Workshop on Computational Approaches to Semitic Languages, pages 50–7, Montereal. S. Bird and T. Ellison. 1994. One-level phonology. Computational Linguistics, 20(1):55–90. Tim Buckwalter. 2004. Buckwalter Arabic morphological analyzer version 2.0. Kareem Darwish. 2003. Building a shallow Arabic morphological analyser in one day. In ACL02 Workshop on Computational Approaches to Semitic Languages, Philadelpia, PA. Association for Computational Linguistics. Charles F Ferguson. 1959. Diglossia. Word, 15(2):325–340. Nizar Habash, Owen Rambow, and Geroge Kiraz. 2005. Morphological analysis and generation for arabic dialects. In Proceedings of the ACL Workshop on Computational Approaches to Semitic Languages, Ann Arbor, MI. Nizar Habash, Owen Rabmow, and Richard Sproat. 2006. The representation of linguistic knowledge in a pan-Arabic morphological analyzer. Paper under preparation, Columbia University and UIUC. Nizar Habash. 2004. Large scale lexeme based arabic morphological generation. In Proceedings of Traitement Automatique du Langage Naturel (TALN-04). Fez, Morocco. L. Kataja and K. Koskenniemi. 1988. Finite state description of Semitic morphology. In COLING-88: Papers Presented to the 12th International Conference on Computational Linguistics, volume 1, pages 313–15. Martin Kay. 1987. Nonconcatenative finite-state morphology. In Proceedings of the Third Conference of the European Chapter of the Association for Computational Linguistics, pages 2–10. George Anton Kiraz. 2000. Multi-tiered nonlinear morphology using multi-tape finite automata: A case study on Syriac and Arabic. Computational Linguistics, 26(1):77–105. George Kiraz. 2001. Computational Nonlinear Morphology: With Emphasis on Semitic Languages. Cambridge University Press. A. Kornai. 1995. Formal Phonology. Garland Publishing. K. Koskenniemi. 1983. Two-Level Morphology. Ph.D. thesis, University of Helsinki. Mohamed Maamouri, Ann Bies, and Tim Buckwalter. 2004. The Penn Arabic Treebank: Building a largescale annotated arabic corpus. In NEMLAR Conference on Arabic Language Resources and Tools, Cairo, Egypt. Mohamed Maamouri, Ann Bies, Tim Buckwalter, Mona Diab, Nizar Habash, Owen Rambow, and Dalila Tabessi. 2006. Developing and using a pilot dialectal arabic treebank. In Proceedings of LREC, Genoa, Italy. John McCarthy. 1981. A prosodic theory of nonconcatenative morphology. Linguistic Inquiry, 12(3):373–418. M. Mohri, F. Pereira, and M. Riley. 1998. A rational design for a weighted finite-state transducer library. In D. Wood and S. Yu, editors, Automata Implementation, Lecture Notes in Computer Science 1436, pages 144–58. Springer. S. Pulman and M. Hepple. 1993. A feature-based formalism for two-level phonology: a description and implementation. Computer Speech and Language, 7:333–58. Richard Sproat. 1995. Lextools: Tools for finitestate linguistic analysis. Technical Report 11522951108-10TM, Bell Laboratories. 688
Proceedings of the 21st International Conference on Computational Linguistics and 44th Annual Meeting of the ACL, pages 689–696, Sydney, July 2006. c⃝2006 Association for Computational Linguistics Noun Phrase Chunking in Hebrew Influence of Lexical and Morphological Features Yoav Goldberg and Meni Adler and Michael Elhadad Computer Science Department Ben Gurion University of the Negev P.O.B 653 Be'er Sheva 84105, Israel {yoavg,adlerm,elhadad}@cs.bgu.ac.il Abstract We present a method for Noun Phrase chunking in Hebrew. We show that the traditional definition of base-NPs as nonrecursive noun phrases does not apply in Hebrew, and propose an alternative definition of Simple NPs. We review syntactic properties of Hebrew related to noun phrases, which indicate that the task of Hebrew SimpleNP chunking is harder than base-NP chunking in English. As a confirmation, we apply methods known to work well for English to Hebrew data. These methods give low results (F from 76 to 86) in Hebrew. We then discuss our method, which applies SVM induction over lexical and morphological features. Morphological features improve the average precision by ~0.5%, recall by ~1%, and F-measure by ~0.75, resulting in a system with average performance of 93% precision, 93.4% recall and 93.2 Fmeasure.* 1 Introduction Modern Hebrew is an agglutinative Semitic language, with rich morphology. Like most other non-European languages, it lacks NLP resources and tools, and specifically there are currently no available syntactic parsers for Hebrew. We address the task of NP chunking in Hebrew as a * This work was funded by the Israel Ministry of Science and Technology under the auspices of the Knowledge Center for Processing Hebrew. Additional funding was provided by the Lynn and William Frankel Center for Computer Sciences. first step to fulfill the need for such tools. We also illustrate how this task can successfully be approached with little resource requirements, and indicate how the method is applicable to other resource-scarce languages. NP chunking is the task of labelling noun phrases in natural language text. The input to this task is free text with part-of-speech tags. The output is the same text with brackets around base noun phrases. A base noun phrase is an NP which does not contain another NP (it is not recursive). NP chunking is the basis for many other NLP tasks such as shallow parsing, argument structure identification, and information extraction We first realize that the definition of base-NPs must be adapted to the case of Hebrew (and probably other Semitic languages as well) to correctly handle its syntactic nature. We propose such a definition, which we call simple NPs and assess the difficulty of chunking such NPs by applying methods that perform well in English to Hebrew data. While the syntactic problem in Hebrew is indeed more difficult than in English, morphological clues do provide additional hints, which we exploit using an SVM learning method. The resulting method reaches performance in Hebrew comparable to the best results published in English. 2 Previous Work Text chunking (and NP chunking in particular), first proposed by Abney (1991), is a well studied problem for English. The CoNLL2000 shared task (Tjong Kim Sang et al., 2000) was general chunking. 
The best result achieved for the shared task data was by Zhang et al (2002), who achieved NP chunking results of 94.39% precision, 94.37% recall and 94.38 F-measure using a 689 generalized Winnow algorithm, and enhancing the feature set with the output of a dependency parser. Kudo and Matsumoto (2000) used an SVM based algorithm, and achieved NP chunking results of 93.72% precision, 94.02% recall and 93.87 F-measure for the same shared task data, using only the words and their PoS tags. Similar results were obtained using Conditional Random Fields on similar features (Sha and Pereira, 2003). The NP chunks in the shared task data are base-NP chunks – which are non-recursive NPs, a definition first proposed by Ramshaw and Marcus (1995). This definition yields good NP chunks for English, but results in very short and uninformative chunks for Hebrew (and probably other Semitic languages). Recently, Diab et al (2004) used SVM based approach for Arabic text chunking. Their chunks data was derived from the LDC Arabic TreeBank using the same program that extracted the chunks for the shared task. They used the same features as Kudo and Matsumoto (2000), and achieved over-all chunking performance of 92.06% precision, 92.09% recall and 92.08 F-measure (The results for NP chunks alone were not reported). Since Arabic syntax is quite similar to Hebrew, we expect that the issues reported below apply to Arabic results as well. 3 Hebrew Simple NP Chunks The standard definition of English base-NPs is any noun phrase that does not contain another noun phrase, with possessives treated as a special case, viewing the possessive marker as the first word of a new base-NP (Ramshaw and Marcus, 1995). To evaluate the applicability of this definition to Hebrew, we tested this definition on the Hebrew TreeBank (Sima’an et al, 2001) published by the Hebrew Knowledge Center. We extracted all base-NPs from this TreeBank, which is similar in genre and contents to the English one. This results in extremely simple chunks. English BaseNPs Hebrew BaseNPs Hebrew SimpleNPs Avg # of words 2.17 1.39 2.49 % length 1 30.95 63.32 32.83 % length 2 39.35 35.48 32.12 % length 3 18.68 0.83 14.78 % length 4 6.65 0.16 9.47 % length 5 2.70 0.16 4.56 % length > 5 1.67 0.05 6.22 Table 1. Size of Hebrew and English NPs Table 1 shows the average number of words in a base-NP for English and Hebrew. The Hebrew chunks are basically one-word groups around Nouns, which is not useful for any practical purpose, and so we propose a new definition for Hebrew NP chunks, which allows for some nestedness. We call our chunks Simple NP chunks. 3.1 Syntax of NPs in Hebrew One of the reasons the traditional base-NP definition fails for the Hebrew TreeBank is related to syntactic features of Hebrew – specifically, smixut (construct state – used to express noun compounds), definite marker and the expression of possessives. These differences are reflected to some extent by the tagging guidelines used to annotate the Hebrew Treebank and they result in trees which are in general less flat than the Penn TreeBank ones. Consider the example base noun phrase [The homeless people]. The Hebrew equivalent is (1)      which by the non-recursive NP definition will be bracketed as:             , or, loosely translating back to English: [the home]less [people]. In this case, the fact that the bound-morpheme less appears as a separate construct state word with its own definite marker (ha-) in Hebrew would lead the chunker to create two separate NPs for a simple expression. 
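To make the extraction step concrete, the sketch below reads base-NPs (NPs that dominate no other NP) off a constituency tree. The tree encoding as nested (label, children) tuples and the function names are ours, and the English example is purely illustrative; this is not the program used to derive the shared-task or TreeBank chunks.

def leaves(node):
    # Collect the terminal words under a node.
    if isinstance(node, str):
        return [node]
    _, children = node
    return [w for c in children for w in leaves(c)]

def base_nps(node):
    # Return (chunks, has_np): the word spans of all base-NPs in the subtree,
    # and whether the subtree contains any NP at all.
    if isinstance(node, str):
        return [], False
    label, children = node
    chunks, has_np = [], False
    for child in children:
        c_chunks, c_has = base_nps(child)
        chunks += c_chunks
        has_np = has_np or c_has
    if label == "NP":
        if not has_np:              # no NP below: this NP is itself a base-NP
            chunks = [leaves(node)]
        return chunks, True
    return chunks, has_np

tree = ("S", [("NP", [("NP", ["the", "prime", "minister"]),
                      ("PP", ["of", ("NP", ["Israel"])])]),
              ("VP", ["spoke"])])
print(base_nps(tree)[0])   # [['the', 'prime', 'minister'], ['Israel']]

Applied to the Hebrew TreeBank, where the equivalent structures are far less flat, this procedure is what yields the very short chunks reported in Table 1.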
We present below syntactic properties of Hebrew which are relevant to NP chunking. We then present our definition of Simple NP Chunks. Construct State: The Hebrew genitive case is achieved by placing two nouns next to each other. This is called “noun construct”, or smixut. The semantic interpretation of this construct is varied (Netzer and Elhadad, 1998), but it specifically covers possession. The second noun can be treated as an adjective modifying the next noun. The first noun is morphologically marked in a form known as the construct form (denoted by const). The definite article marker is placed on the second word of the construction: (2)  beit sefer / house-[const] book School (3)  beit ha-sefer / house-[const] the-book The school The construct form can also be embedded: (4)     690 misrad ro$ ha-mem$ala Office-[const poss] head-[const] the-government The prime-minister’s office Possessive: the smixut form can be used to indicate possession. Other ways to express possession include the possessive marker  - ‘$el’ / ‘of’ - (5), or adding a possessive suffix on the noun (6). The various forms can be mixed together, as in (7): (5)   ha-bait $el-i / the-house of-[poss 1st person] My house (6)  beit-i / house-[poss 1st person] My house (7)        misrad-o $el ro$ ha-mem$ala Office-[poss 3rd] of head-[const] the-government The prime minister office Adjective: Hebrew adjectives come after the noun, and agree with it in number, gender and definite marker: (8)  ha-tapu’ah ha-yarok / the-Apple the-green The green apple Some aspects of the predicate structure in Hebrew directly affect the task of NP chunking, as they make the decision to “split” NPs more or less difficult than in English. Word order and the preposition 'et': Hebrew sentences can be either in SVO or VSO form. In order to keep the object separate from the subject, definite direct objects are marked with the special preposition 'et', which has no analog in English. Possible null equative: The equative form in Hebrew can be null. Sentence (9) is a non-null equative, (10) a null equative, while (11) and (12) are predicative NPs, which look very similar to the null-equative form: (9)    ha-bait hu gadol The-house is big The house is big (10)   ha-bait gadol The-house big The house is big (11)   bait gadol House big A big house (12)   ha-bait ha-gadol The-house the-big The big house Morphological Issues: In Hebrew morphology, several lexical units can be concatenated into a single textual unit. Most prepositions, the definite article marker and some conjunctions are concatenated as prefixes, and possessive pronouns and some adverbs are concatenated as suffixes. The Hebrew Treebank is annotated over a segmented version of the text, in which prefixes and suffixes appear as separate lexical units. On the other hand, many bound morphemes in English appear as separate lexical units in Hebrew. For example, the English morphemes re-, ex-, un-, -less, -like, -able, appear in Hebrew as separate lexical units – ,  ,   , , , , .      In our experiment, we use as input to the chunker the text after it has been morphologically disambiguated and segmented. Our analyzer provides segmentation and PoS tags with 92.5% accuracy and full morphology with 88.5% accuracy (Adler and Elhadad, 2006). 3.2 Defining Simple NPs Our definition of Simple NPs is pragmatic. 
We want to tag phrases that are complete in their syntactic structure, avoid the requirement of tagging recursive structures that include full clauses (relative clauses for example) and in general, tag phrases that have a simple denotation. To establish our definition, we start with the most complex NPs, and break them into smaller parts by stating what should not appear inside a Simple NP. This can be summarized by the following table: Outside SimpleNP Exceptions Prepositional Phrases Relative Clauses Verb Phrases Apposition1 Some conjunctions (Conjunctions are marked according to the TreeBank guidelines)2. % related PPs are allowed:       5% of the sales Possessive  - '$el' / 'of' - is not considered a PP Table 2. Definition of Simple NP chunks Examples for some Simple NP chunks resulting from that definition: 1 Apposition structure is not annotated in the TreeBank. As a heuristic, we consider every comma inside a non conjunctive NP which is not followed by an adjective or an adjective phrase to be marking the beginning of an apposition. 2 As a special case, Adjectival Phrases and possessive conjunctions are considered to be inside the Simple NP. 691                           [This phenomenon] was highlighted yesterday at [the labor and welfare committee-const of the Knesset] that dealt with [the topic-const of foreign workers employment-const].                               3      [The employers] do not expect to succeed in attracting [a significant number of Israeli workers] for [the fruit-picking] because of [the low salaries] paid for [this work]. This definition can also yield some rather long and complex chunks, such as:                     [The conquests of Genghis Khan and his Mongol Tartar army]          !                                         !             According to [reports of local government officials], [factories] on [Tartar territory] earned in [the year] that passed [a sum of 3.7 billion Rb (2.2 billion dollars)], which [Moscow] took [almost all]. Note that Simple NPs are split, for example, by the preposition ‘on’ ([factories] on [Tartar territory]), and by a relative clause ([a sum of 3.7Bn Rb] which [Moscow] took [almost all]). 3.3 Hebrew Simple NPs are harder than English base NPs The Simple NPs derived from our definition are highly coherent units, but are also more complex than the non-recursive English base NPs. As can be seen in Table 1, our definition of Simple NP yields chunks which are on average considerably longer than the English chunks, with about 20% of the chunks with 4 or more words (as opposed to about 10% in English) and a significant portion (6.22%) of chunks with 6 or more words (1.67% in english). Moreover, the baseline used at the CoNLL shared task4 (selecting the chunk tag which was most frequently associated with the current PoS) 3 For readers familiar with Hebrew and feel that  is an adjective and should be inside the NP, we note that this is not the case –  here is actually a Verb in the Beinoni form and the definite marker is actually used as relative marker. 4 http://www.cnts.ua.ac.be/conll2000/chunking/ gives far inferior results for Hebrew SimpleNPs (see Table 3). 4 Chunking Methods 4.1 Baseline Approaches We have experimented with different known methods for English NP chunking, which resulted in poor results for Hebrew. We describe here our experiment settings, and provide the best scores obtained for each method, in comparison to the reported scores for English. All tests were done on the corpus derived from the Hebrew Tree Bank. 
The corpus contains 5,000 sentences, for a total of 120K tokens (agglutinated words) and 27K NP chunks (more details on the corpus appear below). The last 500 sentences were used as the test set, and all the other sentences were used for training. The results were evaluated using the CoNLL shared task evaluation tools 5 . The approaches tested were Error Driven Pruning (EDP) (Cardie and Pierce, 1998) and Transformational Based Learning of IOB tagging (TBL) (Ramshaw and Marcus, 1995). The Error Driven Pruning method does not take into account lexical information and uses only the PoS tags. For the Transformation Based method, we have used both the PoS tag and the word itself, with the same templates as described in (Ramshaw and Marcus, 1995). We tried the Transformational Based method with more features than just the PoS and the word, but obtained lower performance. Our best results for these methods, as well as the CoNLL baseline (BASE), are presented in Table 3. These results confirm that the task of Simple NP chunking is harder in Hebrew than in English. 4.2 Support Vector Machines We chose to adopt a tagging perspective for the Simple NP chunking task, in which each word is to be tagged as either B, I or O depending on wether it is in the Beginning, Inside, or Outside of the given chunk, an approach first taken by Ramshaw and Marcus (1995), and which has become the de-facto standard for this task. Using this tagging method, chunking becomes a classification problem – each token is predicted as being either I, O or B, given features from a predefined linguistic context (such as the 5http://www.cnts.ua.ac.be/conll2000/chunking/conllev al.txt 692 words surrounding the given word, and their PoS tags). One model that allows for this prediction is Support Vector Machines - SVM (Vapnik, 1995). SVM is a supervised machine learning algorithm which can handle gracefully a large set of overlapping features. SVMs learn binary classifiers, but the method can be extended to multiclass classification (Allwein et al., 2000; Kudo and Matsumoto, 2000). SVMs have been successfully applied to many NLP tasks since (Joachims, 1998), and specifically for base phrase chunking (Kudo and Matsumoto, 2000; 2003). It was also successfully used in Arabic (Diab et al., 2004). The traditional setting of SVM for chunking uses for the context of the token to be classified a window of two tokens around the word, and the features are the PoS tags and lexical items (word forms) of all the tokens in the context. Some settings (Kudo and Matsumoto, 2000) also include the IOB tags of the two “previously tagged” tokens as features (see Fig. 1). This setting (including the last 2 IOB tags) performs nicely for the case of Hebrew Simple NPs chunking as well. Linguistic features are mapped to SVM feature vectors by translating each feature such as “PoS at location n-2 is NOUN” or “word at location n+1 is DOG” to a unique vector entry, and setting this entry to 1 if the feature occurs, and 0 otherwise. This results in extremely large yet extremely sparse feature vectors. English BaseNPs Hebrew SimpleNPs Method Prec Rec Prec Rec F BASE 72.58 82.14 64.7 75.4 69.78 EDP 92.7 93.7 74.6 78.1 76.3 TBL 91.3 91.8 84.7 87.7 86.2 Table 3. Baseline results for Simple NP chunking SVM Chunking in Hebrew WORD POS CHUNK  NA B-NP  NOUN I-NP  PREP O  NAME B-NP PREP O  NA B-NP  NOUN I-NP Figure 1. Linguistic features considered in the basic SVM setting for Hebrew 4.3 Augmentation of Morphological Features Hebrew is a morphologically rich language. 
Recent PoS taggers and morphological analyzers for Hebrew (Adler and Elhadad, 2006) address this issue and provide for each word not only the PoS, but also full morphological features, such as Gender, Number, Person, Construct, Tense, and the affixes' properties. Our system, currently, computes these features with an accuracy of 88.5%. Our original intuition is that the difficulty of Simple NP chunking can be overcome by relying on morphological features in a small context. These features would help the classifier decide on agreement, and split NPs more accurately. Since SVMs can handle large feature sets, we utilize additional morphological features. In particular, we found the combination of the Number and the Construct features to be most effective in improving chunking results. Indeed, our experiments show that introducing morphological features improves chunking quality by as much as 3-point in F-measure when compared with lexical and PoS features only. 5 Experiment 5.1 The Corpus The Hebrew TreeBank6 consists of 4,995 hand annotated sentences from the Ha’aretz newspaper. Besides the syntactic structure, every word is PoS annotated, and also includes morphological features. The words in the TreeBank are segmented:     (instead of   ). Our morphological analyzer also provides such segmentation. We derived the Simple NPs structure from the TreeBank using the definition given in Section 3.2. We then converted the original Hebrew TreeBank tagset to the tagset of our PoS tagger. For each token, we specify its word form, its PoS, its morphological features, and its correct IOB tag. The result is the Hebrew Simple NP chunks corpus 7. The corpus consists of 4,995 sentences, 27,226 chunks and 120,396 segmented tokens. 67,919 of these tokens are covered by NP chunks. A sample annotated sentence is given in Fig. 2. 6http://mila.cs.technion.ac.il/website/english/resources /corpora/treebank/index.html 7 http://www.cs.bgu.ac.il/~nlpproj/chunking Feature Set Estimated Tag 693  PREPOSITION NA NA N NA N NA N NA NA O  DEF_ART NA NA N NA N NA N NA NA B-NP  NOUN M S N NA N NA N NA NA I-NP  AUXVERB M S N 3 N PAST N NA NA O  ADJECTIVE M S N NA N NA N NA NA O   ADVERB NA NA N NA N NA N NA NA O  VERB NA NA N NA Y TOINF N NA NA O  ET_PREP NA NA N NA N NA N NA NA B-NP  DEF_ART NA NA N NA N NA N NA NA I-NP  NOUN F S N NA N NA N NA NA I-NP . PUNCUATION NA NA N NA N NA N NA NA O Figure 2. A Sample annotated sentence 5.2 Morphological Features: The PoS tagset we use consists of 22 tags: ADJECTIVE ADVERB ET_PREP AUXVERB CONJUNCTION DEF_ART DETERMINER EXISTENTIAL INTERJECTION INTEROGATIVE MODAL NEGATION PARTICLE NOUN NUMBER PRONOUN PREFIX PREPOSITION UNKNOWN PROPERNAME PUNCTUATION VERB For each token, we also supply the following morphological features (in that order): Feature Possible Values Gender (M)ale, (F)emale, (B)oth (unmarked case), (NA) Number (S)ingle, (P)lurar, (D)ual, can be (ALL), (NA) Construct (Y)es, (N)o Person (1)st, (2)nd, (3)rd, (123)all, (NA) To-Infinitive (Y)es, (N)o Tense Past, Present, Future, Beinoni, Imperative, ToInf, BareInf (has) Suffix (Y)es, (N)o Suffix-Num (M)ale, (F)emale, (B)oth, (NA) Suffix-Gen (S)ingle, (P)lurar, (D)ual, (DP)dual plural, can be (ALL), (NA) As noted in (Rambow and Habash 2005), one cannot use the same tagset for a Semitic language as for English. The tagset we have derived has been extensively validated through manual tagging by several testers and crosschecked for agreement. 
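To make the feature representation concrete, the following Python sketch (not the authors' code) extracts WPNC-style features for one token: word and PoS in a -2/+2 window, the Number and Construct values of the same tokens, and the two previously assigned IOB tags, each string feature then being mapped to one entry of a very large, sparse binary vector. The dictionary representation of the analyzer output, the padding symbols and the transliterated toy sentence are illustrative assumptions; at training time the "previously tagged" IOB features come from the gold annotation, at test time from the classifier's own earlier decisions (forward moving tagging).

def wpnc_features(tokens, i, prev_iob):
    """Word, PoS, Number and Construct features in a -2/+2 window around
    position i, plus the two previously assigned IOB tags (the WPNC setting)."""
    pad = {"word": "<PAD>", "pos": "<PAD>", "num": "NA", "const": "N"}
    feats = []
    for off in range(-2, 3):
        j = i + off
        t = tokens[j] if 0 <= j < len(tokens) else pad
        feats.append(f"word[{off}]={t['word']}")
        feats.append(f"pos[{off}]={t['pos']}")
        feats.append(f"num[{off}]={t['num']}")      # S / P / D / ALL / NA
        feats.append(f"const[{off}]={t['const']}")  # Y / N
    feats.append(f"iob[-2]={prev_iob[0]}")
    feats.append(f"iob[-1]={prev_iob[1]}")
    return feats

def to_sparse(feats, index, grow=True):
    """Map string features to column indices of a sparse binary vector."""
    cols = []
    for f in feats:
        if f not in index:
            if not grow:
                continue                 # unseen feature at test time: skip it
            index[f] = len(index)
        cols.append(index[f])
    return sorted(cols)

# Toy segmented sentence: "beit ha-sefer ha-gadol" (the big school), one chunk.
sent = [
    {"word": "beit",  "pos": "NOUN",      "num": "S",  "const": "Y"},
    {"word": "ha",    "pos": "DEF_ART",   "num": "NA", "const": "N"},
    {"word": "sefer", "pos": "NOUN",      "num": "S",  "const": "N"},
    {"word": "ha",    "pos": "DEF_ART",   "num": "NA", "const": "N"},
    {"word": "gadol", "pos": "ADJECTIVE", "num": "S",  "const": "N"},
]
iob = ["B-NP", "I-NP", "I-NP", "I-NP", "I-NP"]
index, X = {}, []
for i in range(len(sent)):
    prev = (iob[i - 2] if i >= 2 else "<S>", iob[i - 1] if i >= 1 else "<S>")
    X.append(to_sparse(wpnc_features(sent, i, prev), index))
print(len(index), X[2])    # feature-space size so far, sparse columns for "sefer"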
5.3 Setup and Evaluation For all the SVM chunking experiments, we use the YamCha 8 toolkit (Kudo and Matsumoto, 2003). We use forward moving tagging, using standard SVM with polynomial kernel of degree 2, and C=1. For the multiclass classification, we 8 http://chasen.org/~taku/software/yamcha/ use pairwise voting. For all the reported experiments, we chose the context to be a –2/+2 tokens windows, centered at the current token. We use the standard metrics of accuracy (% of correctly tagged tokens), precision, recall and Fmeasure, with the only exception of normalizing all punctuation tokens from the data prior to evaluation, as the TreeBank is highly inconsistent regarding the bracketing of punctuations, and we don’t consider the exclusions/inclusions of punctuations from our chunks to be errors (i.e., “[a book ,] [an apple]” “[a book] , [an apple]” and “[a book] [, an apple]” are all equivalent chunkings in our view). All our development work was done with the first 500 sentences allocated for testing, and the rest for training. For evaluation, we used a 10fold cross-validation scheme, each time with different consecutive 500 sentences serving for testing and the rest for training. 5.4 Features Used We run several SVM experiments, each with the settings described in section 5.3, but with a different feature set. In all of the experiments the two previously tagged IOB tags were included in the feature set. In the first experiment (denoted WP) we considered the word and PoS tags of the context tokens to be part of the feature set. In the other experiments, we used different subsets of the morphological features of the tokens to enhance the features set. We found that good results were achieved by using the Number and Construct features together with the word and PoS tags (we denote this WPNC). Bad results were achieved when using all the morphological features together. The usefulness of feature sets was stable across all tests in the ten-fold cross validation scheme. 5.5 Results We discuss the results of the WP and WPNC experiments in details, and also provide the results for the WPG (using the Gender feature), and ALL (using all available morphological features) experiments, and P (using only PoS tags). As can be seen in Table 4, lexical information is very important: augmenting the PoS tag with lexical information boosted the F-measure from 77.88 to 92.44. The addition of the extra morphological features of Construct and Number yields another increase in performance, resulting in a final F-measure of 93.2%. Note that the effect of these morphological features on the overall accuracy (the number of BIO tagged cor694 rectly) is minimal (Table 5), yet the effect on the precision and recall is much more significant. It is also interesting to note that the Gender feature hurts performance, even though Hebrew has agreement on both Number and Gender. We do not have a good explanation for this observation – but we are currently verifying the consistency of the gender annotation in the corpus (in particular, the effect of the unmarked gender tag). We performed the WP and WPNC experiment on two forms of the corpus: (1) WP,WPNC using the manually tagged morphological features included in the TreeBank and (2) WPE, WPNCE using the results of our automatic morphological analyzer, which includes about 10% errors (both in PoS and morphological features). With the manual morphology tags, the final F-measure is 93.20, while it is 91.40 with noise. 
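For readers who want to reproduce a comparable setup without the YamCha toolkit, the following scikit-learn sketch mirrors the reported configuration: a polynomial kernel of degree 2, C=1, and pairwise (one-vs-one) voting for the three-way B/I/O decision. The tiny feature dictionaries stand in for the real windowed feature extraction and are illustrative only; this is a stand-in, not the toolchain actually used in the experiments.

from sklearn.feature_extraction import DictVectorizer
from sklearn.svm import SVC

# One dictionary of string features per token (in practice produced by the
# -2/+2 window extractor); DictVectorizer builds the sparse one-hot matrix.
train_feats = [
    {"word[0]": "ha",     "pos[0]": "DEF_ART", "pos[+1]": "NOUN"},
    {"word[0]": "bait",   "pos[0]": "NOUN",    "pos[-1]": "DEF_ART"},
    {"word[0]": "nimkar", "pos[0]": "VERB",    "pos[-1]": "NOUN"},
]
train_labels = ["B-NP", "I-NP", "O"]

vec = DictVectorizer()
X = vec.fit_transform(train_feats)

# Polynomial kernel of degree 2 with C=1; SVC resolves the multiclass decision
# with one-vs-one classifiers, i.e. pairwise voting.
clf = SVC(kernel="poly", degree=2, C=1.0)
clf.fit(X, train_labels)

test = vec.transform([{"word[0]": "etmol", "pos[0]": "ADVERB", "pos[-1]": "VERB"}])
print(clf.predict(test))   # predicted IOB tag for the held-out token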
Interestingly, the improvement brought by adding morphological features to chunking in the noisy case (WPNCE) is almost 3.0 F-measure points (as opposed to 0.758 for the "clean" morphology case WPNC). Features Acc Prec Rec F P 91.77 77.03 78.79 77.88 WP 97.49 92.54 92.35 92.44 WPE 94.87 89.14 87.69 88.41 WPG 97.41 92.41 92.22 92.32 ALL 96.68 90.21 90.60 90.40 WPNC 97.61 92.99 93.41 93.20 WPNCE 96.99 91.49 91.32 91.40 Table 4. SVM results for Hebrew Features Prec Rec F WPNC 0.456 1.058 0.758 WPNCE 2.35 3.60 2.99 Table 5. Improvement over WP 5.6 Error Analysis and the Effect of Morphological Features We performed detailed error analysis on the WPNC results for the entire corpus. At the individual token level, Nouns and Conjunctions caused the most confusion, followed by Adverbs and Adjectives. Table 6 presents the confusion matrix for all POSs with a substantial amount of errors. I   O means that the correct chunk tag was I, but the system classified it as O. By examining the errors on the chunks level, we identified 7 common classes of errors: Conjunction related errors: bracketing “[a] and [b]” instead of “[a and b]” and vice versa. Split errors: bracketing [a][b] instead of [a b] Merge errors: bracketing [a b] instead of [a][b] Short errors: bracketing “a [b]” or “[a] b” instead of [a b] Long errors: bracketing “[a b]” instead of “[a] b” or “a [b]” Whole Chunk errors: either missing a whole chunk, or bracketing something which doesn’t overlap with a chunk at all (extra chunk). Missing/ExtraToken errors: this is a generalized form of conjunction errors: either “[a] T [b]” instead of “[a T b]” or vice versa, where T is a single token. The most frequent of such words (other than the conjuncts) was    - the possessive '$el'. Table 6. WPNC Confusion Matrix The data in Table 6 suggests that Adverbs and Adjectives related errors are mostly of the “short” or “long” types, while the Noun (including proper names and pronouns) related errors are of the “split” or “merge” types. The most frequent error type was conjunction related, closely followed by split and merge. Much less significant errors were cases of extra Adverbs or Adjectives at the end of the chunk, and missing adverbs before or after the chunk. Conjunctions are a major source of errors for English chunking as well (Ramshaw and Marcus, 1995, Cardie and Pierce, 1998)9, and we plan to address them in future work. The split and merge errors are related to argument structure, which can be more complicated in Hebrew than in English, because of possible null equatives. The toolong and too-short errors were mostly attachment related. Most of the errors are related to linguistic phenomena that cannot be inferred by the localized context used in our SVM encoding. We examine the types of errors that the addition of 9 Although base-NPs are by definition non-recursive, they may still contain CCs when the coordinators are ‘trapped’: “[securities and exchange commission]” or conjunctions of adjectives. 695 Number and Construct features fixed. Table 7 summarizes this information. ERROR WP WPNC # Fixed % Fixed CONJUNCTION 256 251 5 1.95 SPLIT 198 225 -27 -13.64 MERGE 366 222 144 39.34 LONG (ADJ AFTER) 120 117 3 2.50 EXTRA CHUNK 89 88 1 1.12 LONG (ADV AFTER) 77 81 -4 -5.19 SHORT (ADV AFTER) 67 65 2 2.99 MISSING CHUNK 50 54 -4 -8.00 SHORT (ADV BEFORE) 53 48 5 9.43 EXTRA TOK 47 47 0 0.00 Table 7. 
Effect of Number and Construct information on most frequent error classes The error classes most affected by the number and construct information were split and merge – WPNC has a tendency of splitting chunks, which resulted in some unjustified splits, but compensates this by fixing over a third of the merging mistakes. This result makes sense – construct and local agreement information can aid in the identification of predicate boundaries. This confirms our original intuition that morphological features do help in identifying boundaries of NP chunks. 6 Conclusion and Future work We have noted that due to syntactic features such as smixut, the traditional definition of base NP chunks does not translate well to Hebrew and probably to other Semitic languages. We defined the notion of Simple NP chunks instead. We have presented a method for identifying Hebrew Simple NPs by supervised learning using SVM, providing another evidence for the suitability of SVM to chunk identification. We have also shown that using morphological features enhances chunking accuracy. However, the set of morphological features used should be chosen with care, as some features actually hurt performance. Like in the case of English, a large part of the errors were caused by conjunctions – this problem clearly requires more than local knowledge. We plan to address this issue in future work. References Meni Adler and Michael Elhadad, 2006. Unsupervised Morpheme-based HMM for Hebrew Morphological Disambiguation. In Proc. of COLING/ACL 2006, Sidney. Steven P. Abney. 1991. Parsing by Chunks. In Robert C. Berwick, Steven P. Abney, and Carol Tenny editors, Principle Based Parsing. Kluwer Academic Publishers. Erin L. Allwein, Robert E. Schapire, and Yoram Singer. 2000. Reducing Multiclass to Binary: A Unifying Approach for Margin Classifiers. Journal of Machine Learning Research, 1:113-141. Claire Cardie and David Pierce. 1998. Error-Driven Pruning of Treebank Grammars for Base Noun Phrase Identification. In Proc. of COLING-98, Montréal. Mona Diab, Kadri Hacioglu, and Daniel Jurafsky. 2004. Automatic Tagging of Arabic Text: From Raw Text to Base Phrase Chunks, In Proc. of HLT/NAACL 2004, Boston. Nizar Habash and Owen Rambow, 2005. Arabic Tokenization, Part-of-speech Tagging and Morphological Disambiguation in One Fell Swoop. In Proc. of ACL 2005, Ann Arbor. Thorsten Joachims. 1998. Text Categorization with Support Vector Machines: Learning with Many Relevant Features. In Proc. of ECML-98, Chemnitz. Taku Kudo and Yuji Matsumato. 2000. Use of Support Vector Learning for Chunk Identification. In Proc. of CoNLL-2000 and LLL-2000, Lisbon. Taku Kudo and Yuji Matsumato. 2003. Fast Methods for Kernel-Based Text Analysis. In Proc. of ACL 2003, Sapporo. Yael Netzer-Dahan and Michael Elhadad, 1998. Generation of Noun Compounds in Hebrew: Can Syntactic Knowledge be Fully Encapsulated? In Proc. of INLG-98, Ontario. Lance A. Ramshaw and Mitchel P. Marcus. 1995. Text Chunking Using Transformation-based Learning. In Proc. of the 3rd ACL Workshop on Very Large Corpora. Cambridge. Khalil Sima’an, Alon Itai, Yoad Winter, Alon Altman and N. Nativ, 2001. Building a Tree-bank of Modern Hebrew Text, in Traitement Automatique des Langues 42(2). Fei Sha and Fernando Pereira. 2003. Shallow Parsing with Conditional Random Fields. Technical Report CIS TR MS-CIS-02-35, University of Pennsylvania. Erik F. Tjong Kim Sang and Sabine Buchholz. 2000. Introduction to the CoNLL-2000 Shared Task: Chunking. In Proc. of CoNLL-2000 and LLL-2000, Lisbon. 
Vladimir Vapnik. 1995. The Nature of Statistical Learning Theory. Springer Verlag, New York, NY. Tong Zhang, Fred Damerau and David Johnson. 2002. Text Chunking based on a Generalization of Winnow. Journal of Machine Learning Research, 2: 615-637. 696
Proceedings of the 21st International Conference on Computational Linguistics and 44th Annual Meeting of the ACL, pages 697–704, Sydney, July 2006. c⃝2006 Association for Computational Linguistics Multi-Tagging for Lexicalized-Grammar Parsing James R. Curran School of IT University of Sydney NSW 2006, Australia [email protected] Stephen Clark Computing Laboratory Oxford University Wolfson Building Parks Road Oxford, OX1 3QD, UK [email protected] David Vadas School of IT University of Sydney NSW 2006, Australia [email protected] Abstract With performance above 97% accuracy for newspaper text, part of speech (POS) tagging might be considered a solved problem. Previous studies have shown that allowing the parser to resolve POS tag ambiguity does not improve performance. However, for grammar formalisms which use more fine-grained grammatical categories, for example TAG and CCG, tagging accuracy is much lower. In fact, for these formalisms, premature ambiguity resolution makes parsing infeasible. We describe a multi-tagging approach which maintains a suitable level of lexical category ambiguity for accurate and efficient CCG parsing. We extend this multitagging approach to the POS level to overcome errors introduced by automatically assigned POS tags. Although POS tagging accuracy seems high, maintaining some POS tag ambiguity in the language processing pipeline results in more accurate CCG supertagging. 1 Introduction State-of-the-art part of speech (POS) tagging accuracy is now above 97% for newspaper text (Collins, 2002; Toutanova et al., 2003). One possible conclusion from the POS tagging literature is that accuracy is approaching the limit, and any remaining improvement is within the noise of the Penn Treebank training data (Ratnaparkhi, 1996; Toutanova et al., 2003). So why should we continue to work on the POS tagging problem? Here we give two reasons. First, for lexicalized grammar formalisms such as TAG and CCG, the tagging problem is much harder. Second, any errors in POS tagger output, even at 97% acuracy, can have a significant impact on components further down the language processing pipeline. In previous work we have shown that using automatically assigned, rather than gold standard, POS tags reduces the accuracy of our CCG parser by almost 2% in dependency F-score (Clark and Curran, 2004b). CCG supertagging is much harder than POS tagging because the CCG tag set consists of finegrained lexical categories, resulting in a larger tag set – over 400 CCG lexical categories compared with 45 Penn Treebank POS tags. In fact, using a state-of-the-art tagger as a front end to a CCG parser makes accurate parsing infeasible because of the high supertagging error rate. Our solution is to use multi-tagging, in which a CCG supertagger can potentially assign more than one lexical category to a word. In this paper we significantly improve our earlier approach (Clark and Curran, 2004a) by adapting the forward-backward algorithm to a Maximum Entropy tagger, which is used to calculate a probability distribution over lexical categories for each word. This distribution is used to assign one or more categories to each word (Charniak et al., 1996). We report large increases in accuracy over single-tagging at only a small cost in increased ambiguity. A further contribution of the paper is to also use multi-tagging for the POS tags, and to maintain some POS ambiguity in the language processing pipeline. 
In particular, since POS tags are important features for the supertagger, we investigate how supertagging accuracy can be improved by not prematurely committing to a POS tag decision. Our results first demonstrate that a surprising in697 crease in POS tagging accuracy can be achieved with only a tiny increase in ambiguity; and second that maintaining some POS ambiguity can significantly improve the accuracy of the supertagger. The parser uses the CCG lexical categories to build syntactic structure, and the POS tags are used by the supertagger and parser as part of their statisical models. We show that using a multitagger for supertagging results in an effective preprocessor for CCG parsing, and that using a multitagger for POS tagging results in more accurate CCG supertagging. 2 Maximum Entropy Tagging The tagger uses conditional probabilities of the form P(y|x) where y is a tag and x is a local context containing y. The conditional probabilities have the following log-linear form: P(y|x) = 1 Z(x)e P i λifi(x,y) (1) where Z(x) is a normalisation constant which ensures a proper probability distribution for each context x. The feature functions fi(x, y) are binaryvalued, returning either 0 or 1 depending on the tag y and the value of a particular contextual predicate given the context x. Contextual predicates identify elements of the context which might be useful for predicting the tag. For example, the following feature returns 1 if the current word is the and the tag is DT; otherwise it returns 0: fi(x, y) = ( 1 if word(x) = the & y = DT 0 otherwise (2) word(x) = the is an example of a contextual predicate. The POS tagger uses the same contextual predicates as Ratnaparkhi (1996); the supertagger adds contextual predicates corresponding to POS tags and bigram combinations of POS tags (Curran and Clark, 2003). Each feature fi has an associated weight λi which is determined during training. The training process aims to maximise the entropy of the model subject to the constraints that the expectation of each feature according to the model matches the empirical expectation from the training data. This can be also thought of in terms of maximum likelihood estimation (MLE) for a log-linear model (Della Pietra et al., 1997). We use the L-BFGS optimisation algorithm (Nocedal and Wright, 1999; Malouf, 2002) to perform the estimation. MLE has a tendency to overfit the training data. We adopt the standard approach of Chen and Rosenfeld (1999) by introducing a Gaussian prior term to the objective function which penalises feature weights with large absolute values. A parameter defined in terms of the standard deviation of the Gaussian determines the degree of smoothing. The conditional probability of a sequence of tags, y1, . . . , yn, given a sentence, w1, . . . , wn, is defined as the product of the individual probabilities for each tag: P(y1, . . . , yn|w1, . . . , wn) = n Y i=1 P(yi|xi) (3) where xi is the context for word wi. We use the standard approach of Viterbi decoding to find the highest probability sequence. 2.1 Multi-tagging Multi-tagging — assigning one or more tags to a word — is used here in two ways: first, to retain ambiguity in the CCG lexical category sequence for the purpose of building parse structure; and second, to retain ambiguity in the POS tag sequence. 
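Before turning to how the ambiguity is retained, a minimal sketch of the local model in Eqs. (1)–(3) may help: binary contextual-predicate features are paired with a candidate tag, their weights are summed, and the result is exponentiated and normalised over the tag set; a tag sequence then scores as the product of the per-word conditionals. The feature templates and weights below are invented for illustration and are much simpler than the Ratnaparkhi-style predicates actually used.

import math

def active_features(context, y):
    """Binary features: each contextual predicate paired with the candidate tag."""
    return [f"word={context['word']}&tag={y}",
            f"prevtag={context['prevtag']}&tag={y}"]

def p_tag(context, tagset, weights):
    """Eq. (1): P(y|x) = exp(sum_i lambda_i f_i(x, y)) / Z(x)."""
    scores = {y: sum(weights.get(f, 0.0) for f in active_features(context, y))
              for y in tagset}
    z = sum(math.exp(s) for s in scores.values())            # Z(x)
    return {y: math.exp(s) / z for y, s in scores.items()}

tagset = ["DT", "NN", "VB"]
weights = {"word=the&tag=DT": 2.5, "word=the&tag=NN": -0.5,
           "word=dog&tag=NN": 1.8, "prevtag=DT&tag=NN": 0.9}

# Per-word distribution for one context (Eq. (1)) ...
print({y: round(p, 3) for y, p in
       p_tag({"word": "the", "prevtag": "<S>"}, tagset, weights).items()})

# ... and the probability of a whole tag sequence as the product of the
# per-word conditionals (Eq. (3)); Viterbi decoding searches for the best one.
contexts = [{"word": "the", "prevtag": "<S>"}, {"word": "dog", "prevtag": "DT"}]
seq = ["DT", "NN"]
seq_prob = 1.0
for ctx, y in zip(contexts, seq):
    seq_prob *= p_tag(ctx, tagset, weights)[y]
print(round(seq_prob, 3))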
We retain ambiguity in the lexical category sequence since a single-tagger is not accurate enough to serve as a front-end to a CCG parser, and we retain some POS ambiguity since POS tags are used as features in the statistical models of the supertagger and parser. Charniak et al. (1996) investigated multi-POS tagging in the context of PCFG parsing. It was found that multi-tagging provides only a minor improvement in accuracy, with a significant loss in efficiency; hence it was concluded that, given the particular parser and tagger used, a single-tag POS tagger is preferable to a multi-tagger. More recently, Watson (2006) has revisited this question in the context of the RASP parser (Briscoe and Carroll, 2002) and found that, similar to Charniak et al. (1996), multi-tagging at the POS level results in a small increase in parsing accuracy but at some cost in efficiency. For lexicalized grammars, such as CCG and TAG, the motivation for using a multi-tagger to assign the elementary structures (supertags) is more compelling. Since the set of supertags is typically much larger than a standard POS tag set, the tagging problem becomes much harder. In 698 fact, when using a state-of-the-art single-tagger, the per-word accuracy for CCG supertagging is so low (around 92%) that wide coverage, high accuracy parsing becomes infeasible (Clark, 2002; Clark and Curran, 2004a). Similar results have been found for a highly lexicalized HPSG grammar (Prins and van Noord, 2003), and also for TAG. As far as we are aware, the only approach to successfully integrate a TAG supertagger and parser is the Lightweight Dependency Analyser of Bangalore (2000). Hence, in order to perform effective full parsing with these lexicalized grammars, the tagger front-end must be a multi-tagger (given the current state-of-the-art). The simplest approach to CCG supertagging is to assign all categories to a word which the word was seen with in the data. This leaves the parser the task of managing the very large parse space resulting from the high degree of lexical category ambiguity (Hockenmaier and Steedman, 2002; Hockenmaier, 2003). However, one of the original motivations for supertagging was to significantly reduce the syntactic ambiguity before full parsing begins (Bangalore and Joshi, 1999). Clark and Curran (2004a) found that performing CCG supertagging prior to parsing can significantly increase parsing efficiency with no loss in accuracy. Our multi-tagging approach follows that of Clark and Curran (2004a) and Charniak et al. (1996): assign all categories to a word whose probabilities are within a factor, β, of the probability of the most probable category for that word: Ci = {c | P(Ci = c|S) > β P(Ci = cmax|S)} Ci is the set of categories assigned to the ith word; Ci is the random variable corresponding to the category of the ith word; cmax is the category with the highest probability of being the category of the ith word; and S is the sentence. One advantage of this adaptive approach is that, when the probability of the highest scoring category is much greater than the rest, no extra categories will be added. Clark and Curran (2004a) propose a simple method for calculating P(Ci = c|S): use the word and POS features in the local context to calculate the probability and ignore the previously assigned categories (the history). However, it is possible to incorporate the history in the calculation of the tag probabilities. 
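Once the per-word distribution P(Ci = c|S) is available, the selection rule above is a few lines of code; the distribution over CCGbank-style categories in this sketch is invented for illustration. Because the threshold is relative to the most probable category, words the model is confident about keep a single category, while uncertain words keep several.

def multi_tag(dist, beta):
    """C_i = {c : P(C_i = c | S) > beta * P(C_i = c_max | S)}."""
    p_max = max(dist.values())
    return {c for c, p in dist.items() if p > beta * p_max}

# Toy distribution for one word; the categories follow CCGbank notation.
dist = {"N": 0.55, "(S[dcl]\\NP)/NP": 0.30, "NP/N": 0.10, "S/S": 0.05}
print(multi_tag(dist, beta=0.5))      # only the two most probable categories
print(multi_tag(dist, beta=0.0337))   # a low beta keeps all four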
A greedy approach is to use the locally highest probability history as a feature, which avoids any summing over alternative histories. Alternatively, there is a well-known dynamic programming algorithm — the forward backward algorithm — which efficiently calculates P(Ci = c|S) (Charniak et al., 1996). The multitagger uses the following conditional probabilities: P(yi|w1,n) = X y1,i−1,yi+1,n P(yi, y1,i−1, yi+1,n|w1,n) where xi,j = xi, . . . xj. Here yi is to be thought of as a fixed category, whereas yj (j ̸= i) varies over the possible categories for word j. In words, the probability of category yi, given the sentence, is the sum of the probabilities of all sequences containing yi. This sum is calculated efficiently using the forward-backward algorithm: P(Ci = c|S) = αi(c)βi(c) (4) where αi(c) is the total probability of all the category sub-sequences that end at position i with category c; and βi(c) is the total probability of all the category sub-sequences through to the end which start at position i with category c. The standard description of the forwardbackward algorithm, for example Manning and Schutze (1999), is usually given for an HMM-style tagger. However, it is straightforward to adapt the algorithm to the Maximum Entropy models used here. The forward-backward algorithm we use is similar to that for a Maximum Entropy Markov Model (Lafferty et al., 2001). POS tags are very informative features for the supertagger, which suggests that using a multiPOS tagger may benefit the supertagger (and ultimately the parser). However, it is unclear whether multi-POS tagging will be useful in this context, since our single-tagger POS tagger is highly accurate: over 97% for WSJ text (Curran and Clark, 2003). In fact, in Clark and Curran (2004b) we report that using automatically assigned, as opposed to gold-standard, POS tags as features results in a 2% loss in parsing accuracy. This suggests that retaining some ambiguity in the POS sequence may be beneficial for supertagging and parsing accuracy. In Section 4 we show this is the case for supertagging. 3 CCG Supertagging and Parsing Parsing using CCG can be viewed as a two-stage process: first assign lexical categories to the words in the sentence, and then combine the categories 699 The WSJ is a paper that I enjoy reading NP/N N (S[dcl]\NP)/NP NP/N N (NP\NP)/(S[dcl]/NP) NP (S[dcl]\NP)/(S[ng]\NP) (S[ng]\NP)/NP Figure 1: Example sentence with CCG lexical categories. together using CCG’s combinatory rules.1 We perform stage one using a supertagger. The set of lexical categories used by the supertagger is obtained from CCGbank (Hockenmaier, 2003), a corpus of CCG normal-form derivations derived semi-automatically from the Penn Treebank. Following our earlier work, we apply a frequency cutoff to the training set, only using those categories which appear at least 10 times in sections 02-21, which results in a set of 425 categories. We have shown that the resulting set has very high coverage on unseen data (Clark and Curran, 2004a). Figure 1 gives an example sentence with the CCG lexical categories. The parser is described in Clark and Curran (2004b). It takes POS tagged sentences as input with each word assigned a set of lexical categories. A packed chart is used to efficiently represent all the possible analyses for a sentence, and the CKY chart parsing algorithm described in Steedman (2000) is used to build the chart. A log-linear model is used to score the alternative analyses. 
In Clark and Curran (2004a) we described a novel approach to integrating the supertagger and parser: start with a very restrictive supertagger setting, so that only a small number of lexical categories is assigned to each word, and only assign more categories if the parser cannot find a spanning analysis. This strategy results in an efficient and accurate parser, with speeds up to 35 sentences per second. Accurate supertagging at low levels of lexical category ambiguity is therefore particularly important when using this strategy. We found in Clark and Curran (2004b) that a large drop in parsing accuracy occurs if automatically assigned POS tags are used throughout the parsing process, rather than gold standard POS tags (almost 2% F-score over labelled dependencies). This is due to the drop in accuracy of the supertagger (see Table 3) and also the fact that the log-linear parsing model uses POS tags as features. The large drop in parsing accuracy demonstrates that improving the performance of POS tag1See Steedman (2000) for an introduction to CCG, and see Hockenmaier (2003) for an introduction to wide-coverage parsing using CCG. TAGS/WORD β WORD ACC SENT ACC 1.00 1 96.7 51.8 1.01 0.8125 97.1 55.4 1.05 0.2969 98.3 70.7 1.10 0.1172 99.0 80.9 1.20 0.0293 99.5 89.3 1.30 0.0111 99.6 91.7 1.40 0.0053 99.7 93.2 4.23 0 99.8 94.8 Table 1: POS tagging accuracy on Section 00 for different levels of ambiguity. gers is still an important research problem. In this paper we aim to reduce the performance drop of the supertagger by maintaing some POS ambiguity through to the supertagging phase. Future work will investigate maintaining some POS ambiguity through to the parsing phase also. 4 Multi-tagging Experiments We performed several sets of experiments for POS tagging and CCG supertagging to explore the trade-off between ambiguity and tagging accuracy. For both POS tagging and supertagging we varied the average number of tags assigned to each word, to see whether it is possible to significantly increase tagging accuracy with only a small increase in ambiguity. For CCG supertagging, we also compared multi-tagging approaches, with a fixed category ambiguity of 1.4 categories per word. All of the experiments used Section 02-21 of CCGbank as training data, Section 00 as development data and Section 23 as final test data. We evaluate both per-word tag accuracy and sentence accuracy, which is the percentage of sentences for which every word is tagged correctly. For the multi-tagging results we consider the word to be tagged correctly if the correct tag appears in the set of tags assigned to the word. 4.1 Results Table 1 shows the results for multi-POS tagging for different levels of ambiguity. The row corresponding to 1.01 tags per word shows that adding 700 METHOD GOLD POS AUTO POS WORD SENT WORD SENT single 92.6 36.8 91.5 32.7 noseq 96.2 51.9 95.2 46.1 best hist 97.2 63.8 96.3 57.2 fwdbwd 97.9 72.1 96.9 64.8 Table 2: Supertagging accuracy on Section 00 using different approaches with multi-tagger ambiguity fixed at 1.4 categories per word. TAGS/ GOLD POS AUTO POS WORD β WORD SENT WORD SENT 1.0 1 92.6 36.8 91.5 32.7 1.2 0.1201 96.8 63.4 95.8 56.5 1.4 0.0337 97.9 72.1 96.9 64.8 1.6 0.0142 98.3 76.4 97.5 69.3 1.8 0.0074 98.4 78.3 97.7 71.0 2.0 0.0048 98.5 79.4 97.9 72.5 2.5 0.0019 98.7 80.6 98.1 74.3 3.0 0.0009 98.7 81.4 98.3 75.6 12.5 0 98.9 82.3 98.8 80.1 Table 3: Supertagging accuracy on Section 00 for different levels of ambiguity. 
even a tiny amount of ambiguity (1 extra tag in every 100 words) gives a reasonable improvement, whilst adding 1 tag in 20 words, or approximately one extra tag per sentence on the WSJ, gives a significant boost of 1.6% word accuracy and almost 20% sentence accuracy. The bottom row of Table 1 gives an upper bound on accuracy if the maximum ambiguity is allowed. This involves setting the β value to 0, so all feasible tags are assigned. Note that the performance gain is only 1.6% in sentence accuracy, compared with the previous row, at the cost of a large increase in ambiguity. Our first set of CCG supertagging experiments compared the performance of several approaches. In Table 2 we give the accuracies when using gold standard POS tags, and also POS tags automatically assigned by our POS tagger described above. Since POS tags are important features for the supertagger maximum entropy model, erroneous tags have a significant impact on supertagging accuracy. The single method is the single-tagger supertagger, which at 91.5% per-word accuracy is too inaccurate for use with the CCG parser. The remaining rows in the table give multi-tagger results for a category ambiguity of 1.4 categories per word. The noseq method, which performs significantly better than single, does not take into account the previously assigned categories. The best hist method gains roughly another 1% in accuracy over noseq by taking the greedy approach of using only the two most probable previously assigned categories. Finally, the full forward-backward approach described in Section 2.1 gains roughly another 0.6% by considering all possible category histories. We see the largest jump in accuracy just by returning multiple categories. The other more modest gains come from producing progressively better models of the category sequence. The final set of supertagging experiments in Table 3 demonstrates the trade-off between ambiguity and accuracy. Note that the ambiguity levels need to be much higher to produce similar performance to the POS tagger and that the upper bound case (β = 0) has a very high average ambiguity. This is to be expected given the much larger CCG tag set. 5 Tag uncertainty thoughout the pipeline Tables 2 and 3 show that supertagger accuracy when using gold-standard POS tags is typically 1% higher than when using automatically assigned POS tags. Clearly, correct POS tags are important features for the supertagger. Errors made by the supertagger can multiply out when incorrect lexical categories are passed to the parser, so a 1% increase in lexical category error can become much more significant in the parser evaluation. For example, when using the dependency-based evaluation in Clark and Curran (2004b), getting the lexical category wrong for a ditransitive verb automatically leads to three dependencies in the output being incorrect. We have shown that multi-tagging can significantly increase the accuracy of the POS tagger with only a small increase in ambiguity. What we would like to do is maintain some degree of POS tag ambiguity and pass multiple POS tags through to the supertagging stage (and eventually the parser). There are several ways to encode multiple POS tags as features. The simplest approach is to treat all of the POS tags as binary features, but this does not take into account the uncertainty in each of the alternative tags. What we need is a way of incorporating probability information into the Maximum Entropy supertagger. 
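The accuracy measures reported in the preceding tables are straightforward to compute once the per-word tag sets are available: a word counts as correct when its gold tag appears in the assigned set, a sentence is correct when all of its words are, and ambiguity is the average set size. The sketch below uses invented toy data purely for illustration.

def multi_tag_accuracy(gold_sents, pred_sents):
    """gold_sents: one gold tag per word; pred_sents: one set of assigned tags per word."""
    words = correct_words = correct_sents = total_tags = 0
    for gold, pred in zip(gold_sents, pred_sents):
        hits = [g in p for g, p in zip(gold, pred)]
        words += len(hits)
        correct_words += sum(hits)
        correct_sents += all(hits)
        total_tags += sum(len(p) for p in pred)
    return (100.0 * correct_words / words,            # word accuracy
            100.0 * correct_sents / len(gold_sents),  # sentence accuracy
            total_tags / words)                       # average tags per word

gold = [["NP/N", "N"], ["NP", "(S[dcl]\\NP)/NP", "NP"]]
pred = [[{"NP/N"}, {"N", "NP"}],
        [{"NP"}, {"(S[dcl]\\NP)/NP", "(S[dcl]\\NP)/PP"}, {"N"}]]
print(multi_tag_accuracy(gold, pred))   # (80.0, 50.0, 1.4)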
701 6 Real-values in ME models Maximum Entropy (ME) models, in the NLP literature, are typically defined with binary features, although they do allow real-valued features. The only constraint comes from the optimisation algorithm; for example, GIS only allows non-negative values. Real-valued features are commonly used with other machine learning algorithms. Binary features suffer from certain limitations of the representation, which make them unsuitable for modelling some properties. For example, POS taggers have difficulty determining if capitalised, sentence initial words are proper nouns. A useful way to model this property is to determine the ratio of capitalised and non-capitalised instances of a particular word in a large corpus and use a realvalued feature which encodes this ratio (Vadas and Curran, 2005). The only way to include this feature in a binary representation is to discretize (or bin) the feature values. For this type of feature, choosing appropriate bins is difficult and it may be hard to find a discretization scheme that performs optimally. Another problem with discretizing feature values is that it imposes artificial boundaries to define the bins. For the example above, we may choose the bins 0 ≤x < 1 and 1 ≤x < 2, which separate the values 0.99 and 1.01 even though they are close in value. At the same time, the model does not distinguish between 0.01 and 0.99 even though they are much further apart. Further, if we have not seen cases for the bin 2 ≤x < 3, then the discretized model has no evidence to determine the contribution of this feature. But for the real-valued model, evidence supporting 1 ≤x < 2 and 3 ≤x < 4 provides evidence for the missing bin. Thus the real-valued model generalises more effectively. One issue that is not addressed here is the interaction between the Gaussian smoothing parameter and real-valued features. Using the same smoothing parameter for real-valued features with vastly different distributions is unlikely to be optimal. However, for these experiments we have used the same value for the smoothing parameter on all real-valued features. This is the same value we have used for the binary features. 7 Multi-POS Supertagging Experiments We have experimented with four different approaches to passing multiple POS tags as features through to the supertagger. For the later experiments, this required the existing binary-valued framework to be extended to support real values. The level of POS tag ambiguity was varied between 1.05 and 1.3 POS tags per word on average. These results are shown in Table 4. The first approach is to treat the multiple POS tags as binary features (bin). This simply involves adding the multiple POS tags for each word in both the training and test data. Every assigned POS tag is treated as a separate feature and considered equally important regardless of its uncertainty. Here we see a minor increase in performance over the original supertagger at the lower levels of POS ambiguity. However, as the POS ambiguity is increased, the performance of the binary-valued features decreases and is eventually worse than the original supertagger. This is because at the lowest levels of ambiguity the extra POS tags can be treated as being of similar reliability. However, at higher levels of ambiguity many POS tags are added which are unreliable and should not be trusted equally. 
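The four encodings compared in this section can be summarised for a single multi-tagged word as in the sketch below; the feature names and the toy POS distribution are illustrative, and in the real supertagger the same treatment applies to every POS tag in the context window.

def pos_features(pos_probs, scheme):
    """pos_probs: {POS tag: probability} for one multi-tagged word;
    returns {feature name: value} under the chosen encoding."""
    ranked = sorted(pos_probs.items(), key=lambda kv: -kv[1])
    feats = {}
    for rank, (tag, prob) in enumerate(ranked, start=1):
        name = f"pos={tag}"
        if scheme == "bin":
            feats[name] = 1.0                 # every assigned tag counts equally
        elif scheme == "split":
            feats[name] = 1.0 / len(ranked)   # probability mass divided evenly
        elif scheme == "invrank":
            feats[name] = 1.0 / rank          # trust the ordering, not the scores
        elif scheme == "prob":
            feats[name] = prob                # Eq. (5): the tagger's own probability
    return feats

word_pos = {"NN": 0.80, "NNP": 0.15, "VBG": 0.05}   # toy multi-POS tagger output
for scheme in ("bin", "split", "invrank", "prob"):
    print(scheme, pos_features(word_pos, scheme))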
The second approach (split) uses real-valued features to model some degree of uncertainty in the POS tags, dividing the POS tag probability mass evenly among the alternatives. This has the effect of giving smaller feature values to tags where many alternative tags have been assigned. This produces similar results to the binary-valued features, again performing best at low levels of ambiguity. The third approach (invrank) is to use the inverse rank of each POS tag as a real-valued feature. The inverse rank is the reciprocal of the tag’s rank ordered by decreasing probability. This method assumes the POS tagger correctly orders the alternative tags, but does not rely on the probability assigned to each tag. Overall, invrank performs worse than split. The final and best approach is to use the probabilities assigned to each alternative tag as realvalued features: fi(x, y) = ( p(POS(x) = NN) if y = NP 0 otherwise (5) This model gives the best performance at 1.1 POS tags per-word average ambiguity. Note that, even when using the probabilities as features, only a small amount of additional POS ambiguity is required to significantly improve performance. 702 METHOD POS AMB WORD SENT orig 1.00 96.9 64.8 bin 1.05 97.3 67.7 1.10 97.3 66.3 1.20 97.0 63.5 1.30 96.8 62.1 split 1.05 97.4 68.5 1.10 97.4 67.9 1.20 97.3 67.0 1.30 97.2 65.1 prob 1.05 97.5 68.7 1.10 97.5 69.1 1.20 97.5 68.7 1.30 97.5 68.7 invrank 1.05 97.3 68.0 1.10 97.4 68.0 1.20 97.3 67.1 1.30 97.3 67.1 gold 97.9 72.1 Table 4: Multi-POS supertagging on Section 00 with different levels of POS ambiguity and using different approaches to POS feature encoding. Table 5 shows our best performance figures for the multi-POS supertagger, against the previously described method using both gold standard and automatically assigned POS tags. Table 6 uses the Section 23 test data to demonstrate the improvement in supertagging when moving from single-tagging (single) to simple multi-tagging (noseq); from simple multitagging to the full forward-backward algorithm (fwdbwd); and finally when using the probabilities of multiply-assigned POS tags as features (MULTIPOS column). All of these multi-tagging experiments use an ambiguity level of 1.4 categories per word and the last result uses POS tag ambiguity of 1.1 tags per word. 8 Conclusion The NLP community may consider POS tagging to be a solved problem. In this paper, we have suggested two reasons why this is not the case. First, tagging for lexicalized-grammar formalisms, such as CCG and TAG, is far from solved. Second, even modest improvements in POS tagging accuracy can have a large impact on the performance of downstream components in a language processing pipeline. TAGS/ AUTO POS MULTI POS GOLD POS WORD WORD SENT WORD SENT WORD SENT 1.0 91.5 32.7 91.9 34.3 92.6 36.8 1.2 95.8 56.5 96.3 59.2 96.8 63.4 1.4 96.9 64.8 97.5 67.0 97.9 72.1 1.6 97.5 69.3 97.9 73.3 98.3 76.4 1.8 97.7 71.0 98.2 76.1 98.4 78.3 2.0 97.9 72.5 98.4 77.4 98.5 79.4 2.5 98.1 74.3 98.5 78.7 98.7 80.6 3.0 98.3 75.6 98.6 79.7 98.7 81.4 Table 5: Best multi-POS supertagging accuracy on Section 00 using POS ambiguity of 1.1 and the probability real-valued features. METHOD AUTO POS MULTI POS GOLD POS single 92.0 93.3 noseq 95.4 96.6 fwdbwd 97.1 97.7 98.2 Table 6: Final supertagging results on Section 23. We have developed a novel approach to maintaining tag ambiguity in language processing pipelines which avoids premature ambiguity resolution. 
The tag ambiguity is maintained by using the forward-backward algorithm to calculate individual tag probabilities. These probabilities can then be used to select multiple tags and can also be encoded as real-valued features in subsequent statistical models. With this new approach we have increased POS tagging accuracy significantly with only a tiny ambiguity penalty and also significantly improved on previous CCG supertagging results. Finally, using POS tag probabilities as real-valued features in the supertagging model, we demonstrated performance close to that obtained with gold-standard POS tags. This will significantly improve the robustness of the parser on unseen text. In future work we will investigate maintaining tag ambiguity further down the language processing pipeline and exploiting the uncertainty from previous stages. In particular, we will incorporate real-valued POS tag and lexical category features in the statistical parsing model. Another possibility is to investigate whether similar techniques can improve other tagging tasks, such as Named Entity Recognition. This work can be seen as part of the larger goal of maintaining ambiguity and exploiting un703 certainty throughout language processing systems (Roth and Yih, 2004), which is important for coping with the compounding of errors that is a significant problem in language processing pipelines. Acknowledgements We would like to thank the anonymous reviewers for their helpful feedback. This work has been supported by the Australian Research Council under Discovery Project DP0453131. References Srinivas Bangalore and Aravind Joshi. 1999. Supertagging: An approach to almost parsing. Computational Linguistics, 25(2):237–265. Srinivas Bangalore. 2000. A lightweight dependency analyser for partial parsing. Natural Language Engineering, 6(2):113–138. Ted Briscoe and John Carroll. 2002. Robust accurate statistical annotation of general tex. In Proceedings of the 3rd LREC Conference, pages 1499–1504, Las Palmas, Gran Canaria. Eugene Charniak, Glenn Carroll, John Adcock, Anthony Cassandra, Yoshihiko Gotoh, Jeremy Katz, Michael Littman, and John McCann. 1996. Taggers for parsers. Artificial Intelligence, 85:45–57. Stanley Chen and Ronald Rosenfeld. 1999. A Gaussian prior for smoothing maximum entropy models. Technical report, Carnegie Mellon University, Pittsburgh, PA. Stephen Clark and James R. Curran. 2004a. The importance of supertagging for wide-coverage CCG parsing. In Proceedings of COLING-04, pages 282–288, Geneva, Switzerland. Stephen Clark and James R. Curran. 2004b. Parsing the WSJ using CCG and log-linear models. In Proceedings of the 42nd Meeting of the ACL, pages 104–111, Barcelona, Spain. Stephen Clark. 2002. A supertagger for Combinatory Categorial Grammar. In Proceedings of the TAG+ Workshop, pages 19–24, Venice, Italy. Michael Collins. 2002. Discriminative training methods for Hidden Markov Models: Theory and experiments with perceptron algorithms. In Proceedings of the EMNLP Conference, pages 1–8, Philadelphia, PA. James R. Curran and Stephen Clark. 2003. Investigating GIS and smoothing for maximum entropy taggers. In Proceedings of the 10th Meeting of the EACL, pages 91–98, Budapest, Hungary. Stephen Della Pietra, Vincent Della Pietra, and John Lafferty. 1997. Inducing features of random fields. IEEE Transactions Pattern Analysis and Machine Intelligence, 19(4):380–393. Julia Hockenmaier and Mark Steedman. 2002. Generative models for statistical parsing with Combinatory Categorial Grammar. 
In Proceedings of the 40th Meeting of the ACL, pages 335–342, Philadelphia, PA. Julia Hockenmaier. 2003. Data and Models for Statistical Parsing with Combinatory Categorial Grammar. Ph.D. thesis, University of Edinburgh. John Lafferty, Andrew McCallum, and Fernando Pereira. 2001. Conditional random fields: Probabilistic models for segmenting and labeling sequence data. In Proceedings of the 18th International Conference on Machine Learning, pages 282–289, Williams College, MA. Robert Malouf. 2002. A comparison of algorithms for maximum entropy parameter estimation. In Proceedings of the Sixth Workshop on Natural Language Learning, pages 49–55, Taipei, Taiwan. Christopher Manning and Hinrich Schutze. 1999. Foundations of Statistical Natural Language Processing. The MIT Press, Cambridge, Massachusetts. Jorge Nocedal and Stephen J. Wright. 1999. Numerical Optimization. Springer, New York, USA. Robbert Prins and Gertjan van Noord. 2003. Reinforcing parser preferences through tagging. Traitement Automatique des Langues, 44(3):121–139. Adwait Ratnaparkhi. 1996. A maximum entropy part-ofspeech tagger. In Proceedings of the EMNLP Conference, pages 133–142, Philadelphia, PA. D. Roth and W. Yih. 2004. A linear programming formulation for global inference in natural language tasks. In Hwee Tou Ng and Ellen Riloff, editors, Proc. of the Annual Conference on Computational Natural Language Learning (CoNLL), pages 1–8. Association for Computational Linguistics. Mark Steedman. 2000. The Syntactic Process. The MIT Press, Cambridge, MA. Kristina Toutanova, Dan Klein, Christopher Manning, and Yoram Singer. 2003. Feature-rich part-of-speech tagging with a cyclic dependency network. In Proceedings of the HLT/NAACL conference, pages 252–259, Edmonton, Canada. David Vadas and James R. Curran. 2005. Tagging unknown words with raw text features. In Proceedings of the Australasian Language Technology Workshop 2005, pages 32–39, Sydney, Australia. Rebecca Watson. 2006. Part-of-speech tagging models for parsing. In Proceedings of the Computaional Linguistics in the UK Conference (CLUK-06), Open University, Milton Keynes, UK. 704
Proceedings of the 21st International Conference on Computational Linguistics and 44th Annual Meeting of the ACL, pages 705–712, Sydney, July 2006. c⃝2006 Association for Computational Linguistics Guessing Parts-of-Speech of Unknown Words Using Global Information Tetsuji Nakagawa Corporate R&D Center Oki Electric Industry Co., Ltd. 2−5−7 Honmachi, Chuo-ku Osaka 541−0053, Japan [email protected] Yuji Matsumoto Graduate School of Information Science Nara Institute of Science and Technology 8916−5 Takayama, Ikoma Nara 630−0101, Japan [email protected] Abstract In this paper, we present a method for guessing POS tags of unknown words using local and global information. Although many existing methods use only local information (i.e. limited window size or intra-sentential features), global information (extra-sentential features) provides valuable clues for predicting POS tags of unknown words. We propose a probabilistic model for POS guessing of unknown words using global information as well as local information, and estimate its parameters using Gibbs sampling. We also attempt to apply the model to semisupervised learning, and conduct experiments on multiple corpora. 1 Introduction Part-of-speech (POS) tagging is a fundamental language analysis task. In POS tagging, we frequently encounter words that do not exist in training data. Such words are called unknown words. They are usually handled by an exceptional process in POS tagging, because the tagging system does not have information about the words. Guessing the POS tags of such unknown words is a difficult task. But it is an important issue both for conducting POS tagging accurately and for creating word dictionaries automatically or semiautomatically. There have been many studies on POS guessing of unknown words (Mori and Nagao, 1996; Mikheev, 1997; Chen et al., 1997; Nagata, 1999; Orphanos and Christodoulakis, 1999). In most of these previous works, POS tags of unknown words were predicted using only local information, such as lexical forms and POS tags of surrounding words or word-internal features (e.g. suffixes and character types) of the unknown words. However, this approach has limitations in available information. For example, common nouns and proper nouns are sometimes difficult to distinguish with only the information of a single occurrence because their syntactic functions are almost identical. In English, proper nouns are capitalized and there is generally little ambiguity between common nouns and proper nouns. In Chinese and Japanese, no such convention exists and the problem of the ambiguity is serious. However, if an unknown word with the same lexical form appears in another part with informative local features (e.g. titles of persons), this will give useful clues for guessing the part-of-speech of the ambiguous one, because unknown words with the same lexical form usually have the same part-of-speech. For another example, there is a part-of-speech named sahen-noun (verbal noun) in Japanese. Verbal nouns behave as common nouns, except that they are used as verbs when they are followed by a verb “suru”; e.g., a verbal noun “dokusho” means “reading” and “dokusho-suru” is a verb meaning to “read books”. It is difficult to distinguish a verbal noun from a common noun if it is used as a noun. However, it will be easy if we know that the word is followed by “suru” in another part in the document. This issue was mentioned by Asahara (2003) as a problem of possibility-based POS tags. 
A possibility-based POS tag is a POS tag that represents all the possible properties of the word (e.g., a verbal noun is used as a noun or a verb), rather than a property of each instance of the word. For example, a sahennoun is actually a noun that can be used as a verb when it is followed by “suru”. This property cannot be confirmed without observing real usage of the word appearing with “suru”. Such POS tags may not be identified with only local information of one instance, because the property that each instance has is only one among all the possible properties. To cope with these issues, we propose a method that uses global information as well as local information for guessing the parts-of-speech of unknown words. With this method, all the occurrences of the unknown words in a document1 are taken into consideration at once, rather than that each occurrence of the words is processed separately. Thus, the method models the whole document and finds a set of parts-of-speech by maximizing its conditional joint probability given the document, rather than independently maximizing the probability of each part-of-speech given each sentence. Global information is known to be useful in other NLP tasks, especially in the named entity recognition task, and several studies successfully used global features (Chieu and Ng, 2002; Finkel et al., 2005). One potential advantage of our method is its 1In this paper, we use the word document to denote the whole data consisting of multiple sentences (training corpus or test corpus). 705 ability to incorporate unlabeled data. Global features can be increased by simply adding unlabeled data into the test data. Models in which the whole document is taken into consideration need a lot of computation compared to models with only local features. They also cannot process input data one-by-one. Instead, the entire document has to be read before processing. We adopt Gibbs sampling in order to compute the models efficiently, and these models are suitable for offline use such as creating dictionaries from raw text where real-time processing is not necessary but high-accuracy is needed to reduce human labor required for revising automatically analyzed data. The rest of this paper is organized as follows: Section 2 describes a method for POS guessing of unknown words which utilizes global information. Section 3 shows experimental results on multiple corpora. Section 4 discusses related work, and Section 5 gives conclusions. 2 POS Guessing of Unknown Words with Global Information We handle POS guessing of unknown words as a sub-task of POS tagging, in this paper. We assume that POS tags of known words are already determined beforehand, and positions in the document where unknown words appear are also identified. Thus, we focus only on prediction of the POS tags of unknown words. In the rest of this section, we first present a model for POS guessing of unknown words with global information. Next, we show how the test data is analyzed and how the parameters of the model are estimated. A method for incorporating unlabeled data with the model is also discussed. 2.1 Probabilistic Model Using Global Information We attempt to model the probability distribution of the parts-of-speech of all occurrences of the unknown words in a document which have the same lexical form. We suppose that such partsof-speech have correlation, and the part-of-speech of each occurrence is also affected by its local context. Similar situations to this are handled in physics. 
For example, let us consider a case where a number of electrons with spins exist in a system. The spins interact with each other, and each spin is also affected by the external magnetic field. In the physical model, if the state of the system is s and the energy of the system is E(s), the probability distribution of s is known to be represented by the following Boltzmann distribution: P(s)= 1 Z exp{−βE(s)}, (1) where β is inverse temperature and Z is a normalizing constant defined as follows: Z= X s exp{−βE(s)}. (2) Takamura et al. (2005) applied this model to an NLP task, semantic orientation extraction, and we apply it to POS guessing of unknown words here. Suppose that unknown words with the same lexical form appear K times in a document. Assume that the number of possible POS tags for unknown words is N, and they are represented by integers from 1 to N. Let tk denote the POS tag of the kth occurrence of the unknown words, let wk denote the local context (e.g. the lexical forms and the POS tags of the surrounding words) of the kth occurrence of the unknown words, and let w and t denote the sets of wk and tk respectively: w={w1, · · · , wK}, t={t1, · · · , tK}, tk ∈{1, · · · , N}. λi,j is a weight which denotes strength of the interaction between parts-of-speech i and j, and is symmetric (λi,j = λj,i). We define the energy where POS tags of unknown words given w are t as follows: E(t|w)=− ( 1 2 K X k=1 K X k′=1 k′̸=k λtk,tk′ + K X k=1 log p0(tk|wk) ) , (3) where p0(t|w) is an initial distribution (local model) of the part-of-speech t which is calculated with only the local context w, using arbitrary statistical models such as maximum entropy models. The right hand side of the above equation consists of two components; one represents global interactions between each pair of parts-of-speech, and the other represents the effects of local information. In this study, we fix the inverse temperature β = 1. The distribution of t is then obtained from Equation (1), (2) and (3) as follows: P(t|w)= 1 Z(w)p0(t|w) exp ( 1 2 K X k=1 K X k′=1 k′̸=k λtk,tk′ ) , (4) Z(w)= X t∈T (w) p0(t|w) exp ( 1 2 K X k=1 K X k′=1 k′̸=k λtk,tk′ ) , (5) p0(t|w)≡ K Y k=1 p0(tk|wk), (6) where T (w) is the set of possible configurations of POS tags given w. The size of T (w) is NK, because there are K occurrences of the unknown words and each unknown word can have one of N POS tags. The above equations can be rewritten as follows by defining a function fi,j(t): fi,j(t)≡1 2 K X k=1 K X k′=1 k′̸=k δ(tk, i)δ(tk′, j), (7) P(t|w)= 1 Z(w)p0(t|w) exp ( N X i=1 N X j=1 λi,jfi,j(t) ) , (8) Z(w)= X t∈T (w) p0(t|w) exp ( N X i=1 N X j=1 λi,jfi,j(t) ) , (9) 706 where δ(i, j) is the Kronecker delta: δ(i, j)= n 1 (i = j), 0 (i ̸= j). (10) fi,j(t) represents the number of occurrences of the POS tag pair i and j in the whole document (divided by 2), and the model in Equation (8) is essentially a maximum entropy model with the document level features. As shown above, we consider the conditional joint probability of all the occurrences of the unknown words with the same lexical form in the document given their local contexts, P(t|w), in contrast to conventional approaches which assume independence of the sentences in the document and use the probabilities of all the words only in a sentence. Note that we assume independence between the unknown words with different lexical forms, and each set of the unknown words with the same lexical form is processed separately from the sets of other unknown words. 
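As a concrete illustration of Equations (3) to (8), the following Python sketch computes the unnormalized log-probability of one tag assignment for the K occurrences of an unknown word: the sum of the log initial probabilities log p0(tk|wk) plus the interaction weights over all unordered pairs of occurrences (the halved double sum in Equation (3)). The function and variable names are ours, and the initial distribution is assumed to be supplied as per-occurrence tag probabilities by whatever local model is used.

import math
from itertools import combinations

def unnormalized_log_prob(tags, local_log_probs, lam):
    # Local contribution: sum_k log p0(t_k | w_k).
    score = sum(local_log_probs[k][tags[k]] for k in range(len(tags)))
    # Global contribution: lambda_{t_k, t_k'} for every unordered pair of
    # occurrences; since lambda is symmetric this equals the halved double
    # sum of Equation (3).
    for k, k2 in combinations(range(len(tags)), 2):
        score += lam[(tags[k], tags[k2])]
    return score

# Toy usage: three occurrences of one unknown word, two candidate tags (0, 1).
lam = {(0, 0): 0.5, (0, 1): -0.5, (1, 0): -0.5, (1, 1): 0.5}
local_log_probs = [{0: math.log(0.7), 1: math.log(0.3)},
                   {0: math.log(0.6), 1: math.log(0.4)},
                   {0: math.log(0.2), 1: math.log(0.8)}]
print(unnormalized_log_prob([0, 0, 1], local_log_probs, lam))

Normalizing this quantity over all N^K assignments gives Equation (8), which is exactly the summation that the sampling methods described next avoid computing directly.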
2.2 Decoding Let us consider how to find the optimal POS tags t basing on the model, given K local contexts of the unknown words with the same lexical form (test data) w, an initial distribution p0(t|w) and a set of model parameters Λ = {λ1,1, · · · , λN,N}. One way to do this is to find a set of POS tags which maximizes P(t|w) among all possible candidates of t. However, the number of all possible candidates of the POS tags is NK and the calculation is generally intractable. Although HMMs, MEMMs, and CRFs use dynamic programming and some studies with probabilistic models which have specific structures use efficient algorithms (Wang et al., 2005), such methods cannot be applied here because we are considering interactions (dependencies) between all POS tags, and their joint distribution cannot be decomposed. Therefore, we use a sampling technique and approximate the solution using samples obtained from the probability distribution. We can obtain a solution ˆt = {ˆt1, · · · , ˆtK} as follows: ˆtk=argmax t Pk(t|w), (11) where Pk(t|w) is the marginal distribution of the part-of-speech of the kth occurrence of the unknown words given a set of local contexts w, and is calculated as an expected value over the distribution of the unknown words as follows: Pk(t|w)= X t1,···,tk−1,tk+1,···,tK tk=t P(t|w), = X t∈T (w) δ(tk, t)P(t|w). (12) Expected values can be approximately calculated using enough number of samples generated from the distribution (MacKay, 2003). Suppose that A(x) is a function of a random variable x, P(x) initialize t(1) for m := 2 to M for k := 1 to K t(m) k ∼P(tk|w, t(m) 1 , · · · , t(m) k−1, t(m−1) k+1 , · · · , t(m−1) K ) Figure 1: Gibbs Sampling is a distribution of x, and {x(1), · · · , x(M)} are M samples generated from P(x). Then, the expectation of A(x) over P(x) is approximated by the samples: X x A(x)P(x)≃1 M M X m=1 A(x(m)). (13) Thus, if we have M samples {t(1), · · · , t(M)} generated from the conditional joint distribution P(t|w), the marginal distribution of each POS tag is approximated as follows: Pk(t|w)≃1 M M X m=1 δ(t(m) k , t). (14) Next, we describe how to generate samples from the distribution. We use Gibbs sampling for this purpose. Gibbs sampling is one of the Markov chain Monte Carlo (MCMC) methods, which can generate samples efficiently from highdimensional probability distributions (Andrieu et al., 2003). The algorithm is shown in Figure 1. The algorithm firstly set the initial state t(1), then one new random variable is sampled at a time from the conditional distribution in which all other variables are fixed, and new samples are created by repeating the process. Gibbs sampling is easy to implement and is guaranteed to converge to the true distribution. The conditional distribution P(tk|w, t1, · · · , tk−1, tk+1, · · · , tK) in Figure 1 can be calculated simply as follows: P(tk|w, t1, · · · , tk−1, tk+1, · · · , tK) = P(t|w) P(t1, · · · , tk−1, tk+1, · · · , tK|w), = 1 Z(w)p0(t|w) exp{ 1 2 PK k′=1 PK k′′=1 k′′̸=k′ λtk′ ,tk′′ } PN t∗ k=1 P(t1, · · · , tk−1, t∗ k, tk+1, · · · , tK|w) , = p0(tk|wk) exp{PK k′=1 k′̸=k λtk′ ,tk} PN t∗ k=1 p0(t∗ k|wk) exp{PK k′=1 k′̸=k λtk′ ,t∗ k} , (15) where the last equation is obtained using the following relation: 1 2 K X k′=1 K X k′′=1 k′′̸=k′ λtk′ ,tk′′ =1 2 K X k′=1 k′̸=k K X k′′=1 k′′̸=k,k′′̸=k′ λtk′ ,tk′′ + K X k′=1 k′̸=k λtk′ ,tk. In later experiments, the number of samples M is set to 100, and the initial state t(1) is set to the POS tags which maximize p0(t|w). 
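The sampler of Figure 1, combined with the marginal estimate of Equation (14), can be sketched in a few lines of Python. This is only an illustration under our own naming conventions: p0 is assumed to hold the per-occurrence tag probabilities from the local model, and lam the symmetric interaction weights.

import math
import random
from collections import Counter

def gibbs_decode(p0, lam, num_tags, M=100):
    K = len(p0)
    # Initial state: the tags that maximize the local model, as in the paper.
    t = [max(range(num_tags), key=lambda tag: p0[k][tag]) for k in range(K)]
    counts = [Counter() for _ in range(K)]
    for _ in range(M):
        for k in range(K):
            # Equation (15): conditional of t_k given all the other tags.
            weights = [p0[k][tag] * math.exp(sum(lam[(t[k2], tag)]
                                                 for k2 in range(K) if k2 != k))
                       for tag in range(num_tags)]
            t[k] = random.choices(range(num_tags), weights=weights)[0]
        for k in range(K):
            counts[k][t[k]] += 1  # record one sample per sweep, as in Figure 1
    # Equations (11) and (14): pick the tag with the highest estimated marginal.
    return [counts[k].most_common(1)[0][0] for k in range(K)]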
The optimal solution obtained by Equation (11) maximizes the probability of each POS tag given w, and this kind of approach is known as the maximum posterior marginal (MPM) estimate (Marroquin, 1985). Finkel et al. (2005) used simulated annealing with Gibbs sampling to find a solution in a similar situation. Unlike simulated annealing, this approach does not need to define a cooling 707 schedule. Furthermore, this approach can obtain not only the best solution but also the second best or the other solutions according to Pk(t|w), which are useful when this method is applied to semiautomatic construction of dictionaries because human annotators can check the ranked lists of candidates. 2.3 Parameter Estimation Let us consider how to estimate the parameter Λ = {λ1,1, · · · , λN,N} in Equation (8) from training data consisting of L examples; {⟨w1, t1⟩, · · · , ⟨wL, tL⟩} (i.e., the training data contains L different lexical forms of unknown words). We define the following objective function LΛ, and find Λ which maximizes LΛ (the subscript Λ denotes being parameterized by Λ): LΛ = log L Y l=1 PΛ(tl|wl) + log P(Λ), = log L Y l=1 1 ZΛ(wl)p0(tl|wl) exp ( N X i=1 N X j=1 λi,jfi,j(tl) ) + log P(Λ), = L X l=1 " −log ZΛ(wl)+log p0(tl|wl)+ N X i=1 N X j=1 λi,jfi,j(tl) # + log P(Λ). (16) The partial derivatives of the objective function are: ∂LΛ ∂λi,j = L X l=1 " fi,j(tl)− ∂ ∂λi,j log ZΛ(wl) # + ∂ ∂λi,j log P(Λ), = L X l=1 " fi,j(tl) − X t∈T (wl) fi,j(t)PΛ(t|wl) # + ∂ ∂λi,j log P(Λ). (17) We use Gaussian priors (Chen and Rosenfeld, 1999) for P(Λ): log P(Λ)=− N X i=1 N X j=1 λ2 i,j 2σ2 + C, ∂ ∂λi,j log P(Λ) = −λi,j σ2 . where C is a constant and σ is set to 1 in later experiments. The optimal Λ can be obtained by quasi-Newton methods using the above LΛ and ∂LΛ ∂λi,j , and we use L-BFGS (Liu and Nocedal, 1989) for this purpose2. However, the calculation is intractable because ZΛ(wl) (see Equation (9)) in Equation (16) and a term in Equation (17) contain summations over all the possible POS tags. To cope with the problem, we use the sampling technique again for the calculation, as suggested by Rosenfeld et al. (2001). ZΛ(wl) can be approximated using M samples {t(1), · · · , t(M)} generated from p0(t|wl): ZΛ(wl)= X t∈T (wl) p0(t|wl) exp ( N X i=1 N X j=1 λi,jfi,j(t) ) , 2In later experiments, L-BFGS often did not converge completely because we used approximation with Gibbs sampling, and we stopped iteration of L-BFGS in such cases. ≃1 M M X m=1 exp ( N X i=1 N X j=1 λi,jfi,j(t(m)) ) . (18) The term in Equation (17) can also be approximated using M samples {t(1), · · · , t(M)} generated from PΛ(t|wl) with Gibbs sampling: X t∈T (wl) fi,j(t)PΛ(t|wl)≃1 M M X m=1 fi,j(t(m)). (19) In later experiments, the initial state t(1) in Gibbs sampling is set to the gold standard tags in the training data. 2.4 Use of Unlabeled Data In our model, unlabeled data can be easily used by simply concatenating the test data and the unlabeled data, and decoding them in the testing phase. Intuitively, if we increase the amount of the test data, test examples with informative local features may increase. The POS tags of such examples can be easily predicted, and they are used as global features in prediction of other examples. Thus, this method uses unlabeled data in only the testing phase, and the training phase is the same as the case with no unlabeled data. 
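To summarize the estimation step of Section 2.3 in code form, the per-example contribution to the gradient in Equation (17) is the difference between the pair counts observed on the gold tags and their expectation under the model, with the expectation approximated by Gibbs samples as in Equation (19). A rough sketch follows; the helper and variable names are ours, the samples argument is assumed to come from a sampler like the one above but drawing from the current model, and the Gaussian prior term -lambda_{i,j}/sigma^2 is added once per parameter over the whole training set rather than here.

def feature_counts(tags):
    # f_{i,j}(t) of Equation (7): half a count for every ordered pair of
    # distinct occurrences whose tags are i and j.
    counts = {}
    for k, ti in enumerate(tags):
        for k2, tj in enumerate(tags):
            if k != k2:
                counts[(ti, tj)] = counts.get((ti, tj), 0.0) + 0.5
    return counts

def example_gradient(gold_tags, samples):
    # Equation (17) without the prior: empirical counts on the gold tags
    # minus expected counts estimated from M Gibbs samples (Equation (19)).
    grad = dict(feature_counts(gold_tags))
    M = len(samples)
    for sample in samples:  # drawn from P_Lambda(t|w_l)
        for key, value in feature_counts(sample).items():
            grad[key] = grad.get(key, 0.0) - value / M
    return grad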
3 Experiments 3.1 Data and Procedure We use eight corpora for our experiments; the Penn Chinese Treebank corpus 2.0 (CTB), a part of the PFR corpus (PFR), the EDR corpus (EDR), the Kyoto University corpus version 2 (KUC), the RWCP corpus (RWC), the GENIA corpus 3.02p (GEN), the SUSANNE corpus (SUS) and the Penn Treebank WSJ corpus (WSJ), (cf. Table 1). All the corpora are POS tagged corpora in Chinese(C), English(E) or Japanese(J), and they are split into three portions; training data, test data and unlabeled data. The unlabeled data is used in experiments of semi-supervised learning, and POS tags of unknown words in the unlabeled data are eliminated. Table 1 summarizes detailed information about the corpora we used: the language, the number of POS tags, the number of open class tags (POS tags that unknown words can have, described later), the sizes of training, test and unlabeled data, and the splitting method of them. For the test data and the unlabeled data, unknown words are defined as words that do not appear in the training data. The number of unknown words in the test data of each corpus is shown in Table 1, parentheses. Accuracy of POS guessing of unknown words is calculated based on how many words among them are correctly POS-guessed. Figure 2 shows the procedure of the experiments. We split the training data into two parts; the first half as sub-training data 1 and the latter half as sub-training data 2 (Figure 2, *1). Then, we check the words that appear in the sub-training 708 Corpus # of POS # of Tokens (# of Unknown Words) [partition in the corpus] (Lang.) (Open Class) Training Test Unlabeled CTB 34 84,937 7,980 (749) 6,801 (C) (28) [sec. 1–270] [sec. 271–300] [sec. 301–325] PFR 42 304,125 370,627 (27,774) 445,969 (C) (39) [Jan. 1–Jan. 9] [Jan. 10–Jan. 19] [Jan. 20–Jan. 31] EDR 15 2,550,532 1,280,057 (24,178) 1,274,458 (J) (15) [id = 4n + 0, id = 4n + 1] [id = 4n + 2] [id = 4n + 3] KUC 40 198,514 31,302 (2,477) 41,227 (J) (36) [Jan. 1–Jan. 8] [Jan. 9] [Jan. 10] RWC 66 487,333 190,571 (11,177) 210,096 (J) (55) [1–10,000th sentences] [10,001–14,000th sentences] [14,001–18,672th sentences] GEN 47 243,180 123,386 (7,775) 134,380 (E) (36) [1–10,000th sentences] [10,001–15,000th sentences] [15,001–20,546th sentences] SUS 125 74,902 37,931 (5,760) 37,593 (E) (90) [sec. A01–08, G01–08, [sec. A09–12, G09–12, [sec. A13–20, G13–22, J01–08, N01–08] J09–17, N09–12] J21–24, N13–18] WSJ 45 912,344 129,654 (4,253) 131,768 (E) (33) [sec. 0–18] [sec. 22–24] [sec. 19–21] Table 1: Statistical Information of Corpora Corpus Training Data Test Data Unlabeled Data SubTraining data 1 (*1) SubTraining data 2 (*1) Sub-Local Model 1 (*3) Sub-Local Model 2 (*3) Global Model Local Model (*2) (optional) Test Result Data flow for training Data flow for testing Figure 2: Experimental Procedure data 1 but not in the sub-training data 2, or vice versa. We handle these words as (pseudo) unknown words in the training data. Such (two-fold) cross-validation is necessary to make training examples that contain unknown words3. POS tags that these pseudo unknown words have are defined as open class tags, and only the open class tags are considered as candidate POS tags for unknown words in the test data (i.e., N is equal to the number of the open class tags). In the training phase, we need to estimate two types of parameters; local model (parameters), which is necessary to calculate p0(t|w), and global model (parameters), i.e., λi,j. The local model parameters are estimated using all the training data (Figure 2, *2). 
Local 3A major method for generating such pseudo unknown words is to collect the words that appear only once in a corpus (Nagata, 1999). These words are called hapax legomena and known to have similar characteristics to real unknown words (Baayen and Sproat, 1996). These words are interpreted as being collected by the leave-one-out technique (which is a special case of cross-validation) as follows: One word is picked from the corpus and the rest of the corpus is considered as training data. The picked word is regarded as an unknown word if it does not exist in the training data. This procedure is iterated for all the words in the corpus. However, this approach is not applicable to our experiments because those words that appear only once in the corpus do not have global information and are useless for learning the global model, so we use the two-fold cross validation method. model parameters and training data are necessary to estimate the global model parameters, but the global model parameters cannot be estimated from the same training data from which the local model parameters are estimated. In order to estimate the global model parameters, we firstly train sub-local models 1 and 2 from the sub-training data 1 and 2 respectively (Figure 2, *3). The sub-local models 1 and 2 are used for calculating p0(t|w) of unknown words in the sub-training data 2 and 1 respectively, when the global model parameters are estimated from the entire training data. In the testing phase, p0(t|w) of unknown words in the test data are calculated using the local model parameters which are estimated from the entire training data, and test results are obtained using the global model with the local model. Global information cannot be used for unknown words whose lexical forms appear only once in the training or test data, so we process only nonunique unknown words (unknown words whose lexical forms appear more than once) using the proposed model. In the testing phase, POS tags of unique unknown words are determined using only the local information, by choosing POS tags which maximize p0(t|w). Unlabeled data can be optionally used for semisupervised learning. In that case, the test data and the unlabeled data are concatenated, and the best POS tags which maximize the probability of the mixed data are searched. 3.2 Initial Distribution In our method, the initial distribution p0(t|w) is used for calculating the probability of t given local context w (Equation (8)). We use maximum entropy (ME) models for the initial distribution. p0(t|w) is calculated by ME models as follows (Berger et al., 1996): p0(t|w)= 1 Y (w) exp ( H X h=1 αhgh(w, t) ) , (20) 709 Language Features English Prefixes of ω0 up to four characters, suffixes of ω0 up to four characters, ω0 contains Arabic numerals, ω0 contains uppercase characters, ω0 contains hyphens. Chinese Prefixes of ω0 up to two characters, Japanese suffixes of ω0 up to two characters, ψ1, ψ|ω0|, ψ1 & ψ|ω0|, S|ω0| i=1 {ψi} (set of character types). (common) |ω0| (length of ω0), τ−1, τ+1, τ−2 & τ−1, τ+1 & τ+2, τ−1 & τ+1, ω−1 & τ−1, ω+1 & τ+1, ω−2 & τ−2 & ω−1 & τ−1, ω+1 & τ+1 & ω+2 & τ+2, ω−1 & τ−1 & ω+1 & τ+1. Table 2: Features Used for Initial Distribution Y (w)= N X t=1 exp ( H X h=1 αhgh(w, t) ) , (21) where gh(w, t) is a binary feature function. 
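In other words, p0(t|w) in Equations (20) and (21) is a standard conditional maximum entropy model: a normalized exponential of weighted binary features. The following is a minimal sketch, with invented feature functions standing in for the templates of Table 2.

import math

def local_prob(tag, context, alphas, feature_funcs, num_tags):
    # Equation (20): exp(sum_h alpha_h g_h(w, t)) / Y(w), with Y(w) as in (21).
    def score(t):
        return math.exp(sum(a * g(context, t)
                            for a, g in zip(alphas, feature_funcs)))
    return score(tag) / sum(score(t) for t in range(num_tags))

# Hypothetical binary features g_h(w, t); the real templates are in Table 2.
feature_funcs = [
    lambda w, t: 1.0 if w["prev_pos"] == "NNP" and t == 0 else 0.0,
    lambda w, t: 1.0 if w["suffix"] == "tion" and t == 1 else 0.0,
]
alphas = [1.2, 0.8]
context = {"prev_pos": "NNP", "suffix": "tion"}
print(local_prob(0, context, alphas, feature_funcs, num_tags=3))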
We assume that each local context w contains the following information about the unknown word: • The POS tags of the two words on each side of the unknown word: τ−2, τ−1, τ+1, τ+2.4 • The lexical forms of the unknown word itself and the two words on each side of the unknown word: ω−2, ω−1, ω0, ω+1, ω+2. • The character types of all the characters composing the unknown word: ψ1, · · · , ψ|ω0|. We use six character types: alphabet, numeral (Arabic and Chinese numerals), symbol, Kanji (Chinese character), Hiragana (Japanese script) and Katakana (Japanese script). A feature function gh(w, t) returns 1 if w and t satisfy certain conditions, and otherwise 0; for example: g123(w, t)= n 1 (ω−1 =“President” and τ−1 =“NNP” and t = 5), 0 (otherwise). The features we use are shown in Table 2, which are based on the features used by Ratnaparkhi (1996) and Uchimoto et al. (2001). The parameters αh in Equation (20) are estimated using all the words in the training data whose POS tags are the open class tags. 3.3 Experimental Results The results are shown in Table 3. In the table, local, local+global and local+global w/ unlabeled indicate that the results were obtained using only local information, local and global information, and local and global information with the extra unlabeled data, respectively. The results using only local information were obtained by choosing POS 4In both the training and the testing phases, POS tags of known words are given from the corpora. When these surrounding words contain unknown words, their POS tags are represented by a special tag Unk. PFR (Chinese) +162 vn (verbal noun) +150 ns (place name) +86 nz (other proper noun) +85 j (abbreviation) +61 nr (personal name) · · · · · · −26 m (numeral) −100 v (verb) RWC (Japanese) +33 noun-proper noun-person name-family name +32 noun-proper noun-place name +28 noun-proper noun-organization name +17 noun-proper noun-person name-first name +6 noun-proper noun +4 noun-sahen noun · · · · · · −2 noun-proper noun-place name-country name −29 noun SUS (English) +13 NP (proper noun) +6 JJ (adjective) +2 VVD (past tense form of lexical verb) +2 NNL (locative noun) +2 NNJ (organization noun) · · · · · · −3 NN (common noun) −6 NNU (unit-of-measurement noun) Table 4: Ordered List of Increased/Decreased Number of Correctly Tagged Words tags ˆt = {ˆt1, · · · , ˆtK} which maximize the probabilities of the local model: ˆtk=argmax t p0(t|wk). (22) The table shows the accuracies, the numbers of errors, the p-values of McNemar’s test against the results using only local information, and the numbers of non-unique unknown words in the test data. On an Opteron 250 processor with 8GB of RAM, model parameter estimation and decoding without unlabeled data for the eight corpora took 117 minutes and 39 seconds in total, respectively. In the CTB, PFR, KUC, RWC and WSJ corpora, the accuracies were improved using global information (statistically significant at p < 0.05), compared to the accuracies obtained using only local information. The increases of the accuracies on the English corpora (the GEN and SUS corpora) were small. Table 4 shows the increased/decreased number of correctly tagged words using global information in the PFR, RWC and SUS corpora. In the PFR (Chinese) and RWC (Japanese) corpora, many proper nouns were correctly tagged using global information. In Chinese and Japanese, proper nouns are not capitalized, therefore proper nouns are difficult to distinguish from common nouns with only local information. 
One reason that only the small increases were obtained with global information in the English corpora seems to be the low ambiguities of proper nouns. Many verbal nouns in PFR and a few sahen-nouns (Japanese verbal nouns) in RWC, which suffer from the problem of possibility-based POS tags, were also correctly tagged using global information. When the unlabeled data was used, the number of nonunique words in the test data increased. Compared with the case without the unlabeled data, the accu710 Corpus Accuracy for Unknown Words (# of Errors) (Lang.) [p-value] ⟨# of Non-unique Unknown Words⟩ local local+global local+global w/ unlabeled CTB 0.7423 (193) 0.7717 (171) 0.7704 (172) (C) [0.0000] ⟨344⟩ [0.0001] ⟨361⟩ PFR 0.6499 (9723) 0.6690 (9193) 0.6785 (8930) (C) [0.0000] ⟨16019⟩ [0.0000] ⟨18861⟩ EDR 0.9639 (874) 0.9643 (863) 0.9651 (844) (J) [0.1775] ⟨4903⟩ [0.0034] ⟨7770⟩ KUC 0.7501 (619) 0.7634 (586) 0.7562 (604) (J) [0.0000] ⟨788⟩ [0.0872] ⟨936⟩ RWC 0.7699 (2572) 0.7785 (2476) 0.7787 (2474) (J) [0.0000] ⟨5044⟩ [0.0000] ⟨5878⟩ GEN 0.8836 (905) 0.8837 (904) 0.8863 (884) (E) [1.0000] ⟨4094⟩ [0.0244] ⟨4515⟩ SUS 0.7934 (1190) 0.7957 (1177) 0.7979 (1164) (E) [0.1878] ⟨3210⟩ [0.0116] ⟨3583⟩ WSJ 0.8345 (704) 0.8368 (694) 0.8352 (701) (E) [0.0162] ⟨1412⟩ [0.7103] ⟨1627⟩ Table 3: Results of POS Guessing of Unknown Words Corpus Mean±Standard Deviation (Lang.) Marginal S.A. CTB (C) 0.7696±0.0021 0.7682±0.0028 PFR (C) 0.6707±0.0010 0.6712±0.0014 EDR (J) 0.9644±0.0001 0.9645±0.0001 KUC (J) 0.7595±0.0031 0.7612±0.0018 RWC (J) 0.7777±0.0017 0.7772±0.0020 GEN (E) 0.8841±0.0009 0.8840±0.0007 SUS (E) 0.7997±0.0038 0.7995±0.0034 WSJ (E) 0.8366±0.0013 0.8360±0.0021 Table 5: Results of Multiple Trials and Comparison to Simulated Annealing racies increased in several corpora but decreased in the CTB, KUC and WSJ corpora. Since our method uses Gibbs sampling in the training and the testing phases, the results are affected by the sequences of random numbers used in the sampling. In order to investigate the influence, we conduct 10 trials with different sequences of pseudo random numbers. We also conduct experiments using simulated annealing in decoding, as conducted by Finkel et al. (2005) for information extraction. We increase inverse temperature β in Equation (1) from β = 1 to β ≈∞with the linear cooling schedule. The results are shown in Table 5. The table shows the mean values and the standard deviations of the accuracies for the 10 trials, and Marginal and S.A. mean that decoding is conducted using Equation (11) and simulated annealing respectively. The variances caused by random numbers and the differences of the accuracies between Marginal and S.A. are relatively small. 4 Related Work Several studies concerning the use of global information have been conducted, especially in named entity recognition, which is a similar task to POS guessing of unknown words. Chieu and Ng (2002) conducted named entity recognition using global features as well as local features. In their ME model-based method, some global features were used such as “when the word appeared first in a position other than the beginning of sentences, the word was capitalized or not”. These global features are static and can be handled in the same manner as local features, therefore Viterbi decoding was used. The method is efficient but does not handle interactions between labels. Finkel et al. (2005) proposed a method incorporating non-local structure for information extraction. 
They attempted to use label consistency of named entities, which is the property that named entities with the same lexical form tend to have the same label. They defined two probabilistic models; a local model based on conditional random fields and a global model based on loglinear models. Then the final model was constructed by multiplying these two models, which can be seen as unnormalized log-linear interpolation (Klakow, 1998) of the two models which are weighted equally. In their method, interactions between labels in the whole document were considered, and they used Gibbs sampling and simulated annealing for decoding. Our model is largely similar to their model. However, in their method, parameters of the global model were estimated using relative frequencies of labels or were selected by hand, while in our method, global model parameters are estimated from training data so as to fit to the data according to the objective function. One approach for incorporating global information in natural language processing is to utilize consistency of labels, and such an approach have been used in other tasks. Takamura et al. (2005) proposed a method based on the spin models in physics for extracting semantic orientations of words. In the spin models, each electron has one of two states, up or down, and the models give probability distribution of the states. The states of electrons interact with each other and neighboring electrons tend to have the same spin. In their 711 method, semantic orientations (positive or negative) of words are regarded as states of spins, in order to model the property that the semantic orientation of a word tends to have the same orientation as words in its gloss. The mean field approximation was used for inference in their method. Yarowsky (1995) studied a method for word sense disambiguation using unlabeled data. Although no probabilistic models were considered explicitly in the method, they used the property of label consistency named “one sense per discourse” for unsupervised learning together with local information named “one sense per collocation”. There exist other approaches using global information which do not necessarily aim to use label consistency. Rosenfeld et al. (2001) proposed whole-sentence exponential language models. The method calculates the probability of a sentence s as follows: P(s)= 1 Z p0(s) exp (X i λifi(s) ) , where p0(s) is an initial distribution of s and any language models such as trigram models can be used for this. fi(s) is a feature function and can handle sentence-wide features. Note that if we regard fi,j(t) in our model (Equation (7)) as a feature function, Equation (8) is essentially the same form as the above model. Their models can incorporate any sentence-wide features including syntactic features obtained by shallow parsers. They attempted to use Gibbs sampling and other sampling methods for inference, and model parameters were estimated from training data using the generalized iterative scaling algorithm with the sampling methods. Although they addressed modeling of whole sentences, the method can be directly applied to modeling of whole documents which allows us to incorporate unlabeled data easily as we have discussed. This approach, modeling whole wide-scope contexts with log-linear models and using sampling methods for inference, gives us an expressive framework and will be applied to other tasks. 
5 Conclusion In this paper, we presented a method for guessing parts-of-speech of unknown words using global information as well as local information. The method models a whole document by considering interactions between POS tags of unknown words with the same lexical form. Parameters of the model are estimated from training data using Gibbs sampling. Experimental results showed that the method improves accuracies of POS guessing of unknown words especially for Chinese and Japanese. We also applied the method to semisupervised learning, but the results were not consistent and there is some room for improvement. Acknowledgements This work was supported by a grant from the National Institute of Information and Communications Technology of Japan. References Christophe Andrieu, Nando de Freitas, Arnaud Doucet, and Michael I. Jordan. 2003. An introduction to MCMC for Machine Learning. Machine Learning, 50:5–43. Masayuki Asahara. 2003. Corpus-based Japanese morphological analysis. Nara Institute of Science and Technology, Doctor’s Thesis. Harald Baayen and Richard Sproat. 1996. Estimating Lexical Priors for Low-Frequency Morphologically Ambiguous Forms. Computational Linguistics, 22(2):155–166. Adam L. Berger, Stephen A. Della Pietra, and Vincent J. Della Pietra. 1996. A Maximum Entropy Approach to Natural Language Processing. Computational Linguistics, 22(1):39–71. Stanley Chen and Ronald Rosenfeld. 1999. A Gaussian Prior for Smoothing Maximum Entropy Models. Technical Report CMUCS-99-108, Carnegie Mellon University. Chao-jan Chen, Ming-hong Bai, and Keh-Jiann Chen. 1997. Category Guessing for Chinese Unknown Words. In Proceedings of NLPRS ’97, pages 35–40. Hai Leong Chieu and Hwee Tou Ng. 2002. Named Entity Recognition: A Maximum Entropy Approach Using Global Information. In Proceedings of COLING 2002, pages 190–196. Jenny Rose Finkel, Trond Grenager, and Christopher Manning. 2005. Incorporating Non-local Information into Information Extraction Systems by Gibbs Sampling. In Proceedings of ACL 2005, pages 363–370. D. Klakow. 1998. Log-linear interpolation of language models. In Proceedings of ICSLP ’98, pages 1695–1699. Dong C. Liu and Jorge Nocedal. 1989. On the limited memory BFGS method for large scale optimization. Mathematical Programming, 45(3):503–528. David J. C. MacKay. 2003. Information Theory, Inference, and Learning Algorithms. Cambridge University Press. Jose. L. Marroquin. 1985. Optimal Bayesian Estimators for Image Segmentation and Surface Reconstruction. A.I. Memo 839, MIT. Andrei Mikheev. 1997. Automatic Rule Induction for UnknownWord Guessing. Computational Linguistics, 23(3):405–423. Shinsuke Mori and Makoto Nagao. 1996. Word Extraction from Corpora and Its Part-of-Speech Estimation Using Distributional Analysis. In Proceedings of COLING ’96, pages 1119–1122. Masaki Nagata. 1999. A Part of Speech Estimation Method for Japanese Unknown Words using a Statistical Model of Morphology and Context. In Proceedings of ACL ’99, pages 277–284. Giorgos S. Orphanos and Dimitris N. Christodoulakis. 1999. POS Disambiguation and Unknown Word Guessing with Decision Trees. In Proceedings of EACL ’99, pages 134–141. Adwait Ratnaparkhi. 1996. A Maximum Entropy Model for Part-ofSpeech Tagging. In Proceedings of EMNLP ’96, pages 133–142. Ronald Rosenfeld, Stanley F. Chen, and Xiaojin Zhu. 2001. Whole-Sentence Exponential Language Models: A Vehicle For Linguistic-Statistical Integration. Computers Speech and Language, 15(1):55–73. Hiroya Takamura, Takashi Inui, and Manabu Okumura. 2005. 
Extracting Semantic Orientations of Words using Spin Model. In Proceedings of ACL 2005, pages 133–140. Kiyotaka Uchimoto, Satoshi Sekine, and Hitoshi Isahara. 2001. The Unknown Word Problem: a Morphological Analysis of Japanese Using Maximum Entropy Aided by a Dictionary. In Proceedings of EMNLP 2001, pages 91–99. Shaojun Wang, Shaomin Wang, Russel Greiner, Dale Schuurmans, and Li Cheng. 2005. Exploiting Syntactic, Semantic and Lexical Regularities in Language Modeling via Directed Markov Random Fields. In Proceedings of ICML 2005, pages 948–955. David Yarowsky. 1995. Unsupervised Word Sense Disambiguation Rivaling Supervised Methods. In Proceedings of ACL ’95, pages 189–196. 712
Proceedings of the 21st International Conference on Computational Linguistics and 44th Annual Meeting of the ACL, pages 65–72, Sydney, July 2006. c⃝2006 Association for Computational Linguistics Discriminative Word Alignment with Conditional Random Fields Phil Blunsom and Trevor Cohn Department of Software Engineering and Computer Science University of Melbourne {pcbl,tacohn}@csse.unimelb.edu.au Abstract In this paper we present a novel approach for inducing word alignments from sentence aligned data. We use a Conditional Random Field (CRF), a discriminative model, which is estimated on a small supervised training set. The CRF is conditioned on both the source and target texts, and thus allows for the use of arbitrary and overlapping features over these data. Moreover, the CRF has efficient training and decoding processes which both find globally optimal solutions. We apply this alignment model to both French-English and Romanian-English language pairs. We show how a large number of highly predictive features can be easily incorporated into the CRF, and demonstrate that even with only a few hundred word-aligned training sentences, our model improves over the current state-ofthe-art with alignment error rates of 5.29 and 25.8 for the two tasks respectively. 1 Introduction Modern phrase based statistical machine translation (SMT) systems usually break the translation task into two phases. The first phase induces word alignments over a sentence-aligned bilingual corpus, and the second phase uses statistics over these predicted word alignments to decode (translate) novel sentences. This paper deals with the first of these tasks: word alignment. Most current SMT systems (Och and Ney, 2004; Koehn et al., 2003) use a generative model for word alignment such as the freely available GIZA++ (Och and Ney, 2003), an implementation of the IBM alignment models (Brown et al., 1993). These models treat word alignment as a hidden process, and maximise the probability of the observed (e, f) sentence pairs1 using the expectation maximisation (EM) algorithm. After the maximisation process is complete, the word alignments are set to maximum posterior predictions of the model. While GIZA++ gives good results when trained on large sentence aligned corpora, its generative models have a number of limitations. Firstly, they impose strong independence assumptions between features, making it very difficult to incorporate non-independent features over the sentence pairs. For instance, as well as detecting that a source word is aligned to a given target word, we would also like to encode syntactic and lexical features of the word pair, such as their partsof-speech, affixes, lemmas, etc. Features such as these would allow for more effective use of sparse data and result in a model which is more robust in the presence of unseen words. Adding these non-independent features to a generative model requires that the features’ inter-dependence be modelled explicitly, which often complicates the model (eg. Toutanova et al. (2002)). Secondly, the later IBM models, such as Model 4, have to resort to heuristic search techniques to approximate forward-backward and Viterbi inference, which sacrifice optimality for tractability. This paper presents an alternative discriminative method for word alignment. We use a conditional random field (CRF) sequence model, which allows for globally optimal training and decoding (Lafferty et al., 2001). 
The inference algo1We adopt the standard notation of e and f to denote the target (English) and source (foreign) sentences, respectively. 65 rithms are tractable and efficient, thereby avoiding the need for heuristics. The CRF is conditioned on both the source and target sentences, and therefore supports large sets of diverse and overlapping features. Furthermore, the model allows regularisation using a prior over the parameters, a very effective and simple method for limiting over-fitting. We use a similar graphical structure to the directed hidden Markov model (HMM) from GIZA++ (Och and Ney, 2003). This models one-to-many alignments, where each target word is aligned with zero or more source words. Many-to-many alignments are recoverable using the standard techniques for superimposing predicted alignments in both translation directions. The paper is structured as follows. Section 2 presents CRFs for word alignment, describing their form and their inference techniques. The features of our model are presented in Section 3, and experimental results for word aligning both French-English and Romanian-English sentences are given in Section 4. Section 5 presents related work, and we describe future work in Section 6. Finally, we conclude in Section 7. 2 Conditional random fields CRFs are undirected graphical models which define a conditional distribution over a label sequence given an observation sequence. We use a CRF to model many-to-one word alignments, where each source word is aligned with zero or one target words, and therefore each target word can be aligned with many source words. Each source word is labelled with the index of its aligned target, or the special value null, denoting no alignment. An example word alignment is shown in Figure 1, where the hollow squares and circles indicate the correct alignments. In this example the French words une and autre would both be assigned the index 24 – for the English word another – when French is the source language. When the source language is English, another could be assigned either index 25 or 26; in these ambiguous situations we take the first index. The joint probability density of the alignment, a (a vector of target indices), conditioned on the source and target sentences, e and f, is given by: pΛ(a|e, f) = exp P t P k λkhk(t, at−1, at, e, f) ZΛ(e, f) (1) where we make a first order Markov assumption they are constrained by limits which are imposed in order to ensure that the freedom of one person does not violate that of another . . autre une de celle sur pas empiète ne personne une de liberté la que garantir pour fixées été ont qui limites certaines par restreints sont ils Figure 1. A word-aligned example from the Canadian Hansards test set. Hollow squares represent gold standard sure alignments, circles are gold possible alignments, and filled squares are predicted alignments. over the alignment sequence. Here t ranges over the indices of the source sentence (f), k ranges over the model’s features, and Λ = {λk} are the model parameters (weights for their corresponding features). The feature functions hk are predefined real-valued functions over the source and target sentences coupled with the alignment labels over adjacent times (source sentence locations), t. These feature functions are unconstrained, and may represent overlapping and non-independent features of the data. 
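To make the form of Equation (1) concrete, the log of its numerator is simply a sum of weighted feature values over source positions. The sketch below is illustrative only: the single feature shown is a made-up lexical indicator in the spirit of the features described in Section 3, and null alignments are represented by None.

import math

def log_potential(a, e, f, features, weights):
    # Log of the numerator of Equation (1): the sum over source positions t
    # and features k of lambda_k * h_k(t, a_{t-1}, a_t, e, f).
    total = 0.0
    for t in range(len(f)):
        prev = a[t - 1] if t > 0 else None
        for h, lam in zip(features, weights):
            total += lam * h(t, prev, a[t], e, f)
    return total

# Illustrative lexical indicator feature: active when 'de' is aligned to 'of'.
def h_de_of(t, a_prev, a_t, e, f):
    return 1.0 if a_t is not None and f[t] == "de" and e[a_t] == "of" else 0.0

e = ["the", "provision", "of", "assistance"]
f = ["la", "prestation", "de", "aide"]
print(math.exp(log_potential([0, 1, 2, 3], e, f, [h_de_of], [1.5])))

Dividing this potential by the partition function, discussed next, yields the conditional probability of the alignment.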
The distribution is globally normalised by the partition function, ZΛ(e, f), which sums out the numerator in (1) for every possible alignment: ZΛ(e, f) = X a exp X t X k λkhk(t, at−1, at, e, f) We use a linear chain CRF, which is encoded in the feature functions of (1). The parameters of the CRF are usually estimated from a fully observed training sample (word aligned), by maximising the likelihood of these data. I.e. ΛML = arg maxΛ pΛ(D), where D = {(a, e, f)} are the training data. Because maximum likelihood estimators for log-linear models have a tendency to overfit the training sample (Chen and Rosenfeld, 1999), we define a prior distribution over the model parameters and derive a maximum a posteriori (MAP) estimate, ΛMAP = arg maxΛ pΛ(D)p(Λ). We use a zeromean Gaussian prior, with the probability density function p0(λk) ∝exp  −λ2 k 2σ2 k  . This yields a log-likelihood objective function of: L = X (a,e,f)∈D log pΛ(a|e, f) + X k log p0(λk) 66 = X (a,e,f)∈D X t X k λkhk(t, at−1, at, e, f) −log ZΛ(e, f) − X k λ2 k 2σ2 k + const. (2) In order to train the model, we maximize (2). While the log-likelihood cannot be maximised for the parameters, Λ, in closed form, it is a convex function, and thus we resort to numerical optimisation to find the globally optimal parameters. We use L-BFGS, an iterative quasi-Newton optimisation method, which performs well for training log-linear models (Malouf, 2002; Sha and Pereira, 2003). Each L-BFGS iteration requires the objective value and its gradient with respect to the model parameters. These are calculated using forward-backward inference, which yields the partition function, ZΛ(e, f), required for the log-likelihood, and the pair-wise marginals, pΛ(at−1, at|e, f), required for its derivatives. The Viterbi algorithm is used to find the maximum posterior probability alignment for test sentences, a∗ = arg maxa pΛ(a|e, f). Both the forward-backward and Viterbi algorithm are dynamic programs which make use of the Markov assumption to calculate efficiently the exact marginal distributions. 3 The alignment model Before we can apply our CRF alignment model, we must first specify the feature set – the functions hk in (1). Typically CRFs use binary indicator functions as features; these functions are only active when the observations meet some criteria and the label at (or label pair, (at−1, at)) matches a pre-specified label (pair). However, in our model the labellings are word indices in the target sentence and cannot be compared readily to labellings at other sites in the same sentence, or in other sentences with a different length. Such naive features would only be active for one labelling, therefore this model would suffer from serious sparse data problems. We instead define features which are functions of the source-target word match implied by a labelling, rather than the labelling itself. For example, from the sentence in Figure 1 for the labelling of f24 = de with a24 = 16 (for e16 = of) we might detect the following feature: h(t, at−1, at, f, e) =  1, if eat = ‘of’ ∧ft = ‘de’ 0, otherwise Note that it is the target word indexed by at, rather than the index itself, which determines whether the feature is active, and thus the sparsity of the index label set is not an issue. 3.1 Features One of the main advantages of using a conditional model is the ability to explore a diverse range of features engineered for a specific task. 
In our CRF model we employ two main types of features: those defined on a candidate aligned pair of words; and Markov features defined on the alignment sequence predicted by the model. Dice and Model 1 As we have access to only a small amount of word aligned data we wish to be able to incorporate information about word association from any sentence aligned data available. A common measure of word association is the Dice coefficient (Dice, 1945): Dice(e, f) = 2 × CEF (e, f) CE(e) + CF (e) where CE and CF are counts of the occurrences of the words e and f in the corpus, while CEF is their co-occurrence count. We treat these Dice values as translation scores: a high (low) value incidates that the word pair is a good (poor) candidate translation. However, the Dice score often over-estimates the association between common words. For instance, the words the and of both score highly when combined with either le or de, simply because these common words frequently co-occur. The GIZA++ models can be used to provide better translation scores, as they enforce competition for alignment beween the words. For this reason, we used the translation probability distribution from Model 1 in addition to the DICE scores. Model 1 is a simple position independent model which can be trained quickly and is often used to bootstrap parameters for more complex models. It models the conditional probability distribution: p(f, a|e) = p(|f|||e|) (|e| + 1)|f| × |f| Y t=1 p(ft|eat) where p(f|e) are the word translation probabilities. We use both the Dice value and the Model 1 translation probability as real-valued features for each candidate pair, as well as a normalised score 67 over all possible candidate alignments for each target word. We derive a feature from both the Dice and Model 1 translation scores to allow competition between sources words for a particular target alignment. This feature indicates whether a given alignment has the highest translation score of all the candidate alignments for a given target word. For the example in Figure 1, the words la, de and une all receive a high translation score when paired with the. To discourage all of these French words from aligning with the, the best of these (la) is flagged as the best candidate. This allows for competition between source words which would otherwise not occur. Orthographic features Features based on string overlap allow our model to recognise cognates and orthographically similar translation pairs, which are particularly common between European languages. Here we employ a number of string matching features inspired by similar features in Taskar et al. (2005). We use an indicator feature for every possible source-target word pair in the training data. In addition, we include indicator features for an exact string match, both with and without vowels, and the edit-distance between the source and target words as a realvalued feature. We also used indicator features to test for matching prefixes and suffixes of length three. As stated earlier, the Dice translation score often erroneously rewards alignments with common words. In order to address this problem, we include the absolute difference in word length as a real-valued feature and an indicator feature testing whether both words are shorter than 4 characters. Together these features allow the model to disprefer alignments between words with very different lengths – i.e. aligning rare (long) words with frequent (short) determiners, verbs etc. 
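As a small illustration of the Dice feature, the coefficient is computed directly from corpus counts; the counts below are invented for the example.

def dice(c_e, c_f, c_ef):
    # Dice(e, f) = 2 * C_EF(e, f) / (C_E(e) + C_F(f))
    return 2.0 * c_ef / (c_e + c_f)

# Invented counts: 'the' and 'le' are both frequent and co-occur often, so
# the score is high even though they are not reliable translations, which is
# why the best-candidate competition feature is also needed.
print(dice(c_e=50000, c_f=45000, c_ef=30000))  # approx. 0.63
print(dice(c_e=120, c_f=95, c_ef=80))          # approx. 0.74 for a rarer pair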
POS tags Part-of-speech tags are an effective method for addressing the sparsity of the lexical features. Observe in Figure 2 that the nounadjective pair Canadian experts aligns with the adjective-noun pair sp´ecialistes canadiens: the alignment exactly matches the parts-of-speech. Access to the words’ POS tags will allow simple modelling of such effects. POS can also be useful for less closely related language pairs, such as English and Japanese where English determiners are never aligned; nor are Japanese case markers. For our French-English language pair we POS tagged the source and target sentences with TreeTagger.2 We created indicator features over the POS tags of each candidate source and target word pair, as well as over the source word and target POS (and vice-versa). As we didn’t have access to a Romanian POS tagger, these features were not used for the Romanian-English language pair. Bilingual dictionary Dictionaries are another source of information for word alignment. We use a single indicator feature which detects when the source and target words appear in an entry of the dictionary. For the English-French dictionary we used FreeDict,3 which contains 8,799 English words. For Romanian-English we used a dictionary compiled by Rada Mihalcea,4 which contains approximately 38,000 entries. Markov features Features defined over adjacent aligment labels allow our model to reflect the tendency for monotonic alignments between European languages. We define a real-valued alignment index jump width feature: jump width(t −1, t) = abs(at −at−1 −1) this feature has a value of 0 if the alignment labels follow the downward sloping diagonal, and is positive otherwise. This differs from the GIZA++ hidden Markov model which has individual parameters for each different jump width (Och and Ney, 2003; Vogel et al., 1996): we found a single feature (and thus parameter) to be more effective. We also defined three indicator features over null transitions to allow the modelling of the probability of transition between, to and from null labels. Relative sentence postion A feature for the absolute difference in relative sentence position (abs( at |e| − t |f|)) allows the model to learn a preference for aligning words close to the alignment matrix diagonal. We also included two conjunction features for the relative sentence position multiplied by the Dice and Model 1 translation scores. Null We use a number of variants on the above features for alignments between a source word and the null target. The maximum translation score between the source and one of the target words 2http://www.ims.uni-stuttgart.de/projekte/corplex/TreeTagger 3http://www.freedict.de 4http://lit.csci.unt.edu/˜rada/downloads/RoNLP/R.E.tralex 68 model precision recall f-score AER Model 4 refined 87.4 95.1 91.1 9.81 Model 4 intersection 97.9 86.0 91.6 7.42 French →English 96.7 85.0 90.5 9.21 English →French 97.3 83.0 89.6 10.01 intersection 98.7 78.6 87.5 12.02 refined 95.7 89.2 92.3 7.37 Table 1. Results on the Hansard data using all features model precision recall f-score AER Model 4 refined 80.49 64.10 71,37 28.63 Model 4 intersected 95.94 53.56 68.74 31.26 Romanian →English 82.9 61.3 70.5 29.53 English →Romanian 82.8 60.6 70.0 29.98 intersection 94.4 52.5 67.5 32.45 refined 77.1 68.5 72.6 27.41 Table 2. Results on the Romanian data using all features is used as a feature to represent whether there is a strong alignment candidate. The sum of these scores is also used as a feature. 
Each source word and POS tag pair are used as indicator features which allow the model to learn particular words of tags which tend to commonly (or rarely) align. 3.2 Symmetrisation In order to produce many-to-many alignments we combine the outputs of two models, one for each translation direction. We use the refined method from Och and Ney (2003) which starts from the intersection of the two models’ predictions and ‘grows’ the predicted alignments to neighbouring alignments which only appear in the output of one of the models. 4 Experiments We have applied our model to two publicly available word aligned corpora. The first is the English-French Hansards corpus, which consists of 1.1 million aligned sentences and 484 wordaligned sentences. This data set was used for the 2003 NAACL shared task (Mihalcea and Pedersen, 2003), where the word-aligned sentences were split into a 37 sentence trial set and a 447 sentence testing set. Unlike the unsupervised entrants in the 2003 task, we require word-aligned training data, and therefore must cannibalise the test set for this purpose. We follow Taskar et al. (2005) by using the first 100 test sentences for training and the remaining 347 for testing. This means that our results should not be directly compared to those entrants, other than in an approximate manner. We used the original 37 sentence trial set for feature engineering and for fitting a Gaussian prior. The word aligned data are annotated with both sure (S) and possible (P) alignments (S ⊆P; Och and Ney (2003)), where the possible alignments indicate ambiguous or idiomatic alignments. We measure the performance of our model using alignment error rate (AER), which is defined as: AER(A, S, P) = 1 −|A ∩S| + |A ∩P| |A| + |S| where A is the set of predicted alignments. The second data set is the Romanian-English parallel corpus from the 2005 ACL shared task (Martin et al., 2005). This consists of approximately 50,000 aligned sentences and 448 wordaligned sentences, which are split into a 248 sentence trial set and a 200 sentence test set. We used these as our training and test sets, respectively. For parameter tuning, we used the 17 sentence trial set from the Romanian-English corpus in the 2003 NAACL task (Mihalcea and Pedersen, 2003). For this task we have used the same test data as the competition entrants, and therefore can directly compare our results. The word alignments in this corpus were only annotated with sure (S) alignments, and therefore the AER is equivalent to the F1 score. In the shared task it was found that models which were trained on only the first four letters of each word obtained superior results to those using the full words (Martin et al., 2005). We observed the same result with our model on the trial set and thus have only used the first four letters when training the Dice and Model 1 translation probabilities. Tables 1 and 2 show the results when all feature types are employed on both language pairs. We report the results for both translation directions and when combined using the refined and intersection methods. The Model 4 results are from GIZA++ with the default parameters and the training data lowercased. For Romanian, Model 4 was trained using the first four letters of each word. The Romanian results are close to the best reported result of 26.10 from the ACL shared task (Martin et al., 2005). This result was from a system based on Model 4 plus additional parameters such as a dictionary. 
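The alignment error rate defined above is straightforward to compute from the predicted set A and the gold sure (S) and possible (P) sets. A small sketch follows, with invented alignments given as pairs of source and target indices.

def aer(predicted, sure, possible):
    # AER(A, S, P) = 1 - (|A ∩ S| + |A ∩ P|) / (|A| + |S|), with S a subset of P.
    a = set(predicted)
    return 1.0 - (len(a & set(sure)) + len(a & set(possible))) / (len(a) + len(sure))

# Invented example: 4 predictions, 3 sure and 5 possible gold links.
sure = {(0, 0), (1, 1), (2, 3)}
possible = sure | {(2, 2), (3, 3)}
predicted = [(0, 0), (1, 1), (2, 2), (3, 4)]
print(aer(predicted, sure, possible))  # 1 - (2 + 3) / (4 + 3) = 2/7, approx. 0.29

When only sure alignments are annotated, as in the Romanian data, S equals P and this quantity reduces to one minus the balanced F-score.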
The standard Model 4 implementation in the shared task achieved a result of 31.65, while when only the first 4 letters of each word were used it achieved 28.80.5 5These results differ slightly our Model 4 results reported in Table 2. 69 ( ii ) ( a ) Three vehicles will be used by six Canadian experts related to the provision of technical assistance . . technique aide de prestation la de cadre le dans canadiens spécialistes 6 par utilisés seront véhicules 3 ) a ) ii ( (a) With Markov features ( ii ) ( a ) Three vehicles will be used by six Canadian experts related to the provision of technical assistance . . technique aide de prestation la de cadre le dans canadiens spécialistes 6 par utilisés seront véhicules 3 ) a ) ii ( (b) Without Markov features Figure 2. An example from the Hansard test set, showing the effect of the Markov features. Table 3 shows the effect of removing each of the feature types in turn from the full model. The most useful features are the Dice and Model 1 values which allow the model to incorporate translation probabilities from the large sentence aligned corpora. This is to be expected as the amount of word aligned data are extremely small, and therefore the model can only estimate translation probabilities for only a fraction of the lexicon. We would expect the dependence on sentence aligned data to decrease as more word aligned data becomes available. The effect of removing the Markov features can be seen from comparing Figures 2 (a) and (b). The model has learnt to prefer alignments that follow the diagonal, thus alignments such as 3 ↔three and prestation ↔provision are found, and missalignments such as de ↔of, which lie well off the diagonal, are avoided. The differing utility of the alignment word pair feature between the two tasks is probably a result of the different proportions of word- to sentencealigned data. For the French data, where a very large lexicon can be estimated from the million sentence alignments, the sparse word pairs learnt on the word aligned sentences appear to lead to overfitting. In contrast, for Romanian, where more word alignments are used to learn the translation pair features and much less sentence aligned data are available, these features have a significant impact on the model. Suprisingly the orthographic features actually worsen the performance in the tasks (incidentally, these features help the trial set). Our explanation is that the other features (eg. Model 1) already adequately model these correspondences, and therefore the orthographic feafeature group Rom ↔Eng Fre ↔Eng ALL 27.41 7.37 –orthographic 27.30 7.25 –Dice 27.68 7.73 –dictionary 27.72 7.21 –sentence position 28.30 8.01 –POS – 8.19 –Model 1 28.62 8.45 –alignment word pair 32.41 7.20 –Markov 32.75 12.44 –Dice & –Model 1 35.43 14.10 Table 3. The resulting AERs after removing individual groups of features from the full model. tures do not add much additional modelling power. We expect that with further careful feature engineering, and a larger trial set, these orthographic features could be much improved. The Romanian-English language pair appears to offer a more difficult modelling problem than the French-English pair. With both the translation score features (Dice and Model 1) removed – the sentence aligned data are not used – the AER of the Romanian is more than twice that of the French, despite employing more word aligned data. 
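The Dice translation scores discussed above are simple association statistics estimated from the sentence-aligned corpus. The sketch below shows one plausible way to compute them from sentence-level co-occurrence counts; the exact counting conventions, and whether the four-letter truncation is applied before counting, are assumptions of this sketch rather than details taken from the paper.

```python
# Illustrative computation of Dice association scores, 2*C(e,f)/(C(e)+C(f)),
# from a sentence-aligned corpus. Counting is done at the sentence level.
from collections import Counter
from itertools import product


def dice_scores(bitext):
    """bitext: iterable of (source_tokens, target_tokens) sentence pairs."""
    src_count, tgt_count, pair_count = Counter(), Counter(), Counter()
    for src, tgt in bitext:
        src_types, tgt_types = set(src), set(tgt)
        src_count.update(src_types)
        tgt_count.update(tgt_types)
        pair_count.update(product(src_types, tgt_types))
    return {
        (s, t): 2.0 * c / (src_count[s] + tgt_count[t])
        for (s, t), c in pair_count.items()
    }


if __name__ == "__main__":
    corpus = [
        ("the red light".split(), "le feu rouge".split()),
        ("the light was red".split(), "le feu était rouge".split()),
    ]
    scores = dice_scores(corpus)
    print(round(scores[("red", "rouge")], 2))  # 1.0
    print(round(scores[("the", "rouge")], 2))  # 1.0 (small-corpus artefact)
```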
This could be caused by the lack of possible (P) alignment markup in the Romanian data, which provide a boost in AER on the French data set, rewarding what would otherwise be considered errors. Interestingly, without any features derived from the sentence aligned corpus, our model achieves performance equivalent to Model 3 trained on the full corpus (Och and Ney, 2003). This is a particularly strong result, indicating that this method is ideal for data-impoverished alignment tasks. 70 4.1 Training with possible alignments Up to this point our Hansards model has been trained using only the sure (S) alignments. As the data set contains many possible (P) alignments, we would like to use these to improve our model. Most of the possible alignments flag blocks of ambiguous or idiomatic (or just difficult) phrase level alignments. These many-to-many alignments cannot be modelled with our many-to-one setup. However, a number of possibles flag oneto-one or many-to-one aligments: for this experiment we used these possibles in training to investigate their effect on recall. Using these additional alignments our refined precision decreased from 95.7 to 93.5, while recall increased from 89.2 to 92.4. This resulted in an overall decrease in AER to 6.99. We found no benefit from using many-tomany possible alignments as they added a significant amount of noise to the data. 4.2 Model 4 as a feature Previous work (Taskar et al., 2005) has demonstrated that by including the output of Model 4 as a feature, it is possible to achieve a significant decrease in AER. We trained Model 4 in both directions on the two language pairs. We added two indicator features (one for each direction) to our CRF which were active if a given word pair were aligned in the Model 4 output. Table 4 displays the results on both language pairs when these additional features are used with the refined model. This produces a large increase in performance, and when including the possibles, produces AERs of 5.29 and 25.8, both well below that of Model 4 alone (shown in Tables 1 and 2). 4.3 Cross-validation Using 10-fold cross-validation we are able to generate results on the whole of the Hansards test data which are comparable to previously published results. As the sentences in the test set were randomly chosen from the training corpus we can expect cross-validation to give an unbiased estimate of generalisation performance. These results are displayed in Table 5, using the possible (P) alignments for training. As the training set for each fold is roughly four times as big previous training set, we see a small improvement in AER. The final results of 6.47 and 5.19 with and without Model 4 features both exceed the performance of Model 4 alone. However the unsupermodel precision recall f-score AER Rom ↔Eng 79.0 70.0 74.2 25.8 Fre ↔Eng 97.9 90.8 94.2 5.49 Fre ↔Eng (P) 95.5 93.7 94.6 5.29 Table 4. Results using features from Model 4 bidirectional alignments, training with and without the possible (P) alignments. model precision recall f-score AER Fre ↔Eng 94.6 92.2 93.4 6.47 Fre ↔Eng (Model 4) 96.1 93.3 94.7 5.19 Table 5. 10-fold cross-validation results, with and without Model 4 features. vised Model 4 did not have access to the wordalignments in our training set. Callison-Burch et al. (2004) demonstrated that the GIZA++ models could be trained in a semi-supervised manner, leading to a slight decrease in error. To our knowledge, our AER of 5.19 is the best reported result, generative or discriminative, on this data set. 
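The two Model 4 indicator features described above are straightforward to express once the bidirectional GIZA++ alignments are available as sets of index pairs. The sketch below is illustrative only; the feature names and data structures are assumptions, not the paper's actual feature templates.

```python
# Illustrative sketch of the two directional Model 4 indicator features:
# each fires when the candidate (source, target) word pair is aligned in the
# corresponding direction of the Model 4 output.

def model4_features(s_idx, t_idx, m4_src2tgt, m4_tgt2src):
    """m4_*: sets of (source_index, target_index) links from GIZA++ Model 4."""
    feats = {}
    if (s_idx, t_idx) in m4_src2tgt:
        feats["model4_src2tgt"] = 1.0
    if (s_idx, t_idx) in m4_tgt2src:
        feats["model4_tgt2src"] = 1.0
    return feats


if __name__ == "__main__":
    forward = {(0, 0), (1, 2)}
    reverse = {(0, 0)}
    print(model4_features(0, 0, forward, reverse))  # both features fire
    print(model4_features(1, 2, forward, reverse))  # only the forward feature fires
```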
5 Related work Recently, a number of discriminative word alignment models have been proposed; however, these early models are typically very complicated, with many proposing intractable problems which require heuristics for approximate inference (Liu et al., 2005; Moore, 2005). An exception is Taskar et al. (2005), who presented a word matching model for discriminative alignment which they were able to solve optimally. However, their model is limited to only providing one-to-one alignments. Also, no features were defined on label sequences, which reduced the model’s ability to capture the strong monotonic relationships present between European language pairs. On the French-English Hansards task, using the same training/testing setup as our work, they achieve an AER of 5.4 with Model 4 features, and 10.7 without (compared to 5.29 and 6.99 for our CRF). One of the strengths of the CRF MAP estimation is the powerful smoothing offered by the prior, which allows us to avoid heuristics such as early stopping and hand-weighted loss functions that were needed for the maximum-margin model. Liu et al. (2005) used a conditional log-linear model with similar features to those we have employed. They formulated a global model, without making a Markovian assumption, leading to the need for sub-optimal heuristic search strategies. Ittycheriah and Roukos (2005) trained a discriminative model on a corpus of ten thousand word-aligned Arabic-English sentence pairs that outperformed a GIZA++ baseline. As with other approaches, they proposed a model which didn’t allow a tractably optimal solution and thus had to resort to a heuristic beam search. They employed a log-linear model to learn the observation probabilities, while using a fixed transition distribution. Our CRF model allows both the observation and transition components of the model to be jointly optimised from the corpus. 6 Further work The results presented in this paper were evaluated in terms of AER. While a low AER can be expected to improve end-to-end translation quality, this may not necessarily be the case. Therefore, we plan to assess how the recall and precision characteristics of our model affect translation quality. The tradeoff between recall and precision may affect the quality and number of phrases extracted for a phrase translation table. 7 Conclusion We have presented a novel approach for inducing word alignments from sentence aligned data. We showed how conditional random fields could be used for word alignment. These models allow for the use of arbitrary and overlapping features over the source and target sentences, making the most of small supervised training sets. Moreover, we showed how the CRF’s inference and estimation methods allowed for efficient processing without sacrificing optimality, improving on previous heuristic-based approaches. On both French-English and Romanian-English we showed that many highly predictive features can be easily incorporated into the CRF, and demonstrated that with only a few hundred word-aligned training sentences, our model outperforms the generative Model 4 baseline. When no features are extracted from the sentence aligned corpus our model still achieves a low error rate. Furthermore, when we employ features derived from Model 4 alignments our CRF model achieves the highest reported results on both data sets. Acknowledgements Special thanks to Miles Osborne, Steven Bird, Timothy Baldwin and the anonymous reviewers for their feedback and insightful comments. References P. F. Brown, S. A. 
Della Pietra, V. J. Della Pietra, and R. L. Mercer. 1993. The mathematics of statistical machine translation: Parameter estimation. Computational Linguistics, 19(2):263–311. C. Callison-Burch, D. Talbot, and M. Osborne. 2004. Statistical machine translation with word- and sentence-aligned parallel corpora. In Proceedings of ACL, pages 175–182, Barcelona, Spain, July. S. Chen and R. Rosenfeld. 1999. A survey of smoothing techniques for maximum entropy models. IEEE Transactions on Speech and Audio Processing, 8(1):37–50. L. R. Dice. 1945. Measures of the amount of ecologic association between species. Journal of Ecology, 26:297–302. A. Ittycheriah and S. Roukos. 2005. A maximum entropy word aligner for Arabic-English machine translation. In Proceedings of HLT-EMNLP, pages 89–96, Vancouver, British Columbia, Canada, October. P. Koehn, F. J. Och, and D. Marcu. 2003. Statistical phrasebased translation. In Proceedings of HLT-NAACL, pages 81–88, Edmonton, Alberta. J. Lafferty, A. McCallum, and F. Pereira. 2001. Conditional random fields: Probabilistic models for segmenting and labelling sequence data. In Proceedings of ICML, pages 282–289. Y. Liu, Q. Liu, and S. Lin. 2005. Log-linear models for word alignment. In Proceedings of ACL, pages 459–466, Ann Arbor. R. Malouf. 2002. A comparison of algorithms for maximum entropy parameter estimation. In Proceedings of CoNLL, pages 49–55. J. Martin, R. Mihalcea, and T. Pedersen. 2005. Word alignment for languages with scarce resources. In Proceedings of the ACL Workshop on Building and Using Parallel Texts, pages 65–74, Ann Arbor, Michigan, June. R. Mihalcea and T. Pedersen. 2003. An evaluation exercise for word alignment. In Proceedings of HLT-NAACL 2003 Workshop, Building and Using Parrallel Texts: Data Driven Machine Translation and Beyond, pages 1–6, Edmonton, Alberta. R. C. Moore. 2005. A discriminative framework for bilingual word alignment. In Proceedings of HLT-EMNLP, pages 81–88, Vancouver, Canada. F. Och and H. Ney. 2003. A systematic comparison of various statistical alignment models. Computational Linguistics, 29(1):19–52. F. Och and H. Ney. 2004. The alignment template approach to statistical machine translation. Computational Linguistics, 30(4):417–449. F. Sha and F. Pereira. 2003. Shallow parsing with conditional random fields. In Proceedings of HLT-NAACL, pages 213–220. B. Taskar, S. Lacoste-Julien, and D. Klein. 2005. A discriminative matching approach to word alignment. In Proceedings of HLT-EMNLP, pages 73–80, Vancouver, British Columbia, Canada, October. K. Toutanova, H. Tolga Ilhan, and C Manning. 2002. Extentions to HMM-based statistical word alignment models. In Proceedings of EMNLP, pages 87–94, Philadelphia, July. S. Vogel, H. Ney, and C. Tillmann. 1996. HMM-based word alignment in statistical translation. In Proceedings of 16th Int. Conf. on Computational Linguistics, pages 836–841. 72
Proceedings of the 21st International Conference on Computational Linguistics and 44th Annual Meeting of the ACL, pages 713–720, Sydney, July 2006. c⃝2006 Association for Computational Linguistics A Clustered Global Phrase Reordering Model for Statistical Machine Translation Masaaki Nagata NTT Communication Science Laboratories 2-4 Hikaridai, Seika-cho, Souraku-gun Kyoto, 619-0237 Japan [email protected], Kuniko Saito NTT Cyber Space Laboratories 1-1 Hikarinooka, Yokoshuka-shi Kanagawa, 239-0847 Japan [email protected] Kazuhide Yamamoto, Kazuteru Ohashi Nagaoka University of Technology 1603-1, Kamitomioka, Nagaoka City Niigata, 940-2188 Japan [email protected], [email protected] Abstract In this paper, we present a novel global reordering model that can be incorporated into standard phrase-based statistical machine translation. Unlike previous local reordering models that emphasize the reordering of adjacent phrase pairs (Tillmann and Zhang, 2005), our model explicitly models the reordering of long distances by directly estimating the parameters from the phrase alignments of bilingual training sentences. In principle, the global phrase reordering model is conditioned on the source and target phrases that are currently being translated, and the previously translated source and target phrases. To cope with sparseness, we use N-best phrase alignments and bilingual phrase clustering, and investigate a variety of combinations of conditioning factors. Through experiments, we show, that the global reordering model significantly improves the translation accuracy of a standard Japanese-English translation task. 1 Introduction Global reordering is essential to the translation of languages with different word orders. Ideally, a model should allow the reordering of any distance, because if we are to translate from Japanese to English, the verb in the Japanese sentence must be moved from the end of the sentence to the beginning just after the subject in the English sentence.  Graduated in March 2006 Standard phrase-based translation systems use a word distance-based reordering model in which non-monotonic phrase alignment is penalized based on the word distance between successively translated source phrases without considering the orientation of the phrase alignment or the identities of the source and target phrases (Koehn et al., 2003; Och and Ney, 2004). (Tillmann and Zhang, 2005) introduced the notion of a block (a pair of source and target phrases that are translations of each other), and proposed the block orientation bigram in which the local reordering of adjacent blocks are expressed as a three-valued orientation, namely Right (monotone), Left (swapped), or Neutral. A block with neutral orientation is supposed to be less strongly linked to its predecessor block: thus in their model, the global reordering is not explicitly modeled. In this paper, we present a global reordering model that explicitly models long distance reordering1. It predicts four type of reordering patterns, namely MA (monotone adjacent), MG (monotone gap), RA (reverse adjacent), and RG (reverse gap). There are based on the identities of the source and target phrases currently being translated, and the previously translated source and target phrases. The parameters of the reordering model are estimated from the phrase alignments of training bilingual sentences. To cope with sparseness, we use N-best phrase alignments and bilingual phrase clustering. 
In the following sections, we first describe the global phrase reordering model and its param1It might be misleading to call our reordering model “global” since it is at most considers two phrases. A truly global reordering model would take the entire sentence structure into account. 713 eter estimation method including N-best phrase alignments and bilingual phrase clustering. Next, through an experiment, we show that the global phrase reordering model significantly improves the translation accuracy of the IWSLT-2005 Japanese-English translation task (Eck and Hori, 2005). 2 Baseline Translation Model In statistical machine translation, the translation of a source (foreign) sentence is formulated as the search for a target (English) sentence   that maximizes the conditional probability    , which can be rewritten using the Bayes rule as,                   where     is a translation model and    is a target language model. In phrase-based statistical machine translation, the source sentence is segmented into a sequence of  phrases   , and each source phrase  ! is translated into a target phrase   ! . Target phrases may be reordered. The translation model used in (Koehn et al., 2003) is the product of translation probability "#  !    !  and distortion probability $%'& !)(+*,!.  , #        / !10 "#  !    !2 $3'& ! (4* !5  (1) where & ! denotes the start position of the source phrase translated into the 6 -th target phrase, and * !5 denotes the end position of the source phrase translated into the 56 (87  -th target phrase. The translation probability is calculated from the relative frequency as, "#      9;:<>=@?   A    BDC E 9;:F<%=@?   A    (2) where 9;:<>=@?   A    is the frequency of alignments between the source phrase  and the target phrase   . (Koehn et al., 2003) used the following distortion model, which simply penalizes nonmonotonic phrase alignments based on the word distance of successively translated source phrases with an appropriate value for the parameter G , $3'& ! (4* !5  GH IJ ->K JMLN H (3)   OPRQTS UVXWFY[Z     language is a means communication of MG RA RA b1 b2 b3 b4   OPRQTS UVXWFY[Z     language is a means communication of MG RA RA b1 b2 b3 b4 Figure 1: Phrase alignment and reordering bi-1 bi fi-1 fi ei-1 ei bi-1 bi fi-1 fi ei-1 ei bi-1 bi fi-1 fi ei-1 ei bi-1 bi fi-1 fi ei-1 ei source target target source target target source source d=MA d=MG d=RA d=RG Figure 2: Four types of reordering patterns 3 The Global Phrase Reordering Model Figure 1 shows an example of Japanese-English phrase alignment that consists of four phrase pairs. Note that the Japanese verb phrase “ \ ]_^ ” at the the end of the sentence is aligned to the English verb “is” at the beginning of the sentence just after the subject. Such reordering is typical in JapaneseEnglish translations. Motivated by the three-valued orientation for local reordering in (Tillmann and Zhang, 2005), we define the following four types of reordering patterns, as shown in Figure 2, ` monotone adjacent (MA): The two source phrases are adjacent, and are in the same order as the two target phrases. ` monotone gap (MG): The two source phrases are not adjacent, but are in the same order as the two target phrases. ` reverse adjacent (RA): The two source phrases are adjacent, but are in the reverse order of the two target phrases. 
714 J-to-E C-to-E Monotone Adjacent 0.441 0.828 Monotone Gap 0.281 0.106 Reverse Adjacent 0.206 0.033 Reverse Gap 0.072 0.033 Table 1: Percentage of reordering patterns ` reverse gap (RG): The two source phrases are not adjacent, and are in the reverse order as the two target phrases. For the global reordering model, we only consider the cases in which the two target phrases are adjacent because, in decoding, the target sentence is generated from left to right and phrase by phrase. If we are to generate the 6 -th target phrase   ! from the source phrase  ! , we call   ! and  ! the current block * ! , and   !5 and  !5 the previous block *T!5 . Table 1 shows the percentage of each reordering pattern that appeared in the N-best phrase alignments of the training bilingual sentences for the IWSLT 2005 Japanese-English and ChineseEnglish translation tasks (Eck and Hori, 2005). Since non-local reorderings such as monotone gap and reverse gap are more frequent in Japanese to English translations, they are worth modeling explicitly in this reordering model. Since the probability of reordering pattern $ (intended to stand for ‘distortion’) is conditioned on the current and previous blocks, the global phrase reordering model is formalized as follows: '$    !. A   ! A  !. A  !  (4) We can replace the conventional word distancebased distortion probability $%'& !( *!5  in Equation (1) with the global phrase reordering model in Equation (4) with minimal modification of the underlying phrase-based decoding algorithm. 4 Parameter Estimation Method In principle, the parameters of the global phrase reordering model in Equation (4) can be estimated from the relative frequencies of respective events in the Viterbi phrase alignment of the training bilingual sentences. This straightforward estimation method, however, often suffers from sparse data problem. To cope with this sparseness, we used N-best phrase alignment and bilingual phrase     means communication of 1 2 3 4 5 6 7 8 Figure 3: Expansion of a phrase pair clustering. We also investigated various approximations of Equation (4) by reducing the conditional factors. 4.1 N-best Phrase Alignment In order to obtain the Viterbi phrase alignment of a bilingual sentence pair, we search for the phrase segmentation and phrase alignment that maximizes the product of the phrase translation probabilities   !    ![ ,     A        C E  N  C   N  / !M0   !    !2 (5) Phrase translation probabilities are approximated using word translation probabilities     !  and   !    as follows,       /   !     !    !    (6) where  and  ! are words in the target and source phrases. The phrase alignment based on Equation (5) can be thought of as an extension of word alignment based on the IBM Model 1 to phrase alignment. Note that bilingual phrase segmentation (phrase extraction) is also done using the same criteria. The approximation in Equation (6) is motivated by (Vogel et al., 2003). Here, we added the second term   !    to cope with the asymmetry between     !2 and   !    . The word translation probabilities are estimated using the GIZA++ (Och and Ney, 2003). The above search is implemented in the following way: 1. All source word and target word pairs are considered to be initial phrase pairs. 715 2. If the phrase translation probability of the phrase pair is less than the threshold, it is deleted. 3. Each phrase pair is expanded toward the eight neighboring directions as shown in Figure 3. 4. 
If the phrase translation probability of the expanded phrase pair is less than the threshold, it is deleted. 5. The process of expansion and deletion is repeated until no further expansion is possible. 6. The consistent N-best phrase alignment are searched from all combinations of the above phrase pairs. The search for consistent Viterbi phrase alignments can be implemented as a phrase-based decoder using a beam search whose outputs are constrained only to the target sentence. The consistent N-best phrase alignment can be obtained by using A* search as described in (Ueffing et al., 2002). We did not use any reordering constraints, such as IBM constraint and ITG constraint in the search for the N-best phrase alignment (Zens et al., 2004). The thresholds used in the search are the following: the minimum phrase translation probability is 0.0001. The maximum number of translation candidates for each phrase is 20. The beam width is 1e-10, the stack size (for each target candidate word length) is 1000. We found that, compared with the decoding of sentence translation, we have to search significantly larger space for the N-best phrase alignment. Figure 3 shows an example of phrase pair expansion toward eight neighbors. If the current phrase pair is ( , of), the expanded phrase pairs are (    , means of), ( , means of), (  , means of), (    , of), (   , of), (    , of communication), ( , of communication), and (  , of communication). Figure 4 shows an example of the best three phrase alignments for a Japanese-English bilingual sentence. For the estimation of the global phrase reordering model, preliminary tests have shown that the appropriate N-best number is 20. In counting the events for the relative frequency estimation, we treat all N-best phrase alignments equally. For comparison, we also implemented a different N-best phrase alignment method, where ____ the_light_was_red _ __ the_light was_red _ _ the_light was red (1) (2) (3) Figure 4: N-best phrase alignments phrase pairs are extracted using the standard phrase extraction method described in (Koehn et al., 2003). We call this conventional phrase extraction method “grow-diag-final”, and the proposed phrase extraction method “ppicker” (this is intended to stand for phrase picker). 4.2 Bilingual Phrase Clustering The second approach to cope with the sparseness in Equation (4) is to group the phrases into equivalence classes. We used a bilingual word clustering tool, mkcls (Och et al., 1999) for this purpose. It forms partitions of the vocabulary of the two languages to maximize the joint probability of the training bilingual corpus. In order to perform bilingual phrase clustering, all words in a phrase are concatenated by an underscore ’ ’ to form a pseudo word. We then use the modified bilingual sentences as the input to mkcls. We treat all N-best phrase alignments equally. Thus, the phrase alignments in Figure 4 are converted to the following three bilingual sentence pairs.  _  _ _ \"! _ # the_light_was_red  _  _ \"! _ # the_light was_red  _  \$! _ # the_light was red Preliminary tests have shown that the appropriate number of classes for the estimation of the global phrase reordering model is 20. As a comparison, we also tried two phrase classification methods based on the part of speech of the head word (Ohashi et al., 2005). We defined (arguably) the first word of each English phrase and the last word of each Japanese phrase as the 716 shorthand reordering model baseline G H IJ ->K JMLN H " '$  e[0] '$    !2 f[0] '$   ![ e[0]f[0] '$    ! 
A  !  e[-1]f[0] '$    !5 A  !  e[0]f[-1,0] '$    ! A  !5 A  !  e[-1]f[-1,0] '$    !5 A  !5 A  ![ e[-1,0]f[0] '$    !5 A   ! A  !  e[-1,0]f[-1,0] '$    !5 A   ! A  !5 A  ![ Table 2: All reordering models tried in the experiments head word. We then used the part of speech of the head word as the phrase class. We call this method “1pos”. Since we are not sure whether it is appropriate to introduce asymmetry in head word selection, we also tried a “2pos” method, where the parts of speech of both the first and the last words are used for phrase classification. 4.3 Conditioning Factor of Reordering The third approach to cope with sparseness in Equation (4) is to approximate the equation by reducing the conditioning factors. Other than the baseline word distance-based reordering model and the Equation (4) itself, we tried eight different approximations of Equation (4) as shown in Table 2, where, the symbol in the left column is the shorthand for the reordering model in the right column. The approximations are designed based on two intuitions. The current block (   ! and  ! ) would probably be more important than the previous block (   !5 and  !5 ). The previous target phrase (   !5 ) might be more important than the current target phrase (   ! ) because the distortion model of IBM 4 is conditioned on   !5 ,  !5 and  ! . The appropriate form of the global phrase reordering model is decided through experimentation. 5 Experiments 5.1 Corpus and Tools We used the IWSLT-2005 Japanese-English translation task (Eck and Hori, 2005) for evaluating the proposed global phrase reordering model. We report results using the well-known automatic evaluation metrics Bleu (Papineni et al., 2002). IWSLT (International Workshop on Spoken Sentences Words Vocabulary Japanese 20,000 198,453 9,277 English 20,000 183,452 6,956 Table 3: IWSLT 2005 Japanese-English training data Language Translation) 2005 is an evaluation campaign for spoken language translation.Its task domain encompasses basic travel conversations. 20,000 bilingual sentences are provided for training. Table 3 shows the number of words and the size of vocabulary of the training data. The average sentence length of Japanese is 9.9 words, while that of English is 9.2 words. Two development sets, each containing 500 source sentences, are also provided and each development sentence comes with 16 reference translations. We used the second development set (devset2) for the experiments described in this paper. This 20,000 sentence corpus allows for fast experimentation and enables us to study different aspects of the proposed global phrase reordering model. Japanese word segmentation was done using ChaSen2 and English tokenization was done using a tool provided by LDC3. For the phrase classification based on the parts of speech of the head word, we used the first two layers of the Chasen’s part of speech tag for Japanese. For English part of speech tagging, we used MXPOST4. Word translation probabilities are obtained by using GIZA++ (Och and Ney, 2003). For training, all English words are made in lower case. We used a back-off word trigram model as the language model. It is trained from the lowercased English side of the training corpus using a statistical language modeling toolkit, Palmkit 5. We implemented our own decoder based on the algorithm described in (Ueffing et al., 2002). For decoding, we used phrase translation probability, lexical translation probability, word penalty, and distortion (phrase reordering) probability. 
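As a concrete illustration of how the clustered reordering model can be estimated by relative frequency, the sketch below counts (pattern, class pair) events collected from the N-best phrase alignments and normalises them into p(d | class(ē), class(f)), i.e. the clustered e[0]f[0] variant of Table 2. The event representation and function names are assumptions of this sketch, not the authors' code.

```python
# Illustrative relative-frequency estimation of a clustered reordering model
# p(d | class(target_phrase), class(source_phrase)) over the four patterns.
from collections import Counter, defaultdict

PATTERNS = ("MA", "MG", "RA", "RG")


def estimate_reordering_model(events):
    """events: iterable of (pattern, tgt_class, src_class) tuples collected
    from the N-best phrase alignments of the training data."""
    counts = defaultdict(Counter)
    for pattern, e_cls, f_cls in events:
        counts[(e_cls, f_cls)][pattern] += 1
    model = {}
    for ctx, pattern_counts in counts.items():
        total = sum(pattern_counts.values())
        model[ctx] = {p: pattern_counts[p] / total for p in PATTERNS}
    return model


if __name__ == "__main__":
    events = [("MA", 3, 7), ("MA", 3, 7), ("RA", 3, 7), ("MG", 1, 2)]
    model = estimate_reordering_model(events)
    print(round(model[(3, 7)]["MA"], 3))  # 0.667
    print(model[(1, 2)]["MG"])            # 1.0
```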
Minimum error rate training was not used for weight optimization. The thresholds used in the decoding are the following: the minimum phrase translation probability is 0.01. The maximum number of translation 2http://chasen.aist-nara.ac.jp/ 3http://www.cis.upenn.edu/˜treebank/tokenizer.sed 4http://www.cis.upenn.edu/˜adwait/statnlp.html 5http://palmkit.sourceforge.net/ 717 ppicker grow-diag-final class lex class lex baseline 0.400 0.400 0.343 0.343 " 0.407 0.407 0.350 0.350 f[0] 0.417 0.410 0.362 0.356 e[0] 0.422 0.416 0.356 0.360 e[0]f[0] 0.422 0.404 0.355 0.353 e[0]f[-1,0] 0.407 0.381 0.346 0.327 e[-1,0]f[0] 0.410 0.392 0.348 0.341 e[-1,0]f[-1,0] 0.394 0.387 0.339 0.340 Table 4: BLEU score of reordering models with different phrase extraction methods candidates for each phrase is 10. The beam width is 1e-5, the stack size (for each target candidate word length) is 100. 5.2 Clustered and Lexicalized Model Figure 5 shows the BLEU score of clustered and lexical reordering model with different conditioning factors. Here, “class” shows the accuracy when the identity of each phrase is represented by its class, which is obtained by the bilingual phrase clustering, while “lex” shows the accuracy when the identity of each phrases is represented by its lexical form. The clustered reordering model “class” is generally better than the lexicalized reordering model “lex”. The accuracy of “lex” drops rapidly as the number of conditioning factors increases. The reordering models using the part of speech of the head word for phrase classification such as “1pos” and “2pos” are somewhere in between. The best score is achieved by the clustered model when the phrase reordering pattern is conditioned on either the current target phrase   ! or the current block, namely phrase pair   ! and  ! . They are significantly better than the baseline of the word distance-based reordering model. 5.3 Interaction between Phrase Extraction and Phrase Alignment Table 4 shows the BLEU score of reordering models with different phrase extraction methods. Here, “ppicker” shows the accuracy when phrases are extracted by using the N-best phrase alignment method described in Section 4.1, while “growdiag-final” shows the accuracy when phrases are extracted using the standard phrase extraction algorithm described in (Koehn et al., 2003). It is obvious that, for building the global phrase reordering model, our phrase extraction method is significantly better than the conventional phrase extraction method. We assume this is because the proposed N-best phrase alignment method optimizes the combination of phrase extraction (segmentation) and phrase alignment in a sentence. 5.4 Global and Local Reordering Model In order to show the advantages of explicitly modeling global phrase reordering, we implemented a different reordering model where the reordering pattern is classified into three values: monotone adjacent, reverse adjacent and neutral. By collapsing monotone gap and reverse gap into neutral, it can be thought of as a local reordering model similar to the block orientation bigram (Tillmann and Zhang, 2005). Figure 6 shows the BLEU score of the local and global reordering models. Here, “class3” and “lex3”represent the three-valued local reordering model, while “class4” and “lex4”represent the four-valued global reordering model. “Class” and “lex” represent clustered and lexical models, respectively. We used “grow-diag-final” for phrase extraction in this experiment. 
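The four-valued and three-valued models compared here differ only in whether the two gap patterns are kept distinct. The sketch below classifies a block pair into MA/MG/RA/RG from the source-side spans of the previous and current blocks (the target phrases are assumed adjacent, as in the model), and shows the collapse to the three-valued local variant; the inclusive-span convention is an assumption made for this illustration.

```python
# Illustrative classifier for the four reordering patterns defined earlier,
# plus the collapse to the three-valued local model used for comparison.

def reordering_pattern(prev_src_span, curr_src_span):
    """Spans are (start, end) inclusive word indices on the source side."""
    prev_start, prev_end = prev_src_span
    curr_start, curr_end = curr_src_span
    if curr_start > prev_end:                      # same order as the target side
        return "MA" if curr_start == prev_end + 1 else "MG"
    if curr_end < prev_start:                      # reverse order on the source side
        return "RA" if curr_end == prev_start - 1 else "RG"
    raise ValueError("overlapping source spans do not form a valid block pair")


def collapse_to_local(pattern):
    """Three-valued local variant: both gap patterns become 'neutral'."""
    return pattern if pattern in ("MA", "RA") else "NEUTRAL"


if __name__ == "__main__":
    print(reordering_pattern((0, 1), (2, 3)))  # MA
    print(reordering_pattern((0, 1), (4, 5)))  # MG
    print(reordering_pattern((4, 5), (2, 3)))  # RA
    print(reordering_pattern((4, 5), (0, 1)))  # RG
    print(collapse_to_local("MG"))             # NEUTRAL
```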
It is obvious that the four-valued global reordering model consistently outperformed the threevalued local reordering model under various conditioning factors. 6 Discussion As shown in Figure 5, the reordering model of Equation (4) (indicated as e[-1,0]f[-1,0] in shorthand) suffers from a sparse data problem even if phrase clustering is used. The empirically justifiable global reordering model seems to be the following, conditioned on the classes of source and target phrases: '$  9&   !2,A 9&  !   (7) which is similar to the block orientation bigram (Tillmann and Zhang, 2005). We should note, however, that the block orientation bigram is a joint probability model for the sequence of blocks (source and target phrases) as well as their orientations (reordering pattern) whose purpose is very different from our global phrase reordering model. The advantage of the reordering model is that it can better model global phrase reordering using a four-valued reordering pattern, and it can be easily 718                                     ! #"           #"%$    ! &"   ! " $     " $         "%$    ' " $   () *'+!+ ,!+ ',!+ ) .0/ Figure 5: BLEU score for the clustered and lexical reordering model with different conditioning factors incorporated into a standard phrase-based translation decoder. The problem of the global phrase reordering model is the cost of parameter estimation. In particular, the N-best phrase alignment described in Section 4.1 is computationally expensive. We must devise a more efficient phrase alignment algorithm that can globally optimize both phrase segmentation (phrase extraction) and phrase alignment. 7 Conclusion In this paper, we presented a novel global phrase reordering model, that is estimated from the Nbest phrase alignment of training bilingual sentences. Through experiments, we were able to show that our reordering model offers improved translation accuracy over the baseline method. References Matthias Eck and Chiori Hori. 2005. Overview of the IWSLT 2005 evaluation campaign. In Proceedings of International Workshop on Spoken Language Translation (IWSLT 2005), pages 11–32. Philipp Koehn, Franz Josef Och, and Daniel Marcu. 2003. Statistical phrase-based translation. In Proceedings of the Joint Conference on Human Language Technologies and the Annual Meeting of the North American Chapter of the Association of Computational Linguistics (HLT-NAACL-03), pages 127–133. Franz Josef Och and Hermann Ney. 2003. A systematic comparison of various statistical alignment models. Computational Linguistics, 29(1):19–51. Franz Josef Och and Herman Ney. 2004. The alignment template approach to statistical machine translation. Computational Linguistics, 30(4):417–449. Franz Josef Och, Christoph Tillman, and Hermann Ney. 1999. Improved alignment models for statistical machine translation. In Proceedings of the 1999 Joint SIGDAT Conference on Empirical Methods in Natural Language Processing and Very Large Corpora (EMNLP/WVLC-99), pages 20–28. Kazuteru Ohashi, Kazuhide Yamamoto, Kuniko Saito, and Masaaki Nagata. 2005. NUT-NTT statistical machine translation system for IWSLT 2005. In Proceedings of International Workshop on Spoken Language Translation, pages 128–133. Kishore Papineni, Salim Roukos, Todd Ward, and WeiJing Zhu. 2002. Bleu: a method for automatic evaluation of machine translation. In Proceedings of the 40th Annual Meeting of the Association for Computational Lnguistics (ACL-02), pages 311–318. Christoph Tillmann and Tong Zhang. 2005. 
A localized prediction model for statistical machine translation. In Proceedings of the 43rd Annual Meeting of the Association for Computational Linguistics (ACL-05), pages 557–564. Nicola Ueffing, Franz Josef Och, and Hermann Ney. 2002. Generation of word graphs in statistical machine translation. In Proceedings of the Conference on Empirical Methods in Natural Language Processing (EMNLP-02), pages 156–163. Stephan Vogel, Ying Zhang, Fei Huang, Alicia Tribble, Ashish Venugopal, Bing Zhao, and Alex Waibel. 2003. The CMU statistical machine translation system. In Proceedings of MT Summit IX. Richard Zens, Hermann Ney, Taro Watanabe, and Eiichiro Sumita. 2004. Reordering constraints for phrase-based statistical machine translation. In Proceedings of 20th International Conference on Computational Linguistics (COLING-04), pages 205–211. Figure 6: BLEU score of local and global reordering model.
Proceedings of the 21st International Conference on Computational Linguistics and 44th Annual Meeting of the ACL, pages 721–728, Sydney, July 2006. c⃝2006 Association for Computational Linguistics A Discriminative Global Training Algorithm for Statistical MT Christoph Tillmann IBM T.J. Watson Research Center Yorktown Heights, N.Y. 10598 [email protected] Tong Zhang Yahoo! Research New York City, N.Y. 10011 [email protected] Abstract This paper presents a novel training algorithm for a linearly-scored block sequence translation model. The key component is a new procedure to directly optimize the global scoring function used by a SMT decoder. No translation, language, or distortion model probabilities are used as in earlier work on SMT. Therefore our method, which employs less domain specific knowledge, is both simpler and more extensible than previous approaches. Moreover, the training procedure treats the decoder as a black-box, and thus can be used to optimize any decoding scheme. The training algorithm is evaluated on a standard Arabic-English translation task. 1 Introduction This paper presents a view of phrase-based SMT as a sequential process that generates block orientation sequences. A block is a pair of phrases which are translations of each other. For example, Figure 1 shows an Arabic-English translation example that uses four blocks. During decoding, we view translation as a block segmentation process, where the input sentence is segmented from left to right and the target sentence is generated from bottom to top, one block at a time. A monotone block sequence is generated except for the possibility to handle some local phrase re-ordering. In this local re-ordering model (Tillmann and Zhang, 2005; Kumar and Byrne, 2005) a block with orientation  is generated relative to its predecessor block  . During decoding, we maximize the score    of a block orientation sequence    ! "  # $% !  &$ !' ( ! ) (+* $ ( " * ( ! , $ . ( ! (  $ ( ! . "  "/ 0   $%1 ! !   (  ( ! 2+3 ( ! ( !3 # ( 4 5 6 Figure 1: An Arabic-English block translation example, where the Arabic words are romanized. The following orientation sequence is generated:  87:9 ; 7=< ?> 7:9 @ 7=A .  B  ):      7 CED FHGJIK  C  C LCEM N (1) where C is a block, CEM is its predecessor block, and  CPORQ <  eft N A  ight N 9  eutral NS is a threevalued orientation component linked to the block C : a block is generated to the left or the right of its predecessor block CTM , where the orientation  CEM of the predecessor block is ignored. Here, U is the number of blocks in the translation. We are interested in learning the weight vector F from the training data. K  C  C LCEM  is a high-dimensional binary feature representation of the block orientation pair  C  C LCTM  . The block orientation se721 quence V is generated under the restriction that the concatenated source phrases of the blocks C yield the input sentence. In modeling a block sequence, we emphasize adjacent block neighbors that have right or left orientation, since in the current experiments only local block swapping is handled (neutral orientation is used for ’detached’ blocks as described in (Tillmann and Zhang, 2005)). This paper focuses on the discriminative training of the weight vector F used in Eq. 1. The decoding process is decomposed into local decision steps based on Eq. 1, but the model is trained in a global setting as shown below. The advantage of this approach is that it can easily handle tens of millions of features, e.g. 
up to WYX million features for the experiments in this paper. Moreover, under this view, SMT becomes quite similar to sequential natural language annotation problems such as part-of-speech tagging and shallow parsing, and the novel training algorithm presented in this paper is actually most similar to work on training algorithms presented for these task, e.g. the on-line training algorithm presented in (McDonald et al., 2005) and the perceptron training algorithm presented in (Collins, 2002). The current approach does not use specialized probability features as in (Och, 2003) in any stage during decoder parameter training. Such probability features include language model, translation or distortion probabilities, which are commonly used in current SMT approaches 1. We are able to achieve comparable performance to (Tillmann and Zhang, 2005). The novel algorithm differs computationally from earlier work in discriminative training algorithms for SMT (Och, 2003) as follows: Z No computationally expensive 9 -best lists are generated during training: for each input sentence a single block sequence is generated on each iteration over the training data. Z No additional development data set is necessary as the weight vector F is trained on bilingual training data only. The paper is structured as follows: Section 2 presents the baseline block sequence model and the feature representation. Section 3 presents the discriminative training algorithm that learns 1A translation and distortion model is used in generating the block set used in the experiments, but these translation probabilities are not used during decoding. a good global ranking function used during decoding. Section 4 presents results on a standard Arabic-English translation task. Finally, some discussion and future work is presented in Section 5. 2 Block Sequence Model This paper views phrase-based SMT as a block sequence generation process. Blocks are phrase pairs consisting of target and source phrases and local phrase re-ordering is handled by including so-called block orientation. Starting point for the block-based translation model is a block set, e.g. about [Y\]X million Arabic-English phrase pairs for the experiments in this paper. This block set is used to decode training sentence to obtain block orientation sequences that are used in the discriminative parameter training. Nothing but the block set and the parallel training data is used to carry out the training. We use the block set described in (Al-Onaizan et al., 2004), the use of a different block set may effect translation results. Rather than predicting local block neighbors as in (Tillmann and Zhang, 2005) , here the model parameters are trained in a global setting. Starting with a simple model, the training data is decoded multiple times: the weight vector F is trained to discriminate block sequences with a high translation score against block sequences with a high BLEU score 2. The high BLEU scoring block sequences are obtained as follows: the regular phrase-based decoder is modified in a way that it uses the BLEU score as optimization criterion (independent of any translation model). Here, searching for the highest BLEU scoring block sequence is restricted to local re-ordering as is the model-based decoding (as shown in Fig. 1). The BLEU score is computed with respect to the single reference translation provided by the parallel training data. A block sequence with an average BLEU score of about ^Y\]X?_ is obtained for each training sentence 3. 
The ’true’ maximum BLEU block sequence as well as the high scoring 2High scoring block sequences may contain translation errors that are quantified by a lower BLEU score. 3The training BLEU score is computed for each training sentence pair separately (treating each sentence pair as a single-sentence corpus with a single reference) and then averaged over all training sentences. Although block sequences are found with a high BLEU score on average there is no guarantee to find the maximum BLEU block sequence for a given sentence pair. The target word sequence corresponding to a block sequence does not have to match the reference translation, i.e. maximum BLEU scores are quite low for some training sentences. 722 block ` sequences are represented by high dimensional feature vectors using the binary features defined below and the translation process is handled as a multi-class classification problem in which each block sequence represents a possible class. The effect of this training procedure can be seen in Figure 2: each decoding step on the training data adds a high-scoring block sequence to the discriminative training and decoding performance on the training data is improved after each iteration (along with the test data decoding performance). A theoretical justification for the novel training procedure is given in Section 3. We now define the feature components for the block bigram feature vector a C  C LCEM  in Eq. 1. Although the training algorithm can handle realvalued features as used in (Och, 2003; Tillmann and Zhang, 2005) the current paper intentionally excludes them. The current feature functions are similar to those used in common phrase-based translation systems: for them it has been shown that good translation performance can be achieved 4. A systematic analysis of the novel training algorithm will allow us to include much more sophisticated features in future experiments, i.e. POSbased features, syntactic or hierarchical features (Chiang, 2005). The dimensionality of the feature vector a C  C LCEM  depends on the number of binary features. For illustration purposes, the binary features are chosen such that they yield b on the example block sequence in Fig. 1. There are phrase-based and word-based features: K 'cNcNc  C  C LCEM  7 7 b block C consists of target phrase ’violate’ and source phrase ’tnthk’ ^ otherwise K 'cNc  C  C LCEM  7 7 b ’Lebanese’ is a word in the target phrase of block C and ’AllbnAny’ is a word in the source phrase ^ otherwise The feature K 'cNcNc is a ’unigram’ phrase-based feature capturing the identity of a block. Additional phrase-based features include block orientation, target and source phrase bigram features. Word-based features are used as well, e.g. feature K 'cNc captures word-to-word translation de4On our test set, (Tillmann and Zhang, 2005) reports a BLEU score of d?e?f+g and (Ittycheriah and Roukos, 2005) reports a BLEU score of hYg?f i . pendencies similar to the use of Model b probabilities in (Koehn et al., 2003). Additionally, we use distortion features involving relative source word position and j -gram features for adjacent target words. These features correspond to the use of a language model, but the weights for theses features are trained on the parallel training data only. For the most complex model, the number of features is about WYX million (ignoring all features that occur only once). 3 Approximate Relevant Set Method Throughout the section, we let k 7    . Each block sequence k 7  B   corresponds to a candidate translation. 
In the training data where target translations are given, a BLEU score lnmopk  can be calculated for each k 7    against the target translations. In this set up, our goal is to find a weight vector F such that the higher ?pk  is, the higher the corresponding BLEU score l8m]pk  should be. If we can find such a weight vector, then block decoding by searching for the highest   pk  will lead to good translation with high BLEU score. Formally, we denote a source sentence by q , and let rsq  be the set of possible candidate oriented block sequences k 7    that the decoder can generate from q . For example, in a monotone decoder, the set rtq  contains block sequences Q B S that cover the source sentence q in the same order. For a decoder with local re-ordering, the candidate set rPq  also includes additional block sequences with re-ordered block configurations that the decoder can efficiently search. Therefore depending on the specific implementation of the decoder, the set rPq  can be different. In general, rsq  is a subset of all possible oriented block sequences Q  B  NS that are consistent with input sentence q . Given a scoring function    I  and an input sentence q , we can assume that the decoder implements the following decoding rule: u kq  7=v?wyx{z|v?} ~€ƒ‚o„…   pk  \ (2) Let q \L\L\ q† be a set of 9 training sentences. Each sentence q C is associated with a set rtq C  of possible translation block sequences that are searchable by the decoder. Each translation block sequence k O rPq C  induces a translation, which is then assigned a BLEU score lnmopk  (obtained by comparing against the target translations). The 723 goal ‡ of the training is to find a weight vector F such that for each training sentence q C , the corresponding decoder outputs u k O rtq C  which has the maximum BLEU score among all k O rPq C  based on Eq. 2. In other words, if u k maximizes the scoring function   pk  , then u k also maximizes the BLEU metric. Based on the description, a simple idea is to learn the BLEU score lnm]pk  for each candidate block sequence k . That is, we would like to estimate F such that   pk ‰ˆ lnm]pk  . This can be achieved through least squares regression. It is easy to see that if we can find a weight vector F that approximates lnmopk  , then the decoding-rule in Eq. 2 automatically maximizes the BLEU score. However, it is usually difficult to estimate lnmopk  reliably based only on a linear combination of the feature vector as in Eq. 1. We note that a good decoder does not necessarily employ a scoring function that approximates the BLEU score. Instead, we only need to make sure that the top-ranked block sequence obtained by the decoder scoring function has a high BLEU score. To formulate this idea, we attempt to find a decoding parameter such that for each sentence q in the training data, sequences in rPq  with the highest BLEU scores should get   pk  scores higher than those with low BLEU scores. Denote by r Šsq  a set of ‹ block sequences in rtq  with the highest BLEU scores. Our decoded result should lie in this set. We call them the “truth”. The set of the remaining sequences is rsq Œ r Š q  , which we shall refer to as the “alternatives”. 
We look for a weight vector F that minimize the following training criterion: u F 7:v?wxŽz ‘  b 9 † CED “’  F r Š q C N rtq C N ”–• F ; (3) ’  F r—Š r  7 b ‹ ~Y˜ zv?} ~š™' M ˜œ›  F k k   ›  F k k   7: N  pk N lnm'pk Nž   pk  N lnmopk  NN where  is a non-negative real-valued loss function (whose specific choice is not critical for the purposes of this paper),and • Ÿ ^ is a regularization parameter. In our experiments, results are obtained using the following convex loss  N Lšž   L  7  ¡Œ¢  Nb Œ N Œ   N ; £ (4) where ? L  are BLEU scores,    are translation scores, and N¤  £ 7 z|v?} N^ ¤  . We refer to this formulation as ’costMargin’ (cost-sensitive margin) method: for each training sentence ¥ the ’costMargin’ ’  F r Š q N rtq N between the ’true’ block sequence set r Š q  and the ’alternative’ block sequence set rPq  is maximized. Note that due to the truth and alternative set up, we always have §¦R . This loss function gives an upper bound of the error we will suffer if the order of  and   is wrongly predicted (that is, if we predict P¨©  instead of  ¦   ). It also has the property that if for the BLEU scores HˆR  holds, then the loss value is small (proportional to ªŒ«  ). A major contribution of this work is a procedure to solve Eq. 3 approximately. The main difficulty is that the search space rsq  covered by the decoder can be extremely large. It cannot be enumerated for practical purposes. Our idea is to replace this large space by a small subspace r ‚+¬L… q ®­ rsq  which we call relevant set. The possibility of this reduction is based on the following theoretical result. Lemma 1 Let ›  F k k   be a non-negative continuous piece-wise differentiable function of F , and let u F be a local solution of Eq. 3. Let ¯ C  F k  7°zv?} ~𙱁ƒ‚o„Y²'… M  ˜“‚o„Y²'… ›  F k k  , and define r ‚+¬L… q C  7 Q k  O rtq C ³µ´ k O r Š q C  s.t. ¯ C  u F k §¶ 7 ^¸· ›  u F k k   7 ¯ C  u F k NS \ Then u F is a local solution of z ‘  b 9 † CED ¹’  F r Š q C N r ‚+¬L… q C N ”«• F ; \ (5) If  is a convex function of F (as in our choice), then we know that the global optimal solution remains the same if the whole decoding space r is replaced by the relevant set r ‚º¬L… . Each subspace r ‚º¬L… q C  will be significantly smaller than rsq C  . This is because it only includes those alternatives k  with score ¹»  pk E close to one of the selected truth. These are the most important alternatives that are easily confused with the truth. Essentially the lemma says that if the decoder works well on these difficult alternatives (relevant points), then it works well on the whole space. The idea is closely related to active learning in standard classification problems, where we 724 Table 1: Generic Approximate Relevant Set Method for each data point q initialize truth r Š q  and alternative r ‚+¬L… q  for each decoding iteration ¼ : ½ 7 b ILILI < for each data point q select relevant points Q¾ kY¿ S O rsq  (*) update r ‚º¬L… q À r ‚+¬L… q ÂÁ Q¾ kY¿ S update F by solving Eq. 5 approximately (**) selectively pick the most important samples (often based on estimation uncertainty) for labeling in order to maximize classification performance (Lewis and Catlett, 1994). In the active learning setting, as long as we do well on the actively selected samples, we do well on the whole sample space. In our case, as long as we do well on the relevant set, the decoder will perform well. 
Since the relevant set depends on the decoder parameter F , and the decoder parameter is optimized on the relevant set, it is necessary to estimate them jointly using an iterative algorithm. The basic idea is to start with a decoding parameter F , and estimate the corresponding relevant set; we then update F based on the relevant set, and iterate this process. The procedure is outlined in Table 1. We intentionally leave the implementation details of the (*) step and (**) step open. Moreover, in this general algorithm, we do not have to assume that   pk  has the form of Eq. 1. A natural question concerning the procedure is its convergence behavior. It can be shown that under mild assumptions, if we pick in (*) an alternative ¾ kY¿ O rtq ƒŒ r—ŠPq  for each k¿ O r Šsq  ( à 7 b \L\L\ ‹ ) such that ›  F kY¿ ¾ k¿  7 z|v?} ~𙱁ƒ‚]„… M ˜¡‚]„… ›  F kY¿ k  N (6) then the procedure converges to the solution of Eq. 3. Moreover, the rate of convergence depends only on the property of the loss function, and not on the size of rPq  . This property is critical as it shows that as long as Eq. 6 can be computed efficiently, then the Approximate Relevant Set algorithm is efficient. Moreover, it gives a bound on the size of an approximate relevant set with a certain accuracy.5 5Due to the space limitation, we will not include a forThe approximate solution of Eq. 5 in (**) can be implemented using stochastic gradient descent (SGD), where we may simply update F as: FÅÄÆF ŒÈÇÂÉ  ›  F kY¿ ¾ k¿  \ The parameter Çʦ ^ is a fixed constant often referred to as learning rate. Again, convergence results can be proved for this procedure. Due to the space limitation, we skip the formal statement as well as the corresponding analysis. Up to this point, we have not assumed any specific form of the decoder scoring function in our algorithm. Now consider Eq. 1 used in our model. We may express it as: ?pk  7 F G IYË pk N where Ë pk  7 CED K  C  C LCTM  . Using this feature representation and the loss function in Eq. 4, we obtain the following costMargin SGD update rule for each training data point and à : FÅÄÌF ” ÇÎÍ lnm+¿ÏпƒNb Œ FHGJI Ïп  £ (7) Í lnmÑ¿ 7 lnmopkY¿ Ҍ lnm] ¾ kY¿ N Ïο 7 Ë pk¿ ӌ Ë  ¾ kY¿  \ 4 Experimental Results We applied the novel discriminative training approach to a standard Arabic-to-English translation task. The training data comes from UN news sources. Some punctuation tokenization and some number classing are carried out on the English and the Arabic training data. We show translation results in terms of the automatic BLEU evaluation metric (Papineni et al., 2002) on the MT03 Arabic-English DARPA evaluation test set consisting of ÔYÔYW sentences with bYÔ¡ÕYÖY× Arabic words with _ reference translations. In order to speed up the parameter training the original training data is filtered according to the test set: all the Arabic substrings that occur in the test set are computed and the parallel training data is filtered to include only those training sentence pairs that contain at least one out of these phrases: the resulting pre-filtered training data contains about ÕYWY^ thousand sentence pairs ( XY\]XYÕ million Arabic words and ÔY\]ÖYÔ million English words). The block set is generated using a phrase-pair selection algorithm similar to (Koehn et al., 2003; Al-Onaizan et al., 2004), which includes some heuristic filtering to mal statement here. A detailed theoretical investigation of the method will be given in a journal paper. 725 increase Ø phrase translation accuracy. 
Blocks that occur only once in the training data are included as well.
4.1 Practical Implementation Details
The training algorithm in Table 2 is adapted from Table 1. The training is carried out by running $L = 30$ times over the parallel training data, each time decoding all the $N = 230{,}000$ training sentences and generating a single block translation sequence for each training sentence. The top five block sequences $S^{truth}_i$ with the highest BLEU score are computed up-front for all training sentence pairs $z_i$ and are stored separately, as described in Section 2. The score-based decoding of the 230,000 training sentence pairs is carried out in parallel on 25 64-bit Opteron machines. Here, the monotone decoding is much faster than the decoding with block swapping: the monotone decoding takes less than 0.5 hours, and the decoding with swapping takes about an hour. Since the training starts with only the parallel training data and a block set, some initial block sequences have to be generated in order to initialize the global model training: for each input sentence, a simple bag-of-blocks translation is generated. For each input interval that is matched by some block $b$, a single block $b$ is added to the bag-of-blocks translation $s_0(z)$. The order in which the blocks are generated is ignored. For this block set only block and word identity features are generated, i.e. the block and word identity feature types described in Section 2. This step does not require the use of a decoder. The initial block sequence training data contains only a single alternative. The training procedure proceeds by iteratively decoding the training data. After each decoding step, the resulting translation block sequences are stored on disk in binary format. A block sequence generated at decoding step $\ell$ is used in all subsequent training steps $\ell'$, where $\ell' > \ell$. The block sequence training data after the $\ell$-th decoding step is given as $\{(S^{truth}_i, S^{rel}_i)\}_{i=1}^{N}$, where the size $|S^{rel}_i|$ of the relevant alternative set is $\ell + 1$. Although, in order to achieve fast convergence with a theoretical guarantee, we should use Eq. 6 to update the relevant set, in reality this idea is difficult to implement because it requires a more costly decoding step. Therefore, in Table 2 we adopt an approximation, where the relevant set is updated by adding the decoder output at each stage. In this way, we are able to treat the decoding scheme as a black box.
Table 2: Relevant set method: $L$ = number of decoding iterations, $N$ = number of training sentences.
  for each input sentence $z_i$, $i = 1, \ldots, N$:
    initialize truth $S^{truth}_i$ and alternative $S^{rel}_i = \{s_0(z_i)\}$
  for each decoding iteration $\ell = 1, \ldots, L$:
    train $w$ using SGD on the training data $\{(S^{truth}_i, S^{rel}_i)\}_{i=1}^{N}$
    for each input sentence $z_i$, $i = 1, \ldots, N$:
      select the top-scoring sequence $\bar{s}(z_i)$ and update $S^{rel}_i \leftarrow S^{rel}_i \cup \{\bar{s}(z_i)\}$
One way to approximate Eq. 6 is to generate multiple decoding outputs and pick the most relevant points based on Eq. 6. Since N-best list generation is computationally costly, only a single block sequence is generated for each training sentence pair, which also reduces the memory requirements of the training algorithm. Although we are not able to rigorously prove a fast convergence rate for this approximation, it works well in practice, as Figure 2 shows. Theoretically, this is because points achieving large values in Eq. 6 tend to have a higher chance of becoming the top-ranked decoder output as well.
The SGD-based online training algorithm described in Section 3 is carried out after each decoding step to generate the weight vector $w$ for the subsequent decoding step. Since this training step is carried out on a single machine, it dominates the overall computation time. Since each iteration adds a single relevant alternative to the set $S^{rel}_i$, computation time increases with the number of training iterations: the initial model is trained in a few minutes, while training the model after the 30-th iteration takes up to 5 hours for the most complex models.
Table 3 presents experimental results in terms of uncased BLEU.⁶
Table 3: Translation results in terms of uncased BLEU on the training data (230,000 sentences) and the MT03 test data (663 sentences).
      Re-ordering   Features   train    test
  1   'MON'         bleu       0.542      -
  2                 phrase     0.378    0.256
  3                 word       0.427    0.341
  4                 both       0.477    0.359
  5   'SWAP'        bleu       0.594      -
  6                 phrase     0.441    0.295
  7                 word       0.455    0.359
  8                 both       0.479    0.363
Two re-ordering restrictions are tested, i.e. monotone decoding ('MON'), and local block re-ordering where neighboring blocks can be swapped ('SWAP'). The 'SWAP' re-ordering uses the same features as the monotone models plus additional orientation-based and distortion-based features. Different feature sets include word-based features, phrase-based features, and the combination of both. For the results with word-based features, the decoder still generates phrase-to-phrase translations, but all the scoring is done on the word level. Line 8 shows a BLEU score of 36.3 for the best performing system, which uses all word-based and phrase-based features.⁷ Line 1 and line 5 of Table 3 show the training-data averaged BLEU score obtained by searching for the highest BLEU-scoring block sequence for each training sentence pair, as described in Section 2. Allowing local block swapping in this search procedure yields a much improved BLEU score of 0.59. The experimental results show that word-based models significantly outperform phrase-based models, and that the combination of word-based and phrase-based features performs better than either feature type taken separately. Additionally, swap-based re-ordering slightly improves performance over monotone decoding. For all experiments, the training BLEU score remains significantly lower than the maximum obtainable BLEU score shown in line 1 and line 5. In this respect, there is significant room for improvement in terms of feature functions and alternative-set generation. The word-based models perform surprisingly well: the model in line 7 uses only three feature types, namely Model 1 features as described in Section 2, distortion features, and target-language m-gram features up to $m = 3$. Training speed varies depending on the feature types used: for the simplest model, shown in line 2 of Table 3, the training takes about 12 hours.
⁶ Translation performance in terms of cased BLEU is typically reduced by about 2%.
⁷ The differences between the results in line 4, line 7, and line 8 are not statistically significant, but the other result differences are.
Figure 2: BLEU performance on the training set (upper graph, 'SWAP.TRAINING'; averaged BLEU with a single reference) and the test set (lower graph, 'SWAP.TEST'; BLEU with four references) as a function of the training iteration for the model corresponding to line 8 in Table 3.
For the models using word-based features, shown in line 3 and line 7, training takes less than 2 days. Finally, the training for the most complex model, in line 8, takes about 4 days. Figure 2 shows the BLEU performance for the model corresponding to line 8 in Table 3 as a function of the number of training iterations. By adding top-scoring alternatives in the training algorithm in Table 2, the BLEU performance on the training data improves from about 0.22 for the initial model to about 0.48 for the best model after 30 iterations. After each training iteration the test data is decoded as well. Here, the BLEU performance improves from 0.08 for the initial model to about 0.36 for the final model (we do not include the test data block sequences in the training). Figure 2 shows a typical learning curve for the experiments in Table 3: the training BLEU score is much higher than the test-set BLEU score, despite the fact that the test set uses 4 reference translations.
5 Discussion and Future Work
The work in this paper differs substantially from previous work in SMT based on the noisy channel approach presented in (Brown et al., 1993). While error-driven training techniques are commonly used to improve the performance of phrase-based translation systems (Chiang, 2005; Och, 2003), this paper presents a novel block sequence translation approach to SMT that is similar to sequential natural language annotation problems such as part-of-speech tagging or shallow parsing, both in modeling and in parameter training. Unlike earlier approaches to SMT training, which either rely heavily on domain knowledge or can only handle a small number of features, this approach treats the decoding process as a black box and can optimize tens of millions of parameters automatically, which makes it applicable to other problems as well. Our formulation is convex, which ensures that we are able to find the global optimum even for large-scale problems. The loss function in Eq. 4 may not be optimal, and different choices may lead to future improvements. Another important direction for performance improvement is to design methods that better approximate Eq. 6. Although at this stage the system performance is not yet better than previous approaches, good translation results are achieved on a standard translation task. While similar to (Tillmann and Zhang, 2005), the current procedure is more automated, with comparable performance. The latter approach requires a decomposition of the decoding scheme into local decision steps, with the inherent difficulty acknowledged in (Tillmann and Zhang, 2005). Since such a limitation is not present in the current model, improved results may be obtained in the future. A perceptron-like algorithm that handles global features in the context of re-ranking is also presented in (Shen et al., 2004).
The computational requirements for the training algorithm in Table 2 can be significantly reduced. While the global training approach presented in this paper is simple, after 15 iterations or so the alternatives that are added to the relevant set differ very little from each other, slowing down the training considerably, such that the set of possible block translations $S(z)$ might not be fully explored. As mentioned in Section 2, the current approach is also able to handle real-valued features, e.g. the language model probability. This is important since the language model can be trained on a much larger monolingual corpus.
6 Acknowledgment
This work was partially supported by the GALE project under DARPA contract No. HR0011-06-2-0001. The authors would like to thank the anonymous reviewers for their detailed criticism of this paper.
References
Yaser Al-Onaizan, Niyu Ge, Young-Suk Lee, Kishore Papineni, Fei Xia, and Christoph Tillmann. 2004. IBM Site Report. In NIST 2004 MT Workshop, Alexandria, VA, June. IBM.
Peter F. Brown, Vincent J. Della Pietra, Stephen A. Della Pietra, and Robert L. Mercer. 1993. The Mathematics of Statistical Machine Translation: Parameter Estimation. CL, 19(2):263-311.
David Chiang. 2005. A hierarchical phrase-based model for statistical machine translation. In Proc. of ACL 2005, pages 263-270, Ann Arbor, Michigan, June.
Michael Collins. 2002. Discriminative training methods for hidden Markov models: Theory and experiments with perceptron algorithms. In Proc. EMNLP'02, Philadelphia, PA.
A. Ittycheriah and S. Roukos. 2005. A Maximum Entropy Word Aligner for Arabic-English MT. In Proc. of HLT-EMNLP 05, pages 89-96, Vancouver, British Columbia, Canada, October.
Philipp Koehn, Franz J. Och, and Daniel Marcu. 2003. Statistical phrase-based translation. In HLT-NAACL 2003: Main Proceedings, pages 127-133, Edmonton, Alberta, Canada, May 27 - June 1.
Shankar Kumar and William Byrne. 2005. Local phrase reordering models for statistical machine translation. In Proc. of HLT-EMNLP 05, pages 161-168, Vancouver, British Columbia, Canada, October.
D. Lewis and J. Catlett. 1994. Heterogeneous uncertainty sampling for supervised learning. In Proceedings of the Eleventh International Conference on Machine Learning, pages 148-156.
Ryan McDonald, Koby Crammer, and Fernando Pereira. 2005. Online large-margin training of dependency parsers. In Proceedings of ACL'05, pages 91-98, Ann Arbor, Michigan, June.
Franz Josef Och. 2003. Minimum error rate training in statistical machine translation. In Proceedings of ACL'03, pages 160-167, Sapporo, Japan.
Kishore Papineni, Salim Roukos, Todd Ward, and Wei-Jing Zhu. 2002. BLEU: a Method for Automatic Evaluation of Machine Translation. In Proc. of ACL'02, pages 311-318, Philadelphia, PA, July.
Libin Shen, Anoop Sarkar, and Franz-Josef Och. 2004. Discriminative Reranking of Machine Translation. In Proceedings of the Joint HLT and NAACL Conference (HLT 04), pages 177-184, Boston, MA, May.
Christoph Tillmann and Tong Zhang. 2005. A localized prediction model for statistical machine translation. In Proceedings of ACL'05, pages 557-564, Ann Arbor, Michigan, June.
Proceedings of the 21st International Conference on Computational Linguistics and 44th Annual Meeting of the ACL, pages 729–736, Sydney, July 2006. c⃝2006 Association for Computational Linguistics Phoneme-to-Text Transcription System with an Infinite Vocabulary Shinsuke Mori Daisuke Takuma Gakuto Kurata IBM Research, Tokyo Research Laboratory, IBM Japan, Ltd. 1623-14 Shimotsuruma Yamato-shi, 242-8502, Japan [email protected] Abstract The noisy channel model approach is successfully applied to various natural language processing tasks. Currently the main research focus of this approach is adaptation methods, how to capture characteristics of words and expressions in a target domain given example sentences in that domain. As a solution we describe a method enlarging the vocabulary of a language model to an almost infinite size and capturing their context information. Especially the new method is suitable for languages in which words are not delimited by whitespace. We applied our method to a phoneme-to-text transcription task in Japanese and reduced about 10% of the errors in the results of an existing method. 1 Introduction The noisy channel model approach is being successfully applied to various natural language processing (NLP) tasks, such as speech recognition (Jelinek, 1985), spelling correction (Kernighan et al., 1990), machine translation (Brown et al., 1990), etc. In this approach an NLP system is composed of two modules: one is a taskdependent part (an acoustic model for speech recognition) which describes a relationship between an input signal sequence and a word, the other is a language model (LM) which measures the likelihood of a sequence of words as a sentence in the language. Since the LM is a common part, its improvement augments the accuracies of all NLP systems based on a noisy channel model. Recently the main research focus of LM is shifting to the adaptation method, how to capture the characteristics of words and expressions in a target domain. The standard adaptation method is to prepare a corpus in the application domain, count the frequencies of words and word sequences, and manually annotate new words with their input signal sequences to be added to the vocabulary. It is now easy to gather machine-readable sentences in various domains because of the ease of publication and access via the Web (Kilgarriff and Grefenstette, 2003). In addition, traditional machinereadable forms of medical reports or business reports are also available. When we need to develop an NLP system in various domains, there is a huge but unannotated corpus. For languages, such as Japanese and Chinese, in which the words are not delimited by whitespace, one encounters a word identification problem before counting the frequencies of words and word sequences. To solve this problem one must have a good word segmenter in the domain of the corpus. The only robust and reliable word segmenter in the domain is, however, a word segmenter based on the statistics of the lexicons in the domain! Thus we are obliged to pay a high cost for the manual annotation of a corpus for each new subject domain. In this paper, we propose a novel framework for building an NLP system based on a noisy channel model with an almost infinite vocabulary. In our method, first we estimate the probability of a word boundary existing between two characters at each point of a raw corpus in the target domain. Using these probabilities we regard the corpus as a stochastically segmented corpus (SSC). 
We then estimate word -gram probabilities from the SSC. Then we build an NLP system, the phoneme-totext transcription system in this paper. To describe the stochastic relationship between a character sequence and its phoneme sequence, we also propose a character-based unknown word model. With this unknown word model and a word gram model estimated from the SSC, the vocabulary of our LM, a set of known words with their context information, is expanded from words in a 729 small annotated corpus to an almost infinite size, including all substrings appearing in the large corpus in the target domain. In experiments, we estimated LMs from a relatively small annotated corpus in the general domain and a large raw corpus in the target domain. A phoneme-to-text transcription system based on our LM and unknown word model eliminated about 10% of the errors in the results of an existing method. 2 Task Complexity In this section we explain the phoneme-to-text transcription task which our new framework is applied to. 2.1 Phoneme-to-text Transcription To input a sentence in a language using a device with fewer keys than the alphabet we need some kind of transcription system. In French stenotypy, for example, a special keyboard with 21 keys is used to input French letters with accents (Derouault and Merialdo, 1986). A similar problem arises when we write an e-mail in any language with a mobile phone or a PDA. For languages with a much larger character set, such as Chinese, Japanese, and Korean, a transcription system called an input method is indispensable for writing on a computer (Lunde, 1998). The task we chose for the evaluation of our method is phoneme-to-text transcription in Japanese, which can also be regarded as a pseudospeech recognition in which the acoustic model is perfect. In order to input Japanese to a computer, the user types phoneme sequences and the computer offers possible transcription candidates in the descending order of their estimated similarities to the characters the user wants to input. Then the user chooses the proper one. 2.2 Ambiguities A phoneme sequence in Japanese (written in sansserif font in this paper) is highly ambiguous for a computer. There are many possible word sequences with similar pronunciations. These ambiguities are mainly due to three factors: Homonyms: There are many words sharing the same phoneme sequences. In the spoken language, they are less ambiguous since they are Generally one of Japanese phonogram sets is used as phoneme. A phonogram is input by a combination of unambiguous ASCII characters. pronounced with different intonations. Intonational signals are, however, omitted in the input of phoneme-to-text transcription. Lack of word boundaries: A word of a long sequence of phonemes can be split into several shorter words, such as frequent content words, particles, etc. (ex. ---/thanks vs. -/ant /is /ten). Variations in writing: Some words have more than one acceptable spellings. For example, 振 り込み/--  /bank-transfer is often written as 振込/-   omitting two verbal endings, especially in business writing. Most of these ambiguities are not difficult to resolve for a native speaker who is familiar with the domain. So the transcription system should offer the candidate word sequences for each context and domain. 
2.3 Available Resources Generally speaking, three resources are available for a phoneme-to-text transcription based on the noisy channel model: annotated corpus: a small corpus in the general domain annotated with word boundary information and phoneme sequences for each word single character dictionary: a dictionary containing all possible phoneme sequences for each single character raw corpus in the target domain: a collection of text samples in the target domain extracted from the Web or documents in machine-readable form 3 Language Model and its Application A stochastic LM  is a function from a sequence of characters   to the probability. The summation over all possible sequences of characters must be equal to or less than 1. This probability is used as the likelihood in the NLP system. 3.1 Word -gram Model The most famous LM is an -gram model based on words. In this model, a sentence is regarded as a word sequence  (       ) and words are predicted from beginning to end:           730 where      and   is a special symbol called a  (boundary token). Since it is impossible to define the complete vocabulary, we prepare a special token  for unknown words and an unknown word spelling is predicted by the following character-based -gram model after  is predicted by   :        (1) where      and   is a special symbol . Thus, when   is outside of the vocabulary ,             3.2 Automatic Word Segmentation Nagata (1994) proposed a stochastic word segmenter based on a word -gram model to solve the word segmentation problem. According to this method, the word segmenter divides a sentence into a word sequence with the highest probability   argmax       Nagata (1994) reported an accuracy of about 97% on a test corpus in the same domain using a learning corpus of 10,945 sentences in Japanese. 3.3 Phoneme-to-text Transcription A phoneme-to-text transcription system based on an LM (Mori et al., 1999) receives a phoneme sequence  and returns a list of candidate sentences        in descending order of the probability   :          where            Similar to speech recognition, the probability is decomposed into two independent parts: a pronunciation model (PM) and an LM.                                         (2)        is independent of  and   In this formula   is an LM representing the likelihood of a sentence . For the LM, we can use a word -gram model we explained above. The other part in the above formula    is a PM representing the probability that a given sentence is pronounced as . Since it is impossible to collect the phoneme sequences  for all possible sentences , the model is decomposed into a word-based model   in which the words are pronounced independently             (3) where   is a phoneme sequence corresponding to the word   and the condition   is met. The probabilities       are estimated from a corpus in which each word is annotated with a phoneme sequence as follows:                 (4) where   stands for the frequency of an event in the corpus. For unknown words no transcription model has been proposed and the phoneme-to-text transcription system (Mori et al., 1999) simply returns the phoneme sequence itself.  This is done by replacing the unknown word model based on the Japanese character set    by a model based on the phonemic alphabet     . 
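As a concrete illustration of how the word n-gram model of Section 3.1 backs off to the character-based unknown-word model, here is a minimal sketch. It assumes bigram models throughout, treats the probability functions `word_bigram` and `char_bigram` as given, and uses `<bt>` and `<unk>` as stand-ins for the boundary token and the unknown-word token; these names are illustrative, not taken from the paper.

```python
import math

def sentence_log_prob(words, vocab, word_bigram, char_bigram,
                      BT="<bt>", UW="<unk>"):
    """Log-probability of a word sequence under a word bigram LM with a
    character-based unknown-word model (sketch of Section 3.1)."""
    logp, prev = 0.0, BT
    for w in words + [BT]:
        if w == BT or w in vocab:
            logp += math.log(word_bigram(prev, w))
            prev = w
        else:
            # out-of-vocabulary word: predict the unknown-word token, then
            # spell the word out character by character, ending with <bt>
            logp += math.log(word_bigram(prev, UW))
            prev_c = BT
            for c in list(w) + [BT]:
                logp += math.log(char_bigram(prev_c, c))
                prev_c = c
            prev = UW
    return logp
```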
Thus the candidate evaluation metric of a phoneme-to-text transcription (Mori et al., 1999) composed of the word -gram model and the word-based pronunciation model is as follows:                       (5)            if              if     4 LM Estimation from a Stochastically Segmented Corpus (SSC) To cope with segmentation errors, the concept of stochastic segmentation is proposed (Mori and Takuma, 2004). In this section, we briefly explain a method of calculating word -gram probabilities on a stochastically segmented corpus in the target domain. For a detailed explanation and proofs of the mathematical soundness, please refer to the paper (Mori and Takuma, 2004).  One of the Japanese syllabaries Katakana is used to spell out imported words by imitating their Japanese-constrained pronunciation and the phoneme sequence itself is the correct transcription result for them. Mori et. al. (1999) reported that approximately 33.0% of the unknown words in a test corpus were imported words. 731 xk+1 xbn n ex bn+1 x wn x i xb1 xe1 xb2 e2 x 1 w w2 1-Pbn ( ) 1-Pbn+1 ( ) P n e P P i e1 Pe2 b2 1-P ( ) 1-Pb1 ( ) r 1 n f (w ) = Figure 1: Word -gram frequency in a stochastically segmented corpus (SSC). 4.1 Stochastically Segmented Corpus (SSC) A stochastically segmented corpus (SSC) is defined as a combination of a raw corpus  (hereafter referred to as the character sequence  ) and word boundary probabilities   that a word boundary exists between two characters   and  . Since there are word boundaries before the first character and after the last character of the corpus,     . In (Mori and Takuma, 2004), the word boundary probabilities are defined as follows. First the word boundary estimation accuracy  of an automatic word segmenter is calculated on a test corpus with word boundary information. Then the raw corpus is segmented by the word segmenter. Finally   is set to be  for each  where the word segmenter put a word boundary and   is set to be   for each  where it did not put a word boundary. We adopted the same method in the experiments. 4.2 Word -gram Frequency Word -gram frequencies on an SSC is calculated as follows: Word 0-gram frequency: This is defined as an expected number of words in the SSC:        Word -gram frequency ( ): Let us think of a situation (see Figure 1) in which a word sequence   occurs in the SSC as a subsequence beginning at the   -th character and ending at the -th character and each word  in the word sequence is equal to the character sequence beginning at the  -th character and ending at the -th character (         ;         ;    ;  ). The word -gram frequency of a word sequence    in the SSC is defined by the summation of the stochastic frequency at each occurrence of the character sequence of the word sequence   over all of the occurrences in the SSC:                            where             and              . 4.3 Word -gram probability Similar to the word -gram probability estimation from a decisively segmented corpus, word -gram probabilities in an SSC are estimated by the maximum likelihood estimation method as relative values of word -gram frequencies:                      5 Phoneme-to-Text Transcription with an Infinite Vocabulary The vocabulary of an LM estimated from an SSC consists of all subsequences occurring in it. Adding a module describing a stochastic relationship between these subsequences and input signal sequences, we can build a phoneme-to-text transcription system equipped with an almost infinite vocabulary. 
5.1 Word Candidate Enumeration Given a phoneme sequence as an input, the dictionary of a phoneme-to-text transcription system described in Subsection 3.3 returns pairs of a word and a probability per Equation (4). Similarly, the dictionary of a phoneme-to-text system with an infinite vocabulary must be able to take a phoneme sequence  and return all possible pairs of a character sequence  and the probability     as word candidates. This is done as follows: 1. First we prepare a single character dictionary containing all characters  in the language annotated with their all possible phoneme sequences           . For 732 example, the Japanese single character dictionary contains a character  “日” annotated with its all possible phoneme sequences 日         . 2. Then we build a phoneme-to-text transcription system for single characters equipped with the vocabulary consisting of the union set of phoneme sequences for all characters. Given a phoneme sequence , this module returns all possible character sequences  with its generation probability    . For example, given a subsequence of the input phoneme sequence  , this module returns  日テ レ  日手レ  日照レ ニッテレ ニッ手レ ニッ照 レ      as a word candidate set along with their generation probabilities. 3. There are various methods to calculate the probability    . The only condition is that given         ,     must be a stochastic language model (cf. Section 3) on the alphabet . In the experiments, we assumed the uniform distribution of phoneme sequences for each character as follows:                     (6) The module we described above receives a phoneme sequence and enumerates its decompositions to subsequences contained in the single character dictionary. This module is implemented using a dynamic programming method. In the experiments we limited the maximum length of the input to 16 phonemes. 5.2 Modeling Contexts of Word Candidates Word -gram probability estimated from an SSC may not be as accurate as an LM estimated from a corpus segmented appropriately by hand. Thus we use the following interpolation technique:                        where   is history before  ,   is the probability estimated from a segmented corpus  , and  is the probability estimated by our method from a raw corpus  . The   and  are interpolation coefficients which are estimated by the deleted interpolation method (Jelinek et al., 1991).  More precisely, it may happen that the same phoneme sequence is generated from a character sequence in multiple ways. In this case the generation probability is calculated as the summation over all possible generations. In the experiments, the word bi-gram model in our phoneme-to-text transcription system is combined with word bi-gram probabilities estimated from an SSC. Thus the phoneme-to-text transcription system of our new framework refers to the following LM to measure the likelihood of word sequences:     (7)                                 if                       if                             if           where  is the set of all subsequences appearing in the SSC. Our LM based on Equation (7) and an existing LM (cf. Equation (5)) behave differently when they predict an out-of-vocabulary word appearing in the SSC, that is          . In this case our LM has reliable context information on the OOV word to help the system choose the proper word. Our system also clearly functions better than the LM interpolated with a word -gram model estimated from the automatic segmentation result of the corpus when the result is a wrong segmentation. 
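The enumeration module of Section 5.1 can be sketched as a simple dynamic program over the input phoneme string. The dictionary format, the cap `max_reading_len` on how many phonemes a single character may span, and the uniform reading probability of Eq. 6 are the only ingredients; merging duplicate output sequences by summing their probabilities (as noted in the footnote above) and any pruning are left out of this illustration, and the parameter values are hypothetical.

```python
from collections import defaultdict

def enumerate_candidates(phonemes, char_dict, max_reading_len=4):
    """Enumerate character sequences that can generate `phonemes` (Section 5.1 sketch).

    char_dict maps each character to the list of its possible phoneme sequences;
    the probability of each reading is uniform, 1 / (number of readings), as in Eq. 6.
    """
    readings = defaultdict(list)   # phoneme string -> characters readable that way
    n_readings = {}                # character -> number of its phoneme sequences
    for ch, phoneme_seqs in char_dict.items():
        n_readings[ch] = len(phoneme_seqs)
        for ph in phoneme_seqs:
            readings[ph].append(ch)

    n = len(phonemes)
    # table[j]: list of (character sequence, probability) covering phonemes[:j]
    table = [[] for _ in range(n + 1)]
    table[0] = [("", 1.0)]
    for j in range(1, n + 1):
        for i in range(max(0, j - max_reading_len), j):
            for ch in readings.get(phonemes[i:j], []):
                for prefix, prob in table[i]:
                    table[j].append((prefix + ch, prob / n_readings[ch]))
    return table[n]
```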
For example, when the automatic segmentation result of the sequence “日 テレ” (the abbreviation of Japan TV broadcasting corporation) has a word boundary between “日” and “テ,” the uni-gram probability  日テレ  is equal to 0 and an OOV word “日テレ” is never enumerated as a candidate. To the contrary, using our method  日テレ    when the sequence “日テレ” appears in the SSC at least once. Thus the sequence is enumerated as a candidate word. In addition, when the sequence appears frequently in the SSC,  日テレ    and the word may appear at a high position in the candidate list even if the automatic segmenter always wrongly segments the sequence into “日” and “テレ.” 5.3 Default Character for Phoneme In very rare cases, it happens that the input phoneme sequence cannot be decomposed into phoneme sequences in the vocabulary and those  Two word fragments “日” and “テレ” may be enumerated as word candidates. The notion of word may be necessary for the user’s facility. However, we do not discuss the necessity of the notion of word in the phoneme-to-text transcription system. 733 corresponding to subsequences of the SSC and, as a result, the transcription system does not output any candidate sentence. To avoid this situation, we prepare a default character for every phoneme and the transcription system also enumerates the default character for each phoneme. In Japanese from the viewpoint of transcription accuracy, it is better to set the default characters to katakana, which are used mainly for transliteration of imported words. Since a katakana is pronunced uniquely (    ),                (8) From Equations (4), (6), and (8), the PM of our transcription system is as follows:       (9)                          if             if            if           where          . 5.4 Phoneme-to-Text Transcription with an Infinite Vocabulary Finally, the transcription system with an infinite vocabulary enumerates candidate sentence        in the descending order of the following evaluation function value composed of an LM     defined by Equation (7) and a PM       defined by Equation (9):              Note that there are only three cases since the case decompositions in Equation (7) and Equation (9) are identical. 6 Evaluation As an evaluation of our phoneme-to-text transcription system, we measured transcription accuracies of several systems on test corpora in two domains: one is a general domain in which we have a small annotated corpus with word boundary information and phoneme sequence for each word, and the other is a target domain in which only a large raw corpus is available. As the transcription result, we took the word sequence of the highest probability. In this section we show the results and evaluate our new framework. Table 1: Annotated corpus in general domain #sentences #words #chars learning 20,808 406,021 598,264 test 2,311 45,180 66,874 Table 2: Raw corpus in the target domain #sentences #words #chars learning 797,345 — 17,645,920 test 1,000 — 20,935 6.1 Conditions on the Experiments The segmented corpus used in our experiments is composed of articles extracted from newspapers and example sentences in a dictionary of daily conversation. Each sentence in the corpus is segmented into words and each word is annotated with a phoneme sequence. The corpus was divided into ten parts. The parameters of the model were estimated from nine of them (learning) and the model was tested on the remaining one (test). Table 1 shows the corpus size. Another corpus we used in the experiments is composed of daily business reports. 
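The interpolation of Section 5.2 can be pictured as a case split over the status of the predicted word. The sketch below is a simplified illustration, not the exact Eq. 7: the precise weighting of each case follows the equation in the text, and `p_general`, `p_ssc`, and `p_unknown` are assumed to be supplied by the general-domain model, the SSC-based model, and the character-based unknown-word model, respectively.

```python
def interpolated_bigram(prev_w, w, general_vocab, ssc_substrings,
                        p_general, p_ssc, p_unknown, lam):
    """Simplified sketch of the interpolated LM of Section 5.2.

    lam = (l1, l2): interpolation coefficients, e.g. estimated by deleted interpolation.
    """
    l1, l2 = lam
    if w in general_vocab:
        # known word: interpolate the two bigram estimates
        return l1 * p_general(prev_w, w) + l2 * p_ssc(prev_w, w)
    if w in ssc_substrings:
        # OOV word that occurs in the stochastically segmented corpus:
        # only the SSC provides context information for it
        return l2 * p_ssc(prev_w, w)
    # otherwise fall back to the character-based unknown word model
    return l1 * p_general(prev_w, "<unk>") * p_unknown(w)
```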
This corpus is not annotated with word boundary information nor phoneme sequence for each word. For evaluation, we selected 1,000 sentences randomly and annotated them with the phoneme sequences to be used as a test set. The rest was used for LM estimation (see Table 2). 6.2 Evaluation Criterion The criterion we used for transcription systems is precision and recall based on the number of characters in the longest common subsequence (LCS) (Aho, 1990). Let    be the number of characters in the correct sentence,     be that in the output of a system, and    be that of the LCS of the correct sentence and the output of the system, so the recall is defined as        and the precision as        . 6.3 Models for Comparison In order to clarify the difference in the usages of the target domain corpus, we built four transcription systems and compared their accuracies. Below we explain the models in detail. Model : Baseline A word bi-gram model built from the segmented general domain corpus. 734 Table 3: Phoneme-to-text transcription accuracy. word bi-gram from raw corpus unknown General domain Target domain the annotated corpus usage word model Precision Recall Precision Recall  Yes No No 89.80% 92.30% 68.62% 78.40%  Yes Auto. Seg. No 92.67% 93.42% 80.59% 86.19%   Yes Auto. Seg. Yes 92.52% 93.17% 90.35% 93.48%  Yes Stoch. Seg. Yes 92.78% 93.40% 91.10% 94.09% The vocabulary contains 10,728 words appearing in more than one corpora of the nine learning corpora. The automatic word segmenter used to build the other three models is based on the method explained in Section 3 with this LM. Model : Decisive segmentation A word bi-gram model estimated from the automatic segmentation result of the target corpus interpolated with model . Model  : Decisive segmentation Model  extended with our PM for unknown words Model : Stochastic segmentation A word bi-gram model estimated from the SSC in the target domain interpolated with model  and equipped with our PM for unknown words 6.4 Evaluation Table 3 shows the transcription accuracy of the models. A comparison of the accuracies in the target domain of the Model  and Model  confirms the well known fact that even an automatic segmentation result containing errors helps an LM improve its performance. The accuracy of Model  in the general domain is also higher than that of Model . From this result we can say that overadaptation has not occurred. Model  , equipped with our PM for unknown words, is a natural extension of Model , a model based on an existing method. The accuracy of Model   is higher than that of Model  in the target domain, but worse in the general domain. This is because the vocabulary of Model   is enlarged with the words and the word fragments contained in the automatic segmentation result. Though no study has been reported on the method of Model  , below we take Model   as an existing method for a more severe evaluation. Comparing the accuracies of Model   and Model  in both domain, it can be said that using our method we can build a more accurate model than the existing methods. The main reason is that Table 4: Relationship between the raw corpus size and the accuracies. Raw corpus size Precision Recall    chars (1/100) 89.18% 92.32%    chars (1/10) 90.33% 93.40%    chars (1/1) 91.10% 94.09% our phoneme model PM is able to enumerate transcription candidates for out-of-vocabulary words and word -gram probabilities estimated from the SSC helps the model choose the appropriate ones. 
A detailed study of Table 3 tells us that the reduction rate of character error rate ( recall) of Model  in the target domain (9.36%) is much larger than that in the general domain (3.37%). The reason for this is that the automatic word segmenter tends to make mistakes around characteristic words and expressions in the target domain and our method is much less influenced by those segmentation errors than the existing method is. In order to clarify the relationship between the size of the SSC and the transcription accuracy, we calculated the accuracies while changing the size of the SSC (1/1, 1/10, 1/100). The result, shown in Table 4, shows that we can still achieve a further improvement just by gathering more example sentences in the target domain. The main difference between the models is the LM part. Thus the accuracy increase is yielded by the LM improvements. This fact indicates that we can expect a similar improvement in other generative NLP systems using the noisy channel model by expanding the LM vocabulary with context information to an infinite size. 7 Related Work The well-known methods for the unknown word problem are classified into two groups: one is to use an unknown word model and the other is to extract word candidates from a corpus before the application. Below we describe the relationship 735 between these methods and the proposed method. In the method using an unknown word model, first the generation probability of an unknown word is modeled by a character -gram, and then an NLP system, such as a morphological analyzer, searches for the best solution considering the possibility that all subsequences might be unknown words (Nagata, 1994; Bazzi and Glass, 2000). In the same way, we can build a phoneme-totext transcription system which can enumerate unknown word candidates, but the LM is not able to refer to lexical context information to choose the appropriate word, since the unknown words are modeled to be generated from a single state. We solved this problem by allowing the LM to refer to information from an SSC. When a machine-readable corpus in the target domain is available, we can extract word candidates from the corpus with a certain criterion and use them in application. An advantage of this method is that all of the occurrences of each candidate in the corpus are considered. Nagata (1996) proposed a method calculating word candidates with their uni-gram frequencies using a forwardbackward algorithm. and reported that the accuracy of a morphological analyzer can be improved by adding the extracted words to its vocabulary. Comparing our method with this research, it can be said that our method executes the word candidate enumeration and their context calculation dynamically at the time of the solution search for an NLP task, phoneme-to-text transcription here. One of the advantages of our framework is that the system considers all substrings in the corpus as word candidates (that is the recall of the word extraction is 100%) and a higher accuracy is expected using a consistent criterion, namely the generation probability, for the word candidate enumeration process and solution search process. The framework we propose in this paper, enlarging the vocabulary to an almost infinite size, is general and applicable to many other NLP systems based on the noisy channel model, such as speech recognition, statistical machine translation, etc. Our framework is potentially capable of improving the accuracies in these tasks as well. 
8 Conclusion In this paper we proposed a generative NLP system with an almost infinite vocabulary for languages without obvious word boundary information in written texts. In the experiments we compared four phoneme-to-text transcription systems in Japanese. The transcription system equipped with an infinite vocabulary showed a higher accuracy than the baseline model and the model based on the existing method. These results show the efficacy of our method and tell us that our approach is promising for the phoneme-to-text transcription task or other NLP systems based on the noisy channel model. References Alfred V. Aho. 1990. Algorithms for finding patterns in strings. In Handbook of Theoretical Computer Science, volume A: Algorithms and Complexity, pages 273–278. Elseveir Science Publishers. Issam Bazzi and James R. Glass. 2000. Modeling outof-vocabulary words for robust speech recognition. In Proc. of the ICSLP2000. Peter F. Brown, John Cocke, Stephen A. Della Pietra, Vincent J. Della Pietra, Frederick Jelinek, John D. Lafferty, Robert L. Mercer, and Paul S. Roossin. 1990. A statistical approach to machine translation. Computational Linguistics, 16(2):79–85. Anne-Marie Derouault and Bernard Merialdo. 1986. Natural language modeling for phoneme-to-text transcription. IEEE PAMI, 8(6):742–749. Frederick Jelinek, Robert L. Mercer, and Salim Roukos. 1991. Principles of lexical language modeling for speech recognition. In Advances in Speech Signal Processing, chapter 21, pages 651– 699. Dekker. Frederick Jelinek. 1985. Self-organized language modeling for speech recognition. Technical report, IBM T. J. Watson Research Center. Mark D. Kernighan, Kenneth W. Church, and William A. Gale. 1990. A spelling correction program based on a noisy channel model. In Proc. of the COLING90, pages 205–210. Adam Kilgarriff and Gregory Grefenstette. 2003. Introduction to the special issue on the web as corpus. Computational Linguistics, 29(3):333–347. Ken Lunde. 1998. CJKV Information Processing. O’Reilly & Associates. Shinsuke Mori and Daisuke Takuma. 2004. Word n-gram probability estimation from a Japanese raw corpus. In Proc. of the ICSLP2004. Shinsuke Mori, Tsuchiya Masatoshi, Osamu Yamaji, and Makoto Nagao. 1999. Kana-kanji conversion by a stochastic model. Transactions of IPSJ, 40(7):2946–2953. (in Japanese). Masaaki Nagata. 1994. A stochastic Japanese morphological analyzer using a forward-DP backward-A n-best search algorithm. In Proc. of the COLING94, pages 201–207. Masaaki Nagata. 1996. Automatic extraction of new words from Japanese texts using generalized forward-backward search. In EMNLP. 736
Proceedings of the 21st International Conference on Computational Linguistics and 44th Annual Meeting of the ACL, pages 737–744, Sydney, July 2006. c⃝2006 Association for Computational Linguistics Automatic Generation of Domain Models for Call Centers from Noisy Transcriptions Shourya Roy and L Venkata Subramaniam IBM Research India Research Lab IIT Delhi, Block-1 New Delhi 110016 India rshourya,[email protected] Abstract Call centers handle customer queries from various domains such as computer sales and support, mobile phones, car rental, etc. Each such domain generally has a domain model which is essential to handle customer complaints. These models contain common problem categories, typical customer issues and their solutions, greeting styles. Currently these models are manually created over time. Towards this, we propose an unsupervised technique to generate domain models automatically from call transcriptions. We use a state of the art Automatic Speech Recognition system to transcribe the calls between agents and customers, which still results in high word error rates (40%) and show that even from these noisy transcriptions of calls we can automatically build a domain model. The domain model is comprised of primarily a topic taxonomy where every node is characterized by topic(s), typical Questions-Answers (Q&As), typical actions and call statistics. We show how such a domain model can be used for topic identification of unseen calls. We also propose applications for aiding agents while handling calls and for agent monitoring based on the domain model. 1 Introduction Call center is a general term for help desks, information lines and customer service centers. Many companies today operate call centers to handle customer issues. It includes dialog-based (both voice and online chat) and email support a user receives from a professional agent. Call centers have become a central focus of most companies as they allow them to be in direct contact with their customers to solve product-related and servicesrelated issues and also for grievance redress. A typical call center agent handles over a hundred calls in a day. Gigabytes of data is produced every day in the form of speech audio, speech transcripts, email, etc. This data is valuable for doing analysis at many levels, e.g., to obtain statistics about the type of problems and issues associated with different products and services. This data can also be used to evaluate agents and train them to improve their performance. Today’s call centers handle a wide variety of domains such as computer sales and support, mobile phones and apparels. To analyze the calls in any domain, analysts need to identify the key issues in the domain. Further, there may be variations within a domain, say mobile phones, based on the service providers. The analysts generate a domain model through inspection of the call records (audio, transcripts and emails). Such a model can include a listing of the call categories, types of problems solved in each category, listing of the customer issues, typical questions-answers, appropriate call opening and closing styles, etc. In essence, these models provide a structured view of the domain. Manually building such models for various domains may become prohibitively resource intensive. Another important point to note is that these models are dynamic in nature and change over time. As a new version of a mobile phone is introduced, software is launched in a country, a sudden attack of a virus, the model may need to be refined. 
Hence, an automated way of creating and maintaining such a model is important. In this paper, we have tried to formalize the essential aspects of a domain model. It comprises of primarily a topic taxonomy where every node is characterized by topic(s), typical Questions737 Answers (Q&As), typical actions and call statistics. To build the model, we first automatically transcribe the calls. Current automatic speech recognition technology for telephone calls have moderate to high word error rates (Padmanabhan et al., 2002). We applied various feature engineering techniques to combat the noise introduced by the speech recognition system and applied text clustering techniques to group topically similar calls together. Using clustering at different granularity and identifying the relationship between groups at different granularity we generate a taxonomy of call types. This taxonomy is augmented with various meta information related to each node as mentioned above. Such a model can be used for identification of topics of unseen calls. Towards this, we envision an aiding tool for agents to increase agent effectiveness and an administrative tool for agent appraisal and training. Organization of the paper: We start by describing related work in relevant areas. Section 3 talks about the call center dataset and the speech recognition system used. The following section contains the definition and describes an unsupervised mechanism for building a topical model from automatically transcribed calls. Section 5 demonstrates the usability of such a topical model and proposes possible applications. Section 6 concludes the paper. 2 Background and Related Work In this work, we are trying to bridge the gap between a few seemingly unrelated research areas viz. (1) Automatic Speech Recognition(ASR), (2) Text Clustering and Automatic Taxonomy Generation (ATG) and (3) Call Center Analytics. We present some relevant work done in each of these areas. Automatic Speech Recognition(ASR): Automatic transcription of telephonic conversations is proven to be more difficult than the transcription of read speech. According to (Padmanabhan et al., 2002), word-error rates are in the range of 78% for read speech whereas for telephonic speech it is more than 30%. This degradation is due to the spontaneity of speech as well as the telephone channel. Most speech recognition systems perform well when trained for a particular accent (Lawson et al., 2003). However, with call centers now being located in different parts of the world, the requirement of handling different accents by the same speech recognition system further increases word error rates. Automatic Taxonomy Generation (ATG): In recent years there has been some work relating to mining domain specific documents to build an ontology. Mostly these systems rely on parsing (both shallow and deep) to extract relationships between key concepts within the domain. The ontology is constructed from this by linking the extracted concepts and relations (Jiang and Tan, 2005). However, the documents contain well formed sentences which allow for parsers to be used. Call Center Analytics: A lot of work on automatic call type classification for the purpose of categorizing calls (Tang et al., 2003), call routing (Kuo and Lee, 2003; Haffner et al., 2003), obtaining call log summaries (Douglas et al., 2005), agent assisting and monitoring (Mishne et al., 2005) has appeared in the past. 
In some cases, they have modeled these as text classification problems where topic labels are manually obtained (Tang et al., 2003) and used to put the calls into different buckets. Extraction of key phrases, which can be used as features, from the noisy transcribed calls is an important issue. For manually transcribed calls, which do not have any noise, in (Mishne et al., 2005) a phrase level significance estimate is obtained by combining word level estimates that were computed by comparing the frequency of a word in a domain-specific corpus to its frequency in an open-domain corpus. In (Wright et al., 1997) phrase level significance was obtained for noisy transcribed data where the phrases are clustered and combined into finite state machines. Other approaches use n-gram features with stop word removal and minimum support (Kuo and Lee, 2003; Douglas et al., 2005). In (Bechet et al., 2004) call center dialogs have been clustered to learn about dialog traces that are similar. Our Contribution: In the call center scenario, the authors are not aware of any work that deals with automatically generating a taxonomy from transcribed calls. In this paper, we have tried to formalize the essential aspects of a domain model. We show an unsupervised method for building a domain model from noisy unlabeled data, which is available in abundance. This hierarchical domain model contains summarized topic specific details for topics of different granularity. We show how such a model can be used for topic identification of unseen calls. We propose two applications for 738 aiding agents while handling calls and for agent monitoring based on the domain model. 3 Issues with Call Center Data We obtained telephonic conversation data collected from the internal IT help desk of a company. The calls correspond to users making specific queries regarding problems with computer software such as Lotus Notes, Net Client, MS Office, MS Windows, etc. Under these broad categories users faced specific problems e.g. in Lotus Notes users had problems with their passwords, mail archiving, replication, installation, etc. It is possible that many of the sub problem categories are similar, e.g. password issues can occur with Lotus Notes, Net Client and MS Windows. We obtained automatic transcriptions of the dialogs using an Automatic Speech Recognition (ASR) system. The transcription server, used for transcribing the call center data, is an IBM research prototype. The speech recognition system was trained on 300 hours of data comprising of help desk calls sampled at 6KHz. The transcription output comprises information about the recognized words along with their durations, i.e., beginning and ending times of the words. Further, speaker turns are marked, so the agent and customer portions of speech are demarcated without exactly naming which part is the agent and which the customer. It should be noted that the call center agents and the customers were of different nationalities having varied accents and this further made the job of the speech recognizer hard. The resultant transcriptions have a word error rate of about 40%. This high error rate implies that many wrong deletions of actual words and wrong insertion of dictionary words have taken place. Also often speaker turns are not correctly identified and voice portions of both speakers are assigned to a single speaker. Apart from speech recognition errors there are other issues related to spontaneous speech recognition in the transcriptions. 
There are no punctuation marks, silence periods are marked but it is not possible to find sentence boundaries based on these. There are repeats, false starts, a lot of pause filling words such as um and uh, etc. Portion of a transcribed call is shown in figure 1. Generally, at these noise levels such data is hard to interpret by a human. We used over 2000 calls that have been automatically transcribed for our analysis. The average duration of a call is about 9 SPEAKER 1: windows thanks for calling and you can learn yes i don’t mind it so then i went to SPEAKER 2: well and ok bring the machine front end loaded with a standard um and that’s um it’s a desktop machine and i did that everything was working wonderfully um I went ahead connected into my my network um so i i changed my network settings to um to my home network so i i can you know it’s showing me for my workroom um and then it is said it had to reboot in order for changes to take effect so i rebooted and now it’s asking me for a password which i never i never said anything up SPEAKER 1: ok just press the escape key i can doesn’t do anything can you pull up so that i mean Figure 1: Partial transcript of a help desk dialog minutes. For 125 of these calls, call topics were manually assigned. 4 Generation of Domain Model Fig 2 shows the steps for generating a domain model in the call center scenario. This section explains different modules shown in the figure. 4.1 Description of Model We propose the Domain Model to be comprised of primarily a topic taxonomy where every node is characterized by topic(s), typical QuestionsAnswers (Q&As), typical actions and call statistics. Generating such a taxonomy manually from scratch requires significant effort. Further, the changing nature of customer problems requires frequent changes to the taxonomy. In the next subsection, we show that meaningful taxonomies can be built without any manual supervision from a collection of noisy call transcriptions. 4.2 Taxonomy Generation As mentioned in section 3, automatically transcribed data is noisy and requires a good amount of feature engineering before applying any text analytics technique. Each transcription is passed through a Feature Engineering Component to perform noise removal. We performed a sequence of cleansing operations to remove stopwords such as the, of, seven, dot, january, hello. We also remove pause filling words such as um, uh, huh . The remaining words in every transcription are passed through a stemmer (using Porter’s stemming algo739 Stopword Removal N-gram Extraction Database, archive, replicate Can you access yahoo? Is modem on? Call statistics Feature Engineering ASR Clusterer Taxonomy Builder Model Builder Component Clusters of different granularity Voice help-desk data 1 2 3 4 5 Figure 2: 5 Steps to automatically build domain model from a collection of telephonic conversation recordings rithm 1) to extract the root form of every word e.g. call from called. We extract all n-grams which occur more frequently than a threshold and do not contain any stopword. We observed that using all n-grams without thresholding deteriorates the quality of the generated taxonomy. a t & t, lotus notes, and expense reimbursement are some examples of extracted n-grams. The Clusterer generates individual levels of the taxonomy by using text clustering. We used CLUTO package 2 for doing text clustering. We experimented with all the available clustering functions in CLUTO but no one clustering algorithm consistently outperformed others. 
Also, there was not much difference among the various algorithms on the available goodness metrics. Hence, we used the default repeated bisection technique with the cosine function as the similarity metric. We ran this algorithm on a collection of 2000 transcriptions multiple times. First we generate 5 clusters from the 2000 transcriptions. Next we generate 10 clusters from the same set of transcriptions, and so on; at the finest level we split them into 100 clusters. To generate the topic taxonomy, these sets containing 5 to 100 clusters are passed through the Taxonomy Builder component. This component (1) removes clusters containing fewer than n documents and (2) introduces a directed edge from cluster v1 to cluster v2 if v1 and v2 share at least one document and v2 is one level finer than v1. Now v1 and v2 become nodes in adjacent layers of the taxonomy. Here we found the taxonomy to be a tree, but in general it can be a DAG. From now on, each node in the taxonomy will be referred to as a topic. This top-down approach was preferred over a bottom-up approach because it not only gives the linkage between clusters of various granularity but also gives the most descriptive and discriminative set of features associated with each node. CLUTO defines descriptive (and discriminative) features as the set of features which contribute the most to the average similarity (dissimilarity) between documents belonging to the same cluster (different clusters). In general, there is a large overlap between descriptive and discriminative features. These features, the topic features, are later used for generating topic specific information. Figure 3 shows a part of the taxonomy obtained from the IT help desk dataset. The labels shown in Figure 3 are the most descriptive and discriminative features of a node given the labels of its ancestors.
[Figure 3: A part of the automatically generated ontology along with descriptive features. Node labels, as extracted: atandt connect lotusnot click client connect wireless network default properti net netclient localarea areaconnect router cabl databas server folder copi archiv replic mail slash folder file archiv databas servercopi localcopi]
4.3 Topic Specific Information
The Model Builder component in Figure 2 creates an augmented taxonomy with topic specific information extracted from the noisy transcriptions. Topic specific information includes phrases that describe typical actions, typical Q&As and call statistics (for each topic in the taxonomy).
Typical Actions: Actions correspond to typical issues raised by the customer, problems and strategies for solving them. We observed that action-related phrases are mostly found around topic features. Hence, we start by searching for and collecting all the phrases containing topic words from the documents belonging to the topic. We define a 10-word window around the topic features and harvest all phrases from the documents. The set of collected phrases is then searched for n-grams with support above a preset threshold. For example, both the 10-grams note in click button to set up for all stops and to action settings and click the button to set up increase the support count of the 5-gram click button to set up. The search for the n-grams proceeds based on a threshold on a distance function that counts the insertions necessary to match the two phrases. For example, can you is closer to can <...> you than to can <...> <...> you.
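To make the insertion-counting distance concrete, the following is a minimal sketch of one way it could be implemented; the function name, the greedy matching strategy and the toy phrases are our own illustration under these assumptions and are not taken from the paper.

```python
def insertion_distance(ngram, phrase):
    """Minimal number of extra tokens that must be skipped to match
    `ngram` as an in-order subsequence of `phrase`; None if no match."""
    best = None
    for start in range(len(phrase) - len(ngram) + 1):
        if phrase[start] != ngram[0]:
            continue
        i, j = start + 1, 1                 # greedy left-to-right match
        while i < len(phrase) and j < len(ngram):
            if phrase[i] == ngram[j]:
                j += 1
            i += 1
        if j == len(ngram):                 # all n-gram tokens matched
            gaps = i - start - len(ngram)   # tokens inserted inside the window
            if best is None or gaps < best:
                best = gaps
    return best

# "can you" needs 1 insertion to match "can <...> you" but 2 for "can <...> <...> you"
print(insertion_distance(["can", "you"], ["can", "we", "you"]))          # 1
print(insertion_distance(["can", "you"], ["can", "we", "help", "you"]))  # 2
```

A harvested phrase whose distance to an n-gram falls below the threshold would then increment that n-gram's support count.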
Longer n-grams are allowed a higher distance threshold than shorter n-grams. After this stage we have extracted all the phrases that frequently occur within the cluster. In the second step, phrase tiling and ordering, we prune and merge the extracted phrases and order them. Tiling constructs longer n-grams from sequences of overlapping shorter n-grams. We noted that the phrases carry more meaning if they are ordered by their appearance. For example, if go to the program menu typically appears before select options from program menu, then it is more useful to present them in the order of their appearance. We establish this order based on the average turn number at which a phrase occurs.
Typical Questions-Answers: To understand a customer's issue the agent needs to ask the right set of questions. Asking the right questions is the key to effective call handling. We search for all the questions within a topic by defining question templates. The question templates basically look for all phrases beginning with how, what, can I, can you, were there, etc. This set comprised 127 such templates for questions. All 10-word phrases conforming to the question templates are collected, and phrase harvesting, tiling and ordering is done on them as described above. For the answers we search for phrases in the vicinity immediately following the question.
thank you for calling this is
problem with our serial number software
Q: may i have your serial number
Q: how may i help you today
A: i'm having trouble with my at&t network
............
............
click on advance log in properties
i want you to right click
create a connection across an existing internet connection
in d. n. s. use default network
............
............
Q: would you like to have your ticket
A: ticket number is two
thank you for calling and have a great day
thank you for calling bye bye
anything else i can help you with
have a great day you too
Figure 4: Topic specific information
Figure 4 shows a part of the topic specific information that has been generated for the default properti node in Figure 3. There are 123 documents in this node. We have selected phrases that occur at least 5 times in these 123 documents. We have captured the general opening and closing styles used by the agents in addition to typical actions and Q&As for the topic. In this node the documents pertain to queries on setting up a new A T & T network connection. Most of the topic specific issues that have been captured relate to the agent leading the customer through the steps for setting up the connection. In the absence of a tagged dataset we could not quantify this observation. However, when we compared the automatically generated topic specific information to the information extracted from the hand-labeled calls, we noted that almost all the issues had been captured. In fact there are some issues in the automatically generated set that are missing from the hand-labeled set. The following observations can be made from the topic specific information that has been generated:
• The phrases that have been captured turn out to be quite well formed. Even though the ASR system introduces a lot of noise, the resulting phrases, when collected over the clusters, are clean.
• Some phrases appear in multiple forms: thank you for calling how can i help you; how may i help you today; thanks for calling can i be of help today. While tiling is able to merge matching phrases, semantically similar phrases are not merged.
• The list of topic specific phrases, as already noted, matched and at times was more exhaustive than similar hand-generated sets.
Call Statistics: We compute various aggregate statistics for each node in the topic taxonomy as part of the model, viz. (1) average call duration (in seconds), (2) average transcription length (number of words), (3) average number of speaker turns and (4) number of calls. We observed that call duration and the number of speaker turns vary significantly from one topic to another. Figure 5 shows average call duration and the corresponding average transcription length for a few interesting topics.
[Figure 5: Call duration (in seconds) and transcription length (number of words) for some topic clusters.]
It can be seen that in topic cluster-1, which is about expense reimbursement and related queries, most of the queries can be answered quickly in standard ways. However, some connection related issues (topic cluster-5) require more information from customers and are generally longer in duration. Interestingly, topic cluster-2 and topic cluster-4 have similar average call durations but quite different average transcription lengths. On investigation we found that cluster-4 is primarily about printer related queries, where the customer often is not ready with details like the printer name or the IP address of the printer, resulting in long hold times, whereas for cluster-2, which is about online courses, users generally have details like the course name, etc. ready with them and are interactive in nature.
We build a hierarchical index of type {topic→information} based on this automatically generated model for each topic in the topic taxonomy. An entry of this index contains topic specific information, viz. (1) typical Q&As, (2) typical actions, and (3) call statistics. As we go down this hierarchical index the information associated with each topic becomes more and more specific. In (Mishne et al., 2005) a manually developed collection of issues and their solutions is indexed so that they can be matched to the call topic. In our work the indexed collection is automatically obtained from the call transcriptions. Also, our index is more useful because of its hierarchical nature, where information can be obtained for topics of various granularity, unlike (Mishne et al., 2005) where there is no concept of topics at all.
5 Application of Domain Model
Information retrieval from spoken dialog data is an important requirement for call centers. Call centers constantly endeavor to improve call handling efficiency and identify key problem areas. The described model provides a comprehensive and structured view of the domain that can be used to do both. It encodes three levels of information about the domain:
• General: The taxonomy along with the labels gives a general view of the domain. The general information can be used to monitor trends in how the number of calls in different categories changes over time, e.g. daily, weekly, monthly.
• Topic level: This includes a listing of the specific issues related to the topic, typical customer questions and problems, usual strategies for solving the problems, average call durations, etc. It can be used to identify primary issues, problems and solutions pertaining to any category.
• Dialog level: This includes information on how agents typically open and close calls, ask questions and guide customers, the average number of speaker turns, etc. The dialog level information can be used to monitor whether agents are using courteous language in their calls, whether they ask pertinent questions, etc.
The {topic→information} index requires identification of the topic of each call to make use of the information available in the model. Below we show examples of the use of the model for topic identification.
5.1 Topic Identification
Many customer complaints can be categorized into coarse as well as fine topic categories by listening to only the initial part of the call. Exploiting this observation, we do fast topic identification using a simple technique based on the distribution of topic specific descriptive and discriminative features (Section 4.2) within the initial portion of the call. Figure 6 shows the variation in prediction accuracy using this technique as a function of the fraction of the call observed, for 5, 10 and 25 clusters, verified over the 125 hand-labeled transcriptions. It can be seen that, at the coarse level, nearly 70% prediction accuracy can be achieved by listening to the initial 30% of the call, and more than 80% of the calls can be correctly categorized by listening only to the first half of the call. Also, calls related to some categories can be detected more quickly than others, as shown in Figure 7.
[Figure 6: Variation in prediction accuracy (%) with the fraction of the call observed (%) for 5, 10 and 25 clusters.]
[Figure 7: Cluster-wise variation in prediction accuracy for 10 clusters, with 25%, 50%, 75% and 100% of the call observed.]
5.2 Aiding and Administrative Tool
Using the techniques presented in this paper so far, it is possible to put together many applications for a call center. In this section we give some example applications and describe ways in which they can be implemented. Based on the hierarchical model described in Section 4 and the topic identification mentioned in the last sub-section, we describe (1) a tool capable of aiding agents in efficient handling of calls, to improve customer satisfaction as well as to reduce call handling time, and (2) an administrative tool for agent appraisal and training.
Agent aiding is done based on the automatically generated domain model. The hierarchical nature of the model helps to provide generic to specific information to the agent as the call progresses. During call handling the agent can be shown the automatically generated taxonomy and can get the relevant information associated with different nodes by, say, clicking on the nodes. For example, once the agent identifies a call to be about {lotusnot} in Figure 3, he can see the generic Lotus Notes related Q&As and actions. By interacting further with the customer the agent identifies it to be of the {copi archiv replic} topic, and the typical Q&As and actions change accordingly. Finally, the agent narrows the topic down to {servercopi localcopi} and suggests a solution for the replication problem in Lotus Notes. The concept of the administrative tool is primarily driven by the dialog and topic level information.
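Both the agent-aiding and the administrative tool presuppose that a call's topic can be identified from its initial portion, as in Section 5.1. The sketch below illustrates one plausible way to score topics over a call prefix using the descriptive and discriminative features of Section 4.2; the function names and the raw-count scoring are our own assumptions, since the paper does not spell out its exact scoring function.

```python
from collections import Counter
from typing import Dict, List

def identify_topic(call_tokens: List[str],
                   topic_features: Dict[str, List[str]],
                   fraction_observed: float = 0.3) -> str:
    """Score every topic by how often its descriptive/discriminative
    features occur in the observed prefix of the call and return the
    highest-scoring topic."""
    cutoff = max(1, int(len(call_tokens) * fraction_observed))
    prefix_counts = Counter(call_tokens[:cutoff])
    scores = {topic: sum(prefix_counts[feature] for feature in features)
              for topic, features in topic_features.items()}
    return max(scores, key=scores.get)
```

Running this repeatedly as more of the call is observed would give the kind of accuracy-versus-fraction-observed curves reported in Figures 6 and 7.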
We envision this post-processing tool being used for comparing completed individual calls with their corresponding topics, based on the distribution of Q&As, actions and call statistics. Based on the topic level information we can check whether the agent identified the issues and offered the known solutions on a given topic. We can use the dialog level information to check whether the agent used courteous opening and closing sentences. Calls that deviate from the topic specific distributions can be identified in this way, and agents handling these calls can be offered further training on the subject matter, courtesy, etc. This kind of post-processing tool can also help us to catch abnormally long calls, agents with a high average call handle time, etc.
6 Discussion and Future Work
We have shown that it is possible to retrieve useful information from noisy transcriptions of call center voice conversations. We have shown that the extracted information can be put in the form of a model that succinctly captures the domain and provides a comprehensive view of it. We briefly showed through experiments that this model is an accurate description of the domain. We have also suggested useful scenarios where the model can be used to aid and improve call center performance. A call center handles several hundred thousand calls per year in various domains. It is very difficult to monitor performance based on manual processing of the calls. The framework presented in this paper allows a large part of this work to be automated. A domain specific model that is automatically learnt and updated based on the voice conversations allows the call center to identify problem areas quickly and allocate resources more effectively.
In the future we would like to semantically cluster the topic specific information so that redundant topics are eliminated from the list. We can use Automatic Taxonomy Generation (ATG) algorithms for document summarization (Kummamuru et al., 2004) to build topic taxonomies. We would also like to link our model to technical manuals, catalogs, etc. already available for the different topics in the given domain.
Acknowledgements: We thank our colleagues Raghuram Krishnapuram and Sreeram Balakrishnan for helpful discussions. We also thank Olivier Siohan from the IBM T. J. Watson Research Center for providing us with call transcriptions.
References
F. Bechet, G. Riccardi and D. Hakkani-Tur. 2004. Mining Spoken Dialogue Corpora for System Evaluation and Modeling. Conference on Empirical Methods in Natural Language Processing (EMNLP). July, Barcelona, Spain.
S. Douglas, D. Agarwal, T. Alonso, R. M. Bell, M. Gilbert, D. F. Swayne and C. Volinsky. 2005. Mining Customer Care Dialogs for "Daily News". IEEE Trans. on Speech and Audio Processing, 13(5):652–660.
P. Haffner, G. Tur and J. H. Wright. 2003. Optimizing SVMs for Complex Call Classification. IEEE International Conference on Acoustics, Speech, and Signal Processing. April 6-10, Hong Kong.
X. Jiang and A.-H. Tan. 2005. Mining Ontological Knowledge from Domain-Specific Text Documents. IEEE International Conference on Data Mining, November 26-30, New Orleans, Louisiana, USA.
K. Kummamuru, R. Lotlikar, S. Roy, K. Singal and R. Krishnapuram. 2004. A hierarchical monothetic document clustering algorithm for summarization and browsing search results. International Conference on World Wide Web. New York, NY, USA.
H.-K. J. Kuo and C.-H. Lee. 2003. Discriminative Training of Natural Language Call Routers. IEEE Trans. on Speech and Audio Processing, 11(1):24–35.
A. D. Lawson, D. M. Harris and J. J. Grieco. 2003. Effect of Foreign Accent on Speech Recognition in the NATO N-4 Corpus. Eurospeech. September 14, Geneva, Switzerland.
G. Mishne, D. Carmel, R. Hoory, A. Roytman and A. Soffer. 2005. Automatic Analysis of Call-center Conversations. Conference on Information and Knowledge Management. October 31-November 5, Bremen, Germany.
M. Padmanabhan, G. Saon, J. Huang, B. Kingsbury and L. Mangu. 2002. Automatic Speech Recognition Performance on a Voicemail Transcription Task. IEEE Trans. on Speech and Audio Processing, 10(7):433–442.
M. Tang, B. Pellom and K. Hacioglu. 2003. Call-type Classification and Unsupervised Training for the Call Center Domain. Automatic Speech Recognition and Understanding Workshop. November 30-December 4, St. Thomas, US Virgin Islands.
J. Wright, A. Gorin and G. Riccardi. 1997. Automatic Acquisition of Salient Grammar Fragments for Call-type Classification. Eurospeech. September, Rhodes, Greece.
Proceedings of the 21st International Conference on Computational Linguistics and 44th Annual Meeting of the ACL, pages 745–752, Sydney, July 2006. c⃝2006 Association for Computational Linguistics Proximity in Context: an empirically grounded computational model of proximity for processing topological spatial expressions∗ John D. Kelleher Dublin Institute of Technology Dublin, Ireland [email protected] Geert-Jan M. Kruijff DFKI GmbH Saarbru¨cken, Germany [email protected] Fintan J. Costello University College Dublin Dublin, Ireland [email protected] Abstract The paper presents a new model for contextdependent interpretation of linguistic expressions about spatial proximity between objects in a natural scene. The paper discusses novel psycholinguistic experimental data that tests and verifies the model. The model has been implemented, and enables a conversational robot to identify objects in a scene through topological spatial relations (e.g. “X near Y”). The model can help motivate the choice between topological and projective prepositions. 1 Introduction Our long-term goal is to develop conversational robots with which we can have natural, fluent situated dialog. An inherent aspect of such situated dialog is reference to aspects of the physical environment in which the agents are situated. In this paper, we present a computational model which provides a context-dependent analysis of the environment in terms of spatial proximity. We show how we can use this model to ground spatial language that uses topological prepositions (“the ball near the box”) to identify objects in a scene. Proximity is ubiquitous in situated dialog, but there are deeper “cognitive” reasons for why we need a context-dependent model of proximity to facilitate fluent dialog with a conversational robot. This has to do with the cognitive load that processing proximity expressions imposes. Consider the examples in (1). Psycholinguistic data indicates that a spatial proximity expression (1b) presents a heavier cognitive load than a referring expression identifying an object purely on physical features (1a) yet is easier to process than a projective expression (1c) (van der Sluis and Krahmer, 2004). ∗The research reported here was supported by the CoSy project, EU FP6 IST ”Cognitive Systems” FP6-004250-IP. (1) a. the blue ball b. the ball near the box c. the ball to the right of the box One explanation for this preference is that feature-based descriptions are easier to resolve perceptually, with a further distinction among features as given in Figure 1, cf. (Dale and Reiter, 1995). On the other hand, the interpretation and realization of spatial expressions requires effort and attention (Logan, 1994; Logan, 1995). Figure 1: Cognitive load Similarly we can distinguish between the cognitive loads of processing different forms of spatial relations. Focusing on static prepositions, topological prepositions have a lower cognitive load than projective prepositions. Topological prepositions (e.g. “at”, “near”) describe proximity to an object. Projective prepositions (e.g. “above”) describe a region in a particular direction from the object. Projective prepositions impose a higher cognitive load because we need to consider different spatial frames of reference (Krahmer and Theune, 1999; Moratz and Tenbrink, 2006). Now, if we want a robot to interact with other agents in a way that obeys the Principle of Minimal Cooperative Effort (Clark and Wilkes-Gibbs, 1986), it should adopt the simplest means to (spatially) refer to an object. 
However, research on spatial language in human-robot interaction has primarily focused on the use of projective prepositions. We currently lack a comprehensive model for topological prepositions. Without such a model, a robot cannot interpret spatial proximity expressions nor motivate their contextually and pragmatically appropriate use. In this paper, we present a model that addresses this problem. The model uses energy functions, modulated by visual and discourse salience, to model how spatial templates associated with other landmarks may interfere to establish what are contextually appropriate ways to locate a target relative to these landmarks. The model enables grounding of spatial expressions that use spatial proximity to refer to objects in the environment. We focus on expressions using topological prepositions such as "near" or "at".
Terminology. We use the term target (T) to refer to the object that is being located by a spatial expression, and landmark (L) to refer to the object relative to which the target's location is described: "[The man]T near [the table]L." A distractor is any object in the visual context that is neither landmark nor target.
Overview. §2 presents contextual effects we can observe in grounding spatial expressions, including the effect of interference on whether two objects may be considered proximal. §3 discusses a model that accounts for all these effects, and §4 describes an experiment to test the model. §5 shows how we use the model in linguistic interpretation.
2 Data
Below we discuss previous psycholinguistic experiments, focusing on how contextual factors such as distance, size, and salience may affect proximity. We also present novel examples, showing that the location of other objects in a scene may interfere with the acceptability of a proximal description to locate a target relative to a landmark. These examples motivate the model in §3.
[Figure 2: 7-by-7 cell grid with mean goodness ratings for the relation "the X is near O" as a function of the position occupied by X; ratings range from 8.52 in the cell adjacent to the central O down to 1.74 at the periphery.]
Spatial reasoning is a complex activity that involves at least two levels of processing: a geometric level where metric, topological, and projective properties are handled (Herskovits, 1986), and a functional level where the normal function of an entity affects the spatial relationships attributed to it in a context, cf. (Coventry and Garrod, 2004). We focus on geometric factors. Although a lot of experimental work has been done on spatial reasoning and language (cf. (Coventry and Garrod, 2004)), only Logan and Sadler (1996) examined topological prepositions in a context where functional factors were excluded. They introduced the notion of a spatial template. The template is centred on the landmark and identifies, for each point in its space, the acceptability of the spatial relationship between the landmark and a target appearing at that point being described by the preposition. Logan & Sadler examined various spatial prepositions this way. In their experiments, a human subject was shown sentences of the form "the X is [relation] the O", each with a picture of a spatial configuration of an O in the center of an invisible 7-by-7 cell grid, and an X in one of the 48 surrounding positions. The subject then had to rate how well the sentence described the picture, on a scale from 1 (bad) to 9 (good). Figure 2 gives the mean goodness rating for the relation "near to" as a function of the position occupied by X (Logan and Sadler, 1996). It is clear from Figure 2 that ratings diminish as the distance between X and O increases, but also that even at the extremes of the grid the ratings were still above 1 (the minimum rating).
Besides distance there are also other factors that determine the applicability of a proximal relation. For example, given prototypical size, the region denoted by "near the building" is larger than that of "near the apple" (Gapp, 1994). Moreover, an object's salience influences the determination of the proximal region associated with it (Regier and Carlson, 2001; Roy, 2002). Finally, the two scenes in Figure 3 show interference as a contextual factor. For the scene on the left we can use "the blue box is near the black box" to describe object (c). This seems inappropriate in the scene on the right. Placing an object (d) beside (b) appears to interfere with the appropriateness of using a proximal relation to locate (c) relative to (b), even though the absolute distance between (c) and (b) has not changed.
[Figure 3: Proximity and distance.]
Thus, there is empirical evidence for several contextual factors determining the applicability of a proximal description. We argued that the location of other distractor objects in context may also interfere with this applicability. The model in §3 captures all these factors, and is evaluated in §4.
3 Computational Model
Below we describe a model of relative proximity that uses (1) the distance between objects, (2) the size and salience of the landmark object, and (3) the location of other objects in the scene. Our model is based on first computing absolute proximity between each point and each landmark in a scene, and then combining or overlaying the resulting absolute proximity fields to compute the relative proximity of each point to each landmark.
3.1 Computing absolute proximity fields
We first compute for each landmark an absolute proximity field giving each point's proximity to that landmark, independent of proximity to any other landmark. We compute fields on the projection of the scene onto the 2D plane, a 2D array ARRAY of points. At each point P in ARRAY, the absolute proximity for landmark L is
prox_abs(P, L) = (1 − dist_normalised(L, P, ARRAY)) × salience(L)    (1)
In this equation the absolute proximity for a point P and a landmark L is a function of both the distance between the point and the location of the landmark, and the salience of the landmark. To represent distance we use a normalised distance function dist_normalised(L, P, ARRAY), which returns a value between 0 and 1. (We normalise by computing the distance between the two points and dividing it by the maximum distance between L and any point in the scene.) The smaller the distance between L and P, the higher the absolute proximity value returned, i.e. the more acceptable it is to say that P is close to L. In this way, this component of the absolute proximity field captures the gradual gradation in applicability evident in Logan and Sadler (1996). We model the influence of visual and discourse salience on absolute proximity as a function salience(L), returning a value between 0 and 1 that represents the relative salience of the landmark L in the scene (Equation 2). The relative salience of an object is the average of its visual salience (S_vis) and discourse salience (S_disc):
salience(L) = (S_vis(L) + S_disc(L)) / 2    (2)
Visual salience S_vis is computed using the algorithm of Kelleher and van Genabith (2004). Computing a relative salience for each object in a scene is based on its perceivable size and its centrality relative to the viewer's focus of attention. The algorithm returns scores in the range of 0 to 1. As the algorithm captures object size, we can model the effect of landmark size on proximity through the salience component of absolute proximity. The discourse salience (S_disc) of an object is computed based on recency of mention (Hajicová, 1993), except that we represent the maximum overall salience in the scene as 1, and use 0 to indicate that the landmark is not salient in the current context.
[Figure 4: Absolute proximity ratings for landmark L centered in a 2D plane, for points ranging from the plane's upper-left corner (<-3,-3>) to its lower-right corner (<3,3>), computed with salience values of 1, 0.6 and 0.5.]
Figure 4 shows computed absolute proximity with salience values of 1, 0.6, and 0.5, for points from the upper-left to the lower-right of a 2D plane, with the landmark at the center of that plane. The graph shows how salience influences absolute proximity in our model: for a landmark with high salience, points far from the landmark can still have high absolute proximity to it.
3.2 Computing relative proximity fields
Once we have constructed absolute proximity fields for the landmarks in a scene, our next step is to overlay these fields to produce a measure of relative proximity to each landmark at each point. For this we first select a landmark, and then iterate over each point in the scene, comparing the absolute proximity of the selected landmark at that point with the absolute proximity of all other landmarks at that point. The relative proximity of a selected landmark at a point is equal to the absolute proximity field for that landmark at that point, minus the highest absolute proximity field for any other landmark at that point (see Equation 3).
prox_rel(P, L) = prox_abs(P, L) − max_{L_X ≠ L} prox_abs(P, L_X)    (3)
The idea here is that the other landmark with the highest absolute proximity is acting in competition with the selected landmark. If that other landmark's absolute proximity is higher than the absolute proximity of the selected landmark, the selected landmark's relative proximity for the point will be negative. If the competing landmark's absolute proximity is slightly lower than the absolute proximity of the selected landmark, the selected landmark's relative proximity for the point will be positive, but low. Only when the competing landmark's absolute proximity is significantly lower than the absolute proximity of the selected landmark will the selected landmark have a high relative proximity for the point in question. In (3) the proximity of a given point to a selected landmark rises as that point's distance from the landmark decreases (the closer the point is to the landmark, the higher its proximity score for the landmark will be), but falls as that point's distance from some other landmark decreases (the closer the point is to some other landmark, the lower its proximity score for the selected landmark will be).
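As an illustration, here is a minimal sketch of how the absolute and relative proximity fields of Equations (1) and (3) could be computed over a set of points; the NumPy-based layout, the function names and the example landmark positions are our own assumptions rather than the authors' implementation, and at least two landmarks are assumed.

```python
import numpy as np

def absolute_proximity(points, landmark, salience):
    """Equation (1): proximity falls off with distance to the landmark,
    normalised by the largest distance in the scene, and is scaled by
    the landmark's salience.  `points` is an (N, 2) array."""
    dists = np.linalg.norm(points - landmark, axis=1)
    return (1.0 - dists / dists.max()) * salience

def relative_proximity(points, landmarks, saliences):
    """Equation (3): each landmark's absolute field minus the strongest
    competing absolute field at every point (assumes >= 2 landmarks)."""
    fields = np.stack([absolute_proximity(points, l, s)
                       for l, s in zip(landmarks, saliences)])
    rel = np.empty_like(fields)
    for i in range(len(landmarks)):
        competitors = np.delete(fields, i, axis=0)
        rel[i] = fields[i] - competitors.max(axis=0)
    return rel

# Two landmarks on a 1-D strip with saliences 0.5 and 0.6, as in the Figure 5
# setting below (landmark positions chosen arbitrarily for the sketch).
xs = np.linspace(-5, 5, 101)
points = np.stack([xs, np.zeros_like(xs)], axis=1)
rel = relative_proximity(points,
                         [np.array([-2.0, 0.0]), np.array([1.0, 0.0])],
                         [0.5, 0.6])
```

Points at which a landmark's relative proximity is positive are those where it wins the competition described above; this is the quantity plotted in Figure 5.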
Figure 5 shows the relative proximity fields of two landmarks, L1 and L2, computed using (3), in a 1-dimensional (linear) space. The two landmarks have different degrees of salience: a salience of 0.5 for L1 and of 0.6 for L2 (represented by the different sizes of the landmarks). In this figure, any point where the relative proximity for one particular landmark is above the zero line represents a point which is proximal to that landmark, rather than to the other landmark. The extent to which that point is above zero represents its degree of proximity to that landmark. The overall proximal area for a given landmark is the overall area for which its relative proximity field is above zero. The left and right borders of the figure represent the boundaries (walls) of the area.
Figure 5 illustrates three main points. First, the overall size of a landmark's proximal area is a function of the landmark's position relative to the other landmark and to the boundaries. For example, landmark L2 has a large open space between it and the right boundary: most of this space falls into the proximal area for that landmark. Landmark L1 falls into quite a narrow space between the left boundary and L2. L1 thus has a much smaller proximal area in the figure than L2. Second, the relative proximity field for a landmark is a function of that landmark's salience. This can be seen in Figure 5 by considering the space between the two landmarks. In that space the width of the proximal area for L2 is greater than that of L1, because L2 is more salient. The third point concerns areas of ambiguous proximity in Figure 5: areas in which neither of the landmarks has a significantly higher relative proximity than the other. There are two such areas in the figure. The first is between the two landmarks, in the region where one relative proximity field line crosses the other. These points are ambiguous in terms of relative proximity because they are equidistant from the two landmarks. The second ambiguous area is at the extreme right of the space shown in Figure 5. This area is ambiguous because it is distant from both landmarks: points in this area would not be judged proximal to either landmark. The question of ambiguity in relative proximity judgments is considered in more detail in §5.
[Figure 5: Graph of relative proximity fields for two landmarks L1 and L2 (x-axis: point location; y-axis: relative proximity). Relative proximity fields were computed with salience scores of 0.5 for L1 and 0.6 for L2.]
4 Experiment
Below we describe an experiment which tests our approach (§3) to relative proximity by examining the changes in people's judgements of the appropriateness of the expression near being used to describe the relationship between a target and a landmark object in an image where a second, distractor landmark is present. All objects in these images were coloured shapes: a circle, triangle or square.
4.1 Material and Procedure
All images used in this experiment contained a central landmark object and a target object, usually with a third distractor object. The landmark was always placed in the middle of a 7-by-7 grid. Images were divided into 8 groups of 6 images each. Each image in a group contained the target object placed in one of 6 different cells on the grid, numbered from 1 to 6. Figure 6 shows how we number these target positions according to their nearness to the landmark.
[Figure 6: Relative locations of the landmark (L), target positions (1..6) and distractor landmark positions (a..g) in the images used in the experiment.]
Groups are organised according to the presence and position of a distractor object. In group a the distractor is directly above the landmark, in group b the distractor is rotated 45 degrees clockwise from the vertical, in group c it is directly to the right of the landmark, in d it is rotated 135 degrees clockwise from the vertical, and so on. The distractor object is always the same distance from the central landmark. In addition to the distractor groups a, b, c, d, e, f and g, there is an eighth group, group x, in which no distractor object occurs. In the experiment, each image was displayed with a sentence of the form "The <target> is near the <landmark>", with a description of the target and landmark respectively. The sentence was presented under the image. 12 participants took part in this experiment. Participants were asked to rate the acceptability of the sentence as a description of the image using a 10-point scale, with zero denoting not acceptable at all, four or five denoting moderately acceptable, and nine perfectly acceptable.
4.2 Results and Discussion
We assess participants' responses by comparing their average proximity judgments with those predicted by the absolute proximity equation (Equation 1) and by the relative proximity equation (Equation 3). For both equations we assume that all objects have a salience score of 1. With salience equal to 1, the absolute proximity equation relates proximity between target and landmark objects to the distance between those two objects, so that the closer the target is to the landmark the higher its proximity will be. With salience equal to 1, the relative proximity equation relates proximity to both the distance between target and landmark and the distance between target and distractor, so that the proximity of a given target object to a landmark rises as that target's distance from the landmark decreases but falls as the target's distance from some other distractor object decreases.
Figure 7 shows graphs comparing participants' proximity ratings with the proximity scores computed by Equation 1 (the absolute proximity equation) and by Equation 3 (the relative proximity equation), for the images in group x and in the other 7 groups. In the first graph there is no difference between the proximity scores computed by the two equations since, when there is no distractor object present, the relative proximity equation reduces to the absolute proximity equation. The correlation between both computed proximity scores and participants' average proximity scores for this group is quite high (r = 0.95). For the remaining 7 groups the proximity value computed from Equation 1 gives a fair match to people's proximity judgements for target objects (the average correlation across these seven groups in Figure 7 is around r = 0.93). However, the relative proximity score as computed in Equation 3 significantly improves the correlation in each graph, giving an average correlation across the seven groups of around r = 0.99 (all correlations in Figure 7 are significant, p < 0.01). Given that the correlations for both Equation 1 and Equation 3 are high, we examined whether the results returned by Equation 3 were reliably closer to human judgements than those from Equation 1. For the 42 images where a distractor object was present we recorded which equation gave a result that was closer to participants' normalised average for that image.
[Figure 7: Comparison between normalised proximity scores observed and computed for each group; each panel plots normalised proximity score against target location for the observed ratings and for the scores computed by Equation 1 and Equation 3, together with the corresponding correlations.]
In 28 cases Equation 3 was closer, while in 14 Equation 1 was closer (a 2:1 advantage for Equation 3, significant in a sign test: n+ = 28, n− = 14, Z = 2.2, p < 0.05). We conclude that proximity judgements for objects in our experiment are best represented by relative proximity as computed in Equation 3. These results support our 'relative' model of proximity. (In order to display the relationship between the proximity values given by participants, those computed by Equation 1, and those computed by Equation 3, the values displayed in Figure 7 are normalised so that proximity values have a mean of 0 and a standard deviation of 1. This normalisation simply means that all values fall in the same region of the scale and can easily be compared visually.) It is interesting to note that Equation 3 overestimates proximity in the cases (a, b and g) where the distractor object is closest to the targets, and slightly underestimates proximity in all other cases. We will investigate this in future work.
5 Expressing spatial proximity
We use the model of §3 to interpret spatial references to objects. A fundamental requirement for processing situated dialogue is that linguistic meaning provides enough information to establish the visual grounding of spatial expressions: how can the robot relate the meaning of a spatial expression to a scene it visually perceives, so that it can locate the objects which the expression applies to? Approaches agree here on the need for ontologically rich representations, but differ in how these are to be visually grounded. Oates et al. (2000) and Roy (2002) use machine learning to obtain a statistical mapping between visual and linguistic features. Gorniak and Roy (2004) use manually constructed mappings between linguistic constructions and probabilistic functions which evaluate whether an object can act as referent, whereas DeVault and Stone (2004) use symbolic constraint resolution. Our approach to visual grounding of language is similar to the latter two approaches. We use a Combinatory Categorial Grammar (CCG) (Baldridge and Kruijff, 2003) to describe the relation between the syntactic structure of an utterance and its meaning. We model meaning as an ontologically richly sorted, relational structure, using a description logic-like framework (Baldridge and Kruijff, 2002). We use OpenCCG (http://www.sf.net/openccg/) for parsing and realization.
(2) the box near the ball
@{b:phys-obj}(box & ⟨Delimitation⟩unique & ⟨Number⟩singular & ⟨Quantification⟩specific singular) &
@{b:phys-obj}⟨Location⟩(r:region & near & ⟨Proximity⟩proximal & ⟨Positioning⟩static) &
@{r:region}⟨FromWhere⟩(b1:phys-obj & ball & ⟨Delimitation⟩unique & ⟨Number⟩singular & ⟨Quantification⟩specific singular)
Example (2) shows the meaning representation for "the box near the ball". It consists of several, related elementary predicates (EPs). One type of EP represents a discourse referent as a proposition with a handle: @{b:phys-obj}(box) means that the referent b is a physical object, namely a box. Another type of EP states dependencies between referents as modal relations, e.g. @{b:phys-obj}⟨Location⟩(r:region & near) means that discourse referent b (the box) is located in a region r that is near to a landmark. We represent regions explicitly to enable later reference to the region using deictic reference (e.g. "there"). Within each EP we can have semantic features, e.g. the region r characterizes a static location of b and expresses proximity to a landmark. Example (2) gives a ball in the context as the landmark.
We use the sorting information in the utterance's meaning (e.g. phys-obj, region) for further interpretation using ontology-based spatial reasoning. This yields several inferences that need to hold for the scene, like DeVault and Stone (2004). Where we differ is in how we check whether these inferences hold. Like Gorniak and Roy (2004), we map these conditions onto the energy landscape computed by the proximity field functions. This enables us to take into account inhibition effects arising in the actual situated context, unlike Gorniak & Roy or DeVault & Stone.
We convert relative proximity fields into proximal regions anchored to landmarks to contextually interpret linguistic meaning. We must decide whether a landmark's relative proximity score at a given point indicates that it is "near" or "close to" or "at" or "beside" the landmark. For this we iterate over each point in the scene and compare the relative proximity scores of the different landmarks at each point. If the primary landmark's (i.e., the landmark with the highest relative proximity at the point) relative proximity exceeds the next highest relative proximity score by more than a predefined confidence interval, the point is in the vague region anchored around the primary landmark. Otherwise, we take it as ambiguous and not in the proximal region that is being interpreted. The motivation for the confidence interval is to capture situations where the difference in relative proximity scores between the primary landmark and one or more other landmarks at a given point is relatively small.
Figure 8 illustrates the parsing of a scene into the regions "near" two landmarks. The relative proximity fields of the two landmarks are identical to those in Figure 5, using a confidence interval of 0.1. Ambiguous points are where the proximity ambiguity series is plotted at 0.5. The regions "near" each landmark are those areas of the graph where that landmark's relative proximity series is the highest plot on the graph. Figure 8 illustrates an important aspect of our model: the comparison of relative proximity fields naturally defines the extent of vague proximal regions. For example, see the region right of L2 in Figure 8. The extent of L2's proximal region in this direction is bounded by the interference effect of L1's relative proximity field. Because the landmarks' relative proximity scores converge, the area on the far right of the image is ambiguous with respect to which landmark it is proximal to. In effect, the model captures the fact that the area is relatively distant from both landmarks. Following the cognitive load model (§1), objects located in this region should be described with a projective relation such as "to the right of L2" rather than a proximal relation like "near L2", see Kelleher and Kruijff (2006).
[Figure 8: Graph of ambiguous regions overlaid on relative proximity fields for landmarks L1 and L2, with confidence interval = 0.1 and different salience scores for L1 (0.5) and L2 (0.6). Locations of landmarks are marked on the x-axis.]
6 Conclusions
We addressed the issue of how we can provide a context-dependent interpretation of spatial expressions that identify objects based on proximity in a visual scene. We discussed available psycholinguistic data to substantiate the usefulness of having such a model for interpreting and generating fluent situated dialogue between a human and a robot, and argued that we need a context-dependent representation of what is (situationally) appropriate to consider proximal to a landmark. Context-dependence thereby involves salience of landmarks as well as inhibition effects between landmarks. We presented a model in which we can address these issues, and we exemplified how logical forms representing the meaning of spatial proximity expressions can be grounded in this model. We tested and verified the model using a psycholinguistic experiment. Future work will examine whether the model can be used to describe the semantics of nouns (such as corner) that express vague spatial extent, and how the model relates to the functional aspects of spatial reasoning.
References
J. Baldridge and G.J.M. Kruijff. 2002. Coupling CCG and hybrid logic dependency semantics. In Proceedings of ACL 2002, Philadelphia, Pennsylvania.
J. Baldridge and G.J.M. Kruijff. 2003. Multi-modal combinatory categorial grammar. In Proceedings of EACL 2003, Budapest, Hungary.
H. Clark and D. Wilkes-Gibbs. 1986. Referring as a collaborative process. Cognition, 22:1–39.
K.R. Coventry and S. Garrod. 2004. Saying, Seeing and Acting. The Psychological Semantics of Spatial Prepositions. Essays in Cognitive Psychology Series. Lawrence Erlbaum Associates.
R. Dale and E. Reiter. 1995. Computational interpretations of the gricean maxims in the generation of referring expressions. Cognitive Science, 18:233–263.
D. DeVault and M. Stone. 2004. Interpreting vague utterances in context. In Proceedings of COLING 2004, volume 2, pages 1247–1253, Geneva, Switzerland.
K.P. Gapp. 1994. Basic meanings of spatial relations: Computation and evaluation in 3d space. In Proceedings of AAAI-94, pages 1393–1398.
P. Gorniak and D. Roy. 2004. Grounded semantic composition for visual scenes. Journal of Artificial Intelligence Research, 21:429–470.
E. Hajicová. 1993. Issues of Sentence Structure and Discourse Patterns, volume 2 of Theoretical and Computational Linguistics.
Charles University Press. A Herskovits. 1986. Language and spatial cognition: An interdisciplinary study of prepositions in English. Studies in Natural Language Processing. Cambridge University Press. J.D. Kelleher and G.J. Kruijff. 2006. Incremental generation of spatial referring expressions in situated dialog. In Proceedings ACL/COLING ’06, Sydney, Australia. J. Kelleher and J. van Genabith. 2004. Visual salience and reference resolution in simulated 3d environments. AI Review, 21(3-4):253–267. E. Krahmer and M. Theune. 1999. Efficient generation of descriptions in context. In R. Kibble and K. van Deemter, editors, Workshop on the Generation of Nominals, ESSLLI’99, Utrecht, The Netherlands. G.D. Logan and D.D. Sadler. 1996. A computational analysis of the apprehension of spatial relations. In M. Bloom, P.and Peterson, L. Nadell, and M. Garrett, editors, Language and Space, pages 493–529. MIT Press. G.D. Logan. 1994. Spatial attention and the apprehension of spatial relations. Journal of Experimental Psychology: Human Perception and Performance, 20:1015–1036. G.D. Logan. 1995. Linguistic and conceptual control of visual spatial attention. Cognitive Psychology, 12:523–533. R. Moratz and T. Tenbrink. 2006. Spatial reference in linguistic human-robot interaction: Iterative, empirically supported development of a model of projective relations. Spatial Cognition and Computation. T. Oates, Z. Eyler-Walker, and P.R. Cohen. 2000. Toward natural language interfaces for robotic agents: Grounding linguistic meaning in sensors. In Proceedings of the Fourth International Conference on Autonomous Agents, pages 227–228. T Regier and L. Carlson. 2001. Grounding spatial language in perception: An empirical and computational investigation. Journal of Experimental Psychology: General, 130(2):273–298. D.K. Roy. 2002. Learning words and syntax for a scene description task. Computer Speech and Language, 16(3). I.F. van der Sluis and E.J. Krahmer. 2004. The influence of target size and distance on the production of speech and gesture in multimodal referring expressions. In R. Kibble and K. van Deemter, editors, ICSLP04. 752