Proceedings of the 52nd Annual Meeting of the Association for Computational Linguistics, pages 1188–1198, Baltimore, Maryland, USA, June 23-25 2014. c⃝2014 Association for Computational Linguistics Joint Syntactic and Semantic Parsing with Combinatory Categorial Grammar Jayant Krishnamurthy Carnegie Mellon University 5000 Forbes Avenue Pittsburgh, PA 15213 [email protected] Tom M. Mitchell Carnegie Mellon University 5000 Forbes Avenue Pittsburgh, PA 15213 [email protected] Abstract We present an approach to training a joint syntactic and semantic parser that combines syntactic training information from CCGbank with semantic training information from a knowledge base via distant supervision. The trained parser produces a full syntactic parse of any sentence, while simultaneously producing logical forms for portions of the sentence that have a semantic representation within the parser’s predicate vocabulary. We demonstrate our approach by training a parser whose semantic representation contains 130 predicates from the NELL ontology. A semantic evaluation demonstrates that this parser produces logical forms better than both comparable prior work and a pipelined syntax-then-semantics approach. A syntactic evaluation on CCGbank demonstrates that the parser’s dependency Fscore is within 2.5% of state-of-the-art. 1 Introduction Integrating syntactic parsing with semantics has long been a goal of natural language processing and is expected to improve both syntactic and semantic processing. For example, semantics could help predict the differing prepositional phrase attachments in “I caught the butterfly with the net” and “I caught the butterfly with the spots.” A joint analysis could also avoid propagating syntactic parsing errors into semantic processing, thereby improving performance. We suggest that a large populated knowledge base should play a key role in syntactic and semantic parsing: in training the parser, in resolving syntactic ambiguities when the trained parser is applied to new text, and in its output semantic representation. Using semantic information from the knowledge base at training and test time will ideally improve the parser’s ability to solve difficult syntactic parsing problems, as in the examples above. A semantic representation tied to a knowledge base allows for powerful inference operations – such as identifying the possible entity referents of a noun phrase – that cannot be performed with shallower representations (e.g., frame semantics (Baker et al., 1998) or a direct conversion of syntax to logic (Bos, 2005)). This paper presents an approach to training a joint syntactic and semantic parser using a large background knowledge base. Our parser produces a full syntactic parse of every sentence, and furthermore produces logical forms for portions of the sentence that have a semantic representation within the parser’s predicate vocabulary. For example, given a phrase like “my favorite town in California,” our parser will assign a logical form like λx.CITY(x) ∧LOCATEDIN(x, CALIFORNIA) to the “town in California” portion. Additionally, the parser uses predicate and entity type information during parsing to select a syntactic parse. Our parser is trained by combining a syntactic parsing task with a distantly-supervised relation extraction task. Syntactic information is provided by CCGbank, a conversion of the Penn Treebank into the CCG formalism (Hockenmaier and Steedman, 2002a). 
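To make the kind of knowledge-base inference mentioned above concrete, here is a small illustrative sketch (ours, not the authors' system) of evaluating a logical form such as λx.CITY(x) ∧ LOCATEDIN(x, CALIFORNIA) against a toy knowledge base to enumerate the possible entity referents of a noun phrase; all predicates and constants below are invented for the example.

```python
# A toy knowledge base: unary category instances and binary relation instances.
categories = {("CITY", "sacramento"), ("CITY", "san_francisco"), ("CITY", "seattle")}
relations = {("LOCATEDIN", "sacramento", "california"),
             ("LOCATEDIN", "san_francisco", "california"),
             ("LOCATEDIN", "seattle", "washington")}
entities = ({e for _, e in categories}
            | {e for _, e, _ in relations}
            | {e for _, _, e in relations})

def denotes_town_in_california(x):
    """Membership test for λx.CITY(x) ∧ LOCATEDIN(x, CALIFORNIA)."""
    return ("CITY", x) in categories and ("LOCATEDIN", x, "california") in relations

# Enumerate the possible entity referents of "town in California".
print(sorted(e for e in entities if denotes_town_in_california(e)))
# -> ['sacramento', 'san_francisco']
```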
Semantics are learned by training the parser to extract knowledge base relation instances from a corpus of unlabeled sentences, in a distantly-supervised training regime. This approach uses the knowledge base to avoid expensive manual labeling of individual sentence semantics. By optimizing the parser to perform both tasks simultaneously, we train a parser that produces accurate syntactic and semantic analyses. We demonstrate our approach by training a joint syntactic and semantic parser, which we call ASP. ASP produces a full syntactic analysis of every sentence while simultaneously producing logical forms containing any of 61 category and 69 re1188 lation predicates from NELL. Experiments with ASP demonstrate that jointly analyzing syntax and semantics improves semantic parsing performance over comparable prior work and a pipelined syntax-then-semantics approach. ASP’s syntactic parsing performance is within 2.5% of state-ofthe-art; however, we also find that incorporating semantic information reduces syntactic parsing accuracy by ∼0.5%. 2 Prior Work This paper combines two lines of prior work: broad coverage syntactic parsing with CCG and semantic parsing. Broad coverage syntactic parsing with CCG has produced both resources and successful parsers. These parsers are trained and evaluated using CCGbank (Hockenmaier and Steedman, 2002a), an automatic conversion of the Penn Treebank into the CCG formalism. Several broad coverage parsers have been trained using this resource (Hockenmaier and Steedman, 2002b; Hockenmaier, 2003b). The parsing model in this paper is loosely based on C&C (Clark and Curran, 2007b; Clark and Curran, 2007a), a discriminative loglinear model for statistical parsing. Some work has also attempted to automatically derive logical meaning representations directly from syntactic CCG parses (Bos, 2005; Lewis and Steedman, 2013). However, these approaches to semantics do not ground the text to beliefs in a knowledge base. Meanwhile, work on semantic parsing has focused on producing semantic parsers for answering simple natural language questions (Zelle and Mooney, 1996; Ge and Mooney, 2005; Wong and Mooney, 2006; Wong and Mooney, 2007; Lu et al., 2008; Kate and Mooney, 2006; Zettlemoyer and Collins, 2005; Kwiatkowski et al., 2011). This line of work has typically used a corpus of sentences with annotated logical forms to train the parser. Recent work has relaxed the requisite supervision conditions (Clarke et al., 2010; Liang et al., 2011), but has still focused on simple questions. Finally, some work has looked at applying semantic parsing to answer queries against large knowledge bases, such as YAGO (Yahya et al., 2012) and Freebase (Cai and Yates, 2013b; Cai and Yates, 2013a; Kwiatkowski et al., 2013; Berant et al., 2013). Although this work considers a larger number (thousands) of predicates than we do, none of these systems are capable of parsing open-domain text. Our approach is most closely related to the distantly-supervised approach of Krishnamurthy and Mitchell (2012). The parser presented in this paper can be viewed as a combination of both a broad coverage syntactic parser and a semantic parser trained using distant supervision. Combining these two lines of work has synergistic effects – for example, our parser is capable of semantically analyzing conjunctions and relative clauses based on the syntactic annotation of these categories in CCGbank. 
This synergy gives our parser a richer semantic representation than previous work, while simultaneously enabling broad coverage. 3 Parser Design This section describes the Combinatory Categorial Grammar (CCG) parsing model used by ASP. The input to the parser is a part-of-speech tagged sentence, and the output is a syntactic CCG parse tree, along with zero or more logical forms representing the semantics of subspans of the sentence. These logical forms are constructed using category and relation predicates from a broad coverage knowledge base. The parser also outputs a collection of dependency structures summarizing the sentence’s predicate-argument structure. Figure 1 illustrates ASP’s input/output specification. 3.1 Knowledge Base The parser uses category and relation predicates from a broad coverage knowledge base both to construct logical forms and to parametrize the parsing model. The knowledge base is assumed to have two kinds of ontological structure: a generalization/subsumption hierarchy and argument type constraints. This paper uses NELL’s ontology (Carlson et al., 2010), which, for example, specifies that the category ORGANIZATION is a generalization of SPORTSTEAM, and that both arguments to the LOCATEDIN relation must have type LOCATION. These type constraints are enforced during parsing. Throughout this paper, predicate names are shown in SMALLCAPS. 3.2 Syntax ASP uses a lexicalized and semanticallytyped Combinatory Categorial Grammar (CCG) (Steedman, 1996). Most grammatical information in CCG is encoded in a lexicon Λ, containing entries such as: 1189 area / NN N λx.LOCATION(x) that / WDT (N1\N1)/(S[dcl]\NP1)2 λf.λg.λz.g(z) ∧f(λy.y = z) includes / VBZ (S[dcl]\NP1)/NP2 λf.λg.∃x, y.g(x) ∧f(y) ∧LOCATEDIN(y, x) beautiful / JJ N1/N1 λf.f London / NNP N λx.M(x, “london”, CITY) N : λx.M(x, “london”, CITY) (S[dcl]\NP1) : λg.∃x, y.g(x) ∧M(y, “london”, CITY) ∧LOCATEDIN(y, x) N1\N1 : λg.λz.∃x, y.g(z) ∧x = z ∧M(y, “london”, CITY) ∧LOCATEDIN(y, x) N : λz.∃x, y.LOCATION(z) ∧x = z ∧M(y, “london”, CITY) ∧LOCATEDIN(y, x) Head Argument word POS semantic type index syntactic category arg. num. word POS semantic type index that WDT — 1 (N1\N1)/(S\NP1)2 1 area NN LOCATION 0 that WDT — 1 (N1\N1)/(S\NP1)2 2 includes VBZ LOCATEDIN−1 2 includes VBZ LOCATEDIN−1 2 (S[dcl]\NP1)/NP2 1 area NN LOCATION 0 includes VBZ LOCATEDIN−1 2 (S[dcl]\NP1)/NP2 2 ENTITY:CITY NNP CITY 4 beautiful JJ — 3 N1/N1 1 ENTITY:CITY NNP CITY 4 Figure 1: Example input and output for ASP. Given a POS-tagged sentence, the parser produces a CCG syntactic tree and logical form (top), and a collection of dependency structures (bottom). person := N : PERSON : λx.PERSON(x) London := N : CITY : λx.M(x, “london”, CITY) great := N1/N1 : — : λf.λx.f(x) bought := (S[dcl]\NP1)/NP2 : ACQUIRED : λf.λg.∃x, y.f(y) ∧g(x) ∧ACQUIRED(x, y) Each lexicon entry maps a word to a syntactic category, semantic type, and logical form. CCG has two kinds of syntactic categories: atomic and functional. Atomic categories include N for noun and S for sentence. Functional categories are functions constructed recursively from atomic categories; these categories are denoted using slashes to separate the category’s argument type from its return type. The argument type appears on the right side of the slash, and the return type on the left. The direction of slash determines where the argument must appear – / means an argument on the right, and \ means an argument on the left. Syntactic categories in ASP are annotated with two additional kinds of information. 
First, atomic categories may have associated syntactic features given in square brackets. These features are used in CCGbank to distinguish variants of atomic syntactic categories, e.g., S[dcl] denotes a declarative sentence. Second, each category is annotated with head and dependency information using subscripts. These subscripts are used to populate predicate-argument dependencies (described below), and to pass head information using unification. For example, the head of the parse in Figure 1 is “area,” due to the coindexing of the argument and return categories in the category N1\N1. In addition to the syntactic category, each lexicon entry has a semantic type and a logical form. The semantic type is a category or relation predicate that concisely represents the word’s semantics. The semantic type is used to enforce type constraints during parsing and to include semantics in the parser’s parametrization. The logical form gives the full semantics of the word in lambda calculus. The parser also allows lexicon entries with the semantic type “—”, representing words whose semantics cannot be expressed using predicates from the ontology. Parsing in CCG combines adjacent categories using a small number of combinators, such as function application: X/Y : f Y : g =⇒X : f(g) Y : g X\Y : f =⇒X : f(g) The first rule states that the category X/Y can be applied to the category Y , returning category X, and that the logical form f is applied to g to produce the logical form for the returned category. Head words and semantic types are also propagated to the returned category based on the annotated head-passing markup. 3.3 Dependency Structures Parsing a sentence produces a collection of dependency structures which summarize the predicateargument structure of the sentence. Dependency structures are 10-tuples, of the form: < head word, head POS, head semantic type, head word index, head word syntactic category, argument number, argument word, argument POS, argument semantic type, argument word index > A dependency structure captures a relationship between a head word and its argument. During parsing, whenever a subscripted argument of a syntactic category is filled, a dependency structure 1190 is created between the head of the applied function and its argument. For example, in Figure 1, the first application fills argument 1 of “beautiful” with “London,” creating a dependency structure. 3.4 Logical Forms ASP performs a best-effort semantic analysis of every parsed sentence, producing logical forms for subspans of the sentence when possible. Logical forms are designed so that the meaning of a sentence is a universally- and existentially-quantified conjunction of predicates with partially shared arguments. This representation allows the parser to produce semantic analyses for a reasonable subset of language, including prepositions, verbs, nouns, relative clauses, and conjunctions. Figure 1 shows a representative sample of a logical form produced by ASP. Generally, the parser produces a lambda calculus statement with several existentially-quantified variables ranging over entities in the knowledge base. The only exception to this rule is conjunctions, which are represented using a scoped universal quantifier over the conjoined predicates. Entity mentions appear in logical forms via a special mention predicate, M, instead of as database constants. For example, “London” appears as M(x, “london”, CITY), instead of as a constant like LONDON. 
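The objects just described can be pictured with a minimal sketch (an illustrative encoding of ours, not the ASP implementation): a lexicon entry, the 10-tuple dependency structure, and the dependency created in Figure 1 when "beautiful" takes "London" as its first argument.

```python
from dataclasses import dataclass
from typing import NamedTuple, Optional

@dataclass
class LexiconEntry:
    """One CCG lexicon entry: word := syntactic category : semantic type : logical form."""
    word: str
    pos: str
    syntactic_category: str
    semantic_type: Optional[str]    # None plays the role of "--" (no KB semantics)
    logical_form: str               # the lambda-calculus expression, kept as a string here

class DependencyStructure(NamedTuple):
    """The 10-tuple created whenever a subscripted argument slot is filled."""
    head_word: str
    head_pos: str
    head_semantic_type: Optional[str]
    head_index: int
    head_syntactic_category: str
    argument_number: int
    argument_word: str
    argument_pos: str
    argument_semantic_type: Optional[str]
    argument_index: int

# Entries corresponding to two of the Section 3.2 examples.
entries = [
    LexiconEntry("London", "NNP", "N", "CITY", 'λx.M(x, "london", CITY)'),
    LexiconEntry("beautiful", "JJ", "N1/N1", None, "λf.f"),
]

# The dependency from Figure 1: argument 1 of "beautiful" is filled by the CITY mention.
dep = DependencyStructure("beautiful", "JJ", None, 3, "N1/N1",
                          1, "ENTITY:CITY", "NNP", "CITY", 4)
print(entries[0], dep, sep="\n")
```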
The meaning of this mention predicate is that x is an entity which can be called “london” and belongs to the CITY category. This representation propagates uncertainty about entity references into the logical form where background knowledge can be used for disambiguation. For example, “London, England” is assigned a logical form that disambiguates “London” to a “London” located in “England.”1 Lexicon entries without a semantic type are automatically assigned logical forms based on their head passing markup. For example, in Figure 1, the adjective “beautiful” is assigned λf.f. This approach allows a logical form to be derived for most sentences, but (somewhat counterintuitively) can lose interesting logical forms from constituent subspans. For example, the preposition “in” has syntactic category (N1\N1)/N2, which results in the logical form λf.λg.g. This logical form discards any information present in the argument f. We avoid this problem by extracting a logical form from every subtree of the CCG parse. 1Specifically, λx.∃y.CITYLOCATEDINCOUNTRY(x, y) ∧ M(x, “london”, CITY) ∧M(y, “england”, COUNTRY) 3.5 Parametrization The parser Γ is trained as a discriminative linear model of the following form: Γ(ℓ, d, t|s; θ) = θT φ(d, t, s) Given a parameter vector θ and a sentence s, the parser produces a score for a syntactic parse tree t, a collection of dependency structures d and a logical form ℓ. The score depends on features of the parse produced by the feature function φ. φ contains four classes of features: lexicon features, combinator features, dependency features and dependency distance features (Table 1). These features are based on those of C&C (Clark and Curran, 2007b), modified to include semantic types. The features are designed to share syntactic information about a word across its distinct semantic realizations in order to transfer syntactic information from CCGbank to semantic parsing. The parser also includes a hard type-checking constraint to ensure that logical forms are welltyped. This constraint states that dependency structures with a head semantic type only accept arguments that (1) have a semantic type, and (2) are within the domain/range of the head type. 4 Parameter Estimation This section describes the training procedure for ASP. Training is performed by minimizing a joint objective function combining a syntactic parsing task and a distantly-supervised relation extraction task. The input training data includes: 1. A collection L of sentences si with annotated syntactic trees ti (e.g., CCGbank). 2. A corpus of sentences S (e.g., Wikipedia). 3. A knowledge base K (e.g., NELL), containing relation instances r(e1, e2) ∈K. 4. A CCG lexicon Λ (see Section 5.2). Given these resources, the algorithm described in this section produces parameters θ for a semantic parser. Our parameter estimation procedure constructs a joint objective function O(θ) that decomposes into syntactic and semantic components: O(θ) = Osyn(θ) + Osem(θ). The syntactic component Osyn is a standard syntactic parsing objective constructed using the syntactic resource L. The semantic component Osem is a distantly-supervised relation extraction task based on the semantic constraint from Krishnamurthy and Mitchell (2012). These components are described in more detail in the following sections. 
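To ground the parametrization of Section 3.5 before the two objectives are defined, the sketch below scores a candidate parse as θᵀφ with φ stored as a sparse feature counter; the feature names are simplified stand-ins for the full templates listed in Table 1 below, and the toy parse encoding is our own.

```python
from collections import Counter

def score(theta: dict, features: Counter) -> float:
    """Γ(ℓ, d, t | s; θ) = θᵀφ(d, t, s), with φ stored as a sparse feature Counter."""
    return sum(theta.get(name, 0.0) * value for name, value in features.items())

def phi(parse) -> Counter:
    """Toy feature function: indicator features for lexicon entries and dependency
    structures (simplified stand-ins for the Table 1 templates)."""
    feats = Counter()
    for word, pos, cat, sem in parse["lexicon_entries"]:
        feats[f"lex:{word}:{cat}"] += 1              # word / syntactic category
        feats[f"lex_pos:{pos}:{cat}"] += 1           # POS / syntactic category
        if sem is not None:
            feats[f"lex_sem:{word}:{cat}:{sem}"] += 1   # word semantics
    for head, argnum, arg in parse["dependencies"]:
        feats[f"dep:{head}:{argnum}:{arg}"] += 1     # predicate-argument indicator
    return feats

# Example: one candidate parse scored under a toy weight vector.
candidate = {
    "lexicon_entries": [("London", "NNP", "N", "CITY"), ("beautiful", "JJ", "N1/N1", None)],
    "dependencies": [("beautiful", 1, "London")],
}
theta = {"lex:London:N": 0.7, "dep:beautiful:1:London": 0.3}
print(score(theta, phi(candidate)))   # -> 1.0
```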
Lexicon features: word, POS := X : t : ℓ
    Word/syntactic category: word, X
    POS/syntactic category: POS, X
    Word semantics: word, X, t
Combinator features: X Y → Z or X → Z
    Binary combinator indicator: X Y → Z
    Unary combinator indicator: X → Z
    Root syntactic category: Z
Dependency features: < hw, hp, ht, hi, s, n, aw, ap, at, ai >
    Predicate-Argument indicator: < hw, —, ht, —, s, n, aw, —, at, — >
    Word-Word indicator: < hw, —, —, —, s, n, aw, —, —, — >
    Predicate-POS indicator: < hw, —, ht, —, s, n, —, ap, —, — >
    Word-POS indicator: < hw, —, —, —, s, n, —, ap, —, — >
    POS-Argument indicator: < —, hp, —, —, s, n, aw, —, at, — >
    POS-Word indicator: < —, hp, —, —, s, n, aw, —, —, — >
    POS-POS indicator: < —, hp, —, —, s, n, —, ap, —, — >
Dependency distance features:
    Token distance: hw, ht, —, s, n, d (d = number of tokens between hi and ai: 0, 1, 2 or more)
    Token distance word backoff: hw, —, s, n, d (d = number of tokens between hi and ai: 0, 1, 2 or more)
    Token distance POS backoff: —, —, hp, s, n, d (d = number of tokens between hi and ai: 0, 1, 2 or more)
    (The above distance features are repeated using the number of intervening verbs and punctuation marks.)

Table 1: Listing of parser feature templates used in the feature function φ. Each feature template represents a class of indicator features that fire during parsing when lexicon entries are used, combinators are applied, or dependency structures are instantiated.

4.1 Syntactic Objective

The syntactic objective is the structured perceptron objective instantiated for a syntactic parsing task. This objective encourages the parser to accurately reproduce the syntactic parses in the annotated corpus L = {(s_i, t_i)}_{i=1}^{n}:

O_{syn}(\theta) = \sum_{i=1}^{n} \left| \max_{\hat{\ell}, \hat{d}, \hat{t}} \Gamma(\hat{\ell}, \hat{d}, \hat{t} \mid s_i; \theta) - \max_{\ell^*, d^*} \Gamma(\ell^*, d^*, t_i \mid s_i; \theta) \right|_+

The first term in the above expression represents the best CCG parse of the sentence s_i according to the current model. The second term is the best parse of s_i whose syntactic tree equals the true syntactic tree t_i. In the above equation, | · |_+ denotes the positive part of the expression. Minimizing this objective therefore finds parameters θ that reproduce the annotated syntactic trees.

4.2 Semantic Objective

The semantic objective corresponds to a distantly-supervised relation extraction task that constrains the logical forms produced by the semantic parser. Distant supervision is provided by the following constraint: every relation instance r(e_1, e_2) ∈ K must be expressed by at least one sentence in S_{(e_1,e_2)}, the set of sentences that mention both e_1 and e_2 (Hoffmann et al., 2011). If this constraint is empirically true and sufficiently constrains the parser's logical forms, then optimizing the semantic objective produces an accurate semantic parser.

A training example in the semantic objective consists of the set of sentences mentioning a pair of entities, S_{(e_1,e_2)} = {s_1, s_2, ...}, paired with a binary vector representing the set of relations that the two entities participate in, y_{(e_1,e_2)}. The distant supervision constraint Ψ forces the logical forms predicted for the sentences to entail the relations in y_{(e_1,e_2)}. Ψ is a deterministic OR constraint that checks whether each logical form entails the relation instance r(e_1, e_2), deterministically setting y_r = 1 if any logical form entails the instance and y_r = 0 otherwise. Let (ℓ, d, t) represent a collection of semantic parses for the sentences S = S_{(e_1,e_2)}, and let

\Gamma(\ell, d, t \mid S; \theta) = \sum_{i=1}^{|S|} \Gamma(\ell_i, d_i, t_i \mid s_i; \theta)

represent the total weight assigned by the parser to a collection of parses for the sentences S.
For the pair of entities (e_1, e_2), the semantic objective is:

O_{sem}(\theta) = \left| \max_{\hat{\ell}, \hat{d}, \hat{t}} \Gamma(\hat{\ell}, \hat{d}, \hat{t} \mid S; \theta) - \max_{\ell^*, d^*, t^*} \left( \Psi(y_{(e_1,e_2)}, \ell^*, d^*, t^*) + \Gamma(\ell^*, d^*, t^* \mid S; \theta) \right) \right|_+

4.3 Optimization

Training minimizes the joint objective using the structured perceptron algorithm, which can be viewed as the stochastic subgradient method (Ratliff et al., 2006) applied to the objective O(θ). We initialize the parameters to zero, i.e., θ_0 = 0. On each iteration, we sample either a syntactic example (s_i, t_i) or a semantic example (S_{(e_1,e_2)}, y_{(e_1,e_2)}). If a syntactic example is sampled, we apply the following parameter update:

\hat{\ell}, \hat{d}, \hat{t} \leftarrow \arg\max_{\ell, d, t} \Gamma(\ell, d, t \mid s_i; \theta_t)
\ell^*, d^* \leftarrow \arg\max_{\ell, d} \Gamma(\ell, d, t_i \mid s_i; \theta_t)
\theta_{t+1} \leftarrow \theta_t + \phi(d^*, t_i, s_i) - \phi(\hat{d}, \hat{t}, s_i)

This update moves the parameters toward the features of the best parse with the correct syntactic derivation, φ(d^*, t_i, s_i). If a semantic example is sampled, we instead apply the following update:

\hat{\ell}, \hat{d}, \hat{t} \leftarrow \arg\max_{\ell, d, t} \Gamma(\ell, d, t \mid S_{(e_1,e_2)}; \theta_t)
\ell^*, d^*, t^* \leftarrow \arg\max_{\ell, d, t} \Gamma(\ell, d, t \mid S_{(e_1,e_2)}; \theta_t) + \Psi(y_{(e_1,e_2)}, \ell, d, t)
\theta_{t+1} \leftarrow \theta_t + \phi(d^*, t^*, S_{(e_1,e_2)}) - \phi(\hat{d}, \hat{t}, S_{(e_1,e_2)})

This update moves the parameters toward the features of the best set of parses that satisfy the distant supervision constraint. Training outputs the average of each iteration's parameters, \bar{\theta} = \frac{1}{n} \sum_{t=1}^{n} \theta_t. In practice, we train the parser by performing a single pass over the examples in the data set.

All of the maximizations above can be performed exactly using a CKY-style chart parsing algorithm, except for the last one. This maximization is intractable due to the coupling between logical forms in ℓ caused by enforcing the distant supervision constraint. We approximate this maximization in two steps. First, we perform a beam search to produce a list of candidate parses for each sentence s ∈ S_{(e_1,e_2)}. We then extract relation instances from each parse and apply the greedy inference algorithm from Hoffmann et al. (2011) to identify the best set of parses that satisfy the distant supervision constraint. The procedure skips any examples with sentences that cannot be parsed (due to beam search failures) or where the distant supervision constraint cannot be satisfied.

                                 Labeled Dependencies      Unlabeled Dependencies
                                 P      R      F           P      R      F          Coverage
ASP                              85.58  85.31  85.44       91.75  91.46  91.60      99.63
ASP-SYN                          86.06  85.84  85.95       92.13  91.89  92.01      99.63
C&C (Clark and Curran, 2007b)    88.34  86.96  87.64       93.74  92.28  93.00      99.63
(Hockenmaier, 2003a)             84.3   84.6   84.4        91.8   92.2   92.0       99.83

Table 2: Syntactic parsing results for Section 23 of CCGbank. Parser performance is measured using precision (P), recall (R) and F-measure (F) of labeled and unlabeled dependencies.

5 Experiments

The experiments below evaluate ASP's syntactic and semantic parsing ability. The parser is trained on CCGbank and a corpus of Wikipedia sentences, using NELL's predicate vocabulary. The syntactic analyses of the trained parser are evaluated against CCGbank, and its logical forms are evaluated on an information extraction task and against an annotated test set of Wikipedia sentences.

5.1 Data Sets

The data sets for the evaluation consist of CCGbank, a corpus of dependency-parsed Wikipedia sentences, and a logical knowledge base derived from NELL and Freebase. Sections 02-21 of CCGbank were used for training, Section 00 for validation, and Section 23 for the final results. The knowledge base's predicate vocabulary is taken from NELL, and its instances are taken from Freebase using a manually-constructed mapping between Freebase and NELL.
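Returning briefly to the optimization of Section 4.3, the sketch below spells out the two perceptron updates over sparse feature vectors. The decoder object and its methods are hypothetical stand-ins for the CKY/beam-search machinery and the Hoffmann et al. (2011) greedy inference; only the update arithmetic is meant literally.

```python
from collections import Counter

def perceptron_step(theta, gold_feats, pred_feats):
    """theta_{t+1} = theta_t + phi(gold) - phi(predicted), on sparse feature Counters."""
    for name, value in gold_feats.items():
        theta[name] = theta.get(name, 0.0) + value
    for name, value in pred_feats.items():
        theta[name] = theta.get(name, 0.0) - value
    return theta

def syntactic_update(theta, sentence, gold_tree, decoder, phi):
    """Update toward the best parse whose tree equals the CCGbank tree t_i."""
    predicted = decoder.best_parse(sentence, theta)                        # hypothetical hook
    constrained = decoder.best_parse(sentence, theta, fixed_tree=gold_tree)
    return perceptron_step(theta, phi(constrained), phi(predicted))

def semantic_update(theta, sentences, y_relations, decoder, phi):
    """Update toward the best parses of S_(e1,e2) that satisfy the constraint Psi."""
    predicted = [decoder.best_parse(s, theta) for s in sentences]
    constrained = decoder.best_parses_satisfying(sentences, theta, y_relations)
    gold_feats, pred_feats = Counter(), Counter()
    for parse in constrained:
        gold_feats.update(phi(parse))
    for parse in predicted:
        pred_feats.update(phi(parse))
    return perceptron_step(theta, gold_feats, pred_feats)
```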
Using Freebase relation instances produces cleaner training data than NELL’s automatically-extracted instances. Using the relation instances and Wikipedia sentences, we constructed a data set for distantlysupervised relation extraction. We identified mentions of entities in each sentence using simple string matching, then aggregated these sentences by entity pair. 20% of the entity pairs were set aside for validation. In the remaining training data, we downsampled entity pairs that did not participate in at least one relation. We further eliminated sentences containing more than 30 tokens. The resulting training corpus contains 25k entity pairs (half of which participate in a relation), 41k sentences, and 71 distinct relation predicates. 5.2 Grammar Construction The grammar for ASP contains the annotated lexicon entries and grammar rules in Sections 02-21 of CCGbank, and additional semantic entries produced using a set of dependency parse heuristics. The lexicon Λ contains all words that occur at least 20 times in CCGbank. Rare words are replaced by their part of speech. The head passing and dependency markup was generated using the rules of the C&C parser (Clark and Curran, 2007b). These lexicon entries are also annotated with logical forms capturing their head passing relationship. For example, the adjective category N1/N1 is annotated with the logical form λf.f. These entries are all assigned semantic type —. We augment this lexicon with additional entries 1193 Sentence Extracted Logical Form St. John, a Mexican-American born in San Francisco, California, her family comes from Zacatecas, Mexico. λx.∃y, z.M(x, “st. john”) ∧ M(y, “san francisco”) ∧ PERSONBORNINLOCATION(x, y) ∧ CITYLOCATEDINSTATE(y, z) ∧M(z, “california”) The capital and largest city of Laos is Vientiane and other major cities include Luang Prabang, Savannakhet and Pakse. ∃x, y.M(x, “vientiane”) ∧ CITY(x) ∧ CITYCAPITALOFCOUNTRY(x, y) ∧M(y, “laos”) Gellar next played a lead role in James Toback ’s critically unsuccessful independent “Harvard Man” (2001), where she played the daughter of a mobster. λx.∃y.M(y, “james toback”) ∧ DIRECTORDIRECTEDMOVIE(y, x) ∧ M(x, “harvard man”) Figure 2: Logical forms produced by ASP for sentences in the information extraction corpus. Each logical form is extracted from the underlined sentence portion. ASP PIPELINE K&M-2012 0 300 600 900 0 0.2 0.4 0.6 0.8 1.0 Figure 3: Logical form precision as a function of the expected number of correct extracted logical forms. ASP extracts more correct logical forms because it jointly analyzes syntax and semantics. mapping words to logical forms with NELL predicates. These entries are instantiated using a set of dependency parse patterns, listed in an online appendix.2 These patterns are applied to the training corpus, heuristically identifying verbs, prepositions, and possessives that express relations, and nouns that express categories. The patterns also include special cases for forms of “to be.” This process generates ∼4000 entries (not counting entity names), representing 69 relations and 61 categories from NELL. Section 3.2 shows several lexicon entries generated by this process. The parser’s combinators include function application, composition, and crossed composition, as well as several binary and unary type-changing rules that occur in CCGbank. All combinators were restricted to only apply to categories that combine in Sections 02-21. 
Finally, the grammar includes a number of heuristically-instantiated binary rules of the form , N →N\N that instantiate a relation between adjacent nouns. These rules capture appositives and some other constructions. 5.3 Supertagging Parsing in practice can be slow because the parser’s lexicalized grammar permits a large number of parses for a sentence. We improve parser performance by performing supertagging (Banga2http://rtw.ml.cmu.edu/acl2014_asp/ lore and Joshi, 1999; Clark and Curran, 2004). We trained a logistic regression classifier to predict the syntactic category of each token in a sentence from features of the surrounding tokens and POS tags. Subsequent parsing is restricted to only consider categories whose probability is within a factor of α of the highest-scoring category. The parser uses a backoff strategy, first attempting to parse with the supertags from α = 0.01, backing off to α = 0.001 if the initial parsing attempt fails. 5.4 Syntactic Evaluation The syntactic evaluation measures ASP’s ability to reproduce the predicate-argument dependencies in CCGbank. As in previous work, our evaluation uses labeled and unlabeled dependencies. Labeled dependencies are dependency structures with both words and semantic types removed, leaving two word indexes, a syntactic category, and an argument number. Unlabeled dependencies further eliminate the syntactic category and argument number, leaving a pair of word indexes. Performance is measured using precision, recall, and F-measure against the annotated dependency structures in CCGbank. Precision is the fraction of predicted dependencies which are in CCGbank, recall is the fraction of CCGbank dependencies produced by the parser, and F-measure is the harmonic mean of precision and recall. For comparison, we also trained a syntactic version of our parser, ASP-SYN, using only the CCGbank lexicon and grammar. Comparing against this parser lets us measure the effect of the relation extraction task on syntactic parsing. Table 2 shows the results of our evaluation. For comparison, we include results for two existing syntactic CCG parsers: C&C, the current state-of-the-art CCG parser (Clark and Curran, 2007b), and the next best system (Hockenmaier, 2003a). Both ASP and ASP-SYN perform reasonably well, within 2.5% of the performance of C&C at the same coverage level. However, ASP1194 Logical Form Extraction Extraction Accuracy Precision Recall ASP 0.28 0.90 0.32 K&M-2012 0.14 1.00 0.06 PIPELINE 0.2 0.63 0.17 Table 3: Logical form accuracy and extraction precision/recall on the annotated test set. The high extraction recall for ASP shows that it produces more complete logical forms than either baseline. SYN outperforms ASP by around 0.5%, suggesting that ASP’s additional semantic knowledge slightly hurts syntactic parsing performance. This performance loss appears to be largely due to poor entity mention detection, as we found that not using entity mention lexicon entries at test time improves ASP’s labeled and unlabeled F-scores by 0.3% on Section 00. The knowledge base contains many infrequently-mentioned entities with common names; these entities contribute incorrect semantic type information that confuses the parser. 5.5 Semantic Evaluation We performed two semantic evaluations to better understand ASP’s ability to construct logical forms. The first evaluation emphasizes precision over recall, and the second evaluation accurately measures recall using a manually labeled test set. 
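The supertag pruning and backoff strategy of Section 5.3 can be sketched as follows; the per-token category probabilities would come from the trained logistic regression classifier (faked here with a fixed table), and try_parse is a hypothetical hook into the parser.

```python
def prune_supertags(tag_probs, alpha):
    """Keep the categories whose probability is within a factor alpha of the best
    category for each token."""
    pruned = []
    for probs in tag_probs:                      # one {category: probability} dict per token
        best = max(probs.values())
        pruned.append([cat for cat, p in probs.items() if p >= alpha * best])
    return pruned

def parse_with_backoff(tag_probs, try_parse, alphas=(0.01, 0.001)):
    """Parse with tight pruning first (alpha = 0.01), backing off to alpha = 0.001
    if the initial parsing attempt fails."""
    for alpha in alphas:
        parse = try_parse(prune_supertags(tag_probs, alpha))
        if parse is not None:
            return parse
    return None

# Toy usage: a two-token input and a fake parser that needs the category N for every token.
probs = [{"N": 0.80, "N/N": 0.15, "S[dcl]": 0.05}, {"N": 0.90, "NP": 0.10}]
fake_parser = lambda tags: "parsed" if all("N" in t for t in tags) else None
print(parse_with_backoff(probs, fake_parser))
```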
5.5.1 Baselines For comparison, we also trained two baseline models. The first baseline, PIPELINE, is a pipelined syntax-then-semantics approach designed to mimic Boxer (Bos, 2005). This baseline first syntactically parses each sentence using ASP-SYN, then produces a semantic analysis by assigning a logical form to each word. We train this baseline using the semantic objective (Section 4.2) while holding fixed the syntactic parse of each sentence. Note that, unlike Boxer, this baseline learns which logical form to assign each word, and its logical forms contain NELL predicates. The second baseline, K&M-2012, is the approach of Krishnamurthy and Mitchell (2012), representing the state-of-the-art in distantlysupervised semantic parsing. This approach trains a semantic parser by combining distant semantic supervision with syntactic supervision from dependency parses. The best performing variant of this system also uses dependency parses at test time to constrain the interpretation of test sentences – hence, this system also uses a pipelined syntax-then-semantics approach. To improve comparability, we reimplemented this approach using our parsing model, which has richer features than were used in their paper. 5.5.2 Information Extraction Evaluation The information extraction evaluation uses each system to extract logical forms from a large corpus of sentences, then measures the fraction of extracted logical forms that are correct. The test set consists of 8.5k sentences sampled from the held-out Wikipedia sentences. Each system was run on this data set, extracting all logical forms from each sentence that entailed at least one category or relation instance. We ranked these extractions using the parser’s inside chart score, then manually annotated a sample of 250 logical forms from each system for correctness. Logical forms were marked correct if all category and relation instances entailed by the logical form were expressed by the sentence. Note that a correct logical form need not entail all of the relations expressed by the sentence, reflecting an emphasis on precision over recall. Figure 2 shows some example logical forms produced by ASP in the evaluation. The annotated sample of logical forms allows us to estimate precision for each system as a function of the number of correct extractions (Figure 3). The number of correct extractions is directly proportional to recall, and was estimated from the total number of extractions and precision at each rank in the sample. All three systems initially have high precision, implying that their extracted logical forms express facts found in the sentence. However, ASP produces 3 times more correct logical forms than either baseline because it jointly analyzes syntax and semantics. The baselines suffer from reduced recall because they depend on receiving an accurate syntactic parse as input; syntactic parsing errors cause these systems to fail. Examining the incorrect logical forms produced by ASP reveals that incorrect mention detection is by far the most common source of mistakes. Approximately 50% of errors are caused by marking common nouns as entity mentions (e.g., marking “coin” as a COMPANY). These errors occur because the knowledge base contains many infrequently mentioned entities with relatively common names. Another 30% of errors are caused by assigning an incorrect type to a common proper noun (e.g, marking “Bolivia” as a CITY). This analysis suggests that performing entity linking before parsing could significantly reduce errors. 
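One plausible reading (ours, not necessarily the authors' exact procedure) of how the curve in Figure 3 is estimated from a ranked, partially annotated sample: precision at each annotated rank is scaled up by the size of the full ranked extraction list to give the expected number of correct logical forms.

```python
def precision_curve(sample_judgments, total_extractions):
    """sample_judgments: 1/0 correctness labels for a ranked sample of extracted
    logical forms. Returns (expected number of correct extractions, precision)
    points, scaling sample ranks up to the full ranked list."""
    points, correct, n = [], 0, len(sample_judgments)
    for rank, is_correct in enumerate(sample_judgments, start=1):
        correct += is_correct
        precision = correct / rank
        expected_correct = precision * (rank / n) * total_extractions
        points.append((expected_correct, precision))
    return points

# Toy usage: 10 judged extractions standing in for the 250-item annotated sample.
print(precision_curve([1, 1, 1, 0, 1, 1, 0, 1, 0, 1], total_extractions=1000)[-1])
# -> (700.0, 0.7): an estimated 700 correct extractions at precision 0.7
```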
1195 Sentence: De Niro and Joe Pesci in “Goodfellas” offered a virtuoso display of the director’s bravura cinematic technique and reestablished, enhanced, and consolidated his reputation. Annotation: LF: λx.∀p ∈{λd.M(d, “de niro”), λj.M(j, “joe pesci”)}∃y.p(x) ∧STARREDINMOVIE(x, y) ∧M(y, “goodfellas”) Instances: STARREDINMOVIE(de niro, goodfellas), STARREDINMOVIE(joe pesci, goodfellas) Prediction: LF: λx.∀p ∈{λd.M(d, “de niro”), λj.M(j, “joe pesci”)}∃y.p(x) ∧STARREDINMOVIE(x, y) ∧M(y, “goodfellas”) Instances: STARREDINMOVIE(de niro, goodfellas), STARREDINMOVIE(joe pesci, goodfellas) Logical form accuracy: 1 / 1 Extraction Precision: 2 / 2 Extraction Recall: 2 / 2 Sentence: In addition to the University of Illinois, Champaign is also home to Parkland College. Annotation: LF: ∃c, p.M(c, “champaign”) ∧CITY(c) ∧M(p, “parkland college”) ∧UNIVERSITYINCITY(p, c) Instances: CITY(champaign), UNIVERSITYINCITY(parkland college, champaign) Prediction: LF 1: λx.∃yM(y, “illinois”) ∧M(x, “university”) ∧CITYLOCATEDINSTATE(x, y) LF 2: ∃c, p.M(c, “champaign”) ∧CITY(c) ∧M(p, “parkland college”) ∧UNIVERSITYINCITY(p, c) Instances: CITY(champaign), UNIVERSITYINCITY(parkland college, champaign), CITYLOCATEDINSTATE(university, illinois) Logical form accuracy: 1 / 1 Extraction Precision: 2 / 3 Extraction Recall: 2 / 2 Figure 4: Two test examples with ASP’s predictions and error calculations. The annotated logical forms are for the italicized sentence spans, while the extracted logical forms are for the underlined spans. 5.5.3 Annotated Sentence Evaluation A limitation of the previous evaluation is that it does not measure the completeness of predicted logical forms, nor estimate what portion of sentences are left unanalyzed. We conducted a second evaluation to measure these quantities. The data for this evaluation consists of sentences annotated with logical forms for subspans. We manually annotated Wikipedia sentences from the held-out set with logical forms for the largest subspans for which a logical form existed. To avoid trivial cases, we only annotated logical forms containing at least one category or relation predicate and at least one mention. We also chose not to annotate mentions of entities that are not in the knowledge base, as no system would be able to correctly identify them. The corpus contains 97 sentences with 100 annotated logical forms. We measured performance using two metrics: logical form accuracy, and extraction precision/recall. Logical form accuracy examines the predicted logical form for the smallest subspan of the sentence containing the annotated span, and marks this prediction correct if it exactly matches the annotation. A limitation of this metric is that it does not assign partial credit to logical forms that are close to, but do not exactly match, the annotation. The extraction metric assigns partial credit by computing the precision and recall of the category and relation instances entailed by the predicted logical form, using those entailed by the annotated logical form as the gold standard. Figure 4 shows the computation of both error metrics on two examples from the test corpus. Table 3 shows the results of the annotated sentence evaluation. ASP outperforms both baselines in logical form accuracy and extraction recall, suggesting that it produces more complete analyses than either baseline. The extraction precision of 90% suggests that ASP rarely extracts incorrect information. 
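The extraction precision/recall metric of Section 5.5.3 compares the sets of category and relation instances entailed by the predicted and annotated logical forms; below is a minimal set-based sketch that reproduces the numbers of the second example in Figure 4 (instances are represented as plain tuples).

```python
def extraction_scores(predicted_instances, gold_instances):
    """Precision/recall of entailed category and relation instances, as sets of tuples."""
    predicted, gold = set(predicted_instances), set(gold_instances)
    tp = len(predicted & gold)
    precision = tp / len(predicted) if predicted else 0.0
    recall = tp / len(gold) if gold else 0.0
    return precision, recall

# Second example from Figure 4: the prediction adds one spurious instance.
gold = {("CITY", "champaign"), ("UNIVERSITYINCITY", "parkland college", "champaign")}
pred = gold | {("CITYLOCATEDINSTATE", "university", "illinois")}
print(extraction_scores(pred, gold))   # -> precision 2/3, recall 2/2
```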
Precision is higher in this evaluation because every sentence in the data set has at least one correct extraction. 6 Discussion We present an approach to training a joint syntactic and semantic parser. Our parser ASP produces a full syntactic parse of any sentence, while simultaneously producing logical forms for sentence spans that have a semantic representation within its predicate vocabulary. The parser is trained by jointly optimizing performance on a syntactic parsing task and a distantly-supervised relation extraction task. Experimental results demonstrate that jointly analyzing syntax and semantics triples the number of extracted logical forms over approaches that first analyze syntax, then semantics. However, we also find that incorporating semantics slightly reduces syntactic parsing performance. Poor entity mention detection is a major source of error in both cases, suggesting that future work should consider integrating entity linking with joint syntactic and semantic parsing. Acknowledgments This work was supported in part by DARPA under award FA8750-13-2-0005. We additionally thank Jamie Callan and Chris R´e’s Hazy group for collecting and processing the Wikipedia corpus. 1196 References Collin F. Baker, Charles J. Fillmore, and John B. Lowe. 1998. The berkeley framenet project. In Proceedings of the 17th International Conference on Computational Linguistics - Volume 1. Srinivas Bangalore and Aravind K. Joshi. 1999. Supertagging: an approach to almost parsing. Computational Linguistics, 25(2):237–265. Jonathan Berant, Andrew Chou, Roy Frostig, and Percy Liang. 2013. Semantic parsing on Freebase from question-answer pairs. In Proceedings of the 2013 Conference on Empirical Methods in Natural Language Processing. Johan Bos. 2005. Towards wide-coverage semantic interpretation. In In Proceedings of Sixth International Workshop on Computational Semantics IWCS-6. Qingqing Cai and Alexander Yates. 2013a. Largescale Semantic Parsing via Schema Matching and Lexicon Extension. In Proceedings of the Annual Meeting of the Association for Computational Linguistics (ACL). Qingqing Cai and Alexander Yates. 2013b. Semantic Parsing Freebase: Towards Open-domain Semantic Parsing. In Proceedings of the Second Joint Conference on Lexical and Computational Semantics (*SEM). Andrew Carlson, Justin Betteridge, Bryan Kisiel, Burr Settles, Estevam R. Hruschka Jr., and Tom M. Mitchell. 2010. Toward an architecture for neverending language learning. In Proceedings of the Twenty-Fourth AAAI Conference on Artificial Intelligence. Stephen Clark and James R. Curran. 2004. The importance of supertagging for wide-coverage CCG parsing. In Proceedings of the 20th International Conference on Computational Linguistics. Stephen Clark and James R. Curran. 2007a. Perceptron training for a wide-coverage lexicalizedgrammar parser. In Proceedings of the Workshop on Deep Linguistic Processing. Stephen Clark and James R. Curran. 2007b. Widecoverage efficient statistical parsing with CCG and log-linear models. Computational Linguistics, 33(4):493–552. James Clarke, Dan Goldwasser, Ming-Wei Chang, and Dan Roth. 2010. Driving semantic parsing from the world’s response. In Proceedings of the Fourteenth Conference on Computational Natural Language Learning. Ruifang Ge and Raymond J. Mooney. 2005. A statistical semantic parser that integrates syntax and semantics. In Proceedings of the Ninth Conference on Computational Natural Language Learning. Julia Hockenmaier and Mark Steedman. 2002a. 
Acquiring compact lexicalized grammars from a cleaner treebank. In Proceedings of Third International Conference on Language Resources and Evaluation. Julia Hockenmaier and Mark Steedman. 2002b. Generative models for statistical parsing with combinatory categorial grammar. In Proceedings of the 40th Annual Meeting on Association for Computational Linguistics. Julia Hockenmaier. 2003a. Data and Models for Statistical Parsing with Combinatory Categorial Grammar. Ph.D. thesis, University of Edinburgh. Julia Hockenmaier. 2003b. Parsing with generative models of predicate-argument structure. In Proceedings of the 41st Annual Meeting on Association for Computational Linguistics - Volume 1. Raphael Hoffmann, Congle Zhang, Xiao Ling, Luke S. Zettlemoyer, and Daniel S. Weld. 2011. Knowledge-based weak supervision for information extraction of overlapping relations. In The 49th Annual Meeting of the Association for Computational Linguistics: Human Language Technologies. Rohit J. Kate and Raymond J. Mooney. 2006. Using string-kernels for learning semantic parsers. In 21st International Conference on Computational Linguistics and 44th Annual Meeting of the Association for Computational Linguistics, Proceedings of the Conference. Jayant Krishnamurthy and Tom M. Mitchell. 2012. Weakly supervised training of semantic parsers. In Proceedings of the 2012 Joint Conference on Empirical Methods in Natural Language Processing and Computational Natural Language Learning. Tom Kwiatkowski, Luke Zettlemoyer, Sharon Goldwater, and Mark Steedman. 2011. Lexical generalization in CCG grammar induction for semantic parsing. In Proceedings of the Conference on Empirical Methods in Natural Language Processing. Tom Kwiatkowski, Eunsol Choi, Yoav Artzi, and Luke Zettlemoyer. 2013. Scaling semantic parsers with on-the-fly ontology matching. In Proceedings of the 2013 Conference on Empirical Methods in Natural Language Processing. Mike Lewis and Mark Steedman. 2013. Combined distributional and logical semantics. Transactions of the Association for Computational Linguistics, 1:179–192. Percy Liang, Michael I. Jordan, and Dan Klein. 2011. Learning dependency-based compositional semantics. In Proceedings of the Association for Computational Linguistics, Portland, Oregon. Association for Computational Linguistics. 1197 Wei Lu, Hwee Tou Ng, Wee Sun Lee, and Luke S. Zettlemoyer. 2008. A generative model for parsing natural language to meaning representations. In Proceedings of the Conference on Empirical Methods in Natural Language Processing. Nathan D. Ratliff, J. Andrew Bagnell, and Martin A. Zinkevich. 2006. (online) subgradient methods for structured prediction. Artificial Intelligence and Statistics. Mark Steedman. 1996. Surface Structure and Interpretation. The MIT Press, Cambridge, MA, USA. Yuk Wah Wong and Raymond J. Mooney. 2006. Learning for semantic parsing with statistical machine translation. In Proceedings of the Human Language Technology Conference of the NAACL. Yuk Wah Wong and Raymond J. Mooney. 2007. Learning synchronous grammars for semantic parsing with lambda calculus. In Proceedings of the 45th Annual Meeting of the Association for Computational Linguistics. Mohamed Yahya, Klaus Berberich, Shady Elbassuoni, Maya Ramanath, Volker Tresp, and Gerhard Weikum. 2012. Natural language questions for the web of data. In Proceedings of the 2012 Joint Conference on Empirical Methods in Natural Language Processing and Computational Natural Language Learning. John M. Zelle and Raymond J. Mooney. 1996. 
Learning to parse database queries using inductive logic programming. In Proceedings of the thirteenth national conference on Artificial Intelligence. Luke S. Zettlemoyer and Michael Collins. 2005. Learning to map sentences to logical form: structured classification with probabilistic categorial grammars. In UAI ’05, Proceedings of the 21st Conference in Uncertainty in Artificial Intelligence. 1198
Proceedings of the 52nd Annual Meeting of the Association for Computational Linguistics, pages 1199–1209, Baltimore, Maryland, USA, June 23-25 2014. c⃝2014 Association for Computational Linguistics Learning Semantic Hierarchies via Word Embeddings Ruiji Fu†, Jiang Guo†, Bing Qin†, Wanxiang Che†, Haifeng Wang‡, Ting Liu†∗ †Research Center for Social Computing and Information Retrieval Harbin Institute of Technology, China ‡Baidu Inc., Beijing, China {rjfu, jguo, bqin, car, tliu}@ir.hit.edu.cn [email protected] Abstract Semantic hierarchy construction aims to build structures of concepts linked by hypernym–hyponym (“is-a”) relations. A major challenge for this task is the automatic discovery of such relations. This paper proposes a novel and effective method for the construction of semantic hierarchies based on word embeddings, which can be used to measure the semantic relationship between words. We identify whether a candidate word pair has hypernym–hyponym relation by using the word-embedding-based semantic projections between words and their hypernyms. Our result, an F-score of 73.74%, outperforms the state-of-theart methods on a manually labeled test dataset. Moreover, combining our method with a previous manually-built hierarchy extension method can further improve Fscore to 80.29%. 1 Introduction Semantic hierarchies are natural ways to organize knowledge. They are the main components of ontologies or semantic thesauri (Miller, 1995; Suchanek et al., 2008). In the WordNet hierarchy, senses are organized according to the “is-a” relations. For example, “dog” and “canine” are connected by a directed edge. Here, “canine” is called a hypernym of “dog.” Conversely, “dog” is a hyponym of “canine.” As key sources of knowledge, semantic thesauri and ontologies can support many natural language processing applications. However, these semantic resources are limited in its scope and domain, and their manual construction is knowledge intensive and time consuming. Therefore, many researchers ∗Email correspondence. 生物 organism 植物 plant 毛茛科 Ranunculaceae 乌头属 Aconitum 乌头 aconite 植物药 medicinal plant 药品 medicine 生物 organism 植物 plant 毛茛科 Ranunculaceae 乌头属 Aconitum 乌头 aconite 植物药 medicinal plant 药品 medicine Figure 1: An example of semantic hierarchy construction. have attempted to automatically extract semantic relations or to construct taxonomies. A major challenge for this task is the automatic discovery of hypernym-hyponym relations. Fu et al. (2013) propose a distant supervision method to extract hypernyms for entities from multiple sources. The output of their model is a list of hypernyms for a given enity (left panel, Figure 1). However, there usually also exists hypernym–hyponym relations among these hypernyms. For instance, “植物(plant)” and “毛茛科(Ranunculaceae)” are both hypernyms of the entity “乌头(aconit),” and “植 物(plant)” is also a hypernym of “毛茛科 (Ranunculaceae).” Given a list of hypernyms of an entity, our goal in the present work is to construct a semantic hierarchy of these hypernyms (right panel, Figure 1).1 Some previous works extend and refine manually-built semantic hierarchies by using other resources (e.g., Wikipedia) (Suchanek et al., 2008). However, the coverage is limited by the scope of the resources. Several other works relied heavily on lexical patterns, which would suffer from deficiency because such patterns can only cover a small proportion of complex linguistic circumstances (Hearst, 1992; Snow et al., 2005). 1In this study, we focus on Chinese semantic hierarchy construction. 
The proposed method can be easily adapted to other languages. 1199 Besides, distributional similarity methods (Kotlerman et al., 2010; Lenci and Benotto, 2012) are based on the assumption that a term can only be used in contexts where its hypernyms can be used and that a term might be used in any contexts where its hyponyms are used. However, it is not always rational. Our previous method based on web mining (Fu et al., 2013) works well for hypernym extraction of entity names, but it is unsuitable for semantic hierarchy construction which involves many words with broad semantics. Moreover, all of these methods do not use the word semantics effectively. This paper proposes a novel approach for semantic hierarchy construction based on word embeddings. Word embeddings, also known as distributed word representations, typically represent words with dense, low-dimensional and realvalued vectors. Word embeddings have been empirically shown to preserve linguistic regularities, such as the semantic relationship between words (Mikolov et al., 2013b). For example, v(king) −v(queen) ≈v(man) −v(woman), where v(w) is the embedding of the word w. We observe that a similar property also applies to the hypernym–hyponym relationship (Section 3.3), which is the main inspiration of the present study. However, we further observe that hypernym– hyponym relations are more complicated than a single offset can represent. To address this challenge, we propose a more sophisticated and general method — learning a linear projection which maps words to their hypernyms (Section 3.3.1). Furthermore, we propose a piecewise linear projection method based on relation clustering to better model hypernym–hyponym relations (Section 3.3.2). Subsequently, we identify whether an unknown word pair is a hypernym–hyponym relation using the projections (Section 3.4). To the best of our knowledge, we are the first to apply word embeddings to this task. For evaluation, we manually annotate a dataset containing 418 Chinese entities and their hypernym hierarchies, which is the first dataset for this task as far as we know. The experimental results show that our method achieves an F-score of 73.74% which significantly outperforms the previous state-of-the-art methods. Moreover, combining our method with the manually-built hierarchy extension method proposed by Suchanek et al. (2008) can further improve F-score to 80.29%. 2 Background As main components of ontologies, semantic hierarchies have been studied by many researchers. Some have established concept hierarchies based on manually-built semantic resources such as WordNet (Miller, 1995). Such hierarchies have good structures and high accuracy, but their coverage is limited to fine-grained concepts (e.g., “Ranunculaceae” is not included in WordNet.). We have made similar obsevation that about a half of hypernym–hyponym relations are absent in a Chinese semantic thesaurus. Therefore, a broader range of resources is needed to supplement the manually built resources. In the construction of the famous ontology YAGO, Suchanek et al. (2008) link the categories in Wikipedia onto WordNet. However, the coverage is still limited by the scope of Wikipedia. Several other methods are based on lexical patterns. They use manually or automatically constructed lexical patterns to mine hypernym– hyponym relations from text corpora. A hierarchy can then be built based on these pairwise relations. 
The pioneer work by Hearst (1992) has found out that linking two noun phrases (NPs) via certain lexical constructions often implies hypernym relations. For example, NP1 is a hypernym of NP2 in the lexical pattern “such NP1 as NP2.” Snow et al. (2005) propose to automatically extract large numbers of lexico-syntactic patterns and subsequently detect hypernym relations from a large newswire corpus. Their method relies on accurate syntactic parsers, and the quality of the automatically extracted patterns is difficult to guarantee. Generally speaking, these pattern-based methods often suffer from low recall or precision because of the coverage or the quality of the patterns. The distributional methods assume that the contexts of hypernyms are broader than the ones of their hyponyms. For distributional similarity computing, each word is represented as a semantic vector composed of the pointwise mutual information (PMI) with its contexts. Kotlerman et al. (2010) design a directional distributional measure to infer hypernym–hyponym relations based on the standard IR Average Precision evaluation measure. Lenci and Benotto (2012) propose another measure focusing on the contexts that hypernyms do not share with their hyponyms. However, broader semantics may not always infer broader contexts. For example, for terms “Obama’ and 1200 “American people”, it is hard to say whose contexts are broader. Our previous work (Fu et al., 2013) applies a web mining method to discover the hypernyms of Chinese entities from multiple sources. We assume that the hypernyms of an entity co-occur with it frequently. It works well for named entities. But for class names (e.g., singers in Hong Kong, tropical fruits) with wider range of meanings, this assumption may fail. In this paper, we aim to identify hypernym– hyponym relations using word embeddings, which have been shown to preserve good properties for capturing semantic relationship between words. 3 Method In this section, we first define the task formally. Then we elaborate on our proposed method composed of three major steps, namely, word embedding training, projection learning, and hypernym– hyponym relation identification. 3.1 Task Definition Given a list of hypernyms of an entity, our goal is to construct a semantic hierarchy on it (Figure 1). We represent the hierarchy as a directed graph G, in which the nodes denote the words, and the edges denote the hypernym–hyponym relations. Hypernym-hyponym relations are asymmetric and transitive when words are unambiguous: • ∀x, y ∈L : x H −→y ⇒¬(y H −→x) • ∀x, y, z ∈L : (x H −→z ∧z H −→y) ⇒x H −→y Here, L denotes the list of hypernyms. x, y and z denote the hypernyms in L. We use H −→to represent a hypernym–hyponym relation in this paper. Actually, x, y and z are unambiguous as the hypernyms of a certain entity. Therefore, G should be a directed acyclic graph (DAG). 3.2 Word Embedding Training Various models for learning word embeddings have been proposed, including neural net language models (Bengio et al., 2003; Mnih and Hinton, 2008; Mikolov et al., 2013b) and spectral models (Dhillon et al., 2011). More recently, Mikolov et al. (2013a) propose two log-linear models, namely the Skip-gram and CBOW model, to efficiently induce word embeddings. These two models can be trained very efficiently on a largescale corpus because of their low time complexity. No. 
Examples 1 v(虾) −v(对虾) ≈v(鱼) −v(金鱼) v(shrimp) −v(prawn) ≈v(fish) −v(gold fish) 2 v(工人) −v(木匠) ≈v(演员) −v(小丑) v(laborer) −v(carpenter) ≈v(actor) −v(clown) 3 v(工人) −v(木匠) ̸≈v(鱼) −v(金鱼) v(laborer) −v(carpenter) ̸≈v(fish) −v(gold fish) Table 1: Embedding offsets on a sample of hypernym–hyponym word pairs. Additionally, their experiment results have shown that the Skip-gram model performs best in identifying semantic relationship among words. Therefore, we employ the Skip-gram model for estimating word embeddings in this study. The Skip-gram model adopts log-linear classifiers to predict context words given the current word w(t) as input. First, w(t) is projected to its embedding. Then, log-linear classifiers are employed, taking the embedding as input and predict w(t)’s context words within a certain range, e.g. k words in the left and k words in the right. After maximizing the log-likelihood over the entire dataset using stochastic gradient descent (SGD), the embeddings are learned. 3.3 Projection Learning Mikolov et al. (2013b) observe that word embeddings preserve interesting linguistic regularities, capturing a considerable amount of syntactic/semantic relations. Looking at the well-known example: v(king) −v(queen) ≈v(man) − v(woman), it indicates that the embedding offsets indeed represent the shared semantic relation between the two word pairs. We observe that the same property also applies to some hypernym–hyponym relations. As a preliminary experiment, we compute the embedding offsets between some randomly sampled hypernym–hyponym word pairs and measure their similarities. The results are shown in Table 1. The first two examples imply that a word can also be mapped to its hypernym by utilizing word embedding offsets. However, the offset from “carpenter” to “laborer” is distant from the one from “gold fish” to “fish,” indicating that hypernym–hyponym relations should be more complicated than a single vector offset can represent. To verify this hypothesis, we compute the embedding offsets over all hypernym– 1201 运动员-足球球员 sportsman - footballer 职员-公务员 staff - civil servant 工人-园丁 laborer - gardener 海员-领航员 seaman - navigator 演员-歌手 actor - singer 演员-主角 actor - protagonist 演员-小丑 actor - clown 职位-校长 position - headmaster 演员-斗牛士 actor - matador 工人-临时工 laborer - temporary worker 工人-木匠 laborer - carpenter 职位-总领事 position – consul general 职员-空姐 staff - airline hostess 职员-售货员 staff - salesclerk 职员-售票员 staff - conductor 鸡-公鸡 chicken - cock 羊-小尾寒羊 sheep - small-tail Han sheep 羊-公羊 sheep - ram 马-斑马 equus - zebra 虾-对虾 shrimp - prawn 狗-警犬 dog - police dog 兔-长毛兔 rabbit - wool rabbit 海豚-白鳍豚 dolphin - white-flag dolphin 鱼-鲨鱼 fish - shark 鱼-热带鱼 fish - tropical fish 鱼-金鱼 fish - gold fish 蟹-海蟹 crab - sea crab 驴-野驴 donkey - wild ass Figure 2: Clusters of the vector offsets in training data. The figure shows that the vector offsets distribute in some clusters. The left cluster shows some hypernym–hyponym relations about animals. The right one shows some relations about people’s occupations. hyponym word pairs in our training data and visualize them.2 Figure 2 shows that the relations are adequately distributed in the clusters, which implies that hypernym–hyponym relations indeed can be decomposed into more fine-grained relations. Moreover, the relations about animals are spatially close, but separate from the relations about people’s occupations. To address this challenge, we propose to learn the hypernym–hyponym relations using projection matrices. 
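As a sketch of the preliminary experiment behind Table 1 and Figure 2, and of the Skip-gram training of Section 3.2: embeddings are trained with gensim (assumed available; parameter names follow gensim 4.x), offsets v(hypernym) − v(hyponym) are computed for a few toy English stand-in pairs, and the offsets are clustered with k-means and projected to 2-D with PCA, as done for Figure 2.

```python
import numpy as np
from gensim.models import Word2Vec          # assumed dependency; sg=1 selects Skip-gram
from sklearn.cluster import KMeans
from sklearn.decomposition import PCA

# Toy tokenized corpus; the paper trains on ~30M LTP-segmented Baidubaike sentences.
corpus = [["a", "goldfish", "is", "a", "fish"],
          ["a", "shark", "is", "a", "fish"],
          ["a", "carpenter", "is", "a", "laborer"],
          ["a", "clown", "is", "an", "actor"]] * 50
model = Word2Vec(corpus, vector_size=50, window=5, min_count=1, sg=1, epochs=10)

def offset(hypo, hyper):
    """Embedding offset v(hypernym) - v(hyponym) for one word pair."""
    return model.wv[hyper] - model.wv[hypo]

# (hyponym, hypernym) pairs standing in for the CilinE training pairs.
pairs = [("goldfish", "fish"), ("shark", "fish"),
         ("carpenter", "laborer"), ("clown", "actor")]
offsets = np.vstack([offset(x, y) for x, y in pairs])

# Cluster the offsets (k is tuned on a development set in the paper) and project
# them to 2-D with PCA for inspection, as in the Figure 2 visualization.
labels = KMeans(n_clusters=2, n_init=10, random_state=0).fit_predict(offsets)
coords = PCA(n_components=2).fit_transform(offsets)
print(labels, coords.shape)
```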
3.3.1 A Uniform Linear Projection Intuitively, we assume that all words can be projected to their hypernyms based on a uniform transition matrix. That is, given a word x and its hypernym y, there exists a matrix Φ so that y = Φx. For simplicity, we use the same symbols as the words to represent the embedding vectors. Obtaining a consistent exact Φ for the projection of all hypernym–hyponym pairs is difficult. Instead, we can learn an approximate Φ using Equation 1 on the training data, which minimizes the meansquared error: Φ∗= arg min Φ 1 N X (x,y) ∥Φx −y ∥2 (1) where N is the number of (x, y) word pairs in the training data. This is a typical linear regression problem. The only difference is that our predictions are multi-dimensional vectors instead of scalar values. We use SGD for optimization. 2Principal Component Analysis (PCA) is applied for dimensionality reduction. 3.3.2 Piecewise Linear Projections A uniform linear projection may still be underrepresentative for fitting all of the hypernym– hyponym word pairs, because the relations are rather diverse, as shown in Figure 2. To better model the various kinds of hypernym–hyponym relations, we apply the idea of piecewise linear regression (Ritzema, 1994) in this study. Specifically, the input space is first segmented into several regions. That is, all word pairs (x, y) in the training data are first clustered into several groups, where word pairs in each group are expected to exhibit similar hypernym–hyponym relations. Each word pair (x, y) is represented with their vector offsets: y −x for clustering. The reasons are twofold: (1) Mikolov’s work has shown that the vector offsets imply a certain level of semantic relationship. (2) The vector offsets distribute in clusters well, and the word pairs which are close indeed represent similar relations, as shown in Figure 2. Then we learn a separate projection for each cluster, respectively (Equation 2). Φ∗ k = arg min Φk 1 Nk X (x,y)∈Ck ∥Φkx −y ∥2 (2) where Nk is the amount of word pairs in the kth cluster Ck. We use the k-means algorithm for clustering, where k is tuned on a development dataset. 3.3.3 Training Data To learn the projection matrices, we extract training data from a Chinese semantic thesaurus, Tongyi Cilin (Extended) (CilinE for short) which 1202 … … … … … Root Level 1 Level 2 Level 3 Level 4 Level 5 物 object 动物 animal 昆虫 insect -蜻蜓 dragonfly B i 18 A 06@ 蜻蜓 : 动物 (dragonfly : animal) 蜻蜓 : 昆虫 (dragonfly : insect) CilinE hypernymhyponym pairs Sense Code: Bi18A06@ Sense Code: Bi18A Sense Code: Bi18 Sense Code: Bi Sense Code: B 昆虫 : 动物 (insect : animal) Figure 3: Hierarchy of CilinE and an Example of Training Data Generation contains 100,093 words (Che et al., 2010).3 CilinE is organized as a hierarchy of five levels, in which the words are linked by hypernym–hyponym relations (right panel, Figure 3). Each word in CilinE has one or more sense codes (some words are polysemous) that indicate its position in the hierarchy. The senses of words in the first level, such as “物(object)” and “时间(time),” are very general. The fourth level only has sense codes without real words. Therefore, we extract words in the second, third and fifth levels to constitute hypernym– hyponym pairs (left panel, Figure 3). Note that mapping one hyponym to multiple hypernyms with the same projection (Φx is unique) is difficult. Therefore, the pairs with the same hyponym but different hypernyms are expected to be clustered into separate groups. 
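Equations 1 and 2 amount to ordinary (piecewise) multi-output linear regression. The authors optimize with SGD; the sketch below swaps in a closed-form least-squares solve per cluster for brevity, and the separate clustering of direct and indirect pairs described above is omitted. Function and variable names are illustrative, not taken from the paper.

```python
import numpy as np
from sklearn.cluster import KMeans

def fit_projection(X, Y):
    """Least-squares Phi minimizing ||Phi x - y||^2 over pairs (x, y) (Eq. 1).

    X, Y: (N, d) arrays of hyponym / hypernym embeddings.
    Solves X Phi^T ~= Y, i.e. Phi x ~= y for every pair.
    """
    PhiT, *_ = np.linalg.lstsq(X, Y, rcond=None)
    return PhiT.T  # (d, d) projection matrix

def fit_piecewise_projections(X, Y, k):
    """Cluster offsets y - x with k-means, then fit one Phi_k per cluster (Eq. 2)."""
    km = KMeans(n_clusters=k, n_init=10).fit(Y - X)
    projections = {}
    for c in range(k):
        mask = km.labels_ == c
        projections[c] = fit_projection(X[mask], Y[mask])
    return km, projections
```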
Figure 3 shows that the word “dragonfly” in the fifth level has two hypernyms: “insect” in the third level and “animal” in the second level. Hence the relations dragonfly H −→insect and dragonfly H −→animal should fall into different clusters. In our implementation, we apply this constraint by simply dividing the training data into two categories, namely, direct and indirect. Hypernymhyponym word pair (x, y) is classified into the direct category, only if there doesn’t exist another word z in the training data, which is a hypernym of x and a hyponym of y. Otherwise, (x, y) is classified into the indirect category. Then, data in these two categories are clustered separately. 3www.ltp-cloud.com/download/ x y Φk δ x' Φl Figure 4: In this example, Φkx is located in the circle with center y and radius δ. So y is considered as a hypernym of x. Conversely, y is not a hypernym of x′. x y z x y (a) (b) z x y Figure 5: (a) If d(Φjy, x) > d(Φkx, y), we remove the path from y to x; (b) if d(Φjy, x) > d(Φkx, z) and d(Φjy, x) > d(Φiz, y), we reverse the path from y to x. 3.4 Hypernym-hyponym Relation Identification Upon obtaining the clusters of training data and the corresponding projections, we can identify whether two words have a hypernym–hyponym relation. Given two words x and y, we find cluster Ck whose center is closest to the offset y −x, and obtain the corresponding projection Φk. For y to be considered a hypernym of x, one of the two conditions below must hold. Condition 1: The projection Φk puts Φkx close enough to y (Figure 4). Formally, the euclidean distance between Φkx and y: d(Φkx, y) must be less than a threshold δ. d(Φkx, y) =∥Φkx −y ∥2< δ (3) Condition 2: There exists another word z satisfying x H −→z and z H −→y. In this case, we use the transitivity of hypernym–hyponym relations. Besides, the final hierarchy should be a DAG as discussed in Section 3.1. However, the projection method cannot guarantee that theoretically, because the projections are learned from pairwise hypernym–hyponym relations without the whole hierarchy structure. All pairwise hypernym– hyponym relation identification methods would suffer from this problem actually. It is an interesting problem how to construct a globally opti1203 mal semantic hierarchy conforming to the form of a DAG. But this is not the focus of this paper. So if some conflicts occur, that is, a relation circle exists, we remove or reverse the weakest path heuristically (Figure 5). If a circle has only two nodes, we remove the weakest path. If a circle has more than two nodes, we reverse the weakest path to form an indirect hypernym–hyponym relation. 4 Experimental Setup 4.1 Experimental Data In this work, we learn word embeddings from a Chinese encyclopedia corpus named Baidubaike4, which contains about 30 million sentences (about 780 million words). The Chinese segmentation is provided by the open-source Chinese language processing platform LTP5 (Che et al., 2010). Then, we employ the Skip-gram method (Section 3.2) to train word embeddings. Finally we obtain the embedding vectors of 0.56 million words. The training data for projection learning is collected from CilinE (Section 3.3.3). We obtain 15,247 word pairs of hypernym–hyponym relations (9,288 for direct relations and 5,959 for indirect relations). For evaluation, we collect the hypernyms for 418 entities, which are selected randomly from Baidubaike, following Fu et al. (2013). We then ask two annotators to manually label the semantic hierarchies of the correct hypernyms. 
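Returning briefly to the identification step of Section 3.4: given the fitted clusters and projections, Condition 1 reduces to a nearest-cluster lookup followed by a distance test against the threshold δ of Equation 3. The sketch below is illustrative only; the transitivity check of Condition 2 and the circle-breaking heuristics of Figure 5 operate on the resulting graph and are not shown.

```python
import numpy as np

def is_hypernym(x_vec, y_vec, kmeans, projections, delta):
    """Condition 1: y is predicted as a hypernym of x if the projection of the
    nearest offset cluster maps x close enough to y (Equation 3)."""
    c = int(kmeans.predict((y_vec - x_vec).reshape(1, -1))[0])  # cluster closest to offset y - x
    Phi = projections[c]
    return np.linalg.norm(Phi @ x_vec - y_vec) < delta
```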
The final data set contains 655 unique hypernyms and 1,391 hypernym–hyponym relations among them. We randomly split the labeled data into 1/5 for development and 4/5 for testing (Table 2). The hierarchies are represented as relations of pairwise words. We measure the inter-annotator agreement using the kappa coefficient (Siegel and Castellan Jr, 1988). The kappa value is 0.96, which indicates a good strength of agreement. 4.2 Evaluation Metrics We use precision, recall, and F-score as our metrics to evaluate the performances of the methods. Since hypernym–hyponym relations and its reverse (hyponym–hypernym) have one-to-one correspondence, their performances are equal. For 4Baidubaike (baike.baidu.com) is one of the largest Chinese encyclopedias containing more than 7.05 million entries as of September, 2013. 5www.ltp-cloud.com/demo/ Relation # of word pairs Dev. Test hypernym–hyponym 312 1,079 hyponym–hypernym∗ 312 1,079 unrelated 1,044 3,250 Total 1,668 5,408 Table 2: The evaluation data. ∗Since hypernym– hyponym relations and hyponym–hypernym relations have one-to-one correspondence, their numbers are the same. 1 5 10 15 20 0.45 0.5 0.55 0.6 0.65 0.7 0.75 0.8 1 10 20 30 40 50 60 Indirect F1-Score Direct Figure 6: Performance on development data w.r.t. cluster size. simplicity, we only report the performance of the former in the experiments. 5 Results and Analysis 5.1 Varying the Amount of Clusters We first evaluate the effect of different number of clusters based on the development data. We vary the numbers of the clusters both for the direct and indirect training word pairs. As shown in Figure 6, the performance of clustering is better than non-clustering (when the cluster number is 1), thus providing evidences that learning piecewise projections based on clustering is reasonable. We finally set the numbers of the clusters of direct and indirect to 20 and 5, respectively, where the best performances are achieved on the development data. 5.2 Comparison with Previous Work In this section, we compare the proposed method with previous methods, including manually-built hierarchy extension, pairwise relation extraction 1204 P(%) R(%) F(%) MWiki+CilinE 92.41 60.61 73.20 MPattern 97.47 21.41 35.11 MSnow 60.88 25.67 36.11 MbalApinc 54.96 53.38 54.16 MinvCL 49.63 62.84 55.46 MFu 87.40 48.19 62.13 MEmb 80.54 67.99 73.74 MEmb+CilinE 80.59 72.42 76.29 MEmb+Wiki+CilinE 79.78 80.81 80.29 Table 3: Comparison of the proposed method with existing methods in the test set. Pattern Translation w 是[一个|一种] h w is a [a kind of] h w [、] 等h w[,] and other h h [,] 叫[做] w h[,] called w h [,] [像]如w h[,] such as w h [,] 特别是w h[,] especially w Table 4: Chinese Hearst-style lexical patterns. The contents in square brackets are omissible. based on patterns, word distributions, and web mining (Section 2). Results are shown in Table 3. 5.2.1 Overall Comparison MWiki+CilinE refers to the manually-built hierarchy extension method of Suchanek et al. (2008). In our experiment, we use the category taxonomy of Chinese Wikipedia6 to extend CilinE. Table 3 shows that this method achieves a high precision but also a low recall, mainly because of the limited scope of Wikipedia. MPattern refers to the pattern-based method of Hearst (1992). We extract hypernym–hyponym relations in the Baidubaike corpus, which is also used to train word embeddings (Section 4.1). We use the Chinese Hearst-style patterns (Table 4) proposed by Fu et al. (2013), in which w represents a word, and h represents one of its hypernyms. 
The result shows that only a small part of the hypernyms can be extracted based on these patterns because only a few hypernym relations are expressed in these fixed patterns, and many are expressed in highly flexible manners. In the same corpus, we apply the method MSnow originally proposed by Snow et al. (2005). The same training data for projections learn6dumps.wikimedia.org/zhwiki/20131205/ ing from CilinE (Section 3.3.3) is used as seed hypernym–hyponym pairs. Lexico-syntactic patterns are extracted from the Baidubaike corpus by using the seeds. We then develop a logistic regression classifier based on the patterns to recognize hypernym–hyponym relations. This method relies on an accurate syntactic parser, and the quality of the automatically extracted patterns is difficult to guarantee. We re-implement two previous distributional methods MbalApinc (Kotlerman et al., 2010) and MinvCL (Lenci and Benotto, 2012) in the Baidubaike corpus. Each word is represented as a feature vector in which each dimension is the PMI value of the word and its context words. We compute a score for each word pair and apply a threshold to identify whether it is a hypernym–hyponym relation. MFu refers to our previous web mining method (Fu et al., 2013). This method mines hypernyms of a given word w from multiple sources and returns a ranked list of the hypernyms. We select the hypernyms with scores over a threshold of each word in the test set for evaluation. This method assumes that frequent co-occurrence of a noun or noun phrase n in multiple sources with w indicate possibility of n being a hypernym of w. The results presented in Fu et al. (2013) show that the method works well when w is an entity, but not when w is a word with a common semantic concept. The main reason may be that there are relatively more introductory pages about entities than about common words in the Web. MEmb is the proposed method based on word embeddings. Table 3 shows that the proposed method achieves a better recall and F-score than all of the previous methods do. It can significantly (p < 0.01) improve the F-score over the state-ofthe-art method MWiki+CilinE. MEmb and MCilinE can also be combined. The combination strategy is to simply merge all positive results from the two methods together, and then to infer new relations based on the transitivity of hypernym–hyponym relations. The F-score is further improved from 73.74% to 76.29%. Note that, the combined method achieves a 4.43% recall improvement over MEmb, but the precision is almost unchanged. The reason is that the inference based on the relations identified automatically may lead to error propagation. For example, the relation x H −→y is incorrectly identified by MEmb. 1205 P(%) R(%) F(%) MWiki+CilinE 80.39 19.29 31.12 MEmb+CilinE 71.16 52.80 60.62 MEmb+Wiki+CilinE 69.13 61.65 65.17 Table 5: Performance on the out-of-CilinE data in the test set. 0.0 0.2 0.4 0.6 0.8 1.0 0.0 0.2 0.4 0.6 0.8 1.0 Recall Precision G G G G GG G G G G G G GG G MEmb+Wiki+CilinE MEmb+CilinE MWiki+CilinE Figure 7: Precision-Recall curves on the out-ofCilinE data in the test set. When the relation y H −→z from MCilinE is added, it will cause a new incorrect relation x H −→z. Combining MEmb with MWiki+CilinE achieves a 7% F-score improvement over the best baseline MWiki+CilinE. Therefore, the proposed method is complementary to the manually-built hierarchy extension method (Suchanek et al., 2008). 
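The combination strategy used for MEmb+CilinE and MEmb+Wiki+CilinE above is simply a union of the positive pairs produced by each method, followed by inference of new pairs through transitivity. A small illustrative sketch of that step (not the authors' implementation) is:

```python
def combine(relations_a, relations_b):
    """Union two sets of (hyponym, hypernym) pairs, then add pairs implied by transitivity."""
    merged = set(relations_a) | set(relations_b)
    changed = True
    while changed:  # iterate until no new pair can be inferred
        changed = False
        for (x, y) in list(merged):
            for (y2, z) in list(merged):
                if y == y2 and x != z and (x, z) not in merged:
                    merged.add((x, z))
                    changed = True
    return merged
```

As the text notes, this inference step can propagate errors: one incorrect pair from MEmb can license further incorrect pairs once combined with correct CilinE relations.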
5.2.2 Comparison on the Out-of-CilinE Data We are greatly interested in the practical performance of the proposed method on the hypernym– hyponym relations outside of CilinE. We say a word pair is outside of CilinE, as long as there is one word in the pair not existing in CilinE. In our test data, about 62% word pairs are outside of CilinE. Table 5 shows the performances of the best baseline method and our method on the outof-CilinE data. The method exploiting the taxonomy in Wikipedia, MWiki+CilinE, achieves the highest precision but has a low recall. By contrast, our method can discover more hypernym– hyponym relations with some loss of precision, thereby achieving a more than 29% F-score improvement. The combination of these two methods achieves a further 4.5% F-score improvement over MEmb+CilinE. Generally speaking, the proposed method greatly improves the recall but damages the precision. Actually, we can get different precisions and re生物 organism 植物 plant 毛茛科 Ranunculaceae 乌头属 Aconitum 乌头 aconite 植物药 medicinal plant 药品 medicine (a) CilinE 生物 organism 植物 plant 毛茛科 Ranunculaceae 乌头属 Aconitum 乌头 aconite 植物药 medicinal plant 药品 medicine (b) Wikipedia+CilinE 生物 organism 植物 plant 毛茛科 Ranunculaceae 乌头属 Aconitum 乌头 aconite 植物药 medicinal plant 药品 medicine (c) Embedding 生物 organism 植物 plant 毛茛科 Ranunculaceae 乌头属 Aconitum 乌头 aconite 植物药 medicinal plant 药品 medicine (d) Embedding+Wikipedia+CilinE Figure 8: An example for error analysis. The red paths refer to the relations between the named entity and its hypernyms extracted using the web mining method (Fu et al., 2013). The black paths with hollow arrows denote the relations identified by the different methods. The boxes with dotted borders refer to the concepts which are not linked to correct positions. calls by adjusting the threshold δ (Equation 3). Figure 7 shows that MEmb+CilinE achieves a higher precision than MWiki+CilinE when their recalls are the same. When they achieve the same precision, the recall of MEmb+CilinE is higher. 5.3 Error Analysis and Discussion We analyze error cases after experiments. Some cases are shown in Figure 8. We can see that there is only one general relation “植物(plant)” H −→“生物(organism)” existing in CilinE. Some fine-grained relations exist in Wikipedia, but the coverage is limited. Our method based on word embeddings can discover more hypernym– hyponym relations than the previous methods can. When we combine the methods together, we get the correct hierarchy. Figure 8 shows that our method loses the relation “乌头属(Aconitum)” H −→“毛茛科 (Ranunculaceae).” It is because they are very semantically similar (their cosine similarity is 0.9038). Their representations are so close to each other in the embedding space that we have not find projections suitable for these pairs. The 1206 error statistics show that when the cosine similarities of word pairs are greater than 0.8, the recall is only 9.5%. This kind of error accounted for about 10.9% among all the errors in our test set. One possible solution may be adding more data of this kind to the training set. 6 Related Work In addition to the works mentioned in Section 2, we introduce another set of related studies in this section. Evans (2004), Ortega-Mendoza et al. (2007), and Sang (2007) consider web data as a large corpus and use search engines to identify hypernyms based on the lexical patterns of Hearst (1992). However, the low quality of the sentences in the search results negatively influence the precision of hypernym extraction. 
Following the method for discovering patterns automatically (Snow et al., 2005), McNamee et al. (2008) apply the same method to extract hypernyms of entities in order to improve the performance of a question answering system. Ritter et al. (2009) propose a method based on patterns to find hypernyms on arbitrary noun phrases. They use a support vector machine classifier to identify the correct hypernyms from the candidates that match the patterns. As our experiments show, patternbased methods suffer from low recall because of the low coverage of patterns. Besides Kotlerman et al. (2010) and Lenci and Benotto (2012), other researchers also propose directional distributional similarity methods (Weeds et al., 2004; Geffet and Dagan, 2005; Bhagat et al., 2007; Szpektor et al., 2007; Clarke, 2009). However, their basic assumption that a hyponym can only be used in contexts where its hypernyms can be used and that a hypernym might be used in all of the contexts where its hyponyms are used may not always rational. Snow et al. (2006) provides a global optimization scheme for extending WordNet, which is different from the above-mentioned pairwise relationships identification methods. Word embeddings have been successfully applied in many applications, such as in sentiment analysis (Socher et al., 2011b), paraphrase detection (Socher et al., 2011a), chunking, and named entity recognition (Turian et al., 2010; Collobert et al., 2011). These applications mainly utilize the representing power of word embeddings to alleviate the problem of data sparsity. Mikolov et al. (2013a) and Mikolov et al. (2013b) further observe that the semantic relationship of words can be induced by performing simple algebraic operations with word vectors. Their work indicates that word embeddings preserve some interesting linguistic regularities, which might provide support for many applications. In this paper, we improve on their work by learning multiple linear projections in the embedding space, to model hypernym–hyponym relationships within different clusters. 7 Conclusion and Future Work This paper proposes a novel method for semantic hierarchy construction based on word embeddings, which are trained using a large-scale corpus. Using the word embeddings, we learn the hypernym–hyponym relationship by estimating projection matrices which map words to their hypernyms. Further improvements are made using a cluster-based approach in order to model the more fine-grained relations. Then we propose a few simple criteria to identity whether a new word pair is a hypernym–hyponym relation. Based on the pairwise hypernym–hyponym relations, we build semantic hierarchies automatically. In our experiments, the proposed method significantly outperforms state-of-the-art methods and achieves the best F1-score of 73.74% on a manually labeled test dataset. Further experiments show that our method is complementary to the previous manually-built hierarchy extension methods. For future work, we aim to improve word embedding learning under the guidance of hypernym–hyponym relations. By including the hypernym–hyponym relation constraints while training word embeddings, we expect to improve the embeddings such that they become more suitable for this task. Acknowledgments This work was supported by National Natural Science Foundation of China (NSFC) via grant 61133012, 61273321 and the National 863 Leading Technology Research Project via grant 2012AA011102. 
Special thanks to Shiqi Zhao, Zhenghua Li, Wei Song and the anonymous reviewers for insightful comments and suggestions. We also thank Xinwei Geng and Hongbo Cai for their help in the experiments. 1207 References Yoshua Bengio, R´ejean Ducharme, Pascal Vincent, and Christian Janvin. 2003. A neural probabilistic language model. The Journal of Machine Learning Research, 3:1137–1155. Rahul Bhagat, Patrick Pantel, Eduard H Hovy, and Marina Rey. 2007. Ledir: An unsupervised algorithm for learning directionality of inference rules. In EMNLP-CoNLL, pages 161–170. Wanxiang Che, Zhenghua Li, and Ting Liu. 2010. Ltp: A chinese language technology platform. In Coling 2010: Demonstrations, pages 13–16, Beijing, China, August. Daoud Clarke. 2009. Context-theoretic semantics for natural language: an overview. In Proceedings of the Workshop on Geometrical Models of Natural Language Semantics, pages 112–119. Association for Computational Linguistics. Ronan Collobert, Jason Weston, L´eon Bottou, Michael Karlen, Koray Kavukcuoglu, and Pavel Kuksa. 2011. Natural language processing (almost) from scratch. The Journal of Machine Learning Research, 12:2493–2537. Paramveer Dhillon, Dean P Foster, and Lyle H Ungar. 2011. Multi-view learning of word embeddings via cca. In Advances in Neural Information Processing Systems, pages 199–207. Richard Evans. 2004. A framework for named entity recognition in the open domain. Recent Advances in Natural Language Processing III: Selected Papers from RANLP 2003, 260:267–274. Ruiji Fu, Bing Qin, and Ting Liu. 2013. Exploiting multiple sources for open-domain hypernym discovery. In EMNLP, pages 1224–1234. Maayan Geffet and Ido Dagan. 2005. The distributional inclusion hypotheses and lexical entailment. In Proceedings of the 43rd Annual Meeting on Association for Computational Linguistics, pages 107–114. Association for Computational Linguistics. Marti A Hearst. 1992. Automatic acquisition of hyponyms from large text corpora. In Proceedings of the 14th conference on Computational linguisticsVolume 2, pages 539–545. Association for Computational Linguistics. Lili Kotlerman, Ido Dagan, Idan Szpektor, and Maayan Zhitomirsky-Geffet. 2010. Directional distributional similarity for lexical inference. Natural Language Engineering, 16(4):359–389. Alessandro Lenci and Giulia Benotto. 2012. Identifying hypernyms in distributional semantic spaces. In Proceedings of the Sixth International Workshop on Semantic Evaluation, pages 75–79. Association for Computational Linguistics. Paul McNamee, Rion Snow, Patrick Schone, and James Mayfield. 2008. Learning named entity hyponyms for question answering. In Proceedings of the Third International Joint Conference on Natural Language Processing, pages 799–804. Tomas Mikolov, Kai Chen, Greg Corrado, and Jeffrey Dean. 2013a. Efficient estimation of word representations in vector space. arXiv preprint arXiv:1301.3781. Tomas Mikolov, Wen-tau Yih, and Geoffrey Zweig. 2013b. Linguistic regularities in continuous space word representations. In Proceedings of NAACLHLT, pages 746–751. George A Miller. 1995. Wordnet: a lexical database for english. Communications of the ACM, 38(11):39–41. Andriy Mnih and Geoffrey E Hinton. 2008. A scalable hierarchical distributed language model. In Advances in neural information processing systems, pages 1081–1088. Rosa M Ortega-Mendoza, Luis Villase˜nor-Pineda, and Manuel Montes-y G´omez. 2007. Using lexical patterns for extracting hyponyms from the web. In MICAI 2007: Advances in Artificial Intelligence, pages 904–911. 
Springer. Alan Ritter, Stephen Soderland, and Oren Etzioni. 2009. What is this, anyway: Automatic hypernym discovery. In Proceedings of the 2009 AAAI Spring Symposium on Learning by Reading and Learning to Read, pages 88–93. HP Ritzema. 1994. Drainage principles and applications. Erik Tjong Kim Sang. 2007. Extracting hypernym pairs from the web. In Proceedings of the 45th Annual Meeting of the ACL on Interactive Poster and Demonstration Sessions, pages 165–168. Association for Computational Linguistics. Sidney Siegel and N John Castellan Jr. 1988. Nonparametric statistics for the behavioral sciences. McGraw-Hill, New York. Rion Snow, Daniel Jurafsky, and Andrew Y. Ng. 2005. Learning syntactic patterns for automatic hypernym discovery. In Lawrence K. Saul, Yair Weiss, and L´eon Bottou, editors, Advances in Neural Information Processing Systems 17, pages 1297–1304. MIT Press, Cambridge, MA. Rion Snow, Daniel Jurafsky, and Andrew Y. Ng. 2006. Semantic taxonomy induction from heterogenous evidence. In Proceedings of the 21st International Conference on Computational Linguistics and 44th Annual Meeting of the Association for Computational Linguistics, pages 801–808, Sydney, Australia, July. Association for Computational Linguistics. 1208 Richard Socher, Eric H Huang, Jeffrey Pennin, Christopher D Manning, and Andrew Ng. 2011a. Dynamic pooling and unfolding recursive autoencoders for paraphrase detection. In Advances in Neural Information Processing Systems, pages 801–809. Richard Socher, Jeffrey Pennington, Eric H Huang, Andrew Y Ng, and Christopher D Manning. 2011b. Semi-supervised recursive autoencoders for predicting sentiment distributions. In Proceedings of the Conference on Empirical Methods in Natural Language Processing, pages 151–161. Association for Computational Linguistics. Fabian M Suchanek, Gjergji Kasneci, and Gerhard Weikum. 2008. Yago: A large ontology from wikipedia and wordnet. Web Semantics: Science, Services and Agents on the World Wide Web, 6(3):203–217. Idan Szpektor, Eyal Shnarch, and Ido Dagan. 2007. Instance-based evaluation of entailment rule acquisition. In Proceedings of the 45th Annual Meeting of the Association of Computational Linguistics, pages 456–463, Prague, Czech Republic, June. Association for Computational Linguistics. Joseph Turian, Lev Ratinov, and Yoshua Bengio. 2010. Word representations: a simple and general method for semi-supervised learning. In Proceedings of the 48th Annual Meeting of the Association for Computational Linguistics, pages 384–394. Association for Computational Linguistics. Julie Weeds, David Weir, and Diana McCarthy. 2004. Characterising measures of lexical distributional similarity. In Proceedings of the 20th international conference on Computational Linguistics, page 1015. Association for Computational Linguistics. 1209

Proceedings of the 52nd Annual Meeting of the Association for Computational Linguistics, pages 1210–1219, Baltimore, Maryland, USA, June 23-25 2014. c⃝2014 Association for Computational Linguistics Probabilistic Soft Logic for Semantic Textual Similarity Islam Beltagy§ Katrin Erk† Raymond Mooney§ §Department of Computer Science †Department of Linguistics The University of Texas at Austin Austin, Texas 78712 §{beltagy,mooney}@cs.utexas.edu †[email protected] Abstract Probabilistic Soft Logic (PSL) is a recently developed framework for probabilistic logic. We use PSL to combine logical and distributional representations of natural-language meaning, where distributional information is represented in the form of weighted inference rules. We apply this framework to the task of Semantic Textual Similarity (STS) (i.e. judging the semantic similarity of naturallanguage sentences), and show that PSL gives improved results compared to a previous approach based on Markov Logic Networks (MLNs) and a purely distributional approach. 1 Introduction When will people say that two sentences are similar? This question is at the heart of the Semantic Textual Similarity task (STS)(Agirre et al., 2012). Certainly, if two sentences contain many of the same words, or many similar words, that is a good indication of sentence similarity. But that can be misleading. A better characterization would be to say that if two sentences use the same or similar words in the same or similar relations, then those two sentences will be judged similar.1 Interestingly, this characterization echoes the principle of compositionality, which states that the meaning of a phrase is uniquely determined by the meaning of its parts and the rules that connect those parts. Beltagy et al. (2013) proposed a hybrid approach to sentence similarity: They use a very 1Mitchell and Lapata (2008) give an amusing example of two sentences that consist of all the same words, but are very different in their meaning: (a) It was not the sales manager who hit the bottle that day, but the office worker with the serious drinking problem. (b) That day the office manager, who was drinking, hit the problem sales worker with a bottle, but it was not serious. deep representation of sentence meaning, expressed in first-order logic, to capture sentence structure, but combine it with distributional similarity ratings at the word and phrase level. Sentence similarity is then modelled as mutual entailment in a probabilistic logic. This approach is interesting in that it uses a very deep and precise representation of meaning, which can then be relaxed in a controlled fashion using distributional similarity. But the approach faces large hurdles in practice, stemming from efficiency issues with the Markov Logic Networks (MLN) (Richardson and Domingos, 2006) that they use for performing probabilistic logical inference. In this paper, we use the same combined logicbased and distributional framework as Beltagy et al., (2013) but replace Markov Logic Networks with Probabilistic Soft Logic (PSL) (Kimmig et al., 2012; Bach et al., 2013). PSL is a probabilistic logic framework designed to have efficient inference. Inference in MLNs is theoretically intractable in the general case, and existing approximate inference algorithms are computationally expensive and sometimes inaccurate. Consequently, the MLN approach of Beltagy et al. 
(2013) was unable to scale to long sentences and was only tested on the relatively short sentences in the Microsoft video description corpus used for STS (Agirre et al., 2012). On the other hand, inference in PSL reduces to a linear programming problem, which is theoretically and practically much more efficient. Empirical results on a range of problems have confirmed that inference in PSL is much more efficient than in MLNs, and frequently more accurate (Kimmig et al., 2012; Bach et al., 2013). We show how to use PSL for STS, and describe changes to the PSL framework that make it more effective for STS. For evaluation, we test on three STS datasets, and compare our PSL system with the MLN approach of Beltagy et al., (2013) and with distributional-only baselines. Experimental 1210 results demonstrate that, overall, PSL models human similarity judgements more accurately than these alternative approaches, and is significantly faster than MLNs. The rest of the paper is organized as follows: section 2 presents relevant background material, section 3 explains how we adapted PSL for the STS task, section 4 presents the evaluation, and sections 5 and 6 discuss future work and conclusions, respectively. 2 Background 2.1 Logical Semantics Logic-based representations of meaning have a long tradition (Montague, 1970; Kamp and Reyle, 1993). They handle many complex semantic phenomena such as relational propositions, logical operators, and quantifiers; however, their binary nature prevents them from capturing the “graded” aspects of meaning in language. Also, it is difficult to construct formal ontologies of properties and relations that have broad coverage, and semantically parsing sentences into logical expressions utilizing such an ontology is very difficult. Consequently, current semantic parsers are mostly restricted to quite limited domains, such as querying a specific database (Kwiatkowski et al., 2013; Berant et al., 2013). In contrast, our system is not limited to any formal ontology and can use a wide-coverage tool for semantic analysis, as discussed below. 2.2 Distributional Semantics Distributional models (Turney and Pantel, 2010), on the other hand, use statistics on contextual data from large corpora to predict semantic similarity of words and phrases (Landauer and Dumais, 1997; Mitchell and Lapata, 2010). They are relatively easier to build than logical representations, automatically acquire knowledge from “big data,” and capture the “graded” nature of linguistic meaning, but do not adequately capture logical structure (Grefenstette, 2013). Distributional models are motivated by the observation that semantically similar words occur in similar contexts, so words can be represented as vectors in high dimensional spaces generated from the contexts in which they occur (Landauer and Dumais, 1997; Lund and Burgess, 1996). Such models have also been extended to compute vector representations for larger phrases, e.g. by adding the vectors for the individual words (Landauer and Dumais, 1997) or by a component-wise product of word vectors (Mitchell and Lapata, 2008; Mitchell and Lapata, 2010), or more complex methods that compute phrase vectors from word vectors and tensors (Baroni and Zamparelli, 2010; Grefenstette and Sadrzadeh, 2011). We use vector addition (Landauer and Dumais, 1997), and component-wise product (Mitchell and Lapata, 2008) as baselines for STS. Vector addition was previously found to be the best performing simple distributional method for STS (Beltagy et al., 2013). 
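As a concrete reference point for these two baselines, a sentence pair can be scored as sketched below. The tokenization and the source of the word vectors are placeholders, and out-of-vocabulary handling is deliberately simplified; this is not the exact implementation evaluated later.

```python
import numpy as np

def sentence_vector(tokens, vectors, combine="add"):
    """Compose word vectors into a sentence vector by addition (Landauer and Dumais, 1997)
    or component-wise multiplication (Mitchell and Lapata, 2008)."""
    vecs = [vectors[w] for w in tokens if w in vectors]  # skip out-of-vocabulary words
    if combine == "add":
        return np.sum(vecs, axis=0)
    return np.prod(vecs, axis=0)  # component-wise product

def sts_score(sent1, sent2, vectors, combine="add"):
    """Cosine similarity of the two composed sentence vectors."""
    v1 = sentence_vector(sent1.lower().split(), vectors, combine)
    v2 = sentence_vector(sent2.lower().split(), vectors, combine)
    return float(np.dot(v1, v2) / (np.linalg.norm(v1) * np.linalg.norm(v2)))
```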
2.3 Markov Logic Networks Markov Logic Networks (MLN) (Richardson and Domingos, 2006) are a framework for probabilistic logic that employ weighted formulas in firstorder logic to compactly encode complex undirected probabilistic graphical models (i.e., Markov networks). Weighting the rules is a way of softening them compared to hard logical constraints and thereby allowing situations in which not all clauses are satisfied. MLNs define a probability distribution over possible worlds, where a world’s probability increases exponentially with the total weight of the logical clauses that it satisfies. A variety of inference methods for MLNs have been developed, however, developing a scalable, general-purpose, accurate inference method for complex MLNs is an open problem. Beltagy et al. (2013) use MLNs to represent the meaning of natural language sentences and judge textual entailment and semantic similarity, but they were unable to scale the approach beyond short sentences due to the complexity of MLN inference. 2.4 Probabilistic Soft Logic Probabilistic Soft Logic (PSL) is a recently proposed alternative framework for probabilistic logic (Kimmig et al., 2012; Bach et al., 2013). It uses logical representations to compactly define large graphical models with continuous variables, and includes methods for performing efficient probabilistic inference for the resulting models. A key distinguishing feature of PSL is that ground atoms have soft, continuous truth values in the interval [0, 1] rather than binary truth values as used in MLNs and most other probabilistic logics. Given a set of weighted logical formulas, PSL builds a graphical model defining a probability distribution over the continuous space of values of the random variables in the model. 1211 A PSL model is defined using a set of weighted if-then rules in first-order logic, as in the following example: ∀x, y, z. friend(x, y) ∧votesFor(y, z) → votesFor(x, z) | 0.3 (1) ∀x, y, z. spouse(x, y) ∧votesFor(y, z) → votesFor(x, z) | 0.8 (2) In our notation, we use lower case letters like x, y, z to represent variables and upper case letters for constants. The first rule states that a person is likely to vote for the same person as his/her friend. The second rule encodes the same regularity for a person’s spouse. The weights encode the knowledge that a spouse’s influence is greater than a friend’s in this regard. In addition, PSL includes similarity functions. Similarity functions take two strings or two sets as input and return a truth value in the interval [0, 1] denoting the similarity of the inputs. For example, in our application, we generate inference rules that incorporate the similarity of two predicates. This can be represented in PSL as: ∀x. similarity(“predicate1”, “predicate2”) ∧ predicate1(x) →predicate2(x) As mentioned above, each ground atom, a, has a soft truth value in the interval [0, 1], which is denoted by I(a). To compute soft truth values for logical formulas, Lukasiewicz’s relaxation of conjunctions(∧), disjunctions(∨) and negations(¬) are used: I(l1 ∧l1) = max{0, I(l1) + I(l2) −1} I(l1 ∨l1) = min{I(l1) + I(l2), 1} I(¬l1) = 1 −I(l1) Then, a given rule r ≡rbody →rhead, is said to be satisfied (i.e. I(r) = 1) iff I(rbody) ≤I(rhead). Otherwise, PSL defines a distance to satisfaction d(r) which captures how far a rule r is from being satisfied: d(r) = max{0, I(rbody) −I(rhead)}. 
For example, assume we have the set of evidence: I(spouse(B, A)) = 1, I(votesFor(A, P)) = 0.9, I(votesFor(B, P)) = 0.3, and that r is the resulting ground instance of rule (2). Then I(spouse(B, A) ∧votesFor(A, P)) = max{0, 1 + 0.9 −1} = 0.9, and d(r) = max{0, 0.9 −0.3} = 0.6. Using distance to satisfaction, PSL defines a probability distribution over all possible interpretations I of all ground atoms. The pdf is defined as follows: p(I) = 1 Z exp [− X r∈R λr(d(r))p]; (3) Z = Z I exp [− X r∈R λr(d(r))p] where Z is the normalization constant, λr is the weight of rule r, R is the set of all rules, and p ∈ {1, 2} provides two different loss functions. For our application, we always use p = 1 PSL is primarily designed to support MPE inference (Most Probable Explanation). MPE inference is the task of finding the overall interpretation with the maximum probability given a set of evidence. Intuitively, the interpretation with the highest probability is the interpretation with the lowest distance to satisfaction. In other words, it is the interpretation that tries to satisfy all rules as much as possible. Formally, from equation 3, the most probable interpretation, is the one that minimizes P r∈R λr(d(r))p. In case of p = 1, and given that all d(r) are linear equations, then minimizing the sum requires solving a linear program, which, compared to inference in other probabilistic logics such as MLNs, can be done relatively efficiently using well-established techniques. In case p = 2, MPE inference can be shown to be a second-order cone program (SOCP) (Kimmig et al., 2012). 2.5 Semantic Textual Similarity Semantic Textual Similarity (STS) is the task of judging the similarity of a pair of sentences on a scale from 0 to 5, and was recently introduced as a SemEval task (Agirre et al., 2012). Gold standard scores are averaged over multiple human annotations and systems are evaluated using the Pearson correlation between a system’s output and gold standard scores. The best performing system in 2012’s competition was by B¨ar et al. (2012), a complex ensemble system that integrates many techniques including string similarity, n-gram overlap, WordNet similarity, vector space similarity and MT evaluation metrics. Two of the datasets we use for evaluation are from the 2012 competition. We did not utilize the new datasets added in the 2013 competition since they did not contain naturally-occurring, full sentences, which is the focus of our work. 1212 2.6 Combining logical and distributional methods using probabilistic logic There are a few recent attempts to combine logical and distributional representations in order to obtain the advantages of both. Lewis and Steedman (2013) use distributional information to determine word senses, but still produce a strictly logical semantic representation that does not address the “graded” nature of linguistic meaning that is important to measuring semantic similarity. Garrette et al. (2011) introduced a framework for combining logic and distributional models using probabilistic logic. Distributional similarity between pairs of words is converted into weighted inference rules that are added to the logical representation, and Markov Logic Networks are used to perform probabilistic logical inference. Beltagy et al. (2013) extended this framework by generating distributional inference rules from phrase similarity and tailoring the system to the STS task. 
STS is treated as computing the probability of two textual entailments T |= H and H |= T, where T and H are the two sentences whose similarity is being judged. These two entailment probabilities are averaged to produce a measure of similarity. The MLN constructed to determine the probability of a given entailment includes the logical forms for both T and H as well as soft inference rules that are constructed from distributional information. Given a similarity score for all pairs of sentences in the dataset, a regressor is trained on the training set to map the system’s output to the gold standard scores. The trained regressor is applied to the scores in the test set before calculating Pearson correlation. The regression algorithm used is Additive Regression (Friedman, 2002). To determine an entailment probability, first, the two sentences are mapped to logical representations using Boxer (Bos, 2008), a tool for wide-coverage semantic analysis that maps a CCG (Combinatory Categorial Grammar) parse into a lexically-based logical form. Boxer uses C&C for CCG parsing (Clark and Curran, 2004). Distributional semantic knowledge is then encoded as weighted inference rules in the MLN. A rule’s weight (w) is a function of the cosine similarity (sim) between its antecedent and consequent. Rules are generated on the fly for each T and H. Let t and h be the lists of all words and phrases in T and H respectively. For all pairs (a, b), where a ∈t, b ∈h, it generates an inference rule: a →b | w, where w = f(sim(−→a , −→b )). Both a and b can be words or phrases. Phrases are defined in terms of Boxer’s output. A phrase is more than one unary atom sharing the same variable like “a little kid” which in logic is little(K) ∧kid(K). A phrase also can be two unary atoms connected by a relation like “a man is driving” which in logic is man(M) ∧ agent(D, M) ∧drive(D). The similarity function sim takes two vectors as input. Phrasal vectors are constructed using Vector Addition (Landauer and Dumais, 1997). The set of generated inference rules can be regarded as the knowledge base KB. Beltagy et al. (2013) found that the logical conjunction in H is very restrictive for the STS task, so they relaxed the conjunction by using an average evidence combiner (Natarajan et al., 2010). The average combiner results in computationally complex inference and only works for short sentences. In case inference breaks or times-out, they back off to a simpler combiner that leads to much faster inference but loses most of the structure of the sentence and is therefore less accurate. Given T, KB and H from the previous steps, MLN inference is then used to compute p(H|T, KB), which is then used as a measure of the degree to which T entails H. 3 PSL for STS For several reasons, we believe PSL is a more appropriate probabilistic logic for STS than MLNs. First, it is explicitly designed to support efficient inference, therefore it scales better to longer sentences with more complex logical forms. Second, it was also specifically designed for computing similarity between complex structured objects rather than determining probabilistic logical entailment. In fact, the initial version of PSL (Broecheler et al., 2010) was called Probabilistic Similarity Logic, based on its use of similarity functions. This initial version was shown to be very effective for measuring the similarity of noisy database records and performing record linkage (i.e. 
identifying database entries referring to the same entity, such as bibliographic citations referring to the same paper). Therefore, we have developed an approach that follows that of Beltagy et al. (2013), but replaces Markov Logic with PSL. This section explains how we formulate the STS 1213 task as a PSL program. PSL does not work very well “out of the box” for STS, mainly because Lukasiewicz’s equation for the conjunction is very restrictive. Therefore, we use a different interpretation for conjunction that uses averaging, which requires corresponding changes to the optimization problem and the grounding technique. 3.1 Representation Given the logical forms for a pair of sentences, a text T and a hypothesis H, and given a set of weighted rules derived from the distributional semantics (as explained in section 2.6) composing the knowledge base KB, we build a PSL model that supports determining the truth value of H in the most probable interpretation (i.e. MPE) given T and KB. Consider the pair of sentences is “A man is driving”, and “A guy is walking”. Parsing into logical form gives: T : ∃x, y. man(x) ∧agent(y, x) ∧drive(y) H : ∃x, y. guy(x) ∧agent(y, x) ∧walk(y) The PSL program is constructed as follows: T : The text is represented in the evidence set. For the example, after Skolemizing the existential quantifiers, this contains the ground atoms: {man(A), agent(B, A), drive(B)} KB: The knowledge base is a set of lexical and phrasal rules generated from distributional semantics, along with a similarity score for each rule (section 2.6). For the example, we generate the rules: ∀x. man(x) ∧ vs sim(“man”, “guy”) →guy(x) , ∀x.drive(x)∧vs sim(“drive”, “walk”) → walk(x) where vs sim is a similarity function that calculates the distributional similarity score between the two lexical predicates. All rules are assigned the same weight because all rules are equally important. H: The hypothesis is represented as H → result(), and then PSL is queried for the truth value of the atom result(). For our example, the rule is: ∀x, y. guy(x) ∧ agent(y, x) ∧walk(y) →result(). Priors: A low prior is given to all predicates. This encourages the truth values of ground atoms to be zero, unless there is evidence to the contrary. For each STS pair of sentences S1, S2, we run PSL twice, once where T = S1, H = S2 and another where T = S2, H = S1, and output the two scores. To produce a final similarity score, we train a regressor to learn the mapping between the two PSL scores and the overall similarity score. As in Beltagy et al., (2013) we use Additive Regression (Friedman, 2002). 3.2 Changing Conjunction As mentioned above, Lukasiewicz’s formula for conjunction is very restrictive and does not work well for STS. For example, for T: “A man is driving” and H: “A man is driving a car”, if we use the standard PSL formula for conjunction, the output value is zero because there is no evidence for a car and max(0, X + 0 −1) = 0 for any truth value 0 ≤X ≤1. However, humans find these sentences to be quite similar. Therefore, we introduce a new averaging interpretation of conjunction that we use for the hypothesis H. The truth value for a conjunction is defined as I(p1 ∧.... ∧pn) = 1 n Pn i=1 I(pi). This averaging function is linear, and the result is a valid truth value in the interval [0, 1], therefore this change is easily incorporated into PSL without changing the complexity of inference which remains a linear-programming problem. 
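The difference between the two interpretations of conjunction is easy to see on the "A man is driving" / "A man is driving a car" example above. The truth values and predicate names below are illustrative rather than output of a trained model; they simply assume the three conjuncts matched by T are fully true and the two car-related conjuncts have no evidence.

```python
def lukasiewicz_and(values):
    """Iterated Lukasiewicz conjunction: max(0, sum(I) - (n - 1))."""
    return max(0.0, sum(values) - (len(values) - 1))

def average_and(values):
    """Averaging interpretation of conjunction used for the hypothesis H."""
    return sum(values) / len(values)

# H: "a man is driving a car" evaluated against T: "a man is driving".
# man(x), agent(y, x), drive(y) are supported by T; the car-related conjuncts are not.
truth = [1.0, 1.0, 1.0, 0.0, 0.0]
print(lukasiewicz_and(truth))  # 0.0 -- the missing car wipes out all the evidence
print(average_and(truth))      # 0.6 -- partial credit for the matched structure
```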
It would perhaps be even better to use a weighted average, where weights for different components are learned from a supervised training set. This is an important direction for future work. 3.3 Grounding Process Grounding is the process of instantiating the variables in the quantified rules with concrete constants in order to construct the nodes and links in the final graphical model. In principle, grounding requires instantiating each rule in all possible ways, substituting every possible constant for each variable in the rule. However, this is a combinatorial process that can easily result in an explosion in the size of the final network. Therefore, PSL employs a “lazy” approach to grounding that avoids the construction of irrelevant groundings. If there is no evidence for one of the antecedents in a particular grounding of a rule, then the normal PSL formula for conjunction guarantees that the rule is 1214 Algorithm 1 Heuristic Grounding Input: rbody = a1 ∧.... ∧an: antecedent of a rule with average interpretation of conjunction Input: V : set of variables used in rbody Input: Ant(vi): subset of antecedents aj containing variable vi Input: Const(vi): list of possible constants of variable vi Input: Gnd(ai): set of ground atoms of ai. Input: GndConst(a, g, v): takes an atom a, grounding g for a, and variable v, and returns the constant that substitutes v in g Input: gnd limit: limit on the number of groundings 1: for all vi ∈V do 2: for all C ∈Const(vi) do 3: score(C) = P a∈Ant(vi)(max I(g)) for g ∈Gnd(a) ∧GndConst(a, g, vi) = C 4: end for 5: sort Const(vi) on scores, descending 6: end for 7: return For all vi ∈V , take the Cartesianproduct of the sorted Const(vi) and return the top gnd limit results trivially satisfied (I(r) = 1) since the truth value of the antecedent is zero. Therefore, its distance to satisfaction is also zero, and it can be omitted from the ground network without impacting the result of MPE inference. However, this technique does not work once we switch to using averaging to interpret conjunctions. For example, given the rule ∀x. p(x) ∧ q(x) →t() and only one piece of evidence p(C) there are no relevant groundings because there is no evidence for q(C), and therefore, for normal PSL, I(p(C) ∧q(C)) = 0 which does not affect I(t()). However, when using averaging with the same evidence, we need to generate the grounding p(C)∧q(C) because I(p(C)∧q(C)) = 0.5 which does affect I(t()). One way to solve this problem is to eliminate lazy grounding and generate all possible groundings. However, this produces an intractably large network. Therefore, we developed a heuristic approximate grounding technique that generates a subset of the most impactful groundings. Pseudocode for this heuristic approach is shown in algorithm 1. Its goal is to find constants that participate in ground propositions with high truth value and preferentially use them to construct a limited number of groundings of each rule. The algorithm takes the antecedents of a rule employing averaging conjunction as input. It also takes the grounding limit which is a threshold on the number of groundings to be returned. The algorithm uses several subroutines, they are: • Ant(vi): given a variable vi, it returns the set of rule antecedent atoms containing vi. E.g, for the rule: a(x) ∧b(y) ∧c(x), Ant(x) returns the set of atoms {a(x), c(x)}. • Const(vi): given a variable vi, it returns the list of possible constants that can be used to instantiate the variable vi. 
• Gnd(ai): given an atom ai, it returns the set of all possible ground atoms generated for ai. • GndConst(a, g, v): given an atom a and grounding g for a, and a variable v, it finds the constant that substitutes for v in g. E.g, assume there is an atom a = ai(v1, v2), and the ground atom g = ai(A, B) is one of its groundings. GndConst(a, g, v2) would return the constant B since it is the substitution for the variable v2 in g. Lines 1-6 loop over all variables in the rule. For each variable, lines 2-5 construct a list of constants for that variable and sort it based on a heuristic score. In line 3, each constant is assigned a score that indicates the importance of this constant in terms of its impact on the truth value of the overall grounding. A constant’s score is the sum, over all antedents that contain the variable in question, of the maximum truth value of any grounding of that antecedent that contains that constant. Pushing constants with high scores to the top of each variable’s list will tend to make the overall truth value of the top groundings high. Line 7 computes a subset of the Cartesian product of the sorted lists of constants, selecting constants in ranked order and limiting the number of results to the grounding limit. One point that needs to be clarified about this approach is how it relies on the truth values of ground atoms when the goal of inference is to actually find these values. PSL’s inference is actually an iterative process where in each iteration a grounding phase is followed by an optimization phase (solving the linear program). This loop repeats until convergence, i.e. until the truth 1215 values stop changing. The truth values used in each grounding phase come from the previous optimization phase. The first grounding phase assumes only the propositions in the evidence provided have non-zero truth values. 4 Evaluation This section evaluates the performance of PSL on the STS task. 4.1 Datasets We evaluate our system on three STS datasets. • msr-vid: Microsoft Video Paraphrase Corpus from STS 2012. The dataset consists of 1,500 pairs of short video descriptions collected using crowdsourcing (Chen and Dolan, 2011) and subsequently annotated for the STS task (Agirre et al., 2012). Half of the dataset is for training, and the second half is for testing. • msr-par: Microsoft Paraphrase Corpus from STS 2012 task. The dataset is 5,801 pairs of sentences collected from news sources (Dolan et al., 2004). Then, for STS 2012, 1,500 pairs were selected and annotated with similarity scores. Half of the dataset is for training, and the second half is for testing. • SICK: Sentences Involving Compositional Knowledge is a dataset collected for SemEval 2014. Only the training set is available at this point, which consists of 5,000 pairs of sentences. Pairs are annotated for RTE and STS, but we only use the STS data. Training and testing was done using 10-fold cross validation. 4.2 Systems Compared We compare our PSL system with several others. In all cases, we use the distributional word vectors employed by Beltagy et al. (2013) based on context windows from Gigaword. • vec-add: Vector Addition (Landauer and Dumais, 1997). We compute a vector representation for each sentence by adding the distributional vectors of all of its words and measure similarity using cosine. This is a simple yet powerful baseline that uses only distributional information. • vec-mul: Component-wise Vector Multiplication (Mitchell and Lapata, 2008). 
The same as vec-add except uses componentwise multiplication to combine word vectors. • MLN: The system of Beltagy et al. (2013), which uses Markov logic instead of PSL for probabilistic inference. MLN inference is very slow in some cases, so we use a 10 minute timeout. When MLN times out, it backs off to a simpler sentence representation as explained in section 2.6. • PSL: Our proposed PSL system for combining logical and distributional information. • PSL-no-DIR: Our PSL system without distributional inference rules(empty knowledge base). This system uses PSL to compute similarity of logical forms but does not use distributional information on lexical or phrasal similarity. It tests the impact of the probabilistic logic only • PSL+vec-add: PSL ensembled with vecadd. Ensembling the MLN approach with a purely distributional approach was found to improve results (Beltagy et al., 2013), so we also tried this with PSL. The methods are ensembled by using both entailment scores of both systems as input features to the regression step that learns to map entailment scores to STS similarity ratings. This way, the training data is used to learn how to weight the contribution of the different components. • PSL+MLN: PSL ensembled with MLN in the same manner. 4.3 Experiments Systems are evaluated on two metrics, Pearson correlation and average CPU time per pair of sentences. • Pearson correlation: The Pearson correlation between the system’s similarity scores and the human gold-standards. • CPU time: This metric only applies to MLN and PSL. The CPU time taken by the inference step is recorded and averaged over all pairs in each of the test datasets. In many cases, MLN inference is very slow, so we timeout after 10 minutes and report the number of timed-out pairs on each dataset. 1216 msr-vid msr-par SICK vec-add 0.78 0.24 0.65 vec-mul 0.76 0.12 0.62 MLN 0.63 0.16 0.47 PSL-no-DIR 0.74 0.46 0.68 PSL 0.79 0.53 0.70 PSL+vec-add 0.83 0.49 0.71 PSL+MLN 0.79 0.51 0.70 Best Score (B¨ar et al., 2012) 0.87 0.68 n/a Table 1: STS Pearson Correlations PSL MLN time time timeouts/total msr-vid 8s 1m 31s 132/1500 msr-par 30s 11m 49s 1457/1500 SICK 10s 4m 24s 1791/5000 Table 2: Average CPU time per STS pair, and number of timed-out pairs in MLN with a 10 minute time limit. PSL’s grounding limit is set to 10,000 groundings. We also evaluated the effect of changing the grounding limit on both Pearson correlation and CPU time for the msr-par dataset. Most of the sentences in msr-par are long, which results is large number of groundings, and limiting the number of groundings has a visible effect on the overall performance. In the other two datasets, the sentences are fairly short, and the full number of groundings is not large; therefore, changing the grounding limit does not significantly affect the results. 4.4 Results and Discussion Table 1 shows the results for Pearson correlation. PSL out-performs the purely distributional baselines (vec-add and vec-mul) because PSL is able to combine the information available to vec-add and vec-mul in a better way that takes sentence structure into account. PSL also outperforms the unaided probabilistic-logic baseline that does not use distributional information (PSL-no-DIR). PSL-no-DIR works fairly well because there is significant overlap in the exact words and structure of the paired sentences in the test data, and PSL combines the evidence from these similarities effectively. 
In addition, PSL always does significantly better than MLN, because of the large Figure 1: Effect of PSL’s grounding limit on the correlation score for the msr-par dataset number of timeouts, and because the conjunctionaveraging in PSL is combining evidence better than MLN’s average-combiner, whose performance is sensitive to various parameters. These results further support the claim that using probabilistic logic to integrate logical and distributional information is a promising approach to natural-language semantics. More specifically, they strongly indicate that PSL is a more effective probabilistic logic for judging semantic similarity than MLNs. Like for MLNs (Beltagy et al., 2013), ensembling PSL with vector addition improved the scores a bit, except for msr-par where vec-add’s performance is particularly low. However, this ensemble still does not beat the state of the art (B¨ar et al., 2012) which is a large ensemble of many different systems. It would be informative to add our system to their ensemble to see if it could improve it even further. Table 2 shows the CPU time for PSL and MLN. The results clearly demonstrate that PSL is an order of magnitude faster than MLN. Figures 1 and 2 show the effect of changing the grounding limit on Pearson correlation and CPU time. As expected, as the grounding limit is increased, accuracy improves but CPU time also increases. However, note that the difference in scores between the smallest and largest grounding limit tested is not large, suggesting that the heuristic approach to limiting grounding is quite effective. 5 Future Work As mentioned in Section 3.2, it would be good to use a weighted average to compute the truth 1217 Figure 2: Effect of PSL’s grounding limit on CPU time for the msr-par dataset values for conjunctions, weighting some predicates more than others rather than treating them all equally. Appropriate weights for different components could be learned from training data. For example, such an approach could learn that the type of an object determined by a noun should be weighted more than a property specified by an adjective. As a result, “black dog” would be appropriately judged more similar to “white dog” than to “black cat.” One of the advantages of using a probabilistic logic is that additional sources of knowledge can easily be incorporated by adding additional soft inference rules. To complement the soft inference rules capturing distributional lexical and phrasal similarities, PSL rules could be added that encode explicit paraphrase rules, such as those mined from monolingual text (Berant et al., 2011) or multi-lingual parallel text (Ganitkevitch et al., 2013). This paper has focused on STS; however, as shown by Beltagy et al. (2013), probabilistic logic is also an effective approach to recognizing textual entailment (RTE). By using the appropriate functions to combine truth values for various logical connectives, PSL could also be adapted for RTE. Although we have shown that PSL outperforms MLNs on STS, we hypothesize that MLNs may still be a better approach for RTE. However, it would be good to experimentally confirm this intuition. In any case, the high computational complexity of MLN inference could mean that PSL is still a more practical choice for RTE. 6 Conclusion This paper has presented an approach that uses Probabilistic Soft Logic (PSL) to determine Semantic Textual Similarity (STS). 
The approach uses PSL to effectively combine logical semantic representations of sentences with soft inference rules for lexical and phrasal similarities computed from distributional information. The approach builds upon a previous method that uses Markov Logic (MLNs) for STS, but replaces the probabilistic logic with PSL in order to improve the efficiency and accuracy of probabilistic inference. The PSL approach was experimentally evaluated on three STS datasets and was shown to outperform purely distributional baselines as well as the MLN approach. The PSL approach was also shown to be much more scalable and efficient than using MLNs Acknowledgments This research was supported by the DARPA DEFT program under AFRL grant FA8750-13-2-0026. Any opinions, findings, and conclusions or recommendations expressed in this material are those of the author and do not necessarily reflect the view of DARPA, DoD or the US government. Some experiments were run on the Mastodon Cluster supported by NSF Grant EIA-0303609. References Eneko Agirre, Daniel Cer, Mona Diab, and Aitor Gonzalez-Agirre. 2012. Semeval-2012 task 6: A pilot on semantic textual similarity. In Proceedings of Semantic Evaluation (SemEval-12). Stephen H. Bach, Bert Huang, Ben London, and Lise Getoor. 2013. Hinge-loss Markov random fields: Convex inference for structured prediction. In Proceedings of Uncertainty in Artificial Intelligence (UAI-13). Daniel B¨ar, Chris Biemann, Iryna Gurevych, and Torsten Zesch. 2012. UKP: Computing semantic textual similarity by combining multiple content similarity measures. In Proceedings of Semantic Evaluation (SemEval-12). Marco Baroni and Roberto Zamparelli. 2010. Nouns are vectors, adjectives are matrices: Representing adjective-noun constructions in semantic space. In Proceedings of Conference on Empirical Methods in Natural Language Processing (EMNLP-10). Islam Beltagy, Cuong Chau, Gemma Boleda, Dan Garrette, Katrin Erk, and Raymond Mooney. 2013. 1218 Montague meets Markov: Deep semantics with probabilistic logical form. In Proceedings of the Second Joint Conference on Lexical and Computational Semantics (*SEM-13). Jonathan Berant, Ido Dagan, and Jacob Goldberger. 2011. Global learning of typed entailment rules. In Proceedings of Association for Computational Linguistics (ACL-11). Jonathan Berant, Andrew Chou, Roy Frostig, and Percy Liang. 2013. Semantic parsing on Freebase from question-answer pairs. In Proceedings of Conference on Empirical Methods in Natural Language Processing (EMNLP-13). Johan Bos. 2008. Wide-coverage semantic analysis with Boxer. In Proceedings of Semantics in Text Processing (STEP-08). Matthias Broecheler, Lilyana Mihalkova, and Lise Getoor. 2010. Probabilistic Similarity Logic. In Proceedings of Uncertainty in Artificial Intelligence (UAI-20). David L. Chen and William B. Dolan. 2011. Collecting highly parallel data for paraphrase evaluation. In Proceedings of Association for Computational Linguistics (ACL-11). Stephen Clark and James R. Curran. 2004. Parsing the WSJ using CCG and log-linear models. In Proceedings of Association for Computational Linguistics (ACL-04). Bill Dolan, Chris Quirk, and Chris Brockett. 2004. Unsupervised construction of large paraphrase corpora: Exploiting massively parallel news sources. In Proceedings of the International Conference on Computational Linguistics (COLING-04). Jerome H Friedman. 2002. Stochastic gradient boosting. Journal of Computational Statistics & Data Analysis (CSDA-02). 
Juri Ganitkevitch, Benjamin Van Durme, and Chris Callison-Burch. 2013. PPDB: The paraphrase database. In Proceedings of North American Chapter of the Association for Computational Linguistics: Human Language Technologies (NAACL-HLT-13). Dan Garrette, Katrin Erk, and Raymond Mooney. 2011. Integrating logical representations with probabilistic information using Markov logic. In Proceedings of International Conference on Computational Semantics (IWCS-11). Edward Grefenstette and Mehrnoosh Sadrzadeh. 2011. Experimental support for a categorical compositional distributional model of meaning. In Proceedings of Conference on Empirical Methods in Natural Language Processing (EMNLP-11). Edward Grefenstette. 2013. Towards a formal distributional semantics: Simulating logical calculi with tensors. In Proceedings of Second Joint Conference on Lexical and Computational Semantics (*SEM 2013). Hans Kamp and Uwe Reyle. 1993. From Discourse to Logic. Kluwer. Angelika Kimmig, Stephen H. Bach, Matthias Broecheler, Bert Huang, and Lise Getoor. 2012. A short introduction to Probabilistic Soft Logic. In Proceedings of NIPS Workshop on Probabilistic Programming: Foundations and Applications (NIPS Workshop-12). Tom Kwiatkowski, Eunsol Choi, Yoav Artzi, and Luke Zettlemoyer. 2013. Scaling semantic parsers with on-the-fly ontology matching. In Proceedings of Conference on Empirical Methods in Natural Language Processing (EMNLP-13). T. K. Landauer and S. T. Dumais. 1997. A solution to Plato’s problem: The Latent Semantic Analysis theory of the acquisition, induction, and representation of knowledge. Psychological Review. Mike Lewis and Mark Steedman. 2013. Combined distributional and logical semantics. Transactions of the Association for Computational Linguistics (TACL-13). Kevin Lund and Curt Burgess. 1996. Producing high-dimensional semantic spaces from lexical cooccurrence. Behavior Research Methods, Instruments, and Computers. Jeff Mitchell and Mirella Lapata. 2008. Vector-based models of semantic composition. In Proceedings of Association for Computational Linguistics (ACL08). Jeff Mitchell and Mirella Lapata. 2010. Composition in distributional models of semantics. Journal of Cognitive Science. Richard Montague. 1970. Universal grammar. Theoria, 36:373–398. Sriraam Natarajan, Tushar Khot, Daniel Lowd, Prasad Tadepalli, Kristian Kersting, and Jude Shavlik. 2010. Exploiting causal independence in Markov logic networks: Combining undirected and directed models. In Proceedings of European Conference in Machine Learning (ECML-10). Matthew Richardson and Pedro Domingos. 2006. Markov logic networks. Machine Learning, 62:107–136. Peter Turney and Patrick Pantel. 2010. From frequency to meaning: Vector space models of semantics. Journal of Artificial Intelligence Research (JAIR-10). 1219
Proceedings of the 52nd Annual Meeting of the Association for Computational Linguistics, pages 1220–1230, Baltimore, Maryland, USA, June 23-25 2014. c⃝2014 Association for Computational Linguistics Abstractive Summarization of Spoken and Written Conversations Based on Phrasal Queries Yashar Mehdad Giuseppe Carenini Raymond T. Ng Department of Computer Science, University of British Columbia Vancouver, BC, V6T 1Z4, Canada {mehdad, carenini, rng}@cs.ubc.ca Abstract We propose a novel abstractive querybased summarization system for conversations, where queries are defined as phrases reflecting a user information needs. We rank and extract the utterances in a conversation based on the overall content and the phrasal query information. We cluster the selected sentences based on their lexical similarity and aggregate the sentences in each cluster by means of a word graph model. We propose a ranking strategy to select the best path in the constructed graph as a query-based abstract sentence for each cluster. A resulting summary consists of abstractive sentences representing the phrasal query information and the overall content of the conversation. Automatic and manual evaluation results over meeting, chat and email conversations show that our approach significantly outperforms baselines and previous extractive models. 1 Introduction Our lives are increasingly reliant on multimodal conversations with others. We email for business and personal purposes, attend meetings in person, chat online, and participate in blog or forum discussions. While this growing amount of personal and public conversations represent a valuable source of information, going through such overwhelming amount of data, to satisfy a particular information need, often leads to an information overload problem (Jones et al., 2004). Automatic summarization has been proposed in the past as a way to address this problem (e.g., (Sakai and Sparck-Jones, 2001)). However, often a good summary cannot be generic and should be a brief and well-organized paragraph that answer a user’s information need. The Document Understanding Conference (DUC)1 has launched query-focused multidocument summarization as its main task since 2004, by focusing on complex queries with very specific answers. For example, “How were the bombings of the US embassies in Kenya and Tanzania conducted? How and where were the attacks planned?”. Such complex queries are appropriate for a user who has specific information needs and can formulate the questions precisely. However, especially when dealing with conversational data that tend to be less structured and less topically focused, a user is often initially only exploring the source documents, with less specific information needs. Moreover, following the common practice in search engines, users are trained to form simpler and shorter queries (Meng and Yu, 2010). For example, when a user is interested in certain characteristics of an entity in online reviews (e.g., “location” or “screen”) or a specific entity in a blog discussion (e.g., “new model of iphone”), she would not initially compose a complex query. To address these issues, in this work, we tackle the task of conversation summarization based on phrasal queries. We define a phrasal query as a concatenation of two or more keywords, which is a more realistic representation of a user’s information needs. 
For conversational data, this definition is more similar to the concept of search queries in information retrieval systems as well as to the concept of topic labels in the task of topic modeling. Example 1 shows two queries and their associated human written summaries based on a single chat log. We can observe that the two summaries, although generated from the same chat log, are totally distinct. This further demonstrates the importance of phrasal query-based summarization systems for long conversations. To date, most systems in the area of summa1http://www-nlpir.nist.gov/projects/duc/index.html 1220 Query-1: Test/Sample database for GNUe Abstract-1: James Thompson asked Reinhard: I was going to work on the sample tonight. You mentioned wanting a fishhook and all data types. Any other things you want to see in there? Reinhard said that master/detail would be good, as there have been bugs only appearing in 3-level case. James said he already included that and I know I need to add a boolean. Did you want date as well as date-time? Reinhard said yes - we also have time values (time without date). They are especially interesting. James had not ever had use for something like that so I’m not sure where I would graft that in. Query-2: Passing parameters to Forms Abstract-2: James Thompson (jamest) asked how did parameter support in forms change recently? He reported the trigger namespace function referencesGFForm.parameters - which no longer exists. Reinhard said every GFForm should have a parameters. James said he was using parameters in on-startup. Reinhard said that’s probably the only place where they don’t work. James said that I’m thinking about moving that to on-activation instead of on-startup anyway as it should still work for a main form - but i still wonder if the on-startup parameter issue should be considered a bug - as it shouldn’t choke. Reinhard was sure it should be considered a bug but I have no idea how to fix it. We haven’t found a way to deal with parameters that works for every case. I don’t know if there is any chance to pass the parameters to the form before it is activated. James asked how are parameters handled now? Reinhard replied that they are passed to activateForm so they are available from activation for the –main– form, the command line parameters are passed and for dialogs, the parameters are passed that were given in runDialog. Example 1: Sample queries and associated human-written query-based summaries for a chat log. rization focus on news or other well-written documents, while research on summarizing multiparty written conversations (e.g., chats, emails) has been limited. This is because traditional NLP approaches developed for formal texts often are not satisfactory when dealing with multiparty written conversations, which are typically in a casual style and do not display a clear syntactic structure with proper grammar and spelling. Even though some works try to address the problem of summarizing multiparty written conversions (e.g., (Mehdad et al., 2013b; Wang and Cardie, 2013; Murray et al., 2010; Zhou and Hovy, 2005; Gillick et al., 2009)), they do so in a generic way (not querybased) and focus on only one conversational domain (e.g., meetings). Moreover, most of the proposed systems for conversation summarization are extractive. To address such limitations, we propose a fully automatic unsupervised abstract generation framework based on phrasal queries for multimodal conversation summarization. 
Our key contributions in this work are as follows: 1) To the best of our knowledge, our framework is the first abstractive system that generates summaries based on users phrasal queries, instead of well-formed questions. As a by-product of our approach, we also propose an extractive summarization model based on phrasal queries to select the summary-worthy sentences in the conversation based on query terms and signature terms (Lin and Hovy, 2000). 2) We propose a novel ranking strategy to select the best path in the constructed word graph by taking the query content, overall information content and grammaticality (i.e., fluency) of the sentence into consideration. 3) Although most of the current summarization approaches use supervised algorithms as a part of their system (e.g., (Wang et al., 2013)), our method can be totally unsupervised and does not depend on human annotation. 4) Although different conversational modalities (e.g., email vs. chat vs. meeting) underline domain-specific characteristics, in this work, we take advantage of their underlying similarities to generalize away from specific modalities and determine effective method for query-based summarization of multimodal conversations. We evaluate our system over GNUe Traffic archive2 Internet Relay Chat (IRC) logs, AMI meetings corpus (Carletta et al., 2005) and BC3 emails dataset (Ulrich et al., 2008). Automatic evaluation on the chat dataset and manual evaluation over the meetings and emails show that our system uniformly and statistically significantly outperforms baseline systems, as well as a stateof-the-art query-based extractive summarization system. 2 Phrasal Query Abstraction Framework Our phrasal query abstraction framework generates a grammatical abstract from a conversation following three steps, as shown in Figure 1. 2.1 Utterance Extraction Abstractive summary sentences can be created by aggregating and merging multiple sentences into an abstract sentence. In order to generate such a sentence, we need to identify which sentences from the original document should be extracted and combined to generate abstract sentences. In other words, we want to identify the summaryworthy sentences in the text that can be combined into an abstract sentence. This task can be considered as content selection. Moreover, this step, stand alone, corresponds to an extractive summarization system. 2http://kt.earth.li/GNUe/index.html 1221 Original conversation Query Extracted utterances Filtered utterances Extraction Redundancy Removal Generation Clusters Word graphs Top ranked sentences Query-based abstract Clustering Word Graph Ranking Construction Figure 1: Phrasal query abstraction framework. The steps (arrows) influenced by the query are highlighted. Signature terms: navigator, functionality, reports, UI, schema, gnu Chat log: - but watching them build a UI in the flash demo’s is pretty damn impressive... and have started moving my sales app to all UI being built via ... - i’ll be expanding the technotes in navigator for a while ... - ... in terms of functionality of the underlying databases ... - you mean if I start GNU again I have to read bug reports too? - no, just in case you want to enter bug report - ...I expand the schema before populating with test data ... - i’m willing to scrap it if there is a better schema hidden in gnue somewhere :) Example 2: Sample signature terms for a part of a chat log. 
In order to select and extract the informative, summary-worthy utterances, based on the phrasal query and the original text, we consider two criteria: i) utterances should carry the essence of the original text; and ii) utterances should be relevant to the query. To fulfill these requirements, we define the concepts of signature terms and query terms.

2.1.1 Signature Terms
Signature terms are generally indicative of the content of a document or collection of documents. To identify such terms, we can use frequency, word probability, standard statistical tests, information-theoretic measures or the log-likelihood ratio. In this work, we use the log-likelihood ratio to extract signature terms from chat logs, since it leads to better results (Gupta et al., 2007). We use the method described in (Lin and Hovy, 2000) to identify such terms and their associated weights. Example 2 demonstrates a chat log and its associated signature terms.

2.1.2 Query Terms
Query terms are indicative of the content of a phrasal query. To identify such terms, we first extract all content terms from the query. Then, following previous studies (e.g., (Gonzalo et al., 1998)), we use the synset relations in WordNet for query expansion: we extract all concepts that are synonyms of the query terms and add them to the original set of query terms. Note that we limit the synsets to nouns, since verb synonyms have not proved effective for query expansion (Hunemark, 2010). While signature terms are weighted, we assume that all query terms are equally important and assign each a weight of 1.

2.1.3 Utterance Scoring
To estimate the utterance score, we view both the query terms and the signature terms as terms that should appear in a human query-based summary. Accordingly, the most relevant (summary-worthy) utterances we select are the ones that maximize the coverage of such terms. Given the query terms and the signature terms, we estimate the utterance score as follows:

$Score_Q = \frac{1}{n} \sum_{i=1}^{n} t(q)_i$   (1)

$Score_S = \frac{1}{n} \sum_{i=1}^{n} t(s)_i \times w(s)_i$   (2)

$Score = \alpha \cdot Score_Q + \beta \cdot Score_S$   (3)

where $n$ is the number of content words in the utterance, $t(q)_i = 1$ if the term $t_i$ is a query term and 0 otherwise, $t(s)_i = 1$ if $t_i$ is a signature term and 0 otherwise, and $w(s)_i$ is the normalized weight associated with the signature term. The parameters $\alpha$ and $\beta$ are tuned on a development set and sum to 1. After all the utterances are scored, the top-scored utterances are selected and passed to the next step. We estimate the percentage of utterances to retrieve based on the development set.
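As an illustration, the scoring step can be implemented directly from Equations (1)-(3). The sketch below is schematic: `query_terms` and `signature_weights` stand in for the WordNet-expanded query terms and the log-likelihood-ratio signature weights described above, the default values for alpha, beta and the fraction of utterances kept are placeholders (all are tuned on the development set), and content-word filtering is assumed to happen beforehand.

```python
# Schematic implementation of the utterance scoring in Equations (1)-(3).
def utterance_score(content_words, query_terms, signature_weights, alpha=0.5, beta=0.5):
    n = len(content_words)
    if n == 0:
        return 0.0
    score_q = sum(1.0 for w in content_words if w in query_terms) / n          # Eq. (1)
    score_s = sum(signature_weights.get(w, 0.0) for w in content_words) / n    # Eq. (2)
    return alpha * score_q + beta * score_s                                    # Eq. (3)

def select_utterances(utterances, query_terms, signature_weights,
                      keep_ratio=0.3, alpha=0.5, beta=0.5):
    """Rank utterances (given as lists of content words) and keep the top fraction."""
    ranked = sorted(utterances,
                    key=lambda u: utterance_score(u, query_terms, signature_weights, alpha, beta),
                    reverse=True)
    return ranked[:max(1, int(len(ranked) * keep_ratio))]
```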
We follow the same practice as (Mehdad et al., 2013a) to build an entailment graph for all selected sentences to identify relevant sentences and eliminate the redundant (in terms of meaning) and less informative ones. 2.3 Abstract Generation In this phase, our goal is to generate understandable informative abstract sentences that capture the content of the source sentences and represents the information needs defined by queries. There are several ways of generating abstract sentences (e.g. (Barzilay and McKeown, 2005; Liu and Liu, 2009; Ganesan et al., 2010; Murray et al., 2010)); however, most of them rely heavily on the sentence structure. We believe that such approaches are suboptimal, especially in dealing with conversational data, because multiparty written conversations are often poorly structured. Instead, we apply an approach that does not rely on syntax, nor on a standard NLG architecture. Moreover, since dealing with user queries efficiency is an important aspect, we aim for an approach that is also motivated by the speed with which the abstracts are obtained. We perform the task of abstract generation in three steps, as follows: 2.3.1 Clustering In order to generate an abstract summary, we need to identify which sentences from the previous step (i.e., redundancy removal) can be clustered and combined in generated abstract sentences. This task can be viewed as sentence clustering, where each sentence cluster can provide the content for an abstract sentence. We use the K-mean clustering algorithm by cosine similarity as a distance function between sentence vectors composed of tf.idf scores. Also notice that the lexical similarity between sentences in one cluster facilitates both the construction of the word graph and finding the best path in the word graph, as described next. 2.3.2 Word Graph In order to construct a word graph, we adopt the method recently proposed by (Mehdad et al., 2013a; Filippova, 2010) with some optimizations. Below, we show how the word graph is applied to generate the abstract sentences. Let G = (W, L) be a directed graph with the set of nodes W representing words and a set of directed edges L representing the links between words. Given a cluster of related sentences S = {s1, s2, ..., sn}, a word graph is constructed by iteratively adding sentences to it. In the first step, the graph represents one sentence plus the start and end symbols. A node is added to the graph for each word in the sentence, and words adjacent are linked with directed edges. When adding a new sentence, a word from the sentence is merged in an existing node in the graph providing that they have the same POS tag and they satisfy one of the following conditions: i) They have the same word form; ii) They are connected in WordNet by the synonymy relation. In this case the lexical choice for the node is selected based on the tf.idf score of each node; iii) They are from a hypernym/hyponym pair or share a common direct hypernym. In this case, both words are replaced by the hypernym; iv) They are in an entailment relation. In this case, the entailing word is replaced by the entailed one. The motivation behind merging non-identical words is to enrich the common terms between the phrases to increase the chance that they could merge into a single phrase. This also helps to move beyond the limitation of original lexical choices. In case the merging is not possible a new node is created in the graph. 
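A minimal sketch of the basic graph construction is given below. It covers only the simplest case, in which tokens merge when they share the same word form and POS tag, and it keeps simple edge counts as a stand-in for the ambiguity handling and edge weighting described next; the relaxed WordNet-based merging conditions are omitted.

```python
# Minimal word-graph construction: nodes are (word, POS) pairs, so identical
# tokens from different sentences collapse onto the same node; synonym,
# hypernym and entailment merging are left out of this sketch.
from collections import defaultdict

START, END = ("<start>", "SYM"), ("<end>", "SYM")

def build_word_graph(sentences):
    """sentences: a cluster of sentences, each a list of (word, pos) pairs.
    Returns adjacency counts: {(word, pos): {(word, pos): count}}."""
    edges = defaultdict(lambda: defaultdict(int))
    for sentence in sentences:
        path = [START] + [(w.lower(), p) for w, p in sentence] + [END]
        for left, right in zip(path, path[1:]):
            edges[left][right] += 1   # repeated adjacencies strengthen the link
    return edges
```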
When a node can be merged with multiple nodes (i.e., merging is ambiguous), either the preceding and following words in the sentence and the neighboring nodes in the graph or the frequency is used to select the candidate node. We connect adjacent words with directed edges. 1223 For the new nodes or unconnected nodes, we draw an edge with a weight of 1. In contrast, when two already connected nodes are added (merged), the weight of their connection is increased by 1. 2.3.3 Path Ranking A word graph, as described above, may contain many sequences connecting start and end. However, it is likely that most of the paths are not readable. We are aiming at generating an informative abstractive sentence for each cluster based on a user query. Moreover, the abstract sentence should be grammatically correct. In order to satisfy both requirements, we have devised the following ranking strategy. First, we prune the paths in which a verb does not exist, to filter ungrammatical sentences. Then we rank other paths as follows: Query focus: to identify the summary sentence with the highest coverage of query content, we propose a score that counts the number of query terms that appear in the path. In order to reward the ranking score to cover more salient terms in the query content, we also consider the tf.idf score of query terms in the coverage formulation. Q(P) = P qi∈P tfidf (qi) P qi∈G tfidf (qi) where the qi are the query terms. Fluency: in order to improve the grammaticality of the generated sentence, we coach our ranking model to select more fluent (i.e., grammatically correct) paths in the graph. We estimate the grammaticality of generated paths (Pr(P)) using a language model. Path weight: The purpose of this function is twofold: i) to generate a grammatical sentence by favoring the links between nodes (words) which appear often; and ii) to generate an informative sentence by increasing the weight of edges connecting salient nodes. For a path P with m nodes, we define the edge weight w(ni, nj) and the path weight W(P) as below: w(ni, nj) = freq(ni) + freq(nj) P P ′∈G ni,nj∈P ′ diff(P ′, ni, nj)−1 W(P) = Pm−1 i=1 w(ni, ni+1) m −1 where the function diff(P ′, ni, nj) refers to the distance between the offset positions pos(P ′, ni) of nodes ni and nj in path P ′ (any path in G containing ni and nj) and is defined as |pos(P ′, nj)− pos(P ′, ni)|. Overal ranking score: In order to generate a query-based abstract sentence that combines the scores above, we employ a ranking model. The purpose of such a model is three-fold: i) to cover the content of query information optimally; ii) to generate a more readable and grammatical sentence; and iii) to favor strong connections between the concepts. Therefore, the final ranking score of path P is calculated over the normalized scores as: Score(P) = α · Q(P) + β · Pr(P) −γ · W(P) Where α, β and γ are the coefficient factors to tune the ranking score and they sum up to 1. In order to rank the graph paths, we select all the paths that contain at least one verb and rerank them using our proposed ranking function to find the best path as the summary of the original sentences in each cluster. 3 Experimental Setup In this section, we show the evaluation results of our proposed framework and its comparison to the baselines and a state-of-the-art query-focused extractive summarization system. 3.1 Datasets One of the challenges of this work is to find suitable conversational datasets that can be used for evaluating our query-based summarization system. 
Most available conversational corpora do not contain any human written summaries, or the gold standard human written summaries are generic (Carletta et al., 2005; Joty et al., 2013). In this work, we use available corpora for emails and chats for written conversations, while for spoken conversation, we employ an available corpus in multiparty meeting conversations. Chat: to the best of our knowledge, the only publicly available chat logs with human written summaries can be downloaded from the GNUe Traffic archive (Zhou and Hovy, 2005; Uthus and Aha, 2011; Uthus and Aha, 2013). Each chat log has a human created summary in the form of a digest. Each digest summarizes IRC logs for a period and consists of few summaries over each chat log with a unique title for the associated human written summary. In this way, the title of each summary 1224 can be counted as a phrasal query and the corresponding summary is considered as the querybased abstract of the associated chat log including only the information most relevant to the title. Therefore, we can use the human-written querybased abstract as gold standards and evaluate our system automatically. Our chat dataset consists of 66 query-based (title-based) human written summaries with their associated queries (titles) and chat logs, created from 40 original chat logs. The average number of tokens are 1840, 325 and 6 for chat logs, query-based summaries and queries, respectively. Meeting: we use the AMI meeting corpus (Carletta et al., 2005) that consists of 140 multiparty meetings with a wide range of annotations, including generic abstractive summaries for each meeting. In order to create queries, we extract three key-phrases from generic abstractive summaries using TextRank algorithm (Mihalcea and Tarau, 2004). We use the extracted key-phrases as queries to generate query-based abstracts. Since there is no human-written query-based summary for AMI corpus, we randomly select 10 meetings and evaluate our system manually. Email: we use BC3 (Ulrich et al., 2008), which contains 40 threads from the W3C corpus. BC3 corpus is annotated with generic human-written abstractive summaries, and it has been used in several previous works (e.g., (Joty et al., 2011)). In order to adapt this corpus to our framework, we followed the same query generation process as for the meeting dataset. Finally, we randomly select 10 emails threads and evaluate the results manually. 3.2 Baselines We compare our approach with the following baselines: 1) Cosine-1st: we rank the utterances in the chat log based on the cosine similarity between the utterance and query. Then, we select the first uttrance as the summary; 2) Cosine-all: we rank the utterances in the chat log based on the cosine similarity between the utterance and query and then select the utterances with a cosine similarity greater than 0; 3) TextRank: a widely used graph-based ranking model for single-document sentence extraction that works by building a graph of all sentences in a document and use similarity as edges to compute the salience of sentences in the graph (Mihalcea and Tarau, 2004); 4) LexRank: another popular graph-based content selection algorithm for multi-document summarization (Erkan and Radev, 2004); 5) Biased LexRank: is a state-of-the-art queryfocused summarization that uses LexRank algorithm in order to recursively retrieve additional passages that are similar to the query, as well as to the other nodes in the graph (Otterbacher et al., 2009). 
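To make the strongest extractive baseline concrete, the query-biased random walk behind Biased LexRank can be sketched as follows. This is a simplification for illustration only: the normalization and the interpolation weight follow the general recipe of query-biased PageRank-style ranking, not necessarily the exact formulation of Otterbacher et al. (2009).

```python
# Schematic query-biased LexRank-style ranking over a sentence graph.
import numpy as np

def biased_lexrank(sim, query_sim, bias=0.2, tol=1e-6, max_iter=100):
    """sim: (n, n) matrix of pairwise sentence similarities.
    query_sim: length-n vector of sentence-to-query similarities.
    bias: interpolation weight between the query prior and the similarity walk."""
    n = sim.shape[0]
    row_sums = sim.sum(axis=1, keepdims=True)
    # Row-normalize similarities into a transition matrix (uniform if a row is all zero).
    trans = np.where(row_sums > 0, sim / np.where(row_sums > 0, row_sums, 1.0), 1.0 / n)
    total = query_sim.sum()
    prior = query_sim / total if total > 0 else np.full(n, 1.0 / n)
    scores = np.full(n, 1.0 / n)
    for _ in range(max_iter):
        new_scores = bias * prior + (1 - bias) * trans.T @ scores
        converged = np.abs(new_scores - scores).sum() < tol
        scores = new_scores
        if converged:
            break
    return scores  # higher score = more salient with respect to both query and centrality
```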
Moreover, we compare our abstractive system with the first part of our framework (utterance extraction in Figure 1), which can be presented as an extractive query-based summarization system (our extractive system). We also show the results of the version we use in our pipeline (our pipeline extractive system). The only difference between the two versions is the length of the generated summaries. In our pipeline we aim at higher recall, since we later filter sentences and aggregate them to generate new abstract sentences. In contrast, in the stand alone version (extractive system) we limit the number of retrieved sentences to the desired length of the summary. We also compare the results of our full system (i.e., with tuning) with a non-optimized version when the ranking coefficients are distributed equally (α = β = γ = 0.33). For parameters estimation, we tune all parameters (utterance selection and path ranking) exhaustively with 0.1 intervals using our development set. For manual evaluation of query-based abstracts (meeting and email datasets), we perform a simple user study assessing the following aspects: i) Overall quality given a query (5-point scale)?; and ii) Responsiveness: how responsive is the generated summary to the query (5-point scale)? Each query-based abstract was rated by two annotators (native English speaker). Evaluators are presented with the original conversation, query and generated summary. For the manual evaluation, we only compare our full system with LexRank (LR) and Biased LexRank (Biased LR). We also ask the evaluators to select the best summary for each query and conversation, given our system generated summary and the two baselines. To evaluate the grammaticality of our generated summaries, following common practice (Barzilay and McKeown, 2005), we randomly selected 50 sentences from original conversations and system 1225 Models ROUGE-1 (%) ROUGE-2 (%) Prc Rec F-1 Prc Rec F-1 Cosine-1st 71 5 8 30 3 5 Cosine-all 30 68 38 18 40 22 TextRank 25 76 34 15 44 20 LexRank 36 50 37 14 20 15 Biased LexRank 36 51 38 15 21 16 Utterance extraction (our extractive system) 34 66∗ 40∗† 20∗† 40∗ 24∗† Utterance extraction (our pipeline extractive system) 30 73∗ 38 19∗† 44∗ 24∗† Our abstractive system (without tuning) 38∗ 59∗ 41∗† 18∗ 27∗ 19∗ Our abstractive system (with tuning) 40∗† 56∗ 42∗† 20∗† 25∗ 22∗† Table 1: Performance of different summarization algorithms on chat logs for query-based chat summarization. Statistically significant improvements (p < 0.01) over the biased LexRank system are marked with *. † indicates statistical significance (p < 0.01) over extractive approaches (TextRank and LexRank). Systems in italics use the query in generating the summary. generated abstracts, for each dataset. Then, we asked annotators to give one of three possible ratings for each sentence based on grammaticality: perfect (2 pts), only one mistake (1 pt) and not acceptable (0 pts), ignoring capitalization or punctuation. Each sentence was rated by two annotators. Note that each sentence was evaluated individually, so the human judges were not affected by intra-sentential problems posed by coreference and topic shifts. 3.3 Experimental Settings For preprocessing our dataset we use OpenNLP3 for tokenization, stemming and part-of-speech tagging. We use six randomly selected querylogs from our chat dataset (about 10% of the dataset) for tuning the coefficient parameters. 
We set the k parameter in our clustering phase to 10 based on the average number of sentences in the human written summaries. For our language model, we use a tri-gram smoothed language model trained using the newswire text provided in the English Gigaword corpus (Graff and Cieri, 2003). For the automatic evaluation we use the official ROUGE software with standard options and report ROUGE-1 and ROUGE-2 precision, recall and F-1 scores. 3.4 Results 3.4.1 Automatic Evaluation (Chat dataset) Abstractive vs. Extractive: our full querybased abstractive summariztion system show statistically significant improvements over baselines 3http://opennlp.apache.org/ and other pure extractive summarization systems for ROUGE-14. This means our systems can effectively aggregate the extracted sentences and generate abstract sentences based on the query content. We can also observe that our full system produces the highest ROUGE-1 precision score among all models, which further confirms the success of this model in meeting the user information needs imposed by queries. The absolute improvement of 10% in precision for ROUGE-1 in our abstractive model over our extractive model (our pipeline) further confirms the effectiveness of our ranking method in generating the abstract sentences considering the query related information. Our extractive query-based method beats all other extractive systems with a higher ROUGE1 and ROUGE-2 which shows the effectiveness of our utterance extraction model in comparison with other extractive models. In other words, using our extractive model described in section 2.1, as a stand alone system, is an effective query-based extractive summarization model. We also observe that our extractive model outperforms our abstractive model for ROUGE-2 score. This can be due to word merging and word replacement choices in the word graph construction, which sometimes change or remove a word in a bigram and consequently may decrease the bigram overlap score. Query Relevance: another interesting observation is that relying only on the cosine similarity (i.e., cosine-all) to measure the query relevance presents a quite strong baseline. This proves the importance of query content in our dataset and further supports the main claim of our work that a 4The statistical significance tests was calculated by approximate randomization, as described in (Yeh, 2000). 1226 Dataset Overal Quality Responsiveness Preference Our Sys Biased LR LR Our Sys Biased LR LR Our Sys Biased LR LR Meeting 2.9 2.5 2.1 3.8 3.2 1.8 70% 30% 0% Email 2.7 1.8 1.7 3.7 3.0 1.5 60% 30% 10% Table 2: Manual evaluation scores for our phrasal query abstraction system in comparison with Biased LexRank and LexRank (LR). Dataset Grammar G=2 G=1 G=0 Orig Sys Orig Sys Orig Sys Orig Sys Chat 1.8 1.6 84% 73% 16% 24% 0% 3% Meeting 1.5 1.3 50% 40% 50% 55% 0% 5% Email 1.9 1.6 85% 60% 15% 35% 0% 5% Table 3: Average rating and distribution over grammaticality scores for phrasal query abstraction system in comparison with original sentences. good summary should express a brief and wellorganized abstract that answers the user’s query. Moreover, a precision of 71% for ROUGE-1 from the simple cosine-1st baseline confirms that some utterances contain more query relevant information in conversational discussions. Query-based vs. 
Generic: the high recall and low precision in TextRank baseline, both for the ROUGE-1 and ROUGE-2 scores, shows the strength of the model in extracting the generic information from chat conversations while missing the query-relevant content. The LexRank baseline improves the results of the TextRank system by increasing the precision and balancing the precision and recall scores for ROUGE-1 score. We believe that this is due to the robustness of the LexRank method in dealing with noisy texts (chat conversations) (Erkan and Radev, 2004). In addition, the Biased LexRank model slightly improves the generic LexRank system. Considering this marginal improvement and relatively high results of pure extractive systems, we can infer that the Biased LexRank extracted summaries do not carry much query relevant content. In contrast, the significant improvement of our model over the extractive methods demonstrates the success of our approach in presenting the query related content in generated abstracts. An example of a short chat log, its related query and corresponding manual and automatic summaries are shown in Example 3. 3.4.2 Manual Evaluation Content and User Preference: Table 2 demonstrates overall quality, responsiveness (query relatedness) and user preference scores for the abstracts generated by our system and two baselines. Results indicate that our system significantly outperforms baselines in overall quality and responsiveness, for both meeting and email datasets. This confirms the validity of the results we obtained by conducting automatic evaluation over the chat dataset. We also can observe that the absolute improvements in overall quality and responsiveness for emails (0.9 and 0.7) is greater than for meetings (0.4 and 0.6). This is expected since dealing with spoken conversations is more challenging than written ones. Note that the responsiveness scores are greater than overall scores. This further proves the effectiveness of our approach in dealing with phrasal queries. We also evaluate the users’ summary preferences. For both datasets (meeting and email), in majority of cases (70% and 60% respectively), the users prefer the query-based abstractive summary generated by our system. Grammaticality: Table 3 shows grammaticality scores and distributions over the three possible scores for all datasets. The chat dataset results demonstrate the highest scores: 73% of the sentences generated by our phrasal query abstraction model are grammatically correct and 24% of the generated sentences are almost correct with only one grammatical error, while only 3% of the abstract sentences are grammatically incorrect. However, the results varies moving to other datasets. For meeting dataset, the percentage of completely grammatical sentences drops dramatically. This is due to the nature of spoken conversations which is more error prone and ungrammatical. The grammaticality score of the original sentences also proves that the sentences from meet1227 Query: Trigger namespace and the self property Chat log: A: good morning B: good morning C: good morning everyone D: good morning D: good night all F: New GNUe Traffic online F: loadsa deep metaphyisical stuff this week F: D & E discuss the meaning of ’self’ ;-) E: yes, and he took the more metaphysical route, where I took the more scientific route E: I say self’s meaning is derived from one’s ancestry E: self’s meaning is derived from how others use you E: okay, analogy extended too far, I guess :) F: is this a friends vs family debate? 
E: also noted that the cool part about that is if you have code that needs to happen both on a pre-insert and a pre-update - but only a few lines of the code is different E: you could have one block of trigger code that used self.action to find out why it had been called and branch accordingly. E: there was a big jump from the previous paragraph to that E: that took that out of context E: iirc, I was saying an alternative was that ”self” could refer to neither the trigger’s owner nor to the trigger’s caller E: but to the event itself E: so self.action could be what action is being performed E: self.parent could be the trigger’s owner E: and self.context could be the caller E: and self.name could be the name of the trigger F: umm, I’m afraid apparant non-sequiturs are always a hazard of doing summaries ;-) E: iow, I was presenting a 3rd alternative where self referred to something specific to the trigger F: I normally try to write around them, but not 100% successful E: I’m just convulsing my thoughts to the irc log E: for an errata next week :) E: I think convulsing is a good description F: heh TexRank: There was a big jump from the previous paragraph to that. that took that out of context iirc, I was saying an alternative was that ”self” could refer to neither the trigger’s owner nor to the trigger’s caller. but to the event itself. so self.action could be what action is being performed, self.parent could be the trigger’s owner, and self.context could be the caller and self.name could be the name of the trigger. umm, I’m afraid apparant non-sequiturs are always a hazard of doing summaries ;-) LexRank: good morning everyone. heh. I’m just convulsing my thoughts to the irc log good morning. Jason also noted that the cool part about that is if you have code that needs to happen both on a preinsert and a pre-update - but only a few lines of the code is different you could have one block of trigger code that used self.action to find out why it had been called and branch accordingly. for an errata next week :) self’s meaning is derived from how others use you. I think convulsing is a good description reinhard & jcater discuss the meaning of ’self’ ;-) Biased-LexRank: good morning everyone. heh. I’m just convulsing my thoughts to the irc log. Jason also noted that the cool part about that is if you have code that needs to happen both on a pre-insert and a pre-update - but only a few lines of the code is different - you could have one block of trigger code that used self.action to find out why it had been called and branch accordingly. yes, and he took the more metaphysical route, where I took the more scientific route there was a big jump from the previous paragraph to that but to the event itself. iow, I was presenting a 3rd alternative where self referred to something specific to the trigger. Our system: self could refer to neither the triggers owner nor caller. I was saying an alternative where self referred to something specific to the trigger. and self.name could be the name. so self.action could be what action is being performed, self.parent the triggers owner and self.context caller. Gold: Further to, E clarified that he had suggested that ”self” could refer to neither the trigger’s owner nor to the trigger’s caller - but to the event itself. So self.action could be what action is being performed, self.parent could be the trigger’s owner, and self.context could be the caller. In other words, I was presenting a 3rd alternative where self referred to something specific to the trigger. 
Example 3. Summaries generated by our system and other baselines in comparison with the humanwritten summary for a short chat log. Speaker information have been anonymized. ing transcripts, although generated by humans, are not fully grammatical. In comparison with the original sentences, for all datasets, our model reports slightly lower results for the grammaticality score. Considering the fact that the abstract sentences are automatically generated and the original sentences are human-written, the grammaticality score and the percentage of fully grammatical sentences generated by our system, with higher ROUGE or quality scores in comparison with other methods, demonstrates that our system is an effective phrasal query abstraction framework for both spoken and written conversations. 4 Conclusion We have presented an unsupervised framework for abstractive summarization of spoken and written conversations based on phrasal queries. For content selection, we propose a sentence extraction model that incorporates query relevance and content importance into the extraction process. For the generation phase, we propose a ranking strategy which selects the best path in the constructed word graph based on fluency, query relevance and content. Both automatic and manual evaluation of our model show substantial improvement over extraction-based methods, including Biased LexRank, which is considered a state-of-the-art system. Moreover, our system also yields good grammaticality score for human evaluation and achieves comparable scores with the original sentences. Our future work is four-fold. First, we are trying to improve our model by incorporating conversational features (e.g., speech acts). Second, we aim at implementing a strategy to order the clusters for generating more coherent abstracts. Third, we try to improve our generated summary by resolving coreferences and incorporating speaker information (e.g., names) in the clustering and sentence generation phases. Finally, we plan to take advantage of topic shifts to better segment the relevant parts of conversations in relation to phrasal queries. Acknowledgments We would like to thank the anonymous reviewers for their valuable comments and suggestions to improve the paper, and the NSERC Business Intelligence Network for financial support. We also would like to acknowledge the early discussions on the related topics with Frank Tompa. 1228 References Meni Adler, Jonathan Berant, and Ido Dagan. 2012. Entailment-based text exploration with application to the health-care domain. In Proceedings of the ACL 2012 System Demonstrations, ACL ’12, pages 79–84, Stroudsburg, PA, USA. Association for Computational Linguistics. Regina Barzilay and Kathleen R. McKeown. 2005. Sentence Fusion for Multidocument News Summarization. Comput. Linguist., 31(3):297–328, September. Jonathan Berant, Ido Dagan, and Jacob Goldberger. 2011. Global Learning of Typed Entailment Rules. In Proceedings of ACL, Portland, OR. Jean Carletta, Simone Ashby, Sebastien Bourban, Mike Flynn, Thomas Hain, Jaroslav Kadlec, Vasilis Karaiskos, Wessel Kraaij, Melissa Kronenthal, Guillaume Lathoud, Mike Lincoln, Agnes Lisowska, and Mccowan Wilfried Post Dennis Reidsma. 2005. The AMI meeting corpus: A pre-announcement. In Proc. MLMI, pages 28–39. I. Dagan and O. Glickman. 2004. Probabilistic Textual Entailment: Generic applied modeling of language variability. In PASCAL Workshop on Learning Methods for Text Understanding and Mining. G¨unes Erkan and Dragomir R. Radev. 2004. 
Lexrank: graph-based lexical centrality as salience in text summarization. J. Artif. Int. Res., 22(1):457–479, December. Katja Filippova. 2010. Multi-sentence compression: finding shortest paths in word graphs. In Proceedings of the 23rd International Conference on Computational Linguistics, COLING ’10, pages 322– 330, Stroudsburg, PA, USA. Association for Computational Linguistics. Kavita Ganesan, ChengXiang Zhai, and Jiawei Han. 2010. Opinosis: a graph-based approach to abstractive summarization of highly redundant opinions. In Proceedings of the 23rd International Conference on Computational Linguistics, COLING ’10, pages 340–348, Stroudsburg, PA, USA. Association for Computational Linguistics. Dan Gillick, Korbinian Riedhammer, Benoit Favre, and Dilek Hakkani-tr. 2009. A global optimization framework for meeting summarization. In Proc. IEEE ICASSP, pages 4769–4772. Julio Gonzalo, Felisa Verdejo, Irina Chugur, and Juan M. Cigarrn. 1998. Indexing with wordnet synsets can improve text retrieval. CoRR. David Graff and Christopher Cieri. 2003. English Gigaword Corpus. Technical report, Linguistic Data Consortium, Philadelphia. Surabhi Gupta, Ani Nenkova, and Dan Jurafsky. 2007. Measuring importance and query relevance in topicfocused multi-document summarization. In Proceedings of the 45th Annual Meeting of the ACL on Interactive Poster and Demonstration Sessions, ACL ’07, pages 193–196, Stroudsburg, PA, USA. Association for Computational Linguistics. Lisa Hunemark. 2010. Query expansion using search logs and WordNet. Technical report, Uppsala University, mar. Masters thesis in Computational Linguistics. Quentin Jones, Gilad Ravid, and Sheizaf Rafaeli. 2004. Information overload and the message dynamics of online interaction spaces: A theoretical model and empirical exploration. Info. Sys. Research, 15(2):194–210, June. Shafiq Joty, Gabriel Murray, and Raymond T. Ng. 2011. Supervised topic segmentation of email conversations. In ICWSM11. AAAI. Shafiq R. Joty, Giuseppe Carenini, and Raymond T. Ng. 2013. Topic segmentation and labeling in asynchronous conversations. J. Artif. Intell. Res. (JAIR), 47:521–573. Chin-Yew Lin and Eduard Hovy. 2000. The automated acquisition of topic signatures for text summarization. In Proc. Of the COLING Conference, pages 495–501. Fei Liu and Yang Liu. 2009. From extractive to abstractive meeting summaries: can it be done by sentence compression? In Proceedings of the ACLIJCNLP 2009 Conference Short Papers, ACLShort ’09, pages 261–264, Stroudsburg, PA, USA. Association for Computational Linguistics. Yashar Mehdad, Giuseppe Carenini, and Raymond NG T. 2013a. Towards Topic Labeling with Phrase Entailment and Aggregation. In Proceedings of NAACL 2013, pages 179–189, Atlanta, USA, June. Association for Computational Linguistics. Yashar Mehdad, Giuseppe Carenini, Frank Tompa, and Raymond T. NG. 2013b. Abstractive meeting summarization with entailment and fusion. In Proceedings of the 14th European Workshop on Natural Language Generation, pages 136–146, Sofia, Bulgaria, August. Association for Computational Linguistics. Weiyi Meng and Clement T. Yu. 2010. Advanced Metasearch Engine Technology. Synthesis Lectures on Data Management. Morgan and Claypool Publishers. R. Mihalcea and P. Tarau. 2004. TextRank: Bringing order into texts. In Proceedings of the 2004 Conference on Empirical Methods in Natural Language Processing, July. Gabriel Murray, Giuseppe Carenini, and Raymond Ng. 2010. Generating and validating abstracts of meeting conversations: a user study. 
In Proceedings of 1229 the 6th International Natural Language Generation Conference, INLG ’10, pages 105–113, Stroudsburg, PA, USA. Association for Computational Linguistics. Jahna Otterbacher, Gnes Erkan, and Dragomir R. Radev. 2009. Biased lexrank: Passage retrieval using random walks with question-based priors. Inf. Process. Manage., 45(1):42–54. Tetsuya Sakai and Karen Sparck-Jones. 2001. Generic summaries for indexing in information retrieval. In Proceedings of the 24th Annual International ACM SIGIR Conference on Research and Development in Information Retrieval, SIGIR ’01, pages 190–198, New York, NY, USA. ACM. J. Ulrich, G. Murray, and G. Carenini. 2008. A publicly available annotated corpus for supervised email summarization. In AAAI08 EMAIL Workshop, Chicago, USA. AAAI. David C. Uthus and David W. Aha. 2011. Plans toward automated chat summarization. In Proceedings of the Workshop on Automatic Summarization for Different Genres, Media, and Languages, WASDGML ’11, pages 1–7, Stroudsburg, PA, USA. Association for Computational Linguistics. David C. Uthus and David W. Aha. 2013. The ubuntu chat corpus for multiparticipant chat analysis. In AAAI Spring Symposium: Analyzing Microtext. Lu Wang and Claire Cardie. 2013. Domainindependent abstract generation for focused meeting summarization. In Proceedings of the 51st Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers), pages 1395– 1405, Sofia, Bulgaria, August. Association for Computational Linguistics. Lu Wang, Hema Raghavan, Vittorio Castelli, Radu Florian, and Claire Cardie. 2013. A sentence compression based framework to query-focused multidocument summarization. In Proceedings of the 51st Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers), pages 1384–1394, Sofia, Bulgaria, August. Association for Computational Linguistics. Alexander Yeh. 2000. More accurate tests for the statistical significance of result differences. In Proceedings of the 18th Conference on Computational Linguistics - Volume 2, COLING ’00, pages 947– 953. Association for Computational Linguistics. Liang Zhou and Eduard Hovy. 2005. Digesting virtual “geek” culture: The summarization of technical internet relay chats. In Proceedings of the 43rd Annual Meeting of the Association for Computational Linguistics (ACL’05), pages 298–305, Ann Arbor, Michigan, June. Association for Computational Linguistics. 1230
Proceedings of the 52nd Annual Meeting of the Association for Computational Linguistics, pages 1231–1240, Baltimore, Maryland, USA, June 23-25 2014. c⃝2014 Association for Computational Linguistics Comparing Multi-label Classification with Reinforcement Learning for Summarisation of Time-series Data Dimitra Gkatzia, Helen Hastie, and Oliver Lemon School of Mathematical and Computer Sciences, Heriot-Watt University, Edinburgh {dg106, h.hastie, o.lemon}@hw.ac.uk Abstract We present a novel approach for automatic report generation from time-series data, in the context of student feedback generation. Our proposed methodology treats content selection as a multi-label (ML) classification problem, which takes as input time-series data and outputs a set of templates, while capturing the dependencies between selected templates. We show that this method generates output closer to the feedback that lecturers actually generated, achieving 3.5% higher accuracy and 15% higher F-score than multiple simple classifiers that keep a history of selected templates. Furthermore, we compare a ML classifier with a Reinforcement Learning (RL) approach in simulation and using ratings from real student users. We show that the different methods have different benefits, with ML being more accurate for predicting what was seen in the training data, whereas RL is more exploratory and slightly preferred by the students. 1 Introduction Summarisation of time-series data refers to the task of automatically generating text from variables whose values change over time. We consider the task of automatically generating feedback summaries for students describing their performance during the lab of a Computer Science module over the semester. Students’ learning can be influenced by many variables, such as difficulty of the material (Person et al., 1995), other deadlines (Craig et al., 2004), attendance in lectures (Ames, 1992), etc. These variables have two important qualities. Firstly, they change over time, and secondly they can be dependent on or independent of each other. Therefore, when generating feedback, we need to take into account all variables simultaneously in order to capture potential dependencies and provide more effective and useful feedback that is relevant to the students. In this work, we concentrate on content selection which is the task of choosing what to say, i.e. what information is to be included in a report (Reiter and Dale, 2000). Content selection decisions based on trends in time-series data determine the selection of the useful and important variables, which we refer to here as factors, that should be conveyed in a summary. The decisions of factor selection can be influenced by other factors that their values are correlated with; can be based on the appearance or absence of other factors in the summary; and can be based on the factors’ behaviour over time. Moreover, some factors may have to be discussed together in order to achieve some communicative goal, for instance, a teacher might want to refer to student’s marks as a motivation for increasing the number of hours studied. We frame content selection as a simple classification task: given a set of time-series data, decide for each template whether it should be included in a summary or not. In this paper, with the term ‘template’ we refer to a quadruple consisting of an id, a factor (bottom left of Table 1), a reference type (trend, weeks, average, other) and surface text. 
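To make this representation concrete, a template and a content-selection decision can be sketched as below. This is a minimal illustration in Python; the field names and example values are assumptions rather than the authors' implementation, and only the quadruple structure (id, factor, reference type, surface text) comes from the paper.

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class Template:
    """A content-selection unit: id, factor, reference type and surface text.

    Field names and example values are illustrative; the paper only specifies
    that a template is a quadruple of these four components.
    """
    template_id: int
    factor: str          # e.g. "marks", "hours studied"
    reference_type: str  # one of "trend", "weeks", "average", "other"
    surface_text: str

t = Template(0, "marks", "trend",
             "Your overall performance was excellent during the semester.")
# Content selection then reduces to a yes/no decision per template:
decisions = {t.template_id: True}
```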
However, simple classification assumes that the templates are independent of each other, thus the decision for each template is taken in isolation from the others, which is not appropriate for our domain. In order to capture the dependencies in the context, multiple simple classifiers can make the decisions for each template iteratively. After each iteration, the feature space grows by 1 feature, in order to include the history of the previous template decisions. Here, we propose an alternative method that tackles the challenge of interdependent data by using multi-label (ML) classification, which is efficient in taking data dependencies 1231 Raw Data factors week 2 week 3 ... week 10 marks 5 4 ... 5 hours studied 1 2 ... 3 ... ... ... ... ... Trends from Data factors trend (1) marks (M) trend other (2) hours studied (HS) trend increasing (3) understandability (Und) trend decreasing (4) difficulty (Diff) trend decreasing (5) deadlines (DL) trend increasing (6) health issues (HI) trend other (7) personal issues (PI) trend decreasing (8) lectures attended (LA) trend other (9) revision (R) trend decreasing Summary Your overall performance was excellent during the semester. Keep up the good work and maybe try some more challenging exercises. Your attendance was varying over the semester. Have a think about how to use time in lectures to improve your understanding of the material. You spent 2 hours studying the lecture material on average. You should dedicate more time to study. You seem to find the material easier to understand compared to the beginning of the semester. Keep up the good work! You revised part of the learning material. Have a think whether revising has improved your performance. Table 1: The table on the top left shows an example of the time-series raw data for feedback generation. The table on the bottom left shows an example of described trends. The box on the right presents a target summary (target summaries have been constructed by teaching staff). into account and generating a set of labels (in our case templates) simultaneously (Tsoumakas et al., 2010). ML classification requires no history, i.e. does not keep track of previous decisions, and thus has a smaller feature space. Our contributions to the field are as follows: we present a novel and efficient method for tackling the challenge of content selection using a ML classification approach; we applied this method to the domain of feedback summarisation; we present a comparison with an optimisation technique (Reinforcement Learning), and we discuss the similarities and differences between the two methods. In the next section, we refer to the related work on Natural Language Generation from time-series data and on Content Selection. In Section 4.2, we describe our approach and we carry out a comparison with simple classification methods. In Section 5, we present the evaluation setup and in Section 6 we discuss the results, obtained in simulation and with real students. Finally, in Section 8, directions for future work are discussed. 2 Related Work Natural Language Generation from time-series data has been investigated for various tasks such as weather forecast generation (Belz and Kow, 2010; Angeli et al., 2010; Sripada et al., 2004), report generation from clinical data (Hunter et al., 2011; Gatt et al., 2009), narrative to assist children with communication needs (Black et al., 2010) and audiovisual debrief generation from sensor data from Autonomous Underwater Vehicles missions (Johnson and Lane, 2011). 
The important tasks of time-series data summarisation systems are content selection (what to say), surface realisation (how to say it) and information presentation (Document Planning, Ordering, etc.). In this work, we concentrate on content selection. Previous methods for content selection include Reinforcement Learning (Rieser et al., 2010); multi-objective optimisation (Gkatzia et al., 2014); Gricean Maxims (Sripada et al., 2003); Integer Linear Programming (Lampouras and Androutsopoulos, 2013); collective content selection (Barzilay and Lapata, 2004); interest scores assigned to content (Androutsopoulos et al., 2013); a combination of statistical and template-based approaches to NLG (Kondadadi et al., 2013); statistical acquisition of rules (Duboue and McKeown, 2003) and the Hidden Markov model approach for Content Selection and ordering (Barzilay and Lee, 2004). Collective content selection (Barzilay and Lapata, 2004) is similar to our proposed method in that it is a classification task that predicts the templates from the same instance simultaneously. The difference between the two methods lies in that the 1232 collective content selection requires the consideration of an individual preference score (which is defined as the preference of the entity to be selected or omitted, and it is based on the values of entity attributes and is computed using a boosting algorithm) and the identification of links between the entities with similar labels. In contrast, ML classification does not need the computation of links between the data and the templates. ML classification can also apply to other problems whose features are correlated, such as text classification (Madjarov et al., 2012), when an aligned dataset is provided. ML classification algorithms have been divided into three categories: algorithm adaptation methods, problem transformation and ensemble methods (Tsoumakas and Katakis, 2007; Madjarov et al., 2012). Algorithm adaptation approaches (Tsoumakas et al., 2010) extend simple classification methods to handle ML data. For example, the k-nearest neighbour algorithm is extended to ML-kNN by Zhang and Zhou (2007). MLkNN identifies for each new instance its k nearest neighbours in the training set and then it predicts the label set by utilising the maximum a posteriori principle according to statistical information derived from the label sets of the k neighbours. Problem transformation approaches (Tsoumakas and Katakis, 2007) transform the ML classification task into one or more simple classification tasks. Ensemble methods (Tsoumakas et al., 2010) are algorithms that use ensembles to perform ML learning and they are based on problem transformation or algorithm adaptation methods. In this paper, we applied RAkEL (Random k-labelsets) (Tsoumakas et al., 2010): an ensemble problem transformation method, which constructs an ensemble of simple-label classifiers, where each one deals with a random subset of the labels. Finally, our domain for feedback generation is motivated by previous studies (Law et al., 2005; van den Meulen et al., 2010) who show that text summaries are more effective in decision making than graphs therefore it is advantageous to provide a summary over showing users the raw data graphically. In addition, feedback summarisation from time-series data can be applied to the field of Intelligent Tutoring Systems (Gross et al., 2012). 3 Data The dataset consists of 37 instances referring to the activities of 26 students. For a few students there is more than 1 instance. 
An example of one such instance is presented in Table 1. Each instance includes time-series information about the student’s learning habits and the selected templates that lecturers used to provide feedback to this student. The time-series information includes for each week of the semester: (1) the marks achieved at the lab; (2) the hours that the student spent studying; (3) the understandability of the material; (4) the difficulty of the lab exercises as assessed by the student; (5) the number of other deadlines that the student had that week; (6) health issues; (7) personal issues; (8) the number of lectures attended; and (9) the amount of revision that the student had performed. The templates describe these factors in four different ways: 1. <trend>: referring to the trend of a factor over the semester (e.g. “Your performance was increasing...”), 2. <weeks>: explicitly describing the factor value at specific weeks (e.g. “In weeks 2, 3 and 9...”), 3. <average>: considering the average of a factor value (e.g. “You dedicated 1.5 hours studying on average...”), and 4. <other>: mentioning other relevant information (e.g. “Revising material will improve your performance”). For the corpus creation, 11 lecturers selected the content to be conveyed in a summary, given the set of raw data (Gkatzia et al., 2013). As a result, for the same student there are various summaries provided by the different experts. This characteristic of the dataset, that each instance is associated with more than one solution, additionally motivates the use of multi-label classification, which is concerned with learning from examples, where each example is associated with multiple labels. Our analysis of the dataset showed that there are significant correlations between the factors, for example, the number of lectures attended (LA) correlates with the student’s understanding of the material (Und), see Table 2. As we will discuss further in Section 5.1, content decisions are influenced by the previously generated content, for example, if the lecturer has previously mentioned health issues, mentioning hours studied has a high probability of also being mentioned. 1233 Factor (1) M (2) HS (3) Und (4) Diff (5) DL (6) HI (7) PI (8) LA (9) R (1) M 1* 0.52* 0.44* -0.53* -0.31 -0.30 -0.36* 0.44* 0.16 (2) HS 0.52* 1* 0.23 -0.09 -0.11 0.11 -0.29 0.32 0.47* (3) Und 0.44* 0.23 1* -0.54* 0.03 -0.26 0.12 0.60* 0.32 (4) Diff -0.53* -0.09 -0.54* 1* 0.16 -0.06 0.03 -0.19 0.14 (5) DL -0.31 -0.11 0.03 0.16 1* 0.26 0.24 -0.44* 0.14 (6) HI -0.30 -0.11 -0.26 -0.06 0.26 1* 0.27 -0.50* 0.15 (7) PI -0.36* -0.29 0.12 0.03 0.24 0.27 1* -0.46* 0.34* (8) LA 0.44* 0.32 0.60* -0.19 -0.44* -0.50* -0.46* 1* -0.12 (9) R 0.16 0.47* 0.03 0.14 0.14 0.15 0.34* -0.12 1* Table 2: The table presents the Pearson’s correlation coefficients of the factors (* means p<0.05). 4 Methodology In this section, the content selection task and the suggested multi-label classification approach are presented. The development and evaluation of the time-series generation system follows the following pipeline (Gkatzia et al., 2013): 1. Time-Series data collection from students 2. Template construction by Learning and Teaching (L&T) expert 3. Feedback summaries constructed by lecturers; random summaries rated by lecturers 4. Development of time-series generation systems (Section 4.2, Section 5.3): ML system, RL system, Rule-based and Random system 5. 
Evaluation: (Section 5) - Offline evaluation (Accuracy and Reward) - Online evaluation (Subjective Ratings) 4.1 The Content Selection Task Our learning task is formed as follows: given a set of 9 time-series factors, select the content that is most appropriate to be included in a summary. Content is regarded as labels (each template represents a label) and thus the task can be thought of as a classification problem. As mentioned, there are 4 ways to refer to a factor: (1) describing the trend, (2) describing what happened in every time stamp, (3) mentioning the average and (4) making another general statement. Overall, for all factors there are 29 different templates1. An example of the input data is shown in Table 1. There are two decisions that need to be made: (1) whether to talk about a factor and (2) in which way to refer to it. Instead of dealing with this task in a hierarchical way, where the algorithm will first learn whether to talk about a factor and then to decide how to 1There are fewer than 36 templates, because for some factors there are less than 4 possible ways of referring to them. refer to it, we transformed the task in order to reduce the learning steps. Therefore, classification can reduce the decision workload by deciding either in which way to talk about it, or not to talk about a factor at all. 4.2 The Multi-label Classification Approach Traditional single-label classification is the task of identifying which label one new observation is associated with, by choosing from a set of labels L (Tsoumakas et al., 2010). Multi-label classification is the task of associating an observation with a set of labels Y ⊆L (Tsoumakas et al., 2010). One set of factor values can result in various sets of templates as interpreted by the different experts. A ML classifier is able to make decisions for all templates simultaneously and capture these differences. The RAndom k-labELsets (RAkEL) (Tsoumakas et al., 2010) was applied in order to perform ML classification. RAkEL is based on Label Powerset (LP), a problem transformation method (Tsoumakas et al., 2010). LP benefits from taking into consideration label correlations, but does not perform well when trained with few examples as in our case (Tsoumakas et al., 2010). RAkEL overcomes this limitation by constructing a set of LP classifiers, which are trained with different random subsets of the set of labels (Tsoumakas et al., 2010). The LP method transforms the ML task, into one single-label multi-class classification task, where the possible set of predicted variables for the transformed class is the powerset of labels present in the original dataset. For instance, the set of labels L = {temp0, temp1, ...temp28} could be transformed to {temp0,1,2, temp28,3,17,...}. This algorithm does not perform well when considering a large number of labels, due to the fact that the label space grows exponentially (Tsoumakas 1234 Classifier Accuracy Precision Recall F score (10-fold) Decision Tree (no history) *75.95% 67.56 75.96 67.87 Decision Tree (with predicted history) **73.43% 65.49 72.05 70.95 Decision Tree (with real history) **78.09% 74.51 78.11 75.54 Majority-class (single label) **72.02% 61.73 77.37 68.21 RAkEL (multi-label) (no history) 76.95% 85.08 85.94 85.50 Table 3: Average, precision, recall and F-score of the different classification methods (T-test, * denotes significance with p<0.05 and ** significance with p<0.01, when comparing each result to RAkEL). et al., 2010). 
RAkEL tackles this problem by constructing an ensemble of LP classifiers and training each one on a different random subset of the set of labels (Tsoumakas et al., 2010). 4.2.1 The Production Phase of RAkEL The algorithm was implemented using the MULAN open-source Java library (Tsoumakas et al., 2011), which is based on WEKA (Witten and Frank, 2005). The algorithm works in two phases: 1. the production of an ensemble of LP algorithms, and 2. the combination of the LP algorithms. RAkEL takes as input the following parameters: (1) the number of iterations m (developer-specified; it denotes the number of models that the algorithm will produce), (2) the size of the labelset k (also developer-specified), (3) the set of labels L, and (4) the training set D. During the initial phase it outputs an ensemble of LP classifiers and the corresponding k-labelsets. Pseudocode for the production phase is shown below:
Algorithm 1 RAkEL production phase
1: Input: iterations m, labelset size k, labels L, training data D
2: for i = 0 to m
3:   Select a random k-labelset from L
4:   Train an LP classifier on D
5:   Add the LP classifier to the ensemble
6: end for
7: Output: the ensemble of LP classifiers with their corresponding k-labelsets
4.2.2 The Combination Phase During the combination phase, the algorithm takes as input the results of the production phase, i.e. the ensemble of LP classifiers with their corresponding k-labelsets, the set of labels L, and a new instance x, and it outputs the vector of predicted labels for instance x. At run time, RAkEL estimates the average decision for each label in L and, if the average is greater than a threshold t (determined by the developer), it includes the label in the predicted labelset. We used the standard parameter values of t, k and m (t = 0.5, k = 3 and m = 58, i.e. 2*29 templates). In future, we could perform parameter optimisation using a technique similar to (Gabsdil and Lemon, 2004). 5 Evaluation Firstly, we performed a preliminary evaluation of classification methods, comparing our proposed ML classification with multiple iterated classification approaches. The summaries generated by the ML classification system are then compared with the output of an RL system and two baseline systems, in simulation and with real students. 5.1 Comparison with Simple Classification We compared the RAkEL algorithm with single-label (SL) classification. Different SL classifiers were trained using WEKA: JRip, Decision Trees, Naive Bayes, k-nearest neighbour, logistic regression, multi-layer perceptron and support vector machines. We found that Decision Trees achieved on average 3% higher accuracy. We therefore went on to use Decision Trees that use generation history in three ways. Firstly, for Decision Tree (no history), 29 decision-tree classifiers were trained, one for each template. The input to these classifiers was the 9 factors, and each classifier was trained to decide whether or not to include a specific template. This method did not take into account other selected templates – it was based only on the time-series data. Secondly, for Decision Tree (with predicted history), 29 classifiers were also trained, but this time the input included the previous decisions made by the earlier classifiers (i.e. the history) as well as the set of time-series data, in order to emulate the dependencies in the dataset. For instance, classifier n was trained using the data from the 9 factors and the template decisions for templates 0 to n−1.
Thirdly, for Decision Tree (with real history), the real, expert values were used rather than the predicted ones in the history. The above-mentioned classifiers are compared with, the Majority-class (single label) baseline, which labels each instance with the most frequent template. The accuracy, the weighted precision, the weighted recall, and the weighted F-score of the classifiers are shown in Table 3. It was found that in 10-fold cross validation RAkEL performs significantly better in all these automatic measures (accuracy = 76.95%, F-score = 85.50%). Remarkably, ML achieves more than 10% higher F-score than the other methods (Table 3). The average accuracy of the single-label classifiers is 75.95% (10-fold validation), compared to 73.43% of classification with history. The reduced accuracy of the classification with predicted history is due to the error in the predicted values. In this method, at every step, the predicted outcome was used including the incorrect decisions that the classifier made. The upper-bound accuracy is 78.09% calculated by using the expert previous decisions and not the potentially erroneous predicted decisions. This result is indicative of the significance of the relations between the factors showing that the predicted decisions are dependent due to existing correlations as discussed in Section 1, therefore the system should not take these decisions independently. ML classification performs better because it does take into account these correlations and dependencies in the data. 5.2 The Reinforcement Learning System Reinforcement Learning (RL) is a machine learning technique that defines how an agent learns to take optimal actions so as to maximise a cumulative reward (Sutton and Barto, 1998). Content selection is seen as a Markov Decision problem and the goal of the agent is to learn to take the sequence of actions that leads to optimal content selection. The Temporal Difference learning method was used to train an agent for content selection. Actions and States: The state consists of the time-series data and the selected templates. In order to explore the state space the agent selects a factor (e.g. marks, deadlines etc.) and then decides whether to talk about it or not. Reward Function: The reward function reflects the lecturers’ preferences on summaries and is derived through linear regression analysis of a dataset containing lecturer constructed summaries and ratings of randomly generated summaries. Specifically, it is the following cumulative multivariate function: Reward = a + n ∑ i=1 bi ∗xi + c ∗length where X = {x1, x2, ..., xn} describes the combinations of the data trends observed in the timeseries data and a particular template. a, b and c are the regression coefficients, and their values vary from -99 to 221. The value of xi is given by the function: xi =            1, the combination of a factor trend and a template type is included in a summary 0, if not. The RL system differs from the classification system in the way it performs content selection. In the training phase, the agent selects a factor and then decides whether to talk about it or not. If the agent decides to refer to a factor, the template is selected in a deterministic way, i.e. from the available templates it selects the template that results in higher expected cumulative future reward. 
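A rough sketch of how this regression-based reward scores a candidate summary is given below. It is an illustration only: the intercept and length coefficient default to placeholder values, the combination keys are hypothetical labels, and only the two example weights (57 and -81) are taken from the results discussion in Section 6.1.

```python
def summary_reward(selected_combos, length, coeffs, intercept=0.0, length_coeff=0.0):
    """Score a summary with the regression-based reward of Section 5.2.

    selected_combos: the set of (factor trend, template type) combinations
    realised in the summary; coeffs maps each combination to its regression
    weight b_i. The intercept a and length weight c are placeholders here;
    only the two example weights below are quoted in the paper.
    """
    reward = intercept + length_coeff * length
    for combo in selected_combos:
        reward += coeffs.get(combo, 0.0)
    return reward

# Hypothetical combination labels; the weights 57 and -81 are the examples
# mentioned in the results discussion (Section 6.1).
coeffs = {("deadlines increasing", "weeks"): 57.0,
          ("difficulty decreasing", "average"): -81.0}
print(summary_reward({("deadlines increasing", "weeks")}, length=5, coeffs=coeffs))
```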
5.3 The Baseline Systems We compared the ML system and the RL system with two baselines described below by measuring the accuracy of their outputs, the reward achieved by the reward function used for the RL system, and finally we also performed evaluation with student users. In order to reduce the confounding variables, we kept the ordering of content in all systems the same, by adopting the ordering of the rule-based system. The baselines are as follows: 1. Rule-based System: generates summaries based on Content Selection rules derived by working with a L&T expert and a student (Gkatzia et al., 2013). 2. Random System: initially, selects a factor randomly and then selects a template randomly, until it makes decisions for all factors. 1236 Time-Series Accuracy Reward Rating Mode (mean) Data Source Summarisation Systems Multi-label Classification 85% 65.4 7 (6.24) Lecturers’ constructed summaries Reinforcement Learning **66% 243.82 8 (6.54) Lecturers’ ratings & summaries Rule-based **65% 107.77 7, 8 (5.86) L&T expert Random **45.2% 43.29 *2 (*4.37) Random Table 4: Accuracy, average rewards (based on lecturers’ preferences) and averages of the means of the student ratings. Accuracy significance (Z-test) with RAkEL at p<0.05 is indicated as * and at p<0.01 as **. Student ratings significance (Mann Whitney U test) with RAkEL at p<0.05 is indicated as *. 6 Results Each of the four systems described above generated 26 feedback summaries corresponding to the 26 student profiles. These summaries were evaluated in simulation and with real student users. 6.1 Results in Simulation Table 4 presents the accuracy, reward, and mode of student rating of each algorithm when used to generate the 26 summaries. Accuracy was estimated as the proportion of the correctly classified templates to the population of templates. In order to have a more objective view on the results, the score achieved by each algorithm using the reward function was also calculated. ML classification achieved significantly higher accuracy, which was expected as it is a supervised learning method. The rule-based system and the RL system have lower accuracy compared to the ML system. There is evidently a mismatch between the rules and the test-set; the content selection rules are based on heuristics provided by a L&T Expert rather than by the same pool of lecturers that created the test-set. On the contrary, the RL is trained to optimise the selected content and not to replicate the existing lecturer summaries, hence there is a difference in accuracy. Accuracy measures how similar the generated output is to the gold standard, whereas the reward function calculates a score regarding how good the output is, given an objective function. RL is trained to optimise for this function, and therefore it achieves higher reward, whereas ML is trained to learn by examples, therefore it produces output closer to the gold standard (lecturer’s produced summaries). RL uses exploration and exploitation to discover combinations of content that result in higher reward. The reward represents predicted ratings that lecturers would give to the summary. The reward for the lecturers’ produced summaries is 124.62 and for the ML method is 107.77. The ML classification system performed worse than this gold standard in terms of reward, which is expected given the error in predictions (supervised methods learn to reproduce the gold standard). 
Moreover, each decision is rewarded with a different value as some combinations of factors and templates have greater or negative regression coefficients. For instance, the combination of the factors “deadlines” and the template that corresponds to <weeks> is rewarded with 57. On the other hand, when mentioning the <average> difficulty the summary is “punished” with -81 (see description of the reward function in Section 5.2). Consequently, a single poor decision in the ML classification can result in much less reward. 6.2 Subjective Results with Students 37 first year computer science students participated in the study. Each participant was shown a graphical representation of the time-series data of one student and four different summaries generated by the four systems (see Figure 1). The order of the presented summaries was randomised. They were asked to rate each feedback summary on a 10-point rating scale in response to the following statement: “Imagine you are the following student. How would you evaluate the following feedback summaries from 1 to 10?”, where 10 corresponds to the most preferred summary and 1 to the least preferred. The difference in ratings between the ML classification system, the RL system and the Rulebased system is not significant (see Mode (mean) in Table 4, p>0.05). However, there is a trend towards the RL system. The classification method reduces the generation steps, by making the decision of the factor selection and the template selection jointly. Moreover, the training time for the classification method is faster (a couple of seconds compared to over an hour). Finally, the student 1237 Figure 1: The Figure show the evaluation setup. Students were presenting with the data in a graphical way and then they were asked to evaluate each summary in a 10-point Rating scale. Summaries displayed from left to right: ML system, RL, rule-based and random. significantly prefer all the systems over the random. 7 Summary We have shown that ML classification for summarisation of our time-series data has an accuracy of 76.95% and that this approach significantly outperforms other classification methods as it is able to capture dependencies in the data when making content selection decisions. ML classification was also directly compared to a RL method. It was found that although ML classification is almost 20% more accurate than RL, both methods perform comparably when rated by humans. This may be due to the fact that the RL optimisation method is able to provide more varied responses over time rather than just emulating the training data as with standard supervised learning approaches. Foster (2008) found similar results when performing a study on generation of emphatic facial displays. A previous study by Belz and Reiter (2006) has demonstrated that automatic metrics can correlate highly with human ratings if the training dataset is of high quality. In our study, the human ratings correlate well to the average scores achieved by the reward function. However, the human ratings do not correlate well to the accuracy scores. It is interesting that the two methods that score differently on various automatic metrics, such as accuracy, reward, precision, recall and F-score, are evaluated similarly by users. The comparison shows that each method can serve different goals. Multi-label classification generates output closer to gold standard whereas RL can optimise the output according to a reward function. 
ML classification could be used when the goal of the generation is to replicate phenomena seen in the dataset, because it achieves high accuracy, precision and recall. However, optimisation methods can be more flexible, provide more varied output and can be trained for different goals, e.g. for capturing preferences of different users. 1238 8 Future Work For this initial experiment, we evaluated with students and not with lecturers, since the students are the recipients of feedback. In future, we plan to evaluate with students’ own data under real circumstances as well as with ratings from lecturers. Moreover, we plan to utilise the results from this student evaluation in order to train an optimisation algorithm to perform summarisation according to students’ preferences. In this case, optimisation would be the preferred method as it would not be appropriate to collect gold standard data from students. In fact, it would be of interest to investigate multi-objective optimisation techniques that can balance the needs of the lecturers to convey important content to the satisfaction of students. 9 Acknowledgements The research leading to this work has received funding from the EC’s FP7 programme: (FP7/2011-14) under grant agreement no. 248765 (Help4Mood). References Carole Ames. 1992. Classrooms: Goals, structures, and student motivation. Journal of Educational Psychology, 84(3):261–71. Ion Androutsopoulos, Gerasimos Lampouras, and Dimitrios Galanis. 2013. Generating natural language descriptions from owl ontologies: the natural owl system. Atrificial Intelligence Research, 48:671–715. Gabor Angeli, Percy Liang, and Dan Klein. 2010. A simple domain-independent probabilistic approach to generation. In Conference on Empirical Methods in Natural Language Processing (EMNLP). Regina Barzilay and Mirella Lapata. 2004. Collective content selection for concept-to-text generation. In Conference on Human Language Technology and Empirical Methods in Natural Language Processing (HLT-EMNLP). Regina Barzilay and Lillian Lee. 2004. Catching the drift: Probabilistic content models, with applications to generation and summarization. In Human Language Technology Conference of the North American Chapter of the Association for Computational Linguistics (HLT-NAACL). Anja Belz and Eric Kow. 2010. Extracting parallel fragments from comparable corpora for data-to-text generation. In 6th International Natural Language Generation Conference (INLG). Anja Belz and Ehud Reiter. 2006. Comparing automatic and human evaluation of nlg systems. In 11th Conference of the European Chapter of the Association for Computational Linguistics (ACL). Rolf Black, Joe Reddington, Ehud Reiter, Nava Tintarev, and Annalu Waller. 2010. Using NLG and sensors to support personal narrative for children with complex communication needs. In NAACL HLT 2010 Workshop on Speech and Language Processing for Assistive Technologies. Scotty D. Craig, Arthur C. Graesser, Jeremiah Sullins, and Barry Gholson. 2004. Affect and learning: an exploratory look into the role of affect in learning with autotutor. Journal of Educational Media, 29:241–250. Pable Duboue and K.R. McKeown. 2003. Statistical acquisition of content selection rules for natural language generation. In Conference on Human Language Technology and Empirical Methods in Natural Language Processing (EMNLP). Mary Ellen Foster. 2008. Automated metrics that agree with human judgements on generated output for an embodied conversational agent. 
In 5th International Natural Language Generation Conference (INLG). Malte Gabsdil and Oliver Lemon. 2004. Combining acoustic and pragmatic features to predict recognition performance in spoken dialogue systems. In 42nd Annual Meeting of the Association for Computational Linguistics (ACL). Albert Gatt, Francois Portet, Ehud Reiter, James Hunter, Saad Mahamood, Wendy Moncur, and Somayajulu Sripada. 2009. From data to text in the neonatal intensive care unit: Using NLG technology for decision support and information management. AI Communications, 22: 153-186. Dimitra Gkatzia, Helen Hastie, Srinivasan Janarthanam, and Oliver Lemon. 2013. Generating student feedback from time-series data using Reinforcement Learning. In 14th European Workshop in Natural Language Generation (ENLG). Dimitra Gkatzia, Helen Hastie, and Oliver Lemon. 2014. Finding Middle Ground? Multi-objective Natural Language Generation from time-series data. In 14th Conference of the European Chapter of the Association for Computational Linguistics (EACL) (to appear). Sebastian Gross, Bassam Mokbel, Barbara Hammer, and Niels Pinkwart. 2012. Feedback provision strategies in intelligent tutoring systems based on clustered solution spaces. In J. Desel, J. M. Haake, and C. Spannagel, editors, Tagungsband der 10. eLearning Fachtagung Informatik (DeLFI), number P-207 in GI Lecture Notes in Informatics, pages 27– 38. GI. 1239 Jim Hunter, Yvonne Freer, Albert Gatt, Yaji Sripada, Cindy Sykes, and D Westwater. 2011. Bt-nurse: Computer generation of natural language shift summaries from complex heterogeneous medical data. American Medical Informatics Association, 18:621624. Nicholas Johnson and David Lane. 2011. Narrative monologue as a first step towards advanced mission debrief for AUV operator situational awareness. In 15th International Conference on Advanced Robotics. Ravi Kondadadi, Blake Howald, and Frank Schilder. 2013. A statistical nlg framework for aggregated planning and realization. In 51st Annual Meeting of the Association for Computational Linguistics (ACL). Gerasimos Lampouras and Ion Androutsopoulos. 2013. Using integer linear programming in conceptto-text generation to produce more compact texts. In 51st Annual Meeting of the Association for Computational Linguistics (ACL). Anna S. Law, Yvonne Freer, Jim Hunter, Robert H. Logie, Neil McIntosh, and John Quinn. 2005. A comparison of graphical and textual presentations of time series data to support medical decision making in the neonatal intensive care unit. Journal of Clinical Monitoring and Computing, pages 19: 183–194. Gjorgji Madjarov, Dragi Kocev, Dejan Gjorgjevikj, and Saso Dzeroski. 2012. An extensive experimental comparison of methods for multi-label learning. Pattern Recognition, 45(9):3084–3104. Natalie K. Person, Roger J. Kreuz, Rolf A. Zwaan, and Arthur C. Graesser. 1995. Pragmatics and pedagogy: Conversational rules and politeness strategies may inhibit effective tutoring. Journal of Cognition and Instruction, 13(2):161-188. Ehud Reiter and Robert Dale. 2000. Building natural language generation systems. Cambridge University Press. Verena Rieser, Oliver Lemon, and Xingkun Liu. 2010. Optimising information presentation for spoken dialogue systems. In 48th Annual Meeting of the Association for Computational Linguistics (ACL). Somayajulu Sripada, Ehud Reiter, Jim Hunter, and Jin Yu. 2003. Generating english summaries of time series data using the gricean maxims. In 9th ACM international conference on Knowledge discovery and data mining (SIGKDD). 
Somayajulu Sripada, Ehud Reiter, I Davy, and K Nilssen. 2004. Lessons from deploying NLG technology for marine weather forecast text generation. In PAIS session of ECAI-2004:760-764. Richart Sutton and Andrew Barto. 1998. Reinforcement learning. MIT Press. Grigorios Tsoumakas and Ioannis Katakis. 2007. Multi-label classification: An overview. International Journal Data Warehousing and Mining, 3(3):1–13. Grigorios Tsoumakas, Ioannis Katakis, and Ioannis Vlahavas. 2010. Random k-labelsets for multilabel classification. IEEE Transactions on Knowledge and Data Engineering, 99(1):1079–1089. Grigorios Tsoumakas, Eleftherios SpyromitrosXioufis, Josef Vilcek, and Ioannis Vlahavas. 2011. Mulan: A java library for multi-label learning. Journal of Machine Learning Research, 12(1):2411–2414. Marian van den Meulen, Robert Logie, Yvonne Freer, Cindy Sykes, Neil McIntosh, and Jim Hunter. 2010. When a graph is poorer than 100 words: A comparison of computerised natural language generation, human generated descriptions and graphical displays in neonatal intensive care. In Applied Cognitive Psychology, 24: 77-89. Ian Witten and Eibe Frank. 2005. Data mining: Practical machine learning tools and techniques. Morgan Kaufmann Publishers. Min-Ling Zhang and Zhi-Hua Zhou. 2007. Ml-knn: A lazy learning approach to multi-label learning. Pattern Recognition, 40(7):2038–2048. 1240
Proceedings of the 52nd Annual Meeting of the Association for Computational Linguistics, pages 1241–1251, Baltimore, Maryland, USA, June 23-25 2014. c⃝2014 Association for Computational Linguistics Approximation Strategies for Multi-Structure Sentence Compression Kapil Thadani Department of Computer Science Columbia University New York, NY 10025, USA [email protected] Abstract Sentence compression has been shown to benefit from joint inference involving both n-gram and dependency-factored objectives but this typically requires expensive integer programming. We explore instead the use of Lagrangian relaxation to decouple the two subproblems and solve them separately. While dynamic programming is viable for bigram-based sentence compression, finding optimal compressed trees within graphs is NP-hard. We recover approximate solutions to this problem using LP relaxation and maximum spanning tree algorithms, yielding techniques that can be combined with the efficient bigrambased inference approach using Lagrange multipliers. Experiments show that these approximation strategies produce results comparable to a state-of-the-art integer linear programming formulation for the same joint inference task along with a significant improvement in runtime. 1 Introduction Sentence compression is a text-to-text generation task in which an input sentence must be transformed into a shorter output sentence which accurately reflects the meaning in the input and also remains grammatically well-formed. The compression task has received increasing attention in recent years, in part due to the availability of datasets such as the Ziff-Davis corpus (Knight and Marcu, 2000) and the Edinburgh compression corpora (Clarke and Lapata, 2006), from which the following example is drawn. Original: In 1967 Chapman, who had cultivated a conventional image with his ubiquitous tweed jacket and pipe, by his own later admission stunned a party attended by his friends and future Python colleagues by coming out as a homosexual. Compressed: In 1967 Chapman, who had cultivated a conventional image, stunned a party by coming out as a homosexual. Following an assumption often used in compression systems, the compressed output in this corpus is constructed by dropping tokens from the input sentence without any paraphrasing or reordering.1 A number of diverse approaches have been proposed for deletion-based sentence compression, including techniques that assemble the output text under an n-gram factorization over the input text (McDonald, 2006; Clarke and Lapata, 2008) or an arc factorization over input dependency parses (Filippova and Strube, 2008; Galanis and Androutsopoulos, 2010; Filippova and Altun, 2013). Joint methods have also been proposed that invoke integer linear programming (ILP) formulations to simultaneously consider multiple structural inference problems—both over n-grams and input dependencies (Martins and Smith, 2009) or n-grams and all possible dependencies (Thadani and McKeown, 2013). However, it is wellestablished that the utility of ILP for optimal inference in structured problems is often outweighed by the worst-case performance of ILP solvers on large problems without unique integral solutions. 
Furthermore, approximate solutions can often be adequate for real-world generation systems, particularly in the presence of linguisticallymotivated constraints such as those described by Clarke and Lapata (2008), or domain-specific 1This is referred to as extractive compression by Cohn and Lapata (2008) & Galanis and Androutsopoulos (2010) following the terminology used in document summarization. 1241 pruning strategies such as the use of sentence templates to constrain the output. In this work, we develop approximate inference strategies to the joint approach of Thadani and McKeown (2013) which trade the optimality guarantees of exact ILP for faster inference by separately solving the n-gram and dependency subproblems and using Lagrange multipliers to enforce consistency between their solutions. However, while the former problem can be solved efficiently using the dynamic programming approach of McDonald (2006), there are no efficient algorithms to recover maximum weighted nonprojective subtrees in a general directed graph. Maximum spanning tree algorithms, commonly used in non-projective dependency parsing (McDonald et al., 2005), are not easily adaptable to this task since the maximum-weight subtree is not necessarily a part of the maximum spanning tree. We therefore consider methods to recover approximate solutions for the subproblem of finding the maximum weighted subtree in a graph, common among which is the use of a linear programming relaxation. This linear program (LP) appears empirically tight for compression problems and our experiments indicate that simply using the non-integral solutions of this LP in Lagrangian relaxation can empirically lead to reasonable compressions. In addition, we can recover approximate solutions to this problem by using the ChuLiu Edmonds algorithm for recovering maximum spanning trees (Chu and Liu, 1965; Edmonds, 1967) over the relatively sparse subgraph defined by a solution to the relaxed LP. Our proposed approximation strategies are evaluated using automated metrics in order to address the question: under what conditions should a real-world sentence compression system implementation consider exact inference with an ILP or approximate inference? The contributions of this work include: • An empirically-useful technique for approximating the maximum-weight subtree in a weighted graph using LP-relaxed inference. • Multiple approaches to generate good approximate solutions for joint multi-structure compression, based on Lagrangian relaxation to enforce equality between the sequential and syntactic inference subproblems. • An analysis of the tradeoffs incurred by joint approaches with regard to runtime as well as performance under automated measures. 2 Multi-Structure Sentence Compression Even though compression is typically formulated as a token deletion task, it is evident that dropping tokens independently from an input sentence will likely not result in fluent and meaningful compressive text. Tokens in well-formed sentences participate in a number of syntactic and semantic relationships with other tokens, so one might expect that accounting for heterogenous structural relationships between tokens will improve the coherence of the output sentence. 
Furthermore, much recent work has focused on the challenge of joint sentence extraction and compression, also known as compressive summarization (Martins and Smith, 2009; Berg-Kirkpatrick et al., 2011; Almeida and Martins, 2013; Li et al., 2013; Qian and Liu, 2013), in which questions of efficiency are paramount due to the larger problems involved; however, these approaches largely restrict compression to pruning parse trees, thereby imposing a dependency on parser performance. We focus in this work on a sentence-level compression system to approximate the ILP-based inference of Thadani and McKeown (2013) which does not restrict compressions to follow input parses but permits the generation of novel dependency relations in output compressions. The rest of this section is organized as follows: §2.1 provies an overview of the joint sequential and syntactic objective for compression from Thadani and McKeown (2013) while §2.2 discusses the use of Lagrange multipliers to enforce consistency between the different structures considered. Following this, §2.3 discusses a dynamic program to find maximum weight bigram subsequences from the input sentence, while §2.4 covers LP relaxation-based approaches for approximating solutions to the problem of finding a maximum-weight subtree in a graph of potential output dependencies. Finally, §2.5 discusses the features and model training approach used in our experimental results which are presented in §3. 2.1 Joint objective We begin with some notation. For an input sentence S comprised of n tokens including duplicates, we denote the set of tokens in S by T ≜ {ti : 1 ≤i ≤n}. Let C represent a compression of S and let xi ∈{0, 1} denote an indicator variable whose value corresponds to whether token ti ∈T is present in the compressed sentence 1242 C. In addition, we define bigram indicator variables yij ∈{0, 1} to represent whether a particular order-preserving bigram2 ⟨ti, tj⟩from S is present as a contiguous bigram in C as well as dependency indicator variables zij ∈{0, 1} corresponding to whether the dependency arc ti →tj is present in the dependency parse of C. The score for a given compression C can now be defined to factor over its tokens, n-grams and dependencies as follows. score(C) = X ti∈T xi · θtok(ti) + X ti∈T∪{START}, tj∈T∪{END} yij · θbgr(⟨ti, tj⟩) + X ti∈T∪{ROOT}, tj∈T zij · θdep(ti →tj) (1) where θtok, θbgr and θdep are feature-based scoring functions for tokens, bigrams and dependencies respectively. Specifically, each θv(·) ≡w⊤ v φv(·) where φv(·) is a feature map for a given variable type v ∈{tok, bgr, dep} and wv is the corresponding vector of learned parameters. The inference task involves recovering the highest scoring compression C∗under a particular set of model parameters w. C∗= arg max C score(C) = arg max x,y,z x⊤θtok + y⊤θbgr + z⊤θdep (2) where the incidence vector x ≜⟨xi⟩ti∈T represents an entire token configuration over T, with y and z defined analogously to represent configurations of bigrams and dependencies. θv ≜⟨θv(·)⟩ denotes a corresponding vector of scores for each variable type v under the current model parameters. In order to recover meaningful compressions by optimizing (2), the inference step must ensure: 1. The configurations x, y and z are consistent with each other, i.e., all configurations cover the same tokens. 2. The structural configurations y and z are non-degenerate, i.e, the bigram configuration y represents an acyclic path while the dependency configuration z forms a tree. 
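As a concrete illustration of the factored objective in equations (1) and (2), the following minimal sketch scores a compression given precomputed score vectors over the candidate tokens, bigrams and dependency arcs. Variable names and toy values are illustrative only; this is not the authors' implementation.

```python
def compression_score(x, y, z, theta_tok, theta_bgr, theta_dep):
    """Joint objective of equation (2): x.theta_tok + y.theta_bgr + z.theta_dep.

    x, y, z are 0/1 incidence vectors over candidate tokens, order-preserving
    bigrams and dependency arcs; each theta_* holds the corresponding
    feature-based scores (theta_v(.) = w_v . phi_v(.)) under the current model.
    """
    def dot(a, b):
        return sum(ai * bi for ai, bi in zip(a, b))
    return dot(x, theta_tok) + dot(y, theta_bgr) + dot(z, theta_dep)

# Toy example: 3 tokens, 2 candidate bigrams, 2 candidate dependency arcs.
print(compression_score(x=[1, 0, 1], y=[1, 0], z=[0, 1],
                        theta_tok=[0.5, -1.0, 0.2],
                        theta_bgr=[1.3, 0.1],
                        theta_dep=[-0.4, 0.9]))
```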
2Although Thadani and McKeown (2013) is not restricted to bigrams or order-preserving n-grams, we limit our discussion to this scenario as it also fits the assumptions of McDonald (2006) and the datasets of Clarke and Lapata (2006). These requirements naturally rule out simple approximate inference formulations such as searchbased approaches for the joint objective.3 An ILP-based inference solution is demonstrated in Thadani and McKeown (2013) that makes use of linear constraints over the boolean variables xi, yij and zij to guarantee consistency, as well as auxiliary real-valued variables and constraints representing the flow of commodities (Magnanti and Wolsey, 1994) in order to establish structure in y and z. In the following section, we propose an alternative formulation that exploits the modularity of this joint objective. 2.2 Lagrangian relaxation Dual decomposition (Komodakis et al., 2007) and Lagrangian relaxation in general are often used for solving joint inference problems which are decomposable into individual subproblems linked by equality constraints (Koo et al., 2010; Rush et al., 2010; Rush and Collins, 2011; DeNero and Macherey, 2011; Martins et al., 2011; Das et al., 2012; Almeida and Martins, 2013). This approach permits sub-problems to be solved separately using problem-specific efficient algorithms, while consistency over the structures produced is enforced through Lagrange multipliers via iterative optimization. Exact solutions are guaranteed when the algorithm converges on a consistent primal solution, although this convergence itself is not guaranteed and depends on the tightness of the underlying LP relaxation. The primary advantage of this technique is the ability to leverage the underlying structure of the problems in inference rather than relying on a generic ILP formulation while still often producing exact solutions. The multi-structure inference problem described in the previous section seems in many ways to be a natural fit to such an approach since output scores factor over different types of structure that comprise the output compression. Even if ILP-based approaches perform reasonably at the scale of single-sentence compression problems, the exponential worst-case complexity of generalpurpose ILPs will inevitably pose challenges when scaling up to (a) handle larger inputs, (b) use higher-order structural fragments, or (c) incorporate additional models. 3This work follows Thadani and McKeown (2013) in recovering non-projective trees for inference. However, recovering projective trees is tractable when a total ordering of output tokens is assumed. This will be addressed in future work. 1243 Consider once more the optimization problem characterized by (2) The two structural problems that need to be solved in this formulation are the extraction of a maximum-weight acyclic subsequence of bigrams y from the lattice of all order-preserving bigrams from S and the recovery of a maximum-weight directed subtree z. Let α(y) ∈{0, 1}n denote the incidence vector of tokens contained in the n-gram sequence y and β(z) ∈{0, 1}n denote the incidence vector of words contained in the dependency tree z. We can now rewrite the objective in (2) while enforcing the constraint that the words contained in the sequence y are the same as the words contained in the tree z, i.e., α(y) = β(z), by introducing a vector of Lagrange multipliers λ ∈Rn. 
In addition, the token configuration x can be rewritten in the form of a weighted combination of α(y) and β(z) to ensure its consistency with y and z. This results in the following Lagrangian: L(λ, y, z) = y⊤θbgr + z⊤θdep + θ⊤ tok (ψ · α(y) + (1 −ψ) · β(z)) + λ⊤(α(y) −β(z)) (3) Finding the y and z that maximize this Lagrangian above yields a dual objective, and the dual problem corresponding to the primal objective specified in (2) is therefore the minimization of this objective over the Lagrange multipliers λ. min λ max y,z L(λ, y, z) = min λ max y y⊤θbgr + (λ + ψ · θtok)⊤α(y) + max z z⊤θdep −(λ + (ψ −1) · θtok)⊤β(z) = min λ max y f(y, λ, ψ, θ) + max z g(z, λ, ψ, θ) (4) This can now be solved with the iterative subgradient algorithm illustrated in Algorithm 1. In each iteration i, the algorithm solves for y(i) and z(i) under λ(i), then generates λ(i+1) to penalize inconsistencies between α(y(i)) and β(z(i)). When α(y(i)) = β(z(i)), the resulting primal solution is exact, i.e., y(i) and z(i) represent the optimal structures under (2). Otherwise, if the algorithm starts oscillating between a few primal solutions, the underlying LP must have a non-integral solution in which case approximation heuristics can be emAlgorithm 1 Subgradient-based joint inference Input: scores θ, ratio ψ, repetition limit lmax, iteration limit imax, learning rate schedule η Output: token configuration x 1: λ(0) ←⟨0⟩n 2: M ←∅, Mrepeats ←∅ 3: for iteration i < imax do 4: ˆy ←arg maxy f(y, λ, ψ, θ) 5: ˆz ←arg maxz g(z, λ, ψ, θ) 6: if α(ˆy) = β(ˆz) then return α(ˆy) 7: if α(ˆy) ∈M then 8: Mrepeats ←Mrepeats ∪{α(ˆy)} 9: if β(ˆz) ∈M then 10: Mrepeats ←Mrepeats ∪{β(ˆz)} 11: if |Mrepeats| ≥lmax then break 12: M ←M ∪{α(ˆy), β(ˆz)} 13: λ(i+1) ←λ(i) −ηi (α(ˆy) −β(ˆz)) return arg maxx∈Mrepeats score(x) ployed.4 The application of this Lagrangian relaxation strategy is contingent upon the existence of algorithms to solve the maximization subproblems for f(y, λ, ψ, θ) and g(z, λ, ψ, θ). The following sections discuss our approach to these problems. 2.3 Bigram subsequences McDonald (2006) provides a Viterbi-like dynamic programming algorithm to recover the highestscoring sequence of order-preserving bigrams from a lattice, either in unconstrained form or with a specific length constraint. The latter requires a dynamic programming table Q[i][r] which represents the best score for a compression of length r ending at token i. The table can be populated using the following recurrence: Q[i][1] = score(S, START, i) Q[i][r] = max j<i Q[j][r −1] + score(S, i, j) Q[i][R + 1] = Q[i][R] + score(S, i, END) where R is the required number of output tokens and the scoring function is defined as score(S, i, j) ≜θbgr(⟨ti, tj⟩) + λj + ψ · θtok(tj) so as to solve f(y, λ, ψ, θ) from (4). This approach requires O(n2R) time in order to identify 4Heuristic approaches (Komodakis et al., 2007; Rush et al., 2010), tightening (Rush and Collins, 2011) or branch and bound (Das et al., 2012) can still be used to retrieve optimal solutions, but we did not explore these strategies here. 1244 A B C D -20 3 10 2 1 Figure 1: An example of the difficulty of recovering the maximum-weight subtree (B→C, B→D) from the maximum spanning tree (A→C, C→B, B→D). the highest scoring sequence y and corresponding token configuration α(y). 2.4 Dependency subtrees The maximum-weight non-projective subtree problem over general graphs is not as easily solved. 
Although the maximum spanning tree for a given token configuration can be recovered efficiently, Figure 1 illustrates that the maximumscoring subtree is not necessarily found within it. The problem of recovering a maximum-weight subtree in a graph has been shown to be NP-hard even with uniform edge weights (Lau et al., 2006). In order to produce a solution to this subproblem, we use an LP relaxation of the relevant portion of the ILP from Thadani and McKeown (2013) by omitting integer constraints over the token and dependency variables in x and z respectively. For simplicity, however, we describe the ILP version rather than the relaxed LP in order to motivate the constraints with their intended purpose rather than their effect in the relaxed problem. The objective for this LP is given by max x,z x⊤θ′tok + z⊤θdep (5) where the vector of token scores is redefined as θ′tok ≜(1 −ψ) · θtok −λ (6) in order to solve g(z, λ, ψ, θ) from (4). Linear constraints are introduced to produce dependency structures that are close to the optimal dependency trees. First, tokens in the solution must only be active if they have a single active incoming dependency edge. In addition, to avoid producing multiple disconnected subtrees, only one dependency is permitted to attach to the ROOT pseudo-token. xj − X i zij = 0, ∀tj ∈T (7) X j zij = 1, if ti = ROOT (8) ROOT Production was closed down at Ford last night . 5 γ3,1 = 1 2 1 γ3,9 = 1 Figure 2: An illustration of commodity values for a valid solution of the non-relaxed ILP. In order to avoid cycles in the dependency tree, we include additional variables to establish singlecommodity flow (Magnanti and Wolsey, 1994) between all pairs of tokens. These γij variables carry non-negative real values which must be consumed by active tokens that they are incident to. γij ≥0, ∀ti, tj ∈T (9) X i γij − X k γjk = xj, ∀tj ∈T (10) These constraints ensure that cyclic structures are not possible in the non-relaxed ILP. In addition, they serve to establish connectivity for the dependency structure z since commodity can only originate in one location—at the pseudo-token ROOT which has no incoming commodity variables. However, in order to enforce these properties on the output dependency structure, this acyclic, connected commodity structure must constrain the activation of the z variables. γij −Cmaxzij ≤0, ∀ti, tj ∈T (11) where Cmax is an arbitrary upper bound on the value of γij variables. Figure 2 illustrates how these commodity flow variables constrain the output of the ILP to be a tree. However, the effect of these constraints is diminished when solving an LP relaxation of the above problem. In the LP relaxation, xi and zij are redefined as real-valued variables in [0, 1], potentially resulting in fractional values for dependency and token indicators. As a result, the commodity flow network is able to establish connectivity but cannot enforce a tree structure, for instance, directed acyclic structures are possible and token indicators xi may be partially be assigned to the solution structure. This poses a challenge in implementing β(z) which is needed to recover a token configuration from the solution of this subproblem. We propose two alternative solutions to address this issue in the context of the joint inference strategy. The first is to simply use the relaxed token configuration identified by the LP in Algorithm 1, 1245 i.e., to set β(˜z) = ˜x where ˜x and ˜z represent the real-valued counterparts of the incidence vectors x and z. 
The viability of this approximation strategy is due to the following: • The relaxed LP is empirically fairly tight, yielding integral solutions 89% of the time on the compression datasets described in §3. • The bigram subproblem is guaranteed to return a well-formed integral solution which obeys the imposed compression rate, so we are assured of a source of valid—if nonoptimal—solutions in line 13 of Algorithm 1. We also consider another strategy that attempts to approximate a valid integral solution to the dependency subproblem. In order to do this, we first include an additional constraint in the relaxed LP which restrict the number of tokens in the output to a specific number of tokens R that is given by an input compression rate. X i xi = R (12) The addition of this constraint to the relaxed LP reduces the rate of integral solutions drastically— from 89% to approximately 33%—but it serves to ensure that the resulting token configuration ˜x has at least as many non-zero elements as R, i.e., there are at least as many tokens activated in the LP solution as are required in a valid solution. We then construct a subgraph G(˜z) consisting of all dependency edges that were assigned nonzero values in the solution, assigning to each edge a score equal to the score of that edge in the LP as well as the score of its dependent word, i.e., each zij in G(˜z) is assigned a score of θdep(⟨ti, tj⟩) − λj + (1 −ψ) · θtok(tj). Since the commodity flow constraints in (9)–(11) ensure a connected ˜z, it is therefore possible to recover a maximum-weight spanning tree from G(˜z) using the Chu-Liu Edmonds algorithm (Chu and Liu, 1965; Edmonds, 1967).5 Although the runtime of this algorithm is cubic in the size of the input graph, it is fairly speedy when applied on relatively sparse graphs such as the solutions to the LP described above. The resulting spanning tree is a useful integral approximation of ˜z but, as mentioned previously, may contain more nodes than R due to fractional values in ˜x; we therefore repeatedly prune leaves 5A detailed description of the Chu-Liu Edmonds algorithm for MSTs is available in McDonald et al. (2005). with the lowest incoming edge weight in the current tree until exactly R nodes remain. The resulting tree is assumed to be a reasonable approximation of the optimal integral solution to this LP. The Chu-Liu Edmonds algorithm is also employed for another purpose: when the underlying LP for the joint inference problem is not tight—a frequent occurrence in our compression experiments—Algorithm 1 will not converge on a single primal solution and will instead oscillate between solutions that are close to the dual optimum. We identify this phenomenon by counting repeated solutions and, if they exceed some threshold lmax with at least one repeated solution from either subproblem, we terminate the update procedure for Lagrange multipliers and instead attempt to identify a good solution from the repeating ones by scoring them under (2). It is straightforward to recover and score a bigram configuration y from a token configuration β(z). However, scoring solutions produced by the dynamic program from §2.3 also requires the score over a corresponding parse tree; this can be recovered by constructing a dependency subgraph containing across only the tokens that are active in α(y) and retrieving the maximum spanning tree for that subgraph using the Chu-Liu Edmonds algorithm. 
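A minimal sketch of this LP→MST recovery step is given below. It assumes NetworkX for the Chu-Liu Edmonds maximum spanning arborescence and an edge_score function implementing the rescoring described above (θdep(⟨ti, tj⟩) − λj + (1 − ψ)·θtok(tj)); neither is part of the authors' code, and the tolerance used to define the LP support is arbitrary.

import networkx as nx

def lp_to_mst(z_frac, edge_score, R, root=0):
    """Recover an integral tree from the fractional LP solution: keep the
    support of z~, rescore its edges, extract a maximum spanning arborescence
    and prune low-scoring leaves until exactly R tokens remain."""
    G = nx.DiGraph()
    for (i, j), value in z_frac.items():
        if value > 1e-6:                        # edges with non-zero LP value
            G.add_edge(i, j, weight=edge_score(i, j))

    # Chu-Liu Edmonds over the (connected) LP support.
    tree = nx.maximum_spanning_arborescence(G, attr="weight")

    def incoming_weight(n):
        head = next(iter(tree.predecessors(n)))
        return tree[head][n]["weight"]

    # The fractional solution may activate more than R tokens; repeatedly
    # drop the leaf with the lowest incoming edge weight.
    while tree.number_of_nodes() - 1 > R:       # exclude the ROOT pseudo-token
        leaves = [n for n in tree.nodes
                  if tree.out_degree(n) == 0 and n != root]
        tree.remove_node(min(leaves, key=incoming_weight))
    return tree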
2.5 Learning and Features

The features used in this work are largely based on the features from Thadani and McKeown (2013).

• φtok contains features for part-of-speech (POS) tag sequences of length up to 3 around the token, features for the dependency label of the token conjoined with its POS, lexical features for verb stems and non-word symbols and morphological features that identify capitalized sequences, negations and words in parentheses.
• φbgr contains features for POS patterns in a bigram, the labels of dependency edges incident to it, its likelihood under a Gigaword language model (LM) and an indicator for whether it is present in the input sentence.
• φdep contains features for the probability of a dependency edge under a smoothed dependency grammar constructed from the Penn Treebank and various conjunctions of the following features: (a) whether the edge appears as a dependency or ancestral relation in the input parse, (b) the directionality of the dependency, (c) the label of the edge, (d) the POS tags of the tokens incident to the edge and (e) the labels of their surrounding chunks and whether the edge remains within the chunk.

For the experiments in the following section, we trained models using a variant of the structured perceptron (Collins, 2002) which incorporates minibatches (Zhao and Huang, 2013) for easy parallelization and faster convergence.6 Overfitting was avoided by averaging parameters and monitoring performance against a held-out development set during training. All models were trained using variants of the ILP-based inference approach of Thadani and McKeown (2013). We followed Martins et al. (2009) in using LP-relaxed inference during learning, assuming algorithmic separability (Kulesza and Pereira, 2007) for these problems.

6 We used a minibatch size of 4 in all experiments.

3 Experiments

We ran compression experiments over the newswire (NW) and broadcast news transcription (BN) corpora compiled by Clarke and Lapata (2008), which contain gold compressions produced by human annotators using only word deletion. The datasets were filtered to eliminate instances with fewer than 2 or more than 110 tokens for parser compatibility and divided into training/development/test sections following the splits from Clarke and Lapata (2008), yielding 953/63/603 instances for the NW corpus and 880/78/404 for the BN corpus. Gold dependency parses were approximated by running the Stanford dependency parser7 over reference compressions. Following evaluations in machine translation as well as previous work in sentence compression (Unno et al., 2006; Clarke and Lapata, 2008; Martins and Smith, 2009; Napoles et al., 2011b; Thadani and McKeown, 2013), we evaluate system performance using F1 metrics over n-grams and dependency edges produced by parsing system output with RASP (Briscoe et al., 2006) and the Stanford parser. All ILPs and LPs were solved using Gurobi,8 a high-performance commercial-grade solver. Following a recent analysis of compression evaluations (Napoles et al., 2011b), which revealed a strong correlation between system compression rate and human judgments of compression quality, we constrained all systems to produce compressed output at a specific rate—determined by the gold compressions available for each instance—to ensure that the reported differences between the systems under study are meaningful.

7 http://nlp.stanford.edu/software/
8 http://www.gurobi.com
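As a rough illustration of the learning setup described in §2.5 above, the sketch below shows a minibatched, averaged structured perceptron. The inference and features arguments are stand-ins for the LP-relaxed joint inference routine and the feature map φ, and the exact update scheme of Zhao and Huang (2013) may differ in details from this simplification.

import numpy as np

def minibatch_structured_perceptron(data, features, inference, n_feats,
                                    n_epochs=10, batch_size=4):
    """data: list of (sentence, gold_structure) pairs.
    features(sent, structure) -> numpy feature vector of dimension n_feats.
    inference(sent, w) -> highest-scoring structure under weights w."""
    w = np.zeros(n_feats)
    w_sum = np.zeros(n_feats)   # running sum for parameter averaging
    n_updates = 0

    for _ in range(n_epochs):
        for start in range(0, len(data), batch_size):
            batch = data[start:start + batch_size]
            # Decode the whole minibatch with the current weights; each call
            # is independent, so this step is trivially parallelizable.
            update = np.zeros(n_feats)
            for sent, gold in batch:
                pred = inference(sent, w)
                # If the prediction matches the gold structure, the feature
                # difference is the zero vector and contributes nothing.
                update += features(sent, gold) - features(sent, pred)
            w += update / len(batch)
            w_sum += w
            n_updates += 1

    return w_sum / max(n_updates, 1)   # averaged parameters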
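The n-gram F1 figures used in this evaluation can be computed with a short routine such as the following sketch; averaging over multiple reference compressions is an assumption of the illustration rather than a claim about the exact evaluation script.

from collections import Counter

def ngrams(tokens, n):
    """Multiset of order-n n-grams in a token sequence."""
    return Counter(tuple(tokens[i:i + n]) for i in range(len(tokens) - n + 1))

def ngram_f1(system_tokens, reference_tokens_list, n=2):
    """F1 over order-n n-grams between a system compression and one or more
    gold compressions, averaged over the references."""
    scores = []
    for ref in reference_tokens_list:
        sys_ng, ref_ng = ngrams(system_tokens, n), ngrams(ref, n)
        if not sys_ng or not ref_ng:
            scores.append(0.0)
            continue
        overlap = sum((sys_ng & ref_ng).values())   # clipped n-gram matches
        p = overlap / sum(sys_ng.values())
        r = overlap / sum(ref_ng.values())
        scores.append(0.0 if p + r == 0 else 2 * p * r / (p + r))
    return sum(scores) / len(scores)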
3.1 Systems We report results over the following systems grouped into three categories of models: tokens + n-grams, tokens + dependencies, and joint models. • 3-LM: A reimplementation of the unsupervised ILP of Clarke and Lapata (2008) which infers order-preserving trigram variables parameterized with log-likelihood under an LM and a significance score for token variables inspired by Hori and Furui (2004), as well as various linguistically-motivated constraints to encourage fluency in output compressions. • DP: The bigram-based dynamic program of McDonald (2006) described in §2.3.9 • LP→MST: An approximate inference approach based on an LP relaxation of ILPDep. As discussed in §2.4, a maximum spanning tree is recovered from the output of the LP and greedily pruned in order to generate a valid integral solution while observing the imposed compression rate. • ILP-Dep: A version of the joint ILP of Thadani and McKeown (2013) without ngram variables and corresponding features. • DP+LP→MST: An approximate joint inference approach based on Lagrangian relaxation that uses DP for the maximum weight subsequence problem and LP→MST for the maximum weight subtree problem. • DP+LP: Another Lagrangian relaxation approach that pairs DP with the non-integral solutions from an LP relaxation of the maximum weight subtree problem (cf. §2.4). • ILP-Joint: The full ILP from Thadani and McKeown (2013), which provides an upper bound on the performance of the proposed approximation strategies. The learning rate schedule for the Lagrangian relaxation approaches was set as ηi ≜τ/(τ + i),10 while the hyperparameter ψ was tuned using the 9For consistent comparisons with the other systems, our reimplementation does not include the k-best inference strategy presented in McDonald (2006) for learning with MIRA. 10τ was set to 100 for aggressive subgradient updates. 1247 Inference n-grams F1% Syntactic relations F1% Inference objective technique n = 1 2 3 4 z Stanford RASP time (s) n-grams 3-LM (CL08) 74.96 60.60 46.83 38.71 60.52 57.49 0.72 DP (McD06) 78.80 66.04 52.67 42.39 63.28 57.89 0.01 deps LP→MST 79.61 64.32 50.36 40.97 66.57 66.82 59.70 0.07 ILP-Dep 80.02 65.99 52.42 43.07 72.43 67.63 60.78 0.16 DP + LP→MST 79.50 66.75 53.48 44.33 64.63 67.69 60.94 0.24 joint DP + LP 79.10 68.22 55.05 45.81 65.74 68.24 62.04 0.12 ILP-Joint (TM13) 80.13 68.34 55.56 46.60 72.57 68.89 62.61 0.31 Table 1: Experimental results for the BN corpus, averaged over 3 gold compressions per instance. All systems were restricted to compress to the size of the median gold compression yielding an average compression rate of 77.26%. Inference n-grams F1% Syntactic relations F1% Inference objective technique n = 1 2 3 4 z Stanford RASP time (s) n-grams 3-LM (CL08) 66.66 51.59 39.33 30.55 50.76 49.57 1.22 DP (McD06) 73.18 58.31 45.07 34.77 56.23 51.14 0.01 deps LP→MST 73.32 55.12 41.18 31.44 61.01 58.37 52.57 0.12 ILP-Dep 73.76 57.09 43.47 33.44 65.45 60.06 54.31 0.28 DP + LP→MST 73.13 57.03 43.79 34.01 57.91 58.46 53.20 0.33 joint DP + LP 72.06 59.83 47.39 37.72 58.13 58.97 53.78 0.21 ILP-Joint (TM13) 74.00 59.90 47.22 37.01 65.65 61.29 56.24 0.60 Table 2: Experimental results for the NW corpus with all systems compressing to the size of the gold compression, yielding an average compression rate of 70.24%. In both tables, bold entries show significant gains within a column under the paired t-test (p < 0.05) and Wilcoxon’s signed rank test (p < 0.01). 
development split of each corpus.11

11 We were surprised to observe that performance improved significantly when ψ was set closer to 1, thereby emphasizing token features in the dependency subproblem. The final values chosen were ψBN = 0.9 and ψNW = 0.8.

3.2 Results

Tables 1 and 2 summarize the results from our compression experiments on the BN and NW corpora respectively. Starting with the n-gram approaches, the performance of 3-LM leads us to observe that the gains of supervised learning far outweigh the utility of higher-order n-gram factorization, which is also responsible for a significant increase in wall-clock time. In contrast, DP is an order of magnitude faster than all other approaches studied here, although it is not competitive under parse-based measures such as RASP F1%, which is known to correlate with human judgments of grammaticality (Clarke and Lapata, 2006).

We were surprised by the strong performance of the dependency-based inference techniques, which yielded results that approached the joint model in both n-gram and parse-based measures. The exact ILP-Dep approach halves the runtime of ILP-Joint to produce compressions that have similar (although statistically distinguishable) scores. Approximating dependency-based inference with LP→MST provides a further halving of runtime; however, the performance of this approach is notably worse.

Turning to the joint approaches, the strong performance of ILP-Joint is expected; less so is the relatively high yet practically reasonable runtime that it requires. We note, however, that these ILPs are solved using a highly-optimized commercial-grade solver that can utilize all CPU cores,12 while our approximation approaches are implemented as single-processed Python code without significant effort toward optimization. Comparing the two approximation strategies shows a clear performance advantage for DP+LP over DP+LP→MST: the latter approach entails slower inference due to the overhead of running the Chu-Liu Edmonds algorithm at every dual update, and furthermore, the error introduced by approximating an integral solution results in a significant decrease in dependency recall. In contrast, DP+LP directly optimizes the dual problem by using the relaxed dependency solution to update the Lagrange multipliers and achieves the best performance on parse-based F1 outside of the slower ILP approaches. Convergence rates also vary for these two techniques: DP+LP has a lower rate of empirical convergence (15% on BN and 4% on NW) when compared to DP+LP→MST (19% on BN and 6% on NW).

12 16 cores in our experimental environment.

Figure 3 shows the effect of input sentence length on inference time and performance for ILP-Joint and DP+LP over the NW test corpus.13 The timing results reveal that the approximation strategy is consistently faster than the ILP solver. The variation in RASP F1% with input size indicates the viability of a hybrid approach which could balance accuracy and speed by using ILP-Joint for smaller problems and DP+LP for larger ones.

4 Related Work

Sentence compression is one of the better-studied text-to-text generation problems and has been observed to play a significant role in human summarization (Jing, 2000; Jing and McKeown, 2000).
Most approaches to sentence compression are supervised (Knight and Marcu, 2002; Riezler et al., 2003; Turner and Charniak, 2005; McDonald, 2006; Unno et al., 2006; Galley and McKeown, 2007; Nomoto, 2007; Cohn and Lapata, 2009; Galanis and Androutsopoulos, 2010; Ganitkevitch et al., 2011; Napoles et al., 2011a; Filippova and Altun, 2013) following the release of datasets such as the Ziff-Davis corpus (Knight and Marcu, 2000) and the Edinburgh compression corpora (Clarke and Lapata, 2006; Clarke and Lapata, 2008), although unsupervised approaches— largely based on ILPs—have also received consideration (Clarke and Lapata, 2007; Clarke and Lapata, 2008; Filippova and Strube, 2008). Compression has also been used as a tool for document summarization (Daum´e and Marcu, 2002; Zajic et al., 2007; Clarke and Lapata, 2007; Martins and Smith, 2009; Berg-Kirkpatrick et al., 2011; Woodsend and Lapata, 2012; Almeida and Martins, 2013; Molina et al., 2013; Li et al., 2013; Qian and Liu, 2013), with recent work formulating the summarization task as joint sentence extraction and compression and often employing ILP or Lagrangian relaxation. Monolingual compression 13Similar results were observed for the BN test corpus. Figure 3: Effect of input size on (a) inference time, and (b) the corresponding difference in RASP F1% (ILP-Joint – DP+LP) on the NW corpus. also faces many obstacles common to decoding in machine translation, and a number of approaches which have been proposed to combine phrasal and syntactic models (Huang and Chiang, 2007; Rush and Collins, 2011) inter alia offer directions for future research into compression problems. 5 Conclusion We have presented approximate inference strategies to jointly compress sentences under bigram and dependency-factored objectives by exploiting the modularity of the task and considering the two subproblems in isolation. Experiments show that one of these approximation strategies produces results comparable to a state-of-the-art integer linear program for the same joint inference task with a 60% reduction in average inference time. Acknowledgments The author is grateful to Alexander Rush for helpful discussions and to the anonymous reviewers for their comments. This work was supported by the Intelligence Advanced Research Projects Activity (IARPA) via Department of Interior National Business Center (DoI/NBC) contract number D11PC20153. The U.S. Government is authorized to reproduce and distribute reprints for Governmental purposes notwithstanding any copyright annotation thereon.14 14The views and conclusions contained herein are those of the authors and should not be interpreted as necessarily representing the official policies or endorsements, either expressed or implied, of IARPA, DoI/NBC, or the U.S. Government. 1249 References Miguel Almeida and Andr´e F. T. Martins. 2013. Fast and robust compressive summarization with dual decomposition and multi-task learning. In Proceedings of ACL, pages 196–206, August. Taylor Berg-Kirkpatrick, Dan Gillick, and Dan Klein. 2011. Jointly learning to extract and compress. In Proceedings of ACL-HLT, pages 481–490. Ted Briscoe, John Carroll, and Rebecca Watson. 2006. The second release of the RASP system. In Proceedings of the ACL-COLING Interactive Presentation Sessions. Yoeng-jin Chu and Tseng-hong Liu. 1965. On the shortest arborescence of a directed graph. Science Sinica, 14:1396–1400. James Clarke and Mirella Lapata. 2006. 
Models for sentence compression: a comparison across domains, training requirements and evaluation measures. In Proceedings of ACL-COLING, pages 377– 384. James Clarke and Mirella Lapata. 2007. Modelling compression with discourse constraints. In Proceedings of EMNLP-CoNLL, pages 1–11. James Clarke and Mirella Lapata. 2008. Global inference for sentence compression: an integer linear programming approach. Journal for Artificial Intelligence Research, 31:399–429, March. Trevor Cohn and Mirella Lapata. 2008. Sentence compression beyond word deletion. In Proceedings of COLING, pages 137–144. Trevor Cohn and Mirella Lapata. 2009. Sentence compression as tree transduction. Journal of Artificial Intelligence Research, 34(1):637–674, April. Michael Collins. 2002. Discriminative training methods for hidden Markov models. In Proceedings of EMNLP, pages 1–8. Dipanjan Das, Andr´e F. T. Martins, and Noah A. Smith. 2012. An exact dual decomposition algorithm for shallow semantic parsing with constraints. In Proceedings of the First Joint Conference on Lexical and Computational Semantics (*SEM), SemEval ’12, pages 209–217. Hal Daum´e, III and Daniel Marcu. 2002. A noisychannel model for document compression. In Proceedings of ACL, pages 449–456. John DeNero and Klaus Macherey. 2011. Modelbased aligner combination using dual decomposition. In Proceedings of ACL-HLT, pages 420–429. Jack R. Edmonds. 1967. Optimum branchings. Journal of Research of the National Bureau of Standards, 71B:233–240. Katja Filippova and Yasemin Altun. 2013. Overcoming the lack of parallel data in sentence compression. In Proceedings of EMNLP, pages 1481–1491. Katja Filippova and Michael Strube. 2008. Dependency tree based sentence compression. In Proceedings of INLG, pages 25–32. Dimitrios Galanis and Ion Androutsopoulos. 2010. An extractive supervised two-stage method for sentence compression. In Proceedings of HLT-NAACL, pages 885–893. Michel Galley and Kathleen McKeown. 2007. Lexicalized Markov grammars for sentence compression. In Proceedings of HLT-NAACL, pages 180– 187, April. Juri Ganitkevitch, Chris Callison-Burch, Courtney Napoles, and Benjamin Van Durme. 2011. Learning sentential paraphrases from bilingual parallel corpora for text-to-text generation. In Proceedings of EMNLP, pages 1168–1179. Chiori Hori and Sadaoki Furui. 2004. Speech summarization: an approach through word extraction and a method for evaluation. IEICE Transactions on Information and Systems, E87-D(1):15–25. Liang Huang and David Chiang. 2007. Forest rescoring: Faster decoding with integrated language models. In Proceedings of ACL, pages 144–151, June. Hongyan Jing and Kathleen R. McKeown. 2000. Cut and paste based text summarization. In Proceedings of NAACL, pages 178–185. Hongyan Jing. 2000. Sentence reduction for automatic text summarization. In Proceedings of the Conference on Applied Natural Language Processing, pages 310–315. Kevin Knight and Daniel Marcu. 2000. Statisticsbased summarization - step one: Sentence compression. In Proceedings of AAAI, pages 703–710. Kevin Knight and Daniel Marcu. 2002. Summarization beyond sentence extraction: a probabilistic approach to sentence compression. Artificial Intelligence, 139(1):91–107, July. Nikos Komodakis, Nikos Paragios, and Georgios Tziritas. 2007. MRF optimization via dual decomposition: Message-passing revisited. In Proceedings of ICCV, pages 1–8, Oct. Terry Koo, Alexander M. Rush, Michael Collins, Tommi Jaakkola, and David Sontag. 2010. 
Dual decomposition for parsing with non-projective head automata. In Proceedings of EMNLP, pages 1288– 1298. Alex Kulesza and Fernando Pereira. 2007. Structured learning with approximate inference. In John C. Platt, Daphne Koller, Yoram Singer, and Sam T. Roweis, editors, NIPS. Curran Associates, Inc. 1250 Hoong Chuin Lau, Trung Hieu Ngo, and Bao Nguyen Nguyen. 2006. Finding a length-constrained maximum-sum or maximum-density subtree and its application to logistics. Discrete Optimization, 3(4):385 – 391. Chen Li, Fei Liu, Fuliang Weng, and Yang Liu. 2013. Document summarization via guided sentence compression. In Proceedings of EMNLP, pages 490– 500, Seattle, Washington, USA, October. Thomas L. Magnanti and Laurence A. Wolsey. 1994. Optimal trees. In Technical Report 290-94, Massechusetts Institute of Technology, Operations Research Center. Andr´e F. T. Martins and Noah A. Smith. 2009. Summarization with a joint model for sentence extraction and compression. In Proceedings of the Workshop on Integer Linear Programming for Natural Langauge Processing, pages 1–9. Andr´e F. T. Martins, Noah A. Smith, and Eric P. Xing. 2009. Concise integer linear programming formulations for dependency parsing. In Proceedings of ACL-IJCNLP, pages 342–350. Andr´e F. T. Martins, Noah A. Smith, Pedro M. Q. Aguiar, and M´ario A. T. Figueiredo. 2011. Dual decomposition with many overlapping components. In Proceedings of EMNLP, pages 238–249. Ryan McDonald, Fernando Pereira, Kiril Ribarov, and Jan Hajiˇc. 2005. Non-projective dependency parsing using spanning tree algorithms. In Proceedings of EMNLP-HLT, pages 523–530. Ryan McDonald. 2006. Discriminative sentence compression with soft syntactic evidence. In Proceedings of EACL, pages 297–304. Alejandro Molina, Juan-Manuel Torres-Moreno, Eric SanJuan, Iria da Cunha, and Gerardo Eugenio Sierra Mart´ınez. 2013. Discursive sentence compression. In Computational Linguistics and Intelligent Text Processing, volume 7817, pages 394–407. Springer. Courtney Napoles, Chris Callison-Burch, Juri Ganitkevitch, and Benjamin Van Durme. 2011a. Paraphrastic sentence compression with a character-based metric: tightening without deletion. In Proceedings of the Workshop on Monolingual Text-To-Text Generation, pages 84–90. Courtney Napoles, Benjamin Van Durme, and Chris Callison-Burch. 2011b. Evaluating sentence compression: pitfalls and suggested remedies. In Proceedings of the Workshop on Monolingual Text-ToText Generation, pages 91–97. Tadashi Nomoto. 2007. Discriminative sentence compression with conditional random fields. Information Processing and Management, 43(6):1571– 1587, November. Xian Qian and Yang Liu. 2013. Fast joint compression and summarization via graph cuts. In Proceedings of EMNLP, pages 1492–1502, Seattle, Washington, USA, October. Stefan Riezler, Tracy H. King, Richard Crouch, and Annie Zaenen. 2003. Statistical sentence condensation using ambiguity packing and stochastic disambiguation methods for lexical-functional grammar. In Proceedings of HLT-NAACL, pages 118–125. Alexander M. Rush and Michael Collins. 2011. Exact decoding of syntactic translation models through Lagrangian relaxation. In Proceedings of ACL-HLT, pages 72–82. Alexander M. Rush, David Sontag, Michael Collins, and Tommi Jaakkola. 2010. On dual decomposition and linear programming relaxations for natural language processing. In Proceedings of EMNLP, pages 1–11. Kapil Thadani and Kathleen McKeown. 2013. Sentence compression with joint structural inference. In Proceedings of CoNLL. 
Jenine Turner and Eugene Charniak. 2005. Supervised and unsupervised learning for sentence compression. In Proceedings of ACL, pages 290–297. Yuya Unno, Takashi Ninomiya, Yusuke Miyao, and Jun’ichi Tsujii. 2006. Trimming CFG parse trees for sentence compression using machine learning approaches. In Proceedings of ACL-COLING, pages 850–857. Kristian Woodsend and Mirella Lapata. 2012. Multiple aspect summarization using integer linear programming. In Proceedings of EMNLP, pages 233– 243. David Zajic, Bonnie J. Dorr, Jimmy Lin, and Richard Schwartz. 2007. Multi-candidate reduction: Sentence compression as a tool for document summarization tasks. Information Processing and Management, 43(6):1549–1570, Nov. Kai Zhao and Liang Huang. 2013. Minibatch and parallelization for online large margin structured learning. In Proceedings of HLT-NAACL, pages 370– 379, Atlanta, Georgia, June. 1251
Proceedings of the 52nd Annual Meeting of the Association for Computational Linguistics, pages 1252–1261, Baltimore, Maryland, USA, June 23-25 2014. c⃝2014 Association for Computational Linguistics Opinion Mining on YouTube Aliaksei Severyn1, Alessandro Moschitti3,1, Olga Uryupina1, Barbara Plank2, Katja Filippova4 1DISI - University of Trento, 2CLT - University of Copenhagen, 3Qatar Computing Research Institute, 4Google Inc. [email protected], [email protected], [email protected], [email protected], [email protected] Abstract This paper defines a systematic approach to Opinion Mining (OM) on YouTube comments by (i) modeling classifiers for predicting the opinion polarity and the type of comment and (ii) proposing robust shallow syntactic structures for improving model adaptability. We rely on the tree kernel technology to automatically extract and learn features with better generalization power than bag-of-words. An extensive empirical evaluation on our manually annotated YouTube comments corpus shows a high classification accuracy and highlights the benefits of structural models in a cross-domain setting. 1 Introduction Social media such as Twitter, Facebook or YouTube contain rapidly changing information generated by millions of users that can dramatically affect the reputation of a person or an organization. This raises the importance of automatic extraction of sentiments and opinions expressed in social media. YouTube is a unique environment, just like Twitter, but probably even richer: multi-modal, with a social graph, and discussions between people sharing an interest. Hence, doing sentiment research in such an environment is highly relevant for the community. While the linguistic conventions used on Twitter and YouTube indeed show similarities (Baldwin et al., 2013), focusing on YouTube allows to exploit context information, possibly also multi-modal information, not available in isolated tweets, thus rendering it a valuable resource for the future research. Nevertheless, there is almost no work showing effective OM on YouTube comments. To the best of our knowledge, the only exception is given by the classification system of YouTube comments proposed by Siersdorfer et al. (2010). While previous state-of-the-art models for opinion classification have been successfully applied to traditional corpora (Pang and Lee, 2008), YouTube comments pose additional challenges: (i) polarity words can refer to either video or product while expressing contrasting sentiments; (ii) many comments are unrelated or contain spam; and (iii) learning supervised models requires training data for each different YouTube domain, e.g., tablets, automobiles, etc. For example, consider a typical comment on a YouTube review video about a Motorola Xoom tablet: this guy really puts a negative spin on this , and I ’m not sure why , this seems crazy fast , and I ’m not entirely sure why his pinch to zoom his laggy all the other xoom reviews The comment contains a product name xoom and some negative expressions, thus, a bag-of-words model would derive a negative polarity for this product. In contrast, the opinion towards the product is neutral as the negative sentiment is expressed towards the video. Similarly, the following comment: iPad 2 is better. the superior apps just destroy the xoom. contains two positive and one negative word, yet the sentiment towards the product is negative (the negative word destroy refers to Xoom). 
Clearly, the bag-of-words lacks the structural information linking the sentiment with the target product. In this paper, we carry out a systematic study on OM targeting YouTube comments; its contribution is three-fold: firstly, to solve the problems outlined above, we define a classification schema, which separates spam and not related comments from the informative ones, which are, in turn, further categorized into video- or product-related comments 1252 (type classification). At the final stage, different classifiers assign polarity (positive, negative or neutral) to each type of a meaningful comment. This allows us to filter out irrelevant comments, providing accurate OM distinguishing comments about the video and the target product. The second contribution of the paper is the creation and annotation (by an expert coder) of a comment corpus containing 35k manually labeled comments for two product YouTube domains: tablets and automobiles.1 It is the first manually annotated corpus that enables researchers to use supervised methods on YouTube for comment classification and opinion analysis. The comments from different product domains exhibit different properties (cf. Sec. 5.2), which give the possibility to study the domain adaptability of the supervised models by training on one category and testing on the other (and vice versa). The third contribution of the paper is a novel structural representation, based on shallow syntactic trees enriched with conceptual information, i.e., tags generalizing the specific topic of the video, e.g., iPad, Kindle, Toyota Camry. Given the complexity and the novelty of the task, we exploit structural kernels to automatically engineer novel features. In particular, we define an efficient tree kernel derived from the Partial Tree Kernel, (Moschitti, 2006a), suitable for encoding structural representation of comments into Support Vector Machines (SVMs). Finally, our results show that our models are adaptable, especially when the structural information is used. Structural models generally improve on both tasks – polarity and type classification – yielding up to 30% of relative improvement, when little data is available. Hence, the impractical task of annotating data for each YouTube category can be mitigated by the use of models that adapt better across domains. 2 Related work Most prior work on more general OM has been carried out on more standardized forms of text, such as consumer reviews or newswire. The most commonly used datasets include: the MPQA corpus of news documents (Wilson et al., 2005), web customer review data (Hu and Liu, 2004), Amazon review data (Blitzer et al., 2007), the JDPA 1The corpus and the annotation guidelines are publicly available at: http://projects.disi.unitn. it/iKernels/projects/sentube/ corpus of blogs (Kessler et al., 2010), etc. The aforementioned corpora are, however, only partially suitable for developing models on social media, since the informal text poses additional challenges for Information Extraction and Natural Language Processing. Similar to Twitter, most YouTube comments are very short, the language is informal with numerous accidental and deliberate errors and grammatical inconsistencies, which makes previous corpora less suitable to train models for OM on YouTube. A recent study focuses on sentiment analysis for Twitter (Pak and Paroubek, 2010), however, their corpus was compiled automatically by searching for emoticons expressing positive and negative sentiment only. Siersdorfer et al. 
(2010) focus on exploiting user ratings (counts of ‘thumbs up/down’ as flagged by other users) of YouTube video comments to train classifiers to predict the community acceptance of new comments. Hence, their goal is different: predicting comment ratings, rather than predicting the sentiment expressed in a YouTube comment or its information content. Exploiting the information from user ratings is a feature that we have not exploited thus far, but we believe that it is a valuable feature to use in future work. Most of the previous work on supervised sentiment analysis use feature vectors to encode documents. While a few successful attempts have been made to use more involved linguistic analysis for opinion mining, such as dependency trees with latent nodes (T¨ackstr¨om and McDonald, 2011) and syntactic parse trees with vectorized nodes (Socher et al., 2011), recently, a comprehensive study by Wang and Manning (2012) showed that a simple model using bigrams and SVMs performs on par with more complex models. In contrast, we show that adding structural features from syntactic trees is particularly useful for the cross-domain setting. They help to build a system that is more robust across domains. Therefore, rather than trying to build a specialized system for every new target domain, as it has been done in most prior work on domain adaptation (Blitzer et al., 2007; Daum´e, 2007), the domain adaptation problem boils down to finding a more robust system (Søgaard and Johannsen, 2012; Plank and Moschitti, 2013). This is in line with recent advances in parsing the web (Petrov and McDonald, 2012), where participants where asked to build a single system able to cope with different yet re1253 lated domains. Our approach relies on robust syntactic structures to automatically generate patterns that adapt better. These representations have been inspired by the semantic models developed for Question Answering (Moschitti, 2008; Severyn and Moschitti, 2012; Severyn and Moschitti, 2013) and Semantic Textual Similarity (Severyn et al., 2013). Moreover, we introduce additional tags, e.g., video concepts, polarity and negation words, to achieve better generalization across different domains where the word distribution and vocabulary changes. 3 Representations and models Our approach to OM on YouTube relies on the design of classifiers to predict comment type and opinion polarity. Such classifiers are traditionally based on bag-of-words and more advanced features. In the next sections, we define a baseline feature vector model and a novel structural model based on kernel methods. 3.1 Feature Set We enrich the traditional bag-of-word representation with features from a sentiment lexicon and features quantifying the negation present in the comment. Our model (FVEC) encodes each document using the following feature groups: - word n-grams: we compute unigrams and bigrams over lower-cased word lemmas where binary values are used to indicate the presence/absence of a given item. - lexicon: a sentiment lexicon is a collection of words associated with a positive or negative sentiment. We use two manually constructed sentiment lexicons that are freely available: the MPQA Lexicon (Wilson et al., 2005) and the lexicon of Hu and Liu (2004). For each of the lexicons, we use the number of words found in the comment that have positive and negative sentiment as a feature. 
- negation: the count of negation words, e.g., {don’t, never, not, etc.}, found in a comment.2 Our structural representation (defined next) enables a more involved treatment of negation. - video concept: cosine similarity between a comment and the title/description of the video. Most of the videos come with a title and a short description, which can be used to encode the topicality of 2The list of negation words is adopted from http://sentiment.christopherpotts.net/lingstruc.html each comment by looking at their overlap. 3.2 Structural model We go beyond traditional feature vectors by employing structural models (STRUCT), which encode each comment into a shallow syntactic tree. These trees are input to tree kernel functions for generating structural features. Our structures are specifically adapted to the noisy usergenerated texts and encode important aspects of the comments, e.g., words from the sentiment lexicons, product concepts and negation words, which specifically targets the sentiment and comment type classification tasks. In particular, our shallow tree structure is a two-level syntactic hierarchy built from word lemmas (leaves) and part-of-speech tags that are further grouped into chunks (Fig. 1). As full syntactic parsers such as constituency or dependency tree parsers would significantly degrade in performance on noisy texts, e.g., Twitter or YouTube comments, we opted for shallow structures, which rely on simpler and more robust components: a part-of-speech tagger and a chunker. Moreover, such taggers have been recently updated with models (Ritter et al., 2011; Gimpel et al., 2011) trained specifically to process noisy texts showing significant reductions in the error rate on usergenerated texts, e.g., Twitter. Hence, we use the CMU Twitter pos-tagger (Gimpel et al., 2011; Owoputi et al., 2013) to obtain the part-of-speech tags. Our second component – chunker – is taken from (Ritter et al., 2011), which also comes with a model trained on Twitter data3 and shown to perform better on noisy data such as user comments. To address the specifics of OM tasks on YouTube comments, we enrich syntactic trees with semantic tags to encode: (i) central concepts of the video, (ii) sentiment-bearing words expressing positive or negative sentiment and (iii) negation words. To automatically identify concept words of the video we use context words (tokens detected as nouns by the part-of-speech tagger) from the video title and video description and match them in the tree. For the matched words, we enrich labels of their parent nodes (part-ofspeech and chunk) with the PRODUCT tag. Similarly, the nodes associated with words found in 3The chunker from (Ritter et al., 2011) relies on its own POS tagger, however, in our structural representations we favor the POS tags from the CMU Twitter tagger and take only the chunk tags from the chunker. 1254 Figure 1: Shallow tree representation of the example comment (labeled with product type and negative sentiment): “iPad 2 is better. the superior apps just destroy the xoom.” (lemmas are replaced with words for readability) taken from the video “Motorola Xoom Review”. We introduce additional tags in the tree nodes to encode the central concept of the video (motorola xoom) and sentiment-bearing words (better, superior, destroy) directly in the tree nodes. For the former we add a PRODUCT tag on the chunk and part-of-speech nodes of the word xoom) and polarity tags (positive and negative) for the latter. Two sentences are split into separate root nodes S. 
the sentiment lexicon are enriched with a polarity tag (either positive or negative), while negation words are labeled with the NEG tag. It should be noted that vector-based (FVEC) model relies only on feature counts whereas the proposed tree encodes powerful contextual syntactic features in terms of tree fragments. The latter are automatically generated and learned by SVMs with expressive tree kernels. For example, the comment in Figure 1 shows two positive and one negative word from the sentiment lexicon. This would strongly bias the FVEC sentiment classifier to assign a positive label to the comment. In contrast, the STRUCT model relies on the fact that the negative word, destroy, refers to the PRODUCT (xoom) since they form a verbal phase (VP). In other words, the tree fragment: [S [negative-VP [negative-V [destroy]] [PRODUCT-NP [PRODUCT-N [xoom]]]] is a strong feature (induced by tree kernels) to help the classifier to discriminate such hard cases. Moreover, tree kernels generate all possible subtrees, thus producing generalized (back-off) features, e.g., [S [negative-VP [negative-V [destroy]] [PRODUCT-NP]]]] or [S [negative-VP [PRODUCT-NP]]]]. 3.3 Learning We perform OM on YouTube using supervised methods, e.g., SVM. Our goal is to learn a model to automatically detect the sentiment and type of each comment. For this purpose, we build a multiclass classifier using the one-vs-all scheme. A binary classifier is trained for each of the classes and the predicted class is obtained by taking a class from the classifier with a maximum prediction score. Our back-end binary classifier is SVMlight-TK4, which encodes structural kernels in the SVM-light (Joachims, 2002) solver. We define a novel and efficient tree kernel function, namely, Shallow syntactic Tree Kernel (SHTK), which is as expressive as the Partial Tree Kernel (PTK) (Moschitti, 2006a) to handle feature engineering over the structural representations of the STRUCT model. A polynomial kernel of degree 3 is applied to feature vectors (FVEC). Combining structural and vector models. A typical kernel machine, e.g., SVM, classifies a test input xxx using the following prediction function: h(xxx) = P i αiyiK(xxx,xxxi), where αi are the model parameters estimated from the training data, yi are target variables, xxxi are support vectors, and K(·, ·) is a kernel function. The latter computes the similarity between two comments. The STRUCT model treats each comment as a tuple xxx = ⟨TTT,vvv⟩composed of a shallow syntactic tree TTT and a feature vector vvv. Hence, for each pair of comments xxx1 and xxx2, we define the following comment similarity kernel: K(xxx1,xxx2) = KTK(TTT 1,TTT 2) + Kv(vvv1,vvv2), (1) where KTK computes SHTK (defined next), and Kv is a kernel over feature vectors, e.g., linear, polynomial, Gaussian, etc. Shallow syntactic tree kernel. Following the convolution kernel framework, we define the new 4http://disi.unitn.it/moschitti/Tree-Kernel.htm 1255 SHTK function from Eq. 1 to compute the similarity between tree structures. It counts the number of common substructures between two trees T1 and T2 without explicitly considering the whole fragment space. The general equations for Convolution Tree Kernels is: TK(T1, T2) = X n1∈NT1 X n2∈NT2 ∆(n1, n2), (2) where NT1 and NT2 are the sets of the T1’s and T2’s nodes, respectively and ∆(n1, n2) is equal to the number of common fragments rooted in the n1 and n2 nodes, according to several possible definition of the atomic fragments. 
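In implementation terms, the comment similarity kernel of Eq. 1 is the sum of a tree kernel over the STRUCT trees and a vector kernel over the FVEC features, plugged into the standard kernel-machine decision function h(x). The sketch below assumes an external shtk function (e.g., as exposed by a wrapper around SVM-light-TK) and a (⟨v1, v2⟩ + 1)^3 form for the degree-3 polynomial kernel; both the function name and the kernel constant are assumptions of this illustration.

import numpy as np

def poly_kernel(v1, v2, degree=3, c=1.0):
    """Degree-3 polynomial kernel applied to the FVEC feature vectors."""
    return (np.dot(v1, v2) + c) ** degree

def comment_kernel(x1, x2, shtk):
    """Eq. 1: each comment x = (tree, vec); similarity is the sum of the
    Shallow syntactic Tree Kernel and the vector kernel."""
    tree1, vec1 = x1
    tree2, vec2 = x2
    return shtk(tree1, tree2) + poly_kernel(vec1, vec2)

def predict(x, support_vectors, alphas, labels, shtk):
    """Kernel-machine decision function h(x) = sum_i alpha_i y_i K(x, x_i).
    In the one-vs-all setup, one such score is computed per binary
    classifier and the class with the maximum score is returned."""
    return sum(a * y * comment_kernel(x, xi, shtk)
               for a, y, xi in zip(alphas, labels, support_vectors))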
To improve the speed computation of TK, we consider pairs of nodes (n1, n2) belonging to the same tree level. Thus, given H, the height of the STRUCT trees, where each level h contains nodes of the same type, i.e., chunk, POS, and lexical nodes, we define SHTK as the following5: SHTK(T1, T2) = H X h=1 X n1∈Nh T1 X n2∈Nh T2 ∆(n1, n2), (3) where Nh T1 and Nh T2 are sets of nodes at height h. The above equation can be applied with any ∆ function. To have a more general and expressive kernel, we use ∆previously defined for PTK. More formally: if n1 and n2 are leaves then ∆(n1, n2) = µλ(n1, n2); else ∆(n1, n2) = µ  λ2 + X ⃗I1,⃗I2,|⃗I1|=|⃗I2| λd(⃗I1)+d(⃗I2) |⃗I1| Y j=1 ∆(cn1(⃗I1j), cn2(⃗I2j))  , where λ, µ ∈[0, 1] are decay factors; the large sum is adopted from a definition of the subsequence kernel (Shawe-Taylor and Cristianini, 2004) to generate children subsets with gaps, which are then used in a recursive call to ∆. Here, cn1(i) is the ith child of the node n1; ⃗I1 and ⃗I2 are two sequences of indexes that enumerate subsets of children with gaps, i.e., ⃗I = (i1, i2, .., |I|), with 1 ≤i1 < i2 < .. < i|I|; and d(⃗I1) = ⃗I1l(⃗I1) −⃗I11 + 1 and d(⃗I2) = ⃗I2l(⃗I2) −⃗I21 + 1, which penalizes subsequences with larger gaps. It should be noted that: firstly, the use of a subsequence kernel makes it possible to generate child subsets of the two nodes, i.e., it allows for gaps, which makes matching of syntactic patterns 5To have a similarity score between 0 and 1, a normalization in the kernel space, i.e. SHT K(T1,T2) √ SHT K(T1,T1)×SHT K(T2,T2) is applied. less rigid. Secondly, the resulting SHTK is essentially a special case of PTK (Moschitti, 2006a), adapted to the shallow structural representation STRUCT (see Sec. 3.2). When applied to STRUCT trees, SHTK exactly computes the same feature space as PTK, but in faster time (on average). Indeed, SHTK required to be only applied to node pairs from the same level (see Eq. 3), where the node labels can match – chunk, POS or lexicals. This reduces the time for selecting the matchingnode pairs carried out in PTK (Moschitti, 2006a; Moschitti, 2006b). The fragment space is obviously the same, as the node labels of different levels in STRUCT are different and will not be matched by PTK either. Finally, given its recursive definition in Eq. 3 and the use of subsequence (with gaps), SHTK can derive useful dependencies between its elements. For example, it will generate the following subtree fragments: [positive-NP [positive-A N]], [S [negative-VP [negative-V [destroy]] [PRODUCT-NP]]]] and so on. 4 YouTube comments corpus To build a corpus of YouTube comments, we focus on a particular set of videos (technical reviews and advertisings) featuring commercial products. In particular, we chose two product categories: automobiles (AUTO) and tablets (TABLETS). To collect the videos, we compiled a list of products and queried the YouTube gData API6 to retrieve the videos. We then manually excluded irrelevant videos. For each video, we extracted all available comments (limited to maximum 1k comments per video) and manually annotated each comment with its type and polarity. We distinguish between the following types: product: discuss the topic product in general or some features of the product; video: discuss the video or some of its details; spam: provide advertising and malicious links; and off-topic: comments that have almost no content (“lmao”) or content that is not related to the video (“Thank you!”). 
Regarding the polarity, we distinguish between {positive, negative, neutral} sentiments with respect to the product and the video. If the comment contains several statements of different polarities, it is annotated as both positive and negative: “Love the video but waiting for iPad 4”. In total we have 6https://developers.google.com/youtube/v3/ 1256 annotated 208 videos with around 35k comments (128 videos TABLETS and 80 for AUTO). To evaluate the quality of the produced labels, we asked 5 annotators to label a sample set of one hundred comments and measured the agreement. The resulting annotator agreement α value (Krippendorf, 2004; Artstein and Poesio, 2008) scores are 60.6 (AUTO), 72.1 (TABLETS) for the sentiment task and 64.1 (AUTO), 79.3 (TABLETS) for the type classification task. For the rest of the comments, we assigned the entire annotation task to a single coder. Further details on the corpus can be found in Uryupina et al. (2014). 5 Experiments This section reports: (i) experiments on individual subtasks of opinion and type classification; (ii) the full task of predicting type and sentiment; (iii) study on the adaptability of our system by learning on one domain and testing on the other; (iv) learning curves that provide an indication on the required amount and type of data and the scalability to other domains. 5.1 Task description Sentiment classification. We treat each comment as expressing positive, negative or neutral sentiment. Hence, the task is a threeway classification. Type classification. One of the challenging aspects of sentiment analysis of YouTube data is that the comments may express the sentiment not only towards the product shown in the video, but also the video itself, i.e., users may post positive comments to the video while being generally negative about the product and vice versa. Hence, it is of crucial importance to distinguish between these two types of comments. Additionally, many comments are irrelevant for both the product and the video (off-topic) or may even contain spam. Given that the main goal of sentiment analysis is to select sentiment-bearing comments and identify their polarity, distinguishing between off-topic and spam categories is not critical. Thus, we merge the spam and off-topic into a single uninformative category. Similar to the opinion classification task, comment type classification is a multi-class classification with three classes: video, product and uninform. Full task. While the previously discussed sentiment and type identification tasks are useful to Task class AUTO TABLETS TRAIN TEST TRAIN TEST Sentiment positive 2005 (36%) 807 (27%) 2393 (27%) 1872 (27%) neutral 2649 (48%) 1413 (47%) 4683 (53%) 3617 (52%) negative 878 (16%) 760 (26%) 1698 (19%) 1471 (21%) total 5532 2980 8774 6960 Type product 2733 (33%) 1761 (34%) 7180 (59%) 5731 (61%) video 3008 (36%) 1369 (26%) 2088 (17%) 1674 (18%) off-topic 2638 (31%) 2045 (39%) 2334 (19%) 1606 (17%) spam 26 (>1%) 17 (>1%) 658 (5%) 361 (4%) total 8405 5192 12260 9372 Full product-pos. 1096 (13%) 517 (10%) 1648 (14%) 1278 (14%) product-neu. 908 (11%) 729 (14%) 3681 (31%) 2844 (32%) product-neg. 554 (7%) 370 (7%) 1404 (12%) 1209 (14%) video-pos. 909 (11%) 290 (6%) 745 (6%) 594 (7%) video-neu. 1741 (21%) 683 (14%) 1002 (9%) 773 (9%) video-neg. 
324 (4%) 390 (8%) 294 (2%) 262 (3%) off-topic 2638 (32%) 2045 (41%) 2334 (20%) 1606 (18%) spam 26 (>1%) 17 (>1%) 658 (6%) 361 (4%) total 8196 5041 11766 8927 Table 1: Summary of YouTube comments data used in the sentiment, type and full classification tasks. The comments come from two product categories: AUTO and TABLETS. Numbers in parenthesis show proportion w.r.t. to the total number of comments used in a task. model and study in their own right, our end goal is: given a stream of comments, to jointly predict both the type and the sentiment of each comment. We cast this problem as a single multi-class classification task with seven classes: the Cartesian product between {product, video} type labels and {positive, neutral, negative} sentiment labels plus the uninformative category (spam and off-topic). Considering a real-life application, it is important not only to detect the polarity of the comment, but to also identify if it is expressed towards the product or the video.7 5.2 Data We split all the videos 50% between training set (TRAIN) and test set (TEST), where each video contains all its comments. This ensures that all comments from the same video appear either in TRAIN or in TEST. Since the number of comments per video varies, the resulting sizes of each set are different (we use the larger split for TRAIN). Table 1 shows the data distribution across the task-specific classes – sentiment and type classification. For the sentiment task we exclude off-topic and spam comments as well as comments with ambiguous sentiment, i.e., an7We exclude comments annotated as both video and product. This enables the use of a simple flat multiclassifiers with seven categories for the full task, instead of a hierarchical multi-label classifiers (i.e., type classification first and then opinion polarity). The number of comments assigned to both product and video is relatively small (8% for TABLETS and 4% for AUTO). 1257 notated as both positive and negative. For the sentiment task about 50% of the comments have neutral polarity, while the negative class is much less frequent. Interestingly, the ratios between polarities expressed in comments from AUTO and TABLETS are very similar across both TRAIN and TEST. Conversely, for the type task, we observe that comments from AUTO are uniformly distributed among the three classes, while for the TABLETS the majority of comments are product related. It is likely due to the nature of the TABLETS videos, that are more geek-oriented, where users are more prone to share their opinions and enter involved discussions about a product. Additionally, videos from the AUTO category (both commercials and user reviews) are more visually captivating and, being generally oriented towards a larger audience, generate more video-related comments. Regarding the full setting, where the goal is to have a joint prediction of the comment sentiment and type, we observe that video-negative and video-positive are the most scarce classes, which makes them the most difficult to predict. 5.3 Results We start off by presenting the results for the traditional in-domain setting, where both TRAIN and TEST come from the same domain, e.g., AUTO or TABLETS. Next, we show the learning curves to analyze the behavior of FVEC and STRUCT models according to the training size. Finally, we perform a set of cross-domain experiments that describe the enhanced adaptability of the patterns generated by the STRUCT model. 
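Since all comments of a video must land on the same side of the split, the split is taken over videos rather than over individual comments. A minimal sketch using scikit-learn's GroupShuffleSplit is shown below; the use of this particular utility is an assumption of the illustration, not the authors' script.

from sklearn.model_selection import GroupShuffleSplit

def split_by_video(comments, labels, video_ids, test_size=0.5, seed=0):
    """Split comments so that every comment of a given video ends up
    entirely in TRAIN or entirely in TEST."""
    splitter = GroupShuffleSplit(n_splits=1, test_size=test_size,
                                 random_state=seed)
    train_idx, test_idx = next(splitter.split(comments, labels,
                                              groups=video_ids))
    train = [(comments[i], labels[i]) for i in train_idx]
    test = [(comments[i], labels[i]) for i in test_idx]
    return train, test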
5.3.1 In-domain experiments We compare FVEC and STRUCT models on three tasks described in Sec. 5.1: sentiment, type and full. Table 2 reports the per-class performance and the overall accuracy of the multi-class classifier. Firstly, we note that the performance on TABLETS is much higher than on AUTO across all tasks. This can be explained by the following: (i) TABLETS contains more training data and (ii) videos from AUTO and TABLETS categories draw different types of audiences – well-informed users and geeks expressing better-motivated opinions about a product for the former vs. more general audience for the latter. This results in the different quality of comments with the AUTO being more challenging to analyze. Secondly, we observe that the STRUCT model provides 1-3% of absolute improvement in accuracy over FVEC for every task. For individual categories the F1 scores are also improved by the STRUCT model (except for the negative classes for AUTO, where we see a small drop). We conjecture that sentiment prediction for AUTO category is largely driven by one-shot phrases and statements where it is hard to improve upon the bag-of-words and sentiment lexicon features. In contrast, comments from TABLETS category tend to be more elaborated and well-argumented, thus, benefiting from the expressiveness of the structural representations. Considering per-class performance, correctly predicting negative sentiment is most difficult for both AUTO and TABLETS, which is probably caused by the smaller proportion of the negative comments in the training set. For the type task, video-related class is substantially more difficult than product-related for both categories. For the full task, the class video-negative accounts for the largest error. This is confirmed by the results from the previous sentiment and type tasks, where we saw that handling negative sentiment and detecting video-related comments are most difficult. 5.3.2 Learning curves The learning curves depict the behavior of FVEC and STRUCT models as we increase the size of the training set. Intuitively, the STRUCT model relies on more general syntactic patterns and may overcome the sparseness problems incurred by the FVEC model when little training data is available. Nevertheless, as we see in Figure 2, the learning curves for sentiment and type classification tasks across both product categories do not confirm this intuition. The STRUCT model consistently outperforms the FVEC across all training sizes, but the gap in the performance does not increase when we move to smaller training sets. As we will see next, this picture changes when we perform the crossdomain study. 5.3.3 Cross-domain experiments To understand the performance of our classifiers on other YouTube domains, we perform a set of cross-domain experiments by training on the data from one product category and testing on the other. 
Table 3 reports the accuracy for three tasks when we use all comments (TRAIN + TEST) from AUTO to predict on the TEST from TABLETS 1258 Task class AUTO TABLETS FVEC STRUCT FVEC STRUCT P R F1 P R F1 P R F1 P R F1 Sent positive 49.1 72.1 58.4 50.1 73.9 59.0 67.5 70.3 69.9 71.2 71.3 71.3 neutral 68.2 55.0 61.4 70.1 57.6 63.1 81.3 71.4 76.9 81.1 73.1 77.8 negative 42.0 36.9 39.6 41.3 35.8 38.8 48.3 60.0 54.8 50.2 62.6 56.5 Acc 54.7 55.7 68.6 70.5 Type product 66.8 73.3 69.4 68.8 75.5 71.7 78.2 95.3 86.4 80.1 95.5 87.6 video 45.0 52.8 48.2 47.8 49.9 48.7 83.6 45.7 58.9 83.5 46.7 59.4 uninform 59.3 48.2 53.1 60.6 53.0 56.4 70.2 52.5 60.7 72.9 58.6 65.0 Acc 57.4 59.4 77.2 78.6 Full product-pos 34.0 49.6 39.2 36.5 51.2 43.0 48.4 56.8 52.0 52.4 59.3 56.4 product-neu 43.4 31.1 36.1 41.4 36.1 38.4 68.0 67.5 68.1 59.7 83.4 70.0 product-neg 26.3 29.5 28.8 26.3 25.3 25.6 43.0 49.9 45.4 44.7 53.7 48.4 video-pos 23.2 47.1 31.9 26.1 54.5 35.5 69.1 60.0 64.7 64.9 68.8 66.4 video-neu 26.1 30.0 29.0 26.5 31.6 28.8 56.4 32.1 40.0 55.1 35.7 43.3 video-neg 21.9 3.7 6.0 17.7 2.3 4.8 39.0 17.5 23.9 39.5 6.1 11.5 uninform 56.5 52.4 54.9 60.0 53.3 56.3 60.0 65.5 62.2 63.3 68.4 66.9 Acc 40.0 41.5 57.6 60.3 Table 2: In-domain experiments on AUTO and TABLETS using two models: FVEC and STRUCT. The results are reported for sentiment, type and full classification tasks. The metrics used are precision (P), recall (R) and F1 for each individual class and the general accuracy of the multi-class classifier (Acc). AUTOSTRUCT AUTOFVEC TABLETSSTRUCT TABLETSFVEC Accuracy 55 60 65 70 training size 1k 2k 3k 4k 5k ALL (a) Sentiment classification AUTOSTRUCT AUTOFVEC TABLETSSTRUCT TABLETSFVEC Accuracy 40 45 50 55 60 65 70 75 80 training size 1k 2k 3k 4k 5k ALL (b) Type classification Figure 2: In-domain learning curves. ALL refers to the entire TRAIN set for a given product category, i.e., AUTO and TABLETS (see Table 1) and in the opposite direction (TABLETS→AUTO). When using AUTO as a source domain, STRUCT model provides additional 1-3% of absolute imSource Target Task FVEC STRUCT AUTO TABLETS Sent 66.1 66.6 Type 59.9 64.1† Full 35.6 38.3† TABLETS AUTO Sent 60.4 61.9† Type 54.2 55.6† Full 43.4 44.7† Table 3: Cross-domain experiment. Accuracy using FVEC and STRUCT models when trained/tested in both directions, i.e. AUTO→TABLETS and TABLETS→AUTO. † denotes results statistically significant at 95% level (via pairwise t-test). provement, except for the sentiment task. Similar to the in-domain experiments, we studied the effect of the source domain size on the target test performance. This is useful to assess the adaptability of features exploited by the FVEC and STRUCT models with the change in the number of labeled examples available for training. Additionally, we considered a setting including a small amount of training data from the target data (i.e., supervised domain adaptation). For this purpose, we drew the learning curves of the FVEC and STRUCT models applied to the sentiment and type tasks (Figure 3): AUTO is used as the source domain to train models, which are tested on TABLETS.8 The plot shows that when 8The results for the other direction (TABLETS→AUTO) show similar behavior. 
[Figure 3: Learning curves for the cross-domain setting (AUTO→TABLETS) on (a) sentiment classification and (b) type classification, plotting accuracy against training size for the FVEC and STRUCT models. The shaded area refers to adding a small portion of comments (100, 500 and 1k) from the same domain as the target test data to the training set.]

The plot shows that when little training data is available, the features generated by the STRUCT model exhibit better adaptability (up to 10% of improvement over FVEC). The bag-of-words model seems to be affected by the data sparsity problem, which becomes a crucial issue when only a small training set is available. This difference becomes smaller as we add data from the same domain. This is an important advantage of our structural approach, since we cannot realistically expect to obtain manual annotations for 10k+ comments for each of the many thousands of product domains present on YouTube.

5.4 Discussion Our STRUCT model is more accurate since it is able to induce structural patterns of sentiment. Consider the following comment: "optimus pad is better. this xoom is just to bulky but optimus pad offers better functionality." The FVEC bag-of-words model misclassifies it as positive, since it contains two positive expressions (better, better functionality) that outweigh a single negative expression (bulky). The structural model, in contrast, is able to identify the product of interest (xoom) and associate it with the negative expression through a structural feature, and thus correctly classifies the comment as negative. Some issues remain problematic even for the structural model. The largest group of errors involves implicit sentiment: some comments do not contain any explicit positive or negative opinion, but provide detailed and well-argued criticism, for example, "this phone is heavy." Such comments might also include irony. To account for these cases, a deep understanding of the product domain is necessary.

6 Conclusions and Future Work We carried out a systematic study on OM from YouTube comments by training a set of supervised multi-class classifiers distinguishing between video- and product-related opinions. We use standard feature vectors augmented by shallow syntactic trees enriched with additional conceptual information. This paper makes several contributions: (i) it shows that effective OM can be carried out with supervised models trained on high-quality annotations; (ii) it introduces a novel annotated corpus of YouTube comments, which we make available to the research community; (iii) it defines novel structural models and kernels, which can improve on feature vectors, e.g., up to 30% of relative improvement in type classification when little data is available, and demonstrates that the structural model scales well to other domains. In the future, we plan to work on a joint model to classify all the comments of a given video, such that it is possible to exploit latent dependencies between entities and the sentiments of the comment thread. Additionally, we plan to experiment with hierarchical multi-label classifiers for the full task (in place of a flat multi-class learner).

Acknowledgments The authors are supported by a Google Faculty Award 2011, the Google Europe Fellowship Award 2013, and the European Community's Seventh Framework Programme (FP7/2007-2013) under grant #288024: LIMOSINE.
2014
118
Proceedings of the 52nd Annual Meeting of the Association for Computational Linguistics, pages 1262–1273, Baltimore, Maryland, USA, June 23-25 2014. c⃝2014 Association for Computational Linguistics Automatic Keyphrase Extraction: A Survey of the State of the Art Kazi Saidul Hasan and Vincent Ng Human Language Technology Research Institute University of Texas at Dallas Richardson, TX 75083-0688 {saidul,vince}@hlt.utdallas.edu Abstract While automatic keyphrase extraction has been examined extensively, state-of-theart performance on this task is still much lower than that on many core natural language processing tasks. We present a survey of the state of the art in automatic keyphrase extraction, examining the major sources of errors made by existing systems and discussing the challenges ahead. 1 Introduction Automatic keyphrase extraction concerns “the automatic selection of important and topical phrases from the body of a document” (Turney, 2000). In other words, its goal is to extract a set of phrases that are related to the main topics discussed in a given document (Tomokiyo and Hurst, 2003; Liu et al., 2009b; Ding et al., 2011; Zhao et al., 2011). Document keyphrases have enabled fast and accurate searching for a given document from a large text collection, and have exhibited their potential in improving many natural language processing (NLP) and information retrieval (IR) tasks, such as text summarization (Zhang et al., 2004), text categorization (Hulth and Megyesi, 2006), opinion mining (Berend, 2011), and document indexing (Gutwin et al., 1999). Owing to its importance, automatic keyphrase extraction has received a lot of attention. However, the task is far from being solved: state-of-the-art performance on keyphrase extraction is still much lower than that on many core NLP tasks (Liu et al., 2010). Our goal in this paper is to survey the state of the art in keyphrase extraction, examining the major sources of errors made by existing systems and discussing the challenges ahead. 2 Corpora Automatic keyphrase extraction systems have been evaluated on corpora from a variety of sources ranging from long scientific publications to short paper abstracts and email messages. Table 1 presents a listing of the corpora grouped by their sources as well as their statistics.1 There are at least four corpus-related factors that affect the difficulty of keyphrase extraction. Length The difficulty of the task increases with the length of the input document as longer documents yield more candidate keyphrases (i.e., phrases that are eligible to be keyphrases (see Section 3.1)). For instance, each Inspec abstract has on average 10 annotator-assigned keyphrases and 34 candidate keyphrases. In contrast, a scientific paper typically has at least 10 keyphrases and hundreds of candidate keyphrases, yielding a much bigger search space (Hasan and Ng, 2010). Consequently, it is harder to extract keyphrases from scientific papers, technical reports, and meeting transcripts than abstracts, emails, and news articles. Structural consistency In a structured document, there are certain locations where a keyphrase is most likely to appear. For instance, most of a scientific paper’s keyphrases should appear in the abstract and the introduction. 
While structural information has been exploited to extract keyphrases from scientific papers (e.g., title, section information) (Kim et al., 2013), web pages (e.g., metadata) (Yih et al., 2006), and chats (e.g., dialogue acts) (Kim and Baldwin, 2012), it is most useful when the documents from a source exhibit structural similarity. For this reason, structural information is likely to facilitate keyphrase extraction from scientific papers and technical reports because of their standard format (i.e., standard sections such as abstract, introduction, conclusion, etc.). In contrast, the lack of structural consistency in other types of structured documents (e.g., web pages, which can be blogs, forums, or reviews) may render structural information less useful.

Topic change An observation commonly exploited in keyphrase extraction from scientific articles and news articles is that keyphrases typically appear not only at the beginning (Witten et al., 1999) but also at the end (Medelyan et al., 2009) of a document. This observation does not necessarily hold for conversational text (e.g., meetings, chats), however. The reason is simple: in a conversation, the topics (i.e., its talking points) change as the interaction moves forward in time, and so do the keyphrases associated with a topic. One way to address this complication is to detect a topic change in conversational text (Kim and Baldwin, 2012). However, topic change detection is not always easy: while the topics listed in the form of an agenda at the beginning of formal meeting transcripts can be exploited, such clues are absent in casual conversations (e.g., chats).

Topic correlation Another observation commonly exploited in keyphrase extraction from scientific articles and news articles is that the keyphrases in a document are typically related to each other (Turney, 2003; Mihalcea and Tarau, 2004). However, this observation does not necessarily hold for informal text (e.g., emails, chats, informal meetings, personal blogs), where people can talk about any number of potentially uncorrelated topics. The presence of uncorrelated topics implies that it may no longer be possible to exploit relatedness and therefore increases the difficulty of keyphrase extraction.

Source               Dataset/Contributor                           Documents  Tokens/doc  Keys/doc
Paper abstracts      Inspec (Hulth, 2003)∗                         2,000      <200        10
Scientific papers    NUS corpus (Nguyen and Kan, 2007)∗            211        ≈8K         11
                     citeulike.org (Medelyan et al., 2009)∗        180        -           5
                     SemEval-2010 (Kim et al., 2010b)∗             284        >5K         15
Technical reports    NZDL (Witten et al., 1999)∗                   1,800      -           -
News articles        DUC-2001 (Wan and Xiao, 2008b)∗               308        ≈900        8
                     Reuters corpus (Hulth and Megyesi, 2006)      12,848     -           6
Web pages            Yih et al. (2006)                             828        -           -
                     Hammouda et al. (2005)∗                       312        ≈500        -
Blogs                (Grineva et al., 2009)                        252        ≈1K         8
Meeting transcripts  ICSI (Liu et al., 2009a)                      161        ≈1.6K       4
Emails               Enron corpus (Dredze et al., 2008)∗           14,659     -           -
Live chats           Library of Congress (Kim and Baldwin, 2012)   15         -           10
Table 1: Evaluation datasets. Publicly available datasets are marked with an asterisk (∗).

1 Many of the publicly available corpora can be found in http://github.com/snkim/AutomaticKeyphraseExtraction/ and http://code.google.com/p/maui-indexer/downloads/list.
3 Keyphrase Extraction Approaches A keyphrase extraction system typically operates in two steps: (1) extracting a list of words/phrases that serve as candidate keyphrases using some heuristics (Section 3.1); and (2) determining which of these candidate keyphrases are correct keyphrases using supervised (Section 3.2) or unsupervised (Section 3.3) approaches. 3.1 Selecting Candidate Words and Phrases As noted before, a set of phrases and words is typically extracted as candidate keyphrases using heuristic rules. These rules are designed to avoid spurious instances and keep the number of candidates to a minimum. Typical heuristics include (1) using a stop word list to remove stop words (Liu et al., 2009b), (2) allowing words with certain partof-speech tags (e.g., nouns, adjectives, verbs) to be candidate keywords (Mihalcea and Tarau, 2004; Wan and Xiao, 2008b; Liu et al., 2009a), (3) allowing n-grams that appear in Wikipedia article titles to be candidates (Grineva et al., 2009), and (4) extracting n-grams (Witten et al., 1999; Hulth, 2003; Medelyan et al., 2009) or noun phrases (Barker and Cornacchia, 2000; Wu et al., 2005) that satisfy pre-defined lexico-syntactic pattern(s) (Nguyen and Phan, 2009). Many of these heuristics have proven effective with their high recall in extracting gold keyphrases from various sources. However, for a long document, the resulting list of candidates can be long. Consequently, different pruning heuristics have been designed to prune candidates that are unlikely to be keyphrases (Huang et al., 2006; Kumar and Srinathan, 2008; El-Beltagy and Rafea, 2009; You et al., 2009; Newman et al., 2012). 3.2 Supervised Approaches Research on supervised approaches to keyphrase extraction has focused on two issues: task reformulation and feature design. 1263 3.2.1 Task Reformulation Early supervised approaches to keyphrase extraction recast this task as a binary classification problem (Frank et al., 1999; Turney, 1999; Witten et al., 1999; Turney, 2000). The goal is to train a classifier on documents annotated with keyphrases to determine whether a candidate phrase is a keyphrase. Keyphrases and non-keyphrases are used to generate positive and negative examples, respectively. Different learning algorithms have been used to train this classifier, including na¨ıve Bayes (Frank et al., 1999; Witten et al., 1999), decision trees (Turney, 1999; Turney, 2000), bagging (Hulth, 2003), boosting (Hulth et al., 2001), maximum entropy (Yih et al., 2006; Kim and Kan, 2009), multi-layer perceptron (Lopez and Romary, 2010), and support vector machines (Jiang et al., 2009; Lopez and Romary, 2010). Recasting keyphrase extraction as a classification problem has its weaknesses, however. Recall that the goal of keyphrase extraction is to identify the most representative phrases for a document. In other words, if a candidate phrase c1 is more representative than another candidate phrase c2, c1 should be preferred to c2. Note that a binary classifier classifies each candidate keyphrase independently of the others, and consequently it does not allow us to determine which candidates are better than the others (Hulth, 2004; Wang and Li, 2011). Motivated by this observation, Jiang et al. (2009) propose a ranking approach to keyphrase extraction, where the goal is to learn a ranker to rank two candidate keyphrases. 
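A minimal sketch of how such pairwise training data could be generated is shown below. It assumes a generic featurize() function supplied by the caller and illustrates only the general ranking reformulation, not the exact system of any particular paper.

```python
# Illustrative sketch: build pairwise ranking examples from keyphrase annotations.
# Each pair encodes the preference "candidate a should outrank candidate b"; a
# learning-to-rank method (e.g., a ranking SVM) can be trained on the feature
# differences.  featurize() is a placeholder for any candidate feature extractor.
from itertools import product

def pairwise_examples(candidates, gold_keyphrases, featurize):
    positives = [c for c in candidates if c in gold_keyphrases]
    negatives = [c for c in candidates if c not in gold_keyphrases]
    pairs = []
    for pos, neg in product(positives, negatives):
        # The feature difference encodes the preference pos > neg.
        diff = [a - b for a, b in zip(featurize(pos), featurize(neg))]
        pairs.append((diff, +1))
        pairs.append(([-d for d in diff], -1))  # the symmetric (reversed) pair
    return pairs
```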
This pairwise ranking approach therefore introduces competition between candidate keyphrases, and has been shown to significantly outperform KEA (Witten et al., 1999; Frank et al., 1999), a popular supervised baseline that adopts the traditional supervised classification approach (Song et al., 2003; Kelleher and Luz, 2005). 3.2.2 Features The features commonly used to represent an instance for supervised keyphrase extraction can be broadly divided into two categories. 3.2.2.1 Within-Collection Features Within-collection features are computed based solely on the training documents. These features can be further divided into three types. Statistical features are computed based on statistical information gathered from the training documents. Three such features have been extensively used in supervised approaches. The first one, tf*idf (Salton and Buckley, 1988), is computed based on candidate frequency in the given text and inverse document frequency (i.e., number of other documents where the candidate appears).2 The second one, the distance of a phrase, is defined as the number of words preceding its first occurrence normalized by the number of words in the document. Its usefulness stems from the fact that keyphrases tend to appear early in a document. The third one, supervised keyphraseness, encodes the number of times a phrase appears as a keyphrase in the training set. This feature is designed based on the assumption that a phrase frequently tagged as a keyphrase is more likely to be a keyphrase in an unseen document. These three features form the feature set of KEA (Witten et al., 1999; Frank et al., 1999), and have been shown to perform consistently well on documents from various sources (Yih et al., 2006; Kim et al., 2013). Other statistical features include phrase length and spread (i.e., the number of words between the first and last occurrences of a phrase in the document). Structural features encode how different instances of a candidate keyphrase are located in different parts of a document. A phrase is more likely to be a keyphrase if it appears in the abstract or introduction of a paper or in the metadata section of a web page. In fact, features that encode how frequently a candidate keyphrase occurs in various sections of a scientific paper (e.g., introduction, conclusion) (Nguyen and Kan, 2007) and those that encode the location of a candidate keyphrase in a web page (e.g., whether it appears in the title) (Chen et al., 2005; Yih et al., 2006) have been shown to be useful for the task. Syntactic features encode the syntactic patterns of a candidate keyphrase. For example, a candidate keyphrase has been encoded as (1) a PoS tag sequence, which denotes the sequence of part-of-speech tag(s) assigned to its word(s); and (2) a suffix sequence, which is the sequence of morphological suffixes of its words (Yih et al., 2006; Nguyen and Kan, 2007; Kim and Kan, 2009). However, ablation studies conducted on web pages (Yih et al., 2006) and scientific articles 2A tf*idf-based baseline, where candidate keyphrases are ranked and selected according to tf*idf, has been widely used by both supervised and unsupervised approaches (Zhang et al., 2005; Dredze et al., 2008; Paukkeri et al., 2008; Grineva et al., 2009). 1264 (Kim and Kan, 2009) reveal that syntactic features are not useful for keyphrase extraction in the presence of other feature types. 
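To illustrate, the sketch below computes simplified versions of the three statistical features discussed above for a single candidate phrase. The exact weighting schemes vary across systems (KEA, for instance, has its own tf*idf formulation), so the formulas here are assumptions chosen for clarity, not a faithful re-implementation of any published extractor.

```python
# Sketch of the three statistical features discussed above, computed for one
# candidate phrase.  Document frequencies (doc_freq) and training-set keyphrase
# counts (keyphrase_count) are assumed to have been gathered beforehand;
# tokenization is naive and the tf*idf weighting is a simplification.
import math

def statistical_features(candidate, doc_tokens, num_docs, doc_freq, keyphrase_count):
    phrase = candidate.split()
    n = len(doc_tokens)
    # term frequency: occurrences of the (possibly multi-word) candidate in the document
    tf = sum(1 for i in range(n - len(phrase) + 1) if doc_tokens[i:i + len(phrase)] == phrase)
    tfidf = (tf / n) * math.log((num_docs + 1) / (doc_freq.get(candidate, 0) + 1))
    # distance: position of the first occurrence, normalized by document length
    first = next((i for i in range(n - len(phrase) + 1) if doc_tokens[i:i + len(phrase)] == phrase), n)
    distance = first / n
    # supervised keyphraseness: how often the phrase was a gold keyphrase in training
    keyphraseness = keyphrase_count.get(candidate, 0)
    return {"tfidf": tfidf, "distance": distance, "keyphraseness": keyphraseness}
```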
3.2.2.2 External Resource-Based Features External resource-based features are computed based on information gathered from resources other than the training documents, such as lexical knowledge bases (e.g., Wikipedia) or the Web, with the goal of improving keyphrase extraction performance by exploiting external knowledge. Below we give an overview of the external resource-based features that have proven useful for keyphrase extraction. Wikipedia-based keyphraseness is computed as a candidate’s document frequency multiplied by the ratio of the number of Wikipedia articles where the candidate appears as a link to the number of articles where it appears (Medelyan et al., 2009). This feature is motivated by the observation that a candidate is likely to be a keyphrase if it occurs frequently as a link in Wikipedia. Unlike supervised keyphraseness, Wikipedia-based keyphraseness can be computed without using documents annotated with keyphrases and can work even if there is a mismatch between the training domain and the test domain. Yih et al. (2006) employ a feature that encodes whether a candidate keyphrase appears in the query log of a search engine, exploiting the observation that a candidate is potentially important if it was used as a search query. Terminological databases have been similarly exploited to encode the salience of candidate keyphrases in scientific papers (Lopez and Romary, 2010). While the aforementioned external resourcebased features attempt to encode how salient a candidate keyphrase is, Turney (2003) proposes features that encode the semantic relatedness between two candidate keyphrases. Noting that candidate keyphrases that are not semantically related to the predicted keyphrases are unlikely to be keyphrases in technical reports, Turney employs coherence features to identify such candidate keyphrases. Semantic relatedness is encoded in the coherence features as two candidate keyphrases’ pointwise mutual information, which Turney computes by using the Web as a corpus. 3.3 Unsupervised Approaches Existing unsupervised approaches to keyphrase extraction can be categorized into four groups. 3.3.1 Graph-Based Ranking Intuitively, keyphrase extraction is about finding the important words and phrases from a document. Traditionally, the importance of a candidate has often been defined in terms of how related it is to other candidates in the document. Informally, a candidate is important if it is related to (1) a large number of candidates and (2) candidates that are important. Researchers have computed relatedness between candidates using co-occurrence counts (Mihalcea and Tarau, 2004; Matsuo and Ishizuka, 2004) and semantic relatedness (Grineva et al., 2009), and represented the relatedness information collected from a document as a graph (Mihalcea and Tarau, 2004; Wan and Xiao, 2008a; Wan and Xiao, 2008b; Bougouin et al., 2013). The basic idea behind a graph-based approach is to build a graph from the input document and rank its nodes according to their importance using a graph-based ranking method (e.g., Brin and Page (1998)). Each node of the graph corresponds to a candidate keyphrase from the document and an edge connects two related candidates. The edge weight is proportional to the syntactic and/or semantic relevance between the connected candidates. For each node, each of its edges is treated as a “vote” from the other node connected by the edge. A node’s score in the graph is defined recursively in terms of the edges it has and the scores of the neighboring nodes. 
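A minimal sketch of this idea is given below: it builds an unweighted co-occurrence graph over pre-filtered candidate words and scores the nodes with PageRank via the NetworkX library. It is a simplified illustration of the general recipe under these assumptions, not a re-implementation of TextRank or any other specific system.

```python
# Minimal graph-based ranking sketch: nodes are candidate words, edges link words
# co-occurring within a small window, and node importance is the PageRank score.
# Assumes the NetworkX library; candidate filtering (stopwords, PoS) happens upstream.
import networkx as nx

def rank_candidates(candidate_words, window=3, top_k=10):
    graph = nx.Graph()
    graph.add_nodes_from(set(candidate_words))
    for i, word in enumerate(candidate_words):
        for other in candidate_words[i + 1:i + window]:
            if other != word:
                graph.add_edge(word, other)  # unweighted co-occurrence edge
    scores = nx.pagerank(graph)  # recursive "voting" over the graph
    return sorted(scores, key=scores.get, reverse=True)[:top_k]
```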
The top-ranked candidates from the graph are then selected as keyphrases for the input document. TextRank (Mihalcea and Tarau, 2004) is one of the most well-known graphbased approaches to keyphrase extraction. This instantiation of a graph-based approach overlooks an important aspect of keyphrase extraction, however. A set of keyphrases for a document should ideally cover the main topics discussed in it, but this instantiation does not guarantee that all the main topics will be represented by the extracted keyphrases. Despite this weakness, a graph-based representation of text was adopted by many approaches that propose different ways of computing the similarity between two candidates. 3.3.2 Topic-Based Clustering Another unsupervised approach to keyphrase extraction involves grouping the candidate keyphrases in a document into topics, such that each topic is composed of all and only those candidate keyphrases that are related to that topic (Grineva et al., 2009; Liu et al., 2009b; Liu et 1265 al., 2010). There are several motivations behind this topic-based clustering approach. First, a keyphrase should ideally be relevant to one or more main topic(s) discussed in a document (Liu et al., 2010; Liu et al., 2012). Second, the extracted keyphrases should be comprehensive in the sense that they should cover all the main topics in a document (Liu et al., 2009b; Liu et al., 2010; Liu et al., 2012). Below we examine three representative systems that adopt this approach. KeyCluster Liu et al. (2009b) adopt a clustering-based approach (henceforth KeyCluster) that clusters semantically similar candidates using Wikipedia and co-occurrence-based statistics. The underlying hypothesis is that each of these clusters corresponds to a topic covered in the document, and selecting the candidates close to the centroid of each cluster as keyphrases ensures that the resulting set of keyphrases covers all the topics of the document. While empirical results show that KeyCluster performs better than both TextRank and Hulth’s (2003) supervised system, KeyCluster has a potential drawback: by extracting keyphrases from each topic cluster, it essentially gives each topic equal importance. In practice, however, there could be topics that are not important and these topics should not have keyphrase(s) representing them. Topical PageRank (TPR) Liu et al. (2010) propose TPR, an approach that overcomes the aforementioned weakness of KeyCluster. It runs TextRank multiple times for a document, once for each of its topics induced by a Latent Dirichlet Allocation (Blei et al., 2003). By running TextRank once for each topic, TPR ensures that the extracted keyphrases cover the main topics of the document. The final score of a candidate is computed as the sum of its scores for each of the topics, weighted by the probability of that topic in that document. Hence, unlike KeyCluster, candidates belonging to a less probable topic are given less importance. TPR performs significantly better than both tf*idf and TextRank on the DUC-2001 and Inspec datasets. TPR’s superior performance strengthens the hypothesis of using topic clustering for keyphrase extraction. However, though TPR is conceptually better than KeyCluster, Liu et al. did not compare TPR against KeyCluster. CommunityCluster Grineva et al. (2009) propose CommunityCluster, a variant of the topic clustering approach to keyphrase extraction. 
Like TPR, CommunityCluster gives more weight to more important topics, but unlike TPR, it extracts all candidate keyphrases from an important topic, assuming that a candidate that receives little focus in the text should still be extracted as a keyphrase as long as it is related to an important topic. CommunityCluster yields much better recall (without losing precision) than extractors such as tf*idf, TextRank, and the Yahoo! term extractor. 3.3.3 Simultaneous Learning Since keyphrases represent a dense summary of a document, researchers hypothesized that text summarization and keyphrase extraction can potentially benefit from each other if these tasks are performed simultaneously. Zha (2002) proposes the first graph-based approach for simultaneous summarization and keyphrase extraction, motivated by a key observation: a sentence is important if it contains important words, and important words appear in important sentences. Wan et al. (2007) extend Zha’s work by adding two assumptions: (1) an important sentence is connected to other important sentences, and (2) an important word is linked to other important words, a TextRank-like assumption. Based on these assumptions, Wan et al. (2007) build three graphs to capture the association between the sentences (S) and the words (W) in an input document, namely, a S–S graph, a bipartite S–W graph, and a W–W graph. The weight of an edge connecting two sentence nodes in a S–S graph corresponds to their content similarity. An edge weight in a S–W graph denotes the word’s importance in the sentence it appears. Finally, an edge weight in a W–W graph denotes the co-occurrence or knowledge-based similarity between the two connected words. Once the graphs are constructed for an input document, an iterative reinforcement algorithm is applied to assign scores to each sentence and word. The top-scored words are used to form keyphrases. The main advantage of this approach is that it combines the strengths of both Zha’s approach (i.e., bipartite S–W graphs) and TextRank (i.e., W– W graphs) and performs better than both of them. However, it has a weakness: like TextRank, it does not ensure that the extracted keyphrases will cover all the main topics. To address this problem, one can employ a topic clustering algorithm on the W– W graph to produce the topic clusters, and then ensure that keyphrases are chosen from every main topic cluster. 1266 3.3.4 Language Modeling Many existing approaches have a separate, heuristic module for extracting candidate keyphrases prior to keyphrase ranking/extraction. In contrast, Tomokiyo and Hurst (2003) propose an approach (henceforth LMA) that combines these two steps. LMA scores a candidate keyphrase based on two features, namely, phraseness (i.e., the extent to which a word sequence can be treated as a phrase) and informativeness (i.e., the extent to which a word sequence captures the central idea of the document it appears in). Intuitively, a phrase that has high scores for phraseness and informativeness is likely to be a keyphrase. These feature values are estimated using language models (LMs) trained on a foreground corpus and a background corpus. The foreground corpus is composed of the set of documents from which keyphrases are to be extracted. The background corpus is a large corpus that encodes general knowledge about the world (e.g., the Web). A unigram LM and an ngram LM are constructed for each of these two corpora. 
Phraseness, defined using the foreground LM, is calculated as the loss of information incurred as a result of assuming a unigram LM (i.e., conditional independence among the words of the phrase) instead of an n-gram LM (i.e., the phrase is drawn from an n-gram LM). Informativeness is computed as the loss that results because of the assumption that the candidate is sampled from the background LM rather than the foreground LM. The loss values are computed using KullbackLeibler divergence. Candidates are ranked according to the sum of these two feature values. In sum, LMA uses a language model rather than heuristics to identify phrases, and relies on the language model trained on the background corpus to determine how “unique” a candidate keyphrase is to the domain represented by the foreground corpus. The more unique it is to the foreground’s domain, the more likely it is a keyphrase for that domain. While the use of language models to identify phrases cannot be considered a major strength of this approach (because heuristics can identify phrases fairly reliably), the use of a background corpus to identify candidates that are unique to the foreground’s domain is a unique aspect of this approach. We believe that this idea deserves further investigation, as it would allow us to discover a keyphrase that is unique to the foreground’s domain but may have a low tf*idf value. 4 Evaluation In this section, we describe metrics for evaluating keyphrase extraction systems as well as state-ofthe-art results on commonly-used datasets. 4.1 Evaluation Metrics Designing evaluation metrics for keyphrase extraction is by no means an easy task. To score the output of a keyphrase extraction system, the typical approach, which is also adopted by the SemEval-2010 shared task on keyphrase extraction, is (1) to create a mapping between the keyphrases in the gold standard and those in the system output using exact match, and then (2) score the output using evaluation metrics such as precision (P), recall (R), and F-score (F). Conceivably, exact match is an overly strict condition, considering a predicted keyphrase incorrect even if it is a variant of a gold keyphrase. For instance, given the gold keyphrase “neural network”, exact match will consider a predicted phrase incorrect even if it is an expanded version of the gold keyphrase (“artificial neural network”) or one of its morphological (“neural networks”) or lexical (“neural net”) variants. While morphological variations can be handled using a stemmer (ElBeltagy and Rafea, 2009), other variations may not be handled easily and reliably. Human evaluation has been suggested as a possibility (Matsuo and Ishizuka, 2004), but it is timeconsuming and expensive. For this reason, researchers have experimented with two types of automatic evaluation metrics. The first type of metrics addresses the problem with exact match. These metrics reward a partial match between a predicted keyphrase and a gold keyphrase (i.e., overlapping n-grams) and are commonly used in machine translation (MT) and summarization evaluations. They include BLEU, METEOR, NIST, and ROUGE. Nevertheless, experiments show that these MT metrics only offer a partial solution to problem with exact match: they can only detect a subset of the near-misses (Kim et al., 2010a). The second type of metrics focuses on how a system ranks its predictions. 
Given that two systems A and B have the same number of correct predictions, binary preference measure (Bpref) and mean reciprocal rank (MRR) (Liu et al., 2010) will award more credit to A than to B if the ranks of the correct predictions in A's output are higher than those in B's output. R-precision (Rp) is an IR metric that focuses on ranking: given a document with n gold keyphrases, it computes the precision of a system over its n highest-ranked candidates (Zesch and Gurevych, 2009). The motivation behind the design of Rp is simple: a system will achieve a perfect Rp value if it ranks all the keyphrases above the non-keyphrases.

4.2 The State of the Art Table 2 lists the best scores on some popular evaluation datasets and the corresponding systems. For example, the best F-scores on the Inspec test set, the DUC-2001 dataset, and the SemEval-2010 test set are 45.7, 31.7, and 27.5, respectively.3 Two points deserve mention. First, F-scores decrease as document length increases. These results are consistent with the observation we made in Section 2 that it is more difficult to extract keyphrases correctly from longer documents. Second, recent unsupervised approaches have rivaled their supervised counterparts in performance (Mihalcea and Tarau, 2004; El-Beltagy and Rafea, 2009; Liu et al., 2009b). For example, KP-Miner (El-Beltagy and Rafea, 2010), an unsupervised system, ranked third in the SemEval-2010 shared task with an F-score of 25.2, which is comparable to the best supervised system scoring 27.5.

Dataset                Approach and System                                                           Supervised?  P     R     F
Abstracts (Inspec)     Topic clustering (Liu et al., 2009b)                                          ×            35.0  66.0  45.7
Blogs                  Topic community detection (Grineva et al., 2009)                              ×            35.1  61.5  44.7
News (DUC-2001)        Graph-based ranking for extended neighborhood (Wan and Xiao, 2008b)           ×            28.8  35.4  31.7
Papers (SemEval-2010)  Statistical, semantic, and distributional features (Lopez and Romary, 2010)   ✓            27.2  27.8  27.5
Table 2: Best scores achieved on various datasets.

3 A more detailed analysis of the results of the SemEval-2010 shared task and the approaches adopted by the participating systems can be found in Kim et al. (2013).

5 Analysis With the goal of providing directions for future work, we identify the errors commonly made by state-of-the-art keyphrase extractors below.

5.1 Error Analysis Although a few researchers have presented a sample of their systems' output and the corresponding gold keyphrases to show the differences between them (Witten et al., 1999; Nguyen and Kan, 2007; Medelyan et al., 2009), a systematic analysis of the major types of errors made by state-of-the-art keyphrase extraction systems is missing. To fill this gap, we ran four keyphrase extraction systems on four commonly-used datasets of varying sources, including Inspec abstracts (Hulth, 2003), DUC-2001 news articles (Over, 2001), scientific papers (Kim et al., 2010b), and meeting transcripts (Liu et al., 2009a). Specifically, we randomly selected 25 documents from each of these four datasets and manually analyzed the output of the four systems, including tf*idf, the most frequently used baseline, as well as three state-of-the-art keyphrase extractors, of which two are unsupervised (Wan and Xiao, 2008b; Liu et al., 2009b) and one is supervised (Medelyan et al., 2009).
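As a concrete illustration of the scoring conventions discussed in Section 4.1, the sketch below computes exact-match precision, recall, and F-score for a set of predictions, together with R-precision over a ranked list. Stemming and near-miss handling are deliberately omitted; this is an assumed simplification for illustration, not the official SemEval-2010 scorer.

```python
# Illustrative scoring sketch (not an official scorer): exact-match precision,
# recall, and F-score over a prediction set, and R-precision over a ranked list.
def exact_match_prf(predicted, gold):
    predicted, gold = set(predicted), set(gold)
    correct = len(predicted & gold)
    p = correct / len(predicted) if predicted else 0.0
    r = correct / len(gold) if gold else 0.0
    f = 2 * p * r / (p + r) if p + r else 0.0
    return p, r, f

def r_precision(ranked_predictions, gold):
    n = len(gold)  # evaluate only the n highest-ranked candidates
    top_n = ranked_predictions[:n]
    return len(set(top_n) & set(gold)) / n if n else 0.0
```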
Our analysis reveals that the errors fall into four major types, each of which contributes significantly to the overall errors made by the four systems, despite the fact that the contribution of each of these error types varies from system to system. Moreover, we do not observe any significant difference between the types of errors made by the four systems other than the fact that the supervised system has the expected tendency to predict keyphrases seen in the training data. Below we describe these four major types of errors.

Overgeneration errors are a major type of precision error, contributing to 28–37% of the overall error. Overgeneration errors occur when a system correctly predicts a candidate as a keyphrase because it contains a word that appears frequently in the associated document, but at the same time erroneously outputs other candidates as keyphrases because they contain the same word. Recall that for many systems, it is not easy to reject a non-keyphrase containing a word with a high term frequency: many unsupervised systems score a candidate by summing the score of each of its component words, and many supervised systems use unigrams as features to represent a candidate. To be more concrete, consider the news article on athlete Ben Johnson in Figure 1, where the keyphrases are boldfaced. As we can see, the word Olympic(s) has a significant presence in the document. Consequently, many systems not only correctly predict Olympics as a keyphrase, but also erroneously predict Olympic movement as a keyphrase, yielding overgeneration errors.

Canadian Ben Johnson left the Olympics today “in a complete state of shock,” accused of cheating with drugs in the world's fastest 100-meter dash and stripped of his gold medal. The prize went to American Carl Lewis. Many athletes accepted the accusation that Johnson used a muscle-building but dangerous and illegal anabolic steroid called stanozolol as confirmation of what they said they know has been going on in track and field. Two tests of Johnson's urine sample proved positive and his denials of drug use were rejected today. “This is a blow for the Olympic Games and the Olympic movement,” said International Olympic Committee President Juan Antonio Samaranch.
Figure 1: A news article on Ben Johnson from the DUC-2001 dataset. The keyphrases are boldfaced.

Infrequency errors are a major type of recall error, contributing to 24–27% of the overall error. Infrequency errors occur when a system fails to identify a keyphrase owing to its infrequent presence in the associated document (Liu et al., 2011). Handling infrequency errors is a challenge because state-of-the-art keyphrase extractors rarely predict candidates that appear only once or twice in a document. In the Ben Johnson example, many keyphrase extractors fail to identify 100-meter dash and gold medal as keyphrases, resulting in infrequency errors.

Redundancy errors are a type of precision error contributing to 8–12% of the overall error. Redundancy errors occur when a system correctly identifies a candidate as a keyphrase, but at the same time outputs a semantically equivalent candidate (e.g., its alias) as a keyphrase. This type of error can be attributed to a system's failure to determine that two candidates are semantically equivalent. Nevertheless, some researchers may argue that a system should not be penalized for redundancy errors because the extracted candidates are in fact keyphrases.
In our example, Olympics and Olympic games refer to the same concept, so a system that predicts both of them as keyphrases commits a redundancy error. Evaluation errors are a type of recall error contributing to 7–10% of the overall error. An evaluation error occurs when a system outputs a candidate that is semantically equivalent to a gold keyphrase, but is considered erroneous by a scoring program because of its failure to recognize that the predicted phrase and the corresponding gold keyphrase are semantically equivalent. In other words, an evaluation error is not an error made by a keyphrase extractor, but an error due to the naivety of a scoring program. In our example, while Olympics and Olympic games refer to the same concept, only the former is annotated as keyphrase. Hence, an evaluation error occurs if a system predicts Olympic games but not Olympics as a keyphrase and the scoring program fails to identify them as semantically equivalent. 5.2 Recommendations We recommend that background knowledge be extracted from external lexical databases (e.g., YAGO2 (Suchanek et al., 2007), Freebase (Bollacker et al., 2008), BabelNet (Navigli and Ponzetto, 2012)) to address the four types of errors discussed above. First, we discuss how redundancy errors could be addressed by using the background knowledge extracted from external databases. Note that if we can identify semantically equivalent candidates, then we can reduce redundancy errors. The question, then, is: can background knowledge be used to help us identify semantically equivalent candidates? To answer this question, note that Freebase, for instance, has over 40 million topics (i.e., realworld entities such as people, places, and things) from over 70 domains (e.g., music, business, education). Hence, before a system outputs a set of candidates as keyphrases, it can use Freebase to determine whether any of them is mapped to the same Freebase topic. Referring back to our running example, both Olympics and Olympic games are mapped to a Freebase topic called Olympic games. Based on this information, a keyphrase extractor can determine that the two candidates are aliases and should output only one of them, thus preventing a redundancy error. Next, we discuss how infrequency errors could be addressed using background knowledge. A natural way to handle this problem would be to make an infrequent keyphrase frequent. To accomplish this, we suggest exploiting an influential idea in the keyphrase extraction literature: the importance of a candidate is defined in terms of how related it is to other candidates in the text (see Section 3.3.1). In other words, if we could relate an infrequent keyphrase to other candidates in the text, we could boost its importance. We believe that this could be accomplished using background knowledge. The idea is to boost the importance of infrequent keyphrases using their frequent counterparts. Consider again our running example. All four systems have managed to identify Ben Johnson as a keyphrase due to its 1269 significant presence. Hence, we can boost the importance of 100-meter dash and gold medal if we can relate them to Ben Johnson. To do so, note that Freebase maps a candidate to one or more pre-defined topics, each of which is associated with one or more types. Types are similar to entity classes. For instance, the candidate Ben Johnson is mapped to a Freebase topic with the same name, which is associated with Freebase types such as Person, Athlete, and Olympic athlete. 
Types are defined for a specific domain in Freebase. For instance, Person, Athlete, and Olympic athlete are defined in the People, Sports, and Olympics domains, respectively. Next, consider the two infrequent candidates, 100-meter dash and gold medal. 100-meter dash is mapped to the topic Sprint of type Sports in the Sports domain, whereas gold medal is mapped to a topic with the same name of type Olympic medal in the Olympics domain. Consequently, we can relate 100-meter dash to Ben Johnson via the Sports domain (i.e., they belong to different types under the same domain). Additionally, gold medal can be related to Ben Johnson via the Olympics domain. As discussed before, the relationship between two candidates is traditionally established using co-occurrence information. However, using cooccurrence windows has its shortcomings. First, an ad-hoc window size cannot capture related candidates that are not inside the window. So it is difficult to predict 100-meter dash and gold medal as keyphrases: they are more than 10 tokens away from frequent words such as Johnson and Olympics. Second, the candidates inside a window are all assumed to be related to each other, but it is apparently an overly simplistic assumption. There have been a few attempts to design Wikipediabased relatedness measures, with promising initial results (Grineva et al., 2009; Liu et al., 2009b; Medelyan et al., 2009).4 Overgeneration errors could similarly be addressed using background knowledge. Recall that Olympic movement is not a keyphrase in our example although it includes an important word (i.e., Olympic). Freebase maps Olympic movement to a topic with the same name, which is associated with a type called Musical Recording in the Music domain. However, it does not map Olympic 4Note that it may be difficult to employ our recommendations to address infrequency errors in informal text with uncorrelated topics because the keyphrases it contains may not be related to each other (see Section 2). movement to any topic in the Olympics domain. The absence of such a mapping in the Olympics domain could be used by a keyphrase extractor as a supporting evidence against predicting Olympic movement as a keyphrase. Finally, as mentioned before, evaluation errors should not be considered errors made by a system. Nevertheless, they reveal a problem with the way keyphrase extractors are currently evaluated. To address this problem, one possibility is to conduct human evaluations. Cheaper alternatives include having human annotators identify semantically equivalent keyphrases during manual labeling, and designing scoring programs that can automatically identify such semantic equivalences. 6 Conclusion and Future Directions We have presented a survey of the state of the art in automatic keyphrase extraction. While unsupervised approaches have started to rival their supervised counterparts in performance, the task is far from being solved, as reflected by the fairly poor state-of-the-art results on various commonlyused evaluation datasets. Our analysis revealed that there are at least three major challenges ahead. 1. Incorporating background knowledge. While much recent work has focused on algorithmic development, keyphrase extractors need to have a deeper “understanding” of a document in order to reach the next level of performance. Such an understanding can be facilitated by the incorporation of background knowledge. 2. Handling long documents. 
While it may be possible to design better algorithms to handle the large number of candidates in long documents, we believe that employing sophisticated features, especially those that encode background knowledge, will enable keyphrases and non-keyphrases to be distinguished more easily even in the presence of a large number of candidates. 3. Improving evaluation schemes. To more accurately measure the performance of keyphrase extractors, they should not be penalized for evaluation errors. We have suggested several possibilities as to how this problem can be addressed. Acknowledgments We thank the anonymous reviewers for their detailed and insightful comments on earlier drafts of this paper. This work was supported in part by NSF Grants IIS-1147644 and IIS-1219142. 1270 References Ken Barker and Nadia Cornacchia. 2000. Using noun phrase heads to extract document keyphrases. In Proceedings of the 13th Biennial Conference of the Canadian Society on Computational Studies of Intelligence, pages 40–52. G´abor Berend. 2011. Opinion expression mining by exploiting keyphrase extraction. In Proceedings of the 5th International Joint Conference on Natural Language Processing, pages 1162–1170. David M. Blei, Andrew Y. Ng, and Michael I. Jordan. 2003. Latent Dirichlet allocation. Journal of Machine Learning Research, 3:993–1022. Kurt Bollacker, Colin Evans, Praveen Paritosh, Tim Sturge, and Jamie Taylor. 2008. Freebase: A collaboratively created graph database for structuring human knowledge. In Proceedings of the 2008 ACM SIGMOD International Conference on Management of Data, pages 1247–1250. Adrien Bougouin, Florian Boudin, and B´eatrice Daille. 2013. Topicrank: Graph-based topic ranking for keyphrase extraction. In Proceedings of the 6th International Joint Conference on Natural Language Processing, pages 543–551. Sergey Brin and Lawrence Page. 1998. The anatomy of a large-scale hypertextual Web search engine. Computer Networks, 30(1–7):107–117. Mo Chen, Jian-Tao Sun, Hua-Jun Zeng, and Kwok-Yan Lam. 2005. A practical system of keyphrase extraction for web pages. In Proceedings of the 14th ACM International Conference on Information and Knowledge Management, pages 277–278. Zhuoye Ding, Qi Zhang, and Xuanjing Huang. 2011. Keyphrase extraction from online news using binary integer programming. In Proceedings of the 5th International Joint Conference on Natural Language Processing, pages 165–173. Mark Dredze, Hanna M. Wallach, Danny Puller, and Fernando Pereira. 2008. Generating summary keywords for emails using topics. In Proceedings of the 13th International Conference on Intelligent User Interfaces, pages 199–206. Samhaa R. El-Beltagy and Ahmed A. Rafea. 2009. KP-Miner: A keyphrase extraction system for English and Arabic documents. Information Systems, 34(1):132–144. Samhaa R. El-Beltagy and Ahmed Rafea. 2010. KPMiner: Participation in SemEval-2. In Proceedings of the 5th International Workshop on Semantic Evaluation, pages 190–193. Eibe Frank, Gordon W. Paynter, Ian H. Witten, Carl Gutwin, and Craig G. Nevill-Manning. 1999. Domain-specific keyphrase extraction. In Proceedings of 16th International Joint Conference on Artificial Intelligence, pages 668–673. Maria Grineva, Maxim Grinev, and Dmitry Lizorkin. 2009. Extracting key terms from noisy and multitheme documents. In Proceedings of the 18th International Conference on World Wide Web, pages 661–670. Carl Gutwin, Gordon Paynter, Ian Witten, Craig NevillManning, and Eibe Frank. 1999. Improving browsing in digital libraries with keyphrase indexes. 
Proceedings of the 52nd Annual Meeting of the Association for Computational Linguistics, pages 122–132, Baltimore, Maryland, USA, June 23-25 2014. c⃝2014 Association for Computational Linguistics Learning New Semi-Supervised Deep Auto-encoder Features for Statistical Machine Translation Shixiang Lu, Zhenbiao Chen, Bo Xu Interactive Digital Media Technology Research Center (IDMTech) Institute of Automation, Chinese Academy of Sciences, Beijing, China {shixiang.lu,zhenbiao.chen,xubo}@ia.ac.cn Abstract In this paper, instead of designing new features based on intuition, linguistic knowledge and domain, we learn some new and effective features using the deep autoencoder (DAE) paradigm for phrase-based translation model. Using the unsupervised pre-trained deep belief net (DBN) to initialize DAE’s parameters and using the input original phrase features as a teacher for semi-supervised fine-tuning, we learn new semi-supervised DAE features, which are more effective and stable than the unsupervised DBN features. Moreover, to learn high dimensional feature representation, we introduce a natural horizontal composition of more DAEs for large hidden layers feature learning. On two ChineseEnglish tasks, our semi-supervised DAE features obtain statistically significant improvements of 1.34/2.45 (IWSLT) and 0.82/1.52 (NIST) BLEU points over the unsupervised DBN features and the baseline features, respectively. 1 Introduction Recently, many new features have been explored for SMT and significant performance have been obtained in terms of translation quality, such as syntactic features, sparse features, and reordering features. However, most of these features are manually designed on linguistic phenomena that are related to bilingual language pairs, thus they are very difficult to devise and estimate. Instead of designing new features based on intuition, linguistic knowledge and domain, for the first time, Maskey and Zhou (2012) explored the possibility of inducing new features in an unsupervised fashion using deep belief net (DBN) (Hinton et al., 2006) for hierarchical phrase-based translation model. Using the 4 original phrase features in the phrase table as the input features, they pre-trained the DBN by contrastive divergence (Hinton, 2002), and generated new unsupervised DBN features using forward computation. These new features are appended as extra features to the phrase table for the translation decoder. However, the above approach has two major shortcomings. First, the input original features for the DBN feature learning are too simple, the limited 4 phrase features of each phrase pair, such as bidirectional phrase translation probability and bidirectional lexical weighting (Koehn et al., 2003), which are a bottleneck for learning effective feature representation. Second, it only uses the unsupervised layer-wise pre-training of DBN built with stacked sets of Restricted Boltzmann Machines (RBM) (Hinton, 2002), does not have a training objective, so its performance relies on the empirical parameters. Thus, this approach is unstable and the improvement is limited. In this paper, we strive to effectively address the above two shortcomings, and systematically explore the possibility of learning new features using deep (multilayer) neural networks (DNN, which is usually referred under the name Deep Learning) for SMT. 
To address the first shortcoming, we adapt and extend some simple but effective phrase features as the input features for new DNN feature learning, and these features have been shown significant improvement for SMT, such as, phrase pair similarity (Zhao et al., 2004), phrase frequency, phrase length (Hopkins and May, 2011), and phrase generative probability (Foster et al., 2010), which also show further improvement for new phrase feature learning in our experiments. To address the second shortcoming, inspired by the successful use of DAEs for handwritten digits recognition (Hinton and Salakhutdinov, 2006; Hinton et al., 2006), information retrieval (Salakhutdinov and Hinton, 2009; Mirowski et 122 al., 2010), and speech spectrograms (Deng et al., 2010), we propose new feature learning using semi-supervised DAE for phrase-based translation model. By using the input data as the teacher, the “semi-supervised” fine-tuning process of DAE addresses the problem of “back-propagation without a teacher” (Rumelhart et al., 1986), which makes the DAE learn more powerful and abstract features (Hinton and Salakhutdinov, 2006). For our semisupervised DAE feature learning task, we use the unsupervised pre-trained DBN to initialize DAE’s parameters and use the input original phrase features as the “teacher” for semi-supervised backpropagation. Compared with the unsupervised DBN features, our semi-supervised DAE features are more effective and stable. Moreover, to learn high dimensional feature representation, we introduce a natural horizontal composition for DAEs (HCDAE) that can be used to create large hidden layer representations simply by horizontally combining two (or more) DAEs (Baldi, 2012), which shows further improvement compared with single DAE in our experiments. It is encouraging that, non-parametric feature expansion using gaussian mixture model (GMM) (Nguyen et al., 2007), which guarantees invariance to the specific embodiment of the original features, has been proved as a feasible feature generation approach for SMT. Deep models such as DNN have the potential to be much more representationally efficient for feature learning than shallow models like GMM. Thus, instead of GMM, we use DNN (DBN, DAE and HCDAE) to learn new non-parametric features, which has the similar evolution in speech recognition (Dahl et al., 2012; Hinton et al., 2012). DNN features are learned from the non-linear combination of the input original features, they strong capture highorder correlations between the activities of the original features, and we believe this deep learning paradigm induces the original features to further reach their potential for SMT. Finally, we conduct large-scale experiments on IWSLT and NIST Chinese-English translation tasks, respectively, and the results demonstrate that our solutions solve the two aforementioned shortcomings successfully. Our semi-supervised DAE features significantly outperform the unsupervised DBN features and the baseline features, and our introduced input phrase features significantly improve the performance of DAE feature learning. The remainder of this paper is organized as follows. Section 2 briefly summarizes the recent related work about the applications of DNN for SMT tasks. Section 3 presents our introduced input features for DNN feature learning. Section 4 describes how to learn our semi-supervised DAE features for SMT. Section 5 describes and discusses the large-scale experimental results. Finally, we end with conclusions in section 6. 
2 Related Work

Recently, there has been growing interest in the use of DNN for SMT tasks. Le et al. (2012) improved the translation quality of the n-gram translation model by using a bilingual neural LM, where translation probabilities are estimated using a continuous representation of translation units in lieu of standard discrete representations. Kalchbrenner and Blunsom (2013) introduced recurrent continuous translation models, a class of purely continuous sentence-level translation models. Auli et al. (2013) presented a joint language and translation model based on a recurrent neural network, which predicts target words based on an unbounded history of both source and target words. Liu et al. (2013) went beyond the log-linear model for SMT and proposed a novel additive neural network based translation model, which overcomes some of the shortcomings of the log-linear model: linearity and the lack of deep interpretation and representation in the features. Li et al. (2013) presented an ITG reordering classifier based on recursive auto-encoders and generated vector space representations for variable-sized phrases, which enables order predictions that exploit syntactic and semantic information. Lu et al. (2014) adapted and extended the max-margin based RNN (Socher et al., 2011) to HPB translation with forced decoding and tree conversion, and proposed an RNN-based word topology model for HPB translation, which successfully captures the topological structure of the words on the source side in a syntactically and semantically meaningful order.

However, none of these works has focused on learning new features automatically from the input data, even though learning suitable features (representations) has been the main strength of DNN since it was proposed. In this paper, we systematically explore the possibility of learning new features using DNN for SMT.

3 Input Features for DNN Feature Learning

The phrase-based translation model (Koehn et al., 2003; Och and Ney, 2004) has demonstrated superior performance and been widely used in current SMT systems, and we build our implementation on this translation model. Next, we adapt and extend some original phrase features as the input features for DAE feature learning.

3.1 Baseline phrase features

We assume that the source phrase $f = f_1, \cdots, f_{l_f}$ and the target phrase $e = e_1, \cdots, e_{l_e}$ include $l_f$ and $l_e$ words, respectively. Following (Maskey and Zhou, 2012), we use the following 4 phrase features of each phrase pair (Koehn et al., 2003) in the phrase table as the first type of input features: bidirectional phrase translation probability ($P(e|f)$ and $P(f|e)$) and bidirectional lexical weighting ($Lex(e|f)$ and $Lex(f|e)$),
$$X_1 \rightarrow P(f|e),\ Lex(f|e),\ P(e|f),\ Lex(e|f)$$

3.2 Phrase pair similarity

Zhao et al. (2004) proposed a way of using term-weight-based models in a vector space as additional evidence for phrase pair translation quality. This model employs phrase pair similarity to encode the weights of content and non-content words in phrase translation pairs. Following (Zhao et al., 2004), we calculate bidirectional phrase pair similarity using cosine distance and BM25 distance as
$$S^{cos}_i(e, f) = \frac{\sum_{j=1}^{l_e} \sum_{i=1}^{l_f} w_{e_j}\, p(e_j|f_i)\, w_{f_i}}{\sqrt{\sum_{j=1}^{l_e} w_{e_j}^2}\ \sqrt{\sum_{j=1}^{l_e} (w_{e_j}^a)^2}}$$
$$S^{cos}_d(f, e) = \frac{\sum_{i=1}^{l_f} \sum_{j=1}^{l_e} w_{f_i}\, p(f_i|e_j)\, w_{e_j}}{\sqrt{\sum_{i=1}^{l_f} w_{f_i}^2}\ \sqrt{\sum_{i=1}^{l_f} (w_{f_i}^a)^2}}$$
where $p(e_j|f_i)$ and $p(f_i|e_j)$ represent the bidirectional word translation probabilities, $w_{f_i}$ and $w_{e_j}$ are the term weights for the source and target words, and $w_{e_j}^a$ and $w_{f_i}^a$ are the transformed weights mapped from all source/target words to the target/source dimension at words $e_j$ and $f_i$, respectively.
$$S^{bm25}_i(e, f) = \sum_{i=1}^{l_f} idf_{f_i}\, \frac{(k_1 + 1)\, w_{f_i}\, (k_3 + 1)\, w_{f_i}^a}{(K + w_{f_i})(k_3 + w_{f_i}^a)}$$
$$S^{bm25}_d(f, e) = \sum_{j=1}^{l_e} idf_{e_j}\, \frac{(k_1 + 1)\, w_{e_j}\, (k_3 + 1)\, w_{e_j}^a}{(K + w_{e_j})(k_3 + w_{e_j}^a)}$$
where $k_1$, $b$ and $k_3$ are set to 1, 1 and 1000, respectively, $K = k_1((1 - b) + J/avg(l))$, $J$ is the phrase length ($l_e$ or $l_f$), and $avg(l)$ is the average phrase length. Thus, we have the second type of input features
$$X_2 \rightarrow S^{cos}_i(f, e),\ S^{bm25}_i(f, e),\ S^{cos}_d(e, f),\ S^{bm25}_d(e, f)$$

3.3 Phrase generative probability

We adapt and extend bidirectional phrase generative probabilities as input features, which have been used for domain adaptation (Foster et al., 2010). According to the background LMs, we estimate the bidirectional (source/target side) forward and backward phrase generative probabilities as
$$P_f(f) = P(f_1)\, P(f_2|f_1) \cdots P(f_{l_f}|f_{l_f-n+1}, \cdots, f_{l_f-1})$$
$$P_f(e) = P(e_1)\, P(e_2|e_1) \cdots P(e_{l_e}|e_{l_e-n+1}, \cdots, e_{l_e-1})$$
$$P_b(f) = P(f_{l_f})\, P(f_{l_f-1}|f_{l_f}) \cdots P(f_1|f_n, \cdots, f_2)$$
$$P_b(e) = P(e_{l_e})\, P(e_{l_e-1}|e_{l_e}) \cdots P(e_1|e_n, \cdots, e_2)$$
where the bidirectional forward and backward background 4-gram LMs are trained on the corresponding sides of the bilingual corpus (the same corpus used to train the translation model in our experiments; see Section 5.1). The backward LM was introduced by Xiong et al. (2011); it captures both the preceding and succeeding contexts of the current word, and we estimate it by inverting the word order of each sentence in the training data. Then, we have the third type of input features
$$X_3 \rightarrow P_f(e),\ P_b(e),\ P_f(f),\ P_b(f)$$

3.4 Phrase frequency

We consider bidirectional phrase frequency as input features and estimate them as
$$P(f) = \frac{count(f)}{\sum_{|f_i| = |f|} count(f_i)} \qquad P(e) = \frac{count(e)}{\sum_{|e_j| = |e|} count(e_j)}$$
where $count(f)$/$count(e)$ is the total number of times phrase $f$/$e$ appears in the source/target side of the bilingual corpus, and the denominator is the total count of phrases whose length equals $|f|$/$|e|$, respectively. Then, we have the fourth type of input features
$$X_4 \rightarrow P(f),\ P(e)$$

3.5 Phrase length

Phrase length plays an important role in the translation process (Koehn, 2010; Hopkins and May, 2011). We normalize the bidirectional phrase lengths by the maximum phrase length and introduce them as the last type of input features
$$X_5 \rightarrow l^n_e,\ l^n_f$$

In summary, in addition to the first type of phrase features $X_1$ used by (Maskey and Zhou, 2012), we introduce another four types of effective phrase features $X_2$, $X_3$, $X_4$ and $X_5$. The input original phrase features $X$ thus comprise 16 features in our experiments,
$$X \rightarrow X_1, X_2, X_3, X_4, X_5$$
We build the DAE network with 16 visible nodes in its first layer, where each visible node $v_i$ corresponds to one of the original features $X$ of a phrase pair.

4 Semi-Supervised Deep Auto-encoder Feature Learning for SMT

Each translation rule in the phrase-based translation model has a set number of features that are combined in the log-linear model (Och and Ney, 2002), and our semi-supervised DAE features can also be combined in this model. In this section, we design our DAE network with various network structures for new feature learning.

4.1 Learning a Deep Belief Net

Inspired by (Maskey and Zhou, 2012), we first learn a deep generative model for feature learning using a DBN.
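As a concrete recap of Section 3, the sketch below shows how the 16-dimensional input vector $X$ might be assembled for a single phrase pair before it is fed to the network. It is a minimal illustration under stated assumptions rather than the authors' implementation: the `lm.score(word, context)` interface, the dictionary-based word-translation table, and the maximum phrase length of 7 are invented for the example.

```python
import math
from typing import Dict, List, Sequence

def cosine_sim(src_w: Dict[str, float], tgt_w: Dict[str, float],
               trans: Dict[str, Dict[str, float]]) -> float:
    """Term-weight cosine between a target phrase and a source phrase mapped
    into the target dimension via word translation probabilities p(e|f)."""
    num = sum(tgt_w[e] * trans.get(f, {}).get(e, 0.0) * src_w[f]
              for e in tgt_w for f in src_w)
    mapped = {e: sum(trans.get(f, {}).get(e, 0.0) * src_w[f] for f in src_w)
              for e in tgt_w}
    den = math.sqrt(sum(w * w for w in tgt_w.values())) * \
          math.sqrt(sum(w * w for w in mapped.values()))
    return num / den if den else 0.0

def phrase_logprob(words: Sequence[str], lm) -> float:
    """Forward generative log-probability under a background 4-gram LM;
    lm.score(word, context) is an assumed interface returning log P(w|ctx)."""
    return sum(lm.score(w, list(words[max(0, i - 3):i]))
               for i, w in enumerate(words))

def input_vector(x1: List[float], sim: List[float], gen: List[float],
                 freq: List[float], tgt_len: int, src_len: int,
                 max_len: int = 7) -> List[float]:
    """Concatenate X1..X5 into the 16-dimensional input X for one phrase pair."""
    assert len(x1) == 4 and len(sim) == 4 and len(gen) == 4 and len(freq) == 2
    x5 = [tgt_len / max_len, src_len / max_len]   # normalised phrase lengths
    return x1 + sim + gen + freq + x5             # 4 + 4 + 4 + 2 + 2 = 16

# Toy call with invented numbers, just to show the shape of the vector.
x = input_vector(x1=[0.50, 0.40, 0.45, 0.35], sim=[0.70, 3.2, 0.65, 2.9],
                 gen=[-12.1, -11.8, -13.0, -12.6], freq=[2e-6, 3e-6],
                 tgt_len=2, src_len=3)
assert len(x) == 16
```

The resulting 16-dimensional vectors form the visible layer of the networks described next.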
A DBN is composed of multiple layers of latent variables, with the first layer representing the visible feature vectors; it is built with stacked sets of RBMs (Hinton, 2002). For an RBM, there is full connectivity between layers, but no connections within either layer. The connection weights $W$, hidden-layer biases $c$ and visible-layer biases $b$ can be learned efficiently using contrastive divergence (Hinton, 2002; Carreira-Perpinan and Hinton, 2005). Given a hidden layer $h$, the factorial conditional distribution of the visible layer $v$ can be estimated by
$$P(v = 1|h) = \sigma(b + h^T W^T)$$
where $\sigma$ denotes the logistic sigmoid. Given $v$, the element-wise conditional distribution of $h$ is
$$P(h = 1|v) = \sigma(c + v^T W)$$

Figure 1: Pre-training consists of learning a stack of RBMs, and these RBMs create an unsupervised DBN.

The two conditional distributions can be shown to correspond to the generative model
$$P(v, h) = \frac{1}{Z} \exp(-E(v, h)), \qquad Z = \sum_{v,h} e^{-E(v,h)}$$
$$E(v, h) = -b^T v - c^T h - v^T W h$$
After learning the first RBM, we treat the activation probabilities of its hidden units, when they are being driven by data, as the data for training a second RBM. Similarly, an $n$-th RBM is built on the output of the $(n-1)$-th one, and so on until a sufficiently deep architecture is created. These $n$ RBMs can then be composed to form a DBN in which it is easy to infer the states of the $n$-th layer of hidden units from the input in a single forward pass (Hinton et al., 2006), as shown in Figure 1. This greedy, layer-by-layer pre-training can be repeated several times to learn a deep, hierarchical model (DBN) in which each layer of features captures strong high-order correlations between the activities of the features in the layer below.

To deal with the real-valued input features $X$ in our task, we use an RBM with Gaussian visible units (GRBM) (Dahl et al., 2012) with a variance of 1 on each dimension. Hence, $P(v|h)$ and $E(v, h)$ in the first RBM of the DBN need to be modified as
$$P(v|h) = \mathcal{N}(v;\ b + h^T W^T,\ I)$$
$$E(v, h) = \tfrac{1}{2}(v - b)^T (v - b) - c^T h - v^T W h$$
where $I$ is the appropriate identity matrix.

Figure 2: After the unsupervised pre-training, the DBNs are "unrolled" to create a semi-supervised DAE, which is then fine-tuned using back-propagation of error derivatives.

To speed up the pre-training, we subdivide the phrase pairs (with features $X$) in the phrase table into small mini-batches, each containing 100 cases, and update the weights after each mini-batch. Each layer is greedily pre-trained for 50 epochs through the entire set of phrase pairs. The weights are updated using a learning rate of 0.1, momentum of 0.9, and a weight decay of 0.0002 × weight × learning rate. The weight matrices $W$ are initialized with small random values sampled from a zero-mean normal distribution with variance 0.01. After the pre-training, for each phrase pair in the phrase table, we generate the DBN features (Maskey and Zhou, 2012) by passing the original phrase features $X$ through the DBN using forward computation.

4.2 From DBN to Deep Auto-encoder

To learn a semi-supervised DAE, we first "unroll" the above $n$-layer DBN by using its weight matrices to create a deep, $(2n-1)$-layer network whose lower layers use the matrices to "encode" the input and whose upper layers use the matrices in reverse order to "decode" the input (Hinton and Salakhutdinov, 2006; Salakhutdinov and Hinton, 2009; Deng et al., 2010), as shown in Figure 2.
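For concreteness, the following NumPy sketch implements the greedy CD-1 pre-training of a small RBM stack (Gaussian visible units in the first layer, logistic units above) and the "encoder" forward pass that produces the learned features. It is a simplified illustration under assumptions — toy data, toy hyperparameters, no momentum or weight decay, and none of the reconstruction fine-tuning described next — not the authors' code.

```python
import numpy as np

rng = np.random.default_rng(0)

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

class RBM:
    """One RBM layer; Gaussian visible units (unit variance) if requested."""
    def __init__(self, n_vis, n_hid, gaussian_visible=False):
        self.W = 0.01 * rng.standard_normal((n_vis, n_hid))
        self.b = np.zeros(n_vis)            # visible biases
        self.c = np.zeros(n_hid)            # hidden biases
        self.gaussian = gaussian_visible

    def hidden_prob(self, v):
        return sigmoid(self.c + v @ self.W)

    def cd1_update(self, v, lr=0.1):
        """One CD-1 step on a mini-batch v of shape (batch, n_vis)."""
        h_prob = self.hidden_prob(v)
        h_samp = (rng.random(h_prob.shape) < h_prob).astype(float)
        v_recon = self.b + h_samp @ self.W.T
        if not self.gaussian:
            v_recon = sigmoid(v_recon)
        h_recon = self.hidden_prob(v_recon)
        n = v.shape[0]
        self.W += lr * (v.T @ h_prob - v_recon.T @ h_recon) / n
        self.b += lr * (v - v_recon).mean(axis=0)
        self.c += lr * (h_prob - h_recon).mean(axis=0)

def pretrain_dbn(data, layer_sizes, epochs=50, batch=100):
    """Greedy layer-wise pre-training; returns the list of trained RBMs."""
    rbms, inp = [], data
    for i, (n_vis, n_hid) in enumerate(zip(layer_sizes[:-1], layer_sizes[1:])):
        rbm = RBM(n_vis, n_hid, gaussian_visible=(i == 0))
        for _ in range(epochs):
            for s in range(0, len(inp), batch):
                rbm.cd1_update(inp[s:s + batch])
        rbms.append(rbm)
        inp = rbm.hidden_prob(inp)          # hidden probs feed the next layer
    return rbms

def encode(rbms, x):
    """'Encoder' half of the unrolled DAE: a single forward pass."""
    for rbm in rbms:
        x = rbm.hidden_prob(x)
    return x

# Toy run: random 16-dimensional "phrase features", architecture 16-32-16-2.
X = rng.standard_normal((1000, 16))
dbn = pretrain_dbn(X, [16, 32, 16, 2], epochs=5)
codes = encode(dbn, X)                      # 2-dimensional learned features
```

The horizontal composition of Section 4.3 below then amounts to concatenating the codes of two such stacks trained from different initialisations or data fractions, e.g. np.hstack([codes_a, codes_b]).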
The layer-wise learning of DBN as above must be treated as a pre-training stage that finds a good region of the parameter space, which is used to initialize our DAE’s parameters. Starting in this region, the DAE is then fine-tuned using average squared error (between the output and input) backpropagation to minimize reconstruction error, as to make its output as equal as possible to its input. For the fine-tuning of DAE, we use the method of conjugate gradients on larger mini-batches of 1000 cases, with three line searches performed for each mini-batch in each epoch. To determine an adequate number of epochs and to avoid overfitting, we fine-tune on a fraction phrase table and test performance on the remaining validation phrase table, and then repeat fine-tuning on the entire phrase table for 100 epochs. We experiment with various values for the noise variance and the threshold, as well as the learning rate, momentum, and weight-decay parameters used in the pre-training, the batch size and epochs in the fine-tuning. Our results are fairly robust to variations in these parameters. The precise weights found by the pre-training do not matter as long as it finds a good region of the parameter space from which to start the fine-tuning. The fine-tuning makes the feature representation in the central layer of the DAE work much better (Salakhutdinov and Hinton, 2009). After the fine-tuning, for each phrase pair in the phrase table, we estimate our DAE features by passing the original phrase features X through the “encoder” part of the DAE using forward computation. To combine these learned features (DBN and DAE feature) into the log-linear model, we need to eliminate the impact of the non-linear learning mechanism. Following (Maskey and Zhou, 2012), these learned features are normalized by the average of each dimensional respective feature set. Then, we append these features for each phrase pair to the phrase table as extra features. 4.3 Horizontal Composition of Deep Auto-encoders (HCDAE) Although DAE can learn more powerful and abstract feature representation, the learned features usually have smaller dimensionality compared with the dimensionality of the input features, such as the successful use for handwritten digits recognition (Hinton and Salakhutdinov, 2006; Hinton et al., 2006), information retrieval (Salakhutdinov and Hinton, 2009; Mirowski et al., 2010), and 126 Figure 3: Horizontal composition of DAEs to expand high-dimensional features learning. speech spectrograms (Deng et al., 2010). Moreover, although we have introduced another four types of phrase features (X2, X3, X4 and X5), the only 16 features in X are a bottleneck for learning large hidden layers feature representation, because it has limited information, the performance of the high-dimensional DAE features which are directly learned from single DAE is not very satisfactory. To learn high-dimensional feature representation and to further improve the performance, we introduce a natural horizontal composition for DAEs that can be used to create large hidden layer representations simply by horizontally combining two (or more) DAEs (Baldi, 2012), as shown in Figure 3. Two single DAEs with architectures 16/m1/16 and 16/m2/16 can be trained and the hidden layers can be combined to yield an expanded hidden feature representation of size m1 + m2, which can then be fed to the subsequent layers of the overall architecture. Thus, these new m1 + m2-dimensional DAE features are added as extra features to the phrase table. 
Differences in m1- and m2-dimensional hidden representations could be introduced by many different mechanisms (e.g., learning algorithms, initializations, training samples, learning rates, or distortion measures) (Baldi, 2012). In our task, we introduce differences by using different initializations and different fractions of the phrase table. 4-16-8-2 4-16-8-4 4-16-16-8 4-16-8-4-2 4-16-16-8-4 4-16-16-8-8 4-16-16-8-4-2 4-16-16-8-8-4 4-16-16-16-8-8 4-16-16-8-8-4-2 4-16-16-16-8-8-4 4-16-16-16-16-8-8 6-16-8-2 6-16-8-4 6-16-16-8 6-16-8-4-2 6-16-16-8-4 6-16-16-8-8 6-16-16-8-4-2 6-16-16-8-8-4 6-16-16-16-8-8 6-16-16-16-8-4-2 6-16-16-16-8-8-4 6-16-16-16-16-8-8 8-16-8-2 8-16-8-4 8-16-16-8 8-16-8-4-2 8-16-16-8-4 8-16-16-8-8 8-16-16-8-4-2 8-16-16-8-8-4 8-16-16-16-8-8 8-16-16-16-8-4-2 8-16-16-16-8-8-4 8-16-16-16-16-8-8 16-32-16-2 16-32-16-4 16-32-16-8 16-32-16-8-2 16-32-16-8-4 16-32-32-16-8 16-32-16-8-4-2 16-32-32-16-8-4 16-32-32-16-16-8 16-32-32-16-8-4-2 16-32-32-16-16-8-4 16-32-32-32-16-16-8 Table 1: Details of the used network structure. For example, the architecture 16-32-16-2 (4 layers’ network depth) corresponds to the DAE with 16-dimensional input features (X) (input layer), 32/16 hidden units (first/second hidden layer), and 2-dimensional output features (new DAE features) (output layer). During the fine-tuning, the DAE’s network structure becomes 16-32-16-2-16-32-16. Correspondingly, 4-16-8-2 and 6(8)-16-8-2 represent the input features are X1 and X1+Xi. 5 Experiments and Results 5.1 Experimental Setup We now test our DAE features on the following two Chinese-English translation tasks. IWSLT. The bilingual corpus is the ChineseEnglish part of Basic Traveling Expression corpus (BTEC) and China-Japan-Korea (CJK) corpus (0.38M sentence pairs with 3.5/3.8M Chinese/English words). The LM corpus is the English side of the parallel data (BTEC, CJK and CWMT083) (1.34M sentences). Our development set is IWSLT 2005 test set (506 sentences), and our test set is IWSLT 2007 test set (489 sentences). NIST. The bilingual corpus is LDC4 (3.4M sentence pairs with 64/70M Chinese/English words). The LM corpus is the English side of the parallel data as well as the English Gigaword corpus (LDC2007T07) (11.3M sentences). Our development set is NIST 2005 MT evaluation set (1084 sentences), and our test set is NIST 2006 MT evaluation set (1664 sentences). We choose the Moses (Koehn et al., 2007) framework to implement our phrase-based machine system. 
The 4-gram LMs are estimated by the SRILM toolkit with modified Kneser-Ney 3the 4th China Workshop on Machine Translation 4LDC2002E18, LDC2002T01, LDC2003E07, LDC2003E14, LDC2003T17, LDC2004T07, LDC2004T08, LDC2005T06, LDC2005T10, LDC2005T34, LDC2006T04, LDC2007T09 127 # Features IWSLT NIST Dev Test Dev Test 1 Baseline 50.81 41.13 36.12 32.59 2 X1 +DBN X1 2f 51.92 42.07∗ 36.33 33.11∗ 3 +DAE X1 2f 52.49 43.22∗∗ 36.92 33.44∗∗ 4 +DBN X1 4f 51.45 41.78∗ 36.45 33.12∗ 5 +DAE X1 4f 52.45 43.06∗∗ 36.88 33.47∗∗ 6 +HCDAE X1 2+2f 53.69 43.23∗∗∗ 37.06 33.68∗∗∗ 7 +DBN X1 8f 51.74 41.85∗ 36.61 33.24∗ 8 +DAE X1 8f 52.33 42.98∗∗ 36.81 33.36∗∗ 9 +HCDAE X1 4+4f 52.52 43.26∗∗∗ 37.01 33.63∗∗∗ 10 X +DBN X 2f 52.21 42.24∗ 36.72 33.21∗ 11 +DAE X 2f 52.86 43.45∗∗ 37.39 33.83∗∗ 12 +DBN X 4f 51.83 42.08∗ 34.45 33.07∗ 13 +DAE X 4f 52.81 43.47∗∗ 37.48 33.92∗∗ 14 +HCDAE X 2+2f 53.05 43.58∗∗∗ 37.59 34.11∗∗∗ 15 +DBN X 8f 51.93 42.01∗ 36.74 33.29∗ 16 +DAE X 8f 52.69 43.26∗∗ 37.36 33.75∗∗ 17 +HCDAE X 4+4f 52.93 43.49∗∗∗ 37.53 34.02∗∗∗ 18 +(X2+X3+X4+X5) 52.23 42.91∗ 36.96 33.65∗ 19 +(X2+X3+X4+X5)+DAE X 2f 53.55 44.17+∗∗∗ 38.23 34.50+∗∗∗ 20 +(X2+X3+X4+X5)+DAE X 4f 53.61 44.22+∗∗∗ 38.28 34.47+∗∗∗ 21 +(X2+X3+X4+X5)+HCDAE X 2+2f 53.75 44.28+∗∗∗∗ 38.35 34.65+∗∗∗∗ 22 +(X2+X3+X4+X5)+DAE X 8f 53.47 44.19+∗∗∗ 38.26 34.46+∗∗∗ 23 +(X2+X3+X4+X5)+HCDAE X 4+4f 53.62 44.29+∗∗∗∗ 38.39 34.57+∗∗∗∗ Table 2: The translation results by adding new DNN features (DBN feature (Maskey and Zhou, 2012), our proposed DAE and HCDAE feature) as extra features to the phrase table on two tasks. “DBN X1 xf”, “DBN X xf”, “DAE X1 xf” and “DAE X xf” represent that we use DBN and DAE, input features X1 and X, to learn x-dimensional features, respectively. “HCDAE X x+xf” represents horizontally combining two DAEs and each DAE has the same x-dimensional learned features. All improvements on two test sets are statistically significant by the bootstrap resampling (Koehn, 2004). *: significantly better than the baseline (p < 0.05), **: significantly better than “DBN X1 xf” or “DBN X xf” (p < 0.01), ***: significantly better than “DAE X1 xf” or “DAE X xf” (p < 0.01), ****: significantly better than “HCDAE X x+xf” (p < 0.01), +: significantly better than “X2+X3+X4+X5” (p < 0.01). discounting. We perform pairwise ranking optimization (Hopkins and May, 2011) to tune feature weights. The translation quality is evaluated by case-insensitive IBM BLEU-4 metric. The baseline translation models are generated by Moses with default parameter settings. In the contrast experiments, our DAE and HCDAE features are appended as extra features to the phrase table. The details of the used network structure in our experiments are shown in Table 1. 5.2 Results Table 2 presents the main translation results. We use DBN, DAE and HCDAE (with 6 layers’ network depth), input features X1 and X, to learn 2-, 4- and 8-dimensional features, respectively. From the results, we can get some clear trends: 1. Adding new DNN features as extra features significantly improves translation accuracy (row 2-17 vs. 1), with the highest increase of 2.45 (IWSLT) and 1.52 (NIST) (row 14 vs. 1) BLEU points over the baseline features. 2. Compared with the unsupervised DBN features, our semi-supervised DAE features are more effective for translation decoder (row 3 vs. 2; row 5 vs. 4; row 8 vs. 7; row 11 vs. 10; row 13 vs. 12; row 16 vs. 15). 
Specially, Table 3 shows the variance distributions of the learned each dimensional DBN and DAE feature, our DAE features have bigger variance distributions which means 128 Features IWSLT NIST σ1 σ2 σ3 σ4 σ1 σ2 σ3 σ4 DBN X1 4f 0.1678 0.2873 0.2037 0.1622 0.0691 0.1813 0.0828 0.1637 DBN X 4f 0.2010 0.1590 0.2793 0.1692 0.1267 0.1146 0.2147 0.1051 DAE X1 4f 0.5072 0.4486 0.1309 0.6012 0.2136 0.2168 0.2047 0.2526 DAE X 4f 0.5215 0.4594 0.2371 0.6903 0.2421 0.2694 0.3034 0.2642 Table 3: The variance distributions of each dimensional learned DBN feature and DAE feature on the two tasks. Figure 4: The compared results of feature learning with different network structures on two development sets. Features IWSLT NIST Dev Test Dev Test +DAE X1 4f 52.45 43.06 36.88 33.47 +DAE X1+X2 4f 52.76 43.38∗ 37.28 33.80∗ +DAE X1+X3 4f 52.61 43.27∗ 37.13 33.66∗ +DAE X1+X4 4f 52.52 43.24∗ 36.96 33.58∗ +DAE X1+X5 4f 52.49 43.13∗ 36.96 33.56∗ +DAE X 4f 52.81 43.47∗ 37.48 33.92∗ Table 4: The effectiveness of our introduced input features. “DAE X1+Xi 4f” represents that we use DAE, input features X1 + Xi, to learn 4dimensional features. *: significantly better than “DAE X1 4f” (p < 0.05). our DAE features have more discriminative power, and also their variance distributions are more stable. 3. HCDAE outperforms single DAE for high dimensional feature learning (row 6 vs. 5; row 9 vs. 8; row 14 vs. 13; row 17 vs. 16), and further improve the performance of DAE feature learning, which can also somewhat address the bring shortcoming of the limited input features. 4. Except for the phrase feature X1 (Maskey and Zhou, 2012), our introduced input features X significantly improve the DAE feature learning (row 11 vs. 3; row 13 vs. 5; row 16 vs. 8). Specially, Table 4 shows the detailed effectiveness of our introduced input features for DAE feature learning, and the results show that each type of features are very effective for DAE feature learning. 5. Adding the original features (X2, X3, X4 and X5) and DAE/HCDAE features together can further improve translation performance (row 19-23 vs. 18), with the highest increase of 3.16 (IWSLT) and 2.06 (NIST) (row 21 vs. 1) BLEU points over the baseline features. DAE and HCDAE features are learned from the non-linear combination of the original features, they strong capture high-order correlations between the activities of the original features, as to be further interpreted to reach their potential for SMT. We suspect these learned fea129 tures are complementary to the original features. 5.3 Analysis Figure 5: The compared results of using single DAE and the HCDAE for feature learning on two development sets. Figure 4 shows our DAE features are not only more effective but also more stable than DBN features with various network structures. Also, adding more input features (X vs. X1) not only significantly improves the performance of DAE feature learning, but also slightly improves the performance of DBN feature learning. Figure 5 shows there is little change in the performance of using single DAE to learn different dimensional DAE features, but the 4-dimensional features work more better and more stable. HCDAE outperforms the single DAE and learns highdimensional representation more effectively, especially for the peak point in each condition. Figures 5 also shows the best network depth for DAE feature learning is 6 layers. When the network depth of DBN is 7 layers, the network depth of corresponding DAE during the fine-tuning is 13 layers. 
Although we have pre-trained the corresponding DBN, this DAE network is so deep, the fine-tuning does not work very well and typically finds poor local minima. We suspect this leads to the decreased performance. 6 Conclusions In this paper, instead of designing new features based on intuition, linguistic knowledge and domain, we have learned new features using the DAE for the phrase-based translation model. Using the unsupervised pre-trained DBN to initialize DAE’s parameters and using the input original phrase features as the “teacher” for semi-supervised backpropagation, our semi-supervised DAE features are more effective and stable than the unsupervised DBN features (Maskey and Zhou, 2012). Moreover, to further improve the performance, we introduce some simple but effective features as the input features for feature learning. Lastly, to learn high dimensional feature representation, we introduce a natural horizontal composition of two DAEs for large hidden layers feature learning. On two Chinese-English translation tasks, the results demonstrate that our solutions solve the two aforementioned shortcomings successfully. Firstly, our DAE features obtain statistically significant improvements of 1.34/2.45 (IWSLT) and 0.82/1.52 (NIST) BLEU points over the DBN features and the baseline features, respectively. Secondly, compared with the baseline phrase features X1, our introduced input original phrase features X significantly improve the performance of not only our DAE features but also the DBN features. The results also demonstrate that DNN (DAE and HCDAE) features are complementary to the original features for SMT, and adding them together obtain statistically significant improvements of 3.16 (IWSLT) and 2.06 (NIST) BLEU points over the baseline features. Compared with the original features, DNN (DAE and HCDAE) features are learned from the non-linear combination of the original features, they strong capture high-order correlations between the activities of the original features, and we believe this deep learning paradigm induces the original features to further reach their potential for SMT. Acknowledgments This work was supported by 863 program in China (No. 2011AA01A207). We would like to thank Xingyuan Peng, Lichun Fan and Hongyan Li for their helpful discussions. We also thank the anonymous reviewers for their insightful comments. 130 References Michael Auli, Michel Galley, Chris Quirk and Geoffrey Zweig. 2013. Joint language and translation modeling with recurrent neural networks. In Proceedings of EMNLP, pages 1044-1054. Pierre Baldi. 2012. Autoencoders, unsupervised learning, and deep architectures. JMLR: workshop on unsupervised and transfer learning, 27:37-50. Miguel A. Carreira-Perpinan and Geoffrey E. Hinton. 2005. On contrastive divergence learning. In Proceedings of AI and Statistics. George Dahl, Dong Yu, Li Deng, and Alex Acero. 2012. Context-dependent pre-trained deep neural networks for large vocabulary speech recognition. IEEE Transactions on Audio, Speech, and Language Processing, 20(1):30-42. Li Deng, Mike Seltzer, Dong Yu, Alex Acero, Abdelrahman Mohamed, and Geoffrey E. Hinton. 2010. Binary coding of speech spectrograms using a deep auto-encoder. In Proceedings of INTERSPEECH, pages 1692-1695. George Foster, Cyril Goutte, and Roland Kuhn. 2010. Discriminative instance weighting for domain adaptation in statistical machine translation. In Proceedings of EMNLP, pages 451-459. Geoffrey E. Hinton. 2002. Training products of experts by minimizing contrastive divergence. 
Neural Computation, 14(8):1771-1800. Geoffrey E. Hinton, Li Deng, Dong Yu, George Dahl, Abdel-rahman Mohamed, Navdeep Jaitly, Andrew Senior, Vincent Vanhoucke, Patrick Nguyen, Tara Sainath, and Brian Kingsbury. 2012. Deep neural networks for acoustic modeling in speech tecognition. IEEE Signal Processing Magazine, 29(6):8297. Geoffrey E. Hinton, Alex Krizhevsky, and Sida D. Wang. 2001. Transforming auto-encoders. In Proceedings of ANN. Geoffrey E. Hinton and Ruslan R. Salakhutdinov. 2006. Reducing the dimensionality of data with neural networks. Science, 313:504-507. Geoffrey E. Hinton, Simon Osindero, and Yee-Whye Teh. 2006. A fast learning algorithm for deep belief nets. Neural Computation, 18:1527-1544. Mark Hopkins and Jonathan May 2011. Tuning as ranking. In Proceedings of EMNLP, pages 13521362. Nal Kalchbrenner and Phil Blunsom. 2013. Recurrent continuous translation models. In Proceedings of EMNLP, pages 1700-1709. Philipp Koehn. 2004. Statistical significance tests from achine translation evaluation. In Proceedings of ACL, pages 388-395. Philipp Koehn. 2010. Statistical machine translation. Cambridge University Press. Philipp Koehn, Hieu Hoang, Alexandra Birch, Chris Callison-Burch, Marcello Federico, Nicola Bertoldi, Brooke Cowan, Wade Shen, Christine Moran, Richard Zens, Chris Dyer, Ondrej Bojar, Alexandra Constantin, and Evan Herbst. 2007. Moses: Open source toolkit for statistical machine translation. In Proceedings of ACL, Demonstration Session, pages 177-180. Philipp Koehn, Franz J. Och, and Daniel Marcu. 2003. Statistical phrase-based translation. In Proceedings of NAACL, pages 48-54. Hai-Son Le, Alexandre Allauzen, and Franc¸ois Yvon. 2012. Continuous space translation models with neural networks. In Proceedings of NAACL, pages 39-48. Peng Li, Yang Liu, Maosong Sun. 2013. Recursive autoencoders for ITG-based translation. In Proceedings of EMNLP, pages 567-577. Lemao Liu, Taro Watanabe, Eiichiro Sumita, and Tiejun Zhao. 2013. Additive neural networks for statistical machine translation. In Proceedings of ACL, pages 791-801. Shixiang Lu, Wei Wei, Xiaoyin Fu and Bo Xu. 2014. Recursive neural network based word topology model for hierarchical phrase-based speech translation. In Proceedings of ICASSP. Yuval Marton and Philip Resnik. 2008. Soft syntactic constraints for hierarchical phrase-based translation. In Proceedings of ACL, pages 1003-1011. Sameer Maskey and Bowen Zhou. 2012. Unsupervised deep belief features for speech translation. In Proceedings of INTERSPEECH. Piotr Mirowski, MarcAurelio Ranzato, and Yann LeCun. 2010. Dynamic auto-encoders for semantic indexing. In Proceedings of NIPS-2010 Workshop on Deep Learning. Patrick Nguyen, Milind Mahajan, and Xiaodong He. 2007. Training non-parametric features for statistical machine translation. In Proceedings of WMT, pages 72-79. Franz J. Och and Hermann Ney. 2000. Improved statistical alignment models. In Proceedings of ACL, pages 440-447. Franz J. Och and Hermann Ney. 2002. Discriminative training and maximum entropy models for statistical machine translation. In Proceedings of ACL, pages 295-302. Franz J. Och and Hermann Ney. 2004. The alignment template approach to statistical machine translation. Computational Linguistics, 30(4):417-449. 131 David Rumelhart, Geoffrey E. Hinton, and Ronale Williams. 1986. Learning internal representations by back-propagation errors. Parallel Distributed Processing, Vol 1: Foundations, MIT Press. Ruslan R. Salakhutdinov and Geoffrey E. Hinton. 2009. Semantic hashing. 
International Journal of Approximate Reasoning, 50(7):969-978. Richard Socher, Cliff C. Lin, Andrew Y. Ng, and Christopher D. Manning. 2011. Parsing natural scenes and natural language with recursive neural networks. In Proceedings of ICML. Deyi Xiong, Min Zhang, and Haizhou Li. 2011. Enhancing language models in statistical machine translation with backward n-grams and mutual information triggers. In Proceedings of ACL, pages 1288-1297. Bing Zhao, Stephan Vogel, and Alex Waibel. 2004. Phrase pair rescoring with term weightings for statistical machine translation. In Proceedings of EMNLP. 132
Proceedings of the 52nd Annual Meeting of the Association for Computational Linguistics, pages 1274–1283, Baltimore, Maryland, USA, June 23-25 2014. c⃝2014 Association for Computational Linguistics Pattern Dictionary of English Prepositions Ken Litkowski CL Research 9208 Gue Road Damascus, MD 20872 USA [email protected] Abstract We present a new lexical resource for the study of preposition behavior, the Pattern Dictionary of English Prepositions (PDEP). This dictionary, which follows principles laid out in Hanks’ theory of norms and exploitations, is linked to 81,509 sentences for 304 prepositions, which have been made available under The Preposition Project (TPP). Notably, 47,285 sentences, initially untagged, provide a representative sample of preposition use, unlike the tagged sentences used in previous studies. Each sentence has been parsed with a dependency parser and our system has near-instantaneous access to features developed with this parser to explore and annotate properties of individual senses. The features make extensive use of WordNet. We have extended feature exploration to include lookup of FrameNet lexical units and VerbNet classes for use in characterizing preposition behavior. We have designed our system to allow public access to any of the data available in the system. 1 Introduction Recent studies (Zapirain et al. (2013); Srikumar and Roth (2011)) have shown the value of prepositional phrases in joint modeling with verbs for semantic role labeling. Although recent studies have shown improved preposition disambiguation, they have received little systematic treatment from a lexicographic perspective. Recently, a new corpus has been made available that promises to be much more representative of preposition behavior. Our initial examination of this corpus has suggested clear indications of senses previously overlooked and reduced prominence for senses thought to constitute a large role in preposition use. In section 2, we describe the interface to the Pattern Dictionary of English Prepositions (PDEP), identifying how we are building upon data developed in The Preposition Project (TPP) and investigating its sense inventory with corpora also made available under TPP. Section 3 describes the procedures for tagging a representative corpus drawn from the British National Corpus, including some findings that have emerged in assessing previous studies of preposition disambiguation. Section 4 describes how we are able to investigate the relationship of WordNet, FrameNet, and VerbNet to this effort and how this examination of preposition behavior can be used in working with these resources. Section 5 describes how we can use PDEP for the analysis of semantic role and semantic relation inventories. Section 6 describes how we envision further developments of PDEP and how the data are available for further analysis. In section 7, we present our conclusions for PDEP. 2 The Pattern Dictionary of English Prepositions Litkowski and Hargraves (2005) and Litkowski and Hargraves (2006) describe The Preposition Project (TPP) as an attempt to describe preposition behavior using a sense inventory made available for public use from the Oxford Dictionary of English (Stevenson and Soanes, 2003) by tagging sentences drawn from FrameNet. In TPP, each sense was characterized with its complement and attachment (or governor) properties, its class and semantic relation, substitutable prepositions, its syntactic positions, and any FrameNet frame and frame element usages (where available). 
The FrameNet sentences were sense-tagged using the sense inventory and were 1274 later used as the basis for a preposition disambiguation task in SemEval 2007 (Litkowski and Hargraves, 2007). Initial results in SemEval achieved a best accuracy of 69.3 percent (Ye and Baldwin, 2007). The data from SemEval has subsequently been used in several further investigations of preposition disambiguation. Most notably, Tratz (2011) achieved a result of 88.4 percent accuracy and Srikumar and Roth (2013) achieved a similar result. However, Litkowski (2013b) showed that these results did not extend to other corpora, concluding that the FrameNet-based corpus may not have been representative, with a reduction of accuracy to 39.4 percent using a corpus developed by Oxford. Litkowski (2013a) announced the creation of the TPP corpora in order to develop a more representative account of preposition behavior. The TPP corpora includes three subcorpora: (1) the full SemEval 2007 corpus (drawn from FrameNet data, henceforth FN), (2) sentences taken from the Oxford English Corpus to exemplify preposition senses in the Oxford Dictionary of English (henceforth, OEC), and (3) a sample of sentences drawn from the written portion of the British National Corpus (BNC), using the Word Sketch Engine as implemented in the system for the Corpus Pattern Analysis of verbs (henceforth, CPA or TPP). We have used the TPP data and the TPP corpora to implement an editorial interface, the Pattern Dictionary of English Prepositions (PDEP).1 This dictionary is intended to identify the prototypical syntagmatic patterns with which prepositions in use are associated, identifying linguistic units used sequentially to make well-formed structures and characterizing the relationship between these units. In the case of prepositions, the units are the complement (object) of the preposition and the governor (point of attachment) of the prepositional phrase. The editorial interface is used to make changes in the underlying databases, as described in the following subsections. Editorial access to make changes is limited, but the system can be explored publicly and the underlying data can be accessed publicly, either in its entirety or through publicly available scripts used in accessing the data during editorial operations. Standard dictionaries include definitions of prepositions, but only loosely characterize the syntagmatic patterns associated with each sense. 1 http://www.clres.com/db/TPPEditor.html PDEP takes this a step further, looking for prototypical sentence contexts to characterize the patterns. PDEP is modeled on the principles of Corpus Pattern Analysis (CPA), developed to characterize syntagmatic patterns for verbs. 2 These principles are described more fully in Hanks (2013). Currently, CPA is being used in the project Disambiguation of Verbs by Collocation to develop a Pattern Dictionary of English Verbs (PDEV). 3 PDEP is closely related to PDEV, since most syntagmatic patterns for prepositions are related to the main verb in a clause. PDEP is viewed as subordinate to PDEV, sufficiently so that PDEP employs significant portions of code being used in PDEV, with appropriate modifications as necessary to capture the syntagmatic patterns for prepositions.4 2.1 The Preposition Inventory After a start page for entry into PDEP, a table of all prepositions in the sense inventory is displayed. Figure 1 contains a truncated snapshot of this table. The table has a row for each of 304 prepositions as identified in TPP. 
The second column indicates the number of patterns (senses) for each preposition. The next two columns show the number of TPP (CPA) instances that have been tagged and the total number of TPP instances that have been obtained as the sample from the total number of instances in the BNC. Figure 1. Preposition Inventory Additional columns not shown in Figure 1 show (1) the status of the analysis for the preposition, (2) the number of instances from FrameNet (i.e., FN Insts, as developed for SemEval 2007), and (3) the number of instances from the Oxford English Corpus (i.e., OEC Insts). The number of prepositions with 2 See http://nlp.fi.muni.cz/projects/cpa/. 3 See http://clg.wlv.ac.uk/projects/DVC 4 PDEP is implemented as a combination of HTML and Javascript. Within the Javascript code, calls are made to PHP scripts to retrieve data from MySQL database tables and from additional files (described below). 1275 FrameNet instances is 57 (larger than the 34 prepositions used in SemEval). There are no OEC instances for 57 prepositions. There are no TPP instances for 41 prepositions. Notwithstanding the lack of instances, there are TPP characterizations for all 304 prepositions. The BNC frequency shown in Figure 1 provides a basis for extrapolating results from PDEP to the totality of prepositions. In total, the number of instances in the BNC is 5,391,042, which can be used as the denominator when examining the relative frequency of any preposition (e.g., between has a frequency of 0.0109, 58,865/5,391,042).5 In general, the target sample size was 250 CPA instances. If the number available was less than 250, all instances were used. The TPP CPA corpus contains 250 instances for 170 prepositions. Where the number of senses for a preposition was large (about 15 or more), larger samples of 750 (of, to, on, and with) or 500 (in, for, by, from, at, into, over, like, and through) were drawn. 2.2 Preposition Patterns When a row in Figure 1 is clicked, the preposition is selected and a new page is opened to show the patterns for that preposition. Figure 2 shows the four patterns for below. Each pattern is presented as an instance of the template [[Governor]] prep [[Complement]], followed by its primary implicature, where the current definition is substituted for the preposition. Figure 2. Preposition Pattern List The display in Figure 2 provides an overview for each preposition, with the top line showing the number of tagged instances available from 5 The total number of instances for of and in in this estimate is 1,000,000. As a result, the relative frequency calculation should not be construed as completely accurate. each corpus. For the TPP instances, this identifies the number of instances that have been tagged and the number that remain to be tagged. In the body of the table, the first column shows the TPP sense number. The next three columns show the number of instances that have been tagged with this sense. Note that the top line of the pattern list includes a menu option for adding a pattern, for the case when we find that a new sense is required by the corpus evidence. Clicking on any row in the pattern list opens the details for that pattern, with a pattern box entitled with the preposition and the pattern number, as shown in Figure 3. The pattern box contains data developed in TPP and several new fields intended to capture our enhancements. 
TPP data include the fields for the Complement, the Governor, the TPP Class, the TPP Relation, the Substitutable Prepositions, the Syntactic Position, the Quirk Reference, the Sense Relation, and the Comment. We have added the checkboxes for complement type (common nouns, proper nouns, WH-phrases, and -ing phrases), as well as a field to identify a particular lexical item (lexset) if the sense is an idiomatic usage. We have added the Selector fields for the complement and the governor. For the complement, we have a field Category to hold its ontological category (using the shallow ontology being developed for verbs in the DVC project mentioned above).6 We also provided a field for the Semantic Class of the governor; this field has not yet been implemented. We have added two Cluster/Relation fields. The Cluster field is based on data available from Tratz (2011), where senses in the SemEval 2007 data have been put into 34 clusters. The Relation field is based on data available from Srikumar and Roth (2013), where senses in the SemEval 2007 data have been put into 32 classes. A key element of Srikumar and Roth was the use of these classes to model semantic relations across prepositions (e.g., grouping all the Temporal senses of the SemEval prepositions). In the pattern box, each of these two fields has a dropdown list of the clusters and relations, enabling us to categorize the senses of other prepositions with these classes. Below, we describe how we are able to use the TPP classes and relations along with the Tratz clusters and Srikumar relations in an analysis of these classes across the 6 This ontology is an evolution of the Brandeis Semantic Ontology (Pustejovsky et al., 2006). 1276 full set of prepositions, instead of just those used in SemEval. Any number of pattern boxes may be opened at one time. The data in any of the fields may be altered (with the menu bar changing color to red) and then saved to the underlying databases. An individual pattern box may then be closed. The drop-down box labeled Corpus Instances in the menu bar is used to open the set of corpus instances for the given sense. As shown in Figure 2, this sense has 6 FN instances, 20 OEC instances, and 15 TPP instances. The drop-down box has an option for each of these sets, along with an option for all TPP instances that have not yet been tagged. When one of these options is selected, the corresponding set of instances is opened in a new tab, discussed in the next section. 2.3 Preposition Corpus Instances As indicated, selecting an instance set from the pattern box opens this set in a separate tab, as shown in Figure 4. This tab, labeled Annotation: below (3(1b)), identifies the preposition and the sense, if any, associated with the instance set (the sense will be identified as unk if the set has not yet been tagged. The instance set is displayed, identifying the corpus, the instance identifier, the TPP sense (if identified, or “unk” if not), the location in the sentence of the target preposition, and the sentence, with the preposition in bold. This tab is where the annotation takes place. Any set of sentences may be selected; each selected sentence is highlighted in yellow (as shown in Figure 6). The sense value may be changed using the drop-down box labeled Tag Instances in the menu bar. This drop-down box contains all the current senses for the preposition, along with possible tags x (to indicate that the instance is invalid for the preposition) and unk (to indicate that a tagging decision has not yet been made). 
The sense tags in Figure 4 were originally untagged in the CPA (TPP) corpus and were tagged in this manner. In general, sense-tagging follows standard lexicographic principles, where an attempt is made to group instances that appear to represent distinct senses. PDEP provides an enhanced environment for this process. Firstly, we can make use of the current TPP sense inventory to tag sentences. Since the pattern sets (definitions) are based on the Oxford Dictionary of English, the likelihood that the coverage and accuracy of the sense distinctions is quite high. However, since prepositions have not generally received the close attention of words in other parts of speech, Figure 3. Preposition Pattern Details Figure 4. Preposition Corpus Instance Annotation 1277 PDEP is intended to ensure the coverage and accuracy. During the tagging of the SemEval instances, the lexicographer found it necessary to increase the number of senses by about 10 percent. Since the lack of coverage of FrameNet is well-recognized, the representative sample developed for the TPP corpus should provide the basis for ensuring the coverage and accuracy. In addition to adhering to standard lexicographic principles, the availability of the tagged FN and OEC instances can be used as the basis for tagging decisions. Where available, these tagged instances can be opened in separate tabs and used as examples for tagging the unknown TPP instances. 3 Tagging the TPP Corpus 3.1 Examining Corpus Instances The main contribution of the present work is the ability to interactively examine characteristics of the context surrounding the target preposition in the corpus instances. In the menu bar shown in Figure 4, there is an Examine item. Next to it are two drop-down boxes, one labeled WFRs (wordfinding rules) and one labeled FERs (feature extraction rules). These rules are taken from the system described in Tratz and Hovy (2011) and Tratz (2011). 7 The TPP corpora described in Litkowski (2013a) includes full dependency parses and feature files for all sentences. Each sentence may have as many as 1500 features describing the context of the target preposition. We have made the feature files for these sentences (1309 MB) available for exploration in PDEP. In our system, we make available seven wordfinding rules and nine feature extraction rules. The word-finding rules fall into two groups: words pertaining to the governor and words pertaining to the complement. The five governor word-finding rules are (1) verb or head to the left (l), (2) head to the left (hl), (3) verb to the left (vl), (4) word to the left (wl), and (5) governor (h). The two complement word-finding rules are (1) syntactic preposition complement (c) and (2) heuristic preposition complement (hr). The feature extraction rules are (1) word class (wc), (2) part of speech (pos), (3) lemma (l), (4) word (w), (5) WordNet lexical name (ln), (6) WordNet synonyms (s), (7) WordNet hypernyms (h), (8) whether the word is capitalized (c), and (9) affixes (af). Thus, we are able to examine any of 63 7 An updated version of this system is available at http://sourceforge.net/projects/miacp/. WFR FER combinations for whatever corpus set happens to be open. In addition to these features, we are able to determine the extent to which prepositions associated with FrameNet lexical units and VerbNet classes occur in a given corpus set. In Figure 4, there is a checkbox labeled FN next to the FERs drop-down list to examine FrameNet lexical units. 
There is a similar checkbox labeled VN to examine members of VerbNet classes. These boxes appear only when either of these resources has identified the given preposition as part of its frame (75 for FrameNet and 31 for VerbNet). When a particular WFR-FER combination is selected and the Examine menu item is clicked, a new tab is opened showing the values for those features for the given corpus set, as shown in Figure 5. The tab shows the WFR and FER that were used, the number of features for which the value was found in the feature data, the values, and the count for each feature. The description column is used when displaying results for the part of speech, the affix type, FrameNet frame elements, and VerbNet classes, since the value column for these hits is not self-explanatory. The example in Figure 5 shows the lemma, which requires no further explanation.
[Figure 5. Feature Examination Results]
For most features (e.g., lemma or part of speech), the number of possible values is relatively small, limited by the number of instances in the corpus set. For features such as the WordNet lexical name, synonyms and hypernyms, the number of values may be much larger. For FrameNet and VerbNet, the feature examination is limited to the combination of the WFR for the governor (h) and the FER lemma (l), both of which will generally identify verbs in the value column. The general objective of examining features is to identify those that are diagnostic of specific senses. When applied to the full untagged TPP corpus set, this process is akin to developing word sketches for prepositions (Kilgarriff et al., 2004). However, since we have tagged corpus sets for most preposition senses, we can begin our efforts by looking at these sets. The hypothesis is that the tagged corpora will show patterns which can then be used for tagging instances in the TPP corpus.8
The first step in examining features generally is to look at the word classes and parts of speech for the complement and the governor.9 These are useful for filling in their checkboxes in Figure 3. Another useful feature is word to the left (wl), which can be used to verify the syntactic position checkboxes, particularly the adverbial positions (adjunct, subjunct, disjunct, and conjunct). These first steps provide a general overview of a sense's behavior.
The next step of feature examination delves more into the semantic characteristics of the complement and the governor. Tratz (2011) reported that the use of heuristics provided a more accurate identification of the preposition complement; this is the WFR hr in our system. After getting some idea of the word class and the part of speech, we next examine the WordNet lexical name of the complement to determine its broad semantic grouping. As mentioned, this feature may return a number of values larger than the size of the corpus set, since WordNet senses for a given lexeme may be polysemous. Notwithstanding, this feature examination generally shows the dominant categories and can be used to characterize and act as a selector for the complement in the pattern details. Similar procedures are used for characterizing the governor selection criteria. In the example in Figure 3, for below, sense 3(1b), our preliminary analysis shows that hr:pos:cd (i.e., a cardinal number) and hr:l:average,standard (i.e., the lemmas average and standard) are particularly useful for identifying this sense.
8 Currently, 21.5 percent of the TPP instances (10,347 of 47,285) have been tagged.
9 Accurate identification of the complement and governor is likely improved with the reliance on the Tratz dependency parser. Moreover, this is likely to improve the word sketches in PDEP. Ambati et al. (2012) report that dependency parses provide improved word sketches over purpose-built finite-state grammars. Their findings provide additional support for the methods presented here.
3.2 Selecting Corpus Instances
In addition to enabling feature examination, PDEP also facilitates selection of corpus instances. We can use the specifications for any WFR-FER combination, along with one of the values (as shown in Figure 5), to select the corpus instances having that feature. Figure 6 shows, in part, the result of the WFR hr and FER l with the value average, against the instances in the open corpus set. As shown in the menu bar in Figure 6, we can select all instances and unselect all selections. Based on any selections, we can then tag such instances with one of the options that appear in the Tag Instances drop-down box. In the specific example, we could change all the selected instances to some other sense, if we have decided that the current assignment is not the best.
The selection mechanism is not used absolutely. For example, in examining the untagged instances for over, we used the specification hr:ln:noun.time (looking for instances with the heuristic complement having the WordNet lexical name noun.time). Out of 500 instances, we found 122 with this property. We then scrolled through the selected items, deselecting instances that did not provide a time period, and then tagged 99 instances with the sense 14(5), with the meaning expressing duration. Once we have made such a tagging, we can look at just those instances the next time we examine this sense. In this case, we might decide, pace the TPP lexicographer's comment, that the instances should be broken down into those which express a time period and those which describe "accompanying circumstances" (e.g., over coffee).
[Figure 6. Selected Corpus Instances]
3.3 Accuracy of Features
PDEP uses the output from Tratz' system (2011), which is of high quality, but which is not always correct. In addition, the TPP corpus also has some shortcomings, which are revealed in examining the instances. The TPP corpus has not been cleaned in the same manner as the FN and the OEC corpora. As a result, we see many cases which are more difficult to parse and hence, from which to generate feature sets. We believe this provides a truer real-world picture of the complexities of preposition behavior. As a result, in the Tag Instances drop-down box, we have included an option to tag a sentence as x, to indicate that it is not a valid instance. A small percentage of the TPP instances are ill-formed, i.e., incomplete sentences; these are marked as x. For some prepositions, e.g., down, a substantial number of instances are not prepositions, but rather adverbs or particles. For some phrasal prepositions, such as on the strength of, the phrase is literal, rather than the preposition idiom; in this case, 20 of 124 instances were marked as x. The occurrence of these invalid instances provides an opportunity for improving taggers, parsers, and semantic role labelers.
4 Assessment of Lexical Resources
Since the PDEP system enables exploration of features from WordNet, FrameNet, and VerbNet, we are able to make some assessment of these resources.
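Both the instance selection of Section 3.2 and the resource assessment that follows reduce to filtering a corpus set by a wfr:fer specification and counting the matches. The Python fragment below is a minimal sketch of that filter; it assumes the per-instance features have already been loaded from the feature files into dictionaries keyed by wfr:fer, which is our simplification rather than the actual feature-file format.

def select_instances(instances, wfr, fer, value):
    """Instances whose features contain `value` for wfr:fer (e.g. hr:ln:noun.time)."""
    key = f"{wfr}:{fer}"
    return [inst for inst in instances
            if value in inst.get("features", {}).get(key, set())]

# Each instance is assumed to look like:
# {"id": "over.1234", "sense": "unk",
#  "features": {"hr:ln": {"noun.time"}, "hr:pos": {"nn"}, ...}}
# Section 3.2 example: hr:ln:noun.time matched 122 of 500 untagged instances of
# 'over'; after manual deselection, 99 were tagged with sense 14(5). The same
# filter, applied with resource-derived values (e.g. FrameNet lexical units),
# yields the coverage percentages reported in the assessment below.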
WordNet played a statistically significant role in the systems developed by Tratz (2011) and Srikumar and Roth (2013). This includes the WordNet lexicographer's file name (e.g., noun.time), synsets, and hypernyms. We make extensive use of the file name, but less so of the synsets and hypernyms. However, in general, we find that the file names are too coarse-grained and the synsets and hypernyms too fine-grained for generalizations on the selectors for the complements and the governors. The issue of granularity also affects the use of the DVC ontology. We discuss this issue further in section 6, on investigations of suitable categorization schemes for PDEP.
In using FrameNet, our results illustrate the unbalanced corpus used in SemEval 2007 (as suggested in Litkowski (2013b)). For the sense of of, "used to indicate the contents of a container", we first examined the FrameNet corpus set for that sense, which contains 278 instances (out of 4,482, or 6.2 percent). Using PDEP, we found that FrameNet feature values for the governor accounted for 264 of these instances (95 percent), all of which were related to the frame elements Contents or Stuff. However, in the TPP corpus, only 3 out of 750 instances were identified for this sense (0.4 percent). Thus, while FrameNet culled a large number of instances which had these frame element realizations, these instances do not appear to be representative of their occurrence in a random sample of of uses. We have seen similar patterns for the other SemEval prepositions. A similar situation exists for Cause senses of major prepositions: for (385 in FrameNet, 5/500 in TPP), from (71 in FrameNet, 16/500 in TPP), of (68 in FrameNet, 0/750 in TPP), and with (127 in FrameNet, 8/750 in TPP). Each of these cases further emphasizes how the SemEval 2007 instances are not representative and thus degrade the ability to apply existing preposition disambiguation results beyond these instances. (We discuss Cause senses further in the wider context of all PDEP prepositions in the next section on class analyses.)
As indicated earlier, VerbNet identifies fewer prepositions in its frames than FrameNet. We believe this is the case since VerbNet prepositions are generally arguments, rather than adjuncts. Many of the FrameNet prepositions evoke peripheral and extra-thematic frame elements, so the number of prepositions is correspondingly higher. Also, VerbNet contains fewer members in its verb classes. As a result, the number of hits when using VerbNet is somewhat smaller, although some use of VerbNet classes is possible with the governor selectors. PDEP provides a vehicle for expanding the items in all these resources. While prepositions are not central to these resources, their supporting role provides additional information that might be useful in developing and using these other resources.
5 Class Analyses
In SemEval 2007, Yuret (2007) investigated the possibility of using the substitutable prepositions as the basis for disambiguation (as part of more general lexical sample substitution). Although his methodology yielded significant gains over the baseline, his best results were only 54.7 percent accuracy, leading him to conclude that preposition use is highly idiosyncratic. Srikumar and Roth (2013) broadened this perspective by taking a class-based approach, collapsing semantically related senses across prepositions and thereby deriving a semantic relation inventory.
While their emphasis was on modeling semantic relations, they achieved an accuracy of 83.53 percent for preposition disambiguation. As mentioned above, PDEP has a field for the Srikumar semantic relation, initially populated for the SemEval prepositions, and being extended to cover all other prepositions. For example, Srikumar and Roth identified 21 temporal senses across 14 SemEval prepositions, while we have thus far identified 62 senses across 50 prepositions. Similar increases in the sizes of other classes occur as well. For causal senses, Srikumar and Roth identified 11 senses over 7 prepositions, while PDEP has 27 senses under 25 prepositions. PDEP enables an in-depth analysis of TPP classes, Tratz clusters, and Srikumar semantic realations. First, we query the database underlying Figure 3 to identify all senses with a particular class. We then examine each sense on each list in detail. We follow the procedures laid out above for examining the features to add information about selectors, complement types, and categories. We use this information to tag the TPP instances, conservatively assuring the tagging, e.g., leaving untagged questionable instances. Finally, we carefully place each sense into a preposition class or subclass, grouping senses together and making annotations that attempt to capture any nuance of meaning that distinguishes the sense from other members of the class. To build a description of the class and its subclasses, we make use of the Quirk reference in Figure 3 (i.e., the relevant discussions in Quirk et al. (1985)). We build the description of a class as a separate web page and make this available as a menu item in Figure 3 (not shown for the Scalar class when that screenshot was made). The description provides an overview of the class, making use of the TPP data and the Quirk discussion, and indicating the number of senses and the number of prepositions. Next, the description provides a list of the categories within the class, characterizing the complements of the category and then listing each sense in the category, with any nuance of meaning as necessary. Finally, we attempt to summarize the selection criteria that have been used across all the senses in the class. The process of building a class description reveals inconsistencies in each of the class fields. When we place a preposition sense into the class, we may find it necessary to make changes in the underlying data. At the top level, these class analyses in effect constitute a coarse-grained sense inventory. As the subclasses are developed, a finer-grained analysis of a particular area is available. We believe these analyses may provide a comprehensive characterization of particular semantic roles that can be used for various NLP applications. 6 Availability of PDEP Data and Potential for Further Enhancements As indicated above, each of the tables shown in the figures is generated in Javascript through a system call to a PHP script. Each of these scripts is described in detail at the PDEP web site. Each script returns data in Javascript Object Notation (JSON), enabling users to obtain whatever data is of interest to them and perhaps using this data dynamically. While PDEP provides access to a large amount of data, the architecture is very flexible and easy to extend. For this, we are grateful for the Tratz parser and the DVC code. 
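Since every table is served by a PHP script that returns JSON, the underlying data can also be pulled programmatically. The Python fragment below is only a sketch of that access pattern: the base URL, script name and parameter are placeholders we invented for illustration; the actual script names and parameters are the ones documented at the PDEP web site.

import json
from urllib.request import urlopen
from urllib.parse import urlencode

# Placeholder location; substitute the script URL documented at the PDEP web site.
PDEP_BASE = "https://example.org/pdep/"

def fetch_senses(preposition, script="senses.php"):
    """Fetch the sense patterns for one preposition as parsed JSON.
    Both the script name and the 'prep' parameter are illustrative assumptions."""
    url = PDEP_BASE + script + "?" + urlencode({"prep": preposition})
    with urlopen(url) as response:
        return json.loads(response.read().decode("utf-8"))

# senses = fetch_senses("below")
# for sense in senses:
#     print(sense.get("sense"), sense.get("class"))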
In building PDEP, we found it necessary to reprocess the SemEval 2007 data of the full 28,052 sentences that were available through TPP, rather than just those that were used in the SemEval task itself. Tagging, parsing, and creating feature files for these sentences took less than 10 minutes, with an equal time to upload the feature files. We would be able to add or substitute new corpora to the PDEP databases with relatively little effort. Similarly, we can add new elements or modify existing elements that describe preposition patterns. This would require easily-made modifications to the underlying MySQL database tables. The PHP scripts that access these tables are also easily developed or modified. Most of these scripts use less than 100 lines of code. In developing PDEP, we have added various resources incrementally. This applies to such resources as the DVC ontology, FrameNet, and VerbNet. Each of these resources required relatively little effort to integrate into PDEP. We will continue to investigate the utility of other resources that will assist in characterizing preposition behavior. We have begun to look at the noun clusters used in Srikumar and Roth (2013) for better characterizing complements. We are also 1281 examining an Oxford noun hierarchy as another alternative for complement analysis. We are examining the WordNet detour to FrameNet, as described in Burchardt et al. (2005), particularly for use in further characterizing the governors. We recognize that an important element of PDEP will be in its utility for preposition disambiguation. While we have not yet begun the necessary experimentation and evaluation, we believe the representativeness and sample sizes of the TPP corpus (mostly with 250 or more sentences per preposition) should provide a basis for constructing the needed studies. We expect that this will follow techniques used by Cinkova et al. (2012), in examining the Pattern Dictionary of English Verbs developed as the precursor to DVC. We expect that interaction with the NLP community will help PDEP evolve into a useful resource, not only for characterizing preposition behavior, but also for assisting in the development of other lexical resources. 7 Conclusion and Future Plans We have described the Pattern Dictionary of English Prepositions (PDEP) as a new lexical resource for examining and recording preposition behavior. PDEP does not introduce any ideas that have not already been explored in the investigation of other parts of speech. However, by bringing together work from these disparate sources, we have shown that it is possible to analyze preposition behavior in a manner equivalent to the major parts of speech. Since dictionary publishers have not previously devoted much effort in analyzing preposition behavior, we believe PDEP may serve an important role, particularly for various NLP applications in which semantic role labeling is important. On the other hand, PDEP as described in this paper is only in its initial stages. In following the principles laid out for verbs in PDEV, a main goal is to provide a sufficient characterization of how frequently different preposition patterns (senses) occur, with some idea of a statistical characterization of the probability of the conjunction of a preposition, its complement, and its governor. Better development of a desired syntagmatic characterization of preposition behavior, consistent with the principles of TNE, is still needed. 
Since preposition behavior is strongly linked to verb behavior, further effort is needed to link PDEP to PDEV. The resource will benefit from futher experimentation and evaluation stages. We expect that desired improvements will come from usage in various NLP tasks, particularly word-sense disambiguation and semantic role labeling. In particular, we anticipate that interaction with the NLP community will identify further enhancements, developments, and hints from usage. Acknowledgments Stephen Tratz (and Dirk Hovy) provided considerable assistance in using the Tratz parser. Vivek Srikumar graciously provided his data on preposition classes. Vitek Baisa similarly helped with the adaptation of the PDEV Javascript modules. Orin Hargraves, Patrick Hanks, and Eduard Hovy continued to provide valuable insights. Reviewer comments helped sharpen the draft version of the paper. References Bharat Ram Ambati, Siva Reddy, and Adam Kilgarriff. 2012. Word Sketches for Turkish. In Proceedings of the Eighth International Conference on Language Resources and Evaluation (LREC). Istanbul, 2945-2950. Aljoscha Burchardt, Katrin Erk, and Anette Frank. 2005. A WordNet Detour to FrameNet. Proceedings of GLDV workshop GermaNet II. Bonn. Silvie Cinkova, Martin Holub, Adam Rambousek, and Lenka Smejkalova. 2012. A database of semantic clusters of verb usages. Lexical Resources and Evaluation Conference. Istanbul, 3176-83. Patrick Hanks. 2004. Corpus Pattern Analysis. In EURALEX Proceedings. Vol. I, pp. 87-98. Lorient, France: Université de Bretagne-Sud. Patrick Hanks. 2013. Lexical Analysis: Norms and Exploitations. MIT Press. Adam Kilgarriff, Pavel Rychly, Pavel Smrz, and David Tugwell. 2004. The Sketch Engine. Proceedings of EURALEX. Lorient, France, pp. 105-16. Ken Litkowski. 2013a. The Preposition Project Corpora. Technical Report 13-01. Damascus, MD: CL Research. Ken Litkowski. 2013b. Preposition Disambiguation: Still a Problem. Technical Report 13-02. Damascus, MD: CL Research. Ken Litkowski and Orin Hargraves. 2005. The preposition project. ACL-SIGSEM Workshop on “The Linguistic Dimensions of Prepositions and Their Use in Computational Linguistic Formalisms and Applications”, pages 171–179. Ken Litkowski and Orin Hargraves. 2006. Coverage and Inheritance in The Preposition Project. In: Proceedings of the Third ACL-SIGSEM Workshop on Prepositions. Trento, Italy.ACL. 89-94. 1282 Ken Litkowski and Orin Hargraves. 2007. SemEval2007 Task 06: Word-Sense Disambiguation of Prepositions. In Proceedings of the 4th International Workshop on Semantic Evaluations (SemEval-2007), Prague, Czech Republic. James Pustejovsky, Catherine Havasi, Jessica Littman, Anna Rumshisky, and Marc Verhagen. 2006. Towards a Generative Lexical Resource: The Brandeis Semantic Ontology. 5th Edition of the International Conference on Lexical Resources and Evaluation., 1702-5. Randolph Quirk, Sidney Greenbaum, Geoffrey Leech, and Jan Svartvik. 1985. A Comprehensive Grammar of the English Language. New York: Longman Inc. Vivek Srikumar and Dan Roth. 2011. A Joint Model for Extended Semantic Role Labeling. In Proceedings of the 2011 Conference on Empirical Methods in Natural Language Processing. ACL, 129-139. Vivek Srikumar and Dan Roth. 2013. Modeling Semantic Relations Expressed by Prepositions. Transactions of the Association for Computational Linguistics, 1. Angus Stevenson and Catherine Soanes (Eds.). 2003. The Oxford Dictionary of English. Oxford: Clarendon Press. Stephen Tratz. 2011. 
Semantically-Enriched Parsing for Natural Language Understanding. PhD Thesis, University of Southern California. Stephen Tratz and Eduard Hovy. 2011. A Fast, Accurate, Non-Projective, Semantically-Enriched Parser. In Proceedings of the 2011 Conference on Empirical Methods in Natural Language Processing. Edinburgh, Scotland, UK. Deniz Yuret. 2007. KU: Word Sense Disambiguation by Substitution. In Proceedings of the 4th International Workshop on Semantic Evaluations (SemEval-2007), Prague, Czech Republic. B. Zapirain, E. Agirre, L. Marquez, and M. Surdeanu. 2013. Selectional Preferences for Semantic Role Classification. Computational Linguistics, 39(3).
Proceedings of the 52nd Annual Meeting of the Association for Computational Linguistics, pages 1284–1293, Baltimore, Maryland, USA, June 23-25 2014. c⃝2014 Association for Computational Linguistics Looking at Unbalanced Specialized Comparable Corpora for Bilingual Lexicon Extraction Emmanuel Morin and Amir Hazem Universit´e de Nantes, LINA UMR CNRS 6241 2 rue de la houssini`ere, BP 92208, 44322 Nantes Cedex 03, France {emmanuel.morin,amir.hazem}@univ-nantes.fr Abstract The main work in bilingual lexicon extraction from comparable corpora is based on the implicit hypothesis that corpora are balanced. However, the historical contextbased projection method dedicated to this task is relatively insensitive to the sizes of each part of the comparable corpus. Within this context, we have carried out a study on the influence of unbalanced specialized comparable corpora on the quality of bilingual terminology extraction through different experiments. Moreover, we have introduced a regression model that boosts the observations of word cooccurrences used in the context-based projection method. Our results show that the use of unbalanced specialized comparable corpora induces a significant gain in the quality of extracted lexicons. 1 Introduction The bilingual lexicon extraction task from bilingual corpora was initially addressed by using parallel corpora (i.e. a corpus that contains source texts and their translation). However, despite good results in the compilation of bilingual lexicons, parallel corpora are scarce resources, especially for technical domains and for language pairs not involving English. For these reasons, research in bilingual lexicon extraction has focused on another kind of bilingual corpora comprised of texts sharing common features such as domain, genre, sampling period, etc. without having a source text/target text relationship (McEnery and Xiao, 2007). These corpora, well known now as comparable corpora, have also initially been introduced as non-parallel corpora (Fung, 1995; Rapp, 1995), and non-aligned corpora (Tanaka and Iwasaki, 1996). According to Fung and Cheung (2004), who range bilingual corpora from parallel corpora to quasi-comparable corpora going through comparable corpora, there is a continuum from parallel to comparable corpora (i.e. a kind of filiation). The bilingual lexicon extraction task from comparable corpora inherits this filiation. For instance, the historical context-based projection method (Fung, 1995; Rapp, 1995), known as the standard approach, dedicated to this task seems implicitly to lead to work with balanced comparable corpora in the same way as for parallel corpora (i.e. each part of the corpus is composed of the same amount of data). In this paper we want to show that the assumption that comparable corpora should be balanced for bilingual lexicon extraction task is unfounded. Moreover, this assumption is prejudicial for specialized comparable corpora, especially when involving the English language for which many documents are available due the prevailing position of this language as a standard for international scientific publications. Within this context, our main contribution consists in a re-reading of the standard approach putting emphasis on the unfounded assumption of the balance of the specialized comparable corpora. In specialized domains, the comparable corpora are traditionally of small size (around 1 million words) in comparison with comparable corpus-based general language (up to 100 million words). 
Consequently, the observations of word co-occurrences which is the basis of the standard approach are unreliable. To make them more reliable, our second contribution is to contrast different regression models in order to boost the observations of word co-occurrences. This strategy allows to improve the quality of extracted bilingual lexicons from comparable corpora. 1284 2 Bilingual Lexicon Extraction In this section, we first describe the standard approach that deals with the task of bilingual lexicon extraction from comparable corpora. We then present an extension of this approach based on regression models. Finally, we discuss works related to this study. 2.1 Standard Approach The main work in bilingual lexicon extraction from comparable corpora is based on lexical context analysis and relies on the simple observation that a word and its translation tend to appear in the same lexical contexts. The basis of this observation consists in the identification of “first-order affinities” for each source and target language: “First-order affinities describe what other words are likely to be found in the immediate vicinity of a given word” (Grefenstette, 1994, p. 279). These affinities can be represented by context vectors, and each vector element represents a word which occurs within the window of the word to be translated (e.g. a seven-word window approximates syntactic dependencies). In order to emphasize significant words in the context vector and to reduce word-frequency effects, the context vectors are normalized according to an association measure. Then, the translation is obtained by comparing the source context vector to each translation candidate vector after having translated each element of the source vector with a general dictionary. The implementation of the standard approach can be carried out by applying the following three steps (Rapp, 1999; Chiao and Zweigenbaum, 2002; D´ejean et al., 2002; Morin et al., 2007; Laroche and Langlais, 2010, among others): Computing context vectors We collect all the words in the context of each word i and count their occurrence frequency in a window of n words around i. For each word i of the source and the target languages, we obtain a context vector vi which gathers the set of co-occurrence words j associated with the number of times that j and i occur together cooc(i, j). In order to identify specific words in the lexical context and to reduce wordfrequency effects, we normalize context vectors using an association score such as Mutual Information, Log-likelihood, or the discounted log-odds (LO) (Evert, 2005) (see equation 1 and Table 1 where N = a + b + c + d). Transferring context vectors Using a bilingual dictionary, we translate the elements of the source context vector. If the bilingual dictionary provides several translations for an element, we consider all of them but weight the different translations according to their frequency in the target language. Finding candidate translations For a word to be translated, we compute the similarity between the translated context vector and all target vectors through vector distance measures such as Jaccard or Cosine (see equation 2 where associ j stands for “association score”, vk is the transferred context vector of the word k to translate, and vl is the context vector of the word l in the target language). Finally, the candidate translations of a word are the target words ranked following the similarity score. 
     |        j            |        ¬j
 i   | a = cooc(i, j)      | b = cooc(i, ¬j)
 ¬i  | c = cooc(¬i, j)     | d = cooc(¬i, ¬j)
Table 1: Contingency table

LO(i, j) = \log \frac{(a + \frac{1}{2}) \times (d + \frac{1}{2})}{(b + \frac{1}{2}) \times (c + \frac{1}{2})}    (1)

Cosine(v_k, v_l) = \frac{\sum_t assoc^l_t \, assoc^k_t}{\sqrt{\sum_t (assoc^l_t)^2}\,\sqrt{\sum_t (assoc^k_t)^2}}    (2)

This approach is sensitive to the choice of parameters such as the size of the context, the choice of the association and similarity measures. The most complete study about the influence of these parameters on the quality of word alignment has been carried out by Laroche and Langlais (2010).
The standard approach is used by most researchers so far (Rapp, 1995; Fung, 1998; Peters and Picchi, 1998; Rapp, 1999; Chiao and Zweigenbaum, 2002; Déjean et al., 2002; Gaussier et al., 2004; Morin et al., 2007; Laroche and Langlais, 2010; Prochasson and Fung, 2011; Bouamor et al., 2013, among others) with the implicit hypothesis that comparable corpora are balanced. As McEnery and Xiao (2007, p. 21) observe, a specialized comparable corpus is built as balanced by analogy with a parallel corpus: "Therefore, in relation to parallel corpora, it is more likely for comparable corpora to be designed as general balanced corpora." For instance, Table 2 describes the comparable corpora used in the main work dedicated to bilingual lexicon extraction for which the ratio between the size of the source and the target texts is comprised between 1 and 1.8.

References | Domain | Languages | Source/Target Sizes
Tanaka and Iwasaki (1996) | Newspaper | EN/JP | 30/33 million words
Fung and McKeown (1997) | Newspaper | EN/JP | 49/60 million bytes of data
Rapp (1999) | Newspaper | GE/EN | 135/163 million words
Chiao and Zweigenbaum (2002) | Medical | FR/EN | 602,484/608,320 words
Déjean et al. (2002) | Medical | GE/EN | 100,000/100,000 words
Morin et al. (2007) | Medical | FR/JP | 693,666/807,287 words
Otero (2007) | European Parliament | SP/EN | 14/17 million words
Ismail and Manandhar (2010) | European Parliament | EN/SP | 500,000/500,000 sentences
Bouamor et al. (2013) | Financial | FR/EN | 402,486/756,840 words
Bouamor et al. (2013) | Medical | FR/EN | 396,524/524,805 words
Table 2: Characteristics of the comparable corpora used for bilingual lexicon extraction

In fact, the assumption that words which have the same meaning in different languages should have the same lexical context distributions does not involve working with balanced comparable corpora. To our knowledge, no attention1 has been paid to the problem of using unbalanced comparable corpora for bilingual lexicon extraction. Since the context vectors are computed from each part of the comparable corpus rather than through the parts of the comparable corpora, the standard approach is relatively insensitive to differences in corpus sizes. The only precaution for using the standard approach with unbalanced corpora is to normalize the association measure (for instance, this can be done by dividing each entry of a given context vector by the sum of its association scores).
1 We only found mention of this aspect in Diab and Finch (2000, p. 1501): "In principle, we do not have to have the same size corpora in order for the approach to work".
2.2 Prediction Model
Since comparable corpora are usually small in specialized domains (see Table 2), the discriminative power of context vectors (i.e. the observations of word co-occurrences) is reduced. One way to deal with this problem is to re-estimate co-occurrence counts by a prediction function (Hazem and Morin, 2013). This consists in assigning to each observed co-occurrence count of a small comparable corpora, a new value learned beforehand from a large training corpus.
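To make equations (1) and (2) concrete, the Python sketch below counts co-occurrences in a symmetric window, normalizes them with the discounted log-odds, and compares vectors with the cosine. It is a bare-bones illustration of the standard approach written for this description, not the authors' implementation, and it leaves out the dictionary-based transfer step.

import math
from collections import defaultdict

def cooccurrence_counts(tokens, window=3):
    """Count, for each word, how often every other word appears within +/-window positions."""
    cooc = defaultdict(lambda: defaultdict(int))
    for i, w in enumerate(tokens):
        for j in range(max(0, i - window), min(len(tokens), i + window + 1)):
            if j != i:
                cooc[w][tokens[j]] += 1
    return cooc

def marginals(cooc):
    """Per-word totals and the grand total N of Table 1."""
    totals = {w: sum(ctx.values()) for w, ctx in cooc.items()}
    return totals, sum(totals.values())

def log_odds_vector(word, cooc, totals, N):
    """Context vector of `word` normalized with the discounted log-odds (equation 1)."""
    vec = {}
    for ctx, a in cooc[word].items():
        b = totals[word] - a
        c = totals[ctx] - a
        d = N - a - b - c
        vec[ctx] = math.log(((a + 0.5) * (d + 0.5)) / ((b + 0.5) * (c + 0.5)))
    return vec

def cosine(v1, v2):
    """Cosine similarity between two sparse association vectors (equation 2)."""
    num = sum(v1[t] * v2[t] for t in v1.keys() & v2.keys())
    den = math.sqrt(sum(x * x for x in v1.values())) * math.sqrt(sum(x * x for x in v2.values()))
    return num / den if den else 0.0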
In order to make co-occurrence counts more discriminant and in the same way as Hazem and Morin (2013), one strategy consists in addressing this problem through regression: given training corpora of small and large size (abundant in the general domain), we predict word co-occurrence counts in order to make them more reliable. We then apply the resulting regression function to each word co-occurrence count as a pre-processing step of the standard approach. Our work differs from Hazem and Morin (2013) in two ways. First, while they experimented with the linear regression model, we propose to contrast different regression models. Second, we apply regression to unbalanced comparable corpora and study the impact of prediction when applied to the source texts, the target texts and both source and target texts of the used comparable corpora.
We use regression analysis to describe the relationship between word co-occurrence counts in a large corpus (the response variable) and word co-occurrence counts in a small corpus (the predictor variable). As most regression models have already been described in great detail (Christensen, 1997; Agresti, 2007), the derivation of most models is only briefly introduced in this work. As we cannot claim that the prediction of word co-occurrence counts is a linear problem, we consider, in addition to the simple linear regression model (Lin), a generalized linear model, namely the logistic regression model (Logit), and non-linear regression models such as the polynomial regression model (Polyn) of order n. Given an input vector x ∈ R^m, where x_1, ..., x_m represent features, we find a prediction ŷ ∈ R^m for the co-occurrence count of a couple of words y ∈ R using one of the regression models presented below:

\hat{y}_{Lin} = \beta_0 + \beta_1 x    (3)

\hat{y}_{Logit} = \frac{1}{1 + \exp(-(\beta_0 + \beta_1 x))}    (4)

\hat{y}_{Poly_n} = \beta_0 + \beta_1 x + \beta_2 x^2 + \ldots + \beta_n x^n    (5)

where the \beta_i are the parameters to estimate. Let us denote by f the regression function and by cooc(w_i, w_j) the co-occurrence count of the words w_i and w_j. The resulting predicted value of cooc(w_i, w_j), noted \widehat{cooc}(w_i, w_j), is given by the following equation:

\widehat{cooc}(w_i, w_j) = f(cooc(w_i, w_j))    (6)

2.3 Related Work
In the past few years, several contributions have been proposed to improve each step of the standard approach. Prochasson et al. (2009) enhance the representativeness of the context vector by strengthening the context words that happen to be transliterated words and scientific compound words in the target language. Ismail and Manandhar (2010) also suggest that context vectors should be based on the most important contextually relevant words (in-domain terms), and thus propose a method for filtering the noise of the context vectors. In another way, Rubino and Linarès (2011) improve the context words based on the hypothesis that a word and its candidate translations share thematic similarities. Yu and Tsujii (2009) and Otero (2007) propose, for their part, to replace the window-based method by a syntax-based method in order to improve the representation of the lexical context. To improve the step of transferring context vectors, and to increase the number of elements of translated context vectors, Chiao and Zweigenbaum (2003) and Morin and Prochasson (2011) combine a standard general language dictionary with a specialized dictionary, whereas Déjean et al. (2002) use the hierarchical properties of a specialized thesaurus. Koehn and Knight (2002) automatically induce the initial seed bilingual dictionary by using identical spelling features such as cognates and similar contexts.
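A lightweight way to fit the prediction functions of equations (3) and (5) is sketched below with NumPy least squares, using counts observed for the same word pairs in a small and a large training corpus; the logistic variant of equation (4) is omitted for brevity. This is our own minimal sketch, not the authors' code, and the training numbers shown are toy values.

import numpy as np

def fit_linear(x, y):
    """Least-squares fit of y ~ b0 + b1*x (equation 3); returns a prediction function."""
    b1, b0 = np.polyfit(x, y, deg=1)
    return lambda count: b0 + b1 * count

def fit_polynomial(x, y, degree=2):
    """Polynomial fit of order n (equation 5), e.g. degree=2 for Poly2."""
    coeffs = np.polyfit(x, y, deg=degree)          # highest power first
    return lambda count: float(np.polyval(coeffs, count))

# x: co-occurrence counts of word pairs in a small training corpus
# y: counts of the same pairs in a large training corpus (toy numbers for illustration)
x = np.array([1, 2, 3, 5, 8, 13], dtype=float)
y = np.array([3, 7, 11, 19, 30, 52], dtype=float)
predict = fit_polynomial(x, y, degree=2)

# Equation (6): replace each observed count by its predicted value
# cooc_hat = {pair: predict(count) for pair, count in cooc_small.items()}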
As regards the problem of word ambiguities, Bouamor et al. (2013) carried out a word sense disambiguation process only in the target language, whereas Gaussier et al. (2004) solve the problem through the source and target languages by using approaches based on CCA (Canonical Correlation Analysis) and multilingual PLSA (Probabilistic Latent Semantic Analysis).
The rank of candidate translations can be improved by integrating different heuristics. For instance, Chiao and Zweigenbaum (2002) introduce a heuristic based on word distribution symmetry. From the ranked list of candidate translations, the standard approach is applied in the reverse direction to find the source counterparts of the first target candidate translations. Then only the target candidate translations that had the initial source word among the first reverse candidate translations are kept. Laroche and Langlais (2010) suggest a heuristic based on the graphic similarity between source and target terms. Here, candidate translations which are cognates of the word to be translated are ranked first among the list of translation candidates.
3 Linguistic Resources
In this section, we outline the different textual resources used for our experiments: the comparable corpora, the bilingual dictionary and the terminology reference lists.
3.1 Specialized Comparable Corpora
For our experiments, we used two specialized French/English comparable corpora:
Breast cancer corpus This comparable corpus is composed of documents collected from the Elsevier website.2 The documents were taken from the medical domain, within the subdomain of "breast cancer". We automatically selected the documents published between 2001 and 2008 where the title or the keywords contain the term cancer du sein in French and breast cancer in English. We collected 130 French documents (about 530,000 words) and 1,640 English documents (about 7.4 million words). We split the English documents into 14 parts, each containing about 530,000 words.
Diabetes corpus The documents making up the French part of the comparable corpus were crawled from the web using three keywords: diabète (diabetes), alimentation (food), and obésité (obesity). After a manual selection, we only kept the documents related to the medical domain. As a result, 65 French documents were extracted (about 257,000 words). The English part was extracted from the medical website PubMed3 using the keywords diabetes, nutrition and feeding. We only kept the documents whose full text was freely available. As a result, 2,339 English documents were extracted (about 3.5 million words). We also split the English documents into 14 parts, each containing about 250,000 words.
The French and English documents were then normalised through the following linguistic preprocessing steps: tokenisation, part-of-speech tagging, and lemmatisation. These steps were carried out using the TTC TermSuite,4 which applies the same method to several languages including French and English. Finally, the function words were removed and the words occurring less than twice in the French part and in each English part were discarded. Table 3 shows the number of distinct words (# words) after these steps. It also indicates the comparability degree in percentage (comp.) between the French part and each English part of each comparable corpus.
2 http://www.elsevier.com
3 http://www.ncbi.nlm.nih.gov/pubmed/
4 http://code.google.com/p/ttc-project
The comparability measure (Li and Gaussier, 2010) is based on the expectation of finding the translation for each word in the corpus and gives a good idea about how comparable two corpora are. We can notice that all the comparable corpora have a high degree of comparability, with a better comparability of the breast cancer corpora as opposed to the diabetes corpora. In the remainder of this article, [breast cancer corpus i], for instance, stands for the breast cancer comparable corpus composed of the unique French part and the English part i (i ∈ [1, 14]).

Part | Breast cancer # words (comp.) | Diabetes # words (comp.)
French Part 1 | 7,376 | 4,982
English Part 1 | 8,214 (79.2) | 5,181 (75.2)
English Part 2 | 7,788 (78.8) | 5,446 (75.9)
English Part 3 | 8,370 (78.8) | 5,610 (76.6)
English Part 4 | 7,992 (79.3) | 5,426 (74.8)
English Part 5 | 7,958 (78.7) | 5,610 (75.0)
English Part 6 | 8,230 (79.1) | 5,719 (73.6)
English Part 7 | 8,035 (78.3) | 5,362 (75.6)
English Part 8 | 8,008 (78.8) | 5,432 (74.6)
English Part 9 | 8,334 (79.6) | 5,398 (74.2)
English Part 10 | 7,978 (79.1) | 5,059 (75.6)
English Part 11 | 8,373 (79.4) | 5,264 (74.9)
English Part 12 | 8,065 (78.9) | 4,644 (73.4)
English Part 13 | 7,847 (80.0) | 5,369 (74.8)
English Part 14 | 8,457 (78.9) | 5,669 (74.8)
Table 3: Number of distinct words (# words) and degree of comparability (comp.) for each comparable corpus

3.2 Bilingual Dictionary
The bilingual dictionary used in our experiments is the French/English dictionary ELRA-M0033, available from the ELRA catalogue.5 This resource is a general language dictionary which contains only a few terms related to the medical domain.
3.3 Terminology Reference Lists
To evaluate the quality of terminology extraction, we built a bilingual terminology reference list for each comparable corpus. We selected all French/English single words from the UMLS6 meta-thesaurus. We kept only i) the French single words which occur more than four times in the French part and ii) the English single words which occur more than four times in each English part i.7 As a result of filtering, 169 French/English single words were extracted for the breast cancer corpus and 244 French/English single words were extracted for the diabetes corpus. It should be noted that the evaluation of terminology extraction using specialized comparable corpora often relies on lists of a small size: 95 single words in Chiao and Zweigenbaum (2002), 100 in Morin et al. (2007), 125 and 79 in Bouamor et al. (2013).
5 http://www.elra.info/
6 http://www.nlm.nih.gov/research/umls
7 The threshold of four is required to build a bilingual terminology reference list composed of about a hundred words. This value is very low for obtaining representative context vectors. For instance, Prochasson and Fung (2011) showed that the standard approach is not relevant for infrequent words (since the context vectors are very unrepresentative, i.e. poor in information).

Breast cancer corpus | 1 | 2 | 3 | 4 | 5 | 6 | 7 | 8 | 9 | 10 | 11 | 12 | 13 | 14
Balanced | 26.1 | 26.2 | 21.0 | 27.0 | 22.8 | 27.1 | 26.3 | 25.8 | 29.2 | 23.3 | 21.7 | 29.6 | 29.1 | 26.1
Unbalanced | 26.1 | 31.9 | 34.7 | 36.0 | 37.7 | 36.4 | 36.6 | 37.2 | 39.8 | 40.5 | 40.6 | 42.3 | 40.9 | 41.6
Diabetes corpus | 1 | 2 | 3 | 4 | 5 | 6 | 7 | 8 | 9 | 10 | 11 | 12 | 13 | 14
Balanced | 13.6 | 13.5 | 11.9 | 14.6 | 14.6 | 11.0 | 16.5 | 10.5 | 12.9 | 13.3 | 15.2 | 11.8 | 13.0 | 14.3
Unbalanced | 13.6 | 17.5 | 18.9 | 21.2 | 23.4 | 23.8 | 24.8 | 24.7 | 24.7 | 24.4 | 24.8 | 25.2 | 26.0 | 24.9
Table 4: Results (MAP %) of the standard approach using the balanced and unbalanced comparable corpora
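Stepping back to the reference lists of Section 3.3, the filtering that produces the 169 and 244 pairs can be expressed directly as a frequency test over the UMLS candidate pairs. The sketch below is our reading of that filter (a count of "more than four" is taken as at least five), not the authors' script.

def build_reference_list(pairs, freq_fr, freq_en_parts, min_count=5):
    """`pairs` are French/English single-word pairs taken from the UMLS meta-thesaurus;
    `freq_fr` maps French words to their counts in the French part;
    `freq_en_parts` is a list of such maps, one per English part."""
    kept = []
    for fr, en in pairs:
        if (freq_fr.get(fr, 0) >= min_count
                and all(part.get(en, 0) >= min_count for part in freq_en_parts)):
            kept.append((fr, en))
    return kept

# With the corpora described above, this kind of filter yields 169 pairs for the
# breast cancer corpus and 244 pairs for the diabetes corpus.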
4 Experiments and Results
In this section, we present experiments to evaluate the influence of comparable corpus size and prediction models on the quality of bilingual terminology extraction. We present the results obtained for the terms belonging to the reference list for English to French direction measured in terms of the Mean Average Precision (MAP) (Manning et al., 2008) as follows:

MAP(Ref) = \frac{1}{|Ref|} \sum_{i=1}^{|Ref|} \frac{1}{r_i}    (7)

where |Ref| is the number of terms of the reference list and r_i the rank of the correct candidate translation i.
4.1 Standard Approach Evaluation
In order to evaluate the influence of corpus size on the bilingual terminology extraction task, two experiments have been carried out using the standard approach. We first performed an experiment using each comparable corpus independently of the others (we refer to these corpora as balanced corpora). We then conducted a second experiment where we varied the size of the English part of the comparable corpus, from 530,000 to 7.4 million words for the breast cancer corpus in 530,000 words steps, and from 250,000 to 3.5 million words for the diabetes corpus in 250,000 words steps (we refer to these corpora as unbalanced corpora). In the experiments reported here, the size of the context window w was set to 3 (i.e. a seven-word window that approximates syntactic dependencies), the retained association and similarity measures were the discounted log-odds and the Cosine (see Section 2.1). The results shown were those that give the best performance for the comparable corpora used individually.
Table 4 shows the results of the standard approach on the balanced and the unbalanced breast cancer and diabetes comparable corpora. Each column corresponds to the English part i (i ∈ [1, 14]) of a given comparable corpus. The first line presents the results for each individual comparable corpus and the second line presents the results for the cumulative comparable corpus. For instance, the column 3 indicates the MAP obtained by using a comparable corpus that is composed i) only of [breast cancer corpus 3] (MAP of 21.0%), and ii) of [breast cancer corpus 1, 2 and 3] (MAP of 34.7%).
As a preliminary remark, we can notice that the results differ noticeably according to the comparable corpus used individually (MAP variation between 21.0% and 29.6% for the breast cancer corpora and between 10.5% and 16.5% for the diabetes corpora). We can also note that the MAP of all the unbalanced comparable corpora is always higher than any individual comparable corpus. Overall, starting with a MAP of 26.1% as provided by the balanced [breast cancer corpus 1], we are able to increase it to 42.3% with the unbalanced [breast cancer corpus 12] (the variation observed for some unbalanced corpora such as [diabetes corpus 12, 13 and 14] can be explained by the fact that adding more data in the source language increases the error rate of the translation phase of the standard approach, which leads to the introduction of additional noise in the translated context vectors).
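Equation (7) is just an average of reciprocal ranks over the reference list; a direct implementation is given below (our own sketch). How reference terms whose correct translation is never returned should be scored is not specified above, so the sketch makes the common choice of counting them as zero.

def mean_average_precision(ranks):
    """Equation (7): `ranks` maps each reference term to the rank r_i of its correct
    translation, or None if the correct translation was not returned at all
    (counted as contributing 0, an assumption made explicit here)."""
    if not ranks:
        return 0.0
    return sum(1.0 / r for r in ranks.values() if r) / len(ranks)

# Example with three reference terms whose correct translations are ranked 1, 4,
# and not found: (1 + 0.25 + 0) / 3 = 0.4167
# mean_average_precision({"cancer": 1, "sein": 4, "tumeur": None})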
Balanced breast cancer corpus | 1 | 2 | 3 | 4 | 5 | 6 | 7 | 8 | 9 | 10 | 11 | 12 | 13 | 14
No prediction | 26.1 | 26.2 | 21.0 | 27.0 | 22.8 | 27.1 | 26.3 | 25.8 | 29.2 | 23.3 | 21.7 | 29.6 | 29.1 | 26.1
Sourcepred | 26.5 | 26.0 | 23.0 | 30.0 | 25.4 | 30.1 | 28.3 | 29.4 | 32.1 | 24.9 | 24.4 | 30.5 | 30.1 | 29.0
Targetpred | 19.5 | 20.0 | 17.2 | 23.4 | 19.9 | 23.1 | 21.4 | 21.6 | 24.1 | 19.3 | 18.1 | 26.6 | 24.3 | 22.6
Sourcepred + Targetpred | 23.9 | 21.9 | 20.5 | 25.8 | 23.5 | 25.3 | 24.1 | 26.1 | 27.4 | 22.5 | 21.0 | 25.6 | 28.5 | 24.6
Balanced diabetes corpus | 1 | 2 | 3 | 4 | 5 | 6 | 7 | 8 | 9 | 10 | 11 | 12 | 13 | 14
No prediction | 13.6 | 13.5 | 11.9 | 14.6 | 14.6 | 11.0 | 16.5 | 10.5 | 12.9 | 13.3 | 15.2 | 11.8 | 13.0 | 14.3
Sourcepred | 13.9 | 14.3 | 12.6 | 15.5 | 14.9 | 10.9 | 17.6 | 11.1 | 14.0 | 14.2 | 16.4 | 13.3 | 13.5 | 15.7
Targetpred | 09.8 | 09.0 | 08.3 | 11.9 | 10.1 | 08.0 | 15.9 | 07.3 | 10.8 | 10.0 | 10.1 | 08.8 | 10.8 | 10.2
Sourcepred + Targetpred | 10.9 | 11.0 | 09.0 | 13.6 | 11.8 | 08.6 | 15.4 | 07.7 | 12.8 | 11.5 | 11.9 | 10.5 | 11.7 | 11.8
Table 5: Results (MAP %) of the standard approach using the Lin regression model on the balanced breast cancer and diabetes corpora (comparison of predicting the source side, the target side and both sides of the comparable corpora)

4.2 Prediction Evaluation
The aim of this experiment is two-fold: first, we want to evaluate the usefulness of predicting word co-occurrence counts and, second, we want to find out whether it is more appropriate to apply prediction to the source side, the target side or both sides of the bilingual comparable corpora.

Model | Breast cancer | Diabetes
No prediction | 29.6 | 16.5
Lin | 30.5 | 17.6
Poly2 | 30.6 | 17.5
Poly3 | 30.4 | 17.6
Logit | 22.3 | 13.6
Table 6: Results (MAP %) of the standard approach using different regression models on the balanced breast cancer and diabetes corpora

4.2.1 Regression Models Comparison
We contrast the prediction models presented in Section 2.2 to find out which is the most appropriate model to use as a pre-processing step of the standard approach. We chose the balanced corpora where the standard approach has shown the best results in the previous experiment, namely [breast cancer corpus 12] and [diabetes corpus 7]. Table 6 shows a comparison between the standard approach without prediction, noted No prediction, and the standard approach with prediction models. We contrast the simple linear regression model (Lin) with the second and the third order polynomial regressions (Poly2 and Poly3) and the logistic regression model (Logit). We can notice that, except for the Logit model, all the regression models outperform the baseline (No prediction). Also, as we can see, the results obtained with the linear and polynomial regressions are very close. This suggests that both linear and polynomial regressions are suitable as a preprocessing step of the standard approach, while the logistic regression seems to be inappropriate according to the results shown in Table 6. That said, the gain of regression models is not significant. This may be due to the regression parameters having been learned from a training corpus of the general domain. Another reason that could explain these results is the prediction process. We applied the same regression function to all co-occurrence counts, while learning models for low and high frequencies should have been more appropriate. In the light of the above results, we believe that prediction can be beneficial to our task.
4.2.2 Source versus Target Prediction
Table 5 shows a comparison between the standard approach without prediction, noted No prediction, and the standard approach based on the prediction of the source side, noted Sourcepred, the target side, noted Targetpred, and both sides, noted Sourcepred+Targetpred.
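The three configurations compared in Table 5 only differ in where the fitted regression function is applied before the standard approach is run. The Python sketch below (our illustration, reusing a predict function like the one fitted after equation (5)) makes that explicit.

def apply_prediction(cooc, predict):
    """Replace every observed co-occurrence count by its predicted value (equation 6)."""
    return {w: {ctx: predict(n) for ctx, n in ctxs.items()} for w, ctxs in cooc.items()}

def build_count_tables(cooc_src, cooc_tgt, predict, mode="source"):
    """Return the (source, target) co-occurrence tables for one configuration:
    'source' = Sourcepred, 'target' = Targetpred, 'both' = Sourcepred + Targetpred."""
    src = apply_prediction(cooc_src, predict) if mode in ("source", "both") else cooc_src
    tgt = apply_prediction(cooc_tgt, predict) if mode in ("target", "both") else cooc_tgt
    return src, tgt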
While prediction cannot replace a large amount of data, it aims at increasing co-occurrence counts as if large amounts of data were at our disposal. In this case, applying prediction to the source side may simulate a configuration of using unbalanced comparable corpora where the source side is n times bigger than the target side. Predicting the target side only may lead us to the opposite configuration, where the target side is n times bigger than the source side. Finally, predicting both sides may simulate a large comparable corpus on both sides. In this experiment, we chose to use the linear regression model (Lin) for the prediction part. That said, the other regression models have shown the same behavior as Lin. We can see that the best results are obtained by the Sourcepred approach for both comparable corpora. We can also notice that predicting the target side and both sides of the comparable corpora degrades the results. It is not surprising that predicting the target side only leads to lower results, since it is well known that a better characterization of a word to translate (given from the source side) leads to better results. We can deduce from Table 5 that source prediction is the most appropriate configuration to improve the quality of extracted lexicons. This configuration, which simulates the use of unbalanced corpora, leads us to think that using prediction with unbalanced comparable corpora should also increase the performance of the standard approach. This assumption is evaluated in the next subsection.
4.3 Predicting Unbalanced Corpora
In this last experiment we contrast the standard approach applied to the balanced and unbalanced corpora, noted Balanced and Unbalanced, with the standard approach combined with the prediction model, noted Balanced + Prediction and Unbalanced + Prediction.
[Figure 1: Results (MAP %) of the standard approach using the best configurations of the prediction models (Lin for Balanced + Prediction and Poly2 for Unbalanced + Prediction) on the breast cancer (a) and the diabetes (b) corpora; MAP (%) is plotted against the [English-i]-French corpus, i = 1..14, for the Balanced, Balanced+Prediction, Unbalanced and Unbalanced+Prediction approaches.]
Figure 1(a) illustrates the results of the experiments conducted on the breast cancer corpus. We can see that the Unbalanced approach significantly outperforms the baseline (Balanced). The big difference between the Balanced and the Unbalanced approaches would indicate that the latter is optimal. We can also notice that the prediction model applied to the balanced corpus (Balanced + Prediction) slightly outperforms the baseline, while the Unbalanced + Prediction approach significantly outperforms the three other approaches (moreover, the variations observed with the Unbalanced approach are lower than with the Unbalanced + Prediction approach). Overall, the prediction increases the performance of the standard approach, especially for unbalanced corpora.
The results of the experiments conducted on the diabetes corpus are shown in Figure 1(b). As for the previous experiment, we can see that the Unbalanced approach significantly outperforms the Balanced approach. This confirms the unbalanced hypothesis and would motivate the use of unbalanced corpora when they are available.
We can also notice that the Balanced + Prediction approach slightly outperforms the baseline while the Unbalanced+Prediction approach gives the best results. Here also, the prediction increases the performance of the standard approach especially for unbalanced corpora. It is clear that in addition to the benefit of using unbalanced comparable 1291 corpora, prediction shows a positive impact on the performance of the standard approach. 5 Conclusion In this paper, we have studied how an unbalanced specialized comparable corpus could influence the quality of the bilingual lexicon extraction. This aspect represents a significant interest when working with specialized comparable corpora for which the quantity of the data collected may differ depending on the languages involved, especially when involving the English language as many scientific documents are available. More precisely, our different experiments show that using an unbalanced specialized comparable corpus always improves the quality of word translations. Thus, the MAP goes up from 29.6% (best result on the balanced corpora) to 42.3% (best result on the unbalanced corpora) in the breast cancer domain, and from 16.5% to 26.0% in the diabetes domain. Additionally, these results can be improved by using a prediction model of the word co-occurrence counts. Here, the MAP goes up from 42.3% (best result on the unbalanced corpora) to 46.9% (best result on the unbalanced corpora with prediction) in the breast cancer domain, and from 26.0% to 29.8% in the diabetes domain. We hope that this study will pave the way for using specialized unbalanced comparable corpora for bilingual lexicon extraction. Acknowledgments This work is supported by the French National Research Agency under grant ANR-12-CORD-0020. References Alan Agresti. 2007. An Introduction to Categorical Data Analysis (2nd ed.). Wiley & Sons, Inc., Hoboken, New Jersey. Dhouha Bouamor, Nasredine Semmar, and Pierre Zweigenbaum. 2013. Context vector disambiguation for bilingual lexicon extraction from comparable corpora. In Proceedings of the 51st Annual Meeting of the Association for Computational Linguistics (ACL’13), pages 759–764, Sofia, Bulgaria. Yun-Chuang Chiao and Pierre Zweigenbaum. 2002. Looking for candidate translational equivalents in specialized, comparable corpora. In Proceedings of the 19th International Conference on Computational Linguistics (COLING’02), pages 1208–1212, Tapei, Taiwan. Yun-Chuang Chiao and Pierre Zweigenbaum. 2003. The Effect of a General Lexicon in Corpus-Based Identification of French-English Medical Word Translations. In The New Navigators: from Professionals to Patients, Actes Medical Informatics Europe, pages 397–402. Ronald Christensen. 1997. Log-Linear Models and Logistic Regression. Springer-Verlag, Berlin. Herv´e D´ejean, Fatia Sadat, and ´Eric Gaussier. 2002. An approach based on multilingual thesauri and model combination for bilingual lexicon extraction. In Proceedings of the 19th International Conference on Computational Linguistics (COLING’02), pages 218–224, Tapei, Taiwan. Mona T. Diab and Steve Finch. 2000. A Statistical Word-Level Translation Model for Comparable Corpora. In Proceedings of the 6th International Conference on Computer-Assisted Information Retrieval (RIAO’00), pages 1500–1501, Paris, France. Stefan Evert. 2005. The Statistics of Word Cooccurrences: Word Pairs and Collocations. Ph.D. thesis, Universit¨at Stuttgart, Germany. Pascale Fung and Percy Cheung. 2004. 
Multilevel bootstrapping for extracting parallel sentences from a quasi-comparable corpus. In Proceedings of the 20th International Conference on Computational Linguistics (COLING’04), pages 1051–1057, Geneva, Switzerland. Pascale Fung and Kathleen McKeown. 1997. Finding Terminology Translations from Non-parallel Corpora. In Proceedings of the 5th Annual Workshop on Very Large Corpora (VLC’97), pages 192–202, Hong Kong. Pascale Fung. 1995. Compiling Bilingual Lexicon Entries from a non-Parallel English-Chinese Corpus. In Proceedings of the 3rd Annual Workshop on Very Large Corpora (VLC’95), pages 173–183, Cambridge, MA, USA. Pascale Fung. 1998. A Statistical View on Bilingual Lexicon Extraction: From Parallel Corpora to Non-parallel Corpora. In David Farwell, Laurie Gerber, and Eduard Hovy, editors, Proceedings of the 3rd Conference of the Association for Machine Translation in the Americas (AMTA’98), pages 1– 16, Langhorne, PA, USA. ´Eric Gaussier, Jean-Michel Renders, Irena Matveeva, Cyril Goutte, and Herv´e D´ejean. 2004. A Geometric View on Bilingual Lexicon Extraction from Comparable Corpora. In Proceedings of the 42nd Annual Meeting of the Association for Computational Linguistics (ACL’04), pages 526–533, Barcelona, Spain. Gregory Grefenstette. 1994. Corpus-Derived First, Second and Third-Order Word Affinities. In Proceedings of the 6th Congress of the European Association for Lexicography (EURALEX’94), pages 279–290, Amsterdam, The Netherlands. 1292 Amir Hazem and Emmanuel Morin. 2013. Word co-occurrence counts prediction for bilingual terminology extraction from comparable corpora. In Proceedings of the Sixth International Joint Conference on Natural Language Processing (IJCNLP’13), pages 1392–1400, Nagoya, Japan. Azniah Ismail and Suresh Manandhar. 2010. Bilingual lexicon extraction from comparable corpora using in-domain terms. In Proceedings of the 23rd International Conference on Computational Linguistics (COLING’10), pages 481–489, Beijing, China. Philipp Koehn and Kevin Knight. 2002. Learning a translation lexicon from monolingual corpora. In Proceedings of the ACL-02 Workshop on Unsupervised Lexical Acquisition (ULA’02), pages 9–16, Philadelphia, PA, USA. Audrey Laroche and Philippe Langlais. 2010. Revisiting Context-based Projection Methods for TermTranslation Spotting in Comparable Corpora. In Proceedings of the 23rd International Conference on Computational Linguistics (COLING’10), pages 617–625, Beijing, China. Bo Li and ´Eric Gaussier. 2010. Improving corpus comparability for bilingual lexicon extraction from comparable corpora. In Proceedings of the 23rd International Conference on Computational Linguistics (COLING’10), pages 644–652, Beijing, China. Christopher D. Manning, Prabhakar Raghavan, and Hinrich Schtze. 2008. Introduction to Information Retrieval. Cambridge University Press, New York, NY, USA. Anthony McEnery and Zhonghua Xiao. 2007. Parallel and comparable corpora: What are they up to? In Gunilla Anderman and Margaret Rogers, editors, Incorporating Corpora: Translation and the Linguist, Multilingual Matters, chapter 2, pages 18–31. Clevedon, UK. Emmanuel Morin and Emmanuel Prochasson. 2011. Bilingual lexicon extraction from comparable corpora enhanced with parallel corpora. In Proceedings of the 4th Workshop on Building and Using Comparable Corpora (BUCC’11), pages 27–34, Portland, OR, USA. Emmanuel Morin, B´eatrice Daille, Koichi Takeuchi, and Kyo Kageura. 2007. Bilingual Terminology Mining – Using Brain, not brawn comparable corpora. 
2014
121
Proceedings of the 52nd Annual Meeting of the Association for Computational Linguistics, pages 1294–1304, Baltimore, Maryland, USA, June 23-25 2014. c⃝2014 Association for Computational Linguistics Validating and Extending Semantic Knowledge Bases using Video Games with a Purpose Daniele Vannella, David Jurgens, Daniele Scarfini, Domenico Toscani and Roberto Navigli Department of Computer Science Sapienza University of Rome [email protected] Abstract Large-scale knowledge bases are important assets in NLP. Frequently, such resources are constructed through automatic mergers of complementary resources, such as WordNet and Wikipedia. However, manually validating these resources is prohibitively expensive, even when using methods such as crowdsourcing. We propose a cost-effective method of validating and extending knowledge bases using video games with a purpose. Two video games were created to validate conceptconcept and concept-image relations. In experiments comparing with crowdsourcing, we show that video game-based validation consistently leads to higher-quality annotations, even when players are not compensated. 1 Introduction Large-scale knowledge bases are an essential component of many approaches in Natural Language Processing (NLP). Semantic knowledge bases such as WordNet (Fellbaum, 1998), YAGO (Suchanek et al., 2007), and BabelNet (Navigli and Ponzetto, 2010) provide ontological structure that enables a wide range of tasks, such as measuring semantic relatedness (Budanitsky and Hirst, 2006) and similarity (Pilehvar et al., 2013), paraphrasing (Kauchak and Barzilay, 2006), and word sense disambiguation (Navigli and Ponzetto, 2012; Moro et al., 2014). Furthermore, such knowledge bases are essential for building unsupervised algorithms when training data is sparse or unavailable. However, constructing and updating semantic knowledge bases is often limited by the significant time and human resources required. Recent approaches have attempted to build or extend these knowledge bases automatically. For example, Snow et al. (2006) and Navigli (2005) extend WordNet using distributional or structural features to identify novel semantic connections between concepts. The recent advent of large semi-structured resources has enabled the creation of new semantic knowledge bases (Medelyan et al., 2009; Hovy et al., 2013) through automatically merging WordNet and Wikipedia (Suchanek et al., 2007; Navigli and Ponzetto, 2010; Niemann and Gurevych, 2011). While these automatic approaches offer the scale needed for opendomain applications, the automatic processes often introduce errors, which can prove detrimental to downstream applications. To overcome issues from fully-automatic construction methods, several works have proposed validating or extending knowledge bases using crowdsourcing (Biemann and Nygaard, 2010; Eom et al., 2012; Sarasua et al., 2012). However, these methods, too, are limited by the resources required for acquiring large numbers of responses. In this paper, we propose validating and extending semantic knowledge bases using video games with a purpose. Here, the annotation tasks are transformed into elements of a video game where players accomplish their jobs by virtue of playing the game, rather than by performing a more traditional annotation task. 
While prior efforts in NLP have incorporated games for performing annotation and validation (Siorpaes and Hepp, 2008b; Herda˘gdelen and Baroni, 2012; Poesio et al., 2013), these games have largely been text-based, adding game-like features such as high-scores on top of an existing annotation task. In contrast, we introduce two video games with graphical 2D gameplay that is similar to what game players are familiar with. The fun nature of the games provides an intrinsic motivation for players to keep playing, which can increase the quality of their work and lower the cost per annotation. Our work provides the following three contributions. First, we demonstrate effective video gamebased methods for both validating and extending 1294 semantic networks, using two games that operate on complementary sources of information: semantic relations and sense-image mappings. In contrast to previous work, the annotation quality is determined in a fully automatic way. Second, we demonstrate that converting games with a purpose into more traditional video games creates an increased player incentive such that players annotate for free, thereby significantly lowering annotation costs below that of crowdsourcing. Third, for both games, we show that games produce better quality annotations than crowdsourcing. 2 Related Work Multiple works have proposed linguistic annotation-based games with a purpose for tasks such as anaphora resolution (Hladk´a et al., 2009; Poesio et al., 2013), paraphrasing (Chklovski and Gil, 2005), term associations (Artignan et al., 2009; Lafourcade and Joubert, 2010), query expansion (Simko et al., 2011), and word sense disambiguation (Chklovski and Mihalcea, 2002; Seemakurty et al., 2010; Venhuizen et al., 2013). Notably, all of these linguistic games focus on users interacting with text, in contrast to other highly successful games with a purpose in other domains, such as Foldit (Cooper et al., 2010), in which players fold protein sequences, and the ESP game (von Ahn and Dabbish, 2004), where players label images with words. Most similar to our work are games that create or validate common sense knowledge. Two games with a purpose have incorporated video gamelike mechanics for annotation. First, Herda˘gdelen and Baroni (2012) validate automatically acquired common sense relations using a slot machine game where players must identify valid relations and arguments from randomly aligned data within a time limit. Although the validation is embedded in a game-like setting, players are limited to one action (pulling the lever) unlike our games, which feature a variety of actions and rich gameplay experience to keep players interested longer. Second, Kuo et al. (2009) describe a pet-raising game where players must answer common sense questions in order to obtain pet food. While their game is among the most video game-like, the annotation task is a chore the player must perform in order to return to the game, rather than an integrated, fun part of the game’s objectives, which potentially decreases motivation for answering correctly. Several works have proposed adapting existing word-based board game designs to create or validate common sense knowledge. von Ahn et al. (2006) generate common sense facts by using a game similar to TabooTM, where one player must list facts about a computer-selected lemma and a second player must guess the original lemma having seen only the facts. Similarly, Vickrey et al. 
(2008) gather free associations to a target word with the constraint, similar to TabooTM, where players cannot enter a small set of banned words. Vickrey et al. (2008) also present two games similar to the ScattergoriesTM, where players are given a category and then must list things in that category. The two variants differ in the constraints imposed on the players, such as beginning all items with a specific letter. For all three games, two players play the same game under time limits and then are rewarded if their answers match. Last, three two-player games have focused on validating and extending knowledge bases. Rzeniewicz and Szyma´nski (2013) extend WordNet with common-sense knowledge using a 20 Questions-like game. In a rapid-play style game, OntoPronto attempts to classify Wikipedia pages as either categories or individuals (Siorpaes and Hepp, 2008a). SpotTheLink uses a similar rapid question format to have players align the DBpedia and PROTON ontologies by agreeing on the distinctions between classes (Thaler et al., 2011). Unlike dynamic gaming elements common in our video games, the above games are all focused on interacting with textual items. Another major limitation is their need for always having two players, which requires them to sustain enough interest to always maintain an active pool of players. While the computer can potentially act as a second player, such a simulated player is often limited to using preexisting knowledge or responses, which makes it difficult to validate new types of entities or create novel answers. In contrast, we drop this requirement thanks to a new strategy for assigning confidence scores to the annotations based on negative associations. 3 Video Game with a Purpose Design To create video games, our development process focused on a common design philosophy and a common data set. 3.1 Design Objectives Three design objectives were used to develop the video games. First, the annotation task should be a central and natural action with familiar video game mechanics. That is, the annotation should 1295 be supplied by common actions such as collecting items, puzzles, or destroying objects, rather than through extrinsic tasks that players must complete in order to return to the game. This design has the benefits of (1) growing the annotator pool with video games players, and (2) potentially increasing annotator enjoyment. Second, the game should be playable by a single player, with reinforcement for correct game play coming from gold standard examples.1 We note that gold standard examples may come from both true positive and true negative items. Third, the game design should be sufficiently general to annotate a variety of linguistic phenomena, such that only the game data need be changed to accomplish a different annotation task. While some complex linguistic annotation tasks such as preposition attachment may be difficult to integrate directly into gameplay, many simpler but still necessary annotation tasks such as word and image associations can be easily modeled with traditional video game mechanics. 3.2 Annotation Setup Tasks We focused on two annotation tasks: (1) validating associations between two concepts, and (2) validating associations between a concept and an image. For each task we developed a video game with a purpose that integrates the task within the game, as illustrated in Sections 4 and 5. 
Knowledge base As the reference knowledge base, we chose BabelNet2 (Navigli and Ponzetto, 2010), a large-scale multilingual semantic ontology created by automatically merging WordNet with other collaboratively-constructed resources such as Wikipedia and OmegaWiki. BabelNet data offers two necessary features for generating the games’ datasets. First, by connecting WordNet synsets to Wikipedia pages, most synsets are associated with a set of pictures; while often noisy, these pictures sometimes illustrate the target concept and are an ideal case for validation. Second, BabelNet contains the semantic relations from both WordNet and hyperlinks in Wikipedia; these relations are again an ideal case of validation, as not all hyperlinks connect semanticallyrelated pages in Wikipedia. Last, we stress that while our games use BabelNet data, they could easily validate or extend other knowledge bases such as YAGO (Suchanek et al., 2007) as well. 1This design is in contrast to two-player games where mutual agreement reinforces correct behavior. 2http://babelnet.org Data We created a common set of concepts, C, used in both games, containing sixty synsets selected from all BabelNet synsets with at least fifty associated images. Using the same set of synsets, separate datasets were created for the two validation tasks. In each dataset, a concept c ∈C is associated with two sets: a set Vc containing items to validate, and a set Nc with examples of true negative items (i.e., items where the relation to c does not hold). We use the notation V and N when referring to the to-validate and true negative sets for all concepts in a dataset, respectively. For the concept-concept dataset, Vc is the union of V B c , which contains the lemmas of all synsets incident to c in BabelNet, and V n c , which contains novel lemmas derived from statistical associations. Specifically, novel lemmas were selected by computing the χ2 statistic for co-occurrences between the lemmas of c and all other part of speech-tagged lemmas in Wikipedia. The 30 lemmas with the highest χ2 are included in Vc. To enable concept-to-concept annotations, we disambiguate novel lemmas using a simple heuristic based on link co-occurrence count (Navigli and Ponzetto, 2012). Each set Vc contains 77.6 lemmas on average. For the concept-image data, Vc is the union of V B c , which contains all images associated with c in BabelNet, and V n c , which contains web-gathered images using a lemma of c as the query. Webgathered images were retrieved using Yahoo! Boss image search and the first result set (35 images) was added to Vc. Each set Vc contains 77.0 images on average. For both datasets, each negative set Nc is constructed as ∪c′∈C\{c}V B c′ , i.e., from the items related in BabelNet to all other concepts in C. By constructing Nc directly from the knowledge base, play actions may be validated based on recognition of true negatives, removing the heavy burden for ever manually creating a gold standard test set. Annotation Aggregation In each game, an item is annotated when players make a binary choice as to whether the item’s relation is true (e.g., whether an image is related to a concept). To produce a final annotation, a rating of p −n is computed, where p and n denote the number of times players have marked the item’s relation as true or false, respectively. Items with a positive rating after aggregating are marked as true examples of the relation and false otherwise. 
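To make the dataset construction and vote aggregation concrete, the following minimal Python sketch (our own illustration, not the authors' code; all function and variable names are ours) builds each concept's true-negative pool N_c from the BabelNet-derived sets of all other concepts and turns per-player binary judgments into final labels via the p − n rating described above.

```python
from collections import Counter

def build_negative_sets(vb):
    """Build each concept's true-negative pool N_c from the BabelNet-derived
    sets V^B of all *other* concepts, as described above.

    vb: dict mapping concept -> set of items related to it in BabelNet (V^B_c).
    Returns: dict mapping concept -> set of true-negative items (N_c).
    """
    return {
        c: set().union(*(items for c2, items in vb.items() if c2 != c))
        for c in vb
    }

def aggregate_votes(votes):
    """Aggregate binary player judgments into final annotations.

    votes: iterable of (item, is_true) pairs, one per player judgment.
    Returns: dict mapping item -> True if its rating p - n is positive.
    """
    rating = Counter()
    for item, is_true in votes:
        rating[item] += 1 if is_true else -1   # running p - n
    return {item: r > 0 for item, r in rating.items()}

# Example: three judgments on one concept-image pair, two positive and one negative.
votes = [("tango.jpg", True), ("tango.jpg", True), ("tango.jpg", False)]
print(aggregate_votes(votes))   # {'tango.jpg': True}  (rating p - n = 1)
```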
1296 (a) The passphrase shown at the start (b) Main gameplay screen with a close-up of a player’s interaction with two humans Figure 1: Screenshots of the key elements of Infection 4 Game 1: Infection The first game, Infection, validates the conceptconcept relation dataset. Design Infection is designed as a top-down shooter game in the style of Commando. Infection features the classic game premise that a virus has partially infected humanity, turning people into zombies. The player’s responsibility is to stop zombies from reaching the city and rescue humans that are fleeing to the city. Both zombies and humans appear at the top of the screen, advance to the bottom and, upon reaching it, enter the city. In the game, some humans are infected, but have not yet become zombies; these infected humans must be stopped before reaching the city. Because infected and uninfected humans look identical, the player uses a passphrase call-andresponse mechanism to distinguish between the two. Each level features a randomly-chosen passphrase that the player’s character shouts. Uninfected humans are expected to respond with a word or phrase related to the passphrase; in contrast, infected humans have become confused due to the infection and will say something completely unrelated in an attempt to sneak past. When an infected human reaches the city, the city’s total infection level increases; should the infection level increase beyond a certain threshold, the player fails the stage and must replay it to advance the game. Furthermore, if any time after ten humans have been seen, the player has killed more than 80% of the uninfected humans, the player’s gun is taken by the survivors and she loses the stage. Figure 1a shows instructions for the passphrase “medicine.” In the corresponding gameplay, shown in the close up of Figure 1b, a human shouts a valid response, “radiology” for the level’s passphrase, while the nearby infected human shouts an incorrect response “longitude.” Gameplay is divided into eight stages, each with increasing difficulty. Each stage has a goal of saving a specific number of uninfected humans. Infection incorporates common game mechanics, such as unlockable weapons, power-ups that restore health, and achievements. Scoring is based on both the number of zombies killed and the percentage of uninfected humans saved, motivating players to kill infected humans in order to increase their score. Importantly, Infection also includes a leaderboard where players compete for top positions based on their total scores. Annotation Each human is assigned a response selected uniformly from V or N. Humans with responses from N are treated as infected. Players annotate by selecting which humans are infected: Allowing a human with a response from V to enter the city is treated as a positive annotation; killing that human is treated as a negative annotation. The design of Infection enables annotating multiple types of conceptual relations such as synonymy or antonymy by changing only the description of the passphrase and how uninfected humans are expected to respond. Quality Enforcement Mechanisms Infection includes two game mechanics to limit adversarial players from creating many low quality annotations. Specifically, the game prevents players from both (1) allowing all humans to live, via the city infection level and (2) killing all humans, via survivors taking the player’s gun; these actions would both generate many false positives and false negatives, respectively. 
These mechanics ensure the game naturally produces better quality annotations; in contrast, common crowdsourcing platforms do not support analogous mechanics for enforcing this type of correctness at annotation time. 5 Game 2: The Knowledge Towers The second game, The Knowledge Towers (TKT), validates the concept-image dataset. Design TKT is designed as a single-player role playing game (RPG) where the player explores a 1297 (a) An example tower’s concept (b) Image selection screen (c) Gameplay Figure 2: Screenshots of the key elements of The Knowledge Towers. series of towers to unlock long-forgotten knowledge. At the start of each tower, a target concept is shown, e.g., the tower of “tango,” along with a description of the concept (Figure 2a). The player must then recover the knowledge of the target concept by acquiring pictures of it. Pictures are obtained through defeating monsters and opening treasure chests, such as those shown in Figure 2c. However, players must distinguish pictures of the tower’s concept from unrelated pictures. When an image is picked up, the player may keep or discard it, as shown in Figure 2b. A player’s inventory is limited to eight pictures to encourage them to select the most relevant pictures only. Once the player has collected enough pictures, the door to the boss room is unlocked and the player may enter to defeat the boss and complete the tower. Pictures may also be deposited in special reward chests that grant experience bonuses if the deposited pictures are from V . Gathering unrelated pictures has adverse effects on the player. If the player finishes the level with a majority of unrelated pictures, the player’s journey is unsuccessful and she must replay the tower. TKT includes RPG game elements commonly found in game series such as Diablo and the Legend of Zelda: players begin with a specific character class that has class-specific skills, such as Warrior or Thief, but will unlock the ability to play as other classes by successfully completing the towers. Last, TKT includes a leaderboard where players can compete for positions; a player’s score is based on increasing her character’s abilities and her accuracy at discarding images from N. Annotation Players annotate by deciding which images to keep in their inventory. Images receive positive rating annotations from: (1) depositing the image in a reward chest, and (2) ending the level with the image still in the inventory. Conversely, images receive a negative rating when a player (1) views the image but intentionally avoids picking it up or (2) drops the image from her inventory. TKT is designed to assist in the validation and extension of automatically-created image libraries that link to semantic concepts, such as ImageNet (Deng et al., 2009) and that of Torralba et al. (2008). However, its general design allows for other types of annotations, such as image labeling, by changing the tower’s instructions and pictures. Quality Enforcement Mechanisms Similar to Infection, TKT includes analogous mechanisms for limiting adversarial player annotations. Players who collect no images are prevented from entering the boss room, limiting their ability to generate false negative annotations. Similarly, players who collect all images are likely to have half of their images from N and therefore fail the tower’s quality-check after defeating the boss. 
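The two games' defences against adversarial play reduce to a pair of simple checks. The sketch below is our own illustrative reconstruction in Python, not the games' actual code: the infection-level threshold is an assumed parameter (the paper only mentions "a certain threshold"), while the 80% kill-rate rule and the majority-of-unrelated-images rule come directly from the text.

```python
def infection_stage_failed(infection_level, humans_seen, uninfected_seen,
                           uninfected_killed, max_infection_level=10):
    """Adversarial-play checks from Infection (illustrative sketch).

    A stage is failed when too many infected humans reach the city (too many
    false-positive annotations) or when, once at least ten humans have been
    seen, more than 80% of the uninfected humans have been killed (too many
    false negatives). max_infection_level is a placeholder value.
    """
    too_lenient = infection_level > max_infection_level
    too_trigger_happy = (humans_seen >= 10 and uninfected_seen > 0 and
                         uninfected_killed / uninfected_seen > 0.80)
    return too_lenient or too_trigger_happy


def tkt_tower_failed(kept_images, negative_pool):
    """Adversarial-play checks from The Knowledge Towers (illustrative sketch).

    The boss room stays locked while the inventory is empty, and the tower is
    failed when a majority of the kept images come from the negative set N.
    """
    if not kept_images:
        return True  # cannot enter the boss room without pictures
    unrelated = sum(1 for img in kept_images if img in negative_pool)
    return unrelated > len(kept_images) / 2

# A player keeping two unrelated images out of three fails the tower.
print(tkt_tower_failed(["img1", "img2", "img3"], negative_pool={"img2", "img3"}))  # True
```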
6 Experiments Two experiments were performed with Infection and TKT: (1) an evaluation of players’ ability to play accurately and to validate semantic relations and image associations and (2) a comprehensive cost comparison. Each experiment compared (a) free and financially-incentivized versions of each game, (b) crowdsourcing, and (c) a non-video game with a purpose. 6.1 Experimental Setup Gold Standard Data To compare the quality of annotation from games and crowdsourcing, a gold standard annotation was produced for a 10% sample of each dataset (cf. Section 3.2). Two annotators independently rated the items and, in cases of disagreement, a third expert annotator adjudicated. Unlike in the game setting, annotators were free to consult additional resources such as Wikipedia. To measure inter-annotator agreement (IAA) on the gold standard annotations, we calculated Krip1298 pendorff’s α (Krippendorff, 2004; Artstein and Poesio, 2008); α ranges between [-1,1] where 1 indicates complete agreement, -1 indicates systematic disagreement, and values near 0 indicate agreement at chance levels. Gold standard annotators had high agreement, 0.774, for conceptconcept relations. However, image-concept agreement was only moderate, 0.549. A further analysis revealed differences in the annotators’ thresholds for determining association, with one annotator permitting more abstract relations. However, the adjudication process resolved these disputes, resulting in substantial agreement by all annotators on the final gold annotations. Incentives At the start of each game, players were shown brief descriptions of the game and a description of a contest where the top-ranked players would win either (1) monetary prizes in the form of gift cards, or (2) a mention and thanks in this paper. We refer to these as the paid and free versions of the game, respectively. In the paid setting, the five top-ranking players were offered gift cards valued at 25, 15, 15, 10, and 10 USD, starting from first place (a total of 75 USD per game). To increase competition among players and to perform a fairer time comparison with crowdsourcing, the contest period was limited to two weeks. 6.2 Comparison Methods To compare with the video games, items were annotated using two additional methods: crowdsourcing and a non-video game with a purpose. Crowdsourcing Setup Crowdsourcing was performed using the CrowdFlower platform. Annotation tasks were designed to closely match each game’s annotation process. A task begins with a description of a target synset and its textual definition; following, ten annotation questions are shown. Separate tasks were used for validating concept-concept and concept-image relations. Each tasks’ questions were shown as a binary choice of whether the item is related to the task’s concept. Workers were paid 0.05 USD per task. Each question was answered by three workers. Following common practices for guarding against adversarial workers (Mason and Suri, 2012), the tasks for concept c include quality check questions using items from Nc. Workers who rate too many relations from Nc as valid are removed by CrowdFlower and prevented from participating further. One of the ten questions in a task used an item from Nc, resulting in a task mixture of 90% annotation questions and 10% qualitycheck questions. However, we note that both of our video games use data that is 50% annotation, 50% quality-check. 
While the crowdsourcing task could be adjusted to use an increased number of quality-check options, such a design is uncommon and artificially inflates the cost of the crowdsourcing comparison beyond what would be expected. Therefore, although the crowdsourcing and game-based annotation tasks differ slightly, we chose to use the common setup in order to create a fair cost comparison between the two.

Non-video Game with a Purpose To measure the impact of the video game itself on the annotation process, we developed a non-video game with a purpose, referred to as SuchGame. Players perform a single action in SuchGame: after being shown a concept c and its textual definition, a player answers whether an item is related to the concept. Items are drawn equally from Vc and Nc, with players scoring a point each time they correctly indicate that an item from N is not related. A round of gameplay contains ten questions. After the round ends, players see their score for that round and the current leaderboard. Two versions of SuchGame were released, one for each dataset. SuchGame was promoted with the same free recognition incentive as Infection and TKT.

6.3 Game Release Both video games were released to multiple online forums, social media sites, and Facebook groups. SuchGame was released to separate Facebook groups promoting free webgames and groups for indie games. For each release, we estimated an upper bound on the audience sizes using available statistics such as Facebook group sizes, website analytics, and view counts. The free and paid versions had audience sizes of 21,546 and 14,842 people, respectively; SuchGame had an upper bound of 569,131 people. Notices promoting the games were separated so that audiences saw promotions for only one of the paid or free incentive versions. Games were also released in such a way as to preserve the anonymity of the study, which limited our ability to advertise in public venues where the anonymity might be compromised.

7 Results and Discussion

7.1 Gameplay Analysis In this section we analyze the games in terms of participation and players' ability to play correctly. Players completed over 1,388 games during the study period. The paid and free versions of TKT had similar numbers of players, while the paid version of Infection attracted nearly twice the players compared to the free version, shown in Table 1, Column 1. However, both versions created approximately the same number of annotations, shown in Column 2.

[Figure 3: Accuracy of the top-40 players in rejecting true negative items during gameplay; panels (a) Infection (free), (b) Infection (paid), (c) TKT (free), (d) TKT (paid), each plotting the number of correct and incorrect items per player.]

                  # Players   # Anno.   N-Acc.   Krip.'s α   True Pos.   True Neg.    All    Cost per Ann.
TKT free               100      3005     97.0       0.333        82.5        82.5    82.5         $0.000
TKT paid                97      3318     95.4       0.304        69.0        92.1    74.0         $0.023
CrowdFlower            290     13854        –       0.478        59.5        93.7    66.2         $0.008
Infection free          89      3150     71.0       0.445        67.8        68.4    68.1         $0.000
Infection paid         163      3355     65.9       0.330        69.1        54.8    61.1         $0.022
CrowdFlower           1097     13764        –       0.167        16.9        96.4    59.6         $0.008

Table 1: Annotation statistics from all sources. N-Accuracy (N-Acc.) denotes accuracy at rejecting items from N; True Pos., True Neg., and All report G.S. Agreement, i.e., percentage agreement of the aggregated annotations with the gold standard.
Surprisingly, SuchGame received little attention, with only a few players completing a full round of game play. We believe this emphasizes the strength of video game-based annotation; adding incentives and game-like features to an annotation task will not necessarily increase its appeal. Given SuchGame’s minimal interest, we omit it from further analysis. Second, the type of incentive did not change the percentage of items from N that players correctly reject, shown for all players as N-accuracy in Table 1 Column 3 and per-player in Figure 3. However, players were much more accurate at rejecting items from N in TKT than in Infection. We attribute this difference to the nature of the items and the format of the games. The images used by TKT provide concrete examples of a concept, which can be easily compared with the game’s current concept; in addition, TKT allows players to inspect items as long as a player prefers. In contrast, concept-concept associations require more background knowledge to determine if a relation exists; furthermore, Infection gives players limited time to decide (due to board length) and also contains cognitive distractors (zombies). Nevertheless, player accuracy remains high for both games (Table 1, Col. 3) indicating the games represent a viable medium for making annotation decisions. Last, the distribution of player annotation frequencies (Figure 3) suggests that the leaderboard and incentives motivated players. Especially in the paid condition, a clear group appears in the top five positions, which were advertised as receiving prizes. The close proximity of players in the paid positions is a result of continued competition as players jostled for higher-paying prizes. 7.2 Annotation Quality This section assesses the annotation quality of both games and of CrowdFlower in terms of (1) the IAA of the participants, measured using Krippendorff’s α, and (2) the percentage agreement of the resulting annotations with the gold standard. Players in both free and paid games had similar IAA, though the free version is consistently higher (Table 1, Col. 4).3 For images, crowdsourcing workers have a higher IAA than game players; however, this increased agreement is due to adversarial workers consistently selecting the same, incorrect answer. In contrast, both video games contain mechanisms for limiting such behavior. The strength of both crowdsourcing and games with a purpose comes from aggregating multiple annotations of a single item; i.e., while IAA may 3In conversations with players after the contest ended, several mentioned that being aware their play was contributing to research motivated them to play more accurately. 1300 Lemma Abbreviated Definition Most-selected Items atom The smallest possible particle of a chemical element spectrum, nonparticulate radiation, molecule, hydrogen, electron ‡ ‡ ‡ chord A combination of three or more notes voicing, triad, tonality,‡ strum, note, harmony ‡ color An attribute from reflected or emitted light orange, brown,‡ video, sadness, RGB, pigment ‡ ‡ ‡ ‡ fire The state of combustion in which inflammable material burns sprinkler, machine gun, chemical reduction, volcano, organic chemistry ‡ ‡ ‡ religion The expression of man’s belief in and reverence for a superhuman power polytheistic,‡ monotheistic, Jainism, Christianity,‡ Freedom of religion ‡ ‡ ‡ Table 2: Examples of the most-selected words and images from the free version of both games. Bolded words and images with a dashed border denote items not in BabelNet. 
Only the items marked with a ‡ were rated as valid in the aggregated CrowdFlower annotations. be low, the majority annotation of an item may be correct. Therefore, in Table 1, we calculate the percentage agreement of the aggregated annotations with the gold standard annotations for approving valid relations (true positives; Col. 5), rejecting invalid relations (true negatives; Col. 6), and for both combined (Col. 7). On average, both video games in all settings produce more accurate annotations than crowdsourcing. Indeed, despite having lower IAA for images, the free version of TKT provides an absolute 16.3% improvement in gold standard agreement over crowdsourcing. Examining the difference in annotation quality for true positives and negatives, we see a strong bias with crowdsourcing towards rejecting all items. This bias leads to annotations with few false positives, but as Column 5 shows, crowdflower workers consistently performed much worse than game players at identifying valid relations, producing many false negative annotations. Indeed, for concept-concept relations, workers identified only 16.9% of the valid relations. In contrast to crowdsourcing, both games were effective at identifying valid relations. Table 2 shows examples of the most frequently chosen items from V for the free versions of both games. For both games, players were equally likely to select novel items, suggesting the games can serve a useful purpose of adding these missing relations in automatically constructed knowledge bases. Highlighting one example, the five most selected concept-concept relations for chord were all novel; BabelNet included many relations to highly-specific concepts (e.g., “Circle of fifths”) but did not include relations to more commonlyassociated concepts, like note and harmony. 7.3 Cost Analysis This section provides a cost-comparison between the video games and crowdsourcing. The free versions of both games proved highly successful, yielding high-quality annotations at no direct cost. Both free and paid conditions produced similar volumes of annotations, suggesting that players do not need financial incentives provided that the games are fun to play. It could be argued that the recognition incentive was motivating players in the free condition and thus some incentive was required. However, player behavior indicates otherwise: After the contest period ended, no players in the free setting registered for being acknowledged by name, which strongly suggests the incentive was not contributing to their motivation for playing. Furthermore, a minority of players continued to play even after the contest period ended, suggesting that enjoyment was a driving factor. 1301 Last, while crowdsourcing has seen different quality and volume from workers in paid and unpaid settings (Rogstadius et al., 2011), in contrast, our games produced approximately-equivalent results from players in both settings. Crowdsourcing was slightly more cost-effective than both games in the paid condition, as shown in Table 1, Column 8. However, three additional factors need to be considered. First, both games intentionally uniformly sample between V and N to increase player engagement,4 which generates a larger number of annotations for items in N than are produced by crowdsourcing. When annotations on items in N are included for both games and crowdsourcing, the costs per annotation drop to comparable levels: $0.007 for CrowdFlower tasks, $0.008 for TKT, and $0.011 for Infection. 
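As a concrete reading of Columns 5–7 of Table 1, the following Python sketch (our own illustration with hypothetical variable names, not the evaluation scripts) computes the percentage agreement of the aggregated annotations with the gold standard, separately for gold-positive items, gold-negative items, and all items.

```python
def gold_agreement(predicted, gold):
    """Percentage agreement with a gold standard, split out as in Table 1:
    agreement on gold-positive items (True Pos.), on gold-negative items
    (True Neg.), and on all items combined.

    predicted, gold: dicts mapping item -> bool (relation holds or not),
    restricted to the gold-annotated sample.
    """
    def pct(items):
        items = list(items)
        hits = sum(predicted[i] == gold[i] for i in items)
        return 100.0 * hits / len(items) if items else float("nan")

    positives = [i for i in gold if gold[i]]
    negatives = [i for i in gold if not gold[i]]
    return {"true_pos": pct(positives), "true_neg": pct(negatives), "all": pct(gold)}

# Toy example with four gold items, three of them predicted correctly.
gold = {"a": True, "b": True, "c": False, "d": False}
pred = {"a": True, "b": False, "c": False, "d": False}
print(gold_agreement(pred, gold))
# {'true_pos': 50.0, 'true_neg': 100.0, 'all': 75.0}
```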
Second, for both annotation tasks, crowdsourcing produced lower quality annotations, especially for valid relations. Based on agreement with the gold standard (Table 1, Col. 5), the estimated cost for crowdsourcing a correct true positive annotation increases to $0.014 for a concept-image and a $0.048 for concepts-concept annotation. In contrast, the cost when using video games increases only to $0.033 for concept-image and $0.031 for concept-concept. These cost increases suggest that crowdsourcing is not always cheaper with respect to quality. Third, we note that both video games in the paid setting incur a fixed cost (for the prizes) and therefore additional games played can only further decrease the cost per annotation. Indeed, the present study divided the audience pool into two separate groups which effectively halved the potential number of annotations per game. Assuming combining the audiences would produce the same number of annotations, both our games’ costs per annotation drop to $0.012. Last, video games can potentially come with indirect costs due to software development and maintenance. Indeed, Poesio et al. (2013) report spending 60,000£ in developing their Phrase Detectives game with a purpose over a two-year period. In contrast, both games here were developed as a part of student projects using open source software and assets and thus incurred no cost; furthermore, games were created in a few months, rather than years. Given that few online games attain significant sustained interest, we argue that 4Earlier versions that used mostly items from V proved less engaging due to players frequently performing the same action, e.g., saving most humans or collecting most pictures. our lightweight model is preferable for producing video games with a purpose. While using students is not always possible, the development process is fast enough to sufficiently reduce costs below those reported for Phrase Detectives. 8 Conclusion Two video games have been presented for validating and extending knowledge bases. The first game, Infection, validates concept-concept relations, and the second, The Knowledge Towers, validates image-concept relations. In experiments involving online players, we demonstrate three contributions. First, games were released in two conditions whereby players either saw financial incentives for playing or a personal satisfaction incentive where they were thanked by us. We demonstrated that both conditions produced nearly identical numbers of annotations and, moreover, that players were disinterested in the satisfaction incentive, suggesting they played out of interest in the game itself. Furthermore, we demonstrated the effectiveness of a novel design for games with a purpose which does not require two players for validation and instead reinforces behavior only using true negative items that required no manual annotation. Second, in a comparison with crowdsourcing, we demonstrate that video gamebased annotations consistently generated higherquality annotations. Last, we demonstrate that video game-based annotation can be more costeffective than crowdsourcing or annotation tasks with game-like features: The significant number of annotations generated by the satisfaction incentive condition shows that a fun game can generate high-quality annotations at virtually no cost. 
All annotated resources, demos of the games, and a live version of the top-ranking items for each concept are currently available online.5 In the future we will apply our video games to the validation of more data, such as the new Wikipedia bitaxonomy (Flati et al., 2014). Acknowledgments The authors gratefully acknowledge the support of the ERC Starting Grant MultiJEDI No. 259234. We thank Francesco Cecconi for his support with the websites and the many video game players without whose enjoyment this work would not be possible. 5http://lcl.uniroma1.it/games/ 1302 References Guillaume Artignan, Mountaz Hasco¨et, and Mathieu Lafourcade. 2009. Multiscale visual analysis of lexical networks. In Proceedings of the International Conference on Information Visualisation, pages 685–690. Ron Artstein and Massimo Poesio. 2008. Inter-coder agreement for computational linguistics. Computational Linguistics, 34(4):555–596. Chris Biemann and Valerie Nygaard. 2010. Crowdsourcing wordnet. In Proceedings of the 5th Global WordNet conference. Alexander Budanitsky and Graeme Hirst. 2006. Evaluating WordNet-based measures of Lexical Semantic Relatedness. Computational Linguistics, 32(1):13–47. Timothy Chklovski and Yolanda Gil. 2005. Improving the design of intelligent acquisition interfaces for collecting world knowledge from web contributors. In Proceedings of the International Conference on Knowledge Capture, pages 35–42. ACM. Tim Chklovski and Rada Mihalcea. 2002. Building a Sense Tagged Corpus with Open Mind Word Expert. In Proceedings of ACL 2002 Workshop on WSD: Recent Successes and Future Directions, Philadelphia, PA, USA. Seth Cooper, Firas Khatib, Adrien Treuille, Janos Barbero, Jeehyung Lee, Michael Beenen, Andrew Leaver-Fay, David Baker, Zoran Popovi´c, and Foldit players. 2010. Predicting protein structures with a multiplayer online game. Nature, 466(7307):756– 760. Jia Deng, Wei Dong, Richard Socher, Li-Jia Li, Kai Li, and Li Fei-Fei. 2009. ImageNet: A large-scale hierarchical image database. In Proceedings of the Conference on Computer Vision and Pattern Recognition (CVPR), pages 248–255. Soojeong Eom, Markus Dickinson, and Graham Katz. 2012. Using semi-experts to derive judgments on word sense alignment: a pilot study. In Proceedings of the Conference on Language Resources and Evaluation (LREC), pages 605–611. Christiane Fellbaum, editor. 1998. WordNet: An Electronic Database. MIT Press, Cambridge, MA. Tiziano Flati, Daniele Vannella, Tommaso Pasini, and Roberto Navigli. 2014. Two is bigger (and better) than one: the Wikipedia Bitaxonomy Project. In Proceedings of the 52nd Annual Meeting of the Association for Computational Linguistics (ACL), Baltimore, Maryland. Amac¸ Herda˘gdelen and Marco Baroni. 2012. Bootstrapping a game with a purpose for common sense collection. ACM Transactions on Intelligent Systems and Technology, 3(4):1–24. Barbora Hladk´a, Jiˇr´ı M´ırovsk`y, and Pavel Schlesinger. 2009. Play the language: Play coreference. In Proceedings of the Joint Conference of the Association for Computational Linguistics and International Joint Conference of the Asian Federation of Natural Language Processing (ACL-IJCNLP), pages 209–212. Association for Computational Linguistics. Eduard H. Hovy, Roberto Navigli, and Simone Paolo Ponzetto. 2013. Collaboratively built semistructured content and Artificial Intelligence: The story so far. Artificial Intelligence, 194:2–27. David Kauchak and Regina Barzilay. 2006. Paraphrasing for automatic evaluation. 
In Proceedings of the Conference of the North American Chapter of the Association of Computational Linguistics (NAACL), pages 455–462. Klaus Krippendorff. 2004. Content Analysis: An Introduction to Its Methodology. Sage, Thousand Oaks, CA, second edition. Yen-ling Kuo, Jong-Chuan Lee, Kai-yang Chiang, Rex Wang, Edward Shen, Cheng-wei Chan, and Jane Yung-jen Hsu. 2009. Community-based game design: experiments on social games for commonsense data collection. In Proceedings of the ACM SIGKDD Workshop on Human Computation, pages 15–22. Mathieu Lafourcade and Alain Joubert. 2010. Computing trees of named word usages from a crowdsourced lexical network. In Proceedings of the International Multiconference on Computer Science and Information Technology (IMCSIT), pages 439– 446, Wisla, Poland. Winter Mason and Siddharth Suri. 2012. Conducting behavioral research on amazons mechanical turk. Behavior Research Methods, 44(1):1–23. Olena Medelyan, David Milne, Catherine Legg, and Ian H. Witten. 2009. Mining meaning from Wikipedia. International Journal of HumanComputer Studies, 67(9):716–754. Andrea Moro, Alessandro Raganato, and Roberto Navigli. 2014. Entity Linking meets Word Sense Disambiguation: A Unified Approach. Transactions of the Association for Computational Linguistics. Roberto Navigli and Simone Paolo Ponzetto. 2010. BabelNet: Building a very large multilingual semantic network. In Proceedings of the 48th Annual Meeting of the Association for Computational Linguistics (ACL), Uppsala, Sweden, pages 216–225. Roberto Navigli and Simone Paolo Ponzetto. 2012. Joining forces pays off: Multilingual Joint Word Sense Disambiguation. In Proceedings of the Joint Conference on Empirical Methods in Natural Language Processing and Computational Natural Language Learning (EMNLP-CoNLL), pages 1399– 1410, Jeju, Korea. 1303 Roberto Navigli. 2005. Semi-automatic extension of large-scale linguistic knowledge bases. In Proceedings of the 18th Internationa Florida AI Research Symposium Conference, Clearwater Beach, Florida, 15–17 May 2005, pages 548–553. Elisabeth Niemann and Iryna Gurevych. 2011. The people’s web meets linguistic knowledge: Automatic sense alignment of Wikipedia and WordNet. In Proceedings of the International Conference on Computational Semantics (IWCS), pages 205–214. Mohammad Taher Pilehvar, David Jurgens, and Roberto Navigli. 2013. Align, Disambiguate and Walk: a Unified Approach for Measuring Semantic Similarity. In Proceedings of the 51st Annual Meeting of the Association for Computational Linguistics (ACL), pages 1341–1351, Sofia, Bulgaria. Massimo Poesio, Jon Chamberlain, Udo Kruschwitz, Livio Robaldo, and Luca Ducceschi. 2013. Phrase detectives: Utilizing collective intelligence for internet-scale language resource creation. ACM Transactions on Interactive Intelligent Systems, 3(1):3:1–3:44, April. Jakob Rogstadius, Vassilis Kostakos, Aniket Kittur, Boris Smus, Jim Laredo, and Maja Vukovic. 2011. An assessment of intrinsic and extrinsic motivation on task performance in crowdsourcing markets. In Proceedings of the International AAAI Conference on Weblogs and Social Media (ICWSM). Jacek Rzeniewicz and Julian Szyma´nski. 2013. Bringing Common Sense to WordNet with a Word Game. In Computational Collective Intelligence. Technologies and Applications, volume 8083 of Lecture Notes in Computer Science, pages 296–305. Springer. Cristina Sarasua, Elena Simperl, and Natalya F Noy. 2012. CrowdMap: Crowdsourcing ontology alignment with microtasks. 
In Proceedings of the International Semantic Web Conference (ISWC), pages 525–541. Nitin Seemakurty, Jonathan Chu, Luis Von Ahn, and Anthony Tomasic. 2010. Word sense disambiguation via human computation. In Proceedings of the ACM SIGKDD Workshop on Human Computation, pages 60–63. ACM. Jakub Simko, Michal Tvarozek, and Maria Bielikova. 2011. Little search game: term network acquisition via a human computation game. In Proceedings of the ACM conference on Hypertext and Hypermedia, pages 57–62. Katharina Siorpaes and Martin Hepp. 2008a. Games with a purpose for the semantic web. IEEE Intelligent Systems, 23(3):50–60. Katharina Siorpaes and Martin Hepp. 2008b. Ontogame: Weaving the semantic web by online games. In Sean Bechhofer, Manfred Hauswirth, Jrg Hoffmann, and Manolis Koubarakis, editors, The Semantic Web: Research and Applications, volume 5021 of Lecture Notes in Computer Science, pages 751–766. Springer Berlin Heidelberg. Rion Snow, Dan Jurafsky, and Andrew Ng. 2006. Semantic taxonomy induction from heterogeneous evidence. In Proceedings of the 21st International Conference on Computational Linguistics and 44th Annual Meeting of the Association for Computational Linguistics (COLING-ACL), Sydney, Australia, pages 801–808. Fabian M. Suchanek, Gjergji Kasneci, and Gerhard Weikum. 2007. YAGO: A core of semantic knowledge. unifying WordNet and Wikipedia. In Proceedings of the 16th World Wide Web Conference, Banff, Canada, 8–12 May 2007, pages 697–706. Stefan Thaler, Elena Paslaru Bontas Simperl, and Katharina Siorpaes. 2011. SpotTheLink: A Game for Ontology Alignment. In Proceedings of the 6th Conference on Professional Knowledge Management: From Knowledge to Action, pages 246– 253. Antonio Torralba, Robert Fergus, and William T Freeman. 2008. 80 million tiny images: A large data set for nonparametric object and scene recognition. IEEE Transactions on Pattern Analysis and Machine Intelligence, 30(11):1958–1970. Noortje J. Venhuizen, Valerio Basile, Kilian Evang, and Johan Bos. 2013. Gamification for word sense labeling. In Proceedings of the International Conference on Computational Semantics (IWCS). David Vickrey, Aaron Bronzan, William Choi, Aman Kumar, Jason Turner-Maier, Arthur Wang, and Daphne Koller. 2008. Online word games for semantic data collection. In Proceedings of the Conference on Empirical Methods in Natural Language Processing, pages 533–542. Luis von Ahn and Laura Dabbish. 2004. Labeling images with a computer game. In Proceedings of the Conference on Human Factors in Computing Systems (CHI), pages 319–326. Luis von Ahn, Mihir Kedia, and Manuel Blum. 2006. Verbosity: a game for collecting common-sense facts. In Proceedings of the Conference on Human Factors in Computing Systems (CHI), pages 75–78. 1304
2014
122
Proceedings of the 52nd Annual Meeting of the Association for Computational Linguistics, pages 1305–1315, Baltimore, Maryland, USA, June 23-25 2014. c⃝2014 Association for Computational Linguistics Shallow Analysis Based Assessment of Syntactic Complexity for Automated Speech Scoring Suma Bhat Beckman Institute, University of Illinois, Urbana, IL [email protected] Huichao Xue Dept. of Computer Science University of Pittsburgh Pittsburgh, PA [email protected] Su-Youn Yoon Educational Testing Service Princeton, NJ [email protected] Abstract Designing measures that capture various aspects of language ability is a central task in the design of systems for automatic scoring of spontaneous speech. In this study, we address a key aspect of language proficiency assessment – syntactic complexity. We propose a novel measure of syntactic complexity for spontaneous speech that shows optimum empirical performance on real world data in multiple ways. First, it is both robust and reliable, producing automatic scores that agree well with human rating compared to the stateof-the-art. Second, the measure makes sense theoretically, both from algorithmic and native language acquisition points of view. 1 Introduction Assessment of a speaker’s proficiency in a second language is the main task in the domain of automatic evaluation of spontaneous speech (Zechner et al., 2009). Prior studies in language acquisition and second language research have conclusively shown that proficiency in a second language is characterized by several factors, some of which are, fluency in language production, pronunciation accuracy, choice of vocabulary, grammatical sophistication and accuracy. The design of automated scoring systems for non-native speaker speaking proficiency is guided by these studies in the choice of pertinent objective measures of these key aspects of language proficiency. The focus of this study is the design and performance analysis of a measure of the syntactic complexity of non-native English responses for use in automatic scoring systems. The state-ofthe art automated scoring system for spontaneous speech (Zechner et al., 2009; Higgins et al., 2011) currently uses measures of fluency and pronunciation (acoustic aspects) to produce scores that are in reasonable agreement with human-rated scores of proficiency. Despite its good performance, there is a need to extend its coverage to higher order aspects of language ability. Fluency and pronunciation may, by themselves, already be good indicators of proficiency in non-native speakers, but from a construct validity perspective1, it is necessary that an automatic assessment model measure higher-order aspects of language proficiency. Syntactic complexity is one such aspect of proficiency. By “syntactic complexity”, we mean a learner’s ability to use a wide range of sophisticated grammatical structures. This study is different from studies that focus on capturing grammatical errors in non-native speakers (Foster and Skehan, 1996; Iwashita et al., 2008). Instead of focusing on grammatical errors that are found to be highly representative of language proficiency, our interest is in capturing the range of forms that surface in language production and the degree of sophistication of such forms, collectively referred to as syntactic complexity in (Ortega, 2003). The choice and design of objective measures of language proficiency is governed by two crucial constraints: 1. 
Validity: a measure should show high discriminative ability between various levels of language proficiency, and the scores produced by the use of this measure should show high agreement with human-assigned scores. 2. Robustness: a measure should be derived automatically and should be robust to errors in the measure generation process.

A critical impediment to the robustness constraint in the state-of-the-art is the multi-stage automated process, where errors in the speech recognition stage (the very first stage) affect subsequent stages.

1 Construct validity is the degree to which a test measures what it claims, or purports, to be measuring, and is an important criterion in the development and use of assessments or tests.

Guided by studies in second language development, we design a measure of syntactic complexity that captures patterns indicative of proficient and non-proficient grammatical structures through a shallow analysis of spoken language, as opposed to a deep syntactic analysis, and analyze the performance of the automatic scoring model with its inclusion. We compare and contrast the proposed measure with the one found to be optimum in Yoon and Bhat (2012). Our primary contributions in this study are:

• We show that the measure of syntactic complexity derived from a shallow analysis of spoken utterances satisfies the design constraint of high discriminative ability between proficiency levels. In addition, including our proposed measure of syntactic complexity in an automatic scoring model results in a statistically significant performance gain over the state-of-the-art.

• The proposed measure, derived through a completely automated process, satisfies the robustness criterion reasonably well.

• In the domain of native language acquisition, the presence or absence of a grammatical structure indicates grammatical development. We observe that the proposed approach elegantly and effectively captures this presence-based criterion of grammatical development, since a feature indicating the presence or absence of a grammatical structure is optimal from an algorithmic point of view.

2 Related Work

Speaking in a non-native language requires diverse abilities, including fluency, pronunciation, intonation, grammar, vocabulary, and discourse. Informed by studies in second language acquisition and language testing that regard these factors as key determiners of spoken language proficiency, some researchers have focused on the objective measurement of these aspects of spoken language in the context of automatic assessment of language ability. Notable are studies that have focused on assessment of fluency (Cucchiarini et al., 2000; Cucchiarini et al., 2002), pronunciation (Witt and Young, 1997; Witt, 1999; Franco et al., 1997; Neumeyer et al., 2000), and intonation (Zechner et al., 2009). The relative success of these studies has yielded objective measures of acoustic aspects of speaking ability, resulting in a shift in focus to more complex aspects of assessment: grammar (Bernstein et al., 2010; Chen and Yoon, 2011; Chen and Zechner, 2011), topic development (Xie et al., 2012), and coherence (Wang et al., 2013).

In an effort to assess grammar and usage in a second language learning environment, numerous studies have focused on identifying relevant quantitative measures. These measures have been used to estimate proficiency levels in English as a second language (ESL) writing with reasonable success. Wolf-Quintero et al.
(1998), Ortega (2003), and Lu (2010) found that measures such as mean length of T-unit2 and dependent clauses per clause (henceforth termed as length-based measures) are well correlated with holistic proficiency scores suggesting that these quantitative measures can be used as objective indices of grammatical development. In the context of spoken ESL, these measures have been studied as well but the results have been inconclusive. The measures could only broadly discriminate between students’ proficiency levels, rated on a scale with moderate to weak correlations, and strong data dependencies on the participant groups were observed (Halleck, 1995; Iwashita et al., 2008; Iwashita, 2010). With the recent interest in the area of automatic assessment of speech, there is a concurrent need to assess the grammatical development of ESL students automatically. Studies that explored the applicability of length-based measures in an automated scoring system (Chen and Zechner, 2011; Chen and Yoon, 2011) observed another important drawback of these measures in that setting. Length-based measures do not meet the constraints of the design, that, in order for measures to be effectively incorporated in the automated speech scoring system, they must be generated in a fully automated manner, via a multi-stage automated process that includes speech recognition, part of speech (POS) tagging, and parsing. A major bottleneck in the multi-stage process of an automated speech scoring system for second language is the stage of automated speech recognition (ASR). Automatic recognition of non-native speakers’ spontaneous speech is a challenging task as evidenced by the error rate of the state-of-the2T-units are defined as “the shortest grammatically allowable sentences into which writing can be split.” (Hunt, 1965) 1306 art speech recognizer. For instance, Chen and Zechner (2011) reported a 50.5% word error rate (WER) and Yoon and Bhat (2012) reported a 30% WER in the recognition of ESL students’ spoken responses. These high error rates at the recognition stage negatively affect the subsequent stages of the speech scoring system in general, and in particular, during a deep syntactic analysis, which operates on a long sequence of words as its context. As a result, measures of grammatical complexity that are closely tied to a correct syntactic analysis are rendered unreliable. Not surprisingly, Chen and Zechner (2011) studied measures of grammatical complexity via syntactic parsing and found that a Pearson’s correlation coefficient of 0.49 between syntactic complexity measures (derived from manual transcriptions) and proficiency scores, was drastically reduced to near nonexistence when the measures were applied to ASR word hypotheses. This suggests that measures that rely on deep syntactic analysis are unreliable in current ASR-based scoring systems for spontaneous speech. In order to avoid the problems encountered with deep analysis-based measures, Yoon and Bhat (2012) explored a shallow analysis-based approach, based on the assumption that the level of grammar sophistication at each proficiency level is reflected in the distribution of part-of-speech (POS) tag bigrams. The idea of capturing differences in POS tag distributions for classification has been explored in several previous studies. 
In the area of text-genre classification, POS tag distributions have been found to capture genre differences in text (Feldman et al., 2009; Marin et al., 2009); in a language testing context, it has been used in grammatical error detection and essay scoring (Chodorow and Leacock, 2000; Tetreault and Chodorow, 2008). We will see next what aspects of syntactic complexity are captured by such a shallow-analysis. 3 Shallow-analysis approach to measuring syntactic complexity The measures of syntactic complexity in this approach are POS bigrams and are not obtained by a deep analysis (syntactic parsing) of the structure of the sentence. Hence we will refer to this approach as ‘shallow analysis’. In a shallow-analysis approach to measuring syntactic complexity, we rely on the distribution of POS bigrams at every proficiency level to be representative of the range and sophistication of grammatical constructions at that level. At the outset, POS-bigrams may seem too simplistic to represent any aspect of true syntactic complexity. We illustrate to the contrary, that they are indeed able to capture certain grammatical errors and sophisticated constructions by means of the following instances. Consider the two sentence fragments below taken from actual responses (the bigrams of interest and their associated POS tags are bold-faced). 1. They can/MD to/TO survive ... 2. They created the culture/NN that/WDT now/RB is common in the US. We notice that Example 1 is not only less grammatically sophisticated than Example 2 but also has a grammatical error. The error stems from the fact that it has a modal verb followed by the word “to”. On the other hand, Example 2 contains a relative clause composed of a noun introduced by “that”. Notice how these grammatical expressions (one erroneous and the other sophisticated) can be detected by the POS bigrams “MD-TO” and “NNWDT”, respectively. The idea that the level of syntactic complexity (in terms of its range and sophistication) can be assessed based on the distribution of POS-tags is informed by prior studies in second language acquisition. It has been shown that the usage of certain grammatical constructions (such as that of the embedded relative clause in the second sentence above) are indicators of specific milestones in grammar development (Covington et al., 2006). In addition, studies such as Foster and Skehan (1996) have successfully explored the utility of frequency of grammatical errors as objective measures of grammatical development. Based on this idea, Yoon and Bhat (2012) developed a set of features of syntactic complexity based on POS sequences extracted from a large corpus of ESL learners’ spoken responses, grouped by human-assigned scores of proficiency level. Unlike previous studies, it did not rely on the occurrence of normative grammatical constructions. The main assumption was that each score level is characterized by different types of prominent grammatical structures. These representative constructions are gathered from a collection of ESL learners’ spoken responses rated for overall proficiency. The syntactic complexity of a test spoken response was estimated based on its 1307 similarity to the proficiency groups in the reference corpus with respect to the score-specific constructions. A score was assigned to the response based on how similar it was to the high score group. In Section 4.1, we go over the approach in further detail. 
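As an illustration of this shallow representation, the following is a minimal sketch (not the authors' implementation) of how POS bigrams and a cosine similarity to the score-4 vector could be computed; the tf-idf weighting used in the actual VSM is omitted for brevity, and the inputs shown are hypothetical stand-ins.

```python
from collections import Counter
import math

def pos_bigrams(tags):
    # e.g. ["PRP", "MD", "TO", "VB"] -> ["PRP-MD", "MD-TO", "TO-VB"]
    return [f"{a}-{b}" for a, b in zip(tags, tags[1:])]

def cosine(u, v):
    # cosine similarity between two sparse vectors stored as dicts
    dot = sum(w * v.get(t, 0.0) for t, w in u.items())
    nu = math.sqrt(sum(w * w for w in u.values()))
    nv = math.sqrt(sum(w * w for w in v.values()))
    return dot / (nu * nv) if nu and nv else 0.0

# Toy response: "They can to survive" contains the erroneous MD-TO bigram.
response_tags = ["PRP", "MD", "TO", "VB"]
response_vec = Counter(pos_bigrams(response_tags))

# In the VSM approach, score4_vector would be built from the concatenated
# score-4 responses; here it is a hypothetical stand-in with illustrative weights.
score4_vector = {"PRP-MD": 0.4, "MD-VB": 0.9, "NN-WDT": 0.7}

cos4 = cosine(response_vec, score4_vector)  # similarity to the score-4 "norm"
```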
Our current work is inspired by the shallow analysis-based approach of Yoon and Bhat (2012) and operates under the same assumptions of capturing the range and sophistication of grammatical constructions at each score level. However, the approaches differ in the way in which a spoken response is assigned to a score group. We first analyze the limitations of the model studied in (Yoon and Bhat, 2012) and then describe how our model can address those limitations. The result is a new measure based on POS bigrams to assess ESL learners’ mastery of syntactic complexity. 4 Models for Measuring Grammatical Competence We mentioned that the measure proposed in this study is derived from assumptions similar to those studied in (Yoon and Bhat, 2012). Accordingly, we will summarize the previously studied model, outline its limitations, show how our proposed measure addresses those limitations and compare the two measures for the task of automatic scoring of speech. 4.1 Vector-Space Model based approach Yoon and Bhat (2012) explored an approach inspired by information retrieval. They treat the concatenated collection of responses from a particular score-class as a ‘super’ document. Then, regarding POS bigrams as terms, they construct POSbased vector space models for each score-class (there are four score classes denoting levels of proficiency as will be explained in Section 5.2), thus yielding four score-specific vector-space models (VSMs). The terms of the VSM are weighted by the term frequency-inverse document frequency (tf-idf) weighting scheme (Salton et al., 1975). The intuition behind the approach is that responses in the same proficiency level often share similar grammar and usage patterns. The similarity between a test response and a score-specific vector is then calculated by a cosine similarity metric. Although a total of 4 cosine similarity scores (one per score group) were generated, only cos4from among the four similarity scores, and cosmax, were selected as features. • cos4: the cosine similarity score between the test response and the vector of POS bigrams for the highest score class (level 4); and, • cosmax: the score level of the VSM with which the given response shows maximum similarity. Of these, cos4was selected based on its empirical performance (it showed the strongest correlation with human-assigned scores of proficiency among the distance-based measures). In addition, an intuitive justification for the choice is that the score-4 vector is a grammatical “norm” representing the average grammar usage distribution of the most proficient ESL students. The measure of syntactic complexity of a response, cos4, is its similarity to the highest score class. The study found that the measures showed reasonable discriminative ability across proficiency levels. Despite its encouraging empirical performance, the VSM method of capturing grammatical sophistication has the following limitations. First, the VSM-based method is likely to overestimate the contribution of the POS bigrams when highly correlated bigrams occur as terms in the VSM. Consider the presence of a grammar pattern represented by more than one POS bigram. For example, both “NN-WDT” and “WDT-RB” in Sentence 2 reflect the learner’s usage of a relative clause. However, we note that the two bigrams are correlated and including them both results in an over-estimation of their contribution. The VSM set-up has no mechanism to handle correlated features. 
Second, the tf-idf weighting scheme for relatively rare POS bigrams does not adequately capture their underlying distribution with respect to score groups. Grammatical expressions that occur frequently in one score level but rarely in other levels can be assumed to be characteristic of a specific score level. Therefore, the more uneven the distribution of a grammatical expression across score classes, the more important that grammatical expression should be as an indicator of a particular score class. However, the simple idf scheme cannot capture this uneven distribution. A pattern that occurs rarely but uniformly across different score groups can get the same weight as a pattern which is unevenly distributed to one score group. Martineau and Finin (2009) observed this weakness of the tf-idf weighting in the domain of sentiment 1308 analysis. When using tf-idf weighting to extract words that were strongly associated with positive sentiment in a movie review corpus (they considered each review as a document and a word as a term), it was found that a substantial proportion of words with the highest tf-idf were rare words (e.g., proper nouns) which were not directly associated with the sentiment. We propose to address these important limitations of the VSM approach by the use of a method that accounts for each of the deficiencies. This is done by resorting to a maximum entropy model based approach, to which we turn next. 4.2 Maximum Entropy-Based model In order to address the limitations discussed in 4.1, we propose a classification-based approach. Taking an approach different from previous studies, we formulate the task of assigning a score of syntactic complexity to a spoken response as a classification problem: given a spoken response, assign the response to a proficiency class. A classifier is trained in an inductive fashion, using a large corpus of learner responses that is divided into proficiency scores as the training data and then used to test data that is similar to the training data. A distinguishing feature of the current study is that the measure is based on a comparison of characteristics of the test response to models trained on large amounts of data from each score point, as opposed to measures that are simply characteristics of the responses themselves (which is how measures have been considered in prior studies). The inductive classifier we use here is the maximum-entropy model (MaxEnt) which has been used to solve several statistical natural language processing problems with much success (Berger et al., 1996; Borthwick et al., 1998; Borthwick, 1999; Pang et al., 2002; Klein et al., 2003; Rosenfeld, 2005). The productive feature engineering aspects of incorporating features into the discriminative MaxEnt classifier motivate the model choice for the problem at hand. In particular, the ability of the MaxEnt model’s estimation routine to handle overlapping (correlated) features makes it directly applicable to address the first limitation of the VSM model. The second limitation, related to the ineffective weighting of terms via the the tf-idf scheme, seems to be addressed by the fact that the MaxEnt model assigns a weight to each feature (in our case, POS bigrams) on a per-class basis (in our case, score group), by taking every instance into consideration. Therefore, a MaxEnt model has an advantage over the model described in 4.1 in that it uses four different weight schemes (one per score level) and each scheme is optimized for each score level. 
This is beneficial in situations where the features are not evenly important across all score levels. 5 Experimental Setup Our experiments seek answers to the following questions. 1. To what extent does a MaxEnt-score of syntactic complexity discriminate between levels of proficiency? 2. What is the effect of including the proposed measure of syntactic complexity in the stateof-the-art automatic scoring model? 3. How robust is the measure to errors in the various stages of automatic generation? 5.1 Tasks In order to answer the motivating questions of the study, we set-up two tasks. In the first task, we compare the extent to which the VSM-based measure and the MaxEnt-based measure (outlined in 4.1 and 4.2 above) discriminate between levels of syntactic complexity. Additionally, we compare the performance of an automatic scoring model of overall proficiency that includes the measures of syntactic complexity from each of the two models being compared and analyze the gains with respect to the state-of-the-art. In the second task, we study the measures’ robustness to errors incurred by ASR. 5.2 Data In this study, we used a collection of responses from an international English language assessment. The assessment consisted of questions to which speakers were prompted to provide spontaneous spoken responses lasting approximately 4560 seconds per question. Test takers read and/or listened to stimulus materials and then responded to questions based on the stimuli. All questions solicited spontaneous, unconstrained natural speech. A small portion of the available data with inadequate audio quality and lack of student response was excluded from the study. The remaining responses were partitioned into two datasets: the ASR set and the scoring model training/test (SM) 1309 set. The ASR set, with 47,227 responses, was used for ASR training and POS similarity model training. The SM set, with 2,950 responses, was used for feature evaluation and automated scoring model evaluation. There was no overlap in speakers between the ASR set and the SM set. Each response was rated for overall proficiency by trained human scorers using a 4-point scoring scale, where 1 indicates low speaking proficiency and 4 indicated high speaking proficiency. The distribution of proficiency scores, along with other details of the data sets, are presented in Table 1. As seen in Table 1, there is a strong bias towards the middle scores (score 2 and 3) with approximately 84-85% of the responses belonging to these two score levels. Although the skewed distribution limits the number of score-specific instances for the highest and lowest scores available for model training, we used the data without modifying the distribution since it is representative of responses in a large-scale language assessment scenario. Human raters’ extent of agreement in the subjective task of rating responses for language proficiency constrains the extent to which we can expect a machine’s score to agree with that of humans. An estimate of the extent to which human raters agree on the subjective task of proficiency assessment, is obtained by two raters scoring approximately 5% of data (2,388 responses from ASR set and 140 responses from SM set). Pearson correlation r between the scores assigned by the two raters was 0.62 in ASR set and 0.58 in SM set. This level of agreement will guide the evaluation of the human-machine agreement on scores. 
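For reference, a minimal sketch of the two agreement statistics used in this paper, Pearson correlation and weighted kappa; the quadratic weighting shown here is one common choice and is an assumption, since the weighting scheme is not specified in the text.

```python
import numpy as np

def pearson_r(x, y):
    # Pearson correlation between two lists of scores
    return np.corrcoef(x, y)[0, 1]

def weighted_kappa(a, b, labels=(1, 2, 3, 4), power=2):
    # Cohen's weighted kappa; power=2 gives quadratic weights (an assumption here)
    n = len(labels)
    observed = np.zeros((n, n))
    for x, y in zip(a, b):
        observed[x - 1, y - 1] += 1
    weights = np.array([[abs(i - j) ** power for j in range(n)] for i in range(n)], float)
    expected = np.outer(observed.sum(1), observed.sum(0)) / observed.sum()
    return 1.0 - (weights * observed).sum() / (weights * expected).sum()

human = [3, 2, 4, 1, 3, 2]
machine = [3, 3, 4, 2, 2, 2]
print(pearson_r(human, machine), weighted_kappa(human, machine))
```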
5.3 Stages of Automatic Grammatical Competence Assessment Here we outline the multiple stages involved in the automatic syntactic complexity assessment. The first stage, ASR, yields an automatic transcription, which is followed by the POS tagging stage. Subsequently, the feature extraction stage (a VSM or a MaxEnt model as the case may be) generates the syntactic complexity feature which is then incorporated in a multiple linear regression model to generate a score. The steps for automatic assessment of overall proficiency follow an analogous process (either including the POS tagger or not), depending on the objective measure being evaluated. The various objective measures are then combined in the multiple regression scoring model to generate an overall score of proficiency. 5.3.1 Automatic Speech Recognizer An HMM recognizer was trained using ASR set (approximately 733 hours of non-native speech collected from 7,872 speakers). A gender independent triphone acoustic model and combination of bigram, trigram, and four-gram language models were used. A word error rate (WER) of 31% on the SM dataset was observed. 5.3.2 POS tagger POS tags were generated using the POS tagger implemented in the Open-NLP toolkit3. It was trained on the Switchboard (SWBD) corpus. This POS tagger was trained on about 528K word/tag pairs. A combination of 36 tags from the Penn Treebank tag set and 6 tags generated for spoken languages were used in the tagger. The tagger achieved a tagging accuracy of 96.3% on a Switchboard evaluation set composed of 379K words, suggesting high accuracy of the tagger. However, due to substantial amount of speech recognition errors in our data, the POS error rate (resulting from the combined errors of ASR and automated POS tagger) is expected to be higher. 5.3.3 VSM-based Model We used the ASR data set to train a POS-bigram VSM for the highest score class and generated cos4 and cosmax reported in Yoon and Bhat (2012), for the SM data set as outlined in Section 4.1. 5.3.4 Maximum Entropy Model Classifier The input to the classifier is a set of POS bigrams (1366 bigrams in all) obtained from the POS-tagged output of the data. We considered binary-valued features (whether a POS bigram occurred or not), occurrence frequency, and relative frequency as input for the purpose of experimentation. We used the maximum entropy classifier implementation in the MaxEnt toolkit4. The classifier was trained using the LBFGS algorithm for parameter estimation and used equal-scale gaussian priors for smoothing. The results that follow are based on MaxEnt classifier’s parameter settings initialized to zero. Since a preliminary 3http://opennlp.apache.org 4http://homepages.inf.ed.ac.uk/ lzhang10/maxent_toolkit.html. 1310 Data set No. of No. of Score Score distribution responses speakers Mean SD 1 2 3 4 ASR 47,227 7,872 2.67 0.73 1,953 16,834 23,106 5,334 4% 36% 49% 11% SM 2,950 500 2.61 0.74 166 1,103 1,385 296 6% 37% 47% 10% Table 1: Data size and score distribution analysis of the effect of varying the feature (binary or frequency) revealed that the binary-valued feature was optimal (in terms of yielding the best agreement between human and machine scores), we only report our results for this case. The ASR data set was used to train the MaxEnt classifier and the features generated from the SM data set were used for evaluation. One straightforward way of using the maximum entropy classifier’s prediction for our case is to directly use its predicted score-level – 1, 2, 3 or 4. 
However, this forces the classifier to make a coarse-grained choice and may over-penalize the classifier’s scoring errors. To illustrate this, consider a scenario where the classifier assigns two responses A and B to score level 2 (based on the maximum a posteriori condition). Suppose that, for response A, the score class with the second highest probability corresponds to score level 1 and that, for response B, it corresponds to score level 3. It is apparent that the classifier has an overall tendency to assign a higher score to B, but looking at its top preference alone (2 for both responses), masks this tendency. We thus capture the classifier’s finer-grained scoring tendency by calculating the expected value of the classifier output. For a given response, the MaxEnt classifier calculates the conditional probability of a score-class given the response, in turn yielding conditional probabilities of each score group given the observation – pi for score group i ∈{1, 2, 3, 4}. In our case, we consider the predicted score of syntactic complexity to be the expected value of the class label given the observation as, mescore = 1×p1+2×p2+3×p3+4×p4. This permits us to better represent the score assigned by the MaxEnt classifier as a relative preference over score assignments. 5.3.5 Automatic Scoring System We consider a multiple regression automatic scoring model as studied in Zechner et al. (2009; Chen and Zechner (2011; Higgins et al. (2011). In its state-of-the-art set-up, the following model uses the features – HMM acoustic model score (global normalized), speaking rate, word types per second, average chunk length in words and language model score (global normalized). We use these features by themselves (Base), and also in conjunction with the VSM-based feature (cva4) and the MaxEnt-based feature (mescore). 5.4 Evaluation Metric We evaluate the measures using the metrics chosen in previous studies (Zechner et al., 2009; Chen and Zechner, 2011; Yoon and Bhat, 2012). A measure’s utility has been evaluated according to its ability to discriminate between levels of proficiency assigned by human raters. This is done by considering the Pearson correlation coefficient between the feature and the human scores. In an ideal situation, we would have compared machine score with scores of grammatical skill assigned by human raters. In our case, however, with only access to the overall proficiency scores, we use scores of language proficiency as those of grammatical skill. A criterion for evaluating the performance of the scoring model is the extent to which the automatic scores of overall proficiency agree with the human scores. As in prior studies, here too the level of agreement is evaluated by means of the weighted kappa measure as well as unrounded and rounded Pearson’s correlations between machine and human scores (since the output of the regression model can either be rounded or regarded as is). The feature that maximizes this degree of agreement will be preferred. 6 Experimental Results First, we compare the discriminative ability of measures of syntactic complexity (VSM-model based measure with that of the MaxEnt-based measure) across proficiency levels. Table 2 summarizes our experimental results for this task. We 1311 Features Manual Transcriptions ASR mescore 0.57 0.52 cos4 0.48 0.43 cosmax 0.31 Table 2: Pearson correlation coefficients between measures and holistic proficiency scores. All values are significant at level 0.01. 
Only the measures cos4 and mescore were compared for robustness using manual and ASR transcriptions. notice that of the measures compared, mescore shows the highest correlation with scores of syntactic complexity. The correlation was approximately 0.1 higher in absolute value than that of cos4, which was the best performing feature in the VSM-based model and the difference is statistically significant. Seeking to study the robustness of the measures derived using a shallow analysis, we next compare the two measures studied here, with respect to the impact of speech recognition errors on their correlation with scores of syntactic complexity. Towards this end, we compare mescore and cos4when POS bigrams are extracted from manual transcriptions (ideal ASR) and ASR transcriptions. In Table 2, noticing that the correlations decrease going along a row, we can say that the errors in the ASR system caused both mescore and cos4to under-perform. However, the performance drop (around 0.05) resulting from a shallow analysis is relatively small compared to the drop observed while employing a deep syntactic analysis. Chen and Zechner (2011) found that while using measures of syntactic complexity obtained from transcriptions, errors in ASR transcripts caused over 0.40 drop in correlation from that found with manual transcriptions5. This comparison suggests that the current POS-based shallow analysis approach is more robust to ASR errors compared to a syntactic analysis-based approach. The effect of the measure of syntactic complexity is best studied by including it in an automatic scoring model of overall proficiency. We compare the performance gains over the state-of-theart with the inclusion of additional features (VSMbased and MaxEnt-based, in turn). Table 3 shows the system performance with different grammar sophistication measures. The results reported are averaged over a 5-fold cross validation of the multiple regression model, where 80% of the SM data 5Due to differences in the dataset and ASR system, a direct comparison between the current study and the cited prior study was not possible. set is used to train the model and the evaluation is done using 20% of the data in every fold. As seen in Table 3, using the proposed measure, mescore, leads to an improved agreement between human and machine scores of proficiency. Comparing the unrounded correlation results in Table 3 we notice that the model Base+mescore shows the highest correlation of predicted scores with human scores. In addition, we test the significance of the difference between two dependent correlations using Steiger’s Z-test (via the paired.r function in the R statistical package (Revelle, 2012)). We note that the performance gain of Base+mescore over Base as well as over Base + cos4 is statistically significant at level = 0.01. The performance gain of Base+cos4 over Base, however, is not statistically significant at level = 0.01. Thus, the inclusion of the MaxEntbased measure of syntactic complexity results in improved agreement between machine and human scores compared to the state-of-the-art model (here, Base). 7 Discussions We now discuss some of the observations and results of our study with respect to the following items. Improved performance: We sought to verify empirically that the MaxEnt model really outperforms the VSM in the case of correlated POS bigrams. To see this, we separate the test set into three subsets A, B, C. 
Set A contains responses where MaxEnt outperforms VSM; set B contains responses where VSM outperforms MaxEnt; set C contains responses where their predictions are comparable. For each group of responses s ∈{A, B, C}, we calculate the percentage of responses Ps where two highly correlated POS bigrams occur6. We found that the percentages follow the order: PA = 12.93% > PC = 7.29% > 6We consider two POS bigrams to be highly correlated, when the their pointwise-mutual information is higher than 4. 1312 Evaluation method Base Base+cos4 Base+mescore Weighted kappa 0.503 0.524 0.546 Correlation (unrounded) 0.548 0.562 0.592 Correlation (rounded) 0.482 0.492 0.519 Table 3: Comparison of scoring model performances using features of syntactic complexity studied in this paper along with those available in the state-of-the-art. Here, Base is the scoring model without the measures of syntactic complexity. All correlations are significant at level 0.01. PB = 4.41%. This suggests that when correlated POS bigrams occur, MaxEnt is more likely to provide better score predictions than VSM does. Feature design: In the case of MaxEnt, the observation that binary-valued features (presence/absence of POS bigrams) yield better performance than features indicative of the occurrence frequency of the bigram has interesting implications. This was also observed in Pang et al. (2002) where it was interpreted to mean that overall sentiment is indicated by the presence/absence of keywords, as opposed to topic of a text, which is indicated by the repeated use of the same or similar terms. An analogous explanation is applicable here. At first glance, the use of the presence/absence of grammatical structures may raise concerns about a potential loss of information (e.g. the distinction between an expression that is used once and another that is used multiple times is lost). However, when considered in the context of language acquisition studies, this approach seems to be justified. Studies in native language acquisition, have considered multiple grammatical developmental indices that represent the grammatical levels reached at various stages of language acquisition. For instance, Covington et al. (2006) proposed the revised D-level scale which was originally studied by Rosenberg and Abbeduto (1987). The D-Level Scale categorizes grammatical development into 8 levels according to the presence of a set of diverse grammatical expressions varying in difficulty (for example, level 0 consists of simple sentences, while level 5 consists of sentences joined by a subordinating conjunction). Similarly, Scarborough (1990) proposed the Index of Productive Syntax (IPSyn), according to which, the presence of particular grammatical structures, from a list of 60 structures (ranging from simple ones such as including only subjects and verbs, to more complex constructions such as conjoined sentences) is evidence of language acquisition milestones. Despite the functional differences between the indices, there is a fundamental operational similarity - that they both use the presence or absence of grammatical structures, rather than their occurrence count, as evidence of acquisition of certain grammatical levels. The assumption that a presence-based view of grammatical level acquisition is also applicable to second language assessment helps validate our observation that binaryvalued features yield a better performance when compared with frequency-valued features. 
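As a concrete illustration of the presence-based features and the expected score of Section 5.3.4, a minimal sketch follows; the posterior values shown are hypothetical.

```python
def binary_bigram_features(pos_tags, bigram_vocabulary):
    # Presence/absence features over the POS-bigram vocabulary
    present = {f"{a}-{b}" for a, b in zip(pos_tags, pos_tags[1:])}
    return {bg: 1.0 if bg in present else 0.0 for bg in bigram_vocabulary}

def mescore(posteriors):
    # Expected score level: mescore = 1*p1 + 2*p2 + 3*p3 + 4*p4
    return sum(level * p for level, p in posteriors.items())

# Hypothetical MaxEnt posteriors for one response
posteriors = {1: 0.05, 2: 0.55, 3: 0.30, 4: 0.10}
print(mescore(posteriors))  # 2.45, a finer-grained score than the argmax class (2)
```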
Generalizability: The training and test sets used in this study had similar underlying distributions – they both sought unconstrained responses to a set of items with some minor differences in item type. Looking ahead, an important question is the extent to which our measure is sensitive to a mismatch between training and test data. 8 Conclusions Seeking alternatives to measuring syntactic complexity of spoken responses via syntactic parsers, we study a shallow-analysis based approach for use in automatic scoring. Empirically, we show that the proposed measure, based on a maximum entropy classification, satisfied the constraints of the design of an objective measure to a high degree. In addition, the proposed measure was found to be relatively robust to ASR errors. The measure outperformed a related measure of syntactic complexity (also based on shallow-analysis of spoken response) previously found to be well-suited for automatic scoring. Including the measure of syntactic complexity in an automatic scoring model resulted in statistically significant performance gains over the stateof-the-art. We also make an interesting observation that the impressionistic evaluation of syntactic complexity is better approximated by the presence or absence of grammar and usage patterns (and not by their frequency of occurrence), an idea supported by studies in native language acquisition. 1313 References Adam L Berger, Vincent J Della Pietra, and Stephen A Della Pietra. 1996. A maximum entropy approach to natural language processing. Computational linguistics, 22(1):39–71. Jared Bernstein, Jian Cheng, and Masanori Suzuki. 2010. Fluency and structural complexity as predictors of L2 oral proficiency. In Proceedings of InterSpeech, pages 1241–1244. Andrew Borthwick, John Sterling, Eugene Agichtein, and Ralph Grishman. 1998. Exploiting diverse knowledge sources via maximum entropy in named entity recognition. In Proc. of the Sixth Workshop on Very Large Corpora. Andrew Borthwick. 1999. A maximum entropy approach to named entity recognition. Ph.D. thesis, New York University. Lei Chen and Su-Youn Yoon. 2011. Detecting structural events for assessing non-native speech. In Proceedings of the 6th Workshop on Innovative Use of NLP for Building Educational Applications, IUNLPBEA ’11, pages 38–45, Stroudsburg, PA, USA. Association for Computational Linguistics. Miao Chen and Klaus Zechner. 2011. Computing and evaluating syntactic complexity features for automated scoring of spontaneous non-native speech. In Proceedings of the 49th Annual Meeting of the Association for Computational Linguistics, pages 722– 731. Martin Chodorow and Claudia Leacock. 2000. An unsupervised method for detecting grammatical errors. In Proceedings of NAACL, pages 140–147. Michael A Covington, Congzhou He, Cati Brown, Lorina Naci, and John Brown. 2006. How complex is that sentence? a proposed revision of the rosenberg and abbeduto d-level scale. ReVision. Washington, DC http://www. ai. uga. edu/caspr/2006-01Covington. pdf.(Accessed May 10, 2010.). Catia Cucchiarini, Helmer Strik, and Lou Boves. 2000. Quantitative assessment of second language learners’ fluency by means of automatic speech recognition technology. The Journal of the Acoustical Society of America, 107(2):989–999. Catia Cucchiarini, Helmer Strik, and Lou Boves. 2002. Quantitative assessment of second language learners’ fluency: comparisons between read and spontaneous speech. The Journal of the Acoustical Society of America, 111(6):2862–2873. Sergey Feldman, M.A. 
Marin, Mari Ostendorf, and Maya R. Gupta. 2009. Part-of-speech histograms for genre classification of text. In Proceedings of ICASSP, pages 4781 –4784. Pauline Foster and Peter Skehan. 1996. The influence of planning and task type on second language performance. Studies in Second Language Acquisition, 18:299–324. Horacio Franco, Leonardo Neumeyer, Yoon Kim, and Orith Ronen. 1997. Automatic pronunciation scoring for language instruction. In Proceedings of ICASSP, pages 1471–1474. Gene B Halleck. 1995. Assessing oral proficiency: a comparison of holistic and objective measures. The Modern Language Journal, 79(2):223–234. Derrick Higgins, Xiaoming Xi, Klaus Zechner, and David Williamson. 2011. A three-stage approach to the automated scoring of spontaneous spoken responses. Computer Speech & Language, 25(2):282– 306. Kellogg W Hunt. 1965. Grammatical structures written at three grade levels. ncte research report no. 3. Noriko Iwashita, Annie Brown, Tim McNamara, and Sally O’Hagan. 2008. Assessed levels of second language speaking proficiency: How distinct? Applied Linguistics, 29(1):24–49. Noriko Iwashita. 2010. Features of oral proficiency in task performance by efland jfllearners. In Selected proceedings of the Second Language Research Forum, pages 32–47. Dan Klein, Joseph Smarr, Huy Nguyen, and Christopher D Manning. 2003. Named entity recognition with character-level models. In Proceedings of the seventh conference on Natural language learning at HLT-NAACL 2003-Volume 4, pages 180–183. Association for Computational Linguistics. Xiaofei Lu. 2010. Automatic analysis of syntactic complexity in second language writing. International Journal of Corpus Linguistics, 15(4):474– 496. M.A Marin, Sergey Feldman, Mari Ostendorf, and Maya R. Gupta. 2009. Filtering web text to match target genres. In Proceedings of ICASSP, pages 3705–3708. Justin Martineau and Tim Finin. 2009. Delta tfidf: An improved feature space for sentiment analysis. In ICWSM. Leonardo Neumeyer, Horacio Franco, Vassilios Digalakis, and Mitchel Weintraub. 2000. Automatic scoring of pronunciation quality. Speech Communication, pages 88–93. Lourdes Ortega. 2003. Syntactic complexity measures and their relationship to L2 proficiency: A research synthesis of college–level L2 writing. Applied Linguistics, 24(4):492–518. 1314 Bo Pang, Lillian Lee, and Shivakumar Vaithyanathan. 2002. Thumbs up?: sentiment classification using machine learning techniques. In Proceedings of the ACL-02 conference on Empirical methods in natural language processing-Volume 10, pages 79–86. Association for Computational Linguistics. William Revelle, 2012. psych: Procedures for Psychological, Psychometric, and Personality Research. Northwestern University, Evanston, Illinois. R package version 1.2.1. Sheldon Rosenberg and Leonard Abbeduto. 1987. Indicators of linguistic competence in the peer group conversational behavior of mildly retarded adults. Applied Psycholinguistics, 8:19–32. Ronald Rosenfeld. 2005. Adaptive statistical language modeling: a maximum entropy approach. Ph.D. thesis, IBM. Gerard Salton, Anita Wong, and Chung-Shu Yang. 1975. A vector space model for automatic indexing. Communications of the ACM, 18(11):613–620. Hollis S Scarborough. 1990. Index of productive syntax. Applied Psycholinguistics, 11(1):1–22. Joel R. Tetreault and Martin Chodorow. 2008. The ups and downs of preposition error detection in ESL writing. In Proceedings of COLING, pages 865– 872. Xinhao Wang, Keelan Evanini, and Klaus Zechner. 2013. 
Coherence modeling for the automated assessment of spontaneous spoken responses. In Proceedings of NAACL-HLT, pages 814–819. Silke Witt and Steve Young. 1997. Performance measures for phone-level pronunciation teaching in CALL. In Proceedings of STiLL, pages 99–102. Silke Witt. 1999. Use of the speech recognition in computer-assisted language learning. Unpublished dissertation, Cambridge University Engineering department, Cambridge, U.K. Kate Wolf-Quintero, Shunji Inagaki, and Hae-Young Kim. 1998. Second language development in writing: Measures of fluency, accuracy, and complexity. Technical Report 17, Second Language Teaching and curriculum Center, The University of Hawai’i, Honolulu, HI. Shasha Xie, Keelan Evanini, and Klaus Zechner. 2012. Exploring content features for automated speech scoring. In Proceedings of the NAACL-HLT, pages 103–111. Su-Youn Yoon and Suma Bhat. 2012. Assessment of esl learners’ syntactic competence based on similarity measures. In Proceedings of the 2012 Joint Conference on Empirical Methods in Natural Language Processing and Computational Natural Language Learning, pages 600–608. Association for Computational Linguistics. Klaus Zechner, Derrick Higgins, Xiaoming Xi, and David M Williamson. 2009. Automatic scoring of non-native spontaneous speech in tests of spoken english. Speech Communication, 51(10):883–895. 1315
Proceedings of the 52nd Annual Meeting of the Association for Computational Linguistics, pages 1316–1325, Baltimore, Maryland, USA, June 23-25 2014. c⃝2014 Association for Computational Linguistics Can You Repeat That? Using Word Repetition to Improve Spoken Term Detection Jonathan Wintrode and Sanjeev Khudanpur Center for Language and Speech Processing Johns Hopkins University [email protected] , [email protected] Abstract We aim to improve spoken term detection performance by incorporating contextual information beyond traditional Ngram language models. Instead of taking a broad view of topic context in spoken documents, variability of word co-occurrence statistics across corpora leads us to focus instead the on phenomenon of word repetition within single documents. We show that given the detection of one instance of a term we are more likely to find additional instances of that term in the same document. We leverage this burstiness of keywords by taking the most confident keyword hypothesis in each document and interpolating with lower scoring hits. We then develop a principled approach to select interpolation weights using only the ASR training data. Using this re-weighting approach we demonstrate consistent improvement in the term detection performance across all five languages in the BABEL program. 1 Introduction The spoken term detection task arises as a key subtask in applying NLP applications to spoken content. Tasks like topic identification and namedentity detection require transforming a continuous acoustic signal into a stream of discrete tokens which can then be handled by NLP and other statistical machine learning techniques. Given a small vocabulary of interest (1000-2000 words or multi-word terms) the aim of the term detection task is to enumerate occurrences of the keywords within a target corpus. Spoken term detection converts the raw acoustics into time-marked keyword occurrences, which may subsequently be fed (e.g. as a bag-of-terms) to standard NLP algorithms. Although spoken term detection does not require the use of word-based automatic speech recognition (ASR), it is closely related. If we had perfectly accurate ASR in the language of the corpus, term detection is reduced to an exact string matching task. The word error rate (WER) and term detection performance are clearly correlated. Given resource constraints, domain, channel, and vocabulary limitations, particularly for languages other than English, the errorful token stream makes term detection a non-trivial task. In order to improve detection performance, and restricting ourselves to an existing ASR system or systems at our disposal, we focus on leveraging broad document context around detection hypotheses. ASR systems traditionally use N-gram language models to incorporate prior knowledge of word occurrence patterns into prediction of the next word in the token stream. N-gram models cannot, however, capture complex linguistic or topical phenomena that occur outside the typical 3-5 word scope of the model. Yet, though many language models more sophisticated than N-grams have been proposed, N-grams are empirically hard to beat in terms of WER. We consider term detection rather than the transcription task in considering how to exploit topic context, because in evaluating the retrieval of certain key terms we need not focus on improving the entire word sequence. 
Confidence scores from an ASR system (which incorporate N-gram probabilities) are optimized in order to produce the most likely sequence of words rather than the accuracy of individual word detections. Looking at broader document context within a more limited task might allow us to escape the limits of N-gram performance. We will show that by focusing on contextual information in the form of word repetition within documents, we obtain consistent improvement across five languages in the so called Base Phase of the IARPA BABEL program. 1316 1.1 Task Overview We evaluate term detection and word repetitionbased re-scoring on the IARPA BABEL training and development corpora1 for five languages Cantonese, Pashto, Turkish, Tagalog and Vietnamese (Harper, 2011). The BABEL task is modeled on the 2006 NIST Spoken Term Detection evaluation (NIST, 2006) but focuses on limited resource conditions. We focus specifically on the so called no target audio reuse (NTAR) condition to make our method broadly applicable. In order to arrive at our eventual solution, we take the BABEL Tagalog corpus and analyze word co-occurrence and repetition statistics in detail. Our observation of the variability in co-occurrence statistics between Tagalog training and development partitions leads us to narrow the scope of document context to same word co-occurrences, i.e. word repetitions. We then analyze the tendency towards withindocument repetition. The strength of this phenomenon suggests it may be more viable for improving term-detection than, say, topic-sensitive language models. We validate this by developing an interpolation formula to boost putative word repetitions in the search results, and then investigate a method for setting interpolation weights without manually tuning on a development set. We then demonstrate that the method generalizes well, by applying it to the 2006 English data and the remaining four 2013 BABEL languages. We demonstrate consistent improvements in all languages in both the Full LP (80 hours of ASR training data) and Limited LP (10 hours) settings. 2 Motivation We seek a workable definition of broad document context beyond N-gram models that will improve term detection performance on an arbitrary set of queries. Given the rise of unsupervised latent topic modeling with Latent Dirchlet Allocation (Blei et al., 2003) and similar latent variable approaches for discovering meaningful word cooccurrence patterns in large text corpora, we ought to be able to leverage these topic contexts instead of merely N-grams. Indeed there is work in the literature that shows that various topic models, latent or otherwise, can be useful for improving lan1Language collection releases IARPA-babel101-v0.4c, IARPA-babel104b-v0.4bY, IARPA-babel105b-v0.4, IARPAbabel106-v0.2g and IARPA-babel107b-v0.7 respectively. guage model perplexity and word error rate (Khudanpur and Wu, 1999; Chen, 2009; Naptali et al., 2012). However, given the preponderance of highly frequent non-content words in the computation of a corpus’ WER, it’s not clear that a 1-2% improvement in WER would translate into an improvement in term detection. Still, intuition suggests that knowing the topic context of a detected word ought to be useful in predicting whether or not a term does belong in that context. 
For example, if we determine that the context of the detection hypothesis is about computers, containing words like ‘monitor,’ ‘internet’ and ‘mouse,’ then we would be more confident of a term such as ‘keyboard’ and less confident of a term such as ‘cheese board’. The difficulty in this approach arises from the variability in word co-occurrence statistics. Using topic information will be helpful if ‘monitor,’ ‘keyboard’ and ‘mouse’ consistently predict that ‘keyboard’ is present. Unfortunately, estimates of co-occurrence from small corpora are not very consistent, and often over- or underestimate the co-occurrence probabilities needed for term detection.

We illustrate this variability by looking at how consistent word co-occurrences are between two separate corpora in the same language: i.e., if we observe words that frequently co-occur with a keyword in the training corpus, do they also co-occur with the keywords in a second held-out corpus? Figure 1, based on the BABEL Tagalog corpus, suggests this is true only for high-frequency keywords.

Figure 1: Correlation between the co-occurrence counts in the training and held-out sets for a fixed keyword (term) and all its “context” words.

Each point in Figure 1 represents one of 355 Tagalog keywords used for system development by all BABEL participants. For each keyword k, we count how often it co-occurs in the same conversation as a vocabulary word w in the ASR training data and the development data, and designate the counts T(k, w) and D(k, w) respectively. The x-coordinate of each point in Figure 1 is the frequency of k in the training data, and the y-coordinate is the correlation coefficient ρ_k between T(k, w) and D(k, w). A high ρ_k implies that words w that co-occur frequently with k in the training data also do so in the search collection.

Figure 2: The number of times a fixed keyword k co-occurs with a vocabulary word w in the training speech collection — T(k, w) — versus the search collection — D(k, w). (a) High-frequency keyword ‘bukas’; (b) low-frequency keyword ‘Davao’.

To further illustrate how Figure 1 was obtained, consider the high-frequency keyword bukas (count = 879) and the low-frequency keyword Davao (count = 11), and plot T(k, ·) versus D(k, ·), as done in Figure 2. The correlation coefficients ρ_bukas and ρ_Davao from the two plots end up as two points in Figure 1. Figure 1 suggests that (k, w) co-occurrences are consistent between the two corpora (ρ_k > 0.8) for keywords occurring 100 or more times. However, if the goal is to help a speech retrieval system detect content-rich (and presumably infrequent) keywords, then using word co-occurrence information (i.e. topic context) does not appear to be too promising, even though intuition suggests that such information ought to be helpful.

In light of this finding, we will restrict the type of context we use for term detection to the co-occurrence of the term itself elsewhere within the document. As it turns out, this ‘burstiness’ of words within documents, as the term is defined by Church and Gale in their work on Poisson mixtures (1995), provides a more reliable framework for successfully exploiting document context.

2.1 Related Work

A number of efforts have been made to augment traditional N-gram models with latent topic information (Khudanpur and Wu, 1999; Florian and Yarowsky, 1999; Liu and Liu, 2008; Hsu and Glass, 2006; Naptali et al., 2012), including some of the early work on Probabilistic Latent Semantic Analysis by Hofmann (2001).
In all of these cases WER gains in the 1-2% range were observed by interpolating latent topic information with N-gram models. The re-scoring approach we present is closely related to adaptive or cache language models (Jelinek, 1997; Kuhn and De Mori, 1990; Kneser and Steinbiss, 1993). The primary difference between this and previous work on similar language models is the narrower focus here on the term detection task, in which we consider each search term in isolation, rather than all words in the vocabulary. Most recently, Chiu and Rudnicky (2013) looked at word bursts in the IARPA BABEL conversational corpora, and were also able to successfully improve performance by leveraging the burstiness of language. One advantage of the approach proposed here, relative to theirs, is its simplicity and the fact that it does not require an additional tuning set to estimate parameters.

In the information retrieval community, clustering and latent topic models have yielded improvements over traditional vector space models. We will discuss in detail in the following section related work by Church and Gale (1995, 1999, and 2000). Work by Wei and Croft (2006) and Chen (2009) takes a language model-based approach to information retrieval and, again, interpolates latent topic models with N-grams to improve retrieval performance. However, in many text retrieval tasks, queries are often tens or hundreds of words in length rather than short spoken phrases. In these efforts, the topic model information was helpful in boosting retrieval performance above the baseline vector space or N-gram models. Clearly topic or context information is relevant to a retrieval-type task, but we need a stable, consistent framework in which to apply it.

3 Term and Document Frequency Statistics

To this point we have assumed an implicit property of low-frequency words which Church and Gale state concisely in their 1999 study of inverse document frequency:

Low frequency words tend to be rich in content, and vice versa. But not all equally frequent words are equally meaningful. Church and Gale (1999).

The typical use of Document Frequency (DF) in information retrieval or text categorization is to emphasize words that occur in only a few documents and are thus more “rich in content”. Close examination of DF statistics by Church and Gale in their work on Poisson mixtures (1995) resulted in an analysis of the burstiness of content words. In this section we look at DF and burstiness statistics, applying some of the analyses of Church and Gale (1999) to the BABEL Tagalog corpus. We observe, in 648 Tagalog conversations, phenomena similar to those observed by Church and Gale on 89,000 AP English newswire articles. We proceed in this fashion to make a case for why burstiness ought to help in the term detection task.

For the Tagalog conversations, as with English newswire, we observe that the document frequency DF_w of a word w is not a linear function of word frequency f_w in the log domain, as would be expected under a naive Poisson generative assumption. The implication of deviations from a Poisson model is that words tend to be concentrated in a small number of documents rather than occurring uniformly across the corpus. This is the burstiness we leverage to improve term detection.

Figure 3: Tagalog corpus frequency statistics, unigrams. (a) f_w versus IDF_w; (b) observed versus predicted IDF_w.

The first illustration of word burstiness can be seen by plotting observed inverse document frequency, IDF_w, versus f_w in the log domain (Figure 3a).
We use the same definition of IDF_w as Church and Gale (1999):

IDF_w = -\log_2 \frac{DF_w}{N},    (1)

where N is the number of documents (i.e. conversations) in the corpus. There is good linear correlation (ρ = 0.73) between log f_w and IDF_w. Yet, visually, the relationship in Figure 3a is clearly not linear. In contrast, the AP English data exhibits a correlation of ρ = 0.93 (Church and Gale, 1999). Thus the deviation in the Tagalog corpus is more pronounced, i.e., words are less uniformly distributed across documents. A second perspective on word burstiness that follows from Church and Gale (1999) is that a Poisson assumption should lead us to predict:

\widehat{IDF}_w = -\log_2\left(1 - e^{-f_w/N}\right).    (2)

Figure 4: Difference between observed and predicted IDF_w for Tagalog unigrams.

For the AP newswire, Church and Gale found the largest deviation between the predicted and observed IDF_w to occur in the middle of the frequency range. We see a somewhat different picture for Tagalog speech in Figure 3b. Observed IDF_w values again deviate significantly from their predictions (2), but all along the frequency range. There is a noticeable quantization effect occurring in the high IDF range, given that our N is at least a factor of 100 smaller than the number of AP articles they studied: 648 vs. 89,000. Figure 4 also shows the difference between the observed IDF_w and the Poisson estimate of IDF_w, and further illustrates the high variance in IDF_w for low-frequency words. Two questions arise: what is happening with infrequent words, and why does this matter for term detection?

To look at the data from a different perspective, we consider the random variable k, which is the number of times a word occurs in a particular document. In Figure 5 we plot the following ratio, which Church and Gale (1995) define as burstiness:

E_w[k \mid k > 0] = \frac{f_w}{DF_w}    (3)

as a function of f_w. We denote this as E[k] and can interpret burstiness as the expected word count given that we see w at least once.

Figure 5: Tagalog burstiness.

In Figure 5 we see two classes of words emerge. A similar phenomenon is observed concerning adaptive language models (Church, 2000). In general, we can think of using word repetitions to re-score term detection as applying a limited form of adaptive or cache language model (Jelinek, 1997). Likewise, Katz attempts to capture these two classes in his G model of word frequencies (1996). For the first class, burstiness increases slowly but steadily as w occurs more frequently. Let us label these Class A words. Since our corpus size is fixed, we might expect this to occur, as more word occurrences must be pigeon-holed into the same number of documents. Looking close to the y-axis in Figure 5, we observe a second class of exclusively low-frequency words whose burstiness ranges from highly concentrated to singletons. We will refer to these as Class B words. If we take the Class A concentration trend as typical, we can argue that most Class B words exhibit a larger than average concentration. In either case we see evidence that both high- and low-frequency words tend towards repeating within a document.

3.1 Unigram Probabilities

In applying the burstiness quantity to term detection, we recall that the task requires us to locate a particular instance of a term, not estimate a count, hence the utility of N-gram language models, which predict words in sequence. We encounter the burstiness property of words again by looking at unigram occurrence probabilities.
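As a concrete reference for Equations (1)-(3), the following is a minimal sketch of how these per-word statistics could be computed from a collection of conversations, each represented as a list of word tokens; this is an illustration, not the authors' implementation.

```python
import math
from collections import Counter

def corpus_statistics(documents):
    """documents: list of conversations, each a list of word tokens."""
    n_docs = len(documents)
    freq, doc_freq = Counter(), Counter()
    for doc in documents:
        freq.update(doc)            # f_w: total occurrences of each word
        doc_freq.update(set(doc))   # DF_w: number of documents containing the word
    stats = {}
    for w in freq:
        f_w, df_w = freq[w], doc_freq[w]
        stats[w] = {
            "observed_idf": -math.log2(df_w / n_docs),                    # Eq. (1)
            "predicted_idf": -math.log2(1.0 - math.exp(-f_w / n_docs)),   # Eq. (2)
            "burstiness": f_w / df_w,                                     # Eq. (3), E[k | k > 0]
        }
    return stats
```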
We compare the unconditional unigram probability (the probability that a given word token is w) with the conditional unigram probability, given the term has occurred once in the document. We compute the conditional probability for w using frequency information:

P(w \mid k > 0) = \frac{f_w - DF_w}{\sum_{D:\, w \in D} |D|}    (4)

Figure 6: Difference between conditional and unconditional unigram probabilities for Tagalog.

Figure 6 shows the difference between conditional and unconditional unigram probabilities. Without any other information, Zipf's law suggests that most word types do not occur in a particular document. However, conditioning on one occurrence, most word types are more likely to occur again, due to their burstiness. Finally we measure the adaptation of a word, which is defined by Church and Gale (1995) as:

P_{adapt}(w) = P_w(k > 1 \mid k > 0)    (5)

Figure 7: Tagalog word adaptation probability.

When we plot adaptation versus f_w (Figure 7) we see that all high-frequency and a significant number of low-frequency terms have adaptation greater than 50%. To be precise, 26% of all tokens and 25% of low-frequency tokens (f_w < 100) have at least 50% adaptation. Given that adaptation values are roughly an order of magnitude higher than the conditional unigram probabilities, in the next two sections we describe how we use adaptation to boost term detection scores.

4 Term Detection Re-scoring

We summarize our re-scoring of repeated words with the observation: given a correct detection of a term, the likelihood of additional occurrences of that term in the same document should increase. When we observe a term detection score with high confidence, we boost the other, lower-scoring hits of that term in the same document to reflect this increased likelihood of repeated terms. For each term t and document d, we propose interpolating the ASR confidence score for a particular detection t_d with the top-scoring hit in d, which we will call b_td:

S(t_d) = (1 - \alpha)\, P_{asr}(t_d \mid O) + \alpha\, P_{asr}(b_{td} \mid O)    (6)

We will develop a principled approach to selecting α using the adaptation property of the corpus. However, to verify that this approach is worth pursuing, we sweep a range of small α values, on the assumption that we still want to rely mostly on the ASR confidence score for term detection. For the Tagalog data, we let α range from 0 (the baseline) to 0.5 and re-score each term detection according to (6). Table 1 shows the results of this parameter sweep, which yields 1 to 2% absolute performance gains on a number of term detection metrics.

α      ATWV    P(Miss)
0.00   0.470   0.430
0.05   0.481   0.422
0.10   0.483   0.420
0.15   0.484   0.418
0.20   0.483   0.416
0.25   0.480   0.417
0.30   0.477   0.417
0.35   0.475   0.415
0.40   0.471   0.413
0.45   0.465   0.413
0.50   0.462   0.410

Table 1: Term detection scores for swept α values on Tagalog development data.

The primary metric for the BABEL program, Actual Term Weighted Value (ATWV), is defined by NIST using a cost function of the false alarm probability P(FA) and P(Miss), averaged over a set of queries (NIST, 2006). The manner in which the components of ATWV are defined,

P(Miss) = 1 - N_{true}(term) / f_{term}    (7)
P(FA) = N_{false} / Duration_{corpus}    (8)

implies that the cost of a miss is inversely proportional to the frequency of the term in the corpus, but the cost of a false alarm is fixed. For this reason, we report both ATWV and the P(Miss) component. A decrease in P(Miss) reflects the fact that we are able to boost correct detections of the repeated terms.

4.1 Interpolation Weights

We would prefer to use prior knowledge rather than naive tuning to select an interpolation weight α.
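Before turning to the choice of α, a minimal sketch of the re-scoring step in Equation (6); the detection-tuple format shown is an assumption for illustration.

```python
from collections import defaultdict

def rescore_detections(detections, alpha):
    """detections: list of dicts with keys 'term', 'doc', 'time', 'score' (ASR confidence)."""
    # b_td: the top-scoring hypothesis for each (term, document) pair
    best = defaultdict(float)
    for d in detections:
        key = (d["term"], d["doc"])
        best[key] = max(best[key], d["score"])
    # Equation (6): S(t_d) = (1 - alpha) * P_asr(t_d | O) + alpha * P_asr(b_td | O)
    return [dict(d, score=(1 - alpha) * d["score"] + alpha * best[(d["term"], d["doc"])])
            for d in detections]

hits = [
    {"term": "bukas", "doc": "conv01", "time": 12.3, "score": 0.92},
    {"term": "bukas", "doc": "conv01", "time": 48.7, "score": 0.35},
]
print(rescore_detections(hits, alpha=0.2))  # the weaker repeat is boosted toward the top hit
```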
Our analysis of word burstiness suggests that adaptation, is a reasonable candidate. Adaptation also has the desirable property that we can estimate it for each word in the training vocabulary directly from training data and not post-hoc on a per-query basis. We consider several different estimates and we can show that the favorable result extends across languages. Intuition suggests that we prefer per-term interpolation weights related to the term’s adaptation. But despite the strong evidence of the adaptation phenomenon in both high and low-frequency words (Figure 7), we have less confidence in the adaptation strength of any particular word. As with word co-occurrence, we consider if estimates of Padapt(w) from training data are consistent when estimated on development data. Figure 8 shows the difference between Padapt(w) measured on the two corpora (for words occurring in both). We see that the adaptation estimates are only consistent between corpora for high-frequency words. Using this Padapt(w) estimate directly actually hurts ATWV performance by 4.7% absolute on the 355 term development query set (Table 2). Given the variability in estimating Padapt(w), an alternative approach would be take c Pw as an upper bound on α, reached as the DFw increases (cf. Equation 9). We would discount the adaptation factor when DFw is low and we are unsure of Figure 8: Difference in adaptation estimates between Tagalog training and development corpora Interpolation Weight ATWV P(Miss) None 0.470 0.430 Padapt(w) 0.423 0.474 (1 −e−DFw)Padapt(w) 0.477 0.415 bα = 0.20 0.483 0.416 Table 2: Term detection performance using various interpolation weight strategies on Tagalog dev data the effect. αw = (1 −e−DFw) · bPadapt(w) (9) This approach shows a significant improvement (0.7% absolute) over the baseline. However, considering this estimate in light of the two classes of words in Figure 5, there are clearly words in Class B with high burstiness that will be ignored by trying to compensate for the high adaptation variability in the low-frequency range. Alternatively, we take a weighted average of αw’s estimated on training transcripts to obtain a single bα per language (cf. Equation 10). bα = Avg w h1 −e−DFw · bPadapt(w) i (10) Using this average as a single interpolation weight for all terms gives near the best performance as we observed in our parameter sweep. Table 2 contrasts the results for using the three different interpolation heuristics on the Tagalog development queries. Using the mean bα instead of individual αw’s provides an additional 0.5% absolute 1322 Language bα ATWV (%±) P(Miss) (%±) Full LP setting Tagalog 0.20 0.523 (+1.1) 0.396 (-1.9) Cantonese 0.23 0.418 (+1.3) 0.458 (-1.9) Pashto 0.19 0.419 (+1.1) 0.453 (-1.6) Turkish 0.14 0.466 (+0.8) 0.430 (-1.3) Vietnamese 0.30 0.420 (+0.7) 0.445 (-1.0) English (Dev06) 0.20 0.670 (+0.3) 0.240 (-0.4) Limited LP setting Tagalog 0.22 0.228 (+0.9) 0.692 (-1.7) Cantonese 0.26 0.205 (+1.0) 0.684 (-1.3) Pashto 0.21 0.206 (+0.9) 0.682 (-0.9) Turkish 0.16 0.202 (+1.1) 0.700 (-0.8) Vietnamese 0.34 0.227 (+1.0) 0.646 (+0.4) Table 3: Word-repetition re-scored results for available CTS term detection corpora improvement, suggesting that we find additional gains boosting low-frequency words. 5 Results Now that we have tested word repetition-based re-scoring on a small Tagalog development set we want to know if our approach, and particularly our bα estimate is sufficiently robust to apply broadly. 
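Concretely, the estimates of Eqs. (9) and (10) can be sketched as follows. Adaptation is estimated from training transcripts as the fraction of documents containing w in which w occurs more than once. The per-word weights are collapsed into a single α-hat by a weighted average; since the weighting is not spelled out above, the document-frequency weighting used below is an assumption made only for illustration.

import math
from collections import Counter

def adaptation(documents):
    """Estimate P_adapt(w) = P(k > 1 | k > 0) and DF_w from tokenized documents."""
    df, df_repeat = Counter(), Counter()
    for doc in documents:
        for w, k in Counter(doc).items():
            df[w] += 1
            if k > 1:
                df_repeat[w] += 1
    return {w: df_repeat[w] / df[w] for w in df}, df

def interpolation_weights(documents):
    p_adapt, df = adaptation(documents)
    # Eq. (9): discount the adaptation estimate when DF_w is small and unreliable.
    alpha_w = {w: (1.0 - math.exp(-df[w])) * p_adapt[w] for w in df}
    # Eq. (10): a single alpha-hat per language (DF-weighted mean, assumed weighting).
    alpha_hat = sum(alpha_w[w] * df[w] for w in df) / sum(df.values())
    return alpha_w, alpha_hat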
At our disposal, we have the five BABEL languages — Tagalog, Cantonese, Pashto, Turkish and Vietnamese — as well as the development data from the NIST 2006 English evaluation. The BABEL evaluation query sets contain roughly 2000 terms each and the 2006 English query set contains roughly 1000 terms. The procedure we follow for each language condition is as follows. We first estimate adaptation probabilities from the ASR training transcripts. From these we take the weighted average as described previously to obtain a single interpolation weight bα for each training condition. We train ASR acoustic and language models from the training corpus using the Kaldi speech recognition toolkit (Povey et al., 2011) following the default BABEL training and search recipe which is described in detail by Chen et al. (2013). Lastly, we re-score the search output by interpolating the top term detection score for a document with subsequent hits according to Equation 6 using the bα estimated for this training condition. For each of the BABEL languages we consider both the FullLP (80 hours) and LimitedLP (10 hours) training conditions. For the English system, we also train a Kaldi system on the 240 hours of the Switchboard conversational English corpus. Although Kaldi can produce multiple types of acoustic models, for simplicity we report results using discriminatively trained Subspace Gaussian Mixture Model (SGMM) acoustic output densities, but we do find that similar results can be obtained with other acoustic model configurations. Using our final algorithm, we are able to boost repeated term detections and improve results in all languages and training conditions. Table 3 lists complete results and the associated estimates for bα. For the BABEL languages, we observe improvements in ATWV from 0.7% to 1.3% absolute and reductions in the miss rate of 0.8% to 1.9%. The only test for which P(Miss) did not improve was the Vietnamese Limited LP setting, although overall ATWV did improve, reflecting a lower P(FA). In all conditions we also obtain α estimates which correspond to our expectations for particular languages. For example, adaptation is lowest for the agglutinative Turkish language where longer word tokens should be less likely to repeat. For Vietnamese, with shorter, syllable length word tokens, we observe the lowest adaptation estimates. Lastly, the reductions in P(Miss) suggests that we are improving the term detection metric, which is sensitive to threshold changes, by doing what we set out to do, which is to boost lower confidence repeated words and correctly asserting them 1323 as true hits. Moreover, we are able to accomplish this in a wide variety of languages. 6 Conclusions Leveraging the burstiness of content words, we have developed a simple technique to consistently boost term detection performance across languages. Using word repetitions, we effectively use a broad document context outside of the typical 2-5 N-gram window. Furthermore, we see improvements across a broad spectrum of languages: languages with syllable-based word tokens (Vietnamese, Cantonese), complex morphology (Turkish), and dialect variability (Pashto). Secondly, our results are not only effective but also intuitive, given that the interpolation weight parameter matches our expectations for the burstiness of the word tokens in the language on which it is estimated. We have focused primarily on re-scoring results for the term detection task. 
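As a compact summary of the per-language procedure above, the sketch below strings together the hypothetical helpers from the earlier sketches (interpolation_weights and rescore). The Kaldi ASR training and search stages are outside its scope and are represented only by their output, a list of (term, document, confidence) hits.

def rescore_language(training_transcripts, kws_hits):
    """training_transcripts: tokenized training documents; kws_hits: scored detections."""
    # 1. Estimate a single interpolation weight from training data (Eq. 10);
    #    no per-query or development-set tuning is required.
    _, alpha_hat = interpolation_weights(training_transcripts)
    # 2. Boost repeated detections document by document (Eq. 6).
    return rescore(kws_hits, alpha=alpha_hat)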
Given the effectiveness of the technique across multiple languages, we hope to extend our effort to exploit our human tendency towards redundancy to decoding or other aspects of the spoken document processing pipeline. Acknowledgements This work was partially supported by the Intelligence Advanced Research Projects Activity (IARPA) via Department of Defense U.S. Army Research Laboratory (DoD / ARL) contract number W911NF-12-C-0015. The U.S. Government is authorized to reproduce and distribute reprints for Governmental purposes notwithstanding any copyright annotation thereon. Disclaimer: The views and conclusions contained herein are those of the authors and should not be interpreted as necessarily representing the official policies or endorsements, either expressed or implied, of IARPA, DoD/ARL, or the U.S. Government. Insightful discussions with Chiu and Rudnicky (2013) are also gratefully acknowledged. References David Blei, Andrew Ng, and Michael Jordan. 2003. Latent Dirichlet Allocation. Journal of Machine Learning Research, 3:993–1022. Guoguo Chen, Sanjeev Khudanpur, Daniel Povey, Jan Trmal, David Yarowsky, and Oguz Yilmaz. 2013. Quantifying the value of pronunciation lexicons for keyword search in low resource languages. In International Conference on Acoustics, Speech and Signal Processing (ICASSP). IEEE. Berlin Chen. 2009. Latent topic modelling of word co-occurence information for spoken document retrieval. In Proceedings of the International Conference on Acoustics, Speech and Signal Processing (ICASSP), pages 3961–3964. IEEE. Justin Chiu and Alexander Rudnicky. 2013. Using conversational word bursts in spoken term detection. In Proceedings of the 14th Annual Conference of the International Speech Communication Association, pages 2247–2251. ISCA. Kenneth Church and William Gale. 1995. Poisson Mixtures. Natural Language Engineering, 1(2):163–190. Kenneth Church and William Gale. 1999. Inverse Focument Frequency (IDF): A measure of deviations from Poisson. In Natural Language Processing Using Very Large Corpora, pages 283–295. Springer. Kenneth Church. 2000. Empirical estimates of adaptation: the chance of two Noriegas is closer to p/2 than p 2. In Proceedings of the 18th Conference on Computational Linguistics, volume 1, pages 180– 186. ACL. Radu Florian and David Yarowsky. 1999. Dynamic nonlocal language modeling via hierarchical topicbased adaptation. In Proceedings of the 37th annual meeting of the Association for Computational Linguistics, pages 167–174. ACL. Mary Harper. 2011. IARPA Solicitation IARPABAA-11-02. http://www.iarpa.gov/ solicitations_babel.html. Thomas Hofmann. 2001. Unsupervised learning by probabilistic latent semantic analysis. Machine Learning, 42(1):177–196. Bo-June Paul Hsu and James Glass. 2006. Style & topic language model adaptation using HMM-LDA. In Proceedings of the 2006 Conference on Empirical Methods in Natural Language Processing. ACL. Fred Jelinek. 1997. Statistical Methods for Speech Recognition. MIT Press. Slava Katz. 1996. Distribution of content words and phrases in text and language modelling. Natural Language Engineering, 2(1):15–59. Sanjeev Khudanpur and Jun Wu. 1999. A maximum entropy language model integrating n-grams and topic dependencies for conversational speech recognition. In Proceedings of the International Conference on Acoustics, Speech, and Signal Processing (ICASSP), volume 1, pages 553–556. IEEE. 1324 Reinhard Kneser and Volker Steinbiss. 1993. On the dynamic adaptation of stochastic language models. 
In Proceedings of the International Conference on Acoustics, Speech, and Signal Processing (ICASSP), volume 2, pages 586–589. IEEE. Roland Kuhn and Renato De Mori. 1990. A cachebased natural language model for speech recognition. Transactions on Pattern Analysis and Machine Intelligence, 12(6):570–583. Yang Liu and Feifan Liu. 2008. Unsupervised language model adaptation via topic modeling based on named entity hypotheses. In Proceedings of the International Conference on Acoustics, Speech and Signal Processing, (ICASSP), pages 4921–4924. IEEE. Welly Naptali, Masatoshi Tsuchiya, and Seiichi Nakagawa. 2012. Topic-dependent-class-based n-gram language model. Transactions on Audio, Speech, and Language Processing, 20(5):1513–1525. NIST. 2006. The Spoken Term Detection (STD) 2006 Evaluation Plan. http://www.itl. nist.gov/iad/mig/tests/std/2006/ docs/std06-evalplan-v10.pdf. [Online; accessed 28-Feb-2013]. Daniel Povey, Arnab Ghoshal, Gilles Boulianne, Lukas Burget, Ondrej Glembek, Nagendra Goel, Mirko Hannemann, Petr Motlicek, Yanmin Qian, Petr Schwarz, et al. 2011. The Kaldi speech recognition toolkit. In Proceedings of the Automatic Speech Recognition and Understanding Workshop (ASRU). Xing Wei and W Bruce Croft. 2006. LDA-based document models for ad-hoc retrieval. In Proceedings of the ACM SIGIR Conference on Research and Development in Information Retrieval, pages 178–185. ACM. 1325
Proceedings of the 52nd Annual Meeting of the Association for Computational Linguistics, pages 1326–1336, Baltimore, Maryland, USA, June 23-25 2014. c⃝2014 Association for Computational Linguistics Character-Level Chinese Dependency Parsing Meishan Zhang†, Yue Zhang‡ , Wanxiang Che†, Ting Liu†∗ †Research Center for Social Computing and Information Retrieval Harbin Institute of Technology, China {mszhang, car, tliu}@ir.hit.edu.cn ‡Singapore University of Technology and Design yue [email protected] Abstract Recent work on Chinese analysis has led to large-scale annotations of the internal structures of words, enabling characterlevel analysis of Chinese syntactic structures. In this paper, we investigate the problem of character-level Chinese dependency parsing, building dependency trees over characters. Character-level information can benefit downstream applications by offering flexible granularities for word segmentation while improving wordlevel dependency parsing accuracies. We present novel adaptations of two major shift-reduce dependency parsing algorithms to character-level parsing. Experimental results on the Chinese Treebank demonstrate improved performances over word-based parsing methods. 1 Introduction As a light-weight formalism offering syntactic information to downstream applications such as SMT, the dependency grammar has received increasing interest in the syntax parsing community (McDonald et al., 2005; Nivre and Nilsson, 2005; Carreras et al., 2006; Duan et al., 2007; Koo and Collins, 2010; Zhang and Clark, 2008; Nivre, 2008; Bohnet, 2010; Zhang and Nivre, 2011; Choi and McCallum, 2013). Chinese dependency trees were conventionally defined over words (Chang et al., 2009; Li et al., 2012), requiring word segmentation and POS-tagging as pre-processing steps. Recent work on Chinese analysis has embarked on investigating the syntactic roles of characters, leading to large-scale annotations of word internal structures (Li, 2011; Zhang et al., 2013). Such annotations enable dependency parsing on the character level, building dependency trees over Chinese characters. Figure 1(c) shows an example of ∗Corresponding author. 林业局 副局长 会 上 发言 forestry administration deputy director meeting in make a speech (a) a word-based dependency tree 林 业 局 副 局 长 会 上 发 言 woods industry office deputy office manager meeting in make speech (b) a character-level dependency tree by Zhao (2009) with real intra-word and pseudo inter-word dependencies 林 业 局 副 局 长 会 上 发 言 woods industry office deputy office manager meeting in make speech (c) a character-level dependency tree investigated in this paper with both real intra- and inter-word dependencies Figure 1: An example character-level dependency tree. “林业局副局长在大会上发言(The deputy director of forestry administration make a speech in the meeting)”. a character-level dependency tree, where the leaf nodes are Chinese characters. Character-level dependency parsing is interesting in at least two aspects. First, character-level trees circumvent the issue that no universal standard exists for Chinese word segmentation. In the well-known Chinese word segmentation bakeoff tasks, for example, different segmentation standards have been used by different data sets (Emerson, 2005). On the other hand, most disagreement on segmentation standards boils down to disagreement on segmentation granularity. 
As demonstrated by Zhao (2009), one can extract both finegrained and coarse-grained words from characterlevel dependency trees, and hence can adapt to flexible segmentation standards using this formalism. In Figure 1(c), for example, “副局长(deputy 1326 director)” can be segmented as both “副(deputy) | 局长(director)” and “副局长(deputy director)”, but not “副(deputy) 局(office) | 长(manager)”, by dependency coherence. Chinese language processing tasks, such as machine translation, can benefit from flexible segmentation standards (Zhang et al., 2008; Chang et al., 2008). Second, word internal structures can also be useful for syntactic parsing. Zhang et al. (2013) have shown the usefulness of word structures in Chinese constituent parsing. Their results on the Chinese Treebank (CTB) showed that characterlevel constituent parsing can bring increased performances even with the pseudo word structures. They further showed that better performances can be achieved when manually annotated word structures are used instead of pseudo structures. In this paper, we make an investigation of character-level Chinese dependency parsing using Zhang et al. (2013)’s annotations and based on a transition-based parsing framework (Zhang and Clark, 2011). There are two dominant transitionbased dependency parsing systems, namely the arc-standard and the arc-eager parsers (Nivre, 2008). We study both algorithms for characterlevel dependency parsing in order to make a comprehensive investigation. For direct comparison with word-based parsers, we incorporate the traditional word segmentation, POS-tagging and dependency parsing stages in our joint parsing models. We make changes to the original transition systems, and arrive at two novel transition-based character-level parsers. We conduct experiments on three data sets, including CTB 5.0, CTB 6.0 and CTB 7.0. Experimental results show that the character-level dependency parsing models outperform the wordbased methods on all the data sets. Moreover, manually annotated intra-word dependencies can give improved word-level dependency accuracies than pseudo intra-word dependencies. These results confirm the usefulness of character-level syntax for Chinese analysis. The source codes are freely available at http://sourceforge. net/projects/zpar/, version 0.7. 2 Character-Level Dependency Tree Character-level dependencies were first proposed by Zhao (2009). They show that by annotating character dependencies within words, one can adapt to different segmentation standards. The dependencies they study are restricted to intraword characters, as illustrated in Figure 1(b). For inter-word dependencies, they use a pseudo rightheaded representation. In this study, we integrate inter-word syntactic dependencies and intra-word dependencies using large-scale annotations of word internal structures by Zhang et al. (2013), and study their interactions. We extract unlabeled dependencies from bracketed word structures according to Zhang et al.’s head annotations. In Figure 1(c), the dependencies shown by dashed arcs are intra-word dependencies, which reflect the internal word structures, while the dependencies with solid arcs are inter-word dependencies, which reflect the syntactic structures between words. In this formulation, a character-level dependency tree satisfies the same constraints as the traditional word-based dependency tree for Chinese, including projectivity. 
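To make the representation concrete, the sketch below uses one possible encoding of a character-level dependency tree, not the authors' data format: each character carries a head index and an arc type, "intra" for word-internal arcs and "inter" for syntactic arcs between words. Grouping the characters connected by intra-word arcs recovers the word segmentation, illustrating how words can be read off projective subtrees of the tree.

def words_from_char_tree(chars, heads, arc_types):
    """chars: list of characters; heads: head index per character (-1 for the root);
    arc_types: 'intra' or 'inter' per character."""
    # Union characters linked by intra-word arcs into the same word cluster.
    parent = list(range(len(chars)))
    def find(i):
        while parent[i] != i:
            parent[i] = parent[parent[i]]
            i = parent[i]
        return i
    for i, (h, t) in enumerate(zip(heads, arc_types)):
        if t == "intra" and h >= 0:
            parent[find(i)] = find(h)
    # Word-internal arcs are projective, so each cluster is a contiguous span.
    words, current = [], [chars[0]]
    for i in range(1, len(chars)):
        if find(i) == find(i - 1):
            current.append(chars[i])
        else:
            words.append("".join(current))
            current = [chars[i]]
    words.append("".join(current))
    return words

# "副局长" with 副 and 局 both attaching to the head character 长 by intra-word arcs;
# 长 would attach to a following word by an inter-word arc, marked -1 here only to
# keep the toy example self-contained.
print(words_from_char_tree(list("副局长"), [2, 2, -1], ["intra", "intra", "inter"]))
# ['副局长']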
We differentiate intraword dependencies and inter-word dependencies by the arc type, so that our work can be compared with conventional word segmentation, POStagging and dependency parsing pipelines under a canonical segmentation standard. The character-level dependency trees hold to a specific word segmentation standard, but are not limited to it. We can extract finer-grained words of different granulities from a coarse-grained word by taking projective subtrees of different sizes. For example, taking all the intra-word modifier nodes of “长(manager)” in Figure 1(c) results in the word “副局长(deputy director)”, while taking the first modifier node of “长(manager)” results in the word “局长(director)”. Note that “副局(deputy office)” cannot be a word because it does not form a projective span without “长(manager)”. Inner-word dependencies can also bring benefits to parsing word-level dependencies. The head character can be a less sparse feature compared to a word. As intra-word dependencies lead to fine-grained subwords, we can also use these subwords for better parsing. In this work, we use the innermost left/right subwords as atomic features. To extract the subwords, we find the innermost left/right modifiers of the head character, respectively, and then conjoin them with all their descendant characters to form the smallest left/right subwords. Figure 2 shows an example, where the smallest left subword of “大法官(chief lawyer)” is “法官(lawyer)”, and the smallest right subword 1327 大 法 官 big law officer (a) smallest left subword 合 法 化 agree with law ize (b) smallest right subword Figure 2: An example to illustrate the innermost left/right subwords. of “合法化(legalize)” is “合法(legal)”. 3 Character-Level Dependency Parsing A transition-based framework with global learning and beam search decoding (Zhang and Clark, 2011) has been applied to a number of natural language processing tasks, including word segmentation, POS-tagging and syntactic parsing (Zhang and Clark, 2010; Huang and Sagae, 2010; Bohnet and Nivre, 2012; Zhang et al., 2013). It models a task incrementally from a start state to an end state, where each intermediate state during decoding can be regarded as a partial output. A number of actions are defined so that the state advances step by step. To learn the model parameters, it usually uses the online perceptron algorithm with early-update under the inexact decoding condition (Collins, 2002; Collins and Roark, 2004). Transition-based dependency parsing can be modeled under this framework, where the state consists of a stack and a queue, and the set of actions can be either the arc-eager (Zhang and Clark, 2008) or the arc-standard (Huang et al., 2009) transition systems. When the internal structures of words are annotated, character-level dependency parsing can be treated as a special case of word-level dependency parsing, with “words” being “characters”. A big weakness of this approach is that full words and POS-tags cannot be used for feature engineering. Both are crucial to well-established features for word segmentation, POS-tagging and syntactic parsing. In this section, we introduce novel extensions to the arc-standard and the arc-eager transition systems, so that word-based and characterbased features can be used simultaneously for character-level dependency parsing. 3.1 The Arc-Standard Model The arc-standard model has been applied to joint segmentation, POS-tagging and dependency parsing (Hatori et al., 2012), but with pseudo word structures. 
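Returning to the smallest left/right subwords introduced above (Figure 2), the sketch below gives one reading of their extraction: take the head character together with its closest left (or right) modifier and everything in that modifier's subtree. This reading reproduces the Figure 2 examples, but it is an interpretation of the description rather than the authors' code, and the toy word structures passed in are chosen only to match the figure.

def smallest_subword(chars, heads, side):
    """chars: characters of one word; heads: intra-word head index per character
    (-1 marks the head character); side: 'left' or 'right'."""
    root = heads.index(-1)
    mods = [i for i, h in enumerate(heads) if h == root]
    mods = [i for i in mods if (i < root if side == "left" else i > root)]
    if not mods:
        return None
    inner = max(mods) if side == "left" else min(mods)   # modifier closest to the head
    keep = {root}                                        # keep the head character itself
    stack = [inner]                                      # plus the modifier's subtree
    while stack:
        i = stack.pop()
        keep.add(i)
        stack.extend(j for j, h in enumerate(heads) if h == i)
    return "".join(c for i, c in enumerate(chars) if i in keep)

print(smallest_subword(list("大法官"), [2, 2, -1], "left"))    # 法官
print(smallest_subword(list("合法化"), [-1, 0, 0], "right"))   # 合法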
For unified processing of annotated word structures and fair comparison between character-level arc-eager and arc-standard systems, we define a different arc-standard transition system, consistent with our character-level arceager system. In the word-based arc-standard model, the transition state includes a stack and a queue, where the stack contains a sequence of partially-parsed dependency trees, and the queue consists of unprocessed input words. Four actions are defined for state transition, including arc-left (AL, which creates a left arc between the top element s0 and the second top element s1 on the stack), arc-right (AR, which creates a right arc between s0 and s1), pop-root (PR, which defines the root node of a dependency tree when there is only one element on the stack and no element in the queue), and the last shift (SH, which shifts the first element q0 of the queue onto the stack). For character-level dependency parsing, there are two types of dependencies: inter-word dependencies and intra-word dependencies. To parse them with both character and word features, we extend the original transition actions into two categories, for inter-word dependencies and intraword dependencies, respectively. The actions for inter-word dependencies include inter-word arcleft (ALw), inter-word arc-right (ARw), pop-root (PR) and inter-word shift (SHw). Their definitions are the same as the word-based model, with one exception that the inter-word shift operation has a parameter denoting the POS-tag of the incoming word, so that POS disambiguation is performed by the SHw action. The actions for intra-word dependencies include intra-word arc-left (ALc), intra-word arcright (ARc), pop-word (PW) and inter-word shift (SHc). The definitions of ALc, ARc and SHc are the same as the word-based arc-standard model, while PW changes the top element on the stack into a full-word node, which can only take interword dependencies. One thing to note is that, due to variable word sizes in character-level parsing, the number of actions can vary between different sequences of actions corresponding to different analyses. We use the padding method (Zhu et al., 2013), adding an IDLE action to finished transition action sequences, for better alignments between states in the beam. In the character-level arc-standard transition 1328 step action stack queue dependencies 0 φ 林业· · · φ 1 SHw(NR) 林/NR 业局· · · φ 2 SHc 林/NR 业/NR 局副· · · φ 3 ALc 业/NR 局副· · · A1 = {林 ↶业} 4 SHc 业/NR 局/NR 副局· · · A1 5 ALc 局/NR 副局· · · A2 = A1 S{业↶局} 6 PW 林业局/NR 副局· · · A2 7 SHw(NN) 林业局/NR 副/NN 局长· · · A2 · · · · · · · · · · · · · · · 12 PW 林业局/NR 副局长/NN 会上· · · Ai 13 ALw 副局长/NN 会上· · · Ai+1 = Ai S{林业局/NR ↶副局长/NN} · · · · · · · · · · · · · · · (a) character-level dependency parsing using the arc-standard algorithm step action stack deque queue dependencies 0 φ 林业· · · 1 SHc(NR) φ 林/NR 业局· · · φ 2 ALc φ φ 业/NR 局· · · A1 = {林 ↶业} 3 SHc φ 业/NR 局副· · · A1 4 ALc φ φ 局/NR 副· · · A2 = A1 S{业↶局} 5 SHc φ 局/NR 副局· · · A2 6 PW φ 林业局/NR 副局· · · A2 7 SHw 林业局/NR φ 副局· · · A2 · · · · · · · · · · · · · · · · · · 13 PW 林业局/NR 副局长/NN 会上· · · Ai 14 ALw φ 副局长/NN 会上· · · Ai+1 = Ai S{林业局/NR ↶副局长/NN} · · · · · · · · · · · · · · · · · · (b) character-level dependency parsing using the arc-eager algorithm, t = 1 Figure 3: Character-level dependency parsing of the sentence in Figure 1(c). system, each word is initialized by the action SHw with a POS tag, before being incrementally modified by a sequence of intra-word actions, and finally being completed by the action PW. 
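A simplified sketch of this transition system follows. A stack cell records the yield, POS tag and full/partial status of the subtree it heads (so the figure's display of only the head character is flattened into the accumulated string), and the queue holds the remaining characters. Only the effects of the actions are modeled: scoring, beam search and the IDLE padding action are omitted, and the validity conditions spelled out in the following paragraph are not enforced here.

class Node:
    def __init__(self, form, pos, full=False):
        self.form, self.pos, self.full = form, pos, full

def step(stack, queue, arcs, action, pos=None):
    if action == "SHw":                       # start a new word, choosing its POS tag
        stack.append(Node(queue.pop(0), pos))
    elif action == "SHc":                     # shift the next character of the same word
        stack.append(Node(queue.pop(0), stack[-1].pos))
    elif action in ("ALc", "ALw"):            # left arc: s1 becomes a dependent of s0
        s1 = stack.pop(-2)
        arcs.add((s1.form, stack[-1].form, "intra" if action == "ALc" else "inter"))
        if action == "ALc":                   # grow the yield of the partial word
            stack[-1].form = s1.form + stack[-1].form
    elif action in ("ARc", "ARw"):            # right arc: s0 becomes a dependent of s1
        s0 = stack.pop()
        arcs.add((s0.form, stack[-1].form, "intra" if action == "ARc" else "inter"))
        if action == "ARc":
            stack[-1].form = stack[-1].form + s0.form
    elif action == "PW":                      # finish the word on top of the stack
        stack[-1].full = True
    elif action == "PR":                      # pop the single remaining root
        arcs.add((stack.pop().form, "ROOT", "inter"))
    return stack, queue, arcs

# Replaying the first six steps of Figure 3(a):
stack, queue, arcs = [], list("林业局副局长会上发言"), set()
for act, pos in [("SHw", "NR"), ("SHc", None), ("ALc", None),
                 ("SHc", None), ("ALc", None), ("PW", None)]:
    step(stack, queue, arcs, act, pos)
print(stack[-1].form, stack[-1].pos, stack[-1].full)   # 林业局 NR True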
The interword actions can be applied when all the elements on the stack are full-word nodes, while the intraword actions can be applied when at least the top element on the stack is a partial-word node. For the actions ALc and ARc to be valid, the top two elements on the stack are both partial-word nodes. For the action PW to be valid, only the top element on the stack is a partial-word node. Figure 3(a) gives an example action sequence. There are three types of features. The first two types are traditionally established features for the dependency parsing and joint word segmentation and POS-tagging tasks. We use the features proposed by Hatori et al. (2012). The word-level dependency parsing features are added when the inter-word actions are applied, and the features for joint word segmentation and POS-tagging are added when the actions PW, SHw and SHc are applied. Following the work of Hatori et al. (2012), we have a parameter α to adjust the weights for joint word segmentation and POS-tagging features. We apply word-based dependency parsing features to intra-word dependency parsing as well, by using subwords (the conjunction of characters spanning the head node) to replace words in word features. The third type of features is wordstructure features. We extract the head character and the smallest subwords containing the head character from the intra-word dependencies (Section 2). Table 1 summarizes the features. 3.2 The Arc-Eager Model Similar to the arc-standard case, the state of a word-based arc-eager model consists of a stack and a queue, where the stack contains a sequence of partial dependency trees, and the queue consists of unprocessed input words. Unlike the arcstandard model, which builds dependencies on the top two elements on the stack, the arc-eager model builds dependencies between the top element of the stack and the first element of the queue. Five actions are defined for state transformation: arcleft (AL, which creates a left arc between the top element of the stack s0 and the first element in the queue q0, while popping s0 off the stack), arc-right (AR, which creates a right arc between 1329 Feature templates Lc, Lct, Rc, Rct, Llc1c, Lrc1c, Rlc1c, Lc · Rc, Llc1ct, Lrc1ct, Rlc1ct, Lc · Rw, Lw · Rc, Lct · Rw, Lwt · Rc, Lw · Rct, Lc · Rwt, Lc · Rc · Llc1c, Lc · Rc · Lrc1c, Lc · Rc · Llc2c, Lc · Rc · Lrc2c, Lc · Rc · Rlc1c, Lc · Rc · Rlc2c, Llsw, Lrsw, Rlsw, Rrsw, Llswt, Lrswt, Rlswt, Rrswt, Llsw · Rw, Lrsw · Rw, Lw · Rlsw, Lw · Rrsw Table 1: Feature templates encoding intra-word dependencies. L and R denote the two elements over which the dependencies are built; the subscripts lc1 and rc1 denote the left-most and rightmost children, respectively; the subscripts lc2 and rc2 denote the second left-most and second rightmost children, respectively; w denotes the word; t denotes the POS tag; c denotes the head character; lsw and rsw denote the smallest left and right subwords respectively, as shown in Figure 2. s0 and q0, while shifting q0 from the queue onto the stack), pop-root (PR, which defines the ROOT node of the dependency tree when there is only one element on the stack and no element in the queue), reduce (RD, which pops s0 off the stack), and shift (SH, which shifts q0 onto the stack). There is no previous work that exploits the arc-eager algorithm for jointly performing POStagging and dependency parsing. 
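For contrast with the arc-standard sketch above, the following is a minimal step function for the word-based arc-eager system using the five actions just described. The preconditions enforced by the assertions (the stack top must not already have a head before AL and must have one before RD) follow the standard arc-eager formulation of Nivre (2008) rather than anything stated explicitly in this section.

def arc_eager_step(stack, queue, arcs, action):
    """stack, queue: lists of word identifiers; arcs: set of (dependent, head) pairs."""
    has_head = {d for d, h in arcs}
    if action == "AL":                   # s0 becomes a dependent of q0; pop s0
        assert stack and queue and stack[-1] not in has_head
        arcs.add((stack.pop(), queue[0]))
    elif action == "AR":                 # q0 becomes a dependent of s0; shift q0
        assert stack and queue
        arcs.add((queue[0], stack[-1]))
        stack.append(queue.pop(0))
    elif action == "RD":                 # pop s0 once it has found its head
        assert stack and stack[-1] in has_head
        stack.pop()
    elif action == "SH":                 # shift q0 onto the stack
        stack.append(queue.pop(0))
    elif action == "PR":                 # attach the last remaining word to the root
        assert len(stack) == 1 and not queue
        arcs.add((stack.pop(), "ROOT"))
    return stack, queue, arcs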
Since the first element of the queue can be shifted onto the stack by either SH or AR, it is more difficult to assign a POS tag to each word by using a single action. In this work, we make a change to the configuration state, adding a deque between the stack and the queue to save partial words with intra-word dependencies. We divide the transition actions into two categories, one for inter-word dependencies (ARw, ALw, SHw, RDw and PR) and the other for intra-word dependencies (ARc, ALc, SHc, RDc and PW), requiring that the intra-word actions be operated between the deque and the queue, while the inter-word actions be operated between the stack and the deque. For character-level arc-eager dependency parsing, the inter-word actions are the same as the word-based methods. The actions ALc and ARc are the same as ALw and ARw, except that they operate on characters, but the SHc operation has a parameter to denote the POS tag of a word. The PW action recognizes a full-word. We also have an IDLE action, for the same reason as the arcstandard model. In the character-level arc-eager transition system, a word is formed in a similar way with that of character-level arc-standard algorithm. Each word is initialized by the action SHc with a POS tag, and then incrementally changed a sequence of intra-word actions, before being finalized by the action PW. All these actions operate between the queue and deque. For the action PW, only the first element in the deque (close to the queue) is a partial-word node. For the actions ARc and ALc to be valid, the first element in the deque must be a partial-word node. The action SHc have a POS tag when shifting the first character of a word,but does not have such a parameter when shifting the next characters of a word. For the action SHc with a POS tag to be valid, the first element in the deque must be a full-word node. Different from the arcstandard model, at any stage we can choose either the action SHc with a POS tag to initialize a new word on the deque, or the inter-word actions on the stack. In order to eliminate the ambiguity, we define a new parameter t to limit the max size of the deque. If the deque is full with t words, interword actions are performed; otherwise intra-word actions are performed. All the inter-word actions must be applied on full-word nodes between the stack an the deque. Figure 3(b) gives an example action sequence. Similar to the arc-standard case, there are three types of features, with the first two types being traditionally established features for dependency parsing and joint word segmentation and POStagging. The dependency parsing features are taken from the work of Zhang and Nivre (2011), and the features for joint word segmentation and POS-tagging are taken from Zhang and Clark (2010)1. The word-level dependency parsing features are triggered when the inter-word actions are applied, while the features of joint word segmentation and POS-tagging are added when the actions SHc, ARc and PW are applied. Again we use a parameter α to adjust the weights for joint word segmentation and POS-tagging features. The wordlevel features for dependency parsing are applied to intra-word dependency parsing as well, by using subwords to replace words. The third type of features is word-structure features, which are the 1Since Hatori et al. (2012) also use Zhang and Clark (2010)’s features, the arc-standard and arc-eager characterlevel dependency parsing models have the same features for joint word segmentation and POS-tagging. 
1330 CTB50 CTB60 CTB70 Training #sent 18k 23k 31k #word 494k 641k 718k Development #sent 350 2.1k 10k #word 6.8k 60k 237k #oov 553 3.3k 13k Test #sent 348 2.8k 10k #word 8.0k 82k 245k #oov 278 4.6k 13k Table 2: Statistics of datasets. same as those of the character-level arc-standard model, shown in Table 1. 4 Experiments 4.1 Experimental Settings We use the Chinese Penn Treebank 5.0, 6.0 and 7.0 to conduct the experiments, splitting the corpora into training, development and test sets according to previous work. Three different splitting methods are used, namely CTB50 by Zhang and Clark (2010), CTB60 by the official documentation of CTB 6.0, and CTB70 by Wang et al. (2011). The dataset statistics are shown in Table 2. We use the head rules of Zhang and Clark (2008) to convert phrase structures into dependency structures. The intra-word dependencies are extracted from the annotations of Zhang et al. (2013)2. The standard measures of word-level precision, recall and F1 score are used to evaluate word segmentation, POS-tagging and dependency parsing, following Hatori et al. (2012). In addition, we use the same measures to evaluate intra-word dependencies, which indicate the performance of predicting word structures. A word’s structure is correct only if all the intra-word dependencies are all correctly recognized. 4.2 Baseline and Proposed Models For the baseline, we have two different pipeline models. The first consists of a joint segmentation and POS-tagging model (Zhang and Clark, 2010) and a word-based dependency parsing model using the arc-standard algorithm (Huang et al., 2009). We name this model STD (pipe). The second consists of the same joint segmentation and POS-tagging model and a word-based dependency parsing model using the arc-eager algorithm 2https://github.com/zhangmeishan/ wordstructures; their annotation was conducted on CTB 5.0, while we made annotations of the remainder of the CTB 7.0 words. We also make the annotations publicly available at the same site. (Zhang and Nivre, 2011). We name this model EAG (pipe). For the pipeline models, we use a beam of size 16 for joint segmentation and POStagging, and a beam of size 64 for dependency parsing, according to previous work. We study the following character-level dependency parsing models: • STD (real, pseudo): the arc-standard model with annotated intra-word dependencies and pseudo inter-word dependencies; • STD (pseudo, real): the arc-standard model with pseudo intra-word dependencies and real inter-word dependencies; • STD (real, real): the arc-standard model with annotated intra-word dependencies and real inter-word dependencies; • EAG (real, pseudo): the arc-eager model with annotated intra-word dependencies and pseudo inter-word dependencies; • EAG (pseudo, real): the arc-eager model with pseudo intra-word dependencies and real inter-word dependencies; • EAG (real, real): the arc-eager model with annotated intra-word dependencies and real inter-word dependencies. The annotated intra-word dependencies refer to the dependencies extracted from annotated word structures, while the pseudo intra-word dependencies used in the above models are similar to those of Hatori et al. (2012). For a given word w = c1c2 · · · cm, the intra-word dependency structure is c↶ 1 c↶ 2 · · ·↶cm3. The real interword dependencies refer to the syntactic wordlevel dependencies by head-finding rules from CTB, while the pseudo inter-word dependencies refer to the word-level dependencies used by Zhao (2009) (w↶ 1 w↶ 2 · · ·↶wn). 
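Read as in Figure 3, where x↶y marks x as a dependent of y, these pseudo structures are simple right-headed chains; the short sketch below makes that reading explicit, and is an illustration of one reading of the notation rather than the exact generation code.

def pseudo_intra_word(word):
    """Pseudo word structure: each character depends on the character to its right,
    so the final character heads the word (c1 <- c2 <- ... <- cm)."""
    return [(word[i], word[i + 1]) for i in range(len(word) - 1)]

def pseudo_inter_word(words):
    """Pseudo syntax: each word depends on the following word (w1 <- w2 <- ... <- wn)."""
    return [(words[i], words[i + 1]) for i in range(len(words) - 1)]

print(pseudo_intra_word("副局长"))                    # [('副', '局'), ('局', '长')]
print(pseudo_inter_word(["林业局", "副局长", "会", "上", "发言"]))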
The character-level models with annotated intra-word dependencies and pseudo inter-word dependencies are compared with the pipelines on word segmentation and POStagging accuracies, and are compared with the character-level models with annotated intra-word dependencies and real inter-word dependencies on word segmentation, POS-tagging and wordstructure predicating accuracies. All the proposed 3We also tried similar structures with right arcs, which gave lower accuracies. 1331 STD (real, real) SEG POS DEP WS α = 1 95.85 91.60 76.96 95.14 α = 2 96.09 91.89 77.28 95.29 α = 3 96.02 91.84 77.22 95.23 α = 4 96.10 91.96 77.49 95.29 α = 5 96.07 91.90 77.31 95.21 Table 3: Development test results of the characterlevel arc-standard model on CTB60. EAG (real, real) SEG POS DEP WS α = 1 t = 1 96.00 91.66 74.63 95.49 t = 2 95.93 91.75 76.60 95.37 t = 3 95.93 91.74 76.94 95.36 t = 4 95.91 91.71 76.82 95.33 t = 5 95.95 91.73 76.84 95.40 t = 3 α = 1 95.93 91.74 76.94 95.36 α = 2 96.11 91.99 77.17 95.56 α = 3 96.16 92.01 77.48 95.62 α = 4 96.11 91.93 77.40 95.53 α = 5 96.00 91.84 77.10 95.43 Table 4: Development test results of the characterlevel arc-eager model on CTB60. models use a beam of size 64 after considering both speeds and accuracies. 4.3 Development Results Our development tests are designed for two purposes: adjusting the parameters for the two proposed character-level models and testing the effectiveness of the novel word-structure features. Tuning is conducted by maximizing word-level dependency accuracies. All the tests are conducted on the CTB60 data set. 4.3.1 Parameter Tuning For the arc-standard model, there is only one parameter α that needs tuning. It adjusts the weights of segmentation and POS-tagging features, because the number of feature templates is much less for the two tasks than for parsing. We set the value of α to 1 · · · 5, respectively. Table 3 shows the accuracies on the CTB60 development set. According to the results, we use α = 4 for our final character-level arc-standard model. For the arc-eager model, there are two parameters t and α. t denotes the deque size of the arceager model, while α shares the same meaning as the arc-standard model. We take two steps for parameter tuning, first adjusting the more crucial parameter t and then adjusting α on the best t. Both parameters are assigned the values of 1 to 5. TaSEG POS DEP WS STD (real, real) 96.10 91.96 77.49 95.29 STD (real, real)/wo 95.99 91.79 77.19 95.35 ∆ -0.11 -0.17 -0.30 +0.06 EAG (real, real) 96.16 92.01 77.48 95.62 EAG (real, real)/wo 96.09 91.82 77.12 95.56 ∆ -0.07 -0.19 -0.36 -0.06 Table 5: Feature ablation tests for the novel wordstructure features, where “/wo” denotes the corresponding models without the novel intra-word dependency features. ble 4 shows the results. According to results, we set t = 3 and α = 3 for the final character-level arc-eager model, respectively. 4.3.2 Effectiveness of Word-Structure Features To test the effectiveness of our novel wordstructure features, we conduct feature ablation experiments on the CTB60 development data set for the proposed arc-standard and arc-eager models, respectively. Table 5 shows the results. We can see that both the two models achieve better accuracies on word-level dependencies with the novel word-structure features, while the features do not affect word-structure predication significantly. 4.4 Final Results Table 6 shows the final results on the CTB50, CTB60 and CTB70 data sets, respectively. 
The results demonstrate that the character-level dependency parsing models are significantly better than the corresponding word-based pipeline models, for both the arc-standard and arc-eager systems. Similar to the findings of Zhang et al. (2013), we find that the annotated word structures can give better accuracies than pseudo word structures. Another interesting finding is that, although the arceager algorithm achieves lower accuracies in the word-based pipeline models, it obtains comparative accuracies in the character-level models. We also compare our results to those of Hatori et al. (2012), which is comparable to STD (pseudo, real) since similar arc-standard algorithms and features are used. The major difference is the set of transition actions. We rerun their system on the three datasets4. As shown in Table 6, our arc-standard system with pseudo word structures 4http://triplet.cc/. We use a different constituent-to-dependency conversion scheme in comparison with Hatori et al. (2012)’s work. 1332 Model CTB50 CTB60 CTB70 SEG POS DEP WS SEG POS DEP WS SEG POS DEP WS The arc-standard models STD (pipe) 97.53 93.28 79.72 – 95.32 90.65 75.35 – 95.23 89.92 73.93 – STD (real, pseudo) 97.78 93.74 – 97.40 95.77‡ 91.24‡ – 95.08 95.59‡ 90.49‡ – 94.97 STD (pseudo, real) 97.67 94.28‡ 81.63‡ – 95.63‡ 91.40‡ 76.75‡ – 95.53‡ 90.75‡ 75.63‡ – STD (real, real) 97.84 94.62‡ 82.14‡ 97.30 95.56‡ 91.39‡ 77.09‡ 94.80 95.51‡ 90.76‡ 75.70‡ 94.78 Hatori+ ’12 97.75 94.33 81.56 – 95.26 91.06 75.93 – 95.27 90.53 74.73 – The arc-eager models EAG (pipe) 97.53 93.28 79.59 – 95.32 90.65 74.98 – 95.23 89.92 73.46 – EAG (real, pseudo) 97.75 93.88 – 97.45 95.63‡ 91.07‡ – 95.06 95.50‡ 90.36‡ – 95.00 EAG (pseudo, real) 97.76 94.36‡ 81.70‡ – 95.63‡ 91.34‡ 76.87‡ – 95.39‡ 90.56‡ 75.56‡ – EAG (real, real) 97.84 94.36‡ 82.07‡ 97.49 95.71‡ 91.51‡ 76.99‡ 95.16 95.47‡ 90.72‡ 75.76‡ 94.94 Table 6: Main results, where the results marked with ‡ denote that the p-value is less than 0.001 compared with the pipeline word-based models using pairwise t-test. brings consistent better accuracies than their work on all the three data sets. Both the pipelines and character-level models with pseudo inter-word dependencies perform word segmentation and POS-tagging jointly, without using real word-level syntactic information. A comparison between them (STD/EAG (pipe) vs. STD/EAG (real, pseudo)) reflects the effectiveness of annotated intra-word dependencies on segmentation and POS-tagging. We can see that both the arc-standard and arc-eager models with annotated intra-word dependencies can improve the segmentation accuracies by 0.3% and the POS-tagging accuracies by 0.5% on average on the three datasets. Similarly, a comparison between the characterlevel models with pseudo inter-word dependencies and the character-level models with real interword dependencies (STD/EAG (real, pseudo) vs. STD/EAG (real, real)) can reflect the effectiveness of annotated inter-word structures on morphology analysis. We can see that improved POS-tagging accuracies are achieved using the real inter-word dependencies when jointly performing inner- and inter-word dependencies. However, we find that the inter-word dependencies do not help the wordstructure accuracies. 4.5 Analysis To better understand the character-level parsing models, we conduct error analysis in this section. All the experiments are conducted on the CTB60 test data sets. The new advantage of the characterlevel models is that one can parse the internal word structures of intra-word dependencies. 
Thus we are interested in their capabilities of predicting word structures. We study the word-structure accuracies in two aspects, including OOV, word length, POS tags and the parsing model. 4.5.1 OOV The word-structure accuracy of OOV words reflects a model’s ability of handling unknown words. The overall recalls of OOV word structures are 67.98% by STD (real, real) and 69.01% by EAG (real, real), respectively. We find that most errors are caused by failures of word segmentation. We further investigate the accuracies when words are correctly segmented, where the accuracies of OOV word structures are 87.64% by STD (real, real) and 89.07% by EAG (real, real). The results demonstrate that the structures of Chinese words are not difficult to predict, and confirm the fact that Chinese word structures have some common syntactic patterns. 4.5.2 Parsing Model From the above analysis in terms of OOV, word lengths and POS tags, we can see that the EAG (real, real) model and the STD (real, real) models behave similarly on word-structure accuracies. Here we study the two models more carefully, comparing their word accuracies sentence by sentence. Figure 4 shows the results, where each point denotes a sentential comparison between STD (real, real) and EAG (real, real), the x-axis denotes the sentential word-structure accuracy of STD (real, real), and the y-axis denotes that of EAG (real, real). The points at the diagonal show the same accuracies by the two models, while others show that the two models perform differently on the corresponding sentences. We can see that most points are beyond the diagonal line, indicat1333 0.6 0.7 0.8 0.9 1 0.6 0.7 0.8 0.9 1 STD (real, real) EAG (real, real) Figure 4: Sentential word-structure accuracies of STD (real, real) and EAG (real, real). ing that the two parsing models can be complementary in parsing intra-word dependencies. 5 Related Work Zhao (2009) was the first to study character-level dependencies; they argue that since no consistent word boundaries exist over Chinese word segmentation, dependency-based representations of word structures serve as a good alternative for Chinese word segmentation. Thus their main concern is to parse intra-word dependencies. In this work, we extend their formulation, making use of largescale annotations of Zhang et al. (2013), so that the syntactic word-level dependencies can be parsed together with intra-word dependencies. Hatori et al. (2012) proposed a joint model for Chinese word segmentation, POS-tagging and dependency parsing, studying the influence of joint model and character features for parsing, Their model is extended from the arc-standard transition-based model, and can be regarded as an alternative to the arc-standard model of our work when pseudo intra-word dependencies are used. Similar work is done by Li and Zhou (2012). Our proposed arc-standard model is more concise while obtaining better performance than Hatori et al. (2012)’s work. With respect to word structures, real intra-word dependencies are often more complicated, while pseudo word structures cannot be used to correctly guide segmentation. Zhao (2009), Hatori et al. (2012) and our work all study character-level dependency parsing. While Zhao (2009) focus on word internal structures using pseudo inter-word dependencies, Hatori et al. (2012) investigate a joint model using pseudo intra-word dependencies. We use manual dependencies for both inner- and inter-word structures, studying their influences on each other. Zhang et al. 
(2013) was the first to perform Chinese syntactic parsing over characters. They extended word-level constituent trees by annotated word structures, and proposed a transition-based approach to parse intra-word structures and wordlevel constituent structures jointly. For Hebrew, Tsarfaty and Goldberg (2008) investigated joint segmentation and parsing over characters using a graph-based method. Our work is similar in exploiting character-level syntax. We study the dependency grammar, another popular syntactic representation, and propose two novel transition systems for character-level dependency parsing. Nivre (2008) gave a systematic description of the arc-standard and arc-eager algorithms, currently two popular transition-based parsing methods for word-level dependency parsing. We extend both algorithms to character-level joint word segmentation, POS-tagging and dependency parsing. To our knowledge, we are the first to apply the arceager system to joint models and achieve comparative performances to the arc-standard model. 6 Conclusions We studied the character-level Chinese dependency parsing, by making novel extensions to two commonly-used transition-based dependency parsing algorithms for word-based dependency parsing. With both pseudo and annotated word structures, our character-level models obtained better accuracies than previous work on segmentation, POS-tagging and word-level dependency parsing. We further analyzed some important factors for intra-word dependencies, and found that two proposed character-level parsing models are complementary in parsing intraword dependencies. We make the source code publicly available at http://sourceforge. net/projects/zpar/, version 0.7. Acknowledgments We thank the anonymous reviewers for their constructive comments, and gratefully acknowledge the support of the National Basic Research Program (973 Program) of China via Grant 2014CB340503, the National Natural Science Foundation of China (NSFC) via Grant 61133012 and 61370164, the Singapore Ministry of Education (MOE) AcRF Tier 2 grant T2MOE201301 and SRG ISTD 2012 038 from Singapore University of Technology and Design. 1334 References Bernd Bohnet and Joakim Nivre. 2012. A transitionbased system for joint part-of-speech tagging and labeled non-projective dependency parsing. In Proceedings of the EMNLP-CONLL, pages 1455–1465, Jeju Island, Korea, July. Bernd Bohnet. 2010. Very high accuracy and fast dependency parsing is not a contradiction. In Proceedings of the 23rd COLING, number August, pages 89–97. Xavier Carreras, Mihai Surdeanu, and Llu´ıs M`arquez. 2006. Projective dependency parsing with perceptron. In Proceedings of the Tenth Conference on Computational Natural Language Learning (CoNLL-X), pages 181–185, New York City, June. Pi-Chuan Chang, Michel Galley, and Chris Manning. 2008. Optimizing chinese word segmentation for machine translation performance. In ACL Workshop on Statistical Machine Translation. Pi-Chuan Chang, Huihsin Tseng, Dan Jurafsky, , and Christopher D. Manning. 2009. Discriminative reordering with chinese grammatical relations features. In Proceedings of the Third Workshop on Syntax and Structure in Statistical Translation. Jinho D. Choi and Andrew McCallum. 2013. Transition-based dependency parsing with selectional branching. In Proceedings of ACL, pages 1052–1062, August. Michael Collins and Brian Roark. 2004. Incremental parsing with the perceptron algorithm. 
In Proceedings of the 42nd Meeting of the Association for Computational Linguistics (ACL’04), Main Volume, pages 111–118, Barcelona, Spain, July. Michael Collins. 2002. Discriminative training methods for hidden markov models: Theory and experiments with perceptron algorithms. In Proceedings of the 7th EMNLP. Xiangyu Duan, Jun Zhao, and Bo Xu. 2007. Probabilistic models for action-based chinese dependency parsing. In Proceedings of ECML/ECPPKDD, volume 4701 of Lecture Notes in Computer Science, pages 559–566. Thomas Emerson. 2005. The second international chinese word segmentation bakeoff. In Proceedings of the Second SIGHAN Workshop on Chinese Language Processing, pages 123–133. Jun Hatori, Takuya Matsuzaki, Yusuke Miyao, and Jun’ichi Tsujii. 2012. Incremental joint approach to word segmentation, pos tagging, and dependency parsing in chinese. In Proceedings of the 50th ACL, pages 1045–1053, Jeju Island, Korea, July. Liang Huang and Kenji Sagae. 2010. Dynamic programming for linear-time incremental parsing. In Proceedings of the 48th ACL, pages 1077–1086, Uppsala, Sweden, July. Liang Huang, Wenbin Jiang, and Qun Liu. 2009. Bilingually-constrained (monolingual) shift-reduce parsing. In Proceedings of the 2009 Conference on Empirical Methods in Natural Language Processing: Volume 3-Volume 3, pages 1222–1231. Association for Computational Linguistics. Terry Koo and Michael Collins. 2010. Efficient thirdorder dependency parsers. In Proceedings of the 48th Annual Meeting of the ACL, pages 1–11. Zhongguo Li and Guodong Zhou. 2012. Unified dependency parsing of chinese morphological and syntactic structures. In Proceedings of the 2012 Joint Conference on Empirical Methods in Natural Language Processing and Computational Natural Language Learning, pages 1445–1454, Jeju Island, Korea, July. Zhenghua Li, Ting Liu, and Wanxiang Che. 2012. Exploiting multiple treebanks for parsing with quasisynchronous grammars. In Proceedings of the 50th ACL, pages 675–684, Jeju Island, Korea, July. Zhongguo Li. 2011. Parsing the internal structure of words: A new paradigm for chinese word segmentation. In Proceedings of the 49th ACL, pages 1405– 1414, Portland, Oregon, USA, June. Ryan McDonald, Koby Crammer, and Fernando Pereira. 2005. Online large-margin training of dependency parsers. In Proceedings of ACL, number June, pages 91–98, Morristown, NJ, USA. Joakim Nivre and Jens Nilsson. 2005. Pseudoprojective dependency parsing. In Proceedings of ACL. Joakim Nivre. 2008. Algorithms for deterministic incremental dependency parsing. Computational Linguistics, 34(4):513–553. Reut Tsarfaty and Yoav Goldberg. 2008. Word-based or morpheme-based? annotation strategies for modern hebrew clitics. In LREC. European Language Resources Association. Yiou Wang, Jun’ichi Kazama, Yoshimasa Tsuruoka, Wenliang Chen, Yujie Zhang, and Kentaro Torisawa. 2011. Improving chinese word segmentation and pos tagging with semi-supervised methods using large auto-analyzed data. In Proceedings of 5th IJCNLP, pages 309–317, Chiang Mai, Thailand, November. Yue Zhang and Stephen Clark. 2008. A tale of two parsers: Investigating and combining graphbased and transition-based dependency parsing. In Proceedings of EMNLP, pages 562–571, Honolulu, Hawaii, October. Yue Zhang and Stephen Clark. 2010. A fast decoder for joint word segmentation and POS-tagging using a single discriminative model. In Proceedings of the EMNLP, pages 843–852, Cambridge, MA, October. 1335 Yue Zhang and Stephen Clark. 2011. 
Syntactic processing using the generalized perceptron and beam search. Computational Linguistics, 37(1):105–151. Yue Zhang and Joakim Nivre. 2011. Transition-based dependency parsing with rich non-local features. In Proceedings of the 49th ACL, pages 188–193, Portland, Oregon, USA, June. Ruiqiang Zhang, Keiji Yasuda, and Eiichiro Sumita. 2008. Chinese word segmentation and statistical machine translation. IEEE Transactions on Signal Processing, 5(2). Meishan Zhang, Yue Zhang, Wanxiang Che, and Ting Liu. 2013. Chinese parsing exploiting characters. In Proceedings of the 51st ACL, pages 125–134, Sofia, Bulgaria, August. Hai Zhao. 2009. Character-level dependencies in chinese: Usefulness and learning. In Proceedings of the EACL, pages 879–887, Athens, Greece, March. Muhua Zhu, Yue Zhang, Wenliang Chen, Min Zhang, and Jingbo Zhu. 2013. Fast and accurate shiftreduce constituent parsing. In Proceedings of the 51st ACL, pages 434–443, Sofia, Bulgaria, August. 1336
Proceedings of the 52nd Annual Meeting of the Association for Computational Linguistics, pages 1337–1348, Baltimore, Maryland, USA, June 23-25 2014. c⃝2014 Association for Computational Linguistics Unsupervised Dependency Parsing with Transferring Distribution via Parallel Guidance and Entropy Regularization Xuezhe Ma Department of Linguistics University of Washington Seattle, WA 98195, USA [email protected] Fei Xia Department of Linguistics University of Washington Seattle, WA 98195, USA [email protected] Abstract We present a novel approach for inducing unsupervised dependency parsers for languages that have no labeled training data, but have translated text in a resourcerich language. We train probabilistic parsing models for resource-poor languages by transferring cross-lingual knowledge from resource-rich language with entropy regularization. Our method can be used as a purely monolingual dependency parser, requiring no human translations for the test data, thus making it applicable to a wide range of resource-poor languages. We perform experiments on three Data sets — Version 1.0 and version 2.0 of Google Universal Dependency Treebanks and Treebanks from CoNLL shared-tasks, across ten languages. We obtain stateof-the art performance of all the three data sets when compared with previously studied unsupervised and projected parsing systems. 1 Introduction In recent years, dependency parsing has gained universal interest due to its usefulness in a wide range of applications such as synonym generation (Shinyama et al., 2002), relation extraction (Nguyen et al., 2009) and machine translation (Katz-Brown et al., 2011; Xie et al., 2011). Several supervised dependency parsing algorithms (Nivre and Scholz, 2004; McDonald et al., 2005a; McDonald et al., 2005b; McDonald and Pereira, 2006; Carreras, 2007; Koo and Collins, 2010; Ma and Zhao, 2012; Zhang et al., 2013) have been proposed and achieved high parsing accuracies on several treebanks, due in large part to the availability of dependency treebanks in a number of languages (McDonald et al., 2013). However, the manually annotated treebanks that these parsers rely on are highly expensive to create, in particular when we want to build treebanks for resource-poor languages. This led to a vast amount of research on unsupervised grammar induction (Carroll and Charniak, 1992; Klein and Manning, 2004; Smith and Eisner, 2005; Cohen and Smith, 2009; Spitkovsky et al., 2010; Blunsom and Cohn, 2010; Mareˇcek and Straka, 2013; Spitkovsky et al., 2013), which appears to be a natural solution to this problem, as unsupervised methods require only unannotated text for training parsers. Unfortunately, the unsupervised grammar induction systems’ parsing accuracies often significantly fall behind those of supervised systems (McDonald et al., 2011). Furthermore, from a practical standpoint, it is rarely the case that we are completely devoid of resources for most languages. In this paper, we consider a practically motivated scenario, in which we want to build statistical parsers for resource-poor target languages, using existing resources from a resource-rich source language (like English).1 We assume that there are absolutely no labeled training data for the target language, but we have access to parallel data with a resource-rich language and a sufficient amount of labeled training data to build an accurate parser for the resource-rich language. This scenario appears similar to the setting in bilingual text parsing. 
However, most bilingual text parsing approaches require bilingual treebanks — treebanks that have manually annotated tree structures on both sides of source and target languages (Smith and Smith, 2004; Burkett and Klein, 2008), or have tree structures on the source side and translated sentences in the target languages (Huang et 1For the sake of simplicity, we refer to the resource-poor language as the “target language”, and resource-rich language as the “source language”. In addition, in this study we use English as the source resource-rich language, but our methodology can be applied to any resource-rich languages. 1337 al., 2009; Chen et al., 2010). Obviously, bilingual treebanks are much more difficult to acquire than the resources required in our scenario, since the labeled training data and the parallel text in our case are completely separated. What is more important is that most studies on bilingual text parsing assumed that the parser is applied only on bilingual text. But our goal is to develop a parser that can be used in completely monolingual setting for each target language of interest. This scenario is applicable to a large set of languages and many research studies (Hwa et al., 2005) have been made on it. Ganchev et al. (2009) presented a parser projection approach via parallel text using the posterior regularization framework (Graca et al., 2007). McDonald et al. (2011) proposed two parser transfer approaches between two different languages — one is directly transferred parser from delexicalized parsers, and the other parser is transferred using constraint driven learning algorithm where constraints are drawn from parallel corpora. In that work, they demonstrate that even the directly transferred delexicalized parser produces significantly higher accuracies than unsupervised parsers. Cohen et al. (2011) proposed an approach for unsupervised dependency parsing with non-parallel multilingual guidance from one or more helper languages, in which parallel data is not used. In this work, we propose a learning framework for transferring dependency grammars from a resource-rich language to resource-poor languages via parallel text. We train probabilistic parsing models for resource-poor languages by maximizing a combination of likelihood on parallel data and confidence on unlabeled data. Our work is based on the learning framework used in Smith and Eisner (2007), which is originally designed for parser bootstrapping. We extend this learning framework so that it can be used to transfer cross-lingual knowledge between different languages. Throughout this paper, English is used as the source language and we evaluate our approach on ten target languages — Danish (da), Dutch (nl), French (fr), German (de), Greek (el), Italian (it), Korean (ko), Portuguese (pt), Spanish (es) and Swedish (sv). Our approach achieves significant improvement over previous state-of-the-art unsupervised and projected parsing systems across all the ten languages, and considerably bridges the Economic news had little effect on financial markets Root Figure 1: An example dependency tree. gap to fully supervised dependency parsing performance. 2 Our Approach Dependency trees represent syntactic relationships through labeled directed edges between heads and their dependents. For example, Figure 1 shows a dependency tree for the sentence, Economic news had little effect on financial markets, with the sentence’s root-symbol as its root. 
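To make the representation concrete, the example tree can be stored as an array of head indices, with index 0 standing for the artificial root symbol. A minimal sketch (not from the paper; the particular attachments shown are one standard analysis and are only illustrative):

```python
# Head-index encoding of the tree in Figure 1; index 0 is the artificial root.
tokens = ["<root>", "Economic", "news", "had", "little",
          "effect", "on", "financial", "markets"]
heads = [None, 2, 3, 0, 5, 3, 5, 8, 6]   # illustrative attachments only

def edges(heads):
    """Return the tree as a set of directed (head, dependent) index pairs."""
    return {(h, d) for d, h in enumerate(heads) if h is not None}

def is_well_formed(heads):
    """Exactly one word attached to the root, and no cycles among the words."""
    if sum(1 for h in heads[1:] if h == 0) != 1:
        return False
    for d in range(1, len(heads)):
        seen, h = {d}, heads[d]
        while h != 0:
            if h in seen:
                return False
            seen.add(h)
            h = heads[h]
    return True

print(sorted(edges(heads)))
print(is_well_formed(heads))   # True
```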
The focus of this work is on building dependency parsers for target languages, assuming that an accurate English dependency parser and some parallel text between the two languages are available. Central to our approach is a maximizing likelihood learning framework, in which we use an English parser and parallel text to estimate the “transferring distribution” of the target language parsing model (See Section 2.2 for more details). Another advantage of the learning framework is that it combines both the likelihood on parallel data and confidence on unlabeled data, so that both parallel text and unlabeled data can be utilized in our approach. 2.1 Edge-Factored Parsing Model In this paper, we will use the following notation: x represents a generic input sentence, and y represents a generic dependency tree. T(x) is used to denote the set of possible dependency trees for sentence x. The probabilistic model for dependency parsing defines a family of conditional probability pλ(y|x) over all y given sentence x, with a log-linear form: pλ(y|x) = 1 Z(x) exp  X j λjFj(y, x)  (1) where Fj are feature functions, λ = (λ1, λ2, . . .) are parameters of the model, and Z(x) is a normalization factor, which is commonly referred to as the partition function: Z(x) = X y∈T(x) exp  X j λjFj(y, x)  (2) 1338 A common strategy to make this parsing model efficiently computable is to factor dependency trees into sets of edges: Fj(y, x) = X e∈y fj(e, x). (3) That is, dependency tree y is treated as a set of edges e and each feature function Fj(y, x) is equal to the sum of all the features fj(e, x). We denote the weight function of each edge e as follows: w(e, x) = exp  X j λjfj(e, x)  (4) and the conditional probability pλ(y|x) has the following form: pλ(y|x) = 1 Z(x) Y e∈y w(e, x) (5) 2.2 Model Training One of the most common model training methods for supervised dependency parser is Maximum conditional likelihood estimation. For a supervised dependency parser with a set of training data {(xi, yi)}, the logarithm of the likelihood (a.k.a. the log-likelihood) is given by: L(λ) = X i log pλ(yi|xi) (6) Maximum likelihood training chooses parameters such that the log-likelihood L(λ) is maximized. However, in our scenario we have no labeled training data for target languages but we have some parallel and unlabeled data plus an English dependency parser. For the purpose of transferring cross-lingual information from the English parser via parallel text, we explore the model training method proposed by Smith and Eisner (2007), which presented a generalization of K function (Abney, 2004), and related it to another semi-supervised learning technique, entropy regularization (Jiao et al., 2006; Mann and McCallum, 2007). The objective K function to be minimized is actually the expected negative loglikelihood: K = − X i X yi ˜p(yi|xi) log pλ(yi|xi) = X i D(˜pi||pλ,i) + H(˜pi) (7) where ˜pi(·) def = ˜p(·|xi) and pλ,i(·) def = pλ(·|xi). ˜p(y|x) is the “transferring distribution” that reflects our uncertainty about the true labels, and we are trying to learn a parametric model pλ(y|x) by minimizing the K function. In our scenario, we have a set of aligned parallel data P = {xs i, xt i, ai} where ai is the word alignment for the pair of source-target sentences (xs i, xt i), and a set of unlabeled sentences of the target language U = {xt i}. We also have a trained English parsing model pλE(y|x). Then the K in equation (7) can be divided into two cases, according to whether xi belongs to parallel data set P or unlabeled data set U. 
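As a concrete illustration of equations (1) through (5) above, a minimal sketch that scores a candidate tree as a product of per-edge weights and normalizes by brute force over an explicitly listed candidate set; the feature function is a toy placeholder rather than the paper's feature set, and a real parser would compute Z(x) with the inside or matrix-tree algorithm instead of enumeration:

```python
import math

def edge_features(edge, x):
    """Toy per-edge feature function f(e, x); x is a list of (word, POS) pairs
    with x[0] the root symbol.  The real model uses a much richer feature set."""
    h, d = edge
    return {"head_tag=" + x[h][1]: 1.0,
            "dep_tag=" + x[d][1]: 1.0,
            "dir=" + ("R" if d > h else "L"): 1.0}

def edge_weight(edge, x, lam):
    # w(e, x) = exp( sum_j lambda_j * f_j(e, x) ), as in equation (4)
    return math.exp(sum(lam.get(f, 0.0) * v
                        for f, v in edge_features(edge, x).items()))

def tree_score(tree, x, lam):
    # unnormalized score of a tree: product over its edges, as in equation (5)
    score = 1.0
    for e in tree:
        score *= edge_weight(e, x, lam)
    return score

def p_tree(tree, x, lam, candidates):
    # brute-force partition function over an explicit candidate set T(x)
    z = sum(tree_score(t, x, lam) for t in candidates)
    return tree_score(tree, x, lam) / z

x = [("<root>", "ROOT"), ("news", "NOUN"), ("had", "VERB"), ("effect", "NOUN")]
candidates = [{(2, 1), (0, 2), (2, 3)},   # news <- had -> effect
              {(0, 1), (1, 2), (2, 3)}]   # an alternative, worse tree
lam = {"head_tag=VERB": 0.7, "dep_tag=NOUN": 0.3}
print(p_tree(candidates[0], x, lam, candidates))
```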
For the unlabeled examples {xi ∈U}, some previous studies (e.g., (Abney, 2004)) simply use a uniform distribution over labels (e.g., parses), to reflect that the label is unknown. We follow the method in Smith and Eisner (2007) and take the transferring distribution ˜pi to be the actual current belief pλ,i. The total contribution of the unsupervised examples to K then simplifies to KU = P xi∈U H(pλ,i), which may be regarded as the entropy item used to constrain the model’s uncertainty H to be low, as presented in the work on entropy regularization (Jiao et al., 2006; Mann and McCallum, 2007). But how can we define the transferring distribution for the parallel examples {xt i ∈P}? We define the transferring distribution by defining the transferring weight utilizing the English parsing model pλE(y|x) via parallel data with word alignments: ˜w(et, xt i) = ( wE(es, xs i), if et align −→es wE(et delex, xs i), otherwise (8) where wE(·, ·) is the weight function of the English parsing model pλE(y|x), and et delex is the delexicalized form2 of the edge et. From the definition of the transferring weight, we can see that, if an edge et of the target language sentence xt i is aligned to an edge es of the English sentence xs i, we transfer the weight of edge et to the corresponding weight of edge es in the English parsing model pλE(y|x). If the edge et is not aligned to any edges of the English sentence xs i, we reduce the edge et to the delexicalized form and calculate the transferring weight in the English parsing model. There are two advan2The delexicalized form of an edge is an edge for which only delexicalized features are considered. 1339 tages for this definition of the transferring weight. First, by transferring the weight function to the corresponding weight in the well-developed English parsing model, we can project syntactic information across language boundaries. Second, McDonald et al. (2011) demonstrates that parsers with only delexicalized features produce considerably high parsing performance. By reducing unaligned edges to their delexicalized forms, we can still use those delexicalized features, such as part-of-speech tags, for those unaligned edges, and can address problem that automatically generated word alignments include errors. From the definition of transferring weight in equation (8), the transferring distribution can be defined in the following way: ˜p(y|x) = 1 ˜Z(x) Y e∈y ˜w(e, x) (9) where ˜Z(x) = X y Y e∈y ˜w(e, x) (10) Due to the normalizing factor ˜Z(x), the transferring distribution is a valid one. We introduce a multiplier γ as a trade-off between the two contributions (parallel and unsupervised) of the objective function K, and the final objective function K ′ has the following form: K ′ = − X xi∈P X yi ˜p(yi|xi) log pλ(yi|xi) + γ X xi∈U H(pλ,i) = KP + γKU (11) KP and KU are the contributions of the parallel and unsupervised data, respectively. One may regard γ as a Lagrange multiplier that is used to constrain the parser’s uncertainty H to be low, as presented in several studies on entropy regularization (Brand, 1998; Grandvalet and Bengio, 2004; Jiao et al., 2006). 2.3 Algorithms and Complexity for Model Training To train our parsing model, we need to find out the parameters λ that minimize the objective function K ′ in equation (11). This optimization problem is typically solved using quasi-Newton numerical methods such as L-BFGS (Nash and Nocedal, 1991), which requires efficient calculation of the objective function and the gradient of the objective function. 
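To make the transferring weight of equation (8) concrete before turning to those computations, a minimal sketch in which edges are (head, dependent) word-index pairs, the alignment a_i is a dictionary from target indices to source indices, and all helper names are hypothetical:

```python
def transfer_weight(edge_t, x_t, x_s, align, w_en, delexicalize):
    """Transferring weight of equation (8).
    edge_t       : (head, dependent) indices in the target sentence x_t
    align        : dict mapping target word indices to aligned source indices
    w_en         : weight function of the trained English model, w_E(e, x)
    delexicalize : maps a target edge to its delexicalized (e.g. POS-only) form
    """
    h_t, d_t = edge_t
    if h_t in align and d_t in align:
        # both endpoints aligned: transfer the weight of the aligned source edge
        return w_en((align[h_t], align[d_t]), x_s)
    # otherwise fall back to the delexicalized form of the target edge
    return w_en(delexicalize(edge_t, x_t), x_s)
```

The transferring distribution of equation (9) is then simply the edge-factored distribution obtained by plugging these weights in place of w(e, x).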
The first item (KP ) of the K ′ function in equation (11) can be rewritten in the following form: KP = − X xi∈P  X yi ˜p(yi|xi) X e∈yi log w(e, xi) − log Z(xi)  (12) and according to equation (1) and (3) the gradient of KP can be written as: ∂KP ∂λj = X xi∈P ∂˜p(yi|xi) log pλ(yi|xi) ∂λj = X xi∈P  X yi ˜p(yi|xi) X e∈yi fj(e, xi) − X yi pλ(yi|xi) X e∈yi fj(e, xi)  (13) According to equation (9), ˜p(y|x) can also be factored into the multiplication of the weight of each edge, so both KP and its gradient can be calculated by running the O(n3) inside-outside algorithm (Baker, 1979; Paskin, 2001) for projective parsing. For non-projective parsing, the analogy to the inside algorithm is the O(n3) matrixtree algorithm based on Kirchhoff’s Matrix-Tree Theorem, which is dominated asymptotically by a matrix determinant (Koo et al., 2007; Smith and Smith, 2007). The gradient of a determinant may be computed by matrix inversion, so evaluating the gradient again has the same O(n3) complexity as evaluating the function. The second item (KU) of the K ′ function in equation (11) is the Shannon entropy of the posterior distribution over parsing trees, and can be written into the following form: KU = − X xi∈U  X yi pλ(yi|xi) X e∈yi log w(e, xi) − log Z(xi)  (14) and the gradient of KU is in the following: ∂KU ∂λj = X xi∈U ∂pλ(yi|xi) log pλ(yi|xi) ∂λj = − X yi pλ(yi|xi) log pλ(yi|xi)Fj(yi, xi) +  X yi pλ(yi|xi) log pλ(yi|xi)  ·  X yi pλ(yi|xi)Fj(yi, xi)  (15) 1340 #sents/#tokens training dev test Version 1.0 de 2,200/30,460 800/12,215 1,000/16,339 es 3,345/94,232 370/10,191 300/8,295 fr 3,312/74,979 366/8,071 300/6,950 ko 5,308/62,378 588/6,545 298/2,917 sv 4,447/66,631 493/9,312 1,219/20,376 Version 2.0 de 14,118/26,4906 800/12,215 1,000/16,339 es 14,138/37,5180 1,569/40,950 300/8,295 fr 14,511/35,1233 1,611/38,328 300/6,950 it 6,389/14,9145 400/9,541 400/9,187 ko 5437/60,621 603/6,438 299/2,631 pt 9,600/23,9012 1,200/29,873 1,198/29,438 sv 4,447/66,631 493/9,312 1,219/20,376 Table 1: Data statistics of two versions of Google Universal Treebanks for the target languages. Similar with the calculation of KP , KU can also be computed by running the inside-outside algorithm (Baker, 1979; Paskin, 2001) for projective parsing. For the gradient of KU, both the two multipliers of the second item in equation (15) can be computed using the same inside-outside algorithm. For the first item in equation (15), an O(n3) dynamic programming algorithm that is closely related to the forward-backward algorithm (Mann and McCallum, 2007) for the entropy regularized CRF (Jiao et al., 2006) can be used for projective parsing. For non-projective parsing, however, the runtime rises to O(n4). In this paper, we focus on projective parsing. 2.4 Summary of Our Approach To summarize the description in the previous sections, our approach is performed in the following steps: 1. Train an English parsing model pλE(y|x), which is used to estimate the transferring distribution ˜p(y|x). 2. Prepare parallel text by running word alignment method to obtain word alignments,3 and prepare the unlabeled data. 3. Train a parsing model for the target language by minimizing the objective K ′ function which is the combination of expected negative log-likelihood on parallel and unlabeled data. 3The word alignment methods do not require additional resources besides parallel text. 
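A minimal sketch of step 3 above, assuming helper routines that return K_P, K_U and their gradients (in practice computed with the dynamic programs described above, with gradients as numpy arrays); scipy's L-BFGS-B routine stands in here for the quasi-Newton optimizer mentioned earlier:

```python
from scipy.optimize import minimize

def train_target_parser(lam0, parallel_data, unlabeled_data, gamma,
                        kp_and_grad, ku_and_grad):
    """Minimize K' = K_P + gamma * K_U with L-BFGS.
    kp_and_grad(lam, data) -> (K_P, grad)   on the parallel data
    ku_and_grad(lam, data) -> (K_U, grad)   on the unlabeled data
    lam0 and the returned gradients are numpy arrays of parameter dimension."""
    def objective(lam):
        kp, g_p = kp_and_grad(lam, parallel_data)
        ku, g_u = ku_and_grad(lam, unlabeled_data)
        return kp + gamma * ku, g_p + gamma * g_u

    result = minimize(objective, lam0, jac=True, method="L-BFGS-B")
    return result.x
```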
# sents 500 1000 2000 5000 10000 20000 da 12,568 25,225 49,889 126,623 254,565 509,480 de 13,548 26,663 53,170 133,596 265,589 527,407 el 14,198 28,302 56,744 143,753 286,126 572,777 es 15,147 29,214 57,526 144,621 290,517 579,164 fr 15,046 29,982 60,569 153,874 306,332 609,541 it 15,151 29,786 57,696 145,717 288,337 573,557 ko 3,814 7,679 15,337 38,535 77,388 155,051 nl 13,234 26,777 54,570 137,277 274,692 551,463 pt 14,346 28,109 55,998 143,221 285,590 571,109 sv 12,242 24,897 50,047 123,069 246,619 490,086 Table 2: The number of tokens in parallel data used in our experiments. For all these corpora, the other language is English. 3 Data and Tools In this section, we illustrate the data sets used in our experiments and the tools for data preparation. 3.1 Choosing Target Languages Our experiments rely on two kinds of data sets: (i) Monolingual Treebanks with consistent annotation schema — English treebank is used to train the English parsing model, and the Treebanks for target languages are used to evaluate the parsing performance of our approach. (ii) Large amounts of parallel text with English on one side. We select target languages based on the availability of these resources. The monolingual treebanks in our experiments are from the Google Universal Dependency Treebanks (McDonald et al., 2013), for the reason that the treebanks of different languages in Google Universal Dependency Treebanks have consistent syntactic representations. The parallel data come from the Europarl corpus version 7 (Koehn, 2005) and Kaist Corpus4. Taking the intersection of languages in the two kinds of resources yields the following seven languages: French, German, Italian, Korean, Portuguese, Spanish and Swedish. The treebanks from CoNLL shared-tasks on dependency parsing (Buchholz and Marsi, 2006; Nivre et al., 2007) appear to be another reasonable choice. However, previous studies (McDonald et al., 2011; McDonald et al., 2013) have demonstrated that a homogeneous representation is critical for multilingual language technologies that require consistent cross-lingual analysis for downstream components, and the heterogenous representations used in CoNLL shared-tasks treebanks weaken any conclusion that can be drawn. 4http://semanticweb.kaist.ac.kr/home/ index.php/Corpus10 1341 DTP DTP† PTP† -U +U OR de 58.50 58.46 69.21 73.72 74.01 78.64 es 68.07 68.72 72.57 75.32 75.60 82.56 fr 70.14 71.13 74.60 76.65 76.93 83.69 ko 42.37 43.57 53.72 59.72 59.94 89.85 sv 70.56 70.59 75.87 78.91 79.27 85.59 Ave 61.93 62.49 69.19 72.86 73.15 84.67 Table 3: UAS for two versions of our approach, together with baseline and oracle systems on Google Universal Treebanks version 1.0. “Ave” is the macro-average across the five languages. For comparison with previous studies, nevertheless, we also run experiments on CoNLL treebanks (see Section 4.4 for more details). We evaluate our approach on three target languages from CoNLL shared task treebanks, which do not appear in Google Universal Treebanks. The three languages are Danish, Dutch and Greek. So totally we have ten target languages. The parallel data for these three languages are also from the Europarl corpus version 7. 3.2 Word Alignments In our approach, word alignments for the parallel text are required. We perform word alignments with the open source GIZA++ toolkit5. The parallel corpus was preprocessed in standard ways, selecting sentences with the length in the range from 3 to 100. Then we run GIZA++ with the default setting to generate word alignments in both directions. 
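Each GIZA++ run yields one set of directional links per sentence pair; a minimal sketch (hypothetical in-memory representation) of the intersection step described in the next sentence, which keeps only links proposed in both directions:

```python
def intersect_alignments(src2tgt, tgt2src):
    """src2tgt: set of (i, j) links from the source-to-target run;
    tgt2src: set of (j, i) links from the target-to-source run.
    Returns the links proposed by both directions."""
    return src2tgt & {(i, j) for (j, i) in tgt2src}

# toy sentence pair: only the links agreed on in both directions survive
print(intersect_alignments({(0, 0), (1, 2), (2, 1)},
                           {(0, 0), (1, 2)}))      # {(0, 0), (2, 1)}
```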
We then make the intersection of the word alignments of two directions to generate one-toone alignments. 3.3 Part-of-Speech Tagging Several features in our parsing model involve partof-speech (POS) tags of the input sentences. The set of POS tags needs to be consistent across languages and treebanks. For this reason we use the universal POS tag set of Petrov et al. (2011). This set consists of the following 12 coarsegrained tags: NOUN (nouns), VERB (verbs), ADJ (adjectives), ADV (adverbs), PRON (pronouns), DET (determiners), ADP (prepositions or postpositions), NUM (numerals), CONJ (conjunctions), PRT (particles), PUNC (punctuation marks) and X (a catch-all for other categories such as abbreviations or foreign words). POS tags are not available for parallel data in the Europarl and Kaist corpus, so we need to pro5https://code.google.com/p/giza-pp/ DTP† PTP† -U +U OR de 58.56 69.77 73.92 74.30 81.65 es 68.72 73.22 75.21 75.53 83.92 fr 71.13 74.75 76.14 76.53 83.51 it 70.74 76.08 77.55 77.74 85.47 ko 38.55 43.34 59.71 59.89 90.42 pt 69.82 74.59 76.30 76.65 85.67 sv 70.59 75.87 78.91 79.27 85.59 Ave 64.02 69.66 73.96 74.27 85.18 Table 4: UAS for two versions of our approach, together with baseline and oracle systems on Google Universal Treebanks version 2.0. “Ave” is the macro-average across the seven languages. vide the POS tags for these data. In our experiments, we train a Stanford POS Tagger (Toutanova et al., 2003) for each language. The labeled training data for each POS tagger are extracted from the training portion of each Treebanks. The average tagging accuracy is around 95%. Undoubtedly, we are primarily interested in applying our approach to build statistical parsers for resource-poor target languages without any knowledge. For the purpose of evaluation of our approach and comparison with previous work, we need to exploit the gold POS tags to train the POS taggers. As part-of-speech tags are also a form of syntactic analysis, this assumption weakens the applicability of our approach. Fortunately, some recently proposed POS taggers, such as the POS tagger of Das and Petrov (2011), rely only on labeled training data for English and the same kind of parallel text in our approach. In practice we can use this kind of POS taggers to predict POS tags, whose tagging accuracy is around 85%. 4 Experiments In this section, we will describe the details of our experiments and compare our results with previous methods. 4.1 Data Sets As presented in Section 3.1, we evaluate our parsing approach on both version 1.0 and version 2.0 of Google Univereal Treebanks for seven languages6. We use the standard splits of the treebank for each language as specified in the release of the data7. Table 1 presents the statistics of the two versions of Google Universal Treebanks. We strip all 6Japanese and Indonesia are excluded as no practicable parallel data are available. 
7https://code.google.com/p/uni-dep-tb/ 1342 Google Universal Treebanks V1.0 de es fr ko sv # sents PTP† -U +U PTP† -U +U PTP† -U +U PTP† -U +U PTP† -U +U 500 63.23 70.79 70.93 70.09 72.32 72.64 72.24 74.64 74.90 47.71 56.87 57.22 71.70 75.88 76.13 1000 65.61 71.71 71.86 70.90 73.44 73.67 72.95 75.07 75.35 47.83 57.65 58.15 72.38 76.55 77.03 2000 66.52 72.33 72.48 72.01 73.57 73.81 73.69 75.88 76.22 48.37 58.19 58.44 73.65 77.86 78.12 5000 67.79 73.06 73.31 72.34 74.30 74.79 74.31 76.02 76.29 53.02 58.57 59.04 74.88 78.48 78.70 10000 68.44 73.59 73.92 72.48 74.86 75.26 74.43 76.14 76.34 53.61 59.17 59.55 75.34 78.78 79.08 20000 69.21 73.72 74.01 72.57 75.32 75.60 74.60 76.55 76.93 53.72 59.72 59.94 75.87 78.91 79.27 Google Universal Treebanks V2.0 de es fr ko it # sents PTP† -U +U PTP† -U +U PTP† -U +U PTP† -U +U PTP† -U +U 500 60.10 71.07 71.39 69.52 72.97 73.28 71.10 74.57 74.70 40.09 56.60 57.10 72.80 75.67 75.94 1000 61.76 72.15 72.39 70.78 73.48 73.79 72.14 75.13 75.43 40.44 57.55 57.93 73.55 76.43 76.67 2000 65.35 72.73 73.04 71.75 74.10 74.35 73.21 75.78 76.06 40.87 58.11 58.43 74.44 76.99 77.39 5000 67.86 73.32 73.62 72.43 74.55 74.83 74.14 75.83 76.02 40.90 58.48 58.96 75.07 77.10 77.34 10000 68.70 73.71 74.02 72.85 74.80 74.95 74.53 75.97 76.17 41.29 59.13 59.44 75.65 77.50 77.71 20000 69.77 73.92 74.30 73.22 75.21 75.53 74.75 76.14 76.53 43.34 59.71 59.89 76.08 77.55 77.74 pt # sents PTP† -U +U 500 71.34 74.41 74.68 1000 71.91 74.48 75.08 2000 72.93 75.10 75.32 5000 73.78 75.88 75.98 10000 74.40 75.99 76.15 20000 74.59 76.30 76.65 Table 5: Parsing results of our approach with different amount of parallel data on Google Universal Treebanks version 1.0 and 2.0. We omit the results of Swedish for treebanks version 2.0 since the data for Swedish from version 2.0 are exactly the same with those from version 1.0. the dependency annotations off the training portion of each treebank, and use that as the unlabeled data for that target language. We train our parsing model with different numbers of parallel sentences to analyze the influence of the amount of parallel data on the parsing performance of our approach. The parallel data sets contain 500, 1000, 2000, 5000, 10000 and 20000 parallel sentences, respectively. We randomly extract parallel sentences from each corpora, and smaller data sets are subsets of larger ones. Table 2 shows the number of tokens in the parallel data used in the experiments. 4.2 System performance and comparison on Google Universal Treebanks For the comparison of parsing performance, we run experiments on the following systems: DTP: The direct transfer parser (DTP) proposed by McDonald et al. (2011), who train a delexicalized parser on English labeled training data with no lexical features, then apply this parser to parse target languages directly. It is based on the transition-based dependency parsing paradigm (Nivre, 2008). We directly cite the results reported in McDonald et al. (2013). In addition to their original results, we also report results by reimplementing the direct transfer parser based on the first-order projective dependency parsing model (McDonald et al., 2005a) (DTP†). PTP The projected transfer parser (PTP) described in McDonald et al. (2011). The results of the projected transfer parser reimplemented by us is marked as “PTP†”. -U: Our approach training on only parallel data without unlabeled data for the target language. The parallel data set for each language contains 20,000 sentences. 
+U: Our approach training on both parallel and unlabeled data. The parallel data sets are the ones contains 20,000 sentences. OR: the supervised first-order projective dependency parsing model (McDonald et al., 2005a), trained on the original treebanks with maximum likelihood estimation (equation 6). One may regard this system as an oracle of transfer parsing. Parsing accuracy is measured with unlabeled attachment score (UAS): the percentage of words with the correct head. Table 3 and Table 4 shows the parsing results of our approach, together with the results of the baseline systems and the oracle, on version 1.0 and version 2.0 of Google Universal Treebanks, respectively. Our approaches significantly outperform all the baseline systems across all the seven target languages. For the results on Google Universal Treebanks version 1.0, the improvement on average over the projected transfer paper (PTP†) is 3.96% 1343 and up to 6.22% for Korean and 4.80% for German. For the other three languages, the improvements are remarkable, too — 2.33% for French, 3.03% for Spanish and 3.40% for Swedish. By adding entropy regularization from unlabeled data, our full model achieves average improvement of 0.29% over the “-U” setting. Moreover, our approach considerably bridges the gap to fully supervised dependency parsers, whose average UAS is 84.67%. For the results on treebanks version 2.0, we can get similar observation and draw the same conclusion. 4.3 Effect of the Amount of Parallel Text Table 5 illustrates the UAS of our approach trained on different amounts of parallel data, together with the results of the projected transfer parser re-implemented by us (PTP†). We run two versions of our approach for each of the parallel data sets, one with unlabeled data (+U) and the other without them (-U). From table 5 we can get three observations. First, even the parsers trained with only 500 parallel sentences achieve considerably high parsing accuracies (average 70.10% for version 1.0 and 71.59% for version 2.0). This demonstrates that our approach does not rely on a large amount of parallel data. Second, when gradually increasing the amount of parallel data, the parsing performance continues improving. Third, entropy regularization with unlabeled data makes modest improvement on parsing performance over the parsers without unlabeled data. This proves the effectiveness of the entropy regularization from unlabeled data. 4.4 Experiments on CoNLL Treebanks To make a thorough empirical comparison with previous studies, we also evaluate our system without unlabeled data (-U) on treebanks from CoNLL shared task on dependency parsing (Buchholz and Marsi, 2006; Nivre et al., 2007). To facilitate comparison, we use the same eight IndoEuropean languages as target languages: Danish, Dutch, German, Greek, Italian, Portuguese, Spanish and Swedish, and same experimental setup as McDonald et al. (2011). We report both the results of the direct transfer and projected transfer parsers directly cited from McDonald et al. (2011) (DTP and PTP) and re-implemented by us (DTP†and PTP†). 
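For reference, the unlabeled attachment score reported in these tables is straightforward to compute from predicted and gold head indices; a minimal sketch:

```python
def uas(predicted, gold):
    """Unlabeled attachment score: percentage of words whose predicted head
    index matches the gold head index; one list of head indices per sentence."""
    correct = total = 0
    for pred_heads, gold_heads in zip(predicted, gold):
        correct += sum(p == g for p, g in zip(pred_heads, gold_heads))
        total += len(gold_heads)
    return 100.0 * correct / total

# two toy sentences, 5 of the 7 heads are correct
print(uas([[2, 0, 2], [2, 0, 4, 2]],
          [[2, 0, 2], [3, 0, 1, 2]]))   # 71.43 (rounded)
```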
Table 6 gives the results comparing the model without unlabeled data (-U) presented in this work DMV DTP DTP† PTP PTP† -U OR da 33.4 45.9 46.8 48.2 50.0 50.1 87.1 de 18.0 47.2 46.0 50.9 52.4 57.3 87.0 el 39.9 63.9 62.9 66.8 65.3 67.4 82.3 es 28.5 53.3 54.4 55.8 59.9 60.3 83.6 it 43.1 57.7 59.9 60.8 63.4 64.0 83.9 nl 38.5 60.8 60.7 67.8 66.5 68.2 78.2 pt 20.1 69.2 71.1 71.3 74.8 75.1 87.2 sv 44.0 58.3 60.3 61.3 62.8 66.7 88.0 Ave 33.2 57.0 57.8 60.4 61.9 63.6 84.7 Table 6: Parsing results on treebanks from CoNLL shared tasks for eight target languages. The results of unsupervised DMV model are from Table 1 of McDonald et al. (2011). to those five baseline systems and the oracle (OR). The results of unsupervised DMV model (Klein and Manning, 2004) are from Table 1 of McDonald et al. (2011). Our approach outperforms all these baseline systems and achieves state-of-theart performance on all the eight languages. In order to compare with more previous methods, we also report parsing performance on sentences of length 10 or less after punctuation has been removed. Table 7 shows the results of our system and the results of baseline systems. “USR†” is the weakly supervised system of Naseem et al. (2010). “PGI” is the phylogenetic grammar induction model of Berg-Kirkpatrick and Klein (2010). Both the results of the two systems are cited from Table 4 of McDonald et al. (2011). We also include the results of the unsupervised dependency parsing model with non-parallel multilingual guidance (NMG) proposed by Cohen et al. (2011)8, and “PR” which is the posterior regularization approach presented in Gillenwater et al. (2010). All the results are shown in Table 7. From Table 7, we can see that among the eight target languages, our approach achieves best parsing performance on six languages — Danish, German, Greek, Italian, Portuguese and Swedish. It should be noted that the “NMG” system utilizes more than one helper languages. So it is not directly comparable to our work. 4.5 Extensions In this section, we briefly outline a few extensions to our approach that we want to explore in future work. 8For each language, we use the best result of the four systems in Table 3 of Cohen et al. (2011) 1344 DTP DTP† PTP PTP† USR† PGI PR NMG -U da 53.2 55.3 57.4 59.8 55.1 41.6 44.0 59.9 60.1 de 65.9 57.9 67.0 63.5 60.0 — — — 67.5 el 73.9 70.8 73.9 72.3 60.3 — — 73.0 74.3 es 58.0 62.3 62.3 66.1 68.3 58.4 62.4 76.7 64.6 it 65.5 66.9 69.9 71.5 47.9 — — — 73.6 nl 67.6 66.0 72.2 72.1 44.0 45.1 37.9 50.7 70.5 pt 77.9 79.2 80.6 82.9 70.9 63.0 47.8 79.8 83.3 sv 70.4 70.2 71.3 70.4 52.6 58.3 42.2 74.0 75.1 Ave 66.6 66.1 69.4 69.8 57.4 — — — 71.1 Table 7: UAS on sentences of length 10 or less without punctuation from CoNLL shared task treebanks. “USR†” is the weakly supervised system of Naseem et al. (2010). “PGI” is the phylogenetic grammar induction model of Berg-Kirkpatrick and Klein (2010). Both the “USR†” and “PGI” systems are implemented and reported by McDonald et al. (2011). “NMG” is the unsupervised dependency parsing model with non-parallel multilingual guidance (Cohen et al., 2011). “PR” is the posterior regularization approach presented in Gillenwater et al. (2010). Some systems’ results for certain target languages are not available as marked by —. 4.5.1 Non-Projective Parsing As mentioned in section 2.3, the runtime to compute KU and its gradient is O(n4). One reasonable speedup, as presented in Smith and Eisner (2007), is to replace Shannon entropy with R´enyi entropy. 
The R´enyi entropy is parameterized by α: Rα(p) = 1 1 −α log  X y p(y)α (16) With R´enyi entropy, the computation of KU and its gradient is O(n3), even for non-projective case. 4.5.2 Higher-Order Models for Projective Parsing Our learning framework can be extended to higher-order dependency parsing models. For example, if we want to make our model capable of utilizing more contextual information, we can extend our transferring weight to higher-order parts: ˜w(pt, xt i) = ( wE(ps, xs i), if pt align −→ps wE(pt delex, xs i), otherwise (17) where p is a small part of tree y that has limited interactions. For projective parsing, several algorithms (McDonald and Pereira, 2006; Carreras, 2007; Koo and Collins, 2010; Ma and Zhao, 2012) have been proposed to solve the model training problems (calculation of objective function and gradient) for different factorizations. 4.5.3 IGT Data One possible direction to improve our approach is to replace parallel text with Interlinear Glossed Text (IGT) (Lewis and Xia, 2010), which is a semi-structured data type encoding more syntactic information than parallel data. By using IGT Data, not only can we obtain more accurate word alignments, but also extract useful cross-lingual information for the resource-poor language. 5 Conclusion In this paper, we propose an unsupervised projective dependency parsing approach for resourcepoor languages, using existing resources from a resource-rich source language. By presenting a model training framework, our approach can utilize parallel text to estimate transferring distribution with the help of a well-developed resourcerich language dependency parser, and use unlabeled data as entropy regularization. The experimental results on three data sets across ten target languages show that our approach achieves significant improvement over previous studies. Acknowledgements This material is based upon work supported by the National Science Foundation under Grant No. BCS-0748919. Any opinions, findings, and conclusions or recommendations expressed in this material are those of the authors and do not necessarily reflect the views of the National Science Foundation. 1345 References Steven Abney. 2004. Understanding the Yarowsky algorithm. Computational Linguistics, 30:2004. James K. Baker. 1979. Trainable grammars for speech recognition. In Proceedings of 97th meeting of the Acoustical Society of America, pages 547–550. Taylor Berg-Kirkpatrick and Dan Klein. 2010. Phylogenetic grammar induction. In Proceedings of ACL2010, pages 1288–1297, Uppsala, Sweden, July. Phil Blunsom and Trevor Cohn. 2010. Unsupervised induction of tree substitution grammars for dependency parsing. In Proceedings of EMNLP-2010, pages 1204–1213, Cambridge, MA, October. Matthew Brand. 1998. Structure learning in conditional probability models via an entropic prior and parameter extinction. Neural Computation, 11(5):1155–1182. Sabine Buchholz and Erwin Marsi. 2006. CoNLL-X shared task on multilingual dependency parsing. In Proceeding of CoNLL-2006, pages 149–164, New York, NY. David Burkett and Dan Klein. 2008. Two languages are better than one (for syntactic parsing). In Proceedings of EMNLP-2008, pages 877–886, Honolulu, Hawaii, October. Xavier Carreras. 2007. Experiments with a higherorder projective dependency parser. In Proceedings of the CoNLL Shared Task Session of EMNLPCONLL, pages 957–961. Glenn Carroll and Eugene Charniak. 1992. Two experiments on learning probabilistic dependency grammars from corpora. 
In Proceedings of Working Notes of the Workshop Statistically-Based NLP Techniques. Wenliang Chen, Jun’ichi Kazama, and Kentaro Torisawa. 2010. Bitext dependency parsing with bilingual subtree constraints. In Proceedings of ACL2010, pages 21–29, Uppsala, Sweden, July. Shay Cohen and Noah A. Smith. 2009. Shared logistic normal distributions for soft parameter tying in unsupervised grammar induction. In Proceedings of NAACL/HLT-2009, pages 74–82, Boulder, Colorado, June. Shay B. Cohen, Dipanjan Das, and Noah A. Smith. 2011. Unsupervised structure prediction with nonparallel multilingual guidance. In Proceedings of EMNLP-2011, pages 50–61, Edinburgh, Scotland, UK., July. Dipanjan Das and Slav Petrov. 2011. Unsupervised part-of-speech tagging with bilingual graphbased projections. In Proceedings of ACL/HLT2011, pages 600–609, Portland, Oregon, USA, June. Kuzman Ganchev, Jennifer Gillenwater, and Ben Taskar. 2009. Dependency grammar induction via bitext projection constraints. In Proceedings of ACL/AFNLP-2009, pages 369–377, Suntec, Singapore, August. Jennifer Gillenwater, Kuzman Ganchev, Jo˜ao Grac¸a, Fernando Pereira, and Ben Taskar. 2010. Sparsity in dependency grammar induction. In Proceedings of the ACL 2010 Conference Short Papers, pages 194– 199, Uppsala, Sweden, July. Joao V. Graca, Lf Inesc-id, Kuzman Ganchev, and Ben Taskar. 2007. Expectation maximization and posterior constraints. In Advances in NIPS, pages 569– 576. Yves Grandvalet and Yoshua Bengio. 2004. Semisupervised learning by entropy minimization. In Advances in Neural Information Processing Systems. Liang Huang, Wenbin Jiang, and Qun Liu. 2009. Bilingually-constrained (monolingual) shift-reduce parsing. In Proceedings of EMNLP-2009, pages 1222–1231, Singapore, August. Rebecca Hwa, Philip Resnik, Amy Weinberg, Clara Cabezas, and Okan Kolak. 2005. Bootstrapping parsers via syntactic projection across parallel texts. Natural Language Engineering, 11:11–311. Feng Jiao, Shaojun Wang, Chi-Hoon Lee, Russell Greiner, and Dale Schuurmans. 2006. Semisupervised conditional random fields for improved sequence segmentation and labeling. In Proceedings of COLING/ACL-2006, pages 209–216, Sydney, Australia, July. Jason Katz-Brown, Slav Petrov, Ryan McDonald, Franz Och, David Talbot, Hiroshi Ichikawa, Masakazu Seno, and Hideto Kazawa. 2011. Training a parser for machine translation reordering. In Proceedings of EMNLP-2011, pages 183–192, Edinburgh, Scotland, UK., July. Dan Klein and Christopher Manning. 2004. Corpusbased induction of syntactic structure: Models of dependency and constituency. In Proceedings of ACL2004, pages 478–485, Barcelona, Spain, July. Philipp Koehn. 2005. Europarl: A Parallel Corpus for Statistical Machine Translation. In Conference Proceedings: the tenth Machine Translation Summit, pages 79–86, Phuket, Thailand. AAMT, AAMT. Terry Koo and Michael Collins. 2010. Efficient thirdorder dependency parsers. In Proceedings of ACL2010, pages 1–11, Uppsala, Sweden, July. Terry Koo, Amir Globerson, Xavier Carreras, and Michael Collins. 2007. Structured predicition models via the matrix-tree theorem. In Proceedings of EMNLP-CONLL 2007, pages 141–150, Prague, Czech, June. 1346 William D. Lewis and Fei Xia. 2010. Developing odin: A multilingual repository of annotated language data for hundreds of the world’s languages. LLC, 25(3):303–319. Xuezhe Ma and Hai Zhao. 2012. Fourth-order dependency parsing. In Proceedings of COLING 2012: Posters, pages 785–796, Mumbai, India, December. Gideon S. Mann and Andrew McCallum. 2007. 
Efficient computation of entropy gradient for semisupervised conditional random fields. In Proceedings of NAACL/HLT-2007, pages 109–112, Stroudsburg, PA, USA. David Mareˇcek and Milan Straka. 2013. Stopprobability estimates computed on a large corpus improve unsupervised dependency parsing. In Proceedings of ACL-2013, pages 281–290, Sofia, Bulgaria, August. Ryan McDonald and Fernando Pereira. 2006. Online learning of approximate dependency parsing algorithms. In Proceedings of EACL-2006, pages 81–88, Trento, Italy, April. Ryan McDonald, Koby Crammer, and Fernando Pereira. 2005a. Online large-margin training of dependency parsers. In Proceedings of ACL-2005, pages 91–98, Ann Arbor, Michigan, USA, June 2530. Ryan McDonald, Fernando Pereira, Kiril Ribarov, and Jan Hajic. 2005b. Non-projective dependency parsing using spanning tree algorithms. In Proceedings of HLT/EMNLP-2005, pages 523–530, Vancouver, Canada, October. Ryan McDonald, Slav Petrov, and Keith Hall. 2011. Multi-source transfer of delexicalized dependency parsers. In Proceedings of EMNLP-2011, pages 62– 72, Edinburgh, Scotland, UK., July. Ryan McDonald, Joakim Nivre, Yvonne QuirmbachBrundage, Yoav Goldberg, Dipanjan Das, Kuzman Ganchev, Keith Hall, Slav Petrov, Hao Zhang, Oscar T¨ackstr¨om, Claudia Bedini, N´uria Bertomeu Castell´o, and Jungmee Lee. 2013. Universal dependency annotation for multilingual parsing. In Proceedings of ACL-2013, pages 92–97, Sofia, Bulgaria, August. Tahira Naseem, Harr Chen, Regina Barzilay, and Mark Johnson. 2010. Using universal linguistic knowledge to guide grammar induction. In Proceedings of EMNLP-2010, pages 1234–1244, Cambridge, MA, October. Stephen G. Nash and Jorge Nocedal. 1991. A numerical study of the limited memory bfgs method and truncated-newton method for large scale optimization. SIAM Journal on Optimization, 1(2):358–372. Truc-Vien T. Nguyen, Alessandro Moschitti, and Giuseppe Riccardi. 2009. Convolution kernels on constituent, dependency and sequential structures for relation extraction. In Proceedings of EMNLP2009, pages 1378–1387, Singapore, August. Joakim Nivre and Mario Scholz. 2004. Deterministic dependency parsing of English text. In Proceedings of COLING-2004, pages 64–70, Geneva, Switzerland, August 23-27. Joakim Nivre, Johan Hall, Sandra K¨ubler, Ryan Mcdonald, Jens Nilsson, Sebastian Riedel, and Deniz Yuret. 2007. The conll 2007 shared task on dependency parsing. In Proceeding of EMNLP-CoNLL 2007, pages 915–932, Prague, Czech. Joakim Nivre. 2008. Algorithms for deterministic incremental dependency parsing. Comput. Linguist., 34(4):513–553, December. Mark A. Paskin. 2001. Cubic-time parsing and learning algorithms for grammatical bigram models. Technical Report, UCB/CSD-01-1148. Slav Petrov, Dipanjan Das, and Ryan T. McDonald. 2011. A universal part-of-speech tagset. CoRR, abs/1104.2086. Yusuke Shinyama, Satoshi Sekine, and Kiyoshi Sudo. 2002. Automatic paraphrase acquisition from news articles. In Proceeding of HLT-2002, pages 313– 318. Noah A. Smith and Jason Eisner. 2005. Contrastive estimation: Training log-linear models on unlabeled data. In Proceedings of ACL-2005, pages 354–362, Ann Arbor, Michigan, June. David A. Smith and Jason Eisner. 2007. Bootstrapping feature-rich dependency parsers with entropic priors. In Proceedings of EMNLP/CoNLL-2007, pages 667–677, Prague, Czech Republic, June. David A. Smith and Noah A. Smith. 2004. Bilingual parsing with factored estimation: Using English to parse Korean. In Proceedings of EMNLP2004, pages 49–56. David A. Smith and Noah A. 
Smith. 2007. Probabilistic models of nonporjective dependency trees. In Proceedings of EMNLP-CONLL 2007, pages 132– 140, Prague, Czech, June. Valentin I. Spitkovsky, Hiyan Alshawi, and Daniel Jurafsky. 2010. From baby steps to leapfrog: How “less is more” in unsupervised dependency parsing. In Proceedings of NAACL/HLT-2010, pages 751– 759, Los Angeles, California, June. Valentin I. Spitkovsky, Hiyan Alshawi, and Daniel Jurafsky. 2013. Breaking out of local optima with count transforms and model recombination: A study in grammar induction. In Proceedings of EMNLP2013, pages 1983–1995, Seattle, Washington, USA, October. 1347 Kristina Toutanova, Dan Klein, Christopher D. Manning, and Yoram Singer. 2003. Feature-rich part-ofspeech tagging with a cyclic dependency network. In Proceedings of NAACL/HLT-2003, pages 252– 259. Jun Xie, Haitao Mi, and Qun Liu. 2011. A novel dependency-to-string model for statistical machine translation. In Proceedings of EMNLP-2011, pages 216–226, Edinburgh, Scotland, UK., July. Hao Zhang, Liang Huang, Kai Zhao, and Ryan McDonald. 2013. Online learning for inexact hypergraph search. In Proceedings of EMNLP-2013, pages 908–913, Seattle, Washington, USA, October. 1348
2014
126
Proceedings of the 52nd Annual Meeting of the Association for Computational Linguistics, pages 1349–1359, Baltimore, Maryland, USA, June 23-25 2014. c⃝2014 Association for Computational Linguistics Unsupervised Morphology-Based Vocabulary Expansion Mohammad Sadegh Rasooli, Thomas Lippincott, Nizar Habash and Owen Rambow Center for Computational Learning Systems Columbia University, New York, NY, USA {rasooli,tom,habash,rambow}@ccls.columbia.edu Abstract We present a novel way of generating unseen words, which is useful for certain applications such as automatic speech recognition or optical character recognition in low-resource languages. We test our vocabulary generator on seven low-resource languages by measuring the decrease in out-of-vocabulary word rate on a held-out test set. The languages we study have very different morphological properties; we show how our results differ depending on the morphological complexity of the language. In our best result (on Assamese), our approach can predict 29% of the token-based out-of-vocabulary with a small amount of unlabeled training data. 1 Introduction In many applications in human language technologies (HLT), the goal is to generate text in a target language, using its standard orthography. Typical examples include automatic speech recognition (ASR, also known as STT or speech-to-text), optical character recognition (OCR), or machine translation (MT) into a target language. We will call such HLT applications “target-language generation technologies” (TLGT). The best-performing systems for these applications today rely on training on large amounts of data: in the case of ASR, the data is aligned audio and transcription, plus large unannotated data for the language modeling; in the case of OCR, it is transcribed optical data; in the case of MT, it is aligned bitexts. More data provides for better results. For languages with rich resources, such as English, more data is often the best solution, since the required data is readily available (including bitexts), and the cost of annotating (e.g., transcribing) data is outweighed by the potential significance of the systems that the data will enable. Thus, in HLT, improvements in quality are often brought about by using larger data sets (Banko and Brill, 2001). When we move to low-resource languages, the solution of simply using more data becomes less appealing. Unannotated data is less readily available: for example, at the time of publishing this paper, 55% of all websites are in English, the top 10 languages collectively account for 90% of web presence, and the top 36 languages have a web presence that covers at least 0.1% of web sites.1 All other languages (and all languages considered in this paper except Persian) have a web presence of less than 0.1%. Considering Wikipedia, another resource often used in HLT, English has 4.4 million articles, while only 48 other languages have more than 100,000.2 As attention turns to developing HLT for more languages, including lowresource languages, alternatives to “more-data” approaches become important. At the same time, it is often not possible to use knowledge-rich approaches. For low-resource languages, resources such as morphological analyzers are not usually available, and even good scholarly descriptions of the morphology (from which a tool could be built) are often not available. The challenge is therefore to use data, but to make do with a small amount of data, and thus to use data better. This paper is a contribution to this goal. 
Specifically, we address TLGTs, i.e., the types of HLT mentioned above that generate target language text. We propose a new approach to generating unseen words of the target language which have not been seen in the training data. Our approach is entirely unsupervised. It assumes that word-units are specified, typically by whitespace and punctuation. 1http://en.wikipedia.org/wiki/ Languages_used_on_the_Internet 2http://meta.wikimedia.org/wiki/List_ of_Wikipedias 1349 Expanding the vocabulary of the target language can be useful for TLGTs in different ways. For ASR and OCR, which can compose words from smaller units (phones or graphically recognized letters), an expanded target language vocabulary can be directly exploited without the need for changing the technology at all: the new words need to be inserted into the relevant resources (lexicon, language model) etc, with appropriately estimated probabilities. In the case of MT into morphologically rich low-resource languages, morphological segmentation is typically used in developing the translation models to reduce sparsity, but this does not guarantee against generating wrong word combinations. The expanded word combinations can be used to extend the language models used for MT to bias against incoherent hypothesized new sequences of segmented words. Our approach relies on unsupervised morphological segmentation. We do not in this paper contribute to research in unsupervised morphological segmentation; we only use it. The contribution of this paper lies in proposing how to use the results of unsupervised morphological segmentation in order to generate unseen words of the language. We investigate several ways of doing so, and we test them on seven low-resource languages. These languages have very different morphological properties, and we show how our results differ depending on the morphological complexity of the language. In our best result (on Assamese), we show that our approach can predict 29% of the tokenbased out-of-vocabulary with a small amount of unlabeled training data. The paper is structured as follows. We first discuss related work in Section 2. We then present our method in Section 3, and present experimental results in Section 4. We conclude with a discussion of future work in Section 5. 2 Related Work Approaches to Morphological Modeling Computational morphology is a very active area of research with a multitude of approaches that vary in the degree of manual annotation needed, and the amount of machine learning used. At one extreme, we find systems that are painstakingly and carefully designed by hand (Koskenniemi, 1983; Buckwalter, 2004; Habash and Rambow, 2006; D´etrez and Ranta, 2012). Next on the continuum, we find work that focuses on defining morphological models with limited lexica that are then extended using raw text (Cl´ement et al., 2004; Forsberg et al., 2006). In the middle of this continuum, we find efforts to learn complete paradigms using fully supervised methods relying on completely annotated data points with rich morphological information (Durrett and DeNero, 2013; Eskander et al., 2013). Next, there is work on minimally supervised methods that use available resources such as dictionaries, bitexts, and other additional morphological annotations (Yarowsky and Wicentowski, 2000; Cucerzan and Yarowsky, 2002; Neuvel and Fulop, 2002; Snyder and Barzilay, 2008). 
At the other extreme, we find unsupervised methods that learn morphology models from unannotated data (Creutz and Lagus, 2007; Monson et al., 2008; Dreyer and Eisner, 2011; Sirts and Goldwater, 2013). The work we present in this paper makes no use of any morphological annotations whatsoever, yet we are quite distinct from the approaches cited above. We compare our work to two efforts specifically. First, consider work in automatic morphological segmentation learning from unannotated data (Creutz and Lagus, 2007; Monson et al., 2008). Unlike these approaches which provide segmentations for training data and produce models that can be used to segment unseen words, our approach can generate words that have not been seen in the training data. The focus of efforts is rather complementary: we actually use an off-theshelf unsupervised segmentation system (Creutz and Lagus, 2007) as part of our approach. Second, consider paradigm completion methods such as the work of Dreyer and Eisner (2011). This effort is closely related to our work although unlike it, we make no assumptions about the data and do not introduce any restrictions along the lines of derivation/inflectional morphology: Dreyer and Eisner (2011) limited their work to verbal paradigms and used annotated training data in addition to basic assumptions about the problem such as the size of the paradigms. In our approach, we have zero annotated information and we do not distinguish between inflectional and derivational morphology, nor do we limit ourselves to a specific part-ofspeech (POS). Vocabulary Expansion in HLT There have been diverse approaches towards dealing with outof-vocabulary (OOV) words in ASR. In some models, the approach is to expand the lexicon by 1350 adding new words or pronunciations. Ohtsuki et al. (2005) propose a two-run model where in the first run, the input speech is recognized by the reference vocabulary and relevant words are extracted from the vocabulary database and added thereafter to the reference vocabulary to build an expanded lexicon. Word recognition is done in the second run based on the lexicon. Lei et al. (2009) expanded the pronunciation lexicon via generating all possible pronunciations for a word before lattice generation and indexation. There are also other methods for generating abbreviations in voice search systems such as Yang et al. (2012). While all of these approaches involve lexicon expansion, they do not employ any morphological information. In the context of MT, several researchers have addressed the problem of OOV words by relating them to known in-vocabulary (INV) words. Yang and Kirchhoff (2006) anticipated OOV words that are potentially morphologically related using phrase-based backoff models. Habash (2008) considered different techniques for vocabulary expansion online. One of their techniques learned models of morphological mapping between morphologically rich source words in Arabic that produce the same English translation. This was used to relate an OOV word to a morphologically related INV word. Another technique expanded the MT phrase tables with possible transliterations and spelling alternatives. 3 Morphology-based Vocabulary Expansion 3.1 Approach Our approach to morphology-based vocabulary expansion consists of three steps (Figure 1). We start with a “training” corpus of (unannotated) words and generate a list of new (unseen) words that expands the vocabulary of the training corpus. 1. 
Unsupervised Morphology Segmentation The first step is to segment each word in the training corpus into sequences of prefixes, stem and suffixes, where the prefixes and suffixes are optional.3 2. FST-based Morphology Expansion We then construct new word models using the 3In this paper, we use an off-the-shelf system for this step but plan to explore new methods in the future, such as joint segmentation and expansion. segmented stems and affixes. We explore two different techniques for morphology-based vocabulary expansion that we discuss below. The output of these models is represented as a weighted finite state machine (WFST). 3. Reranking Models Given that the size of the expanded vocabulary can be quite large and it may include a lot of over-generation, we rerank the expanded set of words before taking the top n words to use in downstream processes. We consider four reranking conditions which we describe below. Training Transcripts Unsupervised Morphology Segmentation Segmented Words FST-based Expansion Model Expanded List Reranking Reranked Expansion Figure 1: The flowchart of the lexicon expansion system. 3.2 Morphology Expansion Techniques As stated above, the input to the morphology expansion step is a list of words segmented into morphemes: zero or more prefixes, one stem, and zero or more suffixes. Figure 2a presents an example of such input using English words (for clarity). We use two different models of morphology expansion in this paper: Fixed Affix model and Bigram Affix model. 3.2.1 Fixed Affix Expansion Model In the Fixed Affix model, we construct a set of fused prefixes from all the unique prefix sequences in the training data; and we similarly construct a 1351 re+ pro+ duc +e func +tion +al re+ duc +e re+ duc +tion +s in pro+ duct concept +u +al + ly (a) Training data with morpheme boundaries. Prefixes end with and suffixes start with “+” signs. 3 0 1 repro <epsilon> re pro 2 duc func in concept duct e tional tions utually <epsilon> (b) FST for the Fixed Affix expansion model 3 0 4 re <epsilon> 1 pro <epsilon> 2 duc func in concept duct e <epsilon> 5 tion u 7 tion 6 al s ly <epsilon> (c) FST for the Bigram Affix expansion model Figure 2: Two models of word generation from morphologically annotated data. In our experiments, we used weighted finite state machine. We use character-based WFST in the implementation to facilitate analyzing inputs as well as word generation. set of fused suffixes from all the unique suffix sequences in the training data. In other words, we simply pick characters from beginning of the word up to the first stem as the prefix and characters from the first suffix to the end of the word as the suffix. Everything in the middle is the stem. In this model, each word has one single prefix and one single suffix (each of which can be empty independently). The Fixed Affix model is simply the concatenation of the disjunction of all prefixes with the disjunction of all stems and the disjunction of all suffixes into one FST: prefix →stem →suffix The morpheme paths in the FST are weighted to reflect their probability in the training corpus.4 Figure 2b exemplifies a Fixed Affix model derived from the example training data in Figure 2a. 4We convert the probability into a cost by taking the negative of the log of the probability. 3.2.2 Bigram Affix Expansion Model In the Bigram Affix model, we do the same for the stem as in the Fixed Affix model, but for prefixes and suffixes, we create a bigram language model in the finite state machine. 
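A minimal sketch, in plain Python rather than as a WFST and with hypothetical helper names, of the bookkeeping behind the Fixed Affix model just described; the Bigram Affix variant would replace the single fused-affix counts with bigram counts over affix sequences:

```python
from collections import Counter
import math

def fuse(segmented_word):
    """segmented_word: (prefixes, stem, suffixes), e.g. (['re', 'pro'], 'duc', ['e']).
    Returns the single fused (prefix, stem, suffix) triple of the Fixed Affix model."""
    prefixes, stem, suffixes = segmented_word
    return "".join(prefixes), stem, "".join(suffixes)

def train_fixed_affix(segmented_words):
    """Count fused prefixes, stems and fused suffixes in the training corpus."""
    pre, stem, suf = Counter(), Counter(), Counter()
    for word in segmented_words:
        p, s, f = fuse(word)
        pre[p] += 1
        stem[s] += 1
        suf[f] += 1
    return pre, stem, suf

def cost(candidate, model):
    """Negative log-probability of a (prefix, stem, suffix) combination under
    independent morpheme probabilities (footnote 4); lower cost = more likely."""
    total_cost = 0.0
    for counter, item in zip(model, candidate):
        if counter[item] == 0:
            return float("inf")          # morpheme never seen in training
        total_cost -= math.log(counter[item] / sum(counter.values()))
    return total_cost

# toy data: a subset of the Figure 2a training words
data = [(["re", "pro"], "duc", ["e"]),
        ([], "func", ["tion", "al"]),
        (["re"], "duc", ["tion", "s"])]
model = train_fixed_affix(data)
# "repro" + "duc" + "tions" recombines seen morphemes into an unseen word
print(cost(("repro", "duc", "tions"), model))
```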
The advantage of this technique is that unseen compound affixes can be generated by our model. For example, the Fixed Affix model in Figure 2b cannot generate the word func+tion+al+ly since the suffix +tionally is not seen in the training data. However, this word can be generated in the Bigram Affix model as shown in Figure 2c: there is a path passing 0 →4 →1 → 2 →5 →6 →3 in the FST that can produce this word. We expect this model to have better recall for generating new words in the language because of its affixation flexibility. 3.3 Reranking Techniques The expanded models allow for a large number of words to be generated. We limit the number of vocabulary expansion using different thresholds after reranking or reweighing the WFSTs generated 1352 above. We consider four reranking conditions. 3.3.1 No Reranking (NRR) The baseline reranking option is no reranking (NRR). In this approach we use the weights in the WFST, which are based on the independent prefix/stem/suffix probabilities, to determine the ranking of the expanded vocabulary. 3.3.2 Trigraph-based Reweighting (W◦Tr) We reweight the weights in the WFST model (Fixed or Bigram) by composing it with a letter trigraph language model (W◦Tr). A letter trigraph LM is itself a WFST where each trigraph (a sequence of three consequent letters) has an associated weight equal to its negative log-likelihood in the training data. This reweighting allows us to model preferences of sequences of word letters seen more in the training data. For example, in a word like producttions, the trigraphs ctt and tti are very rare and thus decrease its probability. 3.3.3 Trigraph-based Reranking (TRR) When we compose our initial WFST with the trigraph FST, the probability of each generated word from the new FST is equal to the product of the probability of its morphemes and the probabilities of each trigraph in that word. This basically makes the model prefer shorter words and may degrade the effect of morphology information. Instead of reweighting the WFST, we get the n-best list of generated words and rerank them using their trigraph probabilities. We will refer to this technique as TRR. 3.3.4 Reranking Morpheme Boundaries (BRR) The last reranking technique reranks the n-best generated word list with trigraphs that are incident on the morpheme boundaries (in case of Bigram Affix model, the last prefix and first suffix). The intuition is that we already know that any morpheme that is generated from the morphology FST is already seen in the training data but the boundary for different morphemes are not guaranteed to be seen in the training data. For example, for the word producttions, we only take into account the trigraphs rod, odu, ctt and tti instead of all possible trigraphs. We will refer to this technique as BRR. 4 Evaluation 4.1 Evaluation Data and Tools Evaluation Data The IARPA Babel program is a research program for developing rapid spoken detection systems for under-resourced languages (Harper, 2013). We use the IARPA Babel program limited language pack data which consists of 20 hours of telephone speech with transcription. We use six languages which are known to have rich morphology: Assamese (IARPAbabel102b-v0.5a), Bengali (IARPA-babel103bv0.4b), Pashto (IARPA-babel104b-v0.4bY), Tagalog (IARPA-babel106-v0.2g), Turkish (IARPAbabel105b-v0.4) and Zulu (IARPA-babel206bv0.1e). 
Speech annotations such as silences and hesitations are removed from the transcriptions, and all words are lower-cased (for the languages using the Roman script: Tagalog, Turkish and Zulu). Moreover, in order to be able to perform a manual error analysis, we include a language that has rich morphology and of which the first author is a native speaker: Persian. We sampled data from the training and development sets of the Persian dependency treebank (Rasooli et al., 2013) to create a comparable seventh dataset in Persian. Statistics about the datasets are shown in Table 1. We also conduct further experiments on just the verbs and nouns in the Persian data set (Persian-N and Persian-V). As shown in Table 1, the training data is very small and the OOV rate is high, especially in terms of types. For languages with richer morphology, such as Turkish and Zulu, the OOV rate is much higher than for the other languages.

Language    Train Types  Train Tokens  Dev Types  Dev Tokens  Type OOV%  Token OOV%
Assamese    8694         73151         7253       66184       49.57       8.28
Bengali     9460         81476         7794       70633       50.65       8.47
Pashto      6968         115069        6135       108137      44.89       4.25
Persian     14047        71527         10479      42939       44.16      12.78
Tagalog     6213         69577         5480       64334       54.95       7.81
Turkish     11985        77128         9852       67042       56.84      12.34
Zulu        15868        65655         13756      57141       68.72      21.76
Persian-N   9204         31369         7502       18816       46.36      22.11
Persian-V   2653         11409         1332       7318        41.07       9.01

Table 1: Statistics of training and development data for morphology-based unsupervised word generation experiments.

Word Generation Tools and Settings For unsupervised learning of morphology, we use Morfessor CAT-MAP (v. 0.9.2), which was shown to be a very accurate morphological analyzer for morphologically rich languages (Creutz and Lagus, 2007). In order to be able to analyze Unicode-based data, we convert each character in our dataset to a conventional ASCII character and then train Morfessor on the mapped dataset; after training, we map the data back to the original character set. We use the default Morfessor settings for unsupervised learning. For preparing the WFST, we use OpenFST (Riley et al., 2009). We take the top one million shortest paths (i.e., the least costly word paths) and apply our reranking models on them. It is worth pointing out that our WFSTs are character-based, and thus they also serve as a morphological analyzer that can give all possible segmentations for a given word. By running this analyzer on the OOVs, we can obtain the potential upper bound of OOV reduction by the system (labeled "∞" in Tables 2 and 3).

4.2 Lexicon Expansion Results

The results for lexicon expansion are shown in Table 2 for types and Table 3 for tokens. We use the trigraph WFST as our baseline model. This model does not use any morphological information: words are generated according to the likelihood of their trigraphs, without using any information from the morphological segmentation. We call this model the trigraph WFST (Tr. WFST). We consistently obtain better numbers than this baseline in all of our models, except for Pashto when measured on tokens. ∞ is the upper-bound OOV reduction for our expansion model: for each word in the development set, we ask whether our model, without any vocabulary size restriction at all, could generate it. The best results (again, except for Pashto) are achieved using one of the three reranking methods (reranking by trigraph probabilities or morpheme boundaries), as opposed to doing no reranking.
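To make the two reranking scores behind these results concrete, the sketch below implements the TRR and BRR scoring described in Section 3.3 as stand-alone functions. It is a simplification of the actual system: the add-alpha smoothing constant, the lack of word-boundary padding, and the n-best interface are assumptions for illustration only.

```python
import math
from collections import Counter

def train_trigraph_logprob(training_words, alpha=1.0):
    """Letter-trigraph log-probabilities with add-alpha smoothing (an assumption)."""
    counts = Counter()
    for w in training_words:
        counts.update(w[i:i + 3] for i in range(len(w) - 2))
    total = sum(counts.values())
    vocab = len(counts) + 1
    return lambda tri: math.log((counts[tri] + alpha) / (total + alpha * vocab))

def trr_score(word, logprob):
    """TRR: sum of log-probabilities of all letter trigraphs in the word."""
    return sum(logprob(word[i:i + 3]) for i in range(len(word) - 2))

def brr_score(word, boundaries, logprob):
    """BRR: only trigraphs that straddle a morpheme boundary are scored.
    For 'producttions' with boundaries [3, 7] this picks exactly
    {'rod', 'odu', 'ctt', 'tti'}, matching the example in the text."""
    tris = set()
    for b in boundaries:
        for i in (b - 2, b - 1):
            if 0 <= i <= len(word) - 3:
                tris.add(word[i:i + 3])
    return sum(logprob(t) for t in tris)

def rerank(candidates, score_fn, top_n=50000):
    """Sort an expanded n-best list by descending score and keep the top_n items."""
    return sorted(candidates, key=score_fn, reverse=True)[:top_n]

# toy usage
logprob = train_trigraph_logprob(["production", "productions", "reduce"])
nbest = ["production", "producttions", "reductionly"]
print(rerank(nbest, lambda w: trr_score(w, logprob), top_n=2))
```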
To our surprise, the Fixed Affix model does a slightly better job in reducing out of vocabulary than the Bigram Affix model. We can also see from the results that reranking in general is very effective. We also compare our models with the case that there is much more training data and we do not do vocabulary expansion at all. In Table 2 and Table 3, “FP” indicates the full language pack for the Babel project data which is approximately six to eight times larger than the limited pack training data, and the full training data for Persian which is approximately five times larger. We see that the larger training data outperforms our methods in all languages. However, from the results of ∞, which is the upper-bound OOV reduction by our expansion model, for some languages such as Assamese, our numbers are close to the FP results and for Zulu it is even better than FP. We also study how OOV reduction is affected by the size of the generated vocabulary. The trends for different sizes of the lexicon expansion by Fixed Affix model that is reranked by trigraph probabilities is shown in Figure 3. As seen in the results, for languages that have richer morphology, it is harder to achieve results near to the upper bound. As an outlier, morphology does not help for Pashto. One possible reason might be that based on the results in Table 4, Morfessor does not explore morphology in Pashto as well as other languages. Morphological Complexity As for further analysis, we can study the correlation between morphological complexity and hardness of reducing OOVs. Much work has been done in linguistics to classify languages (Sapir, 1921; Greenberg, 1960). The common wisdom is that languages are not either agglutinative or fusional, but are on a spectrum; however, no work to our knowledge places all languages (or at least the ones we worked on) on such a spectrum. We propose several metrics. First, we can consider the number of unique affixival morphemes in each language, as determined by Morfessor. As shown in Table 4 (|pr| + |sf|), Zulu has the most morphemes and Pashto the fewest. A second possible metric of the 1354 Language Tr. Fixed Affix Model Bigram Affix Model FP WFST NRR W◦Tr TRR BRR ∞ NRR W◦Tr TRR BRR ∞ Assamese 15.94 24.03 28.46 28.15 27.15 48.07 23.50 28.15 27.84 26.59 51.02 50.96 Bengali 15.68 20.09 24.75 24.49 22.54 40.98 21.78 24.65 24.67 23.51 42.55 48.83 Pashto 18.70 19.03 19.28 19.24 18.63 25.13 19.43 18.81 18.92 18.77 25.24 64.96 Persian 12.83 18.95 18.39 19.30 19.99 50.11 18.58 18.09 18.65 18.84 53.13 58.45 Tagalog 11.39 14.61 16.51 16.21 16.81 35.64 14.45 16.01 15.81 16.74 38.72 53.64 Turkish 07.75 09.11 14.79 14.79 14.71 55.48 09.04 13.63 14.34 13.52 66.54 53.54 Zulu 07.63 11.87 12.96 13.87 13.68 66.73 12.04 12.35 13.69 13.75 82.38 35.62 Average 12.85 16.81 19.31 19.31 19.07 46.02 17.02 18.81 19.13 18.81 51.37 52.29 Persian-N 14.86 24.67 22.74 22.83 24.15 37.32 23.78 21.68 22.51 23.32 38.38 Persian-V 54.84 68.19 72.39 73.49 71.12 80.44 67.28 71.48 72.58 70.02 80.62 Table 2: Type-based expansion results for the 50k-best list for different models. Tr. WFST stands for trigraph WFST, NRR for no reranking, W◦Tr for trigraph reweighting, TRR for trigraph-based rereanking, BRR for reranking morpheme boundary, and ∞for the upper bound of OOV reduction via lexicon expansion if we produce all words. FP (full-pack data) shows the effect of using bigger data with the size of about seven times larger than our data set, instead of using our unsupervised approach. Language Tr. 
Fixed Affix Model Bigram Affix Model FP WFST NRR W◦Tr TRR BRR ∞ NRR W◦Tr TRR BRR ∞ Assamese 18.07 25.70 29.43 29.12 28.13 47.88 25.34 29.06 28.82 27.64 50.31 58.03 Bengali 17.79 20.91 25.61 25.27 23.65 40.60 22.58 25.20 25.41 24.77 42.22 55.92 Pashto 21.27 19.40 19.94 19.92 18.59 25.45 19.68 19.40 19.29 18.72 25.58 71.46 Persian 14.78 20.77 20.32 21.30 22.03 51.00 20.63 19.72 20.61 20.95 54.01 63.10 Tagalog 12.88 14.55 16.88 16.36 16.60 33.95 14.37 16.12 16.12 16.38 37.07 61.53 Turkish 09.97 11.42 17.82 17.67 17.23 56.54 11.05 16.82 17.41 15.98 66.54 59.68 Zulu 08.85 13.70 14.72 15.62 15.67 68.07 13.70 14.07 15.47 15.60 87.90 41.27 Average 14.80 18.06 20.67 20.75 20.27 44.78 18.19 20.48 20.45 20.01 51.95 58.71 Persian-N 16.82 26.46 24.42 24.56 25.71 38.40 25.69 23.50 24.20 25.04 39.41 – Persian-V 60.09 71.47 75.57 76.48 73.60 82.55 70.56 74.81 75.72 72.53 82.70 – Table 3: Token-based expansion results for the 50k-best list for different models. Abbreviations are the same as Table 2. complexity of the morphology is by calculating the average number of unique prefix-suffix pairs in the training data after morpheme segmentation which is shown as |If| in Table 4. Finally, a third possible metric is the number of all possible words that can be generated (|L|). These three metrics correlate fairly well across the languages. The metrics we propose also correlate with commonly accepted classifications: e.g., Zulu and Turkish (highly agglutinative) have higher scores in terms of our |pr| + |sf|, |If| and |L| metrics in Table 4 than other languages. The results from full language packs in Table 3 also show that there is a reverse interaction of morphological complexity and the effect of blindly adding more data. Thus for morphologically rich languages, adding more data is less effective than for languages with poor morphology. The size of the languages (|L|) suggests that we are suffering from vast overgeneration; we overgenerate because in our model any affix can attach to any stem, which is not in general true. Thus there is a lack of linguistic knowledge such as paradigm information (Stump, 2001) for each word category in our model. In other words, all morphemes are treated the same in our model which is not true in natural languages. One way to tackle this problem is through an unsupervised POS tagger. The challenge here is that fully unsupervised POS taggers (without any tag dictionary) are not very accurate (Christodoulopoulos et al., 2010). Another way is through using joint mor1355 Figure 3: Trends for token-based OOV reduction with different sizes for the Fixed Affix model with trigraph reranking. Language |pr| |stm| |sf| |L| |If| Assamese 4 4791 564 10.8M 1.8 Bengali 3 6496 378 7.4M 1.5 Pashto 1 5395 271 1.5M 1.3 Persian 49 6998 538 184M 2.0 Tagalog 179 4259 299 228M 1.5 Turkish 45 5266 1801 427M 2.3 Zulu 2254 5680 427 5.5B 2.8 Persian-N 3 6121 268 4.9M 1.5 Persian-V 43 788 44 1.5M 3.4 Table 4: Information about the number of unique morphemes in the Fixed Affix model for each dataset including empty affixes. |L| shows the upper bound of the number of possible unique words that can be generated from the word generation model. |If| is the average number of unique prefix-suffix pairs (including empty pairs) for each stem. phology and tagging models such as Frank et al. (2013). Error Analysis on Turkish Unfortunately for most languages we could not find an available rule-based or supervised morphological analyzer to verify the words generated by our model. 
The only tool available to us is a Turkish finite-state morphological analyzer (Oflazer, 1996) implemented with the Xerox FST toolkit (Beesley and Karttunen, 2003). As we can see in Table 5, the system with the largest proportion of correctly generated words reranks the expansion with trigraph probabilities using the Fixed Affix model. The results also show that we are overgenerating many nonsense words that we ought to be pruning from our results. Another observation is that the recognition percentage of the morphological analyzer on INV words is much higher than on OOVs, which shows that the OOVs in the Turkish dataset are much harder to analyze.

Model                         Precision
Tr. WFST                      17.19
Fixed Affix Model   NRR       13.36
                    W◦Tr      25.66
                    TRR       26.30
                    BRR       25.14
Bigram Affix Model  NRR       12.94
                    W◦Tr      24.21
                    TRR       25.39
                    BRR       23.45
Development words             89.30
    INVs                      95.44
    OOVs                      84.64

Table 5: Results from running a hand-crafted Turkish morphological analyzer (Oflazer, 1996) on the different expansions and on the development set. Precision refers to the percentage of words that are recognized by the analyzer. The results on the development set are also separated into INV and OOV words.

Error Analysis on Persian From the best 50k-word result for Persian (Fixed Affix model with BRR), we randomly picked 200 words and manually analyzed them. 89 words are correct (45.5%), where 55.0% of these words are from noun affixation, 23.6% from verb clitics, 9.0% from verb inflections, 5.6% from incorrect affixations that accidentally resulted in possible words, 4.5% from uninflected stems, and a few from adjective affixation. Among the incorrectly generated words, 65.8% come from combining a stem of one POS with affixes from another POS (e.g., attaching a noun affix to a verb stem), 14.4% from combining a stem with affixes that are compatible with its POS but not allowed for that particular stem (e.g., there is a noun suffix that can only attach to a subset of noun stems), 9.0% from wrong affixes produced by Morfessor, and the rest from incorrect vowel harmony or double affixation. In order to study the effect of vocabulary expansion more deeply, we trained on a subset of all nouns and all verbs in the same dataset (also shown in Table 1). Verbs in Persian have rich but more or less regular morphology, while nouns, which have many irregular cases, have rich morphology but not as rich as verbs. The results in Table 4 show that Morfessor captures these phenomena. Furthermore, our results in Table 2 and Table 3 show that our performance on OOV reduction for verbs is far superior to our performance for nouns. We also randomly picked 200 words from each of the experiments (nouns and verbs) to study the degree of correctness of the generated forms. For nouns, 94 words are correct, and for verbs only 71 words are correct. Most verb errors are due to incorrect morpheme extraction by Morfessor. In contrast, most noun errors result from affixes that are only compatible with a subset of all possible noun stems. This suggests that if we conduct experiments using more accurate unsupervised morphology and also have a more fine-grained paradigm completion model, we might improve our performance.

5 Conclusion and Future Work

We have presented an approach to generating new words. This approach is useful for low-resource, morphologically rich languages. It provides words that can be used in HLT applications that require target-language generation in such a language, such as ASR, OCR, and MT. An implementation of our approach, named BabelGUM (Babel General Unsupervised Morphology), will be publicly available.
Please contact the authors for more information. In future work we will explore the possibility of jointly performing unsupervised morphological segmentation with clustering of words into classes with similar morphological behavior. These classes will extend POS classes. We will tune the system for our purposes, namely OOV reduction. Acknowledgements We thank Anahita Bhiwandiwalla, Brian Kingsbury, Lidia Mangu, Michael Picheny, Benoˆıt Sagot, Murat Saraclar, and G´eraldine Walther for helpful discussions. The project is supported by the Intelligence Advanced Research Projects Activity (IARPA) via Department of Defense U.S. Army Research Laboratory (DoD/ARL) contract number W911NF-12-C-0012. The U.S. Government is authorized to reproduce and distribute reprints for Governmental purposes notwithstanding any copyright annotation thereon. Disclaimer: The views and conclusions contained herein are those of the authors and should not be interpreted as necessarily representing the official policies or endorsements, either expressed or implied, of IARPA, DoD/ARL, or the U.S. Government. 1357 References Michele Banko and Eric Brill. 2001. Scaling to very very large corpora for natural language disambiguation. In Proceedings of the 39th Annual Meeting on Association for Computational Linguistics, ACL ’01, pages 26–33, Stroudsburg, PA, USA. Association for Computational Linguistics. Kenneth R Beesley and Lauri Karttunen. 2003. Finitestate morphology: Xerox tools and techniques. CSLI, Stanford. Tim Buckwalter. 2004. Buckwalter Arabic Morphological Analyzer Version 2.0. LDC catalog number LDC2004L02, ISBN 1-58563-324-0. Christos Christodoulopoulos, Sharon Goldwater, and Mark Steedman. 2010. Two decades of unsupervised pos induction: How far have we come? In Proceedings of the 2010 Conference on Empirical Methods in Natural Language Processing, pages 575–584. Association for Computational Linguistics. Lionel Cl´ement, Benoˆıt Sagot, and Bernard Lang. 2004. Morphology based automatic acquisition of large-coverage lexica. In Proceedings of the Fourth International Conference on Language Resources and Evaluation (LREC’04). European Language Resources Association (ELRA). Mathias Creutz and Krista Lagus. 2007. Unsupervised models for morpheme segmentation and morphology learning. ACM Transactions on Speech and Language Processing (TSLP), 4(1):3. Silviu Cucerzan and David Yarowsky. 2002. Bootstrapping a multilingual part-of-speech tagger in one person-day. In The 6th Conference on Natural Language Learning (CoNLL-2002), pages 1–7. Gr´egoire D´etrez and Aarne Ranta. 2012. Smart paradigms and the predictability and complexity of inflectional morphology. In Proceedings of the 13th Conference of the European Chapter of the Association for Computational Linguistics, pages 645–653. Association for Computational Linguistics. Markus Dreyer and Jason Eisner. 2011. Discovering morphological paradigms from plain text using a dirichlet process mixture model. In Proceedings of the 2011 Conference on Empirical Methods in Natural Language Processing, pages 616–627. Association for Computational Linguistics. Greg Durrett and John DeNero. 2013. Supervised learning of complete morphological paradigms. In Proceedings of the 2013 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, pages 1185–1195. Association for Computational Linguistics. Ramy Eskander, Nizar Habash, and Owen Rambow. 2013. 
Automatic extraction of morphological lexicons from morphologically annotated corpora. In Proceedings of the 2013 Conference on Empirical Methods in Natural Language Processing, pages 1032–1043, Seattle, Washington, USA, October. Association for Computational Linguistics. Markus Forsberg, Harald Hammarstr¨om, and Aarne Ranta. 2006. Morphological lexicon extraction from raw text data. Advances in Natural Language Processing, pages 488–499. Stella Frank, Frank Keller, and Sharon Goldwater. 2013. Exploring the utility of joint morphological and syntactic learning from child-directed speech. In Proceedings of the 2013 Conference on Empirical Methods in Natural Language Processing, pages 30–41. Association for Computational Linguistics. Joseph H Greenberg. 1960. A quantitative approach to the morphological typology of language. International journal of American linguistics, pages 178– 194. Nizar Habash and Owen Rambow. 2006. MAGEAD: A morphological analyzer and generator for the Arabic dialects. In Proceedings of the 21st International Conference on Computational Linguistics and 44th Annual Meeting of the Association for Computational Linguistics, pages 681–688, Sydney, Australia. Nizar Habash. 2008. Four techniques for online handling of out-of-vocabulary words in Arabic-English statistical machine translation. In Proceedings of the 46th Annual Meeting of the Association for Computational Linguistics on Human Language Technologies: Short Papers, pages 57–60. Association for Computational Linguistics. Mary Harper. 2013. The babel program and low resource speech technology. In Automatic Speech Recognition and Understanding Workshop (ASRU) Invited talk. Kimmo Koskenniemi. 1983. Two-Level Model for Morphological Analysis. In Proceedings of the 8th International Joint Conference on Artificial Intelligence, pages 683–685. Xin Lei, Wen Wang, and Andreas Stolcke. 2009. Data-driven lexicon expansion for Mandarin broadcast news and conversation speech recognition. In International conference on Acoustics, Speech and Signal Processing (ICASSP), pages 4329–4332. Christian Monson, Jaime Carbonell, Alon Lavie, and Lori Levin. 2008. Paramor: Finding paradigms across morphology. Advances in Multilingual and Multimodal Information Retrieval, pages 900–907. Sylvain Neuvel and Sean A Fulop. 2002. Unsupervised learning of morphology without morphemes. In Proceedings of the ACL-02 workshop on Morphological and phonological learning-Volume 6, pages 31–40. Association for Computational Linguistics. 1358 Kemal Oflazer. 1996. Error-tolerant finite-state recognition with applications to morphological analysis and spelling correction. Computational Linguistics, 22(1):73–89. Katsutoshi Ohtsuki, Nobuaki Hiroshima, Masahiro Oku, and Akihiro Imamura. 2005. Unsupervised vocabulary expansion for automatic transcription of broadcast news. In International conference on Acoustics, Speech and Signal Processing (ICASSP), pages 1021–1024. Mohammad Sadegh Rasooli, Manouchehr Kouhestani, and Amirsaeid Moloodi. 2013. Development of a Persian syntactic dependency treebank. In Proceedings of the 2013 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, pages 306–314. Association for Computational Linguistics. Michael Riley, Cyril Allauzen, and Martin Jansche. 2009. Openfst: An open-source, weighted finitestate transducer library and its applications to speech and language. 
In Human Language Technologies Tutorials: The 2009 Annual Conference of the North American Chapter of the Association for Computational Linguistics, pages 9–10. Edward Sapir. 1921. Language: An introduction to the study of speech. Harcourt, Brace and company (New York). Kairit Sirts and Sharon Goldwater. 2013. Minimallysupervised morphological segmentation using adaptor grammars. Transactions for the ACL, 1:255–266. Benjamin Snyder and Regina Barzilay. 2008. Unsupervised multilingual learning for morphological segmentation. In Proceedings of the 46th annual meeting of the association for computational linguistics: Human language Technologies (ACLHLT), pages 737–745. Association for Computational Linguistics. Gregory T. Stump. 2001. A theory of paradigm structure. Cambridge. Mei Yang and Katrin Kirchhoff. 2006. Phrase-based backoff models for machine translation of highly inflected languages. In Proceedings of Conference of the European Chapter of the Association for Computational Linguistics (EACL), pages 41–48, Trento, Italy. Dong Yang, Yi-Cheng Pan, and Sadaoki Furui. 2012. Vocabulary expansion through automatic abbreviation generation for Chinese voice search. Computer Speech & Language, 26(5):321–335. David Yarowsky and Richard Wicentowski. 2000. Minimally supervised morphological analysis by multimodal alignment. In Proceedings of the 38th Annual Meeting on Association for Computational Linguistics, pages 207–216. 1359
Proceedings of the 52nd Annual Meeting of the Association for Computational Linguistics, pages 1360–1369, Baltimore, Maryland, USA, June 23-25 2014. c⃝2014 Association for Computational Linguistics Toward Better Chinese Word Segmentation for SMT via Bilingual Constraints Xiaodong Zeng† Lidia S. Chao† Derek F. Wong† Isabel Trancoso‡ Liang Tian† †NLP2CT Lab / Department of Computer and Information Science, University of Macau ‡INESC-ID / Instituto Superior T´enico, Lisboa, Portugal [email protected], {lidiasc, derekfw}@umac.mo, [email protected],[email protected] Abstract This study investigates on building a better Chinese word segmentation model for statistical machine translation. It aims at leveraging word boundary information, automatically learned by bilingual character-based alignments, to induce a preferable segmentation model. We propose dealing with the induced word boundaries as soft constraints to bias the continuous learning of a supervised CRFs model, trained by the treebank data (labeled), on the bilingual data (unlabeled). The induced word boundary information is encoded as a graph propagation constraint. The constrained model induction is accomplished by using posterior regularization algorithm. The experiments on a Chinese-to-English machine translation task reveal that the proposed model can bring positive segmentation effects to translation quality. 1 Introduction Word segmentation is regarded as a critical procedure for high-level Chinese language processing tasks, since Chinese scripts are written in continuous characters without explicit word boundaries (e.g., space in English). The empirical works show that word segmentation can be beneficial to Chinese-to-English statistical machine translation (SMT) (Xu et al., 2005; Chang et al., 2008; Zhao et al., 2013). In fact most current SMT models assume that parallel bilingual sentences should be segmented into sequences of tokens that are meant to be “words” (Ma and Way, 2009). The practice in state-of-the-art MT systems is that Chinese sentences are tokenized by a monolingual supervised word segmentation model trained on the handannotated treebank data, e.g., Chinese treebank (CTB) (Xue et al., 2005). These models are conducive to MT to some extent, since they commonly have relatively good aggregate performance and segmentation consistency (Chang et al., 2008). But one outstanding problem is that these models may leave out some crucial segmentation features for SMT, since the output words conform to the treebank segmentation standard designed for monolingually linguistic intuition, rather than specific to the SMT task. In recent years, a number of works (Xu et al., 2005; Chang et al., 2008; Ma and Way, 2009; Xi et al., 2012) attempted to build segmentation models for SMT based on bilingual unsegmented data, instead of monolingual segmented data. They proposed to learn gainful bilingual knowledge as golden-standard segmentation supervisions for training a bilingual unsupervised model. Frequently, the bilingual knowledge refers to the mappings of an individual English word to one or more consecutive Chinese characters, generated via statistical character-based alignment. They leverage such mappings to either constitute a Chinese word dictionary for maximum-matching segmentation (Xu et al., 2004), or form labeled data for training a sequence labeling model (Paul et al., 2011). 
The prior works showed that these models help to find some segmentations tailored for SMT, since the bilingual word occurrence feature can be captured by the character-based alignment (Och and Ney, 2003). However, these models tend to miss out other linguistic segmentation patterns as monolingual supervised models, and suffer from the negative effects of erroneously alignments to word segmentation. This paper proposes an alternative Chinese Word Segmentation (CWS) model adapted to the SMT task, which seeks not only to maintain the advantages of a monolingual supervised model, having hand-annotated linguistic knowledge, but also to assimilate the relevant bilingual segmenta1360 tion nature. We propose leveraging the bilingual knowledge to form learning constraints that guide a supervised segmentation model toward a better solution for SMT. Besides the bilingual motivated models, character-based alignment is also employed to achieve the mappings of the successive Chinese characters and the target language words. Instead of directly merging the characters into concrete segmentations, this work attempts to extract word boundary distributions for characterlevel trigrams (types) from the “chars-to-word” mappings. Furthermore, these word boundaries are encoded into a graph propagation (GP) expression, in order to widen the influence of the induced bilingual knowledge among Chinese texts. The GP expression constrains similar types having approximated word boundary distributions. Crucially, the GP expression with the bilingual knowledge is then used as side information to regularize a CRFs (conditional random fields) model’s learning over treebank and bitext data, based on the posterior regularization (PR) framework (Ganchev et al., 2010). This constrained learning amounts to a jointly coupling of GP and CRFs, i.e., integrating GP into the estimation of a parametric structural model. This paper is structured as follows: Section 2 points out the main differences with the related works of this study. Section 3 presents the details of the proposed segmentation model. Section 4 reports the experimental results of the proposed model for a Chinese-to-English MT task. The conclusion is drawn in Section 5. 2 Related Work In the literature, many approaches have been proposed to learn CWS models for SMT. They can be put into two categories, monolingual-motivated and bilingual-motivated. The former primarily optimizes monolingual supervised models according to some predefined segmentation properties that are manually summarized from empirical MT evaluations. Chang et al. (2008) enhanced a CRFs segmentation model in MT tasks by tuning the word granularity and improving the segmentation consistence. Zhang et al. (2008) produced a better segmentation model for SMT by concatenating various corpora regardless of their different specifications. Distinct from their behaviors, this work uses automatically learned constraints instead of manually defined ones. Most importantly, the constraints have a better learning guidance since they originate from the bilingual texts. On the other hand, the bilingual-motivated CWS models typically rely on character-based alignments to generate segmentation supervisions. Xu et al. (2004) proposed to employ “chars-to-word” alignments to generate a word dictionary for maximum matching segmentation in SMT task. The works in (Ma and Way, 2009; Zhao et al., 2013) extended the dictionary extraction strategy. 
Ma and Way (2009) adopted co-occurrence frequency metric to iteratively optimize “candidate words” extract from the alignments. Zhao et al. (2013) attempted to find an optimal subset of the dictionary learned by the character-based alignment to maximize the MT performance. Paul et al. (2011) used the words learned from “chars-to-word” alignments to train a maximum entropy segmentation model. Rather than playing the “hard” uses of the bilingual segmentation knowledge, i.e., directly merging “char-to-word” alignments to words as supervisions, this study extracts word boundary information of characters from the alignments as soft constraints to regularize a CRFs model’s learning. The graph propagation (GP) technique provides a natural way to represent data in a variety of target domains (Belkin et al., 2006). In this technique, the constructed graph has vertices consisting of labeled and unlabeled examples. Pairs of vertices are connected by weighted edges encoding the degree to which they are expected to have the same label (Zhu et al., 2003). Many recent works, such as by Subramanya et al. (2010), Das and Petrov (2011), Zeng et al. (2013; 2014) and Zhu et al. (2014), proposed GP for inferring the label information of unlabeled data, and then leverage these GP outcomes to learn a semi-supervised scalable model (e.g., CRFs). These approaches are referred to as pipelined learning with GP. This study also works with a similarity graph, encoding the learned bilingual knowledge. But, unlike the prior pipelined approaches, this study performs a joint learning behavior in which GP is used as a learning constraint to interact with the CRFs model estimation. One of our main objectives is to bias CRFs model’s learning on unlabeled data, under a non-linear GP constraint encoding the bilingual knowledge. This is accomplished by the posterior regularization (PR) framework (Ganchev et 1361 al., 2010). PR performs regularization on posteriors, so that the learned model itself remains simple and tractable, while during learning it is driven to obey the constraints through setting appropriate parameters. The closest prior study is constrained learning, or learning with prior knowledge. Chang et al. (2008) described constraint driven learning (CODL) that augments model learning on unlabeled data by adding a cost for violating expectations of constraint features designed by domain knowledge. Mann and McCallum (2008) and McCallum et al. (2007) proposed to employ generalized expectation criteria (GE) to specify preferences about model expectations in the form of linear constraints on some feature expectations. 3 Methodology This work aims at building a CWS model adapted to the SMT task. The model induction is shown in Algorithm 1. The input data requires two types of training resources, segmented Chinese sentences from treebank Dc l and parallel unsegmented sentences of Chinese and foreign language Dc u and Df u. The first step is to conduct characterbased alignment over bitexts Dc u and Df u, where every Chinese character is an alignment target. Here, we are interested on n-to-1 alignment patterns, i.e., one target word is aligned to one or more source Chinese characters. The second step aims to collect word boundary distributions for all types, i.e., character-level trigrams, according to the n-to-1 mappings (Section 3.1). 
The third step is to encode the induced word boundary information into a k-nearest-neighbors (k-NN) similarity graph constructed over the entire set of types from Dc l and Dc u (Section 3.2). The final step trains a discriminative sequential labeling model, conditional random fields, on Dc l and Dc u under bilingual constraints in a graph propagation expression (Section 3.3). This constrained learning is carried out based on posterior regularization (PR) framework (Ganchev et al., 2010). 3.1 Word Boundaries Learned from Character-based Alignments The gainful supervisions toward a better segmentation solution for SMT are naturally extracted from MT training resources, i.e., bilingual parallel data. This study employs an approximated method introduced in (Xu et al., 2004; Ma and Way, 2009; Chung and Gildea, 2009) to learn bilingual segAlgorithm 1 CWS model induction with bilingual constraints Require: Segmented Chinese sentences from treebank Dc l ; Parallel sentences of Chinese and foreign language Dc u and Df u Ensure: θ: the CRFs model parameters 1: Dc↔f ←char align bitext (Dc u, Df u) 2: r ←learn word bound (Dc↔f) 3: G ←encode graph constraint (Dc l , Dc u, r) 4: θ ←pr crf graph (Dc l , Dc u, G) mentation knowledge. This relies on statistical character-based alignment: first, every Chinese character in the bitexts is divided by a white space so that individual characters are regarded as special “words” or alignment targets, and second, they are connected with English words by using a statistical word aligner, e.g., GIZA++ (Och and Ney, 2003). Note that the aligner is restricted to use an n-to-1 alignment pattern. The primary idea is that consecutive Chinese characters are grouped to a candidate word, if they are aligned to the same foreign word. It is worth mentioning that prior works presented a straightforward usage for candidate words, treating them as golden segmentations, either dictionary units or labeled resources. But this study treats the induced candidate words in a different way. We propose to extract the word boundary distributions1 for character-level trigrams (type)2, as shown in Figure 1, instead of the very specific words. There are two main reasons to do so. First, it is a more general expression which can reduce the impact amplification of erroneous character alignments. Second, boundary distributions can play more flexible roles as constraints over labelings to bias the model learning. The type-level word boundary extraction is formally described as follows. Given the ith sentence pair ⟨xc i, xf i , Ac→f i ⟩of the aligned bilingual corpus Dc↔f, the Chinese sentence xc i consisting of m characters {xc i,1, xc i,2, ..., xc i,m}, and the foreign language sentence xf i , consisting of 1The distribution is on four word boundary labels indicating the character positions in a word, i.e., B (begin), M (middle), E (end) and S (single character). 2A word boundary distribution corresponds to the center character of a type. In fact, it aims at reducing label ambiguities to collect boundary information of character trigrams, rather than individual characters (Altun et al., 2006). 1362 n words {xf i,1, xf i,2, ..., xf i,n}, Ac→f i represents a set of alignment pairs aj = ⟨Cj, xf i,j⟩that defines connections between a few Chinese characters Cj = {xc i,j1, xc i,j2, ..., xc i,jk} and a single foreign word xf i,j. For an alignment aj = ⟨Cj, xf i,j⟩, only the sequence of characters Cj = {xc i,j1, xc i,j2, ..., xc i,jk} ∀d ∈[1, k −1], jd+1 −jd = 1 constitutes a valid candidate word. 
For the whole bilingual corpus, we assign each character in the candidate words with a word boundary tag T ∈{B, M, E, S}, and then count across the entire corpus to collect the tag distributions ri = {ri,t; t ∈T} for each type xc i,j−1xc i,jxc i,j+1. 北京奥运会 Beijing Olympus Character-based alignment 北京奥运会 B E B M E Beijing Olympus Word boundaries 北京奥 奥运会 … Type-level Word boundary distributions Bei Ping Shi 北平市 Bei Jing Ren 北京人 Bei Jing Di 北京地 Quan Yun Hui 全运会 Bei Jing Shi 北京市 0.8 0.6 0.3 0.2 0.9 Ao Yun Hui 奥运会 0.2 Figure 1: An example of similarity graph over character-level trigrams (types). 3.2 Constraints Encoded by Graph Propagation Expression The previous step contributes to generate bilingual segmentation supervisions, i.e., type-level word boundary distributions. An intuitive manner is to directly leverage the induced boundary distributions as label constraints to regularize segmentation model learning, based on a constrained learning algorithm. This study, however, makes further efforts to elevate the positive effects of the bilingual knowledge via the graph propagation technique. We adopt a similarity graph to encode the learned type-level word boundary distributions. The GP expression will be defined as a PR constraint in Section 3.3 that reflects the interactions between the graph and the CRFs model. In other words, GP is integrated with estimation of parametric structural model. This is greatly different from the prior pipelined approaches (Subramanya et al., 2010; Das and Petrov, 2011; Zeng et al., 2013), where GP is run first and its propagated outcomes are then used to bias the structural model. This work seeks to capture the GP benefits during the modeling of sequential correlations. In what follows, the graph setting and propagation expression are introduced. As in conventional GP examples (Das and Smith, 2012), a similarity graph G = (V, E) is constructed over N types extracted from Chinese training data, including treebank Dc l and bitexts Dc u. Each vertex Vi has a |T|-dimensional estimated measure vi = {vi,t; t ∈T} representing a probability distribution on word boundary tags. The induced typelevel word boundary distributions ri = {ri,t; t ∈ T} are empirical measures for the corresponding M graph vertices. The edges E ∈Vi ×Vj connect all the vertices. Scores between pairs of graph vertices (types), wij, refer to the similarities of their syntactic environment, which are computed following the method in (Subramanya et al., 2010; Das and Petrov, 2011; Zeng et al., 2013). The similarities are measured based on co-occurrence statistics over a set of predefined features (introduced in Section 4.1). Specifically, the point-wise mutual information (PMI) values, between vertices and each feature instantiation that they have in common, are summed to sparse vectors, and their cosine distances are computed as the similarities. The nature of this similarity graph enforces that the connected types with high weights appearing in different texts should have similar word boundary distributions. The quality (smoothness) of the similarity graph can be estimated by using a standard propagation function, as shown in Equation 1. The square-loss criterion (Zhu et al., 2003; Bengio et al., 2006) is used to formulate this function: P(v) = T X t=1 M X i=1 (vi,t −ri,t)2 +µ N X j=1 N X i=1 wij(vi,t −vj,t)2 + ρ N X i=1 (vi,t)2 ! (1) The first term in this equation refers to seed matches that compute the distances between the estimated measure vi and the empirical probabilities ri. 
The second term refers to edge smoothness that measures how vertices vi are smoothed with respect to the graph. Two types connected by an edge with high weight should be assigned similar word boundary distributions. The third term, a ℓ2 norm, evaluates the distribution sparsity (Das and 1363 Smith, 2012) per vertex. Typically, the GP process amounts to an optimization process with respect to parameter v such that Equation 1 is minimized. This propagation function can be used to reflect the graph smoothness, where the higher the score, the lower the smoothness. 3.3 PR Learning with GP Constraint Our learning problem belongs to semi-supervised learning (SSL), as the training is done on treebank labeled data (XL, YL) = {(x1, y1), ..., (xl, yl)}, and bilingual unlabeled data (XU) = {x1, ..., xu} where xi = {x1, ..., xm} is an input word sequence and yi = {y1, ..., ym}, y ∈T is its corresponding label sequence. Supervised linear-chain CRFs can be modeled in a standard conditional log-likelihood objective with a Gaussian prior: L(θ) = pθ(yi|xi) −∥θ∥2 2σ (2) The conditional probabilities pθ are expressed as a log-linear form: pθ(yi|xi) = exp( m X k=1 θTf(yk−1 i , yk i , xi)) Zθ(xi) (3) Where Zθ(xi) is a partition function that normalizes the exponential form to be a probability distribution, and f(yk−1 i , yk i , xi) are arbitrary feature functions. In our setting, the CRFs model is required to learn from unlabeled data. This work employs the posterior regularization (PR) framework3 (Ganchev et al., 2010) to bias the CRFs model’s learning on unlabeled data, under a constraint encoded by the graph propagation expression. It is expected that similar types in the graph should have approximated expected taggings under the CRFs model. We follow the approach introduced by (He et al., 2013) to set up a penaltybased PR objective with GP: the CRFs likelihood is modified by adding a regularization term, as shown in Equation 4, representing the constraints: RU(θ, q) = KL(q||pθ) + λP(v) (4) Rather than regularize CRFs model’s posteriors pθ(Y|xi) directly, our model uses an auxiliary distribution q(Y|xi) over the possible labelings 3The readers are refered to the original paper of Ganchev et al. (2010). Y for xi, and penalizes the CRFs marginal loglikelihood by a KL-divergence term4, representing the distance between the estimated posteriors p and the desired posteriors q, as well as a penalty term, formed by the GP function. The hyperparameter λ is used to control the impacts of the penalty term. Note that the penalty is fired if the graph score computed based on the expected taggings given by the current CRFs model is increased vis-a-vis the previous training iteration. This nature requires that the penalty term P(v) should be formed as a function of posteriors q over CRFs model predictions5, i.e., P(q). To state this, a mapping M : ({1, ..., u}, {1, ..., m}) →V from words in the corpus to vertices in the graph is defined. We can thus decompose vi,t into a function of q as follows: vi,t = u X a=1 m X b=1; M(a,b)=Vi T X c=1 X y∈Y 1(yb = t, yb−1 = c)q(y|xa) u X a=1 m X b=1 1(M(a, b) = Vi) (5) The final learning objective combines the CRFs likelihood with the PR regularization term: J (θ, q) = L(θ) + RU(θ, q). This joint objective, over θ and q, can be optimized by an expectation maximization (EM) style algorithm as reported in (Ganchev et al., 2010). We start from initial parameters θ0, estimated by supervised CRFs model training on treebank data. 
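Schematically, this EM-style optimization can be pictured with the Python sketch below. It is only an illustrative skeleton under simplifying assumptions: the posteriors q are stored as per-character marginals over the four boundary tags rather than full sequence distributions, and compute_pr_gradient and crf_m_step are hypothetical placeholder callables, not functions from any released toolkit.

```python
import numpy as np

def train_pr_with_gp(theta0, labeled_data, unlabeled_data, graph,
                     compute_pr_gradient, crf_m_step,
                     n_iters=10, eta=0.1):
    """Schematic EM-style loop for PR training with a GP penalty.

    Assumed helper interfaces (placeholders, not a real API):
      compute_pr_gradient(theta, q_i, x, graph) -> gradient of
          KL(q || p_theta) + lambda * P(v(q)) with respect to q_i
      crf_m_step(theta, labeled, unlabeled, q)  -> new theta maximizing
          L(theta) + delta * E_q[log p_theta(y | x)]
    """
    theta = theta0                                   # CRF pre-trained on the treebank
    q = {i: np.full((len(x), 4), 0.25)               # uniform start over B/M/E/S, for brevity
         for i, x in enumerate(unlabeled_data)}
    for _ in range(n_iters):
        # E-step: exponentiated gradient descent keeps q on the probability
        # simplex via a multiplicative update followed by renormalization (cf. Eq. 6).
        for i, x in enumerate(unlabeled_data):
            grad = compute_pr_gradient(theta, q[i], x, graph)
            updated = q[i] * np.exp(-eta * grad)
            q[i] = updated / updated.sum(axis=1, keepdims=True)
        # M-step: standard CRF gradient training, with the unlabeled part
        # weighted by the posteriors q obtained in the E-step.
        theta = crf_m_step(theta, labeled_data, unlabeled_data, q)
    return theta
```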
The E-step is to minimize RU(θ, q) over the posteriors q that are constrained to the probability simplex. Since the penalty term P(v) is a non-linear form, the optimization method in (Ganchev et al., 2010) via projected gradient descent on the dual is inefficient6. This study follows the optimization method (He et al., 2013) that uses exponentiated gradient descent (EGD) algorithm. It allows that the variable update expression, as shown in Equation 6, takes a multiplicative rather than an additive form. q(w+1)(y|xi) = q(w)(y|xi) exp(−η ∂R ∂q(w)(y|xi)) (6) where the parameter η controls the optimization rate in the E-step. With the contributions from 4The form of KL term: KL(q||p) = P q∈Y q(y) log q(y) p(y). 5The original PR setting also requires that the penalty term should be a linear (Ganchev et al., 2010) or non-linear (He et al., 2013) function on q. 6According to (He et al., 2013), the dual of quadratic program implies an expensive matrix inverse. 1364 the E-step that further encourage q and p to agree, the M-step aims to optimize the objective J (θ, q) with respect to θ. The M-step is similar to the standard CRFs parameter estimation, where the gradient ascent approach still works. This EM-style approach monotonically increases J (θ, q) and thus is guaranteed to converge to a local optimum. E-step: q(t+1) = arg min q RU(θ(t), q(t)) M-step: θ(t+1) = arg max θ L(θ) +δ u X i=1 X y∈Y q(t+1)(y|xi) log pθ(y|xi) (7) 4 Experiments 4.1 Data and Setup The experiments in this study evaluated the performances of various CWS models in a Chineseto-English translation task. The influence of the word segmentation on the final translation is our main investigation. We adopted three state-of-the-art metrics, BLEU (Papineni et al., 2002), NIST (Doddington et al., 2000) and METEOR (Banerjee and Lavie, 2005), to evaluate the translation quality. The monolingual segmented data, trainTB, is extracted from the Penn Chinese Treebank (CTB7) (Xue et al., 2005), containing 51,447 sentences. The bilingual training data, trainMT, is formed by a large in-house Chinese-English parallel corpus (Tian et al., 2014). There are in total 2,244,319 Chinese-English sentence pairs crawled from online resources, concentrated in 5 different domains including laws, novels, spoken, news and miscellaneous7. This in-house bilingual corpus is the MT training data as well. The target-side language model is built on over 35 million monolingual English sentences, trainLM, crawled from online resources. The NIST evaluation campaign data, MT-03 and MT-05, are selected to comprise the MT development data, devMT, and testing data, testMT, respectively. For the settings of our model, we adopted the standard feature templates introduced by Zhao et al. (2006) for CRFs. The character-based alignment for achieving the “chars-to-word” mappings is accomplished by GIZA++ aligner (Och and Ney, 2003). For the GP, a 10-NNs similarity graph 7The in-house corpus has been manually validated, in a long process that exceeded 500 hours. was constructed8. Following (Subramanya et al., 2010; Zeng et al., 2013), the features used to compute similarities between vertices were (Suppose given a type “ w2w3w4” surrounding contexts “w1w2w3w4w5”): unigram (w3), bigram (w1w2, w4w5, w2w4), trigram (w2w3w4, w2w4w5, w1w2w4), trigram+context (w1w2w3w4w5) and character classes in number, punctuation, alphabetic letter and other (t(w2)t(w3)t(w4)). 
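As a rough illustration of how such a similarity graph can be built, the sketch below sums PMI scores between each type and its context features into sparse vectors, measures cosine similarity between the vectors, and links each type to its k nearest neighbours; the concrete feature templates used for the vertices are listed in the next paragraph. Details such as the positive-PMI cutoff and the brute-force neighbour search are simplifying assumptions, not the exact procedure of the cited work.

```python
import math
from collections import Counter, defaultdict

def pmi_vectors(type_feature_counts):
    """type_feature_counts: {type: Counter(feature -> co-occurrence count)}.
    Returns one sparse PMI-weighted vector per type."""
    type_totals = {t: sum(c.values()) for t, c in type_feature_counts.items()}
    feat_totals = Counter()
    for c in type_feature_counts.values():
        feat_totals.update(c)
    grand_total = sum(feat_totals.values())
    vectors = {}
    for t, c in type_feature_counts.items():
        vec = {}
        for f, n in c.items():
            pmi = math.log((n * grand_total) / (type_totals[t] * feat_totals[f]))
            if pmi > 0:                  # keep positive PMI only (a common simplification)
                vec[f] = pmi
        vectors[t] = vec
    return vectors

def cosine(u, v):
    shared = set(u) & set(v)
    num = sum(u[f] * v[f] for f in shared)
    den = (math.sqrt(sum(x * x for x in u.values()))
           * math.sqrt(sum(x * x for x in v.values())))
    return num / den if den else 0.0

def knn_graph(vectors, k=10):
    """Weighted k-NN graph: each type is linked to its k most similar types.
    Brute-force O(n^2) comparison, adequate only for illustration."""
    graph = defaultdict(dict)
    types = list(vectors)
    for t in types:
        sims = sorted(((cosine(vectors[t], vectors[o]), o)
                       for o in types if o != t), reverse=True)[:k]
        for w, o in sims:
            graph[t][o] = w
    return graph
```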
There are four hyperparameters in our model to be tuned by using the development data (devMT) among the following settings: for the graph propagation, µ ∈{0.2, 0.5, 0.8} and ρ ∈{0.1, 0.3, 0.5, 0.8}; for the PR learning, λ ∈{0 ≤λi ≤1} and σ ∈ {0 ≤σi ≤1} where the step is 0.1. The best performed joint settings, µ = 0.5, ρ = 0.5, λ = 0.9 and σ = 0.8, were used to measure the final performance. The MT experiment was conducted based on a standard log-linear phrase-based SMT model. The GIZA++ aligner was also adopted to obtain word alignments (Och and Ney, 2003) over the segmented bitexts. The heuristic strategy of growdiag-final-and (Koehn et al., 2007) was used to combine the bidirectional alignments for extracting phrase translations and reordering tables. A 5-gram language model with Kneser-Ney smoothing was trained with SRILM (Stolcke, 2002) on monolingual English data. Moses (Koehn et al., 2007) was used as decoder. The Minimum Error Rate Training (MERT) (Och, 2003) was used to tune the feature parameters on development data. 4.2 Various Segmentation Models To provide a thorough analysis, the MT experiments in this study evaluated three baseline segmentation models and two off-the-shelf models, in addition to four variant models that also employ the bilingual constraints. We start from three baseline models: • Character Segmenter (CS): this model simply divides Chinese sentences into sequences of characters. • Supervised Monolingual Segmenter (SMS): this model is trained by CRFs on treebank training data (trainTB). The same feature templates (Zhao et al., 2006) are used. The standard four-tags (B, M, E and S) were used 8We evaluated graphs with top k (from 3 to 20) nearest neighbors on development data, and found that the performance converged beyond 10-NNs. 1365 as the labels. The stochastic gradient descent is adopted to optimize the parameters. • Unsupervised Bilingual Segmenter (UBS): this model is trained on the bitexts (trainMT) following the approach introduced in (Ma and Way, 2009). The optimal set of the model parameter values was found on devMT to be k = 3, tAC = 0.0 and tCOOC = 15. The comparison candidates also involve two popular off-the-shelf segmentation models: • Stanford Segmenter: this model, trained by Chang et al. (2008), treats CWS as a binary word boundary decision task. It covers several features specific to the MT task, e.g., external lexicons and proper noun features. • ICTCLAS Segmenter: this model, trained by Zhang et al. (2003), is a hierarchical HMM segmenter that incorporates parts-ofspeech (POS) information into the probability models and generates multiple HMM models for solving segmentation ambiguities. This work also evaluated four variant models9 that perform alternative ways to incorporate the bilingual constraints based on two state-of-the-art graph-based SSL approaches. • Self-training Segmenters (STS): two variant models were defined by the approach reported in (Subramanya et al., 2010) that uses the supervised CRFs model’s decodings, incorporating empirical and constraint information, for unlabeled examples as additional labeled data to retrain a CRFs model. One variant (STS-NO-GP) skips the GP step, directly decoding with type-level word boundary probabilities induced from bitexts, while the other (STS-GP-PL) runs the GP at first and then decodes with GP outcomes. The optimal hyperparameter values were found to be: STS-NO-GP (α = 0.8) and η = 0.6) and STS-GP-PL (µ = 0.5, ρ = 0.3, α = 0.8 and η = 0.6). 
• Virtual Evidences Segmenters (VES): Two variant models based on the approach in (Zeng et al., 2013) were defined. The typelevel word boundary distributions, induced 9Note that there are two variant models working with GP. To be fair, the same similarity graph settings introduced in this paper were used. by the character-based alignment (VES-NOGP), and the graph propagation (VES-GPPL), are regarded as virtual evidences to bias CRFs model’s learning on the unlabeled data. The optimal hyperparameter values were found to be: VES-NO-GP (α = 0.7) and VES-GP-PL (µ = 0.5, ρ = 0.3 and α = 0.7). 4.3 Main Results Table 1 summarizes the final MT performance on the MT-05 test data, evaluated with ten different CWS models. In what follows, we summarized four major observations from the results. Firstly, as expected, having word segmentation does help Chinese-to-English MT. All other nine CWS models outperforms the CS baseline which does not try to identify Chinese words at all. Secondly, the other two baselines, SMS and UBS, are on a par with each other, showing less than 0.36 average performance differences on the three evaluation metrics. This outcome validated that the models, trained by either the treebank or the bilingual data, performed reasonably well. But they only capture partial segmentation features so that less gains for SMT are achieved when comparing to other sophisticated models. Thirdly, we notice that the two off-the-shelf models, Stanford and ICTCLAS, just brought minor improvements over the SMS baseline, although they are trained using richer supervisions. This behaviour illustrates that the conventional optimizations to the monolingual supervised model, e.g., accumulating more supervised data or predefined segmentation properties, are insufficient to help model for achieving better segmentations for SMT. Finally, highlighting the five models working with the bilingual constraints, most of them can achieve significant gains over the other ones without using the bilingual constraints. This strongly demonstrates that bilingually-learned segmentation knowledge does helps CWS for SMT. The models working with GP, STS-GP-PL, VES-GP-PL and ours outperform all others. We attribute this to the role of GP in assisting the spread of bilingual knowledge on the Chinese side. Importantly, it can be observed that our model outperforms STS-GP, VES-GP, which greatly supports that joint learning of CRFs and GP can alleviate the error transfer by the pipelined models. This is one of the most crucial findings in this study. Overall, the boldface numbers in the last row illustrate that our model obtains average improvements of 1.89, 1.76 and 1.61 on BLEU, 1366 NIST and METEOR over others. Models BLEU NIST METEOR CS 29.38 59.85 54.07 SMS 30.05 61.33 55.95 UBS 30.15 61.56 55.39 Stanford 30.40 61.94 56.01 ICTCLAS 30.29 61.26 55.72 STS-NO-GP 31.47 62.35 56.12 STS-GP-PL 31.94 63.20 57.09 VES-NO-GP 31.98 62.63 56.59 VES-GP-PL 32.04 63.49 57.34 Our Model 32.75 63.72 57.64 Table 1: Translation performances (%) on MT-05 testing data by using ten different CWS models. 4.4 Analysis & Discussion This section aims to further analyze the three primary observations concluded in Section 4.3: i) word segmentation is useful to SMT; ii) the treebank and the bilingual segmentation knowledge are helpful, performing segmentation of different nature; and iii) the bilingual constraints lead to learn segmentations better tailored for SMT. 
The first observation derives from the comparisons between the CS baseline and other models. Our results, showing the significant CWS benefits to SMT, are consistent with the works reported in the literature (Xu et al., 2004; Chang et al., 2008). In our experiment, two additional evidences found in the translation model are provided to further support that NO tokenization of Chinese (i.e., the CS model’s output) could harm the MT system. First, the SMT phrase extraction, i.e., building “phrases” on top of the character sequences, cannot fully capture all meaningful segmentations produced by the CS model. The character based model leads to missing some useful longer phrases, and to generate many meaningless or redundant translations in the phrase table. Moreover, it is affected by translation ambiguities, caused by the cases where a Chinese character has very different meanings in different contextual environments. The second observation shifts the emphasis to SMS and UBS, based on the treebank and the bilingual segmentation, respectively. Our results show that both segmentation patterns can bring positive effects to MT. Through analyzing both models’ segmentations for trainMT and testMT, we attempted to get a closer inspection on the segmentation preferences and their influence on MT. Our first finding is that the segmentation consensuses between SMS and UBS are positive to MT. There have about 35% identical segmentations produced by the two models. If these identical segmentations are removed, and the experiments are rerun, the translation scores decrease (on average) by 0.50, 0.85 and 0.70 on BLEU, NIST and METEOR, respectively. Our second finding is that SMS exhibits better segmentation consistency than UBS. One representative example is the segmentations for “孤零零(lonely)”. All the outputs of SMS were “孤零零”, while UBS generated three ambiguous segmentations, “孤(alone) 零 零(double zero)”, “孤零(lonely) 零(zero)” and “孤(alone) 零(zero) 零(zero)”. The segmentation consistency of SMS rests on the high-quality treebank data and the robust CRFs tagging model. On the other hand, the advantage of UBS is to capture the segmentations matching the aligned target words. For example, UBS grouped “国(country) 际(border) 间(between)” to a word “国际间(international)”, rather than two words “国际(international) 间(between)” (as given by SMS), since these three characters are aligned to a single English word “international”. The above analysis shows that SMS and UBS have their own merits and combining the knowledge derived from both segmentations is highly encouraged. The third observation concerns the great impact of the bilingual constraints to the segmentation models in the MT task. The use of the bilingual constraints is the prime objective of this study. Our first contribution for this purpose is on using the word boundary distributions to capture the bilingual segmentation supervisions. This representation contributes to reduce the negative impacts of erroneous “chars-to-word” alignments. The ambiguous types (having relatively uniform boundary distribution), caused by alignment errors, cannot directly bias the model tagging preferences. Furthermore, the word boundary distributions are convenient to make up the learning constraints over the labelings among various constrained learning approaches. They have successfully played in three types of constraints for our experiments: PR penalty (Our model), decoding constraints in self-training (STS) and virtual evidences (VES). 
The second contribution is the use of GP, illustrated by STS-GP-PL, VES-GP-PL and 1367 Our model. The major effect is to multiply the impacts of the bilingual knowledge through the similarity graph. The graph vertices (types)10, without any supervisions, can learn the word boundary information from their similar types (neighborhoods) having the empirical boundary probabilities. The segmentations given by the three GP models show about 70% positive segmentation changes, affected by the unlabeled graph vertices, with respect to the ones given by the NOGP models, STS-NO-GP and VES-NO-GP. In our opinion, the learning mechanism of our approach, joint coupling of GP and CRFs, rather than the pipelined one as the other two models, contributes to maximizing the graph smoothness effects to the CRFs estimation so that the error propagation of the pipelined approaches is alleviated. 5 Conclusion This paper proposed a novel CWS model for the SMT task. This model aims to maintain the linguistic segmentation supervisions from treebank data and simultaneously integrate useful bilingual segmentations induced from the bitexts. This objective is accomplished by three main steps: 1) learn word boundaries from character-based alignments; 2) encode the learned word boundaries into a GP constraint; and 3) training a CRFs model, under the GP constraint, by using the PR framework. The empirical results indicate that the proposed model can yield better segmentations for SMT. Acknowledgments The authors are grateful to the Science and Technology Development Fund of Macau and the Research Committee of the University of Macau (Grant No. MYRG076 (Y1-L2)-FST13WF and MYRG070 (Y1-L2)-FST12-CS) for the funding support for our research. The work of Isabel Trancoso was supported by national funds through FCT-Fundac¸˜ao para a Ciˆecia e a Tecnologia, under project PEst-OE/EEI/LA0021/2013. The authors also wish to thank the anonymous reviewers for many helpful comments. 10This experiment yielded a similarity graph that consists of 11,909,620 types from trainTB and trainMT, where there have 8,593,220 (72.15%) types without any empirical boundary distributions. References Yasemin Altun, David McAllester, and Mikhail Belkin. 2006. Maximum margin semi-supervised learning for structured variables. Advances in Neural Information Processing Systems, 18:33. Satanjeev Banerjee and Alon Lavie. 2005. Meteor: An automatic metric for mt evaluation with improved correlation with human judgments. In Proceedings of the ACL Workshop on Intrinsic and Extrinsic Evaluation Measures for Machine Translation and/or Summarization, pages 65–72. Association for Computational Linguistics. Yoshua Bengio, Olivier Delalleau, and Nicolas Le Roux. 2006. Label propagation and quadratic criterion. Semi-Supervised Learning, pages 193– 216. Pi-Chuan Chang, Michel Galley, and Christopher D Manning. 2008. Optimizing Chinese word segmentation for machine translation performance. In Proceedings of WMT, pages 224–232. Association for Computational Linguistics. Tagyoung Chung and Daniel Gildea. 2009. Unsupervised tokenization for machine translation. In Proceedings of EMNLP, pages 718–726. Association for Computational Linguistics. Dipanjan Das and Slav Petrov. 2011. Unsupervised part-of-speech tagging with bilingual graph-based projections. In Proceedings of ACL, pages 600–609. Association for Computational Linguistics. Dipanjan Das and Noah A Smith. 2012. Graph-based lexicon expansion with sparsity-inducing penalties. In Proceedings of NAACL, pages 677–687. 
Association for Computational Linguistics. George R. Doddington, Mark A. Przybocki, Alvin F. Martin, and Douglas A. Reynolds. 2000. The nist speaker recognition evaluation–overview, methodology, systems, results, perspective. Speech Communication, 31(2):225–254. Kuzman Ganchev, J˜oao Grac¸a, Jennifer Gillenwater, and Ben Taskar. 2010. Posterior regularization for structured latent variable models. The Journal of Machine Learning Research, 11:2001–2049. Luheng He, Jennifer Gillenwater, and Ben Taskar. 2013. Graph-based posterior regularization for semi-supervised structured prediction. In Proceedings of CoNLL, page 38. Association for Computational Linguistics. Philipp Koehn, Hieu Hoang, Alexandra Birch, Chris Callison-Burch, Marcello Federico, Nicola Bertoldi, Brooke Cowan, Wade Shen, Christine Moran, Richard Zens, et al. 2007. Moses: Open source toolkit for statistical machine translation. In Proceedings of ACL on Interactive Poster and Demonstration Sessions, pages 177–180. Association for Computational Linguistics. 1368 Yanjun Ma and Andy Way. 2009. Bilingually motivated domain-adapted word segmentation for statistical machine translation. In Proceedings of EACL, pages 549–557. Association for Computational Linguistics. Gideon S. Mann and Andrew McCallum. 2008. Generalized expectation criteria for semi-supervised learning of conditional random fields. In Proceedings of ACL, pages 870–878. Association for Computational Linguistics. Andrew McCallum, Gideon Mann, and Gregory Druck. 2007. Generalized expectation criteria. Computer Science Technical Note, University of Massachusetts, Amherst, MA. Franz Josef Och and Hermann Ney. 2003. A systematic comparison of various statistical alignment models. Computational Linguistics, 29(1):19–51. Franz Josef Och. 2003. Minimum error rate training in statistical machine translation. In Proceedings of ACL, pages 160–167. Association for Computational Linguistics. Kishore Papineni, Salim Roukos, Todd Ward, and WeiJing Zhu. 2002. BLEU: a method for automatic evaluation of machine translation. In Proceedings of ACL, pages 311–318. Association for Computational Linguistics. Michael Paul, Finch Andrew, and Sumita Eiichiro. 2011. Integration of multiple bilingually-trained segmentation schemes into statistical machine translation. IEICE Transactions on Information and Systems, 94(3):690–697. Andreas Stolcke. 2002. SRILM-an extensible language modeling toolkit. In Proceedings of Interspeech. Amarnag Subramanya, Slav Petrov, and Fernando Pereira. 2010. Efficient graph-based semisupervised learning of structured tagging models. In Proceedings of EMNLP, pages 167–176. Association for Computational Linguistics. Liang Tian, Derek F. Wong, Lidia S. Chao, Paulo Quaresma, Francisco Oliveira, Shuo Li, Yiming Wang, and Yi Lu. 2014. UM-Corpus: A large English-Chinese parallel corpus for statistical machine translation. In Proceedings of LREC. European Language Resources Association. Ning Xi, Guangchao Tang, Xinyu Dai, Shujian Huang, and Jiajun Chen. 2012. Enhancing statistical machine translation with character alignment. In Proceedings of ACL, pages 285–290. Association for Computational Linguistics. Jia Xu, Richard Zens, and Hermann Ney. 2004. Do we need Chinese word segmentation for statistical machine translation? In Proceedings of the Third SIGHAN Workshop on Chinese Language Learning, pages 122–128. Association for Computational Linguistics. Jia Xu, Evgeny Matusov, Richard Zens, and Hermann Ney. 2005. 
Integrated Chinese word segmentation in statistical machine translation. In Proceedings of IWSLT, pages 216–223. Association for Computational Linguistics. Naiwen Xue, Fei Xia, Fu-Dong Chiou, and Martha Palmer. 2005. The Penn Chinese TreeBank: Phrase structure annotation of a large corpus. Natural Language Engineering, 11(2):207–238. Xiaodong Zeng, Derek F. Wong, Lidia S. Chao, and Isabel Trancoso. 2013. Graph-based semi-supervised model for joint Chinese word segmentation and partof-speech tagging. In Proceedings of ACL, pages 770–779. Association for Computational Linguistics. Xiaodong Zeng, Derek F. Wong, Lidia S. Chao, Isabel Trancoso, Liangye He, and Qiuping Huang. 2014. Lexicon expansion for latent variable grammars. Pattern Recognition Letters, 42:47–55. Hua-Ping Zhang, Hong-Kui Yu, De-Yi Xiong, and Qun Liu. 2003. HHMM-based Chinese lexical analyzer ICTCLAS. In Proceedings of the Second SIGHAN Workshop on Chinese Language Processing, pages 184–187. Association for Computational Linguistics. Ruiqiang Zhang, Keiji Yasuda, and Eiichiro Sumita. 2008. Improved statistical machine translation by multiple Chinese word segmentation. In Proceedings of WMT, pages 216–223. Association for Computational Linguistics. Hai Zhao, Chang-Ning Huang, and Mu Li. 2006. An improved Chinese word segmentation system with conditional random field. In Proceedings of the Fifth SIGHAN Workshop on Chinese Language Processing. Association for Computational Linguistics. Hai Zhao, Masao Utiyama, Eiichiro Sumita, and BaoLiang Lu. 2013. An empirical study on word segmentation for Chinese machine translation. In Computational Linguistics and Intelligent Text Processing, pages 248–263. Springer. Xiaojin Zhu, Zoubin Ghahramani, and John Lafferty. 2003. Semi-supervised learning using gaussian fields and harmonic functions. In Proceedings of ICML, volume 3, pages 912–919. Ling Zhu, Derek F. Wong, and Lidia S. Chao. 2014. Unsupervised chunking based on graph propagation from bilingual corpus. The Scientific World Journal, 2014(401943):10. 1369
2014
128
Proceedings of the 52nd Annual Meeting of the Association for Computational Linguistics, pages 1370–1380, Baltimore, Maryland, USA, June 23-25 2014. c⃝2014 Association for Computational Linguistics Fast and Robust Neural Network Joint Models for Statistical Machine Translation Jacob Devlin, Rabih Zbib, Zhongqiang Huang, Thomas Lamar, Richard Schwartz, and John Makhoul Raytheon BBN Technologies, 10 Moulton St, Cambridge, MA 02138, USA {jdevlin,rzbib,zhuang,tlamar,schwartz,makhoul}@bbn.com Abstract Recent work has shown success in using neural network language models (NNLMs) as features in MT systems. Here, we present a novel formulation for a neural network joint model (NNJM), which augments the NNLM with a source context window. Our model is purely lexicalized and can be integrated into any MT decoder. We also present several variations of the NNJM which provide significant additive improvements. Although the model is quite simple, it yields strong empirical results. On the NIST OpenMT12 Arabic-English condition, the NNJM features produce a gain of +3.0 BLEU on top of a powerful, featurerich baseline which already includes a target-only NNLM. The NNJM features also produce a gain of +6.3 BLEU on top of a simpler baseline equivalent to Chiang’s (2007) original Hiero implementation. Additionally, we describe two novel techniques for overcoming the historically high cost of using NNLM-style models in MT decoding. These techniques speed up NNJM computation by a factor of 10,000x, making the model as fast as a standard back-off LM. This work was supported by DARPA/I2O Contract No. HR0011-12-C-0014 under the BOLT program (Approved for Public Release, Distribution Unlimited). The views, opinions, and/or findings contained in this article are those of the author and should not be interpreted as representing the official views or policies, either expressed or implied, of the Defense Advanced Research Projects Agency or the Department of Defense. 1 Introduction In recent years, neural network models have become increasingly popular in NLP. Initially, these models were primarily used to create n-gram neural network language models (NNLMs) for speech recognition and machine translation (Bengio et al., 2003; Schwenk, 2010). They have since been extended to translation modeling, parsing, and many other NLP tasks. In this paper we use a basic neural network architecture and a lexicalized probability model to create a powerful MT decoding feature. Specifically, we introduce a novel formulation for a neural network joint model (NNJM), which augments an n-gram target language model with an m-word source window. Unlike previous approaches to joint modeling (Le et al., 2012), our feature can be easily integrated into any statistical machine translation (SMT) decoder, which leads to substantially larger improvements than k-best rescoring only. Additionally, we present several variations of this model which provide significant additive BLEU gains. We also present a novel technique for training the neural network to be self-normalized, which avoids the costly step of posteriorizing over the entire vocabulary in decoding. When used in conjunction with a pre-computed hidden layer, these techniques speed up NNJM computation by a factor of 10,000x, with only a small reduction on MT accuracy. Although our model is quite simple, we obtain strong empirical results. We show primary results on the NIST OpenMT12 Arabic-English condition. 
The NNJM features produce an improvement of +3.0 BLEU on top of a baseline that is already better than the 1st place MT12 result and includes 1370 a powerful NNLM. Additionally, on top of a simpler decoder equivalent to Chiang’s (2007) original Hiero implementation, our NNJM features are able to produce an improvement of +6.3 BLEU – as much as all of the other features in our strong baseline system combined. We also show strong improvements on the NIST OpenMT12 Chinese-English task, as well as the DARPA BOLT (Broad Operational Language Translation) Arabic-English and Chinese-English conditions. 2 Neural Network Joint Model (NNJM) Formally, our model approximates the probability of target hypothesis T conditioned on source sentence S. We follow the standard n-gram LM decomposition of the target, where each target word ti is conditioned on the previous n −1 target words. To make this a joint model, we also condition on source context vector Si: P(T|S) ≈ Π|T| i=1P(ti|ti−1, · · · , ti−n+1, Si) Intuitively, we want to define Si as the window that is most relevant to ti. To do this, we first say that each target word ti is affiliated with exactly one source word at index ai. Si is then the m-word source window centered at ai: Si = sai−m−1 2 , · · · , sai, · · · , sai+ m−1 2 This notion of affiliation is derived from the word alignment, but unlike word alignment, each target word must be affiliated with exactly one non-NULL source word. The affiliation heuristic is very simple: (1) If ti aligns to exactly one source word, ai is the index of the word it aligns to. (2) If ti align to multiple source words, ai is the index of the aligned word in the middle.1 (3) If ti is unaligned, we inherit its affiliation from the closest aligned word, with preference given to the right.2 An example of the NNJM context model for a Chinese-English parallel sentence is given in Figure 1. For all of our experiments we use n = 4 and m = 11. It is clear that this model is effectively an (n+m)-gram LM, and a 15-gram LM would be 1We arbitrarily round down. 2We have found that the affiliation heuristic is robust to small differences, such as left vs. right preference. far too sparse for standard probability models such as Kneser-Ney back-off (Kneser and Ney, 1995) or Maximum Entropy (Rosenfeld, 1996). Fortunately, neural network language models are able to elegantly scale up and take advantage of arbitrarily large context sizes. 2.1 Neural Network Architecture Our neural network architecture is almost identical to the original feed-forward NNLM architecture described in Bengio et al. (2003). The input vector is a 14-word context vector (3 target words, 11 source words), where each word is mapped to a 192-dimensional vector using a shared mapping layer. We use two 512dimensional hidden layers with tanh activation functions. The output layer is a softmax over the entire output vocabulary. The input vocabulary contains 16,000 source words and 16,000 target words, while the output vocabulary contains 32,000 target words. The vocabulary is selected by frequency-sorting the words in the parallel training data. Out-ofvocabulary words are mapped to their POS tag (or OOV, if POS is not available), and in this case P(POSi|ti−1, · · · ) is used directly without further normalization. Out-of-bounds words are represented with special tokens <src>, </src>, <trg>, </trg>. 
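The affiliation heuristic and context construction described above are simple enough to sketch directly. The Python sketch below uses illustrative names rather than the paper's code; the exact treatment of the out-of-bounds padding tokens is an assumption based on the special tokens listed in Section 2.1.

```python
def affiliation(i, alignment, tgt_len):
    """Source index a_i affiliated with target position i.
    `alignment` maps a target index to a sorted list of aligned source indices."""
    links = alignment.get(i, [])
    if len(links) == 1:                      # (1) aligned to exactly one source word
        return links[0]
    if len(links) > 1:                       # (2) take the aligned word in the middle,
        return links[(len(links) - 1) // 2]  #     rounding down
    for offset in range(1, tgt_len):         # (3) unaligned: inherit from the closest
        for j in (i + offset, i - offset):   #     aligned word, right neighbor first
            if 0 <= j < tgt_len and alignment.get(j):
                return affiliation(j, alignment, tgt_len)
    raise ValueError("sentence contains no aligned target words")

def nnjm_context(i, target, source, alignment, n=4, m=11):
    """(n-1)-word target history plus m-word source window for target position i."""
    a_i = affiliation(i, alignment, len(target))
    history = [target[j] if j >= 0 else "<trg>" for j in range(i - n + 1, i)]
    half = (m - 1) // 2
    window = [source[j] if 0 <= j < len(source) else ("<src>" if j < 0 else "</src>")
              for j in range(a_i - half, a_i + half + 1)]
    return history + window
```

Each returned token would then be mapped to its embedding to form the 14-word input vector described above.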
We chose these values for the hidden layer size, vocabulary size, and source window size because they seemed to work best on our data sets – larger sizes did not improve results, while smaller sizes degraded results. Empirical comparisons are given in Section 6.5.

Figure 1: Context vector for target word "the", using a 3-word target history and a 5-word source window (i.e., n = 4 and m = 5). Here, "the" inherits its affiliation from "money" because this is the first aligned word to its right. The number in each box denotes the index of the word in the context vector. This indexing must be consistent across samples, but the absolute ordering does not affect results.

2.2 Neural Network Training

The training procedure is identical to that of an NNLM, except that the parallel corpus is used instead of a monolingual corpus. Formally, we seek to maximize the log-likelihood of the training data:

L = Σ_i log(P(x_i))

where x_i is a training sample, with one sample for every target word in the parallel corpus. Optimization is performed using standard back-propagation with stochastic gradient ascent (LeCun et al., 1998). Weights are randomly initialized in the range of [−0.05, 0.05]. We use an initial learning rate of 10^-3 and a minibatch size of 128 (we do not divide the gradient by the minibatch size; for those who do, this is equivalent to using an initial learning rate of 10^-3 × 128 ≈ 10^-1). At every epoch, which we define as 20,000 minibatches, the likelihood of a validation set is computed. If this likelihood is worse than in the previous epoch, the learning rate is multiplied by 0.5. The training is run for 40 epochs. The training data ranges from 10-30M words, depending on the condition. We perform a basic weight update with no L2 regularization or momentum. However, we have found it beneficial to clip each weight update to the range of [-0.1, 0.1], to prevent the training from entering degenerate search spaces (Pascanu et al., 2012). Training is performed on a single Tesla K10 GPU, with each epoch (128*20k = 2.6M samples) taking roughly 1100 seconds to run, resulting in a total training time of ∼12 hours. Decoding is performed on a CPU.

2.3 Self-Normalized Neural Network

The computational cost of NNLMs is a significant issue in decoding, and this cost is dominated by the output softmax over the entire target vocabulary. Even class-based approaches such as Le et al. (2012) require a 2-20k shortlist vocabulary, and are therefore still quite costly. Here, our goal is to be able to use a fairly large vocabulary without word classes, and to simply avoid computing the entire output layer at decode time (we are not concerned with speeding up training time, as we already find GPU training time to be adequate). To do this, we present the novel technique of self-normalization, where the output layer scores are close to being probabilities without explicitly performing a softmax. Formally, we define the standard softmax log-likelihood as:

log(P(x)) = log(e^{U_r(x)} / Z(x)) = U_r(x) − log(Z(x))

Z(x) = Σ_{r′=1..|V|} e^{U_{r′}(x)}

where x is the sample, U is the matrix of raw output layer scores, r is the output layer row corresponding to the observed target word, and Z(x) is the softmax normalizer. If we could guarantee that log(Z(x)) were always equal to 0 (i.e., Z(x) = 1), then at decode time we would only have to compute row r of the output layer instead of the whole matrix.
While we cannot train a neural network with this guarantee, we can explicitly encourage the log-softmax normalizer to be as close to 0 as possible by augmenting our training objective function:

L = Σ_i [log(P(x_i)) − α(log(Z(x_i)) − 0)^2] = Σ_i [log(P(x_i)) − α log^2(Z(x_i))]

In this case, the output layer bias weights are initialized to log(1/|V|), so that the initial network is self-normalized. At decode time, we simply use U_r(x) as the feature score, rather than log(P(x)). For our NNJM architecture, self-normalization increases the lookup speed during decoding by a factor of ∼15x.

Table 1 shows the neural network training results with various values of the free parameter α. In all subsequent MT experiments, we use α = 10^-1.

Arabic BOLT Val
α log(P(x)) |log(Z(x))|
0 −1.82 5.02
10^-2 −1.81 1.35
10^-1 −1.83 0.68
1 −1.91 0.28
Table 1: Comparison of neural network likelihood for various α values. log(P(x)) is the average log-likelihood on a held-out set. |log(Z(x))| is the mean error in log-likelihood when using U_r(x) directly instead of the true softmax probability log(P(x)). Note that α = 0 is equivalent to the standard neural network objective function.

We should note that Vaswani et al. (2013) implement a method called Noise Contrastive Estimation (NCE) that is also used to train self-normalized NNLMs. Although NCE results in faster training time, it has the downside that there is no mechanism to control the degree of self-normalization. By contrast, our α parameter allows us to carefully choose the optimal trade-off between neural network accuracy and mean self-normalization error. In future work, we will thoroughly compare self-normalization vs. NCE.

2.4 Pre-Computing the Hidden Layer

Although self-normalization significantly improves the speed of NNJM lookups, the model is still several orders of magnitude slower than a back-off LM. Here, we present a "trick" for pre-computing the first hidden layer, which further increases the speed of NNJM lookups by a factor of 1,000x. Note that this technique only results in a significant speedup for self-normalized, feed-forward, NNLM-style networks with one hidden layer. We demonstrate in Section 6.6 that using one hidden layer instead of two has minimal effect on BLEU.

For the neural network described in Section 2.1, computing the first hidden layer requires multiplying a 2689-dimensional input vector (2689 = 14 words × 192 dimensions + 1 bias) with a 2689 × 512 dimensional hidden layer matrix. However, note that there are only 3 possible positions for each target word, and 11 for each source word. Therefore, for every word in the vocabulary, and for each position, we can pre-compute the dot product between the word embedding and the first hidden layer. These are computed offline and stored in a lookup table, which is <500MB in size. Computing the first hidden layer now only requires 15 scalar additions for each of the 512 hidden rows – one for each word in the input vector, plus the bias. This can be reduced to just 5 scalar additions by pre-summing each 11-word source window when starting a test sentence.
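The pre-computation just described can be sketched as follows (Python with NumPy; hypothetical function names and a deliberately simplified table layout — in practice source and target positions only need their own sub-vocabularies, and the bias is kept as a separate vector rather than folded into the 2689-dimensional input). The per-lookup arithmetic that remains after this table lookup is quantified in the next paragraph.

```python
import numpy as np

def precompute_first_layer(emb, W1, n_positions=14, dim=192):
    """emb: (V, dim) shared word embeddings; W1: (n_positions*dim, hidden) first-layer
    weights. Returns a (n_positions, V, hidden) table where table[p, w] is the
    contribution of word w appearing in context position p."""
    V, hidden = emb.shape[0], W1.shape[1]
    table = np.empty((n_positions, V, hidden), dtype=np.float32)
    for p in range(n_positions):
        block = W1[p * dim:(p + 1) * dim, :]     # rows of W1 touching position p
        table[p] = emb @ block                   # all V dot products for position p
    return table

def nnjm_score(context_ids, table, b1, out_row, out_bias):
    """Self-normalized score U_r(x) for one n-gram lookup: sum the pre-computed
    rows (one vector addition per context position, plus the bias),
    apply tanh once, and take a single output-layer row."""
    h = b1.copy()
    for p, w in enumerate(context_ids):
        h += table[p, w]
    return float(np.tanh(h) @ out_row + out_bias)
```

Pre-summing the 11 source-position rows once per test sentence, as noted above, turns the 15 additions into 5 (the 3 target positions, the pre-summed source window, and the bias).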
If our neural network has only one hidden layer and is self-normalized, the only remaining computation is 512 calls to tanh() and a single 513-dimensional dot product for the final output score.6 Thus, only ∼3500 arithmetic operations are required per n-gram lookup, compared to ∼2.8M for self-normalized NNJM without precomputation, and ∼35M for the standard NNJM.7 Neural Network Speed Condition lookups/sec sec/word Standard 110 10.9 + Self-Norm 1500 0.8 + Pre-Computation 1,430,000 0.0008 Table 2: Speed of the neural network computation on a single CPU thread. “lookups/sec” is the number of unique n-gram probabilities that can be computed per second. “sec/word” is the amortized cost of unique NNJM lookups in decoding, per source word. Table 2 shows the speed of self-normalization and pre-computation for the NNJM. The decoding cost is based on a measurement of ∼1200 unique NNJM lookups per source word for our ArabicEnglish system.8 By combining self-normalization and precomputation, we can achieve a speed of 1.4M lookups/second, which is on par with fast backoff LM implementations (Tanaka et al., 2013). We demonstrate in Section 6.6 that using the selfnormalized/pre-computed NNJM results in only a very small BLEU degradation compared to the standard NNJM. 3 Decoding with the NNJM Because our NNJM is fundamentally an n-gram NNLM with additional source context, it can easily be integrated into any SMT decoder. In this section, we describe the considerations that must be taken when integrating the NNJM into a hierarchical decoder. 6tanh() is implemented using a lookup table. 73500 ≈5 × 512 + 2 × 513; 2.8M ≈2 × 2689 × 512 + 2 × 513; 35M ≈2 × 2689 × 512 + 2 × 513 × 32000. For the sake of a fair comparison, these all use one hidden layer. A second hidden layer adds 0.5M floating point operations. 8This does not include the cost of duplicate lookups within the same test sentence, which are cached. 1373 3.1 Hierarchical Parsing When performing hierarchical decoding with an n-gram LM, the leftmost and rightmost n −1 words from each constituent must be stored in the state space. Here, we extend the state space to also include the index of the affiliated source word for these edge words. This does not noticeably increase the search space. We also train a separate lower-order n-gram model, which is necessary to compute estimate scores during hierarchical decoding. 3.2 Affiliation Heuristic For aligned target words, the normal affiliation heuristic can be used, since the word alignment is available within the rule. For unaligned words, the normal heuristic can also be used, except when the word is on the edge of a rule, because then the target neighbor words are not necessarily known. In this case, we infer the affiliation from the rule structure. Specifically, if unaligned target word t is on the right edge of an arc that covers source span [si, sj], we simply say that t is affiliated with source word sj. If t is on the left edge of the arc, we say it is affiliated with si. 4 Model Variations Recall that our NNJM feature can be described with the following probability: Π|T| i=1P(ti|ti−1, ti−2, · · · , sai, sai−1, sai+1, · · · ) This formulation lends itself to several natural variations. In particular, we can reverse the translation direction of the languages, as well as the direction of the language model. We denote our original formulation as a sourceto-target, left-to-right model (S2T/L2R). 
We can train three variations using target-to-source (T2S) and right-to-left (R2L) models: S2T/R2L Π|T| i=1P(ti|ti+1, ti+2, · · · , sai, sai−1, sai+1, · · · ) T2S/L2R Π|S| i=1P(si|si−1, si−2, · · · , ta′ i, ta′ i−1, ta′ i+1, · · · ) T2S/R2L Π|S| i=1P(si|si+1, si+2, · · · , ta′ i, ta′ i−1, ta′ i+1, · · · ) where a′ i is the target-to-source affiliation, defined analogously to ai. The T2S variations cannot be used in decoding due to the large target context required, and are thus only used in k-best rescoring. The S2T/R2L variant could be used in decoding, but we have not found this beneficial, so we only use it in rescoring. 4.1 Neural Network Lexical Translation Model (NNLTM) One issue with the S2T NNJM is that the probability is computed over every target word, so it does not explicitly model NULL-aligned source words. In order to assign a probability to every source word during decoding, we also train a neural network lexical translation model (NNLMT). Here, the input context is the 11-word source window centered at si, and the output is the target token tsi which si aligns to. The probability is computed over every source word in the input sentence. We treat NULL as a normal target word, and if a source word aligns to multiple target words, it is treated as a single concatenated token. Formally, the probability model is: Π|S| i=1P(tsi|si, si−1, si+1, · · · ) This model is trained and evaluated like our NNJM. It is easy and computationally inexpensive to use this model in decoding, since only one neural network computation must be made for each source word. In rescoring, we also use a T2S NNLTM model computed over every target word: Π|T| i=1P(sti|ti, ti−1, ti+1, · · · ) 5 MT System In this section, we describe the MT system used in our experiments. 5.1 MT Decoder We use a state-of-the-art string-to-dependency hierarchical decoder (Shen et al., 2010). Our baseline decoder contains a large and powerful set of features, which include: • Forward and backward rule probabilities • 4-gram Kneser-Ney LM • Dependency LM (Shen et al., 2010) • Contextual lexical smoothing (Devlin, 2009) • Length distribution (Shen et al., 2010) • Trait features (Devlin and Matsoukas, 2012) • Factored source syntax (Huang et al., 2013) • 7 sparse feature types, totaling 50k features (Chiang et al., 2009) • LM adaptation (Snover et al., 2008) 1374 We also perform 1000-best rescoring with the following features: • 5-gram Kneser-Ney LM • Recurrent neural network language model (RNNLM) (Mikolov et al., 2010) Although we consider the RNNLM to be part of our baseline, we give it special treatment in the results section because we would expect it to have the highest overlap with our NNJM. 5.2 Training and Optimization For Arabic word tokenization, we use the MADAARZ tokenizer (Habash et al., 2013) for the BOLT condition, and the Sakhr9 tokenizer for the NIST condition. For Chinese tokenization, we use a simple longest-match-first lexicon-based approach. For word alignment, we align all of the training data with both GIZA++ (Och and Ney, 2003) and NILE (Riesa et al., 2011), and concatenate the corpora together for rule extraction. For MT feature weight optimization, we use iterative k-best optimization with an ExpectedBLEU objective function (Rosti et al., 2010). 6 Experimental Results We present MT primary results on Arabic-English and Chinese-English for the NIST OpenMT12 and DARPA BOLT conditions. We also present a set of auxiliary results in order to further analyze our features. 
6.1 NIST OpenMT12 Results Our NIST system is fully compatible with the OpenMT12 constrained track, which consists of 10M words of high-quality parallel training for Arabic, and 25M words for Chinese.10 The Kneser-Ney LM is trained on 5B words of data from English GigaWord. For test, we use the “Arabic-To-English Original Progress Test” (1378 segments) and “Chinese-to-English Original Progress Test + OpenMT12 Current Test” (2190 segments), which consists of a mix of newswire and web data.11 All test segments have 4 references. Our tuning set contains 5000 segments, and is a mix of the MT02-05 eval set as well as held-out parallel training. 9http://www.sakhr.com 10We also make weak use of 30M-100M words of UN data + ISI comparable corpora, but this data provides almost no benefit. 11http://www.nist.gov/itl/iad/mig/openmt12results.cfm NIST MT12 Test Ar-En Ch-En BLEU BLEU OpenMT12 - 1st Place 49.5 32.6 OpenMT12 - 2nd Place 47.5 32.2 OpenMT12 - 3rd Place 47.4 30.8 · · · · · · · · · OpenMT12 - 9th Place 44.0 27.0 OpenMT12 - 10th Place 41.2 25.7 Baseline (w/o RNNLM) 48.9 33.0 Baseline (w/ RNNLM) 49.8 33.4 + S2T/L2R NNJM (Dec) 51.2 34.2 + S2T NNLTM (Dec) 52.0 34.2 + T2S NNLTM (Resc) 51.9 34.2 + S2T/R2L NNJM (Resc) 52.2 34.3 + T2S/L2R NNJM (Resc) 52.3 34.5 + T2S/R2L NNJM (Resc) 52.8 34.7 “Simple Hier.” Baseline 43.4 30.1 + S2T/L2R NNJM (Dec) 47.2 31.5 + S2T NNLTM (Dec) 48.5 31.8 + Other NNJMs (Resc) 49.7 32.2 Table 3: Primary results on Arabic-English and Chinese-English NIST MT12 Test Set. The first section corresponds to the top and bottom ranked systems from the evaluation, and are taken from the NIST website. The second section corresponds to results on top of our strongest baseline. The third section corresponds to results on top of a simpler baseline. Within each section, each row includes all of the features from previous rows. BLEU scores are mixed-case. Results are shown in the second section of Table 3. On Arabic-English, the primary S2T/L2R NNJM gains +1.4 BLEU on top of our baseline, while the S2T NNLTM gains another +0.8, and the directional variations gain +0.8 BLEU more. This leads to a total improvement of +3.0 BLEU from the NNJM and its variations. Considering that our baseline is already +0.3 BLEU better than the 1st place result of MT12 and contains a strong RNNLM, we consider this to be quite an extraordinary improvement.12 For the Chinese-English condition, there is an improvement of +0.8 BLEU from the primary NNJM and +1.3 BLEU overall. Here, the baseline system is already +0.8 BLEU better than the 12Note that the official 1st place OpenMT12 result was our own system, so we can assure that these comparisons are accurate. 1375 best MT12 system. The smaller improvement on Chinese-English compared to Arabic-English is consistent with the behavior of our baseline features, as we show in the next section. 6.2 “Simple Hierarchical” NIST Results The baseline used in the last section is a highlyengineered research system, which uses a wide array of features that were refined over a number of years, and some of which require linguistic resources. Because of this, the baseline BLEU scores are much higher than a typical MT system – especially a real-time, production engine which must support many language pairs. Therefore, we also present results using a simpler version of our decoder which emulates Chiang’s original Hiero implementation (Chiang, 2007). 
Specifically, this means that we don’t use dependency-based rule extraction, and our decoder only contains the following MT features: (1) rule probabilities, (2) n-gram Kneser-Ney LM, (3) lexical smoothing, (4) target word count, (5) concat rule penalty. Results are shown in the third section of Table 3. The “Simple Hierarchical” Arabic-English system is -6.4 BLEU worse than our strong baseline, and would have ranked 10th place out of 11 systems in the evaluation. When the NNJM features are added to this system, we see an improvement of +6.3 BLEU, which would have ranked 1st place in the evaluation. Effectively, this means that for Arabic-English, the NNJM features are equivalent to the combined improvements from the string-to-dependency model plus all of the features listed in Section 5.1. For Chinese-English, the “Simple Hierarchical” system only degrades by -3.2 BLEU compared to our strongest baseline, and the NNJM features produce a gain of +2.1 BLEU on top of that. 6.3 BOLT Web Forum Results DARPA BOLT is a major research project with the goal of improving translation of informal, dialectical Arabic and Chinese into English. The BOLT domain presented here is “web forum,” which was crawled from various Chinese and Egyptian Internet forums by LDC. The BOLT parallel training consists of all of the high-quality NIST training, plus an additional 3 million words of translated forum data provided by LDC. The tuning and test sets consist of roughly 5000 segments each, with 2 references for Arabic and 3 for Chinese. Results are shown in Table 4. The baseline here uses the same feature set as the strong NIST system. On Arabic, the total gain is +2.6 BLEU, while on Chinese, the gain is +1.3 BLEU. BOLT Test Ar-En Ch-En BLEU BLEU Baseline (w/o RNNLM) 40.2 30.6 Baseline (w/ RNNLM) 41.3 30.9 + S2T/L2R NNJM (Dec) 42.9 31.9 + S2T NNLTM (Dec) 43.2 31.9 + Other NNJMs (Resc) 43.9 32.2 Table 4: Primary results on Arabic-English and Chinese-English BOLT Web Forum. Each row includes the aggregate features from all previous rows. 6.4 Effect of k-best Rescoring Only Table 5 shows performance when our S2T/L2R NNJM is used only in 1000-best rescoring, compared to decoding. The primary purpose of this is as a comparison to Le et al. (2012), whose model can only be used in k-best rescoring. BOLT Test Ar-En Without With RNNLM RNNLM BLEU BLEU Baseline 40.2 41.3 S2T/L2R NNJM (Resc) 41.7 41.6 S2T/L2R NNJM (Dec) 42.8 42.9 Table 5: Comparison of our primary NNJM in decoding vs. 1000-best rescoring. We can see that the rescoring-only NNJM performs very well when used on top of a baseline without an RNNLM (+1.5 BLEU), but the gain on top of the RNNLM is very small (+0.3 BLEU). The gain from the decoding NNJM is large in both cases (+2.6 BLEU w/o RNNLM, +1.6 BLEU w/ RNNLM). This demonstrates that the full power of the NNJM can only be harnessed when it is used in decoding. It is also interesting to see that the RNNLM is no longer beneficial when the NNJM is used. 1376 6.5 Effect of Neural Network Configuration Table 6 shows results using the S2T/L2R NNJM with various configurations. We can see that reducing the source window size, layer size, or vocab size will all degrade results. Increasing the sizes beyond the default NNJM has almost no effect (102%). Also note that the target-only NNLM (i.e., Source Window=0) only obtains 33% of the improvements of the NNJM. 
BOLT Test Ar-En BLEU % Gain “Simple Hier.” Baseline 33.8 S2T/L2R NNJM (Dec) 38.4 100% Source Window=7 38.3 98% Source Window=5 38.2 96% Source Window=3 37.8 87% Source Window=0 35.3 33% Layers=384x768x768 38.5 102% Layers=192x512 38.1 93% Layers=128x128 37.1 72% Vocab=64,000 38.5 102% Vocab=16,000 38.1 93% Vocab=8,000 37.3 83% Activation=Rectified Lin. 38.5 102% Activation=Linear 37.3 76% Table 6: Results with different neural network architectures. The “default” NNJM in the second row uses these parameters: SW=11, L=192x512x512, V=32,000, A=tanh. All models use a 3-word target history (i.e., 4-gram LM). “Layers” refers to the size of the word embedding followed by the hidden layers. “Vocab” refers to the size of the input and output vocabularies. “% Gain” is the BLEU gain over the baseline relative to the default NNJM. 6.6 Effect of Speedups All previous results use a self-normalized neural network with two hidden layers. In Table 7, we compare this to using a standard network (with two hidden layers), as well as a pre-computed neural network.13 The “Simple Hierarchical” baseline is used here because it more closely approximates a real-time MT engine. For the sake of speed, these experiments only use the S2T/L2R NNJM+S2T NNLTM. 13The difference in score for self-normalized vs. precomputed is entirely due to two vs. one hidden layers. Each result from Table 7 corresponds to a row in Table 2 of Section 2.4. We can see that going from the standard model to the pre-computed model only reduces the BLEU improvement from +6.4 to +6.1, while increasing the NNJM lookup speed by a factor of 10,000x. BOLT Test Ar-En BLEU Gain “Simple Hier.” Baseline 33.8 Standard NNJM 40.2 +6.4 Self-Norm NNJM 40.1 +6.3 Pre-Computed NNJM 39.9 +6.1 Table 7: Results for the standard NNs vs. selfnormalized NNs vs. pre-computed NNs. In Table 2 we showed that the cost of unique lookups for the pre-computed NNJM is only ∼0.001 seconds per source word. This does not include the cost of n-gram creation or cached lookups, which amount to ∼0.03 seconds per source word in our current implementation.14 However, the n-grams created for the NNJM can be shared with the Kneser-Ney LM, which reduces the cost of that feature. Thus, the total cost increase of using the NNJM+NNLTM features in decoding is only ∼0.01 seconds per source word. In future work we will provide more detailed analysis regarding the usability of the NNJM in a low-latency, high-throughput MT engine. 7 Related Work Although there has been a substantial amount of past work in lexicalized joint models (Marino et al., 2006; Crego and Yvon, 2010), nearly all of these papers have used older statistical techniques such as Kneser-Ney or Maximum Entropy. However, not only are these techniques intractable to train with high-order context vectors, they also lack the neural network’s ability to semantically generalize (Mikolov et al., 2013) and learn nonlinear relationships. A number of recent papers have proposed methods for creating neural network translation/joint models, but nearly all of these works have obtained much smaller BLEU improvements than ours. For each related paper, we will briefly con14In our decoder, roughly 95% of NNJM n-gram lookups within the same sentence are duplicates. 1377 trast their methodology with our own and summarize their BLEU improvements using scores taken directly from the cited paper. Auli et al. (2013) use a fixed continuous-space source representation, obtained from LDA (Blei et al., 2003) or a source-only NNLM. 
Also, their model is recurrent, so it cannot be used in decoding. They obtain +0.2 BLEU improvement on top of a target-only NNLM (25.6 vs. 25.8). Schwenk (2012) predicts an entire target phrase at a time, rather than a word at a time. He obtains +0.3 BLEU improvement (24.8 vs. 25.1). Zou et al. (2013) estimate context-free bilingual lexical similarity scores, rather than using a large context. They obtain an +0.5 BLEU improvement on Chinese-English (30.0 vs. 30.5). Kalchbrenner and Blunsom (2013) implement a convolutional recurrent NNJM. They score a 1000-best list using only their model and are able to achieve the same BLEU as using all 12 standard MT features (21.8 vs 21.7). However, additive results are not presented. The most similar work that we know of is Le et al. (2012). Le’s basic procedure is to re-order the source to match the linear order of the target, and then segment the hypothesis into minimal bilingual phrase pairs. Then, he predicts each target word given the previous bilingual phrases. However, Le’s formulation could only be used in kbest rescoring, since it requires long-distance reordering and a large target context. Le’s model does obtain an impressive +1.7 BLEU gain on top of a baseline without an NNLM (25.8 vs. 27.5). However, when compared to the strongest baseline which includes an NNLM, Le’s best models (S2T + T2S) only obtain an +0.6 BLEU improvement (26.9 vs. 27.5). This is consistent with our rescoring-only result, which indicates that k-best rescoring is too shallow to take advantage of the power of a joint model. Le’s model also uses minimal phrases rather than being purely lexicalized, which has two main downsides: (a) a number of complex, hand-crafted heuristics are required to define phrase boundaries, which may not transfer well to new languages, (b) the effective vocabulary size is much larger, which substantially increases data sparsity issues. We should note that our best results use six separate models, whereas all previous work only uses one or two models. However, we have demonstrated that we can obtain 50%-80% of the total improvement with only one model (S2T/L2R NNJM), and 70%-90% with only two models (S2T/L2R NNJM + S2T NNLTM). Thus, the one and two-model conditions still significantly outperform any past work. 8 Discussion We have described a novel formulation for a neural network-based machine translation joint model, along with several simple variations of this model. When used as MT decoding features, these models are able to produce a gain of +3.0 BLEU on top of a very strong and feature-rich baseline, as well as a +6.3 BLEU gain on top of a simpler system. Our model is remarkably simple – it requires no linguistic resources, no feature engineering, and only a handful of hyper-parameters. It also has no reliance on potentially fragile outside algorithms, such as unsupervised word clustering. We consider the simplicity to be a major advantage. Not only does this suggest that it will generalize well to new language pairs and domains, but it also suggests that it will be straightforward for others to replicate these results. Overall, we believe that the following factors set us apart from past work and allowed us to obtain such significant improvements: 1. The ability to use the NNJM in decoding rather than rescoring. 2. The use of a large bilingual context vector, which is provided to the neural network in “raw” form, rather than as the output of some other algorithm. 3. 
The fact that the model is purely lexicalized, which avoids both data sparsity and implementation complexity. 4. The large size of the network architecture. 5. The directional variation models. One of the biggest goals of this work is to quell any remaining doubts about the utility of neural networks in machine translation. We believe that there are large areas of research yet to be explored. For example, creating a new type of decoder centered around a purely lexicalized neural network model. Our short term ideas include using more interesting types of context in our input vector (such as source syntax), or using the NNJM to model syntactic/semantic structure of the target. 1378 References Michael Auli, Michel Galley, Chris Quirk, and Geoffrey Zweig. 2013. Joint language and translation modeling with recurrent neural networks. In Proceedings of the 2013 Conference on Empirical Methods in Natural Language Processing, pages 1044– 1054, Seattle, Washington, USA, October. Association for Computational Linguistics. Yoshua Bengio, R´ejean Ducharme, Pascal Vincent, and Christian Jauvin. 2003. A neural probabilistic language model. Journal of Machine Learning Research, 3:1137–1155. David M. Blei, Andrew Y. Ng, and Michael I. Jordan. 2003. Latent dirichlet allocation. J. Mach. Learn. Res., 3:993–1022, March. David Chiang, Kevin Knight, and Wei Wang. 2009. 11,001 new features for statistical machine translation. In HLT-NAACL, pages 218–226. David Chiang. 2007. Hierarchical phrase-based translation. Computational Linguistics, 33(2):201–228. Josep Maria Crego and Franc¸ois Yvon. 2010. Factored bilingual n-gram language models for statistical machine translation. Machine Translation, 24(2):159– 175. Jacob Devlin and Spyros Matsoukas. 2012. Traitbased hypothesis selection for machine translation. In Proceedings of the 2012 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, NAACL HLT ’12, pages 528–532, Stroudsburg, PA, USA. Association for Computational Linguistics. Jacob Devlin. 2009. Lexical features for statistical machine translation. Master’s thesis, University of Maryland. Nizar Habash, Ryan Roth, Owen Rambow, Ramy Eskander, and Nadi Tomeh. 2013. Morphological analysis and disambiguation for dialectal arabic. In HLT-NAACL, pages 426–432. Zhongqiang Huang, Jacob Devlin, and Rabih Zbib. 2013. Factored soft source syntactic constraints for hierarchical machine translation. In EMNLP, pages 556–566. Nal Kalchbrenner and Phil Blunsom. 2013. Recurrent continuous translation models. Reinhard Kneser and Hermann Ney. 1995. Improved backing-off for m-gram language modeling. In Acoustics, Speech, and Signal Processing, 1995. ICASSP-95., 1995 International Conference on, volume 1, pages 181–184. IEEE. Hai-Son Le, Alexandre Allauzen, and Franc¸ois Yvon. 2012. Continuous space translation models with neural networks. In Proceedings of the 2012 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, NAACL HLT ’12, pages 39– 48, Stroudsburg, PA, USA. Association for Computational Linguistics. Yann LeCun, L´eon Bottou, Genevieve B Orr, and Klaus-Robert M¨uller. 1998. Efficient backprop. In Neural networks: Tricks of the trade, pages 9–50. Springer. Jos´e B Marino, Rafael E Banchs, Josep M Crego, Adri`a De Gispert, Patrik Lambert, Jos´e AR Fonollosa, and Marta R Costa-Juss`a. 2006. N-gram-based machine translation. Computational Linguistics, 32(4):527– 549. 
Tomas Mikolov, Martin Karafi´at, Lukas Burget, Jan Cernock´y, and Sanjeev Khudanpur. 2010. Recurrent neural network based language model. In INTERSPEECH, pages 1045–1048. Tomas Mikolov, Wen tau Yih, and Geoffrey Zweig. 2013. Linguistic regularities in continuous space word representations. In HLT-NAACL, pages 746– 751. Franz Josef Och and Hermann Ney. 2003. A systematic comparison of various statistical alignment models. Computational Linguistics, 29(1):19–51. Razvan Pascanu, Tomas Mikolov, and Yoshua Bengio. 2012. On the difficulty of training recurrent neural networks. arXiv preprint arXiv:1211.5063. Jason Riesa, Ann Irvine, and Daniel Marcu. 2011. Feature-rich language-independent syntax-based alignment for statistical machine translation. In Proceedings of the Conference on Empirical Methods in Natural Language Processing, EMNLP ’11, pages 497–507, Stroudsburg, PA, USA. Association for Computational Linguistics. Ronald Rosenfeld. 1996. A maximum entropy approach to adaptive statistical language modeling. Computer, Speech and Language, 10:187–228. Antti Rosti, Bing Zhang, Spyros Matsoukas, and Rich Schwartz. 2010. BBN system description for WMT10 system combination task. In WMT/MetricsMATR, pages 321–326. Holger Schwenk. 2010. Continuous-space language models for statistical machine translation. Prague Bull. Math. Linguistics, 93:137–146. Holger Schwenk. 2012. Continuous space translation models for phrase-based statistical machine translation. In COLING (Posters), pages 1071–1080. Libin Shen, Jinxi Xu, and Ralph Weischedel. 2010. String-to-dependency statistical machine translation. Computational Linguistics, 36(4):649–671, December. 1379 Matthew Snover, Bonnie Dorr, and Richard Schwartz. 2008. Language and translation model adaptation using comparable corpora. In Proceedings of the Conference on Empirical Methods in Natural Language Processing, EMNLP ’08, pages 857–866, Stroudsburg, PA, USA. Association for Computational Linguistics. Makoto Tanaka, Yasuhara Toru, Jun-ya Yamamoto, and Mikio Norimatsu. 2013. An efficient language model using double-array structures. Ashish Vaswani, Yinggong Zhao, Victoria Fossum, and David Chiang. 2013. Decoding with largescale neural language models improves translation. In Proceedings of the 2013 Conference on Empirical Methods in Natural Language Processing, pages 1387–1392, Seattle, Washington, USA, October. Association for Computational Linguistics. Will Y Zou, Richard Socher, Daniel Cer, and Christopher D Manning. 2013. Bilingual word embeddings for phrase-based machine translation. In Proceedings of the 2013 Conference on Empirical Methods in Natural Language Processing, pages 1393–1398. 1380
2014
129
Proceedings of the 52nd Annual Meeting of the Association for Computational Linguistics, pages 133–143, Baltimore, Maryland, USA, June 23-25 2014. c⃝2014 Association for Computational Linguistics Learning Topic Representation for SMT with Neural Networks∗ Lei Cui1, Dongdong Zhang2, Shujie Liu2, Qiming Chen3, Mu Li2, Ming Zhou2, and Muyun Yang1 1School of Computer Science and Technology, Harbin Institute of Technology, Harbin, P.R. China [email protected], [email protected] 2Microsoft Research, Beijing, P.R. China {dozhang,shujliu,muli,mingzhou}@microsoft.com 3Shanghai Jiao Tong University, Shanghai, P.R. China [email protected] Abstract Statistical Machine Translation (SMT) usually utilizes contextual information to disambiguate translation candidates. However, it is often limited to contexts within sentence boundaries, hence broader topical information cannot be leveraged. In this paper, we propose a novel approach to learning topic representation for parallel data using a neural network architecture, where abundant topical contexts are embedded via topic relevant monolingual data. By associating each translation rule with the topic representation, topic relevant rules are selected according to the distributional similarity with the source text during SMT decoding. Experimental results show that our method significantly improves translation accuracy in the NIST Chinese-to-English translation task compared to a state-of-the-art baseline. 1 Introduction Making translation decisions is a difficult task in many Statistical Machine Translation (SMT) systems. Current translation modeling approaches usually use context dependent information to disambiguate translation candidates. For example, translation sense disambiguation approaches (Carpuat and Wu, 2005; Carpuat and Wu, 2007) are proposed for phrase-based SMT systems. Meanwhile, for hierarchical phrase-based or syntax-based SMT systems, there is also much work involving rich contexts to guide rule selection (He et al., 2008; Liu et al., 2008; Marton and Resnik, 2008; Xiong et al., 2009). Although these methods are effective and proven successful in many SMT systems, they only leverage within∗This work was done while the first and fourth authors were visiting Microsoft Research. sentence contexts which are insufficient in exploring broader information. For example, the word driver often means “the operator of a motor vehicle” in common texts. But in the sentence “Finally, we write the user response to the buffer, i.e., pass it to our driver”, we understand that driver means “computer program”. In this case, people understand the meaning because of the IT topical context which goes beyond sentence-level analysis and requires more relevant knowledge. Therefore, it is important to leverage topic information to learn smarter translation models and achieve better translation performance. Topic modeling is a useful mechanism for discovering and characterizing various semantic concepts embedded in a collection of documents. Attempts on topic-based translation modeling include topic-specific lexicon translation models (Zhao and Xing, 2006; Zhao and Xing, 2007), topic similarity models for synchronous rules (Xiao et al., 2012), and document-level translation with topic coherence (Xiong and Zhang, 2013). In addition, topic-based approaches have been used in domain adaptation for SMT (Tam et al., 2007; Su et al., 2012), where they view different topics as different domains. 
One typical property of these approaches in common is that they only utilize parallel data where document boundaries are explicitly given. In this way, the topic of a sentence can be inferred with document-level information using off-the-shelf topic modeling toolkits such as Latent Dirichlet Allocation (LDA) (Blei et al., 2003) or Hidden Topic Markov Model (HTMM) (Gruber et al., 2007). Most of them also assume that the input must be in document level. However, this situation does not always happen since there is considerable amount of parallel data which does not have document boundaries. In addition, contemporary SMT systems often works on sentence level rather than document level due to the efficiency. Although we can easily apply LDA at the 133 sentence level, it is quite difficult to infer the topic accurately with only a few words in the sentence. This makes previous approaches inefficient when applied them in real-world commercial SMT systems. Therefore, we need to devise a systematical approach to enriching the sentence and inferring its topic more accurately. In this paper, we propose a novel approach to learning topic representations for sentences. Since the information within the sentence is insufficient for topic modeling, we first enrich sentence contexts via Information Retrieval (IR) methods using content words in the sentence as queries, so that topic-related monolingual documents can be collected. These topic-related documents are utilized to learn a specific topic representation for each sentence using a neural network based approach. Neural network is an effective technique for learning different levels of data representations. The levels inferred from neural network correspond to distinct levels of concepts, where high-level representations are obtained from low-level bag-ofwords input. It is able to detect correlations among any subset of input features through non-linear transformations, which demonstrates the superiority of eliminating the effect of noisy words which are irrelevant to the topic. Our problem fits well into the neural network framework and we expect that it can further improve inferring the topic representations for sentences. To incorporate topic representations as translation knowledge into SMT, our neural network based approach directly optimizes similarities between the source language and target language in a compact topic space. This underlying topic space is learned from sentence-level parallel data in order to share topic information across the source and target languages as much as possible. Additionally, our model can be discriminatively trained with a large number of training instances, without expensive sampling methods such as in LDA or HTMM, thus it is more practicable and scalable. Finally, we associate the learned representation to each bilingual translation rule. Topic-related rules are selected according to distributional similarity with the source text, which helps hypotheses generation in SMT decoding. We integrate topic similarity features in the log-linear model and evaluate the performance on the NIST Chinese-to-English translation task. Experimental results demonstrate that our model significantly improves translation accuracy over a state-of-the-art baseline. 2 Background: Deep Learning Deep learning is an active topic in recent years which has triumphed in many machine learning research areas. 
This technique began raising public awareness in the mid-2000s after researchers showed how a multi-layer feed-forward neural network can be effectively trained. The training procedure often involves two phases: a layerwise unsupervised pre-training phase and a supervised fine-tuning phase. For pre-training, Restricted Boltzmann Machine (RBM) (Hinton et al., 2006), auto-encoding (Bengio et al., 2006) and sparse coding (Lee et al., 2006) are most frequently used. Unsupervised pre-training trains the network one layer at a time and helps guide the parameters of the layer towards better regions in parameter space (Bengio, 2009). Followed by finetuning in this parameter region, deep learning is able to achieve state-of-the-art performance in various research areas, including breakthrough results on the ImageNet dataset for objective recognition (Krizhevsky et al., 2012), significant error reduction in speech recognition (Dahl et al., 2012), etc. Deep learning has also been successfully applied in a variety of NLP tasks such as part-ofspeech tagging, chunking, named entity recognition, semantic role labeling (Collobert et al., 2011), parsing (Socher et al., 2011a), sentiment analysis (Socher et al., 2011b), etc. Most NLP research converts a high-dimensional and sparse binary representation into a low-dimensional and real-valued representation. This low-dimensional representation is usually learned from huge amount of monolingual texts in the pre-training phase, and then fine-tuned towards task-specific criterion. Inspired by previous successful research, we first learn sentence representations using topic-related monolingual texts in the pretraining phase, and then optimize the bilingual similarity by leveraging sentence-level parallel data in the fine-tuning phase. 3 Topic Similarity Model with Neural Network In this section, we explain our neural network based topic similarity model in detail, as well as how to incorporate the topic similarity features into SMT decoding procedure. Figure 1 sketches the high-level overview which illustrates how to 134 𝐳𝑓= 𝑔(𝐟) 𝐳𝑒= 𝑔(𝐞) cos(𝐳𝑓, 𝐳𝑒) 𝑠𝑖𝑚(𝑓, 𝑒) 𝑓 𝑒 English document collection 𝐝𝑓 𝐝𝑒 Parallel sentence IR IR 𝐟 𝐞 Chinese document collection Neural Network Training Data Preprocessing Figure 1: Overview of neural network based topic similarity model. learn topic representations using sentence-level parallel data. Given a parallel sentence pair ⟨f, e⟩, the first step is to treat f and e as queries, and use IR methods to retrieve relevant documents to enrich contextual information for them. Specifically, the ranking model we used is a Vector Space Model (VSM), where the query and document are converted into tf-idf weighted vectors. The most relevant N documents df and de are retrieved and converted to a high-dimensional, bag-of-words input f and e for the representation learning1. There are two phases in our neural network training process: pre-training and fine-tuning. In the pre-training phase (Section 3.1), we build two neural networks with the same structure but different parameters to learn a low-dimensional representation for sentences in two different languages. Then, in the fine-tuning phase (Section 3.2), our model directly optimizes the similarity of two lowdimensional representations, so that it highly correlates to SMT decoding. Finally, the learned representation is used to calculate similarities which are integrated as features in SMT decoding procedure (Section 3.3). 
3.1 Pre-training using denoising auto-encoder In the pre-training phase, we leverage neural network structures to transform high-dimensional sparse vectors into low-dimensional dense vectors. The topic similarity is calculated on top of the learned dense vectors. This dense representation should preserve the information from the bag-of-words input while alleviating the data sparseness problem. Therefore, we use a specially designed mechanism called an auto-encoder to solve this problem. The auto-encoder (Bengio et al., 2006) is one of the basic building blocks of deep learning. Assuming that the input is an n-of-V binary vector x representing the bag-of-words (V is the vocabulary size), an auto-encoder consists of an encoding process g(x) and a decoding process h(g(x)). The objective of the auto-encoder is to minimize the reconstruction error L(h(g(x)), x). Our goal is to learn a low-dimensional vector which can preserve information from the original n-of-V vector. One problem with the auto-encoder is that it treats all words in the same way, making no distinction between function words and content words. The representation learned by auto-encoders tends to be influenced by the function words and is therefore not robust. To alleviate this problem, Vincent et al. (2008) proposed the Denoising Auto-Encoder (DAE), which aims to reconstruct a clean, "repaired" input from a corrupted, partially destroyed vector. This is done by corrupting the initial input x to get a partially destroyed version x̃. The DAE is capable of capturing the global structure of the input while ignoring the noise. In our task, for each sentence, we treat the N retrieved relevant documents as a single large document and convert it to a bag-of-words vector x as in Figure 2. With the DAE, the input x is corrupted by applying masking noise (randomly masking 1s to 0s), yielding x̃. Denoising training can be seen as "filling in the blanks" (Vincent et al., 2010), which means the masked components can be recovered from the non-corrupted components. For example, in IT-related texts, if the word "driver" is masked, it should be predicted through the hidden units of the network from active signals such as "buffer", "user response", etc. [Figure 2: Denoising auto-encoder with a bag-of-words input: the input x is corrupted to x̃, encoded as g(x̃), decoded as h(g(x̃)), and compared to x through the loss L(h(g(x̃)), x).] In our case, the encoding process transforms the corrupted input x̃ into g(x̃) with two layers: a linear layer connected with a non-linear layer. Assuming that the dimension of g(x̃) is L, the linear layer forms an L × V matrix W which projects the n-of-V vector to an L-dimensional hidden layer. After the bag-of-words input has been transformed, it is fed into a subsequent layer to model the highly non-linear relations among words: z = f(Wx̃ + b) (1) where z is the output of the non-linear layer and b is an L-dimensional bias vector. f(·) is a non-linear function, where common choices include the sigmoid function, hyperbolic function, "hard" hyperbolic function, rectifier function, etc. In this work, we use the rectifier function as our non-linear function due to its efficiency and better performance (Glorot et al., 2011): rec(x) = x if x > 0; 0 otherwise (2) The decoding process consists of a linear layer and a non-linear layer with similar network structures but different parameters. It transforms the L-dimensional vector g(x̃) into a V-dimensional vector h(g(x̃)).
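The corruption and encoding steps just described can be sketched as follows. This is a minimal, illustrative numpy version with hypothetical parameter names and toy dimensions; the reconstruction loss it would minimize is the one defined in Eq. (3) below, and training (back-propagation, AdaGrad) is omitted.

```python
# Minimal numpy sketch of the denoising auto-encoder: masking corruption,
# a rectifier encoder, a decoder of the mirrored shape, and an L2
# reconstruction error. Toy sizes; not the authors' implementation.
import numpy as np

rng = np.random.default_rng(0)
V, L = 5000, 100                       # toy vocabulary size and hidden length

W,  b  = 0.01 * rng.standard_normal((L, V)), np.zeros(L)   # encoder parameters
W2, b2 = 0.01 * rng.standard_normal((V, L)), np.zeros(V)   # decoder parameters

def corrupt(x, mask_prob=0.3):
    """Masking noise: randomly zero out entries of the n-of-V input."""
    return x * (rng.random(x.shape) >= mask_prob)

def encode(x_tilde):
    return np.maximum(0.0, W @ x_tilde + b)        # rec(W x_tilde + b), Eq. (1)-(2)

def decode(z):
    return np.maximum(0.0, W2 @ z + b2)

def reconstruction_error(x):
    x_tilde = corrupt(x)
    return np.sum((decode(encode(x_tilde)) - x) ** 2)
```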
To minimize the reconstruction error with respect to x̃, we define the loss function as the L2-norm of the difference between the uncorrupted input and the reconstructed input: L(h(g(x̃)), x) = ∥h(g(x̃)) − x∥² (3) Multi-layer neural networks are trained with the standard back-propagation algorithm (Rumelhart et al., 1988). The gradient of the loss function is calculated and back-propagated to the previous layer to update its parameters. Training neural networks involves many factors such as the learning rate and the length of the hidden layers. We will discuss the optimization of these parameters in Section 4. 3.2 Fine-tuning with parallel data In the fine-tuning phase, we stack another layer on top of the two low-dimensional vectors to maximize the similarity between the source and target languages. The similarity scores are integrated into the standard log-linear model for making translation decisions. Since the vectors from the DAE are trained independently using information from monolingual training data, these vectors may be inadequate for measuring bilingual topic similarity due to their different topic spaces. Therefore, in this stage, parallel sentence pairs are used to help connect the vectors from the different languages, because they express the same topic. In fact, the objective of fine-tuning is to discover a latent topic space which is shared by both languages as much as possible. This shared topic space is particularly useful when the SMT decoder tries to match the source texts and translation candidates in the target language. Given a parallel sentence pair ⟨f, e⟩, the DAE learns representations for f and e respectively, as zf = g(f) and ze = g(e) in Figure 1. We then take the two vectors as input to calculate their similarity. Consequently, the whole neural network can be fine-tuned towards the supervised criterion with the help of parallel data. The similarity score of the representation pair ⟨zf, ze⟩ is defined as the cosine similarity of the two vectors: sim(f, e) = cos(zf, ze) = (zf · ze) / (∥zf∥ ∥ze∥) (4) Since a parallel sentence pair should have the same topic, our goal is to maximize the similarity score between the source sentence and the target sentence. Inspired by the contrastive estimation method (Smith and Eisner, 2005), for each parallel sentence pair ⟨f, e⟩ treated as a positive instance, we select another sentence pair ⟨f′, e′⟩ from the training data and treat ⟨f, e′⟩ as a negative instance. To make the similarity of the positive instance larger than that of the negative instance by some margin η, we utilize the following pairwise ranking loss: L(f, e) = max{0, η − sim(f, e) + sim(f, e′)} (5) where η = 1/2 − sim(f, f′). The rationale behind this criterion is that the smaller sim(f, f′) is, the more we should penalize negative instances. To effectively train the model in this task, negative instances must be selected carefully. Since different sentences may have very similar topic distributions, we select negative instances that are dissimilar to the positive instances based on the following criteria:
1. For each positive instance ⟨f, e⟩, we select an e′ which contains at least 30% different content words from e.
2. If we cannot find such an e′, we remove ⟨f, e⟩ from the training instances for network learning.
The model minimizes the pairwise ranking loss across all training instances: L = Σ_{⟨f,e⟩} L(f, e) (6) We use the standard back-propagation algorithm to further fine-tune the neural network parameters W and b in Equation (1).
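A minimal sketch of the fine-tuning criterion in Equations (4)-(6) is given below. The encoder functions and data structures are hypothetical stand-ins for the two pre-trained networks; gradient computation is omitted.

```python
# Minimal sketch of the pairwise ranking loss used in fine-tuning: a parallel
# pair <f, e> should be more similar than the mismatched pair <f, e'> by a
# dynamic margin eta = 1/2 - sim(f, f'). Illustrative only.
import numpy as np

def cosine(u, v):
    return float(u @ v) / (np.linalg.norm(u) * np.linalg.norm(v) + 1e-12)

def pair_loss(f, e, f_neg, e_neg, encode_src, encode_tgt):
    zf, ze = encode_src(f), encode_tgt(e)
    ze_neg = encode_tgt(e_neg)
    # the less similar f and f' are, the larger the required margin
    eta = 0.5 - cosine(encode_src(f), encode_src(f_neg))
    return max(0.0, eta - cosine(zf, ze) + cosine(zf, ze_neg))   # Eq. (5)

def total_loss(instances, encode_src, encode_tgt):
    # instances: list of (f, e, f_neg, e_neg) bag-of-words vectors
    return sum(pair_loss(f, e, fn, en, encode_src, encode_tgt)
               for f, e, fn, en in instances)                    # Eq. (6)
```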
The learned neural networks are used to obtain sentence topic representations, which are further leveraged to infer topic representations of bilingual translation rules. 3.3 Integration into SMT decoding We incorporate the learned topic similarity scores into the standard log-linear framework for SMT. When a synchronous rule ⟨α, γ⟩ is extracted from a sentence pair ⟨f, e⟩, a triple instance I = (⟨α, γ⟩, ⟨f, e⟩, c) is collected for inferring the topic representation of ⟨α, γ⟩, where c is the count of rule occurrences. Following Chiang (2007), we give a count of one for each phrase pair occurrence and a fractional count for each hierarchical phrase pair. The topic representation of ⟨α, γ⟩ is then calculated as the weighted average: zα = Σ_{(⟨α,γ⟩,⟨f,e⟩,c)∈T} c · zf / Σ_{(⟨α,γ⟩,⟨f,e⟩,c)∈T} c (7) and zγ = Σ_{(⟨α,γ⟩,⟨f,e⟩,c)∈T} c · ze / Σ_{(⟨α,γ⟩,⟨f,e⟩,c)∈T} c (8) where T denotes all instances for the rule ⟨α, γ⟩, and zα and zγ are the source-side and target-side topic vectors, respectively. By measuring the similarity between the source texts and the bilingual translation rules, the SMT decoder is able to encourage topic-relevant translation candidates and penalize topic-irrelevant candidates. Therefore, it helps to train a smarter translation model with the embedded topic information. Given a source sentence s to be translated, we define the similarity as follows: Sim(zs, zα) = cos(zs, zα) (9) and Sim(zs, zγ) = cos(zs, zγ) (10) where zs is the topic representation of s. The similarity calculated against zα or zγ denotes the source-to-source or the source-to-target similarity. We also consider topic sensitivity estimation, since general rules have flatter distributions while topic-specific rules have sharper distributions. A standard entropy metric is used to measure the sensitivity of the source side of ⟨α, γ⟩ as: Sen(α) = − Σ_{i=1}^{|zα|} zα_i · log zα_i (11) where zα_i is a component of the vector zα. The target-side sensitivity Sen(γ) can be calculated in a similar way. The larger the sensitivity is, the more topic-specific the rule is. In addition to the traditional SMT features, we add new topic-related features into the standard log-linear framework. For the SMT system, the best translation candidate ê is given by: ê = argmax_e P(e|f) (12) where the translation probability is given by: P(e|f) ∝ Σ_i w_i · log φ_i(f, e) = Σ_j w_j · log φ_j(f, e) [standard] + Σ_k w_k · log φ_k(f, e) [topic-related] (13) where φ_j(f, e) is a standard feature function with corresponding weight w_j, and φ_k(f, e) is a topic-related feature function with weight w_k. The detailed feature description is as follows. Standard features: translation model, including translation probabilities and lexical weights for both directions (4 features), 5-gram language model (1 feature), word count (1 feature), phrase count (1 feature), NULL penalty (1 feature), number of hierarchical rules used (1 feature). Topic-related features: rule similarity scores (2 features), rule sensitivity scores (2 features). (A short code sketch of these rule-level quantities is given below.) 4 Experiments 4.1 Setup We evaluate the performance of our neural network based topic similarity model on a Chinese-to-English machine translation task. In neural network training, a large number of monolingual documents are collected in both the source and target languages. The documents are mainly from two domains: news and weblog. We use the Chinese and English Gigaword corpora (Version 5), which are mainly from the news domain. In addition, we also collect weblog documents with a variety of topics from the web.
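As referenced in Section 3.3 above, here is a brief illustrative sketch of the rule-level quantities: the weighted-average topic vectors of a synchronous rule (Eq. 7-8), the rule-versus-sentence cosine similarity (Eq. 9-10), and the entropy-based sensitivity (Eq. 11). The data structures are hypothetical, not the decoder's actual representation.

```python
# Illustrative sketch of the rule-level statistics from Section 3.3.
import numpy as np

def rule_topic_vectors(instances):
    """instances: list of (z_f, z_e, count) collected for one rule <alpha, gamma>."""
    total = sum(c for _, _, c in instances)
    z_alpha = sum(c * zf for zf, _, c in instances) / total    # Eq. (7)
    z_gamma = sum(c * ze for _, ze, c in instances) / total    # Eq. (8)
    return z_alpha, z_gamma

def similarity(z_s, z_rule):
    """Cosine similarity between the source-sentence and rule topic vectors."""
    return float(z_s @ z_rule) / (np.linalg.norm(z_s) * np.linalg.norm(z_rule) + 1e-12)

def sensitivity(z_rule):
    """Entropy of the (nonnegative) topic vector, normalized to a distribution (Eq. 11)."""
    p = np.clip(z_rule / z_rule.sum(), 1e-12, 1.0)
    return float(-(p * np.log(p)).sum())
```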
The total data statistics are presented in Table 1. These documents are built into an inverted index using Lucene (http://lucene.apache.org/), so that they can be efficiently retrieved for the parallel sentence pairs. The most relevant N documents are collected, where we experiment with N = {1, 5, 10, 20, 50}.
Table 1: Statistics of monolingual data, in numbers of documents and words (main content). "M" refers to million and "B" refers to billion.
Domain | Chinese Docs | Chinese Words | English Docs | English Words
News   | 5.7M         | 5.4B          | 9.9M         | 25.6B
Weblog | 2.1M         | 8B            | 1.2M         | 2.9B
Total  | 7.8M         | 13.4B         | 11.1M        | 28.5B
We implement a distributed framework to speed up the training process of the neural networks. The network is learned with mini-batch asynchronous gradient descent with the adaptive learning rate procedure AdaGrad (Duchi et al., 2011). We use 32 model replicas in each iteration during training. The model parameters are averaged after each iteration and sent to each replica for the next iteration. The vocabulary size for the input layer is 100,000, and we experiment with different lengths for the hidden layer, L = {100, 300, 600, 1000}. In the pre-training phase, all parallel data is fed into the two neural networks respectively for DAE training, where the network parameters W and b are randomly initialized. In the fine-tuning phase, for each parallel sentence pair, we randomly select ten other sentence pairs which satisfy the criterion as negative instances. These training instances are leveraged to optimize the similarity of the two vectors. In SMT training, an in-house hierarchical phrase-based SMT decoder is implemented for our experiments. The CKY decoding algorithm is used and cube pruning is performed with the same default parameter settings as in Chiang (2007). The parallel data we use is released by LDC (LDC2003E14, LDC2002E18, LDC2003E07, LDC2005T06, LDC2005T10, LDC2005E83, LDC2006E34, LDC2006E85, LDC2006E92, LDC2006E26, LDC2007T09). In total, the datasets contain nearly 1.1 million sentence pairs. Translation models are trained over the parallel data, which is automatically word-aligned using GIZA++ in both directions; the grow-diag-final heuristic is used to refine the symmetric word alignment. An in-house language modeling toolkit is used to train the 5-gram language model with modified Kneser-Ney smoothing (Kneser and Ney, 1995). The English monolingual data used for language modeling is the same as in Table 1. The NIST 2003 dataset is used as the development data. The testing data consists of the NIST 2004, 2005, 2006 and 2008 datasets. The evaluation metric for the overall translation quality is case-insensitive BLEU4 (Papineni et al., 2002). The reported BLEU scores are averaged over 5 runs of MERT (Och, 2003). A statistical significance test is performed using the bootstrap resampling method (Koehn, 2004). 4.2 Baseline The baseline is a re-implementation of the Hiero system (Chiang, 2007). Phrase pairs that appear only once in the parallel data are discarded because most of them are noisy. We also use the fix-discount method of Foster et al. (2006) for phrase table smoothing. This implementation makes the system perform much better and the translation model much smaller. We compare our method with the LDA-based approach proposed by Xiao et al. (2012). In (Xiao et al., 2012), the topic of each sentence pair is exactly that of the document it belongs to.
Since some of our parallel data does not have document-level information, we rely on the IR method to retrieve the most relevant document and simulate this approach. The PLDA toolkit (Liu et al., 2011) is used to infer topic distributions, which takes 34.5 hours to finish. 4.3 Effect of retrieved documents and length of hidden layers We illustrate the relationship among translation accuracy (BLEU), the number of retrieved documents (N) and the length of the hidden layers (L) on the different testing datasets. The results are shown in Figure 3. [Figure 3: End-to-end translation results (BLEU%) using all standard and topic-related features, with different settings of the number of retrieved documents N and the length of hidden layers L (100, 300, 600, 1000); one panel each for the NIST 2004, 2005, 2006 and 2008 datasets.] The best translation accuracy is achieved with N=10 for most settings. This confirms that enriching the source text with topic-related documents is very useful for determining topic representations, thereby helping to guide synchronous rule selection. However, we find that as N becomes larger in the experiments, e.g. N=50, the translation accuracy drops drastically. As more documents are retrieved, less relevant information is also used to train the neural networks. Irrelevant documents introduce many unrelated topic words and hence degrade the neural network learning performance. Another important factor is the length of the hidden layers L in the network. In deep learning, this parameter is often tuned empirically with human effort. As shown in Figure 3, the translation accuracy is better when L is relatively small. In fact, there is no obvious difference in performance when L is no larger than 600. However, when L equals 1,000, the translation accuracy is inferior to the other settings. The main reason is that there are too many parameters in the neural networks to be trained effectively: when L=1000, there are a total of 100,000 × 1,000 parameters between the linear and non-linear layers in the network. Limited training data prevents the model from getting close to the global optimum, so the model is likely to fall into local optima and produce poor representations. 4.4 Effect of topic-related features We evaluate the performance of adding the new topic-related features to the log-linear model and compare the translation accuracy with the method in (Xiao et al., 2012). To make the different methods comparable, we set the dimension of the topic representation to 100 for all settings. This takes 10 hours in the pre-training phase and 22 hours in the fine-tuning phase. Table 2 shows how the accuracy improves as more features are added. The results confirm that topic information is indispensable for SMT, since both (Xiao et al., 2012) and our neural network based method significantly outperform the baseline system. Our method improves over the baseline by up to 0.86 BLEU points and by 0.76 BLEU points on average. We observe that source-side similarity is more effective than target-side similarity, but their contributions are cumulative.
This indicates that the bilingually induced topic representations learned with the neural network help the SMT system disambiguate translation candidates. Furthermore, the rule sensitivity features improve SMT performance compared with using only the similarity features. Because topic-specific rules usually have a larger sensitivity score, they can beat general rules when they obtain the same similarity score against the input sentence. Finally, when all new features are integrated, the performance is best, performing substantially better than (Xiao et al., 2012) by 0.39 BLEU points on average.
Table 2: Effectiveness of different features in BLEU% (p < 0.05), with N=10 and L=100. "Sim" denotes the rule similarity feature and "Sen" denotes the rule sensitivity feature. "Src" and "Trg" mean utilizing source-side/target-side rule topic vectors to calculate similarity or sensitivity, respectively. The "Average" column is the averaged result over the four datasets.
Settings                  | NIST 2004 | NIST 2005 | NIST 2006 | NIST 2008 | Average
Baseline                  | 42.25     | 41.21     | 38.05     | 31.16     | 38.17
(Xiao et al., 2012)       | 42.58     | 41.61     | 38.39     | 31.58     | 38.54
Sim(Src)                  | 42.51     | 41.55     | 38.53     | 31.57     | 38.54
Sim(Trg)                  | 42.43     | 41.48     | 38.4      | 31.49     | 38.45
Sim(Src+Trg)              | 42.7      | 41.66     | 38.66     | 31.66     | 38.67
Sim(Src+Trg)+Sen(Src)     | 42.77     | 41.81     | 38.85     | 31.73     | 38.79
Sim(Src+Trg)+Sen(Trg)     | 42.85     | 41.79     | 38.76     | 31.7      | 38.78
Sim(Src+Trg)+Sen(Src+Trg) | 42.95     | 41.97     | 38.91     | 31.88     | 38.93
It is worth mentioning that the performance of (Xiao et al., 2012) is quite similar to the setting with N=1 and L=100 in Figure 3. This is not simply coincidence, since we can interpret their approach as a special case of our neural network method: when a parallel sentence pair has document-level information, that document will be retrieved for training; otherwise, the most relevant document will be retrieved from the monolingual data. Therefore, our method can be viewed as a more general framework than previous LDA-based approaches. 4.5 Discussion In this section, we give a case study to explain why our method works. An example of translation rule disambiguation for a sentence from the NIST 2005 dataset is shown in Figure 4. We find that the topic of this sentence is "rescue after a natural disaster". Under this topic, the Chinese rule "发送X" should be translated to "deliver X" or "distribute X". However, the baseline system prefers "send X" over those two candidates. Although the translation probability of "send X" is much higher, it is inappropriate in this context since it is usually used in IT texts, for example ⟨发送邮件, send emails⟩, ⟨发送信息, send messages⟩ and ⟨发送数据, send data⟩. In contrast, with our neural network based approach, the learned topic distributions of "deliver X" and "distribute X" are more similar to that of the input sentence than "send X", as shown in Figure 4. The similarity scores indicate that "deliver X" and "distribute X" are more appropriate for translating this sentence. Therefore, adding the topic-related features helps maintain topic consistency and substantially improves translation accuracy. 5 Related Work Topic modeling was first leveraged to improve SMT performance in (Zhao and Xing, 2006; Zhao and Xing, 2007). They proposed a bilingual topical admixture approach for word alignment and assumed that each word pair follows a topic-specific model. They reported extensive empirical analysis and improved word alignment accuracy as well as translation quality.
Following this work, Xiao et al. (2012) extended topic-specific lexicon translation models to hierarchical phrase-based translation models, where the topic information of synchronous rules was directly inferred with the help of document-level information. Experiments show that their approach not only achieved better translation performance but also provided a faster decoding speed compared with previous lexicon-based LDA methods. Another direction of work leveraged topic modeling techniques for domain adaptation. Tam et al. (2007) used bilingual LSA to learn latent topic distributions across different languages and enforce a one-to-one topic correspondence during model training. They incorporated the bilingual topic information into language model adaptation and lexicon translation model adaptation, achieving significant improvements in a large-scale evaluation. Su et al. (2012) investigated the relationship between out-of-domain bilingual data and in-domain monolingual data via topic mapping using HTMM methods. They estimated phrase-topic distributions in translation model adaptation and generated better translation quality. Recently, Chen et al. (2013) proposed using a vector space model for adaptation, where genre resemblance is leveraged to improve translation accuracy. We also investigated multi-domain adaptation, where explicit topic information is used to train domain-specific models (Cui et al., 2013). Generally, most previous research has leveraged conventional topic modeling techniques such as LDA or HTMM. In our work, a novel neural network based approach is proposed to infer topic representations for parallel data.
[Figure 4: An example from the NIST 2005 dataset, illustrating the normalized topic representations of the source sentence and three ambiguous synchronous rules (details in Section 4.5). The plotted topic distributions are omitted here; the translation example and rule statistics shown in the figure are:
Src: 联合国儿童基金会也开始发送基本医药包
Ref (1): the united nations children's fund has also begun delivering basic medical kits
Ref (2): the unicef has also started to distribute basic medical kits
Ref (3): the united nations children's fund has also begun distributing basic medical kits
Ref (4): the united nations children's fund has begun delivering basic medical kits
Baseline: the united nations children's fund began to send basic medical kits
Ours: the united nations children's fund has begun to distribute basic medical kits
Rules                 | P(γ|α) | Sim(zs, zα)
⟨发送X, deliver X⟩    | 0.0237 | 0.8469
⟨发送X, distribute X⟩ | 0.0546 | 0.8268
⟨发送X, send X⟩       | 0.2464 | 0.6119]
The advantage of our method is that it is applicable to both sentence-level and document-level SMT, since we do not place any restrictions on the input. In addition, our method directly maximizes the similarity between parallel sentence pairs, which is ideal for SMT decoding. Compared to document-level topic modeling, which uses the topic of a document for all sentences within the document (Xiao et al., 2012), our contributions are:
• We proposed a more general approach to leveraging topic information for SMT by using IR methods to obtain a collection of related documents, regardless of whether or not document boundaries are explicitly given.
• We used neural networks to learn topic representations more accurately, with more practical and scalable modeling techniques.
• We directly optimized bilingual topic similarity in the deep learning framework with the help of sentence-level parallel data, so that the learned representations can be easily used in the SMT decoding procedure.
6 Conclusion and Future Work In this paper, we propose a neural network based approach to learning bilingual topic representations for SMT. We enrich the contexts of parallel sentence pairs with topic-related monolingual data and obtain a set of documents to represent each sentence. These documents are converted to a bag-of-words input and fed into the neural networks. The learned low-dimensional vector is used to obtain the topic representations of synchronous rules. In SMT decoding, appropriate rules are selected to best match source texts according to their similarity in the topic space. Experimental results show that our approach is promising for SMT systems to learn a better translation model. It is a significant improvement over the state-of-the-art Hiero system, as well as a conventional LDA-based method. In future research, we will extend our neural network methods to address document-level translation, where topic transition between sentences is a crucial problem to be solved. Since the translation of the current sentence is usually influenced by the topic of previous sentences, we plan to leverage recurrent neural networks to model this phenomenon, where the history translation information is naturally combined in the model. Acknowledgments We are grateful to the anonymous reviewers for their insightful comments. We also thank Fei Huang (BBN), Nan Yang, Yajuan Duan, Hong Sun and Duyu Tang for the helpful discussions. This work is supported by the National Natural Science Foundation of China (Grant No. 61272384). References Yoshua Bengio, Pascal Lamblin, Dan Popovici, and Hugo Larochelle. 2006. Greedy layer-wise training of deep networks. In B. Schölkopf, J. Platt, and T. Hoffman, editors, Advances in Neural Information Processing Systems 19, pages 153–160. MIT Press, Cambridge, MA. Yoshua Bengio. 2009. Learning deep architectures for AI. Found. Trends Mach. Learn., 2(1):1–127, January. David M. Blei, Andrew Y. Ng, and Michael I. Jordan. 2003. Latent dirichlet allocation. J. Mach. Learn. Res., 3:993–1022, March. Marine Carpuat and Dekai Wu. 2005. Word sense disambiguation vs. statistical machine translation. In Proceedings of the 43rd Annual Meeting of the Association for Computational Linguistics (ACL'05), pages 387–394, Ann Arbor, Michigan, June. Association for Computational Linguistics. Marine Carpuat and Dekai Wu. 2007. Context-dependent phrasal translation lexicons for statistical machine translation. Proceedings of Machine Translation Summit XI, pages 73–80. Boxing Chen, Roland Kuhn, and George Foster. 2013. Vector space model for adaptation in statistical machine translation. In Proceedings of the 51st Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers), pages 1285–1293, Sofia, Bulgaria, August. Association for Computational Linguistics. David Chiang. 2007. Hierarchical phrase-based translation. Computational Linguistics, 33(2):201–228. Ronan Collobert, Jason Weston, Léon Bottou, Michael Karlen, Koray Kavukcuoglu, and Pavel Kuksa. 2011. Natural language processing (almost) from scratch. J. Mach. Learn. Res., 12:2493–2537, November. Lei Cui, Xilun Chen, Dongdong Zhang, Shujie Liu, Mu Li, and Ming Zhou. 2013. Multi-domain adaptation for SMT using multi-task learning. In Proceedings of the 2013 Conference on Empirical Methods in Natural Language Processing, pages 1055–1065, Seattle, Washington, USA, October. Association for Computational Linguistics. George E. Dahl, Dong Yu, Li Deng, and Alex Acero. 2012.
Context-dependent pre-trained deep neural networks for large-vocabulary speech recognition. IEEE Transactions on Audio, Speech and Language Processing, 20(1):30–42, January. John Duchi, Elad Hazan, and Yoram Singer. 2011. Adaptive subgradient methods for online learning and stochastic optimization. J. Mach. Learn. Res., 12:2121–2159, July. George Foster, Roland Kuhn, and Howard Johnson. 2006. Phrasetable smoothing for statistical machine translation. In Proceedings of the 2006 Conference on Empirical Methods in Natural Language Processing, pages 53–61, Sydney, Australia, July. Association for Computational Linguistics. Xavier Glorot, Antoine Bordes, and Yoshua Bengio. 2011. Deep sparse rectifier networks. In Proceedings of the 14th International Conference on Artificial Intelligence and Statistics. JMLR W&CP Volume, volume 15, pages 315–323. Amit Gruber, Michal Rosen-zvi, and Yair Weiss. 2007. Hidden topic markov models. In In Proceedings of Artificial Intelligence and Statistics. Zhongjun He, Qun Liu, and Shouxun Lin. 2008. Improving statistical machine translation using lexicalized rule selection. In Proceedings of the 22nd International Conference on Computational Linguistics (Coling 2008), pages 321–328, Manchester, UK, August. Coling 2008 Organizing Committee. Geoffrey E. Hinton, Simon Osindero, and Yee-Whye Teh. 2006. A fast learning algorithm for deep belief nets. Neural Comput., 18(7):1527–1554, July. Reinhard Kneser and Hermann Ney. 1995. Improved backing-off for m-gram language modeling. In Acoustics, Speech, and Signal Processing, 1995. ICASSP-95., 1995 International Conference on, volume 1, pages 181–184. IEEE. Philipp Koehn. 2004. Statistical significance tests for machine translation evaluation. In Dekang Lin and Dekai Wu, editors, Proceedings of EMNLP 2004, pages 388–395, Barcelona, Spain, July. Association for Computational Linguistics. Alex Krizhevsky, Ilya Sutskever, and Geoff Hinton. 2012. Imagenet classification with deep convolutional neural networks. In P. Bartlett, F.C.N. Pereira, C.J.C. Burges, L. Bottou, and K.Q. Weinberger, editors, Advances in Neural Information Processing Systems 25, pages 1106–1114. Honglak Lee, Alexis Battle, Rajat Raina, and Andrew Y. Ng. 2006. Efficient sparse coding algorithms. In B. Sch¨olkopf, J. Platt, and T. Hoffman, editors, Advances in Neural Information Processing Systems 19, pages 801–808. MIT Press, Cambridge, MA. Qun Liu, Zhongjun He, Yang Liu, and Shouxun Lin. 2008. Maximum entropy based rule selection model for syntax-based statistical machine translation. In Proceedings of the 2008 Conference on Empirical Methods in Natural Language Processing, pages 89–97, Honolulu, Hawaii, October. Association for Computational Linguistics. Zhiyuan Liu, Yuzhou Zhang, Edward Y. Chang, and Maosong Sun. 2011. Plda+: Parallel latent dirichlet allocation with data placement and pipeline processing. ACM Transactions on Intelligent Systems and 142 Technology, special issue on Large Scale Machine Learning. Software available at http://code. google.com/p/plda. Yuval Marton and Philip Resnik. 2008. Soft syntactic constraints for hierarchical phrased-based translation. In Proceedings of ACL-08: HLT, pages 1003– 1011, Columbus, Ohio, June. Association for Computational Linguistics. Franz Josef Och. 2003. Minimum error rate training in statistical machine translation. In Proceedings of the 41st Annual Meeting of the Association for Computational Linguistics, pages 160–167, Sapporo, Japan, July. Association for Computational Linguistics. 
Kishore Papineni, Salim Roukos, Todd Ward, and WeiJing Zhu. 2002. Bleu: a method for automatic evaluation of machine translation. In Proceedings of 40th Annual Meeting of the Association for Computational Linguistics, pages 311–318, Philadelphia, Pennsylvania, USA, July. Association for Computational Linguistics. David E. Rumelhart, Geoffrey E. Hinton, and Ronald J. Williams. 1988. Neurocomputing: Foundations of research. chapter Learning Representations by Back-propagating Errors, pages 696–699. MIT Press, Cambridge, MA, USA. Noah A. Smith and Jason Eisner. 2005. Contrastive estimation: Training log-linear models on unlabeled data. In Proceedings of the 43rd Annual Meeting of the Association for Computational Linguistics (ACL’05), pages 354–362, Ann Arbor, Michigan, June. Association for Computational Linguistics. Richard Socher, Cliff C. Lin, Andrew Y. Ng, and Christopher D. Manning. 2011a. Parsing Natural Scenes and Natural Language with Recursive Neural Networks. In Proceedings of the 26th International Conference on Machine Learning (ICML). Richard Socher, Jeffrey Pennington, Eric H. Huang, Andrew Y. Ng, and Christopher D. Manning. 2011b. Semi-supervised recursive autoencoders for predicting sentiment distributions. In Proceedings of the 2011 Conference on Empirical Methods in Natural Language Processing, pages 151–161, Edinburgh, Scotland, UK., July. Association for Computational Linguistics. Jinsong Su, Hua Wu, Haifeng Wang, Yidong Chen, Xiaodong Shi, Huailin Dong, and Qun Liu. 2012. Translation model adaptation for statistical machine translation with monolingual topic information. In Proceedings of the 50th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers), pages 459–468, Jeju Island, Korea, July. Association for Computational Linguistics. Yik-Cheung Tam, Ian Lane, and Tanja Schultz. 2007. Bilingual lsa-based adaptation for statistical machine translation. Machine Translation, 21(4):187– 207, December. Pascal Vincent, Hugo Larochelle, Yoshua Bengio, and Pierre-Antoine Manzagol. 2008. Extracting and composing robust features with denoising autoencoders. In Proceedings of the 25th International Conference on Machine Learning, ICML ’08, pages 1096–1103, New York, NY, USA. ACM. Pascal Vincent, Hugo Larochelle, Isabelle Lajoie, Yoshua Bengio, and Pierre-Antoine Manzagol. 2010. Stacked denoising autoencoders: Learning useful representations in a deep network with a local denoising criterion. J. Mach. Learn. Res., 11:3371– 3408, December. Xinyan Xiao, Deyi Xiong, Min Zhang, Qun Liu, and Shouxun Lin. 2012. A topic similarity model for hierarchical phrase-based translation. In Proceedings of the 50th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers), pages 750–758, Jeju Island, Korea, July. Association for Computational Linguistics. Deyi Xiong and Min Zhang. 2013. A topic-based coherence model for statistical machine translation. In AAAI. Deyi Xiong, Min Zhang, Aiti Aw, and Haizhou Li. 2009. A syntax-driven bracketing model for phrasebased translation. In Proceedings of the Joint Conference of the 47th Annual Meeting of the ACL and the 4th International Joint Conference on Natural Language Processing of the AFNLP, pages 315– 323, Suntec, Singapore, August. Association for Computational Linguistics. Bing Zhao and Eric P. Xing. 2006. Bitam: Bilingual topic admixture models for word alignment. In Proceedings of the COLING/ACL 2006 Main Conference Poster Sessions, pages 969–976, Sydney, Australia, July. 
Association for Computational Linguistics. Bing Zhao and Eric P. Xing. 2007. Hm-bitam: Bilingual topic exploration, word alignment, and translation. In J.C. Platt, D. Koller, Y. Singer, and S. Roweis, editors, Advances in Neural Information Processing Systems 20, pages 1689–1696. MIT Press, Cambridge, MA.
2014
13
Proceedings of the 52nd Annual Meeting of the Association for Computational Linguistics, pages 1381–1391, Baltimore, Maryland, USA, June 23-25 2014. ©2014 Association for Computational Linguistics Low-Rank Tensors for Scoring Dependency Structures Tao Lei, Yu Xin, Yuan Zhang, Regina Barzilay, and Tommi Jaakkola Computer Science and Artificial Intelligence Laboratory Massachusetts Institute of Technology {taolei, yuxin, yuanzh, regina, tommi}@csail.mit.edu Abstract Accurate scoring of syntactic structures such as head-modifier arcs in dependency parsing typically requires rich, high-dimensional feature representations. A small subset of such features is often selected manually. This is problematic when features lack clear linguistic meaning as in embeddings or when the information is blended across features. In this paper, we use tensors to map high-dimensional feature vectors into low dimensional representations. We explicitly maintain the parameters as a low-rank tensor to obtain low dimensional representations of words in their syntactic roles, and to leverage modularity in the tensor for easy training with online algorithms. Our parser consistently outperforms the Turbo and MST parsers across 14 different languages. We also obtain the best published UAS results on 5 languages. Our code is available at https://github.com/taolei87/RBGParser. 1 Introduction Finding an expressive representation of input sentences is crucial for accurate parsing. Syntactic relations manifest themselves in a broad range of surface indicators, ranging from morphological to lexical, including positional and part-of-speech (POS) tagging features. Traditionally, parsing research has focused on modeling the direct connection between the features and the predicted syntactic relations such as head-modifier (arc) relations in dependency parsing. Even in the case of first-order parsers, this results in a high-dimensional vector representation of each arc. Discrete features, and their cross products, can be further complemented with auxiliary information about words participating in an arc, such as continuous vector representations of words. The exploding dimensionality of rich feature vectors must then be balanced with the difficulty of effectively learning the associated parameters from limited training data. A predominant way to counter the high dimensionality of features is to manually design or select a meaningful set of feature templates, which are used to generate different types of features (McDonald et al., 2005a; Koo and Collins, 2010; Martins et al., 2013). Direct manual selection may be problematic for two reasons. First, features may lack clear linguistic interpretation as in distributional features or continuous vector embeddings of words. Second, designing a small subset of templates (and features) is challenging when the relevant linguistic information is distributed across the features. For instance, morphological properties are closely tied to part-of-speech tags, which in turn relate to positional features. These features are not redundant. Therefore, we may suffer a performance loss if we select only a small subset of the features. On the other hand, by including all the rich features, we face over-fitting problems. We depart from this view and leverage high-dimensional feature vectors by mapping them into low dimensional representations.
We begin by representing high-dimensional feature vectors as multi-way cross-products of smaller feature vectors that represent words and their syntactic relations (arcs). The associated parameters are viewed as a tensor (multi-way array) of low rank, and optimized for parsing performance. By explicitly representing the tensor in a low-rank form, we have direct control over the effective dimensionality of the set of parameters. We obtain role-dependent low-dimensional representations for words (head, modifier) that are specifically tailored for parsing accuracy, and use standard online algorithms for optimizing the low-rank tensor components. The overall approach has clear linguistic and 1381 computational advantages: • Our low dimensional embeddings are tailored to the syntactic context of words (head, modifier). This low dimensional syntactic abstraction can be thought of as a proxy to manually constructed POS tags. • By automatically selecting a small number of dimensions useful for parsing, we can leverage a wide array of (correlated) features. Unlike parsers such as MST, we can easily benefit from auxiliary information (e.g., word vectors) appended as features. We implement the low-rank factorization model in the context of first- and third-order dependency parsing. The model was evaluated on 14 languages, using dependency data from CoNLL 2008 and CoNLL 2006. We compare our results against the MST (McDonald et al., 2005a) and Turbo (Martins et al., 2013) parsers. The low-rank parser achieves average performance of 89.08% across 14 languages, compared to 88.73% for the Turbo parser, and 87.19% for MST. The power of the low-rank model becomes evident in the absence of any part-of-speech tags. For instance, on the English dataset, the low-rank model trained without POS tags achieves 90.49% on first-order parsing, while the baseline gets 86.70% if trained under the same conditions, and 90.58% if trained with 12 core POS tags. Finally, we demonstrate that the model can successfully leverage word vector representations, in contrast to the baselines. 2 Related Work Selecting Features for Dependency Parsing A great deal of parsing research has been dedicated to feature engineering (Lazaridou et al., 2013; Marton et al., 2010; Marton et al., 2011). While in most state-of-the-art parsers, features are selected manually (McDonald et al., 2005a; McDonald et al., 2005b; Koo and Collins, 2010; Martins et al., 2013; Zhang and McDonald, 2012a; Rush and Petrov, 2012a), automatic feature selection methods are gaining popularity (Martins et al., 2011b; Ballesteros and Nivre, 2012; Nilsson and Nugues, 2010; Ballesteros, 2013). Following standard machine learning practices, these algorithms iteratively select a subset of features by optimizing parsing performance on a development set. These feature selection methods are particularly promising in parsing scenarios where the optimal feature set is likely to be a small subset of the original set of candidate features. Our technique, in contrast, is suitable for cases where the relevant information is distributed across a larger set of related features. Embedding for Dependency Parsing A lot of recent work has been done on mapping words into vector spaces (Collobert and Weston, 2008; Turian et al., 2010; Dhillon et al., 2011; Mikolov et al., 2013). Traditionally, these vector representations have been derived primarily from co-occurrences of words within sentences, ignoring syntactic roles of the co-occurring words. 
Nevertheless, any such word-level representation can be used to offset inherent sparsity problems associated with full lexicalization (Cirik and Şensoy, 2013). In this sense they perform a role similar to POS tags. Word-level vector space embeddings have so far had limited impact on parsing performance. From a computational perspective, adding non-sparse vectors directly as features, including their combinations, can significantly increase the number of active features for scoring syntactic structures (e.g., a dependency arc). Because of this issue, Cirik and Şensoy (2013) used word vectors only as unigram features (without combinations) as part of a shift-reduce parser (Nivre et al., 2007). The improvement on the overall parsing performance was marginal. Another application of word vectors is compositional vector grammar (Socher et al., 2013). While this method learns to map word combinations into vectors, it builds on existing word-level vector representations. In contrast, we represent words as vectors in a manner that is directly optimized for parsing. This framework enables us to learn new syntactically guided embeddings while also leveraging separately estimated word vectors as starting features, leading to improved parsing performance. Dimensionality Reduction Many machine learning problems can be cast as matrix problems where the matrix represents a set of co-varying parameters. Such problems include, for example, multi-task learning and collaborative filtering. Rather than assuming that each parameter can be set independently of the others, it is helpful to assume that the parameters vary in a low dimensional subspace that has to be estimated together with the parameters. In terms of the parameter matrix, this corresponds to a low-rank assumption. Low-rank constraints are commonly used for improving generalization (Lee and Seung, 1999; Srebro et al., 2003; Srebro et al., 2004; Evgeniou and Pontil, 2007). A strict low-rank assumption can be restrictive. Indeed, recent approaches to matrix problems decompose the parameter matrix as a sum of low-rank and sparse matrices (Tao and Yuan, 2011; Zhou and Tao, 2011). The sparse matrix is used to highlight a small number of parameters that should vary independently even if most of them lie in a low-dimensional subspace (Waters et al., 2011; Chandrasekaran et al., 2011). We follow this decomposition while extending the parameter matrix into a tensor. Tensors are multi-way generalizations of matrices and possess an analogous notion of rank. Tensors are increasingly used as tools in spectral estimation (Hsu and Kakade, 2013), including in parsing (Cohen et al., 2012) and other NLP problems (de Cruys et al., 2013), where the goal is to avoid local optima in maximum likelihood estimation. In contrast, we expand features for parsing into a multi-way tensor, and operate with an explicit low-rank representation of the associated parameter tensor. The explicit representation sidesteps inherent complexity problems associated with the tensor rank (Hillar and Lim, 2009). Our parameters are divided into a sparse set corresponding to manually chosen MST or Turbo parser features and a larger set governed by a low-rank tensor. 3 Problem Formulation We will commence here by casting first-order dependency parsing as a tensor estimation problem. We will start by introducing the notation used in the paper, followed by a more formal description of our dependency parsing task. 3.1 Basic Notations Let A ∈ R^{n×n×d} be a 3-dimensional tensor (a 3-way array).
We denote each element of the tensor as A_{i,j,k}, where i ∈ [n], j ∈ [n], k ∈ [d], and [n] is a shorthand for the set of integers {1, 2, ..., n}. Similarly, we use M_{i,j} and u_i to represent the elements of matrix M and vector u, respectively. We define the inner product of two tensors (or matrices) as ⟨A, B⟩ = vec(A)^T vec(B), where vec(·) concatenates the tensor (or matrix) elements into a column vector. The squared norm of a tensor/matrix is denoted by ∥A∥² = ⟨A, A⟩. The Kronecker product of three vectors is denoted by u ⊗ v ⊗ w and forms a rank-1 tensor such that (u ⊗ v ⊗ w)_{i,j,k} = u_i v_j w_k. Note that the vectors u, v, and w may be column or row vectors. Their orientation is defined based on usage. For example, u ⊗ v is a rank-1 matrix uv^T when u and v are column vectors (u^T v if they are row vectors). We say that tensor A is in Kruskal form if A = Σ_{i=1}^{r} U(i,:) ⊗ V(i,:) ⊗ W(i,:) (1) where U, V ∈ R^{r×n}, W ∈ R^{r×d}, and U(i,:) is the ith row of matrix U. We will directly learn a low-rank tensor A (because r is small) in this form as one of our model parameters. 3.2 Dependency Parsing Let x be a sentence and Y(x) the set of possible dependency trees over the words in x. We assume that the score S(x, y) of each candidate dependency tree y ∈ Y(x) decomposes into a sum of "local" scores for arcs. Specifically: S(x, y) = Σ_{h→m ∈ y} s(h → m), for all y ∈ Y(x), where h → m is a head-modifier dependency arc in the tree y. Each y is understood as a collection of arcs h → m, where h and m index words in x. (Note that in the case of high-order parsing, the sum S(x, y) may also include local scores for other syntactic structures, such as the grand-head-head-modifier score s(g → h → m); see Martins et al. (2013) for a complete list of these structures.) For example, x(h) is the word corresponding to h. We suppress the dependence on x whenever it is clear from context. For example, s(h → m) can depend on x in complicated ways as discussed below. The predicted parse is obtained as ŷ = argmax_{y∈Y(x)} S(x, y). A key problem is how we parameterize the arc scores s(h → m). Following the MST parser (McDonald et al., 2005a), we can define rich features characterizing each head-modifier arc, compiled into a sparse binary vector φ_{h→m} ∈ R^L that depends on the sentence x as well as the chosen arc h → m (again, we suppress the dependence on x). Based on this feature representation, we define the score of each arc as
For example, φh,m may be composed of only indicators for binned arc lengths3. φh and φm, on the other hand, are built from features shown in Table 1. By taking the cross-product of all these component feature vectors, we obtain the full feature representation for arc h →m as a rank-1 tensor φh ⊗φm ⊗φh,m ∈Rn×n×d Note that elements of this rank-1 tensor include feature combinations that are not part of the feature crossings in φh→m. In this sense, the rank-1 tensor represents a substantial feature expansion. The arc score stensor(h →m) associated with the 3In our current version, φh,m only contains the binned arc length. Other possible features include, for example, the label of the arc h →m, the POS tags between the head and the modifier, boolean flags which indicate the occurence of in-between punctutations or conjunctions, etc. tensor representation is defined analogously as stensor(h →m) = ⟨A, φh ⊗φm ⊗φh,m⟩ where the adjustable parameters A also form a tensor. Given the typical dimensions of the component feature vectors, φh, φm, φh,m, it is not even possible to store all the parameters in A. Indeed, in the full English training set of CoNLL-2008, the tensor involves around 8 × 1011 entries while the MST feature vector has approximately 1.5 × 107 features. To counter this feature explosion, we restrict the parameters A to have low rank. Low-Rank Dependency Scoring We can represent a rank-r tensor A explicitly in terms of parameter matrices U, V , and W as shown in Eq. 1. As a result, the arc score for the tensor reduces to evaluating Uφh, V φm, and Wφh,m which are all r dimensional vectors and can be computed efficiently based on any sparse vectors φh, φm, and φh,m. The resulting arc score stensor(h →m) is then r X i=1 [Uφh]i[V φm]i[Wφh,m]i (2) By learning parameters U, V , and W that function well in dependency parsing, we also learn contextdependent embeddings for words and arcs. Specifically, Uφh (for a given sentence, suppressed) is an r dimensional vector representation of the word corresponding to h as a head word. Similarly, V φm provides an analogous representation for a modifier m. Finally, Wφh,m is a vector embedding of the supplemental arc-dependent information. The resulting embedding is therefore tied to the syntactic roles of the words (and arcs), and learned in order to perform well in parsing. We expect a dependency parsing model to benefit from several aspects of the low-rank tensor scoring. For example, we can easily incorporate additional useful features in the feature vectors φh, φm and φh,m, since the low-rank assumption (for small enough r) effectively counters the otherwise uncontrolled feature expansion. Moreover, by controlling the amount of information we can extract from each of the component feature vectors (via rank r), the statistical estimation problem does not scale dramatically with the dimensions of φh, φm and φh,m. In particular, the low-rank constraint can help generalize to unseen arcs. Consider a feature δ(x(h) = a) · δ(x(m) = 1384 b) · δ(dis(x, h, m) = c) which is non-zero only for an arc a →b with distance c in sentence x. If the arc has not been seen in the available training data, it does not contribute to the traditional arc score sθ(·). In contrast, with the low-rank constraint, the arc score in Eq. 2 would typically be non-zero. Combined Scoring Our parsing model aims to combine the strengths of both traditional features from the MST/Turbo parser as well as the new low-rank tensor features. 
In this way, our model is able to capture a wide range of information including the auxiliary features without having uncontrolled feature explosion, while still having the full accessibility to the manually engineered features that are proven useful. Specifically, we define the arc score sγ(h →m) as the combination (1 −γ)stensor(h →m) + γsθ(h →m) = (1 −γ) r X i=1 [Uφh]i[V φm]i[Wφh,m]i + γ ⟨θ, φh→m⟩ (3) where θ ∈RL, U ∈Rr×n, V ∈Rr×n, and W ∈ Rr×d are the model parameters to be learned. The rank r and γ ∈[0, 1] (balancing the two scores) represent hyper-parameters in our model. 4 Learning The training set D = {(ˆxi, ˆyi)}N i=1 consists of N pairs, where each pair consists of a sentence xi and the corresponding gold (target) parse yi. The goal is to learn values for the parameters θ, U, V and W that optimize the combined scoring function Sγ(x, y) = P h→m∈y sγ(h →m), defined in Eq. 3, for parsing performance. We adopt a maximum soft-margin framework for this learning problem. Specifically, we find parameters θ, U, V , W, and {ξi} that minimize C X i ξi + ∥θ∥2 + ∥U∥2 + ∥V ∥2 + ∥W∥2 s.t. Sγ(ˆxi, ˆyi) ≥Sγ(ˆxi, yi) + ∥ˆyi −yi∥1 −ξi ∀yi ∈Y(ˆxi), ∀i. (4) where ∥ˆyi−yi∥1 is the number of mismatched arcs between the two trees, and ξi is a non-negative slack variable. The constraints serve to separate the gold tree from other alternatives in Y(ˆxi) with a margin that increases with distance. The objective as stated is not jointly convex with respect to U, V and W due to our explicit representation of the low-rank tensor. However, if we fix any two sets of parameters, for example, if we fix V and W, then the combined score Sγ(x, y) will be a linear function of both θ and U. As a result, the objective will be jointly convex with respect to θ and U and could be optimized using standard tools. However, to accelerate learning, we adopt an online learning setup. Specifically, we use the passive-aggressive learning algorithm (Crammer et al., 2006) tailored to our setting, updating pairs of parameter sets, (θ, U), (θ, V ) and (θ, W) in an alternating manner. This method is described below. Online Learning In an online learning setup, we update parameters successively based on each sentence. In order to apply the passive-aggressive algorithm, we fix two of U, V and W (say, for example, V and W) in an alternating manner, and apply a closed-form update to the remaining parameters (here U and θ). This is possible since the objective function with respect to (θ, U) has a similar form as in the original passive-aggressive algorithm. To illustrate this, consider a training sentence xi. The update involves finding first the best competing tree, ˜yi = arg max yi∈Y(ˆxi) Sγ(ˆxi, yi) + ∥ˆyi −yi∥1 (5) which is the tree that violates the constraint in Eq. 4 most (i.e. maximizes the loss ξi). We then obtain parameter increments ∆θ and ∆U by solving min ∆θ, ∆U, ξ≥0 1 2∥∆θ∥2 + 1 2∥∆U∥2 + Cξ s.t. Sγ(ˆxi, ˆyi) ≥Sγ(ˆxi, ˜yi) + ∥ˆyi −˜yi∥1 −ξ In this way, the optimization problem attempts to keep the parameter change as small as possible, while forcing it to achieve mostly zero loss on this single instance. This problem has a closed form solution ∆θ = min  C, loss γ2∥dθ∥2 + (1 −γ)2∥du∥2  γdθ ∆U = min  C, loss γ2∥dθ∥2 + (1 −γ)2∥du∥2  (1 −γ)du 1385 where loss = Sγ(ˆxi, ˜yi) + ∥ˆyi −˜yi∥1 −Sγ(ˆxi, ˆyi) dθ = X h→m ∈ˆyi φh→m − X h→m ∈˜yi φh→m du = X h→m ∈ˆyi [(V φm) ⊙(Wφh,m)] ⊗φh − X h→m ∈˜yi [(V φm) ⊙(Wφh,m)] ⊗φh where (u ⊙v)i = uivi is the Hadamard (elementwise) product. 
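The closed-form step above is simple to apply in code; the following sketch (hypothetical, assuming dθ, du and the margin loss have already been computed from the gold tree and the best competing tree) performs the passive-aggressive increments for the pair (θ, U) while V and W are held fixed.

```python
import numpy as np

def pa_update(theta, U, d_theta, d_u, loss, gamma, C):
    """Closed-form passive-aggressive increments for (theta, U), V and W fixed.

    d_theta has the shape of theta, d_u the shape of U; `loss` is
    S(x, y~) + ||y - y~||_1 - S(x, y) for the best competing tree y~.
    """
    if loss <= 0:  # constraint already satisfied; no update needed
        return theta, U
    denom = gamma ** 2 * np.sum(d_theta ** 2) + (1 - gamma) ** 2 * np.sum(d_u ** 2)
    step = min(C, loss / denom)
    theta = theta + step * gamma * d_theta
    U = U + step * (1 - gamma) * d_u
    return theta, U
```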
The magnitude of change of θ and U is controlled by the parameter C. By varying C, we can determine an appropriate step size for the online updates. The updates also illustrate how γ balances the effect of the MST component of the score relative to the low-rank tensor score. When γ = 0, the arc scores are entirely based on the lowrank tensor and ∆θ = 0. Note that φh, φm, φh,m, and φh→m are typically very sparse for each word or arc. Therefore du and dθ are also sparse and can be computed efficiently. Initialization The alternating online algorithm relies on how we initialize U, V , and W since each update is carried out in the context of the other two. A random initialization of these parameters is unlikely to work well, both due to the dimensions involved, and the nature of the alternating updates. We consider here instead a reasonable deterministic “guess” as the initialization method. We begin by training our model without any low-rank parameters, and obtain parameters θ. The majority of features in this MST component can be expressed as elements of the feature tensor, i.e., as [φh ⊗φm ⊗φh,m]i,j,k. We can therefore create a tensor representation of θ such that Bi,j,k equals the corresponding parameter value in θ. We use a low-rank version of B as the initialization. Specifically, we unfold the tensor B into a matrix B(h) of dimensions n and nd, where n = dim(φh) = dim(φm) and d = dim(φh,m). For instance, a rank-1 tensor can be unfolded as u ⊗v ⊗w = u ⊗vec(v ⊗w). We compute the top-r SVD of the resulting unfolded matrix such that B(h) = P T SQ. U is initialized as P. Each right singular vector SiQ(i, :) is also a matrix in Rn×d. The leading left and right singular vectors of this matrix are assigned to V (i, :) and W(i, :) respectively. In our implementation, we run one epoch of our model without low-rank parameters and initialize the tensor A. Parameter Averaging The passive-aggressive algorithm regularizes the increments (e.g. ∆θ and ∆U) during each update but does not include any overall regularization. In other words, keeping updating the model may lead to large parameter values and over-fitting. To counter this effect, we use parameter averaging as used in the MST and Turbo parsers. The final parameters are those averaged across all the iterations (cf. (Collins, 2002)). For simplicity, in our algorithm we average U, V , W and θ separately, which works well empirically. 5 Experimental Setup Datasets We test our dependency model on 14 languages, including the English dataset from CoNLL 2008 shared tasks and all 13 datasets from CoNLL 2006 shared tasks (Buchholz and Marsi, 2006; Surdeanu et al., 2008). These datasets include manually annotated dependency trees, POS tags and morphological information. Following standard practices, we encode this information as features. Methods We compare our model to MST and Turbo parsers on non-projective dependency parsing. For our parser, we train both a first-order parsing model (as described in Section 3 and 4) as well as a third-order model. The third order parser simply adds high-order features, those typically used in MST and Turbo parsers, into our sθ(x, y) = ⟨θ, φ(x, y)⟩scoring component. The decoding algorithm for the third-order parsing is based on (Zhang et al., 2014). For the Turbo parser, we directly compare with the recent published results in (Martins et al., 2013). 
For the MST parser, we train and test using the most recent version of the code.4 In addition, we implemented two additional baselines, NT-1st (first order) and NT-3rd (third order), corresponding to our model without the tensor component. Features For the arc feature vector φh→m, we use the same set of feature templates as MST v0.5.1. For head/modifier vector φh and φm, we show the complete set of feature templates used by our model in Table 1. Finally, we use a similar set of feature templates as Turbo v2.1 for 3rd order parsing. To add auxiliary word vector representations, we use the publicly available word vectors (Cirik 4http://sourceforge.net/projects/mstparser/ 1386 First-order only High-order Ours NT-1st MST Turbo Ours-3rd NT-3rd MST-2nd Turbo-3rd Best Published Arabic 79.60 78.71 78.3 77.23 79.95 79.53 78.75 79.64 81.12 (Ma11) Bulgarian 92.30 91.14 90.98 91.76 93.50 92.79 91.56 93.1 94.02 (Zh13) Chinese 91.43 90.85 90.40 88.49 92.68 92.39 91.77 89.98 91.89 (Ma10) Czech 87.90 86.62 86.18 87.66 90.50 89.43 87.3 90.32 90.32 (Ma13) Danish 90.64 89.80 89.84 89.42 91.39 90.82 90.5 91.48 92.00 (Zh13) Dutch 84.81 83.77 82.89 83.61 86.41 86.08 84.11 86.19 86.19 (Ma13) English 91.84 91.40 90.59 91.21 93.02 92.82 91.54 93.22 93.22 (Ma13) German 90.24 89.70 89.54 90.52 91.97 92.26 90.14 92.41 92.41 (Ma13) Japanese 93.74 93.36 93.38 92.78 93.71 93.23 92.92 93.52 93.72 (Ma11) Portuguese 90.94 90.67 89.92 91.14 91.92 91.63 91.08 92.69 93.03 (Ko10) Slovene 84.25 83.15 82.09 82.81 86.24 86.07 83.25 86.01 86.95 (Ma11) Spanish 85.27 84.95 83.79 83.61 88.00 87.47 84.33 85.59 87.96 (Zh13) Swedish 89.86 89.66 88.27 89.36 91.00 90.83 89.05 91.14 91.62 (Zh13) Turkish 75.84 74.89 74.81 75.98 76.84 75.83 74.39 76.9 77.55 (Ko10) Average 87.76 87.05 86.5 86.83 89.08 88.66 87.19 88.73 89.43 Table 2: First-order parsing (left) and high-order parsing (right) results on CoNLL-2006 datasets and the English dataset of CoNLL-2008. For our model, the experiments are ran with rank r = 50 and hyperparameter γ = 0.3. To remove the tensor in our model, we ran experiments with γ = 1, corresponding to columns NT-1st and NT-3rd. The last column shows results of most accurate parsers among Nivre et al. (2006), McDonald et al. (2006), Martins et al. (2010), Martins et al. (2011a), Martins et al. (2013), Koo et al. (2010), Rush and Petrov (2012b), Zhang and McDonald (2012b) and Zhang et al. (2013). and S¸ensoy, 2013), learned from raw data (Globerson et al., 2007; Maron et al., 2010). Three languages in our dataset – English, German and Swedish – have corresponding word vectors in this collection.5 The dimensionality of this representation varies by language: English has 50 dimensional word vectors, while German and Swedish have 25 dimensional word vectors. Each entry of the word vector is added as a feature value into feature vectors φh and φm. For each word in the sentence, we add its own word vector as well as the vectors of its left and right words. We should note that since our model parameter A is represented and learned in the low-rank form, we only have to store and maintain the low-rank projections Uφh, V φm and Wφh,m rather than explicitly calculate the feature tensor φh⊗φm⊗φh,m. Therefore updating parameters and decoding a sentence is still efficient, i.e., linear in the number of values of the feature vector. In contrast, assume we take the cross-product of the auxiliary word vector values, POS tags and lexical items of a word and its context, and add the crossed values into a normal model (in φh→m). 
The number of features for each arc would be at least quadratic, growing into thousands, and would be a significant impediment to parsing efficiency. Evaluation Following standard practices, we train our full model and the baselines for 10 5https://github.com/wolet/sprml13-word-embeddings epochs. As the evaluation measure, we use unlabeled attachment scores (UAS) excluding punctuation. In all the reported experiments, the hyperparameters are set as follows: r = 50 (rank of the tensor), C = 1 for first-order model and C = 0.01 for third-order model. 6 Results Overall Performance Table 2 shows the performance of our model and the baselines on 14 CoNLL datasets. Our model outperforms Turbo parser, MST parser, as well as its own variants without the tensor component. The improvements of our low-rank model are consistent across languages: results for the first order parser are better on 11 out of 14 languages. By comparing NT-1st and NT-3rd (models without low-rank) with our full model (with low-rank), we obtain 0.7% absolute improvement on first-order parsing, and 0.3% improvement on third-order parsing. Our model also achieves the best UAS on 5 languages. We next focus on the first-order model and gauge the impact of the tensor component. First, we test our model by varying the hyper-parameter γ which balances the tensor score and the traditional MST/Turbo score components. Figure 1 shows the average UAS on CoNLL test datasets after each training epoch. We can see that the improvement of adding the low-rank tensor is consistent across various choices of hyper parame1387 2 4 6 8 10 84.0% 84.5% 85.0% 85.5% 86.0% 86.5% 87.0% 87.5% 88.0% # Epochs γ=0.0 γ=0.2 γ=0.3 γ=0.4 NT−1st Figure 1: Average UAS on CoNLL testsets after different epochs. Our full model consistently performs better than NT-1st (its variation without tensor component) under different choices of the hyper-parameter γ. no word vector with word vector English 91.84 92.07 German 90.24 90.48 Swedish 89.86 90.38 Table 3: Results of adding unsupervised word vectors to the tensor. Adding this information yields consistent improvement for all languages. ter γ. When training with the tensor component alone (γ = 0), the model converges more slowly. Learning of the tensor is harder because the scoring function is not linear (nor convex) with respect to parameters U, V and W. However, the tensor scoring component achieves better generalization on the test data, resulting in better UAS than NT1st after 8 training epochs. To assess the ability of our model to incorporate a range of features, we add unsupervised word vectors to our model. As described in previous section, we do so by appending the values of different coordinates in the word vector into φh and φm. As Table 3 shows, adding this information increases the parsing performance for all the three languages. For instance, we obtain more than 0.5% absolute improvement on Swedish. Syntactic Abstraction without POS Since our model learns a compressed representation of feature vectors, we are interested to measure its performance when part-of-speech tags are not provided (See Table 4). The rationale is that given all other features, the model would induce representations that play a similar role to POS tags. Note that Our model NT-1st -POS +wv. -POS +POS English 88.89 90.49 86.70 90.58 German 82.63 85.80 78.71 88.50 Swedish 81.84 85.90 79.65 88.75 Table 4: The first three columns show parsing results when models are trained without POS tags. The last column gives the upper-bound, i.e. 
the performance of a parser trained with 12 Core POS tags. The low-rank model outperforms NT-1st by a large margin. Adding word vector features further improves performance. the performance of traditional parsers drops when tags are not provided. For example, the performance gap is 10% on German. Our experiments show that low-rank parser operates effectively in the absence of tags. In fact, it nearly reaches the performance of the original parser that used the tags on English. Examples of Derived Projections We manually analyze low-dimensional projections to assess whether they capture syntactic abstraction. For this purpose, we train a model with only a tensor component (such that it has to learn an accurate tensor) on the English dataset and obtain low dimensional embeddings Uφw and V φw for each word. The two r-dimension vectors are concatenated as an “averaged” vector. We use this vector to calculate the cosine similarity between words. Table 5 shows examples of five closest neighbors of queried words. While these lists include some noise, we can clearly see that the neighbors exhibit similar syntactic behavior. For example, “on” is close to other prepositions. More interestingly, we can consider the impact of syntactic context on the derived projections. The bottom part of Table 5 shows that the neighbors change substantially depending on the syntactic role of the word. For example, the closest words to the word “increase” are verbs in the context phrase “will increase again”, while the closest words become nouns given a different phrase “an increase of”. Running Time Table 6 illustrates the impact of estimating low-rank tensor parameters on the running time of the algorithm. For comparison, we also show the NT-1st times across three typical languages. The Arabic dataset has the longest average sentence length, while the Chinese dataset 1388 greatly profit says on when actively earnings adds with where openly franchisees predicts into what significantly shares noted at why outright revenue wrote during which substantially members contends over who increase will increase again an increase of rise arguing gain advance be prices contest charging payment halt gone members Exchequer making subsidiary hit attacks hit the hardest hit is shed distributes monopolies rallied stayed pills triggered sang sophistication appeared removed ventures understate eased factors Table 5: Five closest neighbors of the queried words (shown in bold). The upper part shows our learned embeddings group words with similar syntactic behavior. The two bottom parts of the table demonstrate that how the projections change depending on the syntactic context of the word. #Tok. Len. Train. Time (hour) NT-1st Ours Arabic 42K 32 0.13 0.22 Chinese 337K 6 0.37 0.65 English 958K 24 1.88 2.83 Table 6: Comparison of training times across three typical datasets. The second column is the number of tokens in each data set. The third column shows the average sentence length. Both first-order models are implemented in Java and run as a single process. has the shortest sentence length in CoNLL 2006. Based on these results, estimating a rank-50 tensor together with MST parameters only increases the running time by a factor of 1.7. 7 Conclusions Accurate scoring of syntactic structures such as head-modifier arcs in dependency parsing typically requires rich, high-dimensional feature representations. We introduce a low-rank factorization method that enables to map high dimensional feature vectors into low dimensional representations. 
Our method maintains the parameters as a low-rank tensor to obtain low dimensional representations of words in their syntactic roles, and to leverage modularity in the tensor for easy training with online algorithms. We implement the approach on first-order to third-order dependency parsing. Our parser outperforms the Turbo and MST parsers across 14 languages. Future work involves extending the tensor component to capture higher-order structures. In particular, we would consider second-order structures such as grandparent-head-modifier by increasing the dimensionality of the tensor. This tensor will accordingly be a four or five-way array. The online update algorithm remains applicable since each dimension is optimized in an alternating fashion. 8 Acknowledgements The authors acknowledge the support of the MURI program (W911NF-10-1-0533) and the DARPA BOLT program. This research is developed in collaboration with the Arabic Language Technoligies (ALT) group at Qatar Computing Research Institute (QCRI) within the LYAS project. We thank Volkan Cirik for sharing the unsupervised word vector data. Thanks to Amir Globerson, Andreea Gane, the members of the MIT NLP group and the ACL reviewers for their suggestions and comments. Any opinions, findings, conclusions, or recommendations expressed in this paper are those of the authors, and do not necessarily reflect the views of the funding organizations. References Miguel Ballesteros and Joakim Nivre. 2012. MaltOptimizer: An optimization tool for MaltParser. In EACL. The Association for Computer Linguistics. Miguel Ballesteros. 2013. Effective morphological feature selection with MaltOptimizer at the SPMRL 2013 shared task. In Proceedings of the Fourth Workshop on Statistical Parsing of Morphologically-Rich Languages. Association for Computational Linguistics. Sabine Buchholz and Erwin Marsi. 2006. CoNLL-X shared task on multilingual dependency parsing. In Proceedings of the Tenth Conference on Computational Natural Language Learning, CoNLL-X ’06. Association for Computational Linguistics. Venkat Chandrasekaran, Sujay Sanghavi, Pablo A Parrilo, and Alan S Willsky. 2011. Rank-sparsity incoherence for matrix decomposition. SIAM Journal on Optimization. Volkan Cirik and H¨usn¨u S¸ensoy. 2013. The AI-KU system at the SPMRL 2013 shared task : Unsupervised features for dependency parsing. In Proceedings of the Fourth Workshop on Statistical Parsing of Morphologically-Rich Languages. Association for Computational Linguistics. 1389 Shay B Cohen, Karl Stratos, Michael Collins, Dean P Foster, and Lyle Ungar. 2012. Spectral learning of latent-variable PCFGs. In Proceedings of the 50th Annual Meeting of the Association for Computational Linguistics: Long Papers-Volume 1. Association for Computational Linguistics. Michael Collins. 2002. Discriminative training methods for hidden markov models: Theory and experiments with perceptron algorithms. In Proceedings of the Conference on Empirical Methods in Natural Language Processing - Volume 10, EMNLP ’02. Association for Computational Linguistics. R. Collobert and J. Weston. 2008. A unified architecture for natural language processing: Deep neural networks with multitask learning. In International Conference on Machine Learning, ICML. Koby Crammer, Ofer Dekel, Joseph Keshet, Shai Shalev-Shwartz, and Yoram Singer. 2006. Online passive-aggressive algorithms. The Journal of Machine Learning Research. Tim Van de Cruys, Thierry Poibeau, and Anna Korhonen. 2013. 
A tensor-based factorization model of semantic compositionality. In HLT-NAACL. The Association for Computational Linguistics. Paramveer S. Dhillon, Dean Foster, and Lyle Ungar. 2011. Multiview learning of word embeddings via CCA. In Advances in Neural Information Processing Systems. A Evgeniou and Massimiliano Pontil. 2007. Multitask feature learning. In Advances in neural information processing systems: Proceedings of the 2006 conference. The MIT Press. Amir Globerson, Gal Chechik, Fernando Pereira, and Naftali Tishby. 2007. Euclidean embedding of cooccurrence data. Journal of Machine Learning Research. Christopher Hillar and Lek-Heng Lim. 2009. Most tensor problems are NP-hard. arXiv preprint arXiv:0911.1393. Daniel Hsu and Sham M Kakade. 2013. Learning mixtures of spherical gaussians: moment methods and spectral decompositions. In Proceedings of the 4th Conference on Innovations in Theoretical Computer Science. ACM. Terry Koo and Michael Collins. 2010. Efficient thirdorder dependency parsers. In Proceedings of the 48th Annual Meeting of the Association for Computational Linguistics, ACL ’10. Association for Computational Linguistics. Terry Koo, Alexander M Rush, Michael Collins, Tommi Jaakkola, and David Sontag. 2010. Dual decomposition for parsing with non-projective head automata. In Proceedings of the 2010 Conference on Empirical Methods in Natural Language Processing. Association for Computational Linguistics. Angeliki Lazaridou, Eva Maria Vecchi, and Marco Baroni. 2013. Fish transporters and miracle homes: How compositional distributional semantics can help NP parsing. In Proceedings of the 2013 Conference on Empirical Methods in Natural Language Processing. Association for Computational Linguistics. Daniel D Lee and H Sebastian Seung. 1999. Learning the parts of objects by non-negative matrix factorization. Nature. Yariv Maron, Michael Lamar, and Elie Bienenstock. 2010. Sphere embedding: An application to partof-speech induction. In Advances in Neural Information Processing Systems. Andr´e FT Martins, Noah A Smith, Eric P Xing, Pedro MQ Aguiar, and M´ario AT Figueiredo. 2010. Turbo parsers: Dependency parsing by approximate variational inference. In Proceedings of the 2010 Conference on Empirical Methods in Natural Language Processing. Association for Computational Linguistics. Andr´e F. T. Martins, Noah A. Smith, Pedro M. Q. Aguiar, and M´ario A. T. Figueiredo. 2011a. Dual decomposition with many overlapping components. In Proceedings of the Conference on Empirical Methods in Natural Language Processing, EMNLP ’11. Association for Computational Linguistics. Andr´e FT Martins, Noah A Smith, Pedro MQ Aguiar, and M´ario AT Figueiredo. 2011b. Structured sparsity in structured prediction. In Proceedings of the Conference on Empirical Methods in Natural Language Processing. Association for Computational Linguistics. Andr´e FT Martins, Miguel B Almeida, and Noah A Smith. 2013. Turning on the turbo: Fast third-order non-projective turbo parsers. In Proceedings of the 51th Annual Meeting of the Association for Computational Linguistics. Association for Computational Linguistics. Yuval Marton, Nizar Habash, and Owen Rambow. 2010. Improving arabic dependency parsing with lexical and inflectional morphological features. In Proceedings of the NAACL HLT 2010 First Workshop on Statistical Parsing of Morphologically-Rich Languages, SPMRL ’10. Association for Computational Linguistics. Yuval Marton, Nizar Habash, and Owen Rambow. 2011. 
Improving arabic dependency parsing with form-based and functional morphological features. In Proceedings of the 49th Annual Meeting of the Association for Computational Linguistics: Human Language Technologies. Association for Computational Linguistics. Ryan McDonald, Koby Crammer, and Fernando Pereira. 2005a. Online large-margin training of dependency parsers. In Proceedings of the 43rd Annual Meeting of the Association for Computational Linguistics (ACL’05). 1390 Ryan McDonald, Fernando Pereira, Kiril Ribarov, and Jan Hajiˇc. 2005b. Non-projective dependency parsing using spanning tree algorithms. In Proceedings of the conference on Human Language Technology and Empirical Methods in Natural Language Processing. Association for Computational Linguistics. Ryan McDonald, Kevin Lerman, and Fernando Pereira. 2006. Multilingual dependency analysis with a two-stage discriminative parser. In Proceedings of the Tenth Conference on Computational Natural Language Learning. Association for Computational Linguistics. Tomas Mikolov, Kai Chen, Greg Corrado, and Jeffrey Dean. 2013. Efficient estimation of word representations in vector space. CoRR. Peter Nilsson and Pierre Nugues. 2010. Automatic discovery of feature sets for dependency parsing. In Proceedings of the 23rd International Conference on Computational Linguistics (Coling 2010). Coling 2010 Organizing Committee. Joakim Nivre, Johan Hall, Jens Nilsson, G¨uls¸en Eryiit, and Svetoslav Marinov. 2006. Labeled pseudoprojective dependency parsing with support vector machines. In Proceedings of the Tenth Conference on Computational Natural Language Learning. Association for Computational Linguistics. Joakim Nivre, Johan Hall, Jens Nilsson, Atanas Chanev, G¨ulsen Eryigit, Sandra K¨ubler, Svetoslav Marinov, and Erwin Marsi. 2007. MaltParser: A language-independent system for data-driven dependency parsing. Natural Language Engineering. Alexander Rush and Slav Petrov. 2012a. Vine pruning for efficient multi-pass dependency parsing. In The 2012 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies (NAACL ’12). Alexander M Rush and Slav Petrov. 2012b. Vine pruning for efficient multi-pass dependency parsing. In Proceedings of the 2012 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies. Association for Computational Linguistics. Richard Socher, John Bauer, Christopher D. Manning, and Andrew Y. Ng. 2013. Parsing with compositional vector grammars. In Proceedings of the 51th Annual Meeting of the Association for Computational Linguistics. Nathan Srebro, Tommi Jaakkola, et al. 2003. Weighted low-rank approximations. In ICML. Nathan Srebro, Jason Rennie, and Tommi S Jaakkola. 2004. Maximum-margin matrix factorization. In Advances in neural information processing systems. Mihai Surdeanu, Richard Johansson, Adam Meyers, Llu´ıs M`arquez, and Joakim Nivre. 2008. The CoNLL-2008 shared task on joint parsing of syntactic and semantic dependencies. In Proceedings of the Twelfth Conference on Computational Natural Language Learning, CoNLL ’08. Association for Computational Linguistics. Min Tao and Xiaoming Yuan. 2011. Recovering lowrank and sparse components of matrices from incomplete and noisy observations. SIAM Journal on Optimization. Joseph Turian, Lev Ratinov, and Yoshua Bengio. 2010. Word representations: A simple and general method for semi-supervised learning. 
In Proceedings of the 48th Annual Meeting of the Association for Computational Linguistics, ACL ’10. Association for Computational Linguistics. Andrew E Waters, Aswin C Sankaranarayanan, and Richard Baraniuk. 2011. SpaRCS: Recovering lowrank and sparse matrices from compressive measurements. In Advances in Neural Information Processing Systems. Hao Zhang and Ryan McDonald. 2012a. Generalized higher-order dependency parsing with cube pruning. In Proceedings of the 2012 Joint Conference on Empirical Methods in Natural Language Processing and Computational Natural Language Learning, EMNLP-CoNLL ’12. Association for Computational Linguistics. Hao Zhang and Ryan McDonald. 2012b. Generalized higher-order dependency parsing with cube pruning. In Proceedings of the 2012 Joint Conference on Empirical Methods in Natural Language Processing and Computational Natural Language Learning. Association for Computational Linguistics. Hao Zhang, Liang Huang Kai Zhao, and Ryan McDonald. 2013. Online learning for inexact hypergraph search. In Proceedings of EMNLP. Yuan Zhang, Tao Lei, Regina Barzilay, Tommi Jaakkola, and Amir Globerson. 2014. Steps to excellence: Simple inference with refined scoring of dependency trees. In Proceedings of the 52th Annual Meeting of the Association for Computational Linguistics. Association for Computational Linguistics. Tianyi Zhou and Dacheng Tao. 2011. Godec: Randomized low-rank & sparse matrix decomposition in noisy case. In Proceedings of the 28th International Conference on Machine Learning (ICML-11). 1391
Proceedings of the 52nd Annual Meeting of the Association for Computational Linguistics, pages 1392–1402, Baltimore, Maryland, USA, June 23-25 2014. c⃝2014 Association for Computational Linguistics CoSimRank: A Flexible & Efficient Graph-Theoretic Similarity Measure Sascha Rothe and Hinrich Sch¨utze Center for Information & Language Processing University of Munich [email protected] Abstract We present CoSimRank, a graph-theoretic similarity measure that is efficient because it can compute a single node similarity without having to compute the similarities of the entire graph. We present equivalent formalizations that show CoSimRank’s close relationship to Personalized PageRank and SimRank and also show how we can take advantage of fast matrix multiplication algorithms to compute CoSimRank. Another advantage of CoSimRank is that it can be flexibly extended from basic node-node similarity to several other graph-theoretic similarity measures. In an experimental evaluation on the tasks of synonym extraction and bilingual lexicon extraction, CoSimRank is faster or more accurate than previous approaches. 1 Introduction Graph-theoretic algorithms have been successfully applied to many problems in NLP (Mihalcea and Radev, 2011). These algorithms are often based on PageRank (Brin and Page, 1998) and other centrality measures (e.g., (Erkan and Radev, 2004)). An alternative for tasks involving similarity is SimRank (Jeh and Widom, 2002). SimRank is based on the simple intuition that nodes in a graph should be considered as similar to the extent that their neighbors are similar. Unfortunately, SimRank has time complexity O(n3) (where n is the number of nodes in the graph) and therefore does not scale to the large graphs that are typical of NLP. This paper introduces CoSimRank,1 a new graph-theoretic algorithm for computing node similarity that combines features of SimRank and PageRank. Our key observation is that to compute the similarity of two nodes, we need not consider 1Code available at code.google.com/p/cistern all other nodes in the graph as SimRank does; instead, CoSimRank starts random walks from the two nodes and computes their similarity at each time step. This offers large savings in computation time if we only need the similarities of a small subset of all n2 node similarities. These two cases – computing a few similarities and computing many similarities – correspond to two different representations we can compute CoSimRank on: a vector representation, which is fast for only a few similarities, and a matrix representation, which can take advantage of fast matrix multiplication algorithms. CoSimRank can be used to compute many variations of basic node similarity – including similarity for graphs with weighted and typed edges and similarity for sets of nodes. Thus, CoSimRank has the added advantage of being a flexible tool for different types of applications. The extension of CoSimRank to similarity across graphs is important for the application of bilingual lexicon extraction: given a set of correspondences between nodes in two graphs A and B (corresponding to two different languages), a pair of nodes (a ∈A, b ∈B) is a good candidate for a translation pair if their node similarity is high. In an experimental evaluation, we show that CoSimRank is more efficient and more accurate than both SimRank and PageRank-based algorithms. This paper is structured as follows. Section 2 discusses related work. Section 3 introduces CoSimRank. In Section 4, we compare CoSimRank and SimRank. 
By providing some useful extensions, we demonstrate the great flexibility of CoSimRank (Section 5). We perform an experimental evaluation of CoSimRank in Section 6. Section 7 summarizes the paper. 2 Related Work Our work is unsupervised. We therefore do not review graph-based methods that make extensive 1392 use of supervised learning (e.g., de Melo and Weikum (2012)). Since the original version of SimRank (Jeh and Widom, 2002) has complexity O(n4), many extensions have been proposed to speed up its calculation. A Monte Carlo algorithm, which is scalable to the whole web, was suggested by Fogaras and R´acz (2005). However, in an evaluation of this algorithm we found that it does not give competitive results (see Section 6). A matrix representation of SimRank called SimFusion (Xi et al., 2005) improves the computational complexity from O(n4) to O(n3). Lizorkin et al. (2010) also reduce complexity to O(n3) by selecting essential node pairs and using partial sums. They also give a useful overview of SimRank, SimFusion and the Monte Carlo methods of Fogaras and R´acz (2005). A non-iterative computation for SimRank was introduced by Li et al. (2010). This is especially useful for dynamic graphs. However, all of these methods have to run SimRank on the entire graph and are not efficient enough for very large graphs. We are interested in applications that only need a fraction of all O(n2) pairwise similarities. The algorithm we propose below is an order of magnitude faster in such applications because it is based on a local formulation of the similarity measure.2 Apart from SimRank, many other similarity measures have been proposed. Leicht et al. (2006) introduce a similarity measure that is also based on the idea that nodes are similar when their neighbors are, but that is designed for bipartite graphs. However, most graphs in NLP are not bipartite and Jeh and Widom (2002) also proposed a SimRank variant for bipartite graphs. Another important similarity measure is cosine similarity of Personalized PageRank (PPR) vectors. We will refer to this measure as PPR+cos. Hughes and Ramage (2007) find that PPR+cos has high correlation with human similarity judgments on WordNet-based graphs. Agirre et al. (2009) use PPR+cos for WordNet and for crosslingual studies. Like CoSimRank, PPR+cos is efficient when computing single node pair similarities; we therefore use it as one of our baselines below. This method is also used by Chang et al. (2013) for semantic relatedness. They also experimented with Euclidean distance and KL2A reviewer suggests that CoSimRank is an efficient version of SimRank in a way analogous to SALSA’s (Lempel and Moran, 2000) relationship to HITS (Kleinberg, 1999) in that different aspects of similarity are decoupled. divergence. Interestingly, a simpler method performed best when comparing with human similarity judgments. In this method only the entries corresponding to the compared nodes are used for a similarity score. Rao et al. (2008) compared PPR+cos to other graph based similarity measures like shortest-path and bounded-length random walks. PPR+cos performed best except for a new similarity measure based on commute time. We do not compare against this new measure as it uses the graph Laplacian and so cannot be computed for a single node pair. One reason CoSimRank is efficient is that we need only compute a few iterations of the random walk. This is often true of this type of algorithm; cf. (Sch¨utze and Walsh, 2008). 
LexRank (Erkan and Radev, 2004) is similar to PPR+cos in that it combines PageRank and cosine; it initializes the sentence similarity matrix of a document using cosine and then applies PageRank to compute lexical centrality. Despite this superficial relatedness, applications like lexicon extraction that look for similar entities and applications that look for central entities are quite different. In addition to faster versions of SimRank, there has also been work on extensions of SimRank. Dorow et al. (2009) and Laws et al. (2010) extend SimRank to edge weights, edge labels and multiple graphs. We use their Multi-Edge Extraction (MEE) algorithm as one of our baselines below. A similar graph of dependency structures was built by Minkov and Cohen (2008). They applied different similarity measures, e.g., cosine of dependency vectors or a new algorithm called pathconstrained graph walk, on synonym extraction (Minkov and Cohen, 2012). We compare CoSimRank with their results in our experiments (see Section 6). Some other applications of SimRank or other graph based similarity measures in NLP include work on document similarity (Li et al., 2009), the transfer of sentiment information between languages (Scheible et al., 2010) and named entity disambiguation (Han and Zhao, 2010). Hoang and Kan (2010) use SimRank for related work summarization. Muthukrishnan et al. (2010) combine link based similarity and content based similarity for document clustering and classification. These approaches use at least one of cosine similarity, PageRank and SimRank. CoSimRank can either be interpreted as an efficient version of Sim1393 Rank or as a version of Personalized PageRank for similarity measurement. The novelty is that we compute similarity for vectors that are induced using a new algorithm, so that the similarity measurement is much more efficient when an application only needs a fraction of all O(n2) pairwise similarities. 3 CoSimRank We first first give an intuitive introduction of CoSimRank as a Personalized PageRank (PPR) derivative. Later on, we will give a matrix formulation to compare CoSimRank with SimRank. 3.1 Personalized PageRank Haveliwala (2002) introduced Personalized PageRank – or topic-sensitive PageRank – based on the idea that the uniform damping vector p(0) can be replaced by a personalized vector, which depends on node i. We usually set p(0)(i) = ei, with ei being a vector of the standard basis, i.e., the ith entry is 1 and all other entries are 0. The PPR vector of node i is given by: p(k)(i) = dAp(k−1)(i) + (1 −d)p(0)(i) (1) where A is the stochastic matrix of the Markov chain, i.e., the row normalized adjacency matrix. The damping factor d ∈(0, 1) ensures that the computation converges. The PPR vector after k iterations is given by p(k). To visualize this formula, one can imagine a random surfer starting at node i and following one of the links with probability d or jumping back to the starting node i with probability (1 −d). Entry i of the converged PPR vector represents the probability that the random surfer is on node i after an unlimited number of steps. To simulate the behavior of SimRank we will simplify this equation and set the damping factor d = 1. We will re-add a damping factor later in the calculation. p(k) = Ap(k−1) (2) Note that the personalization vector p(0) was eliminated, but is still present as the starting vector of the iteration. 3.2 Similarity of vectors Let p(i) be the PPR vector of node i. 
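The iteration in Eq. 2 that produces these vectors is a plain power iteration started from a basis vector. The sketch below (hypothetical NumPy code; A is the stochastic matrix as defined in the text, and we keep every intermediate vector because the measures below compare vectors step by step) illustrates it.

```python
import numpy as np

def ppr_vectors(A, i, n_iter=5):
    """Return [p(0), p(1), ..., p(n_iter)] for node i following Eq. 2.

    A is the stochastic (row-normalized adjacency) matrix of the text,
    applied as p(k) = A p(k-1); p(0) is the standard basis vector e_i.
    """
    p = np.zeros(A.shape[0])
    p[i] = 1.0
    vectors = [p]
    for _ in range(n_iter):
        p = A @ p
        vectors.append(p)
    return vectors
```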
The cosine of two vectors u and v is computed by dividing Figure 1: Graph motivating CoSimRank algorithm. Whereas PPR gives relatively high similarity to the pair (law,suit), CoSimRank assigns the pair similarity 0. the inner product ⟨u, v⟩by the lengths of the vectors. The cosine of two PPR vectors can be used as a similarity measure for the corresponding nodes (Hughes and Ramage, 2007; Agirre et al., 2009): s(i, j) = ⟨p(i), p(j)⟩ |p(i)||p(j)| (3) This measure s(i, j) looks at the probability that a random walker is on a certain edge after an unlimited number of steps. This is potentially problematic as the example in Figure 1 shows. The PPR vectors of suit and dress will have some weight on tailor, which is good. However, the PPR vector of law will also have a non-zero weight for tailor. So law and dress are similar because of the node tailor. This is undesirable. We can prevent this type of spurious similarity by taking into account the path the random surfer took to get to a particular node. We formalize this by defining CoSimRank s(i, j) as follows: s(i, j) = ∞ X k=0 ck⟨p(k)(i), p(k)(j)⟩ (4) where p(k)(i) is the PPR vector of node i from Eq. 2 after k iterations. We compare the PPR vectors at each time step k. The sum of all similarities is the value of CoSimRank, i.e., the final similarity. We add a damping factor c, so that early meetings are more valuable than later meetings. To compute the similarity of two vectors u and v we use the inner product ⟨·, ·⟩in Eq. 4 for two reasons: 1. This is similar to cosine similarity except that the 1-norm is used instead of the 2-norm. Since our vectors are probability vectors, we have ⟨p(i), p(j)⟩ |p(i)||p(j)| = ⟨p(i), p(j)⟩ 1394 for the 1-norm.3 2. Without expensive normalization, we can give a simple matrix formalization of CoSimRank and compute it efficiently using fast matrix multiplication algorithms. Later on, the following iterative computation of CoSimRank will prove useful: s(k)(i, j) = ck⟨p(k)(i), p(k)(j)⟩+ s(k−1)(i, j) (5) 3.3 Matrix formulation The matrix formulation of CoSimRank is: S(0) = E S(1) = cAAT + S(0) S(2) = c2A2(AT )2 + S(1) . . . S(k) = ckAk(AT )k + S(k−1) (6) We will see in Section 5 that this formulation is the basis for a very efficient version of CoSimRank. 3.4 Convergence properties As the PPR vectors have only positive values, we can easily see in Eq. 4 that the CoSimRank of one node pair is monotonically non-decreasing. For the dot product of two vectors, the CauchySchwarz inequality gives the upper bound: ⟨u, v⟩≤∥u∥∥v∥ where ∥x∥is the norm of x. From Eq. 2 we get p(k) 1 = 1, where ∥·∥1 is the 1-norm. We also know from elementary functional analysis that the 1-norm is the biggest of all p-norms and so one has p(k) ≤1. It follows that CoSimRank grows more slowly than a geometric series and converges if |c| < 1: s(i, j) ≤ ∞ X k=0 ck = 1 1 −c If an upper bound of 1 is desired for s(i, j) (instead of 1/(1 −c)), then we can use s′: s′(i, j) = (1 −c)s(i, j) 3This type of similarity measure has also been used and investigated by ´O S´eaghdha and Copestake (2008), Cha (2007), Jebara et al. (2004) (probability product kernel) and (Jaakkola et al., 1999) (Fisher kernel) among others. 4 Comparison to SimRank The original SimRank equation can be written as follows (Jeh and Widom, 2002): r(i, j) =      1, if i = j c |N(i)||N(j)| P k∈N(i) l∈N(j) r(k, l), else where N(i) denotes the nodes connected to i. SimRank is computed iteratively. 
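Before turning to the matrix formulation of SimRank, note for contrast how local the computation of Eqs. 4–5 is: only the two chains of PPR iterates for i and j are needed, not the similarities of the whole graph. A minimal sketch (hypothetical code, reusing the ppr_vectors helper sketched above and truncating the sum after a few iterations):

```python
def cosimrank(A, i, j, c=0.8, n_iter=5):
    """Single-pair CoSimRank s(i, j) of Eqs. 4-5, truncated after n_iter steps.

    Reuses the (hypothetical) ppr_vectors helper; the k = 0 term contributes
    1 iff i = j, and later meetings are discounted by c^k.
    """
    p_i = ppr_vectors(A, i, n_iter)
    p_j = ppr_vectors(A, j, n_iter)
    return sum((c ** k) * float(p_i[k] @ p_j[k]) for k in range(n_iter + 1))
```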
With A being the normalized adjacency matrix we can write SimRank in matrix formulation: R(0) = E R(k) = max{cAR(k−1)AT , R(0)} (7) where the maximum of two matrices refers to the element-wise maximum. We will now prove by induction that the matrix formulation of CoSimRank (Eq. 6) is equivalent to: S′(k) = cAS′(k−1)AT + S(0) (8) and thus very similar to SimRank (Eq. 7). The base case S(1) = S′(1) is trivial. Inductive step: S′(k) (8) = cAS′(k−1)AT + S(0) = cA(ck−1Ak−1(AT )k−1 + S(k−2))AT + S(0) = ckAk(AT )k + cAS(k−2)AT + S(0) = ckAk(AT )k + S(k−1) (6) = S(k) Comparing Eqs. 7 and 8, we see that SimRank and CoSimRank are very similar except that they initialize the similarities on the diagonal differently. Whereas SimRank sets each of these entries back to one at each iteration, CoSimRank adds one. Thus, when computing the two similarity measures iteratively, the diagonal element (i, i) will be set to 1 by both methods for those initial iterations for which this entry is 0 for cAS(k−1)AT (i.e., before applying either max or add). The methods diverge when the entry is ̸= 0 for the first time. Complexity of computing all n2 similarities. The matrix formulas of both SimRank (Eq. 7) and CoSimRank (Eq. 8) have time complexity O(n3) or – if we want to take the higher efficiency of computation for sparse graphs into account – O(dn2) where n is the number of nodes and d the 1395 average degree. Space complexity is O(n2) for both algorithms. Complexity of computing k2 ≪n2 similarities. In most cases, we only want to compute k2 similarities for k nodes. For CoSimRank, we compute the k PPR vectors in O(kdn) (Eq. 2) and compute the k2 similarities in O(k2n) (Eq. 5). If d < k, then the time complexity of CoSimRank is O(k2n). If we only compute a single similarity, then the complexity is O(dn). In contrast, the complexity of SimRank is the same as in the allsimilarities case: O(dn2). It is not obvious how to design a lower-complexity version of SimRank for this case. Thus, we have reduced SimRank’s cubic time complexity to a quadratic time complexity for CoSimRank or – assuming that the average degree d does not depend on n – SimRank’s quadratic time complexity to linear time complexity for the case of computing few similarities. Space complexity for computing k2 similarities is O(kn) since we need only store k vectors, not the complete similarity matrix. This complexity can be exploited even for the all similarities application: If the matrix formulation cannot be used because the O(n2) similarity matrix is too big for available memory, then we can compute all similarities in batches – and if desired in parallel – whose size is chosen such that the vectors of each batch still fit in memory. In summary, CoSimRank and SimRank have similar space and time complexities for computing all n2 similarities. For the more typical case that we only want to compute a fraction of all similarities, we have recast the global SimRank formulation as a local CoSimRank formulation. As a result, time and space complexities are much improved. In Section 6, we will show that this is also true in practice. 5 Extensions We will show now that the basic CoSimRank algorithm can be extended in a number of ways and is thus a flexible tool for different NLP applications. 5.1 Weighted edges The use of weighted edges was first proposed in the PageRank patent. It is straightforward and easy to implement by replacing the row normalized adjacency matrix A with an arbitrary stochastic matrix P. 
We can use this edge weighted PageRank for CoSimRank. 5.2 CoSimRank across graphs We often want to compute the similarity of nodes in two different graphs with a known node-node correspondence; this is the scenario we are faced with in the lexicon extraction task (see Section 6). A variant of SimRank for this task was presented by Dorow et al. (2009). We will now present an equivalent method for CoSimRank. We denote the number of nodes in the two graphs U and V by |U| and |V |, respectively. We compute PPR vectors p ∈R|U| and q ∈R|V | for each graph. Let S(0) ∈R|U|×|V | be the known node-node correspondences. The analog of CoSimRank (Eq. 4) for two graphs is then: s(i, j) = ∞ X k=0 ck X (u,v)∈S(0) p(k) u (i)q(k) v (j) (9) The matrix formulation (cf. Eq. 6) is: S(k) = ckAkS(0)(BT )k + S(k−1) (10) where A and B are row-normalized adjacency matrices. We can interpret S(0) as a change of basis. A similar approach for word embeddings was published by Mikolov et al. (2013). They call S(0) the translation matrix. 5.3 Typed edges To be able to directly compare to prior work in our experiments, we also present a method to integrate a set of typed edges T in the CoSimRank calculation. For this we will compute a similarity matrix for each edge type τ and merge them into one matrix for the next iteration: S(k) = c |T | X τ∈T AτS(k−1)BT τ ! + S(0) (11) This formula is identical to the random surfer model where two surfers only meet iff they are on the same node and used the same edge type to get there. A more strict claim would be to use the same edge type at any time of their journey: S(k) = ck |T |k X τ∈T k k Y i=1 Aτi ! S(0) k−1 Y i=0 BT τk−i ! + S(k−1) (12) We will not use Eq. 12 due to its space complexity. 1396 5.4 Similarity of sets of nodes CoSimRank can also be used to compute the similarity s(V, W) of two sets V and W of nodes, e.g., short text snippets. We are not including this method in our experiments, but we will give the equation here, as traditional document similarity measures (e.g., cosine similarity) perform poorly on this task although there also are known alternatives with good results (Sahami and Heilman, 2006). For a set V , the initial PPR vector is given by: p(0) i (V ) = ( 1 |V |, if i ∈V 0, else We then reuse Eq. 4 to compute s(V, W): s(V, W) = ∞ X k=0 ck⟨p(k)(V ), p(k)(W)⟩ In summary, modifications proposed for SimRank (weighted and typed edges, similarity across graphs) as well as modifications proposed for PageRank (sets of nodes) can also be applied to CoSimRank. This makes CoSimRank a very flexible similarity measure. We will test the first three extensions experimentally in the next section and leave similarity of node sets for future work. 6 Experiments We evaluate CoSimRank for the tasks of synonym extraction and bilingual lexicon extraction. We use the basic version of CoSimRank (Eq. 4) for synonym extraction and the two-graph version (Eq. 9) for lexicon extraction, both with weighted edges. Our motivation for this application is that two words that are synonyms of each other should have similar lexical neighbors and that two words that are translations of each other should have neighbors that correspond to each other; thus, in each case the nodes should be similar in the graphtheoretic sense and CoSimRank should be able to identify this similarity. We use the English and German graphs published by Laws et al. (2010), including edge weighting and normalization. Nodes are nouns, adjectives and verbs occurring in Wikipedia. 
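For concreteness, Eq. 11 can be iterated directly on the seed correspondence matrix; the sketch below (hypothetical dense NumPy code — the names are ours, and the real graphs call for sparse matrices) combines the typed-edge and two-graph extensions.

```python
import numpy as np

def typed_cosimrank(A_by_type, B_by_type, S0, c=0.8, n_iter=5):
    """Matrix CoSimRank with typed edges across two graphs (Eq. 11).

    A_by_type and B_by_type map each edge type to its row-normalized
    adjacency matrix in the two graphs; S0 holds the seed correspondences.
    """
    S = S0.copy()
    types = list(A_by_type.keys())
    for _ in range(n_iter):
        # Two random surfers meet only if they are on corresponding nodes
        # and used the same edge type to get there.
        update = sum(A_by_type[t] @ S @ B_by_type[t].T for t in types)
        S = (c / len(types)) * update + S0
    return S
```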
There are three types of edges, corresponding to three types of syntactic configurations extracted from the parsed Wikipedias: adjective-noun, verbobject and noun-noun coordination. Table 1 gives examples and number of nodes and edges. Edge types relation entities description example amod a, v adjective-noun a fast car dobj v, n verb-object drive a car ncrd n, n noun-noun cars and busses Graph statistics nodes nouns adjectives verbs de 34,544 10,067 2,828 en 22,258 12,878 4,866 edges ncrd amod dobj de 65,299 417,151 143,905 en 288,878 686,069 510,351 Table 1: Edge types (above) and number of nodes and edges (below) 6.1 Baselines We propose CoSimRank as an efficient algorithm for computing the similarity of nodes in a graph. Consequently, we compare against the two main methods for this task in NLP: SimRank and extensions of PageRank. We also compare against the MEE (Multi-Edge Extraction) variant of SimRank (Dorow et al., 2009), which handles labeled edges more efficiently than SimRank: S′(k) = c |T | X τ∈T AτS(k−1)BT τ S(k) = max{S′(k), S(0)} where Aτ is the row-normalized adjacency matrix for edge type τ (see edge types in Table 1). Apart from SimRank, extensions of PageRank are the main methods for computing the similarity of nodes in graphs in NLP (e.g., Hughes and Ramage (2007), Agirre et al. (2009) and other papers discussed in related work). Generally, these methods compute the Personalized PageRank for each node (see Eq. 1). When the computation has converged, the similarity of two nodes is given by the cosine similarity of the Personalized PageRank vectors. We implemented this method for our experiments and call it PPR+cos. 6.2 Synonym Extraction We use TS68, a test set of 68 synonym pairs published by Minkov and Cohen (2012) for evaluation. This gold standard lists a single word as the 1397 P@1 P@10 MRR one-synonym PPR+cos 20.6% 52.9% 0.32 SimRank 25.0% 61.8% 0.37 CoSimRank 25.0% 61.8% 0.37 Typed CoSimRank 23.5% 63.2% 0.37 extended PPR+cos 32.6% 73.5% 0.48 SimRank 45.6% 83.8% 0.59 CoSimRank 45.6% 83.8% 0.59 Typed CoSimRank 44.1% 83.8% 0.59 Table 2: Results for synonym extraction on TS68. Best result in each column in bold. correct synonym even if there are several equally acceptable near-synonyms (see Table 3 for examples). We call this the one-synonym evaluation. Three native English speakers were asked to mark synonyms, that were proposed by a baseline or by CoSimRank, i.e. ranked in the top 10. If all three of them agreed on one word as being a synonym in at least one meaning, we added this as a correct answer to the test set. We call this the “extended” evaluation (see Table 2). Synonym extraction is run on the English graph. To calculate PPR+cos, we computed 20 iterations with a decay factor of 0.8 and used the cosine similarity with the 2-norm in the denominator to compare two vectors. For the other three methods, we also used a decay factor of 0.8 and computed 5 iterations. Recall that CoSimRank uses the simple inner product ⟨·, ·⟩to compare vectors. Our evaluation measures are proportion of words correctly translated by word in the top position (P@1), proportion of words correctly translated by a word in one of the top 10 positions (P@10) and Mean Reciprocal Rank (MRR). CoSimRank’s MRR scores of 0.37 (one-synonym) and 0.59 (extended) are the same or better than all baselines (see Table 2). CoSimRank and SimRank have the same P@1 and P@10 accuracy (although they differed on some decisions). 
CoSimRank is better than PPR+cos on both evaluations, but as this test set is very small, the results are not significant. Table 3 shows a sample of synonyms proposed by CoSimRank. Minkov and Cohen (2012) tested cosine and random-walk measures on grammatical relationkeyword expected extracted movie film film modern contemporary contemporary demonstrate protest show attractive appealing beautiful economic profitable financial close shut open Table 3: Examples for extracted synonyms. Correct synonyms according to extended evaluation in bold. ships (similar to our setup) as well as on cooccurrence statistics. The MRR scores for these methods range from 0.29 to 0.59. (MRR is equivalent to MAP as reported by Minkov and Cohen (2012) when there is only one correct answer.) Their best number (0.59) is better than our one-synonym result; however, they performed manual postprocessing of results – e.g., discarding words that are morphologically or semantically related to other words in the list – so our fully automatic results cannot be directly compared. 6.3 Lexicon Extraction We evaluate lexicon extraction on TS1000, a test set of 1000 items, (Laws et al., 2010) each consisting of an English word and its German translations. For lexicon extraction, we use the same parameters as in the synonym extraction task for all four similarity measures. We use a seed dictionary of 12,630 word pairs to establish node-node correspondences between the two graphs. We remove a search keyword from the seed dictionary before calculating similarities for it, something that the architecture of CoSimRank makes easy because we can use a different seed dictionary S(0) for every keyword. Both CoSimRank methods outperform SimRank significantly (see Table 4). The difference between CoSimRank with and without typed edges is not significant. (This observation was also made for SimRank on a smaller graph and test set (Laws et al., 2010).) PPR+cos’s performance at 14.8% correct translations is much lower than SimRank and CoSimRank. The disadvantage of this similarity measure is significant and even more visible on bilingual lexicon extraction than on synonym extraction (see Table 2). The reason might be that we are not comparing the whole PPR vector anymore, 1398 P@1 P@10 PPR+cos 14.8%† 45.7%† SimRank MEE 48.0%† 76.0%† CoSimRank 61.1% 84.0% Typed CoSimRank 61.4% 83.9% Table 4: Results for bilingual lexicon extraction (TS1000 EN →DE). Best result in each column in bold. but only entries which occur in the seed dictionary (see Eq. 9). As the seed dictionary contains 12,630 word pairs, this means that only every fourth entry of the PPR vector (the German graph has 47,439 nodes) is used for similarity calculation. This is also true for CoSimRank, but it seems that CoSimRank is more stable because we compare more than one vector.† We also experimented with the method of Fogaras and R´acz (2005). We tried a number of different ways of modifying it for weighted graphs: (i) running the random walks with the weighted adjacency matrix as Markov matrix, (ii) storing the weight (product of each edge weight) of a random walk and using it as a factor if two walks meet and (iii) a combination of both. We needed about 10,000 random walks in all three conditions. As a result, the computational time was approximately 30 minutes per test word, so this method is even slower than SimRank for our application. The accuracies P@1 and P@10 were worse in all experiments than those of CoSimRank. 
6.4 Run time performance

Table 5 compares the run time performance of CoSimRank with the baselines. We ran all experiments on a 64-bit Linux machine with 64 Intel Xeon X7560 2.27 GHz CPUs and 1 TB RAM. The calculated time is the sum of the time spent in user mode and the time spent in kernel mode. The actual wall clock time was significantly lower as we used up to 64 CPUs.

                    synonym extraction   lexicon extraction
                    (68 word pairs)      (1000 word pairs)
PPR+cos             2,228                2,195
SimRank             23,423               14,418
CoSimRank           524                  2,342
Typed CoSimRank     615                  6,108

Table 5: Execution times in minutes for CoSimRank and the baselines. Best result in each column in bold.

Compared to SimRank, CoSimRank is more than 40 times faster on synonym extraction and six times faster on lexicon extraction. SimRank is at a disadvantage because it computes all similarities in the graph regardless of the size of the test set; it is particularly inefficient on synonym extraction because the English graph contains a large number of edges (see Table 1). Compared to PPR+cos, CoSimRank is roughly four times faster on synonym extraction and has comparable performance on lexicon extraction. We compute 20 iterations of PPR+cos to reach convergence and then calculate a single cosine similarity. For CoSimRank, we need only compute five iterations to reach convergence, but we have to compute a vector similarity in each iteration. The counteracting effects of fewer iterations and more vector similarity computations can give either CoSimRank or PPR+cos an advantage, as is the case for synonym extraction and lexicon extraction, respectively. CoSimRank should generally be three times faster than typed CoSimRank since the typed version has to repeat the computation for each of the three types. This effect is only visible on the larger test set (lexicon extraction) because the general computation overhead is about the same on a smaller test set.

6.5 Comparison with WINTIAN

Here we address inducing a bilingual lexicon from a seed set based on grammatical relations found by a parser. An alternative approach is to induce a bilingual lexicon from Wikipedia's interwiki links (Rapp et al., 2012). These two approaches have different strengths and weaknesses; e.g., the interwiki-link-based approach does not require a seed set, but it can only be applied to comparable corpora that consist of corresponding – although not necessarily “parallel” – documents. Despite these differences it is still interesting to compare the two algorithms. Rapp et al. (2012) kindly provided their test set to us. It contains 1000 English words and a single correct German translation for each. We evaluate on a subset we call TS774 that consists of the 774 test word pairs that are in the intersection of words covered by the WINTIAN Wikipedia data (Rapp et al., 2012) and words covered by our data. Most of the 226 missing word pairs are adverbs, prepositions and plural forms that are not covered by our graphs due to the construction algorithm we use: lemmatization, restriction to adjectives, nouns and verbs etc.

                P@1      P@10
WINTIAN         43.8%    55.4%†
CoSimRank       43.0%    73.6%

Table 6: Results for bilingual lexicon extraction (TS774 DE → EN). Best result in each column in bold.

†significantly worse than CoSimRank (α = 0.05, one-tailed Z-Test)
Table 6 shows that CoSimRank is slightly, but not significantly worse than WINTIAN on P@1 (43.0 vs 43.8), but significantly better on P@10 (73.6 vs 55.4).4 The reason could be that CoSimRank is a more effective algorithm than WINTIAN; but the different initializations (seed set vs interwiki links) or the different linguistic representations (grammatical relations vs bag-of-words) could also be responsible. 6.6 Error Analysis The results on TS774 can be considered conservative since only one translation is accepted as being correct. In reality other translations might also be acceptable (e.g., both street and road for Straße). In contrast, TS1000 accepts more than one correct translation. Additionally, TS774 was created by translating English words into German (using Google translate). We are now testing the reverse direction. So we are doomed to fail if the original English word is a less common translation of an ambiguous German word. For example, the English word gulf was translated by Google to Golf, but the most common sense of Golf is the sport. Hence our algorithm will incorrectly translate it back to golf. As we can see in Table 7, we also face the problems discussed by Laws et al. (2010): the algorithm sometimes picks cohyponyms (which can still be seen as reasonable) and antonyms (which are clear errors). Contrary to our intuition, the edge-typed variant of CoSimRank did not perform significantly better than the non-edge-typed version. Looking 4We achieved better results for CoSimRank by optimizing the damping factor, but in this paper, we only present results for a fixed damping factor of 0.8. keyword gold standard CoSimRank arm poor impoverished erreichen reach achieve gehen go walk direkt directly direct weit far further breit wide narrow reduzieren reduce increase Stunde hour second Westen west southwest Junge boy child Table 7: Examples for CoSimRank translation errors on TS774. We counted translations as incorrect if they were not listed in the gold standard even if they were correct translations according to www.dict.cc (in bold). at Table 1, we see that there is only one edge type connecting adjectives. The same is true for verbs. The random surfer only has a real choice between different edge types when she is on a noun node. Combined with the fact that only the last edge type is important this has absolutely no effect for a random surfer meeting at adjectives or verbs. Two possible solutions would be (i) to use more fine-grained edge types, (ii) to apply Eq. 12, in which the edge type of each step is important. However, this will increase the memory needed for calculation. 7 Summary We have presented CoSimRank, a new similarity measure that can be computed for a single node pair without relying on the similarities in the whole graph. We gave two different formalizations of CoSimRank: (i) a derivation from Personalized PageRank and (ii) a matrix representation that can take advantage of fast matrix multiplication algorithms. We also presented extensions of CoSimRank for a number of applications, thus demonstrating the flexibility of CoSimRank as a similarity measure. We showed that CoSimRank is superior to SimRank in time and space complexity; and we demonstrated that CoSimRank performs better than PPR+cos on two similarity computation tasks. Acknowledgments. This work was supported by DFG (SCHU 2246/2-2). 1400 References Eneko Agirre, Enrique Alfonseca, Keith Hall, Jana Kravalova, Marius Pas¸ca, and Aitor Soroa. 2009. 
A study on similarity and relatedness using distributional and wordnet-based approaches. In Proceedings of Human Language Technologies: The 2009 Annual Conference of the North American Chapter of the Association for Computational Linguistics, NAACL ’09, pages 19–27. Sergey Brin and Lawrence Page. 1998. The anatomy of a large-scale hypertextual web search engine. In WWW, pages 107–117. Sung-Hyuk Cha. 2007. Comprehensive survey on distance/similarity measures between probability density functions. Mathematical Models and Methods in Applied Sciences, 1(4):300–307. Ching-Yun Chang, Stephen Clark, and Brian Harrington. 2013. Getting creative with semantic similarity. In Semantic Computing (ICSC), 2013 IEEE Seventh International Conference on, pages 330–333. Gerard de Melo and Gerhard Weikum. 2012. Uwn: A large multilingual lexical knowledge base. In ACL (System Demonstrations), pages 151–156. Beate Dorow, Florian Laws, Lukas Michelbacher, Christian Scheible, and Jason Utt. 2009. A graphtheoretic algorithm for automatic extension of translation lexicons. In Proceedings of the Workshop on Geometrical Models of Natural Language Semantics, GEMS ’09, pages 91–95. G¨unes Erkan and Dragomir R. Radev. 2004. Lexrank: Graph-based lexical centrality as salience in text summarization. J. Artif. Intell. Res. (JAIR), 22:457– 479. D´aniel Fogaras and Bal´azs R´acz. 2005. Scaling link-based similarity search. In Proceedings of the 14th international conference on World Wide Web, WWW ’05, pages 641–650. Xianpei Han and Jun Zhao. 2010. Structural semantic relatedness: a knowledge-based method to named entity disambiguation. In Proceedings of the 48th Annual Meeting of the Association for Computational Linguistics, ACL ’10, pages 50–59. Taher H. Haveliwala. 2002. Topic-sensitive pagerank. In Proceedings of the 11th international conference on World Wide Web, WWW ’02, pages 517–526. Cong Duy Vu Hoang and Min-Yen Kan. 2010. Towards automated related work summarization. In Proceedings of the 23rd International Conference on Computational Linguistics: Posters, COLING ’10, pages 427–435. Thad Hughes and Daniel Ramage. 2007. Lexical semantic relatedness with random graph walks. In EMNLP-CoNLL, pages 581–589. Tommi Jaakkola, David Haussler, et al. 1999. Exploiting generative models in discriminative classifiers. Advances in neural information processing systems, pages 487–493. Tony Jebara, Risi Kondor, and Andrew Howard. 2004. Probability product kernels. The Journal of Machine Learning Research, 5:819–844. Glen Jeh and Jennifer Widom. 2002. Simrank: a measure of structural-context similarity. In Proceedings of the eighth ACM SIGKDD international conference on Knowledge discovery and data mining, KDD ’02, pages 538–543. Jon M. Kleinberg. 1999. Authoritative sources in a hyperlinked environment. Journal of the ACM, 46(5):604–632. Florian Laws, Lukas ichelbacher, Beate Dorow, Christian Scheible, Ulrich Heid, and Hinrich Sch¨utze. 2010. A linguistically grounded graph model for bilingual lexicon extraction. In Coling 2010: Posters, pages 614–622. Elizabeth Leicht, Petter Holme, and Mark Newman. 2006. Vertex similarity in networks. Physical Review E, 73(2):026120. Ronny Lempel and Shlomo Moran. 2000. The stochastic approach for link-structure analysis (salsa) and the tkc effect. Computer Networks, 33(1):387–401. Pei Li, Zhixu Li, Hongyan Liu, Jun He, and Xiaoyong Du. 2009. Using link-based content analysis to measure document similarity effectively. 
In Proceedings of the Joint International Conferences on Advances in Data and Web Management, APWeb/WAIM ’09, pages 455–467. Cuiping Li, Jiawei Han, Guoming He, Xin Jin, Yizhou Sun, Yintao Yu, and Tianyi Wu. 2010. Fast computation of simrank for static and dynamic information networks. In Proceedings of the 13th International Conference on Extending Database Technology, EDBT ’10, pages 465–476. Dmitry Lizorkin, Pavel Velikhov, Maxim Grinev, and Denis Turdakov. 2010. Accuracy estimate and optimization techniques for simrank computation. The VLDB Journal—The International Journal on Very Large Data Bases, 19(1):45–66. Rada Mihalcea and Dragomir Radev. 2011. Graphbased natural language processing and information retrieval. Cambridge University Press. Tomas Mikolov, Quoc V Le, and Ilya Sutskever. 2013. Exploiting similarities among languages for machine translation. arXiv preprint arXiv:1309.4168. Einat Minkov and William W. Cohen. 2008. Learning graph walk based similarity measures for parsed text. In Proceedings of the Conference on Empirical Methods in Natural Language Processing, EMNLP ’08, pages 907–916. 1401 Einat Minkov and William W. Cohen. 2012. Graph based similarity measures for synonym extraction from parsed text. In Workshop Proceedings of TextGraphs-7 on Graph-based Methods for Natural Language Processing, TextGraphs-7 ’12, pages 20– 24. Pradeep Muthukrishnan, Dragomir Radev, and Qiaozhu Mei. 2010. Edge weight regularization over multiple graphs for similarity learning. In Data Mining (ICDM), 2010 IEEE 10th International Conference on, pages 374–383. Diarmuid ´O S´eaghdha and Ann Copestake. 2008. Semantic classification with distributional kernels. In Proceedings of the 22nd International Conference on Computational Linguistics-Volume 1, pages 649– 656. Delip Rao, David Yarowsky, and Chris Callison-Burch. 2008. Affinity measures based on the graph Laplacian. In Proceedings of the 3rd Textgraphs Workshop on Graph-Based Algorithms for Natural Language Processing, TextGraphs-3, pages 41–48. Reinhard Rapp, Serge Sharoff, and Bogdan Babych. 2012. Identifying word translations from comparable documents without a seed lexicon. In LREC, pages 460–466. Mehran Sahami and Timothy D. Heilman. 2006. A web-based kernel function for measuring the similarity of short text snippets. In Proceedings of the 15th international conference on World Wide Web, WWW ’06, pages 377–386. Christian Scheible, Florian Laws, Lukas Michelbacher, and Hinrich Sch¨utze. 2010. Sentiment translation through multi-edge graphs. In Proceedings of the 23rd International Conference on Computational Linguistics: Posters, COLING ’10, pages 1104–1112. Hinrich Sch¨utze and Michael Walsh. 2008. A graphtheoretic model of lexical syntactic acquisition. In EMNLP, pages 917–926. Wensi Xi, Edward A. Fox, Weiguo Fan, Benyu Zhang, Zheng Chen, Jun Yan, and Dong Zhuang. 2005. Simfusion: measuring similarity using unified relationship matrix. In Proceedings of the 28th annual international ACM SIGIR conference on Research and development in information retrieval, SIGIR ’05, pages 130–137. 1402
2014
131
Proceedings of the 52nd Annual Meeting of the Association for Computational Linguistics, pages 1403–1414, Baltimore, Maryland, USA, June 23-25 2014. c⃝2014 Association for Computational Linguistics Is this a wampimuk? Cross-modal mapping between distributional semantics and the visual world Angeliki Lazaridou and Elia Bruni and Marco Baroni Center for Mind/Brain Sciences University of Trento {angeliki.lazaridou|elia.bruni|marco.baroni}@unitn.it Abstract Following up on recent work on establishing a mapping between vector-based semantic embeddings of words and the visual representations of the corresponding objects from natural images, we first present a simple approach to cross-modal vector-based semantics for the task of zero-shot learning, in which an image of a previously unseen object is mapped to a linguistic representation denoting its word. We then introduce fast mapping, a challenging and more cognitively plausible variant of the zero-shot task, in which the learner is exposed to new objects and the corresponding words in very limited linguistic contexts. By combining prior linguistic and visual knowledge acquired about words and their objects, as well as exploiting the limited new evidence available, the learner must learn to associate new objects with words. Our results on this task pave the way to realistic simulations of how children or robots could use existing knowledge to bootstrap grounded semantic knowledge about new concepts. 1 Introduction Computational models of meaning that rely on corpus-extracted context vectors, such as LSA (Landauer and Dumais, 1997), HAL (Lund and Burgess, 1996), Topic Models (Griffiths et al., 2007) and more recent neural-network approaches (Collobert and Weston, 2008; Mikolov et al., 2013b) have successfully tackled a number of lexical semantics tasks, where context vector similarity highly correlates with various indices of semantic relatedness (Turney and Pantel, 2010). Given that these models are learned from naturally occurring data using simple associative techniques, various authors have advanced the claim that they might be also capturing some crucial aspects of how humans acquire and use language (Landauer and Dumais, 1997; Lenci, 2008). However, the models induce the meaning of words entirely from their co-occurrence with other words, without links to the external world. This constitutes a serious blow to claims of cognitive plausibility in at least two respects. One is the grounding problem (Harnad, 1990; Searle, 1984). Irrespective of their relatively high performance on various semantic tasks, it is debatable whether models that have no access to visual and perceptual information can capture the holistic, grounded knowledge that humans have about concepts. However, a possibly even more serious pitfall of vector models is lack of reference: natural language is, fundamentally, a means to communicate, and thus our words must be able to refer to objects, properties and events in the outside world (Abbott, 2010). Current vector models are purely language-internal, solipsistic models of meaning. Consider the very simple scenario in which visual information is being provided to an agent about the current state of the world, and the agent’s task is to determine the truth of a statement similar to There is a dog in the room. Although the agent is equipped with a powerful context vector model, this will not suffice to successfully complete the task. 
The model might suggest that the concepts of dog and cat are semantically related, but it has no means to determine the visual appearance of dogs, and consequently no way to verify the truth of such a simple statement. Mapping words to the objects they denote is such a core function of language that humans are highly optimized for it, as shown by the so-called fast mapping phenomenon, whereby children can learn to associate a word to an object or property by a single exposure to it (Bloom, 2000; Carey, 1978; Carey and Bartlett, 1978; Heibeck and Markman, 1987). But lack of reference is not 1403 only a theoretical weakness: Without the ability to refer to the outside world, context vectors are arguably useless for practical goals such as learning to execute natural language instructions (Branavan et al., 2009; Chen and Mooney, 2011), that could greatly benefit from the rich network of lexical meaning such vectors encode, in order to scale up to real-life challenges. Very recently, a number of papers have exploited advances in automated feature extraction form images and videos to enrich context vectors with visual information (Bruni et al., 2014; Feng and Lapata, 2010; Leong and Mihalcea, 2011; Regneri et al., 2013; Silberer et al., 2013). This line of research tackles the grounding problem: Word representations are no longer limited to their linguistic contexts but also encode visual information present in images associated with the corresponding objects. In this paper, we rely on the same image analysis techniques but instead focus on the reference problem: We do not aim at enriching word representations with visual information, although this might be a side effect of our approach, but we address the issue of automatically mapping objects, as depicted in images, to the context vectors representing the corresponding words. This is achieved by means of a simple neural network trained to project image-extracted feature vectors to text-based vectors through a hidden layer that can be interpreted as a cross-modal semantic space. We first test the effectiveness of our crossmodal semantic space on the so-called zero-shot learning task (Palatucci et al., 2009), which has recently been explored in the machine learning community (Frome et al., 2013; Socher et al., 2013). In this setting, we assume that our system possesses linguistic and visual information for a set of concepts in the form of text-based representations of words and image-based vectors of the corresponding objects, used for vision-to-language-mapping training. The system is then provided with visual information for a previously unseen object, and the task is to associate it with a word by cross-modal mapping. Our approach is competitive with respect to the recently proposed alternatives, while being overall simpler. The aforementioned task is very demanding and interesting from an engineering point of view. However, from a cognitive angle, it relies on strong, unrealistic assumptions: The learner is asked to establish a link between a new object and a word for which they possess a full-fledged textbased vector extracted from a billion-word corpus. On the contrary, the first time a learner is exposed to a new object, the linguistic information available is likely also very limited. Thus, in order to consider vision-to-language mapping under more plausible conditions, similar to the ones that children or robots in a new environment are faced with, we next simulate a scenario akin to fast mapping. 
We show that the induced cross-modal semantic space is powerful enough that sensible guesses about the correct word denoting an object can be made, even when the linguistic context vector representing the word has been created from as little as 1 sentence containing it. The contributions of this work are three-fold. First, we conduct experiments with simple imageand text-based vector representations and compare alternative methods to perform cross-modal mapping. Then, we complement recent work (Frome et al., 2013) and show that zero-shot learning scales to a large and noisy dataset. Finally, we provide preliminary evidence that cross-modal projections can be used effectively to simulate a fast mapping scenario, thus strengthening the claims of this approach as a full-fledged, fully inductive theory of meaning acquisition. 2 Related Work The problem of establishing word reference has been extensively explored in computational simulations of cross-situational learning (see Fazly et al. (2010) for a recent proposal and extended review of previous work). This line of research has traditionally assumed artificial models of the external world, typically a set of linguistic or logical labels for objects, actions and possibly other aspects of a scene (Siskind, 1996). Recently, Yu and Siskind (2013) presented a system that induces word-object mappings from features extracted from short videos paired with sentences. Our work complements theirs in two ways. First, unlike Yu and Siskind (2013) who considered a limited lexicon of 15 items with only 4 nouns, we conduct experiments in a large search space containing a highly ambiguous set of potential target words for every object (see Section 4.1). Most importantly, by projecting visual representations of objects into a shared semantic space, we do not limit ourselves to establishing a link between ob1404 jects and words. We induce a rich semantic representation of the multimodal concept, that can lead, among other things, to the discovery of important properties of an object even when we lack its linguistic label. Nevertheless, Yu and Siskind’s system could in principle be used to initialize the vision-language mapping that we rely upon. Closer to the spirit of our work are two very recent studies coming from the machine learning community. Socher et al. (2013) and Frome et al. (2013) focus on zero-shot learning in the visionlanguage domain by exploiting a shared visuallinguistic semantic space. Socher et al. (2013) learn to project unsupervised vector-based image representations onto a word-based semantic space using a neural network architecture. Unlike us, Socher and colleagues train an outlier detector to decide whether a test image should receive a known-word label by means of a standard supervised object classifier, or be assigned an unseen label by vision-to-language mapping. In our zeroshot experiments, we assume no access to an outlier detector, and thus, the search for the correct label is performed in the full concept space. Furthermore, Socher and colleagues present a much more constrained evaluation setup, where only 10 concepts are considered, compared to our experiments with hundreds or thousands of concepts. Frome et al. (2013) use linear regression to transform vector-based image representations onto vectors representing the same concepts in linguistic semantic space. Unlike Socher et al. (2013) and the current study that adopt simple unsupervised techniques for constructing image representations, Frome et al. 
(2013) rely on a supervised state-ofthe-art method: They feed low-level features to a deep neural network trained on a supervised object recognition task (Krizhevsky et al., 2012). Furthermore, their text-based vectors encode very rich information, such as ⃗ king − ⃗ man + ⃗ woman = ⃗ queen (Mikolov et al., 2013c). A natural question we aim to answer is whether the success of cross-modal mapping is due to the high-quality embeddings or to the general algorithmic design. If the latter is the case, then these results could be extended to traditional distributional vectors bearing other desirable properties, such as high interpretability of dimensions. (a) (b) Figure 1: A potential wampimuk (a) together with its projection onto the linguistic space (b). 3 Zero-shot learning and fast mapping “We found a cute, hairy wampimuk sleeping behind the tree.” Even though the previous statement is certainly the first time one hears about wampimuks, the linguistic context already creates some visual expectations: Wampimuks probably resemble small animals (Figure 1a). This is the scenario of zero-shot learning. Moreover, if this is also the first linguistic encounter of that concept, then we refer to the task as fast mapping. Concretely, we assume that concepts, denoted for convenience by word labels, are represented in linguistic terms by vectors in a text-based distributional semantic space (see Section 4.3). Objects corresponding to concepts are represented in visual terms by vectors in an image-based semantic space (Section 4.2). For a subset of concepts (e.g., a set of animals, a set of vehicles), we possess information related to both their linguistic and visual representations. During training, this cross-modal vocabulary is used to induce a projection function (Section 4.4), which – intuitively – represents a mapping between visual and linguistic dimensions. Thus, this function, given a visual vector, returns its corresponding linguistic representation. At test time, the system is presented with a previously unseen object (e.g., wampimuk). This object is projected onto the linguistic space and associated with the word label of the nearest neighbor in that space (degus in Figure 1b). The fast mapping setting can be seen as a special case of the zero-shot task. Whereas for the latter our system assumes that all concepts have rich linguistic representations (i.e., representations estimated from a large corpus), in the case of the former, new concepts are assumed to be encounted in a limited linguistic context and therefore lacking rich linguistic representations. This is operationalized by constructing the text-based vector for these 1405 Figure 2: Images of chair as extracted from CIFAR-100 (left) and ESP (right). concepts from a context of just a few occurrences. In this way, we simulate the first encounter of a learner with a concept that is new in both visual and linguistic terms. 4 Experimental Setup 4.1 Visual Datasets CIFAR-100 The CIFAR-100 dataset (Krizhevsky, 2009) consists of 60,000 32x32 colour images (note the extremely small size) representing 100 distinct concepts, with 600 images per concept. The dataset covers a wide range of concrete domains and is organized into 20 broader categories. Table 1 lists the concepts used in our experiments organized by category. ESP Our second dataset consists of 100K images from the ESP-Game data set, labeled through a “game with a purpose” (Von Ahn, 2006).1 The ESP image tags form a vocabulary of 20,515 unique words. 
Unlike other datasets used for zeroshot learning, it covers adjectives and verbs in addition to nouns. On average, an image has 14 tags and a word appears as a tag for 70 images. Unlike the CIFAR-100 images, which were chosen specifically for image object recognition tasks (i.e., each image is clearly depicting a single object in the foreground), ESP contains a random selection of images from the Web. Consequently, objects do not appear in most images in their prototypical display, but rather as elements of complex scenes (see Figure 2). Thus, ESP constitutes a more realistic, and at the same time more challenging, simulation of how things are encountered in real life, testing the potentials of cross-modal mapping in dealing with the complex scenes that one would encounter in event recognition and caption generation tasks. 1http://www.cs.cmu.edu/˜biglou/ resources/ 4.2 Visual Semantic Spaces Image-based vectors are extracted using the unsupervised bag-of-visual-words (BoVW) representational architecture (Sivic and Zisserman, 2003; Csurka et al., 2004), that has been widely and successfully applied to computer vision tasks such as object recognition and image retrieval (Yang et al., 2007). First, low-level visual features (Szeliski, 2010) are extracted from a large collection of images and clustered into a set of “visual words”. The low-level features of a specific image are then mapped to the corresponding visual words, and the image is represented by a count vector recording the number of occurrences of each visual word in it. We do not attempt any parameter tuning of the pipeline. As low-level features, we use Scale Invariant Feature Transform (SIFT) features (Lowe, 2004). SIFT features are tailored to capture object parts and to be invariant to several image transformations such as rotation, illumination and scale change. These features are clustered into vocabularies of 5,000 (ESP) and 4,096 (CIFAR-100) visual words.2 To preserve spatial information in the BoVW representation, we use the spatial pyramid technique (Lazebnik et al., 2006), which consists in dividing the image into several regions, computing BoVW vectors for each region and concatenating them. In particular, we divide ESP images into 16 regions and the smaller CIFAR-100 images into 4. The vectors resulting from region concatenation have dimensionality 5000 × 16 = 80, 000 (ESP) and 4, 096 × 4 = 16, 384 (CIFAR-100), respectively. We apply Local Mutual Information (LMI, (Evert, 2005)) as weighting scheme and reduce the full co-occurrence space to 300 dimensions using the Singular Value Decomposition. For CIFAR-100, we extract distinct visual vectors for single images. For ESP, given the size and amount of noise in this dataset, we build vectors for visual concepts, by normalizing and summing the BoVW vectors of all the images that have the relevant concept as a tag. Note that relevant literature (Pereira et al., 2010) has emphasized the importance of learners self-generating multiple views when faced with new objects. Thus, our multiple-image assumption should not be considered as problematic in the current setup. 2For selecting the size of the vocabulary size, we relied on standard settings found in the relevant literature (Bruni et al., 2014; Chatfield et al., 2011). 
1406 Category Seen Concepts Unseen (Test) Concepts aquatic mammals beaver, otter, seal, whale dolphin fish ray, trout shark flowers orchid, poppy, sunflower, tulip rose food containers bottle, bowl, can ,plate cup fruit vegetable apple, mushroom, pear orange household electrical devices keyboard, lamp, telephone, television clock household furniture chair, couch, table, wardrobe bed insects bee, beetle, caterpillar, cockroach butterfly large carnivores bear, leopard, lion, wolf tiger large man-made outdoor things bridge, castle, house, road skyscraper large natural outdoor scenes cloud, mountain, plain, sea forest large omnivores and herbivores camel, cattle, chimpanzee, kangaroo elephant medium-sized mammals fox, porcupine, possum, skunk raccoon non-insect invertebrates crab, snail, spider, worm lobster people baby, girl, man, woman boy reptiles crocodile, dinosaur, snake, turtle lizard small mammals hamster, mouse, rabbit, shrew squirrel vehicles 1 bicycle, motorcycle, train bus vehicles 2 rocket, tank, tractor streetcar Table 1: Concepts in our version of the CIFAR-100 data set We implement the entire visual pipeline with VSEM, an open library for visual semantics (Bruni et al., 2013).3 4.3 Linguistic Semantic Spaces For constructing the text-based vectors, we follow a standard pipeline in distributional semantics (Turney and Pantel, 2010) without tuning its parameters and collect co-occurrence statistics from the concatenation of ukWaC4 and the Wikipedia, amounting to 2.7 billion tokens in total. Semantic vectors are constructed for a set of 30K target words (lemmas), namely the top 20K most frequent nouns, 5K most frequent adjectives and 5K most frequent verbs, and the same 30K lemmas are also employed as contextual elements. We collect co-occurrences in a symmetric context window of 20 elements around a target word. Finally, similarly to the visual semantic space, raw counts are transformed by applying LMI and then reduced to 300 dimensions with SVD.5 4.4 Cross-modal Mapping The process of learning to map objects to the their word label is implemented by training a projection function fprojv→w from the visual onto the linguistic semantic space. For the learning, we use a set of Ns seen concepts for which we have both image-based visual representations Vs ∈RNs×dv 3http://clic.cimec.unitn.it/vsem/ 4http://wacky.sslmit.unibo.it 5We also experimented with the image- and text-based vectors of Socher et al. (2013), but achieved better performance with the reported setup. and text-based linguistic representations Ws ∈ RNs×dw. The projection function is subject to an objective that aims at minimizing some cost function between the induced text-based representations ˆ Ws ∈RNs×dw and the gold ones Ws. The induced fprojv→w is then applied to the imagebased representations Vu ∈RNu×dv of Nu unseen objects to transform them into text-based representations ˆ Wu ∈RNu×dw. We implement 4 alternative learning algorithms for inducing the cross-modal projection function fprojv→w. Linear Regression (lin) Our first model is a very simple linear mapping between the two modalities estimated by solving a least-squares problem. This method is similar to the one introduced by Mikolov et al. (2013a) for estimating a translation matrix, only solved analytically. In our setup, we can see the two different modalities as if they were different languages. 
By using least-squares regression, the projection function fprojv→w can be derived as fprojv→w = (VT s Vs) −1VT s Ws (1) Canonical Correlation Analysis (CCA) CCA (Hardoon et al., 2004; Hotelling, 1936) and variations thereof have been successfully used in the past for annotation of regions (Socher and Fei-Fei, 2010) and complete images (Hardoon et al., 2006; Hodosh et al., 2013). Given two paired observation matrices, in our case Vs and Ws, CCA aims at capturing the linear relationship that exists between these variables. This is achieved by finding a pair of matrices, in our 1407 case CV ∈Rdv×d and CW ∈Rdw×d, such that the correlation between the projections of the two multidimensional variables into a common, lower-rank space is maximized. The resulting multimodal space has been shown to provide a good approximation to human concept similarity judgments (Silberer and Lapata, 2012). In our setup, after applying CCA on the two spaces Vs and Ws, we obtain the two projection mappings onto the common space and thus our projection function can be derived as: fprojv→w = CV CW −1 (2) Singular Value Decomposition (SVD) SVD is the most widely used dimensionality reduction technique in distributional semantics (Turney and Pantel, 2010), and it has recently been exploited to combine visual and linguistic dimensions in the multimodal distributional semantic model of Bruni et al. (2014). SVD smoothing is also a way to infer values of unseen dimensions in partially incomplete matrices, a technique that has been applied to the task of inferring word tags of unannotated images (Hare et al., 2008). Assuming that the concept-representing rows of Vs and Ws are ordered in the same way, we apply the (k-truncated) SVD to the concatenated matrix [VsWs], such that [ ˆVs ˆ Ws] = UkΣkZT k is a k-rank approximation of the original matrix.6 The projection function is then: fprojv→w = ZkZT k (3) where the input is appropriately padded with 0s ([Vu0Nu×W ]) and we discard the visual block of the output matrix [ ˆVu ˆ Wu]. Neural Network (NNet) The last model that we introduce is a neural network with one hidden layer. The projection function in this model can be described as: fprojv→w = Θv→w (4) where Θv→w consists of the model weights θ(1) ∈ Rdv×h and θ(2) ∈ Rh×dw that map the input image-based vectors Vs first to the hidden layer and then to the output layer in order to obtain text-based vectors, i.e., ˆ Ws = σ(2)(σ(1)(Vsθ(1))θ(2)), where σ(1) and σ(2) are 6We denote the right singular vectors matrix by Z instead of the customary V to avoid confusion with the visual matrix. the non-linear activation functions. We experimented with sigmoid, hyperbolic tangent and linear; hyperbolic tangent yielded the highest performance. The weights are estimated by minimizing the objective function J(Θv→w) = 1 2(1 −sim(Ws, ˆ Ws)) (5) where sim is some similarity function. In our experiments we used cosine as similarity function, so that sim(A, B) = AB ∥A∥∥B∥, thus penalizing parameter settings leading to a low cosine between the target linguistic representations Ws and those produced by the projection function ˆ Ws. The cosine has been widely used in the distributional semantic literature, and it has been shown to outperform Euclidean distance (Bullinaria and Levy, 2007).7 Parameters were estimated with standard backpropagation and L-BFGS. 5 Results Our experiments focus on the tasks of zero-shot learning (Sections 5.1 and 5.2) and fast mapping (Section 5.3). 
In both tasks, the projected vector of the unseen concept is labeled with the word associated to its cosine-based nearest neighbor vector in the corresponding semantic space. For the zero-shot task we report the accuracy of retrieving the correct label among the top k neighbors from a semantic space populated with the union of seen and unseen concepts. For fast mapping, we report the mean rank of the correct concept among fast mapping candidates. 5.1 Zero-shot Learning in CIFAR-100 For this experiment, we use the intersection of our linguistic space with the concepts present in CIFAR-100, containing a total of 90 concepts. For each concept category, we treat all concepts but one as seen concepts (Table 1). The 71 seen concepts correspond to 42,600 distinct visual vectors and are used to induce the projection function. Table 2 reports results obtained by averaging the performance on the 11,400 distinct vectors of the 19 unseen concepts. Our 4 models introduced in Section 4.4 are compared to a theoretically derived baseline Chance simulating selecting a label at random. For the neural network NN, we use prior knowledge 7We also experimented with the same objective function as Socher et al. (2013), however, our objective function yielded consistently better results in all experimental settings. 1408 PPPPPP Model k 1 2 3 5 10 20 Chance 1.1 2.2 3.3 5.5 11.0 22.0 SVD 1.9 5.0 8.1 14.5 29.0 48.6 CCA 3.0 6.9 10.7 17.9 31.7 51.7 lin 2.4 6.4 10.5 18.7 33.0 55.0 NN 3.9 6.6 10.6 21.9 37.9 58.2 Table 2: Percentage accuracy among top k nearest neighbors on CIFAR-100. about the number of concept categories to set the number of hidden units to 20 in order to avoid tuning of this parameter. For the SVD model, we set the number of dimensions to 300, a common choice in distributional semantics, coherent with the settings we used for the visual and linguistic spaces. First and foremost, all 4 models outperform Chance by a large margin. Surprisingly, the very simple lin method outperforms both CCA and SVD. However, NN, an architecture that can capture more complex, non-linear relations in features across modalities, emerges as the best performing model, confirming on a larger scale the recent findings of Socher et al. (2013). 5.1.1 Concept Categorization In order to gain qualitative insights into the performance of the projection process of NN, we attempt to investigate the role and interpretability of the hidden layer. We achieve this by looking at which visual concepts result in the highest hidden unit activation.8 This is inspired by analogous qualitative analysis conducted in Topic Models (Griffiths et al., 2007), where “topics” are interpreted in terms of the words with the highest probability under each of them. Table 3 presents both seen and unseen concepts corresponding to visual vectors that trigger the highest activation for a subset of hidden units. The table further reports, for each hidden unit, the “correct” unseen concept for the category of the top seen concepts, together with its rank in terms of activation of the unit. The analysis demonstrates that, although prior knowledge about categories was not explicitly used to train the network, the latter induced an organization of concepts into superordinate categories in which the 8For this post-hoc analysis, we include a sparsity parameter in the objective function of Equation 5 in order to get more interpretable results; hidden units are therefore maximally activated by a only few concepts. 
Unseen Concept Nearest Neighbors tiger cat, microchip, kitten, vet, pet bike spoke, wheel, brake, tyre, motorcycle blossom bud, leaf, jasmine, petal, dandelion bakery quiche, bread, pie, bagel, curry Table 4: Top 5 neighbors in linguistic space after visual vector projection of 4 unseen concepts. hidden layer acts as a cross-modal concept categorization/organization system. When the induced projection function maps an object onto the linguistic space, the derived text vector will inherit a mixture of textual features from the concepts that activated the same hidden unit as the object. This suggests a bias towards seen concepts. Furthermore, in many cases of miscategorization, the concepts are still semantically coherent with the induced category, confirming that the projection function is indeed capturing a latent, cross-modal semantic space. A squirrel, although not a “large omnivore”, is still an animal, while butterflies are not flowers but often feed on their nectar. 5.2 Zero-shot Learning in ESP For this experiment, we focus on NN, the best performing model in the previous experiment. We use a set of approximately 9,500 concepts, the intersection of the ESP-based visual semantic space with the linguistic space. For tuning the number of hidden units of NN, we use the MEN-concrete dataset of Bruni et al. (2014). Finally, we randomly pick 70% of the concepts to induce the projection function fprojv→w and report results on the remaining 30%. Note that the search space for the correct label in this experiment is approximately 95 times larger than the one used for the experiment presented in Section 5.1. Although our experimental setup differs from the one of Frome et al. (2013), thus preventing a direct comparison, the results reported in Table 5 are on a comparable scale to theirs. We note that previous work on zero-shot learning has used standard object recognition benchmarks. To the best of our knowledge, this is the first time this task has been performed on a dataset as noisy as ESP. Overall, the results suggest that cross-modal mapping could be applied in tasks where images exhibit a more complex structure, e.g., caption generation and event recognition. 1409 Seen Concepts Unseen Concept Rank of Correct CIFAR-100 Category Unseen Concept Unit 1 sunflower, tulip, pear butterfly 2 (rose) flowers Unit 2 cattle, camel, bear squirrel 2 (elephant) large omnivores and herbivores Unit 3 castle, bridge, house bus 4 (skyscraper) large man-made outdoor things Unit 4 man, girl, baby boy 1 people Unit 5 motorcycle, bicycle, tractor streetcar 2 (bus) vehicles 1 Unit 6 sea, plain, cloud forest 1 large natural outdoor scenes Unit 7 chair, couch, table bed 1 household furniture Unit 8 plate, bowl, can clock 3 (cup) food containers Unit 9 apple, pear, mushroom orange 1 fruit and vegetables Table 3: Categorization induced by the hidden layer of the NN; concepts belonging in the same CIFAR100 categories, reported in the last column, are marked in bold. Example: Unit 1 receives the highest activation during training by the category flowers and at test time by butterfly, belonging to insects. The same unit receives the second highest activation by the “correct” test concept, the flower rose. PPPPPP Model k 1 2 5 10 50 Chance 0.01 0.02 0.05 0.10 0.5 NN 0.8 1.9 5.6 9.7 30.9 Table 5: Percentage accuracy among top k nearest neighbors on ESP. 
5.3 Fast Mapping in ESP In this section, we aim at simulating a fast mapping scenario in which the learner has been just exposed to a new concept, and thus has limited linguistic evidence for that concept. We operationalize this by considering the 34 concrete concepts introduced by Frassinelli and Keller (2012), and deriving their text-based representations from just a few sentences randomly picked from the corpus. Concretely, we implement 5 models: context 1, context 5, context 10, context 20 and context full, where the name of the model denotes the number of sentences used to construct the text-based representations. The derived vectors were reduced with the same SVD projection induced from the complete corpus. Cross-modal mapping is done via NN. The zero-shot framework leads us to frame fast mapping as the task of projecting visual representations of new objects onto language space for retrieving their word labels (v →w). This mapping from visual to textual representations is arguably a more plausible task than vice versa. If we think about how linguistic reference is acquired, a scenario in which a learner first encounters a new object and then seeks its reference in the language of the surrounding environment (e.g., adults having a conversation, the text of a book with an illustration of an unknown object) is very natural. Furthermore, since not all new concepts in the linguistic environment refer to new objects (they might denote abstract concepts or out-of-scene objects), it seems more reasonable for the learner to be more alerted to linguistic cues about a recently-spotted new object than vice versa. Moreover, once the learner observes a new object, she can easily construct a full visual representation for it (and the acquisition literature has shown that humans are wired for good object segmentation and recognition (Spelke, 1994)) – the more challenging task is to scan the ongoing and very ambiguous linguistic communication for contexts that might be relevant and informative about the new object. However, fast mapping is often described in the psychological literature as the opposite task: The learner is exposed to a new word in context and has to search for the right object referring to it. We implement this second setup (w →v) by training the projection function fprojw→v which maps linguistic vectors to visual ones. The adaptation of NN is straightforward; the new objective function is derived as J(Θw→v) = 1 2(1 −sim(Vs, ˆVs)) (6) where ˆVs = σ(2)(σ(1)(Wsθ(1))θ(2)), θ(1) ∈ Rdw×h and θ(2) ∈Rh×dv. Table 7 presents the results. Not surprisingly, performance increases with the number of sentences that are used to construct the textual representations. Furthermore, all models perform better than Chance, including those that are based on just 1 or 5 sentences. This suggests that the system can make reasonable inferences about object-word connections even when linguistic evidence is very scarce. Regarding the sources of error, a qualitative analysis of predicted word labels and objects as 1410 v→w w→v cooker→potato dishwasher→corkscrew clarinet→drum potato→corn gorilla→elephant guitar→violin scooter→car scarf→trouser Table 6: Top-ranked concepts in cases where the gold concepts received numerically high ranks. 
XXXXXXXX Context Mapping v →w w →v Chance 17 17 context 1 12.6 14.5 context 5 8.08 13.29 context 10 7.29 13.44 context 20 6.02 12.17 context full 5.52 5.88 Table 7: Mean rank results averaged across 34 concepts when mapping an image-based vector and retrieving its linguistic neighbors (v →w) as well as when mapping a text-based vector and retrieving its visual neighbors (w →v). Lower numbers cue better performance. presented in Table 6 suggests that both textual and visual representations, although capturing relevant “topical” or “domain” information, are not enough to single out the properties of the target concept. As an example, the textual vector of dishwasher contains kitchen-related dimensions such as ⟨fridge, oven, gas, hob, ..., sink⟩. After projecting onto the visual space, its nearest visual neighbours are the visual ones of the same-domain concepts corkscrew and kettle. The latter is shown in Figure 3a, with a gas hob well in evidence. As a further example, the visual vector for cooker is extracted from pictures such as the one in Figure 3b. Not surprisingly, when projecting it onto the linguistic space, the nearest neighbours are other kitchenrelated terms, i.e., potato and dishwasher. 6 Conclusion At the outset of this work, we considered the problem of linking purely language-based distri(a) A kettle (b) A cooker Figure 3: Two images from ESP. butional semantic spaces with objects in the visual world by means of cross-modal mapping. We compared recent models for this task both on a benchmark object recognition dataset and on a more realistic and noisier dataset covering a wide range of concepts. The neural network architecture emerged as the best performing approach, and our qualitative analysis revealed that it induced a categorical organization of concepts. Most importantly, our results suggest the viability of crossmodal mapping for grounded word-meaning acquisition in a simulation of fast mapping. Given the success of NN, we plan to experiment in the future with more sophisticated neural network architectures inspired by recent work in machine translation (Gao et al., 2013) and multimodal deep learning (Srivastava and Salakhutdinov, 2012). Furthermore, we intend to adopt visual attributes (Farhadi et al., 2009; Silberer et al., 2013) as visual representations, since they should allow a better understanding of how crossmodal mapping works, thanks to their linguistic interpretability. The error analysis in Section 5.3 suggests that automated localization techniques (van de Sande et al., 2011), distinguishing an object from its surroundings, might drastically improve mapping accuracy. Similarly, in the textual domain, models that extract collocates of a word that are more likely to denote conceptual properties (Kelly et al., 2012) might lead to more informative and discriminative linguistic vectors. Finally, the lack of large child-directed speech corpora constrained the experimental design of fast mapping simulations; we plan to run more realistic experiments with true nonce words and using source corpora (e.g., the Simple Wikipedia, child stories, portions of CHILDES) that contain sentences more akin to those a child might effectively hear or read in her word-learning years. Acknowledgments We thank Adam Liˇska for helpful discussions and the 3 anonymous reviewers for useful comments. This work was supported by ERC 2011 Starting Independent Research Grant n. 283554 (COMPOSES). References Barbara Abbott. 2010. Reference. Oxford University Press, Oxford, UK. 1411 Paul Bloom. 
Proceedings of the 52nd Annual Meeting of the Association for Computational Linguistics, pages 1415–1425, Baltimore, Maryland, USA, June 23-25 2014. c⃝2014 Association for Computational Linguistics Semantic Parsing via Paraphrasing Jonathan Berant Stanford University [email protected] Percy Liang Stanford University [email protected] Abstract A central challenge in semantic parsing is handling the myriad ways in which knowledge base predicates can be expressed. Traditionally, semantic parsers are trained primarily from text paired with knowledge base information. Our goal is to exploit the much larger amounts of raw text not tied to any knowledge base. In this paper, we turn semantic parsing on its head. Given an input utterance, we first use a simple method to deterministically generate a set of candidate logical forms with a canonical realization in natural language for each. Then, we use a paraphrase model to choose the realization that best paraphrases the input, and output the corresponding logical form. We present two simple paraphrase models, an association model and a vector space model, and train them jointly from question-answer pairs. Our system PARASEMPRE improves stateof-the-art accuracies on two recently released question-answering datasets. 1 Introduction We consider the semantic parsing problem of mapping natural language utterances into logical forms to be executed on a knowledge base (KB) (Zelle and Mooney, 1996; Zettlemoyer and Collins, 2005; Wong and Mooney, 2007; Kwiatkowski et al., 2010). Scaling semantic parsers to large knowledge bases has attracted substantial attention recently (Cai and Yates, 2013; Berant et al., 2013; Kwiatkowski et al., 2013), since it drives applications such as question answering (QA) and information extraction (IE). Semantic parsers need to somehow associate natural language phrases with logical predicates, e.g., they must learn that the constructions “What What party did Clay establish? paraphrase model What political party founded by Henry Clay? ... What event involved the people Henry Clay? Type.PoliticalParty ⊓Founder.HenryClay ... Type.Event ⊓Involved.HenryClay Whig Party Figure 1: Semantic parsing via paraphrasing: For each candidate logical form (in red), we generate canonical utterances (in purple). The model is trained to paraphrase the input utterance (in green) into the canonical utterances associated with the correct denotation (in blue). does X do for a living?”, “What is X’s profession?”, and “Who is X?”, should all map to the logical predicate Profession. To learn these mappings, traditional semantic parsers use data which pairs natural language with the KB. However, this leaves untapped a vast amount of text not related to the KB. For instance, the utterances “Where is ACL in 2014?” and “What is the location of ACL 2014?” cannot be used in traditional semantic parsing methods, since the KB does not contain an entity ACL2014, but this pair clearly contains valuable linguistic information. As another reference point, out of 500,000 relations extracted by the ReVerb Open IE system (Fader et al., 2011), only about 10,000 can be aligned to Freebase (Berant et al., 2013). In this paper, we present a novel approach for semantic parsing based on paraphrasing that can exploit large amounts of text not covered by the KB (Figure 1). Our approach targets factoid questions with a modest amount of compositionality. 
Given an input utterance, we first use a simple deterministic procedure to construct a manageable set of candidate logical forms (ideally, we would generate canonical utterances for all possible logical forms, but this is intractable). Next, we heuris1415 utterance underspecified logical form canonical utterance logical form ontology matching paraphrase direct (traditional) (Kwiatkowski et al. 2013) (this work) Figure 2: The main challenge in semantic parsing is coping with the mismatch between language and the KB. (a) Traditionally, semantic parsing maps utterances directly to logical forms. (b) Kwiatkowski et al. (2013) map the utterance to an underspecified logical form, and perform ontology matching to handle the mismatch. (c) We approach the problem in the other direction, generating canonical utterances for logical forms, and use paraphrase models to handle the mismatch. tically generate canonical utterances for each logical form based on the text descriptions of predicates from the KB. Finally, we choose the canonical utterance that best paraphrases the input utterance, and thereby the logical form that generated it. We use two complementary paraphrase models: an association model based on aligned phrase pairs extracted from a monolingual parallel corpus, and a vector space model, which represents each utterance as a vector and learns a similarity score between them. The entire system is trained jointly from question-answer pairs only. Our work relates to recent lines of research in semantic parsing and question answering. Kwiatkowski et al. (2013) first maps utterances to a domain-independent intermediate logical form, and then performs ontology matching to produce the final logical form. In some sense, we approach the problem from the opposite end, using an intermediate utterance, which allows us to employ paraphrasing methods (Figure 2). Fader et al. (2013) presented a QA system that maps questions onto simple queries against Open IE extractions, by learning paraphrases from a large monolingual parallel corpus, and performing a single paraphrasing step. We adopt the idea of using paraphrasing for QA, but suggest a more general paraphrase model and work against a formal KB (Freebase). We apply our semantic parser on two datasets: WEBQUESTIONS (Berant et al., 2013), which contains 5,810 question-answer pairs with common questions asked by web users; and FREE917 (Cai and Yates, 2013), which has 917 questions manually authored by annotators. On WEBQUESTIONS, we obtain a relative improvement of 12% in accuracy over the state-of-the-art, and on FREE917 we match the current best performing system. The source code of our system PARASEMPRE is released at http://www-nlp.stanford.edu/ software/sempre/. 2 Setup Our task is as follows: Given (i) a knowledge base K, and (ii) a training set of question-answer pairs {(xi, yi)}n i=1, output a semantic parser that maps new questions x to answers y via latent logical forms z. Let E denote a set of entities (e.g., BillGates), and let P denote a set of properties (e.g., PlaceOfBirth). A knowledge base K is a set of assertions (e1, p, e2) ∈E × P × E (e.g., (BillGates, PlaceOfBirth, Seattle)). We use the Freebase KB (Google, 2013), which has 41M entities, 19K properties, and 596M assertions. To query the KB, we use a logical language called simple λ-DCS. In simple λ-DCS, an entity (e.g., Seattle) is a unary predicate (i.e., a subset of E) denoting a singleton set containing that entity. 
A property (which is a binary predicate) can be joined with a unary predicate; e.g., Founded.Microsoft denotes the entities that are Microsoft founders. In PlaceOfBirth.Seattle ⊓Founded.Microsoft, an intersection operator allows us to denote the set of Seattle-born Microsoft founders. A reverse operator reverses the order of arguments: R[PlaceOfBirth].BillGates denotes Bill Gates’s birthplace (in contrast to PlaceOfBirth.Seattle). Lastly, count(Founded.Microsoft) denotes set cardinality, in this case, the number of Microsoft founders. The denotation of a logical form z with respect to a KB K is given by JzKK. For a formal description of simple λ-DCS, see Liang (2013) and Berant et al. (2013). 3 Model overview We now present the general framework for semantic parsing via paraphrasing, including the model and the learning algorithm. In Sections 4 and 5, we provide the details of our implementation. Canonical utterance construction Given an utterance x and the KB, we construct a set of candi1416 date logical forms Zx, and then for each z ∈Zx generate a small set of canonical natural language utterances Cz. Our goal at this point is only to generate a manageable set of logical forms containing the correct one, and then generate an appropriate canonical utterance from it. This strategy is feasible in factoid QA where compositionality is low, and so the size of Zx is limited (Section 4). Paraphrasing We score the canonical utterances in Cz with respect to the input utterance x using a paraphrase model, which offers two advantages. First, the paraphrase model is decoupled from the KB, so we can train it from large text corpora. Second, natural language utterances often do not express predicates explicitly, e.g., the question “What is Italy’s money?” expresses the binary predicate CurrencyOf with a possessive construction. Paraphrasing methods are well-suited for handling such text-to-text gaps. Our framework accommodates any paraphrasing method, and in this paper we propose an association model that learns to associate natural language phrases that co-occur frequently in a monolingual parallel corpus, combined with a vector space model, which learns to score the similarity between vector representations of natural language utterances (Section 5). Model We define a discriminative log-linear model that places a probability distribution over pairs of logical forms and canonical utterances (c, z), given an utterance x: pθ(c, z | x) = exp{φ(x, c, z)⊤θ} P z′∈Zx,c′∈Cz exp{φ(x, c′, z′)⊤θ}, where θ ∈Rb is the vector of parameters to be learned, and φ(x, c, z) is a feature vector extracted from the input utterance x, the canonical utterance c, and the logical form z. Note that the candidate set of logical forms Zx and canonical utterances Cx are constructed during the canonical utterance construction phase. The model score decomposes into two terms: φ(x, c, z)⊤θ = φpr(x, c)⊤θpr + φlf(x, z)⊤θlf, where the parameters θpr define the paraphrase model (Section 5), which is based on features extracted from text only (the input and canonical utterance). The parameters θlf correspond to semantic parsing features based on the logical form and input utterance, and are briefly described in this section. Many existing paraphrase models introduce latent variables to describe the derivation of c from x, e.g., with transformations (Heilman and Smith, 2010; Stern and Dagan, 2011) or alignments (Haghighi et al., 2005; Das and Smith, 2009; Chang et al., 2010). 
However, we opt for a simpler paraphrase model without latent variables in the interest of efficiency. Logical form features The parameters θlf correspond to the following features adopted from Berant et al. (2013). For a logical form z, we extract the size of its denotation JzKK. We also add all binary predicates in z as features. Moreover, we extract a popularity feature for predicates based on the number of instances they have in K. For Freebase entities, we extract a popularity feature based on the entity frequency in an entity linked subset of Reverb (Lin et al., 2012). Lastly, Freebase formulas have types (see Section 4), and we conjoin the type of z with the first word of x, to capture the correlation between a word (e.g., “where”) with the Freebase type (e.g., Location). Learning As our training data consists of question-answer pairs (xi, yi), we maximize the log-likelihood of the correct answer. The probability of an answer y is obtained by marginalizing over canonical utterances c and logical forms z whose denotation is y. Formally, our objective function O(θ) is as follows: O(θ) = n X i=1 log pθ(yi | xi) −λ∥θ∥1, pθ(y | x) = X z∈Zx:y=JzKK X c∈Cz pθ(c, z | x). The strength λ of the L1 regularizer is set based on cross-validation. We optimize the objective by initializing the parameters θ to zero and running AdaGrad (Duchi et al., 2010). We approximate the set of pairs of logical forms and canonical utterances with a beam of size 2,000. 4 Canonical utterance construction We construct canonical utterances in two steps. Given an input utterance x, we first construct a set of logical forms Zx, and then generate canonical utterances from each z ∈Zx. Both steps are performed with a small and simple set of deterministic rules, which suffices for our datasets, as 1417 they consist of factoid questions with a modest amount of compositional structure. We describe these rules below for completeness. Due to its soporific effect though, we advise the reader to skim it quickly. Candidate logical forms We consider logical forms defined by a set of templates, summarized in Table 1. The basic template is a join of a binary and an entity, where a binary can either be one property p.e (#1 in the table) or two properties p1.p2.e (#2). To handle cases of events involving multiple arguments (e.g., “Who did Brad Pitt play in Troy?”), we introduce the template p.(p1.e1 ⊓p2.e2) (#3), where the main event is modified by more than one entity. Logical forms can be further modified by a unary “filter”, e.g., the answer to “What composers spoke French?” is a set of composers, i.e., a subset of all people (#4). Lastly, we handle aggregation formulas for utterances such as “How many teams are in the NCAA?” (#5). To construct candidate logical forms Zx for a given utterance x, our strategy is to find an entity in x and grow the logical form from that entity. As we show later, this procedure actually produces a set with better coverage than constructing logical forms recursively from spans of x, as is done in traditional semantic parsing. Specifically, for every span of x, we take at most 10 entities whose Freebase descriptions approximately match the span. Then, we join each entity e with all type-compatible1 binaries b, and add these logical forms to Zx (#1 and #2). To construct logical forms with multiple entities (#3) we do the following: For any logical form z = p.p1.e1, where p1 has type signature (t1, ∗), we look for other entities e2 that were matched in x. 
Then, we add the logical form p.(p1.e1 ⊓p2.e2), if there exists a binary p2 with a compatible type signature (t1, t2), where t2 is one of e2’s types. For example, for the logical form Character.Actor.BradPitt, if we match the entity Troy in x, we obtain Character.(Actor.BradPitt ⊓Film.Troy). We further modify logical forms by intersecting with a unary filter (#4): given a formula z with some Freebase type (e.g., People), we look at all Freebase sub-types t (e.g., Composer), and 1Entities in Freebase are associated with a set of types, and properties have a type signature (t1, t2) We use these types to compute an expected type t for any logical form z. check whether one of their Freebase descriptions (e.g., “composer”) appears in x. If so, we add the formula Type.t ⊓z to Zx. Finally, we check whether x is an aggregation formula by identifying whether it starts with phrases such as “how many” or “number of” (#5). On WEBQUESTIONS, this results in 645 formulas per utterance on average. Clearly, we can increase the expressivity of this step by expanding the template set. For example, we could handle superlative utterances (“What NBA player is tallest?”) by adding a template with an argmax operator. Utterance generation While mapping general language utterances to logical forms is hard, we observe that it is much easier to generate a canonical natural language utterances of our choice given a logical form. Table 2 summarizes the rules used to generate canonical utterances from the template p.e. Questions begin with a question word, are followed by the Freebase description of the expected answer type (d(t)), and followed by Freebase descriptions of the entity (d(e)) and binary (d(p)). To fill in auxiliary verbs, determiners, and prepositions, we parse the description d(p) into one of NP, VP, PP, or NP VP. This determines the generation rule to be used. Each Freebase property p has an explicit property p′ equivalent to the reverse R[p] (e.g., ContainedBy and R[Contains]). For each logical form z, we also generate using equivalent logical forms where p is replaced with R[p′]. Reversed formulas have different generation rules, since entities in these formulas are in the subject position rather than object position. We generate the description d(t) from the Freebase description of the type of z (this handles #4). For the template p1.p2.e (#2), we have a similar set of rules, which depends on the syntax of d(p1) and d(p2) and is omitted for brevity. The template p.(p1.e1 ⊓p2.e2) (#3) is generated by appending the prepositional phrase in d(e2), e.g, “What character is the character of Brad Pitt in Troy?”. Lastly, we choose the question phrase “How many” for aggregation formulas (#5), and “What” for all other formulas. We also generate canonical utterances using an alignment lexicon, released by Berant et al. (2013), which maps text phrases to Freebase binary predicates. For a binary predicate b mapped from text phrase d(b), we generate the utterance 1418 # Template Example Question 1 p.e Directed.TopGun Who directed Top Gun? 2 p1.p2.e Employment.EmployerOf.SteveBalmer Where does Steve Balmer work? 3 p.(p1.e1 ⊓p2.e2) Character.(Actor.BradPitt ⊓Film.Troy) Who did Brad Pitt play in Troy? 4 Type.t ⊓z Type.Composer ⊓SpeakerOf.French What composers spoke French? 5 count(z) count(BoatDesigner.NatHerreshoff) How many ships were designed by Nat Herreshoff? Table 1: Logical form templates, where p, p1, p2 are Freebase properties, e, e1, e2 are Freebase entities, t is a Freebase type, and z is a logical form. 
d(p) Categ. Rule Example p.e NP WH d(t) has d(e) as NP ? What election contest has George Bush as winner? VP WH d(t) (AUX) VP d(e) ? What radio station serves area New-York? PP WH d(t) PP d(e) ? What beer from region Argentina? NP VP WH d(t) VP the NP d(e) ? What mass transportation system served the area Berlin? R(p).e NP WH d(t) is the NP of d(e) ? What location is the place of birth of Elvis Presley? VP WH d(t) AUX d(e) VP ? What film is Brazil featured in? PP WH d(t) d(e) PP ? What destination Spanish steps near travel destination? NP VP WH NP is VP by d(e) ? What structure is designed by Herod? Table 2: Generation rules for templates of the form p.e and R[p].e based on the syntactic category of the property description. Freebase descriptions for the type, entity, and property are denoted by d(t), d(e) and d(p) respectively. The surface form of the auxiliary AUX is determined by the POS tag of the verb inside the VP tree. WH d(t) d(b) d(e) ?. On the WEBQUESTIONS dataset, we generate an average of 1,423 canonical utterances c per input utterance x. In Section 6, we show that an even simpler method of generating canonical utterances by concatenating Freebase descriptions hurts accuracy by only a modest amount. 5 Paraphrasing Once the candidate set of logical forms paired with canonical utterances is constructed, our problem is reduced to scoring pairs (c, z) based on a paraphrase model. The NLP paraphrase literature is vast and ranges from simple methods employing surface features (Wan et al., 2006), through vector space models (Socher et al., 2011), to latent variable models (Das and Smith, 2009; Wang and Manning, 2010; Stern and Dagan, 2011). In this paper, we focus on two paraphrase models that emphasize simplicity and efficiency. This is important since for each question-answer pair, we consider thousands of canonical utterances as potential paraphrases. In contrast, traditional paraphrase detection (Dolan et al., 2004) and Recognizing Textual Entailment (RTE) tasks (Dagan et al., 2013) consider examples consisting of only a single pair of candidate paraphrases. Our paraphrase model decomposes into an association model and a vector space model: φpr(x, c)⊤θpr = φas(x, c)⊤θas + φvs(x, c)⊤θvs. x : What type of music did Richard Wagner play c : What is the musical genres of Richard Wagner Figure 3: Token associations extracted for a paraphrase pair. Blue and dashed (red and solid) indicate positive (negative) score. Line width is proportional to the absolute value of the score. 5.1 Association model The goal of the association model is to determine whether x and c contain phrases that are likely to be paraphrases. Given an utterance x = ⟨x0, x1, .., xn−1⟩, we denote by xi:j the span from token i to token j. For each pair of utterances (x, c), we go through all spans of x and c and identify a set of pairs of potential paraphrases (xi:j, ci′:j′), which we call associations. (We will describe how associations are identified shortly.) We then define features on each association; the weighted combination of these features yields a score. In this light, associations can be viewed as soft paraphrase rules. Figure 3 presents examples of associations extracted from a paraphrase pair and visualizes the learned scores. We can see that our model learns a positive score for associating “type” with “genres”, and a negative score for associating “is” with “play”. 
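As a rough illustration of the mechanism, the sketch below (Python) scores an (x, c) pair by firing features over candidate token-level associations and deletion features over unmatched tokens. It is only a sketch: matching surface forms stand in for the phrase-table and shared-lemma candidates described next, and weights is a hypothetical feature-weight dictionary rather than the learned association-model parameters.

def token_associations(x_tokens, c_tokens):
    # Candidate associations: token pairs with matching lowercased forms
    # stand in for the phrase-table and shared-lemma candidates.
    return [(i, j) for i, x in enumerate(x_tokens)
                   for j, c in enumerate(c_tokens)
                   if x.lower() == c.lower()]

def association_score(x_tokens, c_tokens, weights):
    score, matched_x, matched_c = 0.0, set(), set()
    for i, j in token_associations(x_tokens, c_tokens):
        matched_x.add(i)
        matched_c.add(j)
        # Lexicalized pair feature plus a generic "same form" feature.
        score += weights.get(("pair", x_tokens[i].lower(), c_tokens[j].lower()), 0.0)
        score += weights.get("same_form", 0.0)
    # Deletion features for tokens that take part in no association.
    for i, tok in enumerate(x_tokens):
        if i not in matched_x:
            score += weights.get(("delete", tok.lower()), 0.0)
    for j, tok in enumerate(c_tokens):
        if j not in matched_c:
            score += weights.get(("delete", tok.lower()), 0.0)
    return score

In the full model the candidate associations also come from a mined phrase table, and the features include POS, synonym, derivation, and deletion indicators, as described in the rest of this section.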
We define associations in x and c primarily by looking up phrase pairs in a phrase table constructed using the PARALEX corpus (Fader et al., 2013). PARALEX is a large monolingual parallel 1419 Category Description Assoc. lemma(xi:j) ∧lemma(ci′:j′) pos(xi:j) ∧pos(ci′:j′) lemma(xi:j) = lemma(ci′:j′)? pos(xi:j) = pos(ci′:j′)? lemma(xi:j) and lemma(ci′:j′) are synonyms? lemma(xi:j) and lemma(ci′:j′) are derivations? Deletions Deleted lemma and POS tag Table 3: Full feature set in the association model. xi:j and ci′:j′ denote spans from x and c. pos(xi:j) and lemma(xi:j) denote the POS tag and lemma sequence of xi:j. corpora, containing 18 million pairs of question paraphrases from wikianswers.com, which were tagged as having the same meaning by users. PARALEX is suitable for our needs since it focuses on question paraphrases. For example, the phrase “do for a living” occurs mostly in questions, and we can extract associations for this phrase from PARALEX. Paraphrase pairs in PARALEX are word-aligned using standard machine translation methods. We use the word alignments to construct a phrase table by applying the consistent phrase pair heuristic (Och and Ney, 2004) to all 5-grams. This results in a phrase table with approximately 1.3 million phrase pairs. We let A denote this set of mined candidate associations. For a pair (x, c), we also consider as candidate associations the set B (represented implicitly), which contains token pairs (xi, ci′) such that xi and ci′ share the same lemma, the same POS tag, or are linked through a derivation link on WordNet (Fellbaum, 1998). This allows us to learn paraphrases for words that appear in our datasets but are not covered by the phrase table, and to handle nominalizations for phrase pairs such as “Who designed the game of life?” and “What game designer is the designer of the game of life?”. Our model goes over all possible spans of x and c and constructs all possible associations from A and B. This results in many poor associations (e.g., “play” and “the”), but as illustrated in Figure 3, we learn weights that discriminate good from bad associations. Table 3 specifies the full set of features. Note that unlike standard paraphrase detection and RTE systems, we use lexicalized features, firing approximately 400,000 features on WEBQUESTIONS. By extracting POS features, we obtain soft syntactic rules, e.g., the feature “JJ N ∧N” indicates that omitting adjectives before nouns is possible. Once associations are constructed, we mark tokens in x and c that were not part of any association, and extract deletion features for their lemmas and POS tags. Thus, we learn that deleting pronouns is acceptable, while deleting nouns is not. To summarize, the association model links phrases of two utterances in multiple overlapping ways. During training, the model learns which associations are characteristic of paraphrases and which are not. 5.2 Vector space model The association model relies on having a good set of candidate associations, but mining associations suffers from coverage issues. We now introduce a vector space (VS) model, which assigns a vector representation for each utterance, and learns a scoring function that ranks paraphrase candidates. We start by constructing vector representations of words. We run the WORD2VEC tool (Mikolov et al., 2013) on lower-cased Wikipedia text (1.59 billion tokens), using the CBOW model with a window of 5 and hierarchical softmax. 
We also experiment with publicly released word embeddings (Huang et al., 2012), which were trained using both local and global context. Both result in kdimensional vectors (k = 50). Next, we construct a vector vx ∈Rk for each utterance x by simply averaging the vectors of all content words (nouns, verbs, and adjectives) in x. We can now estimate a paraphrase score for two utterances x and c via a weighted combination of the components of the vector representations: v⊤ x Wvc = k X i,j=1 wijvx,ivc,j where W ∈Rk×k is a parameter matrix. In terms of our earlier notation, we have θvs = vec(W) and φvs(x, c) = vec(vxv⊤ c ), where vec(·) unrolls a matrix into a vector. In Section 6, we experiment with W equal to the identity matrix, constraining W to be diagonal, and learning a full W matrix. The VS model can identify correct paraphrases in cases where it is hard to directly associate phrases from x and c. For example, the answer to “Where is made Kia car?” (from WEBQUESTIONS), is given by the canonical utterance “What city is Kia motors a headquarters of?”. The association model does not associate “made” and “headquarters”, but the VS model is able to determine that these utterances are semantically related. In other cases, the VS model cannot distinguish correct paraphrases from incorrect ones. For 1420 Dataset # examples # word types FREE917 917 2,036 WEBQUESTIONS 5,810 4,525 Table 4: Statistics on WEBQUESTIONS and FREE917. example, the association model identifies that the paraphrase for “What type of music did Richard Wagner Play?” is “What is the musical genres of Richard Wagner?”, by relating phrases such as “type of music” and “musical genres”. The VS model ranks the canonical utterance “What composition has Richard Wagner as lyricist?” higher, as this utterance is also in the music domain. Thus, we combine the two models to benefit from their complementary nature. In summary, while the association model aligns particular phrases to one another, the vector space model provides a soft vector-based representation for utterances. 6 Empirical evaluation In this section, we evaluate our system on WEBQUESTIONS and FREE917. After describing the setup (Section 6.1), we present our main empirical results and analyze the components of the system (Section 6.2). 6.1 Setup We use the WEBQUESTIONS dataset (Berant et al., 2013), which contains 5,810 question-answer pairs. This dataset was created by crawling questions through the Google Suggest API, and then obtaining answers using Amazon Mechanical Turk. We use the original train-test split, and divide the training set into 3 random 80%–20% splits for development. This dataset is characterized by questions that are commonly asked on the web (and are not necessarily grammatical), such as “What character did Natalie Portman play in Star Wars?” and “What kind of money to take to Bahamas?”. The FREE917 dataset contains 917 questions, authored by two annotators and annotated with logical forms. This dataset contains questions on rarer topics (for example, “What is the engine in a 2010 Ferrari California?” and “What was the cover price of the X-men Issue 1?”), but the phrasing of questions tends to be more rigid compared to WEBQUESTIONS. Table 4 provides some statistics on the two datasets. Following Cai and Yates (2013), we hold out 30% of the data for the final test, and perform 3 random 80%-20% splits of the training set for development. Since we train from question-answer pairs, we collect answers by executing the gold logical forms against Freebase. 
We execute λ-DCS queries by converting them into SPARQL and executing them against a copy of Freebase using the Virtuoso database engine. We evaluate our system with accuracy, that is, the proportion of questions we answer correctly. We run all questions through the Stanford CoreNLP pipeline (Toutanova and Manning, 2003; Finkel et al., 2005; Klein and Manning, 2003). We tuned the L1 regularization strength, developed features, and ran analysis experiments on the development set (averaging across random splits). On WEBQUESTIONS, without L1 regularization, the number of non-zero features was 360K; L1 regularization brings it down to 17K. 6.2 Results We compare our system to Cai and Yates (2013) (CY13), Berant et al. (2013) (BCFL13), and Kwiatkowski et al. (2013) (KCAZ13). For BCFL13, we obtained results using the SEMPRE package2 and running Berant et al. (2013)’s system on the datasets. Table 5 presents results on the test set. We achieve a substantial relative improvement of 12% in accuracy on WEBQUESTIONS, and match the best results on FREE917. Interestingly, our system gets an oracle accuracy of 63% on WEBQUESTIONS compared to 48% obtained by BCFL13, where the oracle accuracy is the fraction of questions for which at least one logical form in the candidate set produced by the system is correct. This demonstrates that our method for constructing candidate logical forms is reasonable. To further examine this, we ran BCFL13 on the development set, allowing it to use only predicates from logical forms suggested by our logical form construction step. This improved oracle accuracy on the development set to 64.5%, but accuracy was 32.2%. This shows that the improvement in accuracy should not be attributed only to better logical form generation, but also to the paraphrase model. We now perform more extensive analysis of our system’s components and compare it to various baselines. Component ablation We ablate the association model, the VS model, and the entire paraphrase 2http://www-nlp.stanford.edu/software/sempre/ 1421 FREE917 WEBQUESTIONS CY13 59.0 – BCFL13 62.0 35.7 KCAZ13 68.0 – This work 68.5 39.9 Table 5: Results on the test set. FREE917 WEBQUESTIONS Our system 73.9 41.2 –VSM 71.0 40.5 –ASSOCIATION 52.7 35.3 –PARAPHRASE 31.8 21.3 SIMPLEGEN 73.4 40.4 Full matrix 52.7 35.3 Diagonal 50.4 30.6 Identity 50.7 30.4 JACCARD 69.7 31.3 EDIT 40.8 24.8 WDDC06 71.0 29.8 Table 6: Results for ablations and baselines on development set. model (using only logical form features). Table 5 shows that our full system obtains highest accuracy, and that removing the association model results in a much larger degradation compared to removing the VS model. Utterance generation Our system generates relatively natural utterances from logical forms using simple rules based on Freebase descriptions (Section 4). We now consider simply concatenating Freebase descriptions. For example, the logical form R[PlaceOfBirth].ElvisPresley would generate the utterance “What location Elvis Presley place of birth?”. Row SIMPLEGEN in Table 6 demonstrates that we still get good results in this setup. This is expected given that our paraphrase models are not sensitive to the syntactic structure of the generated utterance. VS model Our system learns parameters for a full W matrix. We now examine results when learning parameters for a full matrix W, a diagonal matrix W, and when setting W to be the identity matrix. Table 6 (third section) illustrates that learning a full matrix substantially improves accuracy. 
Figure 4 gives an example for a correct paraphrase pair, where the full matrix model boosts the overall model score. Note that the full matrix assigns a high score for the phrases “official language” and “speak” compared to the simpler models, but other pairs are less interpretable. Baselines We also compared our system to the following implemented baselines: Full do people czech republic speak offical 0.7 8.09 15.34 21.62 24.44 language 3.86 -3.13 7.81 2.58 14.74 czech 0.67 16.55 2.76 republic -8.71 12.47 -10.75 Diagonal do people czech republic speak offical 2.31 -0.72 1.88 0.27 -0.49 language 0.27 4.72 11.51 12.33 11 czech 1.4 8.13 5.21 republic -0.16 6.72 9.69 Identity do people czech republic speak offical 2.26 -1.41 0.89 0.07 -0.58 language 0.62 4.19 11.91 10.78 12.7 czech 2.88 7.31 5.42 republic -1.82 4.34 9.44 Figure 4: Values of the paraphrase score v⊤ xiWvci′ for all content word tokens xi and ci′, where W is an arbitrary full matrix, a diagonal matrix, or the identity matrix. We omit scores for the words “czech” and “republic” since they appear in all canonical utterances for this example. • JACCARD: We compute the Jaccard score between the tokens of x and c and define φpr(x, c) to be this single feature. • EDIT: We compute the token edit distance between x and c and define φpr(x, c) to be this single feature. • WDDC06: We re-implement 13 features from Wan et al. (2006), who obtained close to state-of-the-art performance on the Microsoft Research paraphrase corpus.3 Table 6 demonstrates that we improve performance over all baselines. Interestingly, JACCARD and WDDC06 obtain reasonable performance on FREE917 but perform much worse on WEBQUESTIONS. We surmise this is because questions in FREE917 were generated by annotators prompted by Freebase facts, whereas questions in WEBQUESTIONS originated independently of Freebase. Thus, word choice in FREE917 is often close to the generated Freebase descriptions, allowing simple baselines to perform well. Error analysis We sampled examples from the development set to examine the main reasons PARASEMPRE makes errors. We notice that in many cases the paraphrase model can be further improved. For example, PARASEMPRE suggests 3We implement all features that do not require dependency parsing. 1422 that the best paraphrase for “What company did Henry Ford work for?” is “What written work novel by Henry Ford?” rather than “The employer of Henry Ford”, due to the exact match of the word “work”. Another example is the question “Where is the Nascar hall of fame?”, where PARASEMPRE suggests that “What hall of fame discipline has Nascar hall of fame as halls of fame?” is the best canonical utterance. This is because our simple model allows to associate “hall of fame” with the canonical utterance three times. Entity recognition also accounts for many errors, e.g., the entity chosen in “where was the gallipoli campaign waged?” is Galipoli and not GalipoliCampaign. Last, PARASEMPRE does not handle temporal information, which causes errors in questions like “Where did Harriet Tubman live after the civil war?” 7 Discussion In this work, we approach the problem of semantic parsing from a paraphrasing viewpoint. A fundamental motivation and long standing goal of the paraphrasing and RTE communities has been to cast various semantic applications as paraphrasing/textual entailment (Dagan et al., 2013). 
While it has been shown that paraphrasing methods are useful for question answering (Harabagiu and Hickl, 2006) and relation extraction (Romano et al., 2006), this is, to the best of our knowledge, the first paper to perform semantic parsing through paraphrasing. Our paraphrase model emphasizes simplicity and efficiency, but the framework is agnostic to the internals of the paraphrase method. On the semantic parsing side, our work is most related to Kwiatkowski et al. (2013). The main challenge in semantic parsing is coping with the mismatch between language and the KB. In both Kwiatkowski et al. (2013) and this work, an intermediate representation is employed to handle the mismatch, but while they use a logical representation, we opt for a text-based one. Our choice allows us to benefit from the parallel monolingual corpus PARALEX and from word vectors trained on Wikipedia. We believe that our approach is particularly suitable for scenarios such as factoid question answering, where the space of logical forms is somewhat constrained and a few generation rules suffice to reduce the problem to paraphrasing. Our work is also related to Fader et al. (2013), who presented a paraphrase-driven question answering system. One can view this work as a generalization of Fader et al. along three dimensions. First, Fader et al. use a KB over natural language extractions rather than a formal KB and so querying the KB does not require a generation step – they paraphrase questions to KB entries directly. Second, they suggest a particular paraphrasing method that maps a test question to a question for which the answer is already known in a single step. We propose a general paraphrasing framework and instantiate it with two paraphrase models. Lastly, Fader et al. handle queries with only one property and entity whereas we generalize to more types of logical forms. Since our generated questions are passed to a paraphrase model, we took a very simple approach, mostly ensuring that we preserved the semantics of the utterance without striving for the most fluent realization. Research on generation (Dale et al., 2003; Reiter et al., 2005; Turner et al., 2009; Piwek and Boyer, 2012) typically focuses on generating natural utterances for human consumption, where fluency is important. In conclusion, the main contribution of this paper is a novel approach for semantic parsing based on a simple generation procedure and a paraphrase model. We achieve state-of-the-art results on two recently released datasets. We believe that our approach opens a window of opportunity for learning semantic parsers from raw text not necessarily related to the target KB. With more sophisticated generation and paraphrase, we hope to tackle compositionally richer utterances. Acknowledgments We thank Kai Sheng Tai for performing the error analysis. Stanford University gratefully acknowledges the support of the Defense Advanced Research Projects Agency (DARPA) Deep Exploration and Filtering of Text (DEFT) Program under Air Force Research Laboratory (AFRL) contract no. FA8750-13-2-0040. Any opinions, findings, and conclusion or recommendations expressed in this material are those of the authors and do not necessarily reflect the view of the DARPA, AFRL, or the US government. The second author is supported by a Google Faculty Research Award. 1423 References J. Berant, A. Chou, R. Frostig, and P. Liang. 2013. Semantic parsing on Freebase from question-answer pairs. In Empirical Methods in Natural Language Processing (EMNLP). Q. Cai and A. Yates. 
2013. Large-scale semantic parsing via schema matching and lexicon extension. In Association for Computational Linguistics (ACL). M. Chang, D. Goldwasser, D. Roth, and V. Srikumar. 2010. Discriminative learning over constrained latent representations. In North American Association for Computational Linguistics (NAACL). I. Dagan, D. Roth, M. Sammons, and F. M. Zanzotto. 2013. Recognizing Textual Entailment: Models and Applications. Morgan and Claypool Publishers. R. Dale, S. Geldof, and J. Prost. 2003. Coral: using natural language generation for navigational assistance. In Australasian computer science conference, pages 35–44. D. Das and N. A. Smith. 2009. Paraphrase identification as probabilistic quasi-synchronous recognition. In Association for Computational Linguistics (ACL), pages 468–476. B. Dolan, C. Quirk, and C. Brockett. 2004. Unsupervised construction of large paraphrase corpora: Exploiting massively parallel news sources. In International Conference on Computational Linguistics (COLING). J. Duchi, E. Hazan, and Y. Singer. 2010. Adaptive subgradient methods for online learning and stochastic optimization. In Conference on Learning Theory (COLT). A. Fader, S. Soderland, and O. Etzioni. 2011. Identifying relations for open information extraction. In Empirical Methods in Natural Language Processing (EMNLP). A. Fader, L. Zettlemoyer, and O. Etzioni. 2013. Paraphrase-driven learning for open question answering. In Association for Computational Linguistics (ACL). C. Fellbaum. 1998. WordNet: An Electronic Lexical Database. MIT Press. J. R. Finkel, T. Grenager, and C. Manning. 2005. Incorporating non-local information into information extraction systems by Gibbs sampling. In Association for Computational Linguistics (ACL), pages 363–370. Google. 2013. Freebase data dumps (2013-0609). https://developers.google.com/ freebase/data. A. Haghighi, A. Y. Ng, and C. D. Manning. 2005. Robust textual inference via graph matching. In Empirical Methods in Natural Language Processing (EMNLP). S. Harabagiu and A. Hickl. 2006. Methods for using textual entailment in open-domain question answering. In Association for Computational Linguistics (ACL). M. Heilman and N. A. Smith. 2010. Tree edit models for recognizing textual entailments, paraphrases, and answers to questions. In Human Language Technology and North American Association for Computational Linguistics (HLT/NAACL), pages 1011–1019. E. H. Huang, R. Socher, C. D. Manning, and A. Y. Ng. 2012. Improving word representations via global context and multiple word prototypes. In Association for Computational Linguistics (ACL). D. Klein and C. Manning. 2003. Accurate unlexicalized parsing. In Association for Computational Linguistics (ACL), pages 423–430. T. Kwiatkowski, L. Zettlemoyer, S. Goldwater, and M. Steedman. 2010. Inducing probabilistic CCG grammars from logical form with higher-order unification. In Empirical Methods in Natural Language Processing (EMNLP), pages 1223–1233. T. Kwiatkowski, E. Choi, Y. Artzi, and L. Zettlemoyer. 2013. Scaling semantic parsers with on-the-fly ontology matching. In Empirical Methods in Natural Language Processing (EMNLP). P. Liang. 2013. Lambda dependency-based compositional semantics. Technical report, ArXiv. T. Lin, Mausam, and O. Etzioni. 2012. Entity linking at web scale. In Knowledge Extraction Workshop (AKBC-WEKEX). T. Mikolov, K. Chen, G. Corrado, and Jeffrey. 2013. Efficient estimation of word representations in vector space. Technical report, ArXiv. F. J. Och and H. Ney. 2004. 
The alignment template approach to statistical machine translation. Computational Linguistics, 30:417–449. P. Piwek and K. E. Boyer. 2012. Varieties of question generation: Introduction to this special issue. Dialogue and Discourse, 3:1–9. E. Reiter, S. Sripada, J. Hunter, J. Yu, and I. Davy. 2005. Choosing words in computer-generated weather forecasts. Artificial Intelligence, 167:137– 169. L. Romano, M. kouylekov, I. Szpektor, I. Dagan, and A. Lavelli. 2006. Investigating a generic paraphrase-based approach for relation extraction. In Proceedings of ECAL. R. Socher, E. H. Huang, J. Pennin, C. D. Manning, and A. Ng. 2011. Dynamic pooling and unfolding recursive autoencoders for paraphrase detection. In Advances in Neural Information Processing Systems (NIPS), pages 801–809. 1424 A. Stern and I. Dagan. 2011. A confidence model for syntactically-motivated entailment proofs. In Recent Advances in Natural Language Processing, pages 455–462. K. Toutanova and C. D. Manning. 2003. Featurerich part-of-speech tagging with a cyclic dependency network. In Human Language Technology and North American Association for Computational Linguistics (HLT/NAACL). R. Turner, Y. Sripada, and E. Reiter. 2009. Generating approximate geographic descriptions. In European Workshop on Natural Language Generation, pages 42–49. S. Wan, M. Dras, R. Dale, and C. Paris. 2006. Using dependency-based features to take the “para-farce” out of paraphrase. In Australasian Language Technology Workshop. M. Wang and C. D. Manning. 2010. Probabilistic treeedit models with structured latent variables for textual entailment and question answering. In The International Conference on Computational Linguistics, pages 1164–1172. Y. W. Wong and R. J. Mooney. 2007. Learning synchronous grammars for semantic parsing with lambda calculus. In Association for Computational Linguistics (ACL), pages 960–967. M. Zelle and R. J. Mooney. 1996. Learning to parse database queries using inductive logic proramming. In Association for the Advancement of Artificial Intelligence (AAAI), pages 1050–1055. L. S. Zettlemoyer and M. Collins. 2005. Learning to map sentences to logical form: Structured classification with probabilistic categorial grammars. In Uncertainty in Artificial Intelligence (UAI), pages 658– 666. 1425
Proceedings of the 52nd Annual Meeting of the Association for Computational Linguistics, pages 1426–1436, Baltimore, Maryland, USA, June 23-25 2014. c⃝2014 Association for Computational Linguistics A Discriminative Graph-Based Parser for the Abstract Meaning Representation Jeffrey Flanigan Sam Thomson Jaime Carbonell Chris Dyer Noah A. Smith Language Technologies Institute Carnegie Mellon University Pittsburgh, PA 15213, USA {jflanigan,sthomson,jgc,cdyer,nasmith}@cs.cmu.edu Abstract Abstract Meaning Representation (AMR) is a semantic formalism for which a growing set of annotated examples is available. We introduce the first approach to parse sentences into this representation, providing a strong baseline for future improvement. The method is based on a novel algorithm for finding a maximum spanning, connected subgraph, embedded within a Lagrangian relaxation of an optimization problem that imposes linguistically inspired constraints. Our approach is described in the general framework of structured prediction, allowing future incorporation of additional features and constraints, and may extend to other formalisms as well. Our open-source system, JAMR, is available at: http://github.com/jflanigan/jamr 1 Introduction Semantic parsing is the problem of mapping natural language strings into meaning representations. Abstract Meaning Representation (AMR) (Banarescu et al., 2013; Dorr et al., 1998) is a semantic formalism in which the meaning of a sentence is encoded as a rooted, directed, acyclic graph. Nodes represent concepts, and labeled directed edges represent the relationships between them–see Figure 1 for an example AMR graph. The formalism is based on propositional logic and neo-Davidsonian event representations (Parsons, 1990; Davidson, 1967). Although it does not encode quantifiers, tense, or modality, the set of semantic phenomena included in AMR were selected with natural language applications—in particular, machine translation—in mind. In this paper we introduce JAMR, the first published system for automatic AMR parsing. The system is based on a statistical model whose parameters are trained discriminatively using annotated sentences in the AMR Bank corpus (Banarescu et al., 2013). We evaluate using the Smatch score (Cai and Knight, 2013), establishing a baseline for future work. The core of JAMR is a two-part algorithm that first identifies concepts using a semi-Markov model and then identifies the relations that obtain between these by searching for the maximum spanning connected subgraph (MSCG) from an edge-labeled, directed graph representing all possible relations between the identified concepts. To solve the latter problem, we introduce an apparently novel O(|V |2 log |V |) algorithm that is similar to the maximum spanning tree (MST) algorithms that are widely used for dependency parsing (McDonald et al., 2005). Our MSCG algorithm returns the connected subgraph with maximal sum of its edge weights from among all connected subgraphs of the input graph. Since AMR imposes additional constraints to ensure semantic well-formedness, we use Lagrangian relaxation (Geoffrion, 1974; Fisher, 2004) to augment the MSCG algorithm, yielding a tractable iterative algorithm that finds the optimal solution subject to these constraints. In our experiments, we have found this algorithm to converge 100% of the time for the constraint set we use. 
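As a concrete picture of the target representation, the sketch below builds the Figure 1 graph as a plain labeled-graph structure in Python. The AMRGraph class is purely illustrative and is not JAMR's internal representation.

from collections import defaultdict

class AMRGraph:
    # A bare-bones rooted, labeled graph: concept nodes plus labeled edges.
    def __init__(self):
        self.concepts = {}               # node id -> concept label
        self.edges = defaultdict(list)   # node id -> list of (label, target)
        self.root = None

    def add_concept(self, node, label):
        self.concepts[node] = label

    def add_edge(self, head, label, tail):
        self.edges[head].append((label, tail))

amr = AMRGraph()
for node, label in [("w", "want-01"), ("b", "boy"), ("g", "visit-01"),
                    ("c", "city"), ("n", "name")]:
    amr.add_concept(node, label)
amr.root = "w"
amr.add_edge("w", "ARG0", "b")
amr.add_edge("w", "ARG1", "g")
amr.add_edge("g", "ARG0", "b")   # "b" is re-entrant: the boy is also the visitor
amr.add_edge("g", "ARG1", "c")
amr.add_edge("c", "name", "n")
for i, s in enumerate(["New", "York", "City"], start=1):
    amr.add_edge("n", "op" + str(i), s)   # the :op values are string constants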
The approach can be understood as an alternative to parsing approaches using graph transducers such as (synchronous) hyperedge replacement grammars (Chiang et al., 2013; Jones et al., 2012; Drewes et al., 1997), in much the same way that spanning tree algorithms are an alternative to using shift-reduce and dynamic programming algorithms for dependency parsing.1 While a detailed 1To date, a graph transducer-based semantic parser has not been published, although the Bolinas toolkit (http://www.isi.edu/publications/ licensed-sw/bolinas/) contains much of the necessary infrastructure. 1426 want-01 boy visit-01 city name “New” “York” “City” ARG0 ARG1 ARG0 ARG1 name op1 op2 op3 (a) Graph. (w / want-01 :ARG0 (b / boy) :ARG1 (g / visit-01 :ARG0 b :ARG1 (c / city :name (n / name :op1 "New" :op2 "York" :op3 "City")))) (b) AMR annotation. Figure 1: Two equivalent ways of representing the AMR parse for the sentence, “The boy wants to visit New York City.” comparison of these two approaches is beyond the scope of this paper, we emphasize that—as has been observed with dependency parsing—a diversity of approaches can shed light on complex problems such as semantic parsing. 2 Notation and Overview Our approach to AMR parsing represents an AMR parse as a graph G = ⟨V, E⟩; vertices and edges are given labels from sets LV and LE, respectively. G is constructed in two stages. The first stage identifies the concepts evoked by words and phrases in an input sentence w = ⟨w1, . . . , wn⟩, each wi a member of vocabulary W. The second stage connects the concepts by adding LE-labeled edges capturing the relations between concepts, and selects a root in G corresponding to the focus of the sentence w. Concept identification (§3) involves segmenting w into contiguous spans and assigning to each span a graph fragment corresponding to a concept from a concept set denoted F (or to ∅for words that evoke no concept). In §5 we describe how F is constructed. In our formulation, spans are contiguous subsequences of w. For example, the words “New York City” can evoke the fragment represented by (c / city :name (n / name :op1 "New" :op2 "York" :op3 "City")))) We use a sequence labeling algorithm to identify concepts. The relation identification stage (§4) is similar to a graph-based dependency parser. Instead of finding the maximum-scoring tree over words, it finds the maximum-scoring connected subgraph that preserves concept fragments from the first stage, links each pair of vertices by at most one edge, and is deterministic2 with respect to a special set of edge labels L∗ E ⊂LE. The set L∗ E consists of the labels ARG0–ARG5, and does not include labels such as MOD or MANNER, for example. Linguistically, the determinism constraint enforces that predicates have at most one semantic argument of each type; this is discussed in more detail in §4. To train the parser, spans of words must be labeled with the concept fragments they evoke. Although AMR Bank does not label concepts with the words that evoke them, it is possible to build an automatic aligner (§5). The alignments are used to construct the concept lexicon and to train the concept identification and relation identification stages of the parser (§6). Each stage is a discriminatively-trained linear structured predictor with rich features that make use of part-ofspeech tagging, named entity tagging, and dependency parsing. 
In §7, we evaluate the parser against goldstandard annotated sentences from the AMR Bank corpus (Banarescu et al., 2013) under the Smatch score (Cai and Knight, 2013), presenting the first published results on automatic AMR parsing. 3 Concept Identification The concept identification stage maps spans of words in the input sentence w to concept graph fragments from F, or to the empty graph fragment ∅. These graph fragments often consist of just one labeled concept node, but in some cases they are larger graphs with multiple nodes and edges.3 2By this we mean that, at each node, there is at most one outgoing edge with that label type. 3About 20% of invoked concept fragments are multiconcept fragments. 1427 Concept identification is illustrated in Figure 2 using our running example, “The boy wants to visit New York City.” Let the concept lexicon be a mapping clex : W ∗→2F that provides candidate graph fragments for sequences of words. (The construction of F and clex is discussed below.) Formally, a concept labeling is (i) a segmentation of w into contiguous spans represented by boundaries b, giving spans ⟨wb0:b1, wb1:b2, . . . wbk−1:bk⟩, with b0 = 0 and bk = n, and (ii) an assignment of each phrase wbi−1:bi to a concept graph fragment ci ∈clex(wbi−1:bi) ∪∅. Our approach scores a sequence of spans b and a sequence of concept graph fragments c, both of arbitrary length k, using the following locally decomposed, linearly parameterized function: score(b, c; θ) = Pk i=1 θ⊤f(wbi−1:bi, bi−1, bi, ci) (1) where f is a feature vector representation of a span and one of its concept graph fragments in context. The features are: • Fragment given words: Relative frequency estimates of the probability of a concept graph fragment given the sequence of words in the span. This is calculated from the concept-word alignments in the training corpus (§5). • Length of the matching span (number of tokens). • NER: 1 if the named entity tagger marked the span as an entity, 0 otherwise. • Bias: 1 for any concept graph fragment from F and 0 for ∅. Our approach finds the highest-scoring b and c using a dynamic programming algorithm: the zeroth-order case of inference under a semiMarkov model (Janssen and Limnios, 1999). Let S(i) denote the score of the best labeling of the first i words of the sentence, w0:i; it can be calculated using the recurrence: S(0) = 0 S(i) = max j:0≤j<i, c∈clex(wj:i)∪∅ n S(j) + θ⊤f(wj:i, j, i, c) o The best score will be S(n), and the best scoring concept labeling can be recovered using backpointers, as in typical implementations of the Viterbi algorithm. Runtime is O(n2). clex is implemented as follows. When clex is called with a sequence of words, it looks up the sequence in a table that contains, for every word sequence that was labeled with a concept fragment in the training data, the set of concept fragments it was labeled with. clex also has a set of rules for generating concept fragments for named entities and time expressions. It generates a concept fragment for any entity recognized by the named entity tagger, as well as for any word sequence matching a regular expression for a time expression. clex returns the union of all these concept fragments. 4 Relation Identification The relation identification stage adds edges among the concept subgraph fragments identified in the first stage (§3), creating a graph. We frame the task as a constrained combinatorial optimization problem. 
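Before moving on to that optimization, the concept identification recurrence above lends itself to a short Viterbi-style implementation. In the sketch below, candidates(words, j, i) is a hypothetical stand-in for clex(wj:i) together with the empty fragment, and span_score(words, j, i, frag) stands in for the feature score θ⊤f; neither is the actual JAMR code.

def best_concept_labeling(words, candidates, span_score):
    # S[i] is the best score of any segmentation and labeling of words[0:i];
    # back[i] records the (start index, fragment) choice that achieves it.
    n = len(words)
    S = [float("-inf")] * (n + 1)
    S[0] = 0.0
    back = [None] * (n + 1)
    for i in range(1, n + 1):
        for j in range(i):
            # candidates is assumed to always include the empty fragment.
            for frag in candidates(words, j, i):
                s = S[j] + span_score(words, j, i, frag)
                if s > S[i]:
                    S[i], back[i] = s, (j, frag)
    # Follow back-pointers to recover the spans and their fragments.
    labeling, i = [], n
    while i > 0:
        j, frag = back[i]
        labeling.append((j, i, frag))
        i = j
    return S[n], list(reversed(labeling))

Runtime is quadratic in sentence length, matching the O(n2) analysis above.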
Consider the fully dense labeled multigraph D = ⟨VD, ED⟩ that includes the union of all labeled vertices and labeled edges in the concept graph fragments, as well as every possible edge u → v with label ℓ, for all u, v ∈ VD and every ℓ ∈ LE (footnote 4). We require a subgraph G = ⟨VG, EG⟩ that respects the following constraints:
1. Preserving: all graph fragments (including labels) from the concept identification phase are subgraphs of G.
2. Simple: for any two vertices u and v ∈ VG, EG includes at most one edge between u and v. This constraint forbids a small number of perfectly valid graphs, for example for sentences such as “John hurt himself”; however, we see that < 1% of training instances violate the constraint. We found in preliminary experiments that including the constraint increases overall performance (footnote 5).
3. Connected: G must be weakly connected (every vertex reachable from every other vertex, ignoring the direction of edges). This constraint follows from the formal definition of AMR and is never violated in the training data.
4. Deterministic: for each node u ∈ VG, and for each label ℓ ∈ L∗E, there is at most one outgoing edge in EG from u with label ℓ. As discussed in §2, this constraint is linguistically motivated.
Footnote 4: To handle numbered OP labels, we pre-process the training data to convert OPN to OP, and post-process the output by numbering the OP labels sequentially.
Footnote 5: In future work it might be treated as a soft constraint, or the constraint might be refined to specific cases.
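The deterministic constraint is the one that will later be enforced with Lagrangian relaxation; checking whether a candidate edge set violates it takes a single pass over the edges. In the sketch below, edges are assumed to be (head, label, tail) triples and deterministic_labels plays the role of L∗E; the names are illustrative, not part of JAMR.

from collections import Counter

def determinism_violations(edges, deterministic_labels):
    # Count outgoing (head, label) pairs restricted to labels in L*_E and
    # report those with more than one edge, i.e. violations of constraint 4.
    counts = Counter((head, label) for head, label, _tail in edges
                     if label in deterministic_labels)
    return [pair for pair, c in counts.items() if c > 1]

# Example: two ARG0 edges leaving the same node violate the constraint.
# determinism_violations([("v1", "ARG0", "b"), ("v1", "ARG0", "c")], {"ARG0"})
# returns [("v1", "ARG0")]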
Note that without the deterministic constraint, we have no constraints that depend on the label of an edge, nor its direction. So it is clear that the edges omitted in this step could not be part of the maximum-scoring solution, as they could be replaced by a higher scoring edge without violating any constraints. Note also that because we have kept exactly one edge between every pair of nodes, ⟨V, E⟩is simple and connected. 3. (Core algorithm) Run Algorithm 1, MSCG, on ⟨V, E⟩and E(0). This algorithm is a (to our knowledge novel) modification of the minimum spanning tree algorithm of Kruskal (1956). Note that the directions of edges do not matter for MSCG. Steps 1–2 can be accomplished in one pass through the edges, with runtime O(|V |2). MSCG can be implemented efficiently in O(|V |2 log |V |) time, similarly to Kruskal’s algorithm, using a disjoint-set data structure to keep track of connected components.6 The total asymptotic runtime complexity is O(|V |2 log |V |). The details of MSCG are given in Algorithm 1. In a nutshell, MSCG first adds all positive edges to the graph, and then connects the graph by greedily adding the least negative edge that connects two previously unconnected components. Theorem 1. MSCG finds a maximum spanning, connected subgraph of ⟨V, E⟩ Proof. We closely follow the original proof of correctness of Kruskal’s algorithm. We first show by induction that, at every iteration of MSCG, there exists some maximum spanning, connected subgraph that contains G(i) = ⟨V, E(i)⟩: 6For dense graphs, Prim’s algorithm (Prim, 1957) is asymptotically faster (O(|V |2)). We conjecture that using Prim’s algorithm instead of Kruskall’s to connect the graph could improve the runtime of MSCG. 1429 Name Description Label For each ℓ∈LE, 1 if the edge has that label Self edge 1 if the edge is between two nodes in the same fragment Tail fragment root 1 if the edge’s tail is the root of its graph fragment Head fragment root 1 if the edge’s head is the root of its graph fragment Path Dependency edge labels and parts of speech on the shortest syntactic path between any two words in the two spans Distance Number of tokens (plus one) between the two concepts’ spans (zero if the same) Distance indicators A feature for each distance value, that is 1 if the spans are of that distance Log distance Logarithm of the distance feature plus one. Bias 1 for any edge. Table 1: Features used in relation identification. In addition to the features above, the following conjunctions are used (Tail and Head concepts are elements of LV ): Tail concept ∧Label, Head concept ∧Label, Path ∧Label, Path ∧Head concept, Path ∧ Tail concept, Path ∧Head concept ∧Label, Path ∧Tail concept ∧Label, Path ∧Head word, Path ∧Tail word, Path ∧Head word ∧Label, Path ∧Tail word ∧Label, Distance ∧Label, Distance ∧Path, and Distance ∧Path ∧Label. To conjoin the distance feature with anything else, we multiply by the distance. input : weighted, connected graph ⟨V, E⟩ and set of edges E(0) ⊆E to be preserved output: maximum spanning, connected subgraph of ⟨V, E⟩that preserves E(0) let E(1) = E(0) ∪{e ∈E | ψ⊤g(e) > 0}; create a priority queue Q containing {e ∈E | ψ⊤g(e) ≤0} prioritized by scores; i = 1; while Q nonempty and ⟨V, E(i)⟩is not yet spanning and connected do i = i + 1; E(i) = E(i−1); e = arg maxe′∈Q ψ⊤g(e′); remove e from Q; if e connects two previously unconnected components of ⟨V, E(i)⟩then add e to E(i) end end return G = ⟨V, E(i)⟩; Algorithm 1: MSCG algorithm. 
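To see MSCG's control flow in code, the sketch below tracks connected components with a disjoint-set structure, as suggested above. It is an illustrative rendering rather than the authors' implementation; edges are assumed to be `(u, v, label)` tuples with at most one edge per vertex pair (as guaranteed by the pre-processing step), and `score(e)` stands in for ψ⊤g(e).

```python
class DisjointSet:
    def __init__(self, items):
        self.parent = {x: x for x in items}

    def find(self, x):
        while self.parent[x] != x:
            self.parent[x] = self.parent[self.parent[x]]  # path halving
            x = self.parent[x]
        return x

    def union(self, a, b):
        ra, rb = self.find(a), self.find(b)
        if ra == rb:
            return False
        self.parent[ra] = rb
        return True


def mscg(vertices, edges, preserved, score):
    """Maximum spanning, connected subgraph of (vertices, edges) preserving `preserved`."""
    ds = DisjointSet(vertices)
    kept = set()
    # E(1): every preserved fragment edge plus every positive-scoring edge.
    for e in list(preserved) + [e for e in edges if score(e) > 0]:
        if e not in kept:
            kept.add(e)
            ds.union(e[0], e[1])
    # Greedily add the least negative remaining edges that join components.
    components = len({ds.find(v) for v in vertices})
    for e in sorted((e for e in edges if e not in kept), key=score, reverse=True):
        if components == 1:
            break
        if ds.union(e[0], e[1]):   # connects two previously unconnected components
            kept.add(e)
            components -= 1
    return kept
```

The second phase mirrors Kruskal's algorithm: sorting the non-positive edges once and adding the least negative ones that join components is what gives the O(|V|² log |V|) bound noted above.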
Base case: Consider G(1), the subgraph containing E(0) and every positive edge. Take any maximum preserving spanning connected subgraph M of ⟨V, E⟩. We know that such an M exists because ⟨V, E⟩itself is a preserving spanning connected subgraph. Adding a positive edge to M would strictly increase M’s score without disconnecting M, which would contradict the fact that M is maximal. Thus M must contain G(1). Induction step: By the inductive hypothesis, there exists some maximum spanning connected subgraph M = ⟨V, EM⟩that contains G(i). Let e be the next edge added to E(i) by MSCG. If e is in EM, then E(i+1) = E(i) ∪{e} ⊆EM, and the hypothesis still holds. Otherwise, since M is connected and does not contain e, EM ∪{e} must have a cycle containing e. In addition, that cycle must have some edge e′ that is not in E(i). Otherwise, E(i) ∪{e} would contain a cycle, and e would not connect two unconnected components of G(i), contradicting the fact that e was chosen by MSCG. Since e′ is in a cycle in EM ∪{e}, removing it will not disconnect the subgraph, i.e. (EM ∪{e})\ {e′} is still connected and spanning. The score of e is greater than or equal to the score of e′, otherwise MSCG would have chosen e′ instead of e. Thus, ⟨V, (EM ∪{e}) \ {e′}⟩is a maximum spanning connected subgraph that contains E(i+1), and the hypothesis still holds. When the algorithm completes, G = ⟨V, E(i)⟩ is a spanning connected subgraph. The maximum spanning connected subgraph M that contains it cannot have a higher score, because G contains every positive edge. Hence G is maximal. 4.2 Lagrangian Relaxation If the subgraph resulting from MSCG satisfies constraint 4 (deterministic) then we are done. Otherwise we resort to Lagrangian relaxation (LR). Here we describe the technique as it applies to our task, referring the interested reader to Rush and Collins (2012) for a more general introduction to Lagrangian relaxation in the context of structured prediction problems. In our case, we begin by encoding a graph G = ⟨VG, EG⟩as a binary vector. For each edge e in the fully dense multigraph D, we associate a bi1430 nary variable ze = 1{e ∈EG}, where 1{P} is the indicator function, taking value 1 if the proposition P is true, 0 otherwise. The collection of ze form a vector z ∈{0, 1}|ED|. Determinism constraints can be encoded as a set of linear inequalities. For example, the constraint that vertex u has no more than one outgoing ARG0 can be encoded with the inequality: X v∈V 1{u ARG0 −−−→v ∈EG} = X v∈V z u ARG0 −−−→v ≤1. All of the determinism constraints can collectively be encoded as one system of inequalities: Az ≤b, with each row Ai in A and its corresponding entry bi in b together encoding one constraint. For the previous example we have a row Ai that has 1s in the columns corresponding to edges outgoing from u with label ARG0 and 0’s elsewhere, and a corresponding element bi = 1 in b. The score of graph G (encoded as z) can be written as the objective function φ⊤z, where φe = ψ⊤g(e). To handle the constraint Az ≤b, we introduce multipliers µ ≥0 to get the Lagrangian relaxation of the objective function: Lµ(z) = maxz (φ⊤z + µ⊤(b −Az)), z∗ µ = arg maxz Lµ(z). And the dual objective: L(z) = min µ≥0 Lµ(z), z∗= arg maxz L(z). Conveniently, Lµ(z) decomposes over edges: Lµ(z) = maxz (φ⊤z + µ⊤(b −Az)) = maxz (φ⊤z −µ⊤Az) = maxz ((φ −A⊤µ)⊤z). So for any µ, we can find z∗ µ by assigning edges the new Lagrangian adjusted weights φ −A⊤µ and reapplying the algorithm described in §4.1. 
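Putting the pieces together, the relaxation is a short loop around the unconstrained decoder. The sketch below is illustrative only: `decode` is assumed to re-run the §4.1 algorithm on adjusted edge scores and return a binary edge-indicator vector, and A, b encode the determinism constraints as just described. The stepsize of 1 and the cap of 500 iterations are the settings reported later in this section, and the projected subgradient update on µ is spelled out formally in the next paragraph.

```python
import numpy as np

def lagrangian_relaxation(phi, A, b, decode, step_size=1.0, max_steps=500):
    """Enforce Az <= b on top of an unconstrained decoder.

    phi    : vector of edge scores (phi_e = psi . g(e))
    A, b   : linear encoding of the determinism constraints
    decode : callable taking adjusted edge scores and returning a binary
             edge-indicator vector z (e.g., by re-running MSCG)
    """
    mu = np.zeros(len(b))
    z = None
    for _ in range(max_steps):
        z = decode(phi - A.T @ mu)          # mu = 0 on the first pass
        violation = A @ z - b
        if np.all(violation <= 0):
            return z                        # all determinism constraints hold
        # Projected subgradient step on the dual; clip mu back to mu >= 0.
        mu = np.maximum(0.0, mu + step_size * violation)
    return z   # duality gap: return the last decode, possibly violating a constraint
```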
We can find z∗by projected subgradient descent, by starting with µ = 0, and taking steps in the direction: −∂Lµ ∂µ (z∗ µ) = Az∗ µ. If any components of µ are negative after taking a step, they are set to zero. L(z) is an upper bound on the unrelaxed objective function φ⊤z, and is equal to it if and only if the constraints Az ≤b are satisfied. If L(z∗) = φ⊤z∗, then z∗is also the optimal solution to the constrained solution. Otherwise, there exists a duality gap, and Lagrangian relaxation has failed. In that case we still return the subgraph encoded by z∗, even though it might violate one or more constraints. Techniques from integer programming such as branch-and-bound or cutting-planes methods could be used to find an optimal solution when LR fails (Das et al., 2012), but we do not use these techniques here. In our experiments, with a stepsize of 1 and max number of steps as 500, Lagrangian relaxation succeeds 100% of the time in our data. 4.3 Focus Identification In AMR, one node must be marked as the focus of the sentence. We notice this can be accomplished within the relation identification step: we add a special concept node root to the dense graph D, and add an edge from root to every other node, giving each of these edges the label FOCUS. We require that root have at most one outgoing FOCUS edge. Our system has two feature types for this edge: the concept it points to, and the shortest dependency path from a word in the span to the root of the dependency tree. 5 Automatic Alignments In order to train the parser, we need alignments between sentences in the training data and their annotated AMR graphs. More specifically, we need to know which spans of words invoke which concept fragments in the graph. To do this, we built an automatic aligner and tested its performance on a small set of alignments we annotated by hand. The automatic aligner uses a set of rules to greedily align concepts to spans. The list of rules is given in Table 2. The aligner proceeds down the list, first aligning named-entities exactly, then fuzzy matching named-entities, then date-entities, etc. For each rule, an entire pass through the AMR graph is done. The pass considers every concept in the graph and attempts to align a concept fragment rooted at that concept if the rule can apply. Some rules only apply to a particular type of concept fragment, while others can apply to any concept. For example, rule 1 can apply to any NAME concept and its OP children. It searches the sentence 1431 for a sequence of words that exactly matches its OP children and aligns them to the NAME and OP children fragment. Concepts are considered for alignment in the order they are listed in the AMR annotation (left to right, top to bottom). Concepts that are not aligned in a particular pass may be aligned in subsequent passes. Concepts are aligned to the first matching span, and alignments are mutually exclusive. Once aligned, a concept in a fragment is never realigned.7 However, more concepts can be attached to the fragment by rules 8–14. We use WordNet to generate candidate lemmas, and we also use a fuzzy match of a concept, defined to be a word in the sentence that has the longest string prefix match with that concept’s label, if the match length is ≥4. If the match length is < 4, then the concept has no fuzzy match. For example the fuzzy match for ACCUSE-01 could be “accusations” if it is the best match in the sentence. WordNet lemmas and fuzzy matches are only used if the rule explicitly uses them. 
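The fuzzy match is concrete enough to write down. The sketch below follows the description above (strip a trailing sense suffix such as -01, lowercase, require a shared prefix of at least four characters); how ties between equally long prefixes are broken is not specified, so taking the first best token here is an assumption.

```python
import re

def fuzzy_match(concept_label, tokens, min_len=4):
    """Pick the sentence token with the longest common prefix with the concept label."""
    label = re.sub(r"-[0-9]+$", "", concept_label).lower()
    best_token, best_len = None, 0
    for token in tokens:
        tok = token.lower()
        overlap = 0
        for a, b in zip(label, tok):
            if a != b:
                break
            overlap += 1
        if overlap > best_len:
            best_token, best_len = token, overlap
    # A match shorter than min_len characters means the concept has no fuzzy match.
    return best_token if best_len >= min_len else None

# e.g. fuzzy_match("accuse-01", ["the", "accusations", "were", "denied"])
# returns "accusations" (shared prefix "accus", length 5).
```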
All tokens and concepts are lowercased before matches or fuzzy matches are done. On the 200 sentences of training data we aligned by hand, the aligner achieves 92% precision, 89% recall, and 90% F1 for the alignments. 6 Training We now describe how to train the two stages of the parser. The training data for the concept identification stage consists of (X, Y ) pairs: • Input: X, a sentence annotated with named entities (person, organization, location, misciscellaneous) from the Illinois Named Entity Tagger (Ratinov and Roth, 2009), and part-ofspeech tags and basic dependencies from the Stanford Parser (Klein and Manning, 2003; de Marneffe et al., 2006). • Output: Y , the sentence labeled with concept subgraph fragments. The training data for the relation identification stage consists of (X, Y ) pairs: 7As an example, if “North Korea” shows up twice in the AMR graph and twice in the input sentence, then the first “North Korea” concept fragment listed in the AMR gets aligned to the first “North Korea” mention in the sentence, and the second fragment to the second mention (because the first span is already aligned when the second “North Korea” concept fragment is considered, so it is aligned to the second matching span). 1. (Named Entity) Applies to name concepts and their opn children. Matches a span that exactly matches its opn children in numerical order. 2. (Fuzzy Named Entity) Applies to name concepts and their opn children. Matches a span that matches the fuzzy match of each child in numerical order. 3. (Date Entity) Applies to date-entity concepts and their day, month, year children (if exist). Matches any permutation of day, month, year, (two digit or four digit years), with or without spaces. 4. (Minus Polarity Tokens) Applies to - concepts, and matches “no”, “not”, “non.” 5. (Single Concept) Applies to any concept. Strips off trailing ‘-[0-9]+’ from the concept (for example run-01 →run), and matches any exact matching word or WordNet lemma. 6. (Fuzzy Single Concept) Applies to any concept. Strips off trailing ‘-[0-9]+’, and matches the fuzzy match of the concept. 7. (U.S.) Applies to name if its op1 child is united and its op2 child is states. Matches a word that matches “us”, “u.s.” (no space), or “u. s.” (with space). 8. (Entity Type) Applies to concepts with an outgoing name edge whose head is an aligned fragment. Updates the fragment to include the unaligned concept. Ex: continent in (continent :name (name :op1 "Asia")) aligned to “asia.” 9. (Quantity) Applies to .*-quantity concepts with an outgoing unit edge whose head is aligned. Updates the fragment to include the unaligned concept. Ex: distance-quantity in (distance-quantity :unit kilometer) aligned to “kilometres.” 10. (Person-Of, Thing-Of) Applies to person and thing concepts with an outgoing .*-of edge whose head is aligned. Updates the fragment to include the unaligned concept. Ex: person in (person :ARG0-of strike-02) aligned to “strikers.” 11. (Person) Applies to person concepts with a single outgoing edge whose head is aligned. Updates the fragment to include the unaligned concept. Ex: person in (person :poss (country :name (name :op1 "Korea"))) 12. (Goverment Organization) Applies to concepts with an incoming ARG.*-of edge whose tail is an aligned government-organization concept. Updates the fragment to include the unaligned concept. Ex: govern-01 in (government-organization :ARG0-of govern-01) aligned to “government.” 13. 
(Minus Polarity Prefixes) Applies to - concepts with an incoming polarity edge whose tail is aligned to a word beginning with “un”, “in”, or “il.” Updates the fragment to include the unaligned concept. Ex: - in (employ-01 :polarity -) aligned to “unemployment.” 14. (Degree) Applies to concepts with an incoming degree edge whose tail is aligned to a word ending is “est.” Updates the fragment to include the unaligned concept. Ex: most in (large :degree most) aligned to “largest.” Table 2: Rules used in the automatic aligner. 1432 • Input: X, the sentence labeled with graph fragments, as well as named enties, POS tags, and basic dependencies as in concept identification. • Output: Y , the sentence with a full AMR parse.8 Alignments are used to induce the concept labeling for the sentences, so no annotation beyond the automatic alignments is necessary. We train the parameters of the stages separately using AdaGrad (Duchi et al., 2011) with the perceptron loss function (Rosenblatt, 1957; Collins, 2002). We give equations for concept identification parameters θ and features f(X, Y ). For a sentence of length k, and spans b labeled with a sequence of concept fragments c, the features are: f(X, Y ) = Pk i=1 f(wbi−1:bi, bi−1, bi, ci) To train with AdaGrad, we process examples in the training data ((X1, Y 1), . . . , (XN, Y N)) one at a time. At time t, we decode (§3) to get ˆY t and compute the subgradient: st = f(Xt, ˆY t) −f(Xt, Y t) We then update the parameters and go to the next example. Each component i of the parameter vector gets updated like so: θt+1 i = θt i − η qPt t′=1 st′ i st i η is the learning rate which we set to 1. For relation identification training, we replace θ and f(X, Y ) in the above equations with ψ and g(X, Y ) = P e∈EG g(e). We ran AdaGrad for ten iterations for concept identification, and five iterations for relation identification. The number of iterations was chosen by early stopping on the development set. 7 Experiments We evaluate our parser on the newswire section of LDC2013E117 (deft-amr-release-r3-proxy.txt). Statistics about this corpus and our train/dev./test splits are given in Table 3. 8Because the alignments are automatic, some concepts may not be aligned, so we cannot compute their features. We remove the unaligned concepts and their edges from the full AMR graph for training. Thus some graphs used for training may in fact be disconnected. Split Document Years Sentences Tokens Train 1995-2006 4.0k 79k Dev. 2007 2.1k 40k Test 2008 2.1k 42k Table 3: Train/dev./test split. Train Test P R F1 P R F1 .92 .90 .91 .90 .79 .84 Table 4: Concept identification performance. For the performance of concept identification, we report precision, recall, and F1 of labeled spans using the induced labels on the training and test data as a gold standard (Table 4). Our concept identifier achieves 84% F1 on the test data. Precision is roughly the same between train and test, but recall is worse on test, implicating unseen concepts as a significant source of errors on test data. We evaluate the performance of the full parser using Smatch v1.0 (Cai and Knight, 2013), which counts the precision, recall and F1 of the concepts and relations together. Using the full pipeline (concept identification and relation identification stages), our parser achieves 58% F1 on the test data (Table 5). Using gold concepts with the relation identification stage yields a much higher Smatch score of 80% F1. 
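Returning briefly to the AdaGrad updates of §6: in executable form the per-coordinate step is θ_i ← θ_i − (η / √(Σ_{t′≤t} (s_i^{t′})²)) · s_i^t. The sketch below is illustrative rather than the authors' code; `feats` and `decode` are placeholder interfaces returning NumPy vectors, and, as stated above, η = 1 with ten passes for concept identification and five for relation identification.

```python
import numpy as np

def adagrad_perceptron(examples, feats, decode, dim, eta=1.0, iterations=10):
    """Structured perceptron loss optimized with per-coordinate AdaGrad steps.

    examples : list of (x, y_gold) pairs
    feats    : callable (x, y) -> feature vector of length `dim`
    decode   : callable (x, theta) -> highest-scoring y under theta
    """
    theta = np.zeros(dim)
    g2 = np.zeros(dim)                    # running sum of squared subgradients
    for _ in range(iterations):
        for x, y_gold in examples:
            y_hat = decode(x, theta)
            s = feats(x, y_hat) - feats(x, y_gold)   # perceptron subgradient
            g2 += s ** 2
            nonzero = g2 > 0                         # avoid dividing by zero
            theta[nonzero] -= eta * s[nonzero] / np.sqrt(g2[nonzero])
    return theta
```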
As a comparison, AMR Bank annotators have a consensus inter-annotator agreement Smatch score of 83% F1. The runtime of our system is given in Figure 3. The large drop in performance of 22% F1 when moving from gold concepts to system concepts suggests that joint inference and training for the two stages might be helpful. 8 Related Work Our approach to relation identification is inspired by graph-based techniques for non-projective syntactic dependency parsing. Minimum spanning tree algorithms—specifically, the optimum branching algorithm of Chu and Liu (1965) and Edmonds (1967)—were first used for dependency parsing by McDonald et al. (2005). Later exTrain Test concepts P R F1 P R F1 gold .85 .95 .90 .76 .84 .80 automatic .69 .78 .73 .52 .66 .58 Table 5: Parser performance. 1433 0 10 20 30 40 0.0 0.1 0.2 0.3 0.4 0.5 sentence length (words) average runtime (seconds) Figure 3: Runtime of JAMR (all stages). tensions allow for higher-order (non–edge-local) features, often making use of relaxations to solve the NP-hard optimization problem. Mcdonald and Pereira (2006) incorporated second-order features, but resorted to an approximate algorithm. Others have formulated the problem as an integer linear program (Riedel and Clarke, 2006; Martins et al., 2009). TurboParser (Martins et al., 2013) uses AD3 (Martins et al., 2011), a type of augmented Lagrangian relaxation, to integrate third-order features into a CLE backbone. Future work might extend JAMR to incorporate additional linguistically motivated constraints and higher-order features. The task of concept identification is similar in form to the problem of Chinese word segmentation, for which semi-Markov models have successfully been used to incorporate features based on entire spans (Andrew, 2006). While all semantic parsers aim to transform natural language text to a formal representation of its meaning, there is wide variation in the meaning representations and parsing techniques used. Space does not permit a complete survey, but we note some connections on both fronts. Interlinguas (Carbonell et al., 1992) are an important precursor to AMR. Both formalisms are intended for use in machine translation, but AMR has an admitted bias toward the English language. First-order logic representations (and extensions using, e.g., the λ-calculus) allow variable quantification, and are therefore more powerful. In recent research, they are often associated with combinatory categorial grammar (Steedman, 1996). There has been much work on statistical models for CCG parsing (Zettlemoyer and Collins, 2005; Zettlemoyer and Collins, 2007; Kwiatkowski et al., 2010, inter alia), usually using chart-based dynamic programming for inference. Natural language interfaces for querying databases have served as another driving application (Zelle and Mooney, 1996; Kate et al., 2005; Liang et al., 2011, inter alia). The formalisms used here are richer in logical expressiveness than AMR, but typically use a smaller set of concept types—only those found in the database. In contrast, semantic dependency parsing—in which the vertices in the graph correspond to the words in the sentence—is meant to make semantic parsing feasible for broader textual domains. Alshawi et al. (2011), for example, use shift-reduce parsing to map sentences to natural logical form. AMR parsing also shares much in common with tasks like semantic role labeling and framesemantic parsing (Gildea and Jurafsky, 2002; Punyakanok et al., 2008; Das et al., 2014, inter alia). 
In these tasks, predicates are often disambiguated to a canonical word sense, and roles are filled by spans (usually syntactic constituents). They consider each predicate separately, and produce a disconnected set of shallow predicate-argument structures. AMR, on the other hand, canonicalizes both predicates and arguments to a common concept label space. JAMR reasons about all concepts jointly to produce a unified representation of the meaning of an entire sentence. 9 Conclusion We have presented the first published system for automatic AMR parsing, and shown that it provides a strong baseline based on the Smatch evaluation metric. We also present an algorithm for finding the maximum, spanning, connected subgraph and show how to incorporate extra constraints with Lagrangian relaxation. Our featurebased learning setup allows the system to be easily extended by incorporating new feature sources. Acknowledgments The authors gratefully acknowledge helpful correspondence from Kevin Knight, Ulf Hermjakob, and Andr´e Martins, and helpful feedback from Nathan Schneider, Brendan O’Connor, Waleed Ammar, and the anonymous reviewers. This work was sponsored by the U. S. Army Research Laboratory and the U. S. Army Research Office under contract/grant number W911NF-10-1-0533 and DARPA grant FA8750-12-2-0342 funded under the DEFT program. 1434 References Hiyan Alshawi, Pi-Chuan Chang, and Michael Ringgaard. 2011. Deterministic statistical mapping of sentences to underspecified semantics. In Proc. of ICWS. Galen Andrew. 2006. A hybrid markov/semi-markov conditional random field for sequence segmentation. In Proc. of EMNLP. Laura Banarescu, Claire Bonial, Shu Cai, Madalina Georgescu, Kira Griffitt, Ulf Hermjakob, Kevin Knight, Philipp Koehn, Martha Palmer, and Nathan Schneider. 2013. Abstract meaning representation for sembanking. In Proc. of the Linguistic Annotation Workshop and Interoperability with Discourse. Shu Cai and Kevin Knight. 2013. Smatch: an evaluation metric for semantic feature structures. In Proc. of ACL. Jaime G. Carbonell, Teruko Mitamura, and Eric H. Nyberg. 1992. The KANT perspective: A critique of pure transfer (and pure interlingua, pure transfer, . . . ). In Proc. of the Fourth International Conference on Theoretical and Methodological Issues in Machine Translation: Empiricist vs. Rationalist Methods in MT. David Chiang, Jacob Andreas, Daniel Bauer, Karl Moritz Hermann, Bevan Jones, and Kevin Knight. 2013. Parsing graphs with hyperedge replacement grammars. In Proc. of ACL. Y. J. Chu and T. H. Liu. 1965. On the shortest arborescence of a directed graph. Science Sinica, 14:1396– 1400. Michael Collins. 2002. Discriminative training methods for hidden Markov models: Theory and experiments with perceptron algorithms. In Proc. of EMNLP. Dipanjan Das, Andr´e F. T. Martins, and Noah A. Smith. 2012. An exact dual decomposition algorithm for shallow semantic parsing with constraints. In Proc. of the Joint Conference on Lexical and Computational Semantics. Dipanjan Das, Desai Chen, Andr´e F. T. Martins, Nathan Schneider, and Noah A. Smith. 2014. Frame-semantic parsing. Computational Linguistics, 40(1):9–56. Donald Davidson. 1967. The logical form of action sentences. In Nicholas Rescher, editor, The Logic of Decision and Action, pages 81–120. Univ. of Pittsburgh Press. Marie-Catherine de Marneffe, Bill MacCartney, and Christopher D. Manning. 2006. Generating typed dependency parses from phrase structure parses. In Proc. of LREC. Bonnie Dorr, Nizar Habash, and David Traum. 1998. 
A thematic hierarchy for efficient generation from lexical-conceptual structure. In David Farwell, Laurie Gerber, and Eduard Hovy, editors, Machine Translation and the Information Soup: Proc. of AMTA. Frank Drewes, Hans-J¨org Kreowski, and Annegret Habel. 1997. Hyperedge replacement graph grammars. In Handbook of Graph Grammars, pages 95– 162. World Scientific. John Duchi, Elad Hazan, and Yoram Singer. 2011. Adaptive subgradient methods for online learning and stochastic optimization. Journal of Machine Learning Research, 12:2121–2159, July. Jack Edmonds. 1967. Optimum branchings. National Bureau of Standards. Marshall L. Fisher. 2004. The Lagrangian relaxation method for solving integer programming problems. Management Science, 50(12):1861–1871. Arthur M Geoffrion. 1974. Lagrangean relaxation for integer programming. Springer. Daniel Gildea and Daniel Jurafsky. 2002. Automatic labeling of semantic roles. Computational Linguistics, 28(3):245–288. Jacques Janssen and Nikolaos Limnios. 1999. SemiMarkov Models and Applications. Springer, October. Bevan Jones, Jacob Andreas, Daniel Bauer, Karl Moritz Hermann, and Kevin Knight. 2012. Semantics-based machine translation with hyperedge replacement grammars. In Proc. of COLING. Rohit J. Kate, Yuk Wah Wong, and Raymond J. Mooney. 2005. Learning to transform natural to formal languages. In Proc. of AAAI. Dan Klein and Christopher D. Manning. 2003. Accurate unlexicalized parsing. In Proc. of ACL. Joseph B. Kruskal. 1956. On the shortest spanning subtree of a graph and the traveling salesman problem. Proc. of the American Mathematical Society, 7(1):48. Tom Kwiatkowski, Luke Zettlemoyer, Sharon Goldwater, and Mark Steedman. 2010. Inducing probabilistic CCG grammars from logical form with higherorder unification. In Proc. of EMNLP. Percy Liang, Michael I. Jordan, and Dan Klein. 2011. Learning dependency-based compositional semantics. In Proc. of ACL. Andr´e F. T. Martins, Noah A. Smith, and Eric P. Xing. 2009. Concise integer linear programming formulations for dependency parsing. In Proc. of ACL. 1435 Andr´e F. T. Martins, Noah A. Smith, Pedro M. Q. Aguiar, and M´ario A. T. Figueiredo. 2011. Dual decomposition with many overlapping components. In Proc. of EMNLP. Andr´e F. T. Martins, Miguel Almeida, and Noah A. Smith. 2013. Turning on the turbo: Fast third-order non-projective Turbo parsers. In Proc. of ACL. Ryan Mcdonald and Fernando Pereira. 2006. Online learning of approximate dependency parsing algorithms. In Proc. of EACL, page 81–88. Ryan McDonald, Fernando Pereira, Kiril Ribarov, and Jan Hajiˇc. 2005. Non-projective dependency parsing using spanning tree algorithms. In Proc. of EMNLP. Terence Parsons. 1990. Events in the Semantics of English: A study in subatomic semantics. MIT Press. Robert C. Prim. 1957. Shortest connection networks and some generalizations. Bell System Technology Journal, 36:1389–1401. Vasin Punyakanok, Dan Roth, and Wen-tau Yih. 2008. The importance of syntactic parsing and inference in semantic role labeling. Computational Linguistics, 34(2):257–287. Lev Ratinov and Dan Roth. 2009. Design challenges and misconceptions in named entity recognition. In Proc. of CoNLL. Sebastian Riedel and James Clarke. 2006. Incremental integer linear programming for non-projective dependency parsing. In Proc. of EMNLP. Frank Rosenblatt. 1957. The perceptron–a perceiving and recognizing automaton. Technical Report 85460-1, Cornell Aeronautical Laboratory. Alexander M. Rush and Michael Collins. 2012. 
A tutorial on dual decomposition and Lagrangian relaxation for inference in natural language processing. Journal of Artificial Intelligence Research, 45(1):305–362. Mark Steedman. 1996. Surface structure and interpretation. Linguistic inquiry monographs. MIT Press. John M. Zelle and Raymond J. Mooney. 1996. Learning to parse database queries using inductive logic programming. In Proc. of AAAI. Luke S. Zettlemoyer and Michael Collins. 2005. Learning to map sentences to logical form: Structured classification with probabilistic categorial grammars. In Proc. of UAI. Luke S. Zettlemoyer and Michael Collins. 2007. Online learning of relaxed CCG grammars for parsing to logical form. In Proc. of EMNLP-CoNLL.
2014
134
Proceedings of the 52nd Annual Meeting of the Association for Computational Linguistics, pages 1437–1447, Baltimore, Maryland, USA, June 23-25 2014. c⃝2014 Association for Computational Linguistics Context-dependent Semantic Parsing for Time Expressions Kenton Lee†, Yoav Artzi†, Jesse Dodge‡∗, and Luke Zettlemoyer† † Computer Science & Engineering, University of Washington, Seattle, WA {kentonl, yoav, lsz}@cs.washington.edu ‡ Language Technologies Institute, Carnegie Mellon University, Pittsburgh, PA [email protected] Abstract We present an approach for learning context-dependent semantic parsers to identify and interpret time expressions. We use a Combinatory Categorial Grammar to construct compositional meaning representations, while considering contextual cues, such as the document creation time and the tense of the governing verb, to compute the final time values. Experiments on benchmark datasets show that our approach outperforms previous stateof-the-art systems, with error reductions of 13% to 21% in end-to-end performance. 1 Introduction Time expressions present a number of challenges for language understanding systems. They have rich, compositional structure (e.g., “2nd Friday of July”), can be easily confused with non-temporal phrases (e.g., the word “May” can be a month name or a verb), and can vary in meaning in different linguistic contexts (e.g., the word “Friday” refers to different dates in the sentences “We met on Friday” and “We will meet on Friday”). Recovering the meaning of time expressions is therefore challenging, but provides opportunities to study context-dependent language use. In this paper, we present the first context-dependent semantic parsing approach for learning to identify and interpret time expressions, addressing all three challenges. Existing state-of-the-art methods use handengineered rules for reasoning about time expressions (Str¨otgen and Gertz, 2013). This includes both detection, identifying a phrase as a time expression, and resolution, mapping such a phrase into a standardized time value. While rule-based approaches provide a natural way to express expert knowledge, it is relatively difficult to en∗Work conducted at the University of Washington. code preferences between similar competing hypotheses and provide prediction confidence. Recently, methods for learning probabilistic semantic parsers have been shown to address such limitations (Angeli et al., 2012; Angeli and Uszkoreit, 2013). However, these approaches do not account for any surrounding linguistic context and were mainly evaluated with gold standard mentions. We propose to use a context-dependent semantic parser for both detection and resolution of time expressions. For both tasks, we make use of a hand-engineered Combinatory Categorial Grammar (CCG) to construct a set of meaning representations that identify the time being described. For example, this grammar maps the phrase “2nd Friday of July” to the meaning representation intersect(nth(2, friday), july), which encodes the set of all such days. Detection is then performed with a binary classifier to prune the set of text spans that can be parsed with the grammar (e.g., to tell that “born in 2000” has a time expression but “a 2000 piece puzzle” does not). For resolution, we consider mentions sequentially and use a log-linear model to select the most likely meaning for each. 
This choice depends on contextual cues such as previous time expressions and the tense of the governing verb (e.g., as required to correctly resolve cases like “We should meet on the 2nd Friday of July”). Such an approach provides a good balance between hand engineering and learning. For the relatively closed-class time expressions, we demonstrate that it is possible to engineer a high quality CCG lexicon. We take a data-driven approach for grammar design, preferring a grammar with high coverage even if it results in parsing ambiguities. We then learn a model to accurately select between competing parses and incorporate signals from the surrounding context, both more difficult to model with deterministic rules. For both problems, we learn from TimeML an1437 notations (Pustejovsky et al., 2005), which mark mentions and the specific times they reference. Training the detector is a supervised learning problem, but resolution is more challenging, requiring us to reason about latent parsing and context-dependent decisions. We evaluate performance in two domains: the TempEval-3 corpus of newswire text (Uzzaman et al., 2013) and the WikiWars corpus of Wikipedia history articles (Mazur and Dale, 2010). On these benchmark datasets, we present new state-of-theart results, with error reductions of up to 28% for the detection task and 21% for the end-to-end task. 2 Formal Overview Time Expressions We follow the TIMEX3 standard (Pustejovsky et al., 2005) for defining time expressions within documents. Let a document D = ⟨w1, . . . , wn⟩be a sequence of n words wi and a mention m = (i, j) indicate start and end indices for a phrase ⟨wi, . . . , wj⟩in D. Define a time expression e = (t, v) to include both a temporal type t and value v.1 The temporal type t ∈{Date, Time, Duration, Set} can take one of four possible values, indicating if the expression e is a date (e.g., “January 10, 2014”), time (e.g., “11:59 pm”), duration (e.g., “6 months”), or set (e.g., “every year”). The value v is an extension of the ISO 8601 standard, which encodes the time that mention m refers to in the context provided by document D. For example, in a document written on Tuesday, January 7, 2014, “Friday,” “three days later,” and “January 10th” would all resolve to the value 2014-01-10. The time values are similarly defined for a wide range of expressions, such as underspecified dates (e.g., XXXX-01-10 for “Janunary 10th” when the year is not inferable from context) and durations (P2D for “two days”). Tasks Our goal is to find all time expressions in an input document. We divide the problem into two parts: detection and resolution. The detection problem is to take an input document D and output a mention set M = {mi | i = 1 . . . n} of phrases in D that describe time expressions. The resolution problem (often also called normalization) is, given a document D and a set of mentions M, to 1Time expressions also have optional modifier values for non-TIMEX properties (e.g., the modifier would contain EARLY for the phrase “early march”). We do recover these modifiers but omit them from the discussion since they are not part of the official evaluation metrics. map each m ∈M to the referred time expression e. This paper addresses both of these tasks. Approach We learn separate, but related, models for detection and resolution. For both tasks, we define the space of possible compositional meaning representations Z, where each z ∈Z defines a unique time expression e. 
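Concretely, the objects involved are token-index spans paired with typed values. The toy structures below merely illustrate the TIMEX3 conventions described in this section; they are not code from the system.

```python
from dataclasses import dataclass

@dataclass
class Mention:
    start: int          # index i of the first token of the phrase
    end: int            # index j of the last token of the phrase

@dataclass
class TimeExpression:
    temporal_type: str  # one of "Date", "Time", "Duration", "Set"
    value: str          # extended ISO 8601 value

# With a document creation time of 2014-01-07 (a Tuesday), the mentions
# "Friday", "three days later", and "January 10th" all resolve to:
friday = TimeExpression("Date", "2014-01-10")
# An underspecified date, when the year is not inferable from context:
jan_10 = TimeExpression("Date", "XXXX-01-10")
# A duration of two days:
two_days = TimeExpression("Duration", "P2D")
```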
We use a log-linear CCG (Steedman, 1996; Clark and Curran, 2007) to rank possible meanings z ∈Z for each mention m in a document D, as described in Section 4. Both detection (Section 5) and resolution (Section 6) rely on the semantic parser to identify likely mentions and resolve them within context. For learning we assume access to TimeML data containing documents labeled with time expressions. Each document D has a set {(mi, ei)|i = 1 . . . n}, where each mention mi marks a phrase that resolves to the time expression ei. Evaluation We evaluate performance (Section 8) for both newswire text and Wikipedia articles. We compare to the state-of-the-art systems for end-to-end resolution (Str¨otgen and Gertz, 2013) and resolution given gold mentions (Bethard, 2013b), both of which do not use any machine learning techniques. 3 Representing Time We use simply typed lambda calculus to represent time expressions. Our representation draws heavily from the representation proposed by Angeli et al. (2012), who introduced semantic parsing for this task. There are five primitive types: duration d, sequence s, range r, approximate reference a, and numeral n, as described below. Table 1 lists the available constants for each type. Duration A period of time. Each duration is a multiple of one of a closed set of possible base durations (e.g., hour, day, and quarter), which we refer to as its granularity. Table 1 includes the complete set of base durations used. Range A specific interval of time, following an interval-based theory of time (Allen, 1981). The interval length is one of the base durations, which is the granularity of the range. Given two ranges R and R′, we say that R ⊆R′ if the endpoints of R lie on or within R′. Sequence A set of ranges with identical granularity. The granularity of the sequence is that of its members. For example, thursday, which has a 1438 Type Primitive Constants Duration second, minute, hour, timeofday, day, month, season, quarter, weekend, week, year, decade, century , temp d Sequence monday, tuesday, wednesday, thursday, friday, saturday, sunday, january, february, march, april, may, june, july, august, september, october, november, december, winter, spring, summer, fall, night, morning, afternoon, evening Range ref time Approximate reference present, future, past, unknown Numeral 1, 2, 3, 1999, 2000, 2001, ... Table 1: The types and primitive logical constants supported by the logical language for time. day granularity, denotes the set of all day-granular ranges enclosing specific Thursdays. Given a range R and sequence S, we say that R ∈S if R is a member of S. Given two sequences S and S′ we say that S ⊆S′ if R ∈S implies R ∈S′. Approximate Reference An approximate time relative to the reference time. For example, past and future. To handle mentions such as “a while,” we add the constant unknown. Numeral An integer, for example, 5 or 1990. Numerals are used to denote specific ranges, such as the year 2001, or to modify a duration’s length. Functions We also allow for functional types, for example ⟨s, r⟩is assigned to a function that maps from sequences to ranges. Table 2 lists all supported functions with example mentions. Context Dependent Constants To mark places where context-dependent choices will need to be made during resolution, we use two placeholder constants. First, ref time denotes the mention reference time, which is later set to either the document time or a previously resolved mention. 
Second, temp d is used in the shift function to determine its return granularity, as described in Table 2, and is later replaced with the granularity of either the first or second argument of the enclosing shift function. Section 4.3 describes how these decisions are made. 4 Parsing Time Expressions We define a three-step derivation to resolve mentions to their TIMEX3 value. First, we use a CCG to generate an initial logical form for the mention. Next, we apply a set of operations that modify the one week ago C N NP\NP 1 week λx.shift(ref time, −1 × x, temp d) N/N λx.1 × x > N 1 × week NP 1 × week < NP shift(ref time, −1 × 1 × week, temp d) Figure 1: A CCG parse tree for the mention “one week ago.” The tree includes forward (>) and backward (<) application, as well as two type-shifting operations initial logical form, as appropriate for its context. Finally, the logical form is resolved to a TIMEX3 value using a deterministic process. 4.1 Combinatory Categorial Grammars CCG is a linguistically motivated categorial formalism for modeling a wide range of language phenomena (Steedman, 1996; Steedman, 2000). A CCG is defined by a lexicon and a set of combinators. The lexicon pairs words with categories and the combinators define how to combine categories to create complete parse trees. For example, Figure 1 shows a CCG parse tree for the phrase “one week ago.” The parse tree is read top to bottom, starting from assigning categories to words using the lexicon. The lexical entry ago ⊢NP\NP : λx.shift(ref time, −1 × x, temp d) for the word “ago” pairs it with a category that has syntactic type NP\NP and semantics λx.shift(ref time, −1 × x, temp d). Each intermediate parse node is then constructed by applying one of a small set of binary or unary operations (Steedman, 1996; Steedman, 2000), which modify both the syntax and semantics. We use backward (<) and forward (>) application and several unary type-shifting rules to handle number combinations. For example, in Figure 1 the category of the span “one week” is combined with the category of “ago” using backward application (<). Parsing concludes with a logical form representing the meaning of the complete mention. Hand Engineered Lexicon To parse time expressions, we use a CCG lexicon that includes 287 manually designed entries, along with automatically generated entries such as numbers and common formats of dates and times. Figure 2 shows example entries from our lexicon. 1439 Function Description Example Operations on durations. ×⟨n,⟨d,d⟩⟩ Given a duration D and a numeral N, return a duration D′ that is N times longer than D. “after three days of questioning” 3 × day some⟨d,d⟩ Given a duration D, returns D′, s.t. D′ is the result of D×n for some n > 1. “he left for a few days” some(day) seq⟨d,s⟩ Given a duration D, generate a sequence S, s.t. S includes all ranges of type D. “went to last year’s event” previous(seq(year), ref time) Operations for extracting a specific range from a sequence. this⟨s,⟨r,r⟩⟩ Given a sequence S and a range R, returns the range R′ ∈ S, s.t. there exists a range R′′ where R ⊆R′′ and R′ ⊆ R′′, and the length of R′′ is minimal. “a meeting this friday” this(friday, ref time) next⟨s,⟨r,r⟩⟩ previous⟨s,⟨r,r⟩⟩ Given a sequence S and a range R, returns the range R′ ∈S that is the one after/before this(S, R). “arriving next month” next(seq(month), ref time) nearest forward⟨s,⟨r,r⟩⟩ nearest backward⟨s,⟨r,r⟩⟩ Given a sequence S and a range R, returns the range R′ ∈S that is closest to R in the forward/backward direction. 
“during the coming weekend” nearest forward(seq(weekend), ref time) Operations for sequences. nth⟨n,⟨s,⟨s,s⟩⟩⟩ nth⟨n,⟨s,s⟩⟩ Given a number N, a sequence S and a sequence S′, returns a sequence S′′ ⊆S s.t. for each Q ∈S′′ there exists P ∈S′ and Q is the N-th entry in S that is a sub-interval of P . For the two-argument version, we use heuristics to infer the third argument by determining a sequence of higher granularity that is likely to contain the second argument. “until the second quarter of the year” nth(2, seq(quarter), seq(year)) intersect⟨s,⟨s,s⟩⟩ Given sequences S, S′, where the duration of entries in S is shorter than these in S′, return a sequence S′′ ⊆S, where for each R ∈S′′ there exists R′ ∈S′ s.t. R ⊆R′. “starts on June 28” intersect(june, nth(28, seq(day), seq(month))) shift⟨r,⟨d,⟨d,r⟩⟩⟩ Given a range R, a duration D, and a duration G, return the range R′, s.t. the starting point of R′ is moved by the length of D. R′ is converted to represent a range of granularity G by expanding if G has larger granularity, and is undefined if G has smaller granularity. “a week ago, we went home” shift(ref time, −1 × 1 × week, temp d) Operations on numbers. ×⟨n,⟨n,n⟩⟩ Given two numerals, N′ and N′′, returns a numeral N′′′ representing their product N ′ × N ′′. “the battle lasted for one hundred days” 1 × 100 × day +⟨n,⟨n,n⟩⟩ Given two numerals, N′ and N′′, returns a numeral N ′′′ representing their sum N′ + N′′. “open twenty four hours” (20 + 4) × hour Operations to mark sequences for specific TIMEX3 type annotations. every⟨s,s⟩ Given a sequence S, returns a sequence with SET temporal type. “one dose each day” every(seq(day)) bc⟨s,s⟩ Convert a year to BC. “during five hundred BC” bc(nth(500, seq(year))) Table 2: Functional constants used to build logical expressions for representing time. Manually Designed Entries: several ⊢NP/N : λx.some(x) this ⊢NP/N : λx.this(x, ref time) each ⊢NP/N : λx.every(x) before ⊢N\NP/NP : λx.λy.shift(x, −1 × y, temp d) year ⊢N : year wednesday ⊢N : wednesday ’20s ⊢N : nth(192, seq(decade)) yesterday ⊢N : shift(ref time, −1 × day, temp d) Automatically Generated Entries: 1992 ⊢N : nth(1992, seq(year)) nineteen ninety two ⊢N : nth(1992, seq(year)) 09:30 ⊢N : intersect(nth(10, seq(hour), seq(day)), nth(31, seq(minute), seq(hour))) 3rd ⊢N\N : λx.intersect(x, nth(3, seq(day), seq(month))) Figure 2: Example lexical entries. 4.2 Context-dependent Operations To correctly resolve mentions to TIMEX3 values, the system must account for contextual information from various sources, including previous mentions in the document, the document creation time, and the sentence containing the mention. We consider three types of context operations, each takes as input a logical form z′, modifies it and returns a new logical form z. Each context-dependent parse y specifies one operator of each type, which are applied to the logical form constructed by the CCG grammar, to produce the final, context-dependent logical form LF(y). Reference Time Resolution The logical constant ref time is replaced by either dct, representing the document creation time, or last range, the last r-typed mention resolved in the document. For example, consider the mention “the following year”, which is represented using the logical form next(seq(year), ref time). Within the sentence “1998 was colder than the following year”, the resolution of “the following year” depends on the previous mention “1998”. In contrast, in “The following year will be warmer”, its resolution depends on the document creation time. 
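Viewed operationally, reference time resolution is a substitution over the logical form. The fragment below is purely illustrative: logical forms are rendered as nested tuples (an assumption; the system's actual lambda-calculus structures are not shown here), and both substitutions are generated as candidate resolutions for the model to score.

```python
def substitute(logical_form, placeholder, value):
    """Replace a placeholder constant (e.g. ref_time) throughout a logical form,
    represented here as a nested tuple like ("next", ("seq", "year"), "ref_time")."""
    if logical_form == placeholder:
        return value
    if isinstance(logical_form, tuple):
        return tuple(substitute(arg, placeholder, value) for arg in logical_form)
    return logical_form

# Two candidate resolutions of "the following year":
lf = ("next", ("seq", "year"), "ref_time")
candidates = [substitute(lf, "ref_time", "dct"),          # document creation time
              substitute(lf, "ref_time", "last_range")]   # last resolved range
```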
1440 Directionality Resolution If z′ is s-typed we modify it to nearest forward(z′, ref time), nearest backward(z′, ref time), or z′. For example, given the sentence “. . . will be launched in april”, the mention “april”, and its logical form april, we would like to resolve it to the coming April, and therefore modify it to nearest forward(april, ref time). Shifting Granularity Every occurrence of the logical constant temp d, which is used as an argument to the function shift (see Table 2), is replaced with the granularity of either the first argument, the origin of the shift, or the second argument, the delta of the shift. This determines the final granularity of the output. For example, if the reference time is 2002-01, the mention “two years earlier” would resolve to either a month (since the reference time is of month granularity) or a year (since the delta is of year granularity). 4.3 Resolving Logical Forms For a context-dependent parse y, we compute the TIMEX3 value TM(y) from the logical form z = LF(y) with a deterministic step that performs a single traversal of z. Each primitive logical constant from Table 1 contributes to setting part of the TIMEX3 value (for example, specifying the day of the week) and the functional constants in Table 2 dictate transformations on the TIMEX3 values (for example, shifting forward or backward in time).2 5 Detection The detection problem is to take an input document D and output a mention set M = {mi | i = 1, . . . , n}, where each mention mi indexes a specific phrase in D that delimits a time expression. Algorithm The detection algorithm considers all phrases that our CCG grammar Λ (Section 4) can parse, uses a learned classifier to further filter this set, and finally resolves conflicts between any overlapping predictions. We use a CKY algorithm to efficiently determine which phrases the CCG grammar can parse and only allow logical forms for which there exists some context in which they would produce a valid time expression, e.g. ruling out intersect(monday, tuesday). Finally, we build the set M of non-overlapping mentions using a step similar to non-maximum suppression: 2The full details are beyond the scope of this paper, but an implementation is available on the author’s website. the mentions are sorted by length (longest first) and iteratively added to M, as long as they do not overlap with any mention already in M. Filtering Model Given a mention m, its document D, a feature function φ, the CCG lexicon Λ, and feature weights θ, we use a logistic regression model to define the probability distribution: P(t|m, D; Λ, θ) = eθ·φ(m,D,Λ) 1 + eθ·φ(m,D,Λ) where t indicates whether m is a time expression. Features We use three types of indicator features that test properties of the words in and around the potential mention m. Context tokens Indicate the presence of a set of manually specified tokens near the mention. These include quotations around the mention, the word “old” after the mention, and prepositions of time (such as “in”, “until”, and “during”) before. Part of speech Indicators that pair each word with its part of speech, as assigned by the Stanford tagger (Toutanova et al., 2003). Lexical group Each lexical entry belongs to one of thirteen manually defined lexical groups which cluster entries that contribute to the final time expression similarly. These groups include numbers, days of the week, months, seasons, etc. For each group, we include a feature indicating whether the parse includes a lexical entry from that group. 
Determiner dependency Indicates the presence of a determiner in the mention and whether its parent in the dependency tree (generated by the Stanford parser (de Marneffe et al., 2006)) also resides within the mention. Learning Finally, we construct the training data by considering all spans that (1) the CCG temporal grammar can parse and (2) are not strict subspans of an annotated mention. All spans that exactly matched the gold labels are used as positive examples and all others are negatives. Given this relaxed data, we learn the feature weights θ with L1-regularization. We set the probability threshold for detecting a time expression by optimizing the F1 score over the training data. 6 Resolution The resolution problem is to, given a document D and a set of mentions M, map each m ∈M to the correct time expression e. Section 4 defined 1441 the space of possible time expression that can be constructed for an input mention m in the context of a document D. In general, there will be many different possible derivations, and we will learn a model for selecting the best one. Model Let y be a context-dependent CCG parse, which includes a parse tree TR(y), a set of context operations CNTX(y) applied to the logical form at the root of the tree, a final context-dependent logical form LF(y) and a TIMEX3 value TM(y). Define φ(m, D, y) ∈Rd to be a d-dimensional feature–vector representation and θ ∈Rd to be a parameter vector. The probability of a parse y for mention m and document D is: P(y|m, D; θ, Λ) = eθ·φ(m,D,y) P y′ eθ·φ(m,D,y′) The inference problem at test time requires finding the best resolution by solving y∗(m, D) = arg maxy P(y|m, D; θ, Λ), where the final output TIMEX3 value is TM(y∗(m, D)). Inference We find the best context-dependent parse y by enumeration, as follows. We first parse the input mention m with a CKY-style algorithm, following previous work (Zettlemoyer and Collins, 2005). Due to the short length of time expressions and the manually constructed lexicon, we can perform exact inference. Given a parse, we then enumerate all possible outcomes for the context resolution operators. In practice, there are never more than one hundred possibilities. Features The resolution features test properties of the linguistic context surrounding the mention m, relative to the context-dependent CCG parse y. Governor verb We define the governor verb to be the nearest ancestor verb in the dependency parse of any token in m. We include features indicating the concatenation of the part-of-speech of the governor verb, its auxiliary verb if present, and the selected direction resolution operator (see Section 4.2). This feature helps to distinguish “They met on Friday” from “They will meet on Friday.” Temporal offset If the final logical form LF(y) is a range, we define t to be the time difference between TM(y) and the reference time. For example, if the reference time is 2000-01-10 and the mention resolves to 2000-01-01, then t is -9 days. This feature indicates one of eleven bucketed values for t, including same day, less than a week, less than a month, etc. It allows the model to encode the likely temporal progression of a narrative. This feature is ignored if the granularity of TM(y) or the reference time is greater than a year. Shift granularity The logical constant shift (Table 2) takes three arguments: the origin (range), the delta (duration), and the output granularity (duration). This indicator feature is the concatenation of each argument’s granularity for every shift in LF(y). 
It allows the model to determine whether “a year ago” refers to a year or a day. Reference type Let r denote whether the reference time is the document creation time dct or the last range last range. Let gl and gr denote the granularities of LF(y) and the reference time, respectively. We include features indicating the concatenations: r+gl, r+gr, and r+gl+gr. Additionally, we include features indicating the concatenation of r with each lexical entry used in the parse TR(y). These features allow the model to encode preferences in selecting the correct reference time. Fine-grained type These features indicate the fine-grained type of TM(y), such as day of the month or week of the year. We also include a feature indicating the concatenation of each of these features with the direction resolution operator that was used. These features allow the model to represent, for example, that minutes of the year are less likely than days of the month. Intersections These features indicate the concatenation of the granularities of any two sequences that appear as arguments to an intersect constant. Learning To estimate the model parameters θ we assume access to a set of training examples {(mi, di, ei) : i = 1, . . . , n}, where each mention mi is paired with a document di and a TIMEX3 value ei. We use the AdaGrad algorithm (Duchi et al., 2011) to optimize the conditional, marginal log-likelihood of the data. For each mention, we marginalize over all possible context-dependent parses, using the predictions from the model on the previous gold mentions to fill in missing context, where necessary. After parameter estimation, we set a probability threshold for retaining a resolved time expression by optimizing value F1 (see Section 8) over the training data. 7 Related Work Semantic parsers map sentences to logical representations of their underlying meaning, e.g., Zelle 1442 and Mooney (1996), Zettlemoyer and Collins (2005), and Wong and Mooney (2007). Recently, research in this area has focused on learning for various forms of relatively weak but easily gathered supervision. This includes learning from question-answer pairs (Clarke et al., 2010; Liang et al., 2011; Kwiatkowski et al., 2013), from conversational logs (Artzi and Zettlemoyer, 2011), with distant supervision (Krishnamurthy and Mitchell, 2012; Cai and Yates, 2013), and from sentences paired with system behavior (Goldwasser and Roth, 2011; Chen and Mooney, 2011; Artzi and Zettlemoyer, 2013b). Recently, Angeli et al. introduced the idea of learning semantic parsers to resolve time expressions (Angeli et al., 2012) and showed that the approach can generalize to multiple languages (Angeli and Uszkoreit, 2013). Similarly, Bethard demonstrated that a hand-engineered semantic parser is also effective (Bethard, 2013b). However, these approaches did not use the semantic parser for detection and did not model linguistic context during resolution. We build on a number of existing algorithmic ideas, including using CCGs to build meaning representations (Zettlemoyer and Collins, 2005; Zettlemoyer and Collins, 2007; Kwiatkowski et al., 2010; Kwiatkowski et al., 2011), building derivations to transform the output of the CCG parser based on context (Zettlemoyer and Collins, 2009), and using weakly supervised parameter updates (Artzi and Zettlemoyer, 2011; Artzi and Zettlemoyer, 2013b). 
However, we are the first to use a semantic parsing grammar within a mention detection algorithm, thereby avoiding the need to represent the meaning of complete sentences, and the first to develop a context-dependent model for semantic parsing of time expressions. Time expressions have been extensively studied as part of the TimeEx task, including 9 teams who competed in the 2013 TempEval-3 competition (Uzzaman et al., 2013). This line of work builds on ideas from TimeBank (Pustejovsky et al., 2003) and a number of different formal models for temporal reasoning, e.g. Allen (1983), Moens and Steedman (1988). In 2013, HeidelTime (Str¨otgen and Gertz, 2013) was the top performing system. It used deterministic rules defined over regular expressions to perform both detection and resolution, and will provide a comparison system for our evaluation in Section 9. In Corpus Doc. Token TimeEx TempEval-3 (Dev) 256 95,391 1,822 TempEval-3 (Test) 20 6,375 138 WikiWars (Dev) 17 98,746 2,228 WikiWars (Test) 5 19,052 363 Figure 3: Corpus statistics. general, many different rule-based systems, e.g. NavyTime (Chambers, 2013) and SUTime (Chang and Manning, 2012), and learning systems, e.g. ClearTK (Bethard, 2013a) and MANTime (Filannino et al., 2013), did well for detection. However, rule-based approaches dominated in resolution; none of the top performers attempted to learn to do resolution. Our approach is a hybrid of rule based and learning, by using latent-variable learning techniques to estimate CCG parsing and context resolution models from the provided data. 8 Experimental Setup Data We evaluate performance on the TempEval-3 (Uzzaman et al., 2013) and WikiWars (Mazur and Dale, 2010) datasets. Figure 3 shows summary statistics for both datasets. For the TempEval-3 corpus, we use the given training and testing set splits. Since the training set has lower inter-annotator agreement than the testing set (Uzzaman et al., 2013), we manually corrected all of the mistakes we found in the training data.3 The original training set is denoted Dev* and the corrected Dev. We report (1) cross-validation development results on Dev*, (2) cross-validation development and ablation results for Dev, and (3) held-out test results after training with Dev. For WikiWars, we randomly assigned the data to include 17 training documents (2,228 time expressions) and 5 test documents (363 time expressions). We use cross-validation on the training data for development. All cross-validation experiments used 10 folds. Implementation Our system was implemented using the open source University of Washington Semantic Parsing Framework (Artzi and Zettlemoyer, 2013a). We used LIBLINEAR (Fan et al., 2008) to learn the detection model. Parameter Settings We use the same set of parameters for both datasets, chosen based on development experiments. For detection, we set the regularization parameter to 10 with a stopping crite3We modified the annotations for 18% of the mentions. This relabeled corpus is available on the author’s website. 1443 System Strict Detection Relaxed Detection Type Res. Value Resolution Pre. Rec. F1 Pre. Rec. F1 Acc. F1 Acc. Pre. Rec. 
F1 Dev* This work 84.6 83.4 84.0 92.8 91.5 92.1 94.6 87.1 84.0 77.9 76.8 77.4 HeidelTime 83.7 83.4 83.5 91.7 91.4 91.6 95.0 87.0 84.1 77.1 76.8 77.0 Dev This work 92.7 89.6 91.1 97.4 94.1 95.7 97.1 92.9 91.5 89.1 86.1 87.6 Context ablation 92.7 89.3 91.0 97.5 93.9 95.7 97.1 92.9 89.8 87.6 84.3 85.9 HeidelTime 90.2 84.8 87.4 96.5 90.7 93.5 96.1 89.9 88.4 85.3 80.2 82.7 Test This work 86.1 80.4 83.1 94.6 88.4 91.4 93.4 85.4 90.2 85.3 79.7 82.4 HeidelTime 83.9 79.0 81.3 93.1 87.7 90.3 90.9 82.1 86.0 80.1 75.4 77.7 NavyTime 78.7 80.4 79.6 89.4 91.3 90.3 88.9 80.3 78.6 70.3 71.8 71.0 ClearTK 85.9 79.7 82.7 93.8 87.0 90.2 93.3 84.2 71.7 67.3 62.4 64.7 Figure 4: TempEval-3 development and test results, compared to the top systems in the shared task. System Strict Detection Relaxed Detection Value Resolution Pre. Rec. F1 Pre. Rec. F1 Acc. Pre. Rec. F1 Dev This work 90.3 83.0 86.5 98.1 90.1 93.9 87.6 85.9 78.9 82.3 Context ablation 90.9 80.1 85.2 98.2 86.5 92.0 68.5 67.3 59.3 63.0 HeidelTime 86.0 75.3 80.3 95.4 83.5 89.0 90.5 86.3 75.6 80.6 Test This work 87.7 78.8 83.0 97.6 87.6 92.3 84.6 82.5 74.1 78.1 HeidelTime 85.2 79.3 82.1 92.6 86.2 89.3 83.7 77.5 72.1 74.7 Figure 5: WikiWars development and test results. rion of 0.01. For resolution, we set the learning rate to 0.25 and ran AdaGrad for 5 iterations. All features are initialized to have zero weights. Evaluation Metrics We use the official TempEval-3 scoring script and report the standard metrics. We report detection precision, recall and F1 with relaxed and strict metrics; a gold mention is considered detected for the relaxed metric if any of the output candidates overlap with it and is detected for the strict metric if the extent of any output candidates matches exactly. For resolution, we report value accuracy, measuring correctness of time expressions detected according to the relaxed metric. We also report value precision, recall, and F1, which score an expression as correct if it is both correctly detected (relaxed) and resolved. For end-to-end performance, value F1 is the primary metric. Finally, we report accuracy and F1 for temporal types, as defined in Section 2, for the TempEval dataset (WikiWars does not include type labels). Comparison Systems We compare our system primarily to HeidelTime (Str¨otgen and Gertz, 2013), which is state of the art in the end-toend task. For the TempEval-3 dataset, we also compare to two other strong participants of the shared task. These include NavyTime (Chambers, 2013), which had the top relaxed detection score, and ClearTK (Bethard, 2013a), which had the top strict detection score and type F1 score. We also include a comparison with Bethard’s synchronous System Dev* Dev Test This work 81.8 90.1 82.6 SCFG 77.0 81.6 78.9 Figure 6: TempEval-3 gold mention value accuracy. context free grammar (SCFG) (Bethard, 2013b), which is state-of-the-art in the task of resolution with gold mention boundaries. 9 Results End-to-end results Figure 4 shows development and test results for TempEval-3. Figure 5 shows these numbers for WikiWars. In both datasets, we achieve state-of-the-art test scores. For detection, we show up to 3-point improvements in strict and relaxed F1 scores. These numbers outperform all systems participating in the shared task, which used a variety of techniques including hand-engineered rules, CRF tagging models, and SVMs. For resolution, we show up to 4-point improvements in the value F1 score, also outperforming participating systems, all of which used hand-engineered rules for resolution. 
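To make the strict and relaxed detection criteria behind these scores concrete, here is a minimal sketch of the span-matching logic (our own simplified illustration, not the official TempEval-3 scorer; the function names and span representation are ours):

```python
def overlaps(gold, pred):
    """Spans are (start, end) character offsets; relaxed match = any overlap."""
    return pred[0] < gold[1] and gold[0] < pred[1]

def detection_prf(gold_spans, pred_spans, strict=False):
    """Precision/recall/F1 for strict (exact extent) or relaxed (overlap) detection."""
    match = (lambda g, p: g == p) if strict else overlaps
    matched_gold = sum(1 for g in gold_spans if any(match(g, p) for p in pred_spans))
    matched_pred = sum(1 for p in pred_spans if any(match(g, p) for g in gold_spans))
    p = matched_pred / len(pred_spans) if pred_spans else 0.0
    r = matched_gold / len(gold_spans) if gold_spans else 0.0
    f = 2 * p * r / (p + r) if p + r else 0.0
    return p, r, f
```

Value precision, recall, and F1 are then computed by additionally requiring that a relaxed-detected mention's resolved TIMEX3 value match the gold value.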
Gold Mentions  Figure 6 reports development and test results with gold mentions. (These numbers vary slightly from those reported; we did not count the document creation times as mentions.) Our approach outperforms the state of the art, SCFG (Bethard, 2013b), which also used a hand-engineered grammar, but did not use machine learning techniques.

Figure 7: Value precision vs. recall for 10-fold cross validation on TempEval-3 Dev and WikiWars Dev. (Two panels, TempEval-3 Dev and WikiWars Dev; axes are value recall vs. value precision; curves for this work and HeidelTime.)

Precision vs. Recall  Our probabilistic model of time expression resolution allows us to easily trade off precision and recall for end-to-end performance by varying the resolution probability threshold. Figure 7 shows the precision vs. recall of the resolved values from 10-fold cross validation of TempEval-3 Dev and WikiWars Dev. We are able to achieve precision at or above 90% with reasonable recall: nearly 70% for WikiWars and over 85% for TempEval-3.

Ablation Study  Figures 4-5 also show comparisons for our system with no context. We ablate the ability to refer to the context during resolution by removing contextual information from the resolution features and only allowing the document creation time to be the reference time. We see an interesting asymmetry in the effect of modeling context across the two domains: context is much more important in WikiWars (19-point difference) than in TempEval (2-point difference). This result reaffirms the difference in domains that Strötgen and Gertz (2012) noted during the development of HeidelTime: history articles have a narrative structure that moves back and forth through time, while newspaper text typically describes events happening near the document creation time. This difference helps us to understand why previous learning systems have been able to ignore context and still perform well on newswire text.

Error Analysis  To investigate the source of error, we compute oracle results for resolving gold mentions over the TempEval-3 Dev dataset. We found that our system produces a correct candidate derivation for 96% of the mentions. We also manually categorized all resolution errors for end-to-end performance with 10-fold cross validation of the TempEval-3 Dev dataset, shown in Figure 8.

Error description                               %
Wrong directionality context operator         34.6
Wrong reference time context operator         15.7
Wrong shifting granularity context operator   14.4
Requires joint reasoning with events           9.2
Cascading error due to wrong detection         7.8
CCG parse error                                2.0
Other error                                   16.3
Figure 8: Resolution errors from 10-fold cross validation of the TempEval-3 Dev dataset.

The lexicon allows for effective parsing, contributing only 2% of the overall errors. However, context is more challenging: the three largest categories, responsible for 64.7% of the errors, were incorrect uses of the context operators. More expressive modeling will be required to fully capture the complex pragmatics involved in understanding time expressions.

10 Conclusion

We presented the first context-dependent semantic parsing system to detect and resolve time expressions. Both models used a Combinatory Categorial Grammar (CCG) to construct a set of possible temporal meaning representations. This grammar defined the possible phrases for detection and the inputs to a context-dependent reasoning step that was used to construct the output time expression during resolution.
Experiments demonstrated that our approach outperforms state-of-the-art systems. In the future, we aim to develop joint models for reasoning about events and time expressions, including detection and resolution of temporal relations. We are also interested in testing coverage in new domains and investigating techniques for semi-supervised learning and learning with noisy data. We hypothesize that semantic parsing techniques could help in all of these settings, providing a unified mechanism for compositional analysis within temporal understanding problems. Acknowledgments The research was supported in part by DARPA under the DEFT program through the AFRL (FA8750-13-2-0019) and the CSSG (N11AP20020), and the NSF (IIS-1115966, IIS1252835). The authors thank Nicholas FitzGerald, Tom Kwiatkowski, and Mark Yatskar for helpful discussions, and the anonymous reviewers for helpful comments. 1445 References James F. Allen. 1981. An interval-based representation of temporal knowledge. In Proceedings of the 7th International Joint Conference on Artificial Intelligence. James F Allen. 1983. Maintaining knowledge about temporal intervals. Communications of the ACM, 26(11):832–843. Gabor Angeli and Jakob Uszkoreit. 2013. Languageindependent discriminative parsing of temporal expressions. In Proceedings of the Conference of the Association of Computational Linguistics. Gabor Angeli, Christopher D Manning, and Daniel Jurafsky. 2012. Parsing time: Learning to interpret time expressions. In Proceedings of the Conference of the North American Chapter of the Association for Computational Linguistics. Y. Artzi and L.S. Zettlemoyer. 2011. Bootstrapping semantic parsers from conversations. In Proceedings of the Conference on Empirical Methods in Natural Language Processing. Y. Artzi and L.S. Zettlemoyer. 2013a. UW SPF: The University of Washington Semantic Parsing Framework. Y. Artzi and L.S. Zettlemoyer. 2013b. Weakly supervised learning of semantic parsers for mapping instructions to actions. Transactions of the Association for Computational Linguistics, 1(1):49–62. Steven Bethard. 2013a. Cleartk-timeml: A minimalist approach to tempeval 2013. In Second Joint Conference on Lexical and Computational Semantics. Steven Bethard. 2013b. A synchronous context free grammar for time normalization. In Proceedings of the Conference on Empirical Methods in Natural Language Processing. Q. Cai and A. Yates. 2013. Semantic parsing freebase: Towards open-domain semantic parsing. In Joint Conference on Lexical and Computational Semantics: Proceedings of the Main Conference and the Shared Task: Semantic Textual Similarity. Nathanael Chambers. 2013. Navytime: Event and time ordering from raw text. In Second Joint Conference on Lexical and Computational Semantics. Angel X Chang and Christopher Manning. 2012. Sutime: A library for recognizing and normalizing time expressions. In Proceedings of the 8th International Conference on Language Resources and Evaluation. D.L. Chen and R.J. Mooney. 2011. Learning to interpret natural language navigation instructions from observations. In Proceedings of the National Conference on Artificial Intelligence. S. Clark and J. R. Curran. 2007. Wide-coverage efficient statistical parsing with CCG and log-linear models. Computational Linguistics, 33(4):493–552. J. Clarke, D. Goldwasser, M. Chang, and D. Roth. 2010. Driving semantic parsing from the world’s response. In Proceedings of the Conference on Computational Natural Language Learning. 
Marie-Catherine de Marneffe, Bill MacCartney, and Christopher D. Manning. 2006. Generating typed dependency parses from phrase structure parses. In Proceedings of the 5th International Conference on Language Resources and Evaluation. John Duchi, Elad Hazan, and Yoram Singer. 2011. Adaptive subgradient methods for online learning and stochastic optimization. The Journal of Machine Learning Research, 12:2121–2159. Rong-En Fan, Kai-Wei Chang, Cho-Jui Hsieh, XiangRui Wang, and Chih-Jen Lin. 2008. LIBLINEAR: A library for large linear classification. Journal of Machine Learning Research, 9:1871–1874. Michele Filannino, Gavin Brown, and Goran Nenadic. 2013. Mantime: Temporal expression identification and normalization in the tempeval-3 challenge. In Second Joint Conference on Lexical and Computational Semantics. D. Goldwasser and D. Roth. 2011. Learning from natural instructions. In Proceedings of the International Joint Conference on Artificial Intelligence. J. Krishnamurthy and T. Mitchell. 2012. Weakly supervised training of semantic parsers. In Proceedings of the Joint Conference on Empirical Methods in Natural Language Processing and Computational Natural Language Learning. T. Kwiatkowski, L.S. Zettlemoyer, S. Goldwater, and M. Steedman. 2010. Inducing probabilistic CCG grammars from logical form with higher-order unification. In Proceedings of the Conference on Empirical Methods in Natural Language Processing. T. Kwiatkowski, L.S. Zettlemoyer, S. Goldwater, and M. Steedman. 2011. Lexical Generalization in CCG Grammar Induction for Semantic Parsing. In Proceedings of the Conference on Empirical Methods in Natural Language Processing. T. Kwiatkowski, E. Choi, Y. Artzi, and L. Zettlemoyer. 2013. Scaling semantic parsers with on-the-fly ontology matching. In Proceedings of the Conference on Empirical Methods in Natural Language Processing. P. Liang, M.I. Jordan, and D. Klein. 2011. Learning dependency-based compositional semantics. In Proceedings of the Conference of the Association for Computational Linguistics. 1446 Pawet Mazur and Robert Dale. 2010. Wikiwars: a new corpus for research on temporal expressions. In Proceedings of the Conference on Empirical Methods in Natural Language Processing. Marc Moens and Mark Steedman. 1988. Temporal ontology and temporal reference. Computational linguistics, 14(2):15–28. James Pustejovsky, Patrick Hanks, Roser Sauri, Andrew See, Robert Gaizauskas, Andrea Setzer, Dragomir Radev, Beth Sundheim, David Day, Lisa Ferro, et al. 2003. The timebank corpus. In Corpus linguistics. James Pustejovsky, Bob Ingria, Roser Sauri, Jose Castano, Jessica Littman, Rob Gaizauskas, Andrea Setzer, Graham Katz, and Inderjeet Mani. 2005. The specification language timeml. The language of time: A reader, pages 545–557. M. Steedman. 1996. Surface Structure and Interpretation. The MIT Press. M. Steedman. 2000. The Syntactic Process. The MIT Press. Jannik Str¨otgen and Michael Gertz. 2012. Temporal tagging on different domains: Challenges, strategies, and gold standards. In Proceedings of the Eigth International Conference on Language Resources and Evaluation. Jannik Str¨otgen and Michael Gertz. 2013. Multilingual and cross-domain temporal tagging. Language Resources and Evaluation, 47(2):269–298. Kristina Toutanova, Dan Klein, Christopher D Manning, and Yoram Singer. 2003. Feature-rich part-ofspeech tagging with a cyclic dependency network. 
In Proceedings of the 2003 Conference of the North American Chapter of the Association for Computational Linguistics on Human Language TechnologyVolume 1. N. Uzzaman, H. Llorens, L. Derczynski, M. Verhagen, J. Allen, and J. Pustejovsky. 2013. Semeval-2013 task 1: Tempeval-3: Evaluating time expressions, events, and temporal relations. In Proceedings of the International Workshop on Semantic Evaluation. Y.W. Wong and R.J. Mooney. 2007. Learning synchronous grammars for semantic parsing with lambda calculus. In Proceedings of the Conference of the Association for Computational Linguistics. J.M. Zelle and R.J. Mooney. 1996. Learning to parse database queries using inductive logic programming. In Proceedings of the National Conference on Artificial Intelligence. L.S. Zettlemoyer and M. Collins. 2005. Learning to map sentences to logical form: Structured classification with probabilistic categorial grammars. In Proceedings of the Conference on Uncertainty in Artificial Intelligence. L.S. Zettlemoyer and M. Collins. 2007. Online learning of relaxed CCG grammars for parsing to logical form. In Proceedings of the Joint Conference on Empirical Methods in Natural Language Processing and Computational Natural Language Learning. L.S. Zettlemoyer and M. Collins. 2009. Learning context-dependent mappings from sentences to logical form. In Proceedings of the Joint Conference of the Association for Computational Linguistics and International Joint Conference on Natural Language Processing. 1447
2014
135
Semantic Frame Identification with Distributed Word Representations Karl Moritz Hermann‡∗Dipanjan Das† Jason Weston† Kuzman Ganchev† ‡Department of Computer Science, University of Oxford, Oxford OX1 3QD, United Kingdom † Google Inc., 76 9th Avenue, New York, NY 10011, United States [email protected] {dipanjand,kuzman}@google.com [email protected] Abstract We present a novel technique for semantic frame identification using distributed representations of predicates and their syntactic context; this technique leverages automatic syntactic parses and a generic set of word embeddings. Given labeled data annotated with frame-semantic parses, we learn a model that projects the set of word representations for the syntactic context around a predicate to a low dimensional representation. The latter is used for semantic frame identification; with a standard argument identification method inspired by prior work, we achieve state-ofthe-art results on FrameNet-style framesemantic analysis. Additionally, we report strong results on PropBank-style semantic role labeling in comparison to prior work. 1 Introduction Distributed representations of words have proved useful for a number of tasks. By providing richer representations of meaning than what can be encompassed in a discrete representation, such approaches have successfully been applied to tasks such as sentiment analysis (Socher et al., 2011), topic classification (Klementiev et al., 2012) or word-word similarity (Mitchell and Lapata, 2008). We present a new technique for semantic frame identification that leverages distributed word representations. According to the theory of frame semantics (Fillmore, 1982), a semantic frame represents an event or scenario, and possesses frame elements (or semantic roles) that participate in the ∗The majority of this research was carried out during an internship at Google. event. Most work on frame-semantic parsing has usually divided the task into two major subtasks: frame identification, namely the disambiguation of a given predicate to a frame, and argument identification (or semantic role labeling), the analysis of words and phrases in the sentential context that satisfy the frame’s semantic roles (Das et al., 2010; Das et al., 2014).1 Here, we focus on the first subtask of frame identification for given predicates; we use our novel method (§3) in conjunction with a standard argument identification model (§4) to perform full frame-semantic parsing. We present experiments on two tasks. First, we show that for frame identification on the FrameNet corpus (Baker et al., 1998; Fillmore et al., 2003), we outperform the prior state of the art (Das et al., 2014). Moreover, for full frame-semantic parsing, with the presented frame identification technique followed by our argument identification method, we report the best results on this task to date. Second, we present results on PropBank-style semantic role labeling (Palmer et al., 2005; Meyers et al., 2004; M`arquez et al., 2008), that approach strong baselines, and are on par with prior state of the art (Punyakanok et al., 2008). 2 Overview Early work in frame-semantic analysis was pioneered by Gildea and Jurafsky (2002). Subsequent work in this area focused on either the FrameNet or PropBank frameworks, and research on the latter has been more popular. 
Since the CoNLL 2004-2005 shared tasks (Carreras and M`arquez, 1There are exceptions, wherein the task has been modeled using a pipeline of three classifiers that perform frame identification, a binary stage that classifies candidate arguments, and argument identification on the filtered candidates (Baker et al., 2007; Johansson and Nugues, 2007). John bought a car . COMMERCE_BUY buy.V Buyer Goods John bought a car . buy.01 buy.V A0 A1 Mary sold a car . COMMERCE_BUY sell.V Seller Goods Mary sold a car . sell.01 sell.V A0 A1 (a) (b) Figure 1: Example sentences with frame-semantic analyses. FrameNet annotation conventions are used in (a) while (b) denotes PropBank conventions. 2004; Carreras and M`arquez, 2005) on PropBank semantic role labeling (SRL), it has been treated as an important NLP problem. However, research has mostly focused on argument analysis, skipping the frame disambiguation step, and its interaction with argument identification. 2.1 Frame-Semantic Parsing Closely related to SRL, frame-semantic parsing consists of the resolution of predicate sense into a frame, and the analysis of the frame’s arguments. Work in this area exclusively uses the FrameNet full text annotations. Johansson and Nugues (2007) presented the best performing system at SemEval 2007 (Baker et al., 2007), and Das et al. (2010) improved performance, and later set the current state of the art on this task (Das et al., 2014). We briefly discuss FrameNet, and subsequently PropBank annotation conventions here. FrameNet The FrameNet project (Baker et al., 1998) is a lexical database that contains information about words and phrases (represented as lemmas conjoined with a coarse part-of-speech tag) termed as lexical units, with a set of semantic frames that they could evoke. For each frame, there is a list of associated frame elements (or roles, henceforth), that are also distinguished as core or non-core.2 Sentences are annotated using this universal frame inventory. For example, consider the pair of sentences in Figure 1(a). COMMERCE BUY is a frame that can be evoked by morphological variants of the two example lexical units buy.V and sell.V. Buyer, Seller and Goods are some example roles for this frame. 2Additional information such as finer distinction of the coreness properties of roles, the relationship between frames, and that of roles are also present, but we do not leverage that information in this work. PropBank The PropBank project (Palmer et al., 2005) is another popular resource related to semantic role labeling. The PropBank corpus has verbs annotated with sense frames and their arguments. Like FrameNet, it also has a lexical database that stores type information about verbs, in the form of sense frames and the possible semantic roles each frame could take. There are modifier roles that are shared across verb frames, somewhat similar to the non-core roles in FrameNet. Figure 1(b) shows annotations for two verbs “bought” and “sold”, with their lemmas (akin to the lexical units in FrameNet) and their verb frames buy.01 and sell.01. Generic core role labels (of which there are seven, namely A0-A5 and AA) for the verb frames are marked in the figure.3 A key difference between the two annotation systems is that PropBank uses a local frame inventory, where frames are predicate-specific. Moreover, role labels, although few in number, take specific meaning for each verb frame. 
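A minimal sketch of how the two inventories could be represented as data structures may make the contrast concrete (our own illustration, using the examples from this section; neither resource actually distributes its data in this form):

```python
# FrameNet: a global frame inventory; several lexical units can evoke the same frame.
framenet_frames = {
    "COMMERCE_BUY": {
        "roles": ["Buyer", "Seller", "Goods"],      # example frame elements
        "lexical_units": ["buy.V", "sell.V"],       # units that can evoke the frame
    },
}

# PropBank: a local, predicate-specific inventory; generic labels A0-A5/AA take
# verb-specific meanings in each verb frame (readings below are illustrative).
propbank_frames = {
    "buy.V": {"buy.01": ["A0", "A1"]},    # A0 = buyer, A1 = thing bought
    "sell.V": {"sell.01": ["A0", "A1"]},  # A0 = seller, A1 = thing sold
}
```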
Figure 1 highlights this difference: while both sell.V and buy.V are members of the same frame in FrameNet, they evoke different frames in PropBank. In spite of this difference, nearly identical statistical models could be employed for both frameworks. Modeling In this paper, we model the framesemantic parsing problem in two stages: frame identification and argument identification. As mentioned in §1, these correspond to a frame disambiguation stage,4 and a stage that finds the various arguments that fulfill the frame’s semantic roles within the sentence, respectively. This resembles the framework of Das et al. (2014), who solely focus on FrameNet corpora, unlike this paper. The novelty of this paper lies in the frame identification stage (§3). Note that this two-stage approach is unusual for the PropBank corpora when compared to prior work, where the vast majority of published papers have not focused on the verb frame disambiguation problem at all, only focusing on the role labeling stage (see the overview paper of M`arquez et al. (2008) for example). 3NomBank (Meyers et al., 2004) is a similar resource for nominal predicates, but we do not consider it in our experiments. 4For example in PropBank, the lexical unit buy.V has three verb frames and in sentential context, we want to disambiguate its frame. (Although PropBank never formally uses the term lexical unit, we adopt its usage from the frame semantics literature.) 2.2 Distributed Frame Identification We present a model that takes word embeddings as input and learns to identify semantic frames. A word embedding is a distributed representation of meaning where each word is represented as a vector in Rn. Such representations allow a model to share meaning between similar words, and have been used to capture semantic, syntactic and morphological content (Collobert and Weston, 2008; Turian et al., 2010, inter alia). We use word embeddings to represent the syntactic context of a particular predicate instance as a vector. For example, consider the sentence “He runs the company.” The predicate runs has two syntactic dependents – a subject and direct object (but no prepositional phrases or clausal complements). We could represent the syntactic context of runs as a vector with blocks for all the possible dependents warranted by a syntactic parser; for example, we could assume that positions 0 . . . n in the vector correspond to the subject dependent, n+1 . . . 2n correspond to the clausal complement dependent, and so forth. Thus, the context is a vector in Rnk with the embedding of He at the subject position, the embedding of company in direct object position and zeros everywhere else. Given input vectors of this form for our training data, we learn a matrix that maps this high dimensional and sparse representation into a lower dimensional space. Simultaneously, the model learns an embedding for all the possible labels (i.e. the frames in a given lexicon). At inference time, the predicate-context is mapped to the low dimensional space, and we choose the nearest frame label as our classification. We next describe this model in detail. 3 Frame Identification with Embeddings We continue using the example sentence from §2.2: “He runs the company.” where we want to disambiguate the frame of runs in context. First, we extract the words in the syntactic context of runs; next, we concatenate their word embeddings as described in §2.2 to create an initial vector space representation. 
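Before describing the learned mapping, the following is a minimal sketch of how such a block context vector g(x) could be assembled from per-slot word embeddings (a simplified illustration; the slot inventory, toy embeddings, and function names are our own):

```python
import numpy as np

def block_context_vector(slot_words, embeddings, slot_order, dim):
    """Concatenate one embedding block per syntactic slot; zeros for empty slots.

    slot_words: maps a syntactic position (e.g. "nsubj", "dobj") to the word there,
    embeddings: maps a word to a length-`dim` vector,
    slot_order: the fixed list of k possible positions, giving a vector in R^(n*k).
    """
    blocks = []
    for slot in slot_order:
        word = slot_words.get(slot)
        vec = embeddings.get(word, np.zeros(dim)) if word else np.zeros(dim)
        blocks.append(vec)
    return np.concatenate(blocks)

# "He runs the company." with predicate "runs": subject He, direct object company.
toy_embeddings = {"He": np.random.randn(128), "company": np.random.randn(128)}
g_x = block_context_vector(
    {"nsubj": "He", "dobj": "company"},
    embeddings=toy_embeddings,
    slot_order=["nsubj", "dobj", "ccomp", "prep"],
    dim=128,
)
```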
Subsequently, we learn a mapping from this initial representation into a lowdimensional space; we also learn an embedding for each possible frame label in the same lowdimensional space. The goal of learning is to make sure that the correct frame label is as close as possible to the mapped context, while competing frame labels are farther away. Formally, let x represent the actual sentence with a marked predicate, along with the associated syntactic parse tree; let our initial representation of the predicate context be g(x). Suppose that the word embeddings we start with are of dimension n. Then g is a function from a parsed sentence x to Rnk, where k is the number of possible syntactic context types. For example g selects some important positions relative to the predicate, and reserves a block in its output space for the embedding of words found at that position. Suppose g considers clausal complements and direct objects. Then g : X →R2n and for the example sentence it has zeros in positions 0 . . . n and the embedding of the word company in positions n+1 . . . 2n. g(x) = [0, . . . , 0, embedding of company]. Section 3.1 describes the context positions we use in our experiments. Let the low dimensional space we map to be Rm and the learned mapping be M : Rnk →Rm. The mapping M is a linear transformation, and we learn it using the WSABIE algorithm (Weston et al., 2011). WSABIE also learns an embedding for each frame label (y, henceforth). In our setting, this means that each frame corresponds to a point in Rm. If we have F possible frames we can store those parameters in an F × m matrix, one m-dimensional point for each frame, which we will refer to as the linear mapping Y . Let the lexical unit (the lemma conjoined with a coarse POS tag) for the marked predicate be ℓ. We denote the frames that associate with ℓin the frame lexicon5 and our training corpus as Fℓ. WSABIE performs gradient-based updates on an objective that tries to minimize the distance between M(g(x)) and the embedding of the correct label Y (y), while maintaining a large distance between M(g(x)) and the other possible labels Y (¯y) in the confusion set Fℓ. At disambiguation time, we use a simple dot product similarity as our distance metric, meaning that the model chooses a label by computing the argmaxys(x, y) where s(x, y) = M(g(x)) · Y (y), where the argmax iterates over the possible frames y ∈Fℓif ℓwas seen in the lexicon or the training data, or y ∈F, if it was unseen.6 Model learning is performed using the margin ranking loss function as described in 5The frame lexicon stores the frames, corresponding semantic roles and the lexical units associated with the frame. 6This disambiguation scheme is similar to the one adopted by Das et al. (2014), but they use unlemmatized words to define their confusion set. Figure 2: Context representation extraction for the embedding model. Given a dependency parse (1) the model extracts all words matching a set of paths from the frame evoking predicate and its direct dependents (2). The model computes a composed representation of the predicate instance by using distributed vector representations for words (3) – the (red) vertical embedding vectors for each word are concatenated into a long vector. Finally, we learn a linear transformation function parametrized by the context blocks (4). Weston et al. (2011), and in more detail in section 3.2. Since WSABIE learns a single mapping from g(x) to Rm, parameters are shared between different words and different frames. 
So for example "He runs the company" could help the model disambiguate "He owns the company." Moreover, since g(x) relies on word embeddings rather than word identities, information is shared between words. For example "He runs the company" could help us to learn about "She runs a corporation".

3.1 Context Representation Extraction

In principle g(x) could be any feature function, but we performed an initial investigation of two particular variants. In both variants, our representation is a block vector where each block corresponds to a syntactic position relative to the predicate, and each block's values correspond to the embedding of the word at that position.

Direct Dependents  The first context function we considered corresponds to the examples in §3. To elaborate, the positions of interest are the labels of the direct dependents of the predicate, so k is the number of labels that the dependency parser can produce. For example, if the label on the edge between runs and He is nsubj, we would put the embedding of He in the block corresponding to nsubj. If a label occurs multiple times, then the embeddings of the words below this label are averaged.

Unfortunately, using only the direct dependents can miss a lot of useful information. For example, topicalization can place discriminating information farther from the predicate. Consider "He runs the company." vs. "It was the company that he runs." In the second sentence, the discriminating word, company, dominates the predicate runs. Similarly, predicates in embedded clauses may have a distant agent which cannot be captured using direct dependents. Consider "The athlete ran the marathon." vs. "The athlete prepared himself for three months to run the marathon." In the second example, for the predicate run, the agent The athlete is not a direct dependent, but is connected via a longer dependency path.

Dependency Paths  To capture more relevant context, we developed a second context function as follows. We scanned the training data for a given task (either the PropBank or the FrameNet domains) for the dependency paths that connected the gold predicates to the gold semantic arguments. This set of dependency paths was deemed as possible positions in the initial vector space representation. In addition, akin to the first context function, we also added all dependency labels to the context set. Thus for this context function, the block cardinality k was the sum of the number of scanned gold dependency path types and the number of dependency labels. Given a predicate in its sentential context, we therefore extract only those context words that appear in positions warranted by the above set. See Figure 2 for an illustration of this process.

We performed initial experiments using context extracted from 1) direct dependents, 2) dependency paths, and 3) both. For all our experiments, setting 3), which concatenates the direct dependents and dependency paths, always dominated the other two, so we only report results for this setting.

3.2 Learning

We model our objective function following Weston et al. (2011), using a weighted approximate-rank pairwise loss, learned with stochastic gradient descent. The mapping from g(x) to the low-dimensional space R^m is a linear transformation, so the model parameters to be learnt are the matrix M ∈ R^{nk×m} as well as the embedding of each possible frame label, represented as another matrix Y ∈ R^{F×m}, where there are F frames in total. The training objective function minimizes:

\sum_{x} \sum_{\bar{y}} L\big(\mathrm{rank}_y(x)\big) \, \max\big(0, \gamma + s(x, \bar{y}) - s(x, y)\big)

where x, y are the training inputs and their corresponding correct frames, ȳ are negative frames, and γ is the margin. Here, rank_y(x) is the rank of the positive frame y relative to all the negative frames:

\mathrm{rank}_y(x) = \sum_{\bar{y}} I\big(s(x, y) \le \gamma + s(x, \bar{y})\big)

and L(η) converts the rank to a weight. Choosing L(η) = Cη for any positive constant C optimizes the mean rank, whereas a weighting such as L(η) = Σ_{i=1}^{η} 1/i (adopted here) optimizes the top of the ranked list, as described in Usunier et al. (2009). To train with such an objective, stochastic gradient descent is employed. For speed, the computation of rank_y(x) is replaced with a sampled approximation: sample N items ȳ until a violation is found, i.e. max(0, γ + s(x, ȳ) − s(x, y)) > 0, and then approximate the rank with (F − 1)/N; see Weston et al. (2011) for more details on this procedure. For the choices of the stochastic gradient learning rate, margin (γ) and dimensionality (m), please refer to §5.4-§5.5.

Note that an alternative approach could learn only the matrix M, and then use a k-nearest neighbor classifier in R^m, as in Weinberger and Saul (2009). The advantage of learning an embedding for the frame labels is that at inference time we need to consider only the set of labels for classification rather than all training examples. Additionally, since we use a frame lexicon that gives us the possible frames for a given predicate, we usually only consider a handful of candidate labels. If we used all training examples for a given predicate for finding a nearest-neighbor match at inference time, we would have to consider many more candidates, making the process very slow.
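The following is a minimal sketch of one sampled update of the approximate-rank objective above (our own simplified rendering, not the authors' implementation; the dense NumPy parameterization and the hyperparameter values are illustrative):

```python
import numpy as np

def wsabie_update(g_x, gold, candidates, M, Y, gamma=0.01, lr=0.01):
    """One stochastic update for a single training example.

    g_x: context vector in R^(n*k); gold: index of the correct frame;
    candidates: indices of the frames in the confusion set F_l;
    M: (n*k, m) projection, so M(g(x)) = g_x @ M; Y: (F, m) frame-label embeddings.
    """
    z = g_x @ M                      # mapped context, in R^m
    s_gold = Y[gold] @ z
    negatives = [c for c in candidates if c != gold]
    if not negatives:
        return
    # Sample negatives until a margin violation is found (or give up).
    for n_sampled, neg in enumerate(np.random.permutation(negatives), start=1):
        if gamma + Y[neg] @ z - s_gold > 0:
            break
    else:
        return                       # no violation, no update
    # Approximate the rank of the gold label and convert it to a weight L(rank).
    rank_approx = max(1, (len(Y) - 1) // n_sampled)
    weight = sum(1.0 / i for i in range(1, rank_approx + 1))
    # Gradient step on the violated hinge: gamma + s(x, neg) - s(x, gold).
    M -= lr * weight * np.outer(g_x, Y[neg] - Y[gold])
    Y[gold] += lr * weight * z
    Y[neg] -= lr * weight * z

# Toy usage with random parameters: 200 frames, m = 512, context dimension n*k = 4*128.
rng = np.random.RandomState(0)
M = 0.01 * rng.randn(4 * 128, 512)
Y = 0.01 * rng.randn(200, 512)
wsabie_update(rng.randn(4 * 128), gold=3, candidates=[3, 17, 42], M=M, Y=Y)
```

A full training run would simply repeat this update over all training examples for several passes.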
4 Argument Identification

Here, we briefly describe the argument identification model used in our frame-semantic parsing experiments, post frame identification. Given x, the sentence with a marked predicate, the argument identification model assumes that the predicate frame y has been disambiguated. From a frame lexicon, we look up the set of semantic roles R_y that associate with y. This set also contains the null role r∅. From x, a rule-based candidate argument extraction algorithm extracts a set of spans A that could potentially serve as the overt arguments A_y for y (see §5.4-§5.5 for the details of the candidate argument extraction algorithms). By overtness, we mean the non-null instantiation of a semantic role in a frame-semantic parse.

• starting word of a
• POS of the starting word of a
• ending word of a
• POS of the ending word of a
• head word of a
• POS of the head word of a
• bag of words in a
• bag of POS tags in a
• a bias feature
• voice of the predicate use
• word cluster of a's head
• word cluster of a's head conjoined with word cluster of the predicate*
• dependency path between a's head and the predicate
• the set of dependency labels of the predicate's children
• dependency path conjoined with the POS tag of a's head
• dependency path conjoined with the word cluster of a's head
• position of a with respect to the predicate (before, after, overlap or identical)
• whether the subject of the predicate is missing (missingsubj)
• missingsubj, conjoined with the dependency path
• missingsubj, conjoined with the dependency path from the verb dominating the predicate to a's head
Table 1: Argument identification features. The span in consideration is termed a. Every feature in this list has two versions, one conjoined with the given role r and the other conjoined with both r and the frame y. The feature with a * superscript is only conjoined with the role to reduce its sparsity.

Learning  Given training data of the form ⟨⟨x^(i), y^(i), M^(i)⟩⟩_{i=1}^{N}, where M = {(r, a) : r ∈ R_y, a ∈ A ∪ A_y} (1), a set of tuples that associates each role r in R_y with a span a according to the gold data. Note that this mapping associates spans with the null role r∅ as well. We optimize the following log-likelihood to train our model:

\max_{\theta} \sum_{i=1}^{N} \sum_{j=1}^{|M^{(i)}|} \log p_{\theta}\big((r, a)_j \mid x, y, R_y\big) - C\,\|\theta\|_2^2

where p_θ is a log-linear model normalized over the set R_y, with features described in Table 1. We set C = 1.0 and use L-BFGS (Liu and Nocedal, 1989) for training.

Inference  Although our learning mechanism uses a local log-linear model, we perform inference globally on a per-frame basis by applying hard structural constraints. Following Das et al. (2014) and Punyakanok et al. (2008), we use the log-probability of the local classifiers as a score in an integer linear program (ILP) to assign roles subject to hard constraints described in §5.4 and §5.5. We use an off-the-shelf ILP solver for inference.

5 Experiments

In this section, we present our experiments and the results achieved. We evaluate our novel frame identification approach in isolation and also conjoined with argument identification, resulting in full frame-semantic structures; before presenting our model's performance we first focus on the datasets, baselines and the experimental setup.

5.1 Data

We evaluate our models on both FrameNet- and PropBank-style structures. For FrameNet, we use the full-text annotations in the FrameNet 1.5 release (see https://framenet.icsi.berkeley.edu), which was used by Das et al. (2014, §3.2). We used the same test set as Das et al., containing 23 documents with 4,458 predicates. Of the remaining 55 documents, 16 documents were randomly chosen for development (these documents are listed in appendix A). For experiments with PropBank, we used the Ontonotes corpus (Hovy et al., 2006), version 4.0, and only made use of the Wall Street Journal documents; we used sections 2-21 for training, section 24 for development and section 23 for testing. This resembles the setup used by Punyakanok et al. (2008). All the verb frame files in Ontonotes were used for creating our frame lexicon.

5.2 Frame Identification Baselines

For comparison, we implemented a set of baseline models, with varying feature configurations. The baselines use a log-linear model that models the following probability at training time:

p(y \mid x, \ell) = \frac{e^{\psi \cdot f(y, x, \ell)}}{\sum_{\bar{y} \in F_{\ell}} e^{\psi \cdot f(\bar{y}, x, \ell)}}   (2)

At test time, this model chooses the best frame as argmax_y ψ · f(y, x, ℓ), where the argmax iterates over the possible frames y ∈ F_ℓ if ℓ was seen in the lexicon or the training data, or y ∈ F if it was unseen, like the disambiguation scheme of §3. We train this model by maximizing L2-regularized log-likelihood, using L-BFGS; the regularization constant was set to 0.1 in all experiments. For comparison with our model from §3, which we call WSABIE EMBEDDING, we implemented two baselines with the log-linear model. Both the baselines use features very similar to the input representations described in §3.1. The first one computes the direct dependents and dependency paths as described in §3.1, but conjoins them with the word identity rather than a word embedding. Additionally, this model uses the un-conjoined words as backoff features. This would be a standard NLP approach for the frame identification problem, but is surprisingly competitive with the state of the art.
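As a concrete reading of the baseline classifier in Eq. 2, here is a minimal sketch (the feature names and the frame labels run.01/run.02 are invented for illustration; the actual feature templates follow §3.1):

```python
import math
from collections import defaultdict

def loglinear_frame_probs(features, candidate_frames, weights):
    """p(y | x, l) from Eq. 2: a softmax over the frames allowed for the lexical unit.

    features: feature name -> value for the predicate in context,
    weights: (frame, feature) -> learned weight psi.
    """
    scores = {}
    for y in candidate_frames:
        scores[y] = sum(v * weights.get((y, f), 0.0) for f, v in features.items())
    z = sum(math.exp(s) for s in scores.values())
    return {y: math.exp(s) / z for y, s in scores.items()}

# Hypothetical usage for "He runs the company." with an ambiguous predicate.
feats = {"dep:nsubj=he": 1.0, "dep:dobj=company": 1.0, "word:company": 1.0}
probs = loglinear_frame_probs(feats, ["run.01", "run.02"], defaultdict(float))
```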
We call this baseline LOG-LINEAR WORDS. The second baseline, tries to decouple the WSABIE training from the embedding input, and trains a log linear model using the embeddings. So the second baseline has the same input representation as WSABIE EMBEDDING but uses a log-linear model instead of WSABIE. We call this model LOG-LINEAR EMBEDDING. 5.3 Common Experimental Setup We process our PropBank and FrameNet training, development and test corpora with a shift-reduce dependency parser that uses the Stanford conventions (de Marneffe and Manning, 2013) and uses an arc-eager transition system with beam size of 8; the parser and its features are described by Zhang and Nivre (2011). Before parsing the data, it is tagged with a POS tagger trained with a conditional random field (Lafferty et al., 2001) with the following emission features: word, the word cluster, word suffixes of length 1, 2 and 3, capitalization, whether it has a hyphen, digit and punctuation. Beyond the bias transition feature, we have two cluster features for the left and right words in the transition. We use Brown clusters learned using the algorithm of Uszkoreit and Brants (2008) on a large English newswire corpus for cluster features. We use the same word clusters for the argument identification features in Table 1. We learn the initial embedding representations for our frame identification model (§3) using a deep neural language model similar to the one proposed by Bengio et al. (2003). We use 3 hidden layers each with 1024 neurons and learn a 128dimensional embedding from a large corpus containing over 100 billion tokens. In order to speed up learning, we use an unnormalized output layer and a hinge-loss objective. The objective tries to ensure that the correct word scores higher than a random incorrect word, and we train with minibatch stochastic gradient descent. 5.4 Experimental Setup for FrameNet Hyperparameters For our frame identification model with embeddings, we search for the WSABIE hyperparameters using the development data. SEMAFOR LEXICON FULL LEXICON Development Data Model All Ambiguous Rare All Ambiguous Rare LOG-LINEAR WORDS 89.21 72.33 88.22 89.28 72.33 88.37 LOG-LINEAR EMBEDDING 88.66 72.41 87.53 88.74 72.41 87.68 WSABIE EMBEDDING (§3) 90.78 76.43 90.18 90.90 76.83 90.18 SEMAFOR LEXICON FULL LEXICON Model All Ambiguous Rare Unseen All Ambiguous Rare Test Data Das et al. supervised 82.97 69.27 80.97 23.08 Das et al. best 83.60 69.19 82.31 42.67 LOG-LINEAR WORDS 84.53 70.55 81.65 27.27 87.33 70.55 87.19 LOG-LINEAR EMBEDDING 83.94 70.26 81.03 27.97 86.74 70.26 86.56 WSABIE EMBEDDING (§3) 86.49 73.39 85.22 46.15 88.41 73.10 88.93 Table 2: Frame identification results for FrameNet. See §5.6. SEMAFOR LEXICON FULL LEXICON Model Precision Recall F1 Precision Recall F1 Development Data LOG-LINEAR WORDS 76.97 63.37 69.51 77.02 63.55 69.64 WSABIE EMBEDDING (§3) 78.33 64.51 70.75 78.33 64.53 70.76 Test Data Das et al. supervised 67.81 60.68 64.05 Das et al. best 68.33 61.14 64.54 LOG-LINEAR WORDS 71.21 63.37 67.06 73.31 65.20 69.01 WSABIE EMBEDDING (§3) 73.00 64.87 68.69 74.29 66.02 69.91 Table 3: Full structure prediction results for FrameNet; this reports frame and argument identification performance jointly. We skip LOG-LINEAR EMBEDDING because it underperforms all other models by a large margin. 
We search for the stochastic gradient learning rate in {0.0001, 0.001, 0.01}, the margin γ ∈ {0.001, 0.01, 0.1, 1} and the dimensionality of the final vector space m ∈{256, 512}, to maximize the frame identification accuracy of ambiguous lexical units; by ambiguous, we imply lexical units that appear in the training data or the lexicon with more than one semantic frame. The underlined values are the chosen hyperparameters used to analyze the test data. Argument Candidates The candidate argument extraction method used for the FrameNet data, (as mentioned in §4) was adapted from the algorithm of Xue and Palmer (2004) applied to dependency trees. Since the original algorithm was designed for verbs, we added a few extra rules to handle non-verbal predicates: we added 1) the predicate itself as a candidate argument, 2) the span ranging from the sentence position to the right of the predicate to the rightmost index of the subtree headed by the predicate’s head; this helped capture cases like “a few months” (where few is the predicate and months is the argument), and 3) the span ranging from the leftmost index of the subtree headed by the predicate’s head to the position immediately before the predicate, for cases like “your gift to Goodwill” (where to is the predicate and your gift is the argument).10 10Note that Das et al. (2014) describe the state of the art in FrameNet-based analysis, but their argument identification strategy considered all possible dependency subtrees in Frame Lexicon In our experimental setup, we scanned the XML files in the “frames” directory of the FrameNet 1.5 release, which lists all the frames, the corresponding roles and the associated lexical units, and created a frame lexicon to be used in our frame and argument identification models. We noted that this renders every lexical unit as seen; in other words, at frame disambiguation time on our test set, for all instances, we only had to score the frames in Fℓfor a predicate with lexical unit ℓ(see §3 and §5.2). We call this setup FULL LEXICON. While comparing with prior state of the art on the same corpus, we noted that Das et al. (2014) found several unseen predicates at test time.11 For fair comparison, we took the lexical units for the predicates that Das et al. considered as seen, and constructed a lexicon with only those; training instances, if any, for the unseen predicates under Das et al.’s setup were thrown out as well. We call this setup SEMAFOR LEXICON.12 We also experimented on the set of unseen instances used by Das et al. ILP constraints For FrameNet, we used three ILP constraints during argument identification (§4). 1) each span could have only one role, 2) each core role could be present only once, and 3) all overt arguments had to be non-overlapping. a parse, resulting in a much larger search space. 11Instead of using the frame files, Das et al. built a frame lexicon from FrameNet’s exemplars and the training corpus. 12We got Das et al.’s seen predicates from the authors. Model All Ambiguous Rare LOG-LINEAR WORDS 94.21 90.54 93.33 LOG-LINEAR EMBEDDING 93.81 89.86 93.73 WSABIE EMBEDDING (§3) 94.79 91.52 92.55 Dev data ↑↓Test data Model All Ambiguous Rare LOG-LINEAR WORDS 94.74 92.07 91.32 LOG-LINEAR EMBEDDING 94.04 90.95 90.97 WSABIE EMBEDDING (§3) 94.56 91.82 90.62 Table 4: Frame identification accuracy results for PropBank. The model and the column names have the same semantics as Table 2. 
Model P R F1 LOG-LINEAR WORDS 80.02 75.58 77.74 WSABIE EMBEDDING (§3) 80.06 75.74 77.84 Dev data ↑↓Test data Model P R F1 LOG-LINEAR WORDS 81.55 77.83 79.65 WSABIE EMBEDDING (§3) 81.32 77.97 79.61 Table 5: Full frame-structure prediction results for Propbank. This is a metric that takes into account frames and arguments together. See §5.7 for more details. 5.5 Experimental Setup for PropBank Hyperparameters As in §5.4, we made a hyperparameter sweep in the same space. The chosen learning rate was 0.01, while the other values were γ = 0.01 and m = 512. Ambiguous lexical units were used for this selection process. Argument Candidates For PropBank we use the algorithm of Xue and Palmer (2004) applied to dependency trees. Frame Lexicon For the PropBank experiments we scanned the frame files for propositions in Ontonotes 4.0, and stored possible core roles for each verb frame. The lexical units were simply the verb associating with the verb frames. There were no unseen verbs at test time. ILP constraints We used the constraints of Punyakanok et al. (2008). 5.6 FrameNet Results Table 2 presents accuracy results on frame identification.13 We present results on all predicates, ambiguous predicates seen in the lexicon or the training data, and rare ambiguous predicates that appear ≤11 times in the training data. The WSABIE EMBEDDING model from §3 performs significantly better than the LOG-LINEAR WORDS baseline, while LOG-LINEAR EMBEDDING underperforms in every metric. For the SEMAFOR LEXICON setup, we also compare with the state of the art from Das 13We do not report partial frame accuracy that has been reported by prior work. Model P R F1 LOG-LINEAR WORDS 77.29 71.50 74.28 WSABIE EMBEDDING (§3) 77.13 71.32 74.11 Dev data ↑↓Test data Model P R F1 LOG-LINEAR WORDS 79.47 75.11 77.23 WSABIE EMBEDDING (§3) 79.36 75.04 77.14 Punyakanok et al. Collins 75.92 71.45 73.62 Punyakanok et al. Charniak 77.09 75.51 76.29 Punyakanok et al. Combined 80.53 76.94 78.69 Table 6: Argument only evaluation (semantic role labeling metrics) using the CoNLL 2005 shared task evaluation script (Carreras and M`arquez, 2005). Results from Punyakanok et al. (2008) are taken from Table 11 of that paper. et al. (2014), who used a semi-supervised learning method to improve upon a supervised latentvariable log-linear model. For unseen predicates from the Das et al. system, we perform better as well. Finally, for the FULL LEXICON setting, the absolute accuracy numbers are even better for our best model. Table 3 presents results on the full frame-semantic parsing task (measured by a reimplementation of the SemEval 2007 shared task evaluation script) when our argument identification model (§4) is used after frame identification. We notice similar trends as in Table 2, and our results outperform the previously published best results, setting a new state of the art. 5.7 PropBank Results Table 4 shows frame identification results on the PropBank data. On the development set, our best model performs with the highest accuracy on all and ambiguous predicates, but performs worse on rare ambiguous predicates. On the test set, the LOG-LINEAR WORDS baseline performs best by a very narrow margin. See §6 for a discussion. Table 5 presents results where we measure precision, recall and F1 for frames and arguments together; this strict metric penalizes arguments for mismatched frames, like in Table 3. We see the same trend as in Table 4. 
Finally, Table 6 presents SRL results that measures argument performance only, irrespective of the frame; we use the evaluation script from CoNLL 2005 (Carreras and M`arquez, 2005). We note that with a better frame identification model, our performance on SRL improves in general. Here, too, the embedding model barely misses the performance of the best baseline, but we are at par and sometimes better than the single parser setting of a state-of-the-art SRL system (Punyakanok et al., 2008).14 14The last row of Table 6 refers to a system which used the 6 Discussion For FrameNet, the WSABIE EMBEDDING model we propose strongly outperforms the baselines on all metrics, and sets a new state of the art. We believe that the WSABIE EMBEDDING model performs better than the LOG-LINEAR EMBEDDING baseline (that uses the same input representation) because the former setting allows examples with different labels and confusion sets to share information; this is due to the fact that all labels live in the same label space, and a single projection matrix is shared across the examples to map the input features to this space. Consequently, the WSABIE EMBEDDING model can share more information between different examples in the training data than the LOG-LINEAR EMBEDDING model. Since the LOGLINEAR WORDS model always performs better than the LOG-LINEAR EMBEDDING model, we conclude that the primary benefit does not come from the input embedding representation.15 On the PropBank data, we see that the LOGLINEAR WORDS baseline has roughly the same performance as our model on most metrics: slightly better on the test data and slightly worse on the development data. This can be partially explained with the significantly larger training set size for PropBank, making features based on words more useful. Another important distinction between PropBank and FrameNet is that the latter shares frames between multiple lexical units. The effect of this is clearly observable from the “Rare” column in Table 4. WSABIE EMBEDDING performs poorly in this setting while LOG-LINEAR EMBEDDING performs well. Part of the explanation has to do with the specifics of WSABIE training. Recall that the WSABIE EMBEDDING model needs to estimate the label location in Rm for each frame. In other words, it must estimate 512 parameters based on at most 10 training examples. However, since the input representation is shared across all frames, every other training example from all the lexical units affects the optimal estimate, since they all modify the joint parameter matrix M. By contrast, in the log-linear models each label has its own set of parameters, and they interact only via the normalization constant. The LOG-LINEAR WORDS model does not have this entanglement, but cannot share information between words. For PropBank, combination of two syntactic parsers as input. 15One could imagine training a WSABIE model with word features, but we did not perform this experiment. these drawbacks and benefits balance out and we see similar performance for LOG-LINEAR WORDS and LOG-LINEAR EMBEDDING. For FrameNet, estimating the label embedding is not as much of a problem because even if a lexical unit is rare, the potential frames can be frequent. For example, we might have seen the SENDING frame many times, even though telex.V is a rare lexical unit. In comparison to prior work on FrameNet, even our baseline models outperform the previous state of the art. 
A particularly interesting comparison is between our LOG-LINEAR WORDS baseline and the supervised model of Das et al. (2014). They also use a log-linear model, but they incorporate a latent variable that uses WordNet (Fellbaum, 1998) to get lexical-semantic relationships and smooths over frames for ambiguous lexical units. It is possible that this reduces the model’s power and causes it to over-generalize. Another difference is that when training the log-linear model, they normalize over all frames, while we normalize over the allowed frames for the current lexical unit. This would tend to encourage their model to expend more of its modeling power to rule out possibilities that will be pruned out at test time. 7 Conclusion We have presented a simple model that outperforms the prior state of the art on FrameNetstyle frame-semantic parsing, and performs at par with one of the previous-best single-parser systems on PropBank SRL. Unlike Das et al. (2014), our model does not rely on heuristics to construct a similarity graph and leverage WordNet; hence, in principle it is generalizable to varying domains, and to other languages. Finally, we presented results on PropBank-style semantic role labeling with a system that included the task of automatic verb frame identification, in tune with the FrameNet literature; we believe that such a system produces more interpretable output, both from the perspective of human understanding as well as downstream applications, than pipelines that are oblivious to the verb frame, only focusing on argument analysis. Acknowledgments We thank Emily Pitler for comments on an early draft, and the anonymous reviewers for their valuable feedback. References C. F. Baker, C. J. Fillmore, and J. B. Lowe. 1998. The berkeley framenet project. In Proceedings of COLING-ACL. C. Baker, M. Ellsworth, and K. Erk. 2007. SemEval2007 Task 19: Frame semantic structure extraction. In Proceedings of SemEval. Y. Bengio, R. Ducharme, P. Vincent, and C. Jauvin. 2003. A neural probabilistic language model. Journal of Machine Learning Research, 3:1137–1155. X. Carreras and L. M`arquez. 2004. Introduction to the CoNLL-2004 shared task: Semantic role labeling. In Proceedings of CoNLL. X. Carreras and L. M`arquez. 2005. Introduction to the CoNLL-2005 shared task: semantic role labeling. In Proceedings of CoNLL. R. Collobert and J. Weston. 2008. A unified architecture for natural language processing: Deep neural networks with multitask learning. In Proceedings of ICML. D. Das, N. Schneider, D. Chen, and N. A. Smith. 2010. Probabilistic frame-semantic parsing. In Proceedings of NAACL-HLT. D. Das, D. Chen, A. F. T. Martins, N. Schneider, and N. A. Smith. 2014. Frame-semantic parsing. Computational Linguistics, 40(1):9–56. M.-C. de Marneffe and C. D. Manning, 2013. Stanford typed dependencies manual. C. Fellbaum, editor. 1998. WordNet: an electronic lexical database. C. J. Fillmore, C. R. Johnson, and M. R. Petruck. 2003. Background to FrameNet. International Journal of Lexicography, 16(3):235–250. C. J. Fillmore. 1982. Frame Semantics. In Linguistics in the Morning Calm, pages 111–137. Hanshin Publishing Co., Seoul, South Korea. D. Gildea and D. Jurafsky. 2002. Automatic labeling of semantic roles. Computational Linguistics, 28(3):245–288. E. Hovy, M. Marcus, M. Palmer, L. Ramshaw, and R. Weischedel. 2006. Ontonotes: The 90 In Proceedings of NAACL-HLT. R. Johansson and P. Nugues. 2007. LTH: semantic structure extraction using nonprojective dependency trees. In Proceedings of SemEval. A. 
Klementiev, I. Titov, and B. Bhattarai. 2012. Inducing crosslingual distributed representations of words. In Proceedings of COLING. J. Lafferty, A. McCallum, and F. Pereira. 2001. Conditional random fields: Probabilistic models for segmenting and labeling sequence data. In Proceedings of ICML. D. C. Liu and J. Nocedal. 1989. On the limited memory BFGS method for large scale optimization. Mathematical Programming, 45(3):503 – 528. L. M`arquez, X. Carreras, K. C. Litkowski, and S. Stevenson. 2008. Semantic role labeling: an introduction to the special issue. Computational Linguistics, 34(2):145–159. A. Meyers, R. Reeves, C. Macleod, R. Szekely, V. Zielinska, B. Young, and R. Grishman. 2004. The NomBank project: An interim report. In Proceedings of NAACL/HLT Workshop on Frontiers in Corpus Annotation. J. Mitchell and M. Lapata. 2008. Vector-based models of semantic composition. In Proceedings of ACLHLT. M. Palmer, D. Gildea, and P. Kingsbury. 2005. The Proposition bank: An annotated corpus of semantic roles. Computational Linguistics, 31(1):71–106. V. Punyakanok, D. Roth, and W. Yih. 2008. The importance of syntactic parsing and inference in semantic role labeling. Computational Linguistics, 34(2):257–287. R. Socher, J. Pennington, E. H. Huang, A. Y. Ng, and C. D. Manning. 2011. Semi-supervised recursive autoencoders for predicting sentiment distributions. In Proceedings of EMNLP. J. Turian, L. Ratinov, and Y. Bengio. 2010. Word representations: A simple and general method for semi-supervised learning. In Proceedings of ACL, Stroudsburg, PA, USA. N. Usunier, D. Buffoni, and P. Gallinari. 2009. Ranking with ordered weighted pairwise classification. In ICML. J. Uszkoreit and T. Brants. 2008. Distributed word clustering for large scale class-based language modeling in machine translation. In Proceedings of ACL-HLT. K. Q. Weinberger and L. K. Saul. 2009. Distance metric learning for large margin nearest neighbor classification. Journal of Machine Learning Research, 10:207–244. J. Weston, S. Bengio, and N. Usunier. 2011. Wsabie: Scaling up to large vocabulary image annotation. In Proceedings of IJCAI. N. Xue and M. Palmer. 2004. Calibrating features for semantic role labeling. In Proceedings of EMNLP 2004. Y. Zhang and J. Nivre. 2011. Transition-based dependency parsing with rich non-local features. In Proceedings of ACL-HLT. Number Filename dev-1 LUCorpus-v0.3 20000420 xin eng-NEW.xml dev-2 NTI SouthAfrica Introduction.xml dev-3 LUCorpus-v0.3 CNN AARONBROWN ENG 20051101 215800.partial-NEW.xml dev-4 LUCorpus-v0.3 AFGP-2002-600045-Trans.xml dev-5 PropBank TicketSplitting.xml dev-6 Miscellaneous Hijack.xml dev-7 LUCorpus-v0.3 artb 004 A1 E1 NEW.xml dev-8 NTI WMDNews 042106.xml dev-9 C-4 C-4Text.xml dev-10 ANC EntrepreneurAsMadonna.xml dev-11 NTI LibyaCountry1.xml dev-12 NTI NorthKorea NuclearOverview.xml dev-13 LUCorpus-v0.3 20000424 nyt-NEW.xml dev-14 NTI WMDNews 062606.xml dev-15 ANC 110CYL070.xml dev-16 LUCorpus-v0.3 CNN ENG 20030614 173123.4-NEW-1.xml Table 7: List of files used as development set for the FrameNet 1.5 corpus. A Development Data Table 7 features a list of the 16 randomly selected documents from the FrameNet 1.5 corpus, which we used for development. The resultant development set consists of roughly 4,500 predicates. We use the same test set as in Das et al. (2014), containing 23 documents and 4,458 predicates.
2014
136
Proceedings of the 52nd Annual Meeting of the Association for Computational Linguistics, pages 1459–1469, Baltimore, Maryland, USA, June 23-25 2014. c⃝2014 Association for Computational Linguistics A Sense-Based Translation Model for Statistical Machine Translation Deyi Xiong and Min Zhang∗ Provincial Key Laboratory for Computer Information Processing Technology Soochow University, Suzhou, China 215006 {dyxiong, minzhang}@suda.edu.cn Abstract The sense in which a word is used determines the translation of the word. In this paper, we propose a sense-based translation model to integrate word senses into statistical machine translation. We build a broad-coverage sense tagger based on a nonparametric Bayesian topic model that automatically learns sense clusters for words in the source language. The proposed sense-based translation model enables the decoder to select appropriate translations for source words according to the inferred senses for these words using maximum entropy classifiers. Our method is significantly different from previous word sense disambiguation reformulated for machine translation in that the latter neglects word senses in nature. We test the effectiveness of the proposed sensebased translation model on a large-scale Chinese-to-English translation task. Results show that the proposed model substantially outperforms not only the baseline but also the previous reformulated word sense disambiguation. 1 Introduction One of very common phenomena in language is that a plenty of words have multiple meanings. In the context of machine translation, such different meanings normally produce different target translations. Therefore a natural assumption is that word sense disambiguation (WSD) may contribute to statistical machine translation (SMT) by providing appropriate word senses for target translation selection with context features (Carpuat and Wu, 2005). ∗Corresponding author This assumption, however, has not been empirically verified in the early days. Carpuat and Wu (2005) adopt a standard formulation of WSD: predicting word senses that are defined on an ontology for ambiguous words. As they apply WSD to Chinese-to-English translation, they predict word senses from a Chinese ontology HowNet and project the predicted senses to English glosses provided by HowNet. These glosses, used as the sense predictions of their WSD system, are integrated into a word-based SMT system either to substitute for translation candidates of their translation model or to postedit the output of their SMT system. They report that WSD degenerates the translation quality of SMT. In contrast to the standard WSD formulation, Vickrey et al. (2005) reformulate the task of WSD for SMT as predicting possible target translations rather than senses for ambiguous source words. They show that such a reformulated WSD can improve the accuracy of a simplified word translation task. Following this WSD reformulation for SMT, Chan et al. (2007) integrate a state-of-the-art WSD system into a hierarchical phrase-based system (Chiang, 2005). Carpuat and Wu (2007) also use this reformulated WSD and further adapt it to multi-word phrasal disambiguation. They both report that the redefined WSD can significantly improve SMT. Although this reformulated WSD has proved helpful for SMT, one question is not answered yet: are pure word senses useful for SMT? The early WSD for SMT (Carpuat and Wu, 2005) uses projected word senses while the reformulated WSD sidesteps word senses. 
In this paper we would like to re-investigate this question by resorting to word sense induction (WSI) that is related to but different from WSD.1 We use 1We will discuss the relation and difference between WSI and WSD in Section 2. 1459 WSI to obtain word senses for large-scale data. With these word senses, we study in particular: 1) whether word senses can be directly integrated to SMT to improve translation quality and 2) whether WSI-based model can outperform the reformulated WSD in the context of SMT. In order to incorporate word senses into SMT, we propose a sense-based translation model that is built on maximum entropy classifiers. We use a nonparametric Bayesian topic model based WSI to infer word senses for source words in our training, development and test set. We collect training instances from the sense-tagged training data to train the proposed sense-based translation model. Specially, • Instead of predicting target translations for ambiguous source words as the previous reformulated WSD does, we first predict word senses for ambiguous source words. The predicted word senses together with other context features are then used to predict possible target translations for these words. • Instead of using word senses defined by a prespecified sense inventory as the standard WSD does, we incorporate word senses that are automatically learned from data into our sense-based translation model. We integrate the proposed sense-based translation model into a state-of-the-art SMT system and conduct experiments on Chines-to-English translation using large-scale training data. Results show that automatically learned word senses are able to improve translation quality and the sensebased translation model is better than the previous reformulated WSD. The remainder of this paper proceeds as follows. Section 2 introduces how we obtain word senses for our large-scale training data via a WSIbased broad-coverage sense tagger. Section 3 presents our sense-based translation model. Section 4 describes how we integrate the sense-based translation model into SMT. Section 5 elaborates our experiments on the large-scale Chinese-toEnglish translation task. Section 6 introduces related studies and highlights significant differences from them. Finally, we conclude in Section 7 with future directions. 2 WSI-Based Broad-Coverage Sense Tagger In order to obtain word senses for any source words, we build a broad-coverage sense tagger that relies on the nonparametric Bayesian model based word sense induction. We first describe WSI, especially WSI based on the Hierarchical Dirichlet Process (HDP) (Teh et al., 2004), a nonparametric version of Latent Dirichlet Allocation (LDA) (Blei et al., 2003). We then elaborate how we use the HDP-based WSI to predict sense clusters and to annotate source words in our training/development/test sets with these sense clusters. 2.1 Word Sense Induction Before we introduce WSI, we differentiate word type from word token. A word type refers to a unique word as a vocabulary entry while a word token is an occurrence of a word type. Take the first sentence of this paragraph as an example, it has 11 word tokens but 9 word types as there are two word tokens of the word type “we” and two tokens of the word type “word”. Word sense induction is a task of automatically inducing the underlying senses of word tokens given the surrounding contexts where the word tokens occur. The biggest difference from word sense disambiguation lies in that WSI does not rely on a predefined sense inventory. 
Such a prespecified list of senses is normally assumed by WSD which predicts senses of word tokens using this given inventory. From this perspective, WSI can be treated as a clustering problem while WSD a classification one. Various clustering algorithms, such as k-means, have been previously used for WSI. Recently, we have also witnessed that WSI is cast as a topic modeling problem where the sense clusters of a word type are considered as underlying topics (Brody and Lapata, 2009; Yao and Durme, 2011; Lau et al., 2012). We follow this line to tailor a topic modeling framework to induce word senses for our large-scale training data. In the topic-based WSI, surrounding context of a word token is considered as a pseudo document of the corresponding word type. A pseudo document is composed of either a bag of neighboring words of a word token, or the Part-to-Speech tags of neighboring words, or other contextual information elements. In this paper, we define a pseudo 1460 document as ±N neighboring words centered on a given word token. Table 1 shows examples of pseudo documents for a Chinese word “wǎngluò” (network). These two pseudo documents are extracted from a sentence listed in the first row of Table 1. Here we set N = 5. We can extract as many pseudo documents as the number of word tokens of a given word type that occur in training data. The collection of all these extracted pseudo documents of the given word type forms a corpus. We can induce topics on this corpus for each pseudo document via topic modeling approaches. Figure 1(a) shows the LDA-based WSI for a given word type W. The outer plate represents replicates of pseudo documents which consist of N neighboring words centered on the tokens of the given word type W. wj,i is the i-th word of the j-th pseudo document of the given word type W. sj,i is the sense assigned to the word wj,i. The conventional topic distribution θj for the jth pseudo document is taken as the the distribution over senses for the given word type W. The LDA generative process for sense induction is as follows: 1) for each pseudo document Dj, draw a per-document sense distribution θj from a Dirichlet distribution Dir(α); 2) for each item wj,i in the pseudo document Dj, 2.1) draw a sense cluster sj,i ∼ Multinomial(θj); and 2.2) draw a word wj,i ∼ φsj,i where φsj,i is the distribution of sense sj,i over words drawn from a Dirichlet distribution Dir(β). As LDA needs to manually specify the number of senses (topics), a better idea is to let the training data automatically determine the number of senses for each word type. Therefore we resort to the HDP, a natural nonparametric generalization of LDA, for the inference of both sense clusters and the number of sense clusters following Lau et al. (2012) and Yao and Durme (2011). The HDP for WSI is shown in Figure 1(b). The HDP generative process for word sense induction is as follows: 1) sample a base distribution G0 from a Dirichlet process DP(γ, H) with a concentration parameter γ and a base distribution H; 2) for each pseudo document Dj, sample a distribution Gj ∼ DP(α0, G0); 3) for each item wj,i in the pseudo document Dj, 3.1) sample a sense cluster sj,i ∼Gj; and 3.2) sample a word wj,i ∼ φsj,i. Here G0 is a global distribution over sense clusters that are shared by all Gj. 
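As a concrete illustration of the pseudo-document construction defined above, the sketch below collects one ±N-word window per token of a given word type; the function name and the decision to keep the centre token in the window (as in the Table 1 examples) are our own assumptions.

def extract_pseudo_documents(sentences, word_type, N=10):
    """Collect one pseudo document per token of `word_type`: the window of
    +/-N words centered on that token.  Keeping the centre token itself in
    the window is an implementation choice, following the Table 1 examples."""
    docs = []
    for tokens in sentences:
        for i, w in enumerate(tokens):
            if w == word_type:
                docs.append(tokens[max(0, i - N):i + N + 1])
    return docs

# e.g., one corpus of pseudo documents per source word type:
# corpus = extract_pseudo_documents(preprocessed_sentences, "wangluo", N=10)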
Gj is a per-document sense distribution over these sense wj,i α θj sj,i j ∈[1, J] ϕk k ∈[1, K] β G0 Gj sj,i j ∈[1, J] wj,i H γ α0 (a) (b) i ∈[1, Nj] i ∈[1, Nj] Figure 1: Graphical model representations of (a) Latent Dirichlet Allocation for WSI, (b) Hierarchical Dirichlet Process for WSI. clusters, which has its own document-specific proportions of these sense clusters. The hyperparameter γ, α0 in the HDP are both concentration parameters which control the variability of senses in the global distribution G0 and document-specific distribution Gj. The HDP/LDA-based WSI complies with the distributional hypothesis that states that words occurring in the same contexts tend to have similar meanings. We want to extend this hypothesis to machine translation by building sense-based translation model upon the HDP-based word sense induction: words with the same meanings tend to be translated in the same way. 2.2 Word Sense Tagging We adopt the HDP-based WSI to automatically predict word senses and use these predicted senses to annotate source words. We individually build a HDP-based WSI model per word type and train these models on the training data. The sense for a word token is defined as the most probable sense according to the per-document sense distribution Gj estimated for the corresponding pseudo document that represents the surrounding context of the word token. In particular, we take the following steps. 1461 tā tíxǐng wǒguó wǎngluò yùnyíng zhě zhùyì fángfàn hēikè gōngjī ,quèbǎo wǎngluò ānquán 。 Pseudo Documents for word “wǎngluò” tā tíxǐng wǒguó wǎngluò yùnyíng zhě zhùyì fángfàn hēikè fángfàn hēikè gōngjī ,quèbǎo wǎngluò ānquán 。 Table 1: Examples of pseudo documents extracted from a Chinese sentence (written in Chinese Pinyin). • Data preprocessing We preprocess the source side of our bilingual training data as well as development and test set by removing stop words and rare words. • Training Data Sense Annotation From the preprocessed training data, we extract all possible pseudo documents for each source word type. The collection of these extracted pseudo documents is used as a corpus to train a HDP-based WSI model for the source word type. In this way, we can train as many HDPbased WSI models as the number of word types kept after preprocessing. The sense with the highest probability output by the HDP-based WSI model for each pseudo document is used as the sense cluster to label the corresponding word token. • Test/Dev Data Sense Annotation From the preprocessed test data, we can also extract pseudo documents for each source word type that occur in the test/dev set. Using the trained HDP-based WSI model that correspond to the source word type in question, we can obtain the best sense assignment for each pseudo document of the word type, which in turn is used to annotate the corresponding word token in the test/dev data. 3 Sense-Based Translation Model In this section we present our sense-based translation model and describe the features that we use as well as the training process of this model. 3.1 Model The sense-based translation model estimates the probability that a source word c is translated into a target phrase ˜e given contextual information, including word senses that are obtained using the HDP-based WSI as described in the last section. We allow the target phrase ˜e to be either a phrase of length up to 3 words or NULL so that we can capture both multi-word and null translations. 
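Before turning to the classifier itself, the sense-tagging step of Section 2.2 can be sketched as below. Each trained per-word-type WSI model is assumed to expose a method returning the per-document sense distribution (a stand-in for Gj of the corresponding HDP model); the interface, the default-sense fallback, and the window handling follow the description above, but the names are hypothetical.

def tag_sentence(tokens, wsi_models, N=10, default_sense="s1"):
    """Annotate each token with its most probable sense cluster.  Each entry
    of `wsi_models` is assumed to expose sense_distribution(pseudo_doc)
    returning {sense_id: probability}, a stand-in for the per-document
    distribution G_j of the trained HDP model for that word type."""
    tags = []
    for i, w in enumerate(tokens):
        model = wsi_models.get(w)
        if model is None:                       # stop word, rare word, or unseen word
            tags.append(default_sense)
            continue
        pseudo_doc = tokens[max(0, i - N):i + N + 1]
        dist = model.sense_distribution(pseudo_doc)
        tags.append(max(dist, key=dist.get))    # most probable sense cluster
    return tags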
The essential component of the model is a maximum entropy (MaxEnt) based classifier that is used to predict the translation probability p(˜e|C(c)). The MaxEnt classifier can be formulated as follows. p(˜e|C(c)) = exp(∑ i θihi(˜e, C(c))) ∑ ˜e′ exp(∑ i θihi(˜e′, C(c))) (1) where his are binary features, θis are weights of these features, C(c) is the surrounding context of c. We define two groups of binary features: 1) lexicon features and 2) sense features. All these features take the following form. h(˜e, C(c)) = { 1, if ˜e = 2 and C(c).µ = ν 0, else (2) where 2 is a placeholder for a possible target translation (up to 3 words or NULL), µ is the name of a contextual (lexicon or sense) feature for the source word c, and the symbol ν represents the value of the feature µ. We extract both the lexicon and sense features from a ±k-word window centered on the word c. The lexicon features are defined as the preceding k words, the succeeding k words and the word c itself: {c−k, ..., c−1, c, c1, ..., ck}. The sense features are defined as the predicted senses for these words: {sc−k, ..., sc−1, sc, sc1, ..., sck}. As we also use these neighboring words to predict word senses in the HDP-based WSI, the information provided by the lexicon and sense features may overlap. This is not a issue for the MaxEnt classifier as it can deal with arbitrary overlapping features (Berger et al., 1996). One may also wonder whether the sense features can contribute to SMT new information that can NOT be obtained from the lexicon features. First, we believe that the senses induced by the HDP-based WSI provide a different view of data than that of the lexicon features. Second, the sense features contain semantic distributional information learned by the HDP across contexts where lexical words occur. Third, we empirically investigate this doubt by comparing two MaxEnt-based translation models 1462 in Section 5. One model only uses the lexicon features while the other integrates both the lexicon and sense features. The former model can be considered as a reformulated WSD for SMT as we described in Section 1. Given a source sentence {ci}I 1, the proposed sense-based translation model Ms can be denoted as Ms = ∏ ci∈W (˜ei|C(ci)) (3) where W is a set of words for which we build MaxEnt classifiers (see the next subsection for the discussion on how we build MaxEnt classifiers for our sense-based translation model). 3.2 Training The training of the proposed sense-based translation model is a process of estimating the feature weights θs in the equation (1). There are two strategies that we can use to obtain these weights. We can either build an all-in-one MaxEnt classifier that integrates all source word types c and their possible target translations ˜e or build multiple MaxEnt classifiers. If we train the all-in-one classifier, we have to predict millions of classes (target translations of length up to 3 words). This is normally intractable in practice. Therefore we take the second strategy: building multiple MaxEnt classifiers with one classifier per source word type. In order to train these classifiers, we have to collect training events from our word-aligned bilingual training data where source words are annotated with their corresponding sense clusters predicted by the HDP-based WSI as described in Section 2. A training event for a source word c consists of all contextual elements in the form of C(c).µ = ν defined in the last subsection and the target translation ˜e. 
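A minimal sketch of how the C(c).µ = ν contextual elements and a training event might be assembled from a sense-tagged, word-aligned sentence is given below; the feature naming and data structures are our own choices, not those of any particular MaxEnt toolkit.

def extract_context_features(tokens, senses, i, k=10):
    """Lexicon and sense features for source word tokens[i] in the
    C(c).mu = nu form: (feature name, feature value) pairs drawn from the
    +/-k-word window.  Feature naming is illustrative."""
    feats = []
    for off in range(-k, k + 1):
        j = i + off
        if 0 <= j < len(tokens):
            feats.append(("w%+d" % off, tokens[j]))   # lexicon feature
            feats.append(("s%+d" % off, senses[j]))   # sense feature
    return feats

def make_training_event(tokens, senses, i, target_phrase):
    """target_phrase: the aligned target side (up to 3 words) or 'NULL'."""
    return (target_phrase, extract_context_features(tokens, senses, i))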
Using these collected events, we can train our multiple classifiers. In practice, we do not build MaxEnt classifiers for source words that occur less than 10 times in the training data and run the MaxEnt toolkit in a parallel manner in order to expedite the training process. 4 Decoding with Sense-Based Translation Model The sense-based translation model described above is integrated into the log-linear translation model of SMT as a sense-based knowledge source. The weight of this model is tuned by the minimum source sentences HDP-based WSI sense-tagged source sentences MaxEnt classifiers sense-based translation model decoder target sentences other models Figure 2: Architecture of SMT system with the sense-based translation model. error rate training (MERT) (Och, 2003) together with other models such as the language model. Figure 2 shows the architecture of the SMT system enhanced with the sense-based translation model. Before we translate a source sentence, we use the HDP-based WSI models trained on the training data to predict senses for word tokens occurring in the source sentence as discussed in Section 2.2. Note that the HDP-based WSI does not predict senses for all words due to the following two reasons. • We do not train HDP-based WSI models for word types for which we extract more than T pseudo documents.2 • In the test/dev set, there are some words that are unseen in the training data. These unseen words, of course, do not have their HDPbased WSI models. For these words, we set a default sense (i.e. sc = s1). Sense tagging on test sentences can be done in a preprocessing step. Once we get sense clusters for word tokens in test sentences, we load pre-trained MaxEnt classifiers of the corresponding word types. During decoding, we keep word alignments for each translation rule. Whenever a new source word c is translated, we find its translation ˜e via the kept word alignments. We then calculate the translation probability p(˜e|C(c)) according to the equation (1) using the corresponding loaded classifier. In this way, we can easily calculate the sense-based translation model score. 2we set T = 20, 000. 1463 5 Experiments In this section, we carried out a series of experiments on Chinese-to-English translation using large-scale bilingual training data. In order to build the proposed sense-based translation model, we annotated the source part of the bilingual training data with word senses induced by the HDPbased WSI. With the trained sense-based translation model, we would like to investigate the following two questions: • Do word senses automatically induced by the HDP-based WSI improve translation quality? • Does the sense-based translation model outperform the reformulated WSD for SMT? 5.1 Setup Our baseline system is a state-of-the-art SMT system which adapts Bracketing Transduction Grammars (Wu, 1997) to phrasal translation and equips itself with a maximum entropy based reordering model (Xiong et al., 2006). We used LDC corpora LDC2004E12, LDC2004T08, LDC2005T10, LDC2003E14, LDC2002E18, LDC2005T06, LDC2003E07, LDC2004T07 as our bilingual training data which consists of 3.84M bilingual sentences, 109.5M English word tokens and 96.9M Chinese word tokens. We ran Giza++ on the training data in two directions and applied the “grow-diag-final” refinement rule (Koehn et al., 2003) to obtain word alignments. From the word-aligned data, we extracted weighted phrase pairs to generate our phrase table. 
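The decoding-time use of the per-word-type classifiers described in Section 4 can be sketched as follows, reusing the feature extractor sketched earlier and assuming each loaded classifier exposes a prob(target, features) method implementing Eq. 1; the score is accumulated in log space, and the interface is hypothetical.

import math

def sense_tm_logscore(source_tokens, senses, aligned_words, classifiers, k=10):
    """aligned_words: (source position, target phrase or 'NULL') pairs read
    off the word alignments kept inside the applied translation rules.
    Words without a trained classifier are skipped."""
    logscore = 0.0
    for i, target in aligned_words:
        clf = classifiers.get(source_tokens[i])
        if clf is None:
            continue
        feats = extract_context_features(source_tokens, senses, i, k)
        logscore += math.log(max(clf.prob(target, feats), 1e-12))
    return logscore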
We trained a 5-gram language model on the Xinhua section of the English Gigaword corpus (306 million words) using the SRILM toolkit (Stolcke, 2002) with the modified Kneser-Ney smoothing (Chen and Goodman, 1996). We trained our HDP-based WSI models via the C++ HDP toolkit3 (Wang and Blei, 2012). We set the hyperparameters γ = 0.1 and α0 = 1.0 following Lau et al. (2012).We extracted pseudo documents from a ±10-word window centered on the corresponding word token for each word type following Brody and Lapata (2009). As described in Section 2.2, we preprocessed the source part of our bilingual training data by removing stop words and infrequent words that occurs less than 3http://www.cs.cmu.edu/˜chongw/ resource.html Training Test # Word Types 67,723 4,348 # Total Pseudo Documents 27.73M 11,777 # Avg Pseudo Documents 427.79 2.71 # Total Senses 271,770 24,162 # Avg Senses 4.01 5.56 Table 2: Statistics of the HDP-based word sense induction on the training and test data. 10 times in the training data. From the preprocessed data, we extracted pseudo documents for each word type to train a HDP-based WSI model per word type. Note that we do not build WSI models for highly frequent words that occur more than 20,000 times in order to expedite the HDP training process. We trained our MaxEnt classifiers with the offthe-shelf MaxEnt tool.4 We performed 100 iterations of the L-BFGS algorithm implemented in the training toolkit on the collected training events from the sense-annotated data as described in Section 3.2. We set the Gaussian prior to 1 to avoid overfitting. On average, we obtained 346 classes (target translations) per source word type with the maximum number of classes being 256,243. It took an average of 57.5 seconds for training a Maxent classifier. We used the NIST MT03 evaluation test data as our development set, and the NIST MT05 as the test set. We evaluated translation quality with the case-insensitive BLEU-4 (Papineni et al., 2002) and NIST (Doddington, 2002). In order to alleviate the impact of MERT (Och, 2003) instability, we followed the suggestion of Clark et al. (2011) to run MERT three times and report average BLEU/NIST scores over the three runs for all our experiments. 5.2 Statistics and Examples of Word Senses Before we present our experiment results of the sense-based translation model, we study some statistics of the HDP-based WSI on the training and test data. We show these statistics in Table 2. There are 67,723 and 4,348 unique word types in the training and test data after the preprocessing step. For these word types, we extract 27.73M and 11,777 pseudo documents from the training and test set respectively. On average, there are 427.79 4http://homepages.inf.ed.ac.uk/ lzhang10/maxenttoolkit.html 1464 System BLEU(%) NIST STM (±5w) 34.64 9.4346 STM (±10w) 34.76 9.5114 STM (±15w) Table 4: Experiment results of the sense-based translation model (STM) with lexicon and sense features extracted from a window of size varying from ±5 to ±15 words on the development set. pseudo documents per word type in the training data and 2.71 in the test set. The HDP-based WSI learns 271,770 word senses in total using the pseudo documents collected from the training data and infers 24,162 word senses using the pseudo documents extracted from the test set. There are 4.01 different senses per word type in the training data and 5.56 in the test set on average. Table 3 illustrates six different senses of the word “运营(operate)” learned by the HDP-based WSI in the training data. 
We also show the most probable 10 words for each sense cluster. Sense s1 represents the operations of company or organization, sense s2 denotes country/institution/internation operations, sense s3 refers to market operations, sense s4 corresponds to business operations, sense s5 to public facility operations, and finally s6 to economy operations. 5.3 Impact of Window Size k used in MaxEnt Classifiers Our first group of experiments were conducted to investigate the impact of the window size k on translation performance in terms of BLEU/NIST on the development set. We extracted both the lexicon and sense features from a ±k-word window for our MaxEnt classifiers. We varied k from 5 to 15. Experiment results are shown in Table 4. We achieve the best performance when k = 10. This suggests that a ±10-word window context is sufficient for predicting target translations for ambiguous source words. We therefore set k = 10 for all experiments thereafter. 5.4 Effect of the Sense-Based Translation Model Our second group of experiments were carried out to investigate whether the sense-base translation model is able to improve translation quality by comparing the system enhanced with our sensebased translation model against the baseline. We also studied the impact of word senses induced by System BLEU(%) NIST Base 33.53 9.0561 STM (sense) 34.15 9.2596 STM (sense+lexicon) 34.73 9.4184 Table 5: Experiment results of the sense-based translation model (STM) against the baseline. System BLEU(%) NIST Base 33.53 9.0561 Reformulated WSD 34.16 9.3820 STM 34.73 9.4184 Table 6: Comparison results of the sense-based translation model vs. the reformulated WSD for SMT. the HDP-based WSI on translation performance by enforcing the sense-based translation model to use only sense features. Table 5 shows the experiment results. From the table, we can observe that • Our sense-based translation model achieves a substantial improvement of 1.2 BLEU points over the baseline. This indicates that the sense-based translation model is able to help select correct translations for ambiguous source words. • If we only integrate sense features into the sense-based translation model, we can still outperform the baseline by 0.62 BLEU points. This suggests that automatically induced word senses alone are indeed useful for machine translation. 5.5 Comparison to Word Sense Disambiguation As we mentioned in Section 3.1, our sense-based translation model can be degenerated to a reformulated WSD model for SMT if we only use lexicon features in MaxEnt classifiers. This allows us to directly compare our method against the reformulated WSD for SMT. Table 6 shows the comparison result. From the table, we can find that the sensebased translation model outperforms the reformulated WSD by 0.57 BLEU points. This suggests that the HDP-based word sense induction is better than the reformulated WSD in the context of SMT. 
Furthermore, as the reformulated WSD is a degenerated version of our sense-based translation model which only uses the lexicon features, 1465 s1 s2 s3 运营(operate) 运营(operate) 运营(operate) 设施(facility) 卫星(satellite) 市场(market) 计划(plan) 系统(system) 企业(enterprise) 基础(foundation) 国家(country) 竞争(competition) 项目(project) 提供(supply) 资产(assets) 公司(company) 国际(inter-nation) 利润(profit) 结构(structure) 机构(institution) 造成(cause) 服务(service) 进行(proceed) 费用(cost) 组织(organization) 中心(center) 资金(capital) 提供(supply) 合作(cooperate) 业务(business) s4 s5 s6 费用(cost) 城市(city) 处于(lie) 股价(share price) 处理(process) 拍照(photograph) 27000 自来水(tap-water) 119 科索沃(Kosovo) 工厂(factory) DPRK 额外(extra) 汽车(car) 保险(insurance) 工资(wage) 铁路(railway) 超支(overspend) 美元(dollar) 污水(sewage) 地位(position) 商业(commerce) 办事处(office) 经济(economy) 收入(income) 保本(break-even) 竞争者(competitor) 铁路局(railway administration) 部件(component) 平衡(balance) Table 3: Six different senses learned for the word “运营” from the training data. the sense features used in our model do provide new information that can not be obtained by the lexicon features. 6 Related Work In this section we introduce previous studies that are related to our work. For ease of comparison, we roughly divide them into 4 categories: 1) WSD for SMT, 2) topic-based WSI, 3) topic model for SMT and 4) lexical selection. WSD for SMT As we mentioned in Section 1, WSD has been successfully reformulated and adapted to SMT (Vickrey et al., 2005; Carpuat and Wu, 2007; Chan et al., 2007). Rather than predicting word senses for ambiguous words, the reformulated WSD directly predicts target translations for source words with context information. Our sense-based translation model also predicts target translations for SMT. The significant difference is that we predict word senses automatically learned from data and incorporate these predicted senses into SMT. Our experiments show that such word senses are able to improve translation quality. Topic-based WSI Topic-based WSI can be considered as the foundation of our work as we use it to obtain broad-coverage word senses to annotate our large-scale training data. Brody and Lapata (2009)’s work is the first attempt to approach WSI via topic modeling. They adapt LDA to word sense induction by building one topic model per word type. According to them, there are 3 significant differences between topic-based WSI and generic topic modeling. • First, the goal of topic-based WSI is to divide contexts of a word type into different categories, each representing a sense cluster. However generic topic models aim at topic distributions of documents. • Second, generic topic modeling explores whole documents for topic inference while topic-based WSI uses much smaller units in a document (e.g., surrounding words of a target word) for word sense induction. • Finally, the number of induced word senses in WSI is usually less than 10 while the number of inferred topics in generic topic modeling is tens or hundreds. As LDA-based WSI needs to manually specify the number of word senses, Yao and Durme (2011) propose HDP-based WSI that is capable of 1466 determining the number of senses for each word type according to training data. Lau et al. (2012) adopt the HDP-based WSI for novel sense detection and empirically show that the HDP-based WSI is better than the LDA-based WSI. We follow them to set the hyperparameters of HDP for training and incorporate automatically induced word senses into SMT in our work. Topic model for SMT Generic topic models are also explored for SMT. 
Zhao and Xing (2007) propose a bilingual topic model and integrate a topic-specific lexicon translation model into SMT. Tam et al. (2007) also explore a bilingual topic model for translation and language model adaptation. Foster and Kunh (2007) introduce a mixture model approach for translation model adaptation. Xiao et al. (2012) propose a topic-based similarity model for rule selection in hierarchical phrasebased translation. Xiong and Zhang (2013) employ a sentence-level topic model to capture coherence for document-level machine translation. The difference between our work and these previous studies on topic model for SMT lies in that we adopt topic-based WSI to obtain word senses rather than generic topics and integrate induced word senses into machine translation. Lexical selection Our work is also related to lexical selection in SMT where appropriate target lexical items for source words are selected by a statistical model with context information (Bangalore et al., 2007; Mauser et al., 2009). The reformulated WSD discussed before can also be considered as a lexical selection model. The significant difference from these studies is that we perform lexical selection using automatically induced word senses by the HDP on the source side. 7 Conclusion We have presented a sense-based translation model that integrates word senses into machine translation. We capitalize on the broad-coverage word sense induction system that is built on the nonparametric Bayesian HDP to learn sense clusters for words in the source language. We generate pseudo documents for word tokens in the training/test data for the HDP-based WSI system to infer topics. The most probable topic inferred for a pseudo document is taken as the sense of the corresponding word token. We incorporate these learned word senses as translation evidences into maximum entropy classifiers which form the foundation of the proposed sense-based translation model. We carried out a series of experiments to validate the effectiveness of the sense-based translation by comparing the model against the baseline and the previous reformulated WSD. Our experiment results show that • The sense-based translation model is able to substantially improve translation quality in terms of both BLEU and NIST. • The sense-based translation model is also better than the previous reformulated WSD for SMT. • Word senses automatically induced by the HDP-based WSI on large-scale training data are very useful for machine translation. To the best of our knowledge, this is the first attempt to empirically verify the positive impact of word senses on translation quality. Comparing with macro topics of documents inferred by LDA with bag of words from the whole documents, word senses inferred by the HDPbased WSI can be considered as micro topics. In the future, we would like to explore both the micro and macro topics for machine translation. Additionally, we also want to induce sense clusters for words in the target language so that we can build sense-based language model and integrate it into SMT. We would like to investigate whether automatically learned senses of proceeding words are helpful for predicting succeeding words. Acknowledgement The work was sponsored by the National Natural Science Foundation of China under projects 61373095 and 61333018. We would like to thank three anonymous reviewers for their insightful comments. References Srinivas Bangalore, Patrick Haffner, and Stephan Kanthak. 2007. 
Statistical Machine Translation through Global Lexical Selection and Sentence Reconstruction. In Proceedings of the 45th Annual Meeting of the Association of Computational Linguistics, pages 152–159, Prague, Czech Republic, June. Association for Computational Linguistics. 1467 Adam L. Berger, Stephen A. Della Pietra, and Vincent J. Della Pietra. 1996. A Maximum Entropy Approach to Natural Language Processing. Computational Linguistics, 22(1):39–71. David M. Blei, Andrew Y. Ng, Michael I. Jordan, and John Lafferty. 2003. Latent Dirichlet Allocation. Journal of Machine Learning Research, 3:993–1022. Samuel Brody and Mirella Lapata. 2009. Bayesian Word Sense Induction. In Proceedings of the 12th Conference of the European Chapter of the ACL (EACL 2009), pages 103–111, Athens, Greece, March. Association for Computational Linguistics. Marine Carpuat and Dekai Wu. 2005. Word Sense Disambiguation vs. Statistical Machine Translation. In Proceedings of the 43rd Annual Meeting of the Association for Computational Linguistics (ACL’05), pages 387–394, Ann Arbor, Michigan, June. Association for Computational Linguistics. Marine Carpuat and Dekai Wu. 2007. Improving Statistical Machine Translation Using Word Sense Disambiguation. In Proceedings of the 2007 Joint Conference on Empirical Methods in Natural Language Processing and Computational Natural Language Learning (EMNLP-CoNLL), pages 61–72. Yee Seng Chan, Hwee Tou Ng, and David Chiang. 2007. Word Sense Disambiguation Improves Statistical Machine Translation. In Proceedings of the 45th Annual Meeting of the Association of Computational Linguistics, pages 33–40, Prague, Czech Republic, June. Association for Computational Linguistics. Stanley F. Chen and Joshua Goodman. 1996. An Empirical Study of Smoothing Techniques for Language Modeling. In Proceedings of the 34th Annual Meeting on Association for Computational Linguistics, ACL ’96, pages 310–318, Stroudsburg, PA, USA. Association for Computational Linguistics. David Chiang. 2005. A Hierarchical Phrase-Based Model for Statistical Machine Translation. In Proceedings of the 43rd Annual Meeting of the Association for Computational Linguistics (ACL’05), pages 263–270, Ann Arbor, Michigan, June. Association for Computational Linguistics. Jonathan H. Clark, Chris Dyer, Alon Lavie, and Noah A. Smith. 2011. Better Hypothesis Testing for Statistical Machine Translation: Controlling for Optimizer Instability. In Proceedings of the 49th Annual Meeting of the Association for Computational Linguistics: Human Language Technologies, pages 176–181, Portland, Oregon, USA, June. George Doddington. 2002. Automatic Evaluation of Machine Translation Quality Using N-gram Cooccurrence Statistics. In Proceedings of the Second International Conference on Human Language Technology Research, HLT ’02, pages 138–145, San Francisco, CA, USA. Morgan Kaufmann Publishers Inc. George Foster and Roland Kuhn. 2007. MixtureModel Adaptation for SMT. In Proc. of the Second Workshop on Statistical Machine Translation, pages 128–135, Prague, Czech Republic, June. Philipp Koehn, Franz Joseph Och, and Daniel Marcu. 2003. Statistical Phrase-Based Translation. In Proceedings of the 2003 Human Language Technology Conference of the North American Chapter of the Association for Computational Linguistics, pages 58–54, Edmonton, Canada, May-June. Jey Han Lau, Paul Cook, Diana McCarthy, David Newman, and Timothy Baldwin. 2012. Word Sense Induction for Novel Sense Detection. 
In Proceedings of the 13th Conference of the European Chapter of the Association for Computational Linguistics, pages 591–601, Avignon, France, April. Association for Computational Linguistics. Arne Mauser, Saˇsa Hasan, and Hermann Ney. 2009. Extending Statistical Machine Translation with Discriminative and Trigger-Based Lexicon Models. In Proceedings of the 2009 Conference on Empirical Methods in Natural Language Processing, pages 210–218, Singapore, August. Association for Computational Linguistics. Franz Josef Och. 2003. Minimum Error Rate Training in Statistical Machine Translation. In Proceedings of the 41st Annual Meeting of the Association for Computational Linguistics, pages 160–167, Sapporo, Japan, July. Association for Computational Linguistics. Kishore Papineni, Salim Roukos, Todd Ward, and WeiJing Zhu. 2002. Bleu: a Method for Automatic Evaluation of Machine Translation. In Proceedings of 40th Annual Meeting of the Association for Computational Linguistics, pages 311–318, Philadelphia, Pennsylvania, USA, July. Yik-Cheung Tam, Ian R. Lane, and Tanja Schultz. 2007. Bilingual LSA-based adaptation for statistical machine translation. Machine Translation, 21(4):187–207. Yee Whye Teh, Michael I. Jordan, Matthew J. Beal, and David M. Blei. 2004. Hierarchical Dirichlet processes. Journal of the American Statistical Association, 101. David Vickrey, Luke Biewald, Marc Teyssier, and Daphne Koller. 2005. Word-Sense Disambiguation for Machine Translation. In HLT/EMNLP. The Association for Computational Linguistics. C. Wang and D. M. Blei. 2012. A Split-Merge MCMC Algorithm for the Hierarchical Dirichlet Process. ArXiv e-prints, January. 1468 Dekai Wu. 1997. Stochastic inversion transduction grammars and bilingual parsing of parallel corpora. Computational Linguistics, 23(3):377–403. Xinyan Xiao, Deyi Xiong, Min Zhang, Qun Liu, and Shouxun Lin. 2012. A Topic Similarity Model for Hierarchical Phrase-based Translation. In Proceedings of the 50th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers), pages 750–758, Jeju Island, Korea, July. Association for Computational Linguistics. Deyi Xiong and Min Zhang. 2013. A Topic-Based Coherence Model for Statistical Machine Translation. In Proceedings of the Twenty-Seventh AAAI Conference on Artificial Intelligence (AAAI-13), Bellevue, Washington, USA, July. Deyi Xiong, Qun Liu, and Shouxun Lin. 2006. Maximum Entropy Based Phrase Reordering Model for Statistical Machine Translation. In Proceedings of the 21st International Conference on Computational Linguistics and 44th Annual Meeting of the Association for Computational Linguistics, pages 521–528, Sydney, Australia, July. Xuchen Yao and Benjamin Van Durme. 2011. Nonparametric Bayesian Word Sense Induction. In Proceedings of TextGraphs-6: Graph-based Methods for Natural Language Processing, pages 10–14, Portland, Oregon, June. Association for Computational Linguistics. Bin Zhao and Eric P. Xing. 2007. HM-BiTAM: Bilingual Topic Exploration, Word Alignment, and Translation. In Proc. NIPS 2007. 1469
2014
137
Proceedings of the 52nd Annual Meeting of the Association for Computational Linguistics, pages 1470–1480, Baltimore, Maryland, USA, June 23-25 2014. c⃝2014 Association for Computational Linguistics Recurrent Neural Networks for Word Alignment Model Akihiro Tamura∗, Taro Watanabe, Eiichiro Sumita National Institute of Information and Communications Technology 3-5 Hikaridai, Seika-cho, Soraku-gun, Kyoto, JAPAN [email protected], {taro.watanabe, eiichiro.sumita}@nict.go.jp Abstract This study proposes a word alignment model based on a recurrent neural network (RNN), in which an unlimited alignment history is represented by recurrently connected hidden layers. We perform unsupervised learning using noise-contrastive estimation (Gutmann and Hyv¨arinen, 2010; Mnih and Teh, 2012), which utilizes artificially generated negative samples. Our alignment model is directional, similar to the generative IBM models (Brown et al., 1993). To overcome this limitation, we encourage agreement between the two directional models by introducing a penalty function that ensures word embedding consistency across two directional models during training. The RNN-based model outperforms the feed-forward neural network-based model (Yang et al., 2013) as well as the IBM Model 4 under Japanese-English and French-English word alignment tasks, and achieves comparable translation performance to those baselines for Japanese-English and Chinese-English translation tasks. 1 Introduction Automatic word alignment is an important task for statistical machine translation. The most classical approaches are the probabilistic IBM models 1-5 (Brown et al., 1993) and the HMM model (Vogel et al., 1996). Various studies have extended those models. Yang et al. (2013) adapted the ContextDependent Deep Neural Network for HMM (CDDNN-HMM) (Dahl et al., 2012), a type of feedforward neural network (FFNN)-based model, to ∗The first author is now affiliated with Knowledge Discovery Research Laboratories, NEC Corporation, Nara, Japan. the HMM alignment model and achieved state-ofthe-art performance. However, the FFNN-based model assumes a first-order Markov dependence for alignments. Recurrent neural network (RNN)-based models have recently demonstrated state-of-the-art performance that outperformed FFNN-based models for various tasks (Mikolov et al., 2010; Mikolov and Zweig, 2012; Auli et al., 2013; Kalchbrenner and Blunsom, 2013; Sundermeyer et al., 2013). An RNN has a hidden layer with recurrent connections that propagates its own previous signals. Through the recurrent architecture, RNN-based models have the inherent property of modeling long-span dependencies, e.g., long contexts, in input data. We assume that this property would fit with a word alignment task, and we propose an RNN-based word alignment model. Our model can maintain and arbitrarily integrate an alignment history, e.g., bilingual context, which is longer than the FFNN-based model. The NN-based alignment models are supervised models. Unfortunately, it is usually difficult to prepare word-by-word aligned bilingual data. Yang et al. (2013) trained their model from word alignments produced by traditional unsupervised probabilistic models. However, with this approach, errors induced by probabilistic models are learned as correct alignments; thus, generalization capabilities are limited. 
To solve this problem, we apply noise-contrastive estimation (NCE) (Gutmann and Hyv¨arinen, 2010; Mnih and Teh, 2012) for unsupervised training of our RNN-based model without gold standard alignments or pseudo-oracle alignments. NCE artificially generates bilingual sentences through samplings as pseudo-negative samples, and then trains the model such that the scores of the original bilingual sentences are higher than those of the sampled bilingual sentences. Our RNN-based alignment model has a direc1470 tion, such as other alignment models, i.e., from f (source language) to e (target language) and from e to f. It has been proven that the limitation may be overcome by encouraging two directional models to agree by training them concurrently (Matusov et al., 2004; Liang et al., 2006; Grac¸a et al., 2008; Ganchev et al., 2008). The motivation for this stems from the fact that model and generalization errors by the two models differ, and the models must complement each other. Based on this motivation, our directional models are also simultaneously trained. Specifically, our training encourages word embeddings to be consistent across alignment directions by introducing a penalty term that expresses the difference between embedding of words into an objective function. This constraint prevents each model from overfitting to a particular direction and leads to global optimization across alignment directions. This paper presents evaluations of JapaneseEnglish and French-English word alignment tasks and Japanese-to-English and Chinese-to-English translation tasks. The results illustrate that our RNN-based model outperforms the FFNN-based model (up to +0.0792 F1-measure) and the IBM Model 4 (up to +0.0703 F1-measure) for the word alignment tasks. For the translation tasks, our model achieves up to 0.74% gain in BLEU as compared to the FFNN-based model, which matches the translation qualities of the IBM Model 4. 2 Related Work Various word alignment models have been proposed. These models are roughly clustered into two groups: generative models, such as those proposed by Brown et al. (1993), Vogel et al. (1996), and Och and Ney (2003), and discriminative models, such as those proposed by Taskar et al. (2005), Moore (2005), and Blunsom and Cohn (2006). 2.1 Generative Alignment Model Given a source language sentence fJ 1 = f1, ..., fJ and a target language sentence eI 1 = e1, ..., eI, fJ 1 is generated by eI 1 via the alignment aJ 1 = a1, ..., aJ. Each aj is a hidden variable indicating that the source word fj is aligned to the target word eaj. Usually, a “null” word e0 is added to the target language sentence and aJ 1 may contain aj = 0, which indicates that fj is not aligned to any target word. The probability of generating the sentence fJ 1 from eI 1 is defined as p(fJ 1 |eI 1) = ∑ aJ 1 p(fJ 1 , aJ 1 |eI 1). (1) The IBM Models 1 and 2 and the HMM model decompose it into an alignment probability pa and a lexical translation probability pt as p(fJ 1 , aJ 1 |eI 1) = J ∏ j=1 pa(aj|aj−1, j)pt(fj|eaj). (2) The three models differ in their definition of alignment probability. For example, the HMM model uses an alignment probability with a first-order Markov property: pa(aj|aj −aj−1). In addition, the IBM models 3-5 are extensions of these, which consider the fertility and distortion of each translated word. These models are trained using the expectationmaximization algorithm (Dempster et al., 1977) from bilingual sentences without word-level alignments (unlabeled training data). 
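The decomposition in Eq. 2 can be made concrete with the following sketch, which scores one alignment of a source sentence under table-lookup stand-ins for the alignment and lexical translation probabilities; starting the chain from a fixed previous alignment of 0 is a simplification of the usual initial-state handling.

import math

def log_p_f_a_given_e(f, e, a, p_align, p_trans):
    """Eq. 2: p(f, a | e) = prod_j pa(a_j | a_{j-1}, j) * pt(f_j | e_{a_j}).
    f: source words; e: target words with e[0] the null word; a[j] is the
    target position aligned to source position j.  p_align and p_trans are
    stand-ins for the learned probability tables."""
    lp = 0.0
    prev = 0                                     # simplified initial previous alignment
    for j, fj in enumerate(f):
        lp += math.log(p_align(a[j], prev, j))   # alignment (jump) probability
        lp += math.log(p_trans(fj, e[a[j]]))     # lexical translation probability
        prev = a[j]
    return lp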
Given a specific model, the best alignment (Viterbi alignment) of the sentence pair (fJ 1 , eI 1) can be found as ˆaJ 1 = argmax aJ 1 p(fJ 1 , aJ 1 |eI 1). (3) For example, the HMM model identifies the Viterbi alignment using the Viterbi algorithm. 2.2 FFNN-based Alignment Model As an instance of discriminative models, we describe an FFNN-based word alignment model (Yang et al., 2013), which is our baseline. An FFNN learns a hierarchy of nonlinear features that can automatically capture complex statistical patterns in input data. Recently, FFNNs have been applied successfully to several tasks, such as speech recognition (Dahl et al., 2012), statistical machine translation (Le et al., 2012; Vaswani et al., 2013), and other popular natural language processing tasks (Collobert and Weston, 2008; Collobert et al., 2011). Yang et al. (2013) have adapted a type of FFNN, i.e., CD-DNN-HMM (Dahl et al., 2012), to the HMM alignment model. Specifically, the lexical translation and alignment probability in Eq. 2 are computed using FFNNs as sNN(aJ 1 |fJ 1 , eI 1) = J ∏ j=1 ta(aj −aj−1|c(eaj−1)) ·tlex(fj, eaj|c(fj), c(eaj)), (4) 1471 Lookup Layer Hidden Layer Output Layer Input fj-1 e L L L L L L htanh(H× +BH) O× +BO aj-1 t ( , | , ) fj eaj e f j-1 j+1 lex z0 z1 fj fj+1 eaj eaj+1 z0 z1 aj-1 aj+1 Figure 1: FFNN-based model for computing a lexical translation score of (fj, eaj) where ta and tlex are an alignment score and a lexical translation score, respectively, sNN is a score of alignments aJ 1 , and “c(a word w)” denotes a context of word w. Note that the model uses nonprobabilistic scores rather than probabilities because normalization over all words is computationally expensive. The model finds the Viterbi alignment using the Viterbi algorithm, similar to the classic HMM model. Note that alignments in the FFNN-based model are also governed by first-order Markov dynamics because an alignment score depends on the previous alignment aj−1. Figure 1 shows the network structure with one hidden layer for computing a lexical translation probability tlex(fj, eaj|c(fj), c(eaj)). The model consists of a lookup layer, a hidden layer, and an output layer, which have weight matrices. The model receives a source and target word with their contexts as inputs, which are words in a predefined window (the window size is three in Figure 1). First, the lookup layer converts each input word into its word embedding by looking up its corresponding column in the embedding matrix (L), and then concatenates them. Let Vf (or Ve) be a set of source words (or target words) and M be a predetermined embedding length. L is a M × (|Vf| + |Ve|) matrix1. Word embeddings are dense, low dimensional, and real-valued vectors that can capture syntactic and semantic properties of the words (Bengio et al., 2003). The concatenation (z0) is then fed to the hidden layer to capture nonlinear relations. Finally, the output layer receives the output of the hidden layer (z1) and computes a lexical translation score. 1We add a special token ⟨unk⟩to handle unknown words and ⟨null⟩to handle null alignments to Vf and Ve The computations in the hidden and output layer are as follows2: z1 = f(H × z0 + BH), (5) tlex = O × z1 + BO, (6) where H, BH, O, and BO are |z1| × |z0|, |z1| × 1, 1×|z1|, and 1×1 matrices, respectively, and f(x) is an activation function. Following Yang et al. (2013), a “hard” version of the hyperbolic tangent, htanh(x)3, is used as f(x) in this study. 
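A minimal sketch of the lexical translation scorer in Eqs. 5-6 is given below: the lookup layer concatenates embedding columns into z0, one htanh hidden layer produces z1, and the output layer returns a scalar, non-probabilistic score. The shapes are illustrative, and the window construction and the separate alignment-score network are omitted.

import numpy as np

def htanh(x):
    """'Hard' hyperbolic tangent: -1 below -1, x in between, 1 above 1."""
    return np.clip(x, -1.0, 1.0)

def t_lex(src_window_ids, tgt_window_ids, L, H, B_H, O, B_O):
    """L is the M x (|Vf|+|Ve|) embedding matrix; the lookup layer
    concatenates the embedding columns of all input tokens into z0."""
    z0 = np.concatenate([L[:, i] for i in src_window_ids + tgt_window_ids])
    z1 = htanh(H @ z0 + B_H)          # Eq. 5
    return float(O @ z1 + B_O)        # Eq. 6, a non-probabilistic score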
The alignment model based on an FFNN is formed in the same manner as the lexical translation model. Each model is optimized by minimizing the following ranking loss with a margin using stochastic gradient descent (SGD)4, where gradients are computed by the back-propagation algorithm (Rumelhart et al., 1986): loss(θ) = ∑ (f,e)∈T max{0, 1 −sθ(a+|f, e) +sθ(a−|f, e)}, (7) where θ denotes the weights of layers in the model, T is a set of training data, a+ is the gold standard alignment, a−is the incorrect alignment with the highest score under θ, and sθ denotes the score defined by Eq. 4 as computed by the model under θ. 3 RNN-based Alignment Model This section proposes an RNN-based alignment model, which computes a score for alignments aJ 1 using an RNN: sNN(aJ 1 |fJ 1 , eI 1) = J ∏ j=1 tRNN(aj|aj−1 1 , fj, eaj), (8) where tRNN is the score of an alignment aj. The prediction of the j-th alignment aj depends on all preceding alignments aj−1 1 . Note that the proposed model also uses nonprobabilistic scores, similar to the FFNN-based model. The RNN-based model is illustrated in Figure 2. The model consists of a lookup layer, a hidden layer, and an output layer, which have weight 2Consecutive l hidden layers can be used: zl = f(Hl × zl−1 + BHl). For simplicity, this paper describes the model with 1 hidden layer. 3htanh(x) = −1 for x < −1, htanh(x) = 1 for x > 1, and htanh(x) = x for others. 4In our experiments, we used a mini-batch SGD instead of a plain SGD. 1472 O× +BO htanh(H× +R× +BH ) t ( | , , ) Lookup Layer Hidden Layer Output Layer Input L L d aj fj RNN eaj j-1 a1 fj eaj yj yj-1 yj d xj xj d yj-1 Figure 2: RNN-based alignment model matrices L, {Hd, Rd, Bd H}, and {O, BO}, respectively. Each matrix in the hidden layer (Hd, Rd, and Bd H) depends on alignment, where d denotes the jump distance from aj−1 to aj: d = aj − aj−1. In our experiments, we merge distances that are greater than 8 and less than -8 into the special “≥8” and “≤-8” distances, respectively. Specifically, the hidden layer has weight matrices {H≤−8, H−7, · · · , H7, H≥8, R≤−8, R−7, · · · , R7, R≥8, B≤−8 H , B−7 H , · · · , B7 H, B≥8 H } and computes yj using the corresponding matrices of the jump distance d. The Viterbi alignment is determined using the Viterbi algorithm, similar to the FFNN-based model, where the model is sequentially applied from f1 to fJ 5. When computing the score of the alignment between fj and eaj, the two words are input to the lookup layer. In the lookup layer, each of these words is converted to its word embedding, and then the concatenation of the two embeddings (xj) is fed to the hidden layer in the same manner as the FFNN-based model. Next, the hidden layer receives the output of the lookup layer (xj) and that of the previous hidden layer (yj−1). The hidden layer then computes and outputs the nonlinear relations between them. Note that the weight matrices used in this computation are embodied by the specific jump distance d. The output of the hidden layer (yj) is copied and fed to the output layer and the next hidden layer. Finally, the output layer computes the score of aj (tRNN(aj|aj−1 1 , fj, eaj)) from the output of the hidden layer (yj). Note that the FFNN-based model consists of two compo5Strictly speaking, we cannot apply the dynamic programming forward-backward algorithm (i.e., the Viterbi algorithm) due to the long alignment history of yi. Thus, the Viterbi alignment is computed approximately using heuristic beam search. 
In contrast, the proposed RNN produces a single score that is constructed in the hidden layer by employing the distance-dependent weight matrices. Specifically, the computations in the hidden and output layers are as follows:

$$y_j = f(H^d \times x_j + R^d \times y_{j-1} + B_H^d), \quad (9)$$
$$t_{RNN} = O \times y_j + B_O, \quad (10)$$

where $H^d$, $R^d$, $B_H^d$, $O$, and $B_O$ are $|y_j| \times |x_j|$, $|y_j| \times |y_{j-1}|$, $|y_j| \times 1$, $1 \times |y_j|$, and $1 \times 1$ matrices, respectively. Note that $|y_{j-1}| = |y_j|$. $f(x)$ is an activation function, which is the hard hyperbolic tangent, htanh($x$), in this study.

As described above, the RNN-based model has a hidden layer with recurrent connections. Through the recurrence, the proposed model compactly encodes the entire history of previous alignments in the hidden layer configuration $y_j$. Therefore, the proposed model can find alignments by taking advantage of the long alignment history, while the FFNN-based model considers only the last alignment.

4 Training

During training, we optimize the weight matrices of each layer (i.e., $L$, $H^d$, $R^d$, $B_H^d$, $O$, and $B_O$) following a given objective using a mini-batch SGD with batch size $D$, which converges faster than a plain SGD ($D = 1$). Gradients are computed by the back-propagation through time algorithm (Rumelhart et al., 1986), which unfolds the network in time ($j$) and computes gradients over the time steps. In addition, an $l_2$ regularization term is added to the objective to prevent the model from overfitting the training data.

The RNN-based model can be trained by a supervised approach, similar to the FFNN-based model, where training proceeds based on the ranking loss defined by Eq. 7 (Section 2.2). However, this approach requires gold standard alignments. To overcome this drawback, we propose an unsupervised method using NCE, which learns from unlabeled training data.

4.1 Unsupervised Learning

Dyer et al. (2011) presented an unsupervised alignment model based on contrastive estimation (CE) (Smith and Eisner, 2005). CE seeks to discriminate observed data from its neighborhood, which can be viewed as pseudo-negative samples. Dyer et al. (2011) regarded all possible alignments of the bilingual sentences given as training data ($T$) as the observed data, and those of the full translation search space ($\Omega$) as its neighborhood. We introduce this idea into a ranking loss with margin as

$$loss(\theta) = \max\Big\{0,\; 1 - \sum_{(f^+, e^+) \in T} E_{\Phi}[s_\theta(a \mid f^+, e^+)] + \sum_{(f^+, e^-) \in \Omega} E_{\Phi}[s_\theta(a \mid f^+, e^-)]\Big\}, \quad (11)$$

where $\Phi$ is the set of all possible alignments given $(f, e)$, $E_{\Phi}[s_\theta]$ is the expected value of the scores $s_\theta$ over $\Phi$, $e^+$ denotes a target language sentence in the training data, and $e^-$ denotes a pseudo-target language sentence. The first expectation term is for the observed data, and the second is for the neighborhood.

However, the computation over $\Omega$ is prohibitively expensive. To reduce the computation, we employ NCE, which uses sentences randomly sampled from all target language sentences in $\Omega$ as $e^-$, and we calculate the expected values by a beam search with beam width $W$ to truncate alignments with low scores. In our experiments, we set $W$ to 100. In addition, the above criterion is converted to an online version as

$$loss(\theta) = \sum_{f^+ \in T} \max\Big\{0,\; 1 - E_{GEN}[s_\theta(a \mid f^+, e^+)] + \frac{1}{N} \sum_{e^-} E_{GEN}[s_\theta(a \mid f^+, e^-)]\Big\}, \quad (12)$$

where $e^+$ is the target language sentence aligned to $f^+$ in the training data, i.e., $(f^+, e^+) \in T$, $e^-$ is a randomly sampled pseudo-target language sentence with length $|e^+|$, and $N$ denotes the number of pseudo-target language sentences per source sentence $f^+$.
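A compact sketch of the online contrastive criterion of Eq. 12 for a single source sentence follows. The two callables are assumed to be provided by surrounding training code (e.g., a beam search over GEN for the expectation, and either the simple uniform sampler shown here or the IBM-Model-1-filtered sampler described in the next paragraph); their names are illustrative only.

import random

def unsupervised_ranking_loss(f_plus, e_plus, expected_score, sample_negative, n_neg=50):
    # Sketch of Eq. 12 for one source sentence f+.
    #   expected_score(f, e): beam-approximated expectation E_GEN[s_theta(a | f, e)]
    #   sample_negative(e):   draws a pseudo-target sentence of length |e|
    pos = expected_score(f_plus, e_plus)
    neg = sum(expected_score(f_plus, sample_negative(e_plus)) for _ in range(n_neg)) / n_neg
    return max(0.0, 1.0 - pos + neg)

def simple_negative_sampler(target_vocab):
    # Simplest variant from the text: sample |e+| target words uniformly at random.
    # (The paper's implementation instead restricts candidates to words that co-occur
    # with the source words under IBM Model 1 with an l0 prior.)
    def sample(e_plus):
        return [random.choice(target_vocab) for _ in e_plus]
    return sample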
Note that $|e^+| = |e^-|$. $GEN$ is a subset of all possible word alignments $\Phi$, generated by beam search. In a simple implementation, each $e^-$ is generated by sampling $|e^+|$ times at random from the set of target words ($V_e$) and lining the samples up sequentially. To employ more discriminative negative samples, our implementation samples each word of $e^-$ from the set of target words that co-occur with $f_i \in f^+$ with probability above a threshold $C$ under the IBM Model 1 incorporating an $l_0$ prior (Vaswani et al., 2012). The IBM Model 1 with $l_0$ prior is convenient for reducing the translation candidates because it generates sparser alignments than the standard IBM Model 1.

4.2 Agreement Constraints

Both the FFNN-based and the RNN-based models are based on the HMM alignment model and are therefore asymmetric, i.e., they can represent one-to-many relations from the target side. Asymmetric models are usually trained in each alignment direction; the model proposed by Yang et al. (2013) is no exception. However, it has been demonstrated that encouraging directional models to agree improves alignment performance (Matusov et al., 2004; Liang et al., 2006; Graça et al., 2008; Ganchev et al., 2008). Inspired by their work, we introduce an agreement constraint into our learning. The constraint enforces agreement between the word embeddings of the two directions. The proposed method trains two directional models concurrently based on the following objectives, which incorporate a penalty term expressing the difference between the word embeddings:

$$\operatorname*{argmin}_{\theta_{FE}} \big\{ loss(\theta_{FE}) + \alpha \|\theta_{L_{EF}} - \theta_{L_{FE}}\| \big\}, \quad (13)$$
$$\operatorname*{argmin}_{\theta_{EF}} \big\{ loss(\theta_{EF}) + \alpha \|\theta_{L_{FE}} - \theta_{L_{EF}}\| \big\}, \quad (14)$$

where $\theta_{FE}$ (or $\theta_{EF}$) denotes the weights of the layers in the source-to-target (or target-to-source) alignment model, $\theta_L$ denotes the weights of the lookup layer, i.e., the word embeddings, and $\alpha$ is a parameter that controls the strength of the agreement constraint. $\|\theta\|$ indicates the norm of $\theta$; the 2-norm is used in our experiments. Equations 13 and 14 can be applied to both the supervised and the unsupervised approach: Equation 7 or Equation 12 is substituted for $loss(\theta)$ in supervised and unsupervised learning, respectively. The proposed constraint penalizes overfitting to a particular direction and enables the two directional models to be optimized globally across alignment directions.

Our unsupervised learning procedure is summarized in Algorithm 1.

Algorithm 1 Training Algorithm
Input: $\theta^1_{FE}$, $\theta^1_{EF}$, training data $T$, $MaxIter$, batch size $D$, $N$, $C$, $IBM1$, $W$, $\alpha$
1:   for all $t$ such that $1 \leq t \leq MaxIter$ do
2:     $\{(f^+, e^+)_D\} \leftarrow$ sample($D$, $T$)
3-1:   $\{(f^+, \{e^-\}_N)_D\} \leftarrow$ neg$_e$($\{(f^+, e^+)_D\}$, $N$, $C$, $IBM1$)
3-2:   $\{(e^+, \{f^-\}_N)_D\} \leftarrow$ neg$_f$($\{(f^+, e^+)_D\}$, $N$, $C$, $IBM1$)
4-1:   $\theta^{t+1}_{FE} \leftarrow$ update($(f^+, e^+, \{e^-\}_N)_D$, $\theta^t_{FE}$, $\theta^t_{EF}$, $W$, $\alpha$)
4-2:   $\theta^{t+1}_{EF} \leftarrow$ update($(e^+, f^+, \{f^-\}_N)_D$, $\theta^t_{EF}$, $\theta^t_{FE}$, $W$, $\alpha$)
5:   end for
Output: $\theta^{MaxIter+1}_{FE}$, $\theta^{MaxIter+1}_{EF}$

Table 1: Size of experimental datasets
                  Train     Dev      Test
BTEC                9 K       0       960
Hansards          1.1 M      37       447
FBIS (NIST03)     240 K     878       919
FBIS (NIST04)                       1,597
IWSLT              40 K   2,501       489
NTCIR             3.2 M   2,000     2,000

In Algorithm 1, line 2 randomly samples $D$ bilingual sentences $(f^+, e^+)_D$ from the training data $T$. Lines 3-1 and 3-2 generate $N$ pseudo-negative samples for each $f^+$ and $e^+$ based on the translation candidates of $f^+$ and $e^+$ found by the IBM Model 1 with $l_0$ prior, $IBM1$ (Section 4.1). Lines 4-1 and 4-2 update the weights in each layer following a given objective (Sections 4.1 and 4.2).
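Because the agreement penalty of Eqs. 13 and 14 couples only the two lookup layers, the update step in Algorithm 1 can be sketched as a pair of gradient steps on the embedding matrices. The function below illustrates the penalty term only, with the ranking-loss gradients treated as given; it is a sketch under these assumptions, not the authors' update routine.

import numpy as np

def agreement_step(L_fe, L_ef, grad_fe, grad_ef, alpha=0.1, lr=0.01):
    # One sketch-level update of the two lookup layers (word embeddings) under the
    # penalty alpha * ||L_ef - L_fe||_2 of Eqs. 13-14.  The two matrices are assumed
    # to index the same bilingual vocabulary in the same order, so their difference
    # is well defined; grad_fe / grad_ef are the ranking-loss gradients w.r.t. each L.
    diff = L_ef - L_fe
    norm = np.linalg.norm(diff) + 1e-12
    # d/dL_fe of alpha*||L_ef - L_fe|| is -alpha*diff/norm; for L_ef it is +alpha*diff/norm.
    new_L_fe = L_fe - lr * (grad_fe - alpha * diff / norm)
    new_L_ef = L_ef - lr * (grad_ef + alpha * diff / norm)
    return new_L_fe, new_L_ef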
Note that $\theta^t_{FE}$ and $\theta^t_{EF}$ are updated concurrently in each iteration, and $\theta^t_{EF}$ (or $\theta^t_{FE}$) is employed to enforce agreement between the word embeddings when updating $\theta^t_{FE}$ (or $\theta^t_{EF}$).

5 Experiment

5.1 Experimental Data

We evaluated the alignment performance of the proposed models on two tasks: Japanese-English word alignment with the Basic Travel Expression Corpus (BTEC) (Takezawa et al., 2002) and French-English word alignment with the Hansard dataset (Hansards) from the 2003 NAACL shared task (Mihalcea and Pedersen, 2003). In addition, we evaluated end-to-end translation performance on three tasks: a Chinese-to-English translation task with the FBIS corpus (FBIS), the IWSLT 2007 Japanese-to-English translation task (IWSLT) (Fordyce, 2007), and the NTCIR-9 Japanese-to-English patent translation task (NTCIR) (Goto et al., 2011).[6] Table 1 shows the sizes of our experimental datasets. Note that the development data was not used in the alignment tasks, i.e., BTEC and Hansards, because the hyperparameters of the alignment models were set by preliminary small-scale experiments.

[Footnote 6: We did not evaluate the translation performance on the Hansards data because the development data is very small and performance would be unreliable.]

The BTEC data consists of the first 9,960 sentence pairs in the training data for IWSLT, which were annotated with word alignments (Goh et al., 2010). We split these pairs into the first 9,000 as training data and the remaining 960 as test data. All the data in BTEC is word-aligned, and the training data in Hansards is unlabeled. In FBIS, we used the NIST02 evaluation data as the development data, and the NIST03 and NIST04 evaluation data as test data (NIST03 and NIST04).

5.2 Comparing Methods

We evaluated the proposed RNN-based alignment models against two baselines: the IBM Model 4 and the FFNN-based model with one hidden layer. The IBM Model 4 was trained by the previously presented model sequence scheme (Och and Ney, 2003) $1^5H^53^54^5$, i.e., five iterations of the IBM Model 1 followed by five iterations of the HMM model, etc., which is the default setting for GIZA++ (IBM4). For the FFNN-based model, we set the word embedding length $M$ to 30, the number of units of the hidden layer $|z_1|$ to 100, and the context window size to 5. Hence, $|z_0|$ is 300 (30 × 5 × 2). Following Yang et al. (2013), the FFNN-based model was trained by the supervised approach described in Section 2.2 (FFNNs). For the RNN-based models, we set $M$ to 30 and the number of units of each recurrent hidden layer $|y_j|$ to 100. Thus, $|x_j|$ is 60 (30 × 2). The number of units of each layer of the FFNN-based and RNN-based models and $M$ were set through preliminary experiments. To demonstrate the effectiveness of the proposed learning methods, we evaluated four types of RNN-based models: RNNs, RNNs+c, RNNu, and RNNu+c, where "s/u" denotes a supervised/unsupervised model and "+c" indicates that the agreement constraint was used.

In training all the models except IBM4, the weights of each layer were initialized first. For the weights of the lookup layer $L$, we first trained word embeddings for the source and target languages from each side of the training data and then set these embeddings in $L$, to avoid falling into local minima. The other weights were randomly initialized to [−0.1, 0.1].
For the pretraining, we used the RNNLM Toolkit[7] (Mikolov et al., 2010) with the default options. We mapped all words that occurred fewer than five times to the special token ⟨unk⟩. Next, each weight was optimized using the mini-batch SGD, where the batch size $D$ was 100, the learning rate was 0.01, and the $l_2$ regularization parameter was 0.1. Training stopped after 50 epochs. The other parameters were set as follows: $W$, $N$, and $C$ in the unsupervised learning were 100, 50, and 0.001, respectively, and $\alpha$ for the agreement constraint was 0.1.

In the translation tasks, we used the Moses phrase-based SMT system (Koehn et al., 2007). All Japanese and Chinese sentences were segmented by ChaSen[8] and the Stanford Chinese segmenter,[9] respectively. In training, long sentences with over 40 words were filtered out. Using the SRILM Toolkit (Stolcke, 2002) with modified Kneser-Ney smoothing, we trained a 5-gram language model on the English side of each training set for IWSLT and NTCIR, and a 5-gram language model on the Xinhua portion of the English Gigaword corpus for FBIS. The SMT weighting parameters were tuned by MERT (Och, 2003) on the development data.

[Footnote 7: http://www.fit.vutbr.cz/~imikolov/rnnlm/]
[Footnote 8: http://chasen-legacy.sourceforge.jp/]
[Footnote 9: http://nlp.stanford.edu/software/segmenter.shtml]

5.3 Word Alignment Results

Table 2: Word alignment performance (F1-measure)
Alignment      BTEC       Hansards
IBM4           0.4859     0.9029
FFNNs(I)       0.4770     0.9020
RNNs(I)        0.5053+    0.9068
RNNs+c(I)      0.5174+    0.9202+
RNNu           0.5307+    0.9037
RNNu+c         0.5562+    0.9275+
FFNNs(R)       0.8224     –
RNNs(R)        0.8798+    –
RNNs+c(R)      0.8921+    –

Table 2 shows the alignment performance measured by the F1-measure. Hereafter, MODEL(R) and MODEL(I) denote the MODEL trained from gold standard alignments and from word alignments found by the IBM Model 4, respectively. In Hansards, all models were trained from randomly sampled 100 K data.[10] We evaluated the word alignments produced by first applying each model in both directions and then combining the alignments using the "grow-diag-final-and" heuristic (Koehn et al., 2003). The significance test on word alignment performance was performed by the sign test with a 5% significance level. "+" in Table 2 indicates that the comparison with the corresponding baselines, IBM4 and FFNNs(R/I), is significant.

In Table 2, RNNu+c, which includes all our proposals, i.e., the RNN-based model, the unsupervised learning, and the agreement constraint, achieves the best performance for both BTEC and Hansards. The differences from the baselines are statistically significant. Table 2 shows that RNNs(R/I) outperforms FFNNs(R/I), and the difference is statistically significant in BTEC. These results demonstrate that capturing the long alignment history in the RNN-based model improves alignment performance. We discuss the difference in the RNN-based model's effectiveness between the language pairs in Section 6.1.

Table 2 also shows that RNNs+c(R/I) and RNNu+c achieve significantly better performance than RNNs(R/I) and RNNu, respectively, in both tasks. This indicates that the proposed agreement constraint is effective in training better models in both the supervised and unsupervised approaches. In BTEC, RNNu and RNNu+c significantly outperform RNNs(I) and RNNs+c(I), respectively, and their performance is comparable in Hansards. This indicates that our unsupervised learning benefits our models because the supervised models are adversely affected by errors in the automatically generated training data.
This is especially true when the quality of the training data, i.e., the performance of IBM4, is low.

5.4 Machine Translation Results

Table 3 shows the translation performance measured by the case-sensitive BLEU4 metric[11] (Papineni et al., 2002); each figure is the average BLEU of three different MERT runs. In NTCIR and FBIS, each alignment model was trained from the randomly sampled 100 K data,[10] and then a translation model was trained from all the training data, word-aligned by that alignment model. In addition, for a detailed comparison, we evaluated the SMT system where the IBM Model 4 was trained from all the training data (IBM4all). The significance test on translation performance was performed by the bootstrap method (Koehn, 2004) with a 5% significance level. "*" in Table 3 indicates that the comparison with both baselines, i.e., IBM4 and FFNNs(I), is significant.

[Footnote 10: Due to high computational cost, we did not use all the training data. Scaling up to larger datasets will be addressed in future work.]
[Footnote 11: We used mteval-v13a.pl as the evaluation tool (http://www.itl.nist.gov/iad/mig/tests/mt/2009/).]

Table 3: Translation performance (BLEU4 (%))
Alignment     IWSLT     NTCIR     FBIS NIST03    FBIS NIST04
IBM4all       46.47     27.91     25.90          28.34
IBM4          –         27.25     25.41          27.65
FFNNs(I)      46.38     27.05     25.45          27.61
RNNs(I)       46.43     27.24     25.47          27.56
RNNs+c(I)     46.51     27.12     25.55          27.73
RNNu          47.05*    27.79*    25.76*         27.91*
RNNu+c        46.97*    27.76*    25.84*         28.20*

Table 3 shows that better word alignment does not always result in better translation, which has been discussed previously (Yang et al., 2013). However, RNNu and RNNu+c outperform FFNNs(I) and IBM4 in all tasks. These results indicate that our proposals contribute to improving translation performance.[12] In addition, Table 3 shows that the proposed models are comparable to IBM4all in NTCIR and FBIS even though they are trained from only a small part of the training data.

[Footnote 12: We also confirmed the effectiveness of our models on the NIST05 and NTCIR-10 evaluation data.]

6 Discussion

6.1 Effectiveness of the RNN-based Alignment Model

[Figure 3: Word alignment examples. (a) Japanese-English alignment of "How long have you been learning English ?" / 「あなた は 英語 を 習い 始め て から どの くらい に なり ます か 。」, comparing FFNNs(R) (triangles) and RNNs(R); (b) French-English alignment of "they also have a role to play in the food chain ." / "eux aussi ont un rôle à jouer dans la chaîne alimentaire .", comparing FFNNs(I) (triangles) and RNNs(I).]

Table 4: Word alignment performance on BTEC with various sized training data
Alignment     40 K      9 K       1 K
IBM4          0.5467    0.4859    0.4128
RNNu+c        0.6004    0.5562    0.4842
RNNs+c(R)     –         0.8921    0.6063

Figure 3 shows word alignment examples from FFNNs and RNNs, where solid squares indicate the gold standard alignments. Figure 3 (a) shows that RNNs adequately identifies complicated long-distance alignments compared to FFNNs (e.g., the jagged alignments of "have you been learning" in Figure 3 (a)), because RNNs captures alignment paths based on the long alignment history, which can be viewed as phrase-level alignments, while FFNNs employs only the last alignment. In French-English word alignment, the most valuable clues are located locally, because English and French have similar word orders and their alignment has more one-to-one mappings than Japanese-English word alignment (Figure 3). Figure 3 (b) shows that both RNNs and FFNNs work for such simpler alignments. Therefore, the RNN-based model has less effect on French-English word alignment than on Japanese-English word alignment, as indicated in Table 2.
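For reference, the F1-measure used throughout Tables 2, 4, and 5 can be computed from predicted and gold link sets as in the following sketch; it is an illustrative scorer that treats all gold links as sure links, not the evaluation script used in the paper.

def alignment_f1(predicted, gold):
    # predicted, gold: sets of (source_position, target_position) link pairs.
    if not predicted or not gold:
        return 0.0
    tp = len(predicted & gold)
    precision = tp / len(predicted)
    recall = tp / len(gold)
    return 0.0 if precision + recall == 0 else 2 * precision * recall / (precision + recall)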
6.2 Impact of Training Data Size Table 4 shows the alignment performance on BTEC with various training data sizes, i.e., training data for IWSLT (40 K), training data for BTEC (9 K), and the randomly sampled 1 K data from the BTEC training data. Note that RNNs+c(R) cannot be trained from the 40 K data because the 40 K data does not have gold standard 1477 Alignment BTEC Hansards FFNNs(I) 0.4770 0.9020 FFNNs+c(I) 0.4854+ 0.9085+ FFNNu 0.5105+ 0.9026 FFNNu+c 0.5313+ 0.9144+ FFNNs(R) 0.8224 FFNNs+c(R) 0.8367+ Table 5: Word alignment performance of various FFNN-based models (F1-measure) word alignments. Table 4 demonstrates that the proposed RNNbased model outperforms IBM4 trained from the unlabeled 40 K data by employing either the 1 K labeled data or the 9 K unlabeled data, which is less than 25% of the training data for IBM4. Consequently, the SMT system using RNNu+c trained from a small part of training data can achieve comparable performance to that using IBM4 trained from all training data, which is shown in Table 3. 6.3 Effectiveness of Unsupervised Learning/Agreement Constraints The proposed unsupervised learning and agreement constraints can be applied to any NN-based alignment model. Table 5 shows the alignment performance of the FFNN-based models trained by our supervised/unsupervised approaches (s/u) with and without our agreement constraints. In Table 5, “+c” denotes that the agreement constraint was used, and “+” indicates that the comparison with its corresponding baseline, i.e., FFNNs(I/R), is significant in the sign test with a 5% significance level. Table 5 shows that FFNNs+c(R/I) and FFNNu+c achieve significantly better performance than FFNNs(R/I) and FFNNu, respectively, in both BTEC and Hansards. In addition, FFNNu and FFNNu+c significantly outperform FFNNs(I) and FFNNs+c(I), respectively, in BTEC. The performance of these models is comparable in Hansards. These results indicate that the proposed unsupervised learning and agreement constraint benefit the FFNN-based model, similar to the RNN-based model. 7 Conclusion We have proposed a word alignment model based on an RNN, which captures long alignment history through recurrent architectures. Furthermore, we proposed an unsupervised method for training our model using NCE and introduced an agreement constraint that encourages word embeddings to be consistent across alignment directions. Our experiments have shown that the proposed model outperforms the FFNN-based model (Yang et al., 2013) for word alignment and machine translation, and that the agreement constraint improves alignment performance. In future, we plan to employ contexts composed of surrounding words (e.g., c(fj) or c(eaj) in the FFNN-based model) in our model, even though our model implicitly encodes such contexts in the alignment history. We also plan to enrich each hidden layer in our model with multiple layers following the success of Yang et al. (2013), in which multiple hidden layers improved the performance of the FFNN-based model. In addition, we would like to prove the effectiveness of the proposed method for other datasets. Acknowledgments We thank the anonymous reviewers for their helpful suggestions and valuable comments on the first version of this paper. References Michael Auli, Michel Galley, Chris Quirk, and Geoffrey Zweig. 2013. Joint Language and Translation Modeling with Recurrent Neural Networks. In Proceedings of the 2013 Conference on Empirical Methods in Natural Language Processing, pages 1044– 1054. 
Yoshua Bengio, R´ejean Ducharme, Pascal Vincent, and Christian Janvin. 2003. A Neural Probabilistic Language Model. Journal of Machine Learning Research, 3:1137–1155. Phil Blunsom and Trevor Cohn. 2006. Discriminative Word Alignment with Conditional Random Fields. In Proceedings of the 21st International Conference on Computational Linguistics and 44th Annual Meeting of the Association for Computational Linguistics, pages 65–72. Peter F. Brown, Stephen A. Della Pietra, Vincent J. Della Pietra, and Robert L. Mercer. 1993. The Mathematics of Statistical Machine Translation: Parameter Estimation. Computational Linguistics, 19(2):263–311. Ronan Collobert and Jason Weston. 2008. A Unified Architecture for Natural Language Processing: Deep Neural Networks with Multitask Learning. In 1478 Proceedings of the 25th International Conference on Machine Learning, pages 160–167. Ronan Collobert, Jason Weston, L´eon Bottou, Michael Karlen, Koray Kavukcuoglu, and Pavel Kuksa. 2011. Natural Language Processing (Almost) from Scratch. Journal of Machine Learning Research, 12:2493–2537. George E. Dahl, Dong Yu, Li Deng, and Alex Acero. 2012. Context-Dependent Pre-trained Deep Neural Networks for Large Vocabulary Speech Recognition. Audio, Speech, and Language Processing, IEEE Transactions on, 20(1):30–42. A. P. Dempster, N. M. Laird, and D. B. Rubin. 1977. Maximum Likelihood from Incomplete Data via the EM Algorithm. Journal of the Royal Statistical Society, Series B, 39(1):1–38. Chris Dyer, Jonathan Clark, Alon Lavie, and Noah A. Smith. 2011. Unsupervised Word Alignment with Arbitrary Features. In Proceedings of the 49th Annual Meeting of the Association for Computational Linguistics: Human Language Technologies - Volume 1, pages 409–419. Cameron S. Fordyce. 2007. Overview of the IWSLT 2007 Evaluation Campaign. In Proceedings of the 4th International Workshop on Spoken Languaeg Translation, pages 1–12. Kuzman Ganchev, Jo˜ao V. Grac¸a, and Ben Taskar. 2008. Better Alignments = Better Translations? In Proceedings of the 46th Annual Conference of the Association for Computational Linguistics: Human Language Technologies, pages 986–993. Chooi-Ling Goh, Taro Watanabe, Hirofumi Yamamoto, and Eiichiro Sumita. 2010. Constraining a Generative Word Alignment Model with Discriminative Output. IEICE Transactions, 93-D(7):1976–1983. Isao Goto, Bin Lu, Ka Po Chow, Eiichiro Sumita, and Benjamin K. Tsou. 2011. Overview of the Patent Machine Translation Task at the NTCIR-9 Workshop. In Proceedings of the 9th NTCIR Workshop, pages 559–578. Jo˜ao V. Grac¸a, Kuzman Ganchev, and Ben Taskar. 2008. Expectation Maximization and Posterior Constraints. In Advances in Neural Information Processing Systems 20, pages 569–576. Michael Gutmann and Aapo Hyv¨arinen. 2010. NoiseContrastive Estimation: A New Estimation Principle for Unnormalized Statistical Models. In Proceedings of the 13st International Conference on Artificial Intelligence and Statistics, pages 297–304. Nal Kalchbrenner and Phil Blunsom. 2013. Recurrent Continuous Translation Models. In Proceedings of the 2013 Conference on Empirical Methods in Natural Language Processing, pages 1700–1709. Philipp Koehn, Franz Josef Och, and Daniel Marcu. 2003. Statistical Phrase-Based Translation. In Proceedings of the 2003 Human Language Technology Conference: North American Chapter of the Association for Computational Linguistics, pages 48–54. 
Philipp Koehn, Hieu Hoang, Alexandra Birch, Chris Callison-Burch, Marcello Federico, Nicola Bertoldi, Brooke Cowan, Wade Shen, Christine Moran, Richard Zens, Chris Dyer, Ondrej Bojar, Alexandra Constrantin, and Evan Herbst. 2007. Moses: Open Source Toolkit for Statistical Machine Translation. In Proceedings of the 45th Annual Meeting of the Association for Computational Linguistics on Interactive Poster and Demonstration Sessions, pages 177–180. Philipp Koehn. 2004. Statistical Significance Tests for Machine Translation Evaluation. In Proceedings of the 2004 Conference on Empirical Methods in Natural Language Processing, pages 388–395. Hai-Son Le, Alexandre Allauzen, and Franc¸ois Yvon. 2012. Continuous Space Translation Models with Neural Networks. In Proceedings of the 2012 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, pages 39–48. Percy Liang, Ben Taskar, and Dan Klein. 2006. Alignment by Agreement. In Proceedings of the Main Conference on Human Language Technology Conference of the North American Chapter of the Association of Computational Linguistics, pages 104– 111. Evgeny Matusov, Richard Zens, and Hermann Ney. 2004. Symmetric Word Alignments for Statistical Machine Translation. In Proceedings of the 20th International Conference on Computational Linguistics, pages 219–225. Rada Mihalcea and Ted Pedersen. 2003. An Evaluation Exercise for Word Alignment. In Proceedings of the HLT-NAACL 2003 Workshop on Building and Using Parallel Texts: Data Driven Machine Translation and Beyond, pages 1–10. Tomas Mikolov and Geoffrey Zweig. 2012. Context Dependent Recurrent Neural Network Language Model. In Proceedings of the 4th IEEE Workshop on Spoken Language Technology, pages 234– 239. Tomas Mikolov, Martin Karafi´at, Lukas Burget, Jan Cernock´y, and Sanjeev Khudanpur. 2010. Recurrent Neural Network based Language Model. In Proceedings of 11th Annual Conference of the International Speech Communication Association, pages 1045–1048. Andriy Mnih and Yee Whye Teh. 2012. A Fast and Simple Algorithm for Training Neural Probabilistic Language Models. In Proceedings of the 29th International Conference on Machine Learning, pages 1751–1758. 1479 Robert C. Moore. 2005. A Discriminative Framework for Bilingual Word Alignment. In Proceedings of Human Language Technology Conference and Conference on Empirical Methods in Natural Language Processing, pages 81–88. Franz Josef Och and Hermann Ney. 2003. A Systematic Comparison of Various Statistical Alignment Models. Computational Linguistics, 29:19–51. Franz Josef Och. 2003. Minimum Error Rate Training in Statistical Machine Translation. In Proceedings of the 41st Annual Meeting of the Association for Computational Linguistics, pages 160–167. Kishore Papineni, Salim Roukos, Todd Ward, and WeiJing Zhu. 2002. BLEU: a Method for Automatic Evaluation of Machine Translation. In Proceedings of 40th Annual Meeting of the Association for Computational Linguistics, pages 311–318. D. E. Rumelhart, G. E. Hinton, and R. J. Williams. 1986. Learning Internal Representations by Error Propagation. In D. E. Rumelhart and J. L. McClelland, editors, Parallel Distributed Processing, pages 318–362. MIT Press. Noah A. Smith and Jason Eisner. 2005. Contrastive Estimation: Training Log-Linear Models on Unlabeled Data. In Proceedings of the 43rd Annual Meeting of the Association for Computational Linguistics, pages 354–362. Andreas Stolcke. 2002. SRILM - An Extensible Language Modeling Toolkit. 
In Proceedings of International Conference on Spoken Language Processing, pages 901–904. Martin Sundermeyer, Ilya Oparin, Jean-Luc Gauvain, Ben Freiberg, Ralf Schl¨uter, and Hermann Ney. 2013. Comparison of Feedforward and Recurrent Neural Network Language Models. In IEEE International Conference on Acoustics, Speech, and Signal Processing, pages 8430–8434. Toshiyuki Takezawa, Eiichiro Sumita, Fumiaki Sugaya, Hirofumi Yamamoto, and Seiichi Yamamoto. 2002. Toward a Broad-coverage Bilingual Corpus for Speech Translation of Travel Conversations in the Real World. In Proceedings of the 3rd International Conference on Language Resources and Evaluation, pages 147–152. Ben Taskar, Simon Lacoste-Julien, and Dan Klein. 2005. A Discriminative Matching Approach to Word Alignment. In Proceedings of Human Language Technology Conference and Conference on Empirical Methods in Natural Language Processing, pages 73–80. Ashish Vaswani, Liang Huang, and David Chiang. 2012. Smaller Alignment Models for Better Translations: Unsupervised Word Alignment with the l0norm. In Proceedings of the 50th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers), pages 311–319. Ashish Vaswani, Yinggong Zhao, Victoria Fossum, and David Chiang. 2013. Decoding with Large-Scale Neural Language Models Improves Translation. In Proceedings of the 2013 Conference on Empirical Methods in Natural Language Processing, pages 1387–1392. Stephan Vogel, Hermann Ney, and Christoph Tillmann. 1996. Hmm-based Word Alignment in Statistical Translation. In Proceedings of the 16th International Conference on Computational Linguistics, pages 836–841. Nan Yang, Shujie Liu, Mu Li, Ming Zhou, and Nenghai Yu. 2013. Word Alignment Modeling with Context Dependent Deep Neural Network. In Proceedings of the 51st Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers), pages 166–175. 1480
Proceedings of the 52nd Annual Meeting of the Association for Computational Linguistics, pages 1481–1490, Baltimore, Maryland, USA, June 23-25 2014. c⃝2014 Association for Computational Linguistics A Constrained Viterbi Relaxation for Bidirectional Word Alignment Yin-Wen Chang Alexander M. Rush MIT CSAIL, Cambridge, MA 02139 {yinwen,srush}@ csail.mit.edu John DeNero UC Berkeley, Berkeley, CA 94720 denero@ cs.berkeley.edu Michael Collins Columbia University, New York, NY 10027 mcollins@ cs.columbia.edu Abstract Bidirectional models of word alignment are an appealing alternative to post-hoc combinations of directional word aligners. Unfortunately, most bidirectional formulations are NP-Hard to solve, and a previous attempt to use a relaxationbased decoder yielded few exact solutions (6%). We present a novel relaxation for decoding the bidirectional model of DeNero and Macherey (2011). The relaxation can be solved with a modified version of the Viterbi algorithm. To find optimal solutions on difficult instances, we alternate between incrementally adding constraints and applying optimality-preserving coarse-to-fine pruning. The algorithm finds provably exact solutions on 86% of sentence pairs and shows improvements over directional models. 1 Introduction Word alignment is a critical first step for building statistical machine translation systems. In order to ensure accurate word alignments, most systems employ a post-hoc symmetrization step to combine directional word aligners, such as IBM Model 4 (Brown et al., 1993) or hidden Markov model (HMM) based aligners (Vogel et al., 1996). Several authors have proposed bidirectional models that incorporate this step directly, but decoding under many bidirectional models is NP-Hard and finding exact solutions has proven difficult. In this paper, we describe a novel Lagrangianrelaxation based decoder for the bidirectional model proposed by DeNero and Macherey (2011), with the goal of improving search accuracy. In that work, the authors implement a dual decomposition-based decoder for the problem, but are only able to find exact solutions for around 6% of instances. Our decoder uses a simple variant of the Viterbi algorithm for solving a relaxed version of this model. The algorithm makes it easy to reintroduce constraints for difficult instances, at the cost of increasing run-time complexity. To offset this cost, we employ optimality-preserving coarseto-fine pruning to reduce the search space. The pruning method utilizes lower bounds on the cost of valid bidirectional alignments, which we obtain from a fast, greedy decoder. The method has the following properties: • It is based on a novel relaxation for the model of DeNero and Macherey (2011), solvable with a variant of the Viterbi algorithm. • To find optimal solutions, it employs an efficient strategy that alternates between adding constraints and applying pruning. • Empirically, it is able to find exact solutions on 86% of sentence pairs and is significantly faster than general-purpose solvers. We begin in Section 2 by formally describing the directional word alignment problem. Section 3 describes a preliminary bidirectional model using full agreement constraints and a Lagrangian relaxation-based solver. Section 4 modifies this model to include adjacency constraints. Section 5 describes an extension to the relaxed algorithm to explicitly enforce constraints, and Section 6 gives a pruning method for improving the efficiency of the algorithm. 
Experiments compare the search error and accuracy of the new bidirectional algorithm to several directional combiners and other bidirectional algorithms. Results show that the new relaxation is much more effective at finding exact solutions and is able to produce comparable alignment accuracy. 1481 ϵ montrez nous les documents ϵ let us see the documents Figure 1: An example e→f directional alignment for the sentences let us see the documents and montrez nous les documents, with I = 5 and J = 5. The indices i ∈[I]0 are rows, and the indices j ∈[J]0 are columns. The HMM alignment shown has transitions x(0, 1, 1) = x(1, 2, 3) = x(3, 3, 1) = x(1, 4, 4) = x(4, 5, 5) = 1. Notation We use lower- and upper-case letters for scalars and vectors, and script-case for sets e.g. X. For vectors, such as v ∈{0, 1}(I×J )∪J , where I and J are finite sets, we use the notation v(i, j) and v(j) to represent elements of the vector. Define d = δ(i) to be the indicator vector with d(i) = 1 and d(i′) = 0 for all i′ ̸= i. Finally define the notation [J] to refer to {1 . . . J} and [J]0 to refer to {0 . . . J}. 2 Background The focus of this work is on the word alignment decoding problem. Given a sentence e of length |e| = I and a sentence f of length |f| = J, our goal is to find the best bidirectional alignment between the two sentences under a given objective function. Before turning to the model of interest, we first introduce directional word alignment. 2.1 Word Alignment In the e→f word alignment problem, each word in e is aligned to a word in f or to the null word ϵ. This alignment is a mapping from each index i ∈ [I] to an index j ∈[J]0 (where j = 0 represents alignment to ϵ). We refer to a single word alignment as a link. A first-order HMM alignment model (Vogel et al., 1996) is an HMM of length I + 1 where the hidden state at position i ∈[I]0 is the aligned index j ∈[J]0, and the transition score takes into account the previously aligned index j′ ∈[J]0.1 Formally, define the set of possible HMM alignments as X ⊂{0, 1}([I]0×[J]0)∪([I]×[J]0×[J]0) with 1Our definition differs slightly from other HMM-based aligners in that it does not track the last ϵ alignment. X =                x : x(0, 0) = 1, x(i, j) = J X j′=0 x(j′, i, j) ∀i ∈[I], j ∈[J]0, x(i, j) = J X j′=0 x(j, i + 1, j′) ∀i ∈[I −1]0, j ∈[J]0 where x(i, j) = 1 indicates that there is a link between index i and index j, and x(j′, i, j) = 1 indicates that index i −1 aligns to index j′ and index i aligns to j. Figure 1 shows an example member of X. The constraints of X enforce backward and forward consistency respectively. If x(i, j) = 1, backward consistency enforces that there is a transition from (i −1, j′) to (i, j) for some j′ ∈[J]0, whereas forward consistency enforces a transition from (i, j) to (i + 1, j′) for some j′ ∈[J]0. Informally the constraints “chain” together the links. The HMM objective function f : X →R can be written as a linear function of x f(x; θ) = I X i=1 J X j=0 J X j′=0 θ(j′, i, j)x(j′, i, j) where the vector θ ∈R[I]×[J]0×[J]0 includes the transition and alignment scores. For a generative model of alignment, we might define θ(j′, i, j) = log(p(ei|fj)p(j|j′)). For a discriminative model of alignment, we might define θ(j′, i, j) = w · φ(i, j′, j, f, e) for a feature function φ and weights w (Moore, 2005; Lacoste-Julien et al., 2006). Now reverse the direction of the model and consider the f→e alignment problem. An f→e alignment is a binary vector y ∈Y where for each j ∈[J], y(i, j) = 1 for exactly one i ∈ [I]0. 
Define the set of HMM alignments $\mathcal{Y} \subset \{0,1\}^{([I]_0 \times [J]_0) \cup ([I]_0 \times [I]_0 \times [J])}$ as

$$\mathcal{Y} = \left\{ y : \begin{array}{l} y(0, 0) = 1, \\ y(i, j) = \sum_{i'=0}^{I} y(i', i, j) \quad \forall i \in [I]_0,\, j \in [J], \\ y(i, j) = \sum_{i'=0}^{I} y(i, i', j+1) \quad \forall i \in [I]_0,\, j \in [J-1]_0 \end{array} \right\}$$

Similarly define the objective function

$$g(y; \omega) = \sum_{j=1}^{J} \sum_{i=0}^{I} \sum_{i'=0}^{I} \omega(i', i, j)\, y(i', i, j)$$

with vector $\omega \in \mathbb{R}^{[I]_0 \times [I]_0 \times [J]}$.

[Figure 2: (a) An example alignment pair (x, y) satisfying the full agreement conditions. The x alignment is represented with circles and the y alignment with triangles. (b) An example f→e alignment y ∈ Y′ with relaxed forward constraints. Note that unlike an alignment from Y, multiple words may be aligned in a column and words may transition from non-aligned positions.]

Note that for both of these models we can solve the optimization problem exactly using the standard Viterbi algorithm for HMM decoding. The first can be solved in $O(IJ^2)$ time and the second in $O(I^2J)$ time.

3 Bidirectional Alignment

The directional bias of the e→f and f→e alignment models may cause them to produce differing alignments. To obtain the best single alignment, it is common practice to use a post-hoc algorithm to merge these directional alignments (Och et al., 1999). First, a directional alignment is found from each word in e to a word in f. Next, an alignment is produced in the reverse direction from f to e. Finally, these alignments are merged, either through intersection, union, or with an interpolation algorithm such as grow-diag-final (Koehn et al., 2003).

In this work, we instead consider a bidirectional alignment model that jointly considers both directional models. We begin in this section by introducing a simple bidirectional model that enforces full agreement between the directional models, and we give a relaxation for decoding it. Section 4 loosens this model to adjacent agreement.

3.1 Enforcing Full Agreement

Perhaps the simplest post-hoc merging strategy is to retain the intersection of the two directional models. The analogous bidirectional model enforces full agreement to ensure that the two alignments select the same non-null links, i.e.

$$x^*, y^* = \operatorname*{arg\,max}_{x \in \mathcal{X},\, y \in \mathcal{Y}} f(x) + g(y) \quad \text{s.t.} \quad x(i, j) = y(i, j) \;\; \forall i \in [I],\, j \in [J]$$

We refer to the optimal alignments for this problem as $x^*$ and $y^*$. Unfortunately this bidirectional decoding model is NP-Hard (a proof is given in Appendix A). As it is common for alignment pairs to have |f| or |e| over 40, exact decoding algorithms are intractable in the worst case.

Instead we will use Lagrangian relaxation for this model. At a high level, we will remove a subset of the constraints from the original problem and replace them with Lagrange multipliers. If we can solve this new problem efficiently, we may be able to get optimal solutions to the original problem. (See the tutorial by Rush and Collins (2012) describing the method.)

There are many possible subsets of constraints to consider relaxing. The relaxation we use preserves the agreement constraints while relaxing the Markov structure of the f→e alignment. This relaxation will make it simple to later re-introduce constraints in Section 5.

We relax the forward constraints of the set $\mathcal{Y}$. Without these constraints the y links are no longer chained together. This has two consequences: (1) for index j there may be any number of indices i such that y(i, j) = 1; (2) if y(i′, i, j) = 1 it is no longer required that y(i′, j−1) = 1. This gives a set $\mathcal{Y}'$ which is a superset of $\mathcal{Y}$:

$$\mathcal{Y}' = \left\{ y : y(0, 0) = 1, \;\; y(i, j) = \sum_{i'=0}^{I} y(i', i, j) \;\; \forall i \in [I]_0,\, j \in [J] \right\}$$

Figure 2(b) shows a possible $y \in \mathcal{Y}'$ with a valid unchained structure. To form the Lagrangian dual with relaxed forward constraints, we introduce a vector of Lagrange multipliers, $\lambda \in \mathbb{R}^{[I-1]_0 \times [J]_0}$, with one multiplier for each original constraint. The Lagrangian dual $L(\lambda)$ is defined as

$$L(\lambda) = \max_{\substack{x \in \mathcal{X},\, y \in \mathcal{Y}' \\ x(i,j)=y(i,j)}} f(x) + \sum_{i=1}^{I} \sum_{j=0}^{J} \sum_{i'=0}^{I} y(i', i, j)\,\omega(i', i, j) - \sum_{i=0}^{I} \sum_{j=0}^{J-1} \lambda(i, j) \Big( y(i, j) - \sum_{i'=0}^{I} y(i, i', j+1) \Big) \quad (1)$$
$$= \max_{\substack{x \in \mathcal{X},\, y \in \mathcal{Y}' \\ x(i,j)=y(i,j)}} f(x) + \sum_{i=1}^{I} \sum_{j=0}^{J} \sum_{i'=0}^{I} y(i', i, j)\,\omega'(i', i, j) \quad (2)$$
$$= \max_{\substack{x \in \mathcal{X},\, y \in \mathcal{Y}' \\ x(i,j)=y(i,j)}} f(x) + \sum_{i=1}^{I} \sum_{j=0}^{J} y(i, j) \max_{i' \in [I]_0} \omega'(i', i, j) \quad (3)$$
$$= \max_{\substack{x \in \mathcal{X},\, y \in \mathcal{Y}' \\ x(i,j)=y(i,j)}} f(x) + g'(y; \omega, \lambda) \quad (4)$$
This gives a set Y′ which is a superset of Y Y′ =  y : y(0, 0) = 1, y(i, j) = PI i′=0 y(i′, i, j) ∀i ∈[I]0, j ∈[J] Figure 2(b) shows a possible y ∈Y′ and a valid unchained structure. To form the Lagrangian dual with relaxed forward constraints, we introduce a vector of Lagrange multipliers, λ ∈R[I−1]0×[J]0, with one multiplier for each original constraint. The Lagrangian dual L(λ) is defined as max x∈X,y∈Y′, x(i,j)=y(i,j) f(x) + I X i=1 J X j=0 I X i′=0 y(i′, i, j)ω(i′, i, j)(1) − I X i=0 J−1 X j=0 λ(i, j) y(i, j) − I X i′=0 y(i, i′, j + 1) ! = max x∈X,y∈Y′, x(i,j)=y(i,j) f(x) + I X i=1 J X j=0 I X i′=0 y(i′, i, j)ω′(i′, i, j)(2) = max x∈X,y∈Y′, x(i,j)=y(i,j) f(x) + I X i=1 J X j=0 y(i, j) max i′∈[I]0 ω′(i′, i, j)(3) = max x∈X,y∈Y′, x(i,j)=y(i,j) f(x) + g′(y; ω, λ) (4) 1483 Line 2 distributes the λ’s and introduces a modified potential vector ω′ defined as ω′(i′, i, j) = ω(i′, i, j) −λ(i, j) + λ(i′, j −1) for all i′ ∈[I]0, i ∈[I]0, j ∈[J]. Line 3 utilizes the relaxed set Y′ which allows each y(i, j) to select the best possible previous link (i′, j −1). Line 4 introduces the modified directional objective g′(y; ω, λ) = I X i=1 J X j=0 y(i, j) max i′∈[I]0 ω′(i′, i, j) The Lagrangian dual is guaranteed to be an upper bound on the optimal solution, i.e. for all λ, L(λ) ≥f(x∗) + g(y∗). Lagrangian relaxation attempts to find the tighest possible upper bound by minimizing the Lagrangian dual, minλ L(λ), using subgradient descent. Briefly, subgradient descent is an iterative algorithm, with two steps. Starting with λ = 0, we iteratively 1. Set (x, y) to the arg max of L(λ). 2. Update λ(i, j) for all i ∈[I −1]0, j ∈[J]0, λ(i, j) ←λ(i, j) −ηt y(i, j) − I X i′=0 y(i, i′, j + 1)  . where ηt > 0 is a step size for the t’th update. If at any iteration of the algorithm the forward constraints are satisfied for (x, y), then f(x)+g(y) = f(x∗) + g(x∗) and we say this gives a certificate of optimality for the underlying problem. To run this algorithm, we need to be able to efficiently compute the (x, y) pair that is the arg max of L(λ) for any value of λ. Fortunately, since the y alignments are no longer constrained to valid transitions, we can compute these alignments by first picking the best f→e transitions for each possible link, and then running an e→f Viterbi-style algorithm to find the bidirectional alignment. The max version of this algorithm is shown in Figure 3. It consists of two steps. We first compute the score for each y(i, j) variable. We then use the standard Viterbi update for computing the x variables, adding in the score of the y(i, j) necessary to satisfy the constraints. procedure VITERBIFULL(θ, ω′) Let π, ρ be dynamic programming charts. ρ[i, j] ←max i′∈[I]0 ω′(i′, i, j) ∀i ∈[I], j ∈[J]0 π[0, 0] ←PJ j=1 max{0, ρ[0, j]} for i ∈[I], j ∈[J]0 in order do π[i, j] ←max j′∈[J]0 θ(j′, i, j) + π[i −1, j′] if j ̸= 0 then π[i, j] ←π[i, j] + ρ[i, j] return maxj∈[J]0 π[I, j] Figure 3: Viterbi-style algorithm for computing L(λ). For simplicity the algorithm shows the max version of the algorithm, arg max can be computed with back-pointers. 4 Adjacent Agreement Enforcing full agreement can be too strict an alignment criteria. DeNero and Macherey (2011) instead propose a model that allows near matches, which we call adjacent agreement. Adjacent agreement allows links from one direction to agree with adjacent links from the reverse alignment for a small penalty. Figure 4(a) shows an example of a valid bidirectional alignment under adjacent agreement. 
In this section we formally introduce adjacent agreement, and propose a relaxation algorithm for this model. The key algorithmic idea is to extend the Viterbi algorithm in order to consider possible adjacent links in the reverse direction. 4.1 Enforcing Adjacency Define the adjacency set K = {−1, 0, 1}. A bidirectional alignment satisfies adjacency if for all i ∈[I], j ∈[J], • If x(i, j) = 1, it is required that y(i+k, j) = 1 for exactly one k ∈K (i.e. either above, center, or below). We indicate which position with variables z↕ i,j ∈{0, 1}K • If x(i, j) = 1, it is allowed that y(i, j + k) = 1 for any k ∈K (i.e. either left, center, or right) and all other y(i, j′) = 0. We indicate which positions with variables z↔ i,j ∈{0, 1}K Formally for x ∈X and y ∈Y, the pair (x, y) is feasible if there exists a z from the set Z(x, y) ⊂ {0, 1}K2×[I]×[J] defined as Z(x, y) =                z : ∀i ∈[I], j ∈[J] z↕ i,j ∈{0, 1}K, z↔ i,j ∈{0, 1}K x(i, j) = X k∈K z↕ i,j(k), X k∈K z↔ i,j(k) = y(i, j), z↕ i,j(k) ≤y(i + k, j) ∀k ∈K : i + k > 0, x(i, j) ≥z↔ i,j−k(k) ∀k ∈K : j + k > 0 1484 ϵ montrez nous les documents ϵ let us see the documents (a) ϵ montrez nous les documents ϵ let us see the documents (b) Figure 4: (a) An alignment satisfying the adjacency constraints. Note that x(2, 1) = 1 is allowed because of y(1, 1) = 1, x(4, 3) = 1 because of y(3, 3), and y(3, 1) because of x(3, 2). (b) An adjacent bidirectional alignment in progress. Currently x(2, 2) = 1 with z↕(−1) = 1 and z↔(−1) = 1. The last transition was from x(1, 3) with z↔′(−1) = 1, z↔′(0) = 1, z↕′(0) = 1. Additionally adjacent, non-overlapping matches are assessed a penalty α calculated as h(z) = I X i=1 J X j=1 X k∈K α|k|(z↕ i,j(k) + z↔ i,j(k)) where α ≤0 is a parameter of the model. The example in Figure 4(a) includes a 3α penalty. Adding these penalties gives the complete adjacent agreement problem arg max z∈Z(x,y) x∈X,y∈Y f(x) + g(y) + h(z) Next, apply the same relaxation from Section 3.1, i.e. we relax the forward constraints of the f→e set. This yields the following Lagrangian dual L(λ) = max z∈Z(x,y) x∈X,y∈Y′ f(x) + g′(y; ω, λ) + h(z) Despite the new constraints, we can still compute L(λ) in O(IJ(I + J)) time using a variant of the Viterbi algorithm. The main idea will be to consider possible adjacent settings for each link. Since each z↕ i,j and z↔ i,j only have a constant number of settings, this does not increase the asymptotic complexity of the algorithm. Figure 5 shows the algorithm for computing L(λ). The main loop of the algorithm is similar to Figure 3. It proceeds row-by-row, picking the best alignment x(i, j) = 1. The major change is that the chart π also stores a value z ∈{0, 1}K×K representing a possible z↕ i,j, z↔ i,j pair. Since we have procedure VITERBIADJ(θ, ω′) ρ[i, j] ←max i′∈[I]0 ω′(i′, i, j) ∀i ∈[I], j ∈[J]0 π[0, 0] ←PJ j=1 max{0, ρ[0, j]} for i ∈[I], j ∈[J]0, z↕, z↔∈{0, 1}|K| do π[i, j, z] ← max j′∈[J]0, z′∈N (z,j−j′) θ(j′, i, j) + π[i −1, j′, z′] + X k∈K z↔(k)(ρ[i, j + k] + α|k|) +z↕(k)α|k| return maxj∈[J]0,z∈{0,1}|K×K| π[I, j, z] Figure 5: Modified Viterbi algorithm for computing the adjacent agreement L(λ). the proposed zi,j in the inner loop, we can include the scores of the adjacent y alignments that are in neighboring columns, as well as the possible penalty for matching x(i, j) to a y(i + k, j) in a different row. Figure 4(b) gives an example setting of z. In the dynamic program, we need to ensure that the transitions between the z’s are consistent. 
The vector z′ indicates the y links adjacent to x(i − 1, j′). If j′ is near to j, z′ may overlap with z and vice-versa. The transition set N ensures these indicators match up N(z, k′) =    z′ : (z↕(−1) ∧k′ ∈K) ⇒z↔′(k′), (z↕′(1) ∧k′ ∈K) ⇒z↔(−k′), P k∈K z↕(k) = 1 5 Adding Back Constraints In general, it can be shown that Lagrangian relaxation is only guaranteed to solve a linear programming relaxation of the underlying combinatorial problem. For difficult instances, we will see that this relaxation often does not yield provably exact solutions. However, it is possible to “tighten” the relaxation by re-introducing constraints from the original problem. In this section, we extend the algorithm to allow incrementally re-introducing constraints. In particular we track which constraints are most often violated in order to explicitly enforce them in the algorithm. Define a binary vector p ∈{0, 1}[I−1]0×[J]0 where p(i, j) = 1 indicates a previously relaxed constraint on link y(i, j) that should be reintroduced into the problem. Let the new partially 1485 constrained Lagrangian dual be defined as L(λ; p) = max z∈Z(x,y) x∈X,y∈Y′ f(x) + g′(y; ω, λ) + h(z) y(i, j) = X i′ y(i, i′, j + 1) ∀i, j : p(i, j) = 1 If p = ⃗1, the problem includes all of the original constraints, whereas p = ⃗0 gives our original Lagrangian dual. In between we have progressively more constrained variants. In order to compute the arg max of this optimization problem, we need to satisfy the constraints within the Viterbi algorithm. We augment the Viterbi chart with a count vector d ∈D where D ⊂Z||p||1 and d(i, j) is a count for the (i, j)’th constraint, i.e. d(i, j) = y(i, j) −P i′ y(i′, i, j). Only solutions with count 0 at the final position satisfy the active constraints. Additionally define a helper function [·]D as the projection from Z[I−1]0×[J] →D, which truncates dimensions without constraints. Figure 6 shows this constrained Viterbi relaxation approach. It takes p as an argument and enforces the active constraints. For simplicity, we show the full agreement version, but the adjacent agreement version is similar. The main new addition is that the inner loop of the algorithm ensures that the count vector d is the sum of the counts of its children d′ and d −d′. Since each additional constraint adds a dimension to d, adding constraints has a multiplicative impact on running time. Asymptotically the new algorithm requires O(2||p||1IJ(I + J)) time. This is a problem in practice as even adding a few constraints can make the problem intractable. We address this issue in the next section. 6 Pruning Re-introducing constraints can lead to an exponential blow-up in the search space of the Viterbi algorithm. In practice though, many alignments in this space are far from optimal, e.g. aligning a common word like the to nous instead of les. Since Lagrangian relaxation re-computes the alignment many times, it would be preferable to skip these links in later rounds, particularly after re-introducing constraints. In this section we describe an optimality preserving coarse-to-fine algorithm for pruning. 
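The core test behind this pruning (stated as Lemma 1 in Section 6.1 below) is simple enough to sketch directly; the max-marginal table and the feasible lower bound are assumed to come from a forward-backward pass over the lattice and from the greedy heuristic of Section 6.2, respectively, so the interface below is illustrative only.

def prunable_transitions(max_marginals, lower_bound):
    # max_marginals: dict mapping a transition (j_prev, i, j) to its dual
    #                max-marginal M(j', i, j; lambda).
    # lower_bound:   score f(x) + g(y) + h(z) of any feasible constrained alignment.
    # Transitions whose max-marginal falls below the lower bound cannot be part of
    # the optimal alignment and can safely be removed from the lattice.
    return {t for t, m in max_marginals.items() if m < lower_bound}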
Approximate coarse-to-fine pruning algorithms are procedure CONSVITERBIFULL(θ, ω′, p) for i ∈[I], j ∈[J]0, i′ ∈[I] do d ←|δ(i, j) −δ(i′, j −1)|D ρ[i, j, d] ←ω′(i′, i, j) for j ∈[J], d ∈D do π[0, 0, d] ←max d′∈D π[0, 0, d′] + ρ[0, j, d −d′] for i ∈[I], j ∈[J]0, d ∈D do if j = 0 then π[i, j, d] ←max j′∈[J]0 θ(j′, i, j) + π[i −1, j′, d] else π[i, j, d] ← max j′∈[J]0,d′∈D θ(j′, i, j) + π[i −1, j′, d′] +ρ[i, j, d −d′] return maxj∈[J]0 π[I, j, 0] Figure 6: Constrained Viterbi algorithm for finding partiallyconstrained, full-agreement alignments. The argument p indicates which constraints to enforce. widely used within NLP, but exact pruning is less common. Our method differs in that it only eliminates non-optimal transitions based on a lower-bound score. After introducing the pruning method, we present an algorithm to make this method effective in practice by producing highscoring lower bounds for adjacent agreement. 6.1 Thresholding Max-Marginals Our pruning method is based on removing transitions with low max-marginal values. Define the max-marginal value of an e→f transition in our Lagrangian dual as M(j′, i, j; λ) = max z∈Z(x,y), x∈X,y∈Y′ f(x) + g′(y; λ) + h(z) s.t. x(j′, i, j) = 1 where M gives the value of the best dual alignment that transitions from (i −1, j′) to (i, j). These max-marginals can be computed by running a forward-backward variant of any of the algorithms described thus far. We make the following claim about maxmarginal values and any lower-bound score Lemma 1 (Safe Pruning). For any valid constrained alignment x ∈X, y ∈Y, z ∈Z(x, y) and for any dual vector λ ∈R[I−1]0×[J]0, if there exists a transition j′, i, j with max-marginal value M(j′, i, j; λ) < f(x)+g(y)+h(z) then the transition will not be in the optimal alignment, i.e. x∗(j′, i, j) = 0. This lemma tells us that we can prune transitions whose dual max-marginal value falls below 1486 a threshold without pruning possibly optimal transitions. Pruning these transitions can speed up Lagrangian relaxation without altering its properties. Furthermore, the threshold is determined by any feasible lower bound on the optimal score, which means that better bounds can lead to more pruning. 6.2 Finding Lower Bounds Since the effectiveness of pruning is dependent on the lower bound, it is crucial to be able to produce high-scoring alignments that satisfy the agreement constraints. Unfortunately, this problem is nontrivial. For instance, taking the union of directional alignments does not guarantee a feasible solution; whereas taking the intersection is trivially feasible but often not high-scoring. To produce higher-scoring feasible bidirectional alignments we introduce a greedy heuristic algorithm. The algorithm starts with any feasible alignment (x, y, z). It runs the following greedy loop: 1. Repeat until there exists no x(i, 0) = 1 or y(0, j) = 1, or there is no score increase. (a) For each i ∈[I], j ∈[J]0, k ∈K : x(i, 0) = 1, check if x(i, j) ←1 and y(i, j + k) ←1 is feasible, remember score. (b) For each i ∈[I]0, j ∈[J], k ∈K : y(0, j) = 1, check if y(i, j) ←1 and x(i + k, j) ←1 is feasible, remember score. (c) Let (x, y, z) be the highest-scoring feasible solution produced. This algorithm produces feasible alignments with monotonically increasing score, starting from the intersection of the alignments. It has run-time of O(IJ(I + J)) since each inner loop enumerates IJ possible updates and assigns at least one index a non-zero value, limiting the outer loop to I + J iterations. 
In practice we initialize the heuristic based on the intersection of x and y at the current round of Lagrangian relaxation. Experiments show that running this algorithm significantly improves the lower bound compared to just taking the intersection, and consequently helps pruning significantly. 7 Related Work The most common techniques for bidirectional alignment are post-hoc combinations, such as union or intersection, of directional models, (Och et al., 1999), or more complex heuristic combiners such as grow-diag-final (Koehn et al., 2003). Several authors have explored explicit bidirectional models in the literature. Cromieres and Kurohashi (2009) use belief propagation on a factor graph to train and decode a one-to-one word alignment problem. Qualitatively this method is similar to ours, although the model and decoding algorithm are different, and their method is not able to provide certificates of optimality. A series of papers by Ganchev et al. (2010), Graca et al. (2008), and Ganchev et al. (2008) use posterior regularization to constrain the posterior probability of the word alignment problem to be symmetric and bijective. This work acheives stateof-the-art performance for alignment. Instead of utilizing posteriors our model tries to decode a single best one-to-one word alignment. A different approach is to use constraints at training time to obtain models that favor bidirectional properties. Liang et al. (2006) propose agreement-based learning, which jointly learns probabilities by maximizing a combination of likelihood and agreement between two directional models. General linear programming approaches have also been applied to word alignment problems. Lacoste-Julien et al. (2006) formulate the word alignment problem as quadratic assignment problem and solve it using an integer linear programming solver. Our work is most similar to DeNero and Macherey (2011), which uses dual decomposition to encourage agreement between two directional HMM aligners during decoding time. 8 Experiments Our experimental results compare the accuracy and optimality of our decoding algorithm to directional alignment models and previous work on this bidirectional model. Data and Setup The experimental setup is identical to DeNero and Macherey (2011). Evaluation is performed on a hand-aligned subset of the NIST 2002 Chinese-English dataset (Ayan and Dorr, 2006). Following past work, the first 150 sentence pairs of the training section are used for evaluation. The potential parameters θ and ω are set based on unsupervised HMM models trained on the LDC FBIS corpus (6.2 million words). 1487 1-20 (28%) 21-40 (45%) 41-60 (27%) all time cert exact time cert exact time cert exact time cert exact ILP 15.12 100.0 100.0 364.94 100.0 100.0 2,829.64 100.0 100.0 924.24 100.0 100.0 LR 0.55 97.6 97.6 4.76 55.9 55.9 15.06 7.5 7.5 6.33 54.7 54.7 CONS 0.43 100.0 100.0 9.86 95.6 95.6 61.86 55.0 62.5 21.08 86.0 88.0 D&M 6.2 0.0 0.0 6.2 Table 1: Experimental results for model accuracy of bilingual alignment. Column time is the mean time per sentence pair in seconds; cert is the percentage of sentence pairs solved with a certificate of optimality; exact is the percentage of sentence pairs solved exactly. Results are grouped by sentence length. The percentage of sentence pairs in each group is shown in parentheses. Training is performed using the agreement-based learning method which encourages the directional models to overlap (Liang et al., 2006). 
This directional model has been shown produce state-of-theart results with this setup (Haghighi et al., 2009). Baselines We compare the algorithm described in this paper with several baseline methods. DIR includes post-hoc combinations of the e→f and f→e HMM-based aligners. Variants include union, intersection, and grow-diag-final. D&M is the dual decomposition algorithm for bidirectional alignment as presented by DeNero and Macherey (2011) with different final combinations. LR is the Lagrangian relaxation algorithm applied to the adjacent agreement problem without the additional constraints described in Section 5. CONS is our full Lagrangian relaxation algorithm including incremental constraint addition. ILP uses a highlyoptimized general-purpose integer linear programming solver to solve the lattice with the constraints described (Gurobi Optimization, 2013). Implementation The main task of the decoder is to repeatedly compute the arg max of L(λ). To speed up decoding, our implementation fully instantiates the Viterbi lattice for a problem instance. This approach has several benefits: each iteration can reuse the same lattice structure; maxmarginals can be easily computed with a general forward-backward algorithm; pruning corresponds to removing lattice edges; and adding constraints can be done through lattice intersection. For consistency, we implement each baseline (except for D&M) through the same lattice. Parameter Settings We run 400 iterations of the subgradient algorithm using the rate schedule ηt = 0.95t′ where t′ is the count of updates for which the dual value did not improve. Every 10 iterations we run the greedy decoder to compute a lower bound. If the gap between our current dual value L(λ) and the lower bound improves significantly we run coarse-to-fine pruning as described in Section 6 with the best lower bound. For Model Combiner alignment phrase pair Prec Rec AER Prec Rec F1 DIR union 57.6 80.0 33.4 75.1 33.5 46.3 intersection 86.2 62.9 27.0 64.3 43.5 51.9 grow-diag 59.7 79.5 32.1 70.1 36.9 48.4 D&M union 63.3 81.5 29.1 63.2 44.9 52.5 intersection 77.5 75.1 23.6 57.1 53.6 55.3 grow-diag 65.6 80.6 28.0 60.2 47.4 53.0 CONS 72.5 74.9 26.4 53.0 52.4 52.7 Table 2: Alignment accuracy and phrase pair extraction accuracy for directional and bidirectional models. Prec is the precision. Rec is the recall. AER is alignment error rate and F1 is the phrase pair extraction F1 score. CONS, if the algorithm does not find an optimal solution we run 400 more iterations and incrementally add the 5 most violated constraints every 25 iterations. Results Our first set of experiments looks at the model accuracy and the decoding time of various methods that can produce optimal solutions. Results are shown in Table 1. D&M is only able to find the optimal solution with certificate on 6% of instances. The relaxation algorithm used in this work is able to increase that number to 54.7%. With incremental constraints and pruning, we are able to solve over 86% of sentence pairs including many longer and more difficult pairs. Additionally the method finds these solutions with only a small increase in running time over Lagrangian relaxation, and is significantly faster than using an ILP solver. Next we compare the models in terms of alignment accuracy. Table 2 shows the precision, recall and alignment error rate (AER) for word alignment. We consider union, intersection and growdiag-final as combination procedures. 
The combination procedures are applied to D&M in the case when the algorithm does not converge. For CONS, we use the optimal solution for the 86% of instances that converge and the highest-scoring greedy solution for those that do not. The proposed method has an AER of 26.4, which outperforms each of the directional models. However, although CONS achieves a higher model score than D&M, it performs worse in accuracy. Ta1488 1-20 21-40 41-60 all # cons. 20.0 32.1 39.5 35.9 Table 3: The average number of constraints added for sentence pairs where Lagrangian relaxation is not able to find an exact solution. ble 2 also compares the models in terms of phraseextraction accuracy (Ayan and Dorr, 2006). We use the phrase extraction algorithm described by DeNero and Klein (2010), accounting for possible links and ϵ alignments. CONS performs better than each of the directional models, but worse than the best D&M model. Finally we consider the impact of constraint addition, pruning, and use of a lower bound. Table 3 gives the average number of constraints added for sentence pairs for which Lagrangian relaxation alone does not produce a certificate. Figure 7(a) shows the average over all sentence pairs of the best dual and best primal scores. The graph compares the use of the greedy algorithm from Section 6.2 with the simple intersection of x and y. The difference between these curves illustrates the benefit of the greedy algorithm. This is reflected in Figure 7(b) which shows the effectiveness of coarse-to-fine pruning over time. On average, the pruning reduces the search space of each sentence pair to 20% of the initial search space after 200 iterations. 9 Conclusion We have introduced a novel Lagrangian relaxation algorithm for a bidirectional alignment model that uses incremental constraint addition and coarseto-fine pruning to find exact solutions. The algorithm increases the number of exact solution found on the model of DeNero and Macherey (2011) from 6% to 86%. Unfortunately despite achieving higher model score, this approach does not produce more accurate alignments than the previous algorithm. This suggests that the adjacent agreement model may still be too constrained for this underlying task. Implicitly, an approach with fewer exact solutions may allow for useful violations of these constraints. In future work, we hope to explore bidirectional models with soft-penalties to explicitly permit these violations. A Proof of NP-Hardness We can show that the bidirectional alignment problem is NP-hard by reduction from the trav0 50 100 150 200 250 300 350 400 iteration 100 50 0 50 100 score relative to optimal best dual best primal intersection (a) The best dual and the best primal score, relative to the optimal score, averaged over all sentence pairs. The best primal curve uses a feasible greedy algorithm, whereas the intersection curve is calculated by taking the intersection of x and y. 0 50 100 150 200 250 300 350 400 number of iterations 0.0 0.2 0.4 0.6 0.8 1.0 relative search space size (b) A graph showing the effectiveness of coarse-to-fine pruning. Relative search space size is the size of the pruned lattice compared to the initial size. The plot shows an average over all sentence pairs. Figure 7 eling salesman problem (TSP). A TSP instance with N cities has distance c(i′, i) for each (i′, i) ∈ [N]2. We can construct a sentence pair in which I = J = N and ϵ-alignments have infinite cost. 
ω(i′, i, j) = −c(i′, i) ∀i′ ∈[N]0, i ∈[N], j ∈[N] θ(j′, i, j) = 0 ∀j′ ∈[N]0, i ∈[N], j ∈[N] ω(i′, 0, j) = −∞ ∀i′ ∈[N]0, j ∈[N] θ(j′, i, 0) = −∞ ∀j′ ∈[N]0, i ∈[N] Every bidirectional alignment with finite objective score must align exactly one word in e to each word in f, encoding a permutation a. Moreover, each possible permutation has a finite score: the negation of the total distance to traverse the N cities in order a under distance c. Therefore, solving such a bidirectional alignment problem would find a minimal Hamiltonian path of the TSP encoded in this way, concluding the reduction. Acknowledgments Alexander Rush, Yin-Wen Chang and Michael Collins were all supported by NSF grant IIS-1161814. Alexander Rush was partially supported by an NSF Graduate Research Fellowship. 1489 References Necip Fazil Ayan and Bonnie J Dorr. 2006. Going beyond aer: An extensive analysis of word alignments and their impact on mt. In Proceedings of the 21st International Conference on Computational Linguistics and the 44th annual meeting of the Association for Computational Linguistics, pages 9–16. Association for Computational Linguistics. Peter F Brown, Vincent J Della Pietra, Stephen A Della Pietra, and Robert L Mercer. 1993. The mathematics of statistical machine translation: Parameter estimation. Computational linguistics, 19(2):263–311. Fabien Cromieres and Sadao Kurohashi. 2009. An alignment algorithm using belief propagation and a structure-based distortion model. In Proceedings of the 12th Conference of the European Chapter of the Association for Computational Linguistics, pages 166–174. Association for Computational Linguistics. John DeNero and Dan Klein. 2010. Discriminative modeling of extraction sets for machine translation. In Proceedings of the 48th Annual Meeting of the Association for Computational Linguistics, pages 1453–1463. Association for Computational Linguistics. John DeNero and Klaus Macherey. 2011. Modelbased aligner combination using dual decomposition. In ACL, pages 420–429. Kuzman Ganchev, Jo˜ao V. Grac¸a, and Ben Taskar. 2008. Better alignments = better translations? In Proceedings of ACL-08: HLT, pages 986–993, Columbus, Ohio, June. Association for Computational Linguistics. K. Ganchev, J. Grac¸a, J. Gillenwater, and B. Taskar. 2010. Posterior Regularization for Structured Latent Variable Models. Journal of Machine Learning Research, 11:2001–2049. Joao Graca, Kuzman Ganchev, and Ben Taskar. 2008. Expectation maximization and posterior constraints. In J.C. Platt, D. Koller, Y. Singer, and S. Roweis, editors, Advances in Neural Information Processing Systems 20, pages 569–576. MIT Press, Cambridge, MA. Inc. Gurobi Optimization. 2013. Gurobi optimizer reference manual. Aria Haghighi, John Blitzer, John DeNero, and Dan Klein. 2009. Better word alignments with supervised itg models. In Proceedings of the Joint Conference of the 47th Annual Meeting of the ACL and the 4th International Joint Conference on Natural Language Processing of the AFNLP: Volume 2Volume 2, pages 923–931. Association for Computational Linguistics. Philipp Koehn, Franz Josef Och, and Daniel Marcu. 2003. Statistical phrase-based translation. In Proceedings of the 2003 Conference of the North American Chapter of the Association for Computational Linguistics on Human Language TechnologyVolume 1, pages 48–54. Association for Computational Linguistics. Simon Lacoste-Julien, Ben Taskar, Dan Klein, and Michael I Jordan. 2006. Word alignment via quadratic assignment. 
In Proceedings of the main conference on Human Language Technology Conference of the North American Chapter of the Association of Computational Linguistics, pages 112– 119. Association for Computational Linguistics. Percy Liang, Ben Taskar, and Dan Klein. 2006. Alignment by agreement. In Proceedings of the main conference on Human Language Technology Conference of the North American Chapter of the Association of Computational Linguistics, pages 104– 111. Association for Computational Linguistics. Robert C Moore. 2005. A discriminative framework for bilingual word alignment. In Proceedings of the conference on Human Language Technology and Empirical Methods in Natural Language Processing, pages 81–88. Association for Computational Linguistics. Franz Josef Och, Christoph Tillmann, Hermann Ney, et al. 1999. Improved alignment models for statistical machine translation. In Proc. of the Joint SIGDAT Conf. on Empirical Methods in Natural Language Processing and Very Large Corpora, pages 20–28. Alexander M Rush and Michael Collins. 2012. A tutorial on dual decomposition and lagrangian relaxation for inference in natural language processing. Journal of Artificial Intelligence Research, 45:305–362. Stephan Vogel, Hermann Ney, and Christoph Tillmann. 1996. Hmm-based word alignment in statistical translation. In Proceedings of the 16th conference on Computational linguistics-Volume 2, pages 836– 841. Association for Computational Linguistics. 1490
Proceedings of the 52nd Annual Meeting of the Association for Computational Linguistics, pages 144–154, Baltimore, Maryland, USA, June 23-25 2014. c⃝2014 Association for Computational Linguistics Tagging The Web: Building A Robust Web Tagger with Neural Network Ji Ma†, Yue Zhang‡ and Jingbo Zhu† †Northeastern University, China ‡Singapore University of Technology and Design [email protected] yue [email protected] [email protected] Abstract In this paper, we address the problem of web-domain POS tagging using a twophase approach. The first phase learns representations that capture regularities underlying web text. The representation is integrated as features into a neural network that serves as a scorer for an easy-first POS tagger. Parameters of the neural network are trained using guided learning in the second phase. Experiment on the SANCL 2012 shared task show that our approach achieves 93.15% average tagging accuracy, which is the best accuracy reported so far on this data set, higher than those given by ensembled syntactic parsers. 1 Introduction Analysing and extracting useful information from the web has become an increasingly important research direction for the NLP community, where many tasks require part-of-speech (POS) tagging as a fundamental preprocessing step. However, state-of-the-art POS taggers in the literature (Collins, 2002; Shen et al., 2007) are mainly optimized on the the Penn Treebank (PTB), and when shifted to web data, tagging accuracies drop significantly (Petrov and McDonald, 2012). The problem we face here can be considered as a special case of domain adaptation, where we have access to labelled data on the source domain (PTB) and unlabelled data on the target domain (web data). Exploiting useful information from the web data can be the key to improving web domain tagging. Towards this end, we adopt the idea of learning representations which has been demonstrated useful in capturing hidden regularities underlying the raw input data (web text, in our case). Our approach consists of two phrases. In the pre-training phase, we learn an encoder that converts the web text into an intermediate representation, which acts as useful features for prediction tasks. We integrate the learned encoder with a set of well-established features for POS tagging (Ratnaparkhi, 1996; Collins, 2002) in a single neural network, which is applied as a scorer to an easyfirst POS tagger. We choose the easy-first tagging approach since it has been demonstrated to give higher accuracies than the standard left-to-right POS tagger (Shen et al., 2007; Ma et al., 2013). In the fine-tuning phase, the parameters of the network are optimized on a set of labelled training data using guided learning. The learned model preserves the property of preferring to tag easy words first. To our knowledge, we are the first to investigate guided learning for neural networks. The idea of learning representations from unlabelled data and then fine-tuning a model with such representations according to some supervised criterion has been studied before (Turian et al., 2010; Collobert et al., 2011; Glorot et al., 2011). While most previous work focus on in-domain sequential labelling or cross-domain classification tasks, we are the first to learn representations for web-domain structured prediction. 
Previous work treats the learned representations either as model parameters that are further optimized in supervised fine-tuning (Collobert et al., 2011) or as fixed features that are kept unchanged (Turian et al., 2010; Glorot et al., 2011). In this work, we investigate both strategies and give empirical comparisons in the cross-domain setting. Our results suggest that while both strategies improve in-domain tagging accuracies, keeping the learned representation unchanged consistently results in better cross-domain accuracies. We conduct experiments on the official data set provided by the SANCL 2012 shared task (Petrov and McDonald, 2012). Our method achieves a 93.15% average accuracy across the web-domain, which is the best result reported so far on this data 144 set, higher than those given by ensembled syntactic parsers. Our code will be publicly available at https://github.com/majineu/TWeb. 2 Learning from Web Text Unsupervised learning is often used for training encoders that convert the input data to abstract representations (i.e. encoding vectors). Such representations capture hidden properties of the input, and can be used as features for supervised tasks (Bengio, 2009; Ranzato et al., 2007). Among the many proposed encoders, we choose the restricted Boltzmann machine (RBM), which has been successfully used in many tasks (Lee et al., 2009b; Hinton et al., 2006). In this section, we give some background on RBMs and then show how they can be used to learn representations of the web text. 2.1 Restricted Boltzmann Machine The RBM is a type of graphical model that contains two layers of binary stochastic units v ∈ {0, 1}V and h ∈{0, 1}H, corresponding to a set of visible and hidden variables, respectively. The RBM defines the joint probability distribution over v and h by an energy function E(v, h) = −c′h −b′v −h′Wv, (1) which is factorized by a visible bias b ∈RV , a hidden bias c ∈RH and a weight matrix W ∈ RH×V . The joint distribution P(v, h) is given by P(v, h) = 1 Z exp(E(v, h)), (2) where Z is the partition function. The affine form of E with respect to v and h implies that the visible variables are conditionally independent with each other given the hidden layer units, and vice versa. This yields the conditional distribution: P(v|h) = VY j=1 P(vj|h) P(h|v) = H Y i=1 P(hi|v) P(vj = 1|h) = σ(bj + W·jh) (3) P(hi = 1|v) = σ(cj + Wi·v) (4) Here σ denotes the sigmoid function. Parameters of RBMs θ = {b, c, W} can be trained efficiently using contrastive divergence learning (CD), see (Hinton, 2002) for detailed descriptions of CD. 2.2 Encoding Web Text with RBM Most of the indicative features for POS disambiguation can be found from the words and word combinations within a local context (Ratnaparkhi, 1996; Collins, 2002). Inspired by this observation, we apply the RBM to learn feature representations from word n-grams. More specifically, given the ith word wi of a sentence, we apply RBMs to model the joint distribution of the n-gram (wi−l, · · · , wi+r), where l and r denote the left and right window, respectively. Note that the visible units of RBMs are binary. While in our case, each visible variable corresponds to a word, which may take on tens-of-thousands of different values. Therefore, the RBM need to be re-factorized to make inference tractable. We utilize the Word Representation RBM (WRRBM) factorization proposed by Dahl et al. (2012). 
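The WRRBM factorization is spelled out immediately below; as background for Section 2.1, here is a minimal sketch of the standard binary-RBM conditionals of Eqs. (3)-(4) and a single CD-1 update in the style of Hinton (2002). The array shapes, learning rate and use of plain numpy are illustrative assumptions rather than details taken from the paper.

```python
import numpy as np

def sigmoid(a):
    return 1.0 / (1.0 + np.exp(-a))

def cd1_update(v0, W, b, c, lr=0.01, rng=np.random.default_rng(0)):
    """One contrastive-divergence step for a binary RBM.

    v0: (V,) visible vector, W: (H, V) weights, b: (V,) visible bias, c: (H,) hidden bias.
    """
    ph0 = sigmoid(c + W @ v0)                     # P(h = 1 | v), Eq. (4)
    h0 = (rng.random(ph0.shape) < ph0).astype(float)
    pv1 = sigmoid(b + W.T @ h0)                   # P(v = 1 | h), Eq. (3)
    v1 = (rng.random(pv1.shape) < pv1).astype(float)
    ph1 = sigmoid(c + W @ v1)
    # Gradient approximation: positive phase minus reconstruction phase.
    W += lr * (np.outer(ph0, v0) - np.outer(ph1, v1))
    b += lr * (v0 - v1)
    c += lr * (ph0 - ph1)
    return W, b, c
```

Each visible variable here is binary, whereas a word can take tens of thousands of values, which is exactly why the model is re-factorized as described next.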
The basic idea is to share word representations across different positions in the input n-gram while using position-dependent weights to distinguish between different word orders. Let wk be the k-th entry of lexicon L, and wk be its one-hot representation (i.e., only the k-th component of wk is 1, and all the others are 0). Let v(j) represents the j-th visible variable of the WRRBM, which is a vector of length |L|. Then v(j) = wk means that the j-th word in the n-gram is wk. Let D ∈RD×|L| be a projection matrix, then Dwk projects wk into a D-dimensional real value vector (embedding). For each position j, there is a weight matrix W(j) ∈RH×D, which is used to model the interaction between the hidden layer and the word projection in position j. The visible biases are also shared across different positions (b(j) = b ∀j) and the energy function is: E(v, h) = −c′h − n X j=1 (b′v(j) + h′W(j)Dv(j)), (5) which yields the conditional distributions: P(v|h) = n Y j=1 P(v(j)|h) P(h|v) = Y i=1 P(hi|v) P(hi = 1|v) = σ(ci + n X j=1 W(j) i· Dv(j)) (6) P(v(j) = wk|h) = 1 Z exp(b′wk + h′W(j)Dwk) (7) 145 Again Z is the partition function. The parameters {b, c, D, W(1), . . . , W(n)} can be trained using a Metropolis-Hastings-based CD variant and the learned word representations also capture certain syntactic information; see Dahl et al. (2012) for more details. Note that one can stack standard RBMs on top of a WRRBM to construct a Deep Belief Network (DBN). By adopting greedy layer-wise training (Hinton et al., 2006; Bengio et al., 2007), DBNs are capable of modelling higher order non-linear relations between the input, and has been demonstrated to improve performance for many computer vision tasks (Hinton et al., 2006; Bengio et al., 2007; Lee et al., 2009a). However, in this work we do not observe further improvement by employing DBNs. This may partly be due to the fact that unlike computer vision tasks, the input structure of POS tagging or other sequential labelling tasks is relatively simple, and a single non-linear layer is enough to model the interactions within the input (Wang and Manning, 2013). 3 Neural Network for POS Disambiguation We integrate the learned WRRBM into a neural network, which serves as a scorer for POS disambiguation. The main challenge to designing the neural network structure is: on the one hand, we hope that the model can take the advantage of information provided by the learned WRRBM, which reflects general properties of web texts, so that the model generalizes well in the web domain; on the other hand, we also hope to improve the model’s discriminative power by utilizing wellestablished POS tagging features, such as those of Ratnaparkhi (1996). Our approach is to leverage the two sources of information in one neural network by combining them though a shared output layer, as shown in Figure 1. Under the output layer, the network consists of two modules: the web-feature module, which incorporates knowledge from the pretrained WRRBM, and the sparse-feature module, which makes use of other POS tagging features. 3.1 The Web-Feature Module The web-feature module, shown in the lower left part of Figure 1, consists of a input layer and two hidden layers. The input for the this module is the word n-gram (wi−l, . . . , wi+r), the form of which Figure 1: The proposed neural network. The webfeature module (lower left) and sparse-feature module (lower right) are combined by a shared output layer (upper). is identical to the training data of the pre-trained WRRBM. 
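Because the web-feature module consumes exactly the same n-grams that the WRRBM was trained on, it is worth making the WRRBM hidden-unit conditional of Eq. (6) concrete: one projection matrix D is shared across positions, while position-dependent weights W(j) preserve word order. The sketch below shows only this conditional; the Metropolis-Hastings-based CD training of Dahl et al. (2012) is not reproduced, and the variable names and toy dimensions are assumptions for illustration.

```python
import numpy as np

def hidden_probabilities(word_ids, D, W, c):
    """P(h_i = 1 | v) for an n-gram given as a list of lexicon indices.

    D: (d, |L|) shared projection, W: list of n matrices of shape (H, d), c: (H,) hidden bias.
    """
    act = c.copy()
    for j, w in enumerate(word_ids):
        act += W[j] @ D[:, w]          # W^(j) D v^(j), with v^(j) one-hot for word w
    return 1.0 / (1.0 + np.exp(-act))  # sigmoid, as in Eq. (6)

# Toy usage: a 3-gram, a 5-word lexicon, 4-dimensional embeddings, 6 hidden units.
rng = np.random.default_rng(0)
D = rng.normal(size=(4, 5))
W = [rng.normal(size=(6, 4)) for _ in range(3)]
c = np.zeros(6)
print(hidden_probabilities([1, 0, 3], D, W, c).shape)   # (6,)
```

When both layers of the web-feature module are initialized from the learned WRRBM (Eqs. 10-12 below), its hidden layer computes exactly this quantity for the input n-gram.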
The first layer is a linear projection layer, where each word in the input is projected into a Ddimensional real value vector using the projection operation described in Section 2.2. The output of this layer o1 w is the concatenation of the projections of wi−l, . . . , wi+r: o1 w =    M1 wwi−l ... M1 wwi+r    (8) Here M1 w denotes the parameters of the first layer of the web-feature module, which is a D × |L| projection matrix. The second layer is a sigmoid layer to model non-linear relations between the word projections: o2 w = σ(M2 wo1 w + b2 w) (9) Parameters of this layer include: a bias vector b2 w ∈RH and a weight matrix M2 w ∈RH×nD. The web-feature module enables us to explore the learned WRRBM in various ways. First, it allows us to investigate knowledge from the WRRBM incrementally. We can choose to use only the word representations of the learned WRRBM. This can be achieved by initializing only the first layer of the web module with the projection matrix D of the learned WRRBM: M1 w ←D. (10) Alternatively, we can choose to use the hidden states of the WRRBM, which can be treated as the 146 representations of the input n-gram. This can be achieved by also initializing the parameters of the second layer of the web-feature module using the position-dependent weight matrix and hidden bias of the learned WRRBM: b2 w ←c (11) M2 w ←(W(1), . . . , W(n)) (12) Second, the web-feature module also allows us to make a comparison between whether or not to further adjust the pre-trained representation in the supervised fine-tuning phase, which corresponds to the supervised learning strategies of Turian et al. (2010) and Collobert et al. (2011), respectively. To our knowledge, no investigations have been presented in the literature on this issue. 3.2 The Sparse-Feature Module The sparse-feature module, as shown in the lower right part of Figure 1, is designed to incorporate commonly-used tagging features. The input for this module is a vector of boolean values Φ(x) = (f1(x), . . . , fk(x)), where x denotes the partially tagged input sentence and fi(x) denotes a feature function, which returns 1 if the corresponding feature fires and 0 otherwise. The first layer of this module is a linear transformation layer, which converts the high dimensional sparse vector into a fixed-dimensional real value vector: os = MsΦ(x) + bs (13) Depending on the specific task being considered, the output of this layer can be further fed to other non-linear layers, such as a sigmoid or hyperbolic tangent layer, to model more complex relations. For POS tagging, we found that a simple linear layer yields satisfactory accuracies. The web-feature and sparse-feature modules are combined by a linear output layer, as shown in the upper part of Figure 1. The value of each unit in this layer denotes the score of the corresponding POS tag. oo = Mo ow os  + bo (14) In some circumstances, probability distribution over POS tags might be a more preferable form of output. Such distribution can be easily obtained by adding a soft-max layer on top of the output layer to perform a local normalization, as done by Collobert et al. (2011). Algorithm 1 Easy-first POS tagging Input: x a sentence of m words w1, . . . , wm Output: tag sequence of x 1: U ←[w1, . . . , wm] // untagged words 2: while U ̸= [] do 3: ( ˆw, ˆt) ←arg max(w,t)∈U×T S(w, t) 4: ˆw.t ←ˆt 5: U ←U/[ ˆw] // remove ˆw from U 6: end while 7: return [w1.t, . . . 
, wm.t] 4 Easy-first POS tagging with Neural Network The neural network proposed in Section 3 is used for POS disambiguation by the easy-first POS tagger. Parameters of the network are trained using guided learning, where learning and search interact with each other. 4.1 Easy-first POS tagging Pseudo-code of easy-first tagging is shown in Algorithm 1. Rather than tagging a sentence from left to right, easy-first tagging is based on a deterministic process, repeatedly selecting the easiest word to tag. Here “easiness” is evaluated based on a statistical model. At each step, the algorithm adopts a scorer, the neural network in our case, to assign a score to each possible word-tag pair (w, t), and then selects the highest score one ( ˆw, ˆt) to tag (i.e., tag ˆw with ˆt). The algorithm repeats until all words are tagged. 4.2 Training The training algorithm repeats for several iterations over the training data, which is a set of sentences labelled with gold standard POS tags. In each iteration, the procedure shown in Algorithm 2 is applied to each sentence in the training set. At each step during the processing of a training example, the algorithm calculates a margin loss based on two word-tag pairs (w, t) and ( ˆw, ˆt) (line 4 ∼line 6). (w, t) denotes the word-tag pair that has the highest model score among those that are inconsistent with the gold standard, while ( ˆw, ˆt) denotes the one that has the highest model score among those that are consistent with the gold standard. If the loss is zero, the algorithm continues to process the next untagged word. Otherwise, parameters are updated using back-propagation. The standard back-propagation algorithm 147 (Rumelhart et al., 1988) cannot be applied directly. This is because the standard loss is calculated based on a unique input vector. This condition does not hold in our case, because ˆw and w may refer to different words, which means that the margin loss in line 6 of Algorithm 2 is calculated based on two different input vectors, denoted by ⟨ˆw⟩and ⟨w⟩, respectively. We solve this problem by decomposing the margin loss in line 6 into two parts: • 1 + nn(w, t), which is associated with ⟨w⟩; • −nn( ˆw, ˆt), which is associated with ⟨ˆw⟩. In this way, two separate back-propagation updates can be used to update the model’s parameters (line 8 ∼line 11). For the special case where ˆw and w do refer to the same word w, it can be easily verified that the two separate back-propagation updates equal to the standard back-propagation with a loss 1 + nn(w, t) −nn(w, ˆt) on the input ⟨w⟩. The algorithm proposed here belongs to a general framework named guided learning, where search and learning interact with each other. The algorithm learns not only a local classifier, but also the inference order. While previous work (Shen et al., 2007; Zhang and Clark, 2011; Goldberg and Elhadad, 2010) apply guided learning to train a linear classifier by using variants of the perceptron algorithm, we are the first to combine guided learning with a neural network, by using a margin loss and a modified back-propagation algorithm. 5 Experiments 5.1 Setup Our experiments are conducted on the data set provided by the SANCL 2012 shared task, which aims at building a single robust syntactic analysis system across the web-domain. The data set consists of labelled data for both the source (Wall Street Journal portion of the Penn Treebank) and target (web) domains. 
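Before turning to the experimental setup, the guided-learning update of Algorithm 2 (Section 4.2) can be made concrete with the toy sketch below. The scorer interface (score, backprop_err, update) is an assumed stand-in for the neural network of Section 3, and the sketch ignores the context-tag features that the real model reads from already-tagged neighbours; only the control flow, namely a margin loss over the best wrong and best correct word-tag pairs followed by two separate back-propagation updates, follows the paper.

```python
def train_sentence(words, gold_tags, nn, tagset):
    """One pass of Algorithm 2 over a single training sentence."""
    untagged = list(range(len(words)))                 # indices of untagged words (U)
    reference = {i: gold_tags[i] for i in untagged}    # gold word-tag pairs (R)
    while untagged:
        # Highest-scoring pair that disagrees with the gold standard ...
        wrong = max(((i, t) for i in untagged for t in tagset if t != reference[i]),
                    key=lambda p: nn.score(words, *p))
        # ... and highest-scoring pair that agrees with it.
        correct = max(((i, reference[i]) for i in untagged),
                      key=lambda p: nn.score(words, *p))
        loss = max(0.0, 1.0 + nn.score(words, *wrong) - nn.score(words, *correct))
        if loss > 0:
            # The margin loss is split across the two (possibly different) inputs,
            # giving two separate back-propagation updates.
            e_hat = nn.backprop_err(words, correct, -nn.score(words, *correct))
            e_bar = nn.backprop_err(words, wrong, 1.0 + nn.score(words, *wrong))
            nn.update(words, correct, e_hat)
            nn.update(words, wrong, e_bar)
        else:
            i, _ = correct                             # the easiest word is tagged ...
            untagged.remove(i)                         # ... and removed from U and R
            del reference[i]
    return nn
```

Exactly as in Algorithm 2, a word leaves the untagged set only once the model scores its gold tag highest by the required margin; the full trainer then repeats this procedure for several passes over the training data.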
The web domain data can be further classified into five sub-domains, including emails, weblogs, business reviews, news groups and Yahoo!Answers. While emails and weblogs are used as the development sets, reviews, news groups and Yahoo!Answers are used as the final test sets. Participants are not allowed to use web-domain labelled data for training. In addition to labelled data, a large amount of unlabelled data on the web domain is also provided. Statistics Algorithm 2 Training over one sentence Input: (x, t) a tagged sentence, neural net nn Output: updated neural net nn′ 1: U ←[w1, . . . , wm] // untagged words 2: R ←[(w1, t1), . . . , (wm, tm)] // reference 3: while U ̸= [] do 4: (w, t) ←arg max(w,t)∈(U×T/R) nn(w, t) 5: ( ˆw, ˆt) ←arg max(w,t)∈R nn(w, t) 6: loss ←max(0, 1 + nn(w, t) −nn( ˆw, ˆt)) 7: if loss > 0 then 8: ˆe ←nn.BackPropErr(⟨ˆw⟩, −nn( ˆw, ˆt)) 9: e ←nn.BackPropErr(⟨w⟩, 1+nn(w, t)) 10: nn.Update(⟨ˆw⟩, ˆe) 11: nn.Update(⟨w⟩, e) 12: else 13: U ←U/{ ˆw}, R ←R/( ˆw, ˆt) 14: end if 15: end while 16: return nn about labelled and unlabelled data are summarized in Table 1 and Table 2, respectively. The raw web domain data contains much noise, including spelling error, emotions and inconsistent capitalization. Following some participants (Le Roux et al., 2012), we conduct simple preprocessing steps to the input of the development and the test sets1 • Neutral quotes are transformed to opening or closing quotes. • Tokens starting with “www.”, “http.” or ending with “.org”, “.com” are converted to a “#URL” symbol • Repeated punctuations such as “!!!!” are collapsed into one. • Left brackets such as “<”,“{” and “[” are converted to “-LRB-”. Similarly, right brackets are converted to “-RRB-” • Upper cased words that contain more than 4 letters are lowercased. • Consecutive occurrences of one or more digits within a word are replaced with “#DIG” We apply the same preprocessing steps to all the unlabelled data. In addition, following Dahl et 1The preprocessing steps make use of no POS knowledge, and does not bring any unfair advantages to the participants. 148 Training set Dev set Test set WSJ-Train Emails Weblogs WSJ-dev Answers Newsgroups Reviews WSJ-test #Sen 30060 2,450 1,016 1,336 1,744 1,195 1,906 1,640 #Words 731,678 29,131 24,025 32,092 28,823 20,651 28,086 35,590 #Types 35,933 5,478 4,747 5,889 4,370 4,924 4,797 6,685 Table 1: Statistics of the labelled data. #Sen denotes number of sentences. #Words and #Types denote number of words and unique word types, respectively. Emails Weblogs Answers Newsgroups Reviews #Sen 1,194,173 524,834 27,274 1,000,000 1,965,350 #Words 17,047,731 10,365,284 424,299 18,424,657 29,289,169 #Types 221,576 166,515 33,325 357,090 287,575 Table 2: Statistics of the raw unlabelled data. features templates unigram H(wi), C(wi), L(wi), L(wi−1), L(wi+1), ti−2, ti−1, ti+1, ti+2 bigram L(wi) ⊙L(wi−1), L(wi) ⊙L(wi+1), ti−2 ⊙ti−1, ti−1 ⊙ti+1, ti+1 ⊙ti+2, L(wi) ⊙ti−2, L(wi) ⊙ti−1, L(wi) ⊙ti+1, L(wi) ⊙ti+2 trigram L(wi) ⊙ti−2 ⊙ti−1, L(wi) ⊙ti−1 ⊙ti+1, L(wi) ⊙ti+1 ⊙ti+2 Table 3: Feature templates, where wi denotes the current word. H(w) and C(w) indicates whether w contains hyphen and upper case letters, respectively. L(w) denotes a lowercased w. al. (2012) and Turian et al. (2010), we also lowercased all the unlabelled data and removed those sentences that contain less than 90% a-z letters. The tagging performance is evaluated according to the official evaluation metrics of SANCL 2012. 
The tagging accuracy is defined as the percentage of words (punctuations included) that are correctly tagged. The averaged accuracies are calculated across the web domain data. We trained the WRRBM on web-domain data of different sizes (number of sentences). The data sets are generated by first concatenating all the cleaned unlabelled data, then selecting sentences evenly across the concatenated file. For each data set, we investigate an extensive set of combinations of hyper-parameters: the n-gram window (l, r) in {(1, 1), (2, 1), (1, 2), (2, 2)}; the hidden layer size in {200, 300, 400}; the learning rate in {0.1, 0.01, 0.001}. All these parameters are selected according to the averaged accuracy on the development set. 5.2 Baseline We reimplemented the greedy easy-first POS tagger of Ma et al. (2013), which is used for all the experiments. While the tagger of Ma et al. (2013) utilizes a linear scorer, our tagger adopts the neural network as its scorer. The neural network of our baseline tagger only contains the sparse-feature module. We use this baseline to examine the performance of a tagger trained purely on the source domain. Feature templates are shown in Table 3, which are based on those of Ratnaparkhi (1996) and Shen et al. (2007). Accuracies of the baseline tagger are shown in the upper part of Table 6. Compared with the performance of the official baseline (row 4 of Table 6), which is evaluated based on the output of BerkeleyParser (Petrov et al., 2006; Petrov and Klein, 2007), our baseline tagger achieves comparable accuracies on both the source and target domain data. With data preprocessing, the average accuracy boosts to about 92.02 on the test set of the target domain. This is consistent with previous work (Le Roux et al., 2011), which found that for noisy data such as web domain text, data cleaning is a effective and necessary step. 5.3 Exploring the Learned Knowledge As mentioned in Section 3.1, the knowledge learned from the WRRBM can be investigated incrementally, using word representation, which corresponds to initializing only the projection layer of web-feature module with the projection matrix of the learned WRRBM, or ngram-level representation, which corresponds to initializing both the projection and sigmoid layers of the webfeature module by the learned WRRBM. In each case, there can be two different training strategies depending on whether the learned representations are further adjusted or kept unchanged during the fine-turning phrase. Experimental results under the 4 combined settings on the development sets are illustrated in Figure 2, 3 and 4, where the 149 96.5 96.6 96.7 96.8 96.9 200 400 600 800 1000 Accuracy Number of unlabelled sentences (k) WSJ word-fixed word-adjust ngram-fixed ngram-adjust Figure 2: Tagging accuracies on the sourcedomain data. “word” and “ngram” denote using word representations and n-gram representations, respectively. “fixed” and “adjust” denote that the learned representation are kept unchanged or further adjusted in supervised learning, respectively. 89.8 90 90.2 90.4 90.6 90.8 91 200 400 600 800 1000 Accuracy Number of unlabelled sentences (k) Email word-fixed word-adjust ngram-fixed ngram-adjust Figure 3: Accuracies on the email domain. 94.8 95 95.2 95.4 95.6 95.8 200 400 600 800 1000 Accuracy Number of unlabelled sentences (k) Weblog word-fixed word-adjust ngram-fixed ngram-adjust Figure 4: Accuracies on the weblog domain. x-axis denotes the size of the training data and yaxis denotes tagging accuracy. 
5.3.1 Effect of the Training Strategy From Figure 2 we can see that when knowledge from the pre-trained WRRBM is incorpomethod all non-oov oov baseline 89.81 92.42 65.64 word-adjust +0.09 −0.05 +1.38 word-fix +0.11 +0.13 +1.73 ngram-adjust +0.53 +0.52 +0.53 ngram-fix +0.69 +0.60 +2.30 Table 4: Performance on the email domain. rated, both the training strategies (“word-fixed” vs “word-adjusted”, “ngram-fixed” vs “ngramadjusted”) improve accuracies on the source domain, which is consistent with previous findings (Turian et al., 2010; Collobert et al., 2011). In addition, adjusting the learned representation or keeping them fixed does not result in too much difference in tagging accuracies. On the web-domain data, shown in Figure 3 and 4, we found that leaving the learned representation unchanged (“word-fixed”, “ngram-fixed”) yields consistently higher performance gains. This result is to some degree expected. Intuitively, unsupervised pre-training moves the parameters of the WRRBM towards the region where properties of the web domain data are properly modelled. However, since fine-tuning is conducted with respect to the source domain, adjusting the parameters of the pre-trained representation towards optimizing source domain tagging accuracies would disrupt its ability in modelling the web domain data. Therefore, a better idea is to keep the representation unchanged so that we can learn a function that maps the general web-text properties to its syntactic categories. 5.3.2 Word and N-gram Representation From Figures 2, 3 and 4, we can see that adopting the ngram-level representation consistently achieves better performance compared with using word representations only (“word-fixed” vs “ngram-fixed”, “word-adjusted” vs “ngramadjusted”). This result illustrates that the ngramlevel knowledge captures more complex interactions of the web text, which cannot be recovered by using only word embeddings. Similar result was reported by Dahl et al. (2012), who found that using both the word embeddings and the hidden units of a tri-gram WRRBM as additional features for a CRF chunker yields larger improvements than using word embeddings only. Finally, more detailed accuracies under the 4 settings on the email domain are shown in Table 4. We can see that the improvement of using word 150 RBM-E RBM-W RBM-M +acc% Emails +0.73 +0.37 +0.69 Weblog +0.31 +0.52 +0.54 cov% Emails 95.24 92.79 93.88 Weblog 90.21 97.74 94.77 Table 5: Effect of unlabelled data. “+acc” denotes improvement in tagging accuracy and “cov” denotes the lexicon coverages. representations mainly comes from better accuracy of out-of-vocabulary (oov) words. By contrast, using n-gram representations improves the performance on both oov and non-oov. 5.4 Effect of Unlabelled Domain Data In some circumstances, we may know beforehand that the target domain data belongs to a certain sub-domain, such as the email domain. In such cases, it might be desirable to train WRRBM using data only on that domain. We conduct experiments to test whether using the target domain data to train the WRRBM yields better performance compared with using mixed data from all sub-domains. We trained 3 WRRBMs using the email domain data (RBM-E), weblog domain data (RBMW) and mixed domain data (RBM-M), respectively, with each data set consisting of 300k sentences. Tagging performance and lexicon coverages of each data set on the development sets are shown in Table 5. We can see that using the target domain data achieves similar improvements compared with using the mixed data. 
However, for the email domain, RBM-W yields much smaller improvement compared with RBM-E, and vice versa. From the lexicon coverages, we can see that the sub-domains varies significantly. The results suggest that using mixed data can achieve almost as good performance as using the target sub-domain data, while using mixed data yields a much more robust tagger across all sub-domains. 5.5 Final Results The best result achieved by using a 4-gram WRRBM, (wi−2, . . . , wi+1), with 300 hidden units learned on 1,000k web domain sentences are shown in row 3 of Table 6. Performance of the top 2 systems of the SANCL 2012 task are also shown in Table 6. Our greedy tagger achieves 93% tagging accuracy, which is significantly better than the baseline’s 92.02% accuracy (p < 0.05 by McNemar’s test). Moreover, we achieve the highest tagging accuracy reported so far on this data set, surpassing those achieved using parser combinations based on self-training (Tang et al., 2012; Le Roux et al., 2012). In addition, different from Le Roux et al. (2012), we do not use any external resources in data cleaning. 6 Related Work Learning representations has been intensively studied in computer vision tasks (Bengio et al., 2007; Lee et al., 2009a). In NLP, there is also much work along this line. In particular, Collobert et al. (2011) and Turian et al. (2010) learn word embeddings to improve the performance of in-domain POS tagging, named entity recognition, chunking and semantic role labelling. Yang et al. (2013) induce bi-lingual word embeddings for word alignment. Zheng et al. (2013) investigate Chinese character embeddings for joint word segmentation and POS tagging. While those approaches mainly explore token-level representations (word or character embeddings), using WRRBM is able to utilize both word and n-gram representations. Titov (2011) and Glorot et al. (2011) propose to learn representations from the mixture of both source and target domain unlabelled data to improve cross-domain sentiment classification. Titov (2011) also propose a regularizer to constrain the inter-domain variability. In particular, their regularizer aims to minimize the Kullback-Leibler (KL) distance between the marginal distributions of the learned representations on the source and target domains. Their work differs from ours in that their approaches learn representations from the feature vectors for sentiment classification, which might be of thousands of dimensions. Such high dimensional input gives rise to high computational cost and it is not clear whether those approaches can be applied to large scale unlabelled data, with hundreds of millions of training examples. Our method learns representations from only word ngrams with n ranging from 3 to 5, which can be easily applied to large scale-data. In addition, while Titov (2011) and Glorot et al. (2011) use the learned representation to improve cross-domain classification tasks, we are the first to apply it to cross-domain structured prediction. Blitzer et al. (2006) propose to induce shared representations for domain adaptation, which is based on the alternating structure optimization 151 System Answer Newsgroup Review WSJ-t Avg baseline-raw 89.79 91.36 89.96 97.09 90.31 baseline-clean 91.35 92.06 92.92 97.09 92.02 best-clean 92.37 93.59 93.62 97.44 93.15 baseline-offical 90.20 91.24 89.33 97.08 90.26 Le Roux et al.(2011) 91.79 93.81 93.11 97.29 92.90 Tang et al. (2012) 91.76 92.91 91.94 97.49 92.20 Table 6: Main results. 
“baseline-raw” and “baseline-clean” denote performance of our baseline tagger on the raw and cleaned data, respectively. “best-clean” is best performance achieved using a 4-gram WRRBM. The lower part shows accuracies of the official baseline and that of the top 2 participants. (ASO) method of Ando and Zhang (2005). The idea is to project the original feature representations into low dimensional representations, which yields a high-accuracy classifier on the target domain. The new representations are induced based on the auxiliary tasks defined on unlabelled data together with a dimensionality reduction technique. Such auxiliary tasks can be specific to the supervised task. As pointed out by Plank (2009), for many NLP tasks, defining the auxiliary tasks is a non-trivial engineering problem. Compared with Blitzer et al. (2006), the advantage of using RBMs is that it learns representations in a pure unsupervised manner, which is much simpler. Besides learning representations, another line of research addresses domain-adaptation by instance re-weighting (Bickel et al., 2007; Jiang and Zhai, 2007) or feature re-weighting (Satpal and Sarawagi, 2007). Those methods assume that each example x that has a non-zero probability on the source domain must have a non-zero probability on the target domain, and vice-versa. As pointed out by Titov (2011), such an assumption is likely to be too restrictive since most NLP tasks adopt word-based or lexicon-based features that vary significantly across different domains. Regarding using neural networks for sequential labelling, our approach shares similarity with that of Collobert et al. (2011). In particular, we both use a non-linear layer to model complex relations underling word embeddings. However, our network differs from theirs in the following aspects. Collobert et al. (2011) model the dependency between neighbouring tags in a generative manner, by employing a transition score Aij. Training the score involves a forward process of complexity O(nT 2), where T denotes the number of tags. Our model captures such a dependency in a discriminative manner, by just adding tag-related features to the sparse-feature module. In addition, Collobert et al. (2011) train their network by maximizing the training set likelihood, while our approach is to minimize the margin loss using guided learning. 7 Conclusion We built a web-domain POS tagger using a two-phase approach. We used a WRRBM to learn the representation of the web text and incorporate the representation in a neural network, which is trained using guided learning for easy-first POS tagging. Experiment showed that our approach achieved significant improvement in tagging the web domain text. In addition, we found that keeping the learned representations unchanged yields better performance compared with further optimizing them on the source domain data. We release our tools at https://github.com/majineu/TWeb. For future work, we would like to investigate the two-phase approach to more challenging tasks, such as web domain syntactic parsing. We believe that high-accuracy web domain taggers and parsers would benefit a wide range of downstream tasks such as machine translation2. 8 Acknowledgements We would like to thank Hugo Larochelle for his advices on re-implementing WRRBM. We also thank Nan Yang, Shujie Liu and Tong Xiao for the fruitful discussions, and three anonymous reviewers for their insightful suggestions. 
This research was supported by the National Science Foundation of China (61272376; 61300097), the research grant T2MOE1301 from Singapore Ministry of Education (MOE) and the start-up grant SRG ISTD2012038 from SUTD. References Rie Ando and Tong Zhang. 2005. A high-performance semi-supervised learning method for text chunk2This work is done while the first author is visiting SUTD. 152 ing. In Proceedings of the 43rd Annual Meeting of the Association for Computational Linguistics (ACL’05), pages 1–9, Ann Arbor, Michigan, June. Association for Computational Linguistics. Yoshua Bengio, Pascal Lamblin, Dan Popovici, and Hugo Larochelle. 2007. Greedy layer-wise training of deep networks. In B. Sch¨olkopf, J. Platt, and T. Hoffman, editors, Advances in Neural Information Processing Systems 19, pages 153–160. MIT Press, Cambridge, MA. Yoshua Bengio. 2009. Learning deep architectures for AI. Foundations and Trends in Machine Learning, 2(1):1–127. Also published as a book. Now Publishers, 2009. Steffen Bickel, Michael Brckner, and Tobias Scheffer. 2007. Discriminative learning for differing training and test distributions. In Proc of ICML 2007, pages 81–88. ACM Press. John Blitzer, Ryan McDonald, and Fernando Pereira. 2006. Domain adaptation with structural correspondence learning. In Proceedings of the 2006 Conference on Empirical Methods in Natural Language Processing, pages 120–128, Sydney, Australia, July. Association for Computational Linguistics. Michael Collins. 2002. Discriminative training methods for hidden markov models: theory and experiments with perceptron algorithms. In Proceedings of the ACL-02 conference on Empirical methods in natural language processing - Volume 10, EMNLP ’02, pages 1–8, Stroudsburg, PA, USA. Association for Computational Linguistics. R. Collobert, J. Weston, L. Bottou, M. Karlen, K. Kavukcuoglu, and P. Kuksa. 2011. Natural language processing (almost) from scratch. Journal of Machine Learning Research, 12:2493–2537. George E. Dahl, Ryan P. Adams, and Hugo Larochelle. 2012. Training restricted boltzmann machines on word observations. In John Langford and Joelle Pineau, editors, Proceedings of the 29th International Conference on Machine Learning (ICML-12), ICML ’12, pages 679–686, New York, NY, USA, July. Omnipress. Xavier Glorot, Antoine Bordes, and Yoshua Bengio. 2011. Domain adaptation for large-scale sentiment classification: A deep learning approach. In Proc of ICML 2011, pages 513–520. Yoav Goldberg and Michael Elhadad. 2010. An efficient algorithm for easy-first non-directional dependency parsing. In Human Language Technologies: The 2010 Annual Conference of the North American Chapter of the Association for Computational Linguistics, HLT ’10, pages 742–750, Stroudsburg, PA, USA. Association for Computational Linguistics. Geoffrey E. Hinton, Simon Osindero, and Yee-Whye Teh. 2006. A fast learning algorithm for deep belief nets. Neural Comput., 18(7):1527–1554, July. Geoffrey E. Hinton. 2002. Training products of experts by minimizing contrastive divergence. Neural Comput., 14(8):1771–1800, August. Jing Jiang and ChengXiang Zhai. 2007. Instance weighting for domain adaptation in nlp. In Proceedings of the 45th Annual Meeting of the Association of Computational Linguistics, pages 264–271, Prague, Czech Republic, June. Association for Computational Linguistics. Joseph Le Roux, Jennifer Foster, Joachim Wagner, Rasul Samad Zadeh Kaljahi, and Anton Bryl. 2012. DCU-Paris13 Systems for the SANCL 2012 Shared Task. 
In Proceedings of the NAACL 2012 First Workshop on Syntactic Analysis of Non-Canonical Language (SANCL), pages 1–4, Montr´eal, Canada, June. Honglak Lee, Roger Grosse, Rajesh Ranganath, and Andrew Y. Ng. 2009a. Convolutional deep belief networks for scalable unsupervised learning of hierarchical representations. In Proc of ICML 2009, pages 609–616. Honglak Lee, Peter Pham, Yan Largman, and Andrew Ng. 2009b. Unsupervised feature learning for audio classification using convolutional deep belief networks. In Y. Bengio, D. Schuurmans, J. Lafferty, C. K. I. Williams, and A. Culotta, editors, Advances in Neural Information Processing Systems 22, pages 1096–1104. Ji Ma, Jingbo Zhu, Tong Xiao, and Nan Yang. 2013. Easy-first pos tagging and dependency parsing with beam search. In Proceedings of the 51st Annual Meeting of the Association for Computational Linguistics (Volume 2: Short Papers), pages 110–114, Sofia, Bulgaria, August. Association for Computational Linguistics. Slav Petrov and Dan Klein. 2007. Improved inference for unlexicalized parsing. In Human Language Technologies 2007: The Conference of the North American Chapter of the Association for Computational Linguistics; Proceedings of the Main Conference, pages 404–411, Rochester, New York, April. Association for Computational Linguistics. Slav Petrov and Ryan McDonald. 2012. Overview of the 2012 shared task on parsing the web. Notes of the First Workshop on Syntactic Analysis of NonCanonical Language (SANCL). Slav Petrov, Leon Barrett, Romain Thibaux, and Dan Klein. 2006. Learning accurate, compact, and interpretable tree annotation. In Proceedings of the 21st International Conference on Computational Linguistics and 44th Annual Meeting of the Association for Computational Linguistics, pages 433–440, Sydney, Australia, July. Association for Computational Linguistics. Barbara Plank. 2009. Structural correspondence learning for parse disambiguation. In Alex Lascarides, 153 Claire Gardent, and Joakim Nivre, editors, EACL (Student Research Workshop), pages 37–45. The Association for Computer Linguistics. Marc’Aurelio Ranzato, Christopher Poultney, Sumit Chopra, and Yann LeCun. 2007. Efficient learning of sparse representations with an energy-based model. In B. Sch¨olkopf, J. Platt, and T. Hoffman, editors, Advances in Neural Information Processing Systems 19, pages 1137–1144. MIT Press, Cambridge, MA. Adwait Ratnaparkhi. 1996. A maximum entropy model for part-of-speech tagging. In Proceedings of the Conference on Empirical Methods in Natural Language Processing. David E. Rumelhart, Geoffrey E. Hinton, and Ronald J. Williams. 1988. Neurocomputing: Foundations of research. chapter Learning Representations by Back-propagating Errors, pages 696–699. MIT Press, Cambridge, MA, USA. Sandeepkumar Satpal and Sunita Sarawagi. 2007. Domain adaptation of conditional probability models via feature subsetting. In PKDD, volume 4702 of Lecture Notes in Computer Science, pages 224–235. Springer. Libin Shen, Giorgio Satta, and Aravind Joshi. 2007. Guided learning for bidirectional sequence classification. In Proceedings of the 45th Annual Meeting of the Association of Computational Linguistics, pages 760–767, Prague, Czech Republic, June. Association for Computational Linguistics. Buzhou Tang, Min Jiang, and Hua Xu. 2012. Varderlibt’s systems for sancl2012 shared task. In Proceedings of the NAACL 2012 First Workshop on Syntactic Analysis of Non-Canonical Language (SANCL), Montr´eal, Canada, June. Ivan Titov. 2011. 
Domain adaptation by constraining inter-domain variability of latent feature representation. In Proceedings of the 49th Annual Meeting of the Association for Computational Linguistics: Human Language Technologies, pages 62–71, Portland, Oregon, USA, June. Association for Computational Linguistics. Joseph Turian, Lev-Arie Ratinov, and Yoshua Bengio. 2010. Word representations: A simple and general method for semi-supervised learning. In Proceedings of the 48th Annual Meeting of the Association for Computational Linguistics, pages 384–394, Uppsala, Sweden, July. Association for Computational Linguistics. Mengqiu Wang and Christopher D. Manning. 2013. Effect of non-linear deep architecture in sequence labeling. In Proceedings of the 6th International Joint Conference on Natural Language Processing (IJCNLP). Nan Yang, Shujie Liu, Mu Li, Ming Zhou, and Nenghai Yu. 2013. Word alignment modeling with context dependent deep neural network. In Proceedings of the 51st Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers), pages 166–175, Sofia, Bulgaria, August. Association for Computational Linguistics. Yue Zhang and Stephen Clark. 2011. Syntax-based grammaticality improvement using ccg and guided search. In Proceedings of the 2011 Conference on Empirical Methods in Natural Language Processing, pages 1147–1157, Edinburgh, Scotland, UK., July. Association for Computational Linguistics. Xiaoqing Zheng, Hanyang Chen, and Tianyu Xu. 2013. Deep learning for Chinese word segmentation and POS tagging. In Proceedings of the 2013 Conference on Empirical Methods in Natural Language Processing, pages 647–657, Seattle, Washington, USA, October. Association for Computational Linguistics. 154
Proceedings of the 52nd Annual Meeting of the Association for Computational Linguistics, pages 1491–1500, Baltimore, Maryland, USA, June 23-25 2014. c⃝2014 Association for Computational Linguistics A Recursive Recurrent Neural Network for Statistical Machine Translation Shujie Liu1, Nan Yang2, Mu Li1 and Ming Zhou1 1Microsoft Research Asia, Beijing, China 2University of Science and Technology of China, Hefei, China shujliu, v-nayang, muli, [email protected] Abstract In this paper, we propose a novel recursive recurrent neural network (R2NN) to model the end-to-end decoding process for statistical machine translation. R2NN is a combination of recursive neural network and recurrent neural network, and in turn integrates their respective capabilities: (1) new information can be used to generate the next hidden state, like recurrent neural networks, so that language model and translation model can be integrated naturally; (2) a tree structure can be built, as recursive neural networks, so as to generate the translation candidates in a bottom up manner. A semi-supervised training approach is proposed to train the parameters, and the phrase pair embedding is explored to model translation confidence directly. Experiments on a Chinese to English translation task show that our proposed R2NN can outperform the stateof-the-art baseline by about 1.5 points in BLEU. 1 Introduction Deep Neural Network (DNN), which essentially is a multi-layer neural network, has re-gained more and more attentions these years. With the efficient training methods, such as (Hinton et al., 2006), DNN is widely applied to speech and image processing, and has achieved breakthrough results (Kavukcuoglu et al., 2010; Krizhevsky et al., 2012; Dahl et al., 2012). Applying DNN to natural language processing (NLP), representation or embedding of words is usually learnt first. Word embedding is a dense, low dimensional, real-valued vector. Each dimension of the vector represents a latent aspect of the word, and captures its syntactic and semantic properties (Bengio et al., 2006). Word embedding is usually learnt from large amount of monolingual corpus at first, and then fine tuned for special distinct tasks. Collobert et al. (2011) propose a multi-task learning framework with DNN for various NLP tasks, including part-of-speech tagging, chunking, named entity recognition, and semantic role labelling. Recurrent neural networks are leveraged to learn language model, and they keep the history information circularly inside the network for arbitrarily long time (Mikolov et al., 2010). Recursive neural networks, which have the ability to generate a tree structured output, are applied to natural language parsing (Socher et al., 2011), and they are extended to recursive neural tensor networks to explore the compositional aspect of semantics (Socher et al., 2013). DNN is also introduced to Statistical Machine Translation (SMT) to learn several components or features of conventional framework, including word alignment, language modelling, translation modelling and distortion modelling. Yang et al. (2013) adapt and extend the CD-DNN-HMM (Dahl et al., 2012) method to HMM-based word alignment model. In their work, bilingual word embedding is trained to capture lexical translation information, and surrounding words are utilized to model context information. Auli et al. (2013) propose a joint language and translation model, based on a recurrent neural network. 
Their model predicts a target word, with an unbounded history of both source and target words. Liu et al. (2013) propose an additive neural network for SMT decoding. Word embedding is used as the input to learn translation confidence score, which is combined with commonly used features in the conventional log-linear model. For distortion modeling, Li et al. (2013) use recursive auto encoders to make full use of the entire merging phrase pairs, going beyond the boundary words with a maximum entropy classifier (Xiong et al., 2006). 1491 Different from the work mentioned above, which applies DNN to components of conventional SMT framework, in this paper, we propose a novel R2NN to model the end-to-end decoding process. R2NN is a combination of recursive neural network and recurrent neural network. In R2NN, new information can be used to generate the next hidden state, like recurrent neural networks, and a tree structure can be built, as recursive neural networks. To generate the translation candidates in a commonly used bottom-up manner, recursive neural networks are naturally adopted to build the tree structure. In recursive neural networks, all the representations of nodes are generated based on their child nodes, and it is difficult to integrate additional global information, such as language model and distortion model. In order to integrate these crucial information for better translation prediction, we combine recurrent neural networks into the recursive neural networks, so that we can use global information to generate the next hidden state, and select the better translation candidate. We propose a three-step semi-supervised training approach to optimizing the parameters of R2NN, which includes recursive auto-encoding for unsupervised pre-training, supervised local training based on the derivation trees of forced decoding, and supervised global training using early update strategy. So as to model the translation confidence for a translation phrase pair, we initialize the phrase pair embedding by leveraging the sparse features and recurrent neural network. The sparse features are phrase pairs in translation table, and recurrent neural network is utilized to learn a smoothed translation score with the source and target side information. We conduct experiments on a Chinese-to-English translation task to test our proposed methods, and we get about 1.5 BLEU points improvement, compared with a state-of-the-art baseline system. The rest of this paper is organized as follows: Section 2 introduces related work on applying DNN to SMT. Our R2NN framework is introduced in detail in Section 3, followed by our three-step semi-supervised training approach in Section 4. Phrase pair embedding method using translation confidence is elaborated in Section 5. We introduce our conducted experiments in Section 6, and conclude our work in Section 7. 2 Related Work Yang et al. (2013) adapt and extend CD-DNNHMM (Dahl et al., 2012) to word alignment. In their work, initial word embedding is firstly trained with a huge mono-lingual corpus, then the word embedding is adapted and fine tuned bilingually in a context-depended DNN HMM framework. Word embeddings capturing lexical translation information and surrounding words modeling context information are leveraged to improve the word alignment performance. Unfortunately, the better word alignment result generated by this model, cannot bring significant performance improvement on a end-to-end SMT evaluation task. To improve the SMT performance directly, Auli et al. 
(2013) extend the recurrent neural network language model, in order to use both the source and target side information to scoring translation candidates. In their work, not only the target word embedding is used as the input of the network, but also the embedding of the source word, which is aligned to the current target word. To tackle the large search space due to the weak independence assumption, a lattice algorithm is proposed to rerank the n-best translation candidates, generated by a given SMT decoder. Liu et al. (2013) propose an additive neural network for SMT decoding. RNNLM (Mikolov et al., 2010) is firstly used to generate the source and target word embeddings, which are fed into a onehidden-layer neural network to get a translation confidence score. Together with other commonly used features, the translation confidence score is integrated into a conventional log-linear model. The parameters are optimized with development data set using mini-batch conjugate sub-gradient method and a regularized ranking loss. DNN is also brought into the distortion modeling. Going beyond the previous work using boundary words for distortion modeling in BTGbased SMT decoder, Li et al. (2013) propose to apply recursive auto-encoder to make full use of the entire merged blocks. The recursive auto-encoder is trained with reordering examples extracted from word-aligned bilingual sentences. Given the representations of the smaller phrase pairs, recursive auto-encoder can generate the representation of the parent phrase pair with a re-ordering confidence score. The combination of reconstruction error and re-ordering error is used to be the objective function for the model training. 1492 3 Our Model In this section, we leverage DNN to model the end-to-end SMT decoding process, using a novel recursive recurrent neural network (R2NN), which is different from the above mentioned work applying DNN to components of conventional SMT framework. R2NN is a combination of recursive neural network and recurrent neural network, which not only integrates the conventional global features as input information for each combination, but also generates the representation of the parent node for the future candidate generation. In this section, we briefly recall the recurrent neural network and recursive neural network in Section 3.1 and 3.2, and then we elaborate our R2NN in detail in Section 3.3. 3.1 Recurrent Neural Network Recurrent neural network is usually used for sequence processing, such as language model (Mikolov et al., 2010). Commonly used sequence processing methods, such as Hidden Markov Model (HMM) and n-gram language model, only use a limited history for the prediction. In HMM, the previous state is used as the history, and for ngram language model (for example n equals to 3), the history is the previous two words. Recurrent neural network is proposed to use unbounded history information, and it has recurrent connections on hidden states, so that history information can be used circularly inside the network for arbitrarily long time. 𝑉 𝑈 𝑊 ℎ𝑡−1 𝑦𝑡 ℎ𝑡 𝑥𝑡 Figure 1: Recurrent neural network As shown in Figure 1, the network contains three layers, an input layer, a hidden layer, and an output layer. The input layer is a concatenation of ht−1 and xt, where ht−1 is a real-valued vector, which is the history information from time 0 to t −1. xt is the embedding of the input word at time t . 
The word embedding xt is combined with the previous history ht−1 to produce the current hidden layer, which is a new history vector ht. Based on ht, the network predicts the probability of the next word, which forms the output layer yt. The new history ht is then used for future predictions and is updated recurrently with new information from the word embedding xt.

3.2 Recursive Neural Network

In addition to the sequential structure above, tree structures are also commonly built in NLP tasks such as parsing and SMT decoding. To generate a tree structure, recursive neural networks were introduced for natural language parsing (Socher et al., 2011). Like recurrent neural networks, recursive neural networks can use unbounded history information, in this case drawn from the sub-tree rooted at the current node. The commonly used binary recursive neural network generates the representation of a parent node from the representations of its two child nodes.

[Figure 2: Recursive neural network]

As shown in Figure 2, s[l,m] and s[m,n] are the representations of the child nodes; they are concatenated into one vector that serves as the input of the network. s[l,n] is the generated representation of the parent node, and y[l,n] is a confidence score of how plausible it is that the parent node should be created. l, m and n are indexes into the string. For example, in natural language parsing, s[l,n] is the representation of the parent node, which could be an NP or VP node, and it is also the representation of the whole sub-tree spanning positions l to n.

3.3 Recursive Recurrent Neural Network

In a recurrent neural network, the word embedding xt is integrated as new input information at each prediction step; in a recursive neural network, by contrast, no additional input is used beyond the two representation vectors of the child nodes. However, some global information that cannot be derived from the child representations is crucial for SMT performance, such as the language model score and the distortion model score. To integrate such global information while keeping the ability to generate tree structures, we combine the recurrent neural network and the recursive neural network into a recursive recurrent neural network (R2NN).

[Figure 3: Recursive recurrent neural network]

As shown in Figure 3, on top of the recursive network we add three input vectors: x[l,m] for the child node [l,m], x[m,n] for the child node [m,n], and x[l,n] for the parent node [l,n]. We call them recurrent input vectors, since they are borrowed from recurrent neural networks. The two recurrent input vectors x[l,m] and x[m,n] are concatenated, together with the original child node representations s[l,m] and s[m,n], to form the input of the network. The recurrent input vector x[l,n] is concatenated with the parent node representation s[l,n] to compute the confidence score y[l,n]. The input, hidden and output layers are calculated as follows:

x̂[l,n] = x[l,m] ▷◁ s[l,m] ▷◁ x[m,n] ▷◁ s[m,n]    (1)

s[l,n]_j = f( Σ_i x̂[l,n]_i w_ji )    (2)

y[l,n] = Σ_j ( s[l,n] ▷◁ x[l,n] )_j v_j    (3)

where ▷◁ is the concatenation operator in Equations 1 and 3, and f is a non-linear function; here we use the HTanh function, defined as:

HTanh(x) = −1 if x < −1;  x if −1 ≤ x ≤ 1;  1 if x > 1.    (4)

Figure 4 illustrates the R2NN architecture for SMT decoding. For a source sentence "laizi faguo he eluosi de", we first split it into the phrases "laizi", "faguo he eluosi" and "de".
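Before continuing with this decoding example, the combination step of Equations (1)-(4) can be made concrete with a small sketch. This is not the authors' implementation: it assumes NumPy, and the variable names and dimensionalities are illustrative.

import numpy as np

def htanh(x):
    # Equation (4): hard tanh, i.e. clip values to [-1, 1].
    return np.clip(x, -1.0, 1.0)

def r2nn_combine(s_lm, x_lm, s_mn, x_mn, x_ln, W, v):
    # One R2NN combination step (Equations 1-3).
    #   s_lm, s_mn : representations of the two child nodes
    #   x_lm, x_mn : recurrent input vectors (global features) of the children
    #   x_ln       : recurrent input vector of the parent span
    #   W          : hidden-layer weight matrix, shape (dim(s), dim(x_hat))
    #   v          : output weight vector, shape (dim(s) + dim(x_ln),)
    # Returns the parent representation s_ln and its confidence score y_ln.
    x_hat = np.concatenate([x_lm, s_lm, x_mn, s_mn])   # Equation (1)
    s_ln = htanh(W.dot(x_hat))                         # Equation (2)
    y_ln = float(v.dot(np.concatenate([s_ln, x_ln])))  # Equation (3)
    return s_ln, y_ln

During decoding, this function would be applied bottom-up to every pair of adjacent spans that can be combined, and the scores y_ln used to prune to an n-best list, as described next.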
We then check whether translation candidates can be found in the translation table for each span, together with the phrase pair embedding and recurrent input vector (global features). We call it the rule matching phase. For a translation candidate of the span node [l, m] , the black dots stand for the node representation s[l,m] , while the grey dots for recurrent input vector x[l,m] . Given s[l,m] and x[l,m] for matched translation candidates, conventional CKY decoding process is performed using R2NN. R2NN can combine the translation pairs of child nodes, and generate the translation candidates for parent nodes with their representations and plausible scores. Only the n-best translation candidates are kept for upper combination, according to their plausible scores. 来自 laizi 法国和俄罗斯 faguo he eluosi 的 de coming from France and Russia NULL Rule Match Rule Match Rule Match coming from France and Russia R2NN coming from France and Russia R2NN Figure 4: R2NN for SMT decoding We extract phrase pairs using the conventional method (Och and Ney, 2004). The commonly used features, such as translation score, language model score and distortion score, are used as the recurrent input vector x . During decoding, recurrent input vectors x for internal nodes are calculated accordingly. The difference between our model and the conventional log-linear model includes: • R2NN is not linear, while the conventional model is a linear combination. • Representations of phrase pairs are automatically learnt to optimize the translation performance, while features used in conventional model are hand-crafted. • History information of the derivation can be recorded in the representation of internal nodes, while conventional model cannot. 1494 Liu et al. (2013) apply DNN to SMT decoding, but not in a recursive manner. A feature is learnt via a one-hidden-layer neural network, and the embedding of words in the phrase pairs are used as the input vector. Our model generates the representation of a translation pair based on its child nodes. Li et al. (2013) also generate the representation of phrase pairs in a recursive way. In their work, the representation is optimized to learn a distortion model using recursive neural network, only based on the representation of the child nodes. Our R2NN is used to model the end-to-end translation process, with recurrent global information added. We also explore phrase pair embedding method to model translation confidence directly, which is introduced in Section 5. In the next two sections, we will answer the following questions: (a) how to train the model, and (b) how to generate the initial representations of translation pairs. 4 Model Training In this section, we propose a three-step training method to train the parameters of our proposed R2NN, which includes unsupervised pre-training using recursive auto-encoding, supervised local training on the derivation tree of forced decoding, and supervised global training using early update training strategy. 4.1 Unsupervised Pre-training We adopt the Recursive Auto Encoding (RAE) (Socher et al., 2011) for our unsupervised pretraining. The main idea of auto encoding is to initialize the parameters of the neural network, by minimizing the information lost, which means, capturing as much information as possible in the hidden states from the input vector. As shown in Figure 5, RAE contains two parts, an encoder with parameter W , and a decoder with parameter W ′ . 
Given the representations of the child nodes s1 and s2, the encoder generates the representation of the parent node s. With the parent node representation s as the input vector, the decoder then reconstructs the representations of the two child nodes, s′1 and s′2. The loss function is defined as follows, so as to minimize the information lost:

LRAE(s1, s2) = 1/2 ( ||s1 − s′1||² + ||s2 − s′2||² )    (5)

where ||·|| is the Euclidean norm.

[Figure 5: Recursive auto-encoding for unsupervised pre-training]

The training samples for RAE are phrase pairs {s1, s2} from the translation table, where s1 and s2 can form a continuous partial sentence pair in the training data. Once RAE training is done, only the encoding model W is fine-tuned in the subsequent training phases.

4.2 Supervised Local Training

We use a contrastive divergence method to fine-tune the parameters W and V. The loss function is the commonly used ranking loss with a margin, defined as follows:

LSLT(W, V, s[l,n]) = max(0, 1 − y[l,n]_oracle + y[l,n]_t)    (6)

where s[l,n] is the source span, y[l,n]_oracle is the plausible score of an oracle translation result, and y[l,n]_t is the plausible score of the best translation candidate given the model parameters W and V. The loss function aims to learn a model that assigns the good translation candidate (the oracle candidate) a higher score than the bad ones, with a margin of 1. Translation candidates generated by forced decoding (Wuebker et al., 2010) are used as oracle translations, i.e., as the positive samples. Forced decoding performs sentence pair segmentation using the same translation system as decoding: for each sentence pair in the training data, the SMT decoder is applied to the source side, and any candidate that is not a partial sub-string of the target sentence is removed from the n-best list during decoding. From the forced decoding result, we can obtain the ideal derivation tree in the decoder's search space and extract positive/oracle translation candidates.

4.3 Supervised Global Training

Supervised local training uses the nodes/samples in the derivation tree of forced decoding to update the model, and the trained model tends to over-fit to local decisions. In this subsection, a supervised global training method is proposed to tune the model according to the final translation performance on the whole source sentence. In principle, we can update the model from the root of the decoding tree and perform back-propagation along the tree structure. However, due to the inexact search nature of SMT decoding, search errors may inevitably break theoretical properties, and the final translation results may not be suitable for model training. To handle this problem, we use an early update strategy for supervised global training. Early update has been shown to be useful for SMT training with large-scale features (Yu et al., 2013). Instead of updating the model using the final translation results, the early update approach optimizes the model as soon as the oracle translation candidate is pruned from the n-best list; that is, the model is updated once it makes an unrecoverable mistake. Back-propagation is performed along the tree structure, and the phrase pair embeddings of the leaf nodes are updated.
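Before defining the global training loss, the two losses introduced so far can be sketched in code. The sketch below assumes NumPy, child representations of equal dimension, and an illustrative choice of non-linearity for the encoder; it is meant only to make Equations (5) and (6) concrete.

import numpy as np

def rae_loss(s1, s2, W_enc, W_dec):
    # Recursive auto-encoder reconstruction loss of Equation (5).
    # Encode the parent from the two children, decode the children back,
    # and measure the squared reconstruction error.
    s = np.tanh(W_enc.dot(np.concatenate([s1, s2])))  # encoder (non-linearity is an assumption)
    s1_rec, s2_rec = np.split(W_dec.dot(s), 2)        # decoder reconstructs both children
    return 0.5 * (np.sum((s1 - s1_rec) ** 2) + np.sum((s2 - s2_rec) ** 2))

def local_ranking_loss(y_oracle, y_best):
    # Margin ranking loss of Equation (6): the oracle candidate should score
    # at least 1 higher than the best competing candidate for the same span.
    return max(0.0, 1.0 - y_oracle + y_best)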
The loss function for supervised global training is defined as follows:

LSGT(W, V, s[l,n]) = −log( Σ_oracle exp(y[l,n]_oracle) / Σ_{t∈nbest} exp(y[l,n]_t) )    (7)

where y[l,n]_oracle is the model score of an oracle translation candidate for the span [l,n]. Oracle translation candidates are the candidates obtained from forced decoding. If the span [l,n] is not the whole source sentence, there may be several oracle translation candidates; otherwise there is only one, which is exactly the target sentence. There are far fewer training samples here than for supervised local training, so the ranking loss is no longer suitable for global training. We instead use the negative log-likelihood to penalize all translation candidates except the oracle ones, so as to leverage all the translation candidates as training samples.

5 Phrase Pair Embedding

The next question is how to initialize the phrase pair embeddings in the translation table, so as to generate the leaf nodes of the derivation tree. There are many more phrase pairs than monolingual words, yet bilingual corpora are much more difficult to acquire than monolingual corpora.

Embedding     #Data   #Entry      #Parameter
Word          1G      500K        20 × 500K
Word Pair     7M      (500K)^2    20 × (500K)^2
Phrase Pair   7M      (500K)^4    20 × (500K)^4

Table 1: The relationship between the size of the training data and the number of model parameters. The numbers for word embedding are calculated on the English Giga-Word corpus version 3. For word pair and phrase pair embedding, the numbers are calculated on the IWSLT 2009 dialog training set. The word count of each side of a phrase pair is limited to 2.

Table 1 shows the relationship between the size of the training data and the number of model parameters. For word embedding, the training data size is 1G and we may have 500K terms; with a vector of length 20 per term, there are 20 × 500K parameters in total. But for source-target word pairs, we may only have a 7M bilingual corpus for training (taking the IWSLT data set as an example), and there are 20 × (500K)^2 parameters to be tuned. For phrase pairs, the situation becomes even worse, especially when the limit on the word count in phrase pairs is relaxed. It is therefore very difficult to learn phrase pair embeddings by brute force, in the way word embeddings are learnt (Mikolov et al., 2010; Collobert et al., 2011), since we may not have enough training data.

A simple approach to constructing a phrase pair embedding is to use the average of the embeddings of the words in the phrase pair. One problem is that word embeddings may not be able to model the translation relationship between source and target phrases at the phrase level, since some phrases cannot be decomposed. For example, the meaning of "hot dog" is not the composition of the meanings of the words "hot" and "dog". In this section, we split the phrase pair embedding into two parts that model the translation confidence directly: translation confidence with sparse features and translation confidence with a recurrent neural network. We first obtain two translation confidence vectors separately, using sparse features and a recurrent neural network, and then concatenate them to form the phrase pair embedding. We call it translation confidence based phrase pair embedding (TCBPPE).

5.1 Translation Confidence with Sparse Features

Large-scale feature training has drawn increasing attention in recent years (Liang et al., 2006; Yu et al., 2013).
Instead of integrating the sparse features directly into the log-linear model, we use them as the input to learn a phrase pair embedding. For the top 200,000 frequent translation pairs, each of them is a feature in itself, and a special feature is added for all the infrequent ones. The one-hot representation vector is used as the input, and a one-hidden-layer network generates a confidence score. To train the neural network, we add the confidence scores to the conventional log-linear model as features. Forced decoding is utilized to get positive samples, and contrastive divergence is used for model training. The neural network is used to reduce the space dimension of sparse features, and the hidden layer of the network is used as the phrase pair embedding. The length of the hidden layer is empirically set to 20. 5.2 Translation Confidence with Recurrent Neural Network 𝑊e 𝑉 ℎ𝑖−1 𝑈 𝑝(𝑒𝑖) 𝑒𝑖−1 𝑊𝑓 𝑓𝑎𝑖 ℎ𝑖 Figure 6: Recurrent neural network for translation confidence We use recurrent neural network to generate two smoothed translation confidence scores based on source and target word embeddings. One is source to target translation confidence score and the other is target to source. These two confidence scores are defined as: TS2T (s, t) = X i log p(ei|ei−1, fai, hi) (8) TT2S(s, t) = X j log p(fj|fj−1, eˆaj, hj) (9) where, fai is the corresponding target word aligned to ei , and it is similar for eˆaj . p(ei|ei−1, fai, hi) is produced by a recurrent network as shown in Figure 6. The recurrent neural network is trained with word aligned bilingual corpus, similar as (Auli et al., 2013). 6 Experiments and Results In this section, we conduct experiments to test our method on a Chinese-to-English translation task. The evaluation method is the case insensitive IBM BLEU-4 (Papineni et al., 2002). Significant testing is carried out using bootstrap re-sampling method proposed by (Koehn, 2004) with a 95% confidence level. 6.1 Data Setting and Baseline The data is from the IWSLT 2009 dialog task. The training data includes the BTEC and SLDB training data. The training data contains 81k sentence pairs, 655K Chinese words and 806K English words. The language model is a 5-gram language model trained with the target sentences in the training data. The test set is development set 9, and the development set comprises both development set 8 and the Chinese DIALOG set. The training data for monolingual word embedding is Giga-Word corpus version 3 for both Chinese and English. Chinese training corpus contains 32M sentences and 1.1G words. English training data contains 8M sentences and 247M terms. We only train the embedding for the top 100,000 frequent words following (Collobert et al., 2011). With the trained monolingual word embedding, we follow (Yang et al., 2013) to get the bilingual word embedding using the IWSLT bilingual training data. Our baseline decoder is an in-house implementation of Bracketing Transduction Grammar (BTG) (Wu, 1997) in CKY-style decoding with a lexical reordering model trained with maximum entropy (Xiong et al., 2006). The features of the baseline are commonly used features as standard BTG decoder, such as translation probabilities, lexical weights, language model, word penalty and distortion probabilities. All these commonly used features are used as recurrent input vector x in our R2NN. 6.2 Translation Results As we mentioned in Section 5, constructing phrase pair embeddings from word embeddings may be not suitable. Here we conduct experiments to ver1497 ify it. 
We first train the source and target word embeddings separately using large monolingual data, following (Collobert et al., 2011). Using monolingual word embedding as the initialization, we fine tune them to get bilingual word embedding (Yang et al., 2013). The word embedding based phrase pair embedding (WEPPE) is defined as: Eppweb(s, t) = X i Ewms(si) ▷◁ X j Ewbs(sj) ▷◁ X k Ewmt(tk) ▷◁ X l Ewbt(tl) (10) where ▷◁is a concatenation operator. s and t are the source and target phrases. Ewms(si) and Ewmt(tk) are the monolingual word embeddings, and Ewbs(si) and Ewbt(tk) are the bilingual word embeddings. Here the length of the word embedding is also set to 20. Therefore, the length of the phrase pair embedding is 20 × 4 = 80 . We compare our phrase pair embedding methods and our proposed R2NN with baseline system, in Table 2. We can see that, our R2NN models with WEPPE and TCBPPE are both better than the baseline system. WEPPE cannot get significant improvement, while TCBPPE does, compared with the baseline result. TCBPPE is much better than WEPPE. Setting Development Test Baseline 46.81 39.29 WEPPE+R2NN 47.23 39.92 TCBPPE+R2NN 48.70 ↑ 40.81 ↑ Table 2: Translation results of our proposed R2NN Model with two phrase embedding methods, compared with the baseline. Setting ”WEPPE+R2NN” is the result with word embedding based phrase pair embedding and our R2NN Model, and ”TCBPPE+R2NN” is the result of translation confidence based phrase pair embedding and our R2NN Model. The results with ↑are significantly better than the baseline. Word embedding can model translation relationship at word level, but it may not be powerful to model the phrase pair respondents at phrasal level, since the meaning of some phrases cannot be decomposed into the meaning of words. And also, translation task is difference from other NLP tasks, that, it is more important to model the translation confidence directly (the confidence of one target phrase as a translation of the source phrase), and our TCBPPE is designed for such purpose. 6.3 Effects of Global Recurrent Input Vector In order to compare R2NN with recursive network for SMT decoding, we remove the recurrent input vector in R2NN to test its effect, and the results are shown in Table 3. Without the recurrent input vectors, R2NN degenerates into recursive neural network (RNN). Setting Development Test WEPPE+R2NN 47.23 40.81 WEPPE+RNN 37.62 33.29 TCBPPE+R2NN 48.70 40.81 TCBPPE+RNN 45.11 37.33 Table 3: Experimental results to test the effects of recurrent input vector. WEPPE /TCBPPE+RNN are the results removing recurrent input vectors with WEPPE /TCBPPE. From Table 3 we can find that, the recurrent input vector is essential to SMT performance. When we remove it from R2NN, WEPPE based method drops about 10 BLEU points on development data and more than 6 BLEU points on test data. TCBPPE based method drops about 3 BLEU points on both development and test data sets. When we remove the recurrent input vectors, the representations of recursive network are generated with the child nodes, and it does not integrate global information, such as language model and distortion model, which are crucial to the performance of SMT. 6.4 Sparse Features and Recurrent Network Features To test the contributions of sparse features and recurrent network features, we first remove all the recurrent network features to train and test our R2NN model, and then remove all the sparse features to test the contribution of recurrent network features. 
Setting Development Test TCBPPE+R2NN 48.70 40.81 SF+R2NN 48.23 40.19 RNN+R2NN 47.89 40.01 Table 4: Experimental results to test the effects of sparse features and recurrent network features. 1498 The results are shown in Table 6.4. From the results, we can find that, sparse features are more effective than the recurrent network features a little bit. The sparse features can directly model the translation correspondence, and they may be more effective to rank the translation candidates, while recurrent neural network features are smoothed lexical translation confidence. 7 Conclusion and Future Work In this paper, we propose a Recursive Recurrent Neural Network(R2NN) to combine the recurrent neural network and recursive neural network. Our proposed R2NN cannot only integrate global input information during each combination, but also can generate the tree structure in a recursive way. We apply our model to SMT decoding, and propose a three-step semisupervised training method. In addition, we explore phrase pair embedding method, which models translation confidence directly. We conduct experiments on a Chinese-to-English translation task, and our method outperforms a state-of-theart baseline about 1.5 points BLEU. From the experiments, we find that, phrase pair embedding is crucial to the performance of SMT. In the future, we will explore better methods for phrase pair embedding to model the translation equivalent between source and target phrases. We will apply our proposed R2NN to other tree structure learning tasks, such as natural language parsing. References Michael Auli, Michel Galley, Chris Quirk, and Geoffrey Zweig. 2013. Joint language and translation modeling with recurrent neural networks. In Proceedings of the 2013 Conference on Empirical Methods in Natural Language Processing, pages 1044– 1054, Seattle, Washington, USA, October. Association for Computational Linguistics. Yoshua Bengio, Holger Schwenk, Jean-S´ebastien Sen´ecal, Fr´ederic Morin, and Jean-Luc Gauvain. 2006. Neural probabilistic language models. Innovations in Machine Learning, pages 137–186. Ronan Collobert, Jason Weston, L´eon Bottou, Michael Karlen, Koray Kavukcuoglu, and Pavel Kuksa. 2011. Natural language processing (almost) from scratch. The Journal of Machine Learning Research, 12:2493–2537. George E Dahl, Dong Yu, Li Deng, and Alex Acero. 2012. Context-dependent pre-trained deep neural networks for large-vocabulary speech recognition. Audio, Speech, and Language Processing, IEEE Transactions on, 20(1):30–42. Geoffrey E Hinton, Simon Osindero, and Yee-Whye Teh. 2006. A fast learning algorithm for deep belief nets. Neural computation, 18(7):1527–1554. Koray Kavukcuoglu, Pierre Sermanet, Y-Lan Boureau, Karol Gregor, Micha¨el Mathieu, and Yann LeCun. 2010. Learning convolutional feature hierarchies for visual recognition. Advances in Neural Information Processing Systems, pages 1090–1098. Philipp Koehn. 2004. Statistical significance tests for machine translation evaluation. In Proceedings of the Conference on Empirical Methods in Natural Language Processing, pages 388–395. Alex Krizhevsky, Ilya Sutskever, and Geoff Hinton. 2012. Imagenet classification with deep convolutional neural networks. In Advances in Neural Information Processing Systems 25, pages 1106–1114. Peng Li, Yang Liu, and Maosong Sun. 2013. Recursive autoencoders for ITG-based translation. In Proceedings of the 2013 Conference on Empirical Methods in Natural Language Processing, pages 567– 577, Seattle, Washington, USA, October. 
Association for Computational Linguistics. Percy Liang, Alexandre Bouchard-Cˆot´e, Dan Klein, and Ben Taskar. 2006. An end-to-end discriminative approach to machine translation. In Proceedings of the 21st International Conference on Computational Linguistics and the 44th annual meeting of the Association for Computational Linguistics, pages 761–768. Association for Computational Linguistics. Lemao Liu, Taro Watanabe, Eiichiro Sumita, and Tiejun Zhao. 2013. Additive neural networks for statistical machine translation. In Proceedings of the 51st Annual Meeting of the Association for Computational Linguistics, pages 791–801, Sofia, Bulgaria, August. Association for Computational Linguistics. Tomas Mikolov, Martin Karafi´at, Lukas Burget, Jan Cernock`y, and Sanjeev Khudanpur. 2010. Recurrent neural network based language model. In Proceedings of the Annual Conference of International Speech Communication Association, pages 1045– 1048. Franz Josef Och and Hermann Ney. 2004. The alignment template approach to statistical machine translation. Computational linguistics, 30(4):417–449. Kishore Papineni, Salim Roukos, Todd Ward, and WeiJing Zhu. 2002. Bleu: a method for automatic evaluation of machine translation. In Proceedings of the 40th annual meeting on association for computational linguistics, pages 311–318. Association for Computational Linguistics. 1499 Richard Socher, Cliff C Lin, Andrew Y Ng, and Christopher D Manning. 2011. Parsing natural scenes and natural language with recursive neural networks. In Proceedings of the 26th International Conference on Machine Learning (ICML), volume 2, page 7. Richard Socher, John Bauer, and Christopher D Manning. 2013. Parsing with compositional vector grammars. In Proceedings of the 51st Annual Meeting of the Association for Computational Linguistics, volume 1, pages 455–465. Dekai Wu. 1997. Stochastic inversion transduction grammars and bilingual parsing of parallel corpora. Computational linguistics, 23(3):377–403. Joern Wuebker, Arne Mauser, and Hermann Ney. 2010. Training phrase translation models with leaving-one-out. In Proceedings of the 48th Annual Meeting of the Association for Computational Linguistics, pages 475–484. Association for Computational Linguistics. Deyi Xiong, Qun Liu, and Shouxun Lin. 2006. Maximum entropy based phrase reordering model for statistical machine translation. In Proceedings of the 43rd Annual Meeting of the Association for Computational Linguistics, volume 44, page 521. Nan Yang, Shujie Liu, Mu Li, Ming Zhou, and Nenghai Yu. 2013. Word alignment modeling with context dependent deep neural network. In 51st Annual Meeting of the Association for Computational Linguistics. Heng Yu, Liang Huang, Haitao Mi, and Kai Zhao. 2013. Max-violation perceptron and forced decoding for scalable MT training. In Proceedings of the 2013 Conference on Empirical Methods in Natural Language Processing, pages 1112–1123, Seattle, Washington, USA, October. Association for Computational Linguistics. 1500
Proceedings of the 52nd Annual Meeting of the Association for Computational Linguistics, pages 1501–1511, Baltimore, Maryland, USA, June 23-25 2014. c⃝2014 Association for Computational Linguistics Predicting Instructor’s Intervention in MOOC forums Snigdha Chaturvedi Dan Goldwasser Hal Daum´e III Department of Computer Science, University of Maryland, College Park, Maryland {snigdhac, goldwas1, hal}@umiacs.umd.edu Abstract Instructor intervention in student discussion forums is a vital component in Massive Open Online Courses (MOOCs), where personalized interaction is limited. This paper introduces the problem of predicting instructor interventions in MOOC forums. We propose several prediction models designed to capture unique aspects of MOOCs, combining course information, forum structure and posts content. Our models abstract contents of individual posts of threads using latent categories, learned jointly with the binary intervention prediction problem. Experiments over data from two Coursera MOOCs demonstrate that incorporating the structure of threads into the learning problem leads to better predictive performance. 1 Introduction Ubiquitous computing and easy access to high bandwidth internet have reshaped the modus operandi in distance education towards Massive Open Online Courses (MOOCs). Courses offered by ventures such as Coursera and Udacity now impart inexpensive and high-quality education from field-experts to thousands of learners across geographic and cultural barriers. Even as the MOOC model shows exciting possibilities, it presents a multitude of challenges that must first be negotiated to completely realize its potential. MOOCs platforms have been especially criticized on grounds of lacking a personalized educational experience (Edmundson, 2012). Unlike traditional classrooms, the predominant mode of interaction between students and instructors in MOOCs is via online discussion forums. Ideally, forum discussions can help make up for the lack of direct interaction, by enabling students to ask questions and clarify doubts. However, due to huge class sizes, even during the short duration of a course, MOOCs witness a very large number of threads on these forums. Owing to extremely skewed ratios of students to instructional staff, it can be prohibitively time-consuming for the instructional staff to manually follow all threads of a forum. Hence there is a pressing need for automatically curating the discussions for the instructors. In this paper, we focus on identifying situations in which instructor (used interchangeably with “instructional staff” in this paper) intervention is warranted. Using existing forum posts and interactions, we frame this as a binary prediction problem of identifying instructor’s intervention in forum threads. Our initial analysis revealed that instructors usually intervene on threads discussing students’ issues close to a quiz or exam. They also take interest in grading issues and logistics problems. There are multiple cues specific to the MOOC setting, which when combined with the rich lexical information present in the forums, can yield useful predictive models. Analyzing forum-postings contents and bringing the most pertinent content to the instructor’s attention would help instructors receive timely feedback and design interventions as needed. 
From the students’ perspective, the problem is evident from an examination of existing forum content, indicating that if students want instructor’s input on some issues, the only way for them to get his/her attention is by ‘up-voting’ their votes. Fig. 1 provides some examples of this behavior. This is clearly an inefficient solution. Our main technical contribution is introducing three different models addressing the task of predicting instructor interventions. The first uses a logistic regression model that primarily incorporates high level information about threads and posts. However, forum threads have structure which is not leveraged our initial model. We present two 1501 “The problem summary: Anyone else having problems viewing the video lecture...very choppy. If you are also experiencing this issue; please upvote this post.” “I read that by up-voting threads and posts you can get the instructors’ attention faster.” “Its is very bad to me that I achieved 10 marks in my 1st assignment and now 9 marks in my 2nd assignment, now I won’t get certificate, please Course staff it is my appeal to change the passing scheme or please be lenient. Please upvote my post so that staff take this problem under consideration.” Figure 1: Sample posts that showing students desiring instructor’s attention have to resolve to the inefficient method of getting their posts upvoted. additional structured models. Both models assume that posts of a thread structure it in form of a story or a “chain of events.” For example, an opening post of a thread might pose a question and the following posts can then answer or comment on the question. Our second and third models tap this linear ‘chain of events’ behavior by assuming that individual posts belong to latent categories which represent their textual content at an abstract level and that an instructor’s decision to reply to a post is based on this chain of events (represented by the latent categories). We present two different ways of utilizing this ‘chain of events’ behavior for predicting instructor’s intervention which can be either simply modeled as the ‘next step’ is this chain of events (Linear Chain Markov Model) or as a decision globally depending on the entire chain (Global Chain Model). Our experiments on two different datasets reveal that using the latent post categories helps in better prediction. Our contributions can be summarized as: • We motivate and introduce the important problem of predicting instructor intervention in MOOC forums • We present two chain based models that incorporate thread structure. • We show the utility of modeling thread structure, and the value of lexical and domain specific knowledge for the prediction task 2 Related Work To the best of our knowledge, the problem of predicting instructor’s intervention in MOOC forums has not been addressed yet. Prior work deals with analyzing general online discussion forums of social media sites (Kleinberg, 2013): such as predicting comment volume (Backstrom et al., 2013; De Choudhury et al., 2009; Wang et al., 2012; Tsagkias et al., 2009; Yano and Smith, 2010; Artzi et al., 2012) and rate of content diffusion (Kwak et al., 2010; Lerman and Ghosh, 2010; Bakshy et al., 2011; Romero et al., 2011; Artzi et al., 2012) and also question answering (Chaturvedi et al., 2014). Wang et al. 
(2007) incorporate thread structure of conversations using features in email threads while Goldwasser and Daum´e III (2014) use latent structure, aimed to identify relevant dialog segments, for predicting objections during courtroom deliberations. Other related work include speech act recognition in emails and forums but at a sentence level (Jeong et al., 2009), and using social network analysis to improve message classification into pre-determined types (Fortuna et al., 2007). Discussion forums data has also been used to address other interesting challenges such as extracting chatbox knowledge for use in general online forums (Huang et al., 2007) and automatically extracting answers from discussion forums (Catherine et al., 2013), subjectivity analysis of online forums (Biyani et al., 2013). Most of these methods use ideas similar to ours: identifying that threads (or discussions) have an underlying structure and that messages belong to categories. However, they operate in a different domain, which makes their goals and methods different from ours. Our work is most closely related to that of Backstrom et al. (2013) which introduced the re-entry prediction task —predicting whether a user who has participated in a thread will later contribute another comment to it. While seemingly related, their prediction task, focusing on users who have already commented on a thread, and their algorithmic approach are different than ours. Our work is also very closely related to that of Wang et al. (2013) who predict solvedness —which predicts if there is a solution to the original problem posted in the thread. Like us, they believe that category of posts can assist in the prediction task, however, possibly owing to the complexity of general discussion forums, they had to manually create and annotate data with a sophisticated taxonomy. We do not make such assumptions. The work presented in (G´omez et al., 2008; 1502 Liben-Nowell and Kleinberg, 2008; Kumar et al., 2010; Golub and Jackson, 2010; Wang et al., 2011; Aumayr et al., 2011) discuss characterizing threads using reply-graphs (often trees) and learning this structure. However, this representation is not natural for the MOOC domain where discussions are relatively more focused on the thread topic and are better organized using sections within the forums. Although most prior work focuses on discussion forums of social media sites such as Twitter or Facebook, where the dynamics of interaction is very different from MOOCs, a small number of recent work address the unique MOOC setting. Stump et al. (2013) propose a framework for categorizing forum posts by designing a taxonomy and annotating posts manually to assist general forum analysis. Our model learns categories in a data-driven manner guided by the binary supervision (intervention decision) and serves a different purpose. Nevertheless, in Sec. 4.3 we compare the categories learnt by our models with those proposed by Stump et al. (2013). Apart from this, recent works have looked into interesting challenges in this domain such as better peer grading models (Piech et al., 2013), code review (Huang et al., 2013; Nguyen et al., 2014), improving student engagement (Anderson et al., 2014) and understanding how students learn and code (Piech et al., 2012; Kizilcec et al., 2013; Ramesh et al., 2013). 3 Intervention Prediction Models In this section, we explain our models in detail. 
3.1 Problem Setting In our description it is assumed that a discussion board is organized into multiple forums (representing topics such as “Assignment”, “Study Group” etc.). A forum consists of multiple threads. Each thread (t) has a title and consists of multiple posts (pi). Individual posts do not have a title and the number of posts varies dramatically from one thread to another. We address the problem of predicting if the course instructor would intervene on a thread, t. The instructor’s decision to intervene, r, equals 0 when the instructor doesn’t reply to the thread and 1 otherwise. The individual posts are not assumed to be labeled with any category and the only supervision given to the model during training is in form of intervention decision. 3.2 Logistic Regression (LR) Our first attempt at solving this problem involved training a logistic regression for the binary prediction task which models P(r|t). 3.2.1 Feature Engineering Our logistic regression model uses the following two types of features: Thread only features and Aggregated post features. ‘Thread only features’ capture information about the thread such as when, where, by who was the thread posted and lexical features based on the title of the thread. While these features provide a high-level information about the thread, it is also important to analyze the contents of the posts of the thread. In order to maintain a manageable feature space, we compress the features from posts and represent them using our ‘Aggregated post features’. Thread only features: 1. a binary feature indicating if the thread was started by an anonymous user 2. three binary features indicating whether the thread was marked as approved, unresolved or deleted (respectively) 3. forum id in which the thread was posted 4. time when the thread was started 5. time of last posting on the thread 6. total number of posts in the thread 7. a binary feature indicating if the thread title contains the words lecture or lectures 8. a binary feature indicating if the thread title contains the words assignment, quiz, grade, project, exam (and their plural forms) Aggregated post features: 9. sum of number of votes received by the individual posts 10. mean and variance of the posting times of individual posts in the thread 11. mean of time difference between the posting times of individual posts and the closest course landmark. A course landmark is the deadline of an assignment, exam or project. 12. sum of count of occurrences of assessment related words e.g. grade, exam, assignment, quiz, reading, project etc. in the posts 13. sum of count of occurrences of words indicating technical problems e.g. problem, error 14. sum of count of occurrences of thread conclusive words like thank you and thank 15. sum of count of occurrences of request, submit, suggest 1503 h1 h2 hn r φ(t) p1 p2 pn T (a) Linear Chain Markov Model (LCMM) h1 h2 hn r φ(t) p1 p2 pn T (b) Global Chain Model (GCM) Figure 2: Diagrams of the Linear Chain Markov Model (LCMM) and the Global Chain Model (GCM). pi, r and φ(t) are observed and hi are the latent variables. pi and hi represent the posts of the thread and their latent categories respectively; r represents the instructor’s intervention and φ(t) represent the non-structural features used by the logistic regression model. We had also considered and dropped (because of no performance gain) other features about identity of the user who started the thread, number of distinct participants in the thread (an important feature used by Backstrom et al. 
(2013)), binary feature indicating if the first and the last posts were by the same user, average number of words in the thread’s posts, lexical features capturing references to the instructors in the posts etc. 3.3 Linear Chain Markov Model (LCMM) The logistic regression model is good at exploiting the thread level features but not the content of individual posts. The ‘Aggregated post features’ attempt to capture this information but since the number of posts in a thread is variable, these features relied on aggregated values. We believe that considering aggregate values is not sufficient for the task in hand. As noted before, posts of a thread are not independent of each other. Instead, they are arranged chronologically such that a post is published in reply to the preceding posts and this For every thread, t, in the dataset: 1. Choose a start state, h1, and emit the first post, p1. 2. For every subsequent post, pi ∀i ∈ {2 . . . n} : (a) Transition from hi−1 to hi. (b) Emit post pi. 3. Generate the instructor’s intervention decision, r, using the last state hn and non-structural features, φ(t). Figure 3: Instructor’s intervention decision process for the Linear Chain Markov Model. might effect an instructor’s decision to reply. For example, consider a thread that starts with a question. The following posts will be students’ attempt to answer the question or raise further concerns or comment on previous posts. The instructor’s post, though a future event, will be a part of this process. We, therefore, propose to model this complete process using a linear chain markov model shown in Fig. 2a. The model abstractly represents the information from individual posts (pi) using latent categories (hi). The intervention decision, r, is the last step in the chain and thus incorporates information from the individual posts. It also depends on the thread level features: ‘Thread only features’ and the ‘Aggregated post features’ jointly represented by φ(t) (also referred to as the nonstructural features). This process is explained in Fig. 3. We use hand-crafted features to model the dynamics of the generative process. Whenever a latent state emits a post or transits to another latent state (or to the final intervention decision state), emission and transition features get fired which are then multiplied by respective weights to compute a thread’s ‘score’: fw(t, p) = max h [w · φ(p, r, h, t)] (1) Note that the non-structural features, φ(t), also contribute to the final score. 3.3.1 Learning and Inference During training we maximize the combined scores of all threads in the dataset using a generic EM style algorithm. The supervision in this model is provided only in form of the observed intervention decision, r and the post categories, hi are hid1504 den. The model uses the pseudocode shown in Algorithm 1 to iteratively refine the weight vectors. In each iteration, the model first uses viterbi algorithm to decode thread sequences with the current weights wt to find optimal highest scoring latent state sequences that agree with the observed intervention state (r = r′). In the next step, given the latent state assignments from the previous step, a structured perceptron algorithm (Collins, 2002) is used to update the weights wt+1 using weights from the previous step, wt, initialization. 
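As a concrete illustration of the constrained decoding step just described (line 5 of Algorithm 1 below), the following sketch finds the highest-scoring latent category sequence that ends in the observed intervention decision. It assumes NumPy and that the dot products of the weight vector with the emission, transition and final-decision features have already been collected into score arrays; the names are illustrative.

import numpy as np

def constrained_viterbi(emit, trans, final, r_observed):
    # emit  : (n_posts, H) array, emit[i, h] = score of post i emitted from category h
    # trans : (H, H) array, trans[g, h]      = score of moving from category g to h
    # final : (H, 2) array, final[h, r]      = score of ending in category h with decision r
    # r_observed : the observed intervention decision (0 or 1)
    # Returns the best total score and the corresponding latent category sequence.
    n, H = emit.shape
    score = np.full((n, H), -np.inf)
    back = np.zeros((n, H), dtype=int)
    score[0] = emit[0]                                 # choose a start state and emit post 1
    for i in range(1, n):
        cand = score[i - 1][:, None] + trans + emit[i][None, :]
        back[i] = cand.argmax(axis=0)                  # best previous category for each current one
        score[i] = cand.max(axis=0)
    total = score[n - 1] + final[:, r_observed]        # constrain the final decision to r_observed
    h = [int(np.argmax(total))]
    for i in range(n - 1, 0, -1):                      # backtrack the category sequence
        h.append(int(back[i, h[-1]]))
    return float(np.max(total)), list(reversed(h))

At test time the same routine can be run for r = 0 and for r = 1 and the higher-scoring decision returned; the contribution of the non-structural features φ(t) would be added to the final score as a constant offset.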
Algorithm 1 Training algorithm for LCMM 1: Input: Labeled data D = {(t, p, r)i} 2: Output: Weights w 3: Initialization: Set wj randomly, ∀j 4: for t : 1 to N do 5: ˆhi = arg maxh[wt · φ(p, r, h, t)] such that r = ri∀i 6: wt+1 = StructuredPerceptron(t, p, ˆh, r) 7: end for 8: return w While testing, we use the learned weights and viterbi decoding to compute the intervention state and the best scoring latent category sequence. 3.3.2 Feature Engineering In addition to the ‘Thread Only Features’ and the ‘Aggregated post features’, φ(t) (Sec. 3.2.1, this model uses the following emission and transition features: Post Emission Features: 1. φ(pi, hi) = count of occurrences of question words or question marks in pi if the state is hi; 0 otherwise. 2. φ(pi, hi) = count of occurrences of thank words (thank you or thanks) in pi if the state is hi; 0 otherwise. 3. φ(pi, hi) = count of occurrences of greeting words (e.g. hi, hello, good morning, welcome etc ) in pi if the state is hi; 0 otherwise. 4. φ(pi, hi) = count of occurrences of assessment related words (e.g. grade, exam, assignment, quiz, reading, project etc.) in pi if the state is hi; 0 otherwise. 5. φ(pi, hi) = count of occurrences of request, submit or suggest in pi if the state is hi; 0 otherwise. 6. φ(pi, hi) = log(course duration/t(pi)) if the state is hi; 0 otherwise. Here t(pi) is the difference between the posting time of pi and the closest course landmark (assignment or project deadline or exam). 7. φ(pi, pi−1, hi) = difference between posting times of pi and pi−1 normalized by course duration if the state is hi; 0 otherwise. Transition Features: 1. φ(hi−1, hi) = 1 if previous state is hi−1 and current state is hi; 0 otherwise. 2. φ(hi−1, hi, pi, pi−1) = cosine similarity between pi−1 and pi if previous state is hi−1 and current state is hi; 0 otherwise. 3. φ(hi−1, hi, pi, pi−1) = length of pi if previous state is hi−1, pi−1 has non-zero question words and current state is hi; 0 otherwise. 4. φ(hn, r) = 1 if last post’s state is hn and intervention decision is r; 0 otherwise. 5. φ(hn, r, pn) = 1 if last post’s state is hn, pn has non-zero question words and intervention decision is r; 0 otherwise. 6. φ(hn, r, pn) = log(course duration/t(pn)) if last post’s state is hn and intervention decision is r; 0 otherwise. Here t(pn) is the difference between the posting time of pn and the closest course landmark (assignment or project deadline or exam). 3.4 Global Chain Model (GCM) In this model we propose another way of incorporating the chain structure of a thread. Like the previous model, this model also assumes that posts belong to latent categories. It, however, doesn’t model the instructor’s intervention decision as a step in the thread generation process. Instead, it assumes that instructor’s decision to intervene is dependent on all the posts in the threads, modeled using the latent post categories. This model is shown in Fig. 2b. Assuming that p represents posts of thread t, h represents the latent category assignments, r represents the intervention decision; feature vector, φ(p, r, h, t), is extracted for each thread and using the weight vector, w, this model defines a decision function, similar to what is shown in Equation 1. 
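Operationally, this decision function searches over all latent category assignments for the one that maximizes the score. A brute-force sketch is shown below for clarity (assuming NumPy and precomputed feature scores; names are illustrative); because only adjacent posts are linked by transition features, the same maximum can be computed efficiently with dynamic programming, as in the previous model.

from itertools import product
import numpy as np

def gcm_score(emit, trans, phi_t, w_t):
    # Decision function f_w(t, p) = max_h [ w . phi(p, r, h, t) ] for one thread.
    #   emit  : (n_posts, H) array of per-post, per-category feature scores
    #   trans : (H, H) array of transition feature scores between adjacent posts
    #   phi_t : non-structural thread features, with corresponding weights w_t
    n, H = emit.shape
    best = -np.inf
    for h in product(range(H), repeat=n):        # enumerate all latent assignments
        s = sum(emit[i, h[i]] for i in range(n))
        s += sum(trans[h[i - 1], h[i]] for i in range(1, n))
        best = max(best, s)
    return best + float(np.dot(w_t, phi_t))

# A thread is predicted as 'instructor intervenes' when gcm_score(...) >= 0,
# following the classification rule given in Section 3.4.1 below.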
3.4.1 Learning and Inference

Similar to the traditional maximum-margin Support Vector Machine (SVM) formulation, our model's objective function is defined as:

min_w  λ/2 ||w||² + Σ_{j=1}^{T} l( −r_j f_w(t_j, p_j) )    (2)

where λ is the regularization coefficient, t_j is the j-th thread with intervention decision r_j, and p_j are the posts of this thread. w is the weight vector, l(·) is the squared hinge loss function, and f_w(t_j, p_j) is defined in Equation 1. Replacing the term f_w(t_j, p_j) with the contents of Equation 1 in the minimization objective above reveals the key difference from the traditional SVM formulation: the objective function has a maximum term inside the global minimization problem, making it non-convex. We therefore employ the optimization algorithm presented in (Chang et al., 2010) to solve this problem. Exploiting the semi-convexity property (Felzenszwalb et al., 2010), the algorithm works in two steps, each executed iteratively. In the first step, it determines the latent variable assignments for the positive examples. It then iterates over two sub-steps: it first determines the structural assignments for the negative examples, and then optimizes the fixed objective function using a cutting plane algorithm. Once this process converges for the negative examples, the algorithm reassigns values to the latent variables of the positive examples and proceeds to the second step. The algorithm stops once a local minimum is reached. A somewhat similar approach, which uses the Convex-Concave Procedure (CCCP), is presented by Yu and Joachims (2009).

At test time, given a thread, t, and its posts, p, we use the learned weights to compute f_w(t, p) and classify the thread as belonging to the positive class (instructor intervenes) if f_w(t, p) ≥ 0.

3.4.2 Feature Engineering

The feature set used by this model is very similar to the features used by the previous model. In addition to the non-structural features used by the logistic regression model (Sec. 3.2.1), it uses all the Post Emission features and the three transition features represented by φ(hi−1, hi) and φ(hi−1, hi, pi, pi−1), as described in Sec. 3.3.2.

4 Empirical Evaluation

This section describes our experiments.

4.1 Datasets and Evaluation Measure

For our experiments, we have used the forum content of two MOOCs from different domains (science and humanities), offered by Coursera (https://www.coursera.org/), a leading education technology company. Both courses were taught by professors from the University of Maryland, College Park.

Genes and the Human Condition (From Behavior to Biotechnology) (GHC) dataset (https://www.coursera.org/course/genes). This course was attended by 30,000 students, and the instructional staff comprised 2 instructors, 3 Teaching Assistants and 56 technical support staff. The discussion forum of this course consisted of 980 threads composed of about 3,800 posts.

Women and the Civil Rights Movement (WCR) dataset (https://www.coursera.org/course/womencivilrights). The course consisted of a classroom of about 14,600 students, 1 instructor, 6 Teaching Assistants and 49 support staff. Its discussion forum consisted of 800 threads and 3,900 posts.

We evaluate our models on held-out test sets. For the GHC dataset, the test set consisted of 186 threads, out of which the instructor intervened on 24, while for the WCR dataset the instructor intervened on 21 out of 155 threads. It was also commonly observed that after an instructor intervenes on a thread, its posting and/or viewing behavior increases. We therefore only consider the student posts up to the instructor's first intervention.
Care was also taken not to use features that increased or decreased disproportionately because of the instructor's intervention, such as the number of views or votes of a thread.
In our evaluation we approximate the instructor's 'should reply' instances with those where the instructor indeed replied. Unlike for general forum users, we believe that the correlation between the two scenarios is quite high for instructors. It is their responsibility to reply, and by choosing to teach a MOOC, they have 'bought in' to the idea of forum participation. The relatively smaller class sizes of these two MOOCs also ensured that most threads were manually reviewed, thus reducing instances of 'missed' threads while retaining the posting behavior and content of a typical MOOC.
4.2 Experimental Results
Since the purpose of solving this problem is to identify the threads which should be brought to the notice of the instructors, we measure the performance of our models using the F-measure of the positive class. The values of various parameters were selected using 10-fold Cross Validation on the training set.

Table 1: Held-out test set performances of the chain models, LCMM and GCM, are better than those of the unstructured models, LR and J48.
Model   GHC (P / R / F)          WCR (P / R / F)
LR      44.44 / 16.67 / 24.24    66.67 / 15.38 / 25.00
J48     45.50 / 20.80 / 28.55    25.00 / 23.10 / 24.01
LCMM    33.33 / 29.17 / 31.11    42.86 / 23.08 / 30.00
GCM     60.00 / 25.00 / 35.29    50.00 / 18.52 / 27.03

Figure 4: Visualization of lexical contents of the categories learnt by our model from the GHC dataset. Each row is a category and each column represents a feature vector. Bright cream color represents high values while lower values are represented by darker shades. Dark beige columns are used to better separate the five feature clusters, F1-F5, which represent words that are common in thanking, logistics-related, introductory, syllabus-related and miscellaneous posts respectively. Categories 1, 2, 3 and 4 are dominated by F2, F4, F1 and F3 respectively, indicating a semantic segregation of posts by our model's categories.

Table 1 presents the performances of the proposed models on the held-out test sets. We also report the performance of a decision tree (J48) on the test sets for the sake of comparison. We can see that the chain based models, Linear Chain Markov Model (LCMM) and Global Chain Model (GCM), outperform the unstructured models, namely Logistic Regression (LR) and Decision Trees (J48). This validates our hypothesis that using the post structure results in better modeling of the instructor's intervention. The table also reveals that GCM yields high precision and low recall values, which is possibly due to the model being more conservative owing to information from all posts of the thread.
4.3 Visual Exploration of Categories
Our chain based models assume that posts belong to different (latent) categories and use these categories to make intervention predictions. Since this process of discovering categories is data driven, it would be interesting to examine the contents of these categories. Fig. 4 presents a heat map of the lexical content of the categories identified by LCMM from the GHC dataset. The value of H (the number of categories) was set to 4 and was predetermined during the model selection procedure.
Each row of the heat map represents a category and the columns represent values of individual features, f(w, c), defined as:

f(w, c) = C(w, c) / ⟨C(w, c)⟩

where C(w, c) is the total count of occurrences of a word w in all posts assigned to category c, and ⟨C(w, c)⟩ represents its expected count based on its frequency in the dataset. While the actual size of the vocabulary is huge, we use only a small subset of words in our feature vector for this visualization. These feature values, after normalization, are represented in the heat map using colors ranging from bright cream (high value) to dark black (low value). The darker the shade of a cell, the lower the value it represents. For visual convenience, the features are manually clustered into five groups (F1 to F5), each separated by a dark beige colored column in the heat map.
The first column of the heat map represents the F1 group, which consists of words like thank you, thanks, etc. These words are characteristic of posts that either mark the conclusion of a resolved thread or are posted towards the end of the course. The rows corresponding to category 3 in Table 2 show two examples of such posts. Similarly, F2 represents features related to the logistics of the course and F3 captures introductory posts by new students. Finally, F4 contains words that are closely related to the subject of genes and the human condition and would appear in posts that discuss specific aspects or chapters of the course contents, while F5 contains general buzz words that would appear frequently in any biology course.
Analyzing individual rows of the heat map, we can see that, out of F1 to F4, Categories 1, 2, 3 and 4 are dominated by logistics (F2), course content related (F4), thank you (F1) and introductory posts (F3) respectively, represented by bright colors in their respective rows. We also observe similar correlations while examining the columns of the heat map. Also, F5, which contains words common to the gene and human health domain, is scattered across multiple categories. For example, dna/rna and breeding are sufficiently frequent in category 1 as well as 2.
Table 2 gives examples of representative posts from the four clusters. Due to space constraints, we show only part of each post. We can see that these examples agree with our observations from the heat map. Furthermore, as noted in Sec. 2, we compare the semantics of the clusters learnt by our models with those proposed by Stump et al. (2013), even though the two categorizations are not directly comparable. Nevertheless, generally speaking, our category 1 corresponds to Stump et al. (2013)'s Course structure/policies and category 2 corresponds to Content. Interestingly, categories 3 and 4, which represent valedictory and introductory posts, correspond to a single category, Social/affective, from the previous work. We can, therefore, conclude that the model indeed splits the posts into categories that look semantically coherent to the human eye.
4.4 Choice of Number of Categories
Our chain based models, assigning forum posts to latent categories, are parameterized with H, the number of categories. We, therefore, study the sensitivity of our models to this parameter. Fig. 5 plots the 10-fold cross validation performance of the models with increasing values of H for the two datasets. Interestingly, the sensitivity of the two models to the value of H is very different. The LCMM model's performance fluctuates as the value of H increases.
The initial performance improvement might be due to an increase in the expressive power of the model. Performance peaks at H = 4 and then decreases, perhaps owing to over-fitting of the data. In contrast, GCM performance remains steady for various values of H which might be attributed (a) Genes and the Human Condition dataset (b) Women and the Civil Rights Movement dataset Figure 5: Cross validation performances of the two models with increasing number of categories. to the explicit regularization coefficient which helps combat over-fitting, by encouraging zero weights for unnecessary categories. 4.5 How important are linguistic features? We now focus on the structure independent features and experiment with their predictive value, according to types. We divide the features used by the LR into the following categories:4 • Full: set of all features (feature no. 1 to 15) • lexical: based on content of thread titles and posts (feature no. 7 to 8 and 12 to 13) • landmark: based on course landmarks (e.g, exams, quizzes) information (feature no. 11) • MOOCs-specific: features specific to the MOOCs domain (lexical + landmark features) • post: based only on aggregated posts information (feature no. 9 to 15) • temporal: based on posting time patterns (feature no. 4, 5 and 10) Fig. 6 shows 10-fold cross validation F-measure of the positive class for LR when different types of features are excluded from the full set. The figure reveals that the MOOCs-specific features (purple bar) are important for both the datasets indicating a need for designing specialized models for forums analysis in this domain. 4Please refer to Sec 3.2.1 for description of the feature id. 1508 Category Example posts 1 ‘I’m having some issues with video playback. I have downloaded the videos to my laptop...’ 1 ‘There was no mention of the nuclear envelope in the Week One lecture, yet it was in the quiz. Is this a mistake?’ 2 ‘DNA methylation is a crucial part of normal development of organisms and cell differentiation in higher organisms...’ 2 ‘In the lecture, she said there are...I don’t see how tumor-suppressor genes are a cancer group mutation.’ 3 ‘Thank you very much for a most enjoyable and informative course.’ 3 ‘Great glossary! Thank you!’ 4 ‘Hello everyone, I’m ... from the Netherlands. I’m a life science student. 4 ‘Hi, my name is ... this is my third class with coursera’ Table 2: Representative posts from the four categories learnt by our model. Due to space and privacy concerns we omit some parts of the text, indicated by “. . . ”. (a) Genes and the Human Condition dataset (b) Women and the Civil Rights Movement dataset Figure 6: Cross validation performances of the various feature types for the two datasets. Also, lexical features (red bar) and post features (blue bar) have pretty dramatic effects in GHC and WCR data respectively. Interestingly, removing the landmark feature set (green bar) causes a considerable drop in predictive performance, even though it consists of only one feature. Other temporal features (orange bar) also turn out to be important for the prediction. From a separate instructor activity vs time graph (not shown due to space constraints), we observed that instructors tend to get more active as the course progresses and their activity level also increases around quizzes/exams deadlines. We can, therefore, conclude that all feature types are important and that lexical as well as MOOC specific analysis is necessary for modeling instructor’s intervention. 
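The ablation described in this section can be reproduced schematically as follows; the sketch uses scikit-learn's logistic regression with 10-fold cross-validation, and the feature-group definitions passed in are placeholders standing in for the paper's feature ids 1 to 15:

```python
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score

def ablation_study(X, y, feature_groups, feature_names):
    """10-fold CV F1 of the positive class for the full feature set and for
    the full set with each group removed.

    X: NumPy feature matrix, y: binary intervention labels.
    feature_groups: dict mapping a group name (e.g. 'lexical', 'landmark')
    to the feature names it contains; the groupings are illustrative.
    """
    name_to_col = {n: i for i, n in enumerate(feature_names)}
    clf = lambda: LogisticRegression(max_iter=1000)
    scores = {"Full": cross_val_score(clf(), X, y, cv=10, scoring="f1").mean()}
    for group, names in feature_groups.items():
        drop = {name_to_col[n] for n in names}
        keep = [i for i in range(X.shape[1]) if i not in drop]
        scores["w/o " + group] = cross_val_score(
            clf(), X[:, keep], y, cv=10, scoring="f1").mean()
    return scores
```

Comparing each "w/o" score against "Full" gives the per-group importance plotted in Fig. 6.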
5 Conclusion One of the main challenges in MOOCs is managing student-instructor interaction. The massive scale of these courses rules out any form of personalized interaction, leaving instructors with the need to go over the forum discussions, gauge student reactions and selectively respond when appropriate. This time consuming and error prone task stresses the need for methods and tools supplying this actionable information automatically. This paper takes a first step in that direction, and formulates the novel problem of predicting instructor intervention in MOOC discussion forums. Our main technical contribution is to construct predictive models combining information about forum post content and posting behavior with information about the course and its landmarks. We propose three models for addressing the task. The first, a logistic regression model is trained on thread level and aggregated post features. The other two models take thread structure into account when making the prediction. These models assume that posts can be represented by categories which characterize post content at an abstract level, and treat category assignments as latent variables organized according to, and influenced by, the forum thread structure. Our experiments on forum data from two different Coursera MOOCs show that utilizing thread structure is important for predicting instructor’s behavior. Furthermore, our qualitative analysis shows that our latent categories are semantically coherent to human eye. 1509 References Ashton Anderson, Daniel P. Huttenlocher, Jon M. Kleinberg, and Jure Leskovec. 2014. Engaging with massive online courses. In WWW, pages 687–698. Yoav Artzi, Patrick Pantel, and Michael Gamon. 2012. Predicting responses to microblog posts. In Proceedings of the 2012 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, NAACL HLT ’12, pages 602–606, Stroudsburg, PA, USA. Association for Computational Linguistics. Erik Aumayr, Jeffrey Chan, and Conor Hayes. 2011. Reconstruction of threaded conversations in online discussion forums. In Lada A. Adamic, Ricardo A. Baeza-Yates, and Scott Counts, editors, ICWSM. The AAAI Press. Lars Backstrom, Jon Kleinberg, Lillian Lee, and Cristian Danescu-Niculescu-Mizil. 2013. Characterizing and curating conversation threads: Expansion, focus, volume, re-entry. In Proceedings of the Sixth ACM International Conference on Web Search and Data Mining, WSDM ’13, pages 13–22, New York, NY, USA. ACM. Eytan Bakshy, Jake M. Hofman, Winter A. Mason, and Duncan J. Watts. 2011. Everyone’s an influencer: Quantifying influence on twitter. In Proceedings of the Fourth ACM International Conference on Web Search and Data Mining, WSDM ’11, pages 65–74, New York, NY, USA. ACM. Prakhar Biyani, Cornelia Caragea, and Prasenjit Mitra. 2013. Predicting subjectivity orientation of online forum threads. In CICLing (2), pages 109–120. Rose Catherine, Rashmi Gangadharaiah, Karthik Visweswariah, and Dinesh Raghu. 2013. Semisupervised answer extraction from discussion forums. In Proceedings of the Sixth International Joint Conference on Natural Language Processing, pages 1–9, Nagoya, Japan, October. Asian Federation of Natural Language Processing. Ming-Wei Chang, Dan Goldwasser, Dan Roth, and Vivek Srikumar. 2010. Discriminative learning over constrained latent representations. 
In Human Language Technologies: The 2010 Annual Conference of the North American Chapter of the Association for Computational Linguistics, HLT ’10, pages 429– 437, Stroudsburg, PA, USA. Association for Computational Linguistics. Snigdha Chaturvedi, Vittorio Castelli, Radu Florian, Ramesh M. Nallapati, and Hema Raghavan. 2014. Joint question clustering and relevance prediction for open domain non-factoid question answering. In Proceedings of the 23rd International Conference on World Wide Web, WWW ’14, pages 503–514, Republic and Canton of Geneva, Switzerland. International World Wide Web Conferences Steering Committee. Michael Collins. 2002. Discriminative training methods for hidden markov models: Theory and experiments with perceptron algorithms. In Proceedings of the ACL-02 Conference on Empirical Methods in Natural Language Processing - Volume 10, EMNLP ’02, pages 1–8, Stroudsburg, PA, USA. Association for Computational Linguistics. Munmun De Choudhury, Hari Sundaram, Ajita John, and Dor´ee Duncan Seligmann. 2009. What makes conversations interesting? themes, participants and consequences of conversations in online social media. In 18th International World Wide Web Conference (WWW), pages 331–331, April. Mark Edmundson. 2012. The trouble with online education, July 19. http://www.nytimes.com/ 2012/07/20/opinion/the-troublewith-online-education.html. Pedro F. Felzenszwalb, Ross B. Girshick, David McAllester, and Deva Ramanan. 2010. Object detection with discriminatively trained part-based models. IEEE Transactions on Pattern Analysis and Machine Intelligence, 32(9):1627–1645. Blaz Fortuna, Eduarda Mendes Rodrigues, and Natasa Milic-Frayling. 2007. Improving the classification of newsgroup messages through social network analysis. In Proceedings of the Sixteenth ACM Conference on Conference on Information and Knowledge Management, CIKM ’07, pages 877–880, New York, NY, USA. ACM. Dan Goldwasser and Hal Daum´e III. 2014. “I object!” modeling latent pragmatic effects in courtroom dialogues. European Chapter of the Association for Computational Linguistics (EACL), April. To appear. Benjamin Golub and Matthew O. Jackson. 2010. Seeing only the successes: The power of selection bias in explaining the structure of observed internet diffusions. Vicenc¸ G´omez, Andreas Kaltenbrunner, and Vicente L´opez. 2008. Statistical analysis of the social network and discussion threads in slashdot. In Proceedings of the 17th International Conference on World Wide Web, WWW ’08, pages 645–654, New York, NY, USA. ACM. Jizhou Huang, Ming Zhou, and Dan Yang. 2007. Extracting chatbox knowledge from online discussion forums. In Proceedings of the 20th International Joint Conference on Artifical Intelligence, IJCAI’07, pages 423–428, San Francisco, CA, USA. Morgan Kaufmann Publishers Inc. Jonathan Huang, Chris Piech, Andy Nguyen, and Leonidas J. Guibas. 2013. Syntactic and functional variability of a million code submissions in a machine learning mooc. In AIED Workshops. 1510 Minwoo Jeong, Chin-Yew Lin, and Gary Geunbae Lee. 2009. Semi-supervised speech act recognition in emails and forums. In Proceedings of the 2009 Conference on Empirical Methods in Natural Language Processing: Volume 3 - Volume 3, EMNLP ’09, pages 1250–1259, Stroudsburg, PA, USA. Association for Computational Linguistics. Ren´e F. Kizilcec, Chris Piech, and Emily Schneider. 2013. Deconstructing disengagement: analyzing learner subpopulations in massive open online courses. In LAK, pages 170–179. Jon M. Kleinberg. 2013. 
Computational perspectives on social phenomena at global scales. In Francesca Rossi, editor, IJCAI. IJCAI/AAAI. Ravi Kumar, Mohammad Mahdian, and Mary McGlohon. 2010. Dynamics of conversations. In Proceedings of the 16th ACM SIGKDD International Conference on Knowledge Discovery and Data Mining, KDD ’10, pages 553–562, New York, NY, USA. ACM. Haewoon Kwak, Changhyun Lee, Hosung Park, and Sue Moon. 2010. What is twitter, a social network or a news media? In Proceedings of the 19th International Conference on World Wide Web, WWW ’10, pages 591–600, New York, NY, USA. ACM. K. Lerman and R. Ghosh. 2010. Information contagion: An empirical study of the spread of news on digg and twitter social networks. In Proceedings of 4th International Conference on Weblogs and Social Media (ICWSM). David Liben-Nowell and Jon Kleinberg. 2008. Tracing the flow of information on a global scale using Internet chain-letter data. Proceedings of the National Academy of Sciences, 105(12):4633–4638, 25 March. Andy Nguyen, Christopher Piech, Jonathan Huang, and Leonidas J. Guibas. 2014. Codewebs: scalable homework search for massive open online programming courses. In WWW, pages 491–502. Chris Piech, Mehran Sahami, Daphne Koller, Steve Cooper, and Paulo Blikstein. 2012. Modeling how students learn to program. In SIGCSE, pages 153– 160. Chris Piech, Jonathan Huang, Zhenghao Chen, Chuong Do, Andrew Ng, and Daphne Koller. 2013. Tuned models of peer assessment in MOOCs. In Proceedings of The 6th International Conference on Educational Data Mining (EDM 2013). Arti Ramesh, Dan Goldwasser, Bert Huang, Hal Daum´e III, and Lise Getoor. 2013. Modeling learner engagement in moocs using probabilistic soft logic. In NIPS Workshop on Data Driven Education. Daniel M. Romero, Brendan Meeder, and Jon Kleinberg. 2011. Differences in the mechanics of information diffusion across topics: Idioms, political hashtags, and complex contagion on twitter. In Proceedings of the 20th International Conference on World Wide Web, WWW ’11, pages 695–704, New York, NY, USA. ACM. Glenda S. Stump, Jennifer DeBoer, Jonathan Whittinghill, and Lori Breslow. 2013. Development of a framework to classify mooc discussion forum posts: Methodology and challenges. Manos Tsagkias, Wouter Weerkamp, and Maarten de Rijke. 2009. Predicting the volume of comments on online news stories. In Proceedings of the 18th ACM Conference on Information and Knowledge Management, CIKM ’09, pages 1765–1768, New York, NY, USA. ACM. Yi-Chia Wang, Mahesh Joshi, and Carolyn Penstein Ros. 2007. A feature based approach to leveraging context for classifying newsgroup style discussion segments. In John A. Carroll, Antal van den Bosch, and Annie Zaenen, editors, ACL. The Association for Computational Linguistics. Hongning Wang, Chi Wang, ChengXiang Zhai, and Jiawei Han. 2011. Learning online discussion structures by conditional random fields. In Proceedings of the 34th International ACM SIGIR Conference on Research and Development in Information Retrieval, SIGIR ’11, pages 435–444, New York, NY, USA. ACM. Chunyan Wang, Mao Ye, and Bernardo A. Huberman. 2012. From user comments to on-line conversations. In Proceedings of the 18th ACM SIGKDD International Conference on Knowledge Discovery and Data Mining, KDD ’12, pages 244–252, New York, NY, USA. ACM. Li Wang, Su Nam Kim, and Timothy Baldwin. 2013. The utility of discourse structure in forum thread retrieval. In AIRS, pages 284–295. Tae Yano and Noah A. Smith. 2010. What’s worthy of comment? content and comment volume in political blogs. 
In William W. Cohen and Samuel Gosling, editors, ICWSM. The AAAI Press. Chun-Nam John Yu and Thorsten Joachims. 2009. Learning structural svms with latent variables. In Proceedings of the 26th Annual International Conference on Machine Learning, ICML ’09, pages 1169–1176, New York, NY, USA. ACM. 1511
Proceedings of the 52nd Annual Meeting of the Association for Computational Linguistics, pages 1512–1523, Baltimore, Maryland, USA, June 23-25 2014. ©2014 Association for Computational Linguistics
A Joint Graph Model for Pinyin-to-Chinese Conversion with Typo Correction∗
Zhongye Jia and Hai Zhao†
MOE-Microsoft Key Laboratory for Intelligent Computing and Intelligent Systems, Center for Brain-Like Computing and Machine Intelligence
Department of Computer Science and Engineering, Shanghai Jiao Tong University
800 Dongchuan Road, Shanghai 200240, China
[email protected], [email protected]
∗This work was partially supported by the National Natural Science Foundation of China (Grant No.60903119, Grant No.61170114, and Grant No.61272248), the National Basic Research Program of China (Grant No.2013CB329401), the Science and Technology Commission of Shanghai Municipality (Grant No.13511500200), and the European Union Seventh Framework Program (Grant No.247619).
†Corresponding author
Abstract
An efficient input method engine (IME), of which pinyin-to-Chinese (PTC) conversion is the core part, is very important for Chinese language processing. Meanwhile, though typos are inevitable during user pinyin inputting, existing IMEs have paid little attention to this big inconvenience. In this paper, motivated by a key equivalence of two decoding algorithms, we propose a joint graph model to globally optimize PTC conversion and typo correction for IME. The evaluation results show that the proposed method outperforms both existing academic and commercial IMEs.
1 Introduction
1.1 Chinese Input Method
The daily life of Chinese people heavily depends on a Chinese input method engine (IME), no matter whether one is composing an E-mail, writing an article, or sending a text message. However, a Chinese word cannot be typed into a computer or cellphone directly through a one-to-one key-to-letter mapping; it has to go through an IME, as there are thousands of Chinese characters to input while only 26 letter keys are available on the keyboard. An IME is an essential software interface that maps Chinese characters into English letter combinations. An efficient IME will largely improve the user experience of Chinese information processing.
Nowadays most Chinese IMEs are pinyin based. Pinyin was originally designed as the phonetic notation of a Chinese character (based on standard modern Chinese, Mandarin), using Latin letters as its syllable notation. For example, the pinyin of the Chinese character "爱" (love) is "ài". Most characters have unique pinyin representations, while a few Chinese characters may be pronounced in several different ways, so they may have multiple pinyin representations. The advantage of a pinyin IME is that it only adopts the pronunciation perspective of Chinese characters, so it is simple and easy to learn. But there are fewer than 500 pinyin syllables in standard modern Chinese, compared with over 6,000 commonly used Chinese characters, which leads to serious ambiguities in pinyin-to-character mapping. Modern pinyin IMEs mostly use a "sentence-based" decoding technique (Chen and Lee, 2000) to alleviate the ambiguities. "Sentence-based" means that the IME generates a sequence of Chinese characters upon a sequence of pinyin inputs with respect to certain statistical criteria.
1.2 Typos and Chinese Spell Checking
Since Chinese is written in characters rather than an alphabet, spell checking for Chinese is quite different from the same task for other languages.
Since Chinese characters are entered via IME, those user-made typos do not immediately lead to spelling errors. When a user types a wrong letter, IME will be very likely to fail to generate the expected Chinese character sequence. Normally, the user may immediately notice the inputting error and then make corrections, which usually means doing a bunch of extra operations like cursor 1512 movement, deletion and re-typing. Thus there are two separated sub-tasks for Chinese spell checking: 1. typo checking for user typed pinyin sequences which should be a built-in module in IME, and 2. spell checking for Chinese texts in its narrow sense, which is typically a module of word processing applications (Yang et al., 2012b). These two terms are often confused especially in IME related works such as (Chen and Lee, 2000) and (Wu et al., 2009). Pinyin typos have always been a serious problem for Chinese pinyin IMEs. The user may fail to input the completely right pinyin simply because he/she is a dialect speaker and does not know the exact pronunciation for the expected character. This may be a very common situation since there are about seven quite different dialects in Chinese, among which being spoken languages, six are far different from the standard modern Chinese, mandarin. With the boom of smart-phones, pinyin typos worsen due to the limited size of soft keyboard, and the lack of physical feedback on the touch screen. However, existing practical IMEs only provide small patches to deal with typos such as Fuzzy Pinyin (Wu and Chen, 2004) and other language specific errors (Zheng et al., 2011b). Typo checking and correction has an important impact on IME performance. When IME fails to correct a typo and generate the expected sentence, the user will have to take much extra effort to move the cursor back to the mistyped letter and correct it, which leads to very poor user experience (Jia and Zhao, 2013). 2 Related Works The very first approach for Chinese input with typo correction was made by (Chen and Lee, 2000), which was also the initial attempt of “sentence-based” IME. The idea of “statistical input method” was proposed by modeling PTC conversion as a hidden Markov model (HMM), and using Viterbi (Viterbi, 1967) algorithm to decode the sequence. They solved the typo correction problem by decomposing the conditional probability P(H|P) of Chinese character sequence H given pinyin sequence P into a language model P(wi|wi−1) and a typing model P(pi|wi). The typing model that was estimated on real user input data was for typo correction. However, real user input data can be very noisy and not very convenient to obtain. As we will propose a joint model in this paper, such an individual typing model is not necessarily built in our approach. (Zheng et al., 2011a) developed an IME system with typo correction called CHIME using noisy channel error model and language-specific features. However their model depended on a very strong assumption that input pinyin sequence should have been segmented into pinyin words by the user. This assumption does not really hold in modern “sentence-based” IMEs. We release this assumption since our model solves segmentation, typo correction and PTC conversion jointly. 
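For illustration, the scoring function behind such a sentence-based HMM decomposition can be sketched as follows; the dictionary-based probability lookups are simplifying assumptions, and smoothing and the actual Viterbi search are omitted:

```python
import math

def score_candidate(words, pinyins, lm_bigram, typing_model):
    """Log-score of a candidate word sequence under the Chen & Lee style
    decomposition P(H|P) ∝ prod_i P(w_i|w_{i-1}) * P(p_i|w_i).

    lm_bigram[(prev, w)] and typing_model[(p, w)] are assumed to be plain
    probability lookups; a real system would use smoothed models and search
    over all candidates rather than score a single sequence.
    """
    score, prev = 0.0, "<s>"
    for w, p in zip(words, pinyins):
        score += math.log(lm_bigram.get((prev, w), 1e-12))   # language model term
        score += math.log(typing_model.get((p, w), 1e-12))   # typing model term
        prev = w
    return score
```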
Besides the common HMM approach for PTC conversion, there are also various methods such as: support vector machine (Jiang et al., 2007), maximum entropy (ME) model (Wang et al., 2006), conditional random field (CRF) (Li et al., 2009) and statistical machine translation (SMT) (Yang et al., 2012a; Wang et al., 2013c; Zhang and Zhao, 2013), etc. Spell checking or typo checking was first proposed for English (Peterson, 1980). (Mays et al., 1991) addressed that spell checking should be done within a context, i.e., a sentence or a long phrase with a certain meaning, instead of only in one word. A recent spell correction work is (Li et al., 2006), where a distributional similarity was introduced for spell correction of web queries. Early attempts for Chinese spelling checking could date back to (Chang, 1994) where character tables for similar shape, pronunciation, meaning, and input-method-code characters were proposed. More recently, the 7th SIGHAN Workshop on Chinese Language Processing (Yu et al., 2013) held a shared task on Chinese spell checking. Various approaches were made for the task including language model (LM) based methods (Chen et al., 2013), ME model (Han and Chang, 2013), CRF (Wang et al., 2013d; Wang et al., 2013a), SMT (Chiu et al., 2013; Liu et al., 2013), and graph model (Jia et al., 2013), etc. 3 Pinyin Input Method Model 3.1 From English Letter to Chinese Sentence It is a rather long journey from the first English letter typed on the keyboard to finally a completed Chinese sentence generated by IME. We will first take an overview of the entire process. The average length of pinyin syllables is about 3 letters. There are about 410 pinyin syllables used in the current pinyin system. Each pinyin sylla1513 ble has a bunch of corresponding Chinese characters which share the same pronunciation represented by the syllable. The number of those homophones ranges from 1 to over 300. Chinese characters then form words. But word in Chinese is a rather vague concept. Without word delimiters, linguists have argued on what a Chinese word really is for a long time and that is why there is always a primary word segmentation treatment in most Chinese language processing tasks (Zhao et al., 2006; Huang and Zhao, 2007; Zhao and Kit, 2008; Zhao et al., 2010; Zhao and Kit, 2011; Zhao et al., 2013). A Chinese word may contain from 1 to over 10 characters due to different word segmentation conventions. Figure 1 demonstrates the relationship of pinyin and word, from pinyin letters “nihao” to the word “你好(hello)”. Typically, an IME takes the pinyin input, segments it into syllables, looks up corresponding words in a dictionary and generates a sentence with the candidate words. nihao 你好 ni hao 好 你 Pinyin syllables Chinese characters Pinyin word Chinese word n i h a o Pinyin characters Figure 1: Relationship of pinyin and words 3.2 Pinyin Segmentation and Typo Correction Non-Chinese users may feel confused or even surprised if they know that when typing pinyin through an IME, Chinese IME users will never enter delimiters such as “Space” key to segment either pinyin syllables or pinyin words, but just input the entire un-segmented pinyin sequence. For example, if one wants to input “你好世界(Hello world)”, he will just type “nihaoshijie” instead of segmented pinyin sequence “ni hao shi jie”. Nevertheless, pinyin syllable segmentation is a much easier problem compared to Chinese word segmentation. 
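To make the segmentation search space concrete, the following sketch enumerates the possible syllable segmentations of an un-segmented pinyin string over a toy syllable inventory (the real inventory has roughly 410 syllables); this is the space that the segmentation step described next operates over:

```python
def segmentations(pinyin, syllables, max_len=6):
    """Enumerate all ways to split an un-segmented pinyin string into legal
    syllables. `syllables` is a toy inventory standing in for the full pinyin
    syllable table; real IMEs score candidates rather than enumerate them."""
    if not pinyin:
        return [[]]
    results = []
    for i in range(1, min(max_len, len(pinyin)) + 1):
        head = pinyin[:i]
        if head in syllables:
            results += [[head] + rest
                        for rest in segmentations(pinyin[i:], syllables, max_len)]
    return results

# e.g. segmentations("nihao", {"ni", "hao", "ha", "ao", "i", "o"})
# yields [['ni', 'ha', 'o'], ['ni', 'hao']] -- an ambiguity that must be resolved.
```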
Since pinyin syllables have a very limited vocabulary and follow a set of regularities strictly, it is convenient to perform pinyin syllable segmentation using rules. But as the pinyin input is not segmented, it is nearly impossible to adopt previous spell checking methods for English for pinyin typo checking, although techniques for English spell checking have been well developed. Somewhat confusingly but interestingly, pinyin typo correction and segmentation come as two sides of one problem: when a pinyin sequence is mistyped, it is unlikely to be correctly segmented; when it is segmented in an awkward way, it is likely to be mistyped. Inspired by (Yang et al., 2012b) and (Jia et al., 2013), we adopt the graph model for Chinese spell checking for pinyin segmentation and typo correction, which is based on the shortest path word segmentation algorithm (Casey and Lecolinet, 1996). The model has two major steps: segmentation and correction.
3.2.1 Pinyin Segmentation
The shortest path segmentation algorithm is based on the idea that a reasonable segmentation should minimize the number of segmented units. For a pinyin sequence p1p2 . . . pL, where pi is a letter, a directed acyclic graph (DAG) GS = (V, E) is first built for the pinyin segmentation step. The vertex set V consists of the following parts:
• Virtual start vertex S0 and end vertex SE;
• Possible legal syllables fetched from the dictionary Dp according to the input pinyin sequence: {Si,j | Si,j = pi . . . pj ∈ Dp};
• The letter itself as a fallback, no matter whether it is a legal pinyin syllable or not: {Si | Si = pi}.
The vertex weights wS are all set to 0. The edges go from a syllable to all syllables next to it: E = {E(Si,j → Sj+1,k) | Si,j, Sj+1,k ∈ V}. The edge weight is the negative logarithm of the conditional probability P(Sj+1,k | Si,j) that a syllable Si,j is followed by Sj+1,k, which is given by a bigram language model of pinyin syllables:

W_E(Si,j → Sj+1,k) = −log P(Sj+1,k | Si,j)

The shortest path P∗ on the graph is the path P with the least sum of vertex and edge weights:

P∗ = arg min_P [ Σ_{v∈P} w_v + Σ_{E∈P} W_E ].

Computing the shortest path from S0 to SE on GS yields the best segmentation. This is the single source shortest path (SSSP) problem on a DAG, which has an efficient algorithm: preprocess the DAG with a topological sort, then traverse the vertices and edges in topological order. It has a time complexity of O(|V| + |E|).
For example, suppose one intends to input "你好世界(Hello world)" by typing "nihaoshijie", but mistypes it as "mihaoshijiw". The graph for this input is shown in Figure 2. The shortest path, i.e., the best segmentation, is "mi hao shi ji w". We will continue to use this example in the rest of this paper.
Figure 2: Graph model for pinyin segmentation
3.2.2 Pinyin Typo Correction
Next, in the correction step, for the segmented pinyin sequence S1, S2, . . . , SM, a graph Gc is constructed to perform typo correction. The vertex set V consists of the following parts:
• Virtual start vertex S′0 and end vertex S′E with vertex weights of 0;
• All possible syllables similar to the original syllables in GS. If adjacent syllables can be merged into a legal syllable, the merged syllable is also added into V: {S′i,j | S′i,j = S′i . . . S′j ∈ Dp, S′k ∼ Sk, i ≤ k ≤ j}, where the similarity ∼ is measured by Levenshtein distance (Levenshtein, 1966). Syllables with Levenshtein distance under a certain threshold are considered similar: L(Si, Sj) < T ⟺ Si ∼ Sj.
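As a concrete illustration of this similarity relation, here is a hedged sketch of the Levenshtein distance and of the candidate syllables it admits under a threshold T; the syllable inventory argument is a placeholder for whatever pinyin syllable dictionary the IME uses:

```python
def levenshtein(a, b):
    """Plain dynamic-programming edit distance L(a, b)."""
    prev = list(range(len(b) + 1))
    for i, ca in enumerate(a, 1):
        cur = [i]
        for j, cb in enumerate(b, 1):
            cur.append(min(prev[j] + 1,                  # deletion
                           cur[j - 1] + 1,               # insertion
                           prev[j - 1] + (ca != cb)))    # substitution
        prev = cur
    return prev[-1]

def similar_syllables(syllable, syllable_vocab, T=2):
    """Candidate syllables within Levenshtein distance T of a (possibly
    mistyped) input syllable, i.e. the alternatives considered when building
    the correction graph; T=2 mirrors the threshold used later in the paper's
    experiments."""
    return [s for s in syllable_vocab if levenshtein(syllable, s) < T]

# e.g. similar_syllables("mi", {"mi", "ni", "ma", "ti", "min"}) keeps syllables
# at most one edit away from "mi", including "ni" -- the intended syllable.
```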
The vertex weight is the Levenshtein distance multiplied by a normalization parameter:

w_{S′i,j} = β Σ_{k=i}^{j} L(S′k, Sk).

Similar to GS, the edges go from one syllable to all syllables next to it, and the edge weights are the conditional probabilities between them. Computing the shortest path from S′0 to S′E on Gc yields the best typo correction result. In addition, the result has already been segmented at this point. Considering our running example, the graph Gc is shown in Figure 3, and the typo correction result is "mi hao shi jie".
Figure 3: Graph model for pinyin typo correction
Merely using the above model, the typo correction result is not yet satisfying, no matter how much effort is spent. The major reason is that the basic semantic unit of the Chinese language is actually the word (though vaguely defined), which is usually composed of several characters. Thus the conditional probability between characters does not make much sense. In addition, a pinyin syllable usually maps to dozens or even hundreds of corresponding homophonic characters, which makes the conditional probability between syllables much noisier. However, using pinyin words instead of syllables is not a wise choice, because pinyin word segmentation is not so easy a task as syllable segmentation. To make typo correction better, we consider integrating it with PTC conversion using a joint model.
3.3 Hidden Markov Model for Pinyin-to-Chinese Conversion
PTC conversion has long been viewed as a decoding problem using an HMM. We continue to follow this formalization. The best Chinese character sequence W∗ for a given pinyin syllable sequence S is the one with the highest conditional probability P(W|S):

W∗ = arg max_W P(W|S)
   = arg max_W P(W)P(S|W) / P(S)
   = arg max_W P(W)P(S|W)
   = arg max_{w1,w2,...,wM} Π_i P(wi|wi−1) Π_i P(si|wi)
The edge set E includes: • Edges from the start vertex E(v0 →vx1) with edge weight WE(v0→vx1) = −log Sx1, where ∀x1 ∈S; • Edges to the end vertex E(vxT →vE) with vertex weights of 0; • Edges between adjacent time periods E(vxt−1 →vxt) with edge weight WE(vxt−1→vxt) = −log Txt−1,xt, where t = 2, . . . , T, and ∀xt, xt−1 ∈S. The shortest path P ∗from v0 to vE is the one with the least sum of vertex and edge weights, i.e., P ∗= arg min vxt∈V T ∑ t=1 (wvxt + WE(vxt−1→vxt)) = arg min vx1,vxt∈V {−log Sx1 −log Ex1,y1 + T ∑ t=2 (−log Ext,yt −log Txt−1,xt)} = arg max vx1,vxt∈V Sx1Ex1,y1 T ∏ t=2 Ext,ytTxt−1,xt. (2) The optimization goal of P ∗in Equation (2) is identical to that of X∗in Equation (1). 3.4 Joint Graph Model For Pinyin IME Given HMM decoding problem is identical to SSSP problem on DAG, we propose a joint graph model for PTC conversion with typo correction. The joint graph model aims to find the global optimal for both PTC conversion and typo correction on the entire input pinyin sequence. The graph G = (V, E) is constructed based on graph Gc for typo correction in Section 3.2. The vertex set V consists of the following parts: • Virtual start vertex V0 and end vertex VE with vertex weight of 0; • Adjacent pinyin syllables in Gc are merged into pinyin words. Corresponding Chinese words are fetched from a PTC dictionary Dc, which is a dictionary maps pinyin words to Chinese words, and added as vertices: {Vi,j|∀Vi,j ∈Dc[S′ i . . . S′ j], i ≤j}; The vertex weight consists of two parts: 1. the vertex weights of syllables in Gc, and 2. the emission probability: wVi,j =β j ∑ k=i L(S′ k, Sk) −γ log P(S′ i . . . S′ j|Vi,j); 1516 If the corresponding pinyin syllables in Gc have an edge between them, the vertices in G also have an edge: E = {E(Vi,j →Vj+1,k)|E(Si,j →Sj+1,k) ∈Gc}. The edge weights are the negative logarithm of the transition probabilities: WE(Vi,j→Vj+1,k) = −log P(Vj+1,k|Vi,j) Although the model is formulated on first order HMM, i.e., the LM used for transition probability is a bigram one, it is easy to extend the model to take advantage of higher order n-gram LM, by tracking longer history while traversing the graph. Computing the shortest path from V0 to VE on G yields the best pinyin-to-Chinese conversion with typo correction result. Considering our running example, the graph G is shown in Figure 4. ni’hao w hao shi ji jie mi ti ni ma hai huo pao shu sai zhi jia a e shuai ju mi’huo 你好 ... zhi'ji shi'jie 接姐节 ... 你呢呢 ... 米迷 ... 饿额 ... 啊阿 ... 家加 ... 吗嘛妈 ... 跑泡 ... 好好豪 ... 或活火 ... 还海 ... 只之 ... 提题 ... 世事 ... 束书数 ... 赛塞鳃 ... 迷惑 ... 至极 ... 帅甩摔 ... 句局 ... 及几 ... 世界 ... Figure 4: Joint graph model The joint graph is rather huge and density. According to our empirical statistics, when setting threshold T = 2, for a sentence of M characters, the joint graph will have |V| = M × 1, 000, and |E| = M × 1, 000, 000. 3.5 K-Shortest Paths To reduce the scale of graph G, we filter graph Gc by searching its K-shortest paths first to get G′ c and construct G on top of G′ c. Figure 5 shows the 3shortest paths filtered graph G′ c and Figure 6 shows the corresponding G for our running example. The scale of graph may be thus drastically reduced. hao shi ji jie mi ni huo zhi a Figure 5: K-shortest paths in typo correction An efficient heap data structure is required in K-shortest paths algorithm (Eppstein, 1998) for hao shi ji jie mi ni huo zhi a ni’hao mi’huo zhi'ji shi'jie 只之 ... 世界 ... 至极 ... 啊阿 ... 接姐节 ... 及几 ... 世事 ... 好好豪 ... 你好 ... 你呢呢 ... 米迷 ... 迷惑 ... 或活火 ... 
Figure 6: Filtered graph model backtracking the best paths to current vertex while traversing. The heap is implemented as a priority queue of size K sorted according to path length that should support efficient push and pop operations. Fibonacci heap (Fredman and Tarjan, 1987) is adopted for the heap implementation since it has a push complexity of O(1) which is better than the O(K) for other heap structures. Another benefit provided by K-shortest paths is that it can be used for generating N-best candidates of PTC conversion, which may be helpful for further performance improvement. 4 Experiments 4.1 Corpora, Tools and Experiment Settings The corpus for evaluation is the one provided in (Yang et al., 2012a), which is originally extracted from the People’s Daily corpus and labeled with pinyin. The corpus has already been split into training T, development Dand test T sets as shown in Table 1. T D T #Sentence 1M 2K 100K #character 43,679,593 83,765 4,123,184 Table 1: Data set size SRILM (Stolcke, 2002) is adopted for language model training and KenLM (Heafield, 2011; Heafield et al., 2013) for language model query. The Chinese part of the corpus is segmented into words before LM training. Maximum matching word segmentation is used with a large word vocabulary V extracted from web data provided by (Wang et al., 2013b). The pinyin part is segmented according to the Chinese part. This vocabulary V also serves as the PTC dictionary. The original vocabulary is not labeled with pinyin, thus we use the PTC dictionary of sunpinyin1 which is an open source Chinese pinyin IME, to label the 1http://code.google.com/p/sunpinyin/ 1517 vocabulary V with pinyin. The emission probabilities are estimated using the lexical translation module of MOSES (Koehn et al., 2007) as “translation probability” from pinyin to Chinese. 4.2 Evaluation Metrics We will use conventional sequence labeling evaluation metrics such as sequence accuracy and character accuracy2. Chinese characters in a sentence may be separated by digits, punctuation and alphabets which are directly inputted without the IME. We follow the so-called term Max Input Unit (MIU), the longest consecutive Chinese character sequence proposed by (Jia and Zhao, 2013). We will mainly consider MIU accuracy (MIU-Acc) which is the ratio of the number of completely corrected generated MIUs over the number of all MIUs, and character accuracy (Ch-Acc), but the sentence accuracy (S-Acc) will also be reported in evaluation results. We will also report the conversion error rate (ConvER) proposed by (Zheng et al., 2011a), which is the ratio of the number of mistyped pinyin word that is not converted to the right Chinese word over the total number of mistyped pinyin words3. 4.3 Baseline System without Typo Correction Firstly we build a baseline system without typo correction which is a pipeline of pinyin syllable segmentation and PTC conversion. The baseline system takes a pinyin input sequence, segments it into syllables, and then converts it to Chinese character sequence. The pinyin syllable segmentation already has very high (over 98%) accuracy with a trigram LM using improved Kneser-Ney smoothing. According to our empirical observation, emission probabilities are mostly 1 since most Chinese words have unique pronunciation. So in this step we set γ = 0. We consider different LM smoothing methods including Kneser-Ney (KN), improved Kneser-Ney (IKN), and Witten-Bell (WB). 
All of the three smoothing methods for bigram and trigram LMs are examined both using back-off mod2We only work on the PTC conversion part of IME, thus we are unable to use existing evaluation systems (Jia and Zhao, 2013) for full Chinese IME functions. 3Other evaluation metrics are also proposed by (Zheng et al., 2011a) which is only suitable for their system since our system uses a joint model els and interpolated models. The number of Nbest candidates for PTC conversion is set to 10. The results on Dare shown in Figure 7 in which the “-i” suffix indicates using interpolated model. According to the results, we then choose the trigram LM using Kneser-Ney smoothing with interpolation. 0.62 0.64 0.66 0.68 0.7 0.72 0.74 KN KN-i IKN IKN-i WB WB-i 0.944 0.946 0.948 0.95 0.952 0.954 0.956 0.958 0.96 0.962 0.964 MIU-Acc Ch-Acc MIU-Acc-bigram Ch-Acc-bigram MIU-Acc-trigram Ch-Acc-trigram Figure 7: MIU-Acc and Ch-Acc with different LM smoothing The choice of the number of N-best candidates for PTC conversion also has a strong impact on the results. Figure 8 shows the results on Dwith different Ns, of which the N axis is drawn in logarithmic scale. We can observe that MIU-Acc slightly decreases while N goes up, but Ch-Acc largely increases. We therefore choose N = 10 as trade-off. 0.7265 0.727 0.7275 0.728 0.7285 0.729 0.7295 0.73 0.7305 0.731 0.7315 0.732 1 10 100 1000 0.935 0.94 0.945 0.95 0.955 0.96 0.965 0.97 0.975 0.98 0.985 MIU-Acc Ch-Acc MIU-Acc Ch-Acc Figure 8: MIU-Acc and Ch-Acc with different Ns The parameter γ determines emission probability. Results with different γ on Dis shown in Figure 9, of which the γ axis is drawn in logarithmic scale. γ = 0.03 is chosen at last. We compare our baseline system with several practical pinyin IMEs including sunpinyin and Google Input Tools (Online version)4. The results on Dare shown in Table 2. 4http://www.google.com/inputtools/try/ 1518 0.45 0.5 0.55 0.6 0.65 0.7 0.75 0.001 0.01 0.1 1 0.82 0.84 0.86 0.88 0.9 0.92 0.94 0.96 0.98 MIU-Acc Ch-Acc MIU-Acc Ch-Acc Figure 9: MIU-Acc and Ch-Acc with different γ MIU-Acc Ch-Acc S-Acc Baseline 73.39 96.24 38.00 sunpinyin 52.37 87.51 13.95 Google 74.74 94.81 40.2 Yang-ME 93.3 30.2 Yang-MT 95.5 45.4 Table 2: Baseline system compared to other IMEs (%) 4.4 PTC Conversion with Typo Correction Based upon the baseline system, we build the joint system of PTC conversion with typo correction. We simulate user typos by randomly generating errors automatically on the corpus. The typo rate is set according to previous Human-Computer Interaction (HCI) studies. Due to few works have been done on modeling Chinese text entry, we have to refer to those corresponding results on English (Wobbrock and Myers, 2006; MacKenzie and Soukoreff, 2002; Clarkson et al., 2005), which show that the average typo rate is about 2%. (Zheng et al., 2011a) performed an experiment that 2,000 sentences of 11,968 Chinese words were entered by 5 native speakers. The collected data consists of 775 mistyped pinyin words caused by one edit operation, and 85 caused by two edit operations. As we observe on Tthat the average pinyin word length is 5.24, then typo rate in the experiment of (Zheng et al., 2011a) can be roughly estimated as: 775 + 85 × 2 11968 × 5.24 = 1.51%, which is similar to the conclusion on English. Thus we generate corpora from Dwith typo rate of 0% (0-P), 2% (2-P), and 5% (5-P) to evaluate the system. According to (Zheng et al., 2011a) most mistyped pinyin words are caused by one edit operation. 
Since pinyin syllable is much shorter than pinyin word, this ratio can be higher for pinyin syllables. From our statistics on T, with 2% randomly generated typos, Pr(L(S′, S) < 2) = 99.86%. Thus we set the threshold T for L to 2. We first set K-shortest paths filter to K = 10 and tune β. Results with different β are shown in Figure 10. With β = 3.5, we select K. Re 0.7 0.705 0.71 0.715 0.72 0.725 0.73 0.735 0.74 0.745 2 2.5 3 3.5 4 4.5 5 0.954 0.956 0.958 0.96 0.962 0.964 0.966 0.968 0.97 MIU-Acc Ch-Acc MIU-Acc Ch-Acc (a) 0-P 0.665 0.67 0.675 0.68 0.685 0.69 0.695 2 2.5 3 3.5 4 4.5 5 0.932 0.934 0.936 0.938 0.94 0.942 0.944 0.946 MIU-Acc Ch-Acc MIU-Acc Ch-Acc (b) 2-P 0.585 0.59 0.595 0.6 0.605 0.61 0.615 2 2.5 3 3.5 4 4.5 5 0.872 0.874 0.876 0.878 0.88 0.882 0.884 0.886 0.888 MIU-Acc Ch-Acc MIU-Acc Ch-Acc (c) 5-P Figure 10: MIU-Acc and Ch-Acc with different β sults with different K are shown in Figure 11. We choose K = 20 since there is no significant improvement when K > 20. The selection of K also directly guarantees the running time of the joint model. With K = 20, on a normal PC with Intel Pentium Dual-Core E6700 CPU, the PTC conversion rate is over 2000 characters-per-minute (cpm), which is much faster than the normal typing rate of 200 cpm. With all parameters optimized, results on T 1519 0.705 0.71 0.715 0.72 0.725 0.73 0.735 0.74 0.745 0 10 20 30 40 50 60 70 80 90 100 0.954 0.956 0.958 0.96 0.962 0.964 0.966 0.968 MIU-Acc Ch-Acc MIU-Acc Ch-Acc (a) 0-P 0.655 0.66 0.665 0.67 0.675 0.68 0.685 0.69 0.695 0.7 0 10 20 30 40 50 60 70 80 90 100 0.92 0.925 0.93 0.935 0.94 0.945 MIU-Acc Ch-Acc MIU-Acc Ch-Acc (b) 2-P 0.55 0.56 0.57 0.58 0.59 0.6 0.61 0 10 20 30 40 50 60 70 80 90 100 0.85 0.855 0.86 0.865 0.87 0.875 0.88 0.885 0.89 MIU-Acc Ch-Acc MIU-Acc Ch-Acc (c) 5-P Figure 11: MIU-Acc and Ch-Acc with different K using the proposed joint model are shown in Table 3 and Table 4. Our results are compared to the baseline system without typo correction and Google Input Tool. Since sunpinyin does not have typo correction module and performs much poorer than our baseline system, we do not include it in the comparison. Though no direct proofs can be found to indicate if Google Input Tool has an independent typo correction component, its outputs show that such a component is unlikely available. Since Google Input Tool has to be accessed through a web interface and the network connection cannot be guaranteed. we only take a subset of 10K sentences of Tto perform the experiments, and the results are shown in Table 3. The scores reported in (Zheng et al., 2011a) are not listed in Table 4 since the data set is different. They reported a ConvER of 53.56%, which is given here for reference. Additionally, to further inspect the robustness of our model, performance with typo rate ranges from 0% to 5% is shown in Figure 12. Although the performance decreases while typo rate goes up, it is still quite satisfying around typo rate of 2% which is assumed to be the real world situation. 
MIU-Acc Ch-Acc S-Acc ConvER Baseline 0-P 79.90 97.47 48.87 Baseline 2-P 50.47 90.53 11.12 99.95 Baseline 5-P 30.26 82.83 3.32 99.99 Google 0-P 79.08 95.26 46.83 Google 2-P 49.47 61.50 11.08 91.70 Google 5-P 29.18 36.20 3.29 94.64 Joint 0-P 79.90 97.52 49.27 Joint 2-P 75.55 95.40 40.69 18.45 Joint 5-P 67.76 90.17 27.86 24.68 Table 3: Test results on 10K sentences from T (%) MIU-Acc Ch-Acc S-Acc ConvER Baseline 0-P 74.46 96.42 40.50 Baseline 2-P 47.25 89.50 9.62 99.95 Baseline 5-P 28.28 81.74 2.63 99.98 Joint 2-P 74.22 96.39 40.34 Joint 2-P 69.91 94.14 33.11 21.35 Joint 5-P 62.14 88.49 22.62 27.79 Table 4: Test results on T(%) 0 0.2 0.4 0.6 0.8 1 1.2 0 1 2 3 4 5 0 0.2 0.4 0.6 0.8 1 1.2 MIU-Acc Ch-Acc MIU-Acc Ch-Acc Figure 12: MIU-Acc and Ch-Acc with different typo rate (%) 5 Conclusion In this paper, we have developed a joint graph model for pinyin-to-Chinese conversion with typo correction. This model finds a joint global optimal for typo correction and PTC conversion on the entire input pinyin sequence. The evaluation results show that our model outperforms both previous academic systems and existing commercial products. In addition, the joint model is efficient enough for practical use. 1520 References Richard G. Casey and Eric Lecolinet. 1996. A Survey of Methods and Strategies in Character Segmentation. Pattern Analysis and Machine Intelligence, IEEE Transactions on, 18(7):690–706. Chao-Huang Chang. 1994. A Pilot Study on Automatic Chinese Spelling Error Correction. Journal of Chinese Language and Computing, 4:143–149. Zheng Chen and Kai-Fu Lee. 2000. A New Statistical Approach To Chinese Pinyin Input. In Proceedings of the 38th Annual Meeting of the Association for Computational Linguistics, pages 241–247, Hong Kong, October. Kuan-Yu Chen, Hung-Shin Lee, Chung-Han Lee, HsinMin Wang, and Hsin-Hsi Chen. 2013. A Study of Language Modeling for Chinese Spelling Check. In Proceedings of the Seventh SIGHAN Workshop on Chinese Language Processing, pages 79–83, Nagoya, Japan, October. Asian Federation of Natural Language Processing. Hsun-wen Chiu, Jian-cheng Wu, and Jason S. Chang. 2013. Chinese Spelling Checker Based on Statistical Machine Translation. In Proceedings of the Seventh SIGHAN Workshop on Chinese Language Processing, pages 49–53, Nagoya, Japan, October. Asian Federation of Natural Language Processing. Edward Clarkson, James Clawson, Kent Lyons, and Thad Starner. 2005. An Empirical Study of Typing Rates on mini-QWERTY Keyboards. In CHI ’05 Extended Abstracts on Human Factors in Computing Systems, CHI EA ’05, pages 1288–1291, New York, NY, USA. ACM. David Eppstein. 1998. Finding the K Shortest Paths. SIAM Journal on computing, 28(2):652–673. Jr G. David Forney. 1973. The Viterbi Algorithm. Proceedings of the IEEE, 61(3):268–278. Michael L. Fredman and Robert Endre Tarjan. 1987. Fibonacci Heaps and Their Uses in Improved Network Optimization Algorithms. Journal of the ACM (JACM), 34(3):596–615, July. Dongxu Han and Baobao Chang. 2013. A Maximum Entropy Approach to Chinese Spelling Check. In Proceedings of the Seventh SIGHAN Workshop on Chinese Language Processing, pages 74–78, Nagoya, Japan, October. Asian Federation of Natural Language Processing. Kenneth Heafield, Ivan Pouzyrevsky, Jonathan H. Clark, and Philipp Koehn. 2013. Scalable Modified Kneser-Ney Language Model Estimation. In Proceedings of the 51st Annual Meeting of the Association for Computational Linguistics, pages 690–696, Sofia, Bulgaria, August. Kenneth Heafield. 2011. 
2014
142
Proceedings of the 52nd Annual Meeting of the Association for Computational Linguistics, pages 1524–1533, Baltimore, Maryland, USA, June 23-25 2014. c⃝2014 Association for Computational Linguistics Smart Selection Patrick Pantel Microsoft Research One Microsoft Way Redmond, WA 98052, USA [email protected] Michael Gamon Microsoft Research One Microsoft Way Redmond, WA 98052, USA [email protected] Ariel Fuxman Microsoft Research 1065 La Avenida St. Mountain View, CA 94043, USA [email protected] Abstract Natural touch interfaces, common now in devices such as tablets and smartphones, make it cumbersome for users to select text. There is a need for a new text selection paradigm that goes beyond the high acuity selection-by-mouse that we have relied on for decades. In this paper, we introduce such a paradigm, called Smart Selection, which aims to recover a user’s intended text selection from her touch input. We model the problem using an ensemble learning approach, which leverages multiple linguistic analysis techniques combined with information from a knowledge base and a Web graph. We collect a dataset of true intended user selections and simulated user touches via a large-scale crowdsourcing task, which we release to the academic community. We show that our model effectively addresses the smart selection task and significantly outperforms various baselines and standalone linguistic analysis techniques. 1 Introduction The process of using a pointing device to select a span of text has a long history dating back to the invention of the mouse. It serves to access functions on text spans, such as copying/pasting, looking up a word in a dictionary, searching the Web, or accessing other accelerators. As consumers move from traditional PCs to mobile devices (e.g., tablets and smartphones), touch interaction is replacing the pointing devices of yore. Although more intuitive and arguably a more natural form of interaction, touch offers much less acuity (colloquially referred to as the fat finger problem). To select multi-word spans today, mobile devices require an intricate series of gestures that results in cumbersome user experiences1. Consequently, there is an opportunity to reinvent the way users select text in such devices. Our task is, given a single user touch, to predict the span that the user likely intended to select. We call this task smart selection. We restrict our prediction task to cases where a user intends to perform research on a text span (dictionary/thesaurus lookup, translation, searching). We specifically consider operations on text spans that do not form a single unit (i.e., an entity, a concept, a topic, etc.) to be out of scope. For example, full sentences, paragraph and page fragments are out of scope. Smart selection, as far as we know, is a new research problem. Yet there are many threads of research in the NLP community that identify multiword sequences, which have coherent properties. For example, named-entity recognizers identify entities such as people/places/organizations, chunkers and parsers identify syntactic constituents such as noun phrases, key phrase detectors or term segmentors identify term boundaries. While each of these techniques retrieve meaningful linguistic units, our problem is a semantic one of recovering a user’s intent, and as such none alone solves the entire smart selection problem. In this paper, we model the problem of smart selection using an ensemble learning approach. 
We leverage various linguistic techniques, such as those discussed above, and augment them with other sources of information from a knowledge 1In order to select a multi-word span, a user would first have to touch on either word, then drag the left and right boundary handles to expand it to the adjacent words. 1524 base and a web graph. We evaluate our methods using a novel dataset constructed for our task. We construct our dataset of true userintended selections by crowdsourcing the task of a user selecting spans of text in a researching task. We obtain 13,681 data points. For each intended selection, we construct test cases for each individual sub-word, simulating the user selecting via touch. The resulting testset consists of 33,912 ⟨simulated selection, intended selection⟩pairs, which we further stratify into head, torso, and tail subsets. We release the full dataset and testset to the academic community for further research on this new NLP task. Finally, we empirically show that our ensemble model significantly improves upon various baseline systems. In summary, the major contributions of our research are: • We introduce a new natural language processing task, called smart selection, which aims to address an important problem in text selection for touch-enabled devices; • We conduct a large crowd-sourced user study to collect a dataset of intended selections and simulated user selections, which we release to the academic community; • We propose a machine-learned ensemble model for smart selection, which combines various linguistic annotation methods with information from a large knowledge base and web graph; • We empirically show that our model can effectively address the smart selection task. 2 Related Work Related work falls into three broad categories: linguistic unit detection, human computer interaction (HCI), and intent detection. 2.1 Linguistic Unit Detection Smart selection is closely related to the detection of syntactic and semantic units: user selections are often entities, noun phrases, or concepts. A first approach to solving smart selection is to select an entity, noun phrase, or concept that subsumes the user selection. However, no single approach alone can cover the entire smart selection problem. For example, consider an approach that uses a state-ofthe-art named-entity recognizer (NER) (Chinchor, 1998; Tjong Kim Sang and De Meulder, 2003; Finkel et al., 2005; Ratinov and Roth, 2009). We found in our dataset (see Section 3.2) that only a quarter of what users intend to select consists in fact of named entities. Although an NER approach can be very useful, it is certainly not sufficient. The remainder of the data can be partially addressed with noun phrase (NP) detectors (Abney, 1991; Ramshaw and Marcus, 1995; Mu˜noz et al., 1999; Kudo and Matsumoto, 2001) and lists of items in a knowledge base (KB), but again, each is not alone sufficient. NP detectors and KB-based methods are further very susceptible to the generation of false positives (i.e., text contains many nested noun phrases and knowledge base items include highly ambiguous terms). In our work, we leverage all three techniques in order to benefit from their complementary coverage of user selections. We further create a novel unit detector, called the hyperlink intent model. Based on the assumption that Wikipedia anchor texts are similar in nature to what users would select in a researching task, it models the problem of recovering Wikipedia anchor texts from partial selections. 
2.2 Human Computer Interaction There is a substantial amount of research in the HCI community on how to facilitate interaction of a user with touch and speech enabled devices. To give but a few examples of trends in this field, Gunawardana et al. (2010) address the fat finger problem in the use of soft keyboards on mobile devices, Kumar et al. (2012) explore a novel speech interaction paradigm for text entry, and Sakamoto et al. (2013) introduce a technique that combines touch and voice input on a mobile device for improved navigation of user interface elements such as commands and controls. To the best of our knowledge, however, the problem of smart selection as we defined it has not been addressed. 2.3 Intent detection There is a long line of research in the web literature on understanding user intent. The closest to smart selection is query recommendation (Baeza-Yates et al., 2005; Zhang and Nasraoui, 2006; Boldi et al., 2008), where the goal is to suggest queries that may be related to a user’s intent. Query recommendation techniques are based either on clustering queries by their co-clicked URL patterns (Baeza-Yates et al., 2005) or on leveraging co-occurrences of sequential queries in web 1525 search sessions (Zhang and Nasraoui, 2006; Boldi et al., 2008; Sadikov et al., 2010). The key difference from smart selection is that in our task the output is a selection that is relevant to the context of the document where the original selection appears (e.g., by adding terms neighboring the selection). In query recommendation, however, there is no notion of a document being read by the user and, instead, the recommendations are based exclusively on the aggregation of behavior of multiple users. 3 Problem Setting and Data 3.1 Smart Selection Definition Let D be the set of all documents. We define a selection to be a character ⟨offset, length⟩-tuple in a document d ∈D. Let S be the set of all possible selections in D and let Sd be the set of all possible selections in d. We define a scored smart selection, σ, in a document d, as a pair σ = ⟨x, y⟩where x ∈Sd is a selection and y ∈R+ is a score for the selection. We formally define the smart selection function φ as producing a ranked scored list of all possible selections from a document and user selection pair 2: φ : D × S →(σ1, ..., σ|Sd| | xi ∈Sd, yi ≥yi+1) (1) Consider a user who selects s in a document d. Let τ be the target selection that best captures what the user intended to select. We define the smart selection task as recovering τ given the pair ⟨d, s⟩. Our problem then is to learn a function φ that best recovers the target selection from any user selection. Note that even for a human, reconstructing an intended selection from a single word selection is not trivial. While there are some fairly clear cut cases such as expanding the selection “Obama” to Barack Obama in the sentence “While in DC, Barack Obama met with...”, there are cases where the user intention depends on extrinsic factors such as the user’s interests. For example, in a phrase “University of California at Santa Cruz” with a selection “California”, some (albeit probably few) users may indeed be interested in the state of California, others in the University 2The output consists of a ranked list of selections instead of a single selection to allow experiences such as proposing an n-best list to the user. of California system of universities, and yet others specifically in the University of California at Santa Cruz. 
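To make the formalization in Section 3.1 concrete, the following is a minimal sketch of the objects it implies: a selection as a character ⟨offset, length⟩ pair and a smart selection function that returns every candidate selection with a positive score, sorted so that yi ≥ yi+1. The candidate generator and the scoring function passed in below are placeholders (assumptions), not the models described later in the paper.

from dataclasses import dataclass
from typing import Callable, List, Tuple

@dataclass(frozen=True)
class Selection:
    offset: int   # character offset into the document
    length: int   # number of characters selected

    def text(self, document: str) -> str:
        return document[self.offset:self.offset + self.length]

ScoredSelection = Tuple[Selection, float]   # (x, y) with y a positive score

def smart_selection(document: str,
                    user_selection: Selection,
                    candidates: Callable[[str, Selection], List[Selection]],
                    score: Callable[[str, Selection, Selection], float]
                    ) -> List[ScoredSelection]:
    # Rank every candidate selection by its score, highest first (y_i >= y_{i+1}).
    scored = [(x, score(document, user_selection, x))
              for x in candidates(document, user_selection)]
    return sorted(scored, key=lambda pair: pair[1], reverse=True)

In these terms, the ensemble members and the ENS meta-model of Section 4 are simply different choices for the candidates and score arguments.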
In the next section, we describe how we obtained a dataset of true intended user selections. 3.2 Data In order to obtain a representative dataset for the smart selection task, we focus on a real-world application of users interacting with a touch-enabled e-reader device. In this application, a user is reading a book and chooses phrases for which she would like to get information from resources such as a dictionary, Wikipedia, or web search. Yet, because of the touch interface, she may only touch on a single word. 3.2.1 Crowdsourced Intended Selections We obtain the intended selections through the following crowdsourcing exercise. We use the entire collection of textbooks in English from Wikibooks3, a repository of publicly available textbooks. The corpus consists of 2,696 textbooks that span a large variety of categories such as Computing, Humanities, Science, etc. We first produce a uniform random sample of 100 books, and then sample one paragraph from each book. The resulting set of 100 paragraphs is then sent to the crowdsourcing system. Each paragraph is evaluated by 100 judges, using a pool of 152 judges. For each paragraph, we request the judges to select complete phrases for which they would like to “learn more in resources such as Wikipedia, search engines and dictionaries”, i.e., our true user intended selections. As a result of this exercise, we obtain 13,681 judgments, corresponding to 4,067 unique intended selections. The distribution of number of unique judges who selected each unique intended selection, in a log-log scale, is shown in Figure 1. Notice that this is a Zipfian distribution since it follows a linear trend in the log-log scale. Intuitively, the likelihood that a phrase is of interest to a user correlates with the number of judges who select that phrase. We thus use the number of judges who selected each phrase as a proxy for the likelihood that the phrase will be chosen by users. The resulting dataset consists of 4,067 ⟨d, τ⟩pairs where d is a Wikibook document paragraph and τ is an intended selection, along with the number of judges who selected it. We further assigned 3Available at http://wikibooks.org. 1526 0 2 4 6 8 10 12 0 1 2 3 4 5 6 LOG2(Unique intended selections) LOG2(Unique judges that selected the intended selection) Figure 1: Zipfian distribution of unique intended selections vs. the number of judges who selected them, in log-log scale. each pair to one of five randomly chosen folds, which are used for cross-validation experiments. 3.2.2 Testset Construction We define a test case as a triple ⟨d, s, τ⟩where s is a simulated user selection. For each ⟨d, τ⟩pair in our dataset we construct n corresponding test cases by simulating the user selections {⟨d, τ, s1⟩, . . . , ⟨d, τ, sn⟩} where s1, . . . , sn correspond to the individual words in τ. In other words, each word in τ is considered as a candidate user selection. We discard all target selections that only a single judge annotated since we observed that these mostly contained errors and noise, such as full sentences or nonsensical long sentence fragments. Our first testset, labeled TALL, is the resulting traffic-weighted multiset. That is, each test case ⟨d, s, τ⟩appears k times, where k is the number of judges who selected τ in d. TALL consists of 33,913 test cases. We further utilize the distribution of judgments in the creation of three other testsets. 
Following the stratified sampling methodology commonly employed in the IR community, we construct testsets for the frequently, less frequently, and rarely annotated intended selections, which we call HEAD, TORSO, and TAIL, respectively. We obtain these testsets by first sorting each unique selection according to their frequency of occurrence, and then partitioning the set so that HEAD corresponds to the elements at the top of the list that account for 20% of the judgments; TAIL corresponds to the elements at the bottom also accounting for 20% of the judgments; and TORSO corresponds to the remaining elements. The resulting test sets, THEAD, TTORSO, TTAIL consist of 114, 2115, and 5798 test cases, respectively4. Test sets along with fold assignments and annotation guidelines are available at http://research.microsoft.com/enus/downloads/eb42522c-068e-404c-b63fcf632bd27344/. 3.3 Discussion Our focus on single word selections is motivated by the touchscreen scenario presented in Section 1. Although our touch simulation assumes that each word in a target selection is equally likely to be selected by a user, in fact we expect this distribution to be non-uniform. For example, users may tend to select the first or last word more frequently than words in the middle of the target selection. Or perhaps users tend to select nouns and verbs more frequently than function words. We consider this out of scope for our paper, but view it as an important avenue of future investigation. Finally, for non-touchscreen environments, such as the desktop case, it would also be interesting to study the problem on multi-word user selections. To get an idea of the kind of intended selections that comprise our dataset, we broke them down according to whether they referred to named entities or not. Perhaps surprisingly, the fraction of named entities in the dataset is quite low, 24.3%5. The rest of the intended selections mostly correspond to concepts and topics such as embouchure formation, vocal fold relaxation, NHS voucher values, time-domain graphs, etc. 4 Model As argued in Section 1, existing techniques, such as NER taggers, chunkers, Knowledge Base lookup, etc., are geared towards aspects of the task (i.e., NEs, concepts, KB entries), but not the task as a whole. We can, however, combine the outputs of these systems with a learned “metamodel”. The meta-model ranks the combined candidates according to a criterion that is derived from data that resembles real usage of smart selection as closely as possible. This technique is known 4We stress that TALL is a multi-set, reflecting the overall expected user traffic from our 100 judges per paragraph. THEAD, TTORSO, TTAIL, in contrast, are not multi-sets since judgment frequency is already accounted for in the stratification process, as commonly done in the IR community. 5Becker et al. (2012) report a similar finding, showing that only 26% of questions, which a user might ask after reading a Wikipedia article, are focused on named entities. 1527 in the machine learning community as ensemble learning (Dietterich, 1997). Our ensemble approach, described in this section, serves as our main implementation of the smart selection function φ of Equation 1. Each of the ensemble members are themselves a separate implementation of φ and will be used as a point of comparison in our experiments. Below, we describe the ensemble members before turning to the ensemble learner. 
4.1 Ensemble Members 4.1.1 Hyperlink Intent Model The Hyperlink Intent Model (HIM), which leverages web graph information, is a machine-learned system based on the intuition that anchor texts in Wikipedia are good representations of what users might want to learn about. We build upon the fact that Wikipedia editors write anchor texts for entities, concepts, and things of potential interest for follow-up to other content. HIM learns to recover anchor texts from their single word subselections. Specifically, HIM iteratively decides whether to expand the current selection (initially a single word) one word to the left or right via greedy binary decisions, until a stopping condition is met. At each step, two binary classifiers are consulted. The first one scores the left expansion decision and the second one scores the right expansion decision. In addition, we use the same two classifiers to evaluate the expansion decision “from the outside in”, i.e., from the word next to the current selection (left and right, respectively) to the closest word in the current selection. If the probability for expansion of any model exceeds a predefined threshold, then the most probable expansion is chosen and we continue the iteration with the newly expanded selection as input. The algorithm is illustrated in Figure 2. We automatically create our training set for HIM by first taking a random sample of 8K Wikipedia anchor texts. We treat each anchor text as an intended selection, and each word in the anchor text as a simulated user selection. For each word to the left (or the right) of the user selection that is part of the anchor text, we create a positive training example. Similarly, for each word to the left (or the right) that is outside of the anchor text, we create a negative training example. We include additional negative examples using random word selections from Wikipedia content. For this purpose we samLeft Context Right Context Current selection Candidate Right Selected Word 2 Candidate Left Selected Word 1 Context1 Context2 Context4 Context3 Figure 2: Hyperlink Intent Model (HIM) decoding flow for smart selection. ple random words that are not part of an anchor text. Our final data consists of 2.6M data points, with a 1:20 ratio of positive to negative examples6. We use logistic regression as the classification algorithm for our binary classifiers. The features used by each model are computed over three strings: the current selection s (initially the singleword simulated user selection), the candidate expansion word w, and one word over from the right or left of s. The features fall into five feature families: (1) character-level features, including capitalization, all-cap formatting, character length, presence of opening/closing parentheses, presence and position of digits and non-alphabetic characters, and minimum and average character uni/bi/trigram frequencies (based on frequency tables computed offline from Wikipedia article content); (2) stopword features, which indicate the presence of a stop word (from a stop word list); (3) tf.idf scores precomputed from Wikipedia content statistics; (4) knowledge base features, which indicate whether a string matches an item or a substring of an item in the knowledge base described in Section 4.1.2 below; and (5) lexical features, which capture the actual string of the current selection and the candidate expansion word. 
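A compact sketch of the greedy decoding loop described in Section 4.1.1: starting from the touched word, it scores a one-word expansion to the left and to the right and takes the most probable expansion until neither side clears the threshold. The single expand_prob function is a stand-in for the trained logistic-regression classifiers (the sketch omits the additional "outside in" scoring), and the toy capitalization scorer at the end is purely illustrative.

def him_expand(tokens, touch_index, expand_prob, threshold=0.5):
    # Current selection is tokens[lo:hi]; initially just the touched word.
    lo, hi = touch_index, touch_index + 1
    while True:
        best_side, best_prob = None, threshold
        if lo > 0:
            p_left = expand_prob(tokens, lo, hi, "left")
            if p_left > best_prob:
                best_side, best_prob = "left", p_left
        if hi < len(tokens):
            p_right = expand_prob(tokens, lo, hi, "right")
            if p_right > best_prob:
                best_side, best_prob = "right", p_right
        if best_side is None:          # stopping condition: no expansion clears the threshold
            return tokens[lo:hi]
        if best_side == "left":
            lo -= 1
        else:
            hi += 1

def toy_prob(t, lo, hi, side):
    # Expand only over capitalized neighbors; a stand-in for the real classifiers.
    if side == "left":
        return 0.9 if t[lo - 1][0].isupper() else 0.1
    return 0.9 if hi < len(t) and t[hi][0].isupper() else 0.1

tokens = "While in DC , Barack Obama met with reporters".split()
print(him_expand(tokens, tokens.index("Obama"), toy_prob))   # ['Barack', 'Obama']

The actual classifiers use the five feature families listed above rather than capitalization alone.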
4.1.2 Unit Spotting Our second qualitative class of ensemble members use notions of unit that are either based on linguistic constituency or knowledge base presence. The general process is that any unit that subsumes the user selection is treated as a smart selection candidate. Scoring of candidates is by normalized length, under the assumption that in general the most specific (longest) unit is more likely to be the intended selection. 6Note that this training set is generated automatically and is, by design, of a different nature than the manually labeled data we use to train and test the ensemble model. 1528 Our first unit spotter, labeled NER is geared towards recognizing named entities. We use a commercial and proprietary state-of-the-art NER system, trained using the perceptron algorithm (Collins, 2002) over more than a million hand-annotated labels. Our second approach uses purely syntactic information and treats noun phrases as units. We label this model as NP. For this purpose we parse the sentence containing the user selection with a syntactic parser following (Ratnaparkhi, 1999). We then treat every noun phrase that subsumes the user selection as a candidate smart selection. Finally, our third unit spotter, labeled KB, is based on the assumption that concepts and other entries in a knowledge base are, by nature, things that can be of interest to people. For our knowledge base lookup, we use a proprietary graph consisting of knowledge from Wikipedia, Freebase, and paid feeds from various providers from domains such as entertainment, local, and finance. 4.1.3 Heuristics Our third family of ensemble members implements simple heuristics, which tend to be high precision especially in the HEAD of our data. The first heuristic, representing the current touch-enabled selection paradigm seen in many of today’s tablets and smartphones, is labeled CUR. It simply assumes that the intended selection is always the user-selected word. The second is a capitalization-based heuristic (CAP), which simply expands every selected capitalized word selection to the longest uninterrupted sequence of capitalized words. 4.2 Ensemble Learning In this section, we describe how we train our metalearner, labeled ENS, which takes as input the candidate lists produced by the ensemble members from Section 4.1, and scores each candidate, producing a final scored ranked list. We use logistic regression as a classification algorithm to address this task. Our 22 features in ENS consist of three main classes: (1) features related to the individual ensemble members; (2) features related to the user selection; and (3) features related to the candidate smart selection. For (1), the features consist of whether a particular ensemble member generated the candidate smart selection and its score for that candidate. If the candidate smart selection is not in the candidate list of an ensemble member, its score is set to zero. For both (2) and (3), features account for length and capitalization properties of the user selection and the candidate smart selection (e.g., token length, ratio of capitalized tokens, ratio of capitalized characters, whether or not the first and last tokens are capitalized.) Although training data for the HIM model was automatically generated from Wikipedia, for ENS we desire training data that reflects the true expected user experience. For this, we use fivefold cross-validation over our data collection described in Section 3.2. 
That is, to decode a fold with our meta-learner, we train ENS with the other four folds. Note that every candidate selection for a ⟨document, user selection⟩-pair, ⟨d, s⟩, for the same d and s, is assigned to a single fold, hence the training process does not see any user selection from the test set.

5 Experimental Results
5.1 Experimental Setup
Recall our testsets TALL, THEAD, TTORSO, and TTAIL from Section 3.2.2, where a test case is defined as a triple ⟨d, s, τ⟩, and where d is a document, s is a user selection, and τ is the intended user selection. In this section, we describe our evaluation metric and summarize the system configurations that we evaluate.

5.1.1 Metric
In our evaluation, we apply the smart selection function φ(d, s) (see Eq. 1) to each test case and measure how well it recovers τ. Let A be the set of ⟨d, τ⟩-pairs from our dataset described in Section 3.2.1 that corresponds to a testset T. Let T⟨d,τ⟩ be the set of all test cases in T with a fixed d and τ. We define the macro precision of a smart selection function, Pφ, as follows:

Pφ = (1 / |A|) Σ_{⟨d,τ⟩ ∈ A} Pφ(d, τ)    (2)
Pφ(d, τ) = (1 / |T⟨d,τ⟩|) Σ_{⟨d,s,τ⟩ ∈ T⟨d,τ⟩} Pφ(d, s, τ)
Pφ(d, s, τ) = (1 / |φ(d, s)|) Σ_{σ ∈ φ(d,s)} I(σ, τ)
I(σ, τ) = 1 if σ = ⟨x, y⟩ ∧ x = τ, and 0 otherwise

We report cumulative macro precision at rank (CP@k) in our experiments since our testsets contain a single true user-intended selection for each test case (hence Recall@k = CP@k). However, this is an overly conservative metric since in many cases an alternative smart selection might equally please the user. For example, if our testset contains a user-intended selection τ = The University of Southern California, then given the simulated selection “California”, both τ and University of Southern California would most likely equally satisfy the user intent (whereas the latter would be considered incorrect in our evaluation). In fact, the ideal testset would further evaluate the distance or relevance of the smart selection to the intended user selection. We would then find perhaps that Southern California is a more reasonable smart selection than of Southern California. However, precisely defining such a relevance function and designing the guidelines for a user study is non-trivial and left for future work.

5.1.2 Systems
In our experiments, we evaluate the following systems, each described in detail in Section 4: Passthrough (CUR), Capitalization (CAP), Named-Entity Recognizer (NER), Noun Phrase (NP), Knowledge Base (KB), Hyperlink Intent Model (HIM), Ensemble (ENS).

5.2 Results
Table 1 reports the smart selection performance on the full traffic-weighted testset TALL, as a function of CP@k.

        CP@1    CP@2    CP@3    CP@4    CP@5
CUR     39.3
CAP     48.9    51.0    51.2    51.8    51.8
NER     43.5
NP      34.1    50.2    55.5    57.1    57.6
KB      50.2    50.8    50.9    50.9    50.9
HIM     48.1    48.8    48.8    48.8    48.8
ENS     56.8†   76.0‡   82.6‡   85.2‡   86.6‡
Table 1: Smart selection performance, as a function of CP, on TALL. ‡ and † indicate statistical significance with p = 0.01 and 0.05, respectively. An oracle ensemble would achieve an upper bound CP of 87.3%.

Our ensemble approach recovers the true user-intended selection in 56.8% of the cases. In its top-2 and top-3 ranked smart selections, the true user-intended selection is retrieved 76.0% and 82.6% of the time, respectively. In position 1, ENS significantly outperforms all other systems with 95% confidence.
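As a concrete reading of the metric in Section 5.1.1, the sketch below computes CP@k by counting a test case as correct when the intended selection τ appears among the top-k candidates (with a single gold selection per case this coincides with Recall@k, as noted above) and then macro-averaging per ⟨d, τ⟩ pair as in Eq. (2). The dictionary keyed by ⟨d, τ⟩ is an assumed data layout, not something specified in the paper.

def cp_at_k(results, k):
    # results maps each (d, tau) pair to one ranked candidate list per simulated
    # user selection s derived from tau.
    per_pair_scores = []
    for (d, tau), ranked_lists in results.items():
        case_scores = [1.0 if tau in ranked[:k] else 0.0 for ranked in ranked_lists]
        per_pair_scores.append(sum(case_scores) / len(case_scores))
    return sum(per_pair_scores) / len(per_pair_scores)

# Example: one (d, tau) pair with two simulated touches; tau found at ranks 1 and 3.
example = {("doc", "Barack Obama"): [["Barack Obama", "Obama Sr."],
                                     ["Obama", "President", "Barack Obama"]]}
print(cp_at_k(example, 1), cp_at_k(example, 3))   # 0.5 1.0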
Moreover, we notice that the divergence between ENS and the other systems greatly increases for K ≥ 2, where the significance is now at the 99% level. The CUR system models the selection paradigm of today’s consumer touch-enabled devices (i.e., it assumes that the intended selection is always the touched word). Without changing the user interface, we report a 45% improvement in predicting what the user intended to select over this baseline. If we changed the user interface to allow two or three options to be displayed to the user, then we would improve by 93% and 110%, respectively. For CUR and NER, we report results only at K = 1 since these systems only ever return a single smart selection. Note also that when no named entity is found by NER, no noun phrase is found by NP, or no knowledge base entry is found by KB, the corresponding systems return the original user selection as their smart selection. CAP does not vary much across K: when the intended selection is a capitalized multi-word, the longest string tends to be the intended selection. The same holds for KB.

Whereas Table 1 reports the aggregate expected traffic performance, we further explore the performance against the stratified THEAD, TTORSO, and TTAIL testsets. The results are summarized in Table 2.

        HEAD                    TORSO                   TAIL
        CP@1   CP@2   CP@3      CP@1   CP@2   CP@3      CP@1   CP@2   CP@3
CUR     48.5                    36.7                    26.6
CAP     74.2   74.7   74.8      43.0   45.0   45.1      26.1   27.4   28.2
NER     60.6                    39.2                    26.7
NP      52.3   64.9   69.4      31.0   48.2   53.8      20.0   32.2   35.7
KB      66.7   66.7   66.7      47.0   47.9   48.1      29.9   30.1   30.1
HIM     64.4   65.7   65.7      44.7   45.2   45.4      27.9   28.2   28.2
ENS     75.8   91.8‡  96.5‡     52.7†  73.7‡  81.5‡     32.4†  50.7‡  58.5‡
Table 2: Smart selection performance, as a function of CP, on the THEAD, TTORSO, and TTAIL testsets. ‡ and † indicate statistical significance with p = 0.01 and 0.05, respectively. An oracle ensemble would achieve an upper bound CP of 98.5%, 86.8% and 64.8% for THEAD, TTORSO, and TTAIL, respectively.

As outlined in Section 3.2, the HEAD selections tend to be disproportionately entities and capitalized terms when compared to the TORSO and TAIL. Hence CAP, NER and KB perform much better on the HEAD. In fact, on the HEAD, CAP performs statistically as well as the ENS model. This means that at position 1, for systems that need to focus only on the HEAD, a very simple solution is adequate. For TORSO and TAIL, however, ENS performs better. At positions 2 and 3, across all strata, the ENS model significantly outperforms all other systems (with 99% confidence).

Next, we studied the relative contribution of each ensemble member to the ENS model. Figure 3 illustrates the results of the ablation study.

Figure 3: Ablation of ensemble model members over TALL. Each consecutive model removes one member specified in the series name.

The ensemble member that results in the biggest performance drop when removed is HIM. Perhaps surprisingly, a first ablation of either the CAP or KB model, two of the better-performing individual models from Table 1, leads to an ablated-ENS performance that is nearly identical to the full ENS model. One possible reason is that both tend to generate similar candidates (i.e., many entities in our KB are capitalized).
Although the HIM model as a standalone system does not outperform simple linguistic unit selection models, it appears to be the most important contributor to the overall ensemble. 5.3 Error Analysis: Oracle Ensemble We begin by assessing an upper bound for our ensemble, i.e., an oracle ensemble, by assuming that if a correct candidate is generated by any ensemble member, the oracle ensemble model places it in first position. For TALL the oracle performance is 87.3%. In other words, our choice of ensemble members was able to recover a correct smart selection as a candidate in 87.3% of the user study cases. For THEAD, TTORSO, and TTAIL, the oracle performance is 98.5%, 86.8%, and 64.8%, respectively. Although our ENS model’s CP@3 is within 2-6 points of the oracle, there is room to significantly improve our CP@1, see Table 1 and Table 2. We analyze this opportunity by inspecting a random sample of 200 test cases where ENS produced an incorrect smart selection in position 1. The breakdown of these cases is: 1 case from THEAD; 50 cases from TTORSO; 149 cases from TTAIL, i.e., most errors occur in the TAIL. For 146 of these cases (73%), not a single ensemble member produced the correct target selection τ as a candidate. We analyze these cases in detail in Section 5.4. Of the remaining cases, 25, 10, 9, 4, 4, and 2 were correct in positions 2, 3, 4, 5, 6, 7, respectively. Table 3 lists some examples. In 18 cases (33%), the result in position 1 is very reasonable given the context and user selection (see lines 1-4 in Table 3 for examples). Often the target selection was also found in second position. These cases highlight the need for a more relaxed, relevance-based user study, as pointed out at the end of Section 5.1.1. We attributed 7 (13%) of the cases to data problems: some cases had a punctuation as a sole character user selection, some had a mishandled escaped quotation character, and some had a UTF-8 encoding error. The remaining 29 (54%) were truly model errors. Some examples are shown in lines 5-8 in Table 3. We found three categories of errors here. First, our model has learned a strong prior on preferring the original user selection (see example line 5). From a user experience point of view, when the model is unsure of itself, it is in fact better not to alter her selection. Second, we also learned a strong capitalization prior, i.e., to trust the CAP member (see example line 6). Finally, we noticed that we have difficulty handling user selections consisting of a stopword (we noted determiners, prepositions, and the word “and”). Adding a few simple features to ENS based on a stopwords list or a list of closed-class words should address this problem. 
1531 Text Snippet User Selection ENS 1st Result 1 “The Russian conquest of the South Caucasus in the 19th century split the speech community across two states...” Caucasus South Caucasus 2 “...are generally something that transportation agencies would like to minimize...” transportation transportation agencies 3 “The vocal organ of birds, the syrinx, is located at the base of the blackbird’s trachea.” vocal vocal organ 4 “An example of this may be an idealised waveform like a square wave...” waveform idealised waveform 5 “Tickets may be purchased from either the ticket counter or from automatic machines...” counter counter 6 “PBXT features include the following: MVCC Support: MVCC stands for Multi-version Concurrency Control.” MVCC MVCC Support 7 “Centers for song production pathways include the High vocal center; robust nucleus of archistriatum (RA); and the tracheosyringeal part of the hypoglossal nucleus...” robust robust nucleus 8 “...and get an 11gR2 RAC cluster database running inside virtual machines...” cluster RAC cluster Table 3: Position 1 errors when applying ENS to our test cases. The text snippet is a substring of a paragraph presented to our judges with the target selection (τ) indicated in bold. 5.4 Error Analysis: Ensemble Members Over all test cases, the distribution of cases without a correct candidate generated by an ensemble member in the HEAD, TORSO, TAIL is 0.3%, 34.6%, and 65.1%, respectively. We manually inspected a random sample of 100 such test cases. The majority of them, 83%, were large sentence fragments, which we consider out of scope according to our prediction task definition outlined in Section 1. The average token length of the target selection τ for these was 15.3. In comparison, we estimate the average token length of the task-admissable cases to be 2.7 tokens. Although most of these long fragment selections seem to be noise, a few cases are statements that a user would reasonably want to know more about, such as: (i) “Talks of a merger between the NHL and the WHA were growing” or (ii) “NaN + NaN * 1.0i”. In 10% of the cases, we face a punctuationhandling issue, and in each case our ensemble was able to generate a correct candidate when fixing the punctuation. For example, for the book title τ = What is life?, our ensemble found the candidate What is life, dropping the question mark. For τ = Near Earth Asteroid. our ensemble found Near Earth Asteroid, dropping the period. Similar problems occurred with parentheses and quotation marks. In two cases, our ensemble members dropped a leading “the” token, e.g., for τ = the Hume Highway, we found Hume Highway. Finally, 2 cases were UTF-8 encoding mistakes, leaving five “true error” cases. 6 Conclusion and Future Work We introduced a new paradigm, smart selection, to address the cumbersome text selection capabilities of today’s touch-enabled mobile devices. We report 45% improvement in predicting what the user intended to select over current touch-enabled consumer platforms, such as iOS, Android and Windows. We release to the community a dataset of 33, 912 crowdsourced true intended user selections and corresponding simulated user touches. 
There are many avenues for future work, including understanding the distribution of user touches on their intended selection, other interesting scenarios (e.g., going beyond the e-reader towards document editors and web browsers may show different distributions in what users select), leveraging other sources of signal such as a user’s profile, her interests and her local session context, and exploring user interfaces that leverage n-best smart selection prediction lists, for example by providing selection options to the user after her touch. With the release of our 33, 912-crowdsourced dataset and our model analyses, it is our hope that the research community can help accelerate the progress towards reinventing the way text selection occurs today, the initial steps for which we have taken in this paper. 7 Acknowledgments The authors thank Aitao Chen for sharing his NER tagger for our experiments, and Bernhard Kohlmeier, Pradeep Chilakamarri, Ashok Chandra, David Hamilton, and Bo Zhao for their guidance and valuable discussions. 1532 References Steven. P. Abney. 1991. Parsing by chunks. In Robert C. Berwick, Steven P. Abney, and Carol Tenny, editors, Principle-Based Parsing: Computation and Psycholinguistics, pages 257–278. Kluwer, Dordrecht. Ricardo Baeza-Yates, Carlos Hurtado, and Marcelo Mendoza. 2005. Query recommendation using query logs in search engines. In Current Trends in Database Technology-EDBT 2004 Workshops, pages 588–596. Springer. Lee Becker, Sumit Basu, and Lucy Vanderwende. 2012. Mind the gap: Learning to choose gaps for question generation. In Proceedings of NAACL HLT ’12, pages 742–751. Paolo Boldi, Francesco Bonchi, Carlos Castillo, Debora Donato, Aristides Gionis, and Sebastiano Vigna. 2008. The query-flow graph: model and applications. In Proceedings of CIKM ’08, pages 609–618. ACM. Nancy A. Chinchor. 1998. Named entity task definition. In Proceedings of the Seventh Message Understanding Conference (MUC-7), Fairfax, VA. Michael Collins. 2002. Discriminative training methods for hidden markov models: theory and experiments with perceptron algorithms. In Proceedings of EMNLP. Thomas G. Dietterich. 1997. Machine Learning Research - Four Current Directions. AI Magazine, 18:4:97–136. Jenny Rose Finkel, Trond Grenager, and Christopher Manning. 2005. Incorporating non-local information into information extraction systems by gibbs sampling. In In ACL, pages 363–370. Asela Gunawardana, Tim Paek, and Christopher Meek. 2010. Usability guided key-target resizing for soft keyboards. In Proceedings of IUI ’10, pages 111– 118. Taku Kudo and Yuji Matsumoto. 2001. Chunking with support vector machines. In Proceedings of NAACL ’01, pages 1–8. Anuj Kumar, Tim Paek, and Bongshin Lee. 2012. Voice typing: A new speech interaction model for dictation on touchscreen devices. In Proceedings of CHI’12, pages 2277–2286. Marcia Mu˜noz, Vasin Punyakanok, Dan Roth, and Dav Zimak. 1999. A learning approach to shallow parsing. In Proceedings of EMNLP/VLC, pages 168– 178. Lance A. Ramshaw and Mitchell P. Marcus. 1995. Text chunking using transformation-based learning. In Proceedings of the 3rd ACL Workshop on Very Large Corpora, pages 82–94. Cambridge MA, USA. Lev Ratinov and Dan Roth. 2009. Design challenges and misconceptions in named entity recognition. In Proceedings of CoNLL-2009, pages 147–155. Adwait Ratnaparkhi. 1999. Learning to parse natural language with maximum entropy models. Mach. Learn., 34(1-3):151–175, February. Eldar Sadikov, Jayant Madhavan, Lu Wang, and Alon Halevy. 2010. 
Clustering query refinements by user intent. In Proceedings of the 19th international conference on World wide web, pages 841–850. ACM. Daisuke Sakamoto, Takanori Komatsu, and Takeo Igarashi. 2013. Voice augmented manipulation: using paralinguistic information to manipulate mobile devices. In Mobile HCI, pages 69–78. Erik F. Tjong Kim Sang and Fien De Meulder. 2003. Introduction to the conll-2003 shared task: Language-independent named entity recognition. In Proceedings of CoNLL-2003, pages 142–147. Edmonton, Canada. Zhiyong Zhang and Olfa Nasraoui. 2006. Mining search engine query logs for query recommendation. In Proceedings of the 15th international conference on World Wide Web, pages 1039–1040. ACM. 1533
2014
143
Proceedings of the 52nd Annual Meeting of the Association for Computational Linguistics, pages 1534–1543, Baltimore, Maryland, USA, June 23-25 2014. c⃝2014 Association for Computational Linguistics Modeling Prompt Adherence in Student Essays Isaac Persing and Vincent Ng Human Language Technology Research Institute University of Texas at Dallas Richardson, TX 75083-0688 {persingq,vince}@hlt.utdallas.edu Abstract Recently, researchers have begun exploring methods of scoring student essays with respect to particular dimensions of quality such as coherence, technical errors, and prompt adherence. The work on modeling prompt adherence, however, has been focused mainly on whether individual sentences adhere to the prompt. We present a new annotated corpus of essaylevel prompt adherence scores and propose a feature-rich approach to scoring essays along the prompt adherence dimension. Our approach significantly outperforms a knowledge-lean baseline prompt adherence scoring system yielding improvements of up to 16.6%. 1 Introduction Automated essay scoring, the task of employing computer technology to evaluate and score written text, is one of the most important educational applications of natural language processing (NLP) (see Shermis and Burstein (2003) and Shermis et al. (2010) for an overview of the state of the art in this task). A major weakness of many existing scoring engines such as the Intelligent Essay AssessorTM(Landauer et al., 2003) is that they adopt a holistic scoring scheme, which summarizes the quality of an essay with a single score and thus provides very limited feedback to the writer. In particular, it is not clear which dimension of an essay (e.g., style, coherence, relevance) a score should be attributed to. Recent work addresses this problem by scoring a particular dimension of essay quality such as coherence (Miltsakaki and Kukich, 2004), technical errors, organization (Persing et al., 2010), and thesis clarity (Persing and Ng, 2013). Essay grading software that provides feedback along multiple dimensions of essay quality such as E-rater/Criterion (Attali and Burstein, 2006) has also begun to emerge. Our goal in this paper is to develop a computational model for scoring an essay along an under-investigated dimension — prompt adherence. Prompt adherence refers to how related an essay’s content is to the prompt for which it was written. An essay with a high prompt adherence score consistently remains on the topic introduced by the prompt and is free of irrelevant digressions. To our knowledge, little work has been done on scoring the prompt adherence of student essays since Higgins et al. (2004). Nevertheless, there are major differences between Higgins et al.’s work and our work with respect to both the way the task is formulated and the approach. Regarding task formulation, while Higgins et al. focus on classifying each sentence as having either good or bad adherence to the prompt, we focus on assigning a prompt adherence score to the entire essay, allowing the score to range from one to four points at half-point increments. As far as the approach is concerned, Higgins et al. adopt a knowledgelean approach to the task, where almost all of the features they employ are computed based on a word-based semantic similarity measure known as Random Indexing (Kanerva et al., 2000). 
On the other hand, we employ a large variety of features, including lexical and knowledge-based features that encode how well the concepts in an essay match those in the prompt, LDA-based features that provide semantic generalizations of lexical features, and “error type” features that encode different types of errors the writer made that are related to prompt adherence. In sum, our contributions in this paper are twofold. First, we develop a scoring model for the prompt adherence dimension on student essays using a feature-rich approach. Second, in order to stimulate further research on this task, we make our data set consisting of prompt adherence an1534 Topic Languages Essays Most university degrees are theoretical and do not prepare students for the real world. They are therefore of very little value. 13 131 The prison system is outdated. No civilized society should punish its criminals: it should rehabilitate them. 11 80 In his novel Animal Farm, George Orwell wrote “All men are equal but some are more equal than others.” How true is this today? 10 64 Table 1: Some examples of writing topics. notations of 830 essays publicly available. Since progress in prompt adherence modeling is hindered in part by the lack of a publicly annotated corpus, we believe that our data set will be a valuable resource to the NLP community. 2 Corpus Information We use as our corpus the 4.5 million word International Corpus of Learner English (ICLE) (Granger et al., 2009), which consists of more than 6000 essays written by university undergraduates from 16 countries and 16 native languages who are learners of English as a Foreign Language. 91% of the ICLE texts are argumentative. We select a subset consisting of 830 argumentative essays from the ICLE to annotate for training and testing of our essay prompt adherence scoring system. Table 1 shows three of the 13 topics selected for annotation. Fifteen native languages are represented in the set of annotated essays. 3 Corpus Annotation We ask human annotators to score each of the 830 argumentative essays along the prompt adherence dimension. Our annotators were selected from over 30 applicants who were familiarized with the scoring rubric and given sample essays to score. The six who were most consistent with the expected scores were given additional essays to annotate. Annotators evaluated how well each essay adheres to its prompt using a numerical score from one to four at half-point increments (see Table 2 for a description of each score). This contrasts with previous work on prompt adherence essay scoring, where the corpus is annotated with a binary decision (i.e., good or bad) (e.g., Higgins et al. (2004; 2006), Louis and Higgins (2010)). Hence, our annotation scheme not only provides Score Description of Prompt Adherence 4 essay fully addresses the prompt and consistently stays on topic 3 essay mostly addresses the prompt or occasionally wanders off topic 2 essay does not fully address the prompt or consistently wanders off topic 1 essay does not address the prompt at all or is completely off topic Table 2: Descriptions of the meaning of scores. a finer-grained distinction of prompt adherence (which can be important in practice), but also makes the prediction task more challenging. To ensure consistency in annotation, we randomly select 707 essays to have graded by multiple annotators. Analysis reveals that the Pearson’s correlation coefficient computed over these doubly annotated essays is 0.243. 
Though annotators exactly agree on the prompt adherence score of an essay only 38% of the time, the scores they apply fall within 0.5 points in 66% of essays and within 1.0 point in 89% of essays. For the sake of our experiments, whenever annotators disagree on an essay’s prompt adherence score, we assign the essay the average of all annotations rounded to the nearest half point. Table 3 shows the number of essays that receive each of the seven scores for prompt adherence. score 1.0 1.5 2.0 2.5 3.0 3.5 4.0 essays 0 0 8 44 105 230 443 Table 3: Distribution of prompt adherence scores. 4 Score Prediction In this section, we describe in detail our system for predicting essays’ prompt adherence scores. 4.1 Model Training and Application We cast the problem of predicting an essay’s prompt adherence score as 13 regression problems, one for each prompt. Each essay is represented as an instance whose label is the essay’s true score (one of the values shown in Table 3) with up to seven types of features including baseline (Section 4.2) and six other feature types proposed by us (Section 4.3). Our regressors may assign an essay any score in the range of 1.0−4.0. Using regression captures the fact that some pairs of scores are more similar than others (e.g., an essay with a prompt adherence score of 3.5 is more similar to an essay with a score of 4.0 than it is to one with a score of 1.0). A classification sys1535 tem, by contrast, may sometimes believe that the scores 1.0 and 4.0 are most likely for a particular essay, even though these scores are at opposite ends of the score range. Using a different regressor for each prompt captures the fact that it may be easier for an essay to adhere to some prompts than to others, and common problems students have writing essays for one prompt may not apply to essays written in response to another prompt. For example, in essays written in response to the prompt “Marx once said that religion was the opium of the masses. If he was alive at the end of the 20th century, he would replace religion with television,” students sometimes write essays about all the evils of television, forgetting that their essay is only supposed to be about whether it is “the opium of the masses”. Students are less likely to make an analogous mistake when writing for the prompt “Crime does not pay.” After creating training instances for prompt pi, we train a linear regressor, ri, with regularization parameter ci for scoring test essays written in response to pi using the linear SVM regressor implemented in the LIBSVM software package (Chang and Lin, 2001). All SVM-specific learning parameters are set to their default values except ci, which we tune to maximize performance on held-out validation data. After training the classifiers, we use them to classify the test set essays. The test instances are created in the same way as the training instances. 4.2 Baseline Features Our baseline system for score prediction employs various features based on Random Indexing. 1. Random Indexing Random Indexing (RI) is “an efficient, scalable and incremental alternative” (Sahlgren, 2005) to Latent Semantic Indexing (Deerwester et al., 1990; Landauer and Dutnais, 1997) which allows us to automatically generate a semantic similarity measure between any two words. We train our RI model on over 30 million words of the English Gigaword corpus (Parker et al., 2009) using the S-Space package (Jurgens and Stevens, 2010). 
We expect that features based on RI will be useful for prompt adherence scoring because they may help us find text related to the prompt even if some of its concepts have have been rephrased (e.g., an essay may talk about “jail” rather than “prison”, which is mentioned in one of the prompts), and because they have already proven useful for the related task of determining which sentences in an essay are related to the prompt (Higgins et al., 2004). For each essay, we therefore attempt to adapt the RI features used by Higgins et al. (2004) to our problem of prompt adherence scoring. We do this by generating one feature encoding the entire essay’s similarity to the prompt, another encoding the essay’s highest individual sentence’s similarity to the prompt, a third encoding the highest entire essay similarity to one of the prompt sentences, another encoding the highest individual sentence similarity to an individual prompt sentence, and finally one encoding the entire essay’s similarity to a manually rewritten version of the prompt that excludes extraneous material (such as “In his novel Animal Farm, George Orwell wrote,” which is introductory material from the third prompt in Table 1). Our RI feature set necessarily excludes those features from Higgins et al. that are not easily translatable to our problem since we are concerned with an entire essay’s adherence to its prompt rather than with each of its sentences’ relatedness to the prompt. Since RI does not provide a straightforward way to measure similarity between groups of words such as sentences or essays, we use Higgins and Burstein’s (2007) method to generate these features. 4.3 Novel Features Next, we introduce six types of novel features. 2. N-grams As our first novel feature, we use the 10,000 most important lemmatized unigram, bigram, and trigram features that occur in the essay. N-grams can be useful for prompt adherence scoring because they can capture useful words and phrases related to a prompt. For example, words and phrases like “university degree”, “student”, and “real world” are relevant to the first prompt in Table 1, so it is more likely that an essay adheres to the prompt if they appear in the essay. We determine the “most important” n-gram features using information gain computed over the training data (Yang and Pedersen, 1997). Since the essays vary greatly in length, we normalize each essay’s set of n-gram features to unit length. 3. Thesis Clarity Keywords Our next set of features consists of the keyword features we introduced in our previous work on essay thesis clarity scoring (Persing and Ng, 2013). Below we give an overview of these keyword features and motivate 1536 why they are potentially useful for prompt adherence scoring. The keyword features were formed by first examining the 13 essay prompts, splitting each into its component pieces. As an example of what is meant by a “component piece”, consider the first prompt in Table 1. The components of this prompt would be “Most university degrees are theoretical”, “Most university degrees do not prepare students for the real world”, and “Most university degrees are of very little value.” Then the most important (primary) and second most important (secondary) words were selected from each prompt component, where a word was considered “important” if it would be a good word for a student to use when stating her thesis about the prompt. 
So since the lemmatized version of the third component of the second prompt in Table 1 is “it should rehabilitate they”, “rehabilitate” was selected as a primary keyword and “society” as a secondary keyword. Features are then computed based on these keywords. For instance, one thesis clarity keyword feature is computed as follows. The RI similarity measure is first taken between the essay and each group of the prompt’s primary keywords. The feature then gets assigned the lowest of these values. If this feature has a low value, that suggests that the student ignored the prompt component from which the value came when writing the essay. To compute another of the thesis clarity keyword features, the numbers of combined primary and secondary keywords the essay contains from each component of its prompt are counted. These numbers are then divided by the total count of primary and secondary features in their respective components. The greatest of the fractions generated in this way is encoded as a feature because if it has a low value, that indicates the essay’s thesis may not be very relevant to the prompt.1 4. Prompt Adherence Keywords The thesis clarity keyword features described above were intended for the task of determining how clear an essay’s thesis is, but since our goal is instead to determine how well an essay adheres to its prompt, it makes sense to adapt keyword features to our task rather than to adopt keyword features ex1Space limitations preclude a complete listing of the thesis clarity keyword features. See our website at http: //www.hlt.utdallas.edu/˜persingq/ICLE/ for the complete list. actly as they have been used before. For this reason, we construct a new list of keywords for each prompt component, though since prompt adherence is more concerned with what the student says about the topics than it is with whether or not what she says about them is stated clearly, our keyword lists look a little different than the ones discussed above. For an example, we earlier alluded to the problem of students merely discussing all the evils of television for the prompt “Marx once said that religion was the opium of the masses. If he was alive at the end of the 20th century, he would replace religion with television.” Since the question suggests that students discuss whether television is analogous to religion in this way, our set of prompt adherence keywords for this prompt contains the word “religion” while the previously discussed keyword sets do not. This is because a thesis like “Television is bad” can be stated very clearly without making any reference to religion at all, and so an essay with a thesis like this can potentially have a very high thesis clarity score. It should not, however, have a very high prompt adherence score, as the prompt asked the student to discuss whether television is like religion in a particular way, so religion should be at least briefly addressed for an essay to be awarded a high prompt adherence score. Additionally, our prompt adherence keyword sets do not adopt the notions of primary and secondary groups of keywords for each prompt component, instead collecting all the keywords for a component into one set because “secondary” keywords tend to be things that are important when we are concerned with what a student is saying about the topic rather than just how clearly she said it. We form two types of features from prompt adherence keywords. 
While both types of features measure how much each prompt component was discussed in an essay, they differ in how they encode the information. To obtain feature values of the first type, we take the RI similarities between the whole essay and each set of prompt adherence keywords from the prompt’s components. This results in one to three features, as some prompts have one component while others have up to three. We obtain feature values of the second type as follows. For each component, we count the number of prompt adherence keywords the essay contains. We divide this number by the number of prompt adherence keywords we identified from 1537 the component. This results in one to three features since a prompt has one to three components. 5. LDA Topics A problem with the features we have introduced up to this point is that they have trouble identifying topics that are not mentioned in the prompt, but are nevertheless related to the prompt. These topics should not diminish the essay’s prompt adherence score because they are at least related to prompt concepts. For example, consider the prompt “All armies should consist entirely of professional soldiers: there is no value in a system of military service.” An essay containing words like “peace”, “patriotism”, or “training” are probably not digressions from the prompt, and therefore should not be penalized for discussing these topics. But the various measures of keyword similarities described above will at best not notice that anything related to the prompt is being discussed, and at worst, this might have effects like lowering some of the RI similarity scores, thereby probably lowering the prompt adherence score the regressor assigns to the essay. While n-gram features do not have exactly the same problem, they would still only notice that these example words are related to the prompt if multiple essays use the same words to discuss these concepts. For this reason, we introduce Latent Dirichlet Allocation (LDA) (Blei et al., 2003) features. In order to construct our LDA features, we first collect all essays written in response to each prompt into its own set. Note that this feature type exploits unlabeled data: it includes all essays in the ICLE responding to our prompts, not just those in our smaller annotated 830 essay dataset. We then use the MALLET (McCallum, 2002) implementation of LDA to build a topic model of 1,000 topics around each of these sets of essays. This results in what we can think of as a soft clustering of words into 1,000 sets for each prompt, where each set of words represents one of the topics LDA identified being discussed in the essays for that prompt. So for example, the five most important words in the most frequently discussed topic for the military prompt we mentioned above are “man”, “military”, “service”, “pay”, and “war”. We also use the MALLET-generated topic model to tell us how much of each essay is spent discussing each of the 1,000 topics. The model might tell us, for example, that a particular essay written on the military prompt spends 35% of the time discussing the “man”, “military”, “service”, “pay”, and “war” topic and 65% of the time discussing a topic whose most important words are “fully”, “count”, “ordinary”, “czech”, and “day”. Since the latter topic is discussed so much in the essay and does not appear to have much to do with the military prompt, this essay should probably get a bad prompt adherence score. We construct 1,000 features from this topic model, one for each topic. 
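A minimal sketch of these LDA topic features, using gensim in place of MALLET (an assumption; the number of passes shown is illustrative):

```python
from gensim import corpora, models

def lda_topic_features(prompt_essays, scored_essays, num_topics=1000):
    """Train an LDA model on all essays for one prompt (labeled or not),
    then represent each scored essay by its topic-proportion vector."""
    tokenized = [essay.lower().split() for essay in prompt_essays]
    dictionary = corpora.Dictionary(tokenized)
    corpus = [dictionary.doc2bow(toks) for toks in tokenized]
    lda = models.LdaModel(corpus, id2word=dictionary,
                          num_topics=num_topics, passes=5, random_state=0)
    features = []
    for essay in scored_essays:
        bow = dictionary.doc2bow(essay.lower().split())
        dist = dict(lda.get_document_topics(bow, minimum_probability=0.0))
        features.append([dist.get(t, 0.0) for t in range(num_topics)])
    return lda, features
```

Each of the 1,000 resulting topic proportions then serves as one feature for the regressor.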
Each feature’s value is obtained by using the topic model to tell us how much of the essay was spent discussing the feature’s corresponding topic. From these features, our regressor should be able to learn which topics are important to a good prompt adherent essay. 6. Manually Annotated LDA Topics A weakness of the LDA topics feature type is that it may result in a regressor that has trouble distinguishing between an infrequent topic that is adherent to the prompt and one that just represents an irrelevant digression. This is because an infrequent topic may not appear in the training set often enough for the regressor to make this judgment. We introduce the manually annotated LDA topics feature type to address this problem. In order to construct manually annotated LDA topic features, we first build 13 topic models, one for each prompt, just as described in the section on LDA topic features. Rather than requesting models of 1,000 topics, however, we request models of only 100 topics2. We then go through all 13 lists of 100 topics as represented by their top ten words, manually annotating each topic with a number from 0 to 5 representing how likely it is that the topic is adherent to the prompt. A topic labeled 5 is very likely to be related to the prompt, where a topic labeled 0 appears totally unrelated. Using these annotations alongside the topic distribution for each essay that the topic models provide us, we construct ten features. The first five features encode the sum of the contributions to an essay of topics annotated with a number ≥1, the sum of the contributions to an essay of topics annotated with a number ≥2, and so on up to 5. The next five features are similar to the last, with one feature taking on the sum of the contributions to an essay of topics annotated with the number 0, another feature taking on the sum of the 2We use 100 topics for each prompt in the manually annotated version of LDA features rather than the 1,000 topics we use in the regular version of LDA features because 1,300 topics are not too costly to annotate, but manually annotating 13,000 topics would take too much time. 1538 contributions to an essay of topics annotated with the number 1, and so on up to 4. We do not include a feature for topics annotated with the number 5 because it would always have the same value as the feature for topics ≥5. Features like these should give the regressor a better idea how much of an essay is composed of prompt-related arguments and discussion and how much of it is irrelevant to the prompt, even if some of the topics occurring in it are too infrequent to judge just from training data. 7. Predicted Thesis Clarity Errors In our previous work on essay thesis clarity scoring (Persing and Ng, 2013), we identified five classes of errors that detract from the clarity of an essay’s thesis: Confusing Phrasing. The thesis is phrased oddly, making it hard to understand the writer’s point. Incomplete Prompt Response. The thesis leaves some part of a multi-part prompt unaddressed. Relevance to Prompt. The apparent thesis’s weak relation to the prompt causes confusion. Missing Details. The thesis leaves out an important detail needed to understand the writer’s point. Writer Position. The thesis describes a position on the topic without making it clear that this is the position the writer supports. We hypothesize that these errors, though originally intended for thesis clarity scoring, could be useful for prompt adherence scoring as well. 
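Before turning to the error-based features, the ten aggregate features built from the manually annotated topics can be sketched as follows (the per-essay topic proportions and the 0-5 topic annotations are assumed to be available as plain lists):

```python
def annotated_topic_features(topic_props, topic_labels):
    """topic_props: per-essay topic proportions (length = #topics, sums to 1).
       topic_labels: manual 0-5 prompt-relatedness label for each topic.
       Returns 10 features: mass of topics labeled >= 1..5, then == 0..4."""
    def mass(keep):
        return sum(p for p, lab in zip(topic_props, topic_labels) if keep(lab))
    at_least = [mass(lambda lab, t=t: lab >= t) for t in range(1, 6)]
    exactly = [mass(lambda lab, t=t: lab == t) for t in range(0, 5)]
    return at_least + exactly
```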
For instance, an essay that has a Relevance to Prompt error or an Incomplete Prompt Response error should intuitively receive a low prompt adherence score. For this reason, we introduce features based on these errors to our feature set for prompt adherence scoring3. While each of the essays in our data set was previously annotated with these thesis clarity errors, in a realistic setting a prompt adherence scoring system will not have access to these manual error labels. As a result, we first need to predict which of these errors is present in each essay. To do this, we train five maximum entropy classifiers for each prompt, one for each of the five thesis clarity errors, using MALLET’s (McCallum, 2002) implementation of maximum entropy classification. Instances are presented to classifier for prompt p for error e in the following way. If a training essay is written in response to p, it will be used to gen3See our website at http://www.hlt.utdallas. edu/˜persingq/ICLE/ for the complete list of error annotations. erate a training instance whose label is 1 if e was annotated for it or 0 otherwise. Since error prediction and prompt adherence scoring are related problems, the features we associate with this instance are features 1−6 which we have described earlier in this section. The classifier is then used to generate probabilities telling us how likely it is that each test essay has error e. Then, when training our regressor for prompt adherence scoring, we add the following features to our instances. We add a binary feature indicating the presence or absence of each error. Or in the case of test essays, the feature takes on a real value from 0 to 1 indicating how likely the classifier thought it was that the essay had each of the errors. This results in five additional features, one for each error. 5 Evaluation In this section, we evaluate our system for prompt adherence scoring. All the results we report are obtained via five-fold cross-validation experiments. In each experiment, we use 3 5 of our labeled essays for model training, another 1 5 for parameter tuning, and the final 1 5 for testing. 5.1 Experimental Setup 5.1.1 Scoring Metrics We employ four evaluation metrics. As we will see below, S1, S2, and S3 are error metrics, so lower scores imply better performance. In contrast, PC is a correlation metric, so higher correlation implies better performance. The simplest metric, S1, measures the frequency at which a system predicts the wrong score out of the seven possible scores. Hence, a system that predicts the right score only 25% of the time would receive an S1 score of 0.75. The S2 metric measures the average distance between a system’s score and the actual score. This metric reflects the idea that a system that predicts scores close to the annotator-assigned scores should be preferred over a system whose predictions are further off, even if both systems estimate the correct score at the same frequency. The S3 metric measures the average square of the distance between a system’s score predictions and the annotator-assigned scores. The intuition behind this system is that not only should we prefer a system whose predictions are close to the annotator scores, but we should also prefer 1539 one whose predictions are not too frequently very far away from the annotator scores. 
These three scores are given by:

$$S1 = \frac{1}{N} \sum_{j:\, A_j \neq E'_j} 1, \qquad S2 = \frac{1}{N} \sum_{j=1}^{N} |A_j - E_j|, \qquad S3 = \frac{1}{N} \sum_{j=1}^{N} (A_j - E_j)^2$$

where $A_j$, $E_j$, and $E'_j$ are the annotator-assigned, system-predicted, and rounded system-predicted scores, respectively, for essay $j$, and $N$ is the number of essays. [Footnote 4: Since our regressor assigns each essay a real value rather than an actual valid score, it would be difficult to obtain a reasonable S1 score without rounding the system-estimated score to one of the possible values. For that reason, we round the estimated score to the nearest of the seven scores the human annotators were permitted to assign (1.0, 1.5, 2.0, 2.5, 3.0, 3.5, 4.0) only when calculating S1. For the other scoring metrics, we only round the predictions to 1.0 or 4.0 if they fall outside the 1.0-4.0 range.] The last metric, PC, computes Pearson's correlation coefficient between a system's predicted scores and the annotator-assigned scores. PC ranges from -1 to 1. A positive (negative) PC implies that the two sets of predictions are positively (negatively) correlated.

5.1.2 Parameter Tuning
As mentioned earlier, for each prompt pi, we train a linear regressor ri using LIBSVM with regularization parameter ci. To optimize our system's performance on the three error measures described previously, we use held-out validation data to independently tune each of the ci values. [Footnote 5: For parameter tuning, ci may be assigned any of the values 10^0, 10^1, 10^2, 10^3, 10^4, 10^5, or 10^6.] Note that each of the ci values can be tuned independently, because a ci value that is optimal for predicting scores for pi essays with respect to any of the error performance measures is necessarily also the optimal ci when measuring that error on essays from all prompts. However, this is not the case with Pearson's correlation coefficient, as the PC value for essays from all 13 prompts cannot be expressed as a weighted sum of the PC values obtained on each individual prompt. In order to obtain an optimal result as measured by PC, we jointly tune the ci parameters to optimize the PC value achieved by our system on the same held-out validation data. However, an exact solution to this optimization problem is computationally expensive, as there are too many (7^13) possible combinations of c values to exhaustively search. Consequently, we find a local maximum by employing the simulated annealing algorithm (Kirkpatrick et al., 1983), altering one ci value at a time to optimize PC while holding the remaining parameters fixed.

Table 4: Five-fold cross-validation results for prompt adherence scoring.
System        S1     S2     S3     PC
Baseline      .517   .368   .234   .233
Our System    .488   .348   .197   .360

5.2 Results and Discussion
Five-fold cross-validation results on prompt adherence score prediction are shown in Table 4. On the first line, this table shows that our baseline system, which, recall, uses only the various RI features, predicts the wrong score 51.7% of the time. Its predictions are off by an average of .368 points, and the average squared distance between its predicted score and the actual score is .234. In addition, its predicted scores and the actual scores have a Pearson correlation coefficient of 0.233. The results from our system, which uses all seven feature types described in Section 4, are shown in row 2 of the table. Our system obtains S1, S2, S3, and PC scores of .488, .348, .197, and .360 respectively, yielding a significant improvement over the baseline with respect to S2, S3, and PC with p < 0.05, p < 0.01, and p < 0.06, respectively.
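For reference, the four metrics defined in Section 5.1.1 can be computed in a few lines; a sketch with numpy and scipy, where the rounding and clipping follow Footnote 4 above (how ties are broken when rounding is an assumption):

```python
import numpy as np
from scipy.stats import pearsonr

VALID = np.array([1.0, 1.5, 2.0, 2.5, 3.0, 3.5, 4.0])

def score_metrics(gold, pred):
    """S1, S2, S3 error metrics and Pearson correlation (PC)."""
    gold, pred = np.asarray(gold, float), np.asarray(pred, float)
    clipped = np.clip(pred, 1.0, 4.0)                  # used for S2, S3, PC
    # round to the nearest valid score (ties broken toward the lower score)
    rounded = VALID[np.abs(VALID[None, :] - pred[:, None]).argmin(axis=1)]
    s1 = np.mean(rounded != gold)
    s2 = np.mean(np.abs(gold - clipped))
    s3 = np.mean((gold - clipped) ** 2)
    pc = pearsonr(clipped, gold)[0]
    return s1, s2, s3, pc

# example: gold scores vs. raw regressor outputs
print(score_metrics([3.0, 3.5, 4.0, 2.5], [3.1, 3.4, 4.2, 3.0]))
```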
While our system yields improvements by all four measures, its improvement over the baseline S1 score is not significant. These results mean that the greatest improvements our system makes are that it ensures that our score predictions are not too often very far away from an essay’s actual score, as making such predictions would tend to drive up S3, yielding a relative error reduction in S3 of 15.8%, and it also ensures a better correlation between predicted and actual scores, thus yielding the 16.6% improvement in PC.7 It also gives more modest improvements in how frequently exactly the right score is predicted (S1) and is better at predicting scores closer to the actual scores (S2). 5.3 Feature Ablation To gain insight into how much impact each of the feature types has on our system, we perform fea6All significance tests are paired t-tests. 7These numbers are calculated B−O B−P where B is the baseline system’s score, O is our system’s score, and P is a perfect score. Perfect scores for error measures and PC are 0 and 1 respectively. 1540 ture ablation experiments in which we remove the feature types from our system one-by-one. Results of the ablation experiments when performed using the four scoring metrics are shown in Table 5. The top line of each subtable shows what our system’s score would be if we removed just one of the feature types from our system. So to see how our system performs by the S1 metric if we remove only predicted thesis clarity error features, we would look at the first row of results of Table 5(a) under the column headed by the number 7 since predicted thesis clarity errors are the seventh feature type introduced in Section 4. The number here tells us that our system’s S1 score without this feature type is .502. Since Table 4 shows that when our system includes this feature type (along with all the other feature types), it obtains an S1 score of .488, this feature type’s removal costs our system .014 S1 points, and thus its inclusion has a beneficial effect on the S1 score. From row 1 of Table 5(a), we can see that removing feature 4 yields a system with the best S1 score in the presence of the other feature types in this row. For this reason, we permanently remove feature 4 from the system before we generate the results on line 2. Thus, we can see what happens when we remove both feature 4 and feature 5 by looking at the second entry in row 2. And since removing feature 6 harms performance least in the presence of row 2’s other feature types, we permanently remove both 4 and 6 from our feature set when we generate the third row of results. We iteratively remove the feature type that yields a system with the best performance in this way until we get to the last line, where only one feature type is used to generate each result. Since the feature type whose removal yields the best system is always the rightmost entry in a line, the order of column headings indicates the relative importance of the feature types, with the leftmost feature types being most important to performance and the rightmost feature types being least important in the presence of the other feature types. This being the case, it is interesting to note that while the relative importance of different feature types does not remain exactly the same if we measure performance in different ways, we can see that some feature types tend to be more important than others in a majority of the four scoring metrics. 
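In other words, the ablation is a greedy backward elimination over feature groups; a compact sketch, where train_and_score is a hypothetical helper that retrains the per-prompt regressors on the given feature groups and returns the chosen metric:

```python
def greedy_ablation(feature_groups, train_and_score, lower_is_better=True):
    """Iteratively drop the feature group whose removal hurts least.
    feature_groups: list of group ids, e.g. [1, 2, 3, 4, 5, 6, 7].
    train_and_score(groups): hypothetical helper returning the metric
    obtained when training only on the given feature groups."""
    remaining = list(feature_groups)
    table = []                                   # one row per ablation round
    sign = 1 if lower_is_better else -1
    while len(remaining) > 1:
        row = {g: train_and_score([h for h in remaining if h != g])
               for g in remaining}
        table.append(row)
        # the group whose removal yields the best score is least important
        least_important = min(row, key=lambda g: sign * row[g])
        remaining.remove(least_important)
    return table
```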
Features 2 (n-grams), 3 (thesis clarity keywords), and 6 (manually annotated LDA top(a) Results using the S1 metric 3 5 1 7 2 6 4 .527 .502 .512 .502 .511 .500 .488 .527 .502 .512 .501 .513 .500 .525 .508 .505 .505 .504 .513 .527 .520 .513 .523 .520 .506 .541 .527 (b) Results using the S2 metric 2 6 3 1 4 5 7 .356 .350 .348 .350 .349 .348 .348 .351 .349 .348 .348 .348 .347 .351 .349 .348 .348 .347 .350 .349 .348 .348 .358 .351 .349 .362 .352 (c) Results using the S3 metric 2 6 1 5 4 7 3 .221 .201 .197 .197 .197 .197 .196 .215 .201 .197 .196 .196 .196 .212 .203 .199 .197 .196 .212 .203 .199 .197 .212 .203 .199 .223 .204 (d) Results using the PC metric 6 3 2 1 7 5 4 .326 .332 .303 .344 .348 .348 .361 .326 .332 .304 .343 .348 .348 .324 .337 .292 .345 .352 .322 .337 .297 .346 .316 .321 .323 .218 .325 Table 5: Feature ablation results. In each subtable, the first row shows how our system would perform if each feature type was removed. We remove the least important feature type, and show in the next row how the adjusted system would perform without each remaining type. For brevity, a feature type is referred to by its feature number: (1) RI; (2) n-grams; (3) thesis clarity keywords; (4) prompt adherence keywords; (5) LDA topics; (6) manually annotated LDA topics; and (7) predicted thesis clarity errors. ics) tend to be the most important feature types, as they tend to be the last feature types removed in the ablation subtables. Features 1 (RI) and 5 (LDA topics) are of middling importance, with neither ever being removed first or last, and each tending to have a moderate effect on performance. Finally, while features 4 (prompt adherence keywords) and 7 (predicted thesis clarity errors) may by themselves provide useful information to our system, in the presence of the other feature types they tend to be the least important to performance as they are often the first feature types removed. While there is a tendency for some feature types to always be important (or unimportant) regardless of which scoring metric is used to measure per1541 S1 S2 S3 PC Gold .25 .50 .75 .25 .50 .75 .25 .50 .75 .25 .50 .75 2.0 3.35 3.56 3.79 3.40 3.52 3.73 3.06 3.37 3.64 3.06 3.37 3.64 2.5 3.43 3.63 3.80 3.25 3.52 3.79 3.24 3.45 3.67 3.24 3.46 3.73 3.0 3.64 3.78 3.85 3.56 3.70 3.90 3.52 3.65 3.74 3.52 3.66 3.79 3.5 3.73 3.81 3.88 3.63 3.78 3.90 3.59 3.70 3.81 3.60 3.74 3.85 4.0 3.76 3.84 3.88 3.70 3.83 3.90 3.63 3.75 3.84 3.66 3.78 3.88 Table 6: Regressor scores for our system. formance, the relative importance of different feature types does not always remain consistent if we measure performance in different ways. For example, while we identified feature 3 (thesis clarity keywords) as one of the most important feature types generally due to its tendency to have a large beneficial impact on performance, when we are measuring performance using S3, it is the least useful feature type. Furthermore, its removal increases the S3 score by a small amount, meaning that its inclusion actually makes our system perform worse with respect to S3. Though feature 3 is an extreme example, all feature types fluctuate in importance, as we see when we compare their orders of removal among the four ablation subtables. Hence, it is important to know how performance is measured when building a system for scoring prompt adherence. Feature 3 is not the only feature type whose removal sometimes has a beneficial impact on performance. As we can see in Table 5(b), the removal of features 4, 5, and 7 improves our system’s S2 score by .001 points. 
The same effect occurs in Table 5(c) when we remove features 4, 7, and 3. These examples illustrate that under some scoring metrics, the inclusion of some feature types is actively harmful to performance. Fortunately, this effect does not occur in any other cases than the two listed above, as most feature types usually have a beneficial or at least neutral impact on our system’s performance. For those feature types whose effect on performance is neutral in the first lines of ablation results (feature 4 in S1, features 3, 5, and 7 in S2, and features 1, 4, 5, and 7 in S3), it is important to note that their neutrality does not mean that they are unimportant. It merely means that they do not improve performance in the presence of other feature types. We can see this is the case by noting that they are not all the least important feature types in their respective subtables as indicated by column order. For example, by the time feature 1 gets permanently removed in Table 5(c), its removal harms performance by .002 S3 points. 5.4 Analysis of Predicted Scores To more closely examine the behavior of our system, in Table 6 we chart the distributions of scores it predicts for essays having each gold standard score. As an example of how to read this table, consider the number 3.06 appearing in row 2.0 in the .25 column of the S3 region. This means that 25% of the time, when our system with parameters tuned for optimizing S3 is presented with a test essay having a gold standard score of 2.0, it predicts that the essay has a score less than or equal to 3.06. From this table, we see that our system has a strong bias toward predicting more frequent scores as there are no numbers less than 3.0 in the table, and about 93.7% of all essays have gold standard scores of 3.0 or above. Nevertheless, our system does not rely entirely on bias, as evidenced by the fact that each column in the table has a tendency for its scores to ascend as the gold standard score increases, implying that our system has some success at predicting lower scores for essays with lower gold standard prompt adherence scores. Another interesting point to note about this table is that the difference in error weighting between the S2 and S3 scoring metrics appears to be having its desired effect, as every entry in the S3 subtable is less than its corresponding entry in the S2 subtable due to the greater penalty the S3 metric imposes for predictions that are very far away from the gold standard scores. 6 Conclusion We proposed a feature-rich approach to the underinvestigated problem of predicting essay-level prompt adherence scores on student essays. In an evaluation on 830 argumentative essays selected from the ICLE corpus, our system significantly outperformed a Random Indexing based baseline by several evaluation metrics. To stimulate further research on this task, we make all our annotations, including our prompt adherence scores, the LDA topic annotations, and the error annotations publicly available. 1542 References Yigal Attali and Jill Burstein. 2006. Automated essay scoring with E-rater v.2.0. Journal of Technology, Learning, and Assessment, 4(3). David M. Blei, Andrew Y. Ng, and Michael I. Jordan. 2003. Latent Dirichlet Allocation. Journal of Machine Learning Research, 3:993–1022. Chih-Chung Chang and Chih-Jen Lin, 2001. LIBSVM: A library for support vector machines. Software available at http://www.csie.ntu. edu.tw/˜cjlin/libsvm. Scott Deerwester, Susan T. Dumais, George W. Furnas, Thomas K. Landauer, and Richard Harshman. 1990. 
Indexing by latent smeantic analysis. Journal of American Society of Information Science, 41(6):391–407. Sylviane Granger, Estelle Dagneaux, Fanny Meunier, and Magali Paquot. 2009. International Corpus of Learner English (Version 2). Presses universitaires de Louvain. Derrick Higgins and Jill Burstein. 2007. Sentence similarity measures for essay coherence. In Proceedings of the 7th International Workshop on Computational Semantics. Derrick Higgins, Jill Burstein, Daniel Marcu, and Claudia Gentile. 2004. Evaluating multiple aspects of coherence in student essays. In Human Language Technologies: The 2004 Annual Conference of the North American Chapter of the Association for Computational Linguistics, pages 185–192. Derrick Higgins, Jill Burstein, and Yigal Attali. 2006. Identifying off-topic student essays without topicspecific training data. Natural Language Engineering, 12(2):145–159. David Jurgens and Keith Stevens. 2010. The S-Space package: An open source package for word space models. In Proceedings of the ACL 2010 System Demonstrations, pages 30–35. Pentti Kanerva, Jan Kristoferson, and Anders Holst. 2000. Random indexing of text samples for latent semantic analysis. In Proceedings of the 22nd Annual Conference of the Cognitive Science Society, pages 103–106. Scott Kirkpatrick, C. D. Gelatt, and Mario P. Vecchi. 1983. Optimization by simulated annealing. Science, 220(4598):671–680. Thomas K. Landauer and Susan T. Dutnais. 1997. A solution to plato’s problem: The latent semantic analysis theory of acquisition, induction, and representation of knowledge. Psychological review, pages 211–240. Thomas K. Landauer, Darrell Laham, and Peter W. Foltz. 2003. Automated scoring and annotation of essays with the Intelligent Essay AssessorTM˙In Automated Essay Scoring: A Cross-Disciplinary Perspective, pages 87–112. Lawrence Erlbaum Associates, Inc., Mahwah, NJ. Annie Louis and Derrick Higgins. 2010. Off-topic essay detection using short prompt texts. In Proceedings of the NAACL HLT 2010 Fifth Workshop on Innovative Use of NLP for Building Educational Applications, pages 92–95. Andrew Kachites McCallum. 2002. MALLET: A Machine Learning for Language Toolkit. http: //mallet.cs.umass.edu. Eleni Miltsakaki and Karen Kukich. 2004. Evaluation of text coherence for electronic essay scoring systems. Natural Language Engineering, 10(1):25–55. Robert Parker, David Graf, Junbo Kong, Ke Chen, and Kazuaki Maeda. 2009. English Gigaword Fourth Edition. Linguistic Data Consortium, Philadelphia. Isaac Persing and Vincent Ng. 2013. Modeling thesis clarity in student essays. In Proceedings of the 51st Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers), pages 260–269. Isaac Persing, Alan Davis, and Vincent Ng. 2010. Modeling organization in student essays. In Proceedings of the 2010 Conference on Empirical Methods in Natural Language Processing, pages 229– 239. Magnus Sahlgren. 2005. An introduction to random indexing. In Methods and Applications of Semantic Indexing Workshop at the 7th International Conference on Terminology and Knowledge Engineering. Mark D. Shermis and Jill C. Burstein. 2003. Automated Essay Scoring: A Cross-Disciplinary Perspective. Lawrence Erlbaum Associates, Inc., Mahwah, NJ. Mark D. Shermis, Jill Burstein, Derrick Higgins, and Klaus Zechner. 2010. Automated essay scoring: Writing assessment and instruction. In International Encyclopedia of Education (3rd edition). Elsevier, Oxford, UK. Yiming Yang and Jan O. Pedersen. 1997. 
A comparative study on feature selection in text categorization. In Proceedings of the 14th International Conference on Machine Learning, pages 412–420.
2014
144
Proceedings of the 52nd Annual Meeting of the Association for Computational Linguistics, pages 1544–1554, Baltimore, Maryland, USA, June 23-25 2014. c⃝2014 Association for Computational Linguistics ConnotationWordNet: Learning Connotation over the Word+Sense Network Jun Seok Kang Song Feng Leman Akoglu Yejin Choi Department of Computer Science Stony Brook University Stony Brook, NY 11794-4400 junkang, songfeng, leman, [email protected] Abstract We introduce ConnotationWordNet, a connotation lexicon over the network of words in conjunction with senses. We formulate the lexicon induction problem as collective inference over pairwise-Markov Random Fields, and present a loopy belief propagation algorithm for inference. The key aspect of our method is that it is the first unified approach that assigns the polarity of both word- and sense-level connotations, exploiting the innate bipartite graph structure encoded in WordNet. We present comprehensive evaluation to demonstrate the quality and utility of the resulting lexicon in comparison to existing connotation and sentiment lexicons. 1 Introduction We introduce ConnotationWordNet, a connotation lexicon over the network of words in conjunction with senses, as defined in WordNet. A connotation lexicon, as introduced first by Feng et al. (2011), aims to encompass subtle shades of sentiment a word may conjure, even for seemingly objective words such as “sculpture”, “Ph.D.”, “rosettes”. Understanding the rich and complex layers of connotation remains to be a challenging task. As a starting point, we study a more feasible task of learning the polarity of connotation. For non-polysemous words, which constitute a significant portion of English vocabulary, learning the general connotation at the word-level (rather than at the sense-level) would be a natural operational choice. However, for polysemous words, which correspond to most frequently used words, it would be an overly crude assumption that the same connotative polarity should be assigned for all senses of a given word. For example, consider “abound”, for which lexicographers of WordNet prescribe two different senses: • (v) abound: (be abundant of plentiful; exist in large quantities) • (v) abound, burst, bristle: (be in a state of movement or action) “The room abounded with screaming children”; “The garden bristled with toddlers” For the first sense, which is the most commonly used sense for “abound”, the general overtone of the connotation would seem positive. That is, although one can use this sense in both positive and negative contexts, this sense of “abound” seems to collocate more often with items that are good to be abundant (e.g., “resources”), than unfortunate items being abundant (e.g., “complaints”). However, as for the second sense, for which “burst” and “bristle” can be used interchangeably with respect to this particular sense,1 the general overtone is slightly more negative with a touch of unpleasantness, or at least not as positive as that of the first sense. Especially if we look up the WordNet entry for “bristle”, there are noticeably more negatively connotative words involved in its gloss and examples. This word sense issue has been a universal challenge for a range of Natural Language Processing applications, including sentiment analysis. Recent studies have shown that it is fruitful to tease out subjectivity and objectivity corresponding to different senses of the same word, in order to improve computational approaches to sentiment analysis (e.g. Pestian et al. 
(2012), Mihalcea et al. (2012), Balahur et al. (2014)). Encouraged by these recent successes, in this study we investigate whether we can attain similar gains if we model the connotative polarity of senses separately. There is one potential practical issue we would like to point out in building a sense-level lexical resource, however. End-users of such a lexicon may not wish to deal with Word Sense Disambiguation (WSD), which is known to be often too noisy to be incorporated into the pipeline with respect to other NLP tasks. [Footnote 1: Hence a sense in WordNet is defined by a synset (= synonym set), which is the set of words sharing the same sense.] As a result, researchers often would need to aggregate labels across different senses to derive the word-level label. Although such aggregation is not entirely unreasonable, it does not seem to be the most principled way of integrating available resources. Therefore, in this work, we present the first unified approach that learns both sense- and word-level connotations simultaneously. This way, end-users will have access to more accurate sense-level connotation labels if needed, while also having access to more general word-level connotation labels. We formulate the lexicon induction problem as collective inference over pairwise Markov Random Fields (pairwise-MRF) and derive a loopy belief propagation algorithm for inference. The key aspect of our approach is that we exploit the innate bipartite graph structure between words and senses encoded in WordNet. Although our approach seems conceptually natural, previous approaches, to the best of our knowledge, have not directly exploited these relations between words and senses for the purpose of deriving lexical knowledge over words and senses collectively. In addition, previous studies (for both sentiment and connotation lexicons) aimed to produce only one of the two aspects of the polarity, word-level or sense-level, while we address both. Another contribution of our work is the introduction of loopy belief propagation (loopy-BP) as a lexicon induction algorithm. Loopy-BP in our study achieves statistically significantly better performance than the constraint optimization approaches previously explored. In addition, it runs much faster and is considerably easier to implement. Last but not least, by using the probabilistic representation of a pairwise-MRF in conjunction with loopy-BP as inference, the resulting solution has a natural interpretation as the intensity of connotation. This contrasts with approaches that seek discrete solutions, such as Integer Linear Programming (Papadimitriou and Steiglitz, 1998). ConnotationWordNet, the final outcome of our study, is a new lexical resource that has connotation labels over both words and senses following the structure of WordNet. The lexicon is publicly available at: http://www.cs.sunysb.edu/~junkang/connotation_wordnet. In what follows, we will first describe the network of words and senses (Section 2), then introduce the representation of the network structure as pairwise Markov Random Fields, and a loopy belief propagation algorithm as collective inference (Section 3).
[Figure 1: GWORD+SENSE with words and senses.]
We then present comprehensive evaluation (Section 4 & 5 & 6), followed by related work (Section 7) and conclusion (Section 8). 2 Network of Words and Senses The connotation graph, called GWORD+SENSE, is a heterogeneous graph with multiple types of nodes and edges. As shown in Figure 1, it contains two types of nodes; (i) lemmas (i.e., words, 115K) and (ii) synsets (63K), and four types of edges; (t1) predicate-argument (179K), (t2) argumentargument (144K), (t3) argument-synset (126K), and (t4) synset-synset (3.4K) edges. The predicate-argument edges, first introduced by Feng et al. (2011), depict the selectional preference of connotative predicates (i.e., the polarity of a predicate indicates the polarity of its arguments) and encode their co-occurrence relations based on the Google Web 1T corpus. The argumentargument edges are based on the distributional similarities among the arguments. The argumentsynset edges capture the synonymy between argument nodes through the corresponding synsets. Finally, the synset-synset edges depict the antonym relations between synset pairs. In general, our graph construction is similar to that of Feng et al. (2013), but there are a few important differences. Most notably, we model both words and synsets explicitly, and exploit the membership relations between words and senses. We expect that edges between words and senses will encourage senses that belong to the same word to 1545 receive the same connotation label. Conversely, we expect that these edges will also encourage words that belong to the same sense (i.e., synset definition) to receive the same connotation label. Another benefit of our approach is that for various WordNet relations (e.g., antonym relations), which are defined over synsets (not over words), we can add edges directly between corresponding synsets, rather than projecting (i.e., approximating) those relations over words. Note that the latter, which has been employed by several previous studies (e.g., Kamps et al. (2004), Takamura et al. (2005), Andreevskaia and Bergler (2006), Su and Markert (2009), Lu et al. (2011), Kaji and Kitsuregawa (2007), Feng et al. (2013)), could be a source of noise, as one needs to assume that the semantic relation between a pair of synsets transfers over the pair of words corresponding to that pair of synsets. For polysemous words, this assumption may be overly strong. 3 Pairwise Markov Random Fields and Loopy Belief Propagation We formulate the task of learning sense- and wordlevel connotation lexicon as a graph-based classification task (Sen et al., 2008). More formally, we denote the connotation graph GWORD+SENSE by G = (V, E), in which a total of n word and synset nodes V = {v1, . . . , vn} are connected with typed edges e(vi, vj, t) ∈E, where edge types t ∈{pred-arg, arg-arg, syn-arg, syn-syn} depict the four edge types as described in Section 2. A neighborhood function N, where Nv = {u| e(u, v) ∈E} ⊆V , describes the underlying network structure. In our collective classification formulation, each node in V is represented as a random variable that takes a value from an appropriate class label domain; in our case, L = {+, −} for positive and negative connotation. In this classification task, we denote by Y the nodes the labels of which need to be assigned, and let yi refer to Yi’s label. 3.1 Pairwise Markov Random Fields We next define our objective function. 
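Before the objective is defined, it may help to fix a concrete representation for this typed graph. A sketch assuming networkx, with a tiny illustrative node and edge set echoing Figure 1 rather than the real resources:

```python
import networkx as nx

G = nx.Graph()

# word (lemma) nodes and synset nodes
G.add_nodes_from(["enjoy", "suffer", "profit", "wound"], kind="word")
G.add_nodes_from(["gain.n.01", "injure.v.01"], kind="synset")

# typed edges: t1 pred-arg, t2 arg-arg, t3 syn-arg, t4 syn-syn (antonyms)
G.add_edge("enjoy", "profit", etype="pred-arg")
G.add_edge("suffer", "wound", etype="pred-arg")
G.add_edge("profit", "wound", etype="arg-arg")
G.add_edge("gain.n.01", "profit", etype="syn-arg")
G.add_edge("injure.v.01", "wound", etype="syn-arg")
G.add_edge("gain.n.01", "injure.v.01", etype="syn-syn")   # antonym pair

# the neighborhood function N_v, with edge types kept for inference
neighbors = {v: [(u, G[v][u]["etype"]) for u in G[v]] for v in G}
print(neighbors["profit"])
```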
We propose to use an objective formulation that utilizes pairwise Markov Random Fields (MRFs) (Kindermann and Snell, 1980), which we adapt to our problem setting. MRFs are a class of probabilistic graphical models that are suited for solving inference problems in networked data. An MRF consists of an undirected graph where each node can be in any of a finite number of states (i.e., class labels). The state of a node is assumed to be dependent on each of its neighbors and independent of the other nodes in the graph. [Footnote 2: This assumption yields a pairwise Markov Random Field (MRF), a special case of general MRFs (Yedidia et al., 2003).] In pairwise MRFs, the joint probability of the graph can be written as a product of pairwise factors, parameterized over the edges. These factors are referred to as clique potentials in general MRFs, which are essentially functions that collectively determine the graph's joint probability. Specifically, let G = (V, E) denote a network of random variables, where V consists of the unobserved variables Y that need to be assigned values from label set L. Let Ψ denote a set of clique potentials that consists of two types of factors:
• For each Yi ∈ Y, ψi ∈ Ψ is a prior mapping ψi : L → R≥0, where R≥0 denotes the nonnegative real numbers.
• For each e(Yi, Yj, t) ∈ E, ψt_ij ∈ Ψ is a compatibility mapping ψt_ij : L × L → R≥0.

Objective formulation. Given an assignment y to all the unobserved variables Y and x to the observed ones X (variables with known labels, if any), our objective function is associated with the following joint probability distribution

$$P(y \mid x) = \frac{1}{Z(x)} \prod_{Y_i \in Y} \psi_i(y_i) \prod_{e(Y_i, Y_j, t) \in E} \psi^t_{ij}(y_i, y_j) \quad (1)$$

where Z(x) is the normalization function. Our goal is then to infer the maximum likelihood assignment of states (i.e., labels) to unobserved variables (i.e., nodes) that will maximize Equation (1).

Problem Definition. Having introduced our graph-based classification task and objective formulation, we define our problem more formally. Given
- a connotation graph G = (V, E) of words and synsets connected with typed edges,
- prior knowledge (i.e., probabilities) of (some or all) nodes belonging to each class, and
- the compatibility of two nodes with a given pair of labels being connected to each other,
classify the nodes Yi ∈ Y into one of two classes, L = {+, −}, such that the class assignments yi maximize our objective in Equation (1). We can further rank the network objects by the probability of their connotation polarity.

3.2 Loopy Belief Propagation
Finding the best assignments to the unobserved variables in our objective function is the inference problem. The brute-force approach through enumeration of all possible assignments is exponential and thus intractable. In general, exact inference is known to be NP-hard and there is no known algorithm which can be theoretically shown to solve the inference problem for general MRFs. Therefore, in this work we employ a computationally tractable (in fact, linearly scalable with network size) approximate inference algorithm called Loopy Belief Propagation (LBP) (Yedidia et al., 2003), which we extend to handle typed graphs like our connotation graph.
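The message-update and belief equations presented next, together with Algorithm 1, reduce to a short iterative loop. As a concrete point of reference, the sketch below implements typed-edge loopy BP over a small graph with the ϵ-parameterized compatibilities of Table 1; it is a simplified illustration (for instance, it also updates messages into seed nodes), not the authors' exact implementation.

```python
from math import prod

LABELS = ("+", "-")
EPS = 0.05   # illustrative value of the compatibility parameter

def compat(etype, yi, yj):
    """Edge-type-specific compatibilities in the spirit of Table 1."""
    if etype == "syn-syn":                        # antonym edges: flip labels
        return 1 - EPS if yi != yj else EPS
    e = 2 * EPS if etype == "arg-arg" else EPS    # weaker homophily for arg-arg
    return 1 - e if yi == yj else e

def norm(d):
    z = sum(d.values()) or 1.0
    return {k: v / z for k, v in d.items()}

def loopy_bp(neighbors, priors, iters=100, tol=1e-6):
    """neighbors: node -> list of (neighbor, edge_type); must be symmetric.
       priors: node -> {label: prob} for seed nodes; others start uniform."""
    phi = {v: priors.get(v, {l: 1.0 / len(LABELS) for l in LABELS})
           for v in neighbors}
    msg = {(u, v): {l: 1.0 for l in LABELS}
           for u in neighbors for v, _ in neighbors[u]}
    for _ in range(iters):
        new_msg, delta = {}, 0.0
        for u in neighbors:
            for v, etype in neighbors[u]:
                # Equation (2): message from u to v, for each label of v
                m = {yv: sum(compat(etype, yu, yv) * phi[u][yu] *
                             prod(msg[(w, u)][yu]
                                  for w, _ in neighbors[u] if w != v)
                             for yu in LABELS)
                     for yv in LABELS}
                m = norm(m)
                delta = max(delta, *(abs(m[l] - msg[(u, v)][l]) for l in LABELS))
                new_msg[(u, v)] = m
        msg = new_msg
        if delta < tol:
            break
    # Equation (3): final beliefs from priors and incoming messages
    return {v: norm({l: phi[v][l] *
                     prod(msg[(u, v)][l] for u, _ in neighbors[v])
                     for l in LABELS})
            for v in neighbors}

# usage with the toy graph sketched earlier:
# beliefs = loopy_bp(neighbors, priors={"enjoy":  {"+": 0.95, "-": 0.05},
#                                       "suffer": {"+": 0.05, "-": 0.95}})
```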
Our inference algorithm is based on iterative message passing, and its core can be concisely expressed as the following two equations:

$$m_{i \rightarrow j}(y_j) = \alpha \sum_{y_i \in L} \Big( \psi^t_{ij}(y_i, y_j)\, \psi_i(y_i) \prod_{Y_k \in N_i \cap Y \setminus Y_j} m_{k \rightarrow i}(y_i) \Big), \quad \forall y_j \in L \quad (2)$$

$$b_i(y_i) = \beta\, \psi_i(y_i) \prod_{Y_j \in N_i \cap Y} m_{j \rightarrow i}(y_i), \quad \forall y_i \in L \quad (3)$$

A message m_{i→j} is sent from node i to node j and captures the belief of i about j, which is the probability distribution over the labels of j; i.e., what i "thinks" j's label is, given the current label of i and the type of the edge that connects i and j. Beliefs refer to the marginal probability distributions of nodes over labels; for example, b_i(y_i) denotes the belief of node i having label y_i. α and β are normalization constants, which respectively ensure that each message and each set of marginal probabilities sum to 1. At every iteration, each node computes its belief based on the messages received from its neighbors, and uses the compatibility mapping to transform its belief into messages for its neighbors. The key idea is that after enough iterations of message passing between the nodes, the "conversations" are likely to come to a consensus, which determines the marginal probabilities of all the unknown variables.

The pseudo-code of our method is given in Algorithm 1. It first initializes all messages to 1 and the priors to unbiased (i.e., equal) probabilities for all nodes except the seed nodes for which the sentiment is known (lines 3-9). It then proceeds by making each Yi ∈ Y communicate messages with its neighbors in an iterative fashion until the messages stabilize (lines 10-14), i.e., convergence is reached. [Footnote 3: Although convergence is not theoretically guaranteed, in practice LBP converges to beliefs within a small threshold of change (e.g., 10^-6) fairly quickly with accurate results (Pandit et al., 2007; McGlohon et al., 2009; Akoglu et al., 2013).] At convergence, we calculate the marginal probabilities, that is, the probability of assigning Yi the label yi, by computing the final beliefs bi(yi) (lines 15-17). We use these maximum likelihood probabilities for label assignment; for each node i, we assign the label Li ← arg max_{yi} bi(yi).

Algorithm 1: CONNOTATION INFERENCE
1  Input: Connotation graph G = (V, E), prior potentials ψs for seed words s ∈ S, and compatibility potentials ψt_ij
2  Output: Connotation label probabilities for each node i ∈ V \ S
3  foreach e(Yi, Yj, t) ∈ E do              // initialize messages
4      foreach yj ∈ L do
5          m_{i→j}(yj) ← 1
6  foreach i ∈ V do                         // initialize priors
7      foreach yj ∈ L do
8          if i ∈ S then φi(yj) ← ψi(yj)
9          else φi(yj) ← 1/|L|
10 repeat                                   // iterative message passing
11     foreach e(Yi, Yj, t) ∈ E, Yj ∈ Y_{V\S} do
12         foreach yj ∈ L do
13             use Equation (2)
14 until all messages stop changing
15 foreach Yi ∈ Y_{V\S} do                  // compute final beliefs
16     foreach yi ∈ L do
17         use Equation (3)

To completely define our algorithm, we need to instantiate the potentials Ψ, in particular the priors and the compatibilities, which we discuss next.

Priors. The prior beliefs ψi of nodes can be suitably initialized if there is any prior knowledge about their connotation sentiment (e.g., enjoy is positive, suffer is negative). As such, our method is flexible enough to integrate available side information. In case there is no prior knowledge available, each node is initialized to be equally likely to have any of the possible labels, i.e., 1/|L|, as in Algorithm 1 (line 9).

Compatibilities. The compatibility potentials can be thought of as matrices, with entries
1547 ψt ij(yi, yj) that give the likelihood of a node having label yi, given that it has a neighbor with label yj to which it is connected through a type t edge. A key difference of our method from earlier models is that we use clique potentials that differ for edge types, since the connotation graph is heterogeneous. This is exactly because the compatibility of class labels of two adjacent nodes depends on the type of the edge connecting them: e.g., + syn-arg −−−−−→+ is highly compatible, whereas + syn-syn −−−−−→+ is unlikely; as syn-arg edges capture synonymy; i.e., words-sense memberships, while syn-syn edges depict antonym relations. A sample instantiation of the compatibilities is shown in Table 1. Notice that the potentials for pred-arg, arg-arg, and syn-arg capture homophily, i.e., nodes with the same label are likely to connect to each other through these types of edges.4 On the other hand, syn-syn edges connect nodes that are antonyms of each other, and thus the compatibilities capture the reverse relationship among their labels. Table 1: Instantiation of compatibility potentials. Entry ψt ij(yi, yj) is the compatibility of a node with label yi having a neighbor labeled yj, given the edge between i and j is type t, for small ϵ. t: t1 A P + − + 1-ϵ ϵ − ϵ 1-ϵ t: t2 A A + − + 1-2ϵ 2ϵ − 2ϵ 1-2ϵ (t1) pred-arg (t2) arg-arg t: t3 A S + − + 1-ϵ ϵ − ϵ 1-ϵ t: t4 S S + − + ϵ 1-ϵ − 1-ϵ ϵ (t3) syn-arg (t4) syn-syn (synonym relations) (antonym relations) Complexity analysis Most demanding component of Algorithm 1 is the iterative message passing over the edges (lines 10-14), with time complexity O(ml2r), where m = |E| is the number of edges in the connotation graph, l = |L|, the classes, and r, the iterations until convergence. Often, l is quite small (in our case, l = 2) and r ≪m. Thus running time grows linearly with the number of edges and is scalable to large datasets. 4arg-arg edges are based on co-occurrence (see Section 2), which does not carry as strong indication of the same connotation as e.g., synonymy. Thus, we enforce less homophily for nodes connected through edges of arg-arg type. 4 Evaluation I: Agreement with Sentiment Lexicons ConnotationWordNet is expected to be the superset of a sentiment lexicon, as it is highly likely for any word with positive/negative sentiment to carry connotation of the same polarity. Thus, we use two conventional sentiment lexicons, General Inquirer (GENINQ) (Stone et al., 1966) and MPQA (Wilson et al., 2005b), as surrogates to measure the performance of our inference algorithm. 4.1 Variants of Graph Construction The construction of the connotation graph, denoted by GWORD+SENSE, which includes words and synsets, has been described in Section 2. In addition to this graph, we tried several other graph constructions, the first three of which have previously been used in (Feng et al., 2013). We briefly describe these graphs below, and compare performance on all the graphs in the proceeding. GWORD W/ PRED-ARG: This is a (bipartite) subgraph of GWORD+SENSE, which only includes the connotative predicates and their arguments. As such, it contains only type t1 edges. The edges between the predicates and the arguments can be weighted by their Point-wise Mutual Information (PMI)5 based on the Google Web 1T corpus. GWORD W/ OVERLAY: The second graph is also a proper subgraph of GWORD+SENSE, which includes the predicates and all the argument words. Predicate words are connected to their arguments as before. 
In addition, argument pairs (a1, a2) are connected if they occurred together in the “a1 and a2” or “a2 and a1” coordination (Hatzivassiloglou and McKeown, 1997; Pickering and Branigan, 1998). This graph contains both type t1 and t2 edges. The edges can also be weighted based on the distributional similarities of the word pairs. GWORD: The third graph is a super-graph of GWORD W/ OVERLAY, with additional edges, where argument pairs in synonym and antonym relation are connected to each other. Note that unlike the connotation graph GWORD+SENSE, it does not contain any synset nodes. Rather, the words that are synonyms or antonyms of each other are directly linked in the graph. As such, this graph contains all edge types t1 through t4. 5PMI scores are widely used in previous studies to measure association between words (e.g., (Church and Hanks, 1990), (Turney, 2001), (Newman et al., 2009)). 1548 GWORD+SENSE W/ SYNSIM: This is a supergraph of our original GWORD+SENSE graph; that is, it has all the predicate, arguments, and synset nodes, as well as the four types of edges between them. In addition, we add edges of a fifth type t5 between the synset nodes to capture their similarity. To define similarity, we use the glossary definitions of the synsets and derive three different scores. Each score utilizes the count(s1, s2) of overlapping nouns, verbs, and adjectives/adverbs among the glosses of the two synsets s1 and s2. GWORD+SENSE W/ SYNSIM1: We discard edges with count less than 3. The weighted version has the counts normalized between 0 and 1. GWORD+SENSE W/ SYNSIM2: We normalize the counts by the length of the gloss (the avg of two lengths), that is, p = count / avg(len gloss(s1), len gloss(s2)) and discard edges with p < 0.5. The weighted version contains p values as edge weights. GWORD+SENSE W/ SYNSIM3: To further sparsify the graph we discard edges with p < 0.6. To weigh the edges, we use the cosine similarity between the gloss vectors of the synsets based on the TF-IDF values of the words the glosses contain. Note that the connotation inference algorithm, as given in Algorithm 1, remains exactly the same for all the graphs described above. The only difference is the set of parameters used; while GWORD W/ PRED-ARG and GWORD W/ OVERLAY contain one and two edge types, respectively and only use compatibilities (t1) and (t2), GWORD uses all four as given in Table 1. The GWORD+SENSE W/ SYNSIM graphs use an additional compatibility matrix for the synset similarity edges of type t5, which is the same as the one used for t1, i.e., similar synsets are likely to have the same connotation label. This flexibility is one of the key advantages of our algorithm as new types of nodes and edges can be added to the graph seamlessly. 4.2 Sentiment-Lexicon based Performance In this section, we first compare the performance of our connotation graph GWORD+SENSE to graphs that do not include synset nodes but only words. Then we analyze the performance when the additional synset similarity edges are added. First, we briefly describe our performance measures. The sentiment lexicons we use as gold standard are small, compared to the size (i.e., number of words) our graphs contain. 
Thus, we first find the overlap between each graph and a sentiGENINQ MPQA P R F F Variations of GWORD W/ PRED-ARG 88.0 67.6 76.5 57.3 W/ PRED-ARG-W 84.9 68.9 76.1 57.8 W/ OVERLAY 87.8 70.4 78.1 58.4 W/ OVERLAY-W 82.2 67.7 74.2 54.2 GWORD 88.5 83.1 85.7 69.7 GWORD-W 75.5 71.5 73.4 53.2 Variations of GWORD+SENSE GWORD+SENSE 88.8 84.1 86.4 70.0 GWORD+SENSE-W 76.8 73.0 74.9 54.6 W/ SYNSIM1 87.2 83.3 85.2 67.9 W/ SYNSIM2 83.9 80.8 82.3 65.1 W/ SYNSIM3 86.5 83.2 84.8 67.8 W/ SYNSIM1-W 88.0 84.3 86.1 69.2 W/ SYNSIM2-W 86.4 83.7 85.0 68.5 W/ SYNSIM3-W 86.7 83.4 85.0 68.2 Table 2: Connotation inference performance on various graphs. ‘-W’ indicates weighted versions (see §4.1). P: precision, R: recall, F: F1-score (%). ment lexicon. Note that the overlap size may be smaller than the lexicon size, as some sentiment words may be missing from our graphs. Then, we calculate the number of correct label assignments. As such, precision is defined as (correct / overlap), and recall as (correct / lexicon size). Finally, F1-score is their harmonic mean and reflects the overall accuracy. As shown in Table 2 (top), we first observe that including the synonym and antonym relations in the graph, as with GWORD and GWORD+SENSE, improve the performance significantly, almost by an order of magnitude, over graphs GWORD W/ PREDARG and GWORD W/ OVERLAY that do not contain those relation types. Furthermore, we notice that the performances on the GWORD+SENSE graph are better than those on the word-only graphs. This shows that including the synset nodes explicitly in the graph structure is beneficial. What is more, it gives us a means to obtain connotation labels for the synsets themselves, which we use in the evaluations in the next sections. Finally, we note that using the unweighted versions of the graphs provide relatively more robust performance, potentially due to noise in the relative edge weights. Next we analyze the performance when the new edges between synsets are introduced, as given in Table 2 (bottom). We observe that connecting the synset nodes by their gloss-similarity (at least in the ways we tried) does not yield better performance than on our original GWORD+SENSE graph. Different from earlier, the weighted versions of the similarity based graphs provide better perfor1549 mance than their unweighted counterparts. This suggests that glossary similarity would be a more robust means to correlate nodes; we leave it as future work to explore this direction for predicateargument and argument-argument relations. 4.3 Parameter Sensitivity Our belief propagation based connotation sentiment inference algorithm has one user-specified parameter ϵ (see Table 1). To study the sensitivity of its performance to the choice of ϵ, we reran our experiments for ϵ = {0.02, 0.04, . . . , 0.24}6 and report the accuracy results on our GWORD+SENSE in Figure 2 for the two lexicons. The results indicate that the performances remain quite stable across a wide range of the parameter choice. precision recall F-score Performance 0 20 40 60 80 100 ε 0.02 0.06 0.10 0.14 0.18 0.22 precision recall F-score Performance 0 20 40 60 80 100 ε 0.02 0.06 0.10 0.14 0.18 0.22 (a) GENINQ EVAL (b) MPQA EVAL Figure 2: Performance is stable across various ϵ. 5 Evaluation II: Human Evaluation on ConnotationWordNet In this section, we present the result of human evaluation we executed using Amazon Mechanical Turk (AMT). We collect two separate sets of labels: a set of labels at the word-level, and another set at the sense-level. 
We first describe the labeling process for sense-level connotation. We selected 350 polysemous words and one of their senses, and each Turker was asked to rate the connotative polarity of a given word (or of a given sense) on a scale from -5 to 5, with 0 being neutral.7 For each word, we asked 5 Turkers to provide ratings and took the average of the 5 ratings as the connotative intensity score of the word. We labeled a word as negative if its intensity score is less than 0 and as positive otherwise. For word-level labels, we applied a similar procedure.

6 Note that for ϵ > 0.25, the compatibilities of ψt2 in Table 1 are reversed, hence the maximum of 0.24.
7 Because senses in WordNet can be tricky to understand, care should be taken in designing the task so that the Turkers focus only on the corresponding sense of a word. We therefore provided the part-of-speech tag, the WordNet gloss of the selected sense, and a few examples as given in WordNet. As an incentive, each Turker was rewarded $0.07 per HIT, which consists of 10 words to label.

Lexicon               Word-level   Sense-level
SentiWordNet          27.22        14.29
OpinionFinder         31.95        –
Feng2013              62.72        –
GWORD+SENSE(95%)      84.91        83.43
GWORD+SENSE(99%)      84.91        83.71
E-GWORD+SENSE(95%)    86.98        86.29
E-GWORD+SENSE(99%)    86.69        85.71

Table 3: Word-/sense-level evaluation results.

5.1 Word-Level Evaluation
We first evaluate the word-level assignment of connotation, as shown in Table 3. The agreement between the new lexicon and the human judges varies between 84.91% and 86.98%. Sentiment lexicons such as SentiWordNet (Baccianella et al., 2010) and OpinionFinder (Wilson et al., 2005a) show a low agreement rate with the human judges, which is somewhat expected: the human judges in this study are labeling subtle connotation, not more explicit sentiment. OpinionFinder's low agreement rate was mainly due to the low hit rate of its words (successful look-up rate of 33.43%). Feng2013 is the lexicon presented in (Feng et al., 2013) and showed a relatively higher hit rate of 72.13%. Note that belief propagation was run until 95% and 99% of the nodes had converged in their beliefs. In addition, the seed words with known connotation labels originally consist of 20 positive and 20 negative predicates. We also extended the seed set with the sentiment lexicon words and denote these runs with E- for 'Extended'.

5.2 Sense-Level Evaluation
We also examined the agreement rates at the sense level. Since OpinionFinder and Feng2013 do not provide polarity scores at the sense level, we excluded them from this evaluation. Because sense-level polarity assignment is a harder (more subtle) task, the performance of all lexicons decreased to some degree in comparison to the word-level evaluations.

5.3 Pair-wise Intensity Ranking
A notable strength of our induction algorithm is that its outcome can be interpreted as an intensity of the corresponding connotation. But are these values meaningful? We answer this question in this section. We formulate the pair-wise ranking task as a binary decision task: given a pair of words, we ask which one is more positive (or more negative) than the other. Since we collected the human labels on a numeric scale, we already have this information at hand.

Lexicon               Correct   Undecided
SentiWordNet          33.77     23.34
GWORD+SENSE(95%)      74.83     0.58
GWORD+SENSE(99%)      73.01     0.58
E-GWORD+SENSE(95%)    73.84     1.16
E-GWORD+SENSE(99%)    74.01     1.16

Table 4: Results of pair-wise intensity evaluation, for intensity difference threshold = 2.0.
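To make the pair-wise decision concrete, the following sketch compares two words by their induced connotation intensities and counts a prediction as correct when the ordering agrees with the averaged human ratings; pairs whose lexicon scores tie are counted as undecided. The dictionaries and toy scores are illustrative placeholders, not our actual evaluation code.

```python
def pairwise_rank_eval(lexicon, human, pairs, min_gap=2.0):
    """Pair-wise intensity ranking as a binary decision task.

    lexicon:  word -> induced connotation intensity (signed score)
    human:    word -> averaged Turker rating on the -5..5 scale
    pairs:    iterable of (w1, w2) word pairs
    min_gap:  minimum difference in human ratings for a pair to be used
    Returns (accuracy, undecided rate) over the pairs passing the gap filter.
    """
    correct = undecided = total = 0
    for w1, w2 in pairs:
        if abs(human[w1] - human[w2]) < min_gap:
            continue                      # difference too subtle; likely noisy
        total += 1
        if lexicon[w1] == lexicon[w2]:
            undecided += 1                # identical scores: no ordering possible
        elif (lexicon[w1] > lexicon[w2]) == (human[w1] > human[w2]):
            correct += 1                  # lexicon agrees with the human ordering
    if total == 0:
        return 0.0, 0.0
    return correct / total, undecided / total

# Toy example with hypothetical intensity values and human ratings.
lex = {"gift": 0.8, "delay": -0.3, "smile": 0.9}
gold = {"gift": 3.2, "delay": -1.4, "smile": 4.0}
print(pairwise_rank_eval(lex, gold, [("gift", "delay"), ("smile", "delay")]))
```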
Because different human judges have different notion of scales however, subtle differences are more likely to be noisy. Therefore, we experiment with varying degrees of differences in their scales, as shown in Figure 3. Threshold values (ranging from 0.5 to 3.0) indicate the minimum differences in scales for any pair of words, for the pair to be included in the test set. As expected, we observe that the performance improves as we increase the threshold (as pairs get better separated). Within range [0.5, 1.5] (249 pairs examined), the accuracies are as high as 68.27%, which shows that even the subtle differences of the connotative intensities are relatively well reflected in the new lexicons. SentiWordNet GWord+Sense(95%) GWord+Sense(99%) e-GWord+Sense(95%) e-GWord+Sense(99%) Accuracy (%) 40 60 80 Threshold 0.5 1.0 2.0 3.0 Figure 3: Trend of accuracy for pair-wise intensity evaluation over threshold The results for pair-wise intensity evaluation (threshold=2.0, 1,208 pairs) are given in Table 4. Despite that intensity is generally a harder property to measure (than the coarser binary categorization of polarities), our connotation lexicons perform surprisingly well, reaching up to 74.83% accuracy. Further study on the incorrect cases reveals that SentiWordNet has many pair of words with the same polarity score (23.34%). Such cases seems to be due to the limited score patterns of SentiWordNet. The ratio of such cases are accounted as Undecided in Table 4. 6 Evaluation III: Sentiment Analysis using ConnotationWordNet Finally, to show the utility of the resulting lexicon in the context of a concrete sentiment analysis task, we perform lexicon-based sentiment analysis. We experiment with SemEval dataset (Strapparava and Mihalcea, 2007) that includes the human labeled dataset for predicting whether a news headline is a good news or a bad news, which we expect to have a correlation with the use of connotative words that we focus on in this paper. The good/bad news are annotated with scores (ranging from -100 to 87). We construct several data sets by applying different thresholds on scores. For example, with the threshold set to 60, we discard the instances whose scores lie between -60 and 60. For comparison, we also test the connotation lexicon from (Feng et al., 2013) and the combined sentiment lexicon GENINQ+MPQA. Note that there is a difference in how humans judge the orientation and the degree of connotation for a given word out of context, and how the use of such words in context can be perceived as good/bad news. In particular, we conjecture that humans may have a bias toward the use of positive words, which in turn requires calibration from the readers’ minds (Pennebaker and Stone, 2003). That is, we might need to tone down the level of positiveness in order to correctly measure the actual intended positiveness of the message. With this in mind, we tune the appropriate calibration from a small training data, by using 1 fold from N fold cross validation, and using the remaining N −1 folds as testing. We simply learn the mixture coefficient λ to scale the contribution of positive and negative connotation values. We tune this parameter λ8 for other lexicons we compare against as well. Note that due to this parameter learning, we are able to report better performance for the connotation lexicon of (Feng et al., 2013) than what the authors have reported in their paper (labeled with *) in Table 5. 
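The calibration step can be summarized with a small sketch: a headline is scored by summing the connotation values of its words, with the positive contribution scaled down by a learned coefficient λ before being combined with the negative contribution. The lexicon entries are placeholders, and treating λ as a percentage weight on the positive side is one plausible reading of the mixture coefficient described above, not the exact formula used in our experiments.

```python
def headline_polarity(tokens, lexicon, lam=60):
    """Lexicon-based good/bad-news decision for a headline.

    tokens:  list of lower-cased words in the headline
    lexicon: word -> connotation value (positive or negative float)
    lam:     calibration coefficient, here interpreted as a percentage that
             tones down the positive contribution (an assumption reflecting
             the positive-word bias discussed above)
    """
    values = [lexicon.get(t, 0.0) for t in tokens]
    pos = sum(v for v in values if v > 0)
    neg = sum(v for v in values if v < 0)
    score = (lam / 100.0) * pos + neg   # scale down positiveness, keep negatives
    return "good" if score > 0 else "bad"

# λ would be chosen on a single held-out fold; 60 here is just a placeholder.
lexicon = {"win": 0.9, "celebrate": 0.7, "crash": -1.2}
print(headline_polarity("fans celebrate narrow win".split(), lexicon))  # good
```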
Table 5 shows the results for N=15, where the new lexicon consistently outperforms other competitive lexicons. In addition, Figure 4 shows that the performance does not change much based on the size of training data used for parameter tuning (N={5, 10, 15, 20}). 7 Related Work Several previous approaches explored the use of graph propagation for sentiment lexicon induction (Velikovich et al., 2010) and connotation lexicon 8What is reported is based on λ ∈{20, 40, 60, 80}. More detailed parameter search does not change the results much. 1551 Lexicon SemEval Threshold 20 40 60 80 Instance Size 955 649 341 86 Feng2013 71.5 77.1 81.6 90.5 GENINQ+MPQA 72.8 77.2 80.4 86.7 GWORD+SENSE(95%) 74.5 79.4 86.5 91.9 GWORD+SENSE(99%) 74.6 79.4 86.8 91.9 E-GWORD+SENSE(95%) 72.5 76.8 82.3 87.2 E-GWORD+SENSE(99%) 72.6 76.9 82.5 87.2 Feng2013* 70.8 74.6 80.8 93.5 GENINQ+MPQA* 64.5 69.0 74.0 80.5 Table 5: SemEval evaluation results, for N=15 Feng2013 MPQA+GenInq GWord+Sense(95%) GWord+Sense(99%) e-GWord+Sense(95%) e-GWord+Sense(99%) Accuracy (%) 50 60 70 80 N 5 10 15 20 Figure 4: Trend of SemEval performance over N, the number of CV folds induction (Feng et al., 2013). Our work introduces the use of loopy belief propagation over pairwise-MRF as an alternative solution to these tasks. At a high-level, both approaches share the general idea of propagating confidence or belief over the graph connectivity. The key difference, however, is that in our MRF representation, we can explicitly model various types of word-word, sense-sense and word-sense relations as edge potentials. In particular, we can naturally encode relations that encourage the same assignment (e.g., synonym) as well as the opposite assignment (e.g., antonym) of the polarity labels. Note that integration of the latter is not straightforward in the graph propagation framework. There have been a number of previous studies that aim to construct a word-level sentiment lexicon (Wiebe et al., 2005; Qiu et al., 2009) and a sense-level sentiment lexicon (Esuli and Sebastiani, 2006). But none of these approaches considered to induce the polarity labels at both the word-level and sense-level. Although we focus on learning connotative polarity of words and senses in this paper, the same approach would be applicable to constructing a sentiment lexicon as well. There have been recent studies that address word sense disambiguation issues for sentiment analysis. SentiWordNet (Esuli and Sebastiani, 2006) was the very first lexicon developed for sense-level labels of sentiment polarity. In recent years, Akkaya et al. (2009) report a successful empirical result where WSD helps improving sentiment analysis, while Wiebe and Mihalcea (2006) study the distinction between objectivity and subjectivity in each different sense of a word, and their empirical effects in the context of sentiment analysis. Our work shares the high-level spirit of accessing the sense-level polarity, while also deriving the word-level polarity. In recent years, there has been a growing research interest in investigating more fine-grained aspects of lexical sentiment beyond positive and negative sentiment. For example, Mohammad and Turney (2010) study the affects words can evoke in people’s minds, while Bollen et al. (2011) study various moods, e.g., “tension”, “depression”, beyond simple dichotomy of positive and negative sentiment. Our work, and some recent work by Feng et al. (2011) and Feng et al. 
(2013) share this spirit by targeting more subtle, nuanced sentiment even from those words that would be considered as objective in early studies of sentiment analysis. 8 Conclusion We have introduced a novel formulation of lexicon induction operating over both words and senses, by exploiting the innate structure between the words and senses as encoded in WordNet. In addition, we introduce the use of loopy belief propagation over pairwise-Markov Random Fields as an effective lexicon induction algorithm. A notable strength of our approach is its expressiveness: various types of prior knowledge and lexical relations can be encoded as node potentials and edge potentials. In addition, it leads to a lexicon of better quality while also offering faster run-time and easiness of implementation. The resulting lexicon, called ConnotationWordNet, is the first lexicon that has polarity labels over both words and senses. ConnotationWordNet is publicly available for research and practical use. Acknowledgments This research was supported by the Army Research Office under Contract No. W911NF-14-10029, Stony Brook University Office of Vice President for Research, and gifts from Northrop Grumman Aerospace Systems and Google. We thank reviewers for many insightful comments and suggestions. 1552 References Cem Akkaya, Janyce Wiebe, and Rada Mihalcea. 2009. Subjectivity word sense disambiguation. In Proceedings of the 2009 Conference on Empirical Methods in Natural Language Processing: Volume 1-Volume 1, pages 190–199. Association for Computational Linguistics. Leman Akoglu, Rishi Chandy, and Christos Faloutsos. 2013. Opinion fraud detection in online reviews by network effects. Alina Andreevskaia and Sabine Bergler. 2006. Mining wordnet for a fuzzy sentiment: Sentiment tag extraction from wordnet glosses. In EACL, pages 209–216. Stefano Baccianella, Andrea Esuli, and Fabrizio Sebastiani. 2010. Sentiwordnet 3.0: An enhanced lexical resource for sentiment analysis and opinion mining. In LREC, volume 10, pages 2200–2204. Alexandra Balahur, Rada Mihalcea, and Andr´es Montoyo. 2014. Computational approaches to subjectivity and sentiment analysis: Present and envisaged methods and applications. Computer Speech & Language, 28(1):1–6. Johan Bollen, Huina Mao, and Alberto Pepe. 2011. Modeling public mood and emotion: Twitter sentiment and socio-economic phenomena. In ICWSM. K. W. Church and P. Hanks. 1990. Word association norms, mutual information, and lexicography. Computational Linguistics, 1(16):22–29. Andrea Esuli and Fabrizio Sebastiani. 2006. Sentiwordnet: A publicly available lexical resource for opinion mining. In In Proceedings of the 5th Conference on Language Resources and Evaluation (LREC06, pages 417–422. Song Feng, Ritwik Bose, and Yejin Choi. 2011. Learning general connotation of words using graph-based algorithms. In Proceedings of the Conference on Empirical Methods in Natural Language Processing, pages 1092–1103. Association for Computational Linguistics. Song Feng, Jun Seok Kang, Polina Kuznetsova, and Yejin Choi. 2013. Connotation lexicon: A dash of sentiment beneath the surface meaning. In The Association for Computer Linguistics, pages 1774– 1784. Vasileios Hatzivassiloglou and Kathleen McKeown. 1997. Predicting the semantic orientation of adjectives. In Proceedings of the Joint ACL/EACL Conference, pages 174–181. Nobuhiro Kaji and Masaru Kitsuregawa. 2007. Building lexicon for sentiment analysis from massive collection of html documents. In EMNLP-CoNLL, pages 1075–1083. 
Jaap Kamps, MJ Marx, Robert J Mokken, and Maarten De Rijke. 2004. Using wordnet to measure semantic orientations of adjectives. Ross Kindermann and J. L. Snell. 1980. Markov Random Fields and Their Applications. Yue Lu, Malu Castellanos, Umeshwar Dayal, and ChengXiang Zhai. 2011. Automatic construction of a context-aware sentiment lexicon: an optimization approach. In Proceedings of the 20th international conference on World wide web, pages 347– 356. ACM. Mary McGlohon, Stephen Bay, Markus G. Anderle, David M. Steier, and Christos Faloutsos. 2009. Snare: a link analytic system for graph labeling and risk detection. In John F. Elder IV, Franoise Fogelman-Souli, Peter A. Flach, and Mohammed Zaki, editors, KDD, pages 1265–1274. ACM. Rada Mihalcea, Carmen Banea, and Janyce Wiebe. 2012. Multilingual subjectivity and sentiment analysis. In Tutorial Abstracts of ACL 2012, pages 4–4. Association for Computational Linguistics. Saif Mohammad and Peter Turney. 2010. Emotions evoked by common words and phrases: Using mechanical turk to create an emotion lexicon. In Proceedings of the NAACL HLT 2010 Workshop on Computational Approaches to Analysis and Generation of Emotion in Text, pages 26–34, Los Angeles, CA, June. Association for Computational Linguistics. David Newman, Sarvnaz Karimi, and Lawrence Cavedon. 2009. External evaluation of topic models. In Australasian Document Computing Symposium, pages 11–18, Sydney, December. Shashank Pandit, Duen Horng Chau, Samuel Wang, and Christos Faloutsos. 2007. Netprobe: a fast and scalable system for fraud detection in online auction networks. In WWW, pages 201–210. Christos H Papadimitriou and Kenneth Steiglitz. 1998. Combinatorial optimization: algorithms and complexity. Courier Dover Publications. James W Pennebaker and Lori D Stone. 2003. Words of wisdom: language use over the life span. Journal of personality and social psychology, 85(2):291. John P Pestian, Pawel Matykiewicz, Michelle LinnGust, Brett South, Ozlem Uzuner, Jan Wiebe, K Bretonnel Cohen, John Hurdle, Christopher Brew, et al. 2012. Sentiment analysis of suicide notes: A shared task. Biomedical Informatics Insights, 5(Suppl. 1):3. Martin J. Pickering and Holly P. Branigan. 1998. The representation of verbs: Evidence from syntactic priming in language production. Journal of Memory and Language, 39:633–651. 1553 Guang Qiu, Bing Liu, Jiajun Bu, and Chun Chen. 2009. Expanding domain sentiment lexicon through double propagation. In IJCAI, volume 9, pages 1199–1204. Prithviraj Sen, Galileo Namata, Mustafa Bilgic, Lise Getoor, Brian Gallagher, and Tina Eliassi-Rad. 2008. Collective classification in network data. AI Magazine, 29(3):93–106. Philip J. Stone, Dexter C. Dunphy, Marshall S. Smith, and Daniel M. Ogilvie. 1966. The General Inquirer: A Computer Approach to Content Analysis. MIT Press, Cambridge, MA. Carlo Strapparava and Rada Mihalcea. 2007. Semeval2007 task 14: Affective text. In Proceedings of the 4th International Workshop on Semantic Evaluations, pages 70–74. Association for Computational Linguistics. Fangzhong Su and Katja Markert. 2009. Subjectivity recognition on word senses via semi-supervised mincuts. In Proceedings of Human Language Technologies: The 2009 Annual Conference of the North American Chapter of the Association for Computational Linguistics, pages 1–9. Association for Computational Linguistics. Hiroya Takamura, Takashi Inui, and Manabu Okumura. 2005. Extracting semantic orientations of words using spin model. 
In Proceedings of the 43rd Annual Meeting on Association for Computational Linguistics, pages 133–140. Association for Computational Linguistics. Peter D. Turney. 2001. Mining the Web for synonyms: PMI-IR versus LSA on TOEFL. In Proceedings of the Twelfth European Conference on Machine Learning (ECML-01), pages 491–502, Freiburg, Germany. Leonid Velikovich, Sasha Blair-Goldensohn, Kerry Hannan, and Ryan McDonald. 2010. The viability of web-derived polarity lexicons. In Human Language Technologies: The 2010 Annual Conference of the North American Chapter of the Association for Computational Linguistics. Association for Computational Linguistics. Janyce Wiebe and Rada Mihalcea. 2006. Word sense and subjectivity. In Proceedings of the 21st International Conference on Computational Linguistics and the 44th annual meeting of the Association for Computational Linguistics, pages 1065–1072. Association for Computational Linguistics. Janyce Wiebe, Theresa Wilson, and Claire Cardie. 2005. Annotating expressions of opinions and emotions in language. Language Resources and Evaluation (formerly Computers and the Humanities), 39(2/3):164–210. Theresa Wilson, Paul Hoffmann, Swapna Somasundaran, Jason Kessler, Janyce Wiebe, Yejin Choi, Claire Cardie, Ellen Riloff, and Siddharth Patwardhan. 2005a. Opinionfinder: A system for subjectivity analysis. In Proceedings of HLT/EMNLP on Interactive Demonstrations, pages 34–35. Association for Computational Linguistics. Theresa Wilson, Janyce Wiebe, and Paul Hoffmann. 2005b. Recognizing contextual polarity in phraselevel sentiment analysis. In Proceedings of Human Language Technologies Conference/Conference on Empirical Methods in Natural Language Processing (HLT/EMNLP 2005), Vancouver, CA. Jonathan S. Yedidia, William T. Freeman, and Yair Weiss. 2003. Understanding belief propagation and its generalizations. In Exploring AI in the new millennium, pages 239–269. 1554
2014
145
Proceedings of the 52nd Annual Meeting of the Association for Computational Linguistics, pages 1555–1565, Baltimore, Maryland, USA, June 23-25 2014. c⃝2014 Association for Computational Linguistics Learning Sentiment-Specific Word Embedding for Twitter Sentiment Classification∗ Duyu Tang†, Furu Wei‡ , Nan Yang♮, Ming Zhou‡, Ting Liu†, Bing Qin† †Research Center for Social Computing and Information Retrieval Harbin Institute of Technology, China ‡Microsoft Research, Beijing, China ♮University of Science and Technology of China, Hefei, China {dytang, tliu, qinb}@ir.hit.edu.cn {fuwei, v-nayang, mingzhou}@microsoft.com Abstract We present a method that learns word embedding for Twitter sentiment classification in this paper. Most existing algorithms for learning continuous word representations typically only model the syntactic context of words but ignore the sentiment of text. This is problematic for sentiment analysis as they usually map words with similar syntactic context but opposite sentiment polarity, such as good and bad, to neighboring word vectors. We address this issue by learning sentimentspecific word embedding (SSWE), which encodes sentiment information in the continuous representation of words. Specifically, we develop three neural networks to effectively incorporate the supervision from sentiment polarity of text (e.g. sentences or tweets) in their loss functions. To obtain large scale training corpora, we learn the sentiment-specific word embedding from massive distant-supervised tweets collected by positive and negative emoticons. Experiments on applying SSWE to a benchmark Twitter sentiment classification dataset in SemEval 2013 show that (1) the SSWE feature performs comparably with hand-crafted features in the top-performed system; (2) the performance is further improved by concatenating SSWE with existing feature set. 1 Introduction Twitter sentiment classification has attracted increasing research interest in recent years (Jiang et al., 2011; Hu et al., 2013). The objective is to classify the sentiment polarity of a tweet as positive, ∗This work was done when the first and third authors were visiting Microsoft Research Asia. negative or neutral. The majority of existing approaches follow Pang et al. (2002) and employ machine learning algorithms to build classifiers from tweets with manually annotated sentiment polarity. Under this direction, most studies focus on designing effective features to obtain better classification performance. For example, Mohammad et al. (2013) build the top-performed system in the Twitter sentiment classification track of SemEval 2013 (Nakov et al., 2013), using diverse sentiment lexicons and a variety of hand-crafted features. Feature engineering is important but laborintensive. It is therefore desirable to discover explanatory factors from the data and make the learning algorithms less dependent on extensive feature engineering (Bengio, 2013). For the task of sentiment classification, an effective feature learning method is to compose the representation of a sentence (or document) from the representations of the words or phrases it contains (Socher et al., 2013b; Yessenalina and Cardie, 2011). Accordingly, it is a crucial step to learn the word representation (or word embedding), which is a dense, low-dimensional and real-valued vector for a word. 
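As a minimal illustration of what such a representation looks like, the sketch below stores each word as a dense real-valued vector in a lookup table and composes a sentence representation by averaging the vectors of its words. The dimensionality, the random vectors, and the averaging composition are placeholders for exposition only, not learned embeddings or the composition functions used later in the paper.

```python
import numpy as np

# A word embedding is a row in a lookup table; one simple way to compose a
# sentence representation is to combine the rows of its words (here, by mean).
rng = np.random.default_rng(0)
vocab = ["the", "movie", "is", "good", "bad"]
lookup = {w: rng.normal(size=4) for w in vocab}   # word -> dense real vector

def sentence_vector(tokens, lookup):
    vecs = [lookup[t] for t in tokens if t in lookup]
    return np.mean(vecs, axis=0)                  # average composition

print(sentence_vector("the movie is good".split(), lookup))
```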
Although existing word embedding learning algorithms (Collobert et al., 2011; Mikolov et al., 2013) are intuitive choices, they are not effective enough if directly used for sentiment classification. The most serious problem is that traditional methods typically model the syntactic context of words but ignore the sentiment information of text. As a result, words with opposite polarity, such as good and bad, are mapped into close vectors. It is meaningful for some tasks such as pos-tagging (Zheng et al., 2013) as the two words have similar usages and grammatical roles, but it becomes a disaster for sentiment analysis as they have the opposite sentiment polarity. In this paper, we propose learning sentimentspecific word embedding (SSWE) for sentiment analysis. We encode the sentiment information in1555 to the continuous representation of words, so that it is able to separate good and bad to opposite ends of the spectrum. To this end, we extend the existing word embedding learning algorithm (Collobert et al., 2011) and develop three neural networks to effectively incorporate the supervision from sentiment polarity of text (e.g. sentences or tweets) in their loss functions. We learn the sentiment-specific word embedding from tweets, leveraging massive tweets with emoticons as distant-supervised corpora without any manual annotations. These automatically collected tweets contain noises so they cannot be directly used as gold training data to build sentiment classifiers, but they are effective enough to provide weakly supervised signals for training the sentimentspecific word embedding. We apply SSWE as features in a supervised learning framework for Twitter sentiment classification, and evaluate it on the benchmark dataset in SemEval 2013. In the task of predicting positive/negative polarity of tweets, our method yields 84.89% in macro-F1 by only using SSWE as feature, which is comparable to the top-performed system based on hand-crafted features (84.70%). After concatenating the SSWE feature with existing feature set, we push the state-of-the-art to 86.58% in macro-F1. The quality of SSWE is also directly evaluated by measuring the word similarity in the embedding space for sentiment lexicons. In the accuracy of polarity consistency between each sentiment word and its top N closest words, SSWE outperforms existing word embedding learning algorithms. The major contributions of the work presented in this paper are as follows. • We develop three neural networks to learn sentiment-specific word embedding (SSWE) from massive distant-supervised tweets without any manual annotations; • To our knowledge, this is the first work that exploits word embedding for Twitter sentiment classification. We report the results that the SSWE feature performs comparably with hand-crafted features in the top-performed system in SemEval 2013; • We release the sentiment-specific word embedding learned from 10 million tweets, which can be adopted off-the-shell in other sentiment analysis tasks. 2 Related Work In this section, we present a brief review of the related work from two perspectives, Twitter sentiment classification and learning continuous representations for sentiment classification. 2.1 Twitter Sentiment Classification Twitter sentiment classification, which identifies the sentiment polarity of short, informal tweets, has attracted increasing research interest (Jiang et al., 2011; Hu et al., 2013) in recent years. 
Generally, the methods employed in Twitter sentiment classification follow traditional sentiment classification approaches. The lexicon-based approaches (Turney, 2002; Ding et al., 2008; Taboada et al., 2011; Thelwall et al., 2012) mostly use a dictionary of sentiment words with their associated sentiment polarity, and incorporate negation and intensification to compute the sentiment polarity for each sentence (or document). The learning based methods for Twitter sentiment classification follow Pang et al. (2002)’s work, which treat sentiment classification of texts as a special case of text categorization issue. Many studies on Twitter sentiment classification (Pak and Paroubek, 2010; Davidov et al., 2010; Barbosa and Feng, 2010; Kouloumpis et al., 2011; Zhao et al., 2012) leverage massive noisy-labeled tweets selected by positive and negative emoticons as training set and build sentiment classifiers directly, which is called distant supervision (Go et al., 2009). Instead of directly using the distantsupervised data as training set, Liu et al. (2012) adopt the tweets with emoticons to smooth the language model and Hu et al. (2013) incorporate the emotional signals into an unsupervised learning framework for Twitter sentiment classification. Many existing learning based methods on Twitter sentiment classification focus on feature engineering. The reason is that the performance of sentiment classifier being heavily dependent on the choice of feature representation of tweets. The most representative system is introduced by Mohammad et al. (2013), which is the state-of-theart system (the top-performed system in SemEval 2013 Twitter Sentiment Classification Track) by implementing a number of hand-crafted features. Unlike the previous studies, we focus on learning discriminative features automatically from massive distant-supervised tweets. 1556 2.2 Learning Continuous Representations for Sentiment Classification Pang et al. (2002) pioneer this field by using bagof-word representation, representing each word as a one-hot vector. It has the same length as the size of the vocabulary, and only one dimension is 1, with all others being 0. Under this assumption, many feature learning algorithms are proposed to obtain better classification performance (Pang and Lee, 2008; Liu, 2012; Feldman, 2013). However, the one-hot word representation cannot sufficiently capture the complex linguistic characteristics of words. With the revival of interest in deep learning (Bengio et al., 2013), incorporating the continuous representation of a word as features has been proving effective in a variety of NLP tasks, such as parsing (Socher et al., 2013a), language modeling (Bengio et al., 2003; Mnih and Hinton, 2009) and NER (Turian et al., 2010). In the field of sentiment analysis, Bespalov et al. (2011; 2012) initialize the word embedding by Latent Semantic Analysis and further represent each document as the linear weighted of ngram vectors for sentiment classification. Yessenalina and Cardie (2011) model each word as a matrix and combine words using iterated matrix multiplication. Glorot et al. (2011) explore Stacked Denoising Autoencoders for domain adaptation in sentiment classification. Socher et al. propose Recursive Neural Network (RNN) (2011b), matrixvector RNN (2012) and Recursive Neural Tensor Network (RNTN) (2013b) to learn the compositionality of phrases of any length based on the representation of each pair of children recursively. Hermann et al. 
(2013) present Combinatory Categorial Autoencoders to learn the compositionality of sentence, which marries the Combinatory Categorial Grammar with Recursive Autoencoder. The representation of words heavily relies on the applications or tasks in which it is used (Labutov and Lipson, 2013). This paper focuses on learning sentiment-specific word embedding, which is tailored for sentiment analysis. Unlike Maas et al. (2011) that follow the probabilistic document model (Blei et al., 2003) and give an sentiment predictor function to each word, we develop neural networks and map each ngram to the sentiment polarity of sentence. Unlike Socher et al. (2011c) that utilize manually labeled texts to learn the meaning of phrase (or sentence) through compositionality, we focus on learning the meaning of word, namely word embedding, from massive distant-supervised tweets. Unlike Labutov and Lipson (2013) that produce task-specific embedding from an existing word embedding, we learn sentiment-specific word embedding from scratch. 3 Sentiment-Specific Word Embedding for Twitter Sentiment Classification In this section, we present the details of learning sentiment-specific word embedding (SSWE) for Twitter sentiment classification. We propose incorporating the sentiment information of sentences to learn continuous representations for words and phrases. We extend the existing word embedding learning algorithm (Collobert et al., 2011) and develop three neural networks to learn SSWE. In the following sections, we introduce the traditional method before presenting the details of SSWE learning algorithms. We then describe the use of SSWE in a supervised learning framework for Twitter sentiment classification. 3.1 C&W Model Collobert et al. (2011) introduce C&W model to learn word embedding based on the syntactic contexts of words. Given an ngram “cat chills on a mat”, C&W replaces the center word with a random word wr and derives a corrupted ngram “cat chills wr a mat”. The training objective is that the original ngram is expected to obtain a higher language model score than the corrupted ngram by a margin of 1. The ranking objective function can be optimized by a hinge loss, losscw(t, tr) = max(0, 1 −fcw(t) + fcw(tr)) (1) where t is the original ngram, tr is the corrupted ngram, fcw(·) is a one-dimensional scalar representing the language model score of the input ngram. Figure 1(a) illustrates the neural architecture of C&W, which consists of four layers, namely lookup →linear →hTanh →linear (from bottom to top). The original and corrupted ngrams are treated as inputs of the feed-forward neural network, respectively. The output fcw is the language model score of the input, which is calculated as given in Equation 2, where L is the lookup table of word embedding, w1, w2, b1, b2 are the parameters of linear layers. fcw(t) = w2(a) + b2 (2) 1557 so cooool :D lookup linear hTanh linear softmax (a) C&W so cooool :D (b) SSWEh so cooool :D (c) SSWEu syntactic sentiment positive negative Figure 1: The traditional C&W model and our neural networks (SSWEh and SSWEu) for learning sentiment-specific word embedding. a = hTanh(w1Lt + b1) (3) hTanh(x) =      −1 if x < −1 x if −1 ≤x ≤1 1 if x > 1 (4) 3.2 Sentiment-Specific Word Embedding Following the traditional C&W model (Collobert et al., 2011), we incorporate the sentiment information into the neural network to learn sentimentspecific word embedding. We develop three neural networks with different strategies to integrate the sentiment information of tweets. 
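A compact sketch of the C&W scoring function and ranking loss may help here. It follows Equations 1-4 above (lookup, linear, hTanh, linear, and the hinge over an original versus a center-word-corrupted ngram), but the vocabulary, random initialization, and window size are toy placeholders, and the backpropagation/AdaGrad update is omitted.

```python
import numpy as np

rng = np.random.default_rng(0)
vocab = ["cat", "chills", "on", "a", "mat", "dog", "runs"]
emb_dim, hid_dim, win = 50, 20, 5      # illustrative sizes; win=5 fits the example ngram
L = {w: rng.normal(scale=0.1, size=emb_dim) for w in vocab}          # lookup table
w1 = rng.normal(scale=0.1, size=(hid_dim, emb_dim * win)); b1 = np.zeros(hid_dim)
w2 = rng.normal(scale=0.1, size=hid_dim); b2 = 0.0

def htanh(x):                          # hard tanh, Eq. (4)
    return np.clip(x, -1.0, 1.0)

def f_cw(ngram):                       # language-model score of an ngram, Eq. (2)-(3)
    t = np.concatenate([L[w] for w in ngram])   # lookup layer
    a = htanh(w1 @ t + b1)                      # linear -> hTanh
    return w2 @ a + b2                          # final linear layer

def loss_cw(ngram):                    # ranking hinge loss, Eq. (1)
    corrupted = list(ngram)
    corrupted[win // 2] = rng.choice(vocab)     # replace the center word at random
    return max(0.0, 1.0 - f_cw(ngram) + f_cw(corrupted))

print(loss_cw(["cat", "chills", "on", "a", "mat"]))
```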
Basic Model 1 (SSWEh). As an unsupervised approach, C&W model does not explicitly capture the sentiment information of texts. An intuitive solution to integrate the sentiment information is predicting the sentiment distribution of text based on input ngram. We do not utilize the entire sentence as input because the length of different sentences might be variant. We therefore slide the window of ngram across a sentence, and then predict the sentiment polarity based on each ngram with a shared neural network. In the neural network, the distributed representation of higher layer are interpreted as features describing the input. Thus, we utilize the continuous vector of top layer to predict the sentiment distribution of text. Assuming there are K labels, we modify the dimension of top layer in C&W model as K and add a softmax layer upon the top layer. The neural network (SSWEh) is given in Figure 1(b). Softmax layer is suitable for this scenario because its outputs are interpreted as conditional probabilities. Unlike C&W, SSWEh does not generate any corrupted ngram. Let f g(t), where K denotes the number of sentiment polarity labels, be the gold K-dimensional multinomial distribution of input t and P k f g k(t) = 1. For positive/negative classification, the distribution is of the form [1,0] for positive and [0,1] for negative. The cross-entropy error of the softmax layer is : lossh(t) = − X k={0,1} f g k(t) · log(f h k (t)) (5) where f g(t) is the gold sentiment distribution and f h(t) is the predicted sentiment distribution. Basic Model 2 (SSWEr). SSWEh is trained by predicting the positive ngram as [1,0] and the negative ngram as [0,1]. However, the constraint of SSWEh is too strict. The distribution of [0.7,0.3] can also be interpreted as a positive label because the positive score is larger than the negative score. Similarly, the distribution of [0.2,0.8] indicates negative polarity. Based on the above observation, the hard constraints in SSWEh should be relaxed. If the sentiment polarity of a tweet is positive, the predicted positive score is expected to be larger than the predicted negative score, and the exact reverse if the tweet has negative polarity. We model the relaxed constraint with a ranking objective function and borrow the bottom four layers from SSWEh, namely lookup →linear → hTanh →linear in Figure 1(b), to build the relaxed neural network (SSWEr). Compared with SSWEh, the softmax layer is removed because SSWEr does not require probabilistic interpretation. The hinge loss of SSWEr is modeled as de1558 scribed below. lossr(t) = max(0, 1 −δs(t)f r 0(t) + δs(t)f r 1(t) ) (6) where f r 0 is the predicted positive score, f r 1 is the predicted negative score, δs(t) is an indicator function reflecting the sentiment polarity of a sentence, δs(t) = ( 1 if f g(t) = [1, 0] −1 if f g(t) = [0, 1] (7) Similar with SSWEh, SSWEr also does not generate the corrupted ngram. Unified Model (SSWEu). The C&W model learns word embedding by modeling syntactic contexts of words but ignoring sentiment information. By contrast, SSWEh and SSWEr learn sentiment-specific word embedding by integrating the sentiment polarity of sentences but leaving out the syntactic contexts of words. We develop a unified model (SSWEu) in this part, which captures the sentiment information of sentences as well as the syntactic contexts of words. SSWEu is illustrated in Figure 1(c). 
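Before turning to the unified model, the two basic objectives can be sketched as follows. The two-dimensional scores are placeholders for the top-layer output of a network shaped like Figure 1(b); only the losses of Equations 5-7 are spelled out, and the toy numbers are invented.

```python
import numpy as np

def softmax(z):
    e = np.exp(z - np.max(z))
    return e / e.sum()

def loss_sswe_h(top_layer, gold):
    """Cross-entropy loss of SSWEh, Eq. (5).
    top_layer: 2-d network output for an ngram (positive, negative)
    gold:      [1, 0] for a positive tweet, [0, 1] for a negative one
    """
    f_h = softmax(top_layer)
    return -sum(g * np.log(p) for g, p in zip(gold, f_h))

def loss_sswe_r(f_r, gold):
    """Ranking hinge loss of SSWEr, Eq. (6)-(7); no softmax layer is used.
    f_r: (predicted positive score, predicted negative score)
    """
    delta = 1.0 if gold == [1, 0] else -1.0     # indicator delta_s(t)
    return max(0.0, 1.0 - delta * f_r[0] + delta * f_r[1])

# Toy scores for an ngram taken from a positive tweet.
print(loss_sswe_h(np.array([1.2, -0.4]), [1, 0]))  # small cross-entropy
print(loss_sswe_r((1.2, -0.4), [1, 0]))            # margin satisfied, loss 0
```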
Given an original (or corrupted) ngram and the sentiment polarity of a sentence as the input, SSWEu predicts a two-dimensional vector for each input ngram. The two scalars (f u 0 , f u 1 ) stand for language model score and sentiment score of the input ngram, respectively. The training objectives of SSWEu are that (1) the original ngram should obtain a higher language model score f u 0 (t) than the corrupted ngram f u 0 (tr), and (2) the sentiment score of original ngram f u 1 (t) should be more consistent with the gold polarity annotation of sentence than corrupted ngram f u 1 (tr). The loss function of SSWEu is the linear combination of two hinge losses, lossu(t, tr) = α · losscw(t, tr)+ (1 −α) · lossus(t, tr) (8) where losscw(t, tr) is the syntactic loss as given in Equation 1, lossus(t, tr) is the sentiment loss as described in Equation 9. The hyper-parameter α weighs the two parts. lossus(t, tr) = max(0, 1 −δs(t)f u 1 (t) + δs(t)f u 1 (tr) ) (9) Model Training. We train sentiment-specific word embedding from massive distant-supervised tweets collected with positive and negative emoticons1. We crawl tweets from April 1st, 2013 to April 30th, 2013 with TwitterAPI. We tokenize each tweet with TwitterNLP (Gimpel et al., 2011), remove the @user and URLs of each tweet, and filter the tweets that are too short (< 7 words). Finally, we collect 10M tweets, selected by 5M tweets with positive emoticons and 5M tweets with negative emoticons. We train SSWEh, SSWEr and SSWEu by taking the derivative of the loss through backpropagation with respect to the whole set of parameters (Collobert et al., 2011), and use AdaGrad (Duchi et al., 2011) to update the parameters. We empirically set the window size as 3, the embedding length as 50, the length of hidden layer as 20 and the learning rate of AdaGrad as 0.1 for all baseline and our models. We learn embedding for unigrams, bigrams and trigrams separately with same neural network and same parameter setting. The contexts of unigram (bigram/trigram) are the surrounding unigrams (bigrams/trigrams), respectively. 3.3 Twitter Sentiment Classification We apply sentiment-specific word embedding for Twitter sentiment classification under a supervised learning framework as in previous work (Pang et al., 2002). Instead of hand-crafting features, we incorporate the continuous representation of words and phrases as the feature of a tweet. The sentiment classifier is built from tweets with manually annotated sentiment polarity. We explore min, average and max convolutional layers (Collobert et al., 2011; Socher et al., 2011a), which have been used as simple and effective methods for compositionality learning in vector-based semantics (Mitchell and Lapata, 2010), to obtain the tweet representation. The result is the concatenation of vectors derived from different convolutional layers. z(tw) = [zmax(tw), zmin(tw), zaverage(tw)] where z(tw) is the representation of tweet tw and zx(tw) is the results of the convolutional layer x ∈ {min, max, average}. Each convolutional layer 1We use the emoticons selected by Hu et al. (2013). The positive emoticons are :) : ) :-) :D =), and the negative emoticons are :( : ( :-( . 1559 zx employs the embedding of unigrams, bigrams and trigrams separately and conducts the matrixvector operation of x on the sequence represented by columns in each lookup table. The output of zx is the concatenation of results obtained from different lookup tables. 
zx(tw) = [wx⟨Luni⟩tw, wx⟨Lbi⟩tw, wx⟨Ltri⟩tw] where wx is the convolutional function of zx, ⟨L⟩tw is the concatenated column vectors of the words in the tweet. Luni, Lbi and Ltri are the lookup tables of the unigram, bigram and trigram embedding, respectively. 4 Experiment We conduct experiments to evaluate SSWE by incorporating it into a supervised learning framework for Twitter sentiment classification. We also directly evaluate the effectiveness of the SSWE by measuring the word similarity in the embedding space for sentiment lexicons. 4.1 Twitter Sentiment Classification Experiment Setup and Datasets. We conduct experiments on the latest Twitter sentiment classification benchmark dataset in SemEval 2013 (Nakov et al., 2013). The training and development sets were completely in full to task participants. However, we were unable to download all the training and development sets because some tweets were deleted or not available due to modified authorization status. The test set is directly provided to the participants. The distribution of our dataset is given in Table 1. We train sentiment classifier with LibLinear (Fan et al., 2008) on the training set, tune parameter −c on the dev set and evaluate on the test set. Evaluation metric is the Macro-F1 of positive and negative categories 2. Positive Negative Neutral Total Train 2,642 994 3,436 7,072 Dev 408 219 493 1,120 Test 1,570 601 1,639 3,810 Table 1: Statistics of the SemEval 2013 Twitter sentiment classification dataset. 2We investigate 2-class Twitter sentiment classification (positive/negative) instead of 3-class Twitter sentiment classification (positive/negative/neutral) in SemEval2013. Baseline Methods. We compare our method with the following sentiment classification algorithms: (1) DistSuper: We use the 10 million tweets selected by positive and negative emoticons as training data, and build sentiment classifier with LibLinear and ngram features (Go et al., 2009). (2) SVM: The ngram features and Support Vector Machine are widely used baseline methods to build sentiment classifiers (Pang et al., 2002). LibLinear is used to train the SVM classifier. (3) NBSVM: NBSVM (Wang and Manning, 2012) is a state-of-the-art performer on many sentiment classification datasets, which trades-off between Naive Bayes and NB-enhanced SVM. (4) RAE: Recursive Autoencoder (Socher et al., 2011c) has been proven effective in many sentiment analysis tasks by learning compositionality automatically. We run RAE with randomly initialized word embedding. (5) NRC: NRC builds the top-performed system in SemEval 2013 Twitter sentiment classification track which incorporates diverse sentiment lexicons and many manually designed features. We re-implement this system because the codes are not publicly available 3. NRC-ngram refers to the feature set of NRC leaving out ngram features. Except for DistSuper, other baseline methods are conducted in a supervised manner. We do not compare with RNTN (Socher et al., 2013b) because we cannot efficiently train the RNTN model. The reason lies in that the tweets in our dataset do not have accurately parsed results or fine grained sentiment labels for phrases. Another reason is that the RNTN model trained on movie reviews cannot be directly applied on tweets due to the differences between domains (Blitzer et al., 2007). Results and Analysis. Table 2 shows the macroF1 of the baseline systems as well as the SSWEbased methods on positive/negative sentiment classification of tweets. 
Distant supervision is relatively weak because the noisy-labeled tweets are treated as the gold standard, which affects the performance of classifier. The results of bagof-ngram (uni/bi/tri-gram) features are not satisfied because the one-hot word representation cannot capture the latent connections between words. NBSVM and RAE perform comparably and have 3For 3-class sentiment classification in SemEval 2013, our re-implementation of NRC achieved 68.3%, 0.7% lower than NRC (69%) due to less training data. 1560 Method Macro-F1 DistSuper + unigram 61.74 DistSuper + uni/bi/tri-gram 63.84 SVM + unigram 74.50 SVM + uni/bi/tri-gram 75.06 NBSVM 75.28 RAE 75.12 NRC (Top System in SemEval) 84.73 NRC - ngram 84.17 SSWEu 84.98 SSWEu+NRC 86.58 SSWEu+NRC-ngram 86.48 Table 2: Macro-F1 on positive/negative classification of tweets. a big gap in comparison with the NRC and SSWEbased methods. The reason is that RAE and NBSVM learn the representation of tweets from the small-scale manually annotated training set, which cannot well capture the comprehensive linguistic phenomenons of words. NRC implements a variety of features and reaches 84.73% in macro-F1, verifying the importance of a better feature representation for Twitter sentiment classification. We achieve 84.98% by using only SSWEu as features without borrowing any sentiment lexicons or hand-crafted rules. The results indicate that SSWEu automatically learns discriminative features from massive tweets and performs comparable with the state-of-the-art manually designed features. After concatenating SSWEu with the feature set of NRC, the performance is further improved to 86.58%. We also compare SSWEu with the ngram feature by integrating SSWE into NRC-ngram. The concatenated features SSWEu+NRC-ngram (86.48%) outperform the original feature set of NRC (84.73%). As a reference, we apply SSWEu on subjective classification of tweets, and obtain 72.17% in macro-F1 by using only SSWEu as feature. After combining SSWEu with the feature set of NRC, we improve NRC from 74.86% to 75.39% for subjective classification. Comparision between Different Word Embedding. We compare sentiment-specific word embedding (SSWEh, SSWEr, SSWEu) with baseline embedding learning algorithms by only using word embedding as features for Twitter sentiment classification. We use the embedding of unigrams, bigrams and trigrams in the experiment. The embeddings of C&W (Collobert et al., 2011), word2vec4, WVSA (Maas et al., 2011) and our models are trained with the same dataset and same parameter setting. We compare with C&W and word2vec as they have been proved effective in many NLP tasks. The trade-off parameter of ReEmb (Labutov and Lipson, 2013) is tuned on the development set of SemEval 2013. Table 3 shows the performance on the positive/negative classification of tweets5. ReEmb(C&W) and ReEmb(w2v) stand for the use of embeddings learned from 10 million distantsupervised tweets with C&W and word2vec, respectively. Each row of Table 3 represents a word embedding learning algorithm. Each column stands for a type of embedding used to compose features of tweets. The column uni+bi denotes the use of unigram and bigram embedding, and the column uni+bi+tri indicates the use of unigram, bigram and trigram embedding. 
Embedding      unigram   uni+bi   uni+bi+tri
C&W            74.89     75.24    75.89
Word2vec       73.21     75.07    76.31
ReEmb(C&W)     75.87     –        –
ReEmb(w2v)     75.21     –        –
WVSA           77.04     –        –
SSWEh          81.33     83.16    83.37
SSWEr          80.45     81.52    82.60
SSWEu          83.70     84.70    84.98

Table 3: Macro-F1 on positive/negative classification of tweets with different word embeddings.

From the first column of Table 3, we can see that the performance of C&W and word2vec is clearly lower than that of the sentiment-specific word embeddings when only unigram embeddings are used as features. The reason is that C&W and word2vec do not explicitly exploit the sentiment information of the text, so words with opposite polarity, such as good and bad, are mapped to close word vectors. When such word embeddings are fed as features to a Twitter sentiment classifier, the discriminative ability of sentiment words is weakened and the classification performance suffers. The sentiment-specific word embeddings (SSWEh, SSWEr, SSWEu) effectively distinguish words with opposite sentiment polarity and perform best in all three settings. SSWE outperforms MVSA by exploiting more contextual information in the sentiment predictor function. SSWE outperforms ReEmb by leveraging more sentiment information from massive distant-supervised tweets. Among the three sentiment-specific word embeddings, SSWEu captures the most context information and yields the best performance. SSWEh and SSWEr obtain comparable results.

4 Available at https://code.google.com/p/word2vec/. We utilize the Skip-gram model because it performs better than CBOW in our experiments.
5 MVSA and ReEmb are not suitable for learning bigram and trigram embedding because their sentiment predictor functions only utilize the unigram embedding.

From each row of Table 3, we can see that the bigram and trigram embeddings consistently improve the performance of Twitter sentiment classification. The underlying reason is that a phrase, which cannot be accurately represented by a unigram embedding, is directly encoded into the ngram embedding as an idiomatic unit. A typical case in sentiment analysis is that a composed phrase or multiword expression may have a different sentiment polarity than the individual words it contains, such as not [bad] and [great] deal of (the word in brackets has a different sentiment polarity from the ngram). A very recent study by Mikolov et al. (2013) also verified the effectiveness of phrase embeddings for analogical reasoning over phrases.

Effect of α in SSWEu. We tune the hyper-parameter α of SSWEu on the development set by using unigram embeddings as features. As given in Equation 8, α is the weight of the syntactic loss of SSWEu and trades off the syntactic and sentiment losses. SSWEu is trained from 10 million distant-supervised tweets.

Figure 2: Macro-F1 of SSWEu on the development set of SemEval 2013 with different α.

Figure 2 shows the macro-F1 of SSWEu on positive/negative classification of tweets with different α on our development set. We can see that SSWEu performs better when α is in the range of [0.5, 0.6], which balances the syntactic context and sentiment information. The model with α=1 corresponds to the C&W model, which only encodes the syntactic contexts of words. The sharp decline at α=1 reflects the importance of sentiment information in learning word embeddings for Twitter sentiment classification.
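The quantity being tuned here is simply the mixing weight in a convex combination of the two hinge losses; a minimal sketch of Equations 8-9 makes the role of α explicit. The score values are placeholders for the two-dimensional (language-model, sentiment) outputs of the network in Figure 1(c).

```python
def loss_sswe_u(f_t, f_tr, gold_is_positive, alpha=0.5):
    """Unified SSWEu loss, Eq. (8)-(9).

    f_t, f_tr: (language-model score, sentiment score) predicted for the
    original and the corrupted ngram; alpha weighs the syntactic hinge loss
    against the sentiment hinge loss.
    """
    loss_cw = max(0.0, 1.0 - f_t[0] + f_tr[0])                  # syntactic part
    delta = 1.0 if gold_is_positive else -1.0                   # delta_s(t)
    loss_us = max(0.0, 1.0 - delta * f_t[1] + delta * f_tr[1])  # sentiment part
    return alpha * loss_cw + (1.0 - alpha) * loss_us

# alpha = 1 recovers the pure C&W objective; alpha = 0 drops syntax entirely.
print(loss_sswe_u((0.8, 1.1), (0.2, -0.3), gold_is_positive=True, alpha=0.5))
```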
Effect of Distant-supervised Data in SSWEu We investigate how the size of the distantsupervised data affects the performance of SSWEu feature for Twitter sentiment classification. We vary the number of distant-supervised tweets from 1 million to 12 million, increased by 1 million. We set the α of SSWEu as 0.5, according to the experiments shown in Figure 2. Results of positive/negative classification of tweets on our development set are given in Figure 3. 1 2 3 4 5 6 7 8 9 10 11 12 x 10 6 0.77 0.78 0.79 0.8 0.81 0.82 0.83 0.84 # of distant−supervised tweets Macro−F1 SSWEu Figure 3: Macro-F1 of SSWEu with different size of distant-supervised data on our development set. We can see that when more distant-supervised tweets are added, the accuracy of SSWEu consistently improves. The underlying reason is that when more tweets are incorporated, the word embedding is better estimated as the vocabulary size is larger and the context and sentiment information are richer. When we have 10 million distantsupervised tweets, the SSWEu feature increases the macro-F1 of positive/negative classification of tweets to 82.94% on our development set. When we have more than 10 million tweets, the performance remains stable as the contexts of words have been mostly covered. 4.2 Word Similarity of Sentiment Lexicons The quality of SSWE has been implicitly evaluated when applied in Twitter sentiment classification in the previous subsection. We explicitly evaluate it in this section through word similarity in the em1562 bedding space for sentiment lexicons. The evaluation metric is the accuracy of polarity consistency between each sentiment word and its top N closest words in the sentiment lexicon, Accuracy = P#Lex i=1 PN j=1 β(wi, cij) #Lex × N (10) where #Lex is the number of words in the sentiment lexicon, wi is the i-th word in the lexicon, cij is the j-th closest word to wi in the lexicon with cosine similarity, β(wi, cij) is an indicator function that is equal to 1 if wi and cij have the same sentiment polarity and 0 for the opposite case. The higher accuracy refers to a better polarity consistency of words in the sentiment lexicon. We set N as 100 in our experiment. Experiment Setup and Datasets We utilize the widely-used sentiment lexicons, namely MPQA (Wilson et al., 2005) and HL (Hu and Liu, 2004), to evaluate the quality of word embedding. For each lexicon, we remove the words that do not appear in the lookup table of word embedding. We only use unigram embedding in this section because these sentiment lexicons do not contain phrases. The distribution of the lexicons used in this paper is listed in Table 4. Lexicon Positive Negative Total HL 1,331 2,647 3,978 MPQA 1,932 2,817 4,749 Joint 1,051 2,024 3,075 Table 4: Statistics of the sentiment lexicons. Joint stands for the words that occur in both HL and MPQA with the same sentiment polarity. Results. Table 5 shows our results compared to other word embedding learning algorithms. The accuracy of random result is 50% as positive and negative words are randomly occurred in the nearest neighbors of each word. Sentiment-specific word embeddings (SSWEh, SSWEr, SSWEu) outperform existing neural models (C&W, word2vec) by large margins. SSWEu performs best in three lexicons. SSWEh and SSWEr have comparable performances. 
Experimental results further demonstrate that sentiment-specific word embeddings are able to capture the sentiment information of texts and distinguish words with opposite sentiment polarity, which are not well solved in traditional neural Embedding HL MPQA Joint Random 50.00 50.00 50.00 C&W 63.10 58.13 62.58 Word2vec 66.22 60.72 65.59 ReEmb(C&W) 64.81 59.76 64.09 ReEmb(w2v) 67.16 61.81 66.39 WVSA 68.14 64.07 67.12 SSWEh 74.17 68.36 74.03 SSWEr 73.65 68.02 73.14 SSWEu 77.30 71.74 77.33 Table 5: Accuracy of the polarity consistency of words in different sentiment lexicons. models like C&W and word2vec. SSWE outperforms MVSA and ReEmb by exploiting more context information of words and sentiment information of sentences, respectively. 5 Conclusion In this paper, we propose learning continuous word representations as features for Twitter sentiment classification under a supervised learning framework. We show that the word embedding learned by traditional neural networks are not effective enough for Twitter sentiment classification. These methods typically only model the context information of words so that they cannot distinguish words with similar context but opposite sentiment polarity (e.g. good and bad). We learn sentiment-specific word embedding (SSWE) by integrating the sentiment information into the loss functions of three neural networks. We train SSWE with massive distant-supervised tweets selected by positive and negative emoticons. The effectiveness of SSWE has been implicitly evaluated by using it as features in sentiment classification on the benchmark dataset in SemEval 2013, and explicitly verified by measuring word similarity in the embedding space for sentiment lexicons. Our unified model combining syntactic context of words and sentiment information of sentences yields the best performance in both experiments. Acknowledgments We thank Yajuan Duan, Shujie Liu, Zhenghua Li, Li Dong, Hong Sun and Lanjun Zhou for their great help. This research was partly supported by National Natural Science Foundation of China (No.61133012, No.61273321, No.61300113). 1563 References Luciano Barbosa and Junlan Feng. 2010. Robust sentiment detection on twitter from biased and noisy data. In Proceedings of International Conference on Computational Linguistics, pages 36–44. Yoshua Bengio, R´ejean Ducharme, Pascal Vincent, and Christian Janvin. 2003. A neural probabilistic language model. Journal of Machine Learning Research, 3:1137–1155. Yoshua Bengio, Aaron Courville, and Pascal Vincent. 2013. Representation learning: A review and new perspectives. IEEE Trans. Pattern Analysis and Machine Intelligence. Yoshua Bengio. 2013. Deep learning of representations: Looking forward. arXiv preprint arXiv:1305.0445. Dmitriy Bespalov, Bing Bai, Yanjun Qi, and Ali Shokoufandeh. 2011. Sentiment classification based on supervised latent n-gram analysis. In Proceedings of the Conference on Information and Knowledge Management, pages 375–382. Dmitriy Bespalov, Yanjun Qi, Bing Bai, and Ali Shokoufandeh. 2012. Sentiment classification with supervised sequence embedding. In Machine Learning and Knowledge Discovery in Databases, pages 159–174. Springer. David M Blei, Andrew Y Ng, and Michael I Jordan. 2003. Latent dirichlet allocation. the Journal of machine Learning research, 3:993–1022. John Blitzer, Mark Dredze, and Fernando Pereira. 2007. Biographies, bollywood, boom-boxes and blenders: Domain adaptation for sentiment classification. In Annual Meeting of the Association for Computational Linguistics, volume 7. 
Ronan Collobert, Jason Weston, L´eon Bottou, Michael Karlen, Koray Kavukcuoglu, and Pavel Kuksa. 2011. Natural language processing (almost) from scratch. Journal of Machine Learning Research, 12:2493–2537. Dmitry Davidov, Oren Tsur, and Ari Rappoport. 2010. Enhanced sentiment learning using twitter hashtags and smileys. In Proceedings of International Conference on Computational Linguistics, pages 241– 249. Xiaowen Ding, Bing Liu, and Philip S Yu. 2008. A holistic lexicon-based approach to opinion mining. In Proceedings of the International Conference on Web Search and Data Mining, pages 231–240. John Duchi, Elad Hazan, and Yoram Singer. 2011. Adaptive subgradient methods for online learning and stochastic optimization. The Journal of Machine Learning Research, pages 2121–2159. Rong-En Fan, Kai-Wei Chang, Cho-Jui Hsieh, XiangRui Wang, and Chih-Jen Lin. 2008. Liblinear: A library for large linear classification. The Journal of Machine Learning Research, 9:1871–1874. Ronen Feldman. 2013. Techniques and applications for sentiment analysis. Communications of the ACM, 56(4):82–89. Kevin Gimpel, Nathan Schneider, Brendan O’Connor, Dipanjan Das, Daniel Mills, Jacob Eisenstein, Michael Heilman, Dani Yogatama, Jeffrey Flanigan, and Noah A. Smith. 2011. Part-of-speech tagging for twitter: Annotation, features, and experiments. In Proceedings of the Annual Meeting of the Association for Computational Linguistics, pages 42–47. Xavier Glorot, Antoine Bordes, and Yoshua Bengio. 2011. Domain adaptation for large-scale sentiment classification: A deep learning approach. Proceedings of International Conference on Machine Learning. Alec Go, Richa Bhayani, and Lei Huang. 2009. Twitter sentiment classification using distant supervision. CS224N Project Report, Stanford, pages 1–12. Karl Moritz Hermann and Phil Blunsom. 2013. The role of syntax in vector space models of compositional semantics. In Proceedings of the Annual Meeting of the Association for Computational Linguistics, pages 894–904. Ming Hu and Bing Liu. 2004. Mining and summarizing customer reviews. In Proceedings of the tenth ACM SIGKDD Conference on Knowledge Discovery and Data Mining, pages 168–177. Xia Hu, Jiliang Tang, Huiji Gao, and Huan Liu. 2013. Unsupervised sentiment analysis with emotional signals. In Proceedings of the International World Wide Web Conference, pages 607–618. Long Jiang, Mo Yu, Ming Zhou, Xiaohua Liu, and Tiejun Zhao. 2011. Target-dependent twitter sentiment classification. The Proceeding of Annual Meeting of the Association for Computational Linguistics, 1:151–160. Efthymios Kouloumpis, Theresa Wilson, and Johanna Moore. 2011. Twitter sentiment analysis: The good the bad and the omg! In The International AAAI Conference on Weblogs and Social Media. Igor Labutov and Hod Lipson. 2013. Re-embedding words. In Annual Meeting of the Association for Computational Linguistics. Kun-Lin Liu, Wu-Jun Li, and Minyi Guo. 2012. Emoticon smoothed language models for twitter sentiment analysis. In The Association for the Advancement of Artificial Intelligence. Bing Liu. 2012. Sentiment analysis and opinion mining. Synthesis Lectures on Human Language Technologies, 5(1):1–167. 1564 Andrew L Maas, Raymond E Daly, Peter T Pham, Dan Huang, Andrew Y Ng, and Christopher Potts. 2011. Learning word vectors for sentiment analysis. In Proceedings of the Annual Meeting of the Association for Computational Linguistics. Tomas Mikolov, Ilya Sutskever, Kai Chen, Greg Corrado, and Jeffrey Dean. 2013. 
Distributed representations of words and phrases and their compositionality. The Conference on Neural Information Processing Systems. Jeff Mitchell and Mirella Lapata. 2010. Composition in distributional models of semantics. Cognitive Science, 34(8):1388–1429. Andriy Mnih and Geoffrey E Hinton. 2009. A scalable hierarchical distributed language model. In Advances in neural information processing systems, pages 1081–1088. Saif M Mohammad, Svetlana Kiritchenko, and Xiaodan Zhu. 2013. Nrc-canada: Building the state-ofthe-art in sentiment analysis of tweets. Proceedings of the International Workshop on Semantic Evaluation. Preslav Nakov, Sara Rosenthal, Zornitsa Kozareva, Veselin Stoyanov, Alan Ritter, and Theresa Wilson. 2013. Semeval-2013 task 2: Sentiment analysis in twitter. In Proceedings of the International Workshop on Semantic Evaluation, volume 13. Alexander Pak and Patrick Paroubek. 2010. Twitter as a corpus for sentiment analysis and opinion mining. In Proceedings of Language Resources and Evaluation Conference, volume 2010. Bo Pang and Lillian Lee. 2008. Opinion mining and sentiment analysis. Foundations and trends in information retrieval, 2(1-2):1–135. Bo Pang, Lillian Lee, and Shivakumar Vaithyanathan. 2002. Thumbs up?: sentiment classification using machine learning techniques. In Proceedings of the Conference on Empirical Methods in Natural Language Processing, pages 79–86. Richard Socher, Eric H Huang, Jeffrey Pennington, Andrew Y Ng, and Christopher D Manning. 2011a. Dynamic pooling and unfolding recursive autoencoders for paraphrase detection. The Conference on Neural Information Processing Systems, 24:801– 809. Richard Socher, Cliff C Lin, Andrew Ng, and Chris Manning. 2011b. Parsing natural scenes and natural language with recursive neural networks. In Proceedings of the International Conference on Machine Learning, pages 129–136. Richard Socher, J. Pennington, E.H. Huang, A.Y. Ng, and C.D. Manning. 2011c. Semi-supervised recursive autoencoders for predicting sentiment distributions. In Conference on Empirical Methods in Natural Language Processing, pages 151–161. Richard Socher, Brody Huval, Christopher D. Manning, and Andrew Y. Ng. 2012. Semantic Compositionality Through Recursive Matrix-Vector Spaces. In Proceedings of the Conference on Empirical Methods in Natural Language Processing. Richard Socher, John Bauer, Christopher D. Manning, and Andrew Y. Ng. 2013a. Parsing with compositional vector grammars. In Annual Meeting of the Association for Computational Linguistics. Richard Socher, Alex Perelygin, Jean Wu, Jason Chuang, Christopher D. Manning, Andrew Ng, and Christopher Potts. 2013b. Recursive deep models for semantic compositionality over a sentiment treebank. In Conference on Empirical Methods in Natural Language Processing, pages 1631–1642. Maite Taboada, Julian Brooke, Milan Tofiloski, Kimberly Voll, and Manfred Stede. 2011. Lexiconbased methods for sentiment analysis. Computational linguistics, 37(2):267–307. Mike Thelwall, Kevan Buckley, and Georgios Paltoglou. 2012. Sentiment strength detection for the social web. Journal of the American Society for Information Science and Technology, 63(1):163–173. Joseph Turian, Lev Ratinov, and Yoshua Bengio. 2010. Word representations: a simple and general method for semi-supervised learning. Annual Meeting of the Association for Computational Linguistics. Peter D Turney. 2002. Thumbs up or thumbs down?: semantic orientation applied to unsupervised classification of reviews. 
In Proceedings of Annual Meeting of the Association for Computational Linguistics, pages 417–424. Sida Wang and Christopher D Manning. 2012. Baselines and bigrams: Simple, good sentiment and topic classification. In Proceedings of the Annual Meeting of the Association for Computational Linguistics, pages 90–94. Theresa Wilson, Janyce Wiebe, and Paul Hoffmann. 2005. Recognizing contextual polarity in phraselevel sentiment analysis. In Proceedings of the Conference on Empirical Methods in Natural Language Processing, pages 347–354. Ainur Yessenalina and Claire Cardie. 2011. Compositional matrix-space models for sentiment analysis. In Proceedings of Conference on Empirical Methods in Natural Language Processing, pages 172–182. Jichang Zhao, Li Dong, Junjie Wu, and Ke Xu. 2012. Moodlens: an emoticon-based sentiment analysis system for chinese tweets. In Proceedings of the 18th ACM SIGKDD international conference on Knowledge discovery and data mining. Xiaoqing Zheng, Hanyang Chen, and Tianyu Xu. 2013. Deep learning for chinese word segmentation and pos tagging. In Proceedings of the Conference on Empirical Methods in Natural Language Processing, pages 647–657. 1565
Proceedings of the 52nd Annual Meeting of the Association for Computational Linguistics, pages 1566–1576, Baltimore, Maryland, USA, June 23-25 2014. ©2014 Association for Computational Linguistics

Towards a General Rule for Identifying Deceptive Opinion Spam

Jiwei Li1, Myle Ott2, Claire Cardie2, Eduard Hovy1
1Language Technology Institute, Carnegie Mellon University, Pittsburgh, P.A. 15213, USA
2Department of Computer Science, Cornell University, Ithaca, N.Y., 14853, USA
[email protected], [email protected], [email protected], [email protected]

Abstract

Consumers' purchase decisions are increasingly influenced by user-generated online reviews. Accordingly, there has been growing concern about the potential for posting deceptive opinion spam—fictitious reviews that have been deliberately written to sound authentic, to deceive the reader. In this paper, we explore generalized approaches for identifying online deceptive opinion spam based on a new gold-standard dataset, which comprises data from three different domains (i.e., Hotel, Restaurant, Doctor), each of which contains three types of reviews: customer-generated truthful reviews, Turker-generated deceptive reviews, and employee (domain-expert) generated deceptive reviews. Our approach tries to capture the general difference in language usage between deceptive and truthful reviews, which we hope will help customers when making purchase decisions and help review portal operators, such as TripAdvisor or Yelp, investigate possible fraudulent activity on their sites.1

1 Introduction

Consumers increasingly rely on user-generated online reviews when making purchase decisions (Cone, 2011; Ipsos, 2012). Unfortunately, the ease of posting content to the Web, potentially anonymously, creates opportunities and incentives for unscrupulous businesses to post deceptive opinion spam—fictitious reviews that are deliberately written to sound authentic, in order to deceive the reader.2 Accordingly, there appears to be widespread and growing concern among both businesses and the public about this potential abuse (Meyer, 2009; Miller, 2009; Streitfeld, 2012; Topping, 2010; Ott, 2013).

Existing approaches for spam detection are usually focused on developing supervised learning-based algorithms to help users identify deceptive opinion spam, which are highly dependent upon high-quality gold-standard labeled data (Jindal and Liu, 2008; Jindal et al., 2010; Lim et al., 2010; Wang et al., 2011; Wu et al., 2010). Studies in the literature rely on a couple of approaches for obtaining labeled data, which usually fall into two categories. The first relies on the judgements of human annotators (Jindal et al., 2010; Mukherjee et al., 2012). However, recent studies show that deceptive opinion spam is not easily identified by human readers (Ott et al., 2011). An alternative approach, introduced by Ott et al. (2011), crowdsourced deceptive reviews using Amazon Mechanical Turk.3 A couple of follow-up works have been introduced based on Ott et al.'s dataset, including estimating the prevalence of deception in online reviews (Ott et al., 2012), identification of negative deceptive opinion spam (Ott et al., 2013), and identifying manipulated offerings (Li et al., 2013b).

1 Dataset available by request from the first author.
2 Manipulating online reviews may also have legal consequences. For example, the Federal Trade Commission (FTC) has updated their guidelines on the use of endorsements and testimonials in advertising to suggest that posting deceptive reviews may be unlawful in the United States (FTC, 2009).
3 http://www.mturk.com
Despite the advantages of soliciting deceptive gold-standard material from Turkers (it is easy, large-scale, and affordable), it is unclear whether Turkers are representative of the general population that generates fake reviews; in other words, Ott et al.'s dataset may correspond to only one type of online deceptive opinion spam: fake reviews generated by people who have never visited the offerings or experienced the entities being reviewed. Specifically, according to their findings (Ott et al., 2011; Li et al., 2013a), truthful hotel reviews encode more spatial details, characterized by terms such as "bathroom" and "location", while deceptive reviews talk about general concepts such as why or with whom they went to the hotel. However, a hotel can instead solicit fake reviews from its employees or customers, who possess substantial domain knowledge and can encode more spatial details in their lies. Indeed, cases have been reported where hotel owners bribe guests in return for good reviews on TripAdvisor4, or where a company ordered its employees to pretend they were satisfied customers and write glowing reviews of its face-lift procedure on Web sites.5 The domain knowledge possessed by domain experts enables them to craft reviews that are much more difficult for classifiers to detect than the crowdsourced fake reviews.

Additionally, existing supervised algorithms in the literature are usually narrowed to one specific domain and rely heavily on domain-specific vocabulary. For example, classifiers assign high weights to domain-specific terms such as "hotel", "rooms", or even hotel names such as "Hilton" when trained on hotel reviews. It is unclear whether these classifiers will perform well at detecting deception in other domains, e.g., Restaurant or Doctor reviews. Even in a single domain, e.g., Hotel, classifiers trained on reviews from one city (e.g., Chicago) may not be effective if directly applied to reviews from other cities (e.g., New York City) (Li et al., 2013b). For the examples in Table 1, we trained a linear SVM classifier on Ott et al.'s Chicago-hotel dataset using unigram features and tested it on a couple of different domains (the details of data acquisition are given in Section 3). Good performance is obtained on Chicago-hotel reviews (Ott et al., 2011), but not as good on New York City ones. The performance is reasonable on Restaurant reviews, due to the many shared properties of restaurants and hotels, but suffers in the Doctor setting.

In this paper, we try to obtain a deeper understanding of the general nature of deceptive opinion spam. One contribution of the work presented here is the creation of the cross-domain (i.e., Hotel, Restaurant and Doctor) gold-standard dataset.

4 http://www.dailymail.co.uk/travel/article2013391/Tripadvisor-Hotel-owners-bribe-guests-returngood-reviews.html
5 http://www.nytimes.com/2009/07/15/technology/internet/15lift.html?_r=0

                     Accuracy   Precision   Recall   F1
NYC-Hotel            0.799      0.794       0.758    0.766
Chicago-Restaurant   0.785      0.813       0.742    0.778
Doctor               0.550      0.537       0.725    0.617

Table 1: SVM performance on different test sets for a classifier trained on Chicago hotel reviews using unigram features.
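The cross-domain baseline reported in Table 1 is essentially a standard linear SVM over binary unigram features, trained on one domain and applied to another. The sketch below is a hedged illustration of that setup using scikit-learn rather than the SVMlight implementation used later in the paper; the load_reviews helper and its domain names are hypothetical placeholders.

```python
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.svm import LinearSVC
from sklearn.metrics import accuracy_score, precision_recall_fscore_support

# Hypothetical loaders: each returns (texts, labels) with 1 = deceptive, 0 = truthful.
train_texts, train_labels = load_reviews("chicago_hotel")   # assumed helper
test_texts, test_labels = load_reviews("doctor")            # assumed helper

# Binary unigram features over the training-domain vocabulary.
vectorizer = CountVectorizer(ngram_range=(1, 1), binary=True)
X_train = vectorizer.fit_transform(train_texts)
X_test = vectorizer.transform(test_texts)   # out-of-vocabulary test words are dropped

clf = LinearSVC(C=1.0)
clf.fit(X_train, train_labels)
pred = clf.predict(X_test)

p, r, f1, _ = precision_recall_fscore_support(test_labels, pred, average="binary")
print("accuracy", accuracy_score(test_labels, pred), "precision", p, "recall", r, "F1", f1)
```

The vocabulary mismatch visible here (test-domain words unseen in training are simply dropped) is one concrete way the reliance on domain-specific vocabulary discussed above hurts transfer to Doctor reviews.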
In contrast to existing work (Ott et al., 2011; Li et al., 2013b), our new gold standard includes three types of reviews: domain expert deceptive opinion spam (Employee), crowdsourced deceptive opinion spam (Turker), and truthful Customer reviews (Customer). In addition, some of domains contain both positive (P) and negative (N) reviews.6 To explore the general rule of deceptive opinion spam, we extended SAGE Model (Eisenstein et al., 2011), a bayesian generative approach that can capture the multiple generative facets (i.e., deceptive vs truthful, positive vs negative, experienced vs non-experienced, hotel vs restaurant vs doctor) in the text collection. We find that more general features, such as LIWC and POS, are more robust when modeled using SAGE, compared with just bag-of-words. We additionally make theoretical contributions that may shed light on a longstanding debate in the literature about deception. For example, in contrast to existing findings that highlight the lack of spatial detail in deceptive reviews (Ott et al., 2011; Li et al., 2013b), we find that a lack of spatial detail may not be a universal cue to deception, since it does not apply to fake reviews written by domain experts. Instead, our finding suggest that other linguistic features may offer more robust cues to deceptive opinion spam, such as overly highlighted sentiment in the review or the overuse of firstperson singular pronouns. The rest of this paper is organized as follows. In Section 2, we briefly go over related work. We describe the creation of our data set in Section 3 and present our model in Section 4. Experimental results are shown in Section 5. We present analysis of general cues to deception in Section 6 and conclude this paper in Section 7. 6For example, a hotel manager could hire people to write positive reviews to increase the reputation of his own hotel or post negative ones to degrade his competitors. Identifying positive/negative opinion spam is explored in (Ott et al., 2011; Ott et al., 2013) 1567 2 Related Work Spam has been historically studied in the contexts of Web text (Gy¨ongyi et al., 2004; Ntoulas et al., 2006) or email (Drucker et al., 1999). Recently there has been increasing concern about deceptive opinion spam (Jindal and Liu, 2008; Ott et al., 2011; Wu et al., 2010; Mukherjee et al., 2013b; Wang et al., 2012). Jindal and Liu (2008) first studied the deceptive opinion problem and trained models using features based on the review text, reviewer, and product to identify duplicate opinions, i.e., opinions that appear more than once in the corpus with similar contexts. Wu et al. (2010) propose an alternative strategy to detect deceptive opinion spam in the absence of a gold standard. Yoo and Gretzel (2009) gathered 40 truthful and 42 deceptive hotel reviews and manually compare the linguistic differences between them. Ott et al. created a gold-standard collection by employing Turkers to write fake reviews, and follow-up research was based on their data (Ott et al., 2012; Ott et al., 2013; Li et al., 2013b; Feng and Hirst, 2013). For example, Song et al. (2012) looked into syntactic features from Context Free Grammar parse trees to improve the classifier performance. A step further, Feng and Hirst (2013) make use of degree of compatibility between the personal experiment and a collection of reference reviews about the same product rather than simple textual features. 
In addition to exploring text or linguistic features in deception, some existing work looks into customers’ behavior to identify deception (Mukherjee et al., 2013a). For example, Mukherjee et al. (2011; 2012) delved into group behavior to identify group of reviewers who work collaboratively to write fake reviews. Qian and Liu (2013) identified multiple user IDs that are generated by the same author, as these authors are more likely to generate deceptive reviews. In the psychological literature, researchers have looked into possible linguistic cues to deception (Newman et al., 2003), such as decreased spatial detail, which is consistent with theories of reality monitoring (Johnson and Raye, 1981), increased negative emotion terms (Newman et al., 2003), or the writing style difference between informative (truthful) and imaginative (deceptive) writings in (Rayson et al., 2001). The former typically consists of more nouns, adjectives, prepositions, determiners, and coordinating conjunctions, while the latter consists of more verbs, adverbs, pronouns, and pre-determiners. SAGE (Sparse Additive Generative Model): SAGE is an generative bayesian approach introduced by Eisenstein et al. (2011), which can be viewed as an combination of topic models (Blei et al., 2003) and generalized additive models (Hastie and Tibshirani, 1990). Unlike other derivatives of topic models, SAGE drops the Dirichlet-multinomial assumption and adopts a Laplacian prior, triggering sparsity in topic-word distribution. The reason why SAGE is tailored for our task is that SAGE constructs multi-faceted latent variable models by simply adding together the component vectors rather than incorporating multiple switching latent variables in multiple facets. 3 Dataset Construction In this section, we report our efforts to gather goldstandard opinion spam datasets. Our datasets contain the following domains, namely Hotel, Restaurant, and Doctor. 3.1 Turker set, using Mechanical Turk Crowdsourcing services such as AMT greatly facilitate large-scale data annotation and collection efforts. Anyone with basic programming skills can create Human Intelligence Tasks (HITs) and access a marketplace of anonymous online workers (Turkers) willing to complete the tasks. We borrowed some rules used by Ott et al. to create their dataset, such as restricting task to Turkers located in the United States, and who maintain an approval rating of at least 90%. Hotel-Turker : We directly borrowed datasets from Ott7 and Li.8 Restaurant-Turker : We gathered 20 positive (P) deceptive reviews for each of 10 of the most popular restaurants in Chicago, for a total of 200 positive deceptive restaurant reviews. Doctor-Turker : We gathered a total number of 200 positive reviews from Turkers. 3.2 Employee set, by domain experts We seek deceptive opinion spam written by people with expert-level domain knowledge. It is not appropriate to use crowdsourcing to obtain this data, 7http://myleott.com/op_spam/ 8http://www.cs.cmu.edu/˜jiweil/html/ four_city.html 1568 Turker Expert Customer Hotel (P/N) 400/400 140/140 400/400 Restaurant (P/N) 200/0 120/0 200/200 Doctor (P/N) 200/0 32/0 200/0 Table 2: Statistics for our dataset. so instead we solicit reviews written by employees in each domain. Hotel-Employee: We asked two hotel employees from each of seven hotels (14 employees total) each to write 10 deceptive positive-sentiment reviews of their own hotel, and 10 deceptive negative-sentiment reviews of their biggest local competitor’s hotel. 
In total, we obtained 280 deceptive reviews of 14 hotels, including a balanced mix of positive- and negative-sentiment reviews.

Restaurant-Employee: We asked employees of selected restaurants (a waiter/waitress or cook) to each write positive-sentiment reviews of their restaurant.

Doctor-Employee: We asked real doctors to write positive fake reviews about themselves. In total we obtained 32 reviews from 15 doctors.

3.3 Customer set from Actual Customers

Hotel-Customer: We borrowed from Ott et al.'s dataset.

Restaurant/Doctor-Customer: We solicited data by matching a set of truthful reviews, as Ott et al. did in collecting truthful hotel reviews.

3.4 Summary for Data Creation

Statistics for our dataset are presented in Table 2. Given the difficulty of obtaining gold-standard data in this area, there is no doubt that our dataset is not perfect: some parts are missing, some are unbalanced, and participants in the survey may not be representative of the general population. However, as far as we know, this is the most comprehensive dataset for deceptive opinion spam so far, and it may to some extent shed light on the nature of online deception.

4 Feature-based Additive Model

In this section, we briefly describe our model. Since the mathematical details are not the main theme of this paper, we omit the exact details of inference, which can be found in (Eisenstein et al., 2011). Before describing the model in detail, we note the following advantages of the SAGE model, and our reasons for using it in this paper:

1. The "additive" nature of SAGE allows a better understanding of which features contribute most to each type of deceptive review and how much each such feature contributes to the final decision jointly. If we instead used SVMs, for example, we would have to train classifiers one by one (due to the distinct features from different sources) to draw conclusions regarding the differences between Turker vs. Expert vs. truthful reviews, positive expert vs. negative expert reviews, or reviews from different domains. This would not only become intractable, but would also make the conclusions less clear.

2. For the cross-domain classification task, standard machine learning approaches may suffer due to domain-specific properties (see Section 5.2).

4.1 Model

In SAGE, each term w is drawn from a distribution proportional to

\exp\big(m^{(w)} + \eta^{(T)(w)}_{z_n} + \eta^{(A)(w)}_{y_d} + \eta^{(I)(w)}_{y_d, z_n}\big),

where m^{(w)} is the observed background term frequency, and \eta^{(T)(w)}_{z_n}, \eta^{(A)(w)}_{y_d} and \eta^{(I)(w)}_{y_d, z_n} denote the log-frequency deviations representing the topic z_n, the facet y_d, and their second-order interaction, respectively; the superscripts T, A and I index the topic, facet, and second-order interaction components.

In our task, we adapt the SAGE model as follows:

Y = \{ y_{Sentiment} \in \{positive, negative\},\ y_{Domain} \in \{hotel, restaurant, doctor\},\ y_{Source} \in \{employee, turker, customer\} \}

We model three \eta's, one for each type of y. Let i, j, k denote the indices of the values of the different types of y, so that each term w is drawn as follows:

P(w \mid i, j, k) \propto \exp\big(m^{(w)} + \eta^{(i)(w)}_{y_{Sentiment}} + \eta^{(j)(w)}_{y_{Domain}} + \eta^{(k)(w)}_{y_{Source}} + \text{higher order}\big),

where the higher-order parts denote the interactions between different facets. In our approach, each document-level feature f is drawn from the following distribution:

P(f \mid i, j, k) \propto \exp\big(m^{(f)} + \eta^{(i)(f)}_{y_{Sentiment}} + \eta^{(j)(f)}_{y_{Domain}} + \eta^{(k)(f)}_{y_{Source}} + \text{higher order}\big)    (1)

where m^{(f)} can be interpreted as the background value of feature f.
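The additive parameterization in Equation (1) is easy to make concrete: the probability of a word (or feature) is a softmax over the vocabulary of the background log-frequency plus the deviation vectors of whichever facets are active. The following is a minimal numerical sketch with made-up deviation values; it is not the authors' inference code.

```python
import numpy as np

def sage_distribution(m, deviations):
    """Additive log-linear distribution in the spirit of Eq. (1):
    P(w | facets) is proportional to exp(m_w + sum of active deviation vectors).

    m:          length-V array of background log-frequencies
    deviations: list of length-V deviation arrays, e.g. one each for the
                sentiment, domain, and source facets (plus any interactions)
    """
    logits = m + np.sum(deviations, axis=0)
    logits -= logits.max()            # numerical stability before exponentiation
    unnorm = np.exp(logits)
    return unnorm / unnorm.sum()

# Toy usage over a 5-word vocabulary; all deviation values are illustrative only.
V = 5
m = np.log(np.array([0.40, 0.30, 0.15, 0.10, 0.05]))
eta_sentiment = np.array([0.0, 0.5, 0.0, -0.2, 0.0])
eta_domain = np.zeros(V)
eta_source = np.array([0.1, 0.0, -0.3, 0.0, 0.2])
p = sage_distribution(m, [eta_sentiment, eta_domain, eta_source])
```

Sparse (Laplacian) priors on the deviation vectors, as in SAGE, would push many of these entries to exactly zero during learning; that part is omitted here.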
For each review d, the probability that it is drawn from facets with indices i, j, k is as follows:

P(d \mid i, j, k) = \prod_{f \in d} P(f \mid i, j, k) \prod_{w \in d} P(w \mid i, j, k)    (2)

In the training process, the parameters \eta^{(w)}_y and \eta^{(f)}_y are learned by maximizing the posterior distribution, following the original SAGE training procedure. For prediction, we estimate y_{Source} for each document, given all or part of \eta^{(w)}_y and \eta^{(f)}_y, as follows:

y_{Source} = \arg\max_{y'_{Source}} P(d \mid y'_{Source}, y_{Sentiment}, y_{Domain}),

where we assume y_{Sentiment} and y_{Domain} are given for each document d. Note that we assume conditional independence between features and words given y, similar to other topic models (Blei et al., 2003). Notably, our revised SAGE model degenerates into a model similar to a Generalized Additive Model (Hastie and Tibshirani, 1990) when word features are not considered.

5 Experiments

In this section, we report our experimental results. We first restrict experiments to the within-domain task and examine which features most characterize deceptive reviews, and how. We later extend the experiments across domains to explore a more general classifier of deceptive opinion spam.

5.1 Intra-Domain Classification

We explore the effect of both domain experts and crowdsourcing workers on intra-domain deception. Specifically, we reframe it as an intra-domain multi-class classification task: given labeled training data from one domain, we learn a classifier to classify reviews according to their source, i.e., Employee, Turker and Customer. Since the classifier is trained and tested within the same domain, \eta^{(j)(w)}_{y_{Domain}} and \eta^{(j)(f)}_{y_{Domain}} are not considered here.

We use a One-Versus-Rest (OvR) scheme, in which we train m classifiers using SAGE, such that each classifier f_i, for i \in [1, m], is trained to distinguish between class i on the one hand and all classes except i on the other. To make an m-way decision, we then choose the class c with the most confident prediction. OvR approaches have been shown to produce state-of-the-art performance compared to other multi-class approaches such as Multinomial Naive Bayes or a One-Versus-One classification scheme.

We train the OvR classifier on three sets of features: LIWC, Unigram, and POS.9 Multi-class classification results are given in Table 3. We report both OvR performance and the performance of three One-versus-One binary classifiers, trained to distinguish between each pair of classes. In particular, the three-class classifier is around 65% accurate at distinguishing between Employee, Customer, and Turker for each of the domains using Unigram features, significantly higher than random guessing. We also observe that each of the three One-versus-One binary classifications performs significantly better than chance, suggesting that Employee, Customer, and Turker are in fact three different classes. In particular, the two-class classifier achieves around 0.76 accuracy in distinguishing between Turker and Employee reviews, despite both kinds of reviews being deceptive opinion spam. The best performance is achieved with Unigram features, which consistently outperform LIWC and POS features in both the three-class and two-class settings in the hotel domain. Similar results are observed for the restaurant and doctor domains; details are omitted for brevity. This suggests that a universal set of keyword-based deception cues (e.g., LIWC) is not the best approach for intra-domain classification. Similar results were also reported in previous work (Ott et al., 2012; Ott, 2013).
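The prediction rule above reduces to summing per-word and per-feature log-probabilities under each candidate source (Equation 2) and taking the argmax, with y_{Sentiment} and y_{Domain} held fixed. The sketch below is a minimal illustration under assumed dictionary inputs; the conditional distributions would come from the trained SAGE model, and the smoothing constant for unseen items is an assumption of this sketch.

```python
import math

def predict_source(word_counts, features, word_probs, feat_probs, sources, eps=1e-10):
    """Argmax prediction of y_Source for one document.

    word_counts: dict word -> count in the document
    features:    list of active document-level features (e.g. LIWC/POS indicators)
    word_probs:  dict source -> dict word -> P(w | source, fixed sentiment, fixed domain)
    feat_probs:  dict source -> dict feature -> P(f | source, fixed sentiment, fixed domain)
    sources:     e.g. ["employee", "turker", "customer"]
    """
    best, best_score = None, float("-inf")
    for s in sources:
        score = sum(c * math.log(word_probs[s].get(w, eps))
                    for w, c in word_counts.items())
        score += sum(math.log(feat_probs[s].get(f, eps)) for f in features)
        if score > best_score:
            best, best_score = s, score
    return best
```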
5.2 Cross-domain Classification In this subsection, we frame our problem as a domain adaptation task (Pan and Yang, 2010). Again, we explore 3 feature sets: LIWC, Unigram and POS. We train a classifier on hotel reviews, and evaluate the performance on other domains. For simplicity, we focus on truthful (Customer) versus deceptive (Turker) binary classification rather than a multi-class classification. We report results from SAGE and SVM10 in Table 4. We first observe that classifiers trained on hotel reviews apply well in the restaurant domain, which is reasonable due to the many shared prop9Part-of-speech tags were assigned based on Stanford Parser http://nlp.stanford.edu/software/ lex-parser.shtml 10We use SVMlight (Joachims, 1999) to train our linear SVM classifiers 1570 Domain Setting Features Customer Employee Turker A P R P R P R Hotel Three-Class Unigram 0.664 0.678 0.669 0.589 0.610 0.641 0.582 LIWC 0.602 0.617 0.613 0.541 0.598 0.590 0.511 POS 0.517 0.532 0.669 0.481 0.479 0.482 0.416 Customer vs Turker Unigram 0.818 0.812 0.840 0.820 0.809 LIWC 0.764 0.774 0.771 0.723 0.749 POS 0.729 0.748 0.692 0.707 0.759 Customer vs Employee Unigram 0.799 0.832 0.784 0.804 0.820 LIWC 0.732 0.746 0.751 0.714 0.722 POS 0.728 0.713 0.742 0.707 0.754 Employee vs Turker Unigram 0.762 0.786 0.806 0.826 0.794 LIWC 0.720 0.728 0.726 0.698 0.739 POS 0.701 0.688 0.710 0.701 0.697 Restaurant Three-Class Unigram 0.647 0.692 0.725 0.625 0.648 0.686 0.702 Customer vs Turker 0.817 0.842 0.816 0.804 0.812 Customer vs Employee 0.785 0.790 0.814 0.769 0.826 Employee vs Turker 0.774 0.784 0.804 0.802 0.763 Doctor Customer vs Turker 0.745 0.772 0.701 0.752 0.718 Table 3: Within-domain multi-class classifier performance. Model Features Domain A P R F1 Domain A P R F1 SVM Unigram Restaurant 0.785 0.813 0.742 0.778 Doctor 0.550 0.537 0.725 0.617 LIWC Restaurant 0.745 0.692 0.840 0.759 Doctor 0.521 0.512 0.965 0.669 POS Restaurant 0.735 0.697 0.815 0.751 Doctor 0.540 0.521 0.975 0.679 SAGE Unigram Restaurant 0.770 0.793 0.750 0.784 Doctor 0.520 0.547 0.705 0.616 LIWC Restaurant 0.742 0.728 0.749 0.738 Doctor 0.647 0.650 0.608 0.628 POS Restaurant 0.746 0.732 0.687 0.701 Doctor 0.634 0.623 0.682 0.651 Table 4: Classifier performance in cross-domain adaptation. erties among restaurants and hotels. Among three types of features, Unigram still performs best. POS and LIWC features are also robust across domains. In the doctor domain, we observe that models trained on Unigram features from the hotels domain do not generalize well to doctor reviews, and the performance is a little bit better than random guess with only 0.55 accuracy. For SVM, models trained on POS and LIWC features achieve even lower accuracy than Unigram. POS and LIWC features obtain around 0.5 precision and 1.0 recall, indicating that all doctor reviews are classified as deceptive by the classifier. One plausible explanation could be doctor reviews generally encode some type of positive-weighted (deceptive) features more than hotel reviews and these types of features dominate the decision making procedures, leading all reviews to be classified as deceptive. Tables 5 and 6 give the top weighted LIWC and POS features. We observe that many features are indeed shared among doctor and hotel domains. Notably, POS features are more robust than LIWC as more shared features are observed. 
As domain specific properties will be considered in the interaction part (ηLIWC domain and ηPOS domain) of the addiLIWC (hotel) LIWC (doctor) deceptive truthful deceptive truthful i AllPct Sixletters present family number past AllPct pronoun hear work social Sixletters we health shehe see space i number posemo dash friend time certain human posemo we leisure exclusive feel you future past perceptual negemo perceptual home leisure Period feel otherpunct insight relativ comma negemo comma ingest cause dash future money Table 5: Top weighted LIWC features for Turker vs Customer in Doctor and Hotel reviews. Blue denotes shared positive (deceptive) features and red denotes negative (truthful) features. tive model, SAGE achieve much better results than SVM, and is around 0.65 accurate in the crossdomain task. 6 General Linguistic Cues of Deceptive Opinion Spam In this section, we examine a number of general POS and LIWC features that may shed light on a general rule for identifying deceptive opinion 1571 Figure 1: Visualization of the η for POS features: Horizontal axes correspond to the values η and are NORMALIZED from the log-frequency function. POS (hotel) POS (doctor) deceptive truthful deceptive truthful PRP$ CD VBD CD PRP RRB NNP VBZ VB LRB VB VBP TO CC TO FW NNP NNS VBG RRB VBG RP PRP$ LRB MD VBN JJS RB VBP IN JJ LS RB EX WRB PDT JJS VBZ PRP VBN Table 6: Top weighted POS features for Turker vs Customer in Doctor and Hotel reviews. Blue denotes shared positive (deceptive) features and red denotes negative (truthful) features. spam. Our modified SAGE model provides us with a tailored tool for this analysis. Specifically, each feature f is associated with a background value mf. For each facet A, ηf A, presents the facetspecific preference value for feature f. Note that sentiments are separated into positive and negative dimensions, which is necessary because hotel employee authors wrote positive-sentiment reviews when reviewing their own hotels, and negativesentiment reviews when reviewing their competitors’ hotels. 6.1 POS features Early findings in the literature (Rayson et al., 2001; Buller and Burgoon, 1996; Biber et al., 1999) found that informative (truthful) writings typically consist of more nouns, adjectives, prepositions, determiners, and coordinating conjunctions, while imaginative (deceptive) writing consist of more verbs, adverbs, pronouns, and predeterminers (with a few exceptions). Our findings with POS features are largely in agreement with these findings when distinguishing between Turker and Customer reviews, but are violated in the Employee set. We present the eight types of POS features in Figure 1, namely, N (Noun), JJ (Adjective), IN (Preposition or subordinating conjunction) and DT (Determiner), V (Verb), RB (Adverb), PRP (Pronouns, both personal and possessive) and PDT (Pre-Determiner). From Figures 1(a)(b)(e)(f), we observe that with the exception of PDT, the word frequency of which is too small to draw a conclusion, Turker and Customer reviews exhibit linguistic patterns in agreement with previous findings in the literature, where truthful reviews (Customer) tend to include more N, JJ, IN and DT, while deceptive writings tend to encode more V, RB and PRP. However, in the case of the Employee-Positive dataset, which is equally deceptive, most of these rules are violated. Notably, reviews from the Employee-Positive set did not encode fewer N, JJ and DT terms, as expected (see Figures 1(a)(c)). 
Instead, they encode even more N, JJ and DT vocabularies than truthful reviews from the Customer reviews. Also, fewer V and RB are found in Employee-Positive reviews compared with Customer reviews (see Figures 1(e)(g)). One explanation for these observations is that informative (truthful) writing tends to be more introductory and descriptive, encoding more concrete details, when compared with imaginary writings. As domain experts possess considerable knowledge of their own offerings, they highlight 1572 Figure 2: Visualization of the η for LIWC features: Horizontal axes correspond to the values η and are normalized from the log-frequency function. the details and their lies may be even more informative and descriptive than those generated by real customers! This explains why EmployeePositive contains more N, IN and DT. Meanwhile, as domain experts are engaged more in talking about the details, they inevitably overlook other information, possibly leading to fewer V and RB. For Employee-Positive reviews, shown in Figures 1(d)(h), it turns out that domain experts do not compensate for their lack of prior experience when writing negative reviews for competitors’ offerings, as we will see again with LIWC features in the next subsection. 6.2 LIWC features We explore 3 LIWC categories (from left to right in subfigures of Figure 2): sentiment (neg emo and pos emo), spatial detail (space), and first-person singular pronouns (first-person). Space: Note that spatial details are more specific in the Hotel and Restaurant domains, which is reflected in the high positive value of ηHotel,space domain (see Figure 2(g)) and negative value of ηDoctor,space domain (see Figure 2(h)). It illustrates how domain-specific details can be predictive of deceptive text. Similarly predictive LIWC features are home for the Hotel domain, ingest for the Restaurant domain, and health and body for the Doctor domain. In Figure 2(i)(j)(k)(l), we can easily see that both actual customers and domain experts encode more spatial details in their reviews (positive value of η), which is in agreement with our expectation. This further demonstrates that a lack of spatial details would not be a general cue for deception. Moreover, it appears that general domain expertise does not compensate for the lack of prior experience when writing deceptive negative reviews for competitors’ hotels, as demonstrated by the lack of spatial details in the negative-sentiment reviews by employees shown in Figure 2(k). Sentiment: According to our findings, the presence of sentiment is a general cue to deceptive opinion spam, as observed when comparing Figure 2(b) to Figure 2(c) and (d). Participants, both Employees and Turkers, tend to exaggerate sentiment, and include more sentiment-related vocabularies in their lies. In other words, positive deceptive reviews were generally more positive and negative deceptive reviews were more negative in sentiment when compared with the truthful reviews generated by actual customers. A similar pattern can also be observed when comparing Figure 2(i) to Figure 2(j). 1573 First-Person Singular Pronouns: The literature also associates deception with decreased usage of first-person singular pronouns, an effect attributed to psychological distancing, whereby deceivers talk less about themselves due either to a lack of personal experience, or to detach themselves from the lie (Newman et al., 2003; Zhou et al., 2004; Buller et al., 1996; Knapp and Comaden, 1979). 
However, according to our findings, we find the opposite to hold. Increased first person singular is an apparent indicator of deception, when comparing Figure 2(b) to 2(c) and 2(e). We suspect that this relates to an effect observed in previous studies of deception, where liars inadvertently undermine their lies by overemphasizing aspects of their deception that they believe reflect credibility (Bond and DePaulo, 2006; DePaulo et al., 2003). One interpretation for this phenomenon would be that deceivers try to overemphasize their physical presence because they believe that this increases their credibility. 7 Conclusion and Discussion In this work, we have developed a multi-domain large-scale dataset containing gold-standard deceptive opinion spam. It includes reviews of Hotels, Restaurants and Doctors, generated through crowdsourcing and domain experts. We study this data using SAGE, which enables us to make observations about the respects in which truthful and deceptive text differs. Our model includes several domain-independent features that shed light on these differences, which further allows us to formulate some general rules for recognizing deceptive opinion spam. We also acknowledge several important caveats to this work. By soliciting fake reviews from participants, including crowd workers and domain experts, we have found that is possible to detect fake reviews with above-chance accuracy, and have used our models to explore several psychological theories of deception. However, it is still very difficult to estimate the practical impact of such methods, as it is very challenging to obtain gold-standard data in the real world. Moreover, by soliciting deceptive opinion spam in an artificial environment, we are endorsing the deception, which may influence the cues that we observe (Feeley and others, 1998; Frank and Ekman, 1997; Newman et al., 2003; Ott, 2013). Finally, it may be possible to train people to tell more convincing lies. Many of the characteristics regarding fake review generation might be overcome by well-trained fake review writers, which would results in opinion spam that is harder for detect. Future work may wish to consider some of these additional challenges. 8 Acknowledgement We thank Wenjie Li and Xun Wang for useful discussions and suggestions. This work was supported in part by National Science Foundation Grant BCS-0904822, a DARPA Deft grant, as well as a gift from Google. We also thank the ACL reviewers for their helpful comments and advice. References Douglas Biber, Stig Johansson, Geoffrey Leech, Susan Conrad, Edward Finegan, and Randolph Quirk. 1999. Longman grammar of spoken and written English, volume 2. MIT Press. David Blei, Andrew Ng, and Michael Jordan. 2003. Latent dirichlet allocation. the Journal of machine Learning research, 3:993–1022. Charles Bond and Bella DePaulo. 2006. Accuracy of deception judgments. Personality and Social Psychology Review, 10(3):214–234. David B Buller and Judee K Burgoon. 1996. Interpersonal deception theory. Communication theory, 6(3):203–242. David B Buller, Judee K Burgoon, Aileen Buslig, and James Roiger. 1996. Testing interpersonal deception theory: The language of interpersonal deception. Communication theory, 6(3):268–289. Paul-Alexandru Chirita, J¨org Diederich, and Wolfgang Nejdl. 2005. Mailrank: using ranking for spam detection. In Proceedings of the 14th ACM international conference on Information and knowledge management, pages 373–380. ACM. Cone. 2011. 2011 Online Influence Trend Tracker. 
http://www.coneinc.com/negative-reviews-onlinereverse-purchase-decisions, August. Bella DePaulo, James Lindsay, Brian Malone, Laura Muhlenbruck, Kelly Charlton, and Harris Cooper. 2003. Cues to deception. Psychological bulletin, 129(1):74. Harris Drucker, Donghui Wu, and Vladimir Vapnik. 1999. Support vector machines for spam categorization. Neural Networks, IEEE Transactions on, 10(5):1048–1054. 1574 Jacob Eisenstein, Amr Ahmed, and Eric P Xing. 2011. Sparse additive generative models of text. In Proceedings of the 28th International Conference on Machine Learning (ICML-11), pages 1041–1048. Thomas Feeley. 1998. The behavioral correlates of sanctioned and unsanctioned deceptive communication. Journal of Nonverbal Behavior, 22(3):189– 204. Vanessa Feng and Graeme Hirst. 2013. Detecting deceptive opinions with profile compatibility. In Proceedings of the 6th International Joint Conference on Natural Language Processing, Nagoya, Japan, pages 14–18. Song Feng, Ritwik Banerjee, and Yejin Choi. 2012. Syntactic stylometry for deception detection. In Proceedings of the 50th Annual Meeting of the Association for Computational Linguistics: Short Papers-Volume 2, pages 171–175. Association for Computational Linguistics. Mark Frank and Paul Ekman. 1997. The ability to detect deceit generalizes across different types of highstake lies. Journal of personality and social psychology, 72(6):1429. Zolt´an Gy¨ongyi, Hector Garcia-Molina, and Jan Pedersen. 2004. Combating web spam with trustrank. In Proceedings of the Thirtieth international conference on Very large data bases-Volume 30, pages 576–587. VLDB Endowment. Trevor J Hastie and Robert J Tibshirani. 1990. Generalized additive models, volume 43. CRC Press. Ipsos. 2012. Socialogue: Five Stars? Thumbs Up? A+ or Just Average? http://www.ipsos-na.com/newspolls/pressrelease.aspx?id=5929. Nitin Jindal and Bing Liu. 2008. Opinion spam and analysis. In Proceedings of the international conference on Web search and web data mining, pages 219–230. ACM. Nitin Jindal, Bing Liu, and Ee-Peng Lim. 2010. Finding unusual review patterns using unexpected rules. In Proceedings of the 19th ACM international conference on Information and knowledge management, pages 1549–1552. ACM. Thorsten Joachims. 1999. Making large scale svm learning practical. Marcia K Johnson and Carol L Raye. 1981. Reality monitoring. Psychological review, 88(1):67. Mark Knapp and Mark Comaden. 1979. Telling it like it isn’t: A review of theory and research on deceptive communications. Human Communication Research, 5(3):270–285. Jiwei Li, Claire Cardie, and Sujian Li. 2013a. Topicspam: a topic-model-based approach for spam detection. In Proceedings of the 51th Annual Meeting of the Association for Computational Linguis-tics. Jiwei Li, Myle Ott, and Claire Cardie. 2013b. Identifying manipulated offerings on review portals. In Proceedings of the 2013 Conference on Empirical Methods in Natural Language Processing, Seattle, Wash, pages 18–21. Ee-Peng Lim, Viet-An Nguyen, Nitin Jindal, Bing Liu, and Hady Wirawan Lauw. 2010. Detecting product review spammers using rating behaviors. In Proceedings of the 19th ACM international conference on Information and knowledge management, pages 939–948. ACM. Juan Martinez-Romo and Lourdes Araujo. 2009. Web spam identification through language model analysis. In Proceedings of the 5th International Workshop on Adversarial Information Retrieval on the Web, pages 21–28. ACM. David Meyer. 2009. Fake reviews prompt belkin apology. Claire Miller. 2009. 
Company settles case of reviews it faked. New York Times. Arjun Mukherjee, Bing Liu, Junhui Wang, Natalie Glance, and Nitin Jindal. 2011. Detecting group review spam. In Proceedings of the 20th international conference companion on World wide web, pages 93–94. ACM. Arjun Mukherjee, Bing Liu, and Natalie Glance. 2012. Spotting fake reviewer groups in consumer reviews. In Proceedings of the 21st international conference on World Wide Web, pages 191–200. ACM. Arjun Mukherjee, Abhinav Kumar, Bing Liu, Junhui Wang, Meichun Hsu, Malu Castellanos, and Riddhiman Ghosh. 2013a. Spotting opinion spammers using behavioral footprints. In Proceedings of the 19th ACM SIGKDD international conference on Knowledge discovery and data mining, pages 632–640. ACM. Arjun Mukherjee, Vivek Venkataraman, Bing Liu, and Natalie Glance. 2013b. What yelp fake review filter might be doing. In Seventh International AAAI Conference on Weblogs and Social Media. Matthew L Newman, James W Pennebaker, Diane S Berry, and Jane M Richards. 2003. Lying words: Predicting deception from linguistic styles. Personality and social psychology bulletin, 29(5):665–675. Alexandros Ntoulas, Marc Najork, Mark Manasse, and Dennis Fetterly. 2006. Detecting spam web pages through content analysis. In Proceedings of the 15th international conference on World Wide Web, pages 83–92. ACM. Myle Ott, Yejin Choi, Claire Cardie, and Jeffrey T. Hancock. 2011. Finding deceptive opinion spam by any stretch of the imagination. In Proceedings of the 49th Annual Meeting of the Association for Computational Linguistics: Human Language Technologies, pages 309–319. 1575 Myle Ott, Claire Cardie, and Jeff Hancock. 2012. Estimating the prevalence of deception in online review communities. In Proceedings of the 21st international conference on World Wide Web, pages 201– 210. ACM. Myle Ott, Claire Cardie, and Jeffrey T. Hancock. 2013. Negative deceptive opinion spam. In Proceedings of the 2013 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, Short Papers, Atlanta, Georgia, USA, June. Association for Computational Linguistics. Myle Ott. 2013. Computational lingustic models of deceptive opinion spam. PHD, thesis. Sinno Pan and Qiang Yang. 2010. A survey on transfer learning. Knowledge and Data Engineering, IEEE Transactions on, 22(10):1345–1359. Tieyun Qian and Bing Liu. 2013. Identifying multiple userids of the same author. In Proceedings of the 2013 Conference on Empirical Methods in Natural Language Processing, Seattle, Wash, pages 18–21. Paul Rayson, Andrew Wilson, and Geoffrey Leech. 2001. Grammatical word class variation within the british national corpus sampler. Language and Computers, 36(1):295–306. David Streitfeld. 2012. For 2 a star, an online retailer gets 5-star product reviews. New York Times., 26. Alexandra Topping. 2010. Historian orlando figes agrees to pay damages for fake reviews. The Guardian., 16. Guan Wang, Sihong Xie, Bing Liu, and Philip Yu. 2011. Review graph based online store review spammer detection. In Data Mining (ICDM), 2011 IEEE 11th International Conference on, pages 1242–1247. IEEE. Guan Wang, Sihong Xie, Bing Liu, and Philip Yu. 2012. Identify online store review spammers via social review graph. ACM Transactions on Intelligent Systems and Technology (TIST), 3(4):61. Guangyu Wu, Derek Greene, Barry Smyth, and P´adraig Cunningham. 2010. Distortion as a validation criterion in the identification of suspicious reviews. 
In Proceedings of the First Workshop on Social Media Analytics, pages 10–13. ACM. Kyung-Hyan Yoo and Ulrike Gretzel. 2009. Comparison of deceptive and truthful travel reviews. In Information and communication technologies in tourism 2009, pages 37–47. Springer. Lina Zhou, Judee K Burgoon, Douglas P Twitchell, Tiantian Qin, and Jay F Nunamaker Jr. 2004. A comparison of classification methods for predicting deception in computer-mediated communication. Journal of Management Information Systems, 20(4):139–166. 1576
Proceedings of the 52nd Annual Meeting of the Association for Computational Linguistics, pages 155–164, Baltimore, Maryland, USA, June 23-25 2014. ©2014 Association for Computational Linguistics

Unsupervised Solution Post Identification from Discussion Forums

Deepak P, IBM Research - India, Bangalore, India, [email protected]
Karthik Visweswariah, IBM Research - India, Bangalore, India, [email protected]

Abstract

Discussion forums have evolved into a dependable source of knowledge to solve common problems. However, only a minority of the posts in discussion forums are solution posts. Identifying solution posts from discussion forums, hence, is an important research problem. In this paper, we present a technique for unsupervised solution post identification leveraging a so far unexplored textual feature, that of lexical correlations between problems and solutions. We use translation models and language models to exploit lexical correlations and solution post character respectively. Our technique is designed to not rely much on structural features such as post metadata, since such features are often not uniformly available across forums. Our clustering-based iterative solution identification approach based on the EM formulation performs favorably in an empirical evaluation, beating the only unsupervised solution identification technique from the literature by a very large margin. We also show that our unsupervised technique is competitive against methods that require supervision, outperforming one such technique comfortably.

1 Introduction

Discussion forums have become a popular knowledge source for finding solutions to common problems. StackOverflow1, a popular discussion forum for programmers, is among the top-100 most visited sites globally2. Now, there are discussion forums for almost every major product, ranging from automobiles3 to gadgets such as those of Mac4 or Samsung5. These typically start with a registered user posting a question/problem6 to which other users respond. Typical response posts include solutions or clarification requests, whereas feedback posts form another major category of forum posts. As is the case with any community of humans, discussion forums have their share of inflammatory remarks too.

1 http://www.stackoverflow.com
2 http://www.alexa.com/siteinfo/stackoverflow.com
3 http://www.cadillacforums.com/
4 https://discussions.apple.com/
5 http://www.galaxyforums.net/
6 We use problem and question, as well as solution and answer, interchangeably in this paper.
7 This problem has been referred to as answer extraction by some earlier papers. However, we use solution identification to refer to the problem since answer and extraction have other connotations in the Question-Answering and Information Extraction communities respectively.

Mining problem-solution pairs from discussion forums has attracted much attention from the scholarly community in the recent past. Since the first post most usually contains the problem description, identifying its solutions from among the other posts in the thread has been the focus of many recent efforts (e.g., (Gandhe et al., 2012; Hong and Davison, 2009)). Extracting problem-solution pairs from forums enables the usage of such knowledge in knowledge reuse frameworks such as case-based reasoning (Kolodner, 1992) that use problem-solution pairs as raw material.

In this paper, we address the problem of unsupervised solution post identification7 from discussion forums. Among the first papers to address the solution identification problem was the unsupervised approach proposed by (Cong et al., 2008). It employs a graph propagation method that prioritizes posts that are (a) more similar to the problem post, (b) more similar to other posts, and (c) authored by a more authoritative user, to be labeled as solution posts.
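To make the graph propagation idea concrete, the sketch below shows one schematic way such scores could be propagated over the reply posts of a thread, blending similarity to the problem post, inter-post similarity, and author authority. It is an illustration of the general idea only, with assumed inputs, and not a faithful reimplementation of the algorithm of Cong et al. (2008).

```python
import numpy as np

def propagate_solution_scores(sim_to_problem, post_post_sim, author_authority,
                              damping=0.85, n_iter=50):
    """Schematic propagation over the replies of one thread: each reply's score
    blends a prior (its similarity to the problem post times its author's
    authority) with the scores of replies that are similar to it.

    sim_to_problem:   length-n array, similarity of each reply to the problem post
    post_post_sim:    n x n array of pairwise reply-reply similarities
    author_authority: length-n array of author authority scores
    """
    prior = sim_to_problem * author_authority
    prior = prior / (prior.sum() + 1e-12)

    # Row-normalize similarities into a stochastic transition matrix.
    W = post_post_sim / (post_post_sim.sum(axis=1, keepdims=True) + 1e-12)

    scores = prior.copy()
    for _ in range(n_iter):
        scores = (1 - damping) * prior + damping * (W.T @ scores)
    return scores   # higher scores = more solution-like under these assumptions
```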
Though seen to be effective in identifying solutions from travel forums, the first two assumptions, (a) and (b), were seen to be not very reliable for solution identification in other kinds of discussion boards. Catherine et al. (2012) report a study illustrating that non-solution posts are, on average, as similar to the problem as solution posts in technical forums. The second assumption (i.e., (b) above) was also not seen to be useful in discussion forums, since posts that are highly similar to other posts were seen to be complaints, repetitive content being more pervasive among complaint posts than solutions (Catherine et al., 2013). Having exhausted the two obvious textual features for solution identification, subsequent approaches have largely used the presence of lexical cues signifying solution-like narrative (e.g., instructive narratives such as "check the router for any connection issues") as the primary content-based feature for solution identification.

All solution identification approaches since (Cong et al., 2008) have used supervised methods that require training data in the form of labeled solution and non-solution posts. The techniques differ from one another mostly in the non-textual features that are employed in representing posts. A variety of high-precision assumptions have been seen to be useful in solution identification, such as: a solution post typically follows a problem post (Qu and Liu, 2011); solution posts are likely to be among the first few posts; solution posts are likely to have been acknowledged by the problem post author (Catherine et al., 2012); and users with high authoritativeness are likely to author solutions (Hong and Davison, 2009). Being supervised methods, the above assumptions are implicitly factored in by including the appropriate feature (e.g., post position in thread) in the feature space, so that the learner may learn the correlation (e.g., solution posts are typically among the first few posts) from the training data.

Though such assumptions on structural features, if generic enough, may be built into unsupervised techniques to aid solution identification, the variation in availability of such features across forums limits the usage of models that rely heavily on structural features. For example, some forums employ chronological-order-based flattening of threads (Seo et al., 2009), making reply-to information unavailable; models that harness reply-to features would then have limited utility in identifying solutions within such flattened threads. On medical forums, privacy considerations may force forum data to be dumped without author information, making a host of author-id based features unavailable. On datasets that contain data from across forums, the model may have to be aware of the absence of certain features in subsets of the data, or be modeled using features that are available on all threads.

Our Contribution: We propose an unsupervised method for solution identification.
The cornerstone of our technique is the usage of a hitherto unexplored textual feature, lexical correlations between problems and solutions, that is exploited along with language model based characterization of solution posts. We model the lexical correlation and solution post character using regularized translation models and unigram language models respectively. To keep our technique applicable across a large variety of forums with varying availability of non-textual features, we design it to be able to work with minimal availability of non-textual features. In particular, we show that by using post position as the only non-textual feature, we are able to achieve accuracies comparable to supervision-based approaches that use many structural features (Catherine et al., 2013). 2 Related Work In this section, we provide a brief overview of previous work related to our problem. Though most of the answer/solution identification approaches proposed so far in literature are supervised methods that require a labeled training corpus, there are a few that require limited or no supervision. Table 1 provides an overview of some of the more recent solution identification techniques from literature, with a focus on some features that we wish to highlight. The common observation that most problem-solving discussion threads have a problem description in the first post has been explicitly factored into many techniques; knowing the problem/question is important for solution identification since author relations between problem and other posts provide valuable cues for solution identification. Most techniques use a variety of such features as noted in Section 1. SVMs have been the most popular method for supervised and semi-supervised learning for the task of solution identification. Of particular interest to us are approaches that use limited or no supervision, since we focus on unsupervised solution identification in this paper. 156 Paper Reference Supervision Assumptions on Features other than Learning Problem Position Post Content Used Technique (Qu and Liu, 2011) Supervised First Post likely HMM assumes Naive Bayes to be problem solution follows problem & HMM (Ding et al., 2008) Supervised First Post Post Position, Author, CRFs Context Posts (Kim et al., 2010) Supervised None Post Position, Author, MaxEnt, Previous Posts, Profile etc. SVM, CRF (Hong and Davison, 2009) Supervised First Post Post Position, Author, SVM Author Authority (Catherine et al., 2012) Supervised First Post Post Position, Author, Problem SVM Author’s activities wrt Post (Catherine et al., 2013) Limited First Post Post Position/Rating, Author, SVMs & Supervision Author Rating, Post Ack Co-Training (Cong et al., 2008) Unsupervised None Author, Author Authority, Graph Relation to Problem Author Propagation Our Method Unsupervised First Post Post Position Translation Models & LM Table 1: Summary of Some Solution Identification Techniquess The only unsupervised approach for the task, that from (Cong et al., 2008), uses a graph propagation method on a graph modeled using posts as vertices, and relies on the assumptions that posts that bear high similarity to the problem and other posts and those authored by authoritative users are more likely to be solution posts. Some of those assumptions, as mentioned in Section 1, were later found to be not generalizable to beyond travel forums. 
The semi-supervised approach presented in (Catherine et al., 2013) uses a few labeled threads to bootstrap SVM based learners which are then co-trained in an iterative fashion. In addition to various features explored in literature, they use acknowledgement modeling so that posts that have been acknowledged positively may be favored for being labeled as solutions. We will use translation and language models in our method for solution identification. Usage of translation models for modeling the correlation between textual problems and solutions have been explored earlier starting from the answer retrieval work in (Xue et al., 2008) where new queries were conceptually expanded using the translation model to improve retrieval. Translation models were also seen to be useful in segmenting incident reports into the problem and solution parts (Deepak et al., 2012); we will use an adaptation of the generative model presented therein, for our solution extraction formulation. Entity-level translation models were recently shown to be useful in modeling correlations in QA archives (Singh, 2012). 3 Problem Definition Let a thread T from a discussion forum be made up of t posts. Since we assume, much like many other earlier papers, that the first post is the problem post, the task is to identify which among the remaining t −1 posts are solutions. There could be multiple (most likely, different) solutions within the same thread. We may now model the thread T as t −1 post pairs, each pair having the problem post as the first element, and one of the t −1 remaining posts (i.e., reply posts in T ) as the second element. Let C = {(p1, r1), (p2, r2), . . . , (pn, rn)} be the set of such problem-reply pairs from across threads in the discussion forum. We are interested in finding a subset C′ of C such that most of the pairs in C′ are problem-solution pairs, and most of those in C−C′ are not so. In short, we would like to find problemsolution pairs from C such that the F-measure8 for solution identification is maximized. 4 Our Approach 4.1 The Correlation Assumption Central to our approach is the assumption of lexical correlation between the problem and solution 8http://en.wikipedia.org/wiki/F1 score 157 texts. At the word level, this translates to assuming that there exist word pairs such that the presence of the first word in the problem part predicts the presence/absence of the second word in the solution part well. Though not yet harnessed for solution identification, the correlation assumption is not at all novel. Infact, the assumption that similar problems have similar solutions (of which the correlation assumption is an offshoot) forms the foundation of case-based reasoning systems (Kolodner, 1992), a kind of knowledge reuse systems that could be the natural consumers of problem-solution pairs mined from forums. The usage of translation models in QA retrieval (Xue et al., 2008; Singh, 2012) and segmentation (Deepak et al., 2012) were also motivated by the correlation assumption. We use an IBM Model 1 translation model (Brown et al., 1990) in our technique; simplistically, such a model m may be thought of as a 2-d associative array where the value m[w1][w2] is directly related to the probability of w1 occuring in the problem when w2 occurs in the solution. 4.2 Generative model for Solution Posts Consider a unigram language model SS that models the lexical characteristics of solution posts, and a translation model TS that models the lexical correlation between problems and solutions. 
Our generative model models the reply part of a (p, r) pair (in which r is a solution) as being generated from the statistical models in {SS, TS} as follows. • For each word ws occuring in r, 1. Choose z ∼U(0, 1) 2. If z ≤λ, Choose w ∼Mult(SS) 3. Else, Choose w ∼Mult(T p S ) where T p S denotes the multionomial distribution obtained from TS conditioned over the words in the post p; this is obtained by assigning each candidate solution word w a weight equal to avg{TS[w′][w]|w′ ∈p}, and normalizing such weights across all solution words. In short, each solution word is assumed to be generated from the language model or the translation model (conditioned on the problem words) with a probability of λ and 1 −λ respectively, thus accounting for the correlation assumption. The generative model above is similar to the proposal in (Deepak et al., 2012), adapted suitably for our scenario. We model non-solution posts similarly with the sole difference being that they would be sampled from the analogous models SN and TN that characterize behavior of non-solution posts. Example: Consider the following illustrative example of a problem and solution post: • Problem: I am unable to surf the web on the BT public wifi. • Solution: Maybe, you should try disconnecting and rejoining the network. Of the solution words above, generic words such as try and should could probably be explained by (i.e., sampled from) the solution language model, whereas disconnect and rejoin could be correlated well with surf and wifiand hence are more likely to be supported better by the translation model. 4.3 Clustering-based Approach We propose a clustering based approach so as to cluster each of the (p, r) pairs into either the solution cluster or the non-solution cluster. The objective function that we seek to maximize is the following: X (p,r)∈C ( F((p, r), SS, TS) if label((p,r))=S F((p, r), SN, TN) if label((p,r))=N (1) F((p, r), S, T ) indicates the conformance of the (p, r) pair (details in Section 4.3.1) with the generative model that uses the S and T models as the language and translation models respectively. The clustering based approach labels each (p, r) pair as either solution (i.e., S) or non-solution (i.e., N). Since we do not know the models or the labelings to start with, we use an iterative approach modeled on the EM meta-algorithm (Dempster et al., 1977) involving iterations, each comprising of an E-step followed by the M-step. For simplicity and brevity, instead of deriving the EM formulation, we illustrate our approach by making an analogy with the popular K-Means clustering (MacQueen, 1967) algorithm that also uses the EM formulation and crisp assignments of data points like we do. K-Means is a clustering algorithm that clusters objects represented as multi-dimensional points into k clusters where each cluster is represented by the centroid of all its members. 
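Before continuing the K-Means analogy, the generative story above can be made concrete with a small sketch. The snippet below is a minimal illustration under assumed data structures (SS as a {word: probability} unigram table, TS as a nested {problem word: {solution word: score}} table); it is not the authors' implementation, and the function names are invented for the example.

```python
import random

def conditional_tm(TS, problem_words, candidate_words):
    """Build T_S^p: weight each candidate solution word by the average
    translation score against the problem words, then normalize."""
    n = max(len(problem_words), 1)
    weights = {w: sum(TS.get(pw, {}).get(w, 0.0) for pw in problem_words) / n
               for w in candidate_words}
    total = sum(weights.values()) or 1.0
    return {w: v / total for w, v in weights.items()}

def sample_word(dist):
    """Draw one word from a {word: probability} multinomial."""
    r, acc = random.random(), 0.0
    for w, p in dist.items():
        acc += p
        if r <= acc:
            return w
    return w  # fall back to the last word if rounding leaves a tiny remainder

def generate_reply(problem_words, SS, TS, candidate_words, lam=0.5, length=10):
    """Generative story for a solution reply: each word is drawn from the
    solution language model SS with probability lam, and otherwise from the
    translation model conditioned on the problem words (T_S^p)."""
    TSp = conditional_tm(TS, problem_words, candidate_words)
    return [sample_word(SS) if random.random() <= lam else sample_word(TSp)
            for _ in range(length)]
```

Non-solution replies follow the same story with SN and TN in place of SS and TS.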
Each iteration in K-Means starts off with assigning each 158 In K-Means In Our Approach Data Multi-dimensional Points (p, r) pairs Cluster Model Respective Centroid Vector Respective S and T Models for each cluster Initialization Random Choice of Centroids Models learnt using (p, r) pairs labeled using the Post Position of r E-Step label(d) = label((p, r)) = arg maxi F((p, r), Si, Ti) arg mini dist(d, centroidi) (Sec 4.3.1), and learn solution word source probabilities (Sec 4.3.2) M-Step centroidi = avg{d|label(d) = i} Re-learn SS and TS using pairs labeled S SN and TN using pairs labeled N (Sec 4.3.3) Output The clustering of points (p, r) pairs labeled as S Table 2: Illustrating Our Approach wrt K-Means Clustering data object to its nearest centroid, followed by recomputing the centroid vector based on the assignments made. The analogy with K-Means is illustrated in Table 2. Though the analogy in Table 2 serves to provide a high-level picture of our approach, the details require further exposition. In short, our approach is a 2-way clustering algorithm that uses two pairs of models, [SS, TS] and [SN, TN], to model solution pairs and non-solution pairs respectively. At each iteration, the post-pairs are labeled as either solution (S) or non-solution (N) based on which pair of models they better conform to. Within the same iteration, the four models are then re-learnt using the labels and other side information. At the end of the iterations, the pairs labeled S are output as solution pairs. We describe the various details in separate subsections herein. 4.3.1 E-Step: Estimating Labels As outlined in Table 2, each (p, r) pair would be assigned to one of the classes, solution or non-solution, based on whether it conforms better with the solution models (i.e., SS & TS) or nonsolution models (SN & TN), as determined using the F((p, r), S, T ) function, i.e., label((p, r)) = arg max i∈{S,N} F((p, r), Si, Ti) F(.) falls out of the generative model: F((p, r), S, T ) = Y w∈r λ×S[w]+(1−λ)×T p[w] where S[w] denotes the probability of w from S and T p[w] denotes the probability of w from the multinomial distribution derived from T conditioned over the words in p, as in Section 4.2. 4.3.2 E-Step: Estimating Reply Word Source Since the language and translation models operate at the word level, the objective function entails that we let the models learn based on their fractional contribution of the words from the language and translation models. Thus, we estimate the proportional contribution of each word from the language and translation models too, in the E-step. The fractional contributions of the word w ∈r in the (p, r) pair labeled as solution (i.e., S) is as follows: f(p,r) SS (w) = SS[w] SS[w] + T p S [w] f(p,r) TS (w) = T p S [w] SS[w] + T p S [w] The fractional contributions are just the actual supports for the word w, normalized by the total contribution for the word from across the two models. Similar estimates, f(p,r) SN (.) and f(p,r) SN (.) are made for reply words from pairs labeled N. In our example from Section 4.2, words such as rejoin are likely to get higher f(p,r) TS (.) scores due to being better correlated with problem words and consequently better supported by the translation model; those such as try may get higher f(p,r) SS (.) scores. 4.3.3 M-Step: Learning Models We use the labels and reply-word source estimates from the E-step to re-learn the language and translation models in this step. 
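The E-step computations that feed these M-step updates can be summarized in a short sketch. It assumes the translation model has already been conditioned on the problem words (so T^p is a plain {word: probability} dictionary, as in the earlier sketch), works in log space to avoid underflow, and backs off to a tiny constant for unseen words; these are conveniences of the illustration, not details from the paper.

```python
import math

def log_conformance(reply_words, S, Tp, lam=0.5, eps=1e-12):
    """log F((p, r), S, T) = sum over w in r of log(lam * S[w] + (1 - lam) * T^p[w])."""
    return sum(math.log(lam * S.get(w, eps) + (1 - lam) * Tp.get(w, eps))
               for w in reply_words)

def e_step(reply_words, solution_models, nonsolution_models, lam=0.5, eps=1e-12):
    """Label one (p, r) pair as S or N, then split each reply word's mass
    between the language model and the translation model of the winning class."""
    candidates = {"S": solution_models, "N": nonsolution_models}  # (S_x, T^p_x) pairs
    label = max(candidates,
                key=lambda k: log_conformance(reply_words, *candidates[k], lam=lam))
    S, Tp = candidates[label]
    fractions = {}
    for w in reply_words:
        s, t = S.get(w, eps), Tp.get(w, eps)
        fractions[w] = {"LM": s / (s + t), "TM": t / (s + t)}  # f_S(w), f_T(w)
    return label, fractions
```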
As may be obvious from the ensuing discussion, those pairs labeled as solution pairs are used to learn the SS and TS models and those labeled as non-solution pairs are 159 used to learn the models with subscript N. We let each reply word contribute as much to the respective language and translation models according to the estimates in Section 4.3.2. In our example, if the word disconnect is assigned a source probability of 0.9 and 0.1 for the translation and language models respectively, the virtual documentpair from (p, r) that goes into the training of the respective T model would assume that disconnect occurs in r with a frequency of 0.9; similarly, the respective S would account for disconnect with a frequency of 0.1. Though fractional word frequencies are not possible in real documents, statistical models can accomodate such fractional frequencies in a straightforward manner. The language models are learnt only over the r parts of the (p, r) pairs since they are meant to characterize reply behavior; on the other hand, translation models learn over both p and r parts to model correlation. Regularizing the T models: In our formulation, the language and translation models may be seen as competing for ”ownership” of reply words. Consider the post and reply vocabularies to be of sizes A and B respectively; then, the translation model would have A × B variables, whereas the unigram language model has only B variables. This gives the translation model an implicit edge due to having more parameters to tune to the data, putting the language models at a disadvantage. To level off the playing field, we use a regularization9 operation in the learning of the translation models. The IBM Model 1 learning process uses an internal EM approach where the Estep estimates the alignment vector for each problem word; this vector indicates the distribution of alignments of the problem word across the solution words. In our example, an example alignment vector for wificould be: {rejoin : 0.4, network : 0.4, disconnect : 0.1, . . .}. Our regularization method uses a parameter τ to discard the long tail in the alignment vector by resetting entries having a value ≤τ to 0.0 followed by re-normalizing the alignment vector to add up to 1.0. Such pruning is performed at each iteration in the learning of the translation model, so that the following M-steps learn the probability matrix according to such modified alignment vectors. The semantics of the τ parameter may be in9We use the word regularization in a generic sense to mean adapting models to avoid overfitting; in particular, it may be noted that we are not using popular regularization methods such as L1-regularization. Alg. 1 Clustering-based Solution Identification Input. C, a set of (p, r) pairs Output. C′, the set of identified solution pairs Initialization 1. ∀(p, r) ∈C 2. if(r.postpos = 2) label((p, r)) = S 3. else label((p, r)) = N 4. Learn SS & TS using pairs labeled S 5. Learn SN & TN using pairs labeled N EM Iterations 6. while(not converged ∧#Iterations < 10) E-Step: 7. ∀(p, r) ∈C 8. label((p, r)) = arg maxi F((p, r), Si, Ti) 9. ∀w ∈r 10. Estimate f(p,r) Slabel(p,r)(w) , f(p,r) Tlabel(p,r)(w) M-Step: 11. Learn SS & TS from pairs labeled S using the f(p,r) SS (.) f(p,r) TS (.) estimates 12. Learn SN & TN from pairs labeled N using the f(p,r) SN (.) f(p,r) TN (.) estimates Output 13. Output (p, r) pairs from C with label((p, r)) = S as C′ tuitively outlined. 
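Before that intuition, the pruning operation itself can be sketched in a few lines. The alignment vector is assumed to be a {solution word: probability} dictionary produced by the inner E-step of the IBM Model 1 learner; the guard for an empty result is an addition of the sketch, not a detail from the paper.

```python
def prune_alignment(alignment, tau=0.4):
    """Regularize one alignment vector: zero out entries with value <= tau,
    then renormalize the survivors so they again sum to 1."""
    kept = {w: p for w, p in alignment.items() if p > tau}
    if not kept:  # guard: if everything falls below tau, keep the single best entry
        best = max(alignment, key=alignment.get)
        kept = {best: alignment[best]}
    total = sum(kept.values())
    return {w: p / total for w, p in kept.items()}

# hypothetical alignment vector for the problem word "wifi"
# prune_alignment({"rejoin": 0.45, "network": 0.42, "disconnect": 0.08, "your": 0.05})
# -> {"rejoin": 0.517..., "network": 0.482...}
```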
If we would like to allow alignment vectors to allow a problem word to align with upto two reply words, we would need to set τ to a value close to 0.5(= 1 2); ideally though, to allow for the mass consumed by an almost inevitable long tail of very low values in the alignment vector, we would need to set it to slightly lower than 0.5, say 0.4. 4.3.4 Initialization K-Means clustering mostly initializes centroid vectors randomly; however, it is non-trivial to initialize the complex translation and language models randomly. Moreover, an initialization such that the SS and TS models favor the solution pairs more than the non-solution pairs is critical so that they may progressively lean towards modeling solution behaviour better across iterations. Towards this, we make use of a structural feature; in particular, adapting the hypothesis that solutions occur in the first N posts (Ref. (Catherine et al., 2012)), we label the pairs that have the the reply from the second post (note that the first post is assumed to be the problem post) in the thread as a solution 160 post, and all others as non-solution posts. Such an initialization along with uniform reply word source probabilities is used to learn the initial estimates of the SS, TS, SN and TN models to be used in the E-step for the first iteration. We will show that we are able to effectively perform solution identification using our approach by exploiting just one structural feature, the post position, as above. However, we will also show that we can exploit other features as and when available, to deliver higher accuracy clusterings. 4.3.5 Method Summary The overall method comprising the steps that have been described is presented in Algorithm 1. The initialization using the post position (Ref. Sec 4.3.4) is illustrated in Lines 1-5, whereas the EM-iterations form Steps 6 through 12. Of these, the E-step incorporates labeling (Line 8) as described in Sec 4.3.1 and reply-word source estimation (Line 10) detailed in Sec 4.3.2. The models are then re-learnt in the M-Step (Lines 11-12) as outlined in Sec 4.3.3. At the end of the iterations that may run up to 10 times if the labelings do not stabilize earlier, the pairs labeled S are output as identified solutions (Line 13). Time Complexity: Let n denote |C|, and the number of unique words in each problem and reply post be a and b respectively. We will denote the vocabulary size of problem posts as A and that of reply posts as B. Learning of the language and translation models in each iteration costs O(nb + B) and O(k′(nab + AB)) respectively (assuming the translation model learning runs for k′ iterations). The E-step labeling and source estimation cost O(nab) each. For k iterations of our algorithm, this leads to an overall complexity of O(kk′(nab + AB)). 5 Experimental Evaluation We use a crawl of 140k threads from Apple Discussion forums10. Out of these, 300 threads (comprising 1440 posts) were randomly chosen and each post was manually tagged as either solution or non-solution by the authors of (Catherine et al., 2013) (who were kind enough to share the data with us) with an inter-annotator agreement11 of 0.71. On an average, 40% of replies in each thread and 77% of first replies were seen to be solutions, 10http://discussions.apple.com 11http://en.wikipedia.org/wiki/Cohen’s kappa Figure 1: F% (Y) vs. 
#Iterations (X) TS ProblemWord, SolutionWord TS[p][s] network, guest 0.0754 connect, adaptor 0.0526 wireless, adaptor 0.0526 translat, shortcut 0.0492 updat, rebuilt 0.0405 SS SolutionWord SS[s] your 0.0115 try 0.0033 router 0.0033 see 0.0033 password 0.0023 Table 4: Sample TS and SS Estimates leading to an F-measure of 53% for our initialization heuristic. We use the F-measure12 for solution identification, as the primary evaluation measure. While we vary the various parameters separately in order to evaluate the trends, we use a dataset of 800 threads (containing the 300 labeled threads) and set λ = 0.5 and τ = 0.4 unless otherwise mentioned. Since we have only 300 labeled threads, accuracy measures are reported on those (like in (Catherine et al., 2013)). We pre-process the post data by stemming words (Porter, 1980). 5.1 Quality Evaluation In this study, we compare the performance of our method under varying settings of λ against the only unsupervised approach for solution identification from literature, that from (Cong et al., 2008). We use an independent implementation of the technique using Kullback-Leibler Divergence (Kullback, 1997) as the similarity measure between posts; KL-Divergence was seen to perform best in the experiments reported in (Cong et al., 2008). Table 3 illustrates the comparative performance 12http://en.wikipedia.org/wiki/F1 score 161 Technique Precision Recall F-Measure Unsupervised Graph Propagation (Cong et al., 2008) 29.7 % 55.6 % 38.7 % Our Method with only Translation Models (λ = 0.0) 41.8 % 86.8 % 56.5 % Our Method with only Language Models (λ = 1.0) 63.2 % 62.1 % 62.6 % Our Method with Both Models (λ = 0.5) 61.3 % 66.9 % 64.0 % Methods using Supervision (Catherine et al., 2013) ANS CT 40.6 % 88.0 % 55.6 % ANS-ACK PCT 56.8 % 84.1 % 67.8% Table 3: Quality Evaluation Figure 2: F% (Y) vs. λ (X) Figure 3: F% (Y) vs. τ (X) Figure 4: F% (Y) vs. #Threads (X) on various quality metrics, of which F-Measure is typically considered most important. Our pureLM13 setting (i.e., λ = 1) was seen to perform up to 6 F-Measure points better than the pure-TM14 setting (i.e., λ = 0), whereas the uniform mix is seen to be able to harness both to give a 1.4 point (i.e., 2.2%) improvement over the pure-LM case. The comparison with the approach from (Cong et al., 2008) illustrates that our method is very clearly the superior method for solution identification outperforming the former by large margins on all the evaluation measures, with the improvement on Fmeasure being more than 25 points. Comparison wrt Methods from (Catherine et al., 2013): Table 3 also lists the performance of SVM-based methods from (Catherine et al., 2013) that use supervised information for solution identification, to help put the performance of our technique in perspective. Of the two methods therein, ANS CT is a more general method that uses two views (structural and lexical) of solutions which are then co-trained. ANS-ACK PCT is an enhanced method that requires author-id information and a means of classifying posts as acknowledgements (which is done using additional supervision); a post being acknowledged by the problem author is then used as a signal to enhance the solution-ness of a post. In the absence of author information (such as may be common in 13Language Model 14Translation Model privacy-constrained domains such as medical forums) and extrinsic information to enable identify acknowledgements, ANS CT is the only technique available. 
Our technique is seen to outperform ANS CT by a respectable margin (8.6 F-measure points) while trailing behind the enhanced ANSACK PCT method with a reasonably narrow 3.8 F-measure point margin. Thus, our unsupervised method is seen to be a strong competitor even for techniques using supervision outlined in (Catherine et al., 2013), illustrating the effectiveness of LM and TM modeling of reply posts. Across Iterations: For scenarios where computation is at a premium, it is useful to know how quickly the quality of solution identification stabilizes, so that the results can be collected after fewer iterations. Figure 1 plots the F-measure across iterations for the run with λ = 0.5, τ = 0.4 setting, where the F-measure is seen to stabilize in as few as 4-5 iterations. Similar trends were observed for other runs as well, confirming that the run may be stopped as early as after the fourth iteration without considerable loss in quality. Example Estimates from LMs and TMs: In order to understand the behavior of the statistical models, we took the highest 100 entries from both SS and TS and attempted to qualitatively evaluate semantics of the words (or word pairs) corresponding to those. Though the stemming made it hard to make sense of some entries, we present some of the understandable entries from among 162 the top-100 in Table 4. The first three entries from TS deal with connection issues for which adaptor or guest account related solutions are proposed, whereas the remaining have something to do with the mac translator app and rebuilding libraries after an update. The top words from SS include imperative words and words from solutions to common issues that include actions pertaining to the router or password. 5.2 Varying Parameter Settings We now analyse the performance of our approach against varying parameter settings. In particular, we vary λ and τ values and the dataset size, and experiment with some initialization variations. Varying λ: λ is the weighting parameter that indicates the fraction of weight assigned to LMs (vis-a-vis TMs). As may be seen from Figure 2, the quality of the results as measured by the Fmeasure is seen to peak around the middle (i.e., λ = 0.5), and decline slowly towards either extreme, with a sharp decline at λ = 0 (i.e., pureTM setting). This indicates that a uniform mix is favorable; however, if one were to choose only one type of model, usage of LMs is seen to be preferable than TMs. Varying τ: τ is directly related to the extent of pruning of TMs, in the regularization operation; all values in the alignment vector ≤τ are pruned. Thus, each problem word is roughly allowed to be aligned with at most ∼ 1 τ solution words. The trends from Figure 3 suggests that allowing a problem word to be aligned to up to 2.5 solution words (i.e., τ = 0.4) is seen to yield the best performance though the quality decline is graceful towards either side of the [0.1, 0.5] range. Varying Data Size: Though more data always tends to be beneficial since statistical models benefit from redundancy, the marginal utility of additional data drops to very small levels beyond a point; we are interested in the amount of data beyond which the quality of solution identification flattens out. Figure 4 suggests that there is a sharp improvement in quality while increasing the amount of data from 300 threads (i.e., 1440 (p, r) pairs) to 550 (2454 pairs), whereas the increment is smaller when adding another 250 pairs (total of 3400 pairs). 
Beyond 800 threads, the Fmeasure was seen to flatten out rapidly and stabilize at ∼64%. Initialization: In Apple discussion forums, posts by Apple employees that are labeled with the Apple employees tag (approximately ∼7% of posts in our dataset) tend to be solutions. So are posts that are marked Helpful (∼3% of posts) by other users. Being specific to Apple forums, we did not use them for initialization in experiments so far with the intent of keeping the technique generic. However, when such posts are initialized as solutions (in addition to first replies as we did earlier), the F-score for solution identification for our technique was seen to improve slightly, to 64.5% (from 64%). Thus, our technique is able to exploit any extra solution identifying structural features that are available. 6 Conclusions and Future Work We considered the problem of unsupervised solution post identification from discussion forum threads. Towards identifying solutions to the problem posed in the initial post, we proposed the usage of a hitherto unexplored textual feature for the solution identification problem; that of lexical correlations between problems and solutions. We model and harness lexical correlations using translation models, in the company of unigram language models that are used to characterize reply posts, and formulate a clustering-based EM approach for solution identification. We show that our technique is able to effectively identify solutions using just one non-content based feature, the post position, whereas previous techniques in literature have depended heavily on structural features (that are not always available in many forums) and supervised information. Our technique is seen to outperform the sole unsupervised solution identification technique in literature, by a large margin; further, our method is even seen to be competitive to recent methods that use supervision, beating one of them comfortably, and trailing another by a narrow margin. In short, our empirical analysis illustrates the superior performance and establishes our method as the method of choice for unsupervised solution identification. Exploration into the usage of translation models to aid other operations in discussion forums such as proactive word suggestions for solution authoring would be interesting direction for follow-up work. Discovery of problem-solution pairs in cases where the problem post is not known beforehand, would be a challenging problem to address. 163 References Peter F Brown, John Cocke, Stephen A Della Pietra, Vincent J Della Pietra, Fredrick Jelinek, John D Lafferty, Robert L Mercer, and Paul S Roossin. 1990. A statistical approach to machine translation. Computational linguistics, 16(2):79–85. Rose Catherine, Amit Singh, Rashmi Gangadharaiah, Dinesh Raghu, and Karthik Visweswariah. 2012. Does similarity matter? the case of answer extraction from technical discussion forums. In COLING (Posters), pages 175–184. Rose Catherine, Rashmi Gangadharaiah, Karthik Visweswariah, and Dinesh Raghu. 2013. Semisupervised answer extraction from discussion forums. In IJCNLP. Gao Cong, Long Wang, Chin-Yew Lin, Young-In Song, and Yueheng Sun. 2008. Finding question-answer pairs from online forums. In Proceedings of the 31st annual international ACM SIGIR conference on Research and development in information retrieval, pages 467–474. ACM. P. Deepak, Karthik Visweswariah, Nirmalie Wiratunga, and Sadiq Sani. 2012. Two-part segmentation of text documents. In CIKM, pages 793–802. 
Arthur P Dempster, Nan M Laird, and Donald B Rubin. 1977. Maximum likelihood from incomplete data via the em algorithm. Journal of the Royal Statistical Society. Series B (Methodological), pages 1– 38. Shilin Ding, Gao Cong, Chin-Yew Lin, and Xianyan Zhu. 2008. Using conditional random fields to extract contexts and answers of questions from online forums. In ACL. Ankur Gandhe, Dinesh Raghu, and Rose Catherine. 2012. Domain adaptive answer extraction for discussion boards. In Proceedings of the 21st international conference companion on World Wide Web, pages 501–502. ACM. Liangjie Hong and Brian D Davison. 2009. A classification-based approach to question answering in discussion boards. In Proceedings of the 32nd international ACM SIGIR conference on Research and development in information retrieval, pages 171– 178. ACM. Su Nam Kim, Li Wang, and Timothy Baldwin. 2010. Tagging and linking web forum posts. In Proceedings of the Fourteenth Conference on Computational Natural Language Learning, pages 192–202. Association for Computational Linguistics. Janet L Kolodner. 1992. An introduction to case-based reasoning. Artificial Intelligence Review, 6(1):3–34. Solomon Kullback. 1997. Information theory and statistics. Courier Dover Publications. James MacQueen. 1967. Some methods for classification and analysis of multivariate observations. In Proceedings of the fifth Berkeley symposium on mathematical statistics and probability, volume 1, page 14. California, USA. Martin F Porter. 1980. An algorithm for suffix stripping. Program: electronic library and information systems, 14(3):130–137. Zhonghua Qu and Yang Liu. 2011. Finding problem solving threads in online forum. In IJCNLP, pages 1413–1417. Jangwon Seo, W Bruce Croft, and David A Smith. 2009. Online community search using thread structure. In Proceedings of the 18th ACM conference on Information and knowledge management, pages 1907–1910. ACM. Amit Singh. 2012. Entity based q&a retrieval. In EMNLP-CoNLL, pages 1266–1277. Xiaobing Xue, Jiwoon Jeon, and W. Bruce Croft. 2008. Retrieval models for question and answer archives. In SIGIR, pages 475–482. 164
Proceedings of the 52nd Annual Meeting of the Association for Computational Linguistics, pages 165–174, Baltimore, Maryland, USA, June 23-25 2014. c⃝2014 Association for Computational Linguistics Weakly Supervised User Profile Extraction from Twitter Jiwei Li1, Alan Ritter2, Eduard Hovy1 1Language Technology Institute, 2Machine Learning Department Carnegie Mellon University, Pittsburgh, PA 15213, USA [email protected], [email protected], [email protected] Abstract While user attribute extraction on social media has received considerable attention, existing approaches, mostly supervised, encounter great difficulty in obtaining gold standard data and are therefore limited to predicting unary predicates (e.g., gender). In this paper, we present a weaklysupervised approach to user profile extraction from Twitter. Users’ profiles from social media websites such as Facebook or Google Plus are used as a distant source of supervision for extraction of their attributes from user-generated text. In addition to traditional linguistic features used in distant supervision for information extraction, our approach also takes into account network information, a unique opportunity offered by social media. We test our algorithm on three attribute domains: spouse, education and job; experimental results demonstrate our approach is able to make accurate predictions for users’ attributes based on their tweets.1 1 Introduction The overwhelming popularity of online social media creates an opportunity to display given aspects of oneself. Users’ profile information in social networking websites such as Facebook2 or Google Plus3 provides a rich repository personal information in a structured data format, making it amenable to automatic processing. This includes, for example, users’ jobs and education, and provides a useful source of information for applications such as search4, friend recommendation, on1Both code and data are available at http://aclweb. org/aclwiki/index.php?title=Profile_data 2https://www.facebook.com/ 3https://plus.google.com/ 4https://www.facebook.com/about/ graphsearch @[shanenicholson] has taken all the kids today so I can go shopping-CHILD FREE! #iloveyoushano #iloveyoucreditcard Tamworth promo day with my handsome classy husband @[shanenicholson] Spouse: shanenicholson I got accepted to be part of the UofM engineering safety pilot program in [FSU] Here in class. (@ [Florida State University] - Williams Building) Don’t worry , guys ! Our beloved [FSU] will always continue to rise ” to the top ! Education: Florida State University (FSU) first day of work at [HuffPo], a sports bar woo come visit me yo.. start to think we should just add a couple desks to the [HuffPo] newsroom for Business Insider writers just back from [HuffPo], what a hell ! Job: HuffPo Table 1: Examples of Twitter message clues for user profile inference. line advertising, computational social science and more. Although profiles exist in an easy-to-use, structured data format, they are often sparsely populated; users rarely fully complete their online profiles. Additionally, some social networking services such as Twitter don’t support this type of structured profile data. It is therefore difficult to obtain a reasonably comprehensive profile of a user, or a reasonably complete facet of information (say, education level) for a class of users. 
While many users do not explicitly list all their personal information in their online profile, their user generated content often contains strong evidence to suggest many types of user attributes, for example education, spouse, and employment (See Table 1). Can one use such information to infer more details? In particular, can one exploit indirect clues from an unstructured data source like Twitter to obtain rich, structured user profiles? In this paper we demonstrate that it is feasible to automatically extract Facebook-style pro165 files directly from users’ tweets, thus making user profile data available in a structured format for upstream applications. We view user profile inference as a structured prediction task where both text and network information are incorporated. Concretely, we cast user profile prediction as binary relation extraction (Brin, 1999), e.g., SPOUSE(Useri, Userj), EDUCATION(Useri, Entityj) and EMPLOYER(Useri, Entityj). Inspired by the concept of distant supervision, we collect training tweets by matching attribute ground truth from an outside “knowledge base” such as Facebook or Google Plus. One contribution of the work presented here is the creation of the first large-scale dataset on three general Twitter user profile domains (i.e., EDUCATION, JOB, SPOUSE). Experiments demonstrate that by simultaneously harnessing both text features and network information, our approach is able to make accurate user profile predictions. We are optimistic that our approach can easily be applied to further user attributes such as HOBBIES and INTERESTS (MOVIES, BOOKS, SPORTS or STARS), RELIGION, HOMETOWN, LIVING LOCATION, FAMILY MEMBERS and so on, where training data can be obtained by matching ground truth retrieved from multiple types of online social media such as Facebook, Google Plus, or LinkedIn. Our contributions are as follows: • We cast user profile prediction as an information extraction task. • We present a large-scale dataset for this task gathered from various structured and unstructured social media sources. • We demonstrate the benefit of jointly reasoning about users’ social network structure when extracting their profiles from text. • We experimentally demonstrate the effectiveness of our approach on 3 relations: SPOUSE, JOB and EDUCATION. The remainder of this paper is organized as follows: We summarize related work in Section 2. The creation of our dataset is described in Section 3. The details of our model are presented in Section 4. We present experimental results in Section 5 and conclude in Section 6. 2 Related Work While user profile inference from social media has received considerable attention (Al Zamal et al., 2012; Rao and Yarowsky, 2010; Rao et al., 2010; Rao et al., 2011), most previous work has treated this as a classification task where the goal is to predict unary predicates describing attributes of the user. Examples include gender (Ciot et al., 2013; Liu and Ruths, 2013; Liu et al., 2012), age (Rao et al., 2010), or political polarity (Pennacchiotti and Popescu, 2011; Conover et al., 2011). A significant challenge that has limited previous efforts in this area is the lack of available training data. For example, researchers obtain training data by employing workers from Amazon Mechanical Turk to manually identify users’ gender from profile pictures (Ciot et al., 2013). This approach is appropriate for attributes such as gender with a small numbers of possible values (e.g., male or female), for which the values can be directly identified. 
However for attributes such as spouse or education there are many possible values, making it impossible to manually search for gold standard answers within a large number of tweets which may or may not contain sufficient evidence. Also related is the Twitter user timeline extraction algorithm of Li and Cardie (2013). This work is not focused on user attribute extraction, however. Distant Supervision Distant supervision, also known as weak supervision, is a method for learning to extract relations from text using ground truth from an existing database as a source of supervision. Rather than relying on mentionlevel annotations, which are expensive and time consuming to generate, distant supervision leverages readily available structured data sources as a weak source of supervision for relation extraction from related text corpora (Craven et al., 1999). For example, suppose r(e1, e2) = IsIn(Paris, France) is a ground tuple in the database and s =“Paris is the capital of France” contains synonyms for both “Paris” and “France”, then we assume that s may express the fact r(e1, e2) in some way and can be used as positive training examples. In addition to the wide use in text entity relation extraction (Mintz et al., 2009; Ritter et al., 2013; Hoffmann et al., 2011; Surdeanu et al., 2012; Takamatsu et al., 2012), distant supervision has been applied to multiple 166 Figure 1: Illustration of Goolge Plus “knowledge base”. fields such as protein relation extraction (Craven et al., 1999; Ravikumar et al., 2012), event extraction from Twitter (Benson et al., 2011), sentiment analysis (Go et al., 2009) and Wikipedia infobox generation (Wu and Weld, 2007). Homophily Online social media offers a rich source of network information. McPherson et al. (2001) discovered that people sharing more attributes such as background or hobby have a higher chance of becoming friends in social media. This property, known as HOMOPHILY (summarized by the proverb “birds of a feather flock together”) (Al Zamal et al., 2012) has been widely applied to community detection (Yang and Leskovec, 2013) and friend recommendation (Guy et al., 2010) on social media. In the user attribute extraction literature, researchers have considered neighborhood context to boost inference accuracy (Pennacchiotti and Popescu, 2011; Al Zamal et al., 2012), where information about the degree of their connectivity to their pre-labeled users is included in the feature vectors. A related algorithm by Mislove et al. (2010) crawled Facebook profiles of 4,000 Rice University students and alumni and inferred attributes such as major and year of matriculation purely based on network information. Mislove’s work does not consider the users’ text stream, however. As we demonstrate below, relying solely on network information is not enough to enable inference about attributes. 3 Dataset Creation We now describe the generation of our distantly supervised training dataset in detail. We make use of Google Plus and Freebase to obtain ground facts and extract positive/negative bags of postings from users’ twitter streams according to the ground facts. Figure 2: Example of fetching tweets containing entity USC mention from Miranda Cosgrove (an American actress and singer-songwriter)’s twitter stream. 
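A minimal sketch of this bag construction, applied to a single user's timeline, is shown below. The function name, the regular-expression matching, and the alias handling are assumptions made for illustration; the actual data was collected through the Google Plus, Freebase, and Twitter APIs as described next.

```python
import re

def build_bags(tweets, attribute_value, aliases=()):
    """Distant supervision: tweets mentioning the ground-truth attribute value
    (or any of its aliases, e.g. obtained from Freebase) form the positive bag;
    the rest of the user's timeline is used as negative data."""
    surface_forms = {attribute_value, *aliases}
    patterns = [re.compile(r"\b" + re.escape(s) + r"\b", re.IGNORECASE)
                for s in surface_forms]
    positive, negative = [], []
    for t in tweets:
        (positive if any(p.search(t) for p in patterns) else negative).append(t)
    return positive, negative

# hypothetical usage for the EDUCATION attribute of one user
# pos, neg = build_bags(user_tweets, "Florida State University", aliases={"FSU"})
```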
Education/Job We first used the Google Plus API5 (shown in Figure 1) to obtain a seed set of users whose profiles contain both their education/job status and a link to their twitter account.6 Then, we fetched tweets containing the mention of the education/job entity from each correspondent user’s twitter stream using Twitter’s search API7 (shown in Figure 2) and used them to construct positive bags of tweets expressing the associated attribute, namely EDUCATION(Useri, Entityj), or EMPLOYER(Useri, Entityj). The Freebase API8 is employed for alias recognition, to match terms such as “Harvard University”, “Harvard”, “Harvard U” to a single The remainder of each corresponding user’s entire Twitter feed is used as negative training data.9 We expanded our dataset from the seed users according to network information provided by Google Plus and Twitter. Concretely, we crawled circle information of users in the seed set from both their Twitter and Google Plus accounts and performed a matching to pick out shared users between one’s Twitter follower list and Google Plus Circle. This process assures friend identity and avoids the problem of name ambiguity when matching accounts across websites. Among candidate users, those who explicitly display Job or Education information on Google Plus are preserved. We then gathered positive and negative data as described above. Dataset statistics are presented in Table 2. Our 5https://developers.google.com/+/api/ 6An unambiguous twitter account link is needed here because of the common phenomenon of name duplication. 7https://twitter.com/search 8http://wiki.freebase.com/wiki/ Freebase_API 9Due to Twitter user timeline limit, we crawled at most 3200 tweets for each user. 167 education dataset contains 7,208 users, 6,295 of which are connected to others in the network. The positive training set for the EDUCATION is comprised of 134,060 tweets. Spouse Facebook is the only type of social media where spouse information is commonly displayed. However, only a tiny amount of individual information is publicly accessible from Facebook Graph API10. To obtain ground truth for the spouse relation at large scale, we turned to Freebase11, a large, open-domain database, and gathered instances of the /PEOPLE/PERSON/SPOUSE relation. Positive/negative training tweets are obtained in the same way as was previously described for EDUCATION and JOB. It is worth noting that our Spouse dataset is not perfect, as individuals retrieved from Freebase are mostly celebrities, and thus it’s not clear whether this group of people are representative of the general population. SPOUSE is an exception to the “homophily” effect. But it exhibits another unique property, known as, REFLEXIVITY: fact IsSpouseOf(e1, e2) and IsSpouseOf(e2, e1) will hold or not hold at the same time. Given training data expressing the tuple IsSpouseOf(e1, e2) from user e1’s twitter stream, we also gather user e2’s tweet collection, and fetch tweets with the mention of e1. We augment negative training data from e2 as in the case of Education and Job. Our Spouse dataset contains 1,636 users, where there are 554 couples (1108 users). Note that the number of positive entities (3,121) is greater than the number of users as (1) one user can have multiple spouses at different periods of time (2) multiple entities may point to the same individual, e.g., BarackObama, Barack Obama and Barack. 4 Model We now describe our approach to predicting user profile attributes. 
4.1 Notation Message X: Each user i ∈[1, I] is associated with his Twitter ID and his tweet corpus Xi. Xi is comprised of a collection of tweets Xi = {xi,j}j=Ni j=1 , where Ni denotes the number of tweets user i published. 10https://developers.facebook.com/docs/ graph-api/ 11http://www.freebase.com/ Education Job Spouse #Users 7,208 1,806 1,636 #Users Connected 6,295 1,407 1,108 #Edges 11,167 3,565 554 #Pos Entities 451 380 3121 #Pos Tweets 124,801 65,031 135,466 #Aver Pos Tweets per User 17.3 36.6 82.8 #Neg Entity 6,987,186 4,405,530 8,840,722 #Neg Tweets 16,150,600 10,687,403 12,872,695 Table 2: Statistics for our Dataset Tweet Collection Le i: Le i denotes the collection of postings containing the mention of entity e from user i. Le i ⊂Xi. Entity attribute indicator zk i,e and zk i,x: For each entity e ∈Xi, there is a boolean variable zk i,e, indicating whether entity e expresses attribute k of user i. Each posting x ∈Le i is associated with attribute indicator zk i,x indicating whether posting x expresses attribute k of user i. zk i,e and zk i,x are observed during training and latent during testing. Neighbor set F k i : F k i denotes the neighbor set of user i. For Education (k = 0) and Job (k = 1), F k i denotes the group of users within the network that are in friend relation with user i. For Spouse attribute, F k i denote current user’s spouse. 4.2 Model The distant supervision assumes that if entity e corresponds to an attribute for user i, at least one posting from user i’s Twitter stream containing a mention of e might express that attribute. For userlevel attribute prediction, we adopt the following two strategies: (1) GLOBAL directly makes aggregate (entity) level prediction for zk i,e, where features for all tweets from Le i are aggregated to one vector for training and testing, following Mintz et al. (2009). (2) LOCAL makes local tweet-level predictions for each tweet ze i,x, x ∈Lk i in the first place, making the stronger assumption that all mentions of an entity in the users’ profile are expressing the associated attribute. An aggregate-level decision zk i,e is then made from the deterministic OR operators. ze i,x = ( 1 ∃x ∈Le i, s.t.zk i,x = 1 0 Otherwise (1) The rest of this paper describes GLOBAL in detail. The model and parameters with LOCAL are identical to those in GLOBAL except that LOCAL 168 encode a tweet-level feature vector rather than an aggregate one. They are therefore excluded for brevity. For each attribute k, we use a model that factorizes the joint distribution as product of two distributions that separately characterize text features and network information as follows: Ψ(zk i,e, Xi, F k i : Θ) ∝ Ψtext(zk i,e, Xi)ΨNeigh(zk i,e, F k i ) (2) Text Factor We use Ψtext(zk e , Xi) to capture the text related features which offer attribute clues: Ψtext(zk e , , Xi) = exp[(Θk text)T · ψtext(zk i,e, Xi)] (3) The feature vector ψtext(zk i,e, Xi) encodes the following standard general features: • Entity-level: whether begins with capital letter, length of entity. • Token-level: for each token t ∈e, word identity, word shape, part of speech tags, name entity tags. • Conjunctive features for a window of k (k=1,2) words and part of speech tags. • Tweet-level: All tokens in the correspondent tweet. 
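A minimal sketch of how these general features might be assembled into a sparse count vector for one aggregated mention is given below; the feature-name strings, the word-shape helper, and the input format are assumptions of the illustration. The attribute-specific features described next would simply be added to the same counter.

```python
from collections import Counter

def word_shape(tok):
    """Coarse word shape, e.g. 'HuffPo' -> 'XxxxXx'."""
    return "".join("X" if c.isupper() else "x" if c.islower() else
                   "d" if c.isdigit() else c for c in tok)

def general_text_features(entity_tokens, pos_tags, ner_tags, tweet_tokens):
    """Sparse features psi_text for one (user, entity) pair in the GLOBAL setting."""
    f = Counter()
    # entity-level features
    f["entity_starts_capitalized=%s" % entity_tokens[0][0].isupper()] += 1
    f["entity_length=%d" % len(entity_tokens)] += 1
    # token-level and simple conjunctive (window-1) features
    for i, (tok, pos, ner) in enumerate(zip(entity_tokens, pos_tags, ner_tags)):
        f["word=" + tok.lower()] += 1
        f["shape=" + word_shape(tok)] += 1
        f["pos=" + pos] += 1
        f["ner=" + ner] += 1
        if i + 1 < len(entity_tokens):
            f["word_pos_bigram=%s_%s" % (tok.lower(), pos_tags[i + 1])] += 1
    # tweet-level bag of words from the tweet(s) containing the entity
    for tok in tweet_tokens:
        f["tweet_word=" + tok.lower()] += 1
    return f
```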
In addition to general features, we employ attribute-specific features, such as whether the entity matches a bag of words observed in the list of universities, colleges and high schools for Education attribute, whether it matches terms in a list of companies for Job attribute12. Lists of universities and companies are taken from knowledge base NELL13. Neighbor Factor For Job and Education, we bias friends to have a larger possibility to share the same attribute. ΨNeigh(zk i,e, F k i ) captures such influence from friends within the network: ΨNeigh(zk i,e, F k i ) = Y j∈F k i ΦNeigh(zk e , Xj) ΦNeigh(zk i,e, Xj) = exp[(Θk Neigh)T · ψNeigh(zk i,e, Xj)] (4) Features we explore include the whether entity e is also the correspondent attribute with neighbor user j, i.e., I(ze j,k = 0) and I(ze j,k = 1). 12Freebase is employed for alias recognition. 13http://rtw.ml.cmu.edu/rtw/kbbrowser/ Input: Tweet Collection {Xi}, Neighbor set {F k i } Initialization: • for each user i: for each candidate entity e ∈Xi zk i,e = argmaxz′ Ψ(z′, Xi) from text features End Initialization while not convergence: • for each user i: update attribute values for j ∈F k i for each candidate entity e ∈Xi zk i,e = argmaxz′ Ψ(z′, Xi, F k i ) end while: Figure 3: Inference for NEIGH-LATENT setting. For Spouse, we set F spouse i = {e} and the neighbor factor can be rewritten as: ΨNeigh(zk i,e, Xj) = ΨNeigh(Ci, Xe) (5) It characterizes whether current user Ci to be the spouse of user e (if e corresponds to a Twitter user). We expect clues about whether Ci being entity e’s spouse from e’s Twitter corpus will in turn facilitate the spouse inference procedure of user i. ψNeigh(Ci, Xe) encodes I(Ci ∈Se), I(Ci ̸∈Se). Features we explore also include whether Ci’s twitter ID appears in e’s corpus. 4.3 Training We separately trained three classifiers regarding the three attributes. All variables are observed during training; we therefore take a feature-based approach to learning structure prediction models inspired by structure compilation (Liang et al., 2008). In our setting, a subset of the features (those based on network information) are computed based on predictions that will need to be made at test time, but are observed during training. This simplified approach to learning avoids expensive inference; at test time, however, we still need to jointly predict the best attribute values for friends as is described in section 4.4. 4.4 Inference Job and Education Our inference algorithm for Job/Education is performed on two settings, depending on whether neighbor information is 169 observed (NEIGH-OBSERVED) or latent (NEIGHLATENT). Real world applications, where network information can be partly retrieved from all types of social networks, can always falls in between. Inference in the NEIGH-OBSERVED setting is trivial; for each entity e ∈Gi, we simply predict it’s candidate attribute values using Equ.6. zk i,e = argmax z′ Ψ(z′, Xi, F k i ) (6) For NEIGH-LATENT setting, attributes for each node along the network are treated latent and user attribute prediction depends on attributes of his neighbors. The objective function for joint inference would be difficult to optimize exactly, and algorithms for doing so would be unlikely to scale to network of the size we consider. Instead, we use a sieve-based greedy search approach to inference (shown in Figure 3) inspired by recent work on coreference resolution (Raghunathan et al., 2010). 
Attributes are initialized using only text features, maximizing Ψtext(e, Xi), and ignoring network information. Then for each user we iteratively reestimate their profile given both their text features and network features (computed based on the current predictions made for their friends) which provide additional evidence. In this way, highly confident predictions will be made strictly from text in the first round, then the network can either support or contradict low confidence predictions as more decisions are made. This process continues until no changes are made at which point the algorithm terminates. We empirically found it to work well in practice. We expect that NEIGH-OBSERVED performs better than NEIGH-LATENT since the former benefits from gold network information. Spouse For Spouse inference, if candidate entity e has no correspondent twitter account, we directly determine zk i,e = argmaxz′ Ψ(z′, Xi) from text features. Otherwise, the inference of zk i,e depends on the zk e,Ci. Similarly, we initialize zk i,e and zk e,Ci by maximizing text factor, as we did for Education and Job. Then we iteratively update zk given by the rest variables until convergence. 5 Experiments In this Section, we present our experimental results in detail. Education Job AFFINITY 74.3 14.5 Table 3: Affinity values for Education and Job. 5.1 Preprocessing and Experiment Setup Each tweet posting is tokenized using Twitter NLP tool introduced by Noah’s Ark14 with # and @ separated following tokens. We assume that attribute values should be either name entities or terms following @ and #. Name entities are extracted using Ritter et al.’s NER system (2011). Consecutive tokens with the same named entity tag are chunked (Mintz et al., 2009). Part-ofspeech tags are assigned based on Owoputi et al’s tweet POS system (Owoputi et al., 2013). Data is divided in halves. The first is used as training data and the other as testing data. 5.2 Friends with Same Attribute Our network intuition is that users are much more likely to be friends with other users who share attributes, when compared to users who have no attributes in common. In order to statistically show this, we report the value of AFFINITY defined by Mislove et al (2010), which is used to quantitatively evaluate the degree of HOMOPHILY in the network. AFFINITY is the ratio of the fraction of links between attribute (k)-sharing users (Sk), relative to what is expected if attributes are randomly assigned in the network (Ek). Sk = P i P j∈F k i I(P k i = P k j ) P i P j∈F k i I Ek = P m T k m(T k m −1) Uk(Uk −1) (7) where T k m denotes the number of users with m value for attribute k and Uk = P m T k m. Table 3 shows the affinity value of the Education and Job. As we can see, the property of HOMOPHILY indeed exists among users in the social network with respect to Education and Job attribute, as significant affinity is observed. In particular, the affinity value for Education is 74.3, implying that users connected by a link in the network are 74.3 times more likely affiliated in the same school than as expected if education attributes are randomly assigned. It is interesting to note that Education exhibits a much stronger HOMOPHILY property than 14https://code.google.com/p/ ark-tweet-nlp/downloads/list 170 Job. Such affinity demonstrates that our approach that tries to take advantage of network information for attribute prediction of holds promise. 
5.3 Evaluation and Discussion We evaluate settings described in Section 4.2 i.e., GLOBAL setting, where user-level attribute is predicted directly from jointly feature space and LOCAL setting where user-level prediction is made based on tweet-level prediction along with different inference approaches described in Section 4.4, i.e. NEIGH-OBSERVED and NEIGH-LATENT, regarding whether neighbor information is observed or latent. Baselines We implement the following baselines for comparison and use identical processing techniques for each approach for fairness. • Only-Text: A simplified version of our algorithm where network/neighbor influence is ignored. Classifier is trained and tested only based on text features. • NELL: For Job and Education, candidate is selected as attribute value once it matches bag of words in the list of universities or companies borrowed from NELL. For Education, the list is extended by alias identification based on Freebase. For Job, we also fetch the name abbreviations15. NELL is only implemented for Education and Job attribute. For each setting from each approach, we report the (P)recision, (R)ecall and (F)1-score. For LOCAL setting, we report the performance for both entity-level prediction (Entity) and posting-level prediction (Tweet). Results for Education, Job and Spouse from different approaches appear in Table 4, 5 and 6 respectively. Local or Global For horizontal comparison, we observe that GLOBAL obtains a higher Precision score but a lower Recall than LOCAL(ENTITY). This can be explained by the fact that LOCAL(U) sets zk i,e = 1 once one posting x ∈Le i is identified as attribute related, while GLOBAL tend to be more meticulous by considering the conjunctive feature space from all postings. Homophile effect In agreement with our expectation, NEIGH-OBSERVED performs better than NEIGH-LATENT since erroneous predictions in 15http://www.abbreviations.com/ NEIGH-LATENT setting will have negative influence on further prediction during the greedy search process. Both NEIGH-OBSERVED and NEIGH-LATENT where network information is harnessed, perform better than Only-Text, which the prediction is made independently on user’s text features. The improvement of NEIGH-OBSERVED over Only-Text is 22.7% and 6.4% regarding F1 score for Education and Job respectively, which further illustrate the usefulness of making use of Homophile effect for attribute inference on online social media. It is also interesting to note the improvement much more significant in Education inference than Job inference. This is in accord with what we find in Section 5.2, where education network exhibits stronger HOMOPHILE property than Job network, enabling a significant benefit for education inference, but limited for job inference. Spouse prediction also benefits from neighboring effect and the improvement is about 12% for LOCAL(ENTITY) setting. Unlike Education and Job prediction, for which in NEIGH-OBSERVED setting all neighboring variables are observed, network variables are hidden during spouse prediction. By considering network information, the model benefits from evident clues offered by tweet corpus of user e’s spouse when making prediction for e, but also suffers when erroneous decision are made and then used for downstream predictions. NELL Baseline Notably, NELL achieves highest Recall score for Education inference. 
It is also worth noting that most of education mentions that NELL fails to retrieve are those involve irregular spellings, such as HarvardUniv and Cornell U, which means Recall score for NELL baseline would be even higher if these irregular spellings are recognized in a more sophisticated system. The reason for such high recall is that as our ground truths are obtained from Google plus, the users from which are mostly affiliated with decent schools found in NELL dictionary. However, the high recall from NELL is sacrificed at precision, as users can mention school entities in many of situations, such as paying a visit or reporting some relevant news. NELL will erroneously classify these cases as attribute mentions. NELL does not work out for Job, with a fairly poor 0.0156 F1 score for LOCAL(ENTITY) and 0.163 for LOCAL(TWEET). Poor precision is expected for as users can mention firm entity in a great many of situations. The recall score for 171 GLOBAL LOCAL(ENTITY) LOCAL(TWEET) P R F P R F P R F Our approach NEIGH-OBSERVED 0.804 0.515 0.628 0.524 0.780 0.627 0.889 0.729 0.801 NEIGH-LATENT 0.755 0.440 0.556 0.420 0.741 0.536 0.854 0.724 0.783 Only-Text —0.735 0.393 0.512 0.345 0.725 0.467 0.809 0.724 0.764 NELL ————0.170 0.798 0.280 0.616 0.848 0.713 Table 4: Results for Education Prediction GLOBAL LOCAL(ENTITY) LOCAL(TWEET) P R F P R F P R F Our approach NEIGH-OBSERVED 0.643 0.330 0.430 0.374 0.620 0.467 0.891 0.698 0.783 NEIGH-LATENT 0.617 0.320 0.421 0.226 0.544 0.319 0.804 0.572 0.668 Only-Text —0.602 0.304 0.404 0.155 0.501 0.237 0.764 0.471 0.583 NELL ————0.0079 0.509 0.0156 0.094 0.604 0.163 Table 5: Results for Job Prediction GLOBAL LOCAL(ENTITY) LOCAL(TWEET) P R F P R F P R F Our approach —0.870 0.560 0.681 0.593 0.857 0.701 0.904 0.782 0.839 Only-Text —0.852 0.448 0.587 0.521 0.781 0.625 0.890 0.729 0.801 Table 6: Results for Spouse Prediction NELL in job inference is also quite low as job related entities exhibit a greater diversity of mentions, many of which are not covered by the NELL dictionary. Vertical Comparison: Education, Job and Spouse Job prediction turned out to be much more difficult than Education, as shown in Tables 4 and 5. Explanations are as follows: (1) Job contains a much greater diversity of mentions than Education. Education inference can benefit a lot from the dictionary relevant feature which Job may not. (2) Education mentions are usually associated with clear evidence such as homework, exams, studies, cafeteria or books, while situations are much more complicated for job as vocabularies are usually specific for different types of jobs. (3) The boundary between a user working in and a fun for a specific operation is usually ambiguous. For example, a Google engineer may constantly update information about outcome products of Google, so does a big fun. If the aforementioned engineer barely tweets about working conditions or colleagues (which might still be ambiguous), his tweet collection, which contains many of mentions about outcomes of Google product, will be significantly similar to tweets published by a Google fun. Such nuisance can be partly solved by the consideration of network information, but not totally. The relatively high F1 score for spouse prediction is largely caused by the great many of nonindividual related entities in the dataset, the identification of which would be relatively simpler. A deeper look at the result shows that the classifier frequently makes wrong decisions for entities such as userID and name entities. 
Significant as some spouse relevant features are, such as love, husband, child, in most circumstances, spouse mentions are extremely hard to recognize. For example, in tweets “Check this out, @alancross, it’s awesome bit.ly/1bnjYHh.” or “Happy Birthday @alancross !”. alancross can reasonably be any option among current user’s friend, colleague, parents, child or spouse. Repeated mentions add no confidence. Although we can identify alancross as spouse attribute once it jointly appear with other strong spouse indicators, they are still many cases where they never co-appear. How to integrate more useful side information for spouse recognition constitutes our future work. 6 Conclusion and Future Work In this paper, we propose a framework for user attribute inference on Twitter. We construct the publicly available dataset based on distant supervision and experiment our model on three useful user profile attributes, i.e., Education, Job and Spouse. Our model takes advantage of network information on social network. We will keep updating the dataset as more data is collected. One direction of our future work involves exploring more general categories of user profile at172 tributes, such as interested books, movies, hometown, religion and so on. Facebook would an ideal ground truth knowledge base. Another direction involves incorporating richer feature space for better inference performance, such as multi-media sources (i.e. pictures and video). 7 Acknowledgments A special thanks is owned to Dr. Julian McAuley and Prof. Jure Leskovec from Stanford University for the Google+ circle/network crawler, without which the network analysis would not have been conducted. This work was supported in part by DARPA under award FA8750-13-2-0005. References Faiyaz Zamal, Wendy Liu, and Derek Ruths. 2012. Homophily and latent attribute inference: Inferring latent attributes of twitter users from neighbors. In ICWSM. Edward Benson, Aria Haghighi, and Regina Barzilay. 2011. Event discovery in social media feeds. In Proceedings of the 49th Annual Meeting of the Association for Computational Linguistics: Human Language Technologies-Volume 1, pages 389–398. Association for Computational Linguistics. Sergey Brin. 1999. Extracting patterns and relations from the world wide web. In The World Wide Web and Databases. Morgane Ciot, Morgan Sonderegger, and Derek Ruths. 2013. Gender inference of twitter users in nonenglish contexts. In Proceedings of the 2013 Conference on Empirical Methods in Natural Language Processing, Seattle, Wash, pages 18–21. Michael Conover, Jacob Ratkiewicz, Matthew Francisco, Bruno Gonc¸alves, Filippo Menczer, and Alessandro Flammini. 2011. Political polarization on twitter. In ICWSM. Mark Craven and Johan Kumlien 1999. Constructing biological knowledge bases by extracting information from text sources. In ISMB, volume 1999, pages 77–86. Alec Go, Richa Bhayani, and Lei Huang. 2009. Twitter sentiment classification using distant supervision. CS224N Project Report, Stanford, pages 1–12. Ido Guy, Naama Zwerdling, Inbal Ronen, David Carmel, and Erel Uziel. 2010. Social media recommendation based on people and tags. In Proceedings of the 33rd international ACM SIGIR conference on Research and development in information retrieval, pages 194–201. ACM. Raphael Hoffmann, Congle Zhang, Xiao Ling, Luke S Zettlemoyer, and Daniel S Weld. 2011. Knowledgebased weak supervision for information extraction of overlapping relations. In ACL, pages 541–550. Jiwei Li and Claire Cardie. 2013. 
Timeline generation: Tracking individuals on twitter. Proceedings of the 23rd international conference on World wide web. Percy Liang, Hal Daum´e III, and Dan Klein. 2008. Structure compilation: trading structure for features. In Proceedings of the 25th international conference on Machine learning. Wendy Liu and Derek Ruths. 2013. Whats in a name? using first names as features for gender inference in twitter. In 2013 AAAI Spring Symposium Series. Wendy Liu, Faiyaz Zamal, and Derek Ruths. 2012. Using social media to infer gender composition of commuter populations. In Proceedings of the When the City Meets the Citizen Workshop, the International Conference on Weblogs and Social Media. Miller McPherson, Lynn Smith-Lovin, and James M Cook. 2001. Birds of a feather: Homophily in social networks. Annual review of sociology, pages 415– 444. Mike Mintz, Steven Bills, Rion Snow, and Dan Jurafsky. 2009. Distant supervision for relation extraction without labeled data. In Proceedings of the Joint Conference of the 47th Annual Meeting of the ACL and the 4th International Joint Conference on Natural Language Processing of the AFNLP: Volume 2-Volume 2, pages 1003–1011. Association for Computational Linguistics. Alan Mislove, Bimal Viswanath, Krishna Gummadi, and Peter Druschel. 2010. You are who you know: inferring user profiles in online social networks. In Proceedings of the third ACM international conference on Web search and data mining, pages 251– 260. ACM. Olutobi Owoputi, Brendan OConnor, Chris Dyer, Kevin Gimpel, Nathan Schneider, and Noah A Smith. 2013. Improved part-of-speech tagging for online conversational text with word clusters. In Proceedings of NAACL-HLT, pages 380–390. Marco Pennacchiotti and Ana Popescu. 2011. A machine learning approach to twitter user classification. In ICWSM. Karthik Raghunathan, Heeyoung Lee, Sudarshan Rangarajan, Nathanael Chambers, Mihai Surdeanu, Dan Jurafsky, and Christopher Manning. 2010. A multipass sieve for coreference resolution. In Proceedings of the 2010 Conference on Empirical Methods in Natural Language Processing. Delip Rao and David Yarowsky. 2010. Detecting latent user properties in social media. In Proc. of the NIPS MLSN Workshop. 173 Delip Rao, David Yarowsky, Abhishek Shreevats, and Manaswi Gupta. 2010. Classifying latent user attributes in twitter. In Proceedings of the 2nd international workshop on Search and mining usergenerated contents, pages 37–44. ACM. Delip Rao, Michael Paul, Clayton Fink, David Yarowsky, Timothy Oates, and Glen Coppersmith. 2011. Hierarchical bayesian models for latent attribute detection in social media. In ICWSM. Haibin Liu, Michael Wall, Karin Verspoor, et al. 2012. Literature mining of protein-residue associations with graph rules learned through distant supervision. Journal of biomedical semantics, 3(Suppl 3):S2. Alan Ritter, Sam Clark, Mausam, Oren Etzioni, et al. 2011. Named entity recognition in tweets: an experimental study. In Proceedings of the Conference on Empirical Methods in Natural Language Processing, pages 1524–1534. Association for Computational Linguistics. Alan Ritter, Luke Zettlemoyer, Mausam, and Oren Etzioni. 2013. Modeling missing data in distant supervision for information extraction. Mihai Surdeanu, Julie Tibshirani, Ramesh Nallapati, and Christopher Manning. 2012. Multi-instance multi-label learning for relation extraction. In Proceedings of the 2012 Joint Conference on Empirical Methods in Natural Language Processing and Computational Natural Language Learning, pages 455– 465. 
Association for Computational Linguistics. Shingo Takamatsu, Issei Sato, and Hiroshi Nakagawa. 2012. Reducing wrong labels in distant supervision for relation extraction. In Proceedings of the 50th Annual Meeting of the Association for Computational Linguistics: Long Papers-Volume 1, pages 721–729. Association for Computational Linguistics. Fei Wu and Daniel S Weld. 2007. Autonomously semantifying wikipedia. In Proceedings of the sixteenth ACM conference on Conference on information and knowledge management, pages 41–50. ACM. Jaewon Yang and Jure Leskovec. 2013. Overlapping community detection at scale: A nonnegative matrix factorization approach. In Proceedings of the sixth ACM international conference on Web search and data mining, pages 587–596. ACM. 174
Proceedings of the 52nd Annual Meeting of the Association for Computational Linguistics, pages 175–185, Baltimore, Maryland, USA, June 23-25 2014. c⃝2014 Association for Computational Linguistics The effect of wording on message propagation: Topic- and author-controlled natural experiments on Twitter Chenhao Tan Dept. of Computer Science Cornell University [email protected] Lillian Lee Dept. of Computer Science Cornell University [email protected] Bo Pang Google Inc. [email protected] Abstract Consider a person trying to spread an important message on a social network. He/she can spend hours trying to craft the message. Does it actually matter? While there has been extensive prior work looking into predicting popularity of socialmedia content, the effect of wording per se has rarely been studied since it is often confounded with the popularity of the author and the topic. To control for these confounding factors, we take advantage of the surprising fact that there are many pairs of tweets containing the same url and written by the same user but employing different wording. Given such pairs, we ask: which version attracts more retweets? This turns out to be a more difficult task than predicting popular topics. Still, humans can answer this question better than chance (but far from perfectly), and the computational methods we develop can do better than both an average human and a strong competing method trained on noncontrolled data. 1 Introduction How does one make a message “successful”? This question is of interest to many entities, including political parties trying to frame an issue (Chong and Druckman, 2007), and individuals attempting to make a point in a group meeting. In the first case, an important type of success is achieved if the national conversation adopts the rhetoric of the party; in the latter case, if other group members repeat the originating individual’s point. The massive availability of online messages, such as posts to social media, now affords researchers new means to investigate at a very large scale the factors affecting message propagation, also known as adoption, sharing, spread, or virality. According to prior research, important features include characteristics of the originating author (e.g., verified Twitter user or not, author’s messages’ past success rate), the author’s social network (e.g., number of followers), message timing, and message content or topic (Artzi et al., 2012; Bakshy et al., 2011; Borghol et al., 2012; Guerini et al., 2011; Guerini et al., 2012; Hansen et al., 2011; Hong et al., 2011; Lakkaraju et al., 2013; Milkman and Berger, 2012; Ma et al., 2012; Petrovi´c et al., 2011; Romero et al., 2013; Suh et al., 2010; Sun et al., 2013; Tsur and Rappoport, 2012). Indeed, it’s not surprising that one of the most retweeted tweets of all time was from user BarackObama, with 40M followers, on November 6, 2012: “Four more years. [link to photo]”. Our interest in this paper is the effect of alternative message wording, meaning how the message is said, rather than what the message is about. In contrast to the identity/social/timing/topic features mentioned above, wording is one of the few factors directly under an author’s control when he or she seeks to convey a fixed piece of content. For example, consider a speaker at the ACL business meeting who has been tasked with proposing that Paris be the next ACL location. 
This person cannot on the spot become ACL president, change the shape of his/her social network, wait until the next morning to speak, or campaign for Rome instead; but he/she can craft the message to be more humorous, more informative, emphasize certain aspects instead of others, and so on. In other words, we investigate whether a different choice of words affects message propagation, controlling for user and topic: would user BarackObama have gotten significantly more (or fewer) retweets if he had used some alternate wording to announce his reelection? Although we cannot create a parallel universe 175 Table 1: Topic- and author-controlled (TAC) pairs. Topic control = inclusion of the same URL. author tweets #retweets natlsecuritycnn t1: FIRST ON CNN: After Petraeus scandal, Paula Broadwell looks to recapture ‘normal life.’ http://t.co/qy7GGuYW n1 = 5 t2: First on CNN: Broadwell photos shared with Security Clearance as she and her family fight media portrayal of her [same URL] n2 = 29 ABC t1: Workers, families take stand against Thanksgiving hours: http://t.co/J9mQHiIEqv n1 = 46 t2: Staples, Medieval Times Workers Say Opening Thanksgiving Day Crosses the Line [same URL] n2 = 27 cactus music t1: I know at some point you’ve have been saved from hunger by our rolling food trucks friends. Let’s help support them! http://t.co/zg9jwA5j n1 = 2 t2: Food trucks are the epitome of small independently owned LOCAL businesses! Help keep them going! Sign the petition [same URL] n2 = 13 in which BarackObama tweeted something else1, fortunately, a surprising characteristic of Twitter allows us to run a fairly analogous natural experiment: external forces serendipitously provide an environment that resembles the desired controlled setting (DiNardo, 2008). Specifically, it turns out to be unexpectedly common for the same user to post different tweets regarding the same URL — a good proxy for fine-grained topic2 — within a relatively short period of time.3 Some example pairs are shown in Table 1; we see that the paired tweets may differ dramatically, going far beyond word-for-word substitutions, so that quite interesting changes can be studied. Looking at these examples, can one in fact tell from the wording which tweet in a topic- and author-controlled pair will be more successful? The answer may not be a priori clear. For example, for the first pair in the table, one person we asked found t1’s invocation of a “scandal” to be more attention-grabbing; but another person preferred t2 because it is more informative about the URL’s content and includes “fight media portrayal”. In an Amazon Mechanical Turk (AMT) experiment (§4), we found that humans achieved an average accuracy of 61.3%: not that high, but better than chance, indicating that it is somewhat possible for humans to predict greater message spread from different deliveries of the same information. Buoyed by the evidence of our AMT study that wording effects exist, we then performed a battery of experiments to seek generally-applicable, non1Cf. the Music Lab “multiple universes” experiment to test the randomness of popularity (Salganik et al., 2006). 2Although hashtags have been used as coarse-grained topic labels in prior work, for our purposes, we have no assurance that two tweets both using, say, “#Tahrir” would be attempting to express the same message but in different words. In contrast, see the same-URL examples in Table 1. 
3Moreover, Twitter presents tweets to a reader in strict chronological order, so that there are no algorithmic-ranking effects to compensate for in determining whether readers saw a tweet. And, Twitter accumulates retweet counts for the entire retweet cascade and displays them for the original tweet at the root of the propagation tree, so we can directly use Twitter’s retweet counts to compare the entire reach of the different versions. Twitter-specific features of more successful phrasings. §5.1 applies hypothesis testing (with Bonferroni correction to ameliorate issues with multiple comparisons) to investigate the utility of features like informativeness, resemblance to headlines, and conformity to the community norm in language use. §5.2 further validates our findings via prediction experiments, including on completely fresh held-out data, used only once and after an array of standard cross-validation experiments.4 We achieved 66.5% cross-validation accuracy and 65.6% held-out accuracy with a combination of our custom features and bag-of-words. Our classifier fared significantly better than a number of baselines, including a strong classifier trained on the most- and least-retweeted tweets that was even granted access to author and timing metadata. 2 Related work The idea of using carefully controlled experiments to study effective communication strategies dates back at least to Hovland et al. (1953). Recent studies range from examining what characteristics of New York Times articles correlate with high re-sharing rates (Milkman and Berger, 2012) to looking at how differences in description affect the spread of content-controlled videos or images (Borghol et al., 2012; Lakkaraju et al., 2013). Simmons et al. (2011) examined the variation of quotes from different sources to examine how textual memes mutate as people pass them along, but did not control for author. Predicting the “success” of various texts such as novels and movie quotes has been the aim of additional prior work not already mentioned in §1 (Ashok et al., 2013; Louis and Nenkova, 2013; Danescu-Niculescu-Mizil et al., 2012; Pitler and Nenkova, 2008; McIntyre and Lapata, 2009). To our knowledge, there have been no large-scale studies exploring wording effects in a both topic- and author-controlled setting. Employing such controls, we find that predicting the more effective alternative wording is much harder than the previously well-studied problem of pre4And after crossing our fingers. 176 dicting popular content when author or topic can freely vary. Related work regarding the features we considered is deferred to §5.1 (features description). 3 Data Our main dataset was constructed by first gathering 1.77M topic- and author-controlled (henceforth TAC) tweet pairs5 differing in more than just spacing.6 We accomplished this by crawling timelines of 236K user ids that appear in prior work (Kwak et al., 2010; Yang and Leskovec, 2011) via the Twitter API. This crawling process also yielded 632K TAC pairs whose only difference was spacing, and an additional 558M “unpaired” tweets; as shown later in this paper, we used these extra corpora for computing language models and other auxiliary information. We applied nonobvious but important filtering — described later in this section — to control for other external factors and to reduce ambiguous cases. This brought us to a set of 11,404 pairs, with the gold-standard labels determined by which tweet in each pair was the one that received more retweets according to the Twitter API. 
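For concreteness, the core of this pair-construction step could be sketched as follows. This is only an illustrative approximation (the tweet fields, the language filter, and the per-author cap introduced later are simplified or omitted), not the authors' pipeline.

from collections import defaultdict

def build_tac_pairs(tweets):
    # tweets: dicts with (at least) 'user', 'urls', 'text' and 'time' fields from the crawl
    versions = defaultdict(list)
    for t in tweets:
        if len(t['urls']) != 1:                    # skip tweets with zero or multiple URLs
            continue
        if t['text'].startswith('RT @'):           # skip retweets (the API flag is also checked)
            continue
        versions[(t['user'], t['urls'][0])].append(t)
    pairs = []
    for (user, url), tw in versions.items():
        if len(tw) < 2 or len(tw) > 5:             # posting one URL more than five times suggests a spammer
            continue
        tw.sort(key=lambda t: t['time'])
        t1, t2 = tw[0], tw[1]                      # keep only the first two versions
        if t1['text'].split() != t2['text'].split():   # pairs identical up to spacing are set aside
            pairs.append((t1, t2))
    return pairs

The pairs that differ only in spacing are not thrown away: they form the separate 632K-pair collection used below to calibrate for follower-count and timing effects.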
We then did a second crawl to get an additional 1,770 pairs to serve as a held-out dataset. The corresponding tweet IDs are available online at http://chenhaot.com/pages/wording-for-propagation.html. (Twitter's terms of service prohibit sharing the actual tweets.) Throughout, we refer to the textual content of the earlier tweet within a TAC pair as t1, and of the later one as t2. We denote the number of retweets received by each tweet by n1 and n2, respectively. We refer to the tweet with higher (lower) ni as the "better (worse)" tweet.

[Figure 1: (a) For identical TAC pairs, retweet-count deviation D vs. time lag (3 to 48 hours) between t1 and t2, for author follower-counts above 1K, 2.5K, 5K and 10K. The ideal case where n2 = n1 when t1 = t2 is best approximated when t2 occurs within 12 hours of t1 and the author has at least 10,000 or 5,000 followers. (b) Average n2 vs. n1 for identical TAC pairs, highlighting our chosen time-lag and follower thresholds; bars: standard error; diagonal line: Ê(n2|n1) = n1. In our chosen setting (">5K f'ers, <12hrs", blue circles), n2 indeed tends to track n1, whereas otherwise (black squares) there is a bias towards retweeting t1.]

5 No data collection/processing was conducted at Google.
6 The total excludes: tweets containing multiple URLs; tweets from users posting about the same URL more than five times (since such users might be spammers); the third, fourth, or fifth version for users posting between three and five tweets for the same URL; retweets (as identified by Twitter's API or by beginning with "RT @"); non-English tweets.

Using "identical" pairs to determine how to compensate for follower-count and timing effects. In an ideal setting, differences between n1 and n2 would be determined solely by differences in wording. But even with a TAC pair, retweets might exhibit a temporal bias because of the chronological order of tweet presentation (t1 might enjoy a first-mover advantage (Borghol et al., 2012) because it is the "original"; alternatively, t2 might be preferred because retweeters consider t1 to be "stale"). Also, the number of followers an author has can have complicated indirect effects on which tweets are read (space limits preclude discussion). We use the 632K TAC pairs wherein t1 and t2 are identical7 to check for such confounding effects: we see how much n2 deviates from n1 in such settings, since if wording were the only explanatory factor, the retweet rates for identical tweets ought to be equal. Figure 1(a) plots how the time lag between t1 and t2 and the author's follower-count affect the following deviation estimate:

D = Σ_{0 ≤ n1 < 10} | Ê(n2|n1) − n1 |,

where Ê(n2|n1) is the average value of n2 over pairs whose t1 is retweeted n1 times. (Note that the number of pairs whose t1 is retweeted n1 times decays exponentially with n1; hence, we condition on n1 to keep the estimate from being dominated by pairs with n1 = 0, and do not consider n1 ≥ 10 because there are too few such pairs to estimate Ê(n2|n1) reliably.) Figure 1(a) shows that the setting where we (i) minimize the confounding effects of time lag and author's follower-count and (ii) maximize the amount of data to work with

7 Identical up to spacing: Twitter prevents exact copies by the same author appearing within a short amount of time, but some authors work around this by inserting spaces.
177 is: when t2 occurs within 12 hours after t1 and the author has more than 5,000 followers. Figure 1(b) confirms that for identical TAC pairs, our chosen setting indeed results in n2 being on average close to n1, which corresponds to the desired setting where wording is the dominant differentiating factor.8 Focus on meaningful and general changes. Even after follower-count and time-lapse filtering, we still want to focus on TAC pairs that (i) exhibit significant/interesting textual changes (as exemplified in Table 1, and as opposed to typo corrections and the like), and (ii) have n2 and n1 sufficiently different so that we are confident in which ti is better at attracting retweets. To take care of (i), we discarded the 50% of pairs whose similarity was above the median, where similarity was tf-based cosine.9 For (ii), we sorted the remaining pairs by n2 ´ n1 and retained only the top and bottom 5%.10 Moreover, to ensure that we do not overfit to the idiosyncrasies of particular authors, we cap the number of pairs contributed by each author to 50 before we deal with (ii). 4 Human accuracy on TAC pairs We first ran a pilot study on Amazon Mechanical Turk (AMT) to determine whether humans can identify, based on wording differences alone, which of two topic- and author- controlled tweets is spread more widely. Each of our 5 AMT tasks involved a disjoint set of 20 randomly-sampled TAC pairs (with t1 and t2 randomly reordered); subjects indicated “which tweet would other people be more likely to retweet?”, provided a short justification for their binary response, and clicked a checkbox if they found that their choice was a “close call”. We received 39 judgments per pair in aggregate from 106 subjects total (9 people completed all 5 tasks). The subjects’ justifications were of very high quality, convincing us that they all did the task in good faith11. Two examples for 8We also computed the Pearson correlation between n1 and n2, even though it can be dominated by pairs with smaller n1. The correlation is 0.853 for “ą 5K f’ers, ă12hrs”, clearly higher than the 0.305 correlation for “otherwise”. 9Idf weighting was not employed because changes to frequent words are of potential interest. Urls, hashtags, @mentions and numbers were normalized to [url], [hashtag], [at], and [num] before computing similarity. 10For our data, this meant n2 ´ n1 ě 10 or ď ´15. Cf. our median number of retweets: 30. 11We also note that the feedback we got was quite positive, including: “...It’s fun to make choices between close tweets and use our subjective opinion. Thanks and best of the third TAC pair in Table 1 were: “[t1 makes] the cause relate-able to some people, therefore showing more of an appeal as to why should they click the link and support” and, expressing the opposite view, “I like [t2] more because [t1] starts out with a generalization that doesn’t affect me and try to make me look like I had that experience before”. If we view the set of 3900 binary judgments for our 100-TAC-pair sample as constituting independent responses, then the accuracy for this set is 62.4% (rising to 63.8% if we exclude the 587 judgments deemed “close calls”). However, if we evaluate the accuracy of the majority response among the 39 judgments per pair, the number rises to 73%. The accuracy of the majority response generally increases with the dominance of the majority, going above 90% when at least 80% of the judgments agree (although less than a third of the pairs satisfied this criterion). 
Alternatively, we can consider the average accuracy of the 106 subjects: 61.3%, which is better than chance but far from 100%. (Variance was high: one subject achieved 85% accuracy out of 20 pairs, but eight scored below 50%.) This result is noticeably lower than the 73.8%-81.2% reported by Petrovi´c et al. (2011), who ran a similar experiment involving two subjects and 202 tweet pairs, but where the pairs were not topic- or author-controlled.12 We conclude that even though propagation prediction becomes more challenging when topic and author controls are applied, humans can still to some degree tell which wording attracts more retweets. Interested readers can try this out themselves at http://chenhaot.com/ retweetedmore/quiz. 5 Experiments We now investigate computationally what wording features correspond to messages achieving a broader reach. We start (§5.1) by introducing a set of generally-applicable and (mostly) non-Twitterspecific features to capture our intuitions about what might be better ways to phrase a message. We then use hypothesis testing (§5.1) to evaluate the importance of each feature for message propluck with your research” and “This was very interesting and really made me think about how I word my own tweets. Great job on this survey!”. We only had to exclude one person (not counted among the 106 subjects), doing so because he or she gave the same uninformative justification for all pairs. 12The accuracy range stems from whether author’s social features were supplied and which subject was considered. 178 Table 2: Notational conventions for tables in §5.1. One-sided paired t-test for feature efficacy ÒÒÒÒ: pă1e-20 ÓÓÓÓ: pą1-1e-20 ÒÒÒ : pă0.001 ÓÓÓ : pą0.999 ÒÒ : pă0.01 ÓÓ : pą0.99 Ò : pă0.05 Ó : pą0.95 ˚: passes our Bonferroni correction One-sided binomial test for feature increase (Do authors prefer to ‘raise’ the feature in t2?) YES : t2 has a higher feature score than t1, α “ .05 NO : t2 has a lower feature score than t1, α “ .05 (x%): %pf2 ą f1q, if sig. larger or smaller than 50% agation and the extent to which authors employ it, followed by experiments on a prediction task (§5.2) to further examine the utility of these features. 5.1 Features: efficacy and author preference What kind of phrasing helps message propagation? Does it work to explicitly ask people to share the message? Is it better to be short and concise or long and informative? We define an array of features to capture these and other messaging aspects. We then examine (i) how effective each feature is for attracting more retweets; and (ii) whether authors prefer applying a given feature when issuing a second version of a tweet. First, for each feature, we use a one-sided paired t-test to test whether, on our 11K TAC pairs, our score function for that feature is larger in the better tweet versions than in the worse tweet versions, for significance levels α “ .05, .01, .001, 1e-20. Given that we did 39 tests in total, there is a risk of obtaining false positives due to multiple testing (Dunn, 1961; Benjamini and Hochberg, 1995). To account for this, we also report significance results for the conservatively Bonferroni-corrected (“BC”) significance level α = 0.05/39=1.28e-3. Second, we examine author preference for applying a feature. 
We do so because one (but by no means the only) reason authors post t2 after having already advertised the same URL in t1 is that these authors were dissatisfied with the amount of attention t1 got; in such cases, the changes may have been specifically intended to attract more retweets. We measure author preference for a feature by the percentage of our TAC pairs13 where t2 has more “occurrences” of the feature than t1, which we denote by “%pf2 ą f1q”. We use the one-sided binomial test to see whether %pf2 ą f1q is significantly larger (or smaller) than 50%. 13 For our preference experiments, we added in pairs where n2 ´ n1 was not in the top or bottom 5% (cf. §3, meaningful changes), since to measure author preference it’s not necessary that the retweet counts differ significantly. Table 3: Explicit requests for sharing (where only occurrences POS-tagged as verbs count, according to the Gimpel et al. (2011) tagger). effective? author-preferred? rt ÒÒÒÒ * —— retweet ÒÒÒÒ * YES (59%) spread ÒÒÒ Ò * YES (56%) please ÒÒÒ Ò * —— pls Ò ÒÒÒ —— plz ÒÒ ÒÒ —— Table 4: Informativeness. effective? author-preferred? length (chars) ÒÒÒÒ * YES (54%) verb ÒÒÒÒ * YES (56%) noun ÒÒÒÒ * —— adjective ÒÒÒ Ò * YES (51%) adverb ÒÒÒ Ò * YES (55%) proper noun ÒÒÒ Ò * NO– (45%) number ÒÒÒÒ * NO– (48%) hashtag Ò ÒÒÒ —— @-mention ÓÓÓ Ó * YES (53%) Not surprisingly, it helps to ask people to share. (See Table 3; the notation for all tables is explained in Table 2.) The basic sanity check we performed here was to take as features the number of occurrences of the verbs ‘rt’, ‘retweet’, ‘please’, ‘spread’, ‘pls’, and ‘plz’ to capture explicit requests (e.g. “please retweet”). Informativeness helps. (Table 4) Messages that are more informative have increased social exchange value (Homans, 1958), and so may be more worth propagating. One crude approximation of informativeness is length, and we see that length helps.14 In contrast, Simmons et al. (2011) found that shorter versions of memes are more likely to be popular. The difference may result from TAC-pair changes being more drastic than the variations that memes undergo. A more refined informativeness measure is counts of the parts of speech that correspond to content. Our POS results, gathered using a Twitter-specific tagger (Gimpel et al., 2011), echo those of Ashok et al. (2013) who looked at predict14Of course, simply inserting garbage isn’t going to lead to more retweets, but adding more information generally involves longer text. 179 Table 5: Conformity to the community and one’s own past, measured via scores assigned by various language models. effective? author-preferred? twitter unigram ÒÒÒ Ò * YES (54%) twitter bigram ÒÒÒ Ò * YES (52%) personal unigram ÒÒÒ Ò * YES (52%) personal bigram ——– NO– (48%) ing the success of books. The diminished effect of hashtag inclusion with respect to what has been reported previously (Suh et al., 2010; Petrovi´c et al., 2011) presumably stems from our topic and author controls. Be like the community, and be true to yourself (in the words you pick, but not necessarily in how you combine them). (Table 5) Although distinctive messages may attract attention, messages that conform to expectations might be more easily accepted and therefore shared. Prior work has explored this tension: Lakkaraju et al. (2013), in a content-controlled study, found that the more upvoted Reddit image titles balance novelty and familiarity; Danescu-Niculescu-Mizil et al. 
(2012) (henceforth DCKL’12) showed that the memorability of movie quotes corresponds to higher lexical distinctiveness but lower POS distinctiveness; and Sun et al. (2013) observed that deviating from one’s own past language patterns correlates with more retweets. Keeping in mind that the authors in our data have at least 5000 followers15, we consider two types of language-conformity constraints an author might try to satisfy: to be similar to what is normal in the Twitter community, and to be similar to what his or her followers expect. We measure a tweet’s similarity to expectations by its score according to the relevant language model, 1 |T| ř xPT logpppxqq, where T refers to either all the unigrams (unigram model) or all and only bigrams (bigram model).16 We trained a Twittercommunity language model from our 558M unpaired tweets, and personal language models from each author’s tweet history. Imitate headlines. (Table 6) News headlines are often intentionally written to be both informative and attention-getting, so we introduce the idea of 15This is not an artificial restriction on our set of authors; a large follower count means (in principle) that our results draw on a large sample of decisions whether to retweet or not. 16The tokens [at], [hashtag], [url] were ignored in the unigram-model case to prevent their undue influence, but retained in the bigram model to capture longer-range usage (“combination”) patterns. Table 6: LM-based resemblance to headlines. effective? author-preferred? headline unigram ÒÒ ÒÒ YES (53%) headline bigram ÒÒÒÒ * YES (52%) Table 7: Retweet score. effective? author-preferred? rt score ÒÒ ÒÒ * NO– (49%) verb rt score ÒÒÒÒ * —— noun rt score ÒÒÒ Ò * —— adjective rt score Ò ÒÒÒ YES (50%) adverb rt score Ò ÒÒÒ YES (51%) proper noun rt score ——– NO– (48%) scoring by a language model built from New York Times headlines.17 Use words associated with (non-paired) retweeted tweets. (Table 7) We expect that provocative or sensationalistic tweets are likely to make people react. We found it difficult to model provocativeness directly. As a rough approximation, we check whether the changes in t2 with respect to t1 (which share the same topic and author) involve words or parts-of-speech that are associated with high retweet rate in a very large separate sample of unpaired tweets (retweets and replies discarded). Specifically, for each word w that appears more than 10 times, we compute the probability that tweets containing w are retweeted more than once, denoted by rspwq. We define the rt score of a tweet as maxwPT rspwq, where T is all the words in the tweet, and the rt score of a particular POS tag z in a tweet as maxwPT&tagpwq“zrspwq. Include positive and/or negative words. (Table 8) Prior work has found that including positive or negative sentiment increases message propagation (Milkman and Berger, 2012; Godes et al., 2005; Heath et al., 2001; Hansen et al., 2011). We measured the occurrence of positive and negative words as determined by the connotation lexicon of Feng et al. (2013) (better coverage than LIWC). Measuring the occurrence of both simultaneously was inspired by Riloff et al. (2013). Refer to other people (but not your audience). (Table 9) First-person has been found useful for success before, but in the different domains of scientific abstracts (Guerini et al., 2012) and books (Ashok et al., 2013). 17 To test whether the results stem from similarity to news rather than headlines per se, we constructed a NYT-text LM, which proved less effective. 
We also tried using Gawker headlines (often said to be attention-getting) but pilot studies revealed insufficient vocabulary overlap with our TAC pairs. 180 Table 8: Sentiment (contrast is measured by presence of both positive and negative sentiments). effective? author-preferred? positive ÒÒÒ Ò * —— negative ÒÒÒ Ò * —— contrast ÒÒÒ Ò * —— Table 9: Pronouns. effective? author-preferred? 1st person singular ——– YES (51%) 1st person plural ——– YES (52%) 2nd person ——– YES (57%) 3rd person singular ÒÒ ÒÒ YES (55%) 3rd person plural Ò ÒÒÒ YES (58%) Generality helps. (Table 10) DCKL’12 posited that movie quotes are more shared in the culture when they are general enough to be used in multiple contexts. We hence measured the presence of indefinite articles vs. definite articles. The easier to read, the better. (Table 11) We measure readability by using Flesch reading ease (Flesch, 1948) and Flesch-Kincaid grade level (Kincaid et al., 1975), though they are not designed for short texts. We use negative grade level so that a larger value indicates easier texts to read. Final question: Do authors prefer to do what is effective? Recall that we use binomial tests to determine author preference for applying a feature more in t2. Our preference statistics show that author preferences in many cases are aligned with feature efficacy. But there are several notable exceptions: for example, authors tend to increase the use of @-mentions and 2nd person pronouns even though they are ineffective. On the other hand, they did not increase the use of effective ones like proper nouns and numbers; nor did they tend to increase their rate of sentiment-bearing words. Bearing in mind that changes in t2 may not always be intended as an effort to improve t1, it is still interesting to observe that there are some contrasts between feature efficacy and author preferences. 5.2 Predicting the “better” wording Here, we further examine the collective efficacy of the features introduced in §5.1 via their performance on a binary prediction task: given a TAC pair (t1, t2), did t2 receive more retweets? Our approach. We group the features introduced in §5.1 into 16 lexicon-based features (Table 3, 8, 9, 10), 9 informativeness features (Table 4), 6 language model features (Table 5, 6), 6 rt score features (Table 7), and 2 readability features (Table 11). We refer to all 39 of them together as Table 10: Generality. effective? author-preferred? indefinite articles (a,an) ÒÒÒ Ò * —— definite articles (the) ——– YES (52%) Table 11: Readability. effective? author-preferred? reading ease ÒÒ ÒÒ YES (52%) negative grade level Ò ÒÒÒ YES (52%) custom features. We also consider tagged bag-ofwords (“BOW”) features, which includes all the unigram (word:POS pair) and bigram features that appear more than 10 times in the cross-validation data. This yields 3,568 unigram features and 4,095 bigram features, for a total of 7,663 so-called 1,2-gram features. Values for each feature are normalized by linear transformation across all tweets in the training data to lie in the range r0, 1s.18 For a given TAC pair, we construct its feature vector as follows. For each feature being considered, we compute its normalized value for each tweet in the pair and take the difference as the feature value for this pair. We use L2-regularized logistic regression as our classifier, with parameters chosen by cross validation on the training data. (We also experimented with SVMs. The performance was very close, but mostly slightly lower.) 
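Under this description, the pair classifier can be approximated with scikit-learn roughly as follows. This is a sketch under stated assumptions, not the authors' implementation; tweet_features is a hypothetical function returning the feature values (custom and/or 1,2-gram) for a single tweet.

from sklearn.preprocessing import MinMaxScaler
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import GridSearchCV

def pair_matrix(pairs, tweet_features, scaler):
    # feature vector for a pair = normalized features of t2 minus those of t1
    f1 = scaler.transform([tweet_features(t1) for t1, _ in pairs])
    f2 = scaler.transform([tweet_features(t2) for _, t2 in pairs])
    return f2 - f1

def train_pair_classifier(train_pairs, labels, tweet_features):
    # fit the [0, 1] normalization on all individual training tweets, as described above
    all_tweets = [t for pair in train_pairs for t in pair]
    scaler = MinMaxScaler().fit([tweet_features(t) for t in all_tweets])
    X = pair_matrix(train_pairs, tweet_features, scaler)
    # L2-regularized logistic regression; regularization strength chosen by cross-validation
    grid = GridSearchCV(LogisticRegression(penalty='l2', max_iter=1000),
                        {'C': [0.01, 0.1, 1, 10, 100]}, cv=5)
    grid.fit(X, labels)        # labels: 1 if t2 received more retweets than t1, else 0
    return scaler, grid.best_estimator_

At test time, a pair is labeled by applying the fitted model to its difference vector.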
A strong non-TAC alternative, with social information and timing thrown in. One baseline result we would like to establish is whether the topic and author controls we have argued for, while intuitively compelling for the purposes of trying to determine the best way for a given author to present some fixed content, are really necessary in practice. To test this, we consider an alternative binary L2-regularized logistic-regression classifier that is trained on unpaired data, specifically, on the collection of 10,000 most retweeted tweets (gold-standard label: positive) plus the 10,000 least retweeted tweets (gold-standard label: negative) that are neither retweets nor replies. Note that this alternative thus is granted, by design, roughly twice the training instances that our classifiers have, as a result of having roughly the same number of tweets, since our instances are pairs. Moreover, we additionally include the tweet author’s follower count, and the day and hour of posting, as features. We refer to this alternative classifier as ␣TAC+ff+time. (Mnemonic: “ff” is used in bibliographic contexts as an abbreviation 18We also tried normalization by whitening, but it did not lead to further improvements. 181 (a) Cross-validation and heldout accuracy for various feature sets. Blue lines inside bars: performance when custom features are restricted to those that pass our Bonferroni correction (no line for readability because no readability features passed). Dashed vertical line: ␣TAC+ff+time performance. 1000 3000 5000 7000 9000 58% 60% 62% 64% 66% 68% 70% custom+1,2-gram custom 1,2-gram human (b) Cross-validation accuracy vs data size. Human performance was estimated from a disjoint set of 100 pairs (see §4). Figure 2: Accuracy results. Pertinent significance results are as follows. In cross-validation, custom+1,2gram is significantly better than ␣TAC+ff+time (p=0) and 1,2-gram (p=3.8e-7). In heldout validation, custom+1,2-gram is significantly better than ␣TAC+ff+time (p=3.4e-12) and 1,2-gram (p=0.01) but not unigram (p=0.08), perhaps due to the small size of the heldout set. for “and the following”.) We apply it to a tweet pair by computing whether it gives a higher score to t2 or not. Baselines. To sanity-check whether our classifier provides any improvement over the simplest methods one could try, we also report the performance of the majority baseline, our request-for-sharing features, and our character-length feature. Performance comparison. We compare the accuracy (percentage of pairs whose labels were correctly predicted) of our approach against the competing methods. We report 5-fold cross validation results on our balanced set of 11,404 TAC pairs and on our completely disjoint heldout data19 of 1,770 TAC pairs; this set was never examined during development, and there are no authors in common between the two testing sets. Figure 2(a) summarizes the main results. While ␣TAC+ff+time outperforms the majority baseline, using all the features we proposed beats ␣TAC+ff+time by more than 10% in both crossvalidation (66.5% vs 55.9%) and heldout validation (65.6% vs 55.3%). We outperform the average human accuracy of 61% reported in our Amazon Mechanical Turk experiments (for a different data sample); ␣TAC+ff+time fails to do so. The importance of topic and author control can be seen by further investigation of ␣TAC+ff+time’s performance. 
First, note that 19To construct this data, we used the same criteria as in §3: written by authors with more than 5000 followers, posted within 12 hours, n2 ´ n1 ě 10 or ď ´15, and cosine similarity threshold value the same as in §3, cap of 50 on number of pairs from any individual author. it yields an accuracy of around 55% on our alternate-version-selection task,20 even though its cross-validation accuracy on the larger most- and least-retweeted unpaired tweets averages out to a high 98.8%. Furthermore, note the superior performance of unigrams trained on TAC data vs ␣TAC+ff+time — which is similar to our unigrams but trained on a larger but non-TAC dataset that included metadata. Thus, TAC pairs are a useful data source even for non-custom features. (We also include individual feature comparisons later.) Informativeness is the best-performing custom feature group when run in isolation, and outperforms all baselines, as well as ␣TAC+ff+time; and we can see from Figure 2(a) that this is not due just to length. The combination of all our 39 custom features yields approximately 63% accuracy in both testing settings, significantly outperforming informativeness alone (pă0.001 in both cases). Again, this is higher than our estimate of average human performance. Not surprisingly, the TAC-trained BOW features (unigram and 1,2-gram) show impressive predictive power in this task: many of our custom features can be captured by bag-of-word features, in a way. Still, the best performance is achieved 20One might suspect that the problem is that ␣TAC+ff+time learns from its training data to overrely on follower-count, since that is presumably a good feature for non-TAC tweets, and for this reason suffers when run on TAC data where follower-counts are by construction non-informative. But in fact, we found that removing the follower-count feature from ␣TAC+ff+time and re-training did not lead to improved performance. Hence, it seems that it is the non-controlled nature of the alternate training data that explains the drop in performance. 182 by combining our custom and 1,2-gram features together, to a degree statistically significantly better than using 1,2-gram features alone. Finally, we remark on our Bonferroni correction. Recall that the intent of applying it is to avoid false positives. However, in our case, Figure 2(a) shows that our potentially “false” positives — features whose effectiveness did not pass the Bonferroni correction test — actually do raise performance in our prediction tests. Size of training data. Another interesting observation is how performance varies with data size. For n “ 1000, 2000, . . . , 10000, we randomly sampled n pairs from our 11,404 pairs, and computed the average cross-validation accuracy on the sampled data. Figure 2(b) shows the averages over 50 runs of the aforementioned procedure. Our custom features can achieve good performance with little data, in the sense that for sample size 1000, they outperform BOW features; on the other hand, BOW features quickly surpass them. Across the board, the custom+1,2-gram features are consistently better than the 1,2-gram features alone. Top features. Finally, we examine some of the top-weighted individual features from our approach and from the competing ␣TAC+ff+time classifier. The top three rows of Table 12 show the best custom and best and worst unigram features for our method; the bottom two rows show the best and worst unigrams for ␣TAC+ff+time. 
Among custom features, we see that community and personal language models, informativeness, retweet scores, sentiment, and generality are represented. As for unigram features, not surprisingly, “rt” and “retweet” are top features for both our approach and ␣TAC+ff+time. However, the other unigrams for the two methods seem to be a bit different in spirit. Some of the unigrams determined to be most poor only by our method appear to be both surprising and yet plausible in retrospect: “icymi” (abbreviation for “in case you missed it”) tends to indicate a direct repetition of older information, so people might prefer to retweet the earlier version; “thanks” and “sorry” could correspond to personal thank-yous and apologies not meant to be shared with a broader audience, and similarly @-mentioning another user may indicate a tweet intended only for that person. The appearance of [hashtag] in the best ␣TAC+ff+time unigrams is consistent with prior research in non-TAC settings (Suh et al., 2010; Petrovi´c et al., 2011). Table 12: Features with largest coefficients, delimited by commas. POS tags omitted for clarity. Our approach best 15 custom twitter bigram, length (chars), rt (the word), retweet (the word), verb, verb retweet score, personal unigram, proper noun, number, noun, positive words, please (the word), proper noun retweet score, indefinite articles (a,an), adjective best 20 unigrams rt, retweet, [num], breaking, is, win, never, ., people, need, official, officially, are, please, november, world, girl, !!!, god, new worst 20 unigrams :, [at], icymi, also, comments, half, ?, earlier, thanks, sorry, highlights, bit, point, update, last, helping, peek, what, haven’t, debate ␣TAC+ff+time best 20 unigrams [hashtag], teen, fans, retweet, sale, usa, women, butt, caught, visit, background, upcoming, rt, this, bieber, these, each, chat, houston, book worst 20 unigrams :, ..., boss, foundation, ?, „, others, john, roll, ride, appreciate, page, drive, correct, full, ’, looks, @ (not as [at]), sales, hurts 6 Conclusion In this work, we conducted the first large-scale topic- and author-controlled experiment to study the effects of wording on information propagation. The features we developed to choose the better of two alternative wordings posted better performance than that of all our comparison algorithms, including one given access to author and timing features but trained on non-TAC data, and also bested our estimate of average human performance. According to our hypothesis tests, helpful wording heuristics include adding more information, making one’s language align with both community norms and with one’s prior messages, and mimicking news headlines. Readers may try out their own alternate phrasings at http: //chenhaot.com/retweetedmore/ to see what a simplified version of our classifier predicts. In future work, it will be interesting to examine how these features generalize to longer and more extensive arguments. Moreover, understanding the underlying psychological and cultural mechanisms that establish the effectiveness of these features is a fundamental problem of interest. Acknowledgments. We thank C. Callison-Burch, C. Danescu-Niculescu-Mizil, J. Kleinberg, P. Mahdabi, S. Mullainathan, F. Pereira, K. Raman, A. Swaminathan, the Cornell NLP seminar participants and the reviewers for their comments; J. Leskovec for providing some initial data; and the anonymous annotators for all their labeling help. This work was supported in part by NSF grant IIS0910664 and a Google Research Grant. 
183 References Yoav Artzi, Patrick Pantel, and Michael Gamon. 2012. Predicting responses to microblog posts. In Proceedings of NAACL (short paper). Vikas Ganjigunte Ashok, Song Feng, and Yejin Choi. 2013. Success with style: Using writing style to predict the success of novels. In Proceedings of EMNLP. Eitan Bakshy, Jake M. Hofman, Winter A. Mason, and Duncan J. Watts. 2011. Everyone’s an influencer: Quantifying influence on twitter. In Proceedings of WSDM. Yoav Benjamini and Yosef Hochberg. 1995. Controlling the false discovery rate: A practical and powerful approach to multiple testing. Journal of the Royal Statistical Society. Series B (Methodological), pages 289–300. Youmna Borghol, Sebastien Ardon, Niklas Carlsson, Derek Eager, and Anirban Mahanti. 2012. The untold story of the clones: Content-agnostic factors that impact YouTube video popularity. In Proceedings of KDD. Dennis Chong and James N. Druckman. 2007. Framing theory. Annual Review of Political Science, 10:103–126. Cristian Danescu-Niculescu-Mizil, Justin Cheng, Jon Kleinberg, and Lillian Lee. 2012. You had me at hello: How phrasing affects memorability. In Proceedings of ACL. John DiNardo. 2008. Natural experiments and quasinatural experiments. In The New Palgrave Dictionary of Economics. Palgrave Macmillan. Olive Jean Dunn. 1961. Multiple comparisons among means. Journal of the American Statistical Association, 56(293):52–64. Song Feng, Jun Seok Kang, Polina Kuznetsova, and Yejin Choi. 2013. Connotation lexicon: A dash of sentiment beneath the surface meaning. In Proceedings of ACL. Rudolph Flesch. 1948. A new readability yardstick. Journal of applied psychology, 32(3):221. Kevin Gimpel, Nathan Schneider, Brendan O’Connor, Dipanjan Das, Daniel Mills, Jacob Eisenstein, Michael Heilman, Dani Yogatama, Jeffrey Flanigan, and Noah A. Smith. 2011. Part-of-speech Tagging for Twitter: Annotation, Features, and Experiments. In Proceedings of NAACL (short paper). David Godes, Dina Mayzlin, Yubo Chen, Sanjiv Das, Chrysanthos Dellarocas, Bruce Pfeiffer, Barak Libai, Subrata Sen, Mengze Shi, and Peeter Verlegh. 2005. The firm’s management of social interactions. Marketing Letters, 16(3-4):415–428. Marco Guerini, Carlo Strapparava, and G¨ozde ¨Ozbal. 2011. Exploring text virality in social networks. In Proceedings of ICWSM (poster). Marco Guerini, Alberto Pepe, and Bruno Lepri. 2012. Do linguistic style and readability of scientific abstracts affect their virality? In Proceedings of ICWSM (poster). Lars Kai Hansen, Adam Arvidsson, Finn ˚Arup Nielsen, Elanor Colleoni, and Michael Etter. 2011. Good friends, bad news-affect and virality in Twitter. Communications in Computer and Information Science, 185:34–43. Chip Heath, Chris Bell, and Emily Sternberg. 2001. Emotional selection in memes: The case of urban legends. Journal of personality and social psychology, 81(6):1028. George C. Homans. 1958. Social Behavior as Exchange. American Journal of Sociology, 63(6):597– 606. Liangjie Hong, Ovidiu Dan, and Brian D. Davison. 2011. Predicting popular messages in Twitter. In Proceedings of WWW. Carl I. Hovland, Irving L. Janis, and Harold H. Kelley. 1953. Communication and Persuasion: Psychological Studies of Opinion Change, volume 19. Yale University Press. J. Peter Kincaid, Robert P. Fishburne Jr., Richard L. Rogers, and Brad S. Chissom. 1975. Derivation of new readability formulas (automated readability index, fog count and flesch reading ease formula) for navy enlisted personnel. Technical report, DTIC Document. 
Haewoon Kwak, Changhyun Lee, Hosung Park, and Sue Moon. 2010. What is Twitter, a social network or a news media? In Proceedings of WWW. Himabindu Lakkaraju, Julian McAuley, and Jure Leskovec. 2013. What’s in a name? Understanding the interplay between titles, content, and communities in social media. In Proceedings of ICWSM. Annie Louis and Ani Nenkova. 2013. What makes writing great? First experiments on article quality prediction in the science journalism domain. Transactions of ACL. Zongyang Ma, Aixin Sun, and Gao Cong. 2012. Will this #hashtag be popular tomorrow? In Proceedings of SIGIR. Neil McIntyre and Mirella Lapata. 2009. Learning to tell tales: A data-driven approach to story generation. In Proceedings of ACL-IJCNLP. Katherine L Milkman and Jonah Berger. 2012. What makes online content viral? Journal of Marketing Research, 49(2):192–205. 184 Saˇsa Petrovi´c, Miles Osborne, and Victor Lavrenko. 2011. RT to win! Predicting message propagation in Twitter. In Proceedings of ICWSM. Emily Pitler and Ani Nenkova. 2008. Revisiting readability: A unified framework for predicting text quality. In Proceedings of EMNLP. Ellen Riloff, Ashequl Qadir, Prafulla Surve, Lalindra De Silva, Nathan Gilbert, and Ruihong Huang. 2013. Sarcasm as contrast between a positive sentiment and negative situation. In Proceedings of EMNLP. Daniel M. Romero, Chenhao Tan, and Johan Ugander. 2013. On the interplay between social and topical structure. In Proceedings of ICWSM. Matthew J. Salganik, Peter Sheridan Dodds, and Duncan J. Watts. 2006. Experimental study of inequality and unpredictability in an artificial cultural market. Science, 311(5762):854–856. Matthew P. Simmons, Lada A Adamic, and Eytan Adar. 2011. Memes online: Extracted, subtracted, injected, and recollected. In Proceedings of ICWSM. Bongwon Suh, Lichan Hong, Peter Pirolli, and Ed H. Chi. 2010. Want to be retweeted? Large scale analytics on factors impacting retweet in Twitter network. In Proceedings of SocialCom. Tao Sun, Ming Zhang, and Qiaozhu Mei. 2013. Unexpected relevance: An empirical study of serendipity in retweets. In Proceedings of ICWSM. Oren Tsur and Ari Rappoport. 2012. What’s in a hashtag?: Content based prediction of the spread of ideas in microblogging communities. In Proceedings of WSDM. Jaewon Yang and Jure Leskovec. 2011. Patterns of temporal variation in online media. In Proceedings of WSDM. 185
Proceedings of the 52nd Annual Meeting of the Association for Computational Linguistics, pages 186–196, Baltimore, Maryland, USA, June 23-25 2014. c⃝2014 Association for Computational Linguistics Inferring User Political Preferences from Streaming Communications Svitlana Volkova,1 Glen Coppersmith2 and Benjamin Van Durme1,2 1Center for Language and Speech Processing, 2Human Language Technology Center of Excellence, Johns Hopkins University, Baltimore, MD 21218 [email protected], [email protected], [email protected] Abstract Existing models for social media personal analytics assume access to thousands of messages per user, even though most users author content only sporadically over time. Given this sparsity, we: (i) leverage content from the local neighborhood of a user; (ii) evaluate batch models as a function of size and the amount of messages in various types of neighborhoods; and (iii) estimate the amount of time and tweets required for a dynamic model to predict user preferences. We show that even when limited or no selfauthored data is available, language from friend, retweet and user mention communications provide sufficient evidence for prediction. When updating models over time based on Twitter, we find that political preference can be often be predicted using roughly 100 tweets, depending on the context of user selection, where this could mean hours, or weeks, based on the author’s tweeting frequency. 1 Introduction Inferring latent user attributes such as gender, age, and political preferences (Rao et al., 2011; Zamal et al., 2012; Cohen and Ruths, 2013) automatically from personal communications and social media including emails, blog posts or public discussions has become increasingly popular with the web getting more social and volume of data available. Resources like Twitter1 or Facebook2 become extremely valuable for studying the underlying properties of such informal communications because of its volume, dynamic nature, and diverse population (Lunden, 2012; Smith, 2013). 1http://www.demographicspro.com/ 2http://www.wolframalpha.com/facebook/ The existing batch models for predicting latent user attributes rely on thousands of tweets per author (Rao et al., 2010; Conover et al., 2011; Pennacchiotti and Popescu, 2011a; Burger et al., 2011; Zamal et al., 2012; Nguyen et al., 2013). However, most Twitter users are less prolific than those examined in these works, and thus do not produce the thousands of tweets required to obtain their levels of accuracy e.g., the median number of tweets produced by a random Twitter user per day is 10. Moreover, recent changes to Twitter API querying rates further restrict the speed of access to this resource, effectively reducing the amount of data that can be collected in a given time period. In this paper we analyze and go beyond static models formulating personal analytics in social media as a streaming task. We first evaluate batch models that are cognizant of low-resource prediction setting described above, maximizing the efficiency of content in calculating personal analytics. To the best of our knowledge, this is the first work that makes explicit the tradeoff between accuracy and cost (manifest as calls to the Twitter API), and optimizes to a different tradeoff than state-ofthe-art approaches, seeking maximal performance when limited data is available. 
In addition, we propose streaming models for personal analytics that dynamically update user labels based on their stream of communications which has been addressed previously by Van Durme (2012b). Such models better capture the real-time nature of evidence being used in latent author attribute predictions tasks. Our main contributions include: - develop low-resource and real-time dynamic approaches for personal analytics using as an example the prediction of political preference of Twitter users; - examine the relative utility of six different notions of “similarity” between users in an implicit Twitter social network for personal analytics; 186 - experiments are performed across multiple datasets supporting the prediction of political preference in Twitter, to highlight the significant differences in performance that arise from the underlying collection and annotation strategies. 2 Identifying Twitter Social Graph Twitter users interact with one another and engage in direct communication in different ways e.g., using retweets, user mentions e.g., @youtube or hashtags e.g., #tcot, in addition to having explicit connections among themselves such as following, friending. To investigate all types of social relationships between Twitter users and construct Twitter social graphs we collect lists of followers and friends, and extract user mentions, hashtags, replies and retweets from communications.3 2.1 Social Graph Definition Lets define an attributed, undirected graph G = (V, E), where V is a set of vertices and E is a set of edges. Each vertex vi represents someone in a communication graph i.e., communicant: here a Twitter user. Each vertex is attributed with a feature vector ⃗f(vi) which encodes communications e.g., tweets available for a given user. Each vertex is associated with a latent attribute a(vi), in our case it is binary a(vi) ∈{D, R}, where D stands for Democratic and R for Republican users. Each edge eij ∈E represents a connection between vi and vj, eij = (vi, vj) and defines different social circles between Twitter users e.g., follower (f), friend (b), user mention (m), hashtag (h), reply (y) and retweet (w). Thus, E ∈ V (2)×{f, b, h, m, w, y}. We denote a set of edges of a given type as φr(E) for r ∈{f, b, h, m, w, y}. We denote a set of vertices adjacent to vi by social circle type r as Nr(vi) which is equivalent to {vj | eij ∈φr(E)}. Following Filippova (2012) we refer to Nr(vi) as vi’s social circle, otherwise known as a neighborhood. In most cases, we only work with a sample of a social circle, denoted by N′ r(vi) where |N′ r(vi)| = k is its size for vi. Figure 1 presents an example of a social graph derived from Twitter. Notably, users from different social circles can be shared across the users of the same or different classes e.g., a user vj can be 3The code and detailed explanation on how we collected all six types of user neighbors and their communications using Twitter API can be found here: http://www.cs.jhu.edu/ svitlana/ Figure 1: An example of a social graph with follower, friend, @mention, reply, retweet and hashtag social circles for each user of interest e.g., blue: Democratic, red: Republican. in both follower circle vj ∈Nf(vi), vi ∈D and retweet circle vj ∈Nw(vk), vk ∈R. 2.2 Candidate-Centric Graph We construct candidate-centric graph Gcand by looking into following relationships between the users and Democratic or Republican candidates during the 2012 US Presidential election. 
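To make the graph definition in Section 2.1 concrete, the following is a minimal sketch of an attributed, multi-relational social graph with typed circles. The class names, string user ids, and the sampling convention are illustrative assumptions, not the authors' implementation.

```python
from dataclasses import dataclass, field
from typing import Dict, List, Optional, Set, Tuple

# Circle types r in {follower, friend, hashtag, mention, retweet, reply}
CIRCLES = ("follower", "friend", "hashtag", "mention", "retweet", "reply")

@dataclass
class User:
    tweets: List[str] = field(default_factory=list)   # source of the feature vector f(v_i)
    label: Optional[str] = None                        # latent attribute a(v_i) in {"D", "R"}

@dataclass
class SocialGraph:
    users: Dict[str, User] = field(default_factory=dict)
    # phi_r(E): edges of each circle type, stored as unordered pairs of user ids
    edges: Dict[str, Set[Tuple[str, str]]] = field(
        default_factory=lambda: {r: set() for r in CIRCLES})

    def add_edge(self, r: str, u: str, v: str) -> None:
        self.edges[r].add(tuple(sorted((u, v))))

    def neighborhood(self, r: str, u: str, k: Optional[int] = None) -> List[str]:
        """N_r(v_i): neighbors of u under circle type r; N'_r(v_i) if a sample size k is given."""
        nbrs = [b if a == u else a for (a, b) in self.edges[r] if u in (a, b)]
        return nbrs[:k] if k is not None else nbrs
```

As in Figure 1, the same neighbor id may of course appear in several circles of several users of interest.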
In the Fall of 2012, leading up to the elections, we randomly sampled n = 516 Democratic and m = 515 Republican users. We labeled users as Democratic if they exclusively follow both Democratic candidates4 – BarackObama and JoeBiden but do not follow both Republican candidates – MittRomney and RepPaulRyan and vice versa. We collectively refer to D and R as our “users of interest” for which we aim to predict political preference. For each such user we collect recent tweets and randomly sample their immediate k = 10 neighbors from follower, friend, user mention, reply, retweet and hashtag social circles. 2.3 Geo-Centric Graph We construct a geo-centric graph Ggeo by collecting n = 135 Democratic and m = 135 Republican users from the Maryland, Virginia and Delaware region of the US with self-reported political preference in their biographies. Similar to the candidate-centric graph, for each user we collect recent tweets and randomly sample user social circles in the Fall of 2012. We collect this data to get a sample of politically less active users compared to the users from candidate-centric graph. 2.4 ZLR Graph We also consider a GZLR graph constructed from a dataset previously used for political affiliation 4As of Oct 12, 2012, the number of followers for Obama, Biden, Romney and Ryan were 2m, 168k, 1.3m and 267k. 187 classification (Zamal et al., 2012). This dataset consists of 200 Republican and 200 Democratic users associated with 925 tweets on average per user.5 Each user has on average 6155 friends with 642 tweets per friend. Sharing restrictions and rate limits on Twitter data collection only allowed us to recreate a semblance of ZLR data6 – 193 Democratic and 178 Republican users with 1K tweets per user, and 20 neighbors of four types including follower, friends, user mention and retweet with 200 tweets per neighbor for each user of interest. 3 Batch Models Baseline User Model As input we are given a set of vertices representing users of interest vi ∈V along with feature vectors ⃗f(vi) derived from content authored by the user of interest. Each user is associated with a non-zero number of publicly posted tweets. Our goal is assign to a category each user of interest vi based on ⃗f(vi). Here we focus on a binary assignment into the categories Democratic D or Republican R. The log-linear model7 for such binary classification is: Φvi =  D (1 + exp[−⃗θ · ⃗f(vi)])−1 ≥0.5, R otherwise. (1) where features are normalized word ngram counts extracted from vi’s tweets ⃗ft(vi) : D×t(vi) →R. The proposed baseline model follows the same trends as the existing state-of-the-art approaches for user attribute classification in social media as described in Section 8. Next we propose to extend the baseline model by taking advantage of language in user social circles as describe below. Neighbor Model As input we are given user-local neighborhood Nr(vi), where r is a neighborhood type. Besides the neighborhood’s type r, each is characterized by: • the number of communications per neighbor ⃗ft(Nr), t = {5, 10, 15, 25, 50, 100, 200}; 5The original dataset was collected in 2012 and has been recently released at http://icwsm.cs.mcgill.ca/. Political labels are extracted from http://www.wefollow.com as described by Pennacchiotti and Popescu (2011b). 6This inability to perfectly replicate prior work based on Twitter is a recognized problem throughout the community of computational social science, arising from the data policies of Twitter itself, it is not specific to this work. 
7 We use log-linear models over reasonable alternatives such as perceptron or SVM, following the practice of a wide range of previous work in related areas (Smith, 2004; Liu et al., 2005; Poon et al., 2009), including text classification in social media (Van Durme, 2012b; Yang and Eisenstein, 2013).
• the order of the social circle – the number of neighbors per user of interest, |Nr| = deg(vi), n = {1, 2, 5, 10}.
Our goal is to classify users of interest using evidence (e.g., communications) from their local neighborhood, Σ_n ⃗ft[Nr(vi)] ≡ ⃗f(Nr), as Democratic or Republican. The corresponding log-linear model is defined as:
ΦNr = D if (1 + exp[−⃗θ · ⃗f(Nr)])^(−1) ≥ 0.5, and R otherwise. (2)
To check whether our static models are cognizant of low-resource prediction settings, we compare the performance of the user model from Eq. 1 and the neighborhood model from Eq. 2. Following the streaming nature of social media, we see the scarce available resource as the number of requests allowed per day to the Twitter API. Here we abstract this to a model assumption where we receive one tweet tk at a time and aim to maximize classification performance with as few tweets per user as possible:8
• for the baseline user model: minimize_k Σ_k tk(vi), (3)
• for the neighborhood model: minimize_k Σ_n Σ_k tk[Nr(vi)]. (4)
8 The separate issue is that many authors simply don't tweet very often. For instance, 85.3% of all Twitter users post less than one update per day, as reported at http://www.sysomos.com/insidetwitter/. Thus, their communications are scarce even if we could get all of them without rate limiting from the Twitter API.
4 Streaming Models
We rely on a straightforward Bayesian rule update to our batch models in order to simulate a real-time streaming prediction scenario, as a first step beyond the existing models, as shown in Figure 2. The model makes predictions of a latent user attribute, e.g., Republican, under a model assumption of sequentially arriving, independent and identically distributed observations T = (t1, . . . , tk).9 The model dynamically updates posterior probability estimates p(a(vi) = R | tk) for a given user vi as additional evidence tk is acquired, as defined in a general form below for any latent attribute a(vi) ∈ A given the tweets T of user vi:
p(a(vi) = x ∈ A | T) = p(T | a(vi) = x) · p(a(vi) = x) / Σ_{y∈A} p(T | a(vi) = y) · p(a(vi) = y)
= Π_k p(tk | a(vi) = x) · p(a(vi) = x) / Σ_{y∈A} Π_k p(tk | a(vi) = y) · p(a(vi) = y), (5)
where y ranges over all possible attribute values, and k is the number of tweets per user. For example, to predict user political preference, we start with a prior P(R) = 0.5, and sequentially update the posterior p(R | T) by accumulating evidence from the likelihood p(tk | R):
p(R | T) = Π_k p(tk | R) · p(R) / [ Π_k p(tk | R) · p(R) + Π_k p(tk | D) · p(D) ]. (6)
9 Given the dynamic character of online discourse, it will clearly be of interest in the future to consider models that go beyond the iid assumption.
Figure 2: Stream-based classification of an attribute a(vi) ∈ {R, D} given a stream of communications t1, t2, . . . , tk authored by a user vi or user immediate neighbors from Nr social circles at time τ1, τ2, . . . , τk. [Diagram omitted; it depicts p(R | t1, . . . , tk) increasing, e.g., 0.6, 0.7, 0.9, as tweets arrive over time.]
Our goal is to maximize posterior probability estimates given a stream of communications for each user in the data over (a) time τ and (b) the number of tweets T.
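To make Eq. 6 concrete, here is a minimal sketch of the sequential posterior update, carried out in log space. Purely for illustration it assumes that the per-tweet likelihoods p(tk | R) and p(tk | D) factor over unigram tokens with precomputed log-likelihood tables; in the paper the likelihoods come from the log-linear classifier applied to each tweet, so the tables and the back-off constant below are stand-ins.

```python
import math
from typing import Dict, Iterable, Iterator

def stream_posterior(tweets: Iterable[str],
                     log_lik_R: Dict[str, float],
                     log_lik_D: Dict[str, float],
                     prior_R: float = 0.5) -> Iterator[float]:
    """Yield p(R | t_1..t_k) after each tweet, following the update in Eq. 6."""
    log_R, log_D = math.log(prior_R), math.log(1.0 - prior_R)
    for tweet in tweets:
        for token in tweet.lower().split():
            # unseen tokens fall back to a small constant log-likelihood (illustrative choice)
            log_R += log_lik_R.get(token, math.log(1e-6))
            log_D += log_lik_D.get(token, math.log(1e-6))
        # renormalize with log-sum-exp for numerical stability
        m = max(log_R, log_D)
        z = m + math.log(math.exp(log_R - m) + math.exp(log_D - m))
        yield math.exp(log_R - z)
```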
For that, for each user we take tweets that arrive continuously over time and apply two different streaming models: • User Model with Dynamic Updates: relies exclusively on user tweets t(vi) 1 , . . . , t(vi) k following the order they arrive over time τ, where for each user vi we dynamically update the posterior p(R | t(vi) 1 , . . . , t(vi) k ). • User-Neighbor Model with Dynamic Updates: relies on both neighbor Nr communications including friend, follower, retweet, user mention and user tweets t(vi) 1 , . . . , t(Nr) k following the order they arrive over time τ; here we dynamically update the posterior probability p(R | t(vi) 1 , . . . , t(Nr) k ). 5 Experimental Setup We design a set of experiments to analyze static and dynamic models for political affiliation classification defined in Sections 3 and 4. 5.1 Batch Classification Experiments We first answer whether communications from user-local neighborhoods can help predict political preference for the user. To explore the contribution of different neighborhood types we learn static user and neighbor models on Gcand, Ggeo and GZLR graphs. We also examine the ability of our static models to predict user political preferences in low-resource setting e.g., 5 tweets. The existing models follow a standard setup when either user or neighbor tweets are available during train and test. For a static neighbor model we go beyond that, and train our the model on all data available per user, but only apply part of the data at the test time, pushing the boundaries of how little is truly required for classification. For example, we only use follower tweets for Gtest, but we use tweets from all types of neighbors for Gtrain. Such setup will simulate different realworld prediction scenarios which have not been previously explored, to our knowledge e.g., when a user has a private profile or has not tweeted yet, and only user neighbor tweets are available. We experiment with our static neighbor model defined in Eq.2 with the aim to: 1. evaluate neighborhood size influence, we change the number of neighbors and try n = [1, 2, 5, 10] neighbor(s) per user; 2. estimate neighbor content influence, we alternate the amount of content per neighbor and try t = [5, 10, 15, 25, 50, 100, 200] tweets. We perform 10-fold cross validation10 and run 100 random restarts for every n and t parameter combination. We compare our static neighbor and user models using the cost functions from Eq.3 and Eq.4. For all experiments we use LibLinear (Fan et al., 2008), integrated in the Jerboa toolkit (Van Durme, 2012a). Both models defined in Eq.1 and Eq.2 are learned using normalized count-based word ngram features extracted from either user or neighbor tweets.11 10For each fold we split the data into 3 parts: 70% train, 10% development and 20% test. 11For brevity we omit reporting results for bigram and trigram features, since unigrams showed superior performance. 189 5.2 Streaming Classification Experiments We evaluate our models with dynamic Bayesian updates on a continuous stream of communications over time as shown in Figure 2. Unlike static model experiments, we are not modeling the influence of the number of neighbors or the amount of content per neighbor. Here, we order user and neighbor communication streams by real world time of posting and measure changes in posterior probabilities over time. 
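A minimal sketch of that time-ordering step is shown below; it assumes every communication carries its posting timestamp, and the tuple layout and helper name are illustrative rather than taken from the paper.

```python
import heapq
from typing import Iterable, List, Tuple

Comm = Tuple[float, str, str]   # (posting time, author id, tweet text)

def merge_streams(user_stream: Iterable[Comm],
                  neighbor_streams: List[Iterable[Comm]]) -> Iterable[Comm]:
    """Interleave a user's stream with neighbor streams by real-world posting time.
    Each input stream is assumed to be sorted by its first field (the timestamp)."""
    return heapq.merge(user_stream, *neighbor_streams, key=lambda c: c[0])
```

The merged stream can then be fed tweet by tweet to a sequential update such as the illustrative stream_posterior sketch above, recording p(R | T) after every tweet to trace convergence over time.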
The main purpose of these experiments is to quantitatively evaluate (1) the number of tweets and (2) the amount of real-world time it takes to observe enough evidence on Twitter to make reliable predictions. We experiment with the log-linear models defined in Eq. 1 and 2 and continuously estimate the posterior probabilities P(R | T) as defined in Eq. 6. We average the posterior probability results over the users in the Gcand, Ggeo and GZLR graphs. We train streaming models on an attribute-balanced subset of tweets for each user vi, excluding vi's tweets (or vi's neighbor tweets for a joint model). This setup is similar to leave-one-out classification. The classifier is learned using binary word ngram features extracted from user or user-neighbor communications. We prefer binary to normalized count-based features to overcome sparsity issues caused by making predictions on each tweet individually.
6 Static Classification Results
6.1 Modeling User Content Influence
We investigate classification decision probabilities for our static user model Φvi by making predictions on a random set of 5 vs. 100 tweets per user. To our knowledge, only limited work on personal analytics (Burger et al., 2011; Van Durme, 2012b) has performed this straightforward comparison. For that purpose, we take a random partition containing 100 users of the Gcand graph and perform four independent classification experiments – two runs using 5 and two runs using 100 tweets per user. Figure 3 demonstrates that more tweets at prediction time lead to higher accuracy: more users with 100 tweets are correctly classified, e.g., filled green markers in the upper right quadrant are true Republicans and those in the lower left quadrant are true Democrats. Moreover, many users with 100 tweets are close to 0.5 decision probability, which suggests that the classifier is merely uncertain rather than being completely off; e.g., misclassified Republican users with 5 tweets (unfilled blue markers in the lower right quadrant) are close to 0. These results follow naturally from the underlying feature representation: having more tweets per user leads to a lower-variance estimate of a target multinomial distribution. The more robustly this distribution is estimated (based on having more tweets), the more confident we should be in the classifier output.
Figure 3: Classification probabilities for Φvi estimated over 100 users in Gcand tested on 5 (blue) vs. 100 (green) tweets per user, where Republican = 1, Democratic = 0, filled markers = correctly classified, not filled = misclassified users. [Plot omitted; axes: classification decision (probability) vs. users.]
Figure 4: Modeling the influence of the number of tweets per neighbor t=[5, .., 200] for the Gcand and Ggeo graphs; panels: (a) Ggeo: 2 neighbors, (b) Ggeo: 10 neighbors, (c) Gcand: 2 neighbors, (d) Gcand: 10 neighbors. [Plots omitted; axes: accuracy vs. log(tweets per neighbor), with curves for the friend, follower, hashtag, user-mention, retweet, reply and user models.]
6.2 Modeling Neighbor Content Influence
Here we discuss the results for our static neighborhood model. We study the influence of the neighborhood type r and its size in terms of the number of neighbors n and tweets t per neighbor.
Figure 5: Modeling the influence of the number of neighbors per user n=[1, .., 10] for the Gcand and Ggeo graphs; panels: (a) Gcand: 5 tweets, (b) Gcand: 200 tweets, (c) Ggeo: 5 tweets, (d) Ggeo: 200 tweets. [Plots omitted; axes: accuracy vs. log(number of neighbors), with curves for the friend, follower, hashtag, user-mention, retweet and reply circles.]
In Figure 4 we present accuracy results for the Gcand and Ggeo graphs. Following Eq. 3 and 4, we spent an equal amount of resources to obtain 100 user tweets and 10 tweets from 10 neighbors. We annotate these 'points of equal number of communications' with a line on top marked with a corresponding number of user tweets. We show that three of six social circles – friend, retweet and user-mention – yield better accuracy compared to the user model for all graphs when t ≥ 250. Thus, for effectively classifying a given user vi it is better to take 200 tweets each from 10 neighbors rather than 2,000 tweets from the user. The best accuracy for Gcand is 0.75 for friend, follower, retweet and user-mention neighborhoods, which is 0.03 higher than the user baseline; for Ggeo it is 0.67 for user-mention and 0.64 for retweet circles, compared to 0.57 for the user model; for GZLR it is 0.863 for retweet and 0.849 for friend circles, which is 0.11 higher than the user baseline. Finally, similarly to the results for the user model given in Figure 3, increasing the number of tweets per neighbor from 5 to 200 leads to a significant gain in performance for all neighborhood types.
6.3 Modeling Neighborhood Size
In Figure 5 we present accuracy results to show neighborhood size influence on classification performance for the Ggeo and Gcand graphs. Our results demonstrate that even small changes to the neighborhood size n lead to better performance, which does not support the claims by Zamal et al. (2012). We demonstrate that increasing the size of the neighborhood leads to better performance across all six neighborhood types. Friend, user mention and retweet neighborhoods yield the highest accuracy for all graphs. We observe that when the number of neighbors is n = 1, the difference in accuracy across neighborhood types is less significant, but for n ≥ 2 it becomes more significant.
7 Streaming Classification Results
7.1 Modeling Dynamic Posterior Updates from a User Stream
Figures 6a and 6b demonstrate dynamic user model prediction results averaged over users from the Gcand and GZLR graphs. Each figure outlines changes in sequential average probability estimates pµ(R | T) for each individual self-authored tweet tk as defined in Eq. 6. The average probability estimates pµ(R | T) are reported for every 5 tweets in a stream T = (t1, . . . , tk) as Σ P(R | tk) / n, where n is the total number of users with the same attribute R or D. We represent pµ(R | T) as a box-and-whisker plot with the median, lower and upper quantiles to show the variance; the lengths of the whiskers indicate lower and upper extreme values.
Figure 6: Streaming classification results from user communications for the Gcand and GZLR graphs, averaged over every 5 tweets (red - Republican, blue - Democratic); panels: (a) User Gcand, (b) User GZLR. [Plots omitted; axes: p(Republican|T) vs. tweet stream T.]
Figure 7: Time needed for (a) - (b) the dynamic user model and (c) - (d) the joint user-neighbor model to infer political preferences of Democratic (blue) and Republican (red) users at 75% (dotted line) and 95% (solid line) accuracy levels; panels: (a) User Gcand, (b) User GZLR, (c) User-Neigh Gcand, (d) User-Neigh GZLR. [Plots omitted; axes: users vs. time in weeks.]
We find similar behavior across all three graphs. In particular, the posterior estimates converge faster when predicting Democratic than Republican users, even though the model has been trained on an equal number of tweets per class. We observe that the average posterior estimates Pµ(R | T) converge faster to 0 (Democratic) than to 1 (Republican) in Figures 6a and 6b. This suggests that the language of Democrats is more expressive of their political preference than the language of Republicans. For example, frequent politically influenced terms used widely by Democratic users include faith4liberty, constitutionally, pass, vote2012, terroristic. The variance of the average posterior estimates decreases as the number of tweets increases for all three datasets. Moreover, we detect that Pµ(R|T) estimates for users in Gcand converge 23 times faster in terms of the number of tweets than for users in GZLR. The slowest convergence is detected for Ggeo, where after tk = 250 tweets the average posterior estimate is Pµ(R | tk) = 0.904 ± 0.044 and Pµ(D | tk) = 0.861 ± 0.008. This means that users in Gcand are more politically vocal compared to users in GZLR and Ggeo. As a result, less active users in Ggeo simply need more than 250 tweets to converge to a true 0 or 1 class. These results are coherent with the outcomes for our static models shown in Figures 4 and 5.
These findings further confirm that differences in performance are caused by various biases present in the data due to distinct sampling and annotation approaches. Figure 7a and 7b illustrate the amount of time required for the user model to infer political preferences estimated for 1,031 users in Gcand and 371 users in GZLR. The amount of time needed can be evaluated for different accuracy levels e.g., 0.75 and 0.95. Thus, with 75% accuracy we classify: • 100 (∼20%) Republican users in 3.6 hours and Democratic users in 2.2 hours for Gcand; • 100 (∼56%) R users in 20 weeks and 100 (∼52%) D users in 8.9 weeks for GZLR which is 800 times longer that for Gcand; • 100 (∼75%) R users in 12 weeks and 80 (∼60%) D users in 19 weeks for Ggeo. Such extreme divergences in the amount of time required for classification across all graphs should be of strong interest to researchers concerned with latent attribute prediction tasks because Twitter users produce messages with extremely different frequencies. In our case, users in GZLR tweet approximately 800 times less frequently than users in Gcand. 7.2 Modeling Dynamic Posterior Updates from a Joint User-Neighbor Stream We estimate dynamic posterior updates from a joint stream of user and neighbor communications in Ggeo, Gcand and GZLR graphs. To make a fair comparison with a streaming user model, we start with the same user tweet t0(vi). Then instead of waiting for the next user tweet we rely on any neighbor tweets that appear until the user produces the next tweet t1(vi). We rely on communications from four types of neighbors such as friends, followers, retweets and user mentions. The convergence rate for the average posterior probability estimates Pµ(R|T) depending on the number of tweets is similar to the user model results presented in Figure 6. However, for Ggeo the variance for Pµ(R|T) is higher for Democratic users; for GZLR Pµ(R|T) →1 for Republicans in less than 110 tweets which is ∆t = 40 tweets faster than the user model; for Gcand the convergence for both Pµ(R|T) →1 and Pµ(D|T) →0 is not significantly different than the user model. Figures 7c and 7d show the amount of time required for a joint user-neighbor model to infer political preferences estimated for users in Gcand and GZLR. We find that with 75% accuracy we can classify 100 users for: • Gcand: Republican users in 23 minutes and Democratic users in 10 minutes; • GZLR: R users in 3.2 weeks and D users in 1.1 weeks which is 7 times faster on average across attributes than for the user model; • Ggeo: R users in 1.2 weeks and D users in 3.5 weeks which is on average 6 times faster across attributes than for the user model. Similar or better Pµ(R|T) convergence in terms of the number of tweets and, especially, in the amount of time needed for user and user-neighbor 192 models further confirms that neighborhood content is useful for political preference prediction. Moreover, communications from a joint stream allow to make an inference up to 7 times faster. 8 Related Work Supervised Batch Approaches The vast majority of work on predicting latent user attributes in social media apply supervised static SVM models for discrete categorical e.g., gender and regression models for continuous attributes e.g., age with lexical bag-of-word features for classifying user gender (Garera and Yarowsky, 2009; Rao et al., 2010; Burger et al., 2011; Van Durme, 2012b), age (Rao et al., 2010; Nguyen et al., 2011; Nguyen et al., 2013) or political orientation. 
We present an overview of the existing models for political preference prediction in Table 1. Bergsma et al. (2012), following up on Rao's work (2010) on adding socio-linguistic features to improve gender, ethnicity and political preference prediction, show that incorporating stylistic and syntactic information into the bag-of-words features improves gender classification. Other methods characterize Twitter users by applying limited amounts of network structure information in addition to lexical features. Conover et al. (2011) rely on identifying strong partisan clusters of Democratic and Republican users in a Twitter network based on retweet and user mention degree of connectivity, and then combine this clustering information with the follower and friend neighborhood size features. Pennacchiotti et al. (2011a; 2011b) focus on user behavior, network structure and linguistic features. Similar to our work, they assume that users from a particular class tend to reply and retweet messages of the users from the same class. We extend this assumption and study other relationship types, e.g., friends, user mentions etc. Recent work by Wong et al. (2013) investigates tweeting and retweeting behavior for political learning during the 2012 US Presidential election. The most similar work to ours is by Zamal et al. (2012), where the authors apply features from the tweets authored by a user's friends to infer attributes of that user. In this paper, we study different types of user social circles in addition to a friend network.
Table 1: Overview of the existing approaches for political preference classification in Twitter.
  Approach                                | Users                  | Tweets                    | Features                           | Accuracy
  Rao et al. (2010)                       | 1K                     | 2M                        | ngrams / socio-ling / stacked      | 0.824 / 0.634 / 0.809
  Pennacchiotti and Popescu (2011a)       | 10.3K                  | –                         | ling-all / soc-all / full          | 0.770 / 0.863 / 0.889
  Conover et al. (2011)                   | 1,000                  | 1M                        | full-text / hashtags / clusters    | 0.792 / 0.908 / 0.949
  Zamal et al. (2012)                     | 400                    | 400K / 3.85M / 4.25M      | UserOnly / Nbr / User-Nbr          | 0.890 / 0.920 / 0.932
  Cohen and Ruths (2013)                  | 397 / 1.8K / 262 / 196 | 397K / 1.8M / 262K / 196K | features from (Zamal et al., 2012) | 0.910 / 0.840 / 0.680 / 0.870
  This paper (batch classification)       | Gcand 1,031            | 206K / 2M                 | user ngrams / neighbor             | 0.720 / 0.750
                                          | Ggeo 270               | 54K / 540K                | user ngrams / neighbor             | 0.570 / 0.670
                                          | GZLR 371               | 371K / 1.5M               | user ngrams / neighbor             | 0.886 / 0.920
  This paper (dynamic Bayesian updates)   | Gcand 1,031            | 103K / 130K               | user stream / user-neigh.          | 0.995 / 0.999
                                          | Ggeo 270               | 54K / 67K                 | user stream / user-neigh.          | 0.843 / 0.882
                                          | GZLR 371               | 74K / 185K                | user stream / user-neigh.          | 0.892 / 0.999
Additionally, using social media for mining political opinions (O'Connor et al., 2010a; Maynard and Funk, 2012) or understanding sociopolitical trends and voting outcomes (Tumasjan et al., 2010; Gayo-Avello, 2012; Lampos et al., 2013) is becoming a common practice. For instance, Lampos et al. (2013) propose a bilinear user-centric model for predicting voting intentions in the UK and Australia from social media data. Other works explore political blogs to predict what content will get the most comments (Yano et al., 2013) or analyze communications from Capitol Hill12 to predict campaign contributors based on this content (Yano and Smith, 2013).
Unsupervised Batch Approaches Bergsma et al. (2013) show that large-scale clustering of user names improves gender, ethnicity and location classification on Twitter. O'Connor et al. (2010b), following the work by Eisenstein (2010), propose a Bayesian generative model to discover demographic language variations in Twitter. Rao et al.
(2011) suggest a hierarchical Bayesian model which takes advantage of user name morphology for predicting user gender and ethnicity. Golbeck et al. (2010) incorporate Twitter data in a spatial model of political ideology. Streaming Approaches Van Durme (2012b) proposed streaming models to predict user gender in Twitter. Other works suggested to process 12http://www.tweetcongress.org 193 text streams for a variety of NLP tasks e.g., realtime opinion mining and sentiment analysis in social media (Pang and Lee, 2008), named entity disambiguation (Sarmento et al., 2009), statistical machine translation (Levenberg et al., 2011), first story detection (Petrovi´c et al., 2010), and unsupervised dependency parsing (Goyal and Daum´e, 2011). Massive Online Analysis (MOA) toolkit developed by Bifet et al. (2010) is an alternative to the Jerboa package used in this work developed by Van Durme (2012a). MOA has been effectively used to detect sentiment changes in Twitter streams (Bifet et al., 2011). 9 Conclusions and Future Work In this paper, we extensively examined state-ofthe-art static approaches and proposed novel models with dynamic Bayesian updates for streaming personal analytics on Twitter. Because our streaming models rely on communications from Twitter users and content from various notions of userlocal neighborhood they can be effectively applied to real-time dynamic data streams. Our results support several key findings listed below. Neighborhood content is useful for personal analytics. Content extracted from various notions of a user-local neighborhood can be as effective or more effective for political preference classification than user self-authored content. This may be an effect of ‘sparseness’ of relevant user data, in that users talk about politics very sporadically compared to a random sample of their neighbors. Substantial signal for political preference prediction is distributed in the neighborhood. Querying for more neighbors per user is more beneficial than querying for extra content from the existing neighbors e.g., 5 tweets from 10 neighbors leads to higher accuracy than 25 tweets from 2 neighbors or 50 tweets from 1 neighbor. This may be also the effect of data heterogeneity in social media compared to e.g., political debate text (Thomas et al., 2006). These findings demonstrate that a substantial signal is distributed over the neighborhood content. Neighborhoods constructed from friend, user mention and retweet relationships are most effective. Friend, user mention and retweet neighborhoods show the best accuracy for predicting political preferences of Twitter users. We think that friend relationships are more effective than e.g., follower relationships because it is very likely that users share common interests and preferences with their friends, e.g. Facebook friends can even be used to predict a user’s credit score.13 User mentions and retweets are two primary ways of interaction on Twitter. They both allow to share information e.g., political news, events with others and to be involved in direct communication e.g., live political discussions, political groups. Streaming models are more effective than batch models for personal analytics. The predictions made using dynamic models with Bayesian updates over user and joint user-neighbor communication streams demonstrate higher performance with lower resources spent compared to the batch models. 
Depending on user political involvement, expressiveness and activeness, the perfect prediction (approaching 100% accuracy) can be made using only 100 - 500 tweets per user. Generalization of the classifiers for political preference prediction. This work raises a very important but under-explored problem of the generalization of classifiers for personal analytics in social media, also recently discussed by Cohen and Ruth (2013). For instance, the existing models developed for political preference prediction are all trained on Twitter data but report significantly different results even for the same baseline models trained using bag-of-word lexical features as shown in Table 1. In this work we experiment with three different datasets. Our results for both static and dynamic models show that the accuracy indeed depends on the way the data was constructed. Therefore, publicly available datasets need to be released for a meaningful comparison of the approaches for personal analytics in social media. In future work, we plan to incorporate iterative model updates from newly classified communications similar to online perceptron-style updates. In addition, we aim to experiment with neighborhood-specific classifiers applied towards the tweets from neighborhood-specific streams e.g., friend classifier used for friend tweets, retweet classifier applied to retweet tweets etc. Acknowledgments The authors would like to thank the anonymous reviewers for their helpful comments. 13http://money.cnn.com/2013/08/26/technology/social/ facebook-credit-score/ 194 References Shane Bergsma, Matt Post, and David Yarowsky. 2012. Stylometric analysis of scientific articles. In Proceedings of the Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies (NAACLHLT), pages 327–337. Shane Bergsma, Mark Dredze, Benjamin Van Durme, Theresa Wilson, and David Yarowsky. 2013. Broadly improving user classification via communication-based name and location clustering on Twitter. In Proceedings of the Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies (NAACL-HLT), pages 1010–1019. Albert Bifet, Geoff Holmes, Bernhard Pfahringer, Philipp Kranen, Hardy Kremer, Timm Jansen, and Thomas Seidl. 2010. MOA: Massive online analysis, a framework for stream classification and clustering. Journal of Machine Learning Research, 11:44–50. Albert Bifet, Geoffrey Holmes, Bernhard Pfahringer, and Ricard Gavald`a. 2011. Detecting sentiment change in Twitter streaming data. Journal of Machine Learning Research, 17:5–11. John D. Burger, John Henderson, George Kim, and Guido Zarrella. 2011. Discriminating gender on Twitter. In Proceedings of the Conference on Empirical Methods in Natural Language Processing (EMNLP), pages 1301–1309. Raviv Cohen and Derek Ruths. 2013. Classifying Political Orientation on Twitter: It’s Not Easy! In Proceedings of the International AAAI Conference on Weblogs and Social Media (ICWSM), pages 91–99. Michael D. Conover, Bruno Gonc¸alves, Jacob Ratkiewicz, Alessandro Flammini, and Filippo Menczer. 2011. Predicting the political alignment of Twitter users. In Proceedings of Social Computing, pages 192–199. Jacob Eisenstein, Brendan O’Connor, Noah A. Smith, and Eric P. Xing. 2010. A latent variable model for geographic lexical variation. In Proceedings of the Conference on Empirical Methods in Natural Language Processing (EMNLP), pages 1277–1287. 
Rong En Fan, Kai Wei Chang, Cho Jui Hsieh, Xiang Rui Wang, and Chih Jen Lin. 2008. LIBLINEAR: A library for large linear classification. Journal of Machine Learning Research, 9:1871–1874. Katja Filippova. 2012. User demographics and language in an implicit social network. In Proceedings of the 2012 Joint Conference on Empirical Methods in Natural Language Processing and Computational Natural Language Learning (EMNLPCoNLL), pages 1478–1488. Nikesh Garera and David Yarowsky. 2009. Modeling latent biographic attributes in conversational genres. In Proceedings of the Joint Conference of the 47th Annual Meeting of the ACL and the 4th International Joint Conference on Natural Language Processing of the AFNLP, pages 710–718. Daniel Gayo-Avello. 2012. No, you cannot predict elections with Twitter. Internet Computing, IEEE, 16(6):91–94. Jennifer Golbeck, Justin M. Grimes, and Anthony Rogers. 2010. Twitter use by the u.s. congress. Journal of the American Society for Information Science and Technology, 61(8):1612–1621. Amit Goyal and Hal Daum´e, III. 2011. Approximate scalable bounded space sketch for large data NLP. In Proceedings of the Conference on Empirical Methods in Natural Language Processing (EMNLP), pages 250–261. Vasileios Lampos, Daniel Preotiuc-Pietro, and Trevor Cohn. 2013. A user-centric model of voting intention from social media. In Proceedings of the Association for Computational Linguistics (ACL), pages 993–1003. Abby Levenberg, Miles Osborne, and David Matthews. 2011. Multiple-stream language models for statistical machine translation. In Proceedings of the Sixth Workshop on Statistical Machine Translation (WMT), pages 177–186. Yang Liu, Qun Liu, and Shouxun Lin. 2005. Loglinear models for word alignment. In Proceedings of the Annual Meeting on Association for Computational Linguistics (ACL), pages 459–466. Ingrid Lunden. 2012. Analyst: Twitter passed 500M users in june 2012, 140m of them in US; Jakarta ‘biggest tweeting’ city. http://techcrunch.com/2012/07/30/analyst-twitterpassed-500m-users-in-june-2012-140m-of-them-inus-jakarta-biggest-tweeting-city/. Diana Maynard and Adam Funk. 2012. Automatic detection of political opinions in tweets. In Proceedings of the 8th International Conference on The Semantic Web (ESWC), pages 88–99. Felix Ming Fai Wong, Chee Wei Tan, Soumya Sen, and Mung Chiang. 2013. Quantifying political leaning from tweets and retweets. In Proceedings of the International AAAI Conference on Weblogs and Social Media (ICWSM). Dong Nguyen, Noah A. Smith, and Carolyn P. Ros´e. 2011. Author age prediction from text using linear regression. In Proceedings of the 5th ACLHLT Workshop on Language Technology for Cultural Heritage, Social Sciences, and Humanities (LaTeCH), pages 115–123. 195 Dong Nguyen, Rilana Gravel, Dolf Trieschnigg, and Theo Meder. 2013. ”How old do you think I am?” A study of language and age in Twitter. In Proceedings of the AAAI Conference on Weblogs and Social Media (ICWSM), pages 439–448. Brendan O’Connor, Ramnath Balasubramanyan, Bryan R. Routledge, and Noah A. Smith. 2010a. From tweets to polls: Linking text sentiment to public opinion time series. In Proceedings of the International AAAI Conference on Weblogs and Social Media (ICWSM), pages 122–129. Brendan O’Connor, Jacob Eisenstein, Eric P. Xing, and Noah A. Smith. 2010b. A mixture model of demographic lexical variation. In Proceedings of the NIPS Workshop on Machine Learning and Social Computing. Bo Pang and Lillian Lee. 2008. Opinion mining and sentiment analysis. 
Foundations of Trends in Information Retrieval, 2(1-2):1–135, January. Marco Pennacchiotti and Ana-Maria Popescu. 2011a. Democrats, republicans and starbucks afficionados: user classification in twitter. In Proceedings of the 17th ACM SIGKDD Conference on Knowledge Discovery and Data Mining (KDD), pages 430–438. Marco Pennacchiotti and Ana Maria Popescu. 2011b. A machine learning approach to Twitter user classification. In Proceedings of the International AAAI Conference on Weblogs and Social Media (ICWSM), pages 281–288. Saˇsa Petrovi´c, Miles Osborne, and Victor Lavrenko. 2010. Streaming first story detection with application to Twitter. In Proceedings of Human Language Technologies: The 2010 Annual Conference of the North American Chapter of the Association for Computational Linguistics (NAACL). Hoifung Poon, Colin Cherry, and Kristina Toutanova. 2009. Unsupervised morphological segmentation with log-linear models. In Proceedings of Human Language Technologies: The Annual Conference of the North American Chapter of the Association for Computational Linguistics, pages 209–217. Delip Rao, David Yarowsky, Abhishek Shreevats, and Manaswi Gupta. 2010. Classifying latent user attributes in Twitter. In Proceedings of the 2nd International Workshop on Search and Mining Usergenerated Contents (SMUC), pages 37–44. Delip Rao, Michael Paul, Clay Fink, David Yarowsky, Timothy Oates, and Glen Coppersmith. 2011. Hierarchical Bayesian models for latent attribute detection in social media. In Proceedings of the International AAAI Conference on Weblogs and Social Media (ICWSM). Lu´ıs Sarmento, Alexander Kehlenbeck, Eug´enio Oliveira, and Lyle Ungar. 2009. An approach to web-scale named-entity disambiguation. In Proceedings of the 6th International Conference on Machine Learning and Data Mining in Pattern Recognition (MLDM), pages 689–703. Noah A. Smith. 2004. Log-linear models. Craig Smith. 2013. May 2013 by the numbers: 16 amazing Twitter stats. http://expandedramblings.com/index.php/march2013-by-the-numbers-a-few-amazing-twitter-stats/. Matt Thomas, Bo Pang, and Lillian Lee. 2006. Get out the vote: determining support or opposition from congressional floor-debate transcripts. In Proceedings of the Conference on Empirical Methods in Natural Language Processing, pages 327–335. A. Tumasjan, T. O. Sprenger, P. G. Sandner, and I. M. Welpe. 2010. Predicting elections with Twitter: What 140 characters reveal about political sentiment. In Proceedings of the International AAAI Conference on Weblogs and Social Media, pages 178–185. Benjamin Van Durme. 2012a. Jerboa: A toolkit for randomized and streaming algorithms. Technical report, Human Language Technology Center of Excellence. Benjamin Van Durme. 2012b. Streaming analysis of discourse participants. In Proceedings of the Conference on Empirical Methods in Natural Language Processing (EMNLP), pages 48–58. Yi Yang and Jacob Eisenstein. 2013. A log-linear model for unsupervised text normalization. In Proceedings of Conference on Empirical Methods in Natural Language Processing, pages 61–72. Tao Yano and Noah A. Smith. 2013. What’s worthy of comment? content and comment volume in political blogs. In International AAAI Conference on Weblogs and Social Media (ICWSM). Tao Yano, Dani Yogatama, and Noah A. Smith. 2013. A penny for your tweets: Campaign contributions and capitol hill microblogs. In Proceedings of the International AAAI Conference on Weblogs and Social Media (ICWSM). Faiyaz Al Zamal, Wendy Liu, and Derek Ruths. 2012. 
Homophily and latent attribute inference: Inferring latent attributes of Twitter users from neighbors. In Proceedings of the International AAAI Conference on Weblogs and Social Media, pages 387–390. 196
Proceedings of the 52nd Annual Meeting of the Association for Computational Linguistics, pages 197–207, Baltimore, Maryland, USA, June 23-25 2014. c⃝2014 Association for Computational Linguistics Steps to Excellence: Simple Inference with Refined Scoring of Dependency Trees Yuan Zhang, Tao Lei, Regina Barzilay, Tommi Jaakkola Massachusetts Institute of Technology {yuanzh, taolei, regina, tommi}@csail.mit.edu Amir Globerson The Hebrew University [email protected] Abstract Much of the recent work on dependency parsing has been focused on solving inherent combinatorial problems associated with rich scoring functions. In contrast, we demonstrate that highly expressive scoring functions can be used with substantially simpler inference procedures. Specifically, we introduce a sampling-based parser that can easily handle arbitrary global features. Inspired by SampleRank, we learn to take guided stochastic steps towards a high scoring parse. We introduce two samplers for traversing the space of trees, Gibbs and Metropolis-Hastings with Random Walk. The model outperforms state-of-the-art results when evaluated on 14 languages of non-projective CoNLL datasets. Our sampling-based approach naturally extends to joint prediction scenarios, such as joint parsing and POS correction. The resulting method outperforms the best reported results on the CATiB dataset, approaching performance of parsing with gold tags.1 1 Introduction Dependency parsing is commonly cast as a maximization problem over a parameterized scoring function. In this view, the use of more expressive scoring functions leads to more challenging combinatorial problems of finding the maximizing parse. Much of the recent work on parsing has been focused on improving methods for solving the combinatorial maximization inference problems. Indeed, state-of-the-art results have been ob1The source code for the work is available at http://groups.csail.mit.edu/rbg/code/ global/acl2014. tained by adapting powerful tools from optimization (Martins et al., 2013; Martins et al., 2011; Rush and Petrov, 2012). We depart from this view and instead focus on using highly expressive scoring functions with substantially simpler inference procedures. The key ingredient in our approach is how learning is coupled with inference. Our combination outperforms the state-of-the-art parsers and remains comparable even if we adopt their scoring functions. Rich scoring functions have been used for some time. They first appeared in the context of reranking (Collins, 2000), where a simple parser is used to generate a candidate list which is then reranked according to the scoring function. Because the number of alternatives is small, the scoring function could in principle involve arbitrary (global) features of parse trees. The power of this methodology is nevertheless limited by the initial set of alternatives from the simpler parser. Indeed, the set may already omit the gold parse. We dispense with the notion of a candidate set and seek to exploit the scoring function more directly. In this paper, we introduce a sampling-based parser that places few or no constraints on the scoring function. Starting with an initial candidate tree, our inference procedure climbs the scoring function in small (cheap) stochastic steps towards a high scoring parse. The proposal distribution over the moves is derived from the scoring function itself. Because the steps are small, the complexity of the scoring function has limited impact on the computational cost of the procedure. 
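As a concrete, purely illustrative picture of such a scoring function, the sketch below evaluates s(x, y) = θ · f(x, y) on a candidate tree represented as a head array. The feature templates are invented for the example and are not the parser's actual feature set, but they show that f may mix local arc features with global ones without any decomposition assumption.

```python
from collections import Counter
from typing import Dict, List, Tuple

def tree_features(words: List[str], heads: List[int]) -> Counter:
    """f(x, y): features of a whole candidate tree; heads[j] is the index of word j's
    head, with index 0 reserved for the root symbol."""
    f = Counter()
    for j in range(1, len(words)):
        h = heads[j]
        f[("arc", words[h], words[j])] += 1      # first-order arc feature
        f[("len", min(abs(h - j), 5))] += 1      # binned arc length
    # a global feature that inspects the entire tree at once
    f[("num-root-children",)] = sum(1 for j in range(1, len(words)) if heads[j] == 0)
    return f

def score(theta: Dict[Tuple, float], words: List[str], heads: List[int]) -> float:
    """s(x, y) = theta . f(x, y)."""
    return sum(theta.get(k, 0.0) * v for k, v in tree_features(words, heads).items())
```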
We explore two alternative proposal distributions. Our first strategy is akin to Gibbs sampling and samples a new head for each word in the sentence, modifying one arc at a time. The second strategy relies on a provably correct sampler for firstorder scores (Wilson, 1996), and uses it within a Metropolis-Hastings algorithm for general scoring functions. It turns out that the latter optimizes the 197 score more efficiently than the former. Because the inference procedure is so simple, it is important that the parameters of the scoring function are chosen in a manner that facilitates how we climb the scoring function in small steps. One way to achieve this is to make sure that improvements in the scoring functions are correlated with improvements in the quality of the parse. This approach was suggested in the SampleRank framework (Wick et al., 2011) for training structured prediction models. This method was originally developed for a sequence labeling task with local features, and was shown to be more effective than state-of-the-art alternatives. Here we apply SampleRank to parsing, applying several modifications such as the proposal distributions mentioned earlier. The benefits of sampling-based learning go beyond stand-alone parsing. For instance, we can use the framework to correct preprocessing mistakes in features such as part-of-speech (POS) tags. In this case, we combine the scoring function for trees with a stand-alone tagging model. When proposing a small move, i.e., sampling a head of the word, we can also jointly sample its POS tag from a set of alternatives provided by the tagger. As a result, the selected tag is influenced by a broad syntactic context above and beyond the initial tagging model and is directly optimized to improve parsing performance. Our joint parsing-tagging model provides an alternative to the widely-adopted pipeline setup. We evaluate our method on benchmark multilingual dependency corpora. Our method outperforms the Turbo parser across 14 languages on average by 0.5%. On four languages, we top the best published results. Our method provides a more effective mechanism for handling global features than reranking, outperforming it by 1.3%. In terms of joint parsing and tagging on the CATiB dataset, we nearly bridge (88.38%) the gap between independently predicted (86.95%) and gold tags (88.45%). This is better than the best published results in the 2013 SPMRL shared task (Seddah et al., 2013), including parser ensembles. 2 Related Work Earlier works on dependency parsing focused on inference with tractable scoring functions. For instance, a scoring function that operates over each single dependency can be optimized using the maximum spanning tree algorithm (McDonald et al., 2005). It was soon realized that using higher order features could be beneficial, even at the cost of using approximate inference and sacrificing optimality. The first successful approach in this arena was reranking (Collins, 2000; Charniak and Johnson, 2005) on constituency parsing. Reranking can be combined with an arbitrary scoring function, and thus can easily incorporate global features over the entire parse tree. Its main disadvantage is that the output parse can only be one of the few parses passed to the reranker. Recent work has focused on more powerful inference mechanisms that consider the full search space (Zhang and McDonald, 2012; Rush and Petrov, 2012; Koo et al., 2010; Huang, 2008). 
For instance, Nakagawa (2007) deals with tractability issues by using sampling to approximate marginals. Another example is the dual decomposition (DD) framework (Koo et al., 2010; Martins et al., 2011). The idea in DD is to decompose the hard maximization problem into smaller parts that can be efficiently maximized and enforce agreement among these via Lagrange multipliers. The method is essentially equivalent to linear programming relaxation approaches (Martins et al., 2009; Sontag et al., 2011), and also similar in spirit to ILP approaches (Punyakanok et al., 2004). A natural approach to approximate global inference is via search. For instance, a transitionbased parsing system (Zhang and Nivre, 2011) incrementally constructs a parsing structure using greedy beam-search. Other approaches operate over full trees and generate a sequence of candidates that successively increase the score (Daum´e III et al., 2009; Li et al., 2013; Wick et al., 2011). Our work builds on one such approach — SampleRank (Wick et al., 2011), a sampling-based learning algorithm. In SampleRank, the parameters are adjusted so as to guide the sequence of candidates closer to the target structure along the search path. The method has been successfully used in sequence labeling and machine translation (Haddow et al., 2011). In this paper, we demonstrate how to adapt the method for parsing with rich scoring functions. 3 Sampling-Based Dependency Parsing with Global Features In this section, we introduce our novel samplingbased dependency parser which can incorporate 198 arbitrary global features. We begin with the notation before addressing the decoding and learning algorithms. Finally, we extend our model to a joint parsing and POS correction task. 3.1 Notations We denote sentences by x and the corresponding dependency trees by y ∈Y(x). Here Y(x) is the set of valid (projective or non-projective) dependency trees for sentence x. We use xj to refer to the jth word of sentence x, and hj to the head word of xj. A training set of size N is given as a set of pairs D = {(x(i), y(i))}N i=1 where y(i) is the ground truth parse for sentence x(i). We parameterize the scoring function s(x, y) as s(x, y) = θ · f(x, y) (1) where f(x, y) is the feature vector associated with tree y for sentence x. We do not make any assumptions about how the feature function decomposes. In contrast, most state-of-the-art parsers operate under the assumption that the feature function decomposes into a sum of simpler terms. For example, in the second-order MST parser (McDonald and Pereira, 2006), all the feature terms involve arcs or consecutive siblings. Similarly, parsers based on dual decomposition (Martins et al., 2011; Koo et al., 2010) assume that s(x, y) decomposes into a sum of terms where each term can be maximized over y efficiently. 3.2 Decoding The decoding problem consists of finding a valid dependency tree y ∈Y(x) that maximizes the score s(x, y) = θ · f(x, y) with parameters θ. For scoring functions that extend beyond firstorder arc preferences, finding the maximizing nonprojective tree is known to be NP-hard (McDonald and Pereira, 2006). We find a high scoring tree through sampling, and (later) learn the parameters θ so as to further guide this process. Our sampler generates a sequence of dependency structures so as to approximate independent samples from p(y|x, T, θ) ∝exp (s(x, y)/T) (2) The temperature parameter T controls how concentrated the samples are around the maximum of s(x, y) (e.g., see Geman and Geman (1984)). 
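As a concrete illustration of the scoring function and of the annealed target in Equation 2, the following minimal Python sketch assumes dense numpy feature vectors and treats feature extraction itself as given; the names (score, annealed_logprob, acceptance_ratio) are illustrative and not part of the released implementation.

```python
import math
import numpy as np

def score(theta, feat_vec):
    """s(x, y) = theta . f(x, y); no decomposition of f(x, y) is assumed."""
    return float(np.dot(theta, feat_vec))

def annealed_logprob(theta, feat_vec, temperature):
    """Unnormalized log p(y | x, T, theta) = s(x, y) / T (Eq. 2).
    Lowering T concentrates the distribution around the highest-scoring tree."""
    return score(theta, feat_vec) / temperature

def acceptance_ratio(theta, feat_current, feat_proposed, temperature):
    """p(y') / p(y) under the annealed target; with a symmetric proposal this is
    the Metropolis acceptance ratio, to be capped at 1 by the caller."""
    delta = score(theta, feat_proposed) - score(theta, feat_current)
    return math.exp(delta / temperature)
```

The acceptance ratio previews the Metropolis-Hastings test used by the decoding algorithm described next.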
Sampling from target distribution p is typically as hard as (or harder than) that maximizing s(x, y). Inputs: θ, x, T0 (initial temperature), c (temperature update rate), proposal distribution q. Outputs: y∗ T ←T0 Set y0 to some random tree y∗←y0 repeat y′ ←q(·|x, yt, T, θ) if s(x, y′) > s(x, y∗) then y∗←y′ α = min h 1, p(y′)q(yt|y′) p(yt)q(y′|yt) i Sample Bernouli variable Z with P[Z = 1] = α. if Z = 0 then yt+1 ←yt else yt+1 ←y′ t ←t + 1 T ←c · T until convergence return y∗ Figure 1: Sampling-based algorithm for decoding (i.e., approximately maximizing s(x, y)). We follow here a Metropolis-Hastings sampling algorithm (e.g., see Andrieu et al. (2003)) and explore different alternative proposal distributions q(y′|x, y, θ, T). The distribution q governs the small steps that are taken in generating a sequence of structures. The target distribution p folds into the procedure by defining the probability that we will accept the proposed move. The general structure of our sampling algorithm is given in Figure 1. 3.2.1 Gibbs Sampling Perhaps the most natural choice of the proposal distribution q is a conditional distribution from p. This is feasible if we restrict the proposed moves to only small changes in the current tree. In our case, we choose a word j randomly, and then sample its head hj according to p with the constraint that we obtain a valid tree (when projective trees are sought, this constraint is also incorporated). For this choice of q, the probability of accepting the new tree (α in Figure 1) is identically one. Thus new moves are always accepted. 3.2.2 Exact First-Order Sampling One shortcoming of the Gibbs sampler is that it only changes one variable (arc) at a time. This usually leads to slow mixing, requiring more samples to get close to the parse with maximum score. Ideally, we would change multiple heads in the parse tree simultaneously, and sample those choices from the corresponding conditional distribution of p. While in general this is increasingly difficult with more heads, it is indeed tractable if 199 Inputs: x, yt, θ, K (number of heads to change). Outputs: y′ for i = 1 to |x| do inTree[i] ←false ChangeNode[i] ←false Set ChangeNode to true for K random nodes. head[0] ←−1 for i = 1 to |x| do u ←i while not inTree[u] do if ChangeNode[u] then head[u] ←randomHead(u, θ) else head[u] ←yt(u) u ←head[u] if LoopExist(head) then EraseLoop(head) u ←i while not inTree[u] do inTree[u] ←true u ←head[u] return Construct tree y′ from the head array. Figure 2: A proposal distribution q(y′|yt) based on the random walk sampler of Wilson (1996). The function randomHead samples a new head for node u according to the first-order weights given by θ. the model corresponds to a first-order parser. One such sampling algorithm is the random walk sampler of Wilson (1996). It can be used to obtain i.i.d. samples from distributions of the form: p(y) ∝ Y i→j∈y wij, (3) where y corresponds to a tree with a spcified root and wij is the exponential of the first-order score. y is always a valid parse tree if we allow multiple children of the root and do not impose projective constraint. The algorithm in Wilson (1996) iterates over all the nodes, and for each node performs a random walk according to the weights wij until the walk creates a loop or hits a tree. In the first case the algorithm erases the loop and continues the walk. If the walk hits the current tree, the walk path is added to form a new tree with more nodes. This is repeated until all the nodes are included in the tree. 
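The following condensed sketch illustrates the loop-erased random walk in its basic form, where every head is resampled and node 0 acts as the artificial root; in the parser's actual proposal (Figure 2, and described below) only K selected heads are resampled while the rest are fixed to the current tree. The weights[h][m] entries stand for the exponentiated first-order arc scores w_hm, and the function name is illustrative.

```python
import random

def random_walk_tree(n, weights, root=0):
    """Sample a dependency tree over words 1..n with artificial root 0, where
    weights[h][m] is the (unnormalized) first-order weight of arc h -> m.
    Loop-erased random walk (Wilson, 1996): walk from each word by repeatedly
    sampling a head until the walk hits the partial tree; loops are erased
    implicitly because head[u] is overwritten on revisits."""
    in_tree = [False] * (n + 1)
    head = [-1] * (n + 1)
    in_tree[root] = True                   # the root is always in the tree
    for start in range(1, n + 1):
        u = start
        while not in_tree[u]:              # random walk until we hit the tree
            cands = [h for h in range(n + 1) if h != u]
            probs = [weights[h][u] for h in cands]
            head[u] = random.choices(cands, probs)[0]
            u = head[u]                    # walk opposite to dependency direction
        u = start                          # freeze the loop-erased walk path
        while not in_tree[u]:
            in_tree[u] = True
            u = head[u]
    return head                            # head[m] is the sampled head of word m
```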
It can be shown that this procedure generates i.i.d. trees from p(y). Since our features do not by design correspond to a first-order parser, we cannot use the Wilson algorithm as it is. Instead we use it as the proposal function and sample a subset of the dependencies from the first-order distribution of our model, while fixing the others. In each step we uniformly sample K nodes to update and sample their new 1! 2! not →Monday →not ssssssssssss " → """ was loop erased! Black →Monday →was ROOT! It! was! not! Black! Monday! 2! 1! 3! ROOT! It! was! not! Black! Monday! (b) walk path:! (c) walk path:! (a) original tree! ROOT! It! was! not! Black! Monday! Figure 3: An illustration of random walk sampler. The index on each edge indicates its order on each walk path. The heads of the red words are sampled while others are fixed. The blue edges represent the current walk path and the black ones are already in the tree. Note that the walk direction is opposite to the dependency direction. (a) shows the original tree before sampling; (b) and (c) show the walk path and how the tree is generated in two steps. The loop not→Monday →not in (b) is erased. heads using the Wilson algorithm (in the experiments we use K = 4). Note that blocked Gibbs sampling would be exponential in K, and is thus very slow already at K = 4. The procedure is described in Figure 2 with a graphic illustration in Figure 3. 3.3 Training In this section, we describe how to learn the adjustable parameters θ in the scoring function. The parameters are learned in an on-line fashion by successively imposing soft constraints between pairs of dependency structures. We introduce both margin constraints and constraints pertaining to successive samples generated along the search path. We demonstrate later that both types of constraints are essential. We begin with the standard margin constraints. An ideal scoring function would always rank the gold parse higher than any alternative. Moreover, alternatives that are far from the gold parse should score even lower. As a result, we require that s(x(i), y(i)) −s(x(i), y) ≥∆(y(i), y) ∀y (4) where ∆(y(i), y) is the number of head mistakes in y relative to the gold parse y(i). We adopt here a shorthand Err(y) = ∆(y(i), y), where the de200 pendence on y(i) is implied from context. Note that Equation 4 contains exponentially many constraints and cannot be enforced jointly for general scoring functions. However, our sampling procedure generates a small number of structures along the search path. We enforce only constraints corresponding to those samples. The second type of constraints are enforced between successive samples along the search path. To illustrate the idea, consider a parse y that differs from y(i) in only one arc, and a parse y′ that differs from y(i) in ten arcs. We cannot necessarily assume that s(x, y) is greater than s(x, y′) without additional encouragement. Thus, we can complement the constraints in Equation 4 with additional pairwise constraints (Wick et al., 2011): s(x(i), y) −s(x(i), y′) ≥Err(y′) −Err(y) (5) where similarly to Equation 4, the difference in scores scales with the differences in errors with respect to the target y(i). We only enforce the above constraints for y, y′ that are consecutive samples in the course of the sampling process. These constraints serve to guide the sampling process derived from the scoring function towards the gold parse. We learn the parameters θ in an on-line fashion to satisfy the above constraints. 
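As a rough sketch of how the two constraint types are instantiated for a pair of consecutive samples, the snippet below computes the error function and the margins that the scoring function is asked to respect; score_fn is a placeholder for s(x, .) and the dictionary keys are illustrative. The parameters are then fitted online so that these margins hold, as described next.

```python
def head_errors(pred_heads, gold_heads):
    """Err(y): number of words whose predicted head differs from the gold head."""
    return sum(1 for p, g in zip(pred_heads, gold_heads) if p != g)

def required_margins(score_fn, gold, sample_a, sample_b):
    """Margins the scoring function should satisfy, as (actual, required) pairs:
    Eq. 4 between the gold tree and the worse sample, and
    Eq. 5 between two consecutive samples along the search path."""
    better, worse = sorted([sample_a, sample_b],
                           key=lambda y: head_errors(y, gold))
    return {
        "gold_vs_sample": (score_fn(gold) - score_fn(worse),
                           head_errors(worse, gold)),                    # Eq. 4
        "sample_vs_sample": (score_fn(better) - score_fn(worse),
                             head_errors(worse, gold)
                             - head_errors(better, gold)),               # Eq. 5
    }
```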
This is done via the MIRA algorithm (Crammer and Singer, 2003). Specifically, if the current parameters are θt, and we enforce constraint Equation 5 for a particular pair y, y′, then we will find θt+1 that minimizes min ||θ −θt||2 + Cξ s.t. θ · (f(x, y) −f(x, y′)) ≥Err(y′) −Err(y) −ξ (6) The updates can be calculated in closed form. Figure 4 summarizes the learning algorithm. We repeatedly generate parses based on the current parameters θt for each sentence x(i), and use successive samples to enforce constraints in Equation 4 and Equation 5 one at a time. 3.4 Joint Parsing and POS Correction It is easy to extend our sampling-based parsing framework to joint prediction of parsing and other labels. Specifically, when sampling the new heads, we can also sample the values of other variables at the same time. For instance, we can sample the POS tag, the dependency relation or morphology information. In this work, we investigate a joint Inputs: D = {(x(i), y(i))}N i=1. Outputs: Learned parameters θ. θ0 ←0 for e = 1 to #epochs do for i = 1 to N do y′ ←q(·|x(i), y ti i , θt) y+ = arg min y∈  yti i ,y′  Err(y) y−= arg max y∈  yti i ,y′  Err(y) y ti+1 i ←acceptOrReject(y′, y ti i , θt) ti ←ti + 1 ∇f = f(x(i), y+) −f(x(i), y−) ∆Err = Err(y+) −Err(y−) if ∆Err ̸= 0 and θt · ∇f < ∆Err then θt+1 ←updateMIRA(∇f, ∆Err, θt) t ←t + 1 ∇fg = f(x(i), y(i)) −f(x(i), y ti i ) if θt · ∇fg < Err(y ti i ) then θt+1 ←updateMIRA(∇fg, Err(y ti i ), θt) t ←t + 1 return Average of θ0, . . . , θt parameters. Figure 4: SampleRank algorithm for learning. The rejection strategy is as in Figure 1. yti i is the tith tree sample of x(i). The first MIRA update (see Equation 6) enforces a ranking constraint between two sampled parses. The second MIRA update enforces constraints between a sampled parse and the gold parse. In practice several samples are drawn for each sentence in each epoch. POS correction scenario in which only the predicted POS tags are provided in the testing phase, while both gold and predicted tags are available for the training set. We extend our model such that it jointly learns how to predict a parse tree and also correct the predicted POS tags for a better parsing performance. We generate the POS candidate list for each word based on the confusion matrix on the training set. Let c(tg, tp) be the count when the gold tag is tg and the predicted one is tp. For each word w, we first prune out its POS candidates by using the vocabulary from the training set. We don’t prune anything if w is unseen. Assuming that the predicted tag for w is tp, we further remove those tags t if their counts are smaller than some threshold c(t, tp) < α · c(tp, tp)2. After generating the candidate lists for each word, the rest of the extension is rather straightforward. For each sampling, let H be the set of candidate heads and T be the set of candidate POS tags. The Gibbs sampler will generate a new sample from the space H × T . The other parts of the algorithm remain the same. 2In our work we choose α = 0.003, which gives a 98.9% oracle POS tagging accuracy on the CATiB development set. 201 arc! head bigram! !h h m m +1 arbitrary sibling! …! h m s h m consecutive sibling! h m s grandparent! g h m grand-sibling! g h m s tri-siblings! h m s t grand-grandparent! g h m gg outer-sibling-grandchild! h m s gc h s gc m inner-sibling-grandchild! Figure 5: First- to third-order features. 
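A single constraint of Equation 6 admits a closed-form, passive-aggressive style solution. The sketch below is written for the conventional 0.5*||theta - theta_t||^2 normalization, so the cap on the step size may be scaled slightly differently from the paper's exact updateMIRA routine; the argument names are illustrative.

```python
import numpy as np

def mira_update(theta, delta_feat, required_margin, C=1.0):
    """One MIRA-style step for a single constraint of Eq. 6.
    delta_feat = f(x, y+) - f(x, y-); required_margin = Err(y-) - Err(y+)."""
    violation = required_margin - float(np.dot(theta, delta_feat))
    if violation <= 0:
        return theta                        # constraint already satisfied
    sq_norm = float(np.dot(delta_feat, delta_feat))
    if sq_norm == 0.0:
        return theta
    tau = min(C, violation / sq_norm)       # capped, closed-form step size
    return theta + tau * delta_feat
```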
4 Features First- to Third-Order Features The feature templates of first- to third-order features are mainly drawn from previous work on graphbased parsing (McDonald and Pereira, 2006), transition-based parsing (Nivre et al., 2006) and dual decomposition-based parsing (Martins et al., 2011). As shown in Figure 5, the arc is the basic structure for first-order features. We also define features based on consecutive sibling, grandparent, arbitrary sibling, head bigram, grand-sibling and tri-siblings, which are also used in the Turbo parser (Martins et al., 2013). In addition to these first- to third-order structures, we also consider grand-grandparent and sibling-grandchild structures. There are two types of sibling-grandchild structures: (1) inner-sibling when the sibling is between the head and the modifier and (2) outersibling for the other cases. Global Features We used feature shown promising in prior reranking work Charniak and Johnson (2005), Collins (2000) and Huang (2008). • Right Branch This feature enables the model to prefer right or left-branching trees. It counts the number of words on the path from the root node to the right-most non-punctuation word, normalized by the length of the sentence. • Coordination In a coordinate structure, the two adjacent conjuncts usually agree with each other on POS tags and their span lengths. For instance, in cats and dogs, the conjuncts are both short noun phrases. Therefore, we add different features to capture POS tag and span length consistency in a coordinate structure. • PP Attachment We add features of lexical tueat! with! knife! and! fork! Figure 6: An example of PP attachment with coordination. The arguments should be knife and fork, not and. ples involving the head, the argument and the preposition of prepositional phrases. Generally, this feature can be defined based on an instance of grandparent structure. However, we also handle the case of coordination. In this case, the arguments should be the conjuncts rather than the coordinator. Figure 6 shows an example. • Span Length This feature captures the distribution of the binned span length of each POS tag. It also includes flags of whether the span reaches the end of the sentence and whether the span is followed by the punctuation. • Neighbors The POS tags of the neighboring words to the left and right of each span, together with the binned span length and the POS tag at the span root. • Valency We consider valency features for each POS tag. Specifically, we add two types of valency information: (1) the binned number of non-punctuation modifiers and (2) the concatenated POS string of all those modifiers. • Non-projective Arcs A flag indicating if a dependency is projective or not (i.e. if it spans a word that does not descend from its head) (Martins et al., 2011). This flag is also combined with the POS tags or the lexical words of the head and the modifier. POS Tag Features In the joint POS correction scenario, we also add additional features specifically for POS prediction. The feature templates are inspired by previous feature-rich POS tagging work (Toutanova et al., 2003). However, we are free to add higher order features because we do not rely on dynamic programming decoding. In our work we use feature templates up to 5-gram. Table 1 summarizes all POS tag feature templates. 
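A minimal sketch of how these tag-context templates can be instantiated for a position i is given below, using simple boundary padding; the feature keys are illustrative and only a representative subset of the 3- to 5-gram templates is spelled out.

```python
def pos_tag_features(words, tags, i):
    """A sketch of the POS-tag feature templates of Table 1 for position i;
    the remaining n-gram templates follow the same pattern."""
    w = lambda j: words[j] if 0 <= j < len(words) else "<PAD>"
    t = lambda j: tags[j] if 0 <= j < len(tags) else "<PAD>"
    return {
        "t_i": t(i),
        "t_i,w_i-1": (t(i), w(i - 1)),
        "t_i,w_i": (t(i), w(i)),
        "t_i,w_i+1": (t(i), w(i + 1)),
        "t_i-1,t_i": (t(i - 1), t(i)),
        "t_i-1,t_i,t_i+1": (t(i - 1), t(i), t(i + 1)),
        "t_i-2,t_i-1,t_i,t_i+1": (t(i - 2), t(i - 1), t(i), t(i + 1)),
        "t_i-2,t_i-1,t_i,t_i+1,t_i+2": (t(i - 2), t(i - 1), t(i),
                                        t(i + 1), t(i + 2)),
    }
```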
5 Experimental Setup Datasets We evaluate our model on standard benchmark corpora — CoNLL 2006 and CoNLL 2008 (Buchholz and Marsi, 2006; Surdeanu et al., 2008) — which include dependency treebanks for 14 different languages. Most of these data sets 202 1-gram ⟨ti⟩, ⟨ti, wi−2⟩, ⟨ti, wi−1⟩, ⟨ti, wi⟩, ⟨ti, wi+1⟩, ⟨ti, wi+2⟩ 2-gram ⟨ti−1, ti⟩, ⟨ti−2, ti⟩, ⟨ti−1, ti, wi−1⟩, ⟨ti−1, ti, wi⟩ 3-gram ⟨ti−1, ti, ti+1⟩, ⟨ti−2, ti, ti+1, ⟩, ⟨ti−1, ti, ti+2⟩, ⟨ti−2, ti, ti+2⟩ 4-gram ⟨ti−2, ti−1, ti, ti+1⟩, ⟨ti−2, ti−1, ti, ti+2⟩, ⟨ti−2, ti, ti+1, ti+2⟩ 5-gram ⟨ti−2, ti−1, ti, ti+1, ti+2⟩ Table 1: POS tag feature templates. ti and wi denotes the POS tag and the word at the current position. ti−x and ti+x denote the left and right context tags, and similarly for words. contain non-projective dependency trees. We use all sentences in CoNLL datasets during training and testing. We also use the Columbia Arabic Treebank (CATiB) (Marton et al., 2013). CATiB mostly includes projective trees. The trees are annotated with both gold and predicted versions of POS tags and morphology information. Following Marton et al. (2013), for this dataset we use 12 core POS tags, word lemmas, determiner features, rationality features and functional genders and numbers. Some CATiB sentences exceed 200 tokens. For efficiency, we limit the sentence length to 70 tokens in training and development sets. However, we do not impose this constraint during testing. We handle long sentences during testing by applying a simple split-merge strategy. We split the sentence based on the ending punctuation, predict the parse tree for each segment and group the roots of resulting trees into a single node. Evaluation Measures Following standard practice, we use Unlabeled Attachment Score (UAS) as the evaluation metric in all our experiments. We report UAS excluding punctuation on CoNLL datasets, following Martins et al. (2013). For the CATiB dataset, we report UAS including punctuation in order to be consistent with the published results in the 2013 SPMRL shared task (Seddah et al., 2013). Baselines We compare our model with the Turbo parser and the MST parser. For the Turbo parser, we directly compare with the recent published results in (Martins et al., 2013). For the MST parser, we train a second-order non-projective model using the most recent version of the code3. We also compare our model against a discriminative reranker. The reranker operates over the 3http://sourceforge.net/projects/mstparser/ top-50 list obtained from the MST parser4. We use a 10-fold cross-validation to generate candidate lists for training. We then train the reranker by running 10 epochs of cost-augmented MIRA. The reranker uses the same features as our model, along with the tree scores obtained from the MST parser (which is a standard practice in reranking). Experimental Details Following Koo and Collins (2010), we always first train a first-order pruner. For each word xi, we prune away the incoming dependencies ⟨hi, xi⟩with probability less than 0.005 times the probability of the most likely head, and limit the number of candidate heads up to 30. This gives a 99% pruning recall on the CATiB development set. The first-order model is also trained using the algorithm in Figure 4. After pruning, we tune the regularization parameter C = {0.1, 0.01, 0.001} on development sets for different languages. Because the CoNLL datasets do not have a standard development set, we randomly select a held out of 200 sentences from the training set. 
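The first-order pruning step described above can be sketched as follows, assuming the pruner exposes a per-word distribution over candidate heads; the threshold and cap mirror the values reported here (0.005 and 30), and the function name is illustrative.

```python
import numpy as np

def prune_heads(head_probs, ratio=0.005, max_heads=30):
    """Keep candidate heads whose probability is at least `ratio` times that of
    the most likely head, capped at `max_heads` candidates.
    head_probs: 1-D array with head_probs[h] = P(head = h | word, sentence)."""
    head_probs = np.asarray(head_probs, dtype=float)
    threshold = ratio * head_probs.max()
    kept = np.nonzero(head_probs >= threshold)[0]
    # if more than max_heads survive, keep the highest-probability ones
    kept = kept[np.argsort(-head_probs[kept])][:max_heads]
    return kept.tolist()
```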
We also pick the training epochs from {50, 100, 150} which gives the best performance on the development set for each language. After tuning, the model is trained on the full training set with the selected parameters. We apply the Random Walk-based sampling method (see Section 3.2.2) for the standard dependency parsing task. However, for the joint parsing and POS correction on the CATiB dataset we do not use the Random Walk method because the first-order features in normal parsing are no longer first-order when POS tags are also variables. Therefore, the first-order distribution is not well-defined and we only employ Gibbs sampling for simplicity. On the CATiB dataset, we restrict the sample trees to always be projective as described in Section 3.2.1. However, we do not impose this constraint for the CoNLL datasets. 6 Results Comparison with State-of-the-art Parsers Table 2 summarizes the performance of our model and of the baselines. We first compare our model to the Turbo parser using the Turbo parser feature set. This is meant to test how our learning and inference methods compare to a dual decomposition approach. The first column in Table 2 4The MST parser is trained in projective mode for reranking because generating top-k list from second-order nonprojective model is intractable. 203 Our Model (UAS) Turbo (UAS) MST 2nd-Ord. (UAS) Best Published UAS Top-50 Reranker Top-500 Reranker Turbo Feat. Full Feat. Arabic 79.86 80.21 79.64 78.75 81.12 (Ma11) 79.03 78.91 Bulgarian 92.97 93.30 93.10 91.56 94.02 (Zh13) 92.81 Chinese 92.06 92.63 89.98 91.77 91.89 (Ma10) 92.25 Czech 90.62 91.04 90.32 87.30 90.32 (Ma13) 88.14 Danish 91.45 91.80 91.48 90.50 92.00 (Zh13) 90.88 90.91 Dutch 85.83 86.47 86.19 84.11 86.19 (Ma13) 81.01 English 92.79 92.94 93.22 91.54 93.22 (Ma13) 92.41 German 91.79 92.07 92.41 90.14 92.41 (Ma13) 91.19 Japanese 93.23 93.42 93.52 92.92 93.72 (Ma11) 93.40 Portuguese 91.82 92.41 92.69 91.08 93.03 (Ko10) 91.47 Slovene 86.19 86.82 86.01 83.25 86.95 (Ma11) 84.81 85.37 Spanish 88.24 88.21 85.59 84.33 87.96 (Zh13) 86.85 87.21 Swedish 90.48 90.71 91.14 89.05 91.62 (Zh13) 90.53 Turkish 76.82 77.21 76.90 74.39 77.55 (Ko10) 76.35 76.23 Average 88.87 89.23 88.72 86.86 89.33 87.92 Table 2: Results of our model, the Turbo parser, and the MST parser. “Best Published UAS” includes the most accurate parsers among Nivre et al. (2006), McDonald et al. (2006), Martins et al. (2010), Martins et al. (2011), Martins et al. (2013), Koo et al. (2010), Rush and Petrov (2012), Zhang and McDonald (2012) and Zhang et al. (2013). Martins et al. (2013) is the current Turbo parser. The last two columns shows UAS of the discriminative reranker. shows the result for our model with an average of 88.87%, and the third column shows the results for the Turbo parser with an average of 88.72%. This suggests that our learning and inference procedures are as effective as the dual decomposition method in the Turbo parser. Next, we add global features that are not used by the Turbo parser. The performance of our model is shown in the second column with an average of 89.23%. It outperforms the Turbo parser by 0.5% and achieves the best reported performance on four languages. Moreover, our model also outperforms the 88.80% average UAS reported in Martins et al. (2011), which is the top performing single parsing system (to the best of our knowledge). Comparison with Reranking As column 6 of Table 2 shows, our model outperforms the reranker by 1.3%5. 
One possible explanation of this performance gap between the reranker and our model is the small number of candidates considered by the reranker. To test this hypothesis, we performed experiments with top-500 list for a subset of languages.6 As column 7 shows, this increase in the list size does not change the relative performance of the reranker and our model. Joint Parsing and POS Correction Table 3 shows the results of joint parsing and POS correction on the CATiB dataset, for our model and 5Note that the comparison is conservative because we can also add MST scores as features in our model as in reranker. With these features our model achieves an average UAS 89.28%. 6We ran this experiment on 5 languages with small datasets due to the scalability issues associated with reranking top-500 list. state-of-the-art systems. As the upper part of the table shows, the parser with corrected tags reaches 88.38% compared to the accuracy of 88.46% on the gold tags. This is a substantial increase from the parser that uses predicted tags (86.95%). To put these numbers into perspective, the bottom part of Table 3 shows the accuracy of the best systems from the 2013 SPMRL shared task on Arabic parsing using predicted information (Seddah et al., 2013). Our system not only outperforms the best single system (Bj¨orkelund et al., 2013) by 1.4%, but it also tops the ensemble system that combines three powerful parsers: the Mate parser (Bohnet, 2010), the Easy-First parser (Goldberg and Elhadad, 2010) and the Turbo parser (Martins et al., 2013) Impact of Sampling Methods We compare two sampling methods introduced in Section 3.2 with respect to their decoding efficiency. Specifically, we measure the score of the retrieved trees in testing as a function of the decoding speed, measured by the number of tokens per second. We change the temperature update rate c in order to decode with different speed. In Figure 7 we show the corresponding curves for two languages: Arabic and Chinese. We select these two languages as they correspond to two extremes in sentence length: Arabic has the longest sentences on average, while Chinese has the shortest ones. For both languages, the tree score improves over time. Given sufficient time, both sampling methods achieve the same score. However, the Random Walk-based sampler performs better when the quality is traded for speed. This result is to be expected given that each 204 Dev. Set (≤70) Testing Set POS Acc. UAS POS Acc. UAS Gold 90.27 88.46 Predicted 96.87 88.81 96.82 86.95 POS Correction 97.72 90.08 97.49 88.38 CADIM 96.87 87.496.82 85.78 IMS-Single 86.96 IMS-Ensemble 88.32 Table 3: Results for parsing and corrective tagging on the CATiB dataset. The upper part shows UAS of our model with gold/predicted information or POS correction. Bottom part shows UAS of the best systems in the SPMRL shared task. IMSSingle (Bj¨orkelund et al., 2013) is the best single parsing system, while IMS-Ensemble (Bj¨orkelund et al., 2013) is the best ensemble parsing system. We also show results for CADIM (Marton et al., 2013), the second best system, because we use their predicted features. 0 20 40 60 80 100 2.648 2.65 2.652 2.654 2.656 2.658x 10 4 Toks/sec Score Gibbs Random Walk (a) Arabic 0 100 200 300 400 500 600 700 800 1.897 1.898 1.899 1.9 x 10 4 Toks/sec Score Gibbs Random Walk (b) Chinese Figure 7: Total score of the predicted test trees as a function of the decoding speed, measured in the number of tokens per second. 
iteration of this sampler makes multiple changes to the tree, in contrast to a single-edge change of Gibbs sampler. The Effect of Constraints in Learning Our training method updates parameters to satisfy the pairwise constraints between (1) subsequent samples on the sampling path and (2) selected samples and the ground truth. Figure 8 shows that applying both types of constraints is consistently better than using either of them alone. Moreover, these results demonstrate that comparison between subsequent samples is more important than comparison against the gold tree. Decoding Speed Our sampling-based parser is an Danish Japanese Portuguese Swedish 89 90 91 92 93 94 UAS(%) Both Neighbor Gold Figure 8: UAS on four languages when training with different constraints. “Neighbor” corresponds to pairwise constraints between subsequent samples, “Gold” represents constraints between a single sample and the ground truth, “Both” means applying both types of constraints. anytime algorithm, and therefore its running time can be traded for performance. Figure 7 illustrates this trade-off. In the experiments reported above, we chose a conservative cooling rate and continued to sample until the score no longer changed. The parser still managed to process all the datasets in a reasonable time. For example, the time that it took to decode all the test sentences in Chinese and Arabic were 3min and 15min, respectively. Our current implementation is in Java and can be further optimized for speed. 7 Conclusions This paper demonstrates the power of combining a simple inference procedure with a highly expressive scoring function. Our model achieves the best results on the standard dependency parsing benchmark, outperforming parsing methods with elaborate inference procedures. In addition, this framework provides simple and effective means for joint parsing and corrective tagging. Acknowledgments This research is developed in collaboration with the Arabic Language Technologies (ALT) group at Qatar Computing Research Institute (QCRI) within the IYAS project. The authors acknowledge the support of the MURI program (W911NF-101-0533, the DARPA BOLT program and the USIsrael Binational Science Foundation (BSF, Grant No 2012330). We thank the MIT NLP group and the ACL reviewers for their comments. 205 References Christophe Andrieu, Nando De Freitas, Arnaud Doucet, and Michael I Jordan. 2003. An introduction to mcmc for machine learning. Machine learning, 50(1-2):5–43. Anders Bj¨orkelund, Ozlem Cetinoglu, Rich´ard Farkas, Thomas Mueller, and Wolfgang Seeker. 2013. (re)ranking meets morphosyntax: State-of-the-art results from the SPMRL 2013 shared task. In Proceedings of the Fourth Workshop on Statistical Parsing of Morphologically-Rich Languages, pages 135– 145, Seattle, Washington, USA, October. Association for Computational Linguistics. Bernd Bohnet. 2010. Top accuracy and fast dependency parsing is not a contradiction. In COLING, pages 89–97. Sabine Buchholz and Erwin Marsi. 2006. Conll-x shared task on multilingual dependency parsing. In Proceedings of the Tenth Conference on Computational Natural Language Learning, pages 149–164. Association for Computational Linguistics. Eugene Charniak and Mark Johnson. 2005. Coarseto-fine n-best parsing and maxent discriminative reranking. In Proceedings of the 43rd Annual Meeting on Association for Computational Linguistics, pages 173–180. Association for Computational Linguistics. Michael Collins. 2000. Discriminative reranking for natural language parsing. 
In Proceedings of the Seventeenth International Conference on Machine Learning, ICML ’00, pages 175–182. Koby Crammer and Yoram Singer. 2003. Ultraconservative online algorithms for multiclass problems. The Journal of Machine Learning Research, 3:951– 991. Hal Daum´e III, John Langford, and Daniel Marcu. 2009. Search-based structured prediction. Machine learning, 75(3):297–325. Stuart Geman and Donald Geman. 1984. Stochastic relaxation, gibbs distributions, and the bayesian restoration of images. Pattern Analysis and Machine Intelligence, IEEE Transactions on, (6):721–741. Yoav Goldberg and Michael Elhadad. 2010. An efficient algorithm for easy-first non-directional dependency parsing. In Human Language Technologies: The 2010 Annual Conference of the North American Chapter of the Association for Computational Linguistics, pages 742–750. Association for Computational Linguistics. Barry Haddow, Abhishek Arun, and Philipp Koehn. 2011. Samplerank training for phrase-based machine translation. In Proceedings of the Sixth Workshop on Statistical Machine Translation, pages 261– 271. Association for Computational Linguistics. Liang Huang. 2008. Forest reranking: Discriminative parsing with non-local features. In ACL, pages 586– 594. Terry Koo and Michael Collins. 2010. Efficient thirdorder dependency parsers. In Proceedings of the 48th Annual Meeting of the Association for Computational Linguistics, pages 1–11. Association for Computational Linguistics. Terry Koo, Alexander M Rush, Michael Collins, Tommi Jaakkola, and David Sontag. 2010. Dual decomposition for parsing with non-projective head automata. In Proceedings of the 2010 Conference on Empirical Methods in Natural Language Processing, pages 1288–1298. Association for Computational Linguistics. Quannan Li, Jingdong Wang, Zhuowen Tu, and David P Wipf. 2013. Fixed-point model for structured labeling. In Proceedings of the 30th International Conference on Machine Learning (ICML-13), pages 214–221. Andr´e FT Martins, Noah A Smith, and Eric P Xing. 2009. Concise integer linear programming formulations for dependency parsing. In Proceedings of the Joint Conference of the 47th Annual Meeting of the ACL and the 4th International Joint Conference on Natural Language Processing of the AFNLP: Volume 1-Volume 1, pages 342–350. Association for Computational Linguistics. Andr´e FT Martins, Noah A Smith, Eric P Xing, Pedro MQ Aguiar, and M´ario AT Figueiredo. 2010. Turbo parsers: Dependency parsing by approximate variational inference. In Proceedings of the 2010 Conference on Empirical Methods in Natural Language Processing, pages 34–44. Association for Computational Linguistics. Andr´e FT Martins, Noah A Smith, Pedro MQ Aguiar, and M´ario AT Figueiredo. 2011. Dual decomposition with many overlapping components. In Proceedings of the Conference on Empirical Methods in Natural Language Processing, pages 238–249. Association for Computational Linguistics. Andr´e FT Martins, Miguel B Almeida, and Noah A Smith. 2013. Turning on the turbo: Fast third-order non-projective turbo parsers. In Proceedings of the 51th Annual Meeting of the Association for Computational Linguistics. Association for Computational Linguistics. Yuval Marton, Nizar Habash, Owen Rambow, and Sarah Alkhulani. 2013. Spmrl13 shared task system: The cadim arabic dependency parser. In Proceedings of the Fourth Workshop on Statistical Parsing of Morphologically-Rich Languages, pages 76– 80. Ryan T McDonald and Fernando CN Pereira. 2006. 
Online learning of approximate dependency parsing algorithms. In EACL. 206 R. McDonald, F. Pereira, K. Ribarov, and J. Hajic. 2005. Non-projective dependency parsing using spanning tree algorithms. In Proceedings of the conference on Human Language Technology and Empirical Methods in Natural Language Processing, pages 523–530. Ryan McDonald, Kevin Lerman, and Fernando Pereira. 2006. Multilingual dependency analysis with a twostage discriminative parser. In Proceedings of the Tenth Conference on Computational Natural Language Learning, pages 216–220. Association for Computational Linguistics. Tetsuji Nakagawa. 2007. Multilingual dependency parsing using global features. In EMNLP-CoNLL, pages 952–956. Joakim Nivre, Johan Hall, Jens Nilsson, G¨uls¸en Eryiit, and Svetoslav Marinov. 2006. Labeled pseudoprojective dependency parsing with support vector machines. In Proceedings of the Tenth Conference on Computational Natural Language Learning, pages 221–225. Association for Computational Linguistics. Vasin Punyakanok, Dan Roth, Wen-tau Yih, and Dav Zimak. 2004. Semantic role labeling via integer linear programming inference. In Proceedings of the 20th international conference on Computational Linguistics, page 1346. Association for Computational Linguistics. Alexander M Rush and Slav Petrov. 2012. Vine pruning for efficient multi-pass dependency parsing. In Proceedings of the 2012 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, pages 498–507. Association for Computational Linguistics. Djam´e Seddah, Reut Tsarfaty, Sandra K¨ubler, Marie Candito, Jinho D Choi, Rich´ard Farkas, Jennifer Foster, Iakes Goenaga, Koldo Gojenola Galletebeitia, Yoav Goldberg, et al. 2013. Overview of the spmrl 2013 shared task: A cross-framework evaluation of parsing morphologically rich languages. In Proceedings of the Fourth Workshop on Statistical Parsing of Morphologically-Rich Languages, pages 146–182. D. Sontag, A. Globerson, and T. Jaakkola. 2011. Introduction to dual decomposition for inference. In Optimization for Machine Learning, pages 219–254. MIT Press. Mihai Surdeanu, Richard Johansson, Adam Meyers, Llu´ıs M`arquez, and Joakim Nivre. 2008. The conll-2008 shared task on joint parsing of syntactic and semantic dependencies. In Proceedings of the Twelfth Conference on Computational Natural Language Learning, pages 159–177. Association for Computational Linguistics. Kristina Toutanova, Dan Klein, Christopher D Manning, and Yoram Singer. 2003. Feature-rich part-ofspeech tagging with a cyclic dependency network. In Proceedings of the 2003 Conference of the North American Chapter of the Association for Computational Linguistics on Human Language TechnologyVolume 1, pages 173–180. Association for Computational Linguistics. Michael L. Wick, Khashayar Rohanimanesh, Kedar Bellare, Aron Culotta, and Andrew McCallum. 2011. Samplerank: Training factor graphs with atomic gradients. In Lise Getoor and Tobias Scheffer, editors, Proceedings of the 28th International Conference on Machine Learning, ICML 2011, pages 777–784. David Bruce Wilson. 1996. Generating random spanning trees more quickly than the cover time. In Proceedings of the twenty-eighth annual ACM symposium on Theory of computing, pages 296–303. ACM. Hao Zhang and Ryan McDonald. 2012. Generalized higher-order dependency parsing with cube pruning. 
In Proceedings of the 2012 Joint Conference on Empirical Methods in Natural Language Processing and Computational Natural Language Learning, pages 320–331. Association for Computational Linguistics. Yue Zhang and Joakim Nivre. 2011. Transition-based dependency parsing with rich non-local features. In Proceedings of the 49th Annual Meeting of the Association for Computational Linguistics: Human Language Technologies: short papers-Volume 2, pages 188–193. Association for Computational Linguistics. Hao Zhang, Liang Huang Kai Zhao, and Ryan McDonald. 2013. Online learning for inexact hypergraph search. In Proceedings of EMNLP. 207
Proceedings of the 52nd Annual Meeting of the Association for Computational Linguistics, pages 13–24, Baltimore, Maryland, USA, June 23-25 2014. c⃝2014 Association for Computational Linguistics Representation Learning for Text-level Discourse Parsing Yangfeng Ji School of Interactive Computing Georgia Institute of Technology [email protected] Jacob Eisenstein School of Interactive Computing Georgia Institute of Technology [email protected] Abstract Text-level discourse parsing is notoriously difficult, as distinctions between discourse relations require subtle semantic judgments that are not easily captured using standard features. In this paper, we present a representation learning approach, in which we transform surface features into a latent space that facilitates RST discourse parsing. By combining the machinery of large-margin transition-based structured prediction with representation learning, our method jointly learns to parse discourse while at the same time learning a discourse-driven projection of surface features. The resulting shift-reduce discourse parser obtains substantial improvements over the previous state-of-the-art in predicting relations and nuclearity on the RST Treebank. 1 Introduction Discourse structure describes the high-level organization of text or speech. It is central to a number of high-impact applications, such as text summarization (Louis et al., 2010), sentiment analysis (Voll and Taboada, 2007; Somasundaran et al., 2009), question answering (Ferrucci et al., 2010), and automatic evaluation of student writing (Miltsakaki and Kukich, 2004; Burstein et al., 2013). Hierarchical discourse representations such as Rhetorical Structure Theory (RST) are particularly useful because of the computational applicability of tree-shaped discourse structures (Taboada and Mann, 2006), as shown in Figure 1. Unfortunately, the performance of discourse parsing is still relatively weak: the state-of-the-art F-measure for text-level relation detection in the RST Treebank is only slightly above 55% (Joty when profit was $107.8 million on sales of $435.5 million. The projections are in the neighborhood of 50 cents a share to 75 cents, compared with a restated $1.65 a share a year earlier, CIRCUMSTANCE COMPARISON Figure 1: An example of RST discourse structure. et al., 2013). While recent work has introduced increasingly powerful features (Feng and Hirst, 2012) and inference techniques (Joty et al., 2013), discourse relations remain hard to detect, due in part to a long tail of “alternative lexicalizations” that can be used to realize each relation (Prasad et al., 2010). Surface and syntactic features are not capable of capturing what are fundamentally semantic distinctions, particularly in the face of relatively small annotated training sets. In this paper, we present a representation learning approach to discourse parsing. The core idea of our work is to learn a transformation from a bag-of-words surface representation into a latent space in which discourse relations are easily identifiable. The latent representation for each discourse unit can be viewed as a discriminativelytrained vector-space representation of its meaning. Alternatively, our approach can be seen as a nonlinear learning algorithm for incremental structure prediction, which overcomes feature sparsity through effective parameter tying. We consider several alternative methods for transforming the original features, corresponding to different ideas of the meaning and role of the latent representation. 
Our method is implemented as a shift-reduce discourse parser (Marcu, 1999; Sagae, 2009). Learning is performed as large-margin transitionbased structure prediction (Taskar et al., 2003), while at the same time jointly learning to project the surface representation into latent space. The 13 resulting system strongly outperforms the prior state-of-the-art at labeled F-measure, obtaining raw improvements of roughly 6% on relation labels and 2.5% on nuclearity. In addition, we show that the latent representation coheres well with the characterization of discourse connectives in the Penn Discourse Treebank (Prasad et al., 2008). 2 Model The core idea of this paper is to project lexical features into a latent space that facilitates discourse parsing. In this way, we can capture the meaning of each discourse unit, without suffering from the very high dimensionality of a lexical representation. While such feature learning approaches have proven to increase robustness for parsing, POS tagging, and NER (Miller et al., 2004; Koo et al., 2008; Turian et al., 2010), they would seem to have an especially promising role for discourse, where training data is relatively sparse and ambiguity is considerable. Prasad et al. (2010) show that there is a long tail of alternative lexicalizations for discourse relations in the Penn Discourse Treebank, posing obvious challenges for approaches based on directly matching lexical features observed in the training data. Based on this observation, our goal is to learn a function that transforms lexical features into a much lower-dimensional latent representation, while simultaneously learning to predict discourse structure based on this latent representation. In this paper, we consider a simple transformation function, linear projection. Thus, we name the approach DPLP: Discourse Parsing from Linear Projection. We apply transition-based (incremental) structured prediction to obtain a discourse parse, training a predictor to make the correct incremental moves to match the annotations of training data in the RST Treebank. This supervision signal is then used to learn both the weights and the projection matrix in a large-margin framework. 2.1 Shift-reduce discourse parsing We construct RST Trees using shift-reduce parsing, as first proposed by Marcu (1999). At each point in the parsing process, we maintain a stack and a queue; initially the stack is empty and the first elementary discourse unit (EDU) in the document is at the front of the queue.1 The parser can 1We do not address segmentation of text into elementary discourse units in this paper. Standard classificationNotation Explanation V Vocabulary for surface features V Size of V K Dimension of latent space wm Classification weights for class m C Total number of classes, which correspond to possible shift-reduce operations A Parameter of the representation function (also the projection matrix in the linear representation function) vi Word count vector of discourse unit i v Vertical concatenation of word count vectors for the three discourse units currently being considered by the parser λ Regularization for classification weights τ Regularization for projection matrix ξi Slack variable for sample i ηi,m Dual variable for sample i and class m αt Learning rate at iteration t Table 1: Summary of mathematical notation then choose either to shift the front of the queue onto the top of the stack, or to reduce the top two elements on the stack in a discourse relation. 
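A minimal sketch of the parser state and its two transition types is shown below; the Span container and the relation/nucleus arguments are illustrative rather than DPLP's actual data structures.

```python
from collections import namedtuple

# Each stack element is a discourse span with a representative nucleus EDU
# retained for later feature extraction.
Span = namedtuple("Span", ["edus", "nucleus_edu"])

def shift(stack, queue):
    """SHIFT: move the front EDU of the queue onto the stack as a singleton span."""
    edu = queue.pop(0)
    stack.append(Span(edus=[edu], nucleus_edu=edu))

def reduce_(stack, relation, nucleus="left"):
    """REDUCE-<relation>-<nucleus>: merge the top two spans on the stack,
    keeping the nucleus EDU of the nuclear child as the representative."""
    right = stack.pop()
    left = stack.pop()
    head = left.nucleus_edu if nucleus == "left" else right.nucleus_edu
    stack.append(Span(edus=left.edus + right.edus, nucleus_edu=head))
    return (relation, nucleus)      # record the operation in the derivation
```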
The reduction operation must choose both the type of relation and which element will be the nucleus. So, overall there are multiple reduce operations with specific relation types and nucleus positions. Shift-reduce parsing can be learned as a classification task, where the classifier uses features of the elements in the stack and queue to decide what move to take. Previous work has employed decision trees (Marcu, 1999) and the averaged perceptron (Collins and Roark, 2004; Sagae, 2009) for this purpose. Instead, we employ a large-margin classifier, because we can compute derivatives of the margin-based objective function with respect to both the classifier weights as well as the projection matrix. 2.2 Discourse parsing with projected features More formally, we denote the surface feature vocabulary V, and represent each EDU as the numeric vector v ∈NV , where V = #|V| and the nth element of v is the count of the n-th surface feature in this EDU (see Table 1 for a summary of notation). During shift-reduce parsing, we consider features of three EDUs:2 the top two elements on based approaches can achieve a segmentation F-measure of 94% (Hernault et al., 2010); a more complex reranking model does slightly better, at 95% F-Measure with automatically-generated parse trees, and 96.6% with gold annotated trees (Xuan Bach et al., 2012). Human agreement reaches 98% F-Measure. 2After applying a reduce operation, the stack will include a span that contains multiple EDUs. We follow the strong 14 the stack (v1 and v2), and the front of the queue (v3). The vertical concatenation of these vectors is denoted v = [v1; v2; v3]. In general, we can formulate the decision function for the multi-class shift-reduce classifier as ˆm = arg max m∈{1,...,C} w⊤ mf(v; A) (1) where wm is the weight for the m-th class and f(v; A) is the representation function parametrized by A. The score for class m (in our case, the value of taking the m-th shiftreduce operation) is computed by the inner product w⊤ mf(v; A). The specific shift-reduce operation is chosen by maximizing the decision value in Equation 1. The representation function f(v; A) can be defined in any form; for example, it could be a nonlinear function defined by a neural network model parametrized by A. We focus on the linear projection, f(v; A) = Av, (2) where A ∈RK×3V is projects the surface representation v of three EDUs into a latent space of size K ≪V . Note that by setting ˜w⊤ m = w⊤ mA, the decision scoring function can be rewritten as ˜w⊤ mv, which is linear in the original surface features. Therefore, the expressiveness of DPLP is identical to a linear separator in the original feature space. However, the learning problem is considerably different. If there are C total classes (possible shift-reduce operations), then a linear classifier must learn 3V C parameters, while DPLP must learn (3V + C)K parameters, which will be smaller under the assumption that K < C ≪V . This can be seen as a form of parameter tying on the linear weights ˜wm, which allows statistical strength to be shared across training instances. We will consider special cases of A that reduce the parameter space still further. 2.3 Special forms of the projection matrix We consider three different constructions for the projection matrix A. • General form: In the general case, we place compositionality criterion of Marcu (1996) and consider only the nuclear EDU of the span. Later work may explore the composition of features between the nucleus and satellite. 
no special constraint on the form of A. f(v; A) = A   v1 v2 v3   (3) This form is shown in Figure 2(a). • Concatenation form: In the concatenation form, we choose a block structure for A, in which a single projection matrix B is applied to each EDU: f(v; A) = " B 0 0 0 B 0 0 0 B # " v1 v2 v3 # (4) In this form, we transform the representation of each EDU separately, but do not attempt to represent interrelationships between the EDUs in the latent space. The number of parameters in A is 1 3KV . Then, the total number of parameters, including the decision weights {wm}, in this form is (V 3 + C)K. • Difference form. In the difference form, we explicitly represent the differences between adjacent EDUs, by constructing A as a block difference matrix, f(v; A) = " C −C 0 C 0 −C 0 0 0 # " v1 v2 v3 # , (5) The result of this projection is that the latent representation has the form [C(v1 − v2); C(v1 −v3)], representing the difference between the top two EDUs on the stack, and between the top EDU on the stack and the first EDU in the queue. This is intended to capture semantic similarity, so that reductions between related EDUs will be preferred. Similarly, the total number of parameters to estimate in this form is (V + 2C)K 3 . 3 Large-Margin Learning Framework We apply a large margin structure prediction approach to train the model. There are two parameters that need to be learned: the classification weights {wm}, and the projection matrix A. As we will see, it is possible to learn {wm} using standard support vector machine (SVM) training (holding A fixed), and then make a simple gradient-based update to A (holding {wm} fixed). By interleaving these two operations, we arrive at a saddle point of the objective function. 15 A W y v1 from stack v 2 from stack v 3 from queue (a) General form A W y v1 from stack v 2 from stack v 3 from queue (b) Concatenation form A W y v1 from stack v 2 from stack v 3 from queue (c) Difference form Figure 2: Decision problem with different representation functions Specifically, we formulate the following constrained optimization problem, min {w1:C,ξ1:l,A} λ 2 C X m=1 ∥wm∥2 2 + l X i=1 ξi + τ 2 ∥A∥2 F s.t. (wyi−wm)⊤f(vi; A) ≥1 −δyi=m −ξi, ∀i, m (6) where m ∈ {1, . . . , C} is the index of the shift-reduce decision taken by the classifier (e.g., SHIFT, REDUCE-CONTRAST-RIGHT, etc), i ∈ {1, · · · , l} is the index of the training sample, and wm is the vector of classification weights for class m. The slack variables ξi permit the margin constraint to be violated in exchange for a penalty, and the delta function δyi=m is unity if yi = m, and zero otherwise. As is standard in the multi-class linear SVM (Crammer and Singer, 2001), we can solve the problem defined in Equation 6 via Lagrangian optimization: L({w1:C, ξ1:l, A, η1:l,1:C}) = λ 2 C X m=1 ∥wm∥2 2 + l X i=1 ξi + τ 2 ∥A∥2 F + X i,m ηi,m n (w⊤ m −w⊤ yi)f(vi; A) + 1 −δyi=m −ξi o s.t. ηi,m ≥0 ∀i, m (7) Then, to optimize L, we need to find a saddle point, which would be the minimum for the variables {w1:C, ξ1:l} and the projection matrix A, and the maximum for the dual variables {η1:l,1:C}. If A is fixed, then the optimization problem is equivalent to a standard multi-class SVM, in the transformed feature space f(vi; A). We can obtain the weights {w1:C} and dual variables {η1:l,1:C} from a standard dual-form SVM solver. We then update A, recompute {w1:C} and {η1:l,1:C}, and iterate until convergence. 
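The alternating scheme can be summarized in the following skeleton, where train_multiclass_svm is a placeholder for any dual-form solver that returns per-class weights and dual variables, and the update of A anticipates the closed-form solution derived in Section 3.1; dimensions follow the notation above (C classes, latent size K, concatenated input size 3V).

```python
import numpy as np

def train_joint(V_feats, labels, K, tau, n_iters, train_multiclass_svm):
    """Alternate between fitting a multi-class SVM on the projected features
    (A fixed) and recomputing A from the returned weights and duals.
    V_feats: n x 3V count vectors; labels: length-n action indices.
    train_multiclass_svm(Z, labels) -> (W, eta), with W of shape C x K and
    eta of shape n x C (each row of duals sums to one)."""
    n, threeV = V_feats.shape
    A = np.random.rand(K, threeV)              # random initialization in [0, 1]
    for _ in range(n_iters):                   # assumes n_iters >= 1
        Z = V_feats @ A.T                      # latent features f(v; A), n x K
        W, eta = train_multiclass_svm(Z, labels)
        A_new = np.zeros_like(A)
        for i, y in enumerate(labels):
            if eta[i, y] >= 1.0:               # non-support vectors contribute 0
                continue
            diff = W[y] - eta[i] @ W           # w_{y_i} - sum_m eta_{i,m} w_m
            A_new += np.outer(diff, V_feats[i])
        A = A_new / tau                        # closed-form solution for A
    return A, W
```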
This iterative procedure is similar to the latent variable structural SVM (Yu and Joachims, 2009), although the specific details of our learning algorithm are different. 3.1 Learning Projection Matrix A We update A while holding fixed the weights and dual variables. The derivative of L with respect to A is ∂L ∂A = τA + X i,m ηi,m(w⊤ m −w⊤ yi)∂f(vi; A) ∂A = τA + X i,m ηi,m(wm −wyi)vi ⊤ (8) Setting ∂L ∂A = 0, we have the closed-form solution, A = 1 τ X i,m ηi,m(wm −wyi)vi ⊤ = 1 τ X i,j (wyi − X m ηi,mwm)vi ⊤, (9) because the dual variables for each instance must sum to one, P m ηi,m = 1. Note that for a given i, the matrix (wyi − P m ηi,mwm)vi⊤is of (at most) rank-1. Therefore, the solution of A can be viewed as the linear combination of a sequence of rank-1 matrices, where each rank-1 matrix is defined by distributional representation vi and the weight difference between the weight of true label wyi and the “expected” weight P m ηi,mwm. One property of the dual variables is that f(vi; A) is a support vector only if the dual variable ηi,yi < 1. Since the dual variables for each instance are guaranteed to sum to one, we have wyi −P m ηi,mwm = 0 if ηi,yi = 1. In other words, the contribution from non support vectors to the projection matrix A is 0. Then, we can further simplify the updating equation as A = 1 τ X vi∈SV (wyi − X m ηi,mwm)vi⊤ (10) This is computationally advantageous since many instances are not support vectors, and it shows that the discriminatively-trained projection matrix only incorporates information from each instance to the extent that the correct classification receives low confidence. 16 Algorithm 1 Mini-batch learning algorithm Input: Training set D, Regularization parameters λ and τ, Number of iteration T, Initialization matrix A0, and Threshold ε while t = 1, . . . , T do Randomly choose a subset of training samples Dt from D Train SVM with At−1 to obtain {w(t) m } and {η(t) i,m} Update At using Equation 11 with αt = 1 t if ∥At−At−1∥F ∥A2−A1∥F < ε then Return end if end while Re-train SVM with D and the final A Output: Projection matrix A, SVM classifier with weights w 3.2 Gradient-based Learning for A Solving the quadratic programming defined by the dual form of the SVM is time-consuming, especially on a large-scale dataset. But if we focus on learning the projection matrix A, we can speed up learning by sampling only a small proportion of the training data to compute an approximate optimum for {w1:C, η1:l,1:C}, before each update of A. This idea is similar to the mini-batch learning, which has been used in large-scale SVM problem (Nelakanti et al., 2013) and deep learning models (Le et al., 2011). Specifically, in iteration t, the algorithm randomly chooses a subset of training samples Dt to train the model. We cannot make a closed-form update to A based on this small sample, but we can take an approximate gradient step, At = (1 −αtτ)At−1+ αt n X vi∈SV(Dt)  w(t) yi − X m η(t) i,mw(t) m  vi ⊤o , (11) where αt is a learning rate. In iteration t, we choose αt = 1 t . After convergence, we obtain the weights w by applying the SVM over the entire dataset, using the final A. The algorithm is summarized in Algorithm 1 and more details about implementation will be clarified in Section 4. While minibatch learning requires more iterations, the SVM training is much faster in each batch, and the overall algorithm is several times faster than using the entire training set for each update. 
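A sketch of the resulting stochastic step of Equation 11 is given below, replacing the full closed-form recomputation in the earlier skeleton with an estimate from one mini-batch and the learning rate alpha_t = 1/t; argument names are illustrative.

```python
import numpy as np

def minibatch_update_A(A_prev, batch_feats, batch_labels, W, eta, tau, t):
    """One stochastic step of Eq. 11.
    batch_feats: n x 3V count vectors; W: C x K class weights trained on the
    current mini-batch; eta: n x C dual variables; t: iteration counter."""
    alpha = 1.0 / t
    A = (1.0 - alpha * tau) * A_prev
    for i, y in enumerate(batch_labels):
        if eta[i, y] >= 1.0:                  # not a support vector, no contribution
            continue
        diff = W[y] - eta[i] @ W              # w_{y_i} - sum_m eta_{i,m} w_m
        A += alpha * np.outer(diff, batch_feats[i])
    return A
```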
4 Implementation The learning algorithm is applied in a shift-reduce parser, where the training data consists of the (unique) list of shift and reduce operations required to produce the gold RST parses. On test data, we choose parsing operations in an online fashion — at each step, the parsing algorithm changes the status of the stack and the queue according the selected transition, then creates the next sample with the updated status. 4.1 Parameters and Initialization There are three free parameters in our approach: the latent dimension K, and regularization parameters λ and τ. We consider the values K ∈ {30, 60, 90, 150}, λ ∈{1, 10, 50, 100} and τ ∈ {1.0, 0.1, 0.01, 0.001}, and search over this space using a development set of thirty document randomly selected from within the RST Treebank training data. We initialize each element of A0 to a uniform random value in the range [0, 1]. For mini-batch learning, we fixed the batch size to be 500 training samples (shift-reduce operations) in each iteration. 4.2 Additional features As described thus far, our model considers only the projected representation of each EDU in its parsing decisions. But prior work has shown that other, structural features can provide useful information (Joty et al., 2013). We therefore augment our classifier with a set of simple feature templates. These templates are applied to individual EDUs, as well as pairs of EDUs: (1) the two EDUs on top of the stack, and (2) the EDU on top of the stack and the EDU in front of the queue. The features are shown in Table 2. In computing these features, all tokens are downcased, and numerical features are not binned. The dependency structure and POS tags are obtained from MALTParser (Nivre et al., 2007). 5 Experiments We evaluate DPLP on the RST Discourse Treebank (Carlson et al., 2001), comparing against state-of-the-art results. We also investigate the information encoded by the projection matrix. 5.1 Experimental Setup Dataset The RST Discourse Treebank (RSTDT) consists of 385 documents, with 347 for train17 Feature Examples Words at beginning and end of the EDU ⟨BEGIN-WORD-STACK1 = but⟩ ⟨BEGIN-WORD-STACK1-QUEUE1 = but, the⟩ POS tag at beginning and end of the EDU ⟨BEGIN-TAG-STACK1 = CC⟩ ⟨BEGIN-TAG-STACK1-QUEUE1 = CC, DT⟩ Head word set from each EDU. The set includes words whose parent in the depenency graph is ROOT or is not within the EDU (Sagae, 2009). ⟨HEAD-WORDS-STACK2 = working⟩ Length of EDU in tokens ⟨LEN-STACK1-STACK2 = ⟨7, 8⟩⟩ Distance between EDUs ⟨DIST-STACK1-QUEUE1 = 2⟩ Distance from the EDU to the beginning of the document ⟨DIST-FROM-START-QUEUE1 = 3⟩ Distance from the EDU to the end of the document ⟨DIST-FROM-END-STACK1 = 1⟩ Whether two EDUs are in the same sentence ⟨SAME-SENT-STACK1-QUEUE1 = True⟩ Table 2: Additional features for RST parsing ing and 38 for testing in the standard split. As we focus on relational discourse parsing, we follow prior work (Feng and Hirst, 2012; Joty et al., 2013), and use gold EDU segmentations. The strongest automated RST segmentation methods currently attain 95% accuracy (Xuan Bach et al., 2012). Preprocessing In the RST-DT, most nodes have exactly two children, one nucleus and one satellite. For non-binary relations, we use right-branching to binarize the tree structure. For multi-nuclear relations, we choose the left EDU as “head” EDU. The vocabulary V includes all unigrams after down-casing. No other preprocessing is performed. In total, there are 16250 unique unigrams in V. 
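The right-branching binarization can be sketched as below, assuming a simple dictionary representation of RST nodes; handling of nuclearity for the introduced internal nodes is omitted, and the field names are illustrative.

```python
def binarize(node):
    """Recursively convert an n-ary RST node into a right-branching binary tree,
    reusing the node's relation label for the introduced internal nodes."""
    if "children" not in node or not node["children"]:
        return node                          # an EDU (leaf)
    children = [binarize(c) for c in node["children"]]
    while len(children) > 2:
        # fold the two rightmost children into a new node with the same relation
        right = {"relation": node["relation"],
                 "children": [children[-2], children[-1]]}
        children = children[:-2] + [right]
    return {"relation": node["relation"], "children": children}
```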
Fixed projection matrix baselines Instead of learning from data, a simple way to obtain a projection matrix is to use matrix factorization. Recent work has demonstrated the effectiveness of non-negative matrix factorization (NMF) for measuring distributional similarity (Dinu and Lapata, 2010; Van de Cruys and Apidianaki, 2011). We can construct Bnmf in the concatenation form of the projection matrix by applying NMF to the EDU-feature matrix, M ≈WH. As a result, W describes each EDU with a K-dimensional vector, and H describes each word with a K-dimensional vector. We can then construct Bnmf by taking the pseudo-inverse of H, which then projects from word-count vectors into the latent space. Another way to construct B is to use neural word embeddings (Collobert and Weston, 2008). In this case, we can view the product Bv as a composition of the word embeddings, using the simple additive composition model proposed by Mitchell and Lapata (2010). We used the word embeddings from Collobert and Weston (2008) with dimension {25, 50, 100}. Grid search over heldout training data was used to select the optimum latent dimension for both the NMF and word embedding baselines. Note that the size K of the resulting projection matrix is three times the size of the embedding (or NMF representation) due to the concatenate construction. We also consider the special case where A = I. Competitive systems We compare our approach with HILDA (Hernault et al., 2010) and TSP (Joty et al., 2013). Joty et al. (2013) proposed two different approaches to combine sentence-level parsing models: sliding windows (TSP SW) and 1 sentence-1 subtree (TSP 1-1). In the comparison, we report the results of both approaches. All results are based on the same gold standard EDU segmentation. We cannot compare with the results of Feng and Hirst (2012), because they do not evaluate on the overall discourse structure, but rather treat each relation as an individual classification problem. Metrics To evaluate the parsing performance, we use the three standard ways to measure the performance: unlabeled (i.e., hierarchical spans) and labeled (i.e., nuclearity and relation) F-score, as defined by Black et al. (1991). The application of this approach to RST parsing is described by Marcu (2000b).3 To compare with previous works on RST-DT, we use the 18 coarse-grained relations defined in (Carlson et al., 2001). 3We implemented the evaluation metrics by ourselves. Together with the DPLP system, all codes are published on https://github.com/jiyfeng/DPLP 18 Method Matrix Form +Features K Span Nuclearity Relation Prior work 1. HILDA (Hernault et al., 2010) 83.0 68.4 54.8 2. TSP 1-1 (Joty et al., 2013) 82.47 68.43 55.73 3. TSP SW (Joty et al., 2013) 82.74 68.40 55.71 Our work 4. Basic features A = 0 Yes 79.43 67.98 52.96 5. Word embeddings Concatenation No 75 75.28 67.14 53.79 6. NMF Concatenation No 150 78.57 67.66 54.80 7. Bag-of-words A = I Yes 79.85 69.01 60.21 8. DPLP Concatenation No 60 80.91 69.39 58.96 9. DPLP Difference No 60 80.47 68.61 58.27 10. DPLP Concatenation Yes 60 82.08 71.13 61.63 11. DPLP General Yes 30 81.60 70.95 61.75 Human annotation 88.70 77.72 65.75 Table 3: Parsing results of different models on the RST-DT test set. The results of TSP and HILDA are reprinted from prior work (Joty et al., 2013; Hernault et al., 2010). 5.2 Experimental Results Table 3 presents RST parsing results for DPLP and some alternative systems. All versions of DPLP outperform the prior state-of-the-art on nuclearity and relation detection. 
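Looking back at the fixed-projection baselines evaluated in lines 5 and 6 of Table 3, they might be constructed along the following lines. The snippet assumes an EDU-by-word count matrix M and a pretrained embedding matrix E; the use of scikit-learn's NMF and the exact way the pseudo-inverse of H is applied reflect one reading of the construction of Bnmf, so treat this as a sketch rather than the authors' procedure.

import numpy as np
from sklearn.decomposition import NMF

def nmf_projection(M, K):
    """Fixed projection from NMF: factor the EDU-by-word count matrix M ~ W H,
    then map a word-count vector v into the latent space via the pseudo-inverse of H."""
    nmf = NMF(n_components=K, init='nndsvd', max_iter=500)
    W = nmf.fit_transform(M)        # (num_EDUs x K): latent descriptor of each EDU
    H = nmf.components_             # (K x V): latent descriptor of each word
    B_nmf = np.linalg.pinv(H).T     # (K x V): B_nmf @ v approximates the EDU's row of W
    return B_nmf

def embedding_projection(E):
    """Fixed projection from word embeddings: B v is the count-weighted sum of the
    embeddings of the words in the EDU (additive composition, Mitchell and Lapata, 2010)."""
    return E.T                      # E is (V x d); B = E^T is (d x V)

In either case, the full projection is then assembled in the concatenation form from three copies of the resulting B (one per EDU slot), which is why the effective latent dimension is three times the NMF or embedding size.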
These gains extend even to relatively simple configurations whose features are simply a projection of the word count vectors for each EDU (lines 7 and 8 of Table 3). The addition of the features from Table 2 improves performance further, leading to absolute F-score improvements of around 2.5% in nuclearity and 6% in relation prediction (lines 9 and 10). On span detection, DPLP performs slightly worse than the prior state-of-the-art. These systems employ richer syntactic and contextual features, which might be especially helpful for span identification. As shown by line 4 of the results table, the basic features from Table 2 provide most of the predictive power for spans; however, these features are inadequate for the more semantically-oriented tasks of nuclearity and relation prediction, which benefit substantially from the projected features. Since correctly identifying spans is a precondition for nuclearity and relation prediction, we might obtain still better results by combining features from HILDA and TSP with the representation learning approach described here.

Lines 5 and 6 show that discriminative learning of the projection matrix is crucial, as fixed projections obtained from NMF or neural word embeddings perform substantially worse. Line 7 shows that the original bag-of-words representation together with the basic features gives some benefit on discourse parsing, but still falls short of the results from DPLP. From lines 8 and 9, we see that the concatenation construction is superior to the difference construction, but the comparison between lines 10 and 11 is inconclusive on the merits of the general form of A. This suggests that using the projection matrix to model interrelationships between EDUs does not substantially improve performance, and the simpler concatenation construction may be preferred.

Figure 3 shows how performance changes for different latent dimensions K. At each value of K, we employ grid search over a development set to identify the optimal regularizers λ and τ. For the concatenation construction, performance is not overly sensitive to K. For the general form of A, performance decreases with large K. Recall from Section 2.3 that this construction has nine times as many parameters as the concatenation form; with large values of K, it is likely to overfit.

[Figure 3: The performance of our parser over different latent dimensions K, with one panel each for (a) Span, (b) Nuclearity, and (c) Relation F-score; each panel compares the Concatenation and General forms of DPLP against TSP 1-1 (Joty et al., 2013) and HILDA (Hernault et al., 2010). Results for DPLP include the additional features from Table 2.]

5.3 Analysis of Projection Matrix

Why does projection of the surface features improve discourse parsing? To answer this question, we examine what information the projection matrix is learning to encode. We take the projection matrix from the concatenation construction with K = 60 as a case study. Recalling the definition in Equation 4, the projection matrix A is composed of three identical submatrices B ∈ R^{20×V}. The columns of the B matrix can be viewed as 20-dimensional descriptors of the words in the vocabulary. For the purpose of visualization, we further reduce the dimension of the latent representation from 20 to 2 dimensions using t-SNE (van der Maaten and Hinton, 2008). One further simplification for visualization is that we consider only the top 1000 most frequent unigrams in the RST-DT training set. For comparison, we also apply t-SNE to the projection matrix Bnmf recovered from non-negative matrix factorization.

Figure 4 highlights words that are related to discourse analysis. Among the top 1000 words, we highlight the words from five major discourse connective categories provided in Appendix B of the PDTB annotation manual (Prasad et al., 2008): CONJUNCTION, CONTRAST, PRECEDENCE, RESULT, and SUCCESSION. In addition, we also highlight two verb categories from the top 1000 words: modal verbs and reporting verbs, with their inflections (Krestel et al., 2008). From the figure, it is clear that DPLP has learned a projection matrix that successfully groups several major discourse-related word classes, particularly modal and reporting verbs; it has also grouped succession and precedence connectives with some success. In contrast, while NMF does obtain compact clusters of words, these clusters appear to be completely unrelated to the discourse function of the words that they include. This demonstrates the value of using discriminative training to obtain the transformed representation of the discourse units.

6 Related Work

Early work on document-level discourse parsing applied hand-crafted rules and heuristics to build trees in the framework of Rhetorical Structure Theory (Sumita et al., 1992; Corston-Oliver, 1998; Marcu, 2000a). An early data-driven approach was offered by Schilder (2002), who used distributional techniques to rate the topicality of each discourse unit, and then chose among underspecified discourse structures by placing more topical sentences near the root. Learning-based approaches were first applied to identify within-sentence discourse relations (Soricut and Marcu, 2003), and only later to cross-sentence relations at the document level (Baldridge and Lascarides, 2005). Of particular relevance to our inference technique are incremental discourse parsing approaches, such as shift-reduce (Sagae, 2009) and A* (Muller et al., 2012). Prior learning-based work has largely focused on lexical, syntactic, and structural features, but the close relationship between discourse structure and semantics (Forbes-Riley et al., 2006) suggests that shallow feature sets may struggle to capture the long tail of alternative lexicalizations that can be used to realize discourse relations (Prasad et al., 2010; Marcu and Echihabi, 2002). Only Subba and Di Eugenio (2009) incorporate rich compositional semantics into discourse parsing, but due to the ambiguity of their semantic parser, they must manually select the correct semantic parse from a forest of possibilities.

Recent work has succeeded in pushing the state-of-the-art in RST parsing by innovating on several fronts. Feng and Hirst (2012) explore rich linguistic features, including lexical semantics and discourse production rules suggested by Lin et al. (2009) in the context of the Penn Discourse Treebank (Prasad et al., 2008). Muller et al. (2012) show that A* decoding can outperform both greedy and graph-based decoding algorithms. Joty et al. (2013) achieve the best prior results on RST relation detection by (i) jointly performing relation detection and classification, (ii) performing bottom-up rather than greedy decoding, and (iii) distinguishing between intra-sentence and inter-sentence relations.
Our approach is largely orthogonal to this prior work: we focus on transforming the lexical representation of discourse units into a latent space to facilitate learning. As shown in Figure 4(a), this projection succeeds at grouping words with similar discourse functions. We might expect to obtain further improvements by augmenting this representation learning approach with rich syntactic features (particularly for span identification), more accurate decoding, and special treatment of intra-sentence relations; this is a direction for future research.

[Figure 4: t-SNE visualization of latent representations of words. Panel (a) shows the latent representation of words from projection learning with K = 20; panel (b) shows the latent representation of words from non-negative matrix factorization with K = 20. Highlighted word classes: Conjunction, Contrast, Precedence, Result, Succession, modal verbs, and reporting verbs.]

Discriminative learning of latent features for discourse processing can be viewed as a form of representation learning (Bengio et al., 2013). Also called deep learning, such approaches have recently been applied in a number of NLP tasks (Collobert et al., 2011; Socher et al., 2012). Of particular relevance are applications to the detection of semantic or discourse relations, such as paraphrase, by comparing sentences in an induced latent space (Socher et al., 2011; Guo and Diab, 2012; Ji and Eisenstein, 2013). In this work, we show how discourse structure annotations can function as a supervision signal to discriminatively learn a transformation from lexical features to a latent space that is well-suited for discourse parsing. Unlike much of the prior work on representation learning, we induce a simple linear transformation. Extension of our approach by incorporating a non-linear activation function is a natural topic for future research.

7 Conclusion

We have presented a framework to perform discourse parsing while jointly learning to project to a low-dimensional representation of the discourse units. Using the vector-space representation of EDUs, our shift-reduce parsing system substantially outperforms existing systems on nuclearity detection and discourse relation identification. By adding some additional surface features, we obtain further improvements. The low-dimensional representation also captures basic intuitions about discourse connectives and verbs, as shown in Figure 4(a). Deep learning approaches typically apply a non-linear transformation such as the sigmoid function (Bengio et al., 2013). We have conducted a few unsuccessful experiments with the "hard tanh" function proposed by Collobert and Weston (2008), but a more complete exploration of non-linear transformations must wait for future work. Another direction would be more sophisticated composition of the surface features within each elementary discourse unit, such as the hierarchical convolutional neural network (Kalchbrenner and Blunsom, 2013) or the recursive tensor network (Socher et al., 2013). It seems likely that a better accounting for syntax could improve the latent representations that our method induces.
Acknowledgments We thank the reviewers for their helpful feedback, particularly for the connection to multitask learning. We also want to thank Kenji Sagae and Vanessa Wei Feng for the helpful discussion via email communication. This research was supported by Google Faculty Research Awards to the second author. 21 References Jason Baldridge and Alex Lascarides. 2005. Probabilistic head-driven parsing for discourse structure. In Proceedings of the Ninth Conference on Computational Natural Language Learning, pages 96–103. Yoshua Bengio, Aaron Courville, and Pascal Vincent. 2013. Representation Learning: A Review and New Perspectives. IEEE Transactions on Pattern Analysis and Machine Intelligence, 35(8):1798–1828. Ezra Black, Steve Abney, Dan Flickinger, Claudia Gdaniec, Ralph Grishman, Phil Harrison, Don Hindle, Robert Ingria, Fred Jelinek, Judith Klavans, Mark Liberman, Mitchell Marcus, Salim Roukos, Beatrice Santorini, and Tomek Strzalkowski. 1991. A Procedure for Quantitatively Comparing the Syntactic Coverage of English Grammars. In Speech and Natural Language: Proceedings of a Workshop Held at Pacific Grove, California, February 19-22, 1991, pages 306–311. Jill Burstein, Joel Tetreault, and Martin Chodorow. 2013. Holistic discourse coherence annotation for noisy essay writing. Dialogue & Discourse, 4(2):34–52. Lynn Carlson, Daniel Marcu, and Mary Ellen Okurowski. 2001. Building a Discourse-tagged Corpus in the Framework of Rhetorical Structure Theory. In Proceedings of Second SIGdial Workshop on Discourse and Dialogue. Michael Collins and Brian Roark. 2004. Incremental parsing with the perceptron algorithm. In Proceedings of ACL, page 111. Association for Computational Linguistics. R. Collobert and J. Weston. 2008. A Unified Architecture for Natural Language Processing: Deep Neural Networks with Multitask Learning. In ICML. R. Collobert, J. Weston, L. Bottou, M. Karlen, K. Kavukcuoglu, and P. Kuksa. 2011. Natural Language Processing (Almost) from Scratch. Journal of Machine Learning Research, 12:2493–2537. Simon Corston-Oliver. 1998. Beyond string matching and cue phrases: Improving efficiency and coverage in discourse analysis. In The AAAI Spring Symposium on Intelligent Text Summarization, pages 9–15. Koby Crammer and Yoram Singer. 2001. On the Algorithmic Implementation of Multiclass Kernel-based Vector Machines. Journal of Machine Learning Research, 2:265–292. Georgiana Dinu and Mirella Lapata. 2010. Measuring Distributional Similarity in Context. In EMNLP, pages 1162–1172. Vanessa Wei Feng and Graeme Hirst. 2012. Text-level Discourse Parsing with Rich Linguistic Features. In Proceedings of ACL. David Ferrucci, Eric Brown, Jennifer Chu-Carroll, James Fan, David Gondek, Aditya A Kalyanpur, Adam Lally, J William Murdock, Eric Nyberg, John Prager, et al. 2010. Building Watson: An overview of the DeepQA project. AI magazine, 31(3):59–79. Katherine Forbes-Riley, Bonnie Webber, and Aravind Joshi. 2006. Computing discourse semantics: The predicate-argument semantics of discourse connectives in D-LTAG. Journal of Semantics, 23(1):55– 106. Weiwei Guo and Mona Diab. 2012. Modeling Sentences in the Latent Space. In Proceedings of ACL, pages 864–872, Jeju Island, Korea, July. Association for Computational Linguistics. Hugo Hernault, Helmut Prendinger, David A. duVerle, and Mitsuru Ishizuka. 2010. HILDA: A Discourse Parser Using Support Vector Machine Classification. Dialogue and Discourse, 1(3):1–33. Yangfeng Ji and Jacob Eisenstein. 2013. 
Discriminative Improvements to Distributional Sentence Similarity. In EMNLP, pages 891–896, Seattle, Washington, USA, October. Association for Computational Linguistics. Shafiq Joty, Giuseppe Carenini, Raymond Ng, and Yashar Mehdad. 2013. Combining Intra- and Multi-sentential Rhetorical Parsing for Documentlevel Discourse Analysis. In Proceedings of ACL. Nal Kalchbrenner and Phil Blunsom. 2013. Recurrent convolutional neural networks for discourse compositionality. In Proceedings of the Workshop on Continuous Vector Space Models and their Compositionality, pages 119–126, Sofia, Bulgaria, August. Association for Computational Linguistics. Terry Koo, Xavier Carreras, and Michael Collins. 2008. Simple Semi-supervised Dependency Parsing. In Proceedings of ACL-HLT, pages 595–603, Columbus, Ohio, June. Association for Computational Linguistics. Ralf Krestel, Sabine Bergler, and Ren´e Witte. 2008. Minding the Source: Automatic Tagging of Reported Speech in Newspaper Articles. In LREC, Marrakech, Morocco, May. European Language Resources Association (ELRA). Quoc V. Le, Jiquan Ngiam, Adam Coates, Abhik Lahiri, Bobby Prochnow, and Andrew Y. Ng. 2011. On Optimization Methods for Deep Learning. In ICML. Ziheng Lin, Min-Yen Kan, and Hwee Tou Ng. 2009. Recognizing Implicit Discourse Relations in the Penn Discourse Treebank. In EMNLP. Annie Louis, Aravind Joshi, and Ani Nenkova. 2010. Discourse indicators for content selection in summarization. In Proceedings of the 11th Annual Meeting of the Special Interest Group on Discourse and Dialogue, pages 147–156. Association for Computational Linguistics. 22 Daniel Marcu and Abdessamad Echihabi. 2002. An Unsupervised Approach to Recognizing Discourse Relations. In Proceedings of ACL, pages 368–375, Philadelphia, Pennsylvania, USA, July. Association for Computational Linguistics. Daniel Marcu. 1996. Building Up Rhetorical Structure Trees. In Proceedings of AAAI. Daniel Marcu. 1999. A Decision-Based Approach to Rhetorical Parsing. In Proceedings of ACL, pages 365–372, College Park, Maryland, USA, June. Association for Computational Linguistics. Daniel Marcu. 2000a. The Rhetorical Parsing of Unrestricted Texts: A Surface-based Approach. Computational Linguistics, 26:395–448. Daniel Marcu. 2000b. The Theory and Practice of Discourse Parsing and Summarization. MIT Press. Scott Miller, Jethran Guinness, and Alex Zamanian. 2004. Name Tagging with Word Clusters and Discriminative Training. In Daniel Marcu Susan Dumais and Salim Roukos, editors, HLT-NAACL, pages 337–342, Boston, Massachusetts, USA, May 2 May 7. Association for Computational Linguistics. Eleni Miltsakaki and Karen Kukich. 2004. Evaluation of text coherence for electronic essay scoring systems. Natural Language Engineering, 10(1):25–55. Jeff Mitchell and Mirella Lapata. 2010. Composition in distributional models of semantics. Cognitive Science, 34(8):1388–1429. Philippe Muller, Stergos Afantenos, Pascal Denis, and Nicholas Asher. 2012. Constrained Decoding for Text-Level Discourse Parsing. In Coling, pages 1883–1900, Mumbai, India, December. The COLING 2012 Organizing Committee. Anil Kumar Nelakanti, Cedric Archambeau, Julien Mairal, Francis Bach, and Guillaume Bouchard. 2013. Structured Penalties for Log-Linear Language Models. In EMNLP, pages 233–243, Seattle, Washington, USA, October. Association for Computational Linguistics. Joakim Nivre, Johan Hall, Jens Nilsson, Atanas Chanev, G¨ulsen Eryigit, Sandra K¨ubler, Svetoslav Marinov, and Erwin Marsi. 2007. 
MaltParser: A language-independent system for data-driven dependency parsing. Natural Language Engineering, 13(2):95–135. Rashmi Prasad, Nikhil Dinesh, Alan Lee, Eleni Miltsakaki, Livio Robaldo, Aravind Joshi, and Bonnie Webber. 2008. The penn discourse treebank 2.0. In LREC. Rashmi Prasad, Aravind Joshi, and Bonnie Webber. 2010. Realization of discourse relations by other means: alternative lexicalizations. In Proceedings of the 23rd International Conference on Computational Linguistics: Posters, pages 1023–1031. Association for Computational Linguistics. Kenji Sagae. 2009. Analysis of Discourse Structure with Syntactic Dependencies and Data-Driven ShiftReduce Parsing. In Proceedings of the 11th International Conference on Parsing Technologies (IWPT), pages 81–84, Paris, France, October. Association for Computational Linguistics. Frank Schilder. 2002. Robust discourse parsing via discourse markers, topicality and position. Natural Language Engineering, 8(3):235–255. Richard Socher, Eric H. Huang, Jeffrey Pennington, Andrew Y. Ng, and Christopher D. Manning. 2011. Dynamic Pooling and Unfolding Recursive Autoencoders for Paraphrase Detection. In NIPS. Richard Socher, Brody Huval, Christopher D. Manning, and Andrew Y. Ng. 2012. Semantic Compositionality Through Recursive Matrix-Vector Spaces. In EMNLP. Richard Socher, Alex Perelygin, Jean Y Wu, Jason Chuang, Christopher D Manning, Andrew Y Ng, and Christopher Potts. 2013. Recursive deep models for semantic compositionality over a sentiment treebank. In Proceedings of the Conference on Empirical Methods in Natural Language Processing (EMNLP). Swapna Somasundaran, Galileo Namata, Janyce Wiebe, and Lise Getoor. 2009. Supervised and unsupervised methods in employing discourse relations for improving opinion polarity classification. In Proceedings of EMNLP. Radu Soricut and Daniel Marcu. 2003. Sentence Level Discourse Parsing using Syntactic and Lexical Information. In NAACL. Rajen Subba and Barbara Di Eugenio. 2009. An effective Discourse Parser that uses Rich Linguistic Information. In NAACL-HLT, pages 566–574, Boulder, Colorado, June. Association for Computational Linguistics. K. Sumita, K. Ono, T. Chino, T. Ukita, and S. Amano. 1992. A discourse structure analyzer for Japanese text. In Proceedings International Conference on Fifth Generation Computer Systems, pages 1133– 1140. Maite Taboada and William C Mann. 2006. Applications of rhetorical structure theory. Discourse studies, 8(4):567–588. Benjamin Taskar, Carlos Guestrin, and Daphne Koller. 2003. Max-margin markov networks. In NIPS. Joseph Turian, Lev Ratinov, and Yoshua Bengio. 2010. Word Representation: A Simple and General Method for Semi-Supervised Learning. In Proceedings of ACL, pages 384–394. Tim Van de Cruys and Marianna Apidianaki. 2011. Latent Semantic Word Sense Induction and Disambiguation. In Proceedings of ACL, pages 1476– 1485, Portland, Oregon, USA, June. Association for Computational Linguistics. 23 Laurens van der Maaten and Geoffrey Hinton. 2008. Visualizing Data using t-SNE. Journal of Machine Learning Research, 9:2759–2605, November. Kimberly Voll and Maite Taboada. 2007. Not all words are created equal: Extracting semantic orientation as a function of adjective relevance. In Proceedings of Australian Conference on Artificial Intelligence. Ngo Xuan Bach, Nguyen Le Minh, and Akira Shimazu. 2012. A Reranking Model for Discourse Segmentation using Subtree Features. 
In Proceedings of the 13th Annual Meeting of the Special Interest Group on Discourse and Dialogue, pages 160–168. Chun-Nam John Yu and Thorsten Joachims. 2009. Learning structural SVMs with latent variables. In Proceedings of the 26th Annual International Conference on Machine Learning, pages 1169–1176. ACM.
Proceedings of the 52nd Annual Meeting of the Association for Computational Linguistics, pages 208–217, Baltimore, Maryland, USA, June 23-25 2014. c⃝2014 Association for Computational Linguistics Sparser, Better, Faster GPU Parsing David Hall Taylor Berg-Kirkpatrick John Canny Dan Klein Computer Science Division University of California, Berkeley {dlwh,tberg,jfc,klein}@cs.berkeley.edu Abstract Due to their origin in computer graphics, graphics processing units (GPUs) are highly optimized for dense problems, where the exact same operation is applied repeatedly to all data points. Natural language processing algorithms, on the other hand, are traditionally constructed in ways that exploit structural sparsity. Recently, Canny et al. (2013) presented an approach to GPU parsing that sacrifices traditional sparsity in exchange for raw computational power, obtaining a system that can compute Viterbi parses for a high-quality grammar at about 164 sentences per second on a mid-range GPU. In this work, we reintroduce sparsity to GPU parsing by adapting a coarse-to-fine pruning approach to the constraints of a GPU. The resulting system is capable of computing over 404 Viterbi parses per second—more than a 2x speedup—on the same hardware. Moreover, our approach allows us to efficiently implement less GPU-friendly minimum Bayes risk inference, improving throughput for this more accurate algorithm from only 32 sentences per second unpruned to over 190 sentences per second using pruning—nearly a 6x speedup. 1 Introduction Because NLP models typically treat sentences independently, NLP problems have long been seen as “embarrassingly parallel” – large corpora can be processed arbitrarily fast by simply sending different sentences to different machines. However, recent trends in computer architecture, particularly the development of powerful “general purpose” GPUs, have changed the landscape even for problems that parallelize at the sentence level. First, classic single-core processors and main memory architectures are no longer getting substantially faster over time, so speed gains must now come from parallelism within a single machine. Second, compared to CPUs, GPUs devote a much larger fraction of their computational power to actual arithmetic. Since tasks like parsing boil down to repeated read-multiply-write loops, GPUs should be many times more efficient in time, power, or cost. The challenge is that GPUs are not a good fit for the kinds of sparse computations that most current CPU-based NLP algorithms rely on. Recently, Canny et al. (2013) proposed a GPU implementation of a constituency parser that sacrifices all sparsity in exchange for the sheer horsepower that GPUs can provide. Their system uses a grammar based on the Berkeley parser (Petrov and Klein, 2007) (which is particularly amenable to GPU processing), “compiling” the grammar into a sequence of GPU kernels that are applied densely to every item in the parse chart. Together these kernels implement the Viterbi inside algorithm. On a mid-range GPU, their system can compute Viterbi derivations at 164 sentences per second on sentences of length 40 or less (see timing details below). In this paper, we develop algorithms that can exploit sparsity on a GPU by adapting coarse-tofine pruning to a GPU setting. On a CPU, pruning methods can give speedups of up to 100x. Such extreme speedups over a dense GPU baseline currently seem unlikely because fine-grained sparsity appears to be directly at odds with dense parallelism. 
However, in this paper, we present a system that finds a middle ground, where some level of sparsity can be maintained without losing the parallelism of the GPU. We use a coarse-to-fine approach as in Petrov and Klein (2007), but with only one coarse pass. Figure 1 shows an overview of the approach: we first parse densely with a coarse grammar and then parse sparsely with the 208 fine grammar, skipping symbols that the coarse pass deemed sufficiently unlikely. Using this approach, we see a gain of more than 2x over the dense GPU implementation, resulting in overall speeds of up to 404 sentences per second. For comparison, the publicly available CPU implementation of Petrov and Klein (2007) parses approximately 7 sentences per second per core on a modern CPU. A further drawback of the dense approach in Canny et al. (2013) is that it only computes Viterbi parses. As with other grammars with a parse/derivation distinction, the grammars of Petrov and Klein (2007) only achieve their full accuracy using minimum-Bayes-risk parsing, with improvements of over 1.5 F1 over best-derivation Viterbi parsing on the Penn Treebank (Marcus et al., 1993). To that end, we extend our coarse-tofine GPU approach to computing marginals, along the way proposing a new way to exploit the coarse pass to avoid expensive log-domain computations in the fine pass. We then implement minimumBayes-risk parsing via the max recall algorithm of Goodman (1996). Without the coarse pass, the dense marginal computation is not efficient on a GPU, processing only 32 sentences per second. However, our approach allows us to process over 190 sentences per second, almost a 6x speedup. 2 A Note on Experiments We build up our approach incrementally, with experiments interspersed throughout the paper, and summarized in Tables 1 and 2. In this paper, we focus our attention on current-generation NVIDIA GPUs. Many of the ideas described here apply to other GPUs (such as those from AMD), but some specifics will differ. All experiments are run with an NVIDIA GeForce GTX 680, a mid-range GPU that costs around $500 at time of writing. Unless otherwise noted, all experiments are conducted on sentences of length ≤40 words, and we estimate times based on batches of 20K sentences.1 We should note that our experimental condition differs from that of Canny et al. (2013): they evaluate on sentences of length ≤30. Furthermore, they 1The implementation of Canny et al. (2013) cannot handle batches so large, and so we tested it on batches of 1200 sentences. Our reimplementation is approximately the same speed for the same batch sizes. For batches of 20K sentences, we used sentences from the training set. We verified that there was no significant difference in speed for sentences from the training set and from the test set. use two NVIDIA GeForce GTX 690s—each of which is essentially a repackaging of two 680s— meaning that our system and experiments would run approximately four times faster on their hardware. (This expected 4x factor is empirically consistent with the result of running their system on our hardware.) 3 Sparsity and CPUs One successful approach for speeding up constituency parsers has been to use coarse-to-fine inference (Charniak et al., 2006). In coarse-tofine inference, we have a sequence of increasingly complex grammars Gℓ. Typically, each successive grammar Gℓis a refinement of the preceding grammar Gℓ−1. That is, for each symbol Ax in the fine grammar, there is some symbol A in the coarse grammar. 
For instance, in a latent variable parser, the coarse grammar would have symbols like NP, V P, etc., and the fine pass would have refined symbols NP0, NP1, V P4, and so on. In coarse-to-fine inference, one applies the grammars in sequence, computing inside and outside scores. Next, one computes (max) marginals for every labeled span (A, i, j) in a sentence. These max marginals are used to compute a pruning mask for every span (i, j). This mask is the set of symbols allowed for that span. Then, in the next pass, one only processes rules that are licensed by the pruning mask computed at the previous level. This approach works because a low quality coarse grammar can still reliably be used to prune many symbols from the fine chart without loss of accuracy. Petrov and Klein (2007) found that over 98% of symbols can be pruned from typical charts using a simple X-bar grammar without any loss of accuracy. Thus, the vast majority of rules can be skipped, and therefore most computation can be avoided. It is worth pointing out that although 98% of labeled spans can be skipped due to X-bar pruning, we found that only about 79% of binary rule applications can be skipped, because the unpruned symbols tend to be the ones with a larger grammar footprint. 4 GPU Architectures Unfortunately, the standard coarse-to-fine approach does not na¨ıvely translate to GPU architectures. GPUs work by executing thousands of threads at once, but impose the constraint that large blocks of threads must be executing the same 209 RAM CPU GPU RAM Instruction Cache Parse Charts Work Array Grammar Queue Sentences Queue Masks Masks Queue Trees Figure 1: Overview of the architecture of our system, which is an extension of Canny et al. (2013)’s system. The GPU and CPU communicate via a work queue, which ferries parse items from the CPU to the GPU. Our system uses a coarse-to-fine approach, where the coarse pass computes a pruning mask that is used by the CPU when deciding which items to queue during the fine pass. The original system of Canny et al. (2013) only used the fine pass, with no pruning. instructions in lockstep, differing only in their input data. Thus sparsely skipping rules and symbols will not save any work. Indeed, it may actually slow the system down. In this section, we provide an overview of GPU architectures, focusing on the details that are relevant to building an efficient parser. The large number of threads that a GPU executes are packaged into blocks of 32 threads called warps. All threads in a warp must execute the same instruction at every clock cycle: if one thread takes a branch the others do not, then all threads in the warp must follow both code paths. This situation is called warp divergence. Because all threads execute all code paths that any thread takes, time can only be saved if an entire warp agrees to skip any particular branch. NVIDIA GPUs have 8-15 processors called streaming multi-processors or SMs.2 Each SM can process up to 48 different warps at a time: it interleaves the execution of each warp, so that when one warp is stalled another warp can execute. Unlike threads within a single warp, the 48 warps do not have to execute the same instructions. However, the memory architecture is such that they will be faster if they access related memory locations. 2Older hardware (600 series or older) has 8 SMs. Newer hardware has more. A further consideration is that the number of registers available to a thread in a warp is rather limited compared to a CPU. 
On the 600 series, maximum occupancy can only be achieved if each thread uses at most 63 registers (Nvidia, 2008).3 Registers are many times faster than variables located in thread-local memory, which is actually the same speed as global memory. 5 Anatomy of a Dense GPU Parser This architecture environment puts very different constraints on parsing algorithms from a CPU environment. Canny et al. (2013) proposed an implementation of a PCFG parser that sacrifices standard sparse methods like coarse-to-fine pruning, focusing instead on maximizing the instruction and memory throughput of the parser. They assume that they are parsing many sentences at once, with throughput being more important than latency. In this section, we describe their dense algorithm, which we take as the baseline for our work; we present it in a way that sets up the changes to follow. At the top level, the CPU and GPU communicate via a work queue of parse items of the form (s, i, k, j), where s is an identifier of a sentence, i is the start of a span, k is the split point, and j 3A thread can use more registers than this, but the full complement of 48 warps cannot execute if too many are used. 210 Clustering Pruning Sent/Sec Speedup Canny et al. – 164.0 – Reimpl – 192.9 1.0x Reimpl Empty, Coarse 185.5 0.96x Reimpl Labeled, Coarse 187.5 0.97x Parent – 158.6 0.82x Parent Labeled, Coarse 278.9 1.4x Parent Labeled, 1-split 404.7 2.1x Parent Labeled, 2-split 343.6 1.8x Table 1: Performance numbers for computing Viterbi inside charts on 20,000 sentences of length ≤40 from the Penn Treebank. All times are measured on an NVIDIA GeForce GTX 680. ‘Reimpl’ is our reimplementation of their approach. Speedups are measured in reference to this reimplementation. See Section 7 for discussion of the clustering algorithms and Section 6 for a description of the pruning methods. The Canny et al. (2013) system is benchmarked on a batch size of 1200 sentences, the others on 20,000. is the end point. The GPU takes large numbers of parse items and applies the entire grammar to them in parallel. These parse items are enqueued in order of increasing span size, blocking until all items of a given length are complete. This approach is diagrammed in Figure 2. Because all rules are applied to all parse items, all threads are executing the same sequence of instructions. Thus, there is no concern of warp divergence. 5.1 Grammar Compilation One important feature of Canny et al. (2013)’s system is grammar compilation. Because registers are so much faster than thread-local memory, it is critical to keep as many variables in registers as possible. One way to accomplish this is to unroll loops at compilation time. Therefore, they inlined the iteration over the grammar directly into the GPU kernels (i.e. the code itself), which allows the compiler to more effectively use all of its registers. However, register space is limited on GPUs. Because the Berkeley grammar is so large, the compiler is not able to efficiently schedule all of the operations in the grammar, resulting in register spills. Canny et al. (2013) found they had to partition the grammar into multiple different kernels. We discuss this partitioning in more detail in Section 7. However, in short, the entire grammar G is broken into multiple clusters Gi where each rule belongs to exactly one cluster. NP DT NN VB VP NP NP PP IN NP S VP (0, 1 , 3) (0, 2, 3) (1 , 2, 4) (1 , 3, 4) (2, 3, 5 ) (2, 4, 5 ) Grammar Queue (i, k, j) Figure 2: Schematic representation of the work queue used in Canny et al. 
(2013). The Viterbi inside loop for the grammar is inlined into a kernel. The kernel is applied to all items in the queue in a blockwise manner. NP DT NN NP DT NN NP DT NN NP NP PP IN NP PP IN NP PP IN PP VB VP NP VB VP NP VB VP NP VP (0, 1 , 3) (1 , 2, 4) (3, 5 , 6 ) (1 , 3, 4) (1 , 2, 4) (0, 2, 3) (2, 4, 5 ) (3, 4, 6 ) Queues (i, k, j) Grammar Clusters Figure 3: Schematic representation of the work queue and grammar clusters used in the fine pass of our work. Here, the rules of the grammar are clustered by their coarse parent symbol. We then have multiple work queues, with parse items only being enqueued if the span (i, j) allows that symbol in its pruning mask. All in all, Canny et al. (2013)’s system is able to compute Viterbi charts at 164 sentences per second, for sentences up to length 40. On larger batch sizes, our reimplementation of their approach is able to achieve 193 sentences per second on the same hardware. (See Table 1.) 6 Pruning on a GPU Now we turn to the algorithmic and architectural changes in our approach. First, consider trying to 211 directly apply the coarse-to-fine method sketched in Section 3 to the dense baseline described above. The natural implementation would be for each thread to check if each rule is licensed before applying it. However, we would only avoid the work of applying the rule if all threads in the warp agreed to skip it. Since each thread in the warp is processing a different span (perhaps even from a different sentence), consensus from all 32 threads on any skip would be unlikely. Another approach would be to skip enqueuing any parse item (s, i, k, j) where the pruning mask for any of (i, j), (i, k), or (k, j) is entirely empty (i.e. all symbols are pruned in this cell by the coarse grammar). However, our experiments showed that only 40% of parse items are pruned in this manner. Because of the overhead associated with creating pruning masks and the further overhead of GPU communication, we found that this method did not actually produce any time savings at all. The result is a parsing speed of 185.5 sentences per second, as shown in Table 1 on the row labeled ‘Reimpl’ with ‘Empty, Coarse’ pruning. Instead, we take advantage of the partitioned structure of the grammar and organize our computation around the coarse symbol set. Recall that the baseline already partitions the grammar G into rule clusters Gi to improve register sharing. (See Section 7 for more on the baseline clustering.) We create a separate work queue for each partition. We call each such queue a labeled work queue, and each one only queues items to which some rule in the corresponding partition applies. We call the set of coarse symbols for a partition (and therefore the corresponding labeled work queue) a signature. During parsing, we only enqueue items (s, i, k, j) to a labeled queue if two conditions are met. First, the span (i, j)’s pruning mask must have a non-empty intersection with the signature of the queue. Second, the pruning mask for the children (i, k) and (k, j) must be non-empty. Once on the GPU, parse items are processed using the same style of compiled kernel as in Canny et al. (2013). Because the entire partition (though not necessarily the entire grammar) is applied to each item in the queue, we still do not need to worry about warp divergence. At the top level, our system first computes pruning masks with a coarse grammar. Then it processes the same sentences with the fine grammar. 
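The enqueue test just described is simple enough to sketch directly. The snippet below is Python pseudocode for the CPU-side check only; the pruning masks (a set of allowed coarse symbols per span) and the labeled-queue objects with a signature field are assumed data structures, not the system's actual representation.

def enqueue_fine_items(sent_id, i, k, j, masks, labeled_queues):
    """Sketch of the labeled work-queue test: an item (s, i, k, j) is enqueued to a
    queue only if (1) the span's pruning mask intersects the queue's signature and
    (2) the pruning masks of both children are non-empty."""
    span_mask = masks[sent_id][(i, j)]        # allowed coarse symbols for span (i, j)
    left_mask = masks[sent_id][(i, k)]
    right_mask = masks[sent_id][(k, j)]
    if not left_mask or not right_mask:       # a fully pruned child means no rule can apply
        return
    for queue in labeled_queues:              # one labeled queue per grammar partition
        if span_mask & queue.signature:       # signature = coarse symbols of that partition
            queue.items.append((sent_id, i, k, j))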
However, to the extent that the signatures are small, items can be selectively queued only to certain queues. This approach is diagrammed in Figure 3. We tested our new pruning approach using an X-bar grammar as the coarse pass. The resulting speed is 187.5 sentences per second, labeled in Table 1 as row labeled ‘Reimpl’ with ‘Labeled, Coarse’ pruning. Unfortunately, this approach again does not produce a speedup relative to our reimplemented baseline. To improve upon this result, we need to consider how the grammar clustering interacts with the coarse pruning phase. 7 Grammar Clustering Recall that the rules in the grammar are partitioned into a set of clusters, and that these clusters are further divided into subclusters. How can we best cluster and subcluster the grammar so as to maximize performance? A good clustering will group rules together that use the same symbols, since this means fewer memory accesses to read and write scores for symbols. Moreover, we would like the time spent processing each of the subclusters within a cluster to be about the same. We cannot move on to the next cluster until all threads from a cluster are finished, which means that the time a cluster takes is the amount of time taken by the longest-running subcluster. Finally, when pruning, it is best if symbols that have the same coarse projection are clustered together. That way, we are more likely to be able to skip a subcluster, since fewer distinct symbols need to be “off” for a parse item to be skipped in a given subcluster. Canny et al. (2013) clustered symbols of the grammar using a sophisticated spectral clustering algorithm to obtain a permutation of the symbols. Then the rules of the grammar were laid out in a (sparse) three-dimensional tensor, with one dimension representing the parent of the rule, one representing the left child, and one representing the right child. They then split the cube into 6x2x2 contiguous “major cubes,” giving a partition of the rules into 24 clusters. They then further subdivided these cubes into 2x2x2 minor cubes, giving 8 subclusters that executed in parallel. Note that the clusters induced by these major and minor cubes need not be of similar sizes; indeed, they often are not. Clustering using this method is labeled ‘Reimplementation’ in Table 1. The addition of pruning introduces further considerations. First, we have a coarse grammar, with 212 many fewer rules and symbols. Second, we are able to skip a parse item for an entire cluster if that item’s pruning mask does not intersect the cluster’s signature. Spreading symbols across clusters may be inefficient: if a parse item licenses a given symbol, we will have to enqueue that item to any queue that has the symbol in its signature, no matter how many other symbols are in that cluster. Thus, it makes sense to choose a clustering algorithm that exploits the structure introduced by the pruning masks. We use a very simple method: we cluster the rules in the grammar by coarse parent symbol. When coarse symbols are extremely unlikely (and therefore have few corresponding rules), we merge their clusters to avoid the overhead of beginning work on clusters where little work has to be done.4 In order to subcluster, we divide up rules among subclusters so that each subcluster has the same number of active parent symbols. We found this approach to subclustering worked well in practice. Clustering using this method is labeled ‘Parent’ in Table 1. 
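A sketch of this parent-based clustering is given below, in Python for readability; the real system compiles the resulting clusters into GPU kernels. The rule objects with a .parent field and the coarse_of projection map are assumptions, the 300-rule merge threshold comes from the footnote above, and the subcluster count is a placeholder since the text does not fix it.

from collections import defaultdict

def cluster_rules_by_coarse_parent(rules, coarse_of, min_rules=300, n_subclusters=8):
    """Group rules by the coarse projection of their parent symbol, merge small
    clusters into one, then split each cluster into subclusters with (roughly)
    equal numbers of active fine parent symbols."""
    clusters = defaultdict(list)
    for rule in rules:                               # rule.parent is a fine symbol
        clusters[coarse_of[rule.parent]].append(rule)
    merged, leftovers = [], []
    for coarse_parent, rs in clusters.items():
        (merged if len(rs) >= min_rules else leftovers).append(rs)
    if leftovers:                                    # rare coarse parents share one cluster
        merged.append([r for rs in leftovers for r in rs])
    subclustered = []
    for rs in merged:
        parents = sorted({r.parent for r in rs})
        chunk = max(1, len(parents) // n_subclusters)
        groups = [set(parents[p:p + chunk]) for p in range(0, len(parents), chunk)]
        subclustered.append([[r for r in rs if r.parent in g] for g in groups])
    return subclustered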
Now, when we use a coarse pruning pass, we are able to parse nearly 280 sentences per second, a 70% increase in parsing performance relative to Canny et al. (2013)’s system, and nearly 50% over our reimplemented baseline. It turns out that this simple clustering algorithm produces relatively efficient kernels even in the unpruned case. The unpruned Viterbi computations in a fine grammar using the clustering method of Canny et al. (2013) yields a speed of 193 sentences per second, whereas the same computation using coarse parent clustering has a speed of 159 sentences per second. (See Table 1.) This is not as efficient as Canny et al. (2013)’s highly tuned method, but it is still fairly fast, and much simpler to implement. 8 Pruning with Finer Grammars The coarse to fine pruning approach of Petrov and Klein (2007) employs an X-bar grammar as its first pruning phase, but there is no reason why we cannot begin with a more complex grammar for our initial pass. As Petrov and Klein (2007) have shown, intermediate-sized Berkeley grammars prune many more symbols than the X-bar system. However, they are slower to parse with 4Specifically, after clustering based on the coarse parent symbol, we merge all clusters with less than 300 rules in them into one large cluster. in a CPU context, and so they begin with an X-bar grammar. Because of the overhead associated with transferring work items to GPU, using a very small grammar may not be an efficient use of the GPU’s computational resources. To that end, we tried computing pruning masks with one-split and twosplit Berkeley grammars. The X-bar grammar can compute pruning masks at just over 1000 sentences per second, the 1-split grammar parses 858 sentences per second, and the 2-split grammar parses 526 sentences per second. Because parsing with these grammars is still quite fast, we tried using them as the coarse pass instead. As shown in Table 1, using a 1-split grammar as a coarse pass allows us to produce over 400 sentences per second, a full 2x improvement over our original system. Conducting a coarse pass with a 2-split grammar is somewhat slower, at a “mere” 343 sentences per second. 9 Minimum Bayes risk parsing The Viterbi algorithm is a reasonably effective method for parsing. However, many authors have noted that parsers benefit substantially from minimum Bayes risk decoding (Goodman, 1996; Simaan, 2003; Matsuzaki et al., 2005; Titov and Henderson, 2006; Petrov and Klein, 2007). MBR algorithms for parsing do not compute the best derivation, as in Viterbi parsing, but instead the parse tree that maximizes the expected count of some figure of merit. For instance, one might want to maximize the expected number of correct constituents (Goodman, 1996), or the expected rule counts (Simaan, 2003; Petrov and Klein, 2007). MBR parsing has proven especially useful in latent variable grammars. Petrov and Klein (2007) showed that MBR trees substantially improved performance over Viterbi parses for latent variable grammars, earning up to 1.5F1. Here, we implement the Max Recall algorithm of Goodman (1996). This algorithm maximizes the expected number of correct coarse symbols (A, i, j) with respect to the posterior distribution over parses for a sentence. This particular MBR algorithm has the advantage that it is relatively straightforward to implement. In essence, we must compute the marginal probability of each fine-labeled span µ(Ax, i, j), and then marginalize to obtain µ(A, i, j). 
Then, for each span (i, j), we find the best possible split 213 point k that maximizes C(i, j) = µ(A, i, j) + maxk (C(i, k) + C(k, j)). Parse extraction is then just a matter of following back pointers from the root, as in the Viterbi algorithm. 9.1 Computing marginal probabilities The easiest way to compute marginal probabilities is to use the log space semiring rather than the Viterbi semiring, and then to run the inside and outside algorithms as before. We should expect this algorithm to be at least a factor of two slower: the outside pass performs at least as much work as the inside pass. Moreover, it typically has worse memory access patterns, leading to slower performance. Without pruning, our approach does not handle these log domain computations well at all: we are only able to compute marginals for 32.1 sentences/second, more than a factor of 5 slower than our coarse pass. To begin, log space addition requires significantly more operations than max, which is a primitive operation on GPUs. Beyond the obvious consequence that executing more operations means more time taken, the sheer number of operations becomes too much for the compiler to handle. Because the grammars are compiled into code, the additional operations are all inlined into the kernels, producing much larger kernels. Indeed, in practice the compiler will often hang if we use the same size grammar clusters as we did for Viterbi. In practice, we found there is an effective maximum of 2000 rules per kernel using log sums, while we can use more than 10,000 rules rules in a single kernel with Viterbi. With coarse pruning, however, we can avoid much of the increased cost associated with log domain computations. Because so many labeled spans are pruned, we are able to skip many of the grammar clusters and thus avoid many of the expensive operations. Using coarse pruning and log domain calculations, our system produces MBR trees at a rate of 130.4 sentences per second, a four-fold increase. 9.2 Scaling with the Coarse Pass One way to avoid the expense of log domain computations is to use scaled probabilities rather than log probabilities. Scaling is one of the folk techniques that are commonly used in the NLP community, but not generally written about. Recall that floating point numbers are composed of a mantissa m and an exponent e, giving a number System Sent/Sec Speedup Unpruned Log Sum MBR 32.1 – Pruned Log Sum MBR 130.4 4.1x Pruned Scaling MBR 190.6 5.9x Pruned Viterbi 404.7 12.6x Table 2: Performance numbers for computing max constituent (Goodman, 1996) trees on 20,000 sentences of length 40 or less from the Penn Treebank. For convenience, we have copied our pruned Viterbi system’s result. f = m · 2e. When a float underflows, the exponent becomes too low to represent the available number of bits. In scaling, floating point numbers are paired with an additional number that extends the exponent. That is, the number is represented as f′ = f · exp(s). Whenever f becomes either too big or too small, the number is rescaled back to a less “dangerous” range by shifting mass from the exponent e to the scaling factor s. In practice, one scale s is used for an entire span (i, j), and all scores for that span are rescaled in concert. In our GPU system, multiple scores in any given span are being updated at the same time, which makes this dynamic rescaling tricky and expensive, especially since inter-warp communication is fairly limited. We propose a much simpler static solution that exploits the coarse pass. 
In the coarse pass, we compute Viterbi inside and outside scores for every span. Because the grammar used in the coarse pass is a projection of the grammar used in the fine pass, these coarse scores correlate reasonably closely with the probabilities computed in the fine pass: If a span has a very high or very low score in the coarse pass, it typically has a similar score in the fine pass. Thus, we can use the coarse pass’s inside and outside scores as the scaling values for the fine pass’s scores. That is, in addition to computing a pruning mask, in the coarse pass we store the maximum inside and outside score in each span, giving two arrays of scores sI i,j and sO i,j. Then, when applying rules in the fine pass, each fine inside score over a split span (i, k, j) is scaled to the appropriate sI i,j by multiplying the score by exp  sI i,k + sI k,j −sI i,j  , where sI i,k, sI k,j, sI i,j are the scaling factors for the left child, right child, and parent, respectively. The outside scores are scaled analogously. By itself, this approach works on nearly every sentence. However, scores for approximately 214 0.5% of sentences overflow (sic). Because we are summing instead of maxing scores in the fine pass, the scaling factors computed using max scores are not quite large enough, and so the rescaled inside probabilities grow too large when multiplied together. Most of this difference arises at the leaves, where the lexicon typically has more uncertainty than higher up in the tree. Therefore, in the fine pass, we normalize the inside scores at the leaves to sum to 1.0.5 Using this slight modification, no sentences from the Treebank under- or overflow. We know of no reason why this same trick cannot be employed in more traditional parsers, but it is especially useful here: with this static scaling, we can avoid the costly log sums without introducing any additional inter-thread communication, making the kernels much smaller and much faster. Using scaling, we are able to push our parser to 190.6 sentences/second for MBR extraction, just under half the speed of the Viterbi system. 9.3 Parsing Accuracies It is of course important verify the correctness of our system; one easy way to do so is to examine parsing accuracy, as compared to the original Berkeley parser. We measured parsing accuracy on sentences of length ≤40 from section 22 of the Penn Treebank. Our Viterbi parser achieves 89.7 F1, while our MBR parser scores 91.0. These results are nearly identical to the Berkeley parsers most comparable numbers: 89.8 for Viterbi, and 90.9 for their “Max-Rule-Sum” MBR algorithm. These slight differences arise from the usual minor variation in implementation details. In particular, we use one coarse pass instead of several, and a different MBR algorithm. In addition, there are some differences in unary processing. 10 Analyzing System Performance In this section we attempt to break down how exactly our system is spending its time. We do this in an effort to give a sense of how time is spent during computation on GPUs. These timing numbers are computed using the built-in profiling capabilities of the programming environment. As usual, profiles exhibit an observer effect, where the act of measuring the system changes the execution. Nev5One can instead interpret this approach as changing the scaling factors to sI′ i,j = sI i,j · Q i≤k<j P A inside(A, k, k + 1), where inside is the array of scores for the fine pass. 
System Coarse Pass Fine Pass Unpruned Viterbi – 6.4 Pruned Viterbi 1.2 1.5 Unpruned Logsum MBR — 28.6 Pruned Scaling MBR 1.2 4.3 Table 3: Time spent in the passes of our different systems, in seconds per 1000 sentences. Pruning refers to using a 1-split grammar for the coarse pass. ertheless, the general trends should more or less be preserved as compared to the unprofiled code. To begin, we can compute the number of seconds needed to parse 1000 sentences. (We use seconds per sentence rather than sentences per second because the former measure is additive.) The results are in Table 3. In the case of pruned Viterbi, pruning reduces the amount of time spent in the fine pass by more than 4x, though half of those gains are lost to computing the pruning masks. In Table 4, we break down the time taken by our system into individual components. As expected, binary rules account for the vast majority of the time in the unpruned Viterbi case, but much less time in the pruned case, with the total time taken for binary rules in the coarse and fine passes taking about 1/5 of the time taken by binaries in the unpruned version. Queueing, which involves copying memory around within the GPU to process the individual parse items, takes a fairly consistent amount of time in all systems. Overhead, which includes transport time between the CPU and GPU and other processing on the CPU, is relatively small for most system configurations. There is greater overhead in the scaling system, because scaling factors are copied to the CPU between the coarse and fine passes. A final question is: how many sentences per second do we need to process to saturate the GPU’s processing power? We computed Viterbi parses of successive powers of 10, from 1 to 100,000 sentences.6 In Figure 4, we then plotted the throughput, in terms of number of sentences per second. Throughput increases through parsing 10,000 sentences, and then levels off by the time it reaches 100,000 sentences. 6We replicated the Treebank for the 100,000 sentences pass. 215 System Coarse Pass Fine Pass Binary Unary Queueing Masks Overhead Binary Unary Queueing Overhead Unpruned Viterbi – – – – – 5.42 0.14 0.33 0.40 Pruned Viterbi 0.59 0.02 0.19 0.04 0.22 0.56 0.10 0.34 0.22 Pruned Scaling 0.59 0.02 0.19 0.04 0.20 1.74 0.24 0.46 0.84 Table 4: Breakdown of time spent in our different systems, in seconds per 1000 sentences. Binary and Unary refer to spent processing binary rules. Queueing refers to the amount of time used to move memory around within the GPU for processing. Overhead includes all other time, which includes communication between the GPU and the CPU. Sentences/Second 0 100 200 300 400 Number of Sentences 1 10 100 1K 10K 100K Figure 4: Plot of speeds (sentences / second) for various sizes of input corpora. The full power of the GPU parser is only reached when run on large numbers of sentences. 11 Related Work Apart from the model of Canny et al. (2013), there have been a few attempts at using GPUs in NLP contexts before. Johnson (2011) and Yi et al. (2011) both had early attempts at porting parsing algorithms to the GPU. However, they did not demonstrate significantly increased speed over a CPU implementation. In machine translation, He et al. (2013) adapted algorithms designed for GPUs in the computational biology literature to speed up on-demand phrase table extraction. 12 Conclusion GPUs represent a challenging opportunity for natural language processing. 
By carefully designing within the constraints imposed by the architecture, we have created a parser that can exploit the same kinds of sparsity that have been developed for more traditional architectures. One of the key remaining challenges going forward is confronting the kind of lexicalized sparsity common in other NLP models. The Berkeley parser’s grammars—by virtue of being unlexicalized—can be applied uniformly to all parse items. The bilexical features needed by dependency models and lexicalized constituency models are not directly amenable to acceleration using the techniques we described here. Determining how to efficiently implement these kinds of models is a promising area for new research. Our system is available as open-source at https://www.github.com/dlwh/puck. Acknowledgments This work was partially supported by BBN under DARPA contract HR0011-12-C-0014, by a Google PhD fellowship to the first author, and an NSF fellowship to the second. We further gratefully acknowledge a hardware donation by NVIDIA Corporation. References John Canny, David Hall, and Dan Klein. 2013. A multi-teraflop constituency parser using GPUs. In Proceedings of EMNLP, pages 1898–1907, October. Eugene Charniak, Mark Johnson, Micha Elsner, Joseph Austerweil, David Ellis, Isaac Haxton, Catherine Hill, R Shrivaths, Jeremy Moore, Michael Pozar, et al. 2006. Multilevel coarse-to-fine pcfg parsing. In Proceedings of the main conference on Human Language Technology Conference of the North American Chapter of the Association of Computational Linguistics, pages 168–175. Association for Computational Linguistics. Joshua Goodman. 1996. Parsing algorithms and metrics. In ACL, pages 177–183. Hua He, Jimmy Lin, and Adam Lopez. 2013. Massively parallel suffix array queries and on-demand phrase extraction for statistical machine translation using gpus. In Proceedings of the 2013 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, pages 325–334, Atlanta, Georgia, June. Association for Computational Linguistics. Mark Johnson. 2011. Parsing in parallel on multiple cores and gpus. In Proceedings of the Australasian Language Technology Association Workshop. 216 Mitchell P. Marcus, Beatrice Santorini, and Mary Ann Marcinkiewicz. 1993. Building a large annotated corpus of English: The Penn Treebank. Computational Linguistics, 19(2):313–330. Takuya Matsuzaki, Yusuke Miyao, and Jun’ichi Tsujii. 2005. Probabilistic CFG with latent annotations. In ACL, pages 75–82, Morristown, NJ, USA. CUDA Nvidia. 2008. Programming guide. Slav Petrov and Dan Klein. 2007. Improved inference for unlexicalized parsing. In NAACL-HLT. Khalil Simaan. 2003. On maximizing metrics for syntactic disambiguation. In Proceedings of IWPT. Ivan Titov and James Henderson. 2006. Loss minimization in parse reranking. In Proceedings of EMNLP, pages 560–567. Association for Computational Linguistics. Youngmin Yi, Chao-Yue Lai, Slav Petrov, and Kurt Keutzer. 2011. Efficient parallel cky parsing on gpus. In Proceedings of the 2011 Conference on Parsing Technologies, Dublin, Ireland, October. 217
Proceedings of the 52nd Annual Meeting of the Association for Computational Linguistics, pages 218–227, Baltimore, Maryland, USA, June 23-25 2014. c⃝2014 Association for Computational Linguistics Shift-Reduce CCG Parsing with a Dependency Model Wenduan Xu University of Cambridge Computer Laboratory [email protected] Stephen Clark University of Cambridge Computer Laboratory [email protected] Yue Zhang Singapore University of Technology and Design yue [email protected] Abstract This paper presents the first dependency model for a shift-reduce CCG parser. Modelling dependencies is desirable for a number of reasons, including handling the “spurious” ambiguity of CCG; fitting well with the theory of CCG; and optimizing for structures which are evaluated at test time. We develop a novel training technique using a dependency oracle, in which all derivations are hidden. A challenge arises from the fact that the oracle needs to keep track of exponentially many goldstandard derivations, which is solved by integrating a packed parse forest with the beam-search decoder. Standard CCGBank tests show the model achieves up to 1.05 labeled F-score improvements over three existing, competitive CCG parsing models. 1 Introduction Combinatory Categorial Grammar (CCG; Steedman (2000)) is able to derive typed dependency structures (Hockenmaier, 2003; Clark and Curran, 2007), providing a useful approximation to the underlying predicate-argument relations of “who did what to whom”. To date, CCG remains the most competitive formalism for recovering “deep” dependencies arising from many linguistic phenomena such as raising, control, extraction and coordination (Rimell et al., 2009; Nivre et al., 2010). To achieve its expressiveness, CCG exhibits so-called “spurious” ambiguity, permitting many non-standard surface derivations which ease the recovery of certain dependencies, especially those arising from type-raising and composition. But this raises the question of what is the most suitable model for CCG: should we model the derivations, the dependencies, or both? The choice for some existing parsers (Hockenmaier, 2003; Clark and Curran, 2007) is to model derivations directly, restricting the gold-standard to be the normal-form derivations (Eisner, 1996) from CCGBank (Hockenmaier and Steedman, 2007). Modelling dependencies, as a proxy for the semantic interpretation, fits well with the theory of CCG, in which Steedman (2000) argues that the derivation is merely a “trace” of the underlying syntactic process, and that the structure which is built, and predicated over when applying constraints on grammaticality, is the semantic interpretation. The early dependency model of Clark et al. (2002), in which model features were defined over only dependency structures, was partly motivated by these theoretical observations. More generally, dependency models are desirable for a number of reasons. First, modelling dependencies provides an elegant solution to the spurious ambiguity problem (Clark and Curran, 2007). Second, obtaining training data for dependencies is likely to be easier than for syntactic derivations, especially for incomplete data (Schneider et al., 2013). Clark and Curran (2006) show how the dependency model from Clark and Curran (2007) extends naturally to the partialtraining case, and also how to obtain dependency data cheaply from gold-standard lexical category sequences alone. 
And third, it has been argued that dependencies are an ideal representation for parser evaluation, especially for CCG (Briscoe and Carroll, 2006; Clark and Hockenmaier, 2002), and so optimizing for dependency recovery makes sense from an evaluation perspective. In this paper, we fill a gap in the literature by developing the first dependency model for a shiftreduce CCG parser. Shift-reduce parsing applies naturally to CCG (Zhang and Clark, 2011), and the left-to-right, incremental nature of the decoding fits with CCG’s cognitive claims. The discriminative model is global and trained with the structured perceptron. The decoder is based on beam-search 218 (Zhang and Clark, 2008) with the advantage of linear-time decoding (Goldberg et al., 2013). A main contribution of the paper is a novel technique for training the parser using a dependency oracle, in which all derivations are hidden. A challenge arises from the potentially exponential number of derivations leading to a gold-standard dependency structure, which the oracle needs to keep track of. Our solution is an integration of a packed parse forest, which efficiently stores all the derivations, with the beam-search decoder at training time. The derivations are not explicitly part of the data, since the forest is built from the gold-standard dependencies. We also show how perceptron learning with beam-search (Collins and Roark, 2004) can be extended to handle the additional ambiguity, by adapting the “violationfixing” perceptron of Huang et al. (2012). Results on the standard CCGBank tests show that our parser achieves absolute labeled F-score gains of up to 0.5 over the shift-reduce parser of Zhang and Clark (2011); and up to 1.05 and 0.64 over the normal-form and hybrid models of Clark and Curran (2007), respectively. 2 Shift-Reduce with Beam-Search This section describes how shift-reduce techniques can be applied to CCG, following Zhang and Clark (2011). First we describe the deterministic process which a parser would follow when tracing out a single, correct derivation; then we describe how a model of normal-form derivations — or, more accurately, a sequence of shift-reduce actions leading to a normal-form derivation — can be used with beam-search to develop a nondeterministic parser which selects the highest scoring sequence of actions. Note this section only describes a normal-form derivation model for shiftreduce parsing. Section 3 explains how we extend the approach to dependency models. The shift-reduce algorithm adapted to CCG is similar to that of shift-reduce dependency parsing (Yamada and Matsumoto, 2003; Nivre and McDonald, 2008; Zhang and Clark, 2008; Huang and Sagae, 2010). Following Zhang and Clark (2011), we define each item in the parser as a pair ⟨s, q⟩, where q is a queue of remaining input, consisting of words and a set of possible lexical categories for each word (with q0 being the front word), and s is the stack that holds subtrees s0, s1, ... (with s0 at the top). Subtrees on the stack are partial derivastep stack (sn, ..., s1, s0) queue (q0, q1, ..., qm) action 0 Mr. President visited Paris 1 N/N President visited Paris SHIFT 2 N/N N visited Paris SHIFT 3 N visited Paris REDUCE 4 NP visited Paris UNARY 5 NP (S[dcl]\NP)/NP Paris SHIFT 6 NP (S[dcl]\NP)/NP N SHIFT 7 NP (S[dcl]\NP)/NP NP UNARY 8 NP S[dcl]\NP REDUCE 9 S[dcl] REDUCE Figure 1: Deterministic example of shift-reduce CCG parsing (lexical categories omitted on queue). tions that have been built as part of the shift-reduce process. 
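As a concrete, hedged illustration of the 4-tuple representation, the dependency structure for the Figure 2 sentence can be written down as a set; the 0-based word positions and the category strings below are illustrative choices, not the parser's internal encoding.

from typing import NamedTuple, FrozenSet

class CCGDep(NamedTuple):
    """One predicate-argument dependency <h_f, f, s, h_a>.

    head     -- (position, word) of the lexical item expressing the relation
    category -- its lexical category, written as a string here
    slot     -- which argument slot of the category is filled
    arg      -- (position, word) of the argument's head word"""
    head: tuple
    category: str
    slot: int
    arg: tuple

# Dependency structure for "Mr. President visited Paris" (Fig. 2).
gold: FrozenSet[CCGDep] = frozenset({
    CCGDep((0, "Mr."),     "N/N",             1, (1, "President")),
    CCGDep((2, "visited"), "(S[dcl]\\NP)/NP", 1, (1, "President")),
    CCGDep((2, "visited"), "(S[dcl]\\NP)/NP", 2, (3, "Paris")),
})

# The order in which the dependencies are realized differs between the two
# derivations in Fig. 2, but as a set the resulting structure is identical.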
SHIFT, REDUCE and UNARY are the three types of actions that can be applied to an item. A SHIFT action shifts one of the lexical categories of q0 onto the stack. A REDUCE action combines s0 and s1 according to a CCG combinatory rule, producing a new category on the top of the stack. A UNARY action applies either a type-raising or type-changing rule to the stack-top category s0.1 Figure 1 shows a deterministic example for the sentence Mr. President visited Paris, giving a single sequence of shift-reduce actions which produces a correct derivation (i.e. one producing the correct set of dependencies). Starting with the initial item ⟨s, q⟩0 (row 0), which has an empty stack and a full queue, a total of nine actions are applied to produce the complete derivation. Applying beam-search to a statistical shiftreduce parser is a straightforward extension to the deterministic example. At each step, a beam is used to store the top-k highest-scoring items, resulting from expanding all items in the previous beam. An item becomes a candidate output once it has an empty queue, and the parser keeps track of the highest scored candidate output and returns the best one as the final output. Compared with greedy local-search (Nivre and Scholz, 2004), the use of a beam allows the parser to explore a larger search space and delay difficult ambiguity-resolving decisions by considering multiple items in parallel. We refer to the shift-reduce model of Zhang and Clark (2011) as the normal-form model, where the oracle for each sentence specifies a unique sequence of gold-standard actions which produces the corresponding normal-form derivation. No dependency structures are involved at training and test time, except for evaluation. In the next section, we describe a dependency oracle which considers all sequences of actions producing a goldstandard dependency structure to be correct. 1See Hockenmaier (2003) and Clark and Curran (2007) for a description of CCG rules. 219 Mr. President visited Paris N /N N (S[dcl]\NP)/NP NP > > N S[dcl]\NP >TC NP < S[dcl] (a) Mr. President visited Paris N /N N (S[dcl]\NP)/NP NP > N >TC NP >T S[dcl]/(S[dcl]\NP) >B S[dcl]/NP > S[dcl] (b) Figure 2: Two derivations leading to the same dependency structure. TC denotes type-changing. 3 The Dependency Model Categories in CCG are either basic (such as NP and PP) or complex (such as (S[dcl]\NP)/NP). Each complex category in the lexicon defines one or more predicate-argument relations, which can be realized as a predicate-argument dependency when the corresponding argument slot is consumed. For example, the transitive verb category above defines two relations: one for the subject NP and one for the object NP. In this paper a CCG predicate-argument dependency is a 4-tuple: ⟨hf, f, s, ha⟩where hf is the lexical item of the lexical category expressing the relation; f is the lexical category; s is the argument slot; and ha is the head word of the argument. Since the lexical items in a dependency are indexed by their sentence positions, all dependencies for a sentence form a set, which is referred to as a CCG dependency structure. Clark and Curran (2007) contains a detailed description of dependency structures. Fig. 2 shows an example demonstrating spurious ambiguity in relation to a CCG dependency structure. In both derivations, the first two lexical categories are combined using forward application (>) and the following dependency is realized: ⟨Mr., N /N1, 1, President⟩. 
In the normal-form derivation (a), the dependency ⟨visited, (S\NP1)/NP2, 2, Paris⟩is created by combining the transitive verb category with the object NP using forward application. One final dependency, ⟨visited, (S\NP1)/NP2, 1, President⟩, is realized when the root node S[dcl] is produced through backward application (<). Fig. 2(b) shows a non-normal-form derivation which uses type-raising (T) and composition (B) (which are not required to derive the correct dependency structure). In this alternative derivation, the dependency ⟨visited, (S\NP1)/NP2, 1, President⟩ is realized using forward composition (B), and ⟨visited, (S\NP1)/NP2, 2, Paris⟩is realized when the S[dcl] root is produced. The chart-based dependency model of Clark and Curran (2007) treats all derivations as hidden, and defines a probabilistic model for a dependency structure by summing probabilities of all derivations leading to a particular structure. Features are defined over both derivations and CCG predicate-argument dependencies. We follow a similar approach, but rather than define a probabilistic model (which requires summing), we define a linear model over sequences of shiftreduce actions, as for the normal-form shift-reduce model. However, the difference compared to the normal-form model is that we do not assume a single gold-standard sequence of actions. Similar to Goldberg and Nivre (2012), we define an oracle which determines, for a goldstandard dependency structure, G, what the valid transition sequences are (i.e. those sequences corresponding to derivations leading to G). More specifically, the oracle can determine, given G and an item ⟨s, q⟩, what the valid actions are for that item (i.e. what actions can potentially lead to G, starting with ⟨s, q⟩and the dependencies already built on s). However, there can be exponentially many valid action sequences for G, which we represent efficiently using a packed parse forest. We show how the forest can be used, during beamsearch decoding, to determine the valid actions for a parse item (Section 3.2). We also show, in Section 3.3, how perceptron training with earlyupdate (Collins and Roark, 2004) can be used in this setting. 3.1 The Oracle Forest A CCG parse forest efficiently represents an exponential number of derivations. Following Clark and Curran (2007) (which builds on Miyao and Tsujii (2002)), and using the same notation, we define a CCG parse forest Φ as a tuple ⟨C, D, R, γ, δ⟩, where C is a set of conjunctive 220 Algorithm 1 (Clark and Curran, 2007) Input: A packed forest ⟨C, D, R, γ, δ⟩, with dmax(c) and dmax(d) already computed 1: function MAIN 2: for each dr ∈R s.t. dmax . (dr) = |G| do 3: MARK(dr) 4: procedure MARK(d) 5: mark d as a correct node 6: for each c ∈γ(d) do 7: if dmax(c) == dmax(d) then 8: mark c as a correct node 9: for each d′ ∈δ(c) do 10: if d′ has not been visited then 11: MARK(d′) nodes and D is a set of disjunctive nodes.2 Conjunctive nodes are individual CCG categories in Φ, and are either obtained from the lexicon, or by combining two disjunctive nodes using a CCG rule, or by applying a unary rule to a disjunctive node. Disjunctive nodes are equivalence classes of conjunctive nodes. Two conjunctive nodes are equivalent iff they have the same category, head and unfilled dependencies (i.e. they will lead to the same derivation, and produce the same dependencies, in any future parsing). R ⊆D is a set of root disjunctive nodes. γ : D →2C is the conjunctive child function and δ : C →2D is the disjunctive child function. 
The former returns the set of all conjunctive nodes of a disjunctive node, and the latter returns the disjunctive child nodes of a conjunctive node. The dependency model requires all the conjunctive and disjunctive nodes of Φ that are part of the derivations leading to a gold-standard dependency structure G. We refer to such derivations as correct derivations and the packed forest containing all these derivations as the oracle forest, denoted as ΦG, which is a subset of Φ. It is prohibitive to enumerate all correct derivations, but it is possible to identify, from Φ, all the conjunctive and disjunctive nodes that are part of ΦG. Clark and Curran (2007) gives an algorithm for doing so, which we use here. The main intuition behind the algorithm is that a gold-standard dependency structure decomposes over derivations; thus gold-standard dependencies realized at conjunctive nodes can be counted when Φ is built, and all nodes that are part of ΦG can then be marked out of Φ by traversing it top-down. A key idea in understanding the algo2Under the hypergraph framework (Gallo et al., 1993; Huang and Chiang, 2005), a conjunctive node corresponds to a hyperedge and a disjunctive node corresponds to the head of a hyperedge or hyperedge bundle. rithm is that dependencies are created when disjunctive nodes are combined, and hence are associated with, or “live on”, conjunctive nodes in the forest. Following Clark and Curran (2007), we also define the following three values, where the first decomposes only over local rule productions, while the other two decompose over derivations: cdeps(c) = ( ∗if ∃τ ∈deps(c), τ /∈G |deps(c)| otherwise dmax(c) =      ∗if cdeps(c) == ∗ ∗if dmax(d) == ∗for some d ∈δ(c) P d∈δ(c) dmax(d) + cdeps(c) otherwise dmax(d) = max{dmax(c) | c ∈γ(d)} deps(c) is the set of all dependencies on conjunctive node c, and cdeps(c) counts the number of correct dependencies on c. dmax(c) is the maximum number of correct dependencies over any sub-derivation headed by c and is calculated recursively; dmax(d) returns the same value for a disjunctive node. In all cases, a special value ∗ indicates the presence of incorrect dependencies. To obtain the oracle forest, we first pre-compute dmax(c) and dmax(d) for all d and c in Φ when Φ is built using CKY, which are then used by Algorithm 1 to identify all the conjunctive and disjunctive nodes in ΦG. 3.2 The Dependency Oracle Algorithm We observe that the canonical shift-reduce algorithm (as demonstrated in Fig. 1) applied to a single parse tree exactly resembles bottom-up postorder traversal of that tree. As an example, consider the derivation in Fig. 2a, where the corresponding sequence of actions is: sh N /N , sh N , re N , un NP, sh (S[dcl]\NP)/NP, sh NP, re S[dcl]\NP, re S[dcl].3 The order of traversal is left-child, right-child and parent. For a single parse, the corresponding shift-reduce action sequence is unique, and for a given item this canonical order restricts the possible derivations that can be formed using further actions. We now extend this observation to the more general case of an oracle forest, where there may be more than one gold-standard action for a given item. Definition 1. Given a gold-standard dependency 3The derivation is “upside down”, following the convention used for CCG, where the root is S[dcl]. We use sh, re and un to denote the three types of shift-reduce action. 221 Mr. President visited Paris N /N N (S[dcl]\NP)/NP NP > > N S[dcl]\NP (a) Mr. 
President visited Paris N/N N (S[dcl]\NP)/NP NP > S[dcl]\NP (b) Figure 3: Example subtrees on two stacks, with two subtrees in (a) and three in (b); roots of subtrees are in bold. structure G, an oracle forest ΦG, and an item ⟨s, q⟩, we say s is a realization of G, denoted s ≃G, if |s| = 1, q is empty and the single derivation on s is correct. If |s| > 0 and the subtrees on s can lead to a correct derivation in ΦG using further actions, we say s is a partial-realization of G, denoted as s ∼G. And we define s ∼G for |s| = 0. As an example, assume that ΦG contains only the derivation in Fig. 2a; then a stack containing the two subtrees in Fig. 3a is a partial-realization, while a stack containing the three subtrees in Fig. 3b is not. Note that each of the three subtrees in Fig. 3b is present in ΦG; however, these subtrees cannot be combined into the single correct derivation, since the correct sequence of shiftreduce actions must first combine the lexical categories for Mr. and President before shifting the lexical category for visited. We denote an action as a pair (x, c), where x ∈{SHIFT, REDUCE, UNARY} and c is the root of the subtree resulting from that action. For all three types of actions, c also corresponds to a unique conjunctive node in the complete forest Φ; and we use csi to denote the conjunctive node in Φ corresponding to subtree si on the stack. Let ⟨s′, q′⟩= ⟨s, q⟩◦(x, c) be the resulting item from applying the action (x, c) to ⟨s, q⟩; and let the set of all possible actions for ⟨s, q⟩be X⟨s,q⟩= {(x, c) | (x, c) is applicable to ⟨s, q⟩}. Definition 2. Given ΦG and an item ⟨s, q⟩s.t. s ∼ G, we say an applicable action (x, c) for the item is valid iff s′ ∼G or s′ ≃G, where ⟨s′, q′⟩= ⟨s, q⟩◦(x, c). Definition 3. Given ΦG, the dependency oracle function fd is defined as: fd(⟨s, q⟩, (x, c), ΦG) =  true if s′ ∼G or s′ ≃G false otherwise where (x, c) ∈X⟨s,q⟩and ⟨s′, q′⟩= ⟨s, q⟩◦(x, c). The pseudocode in Algorithm 2 implements fd. It determines, for a given item, whether an applicable action is valid in ΦG. It is trivial to determine the validity of a SHIFT action for the initial item, ⟨s, q⟩0, since the SHIFT action is valid iff its category matches the goldstandard lexical category of the first word in the sentence. For any subsequent SHIFT action (SHIFT, c) to be valid, the necessary condition is c ≡clex0, where clex0 denotes the gold-standard lexical category of the front word in the queue, q0 (line 3). However, this condition is not sufficient; a counterexample is the case where all the goldstandard lexical categories for the sentence in Figure 2 are shifted in succession. Hence, in general, the conditions under which an action is valid are more complex than the trivial case above. First, suppose there is only one correct derivation in ΦG. A SHIFT action (SHIFT, clex0) is valid whenever cs0 (the conjunctive node in ΦG corresponding to the subtree s0 on the stack) and clex0 (the conjunctive node in ΦG corresponding to the next gold-standard lexical category from the queue) are both dominated by the conjunctive node parent p of cs0 in ΦG.4 A REDUCE action (REDUCE, c) is valid if c matches the category of the conjunctive node parent of cs0 and cs1 in ΦG. A UNARY action (UNARY, c) is valid if c matches the conjunctive node parent of cs0 in ΦG. We now generalize the case where ΦG contains a single correct parse to the case of an oracle forest, where each parent p is replaced by a set of conjunctive nodes in ΦG. Definition 4. 
The left parent set pL(c) of conjunctive node c ∈ΦG is the set of all parent conjunctive nodes of c in ΦG, which have the disjunctive node d containing c (i.e. c ∈γ(d)) as a left child. Definition 5. The ancestor set A(c) of conjunctive node c ∈ΦG is the set of all reachable ancestor conjunctive nodes of c in ΦG. Definition 6. Given an item ⟨s, q⟩, if |s| = 1 we say s is a frontier stack. 4Strictly speaking, the conjunctive node parent is a parent of the disjunctive node containing the conjunctive node cs0. We will continue to use this shorthand for parents of conjunctive nodes throughout the paper. 222 Algorithm 2 The Dependency Oracle Function fd Input: ΦG, an item ⟨s, q⟩s.t. s ∼G, (x, c) ∈X⟨s,q⟩ Let s′ be the stack of ⟨s′, q′⟩= ⟨s, q⟩◦(x, c) 1: function MAIN(⟨s, q⟩, (x, c), ΦG) 2: if x is SHIFT then 3: if c ̸≡clex0 then ▷c not gold lexical category 4: return false 5: else if c ≡clex0 and |s| = 0 then ▷the initial item 6: return true 7: else if c ≡clex0 and |s| ̸= 0 then 8: compute R(cs′ 1, cs′ 0) 9: return R(cs′ 1, cs′ 0) ̸= ∅ 10: if x is REDUCE then ▷s is non-frontier 11: if c ∈R(cs1, cs0) then 12: compute R(cs′ 1, cs′ 0) 13: return true 14: else return false 15: if x is UNARY then 16: if |s| = 1 then ▷s is frontier 17: return c ∈ΦG 18: if |s| ̸= 1 and c ∈ΦG then ▷s is non-frontier 19: compute R(cs′ 1, cs′ 0) 20: return R(cs′ 1, cs′ 0) ̸= ∅ A key to defining the dependency oracle function is the notion of a shared ancestor set. Intuitively, shared ancestor sets are built up through shift actions, and contain sets of nodes which can potentially become the results of reduce or unary actions. A further intuition is that shared ancestor sets define the space of possible correct derivations, and nodes in these sets are “ticked off” when reduce and unary actions are applied, as a single correct derivation is built through the shift-reduce process (corresponding to a bottom-up post-order traversal of the derivation). The following definition shows how the dependency oracle function builds shared ancestor sets for each action type. Definition 7. Let ⟨s, q⟩be an item and let ⟨s′, q′⟩= ⟨s, q⟩◦(x, c). We define the shared ancestor set R(cs′ 1, cs′ 0) of cs′ 0, after applying action (x, c), as: • {c′ | c′ ∈pL(cs0) ∩A(c)}, if s is frontier and x = SHIFT • {c′ | c′ ∈pL(cs0) ∩A(c) and there is some c′′ ∈ R(cs1, cs0) s.t. c′′ ∈A(c′)}, if s is non-frontier and x = SHIFT • {c′ | c′ ∈R(cs2, cs1) ∩A(c)}, if x = REDUCE • {c′ | c′ ∈R(cs1, cs0) ∩A(c)}, if s is non-frontier and x = UNARY • R(ϵ, c0 s0) = ∅where c0 s0 is the conjunctive node corresponding to the gold-standard lexical category of the first word in the sentence (ϵ is a dummy symbol indicating the bottom of stack). The base case for Definition 7 is when the goldstandard lexical category of the first word in the sentence has been shifted, which creates an empty shared ancestor set. Furthermore, the shared ancestor set is always empty when the stack is a frontier stack. The dependency oracle algorithm checks the validity of applicable actions. A SHIFT action is valid if R(cs′ 1, cs′ 0) ̸= ∅for the resulting stack s′. A valid REDUCE action consumes s1 and s0. For the new node, its shared ancestor set is the subset of the conjunctive nodes in R(cs2, cs1) which dominate the resulting conjunctive node of a valid REDUCE action. The UNARY case for a frontier stack is trivial: any UNARY action applicable to s in ΦG is valid. For a non-frontier stack, the UNARY case is similar to REDUCE except the resulting shared ancestor set is a subset of R(cs1, cs0). 
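The case analysis of Definition 7 is compact enough to state as code. The following is a sketch only: it assumes the left parent sets pL(.) and ancestor sets A(.) have been precomputed over the oracle forest, and the StackInfo container is an invented convenience rather than a structure from the parser.

from dataclasses import dataclass, field

@dataclass
class StackInfo:
    """What the oracle needs to know about the current stack s."""
    c_s0: object = None                        # conjunctive node of the top subtree
    R_s1_s0: set = field(default_factory=set)  # shared ancestor set R(c_s1, c_s0)
    R_s2_s1: set = field(default_factory=set)  # shared ancestor set R(c_s2, c_s1)
    frontier: bool = True                      # True iff |s| == 1

def updated_shared_ancestors(action, c, stack, pL, A):
    """R(c_{s'_1}, c_{s'_0}) after applying (action, c), following Definition 7."""
    if action == "SHIFT":
        if stack.c_s0 is None:                 # base case: first gold lexical category
            return set()
        candidates = pL[stack.c_s0] & A[c]
        if stack.frontier:
            return candidates
        # Non-frontier shift: keep only candidate parents dominated by
        # (i.e. reachable from) some node already in R(c_s1, c_s0).
        return {p for p in candidates
                if any(q in A[p] for q in stack.R_s1_s0)}
    if action == "REDUCE":
        return stack.R_s2_s1 & A[c]
    if action == "UNARY" and not stack.frontier:
        return stack.R_s1_s0 & A[c]
    # Frontier UNARY: the shared ancestor set is always empty.
    return set()

In Algorithm 2, a non-initial SHIFT or a non-frontier UNARY is then valid exactly when this set is non-empty, while a REDUCE is checked against the current R(c_s1, c_s0) before the new set is computed.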
We now turn to the problem of finding the shared ancestor sets. In practice, we do not do this by traversing ΦG top-down from the conjunctive nodes in pL(cs0) on-the-fly to find each member of R. Instead, when we build ΦG in bottom-up topological order, we pre-compute the set of reachable disjunctive nodes of each conjunctive node c in ΦG as: D(c) = δ(c) ∪(∪c′∈γ(d),d∈δ(c)(D(c′))) Each D is implemented as a hash map, which allows us to test the membership of one potential conjunctive node in O(1) time. For example, a conjunctive node c ∈pL(cs0) is reachable from clex0 if there is a disjunctive node d ∈D(c) s.t. clex0 ∈γ(d). With this implementation, the complexity of checking each valid SHIFT action is then O(|pL(cs0)|). 3.3 Training We use the averaged perceptron (Collins, 2002) to train a global linear model and score each action. The normal-form model of Zhang and Clark (2011) uses an early update mechanism (Collins and Roark, 2004), where decoding is stopped to update model weights whenever the single gold action falls outside the beam. In our parser, there can be multiple gold items in a beam. One option would be to apply early update whenever at least 223 Algorithm 3 Dependency Model Training Input: (y, G) and beam size k 1: w ←0; B0 ←∅; i ←0 2: B0.push(⟨s, q⟩0) ▷the initial item 3: cand ←∅ ▷candidate output priority queue 4: gold ←∅ ▷gold output priority queue 5: while Bi ̸= ∅do 6: for each ⟨s, q⟩∈Bi do 7: if |q| = 0 then ▷candidate output 8: cand.push(⟨s, q⟩) 9: if s ≃G then ▷s is a realization of G 10: gold.push(⟨s, q⟩) 11: expand ⟨s, q⟩into Bi+1 12: Bi+1 ←Bi+1[1 : k] ▷apply beam 13: if ΠG ̸= ∅, ΠG ∩Bi+1 = ∅and cand[0] ̸≃G then 14: w ←w + φ(ΠG[0]) −φ(Bi+1[0]) ▷early update 15: return 16: i ←i + 1 ▷continue to next step 17: if cand[0] ̸≃G then ▷final update 18: w ←w + φ(gold[0]) −φ(cand[0]) one of these gold items falls outside the beam. However, this may not be a true violation of the gold-standard (Huang et al., 2012). Thus, we use a relaxed version of early update, in which all goldstandard actions must fall outside the beam before an update is performed. This update mechanism is provably correct under the violation-fixing framework of Huang et al. (2012). Let (y, G) be a training sentence paired with its gold-standard dependency structure and let Π⟨s,q⟩ be the following set for an item ⟨s, q⟩: {⟨s, q⟩◦(x, c) | fd(⟨s, q⟩, (x, c), ΦG) = true} Π⟨s,q⟩contains all correct items at step i + 1 obtained by expanding ⟨s, q⟩. Let the set of all correct items at a step i + 1 be:5 ΠG = [ ⟨s,q⟩∈Bi Π⟨s,q⟩ Algorithm 3 shows the pseudocode for training the dependency model with early update for one input (y, G). The score of an item ⟨s, q⟩is calculated as w · φ(⟨s, q⟩) with respect to the current model w, where φ(⟨s, q⟩) is the feature vector for the item. At step i, all items are expanded and added onto the next beam Bi+1, and the top-k retained. Early update is applied when all gold items first fall outside the beam, and any candidate output is incorrect (line 14). Since there are potentially many gold items, and one gold item is required for the perceptron update, a decision needs 5In Algorithm 3 we abuse notation by using ΠG[0] to denote the highest scoring gold item in the set. to be made regarding which gold item to update against. We choose to reward the highest scoring gold item, in line with the violation-fixing framework; and penalize the highest scoring incorrect item, using the standard perceptron update. 
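Because the two-column layout flattens Algorithm 3 above, a rough Python rendering of its control flow for a single training example may help. It is a sketch under stated assumptions: expand, phi, is_gold_item, realizes and the item's queue_empty method are black boxes assumed to be supplied by the parser and the dependency oracle, and the perceptron averaging used in the paper is omitted.

def train_one(w, k, initial_item, expand, phi, is_gold_item, realizes):
    """One training example for the dependency model (a rendering of Algorithm 3).

    expand(item)       -> items reachable from item by one parser action
    is_gold_item(item) -> True iff every action on the item's path was oracle-valid
    realizes(item)     -> True iff the item's stack is a single correct derivation
    phi(item)          -> sparse feature dict; w is the weight dict, k the beam size."""
    score = lambda it: sum(w.get(f, 0.0) * v for f, v in phi(it).items())
    beam, cand, gold_out = [initial_item], [], []

    while beam:
        successors = []
        for item in beam:
            if item.queue_empty():             # candidate output
                cand.append(item)
                if realizes(item):
                    gold_out.append(item)
            successors.extend(expand(item))
        next_beam = sorted(successors, key=score, reverse=True)[:k]
        gold_succ = [it for it in successors if is_gold_item(it)]
        best_cand = max(cand, key=score, default=None)
        # Early update: all gold items have just fallen out of the beam and the
        # best finished candidate is not itself a realization of the gold structure.
        if gold_succ and not any(is_gold_item(it) for it in next_beam) and \
                not (best_cand is not None and realizes(best_cand)):
            update(w, phi(max(gold_succ, key=score)), phi(next_beam[0]))
            return
        beam = next_beam

    best_cand = max(cand, key=score, default=None)
    if gold_out and best_cand is not None and not realizes(best_cand):
        update(w, phi(max(gold_out, key=score)), phi(best_cand))   # final update

def update(w, gold_feats, wrong_feats):
    """Standard perceptron update: reward the gold item, penalize the incorrect one."""
    for f, v in gold_feats.items():
        w[f] = w.get(f, 0.0) + v
    for f, v in wrong_feats.items():
        w[f] = w.get(f, 0.0) - v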
A final update is performed if no more expansions are possible but the final output is incorrect. 4 Experiments We implement our shift-reduce parser on top of the core C&C code base (Clark and Curran, 2007) and evaluate it against the shift-reduce parser of Zhang and Clark (2011) (henceforth Z&C) and the chartbased normal-form and hybrid models of Clark and Curran (2007). For all experiments, we use CCGBank with the standard split: sections 2-21 for training (39,604 sentences), section 00 for development (1,913 sentences) and section 23 (2,407 sentences) for testing. The way that the CCG grammar is implemented in C&C has some implications for our parser. First, unlike Z&C, which uses a context-free cover (Fowler and Penn, 2010) and hence is able to use all sentences in the training data, we are only able to use 36,036 sentences. The reason is that the grammar in C&C does not have complete coverage of CCGBank, due to the fact that e.g. not all rules in CCGBank conform to the combinatory rules of CCG. Second, our parser uses the unification mechanism from C&C to output dependencies directly, and hence does not need a separate postprocessing step to convert derivations into CCG dependencies, as required by Z&C. The feature templates of our model consist of all of those in Z&C, except the ones which require lexical heads to come from either the left or right child, as such features are incompatible with the head passing mechanism used by C&C. Each Z&C template is defined over a parse item, and captures various aspects of the stack and queue context. For example, one template returns the top category on the stack plus its head word, together with the first word and its POS tag on the queue. Another template returns the second category on the stack, together with the POS tag of its head word. Every Z&C feature is defined as a pair, consisting of an instantiated context template and a parse action. In addition, we use all the CCG predicate-argument dependency features from Clark and Curran (2007), which contribute to the score of a REDUCE action when dependencies 224 LP % LR % LF % LSent. % CatAcc. % coverage % this parser 86.29 84.09 85.18 34.40 92.75 100 Z&C 87.15 82.95 85.00 33.82 92.77 100 C&C (normal-form) 85.22 82.52 83.85 31.63 92.40 100 this parser 86.76 84.90 85.82 34.72 93.20 99.06 (C&C coverage) Z&C 87.55 83.63 85.54 34.14 93.11 99.06 (C&C coverage) C&C (hybrid) – – 85.25 – – 99.06 (C&C coverage) C&C (normal-form) 85.22 84.29 84.76 31.93 92.83 99.06 (C&C coverage) Table 1: Accuracy comparison on Section 00 (auto POS). 60 65 70 75 80 85 90 0 5 10 15 20 25 30 Precision % Dependency length (bins of 5) C&C Z&C this parser (a) precision vs. dependency length 50 55 60 65 70 75 80 85 90 0 5 10 15 20 25 30 Recall % Dependency length (bins of 5) C&C Z&C this parser (b) recall vs. dependency length Figure 4: Labeled precision and recall relative to dependency length on the development set. C&C normal-form model is used. are realized. Detailed descriptions of all the templates in our model can be found in the respective papers. We run 20 training iterations and the resulting model contains 16.5M features with a nonzero weight. We use 10-fold cross validation for POS tagging and supertagging the training data, and automatically assigned POS tags for all experiments. A probability cut-off value of 0.0001 for the β parameter in the supertagger is used for both training and testing. 
The β parameter determines how many lexical categories are assigned to each word; β = 0.0001 is a relatively small value which allows in a large number of categories, compared to the default value used in Clark and Curran (2007). For training only, if the gold-standard lexical category is not supplied by the supertagger for a particular word, it is added to the list of categories. 4.1 Results and Analysis The beam size was tuned on the development set, and a value of 128 was found to achieve a reasonable balance of accuracy and speed; hence this value was used for all experiments. Since C&C always enforces non-fragmentary output (i.e. it can only produce spanning analyses), it fails on some sentences in the development and test sets, and thus we also evaluate on the reduced sets, following Clark and Curran (2007). Our parser does not fail on any sentences because it permits fragmentary output (those cases where there is more than one subtree left on the final stack). The results for Z&C, and the C&C normal-form and hybrid models, are taken from Zhang and Clark (2011). Table 1 shows the accuracies of all parsers on the development set, in terms of labeled precision and recall over the predicate-argument dependencies in CCGBank. On both the full and reduced sets, our parser achieves the highest F-score. In comparison with C&C, our parser shows significant increases across all metrics, with 0.57% and 1.06% absolute F-score improvements over the hybrid and normal-form models, respectively. Another major improvement over the other two parsers is in sentence level accuracy, LSent, which measures the number of sentences for which the dependency structure is completely correct. Table 1 also shows that our parser has improved recall over Z&C at some expense of precision. To probe this further we compare labeled precision and recall relative to dependency length, as measured by the distance between the two words in a dependency, grouped into bins of 5 values. Fig. 4 shows clearly that Z&C favors precision over recall, giving higher precision scores for almost all dependency lengths compared to our parser. In 225 category LP % (o) LP % (z) LP % (c) LR % (o) LR % (z) LR % (c) LF % (o) LF % (z) LF % (c) freq. N /N 95.53 95.77 95.28 95.83 95.79 95.62 95.68 95.78 95.45 7288 NP/N 96.53 96.70 96.57 97.12 96.59 96.03 96.83 96.65 96.30 4101 (NP\NP)/NP 81.64 83.19 82.17 90.63 89.24 88.90 85.90 86.11 85.40 2379 (NP\NP)/NP 81.70 82.53 81.58 88.91 87.99 85.74 85.15 85.17 83.61 2174 ((S\NP)\(S\NP))/NP 77.64 77.60 71.94 72.97 71.58 73.32 75.24 74.47 72.63 1147 ((S\NP)\(S\NP))/NP 75.78 76.30 70.92 71.27 70.60 71.93 73.45 73.34 71.42 1058 ((S[dcl]\NP)/NP 83.94 85.60 81.57 86.04 84.30 86.37 84.98 84.95 83.90 917 PP/NP 77.06 73.76 75.06 73.63 72.83 70.09 75.31 73.29 72.49 876 ((S[dcl]\NP)/NP 82.03 85.32 81.62 83.26 82.00 85.55 82.64 83.63 83.54 872 ((S\NP)\(S\NP)) 86.42 84.44 86.85 86.19 86.60 86.73 86.31 85.51 86.79 746 Table 2: Accuracy comparison on most frequent dependency types, for our parser (o), Z&C (z) and C&C hybrid model (c). Categories in bold indicate the argument slot in the relation. LP % LR % LF % LSent. % CatAcc. 
% coverage % our parser 87.03 85.08 86.04 35.69 93.10 100 Z&C 87.43 83.61 85.48 35.19 93.12 100 C&C (normal-form) 85.58 82.85 84.20 32.90 92.84 100 our parser 87.04 85.16 86.09 35.84 93.13 99.58 (C&C coverage) Z&C 87.43 83.71 85.53 35.34 93.15 99.58 (C&C coverage) C&C (hybrid) 86.17 84.74 85.45 32.92 92.98 99.58 (C&C coverage) C&C (normal-form) 85.48 84.60 85.04 33.08 92.86 99.58 (C&C coverage) Table 3: Accuracy comparison on section 23 (auto POS). terms of recall (Fig. 4b), our parser outperforms Z&C over all dependency lengths, especially for longer dependencies (x ≥20). When compared with C&C, the recall of the Z&C parser drops quickly for dependency lengths over 10. While our parser also suffers from this problem, it is less severe and is able to achieve higher recall at x ≥30. Table 2 compares our parser with Z&C and the C&C hybrid model, for the most frequent dependency relations. While our parser achieved lower precision than Z&C, it is more balanced and gives higher recall for all of the dependency relations except the last one, and higher F-score for over half of them. Table 3 presents the final test results on Section 23. Again, our parser achieves the highest scores across all metrics (for both the full and reduced test sets), except for precision and lexical category assignment, where Z&C performed better. 5 Conclusion We have presented a dependency model for a shiftreduce CCG parser, which fully aligns CCG parsing with the left-to-right, incremental nature of a shiftreduce parser. Our work is in part inspired by the dependency models of Clark and Curran (2007) and, in the use of a dependency oracle, is close in spirit to that of Goldberg and Nivre (2012). The difference is that the Goldberg and Nivre parser builds, and scores, dependency structures directly, whereas our parser uses a unification mechanism to create dependencies, and scores the CCG derivations, allowing great flexibility in terms of what dependencies can be realized. Another related work is Yu et al. (2013), which introduced a similar technique to deal with spurious ambiguity in MT. Finally, there may be potential to integrate the techniques of Auli and Lopez (2011), which currently represents the state-of-the-art in CCGBank parsing, into our parser. Acknowledgements We thank the anonymous reviewers for their helpful comments. Wenduan Xu is fully supported by the Carnegie Trust and receives additional funding from the Cambridge Trusts. Stephen Clark is supported by ERC Starting Grant DisCoTex (306920) and EPSRC grant EP/I037512/1. Yue Zhang is supported by Singapore MOE Tier2 grant T2MOE201301. References Michael Auli and Adam Lopez. 2011. A comparison of loopy belief propagation and dual decomposition for integrated CCG supertagging and parsing. In Proc. ACL 2011, pages 470–480, Portland, OR. Ted Briscoe and John Carroll. 2006. Evaluating the accuracy of an unlexicalized statistical parser on the 226 PARC DepBank. In Proc. of COLING/ACL, pages 41–48, Sydney, Australia. Stephen Clark and James R. Curran. 2006. Partial training for a lexicalized-grammar parser. In Proc. NAACL-06, pages 144–151, New York, USA. Stephen Clark and James R. Curran. 2007. Widecoverage efficient statistical parsing with CCG and log-linear models. Computational Linguistics, 33(4):493–552. Stephen Clark and Julia Hockenmaier. 2002. Evaluating a wide-coverage CCG parser. In Proc. of the LREC 2002 Beyond Parseval Workshop, pages 60– 66, Las Palmas, Spain. Stephen Clark, Julia Hockenmaier, and Mark Steedman. 2002. 
Building deep dependency structures with a wide-coverage CCG parser. In Proc. ACL, pages 327–334, Philadelphia, PA. Michael Collins and Brian Roark. 2004. Incremental parsing with the perceptron algorithm. In Proc. of ACL, pages 111–118, Barcelona, Spain. Michael Collins. 2002. Discriminative training methods for hidden Markov models: Theory and experiments with perceptron algorithms. In Proc. of EMNLP, pages 1–8, Philadelphia, USA. Jason Eisner. 1996. Efficient normal-form parsing for Combinatory Categorial Grammar. In Proc. ACL, pages 79–86, Santa Cruz, CA. Timothy AD Fowler and Gerald Penn. 2010. Accurate context-free parsing with Combinatory Categorial Grammar. In Proc. ACL, pages 335–344, Uppsala, Sweden. Giorgio Gallo, Giustino Longo, Stefano Pallottino, and Sang Nguyen. 1993. Directed hypergraphs and applications. Discrete applied mathematics, 42(2):177–201. Yoav Goldberg and Joakim Nivre. 2012. A dynamic oracle for arc-eager dependency parsing. In Proc. COLING, Mumbai, India. Yoav Goldberg, Kai Zhao, and Liang Huang. 2013. Efficient implementation for beam search incremental parsers. In Proceedings of the Short Papers of ACL, Sofia, Bulgaria. Julia Hockenmaier and Mark Steedman. 2007. CCGbank: A corpus of CCG derivations and dependency structures extracted from the Penn Treebank. Computational Linguistics, 33(3):355–396. Julia Hockenmaier. 2003. Data and Models for Statistical Parsing with Combinatory Categorial Grammar. Ph.D. thesis, University of Edinburgh. Liang Huang and David Chiang. 2005. Better kbest parsing. In Proceedings of the Ninth International Workshop on Parsing Technology, pages 53– 64, Vancouver, Canada. Liang Huang and Kenji Sagae. 2010. Dynamic programming for linear-time incremental parsing. In Proc. ACL, pages 1077–1086, Uppsala, Sweden. Liang Huang, Suphan Fayong, and Yang Guo. 2012. Structured perceptron with inexact search. In Proc. NAACL, pages 142–151, Montreal, Canada. Yusuke Miyao and Jun’ichi Tsujii. 2002. Maximum entropy estimation for feature forests. In Proceedings of the Human Language Technology Conference, San Diego, CA. Joakim Nivre and Ryan McDonald. 2008. Integrating graph-based and transition-based dependency parsers. In Proc. of ACL/HLT, pages 950–958, Columbus, Ohio. J. Nivre and M Scholz. 2004. Deterministic dependency parsing of English text. In Proceedings of COLING 2004, pages 64–70, Geneva, Switzerland. Joakim Nivre, Laura Rimell, Ryan McDonald, and Carlos Gomez-Rodriguez. 2010. Evaluation of dependency parsers on unbounded dependencies. In Proc. of COLING, Beijing, China. Laura Rimell, Stephen Clark, and Mark Steedman. 2009. Unbounded dependency recovery for parser evaluation. In Proc. EMNLP, pages 813–821, Edinburgh, UK. Nathan Schneider, Brendan O’Connor, Naomi Saphra, David Bamman, Manaal Faruqui, Noah A. Smith, Chris Dyer, and Jason Baldridge. 2013. A framework for (under)specifying dependency syntax without overloading annotators. In Proc. of the 7th Linguistic Annotation Workshop and Interoperability with Discourse, Sofia, Bulgaria. Mark Steedman. 2000. The Syntactic Process. The MIT Press, Cambridge, Mass. H Yamada and Y Matsumoto. 2003. Statistical dependency analysis using support vector machines. In Proc. of IWPT, Nancy, France. Heng Yu, Liang Huang, Haitao Mi, and Kai Zhao. 2013. Max-violation perceptron and forced decoding for scalable mt training. In Proc. EMNLP, Seattle, Washington, USA. Yue Zhang and Stephen Clark. 2008. 
A tale of two parsers: investigating and combining graphbased and transition-based dependency parsing using beam-search. In Proc. of EMNLP, Hawaii, USA. Yue Zhang and Stephen Clark. 2011. Shift-reduce CCG parsing. In Proc. ACL 2011, pages 683–692, Portland, OR. 227
Proceedings of the 52nd Annual Meeting of the Association for Computational Linguistics, pages 228–237, Baltimore, Maryland, USA, June 23-25 2014. c⃝2014 Association for Computational Linguistics Less Grammar, More Features David Hall Greg Durrett Dan Klein Computer Science Division University of California, Berkeley {dlwh,gdurrett,klein}@cs.berkeley.edu Abstract We present a parser that relies primarily on extracting information directly from surface spans rather than on propagating information through enriched grammar structure. For example, instead of creating separate grammar symbols to mark the definiteness of an NP, our parser might instead capture the same information from the first word of the NP. Moving context out of the grammar and onto surface features can greatly simplify the structural component of the parser: because so many deep syntactic cues have surface reflexes, our system can still parse accurately with context-free backbones as minimal as Xbar grammars. Keeping the structural backbone simple and moving features to the surface also allows easy adaptation to new languages and even to new tasks. On the SPMRL 2013 multilingual constituency parsing shared task (Seddah et al., 2013), our system outperforms the top single parser system of Bj¨orkelund et al. (2013) on a range of languages. In addition, despite being designed for syntactic analysis, our system also achieves stateof-the-art numbers on the structural sentiment task of Socher et al. (2013). Finally, we show that, in both syntactic parsing and sentiment analysis, many broad linguistic trends can be captured via surface features. 1 Introduction Na¨ıve context-free grammars, such as those embodied by standard treebank annotations, do not parse well because their symbols have too little context to constrain their syntactic behavior. For example, to PPs usually attach to verbs and of PPs usually attach to nouns, but a context-free PP symbol can equally well attach to either. Much of the last few decades of parsing research has therefore focused on propagating contextual information from the leaves of the tree to internal nodes. For example, head lexicalization (Eisner, 1996; Collins, 1997; Charniak, 1997), structural annotation (Johnson, 1998; Klein and Manning, 2003), and state-splitting (Matsuzaki et al., 2005; Petrov et al., 2006) are all designed to take coarse symbols like PP and decorate them with additional context. The underlying reason that such propagation is even needed is that PCFG parsers score trees based on local configurations only, and any information that is not threaded through the tree becomes inaccessible to the scoring function. There have been non-local approaches as well, such as tree-substitution parsers (Bod, 1993; Sima’an, 2000), neural net parsers (Henderson, 2003), and rerankers (Collins and Koo, 2005; Charniak and Johnson, 2005; Huang, 2008). These non-local approaches can actually go even further in enriching the grammar’s structural complexity by coupling larger domains in various ways, though their non-locality generally complicates inference. In this work, we instead try to minimize the structural complexity of the grammar by moving as much context as possible onto local surface features. We examine the position that grammars should not propagate any information that is available from surface strings, since a discriminative parser can access that information directly. 
We therefore begin with a minimal grammar and iteratively augment it with rich input features that do not enrich the context-free backbone. Previous work has also used surface features in their parsers, but the focus has been on machine learning methods (Taskar et al., 2004), latent annotations (Petrov and Klein, 2008a; Petrov and Klein, 2008b), or implementation (Finkel et al., 2008). By contrast, we investigate the extent to which 228 we need a grammar at all. As a thought experiment, consider a parser with no grammar, which functions by independently classifying each span (i, j) of a sentence as an NP, VP, and so on, or null if that span is a non-constituent. For example, spans that begin with the might tend to be NPs, while spans that end with of might tend to be non-constituents. An independent classification approach is actually very viable for part-of-speech tagging (Toutanova et al., 2003), but is problematic for parsing – if nothing else, parsing comes with a structural requirement that the output be a well-formed, nested tree. Our parser uses a minimal PCFG backbone grammar to ensure a basic level of structural well-formedness, but relies mostly on features of surface spans to drive accuracy. Formally, our model is a CRF where the features factor over anchored rules of a small backbone grammar, as shown in Figure 1. Some aspects of the parsing problem, such as the tree constraint, are clearly best captured by a PCFG. Others, such as heaviness effects, are naturally captured using surface information. The open question is whether surface features are adequate for key effects like subcategorization, which have deep definitions but regular surface reflexes (e.g. the preposition selected by a verb will often linearly follow it). Empirically, the answer seems to be yes, and our system produces strong results, e.g. up to 90.5 F1 on English parsing. Our parser is also able to generalize well across languages with little tuning: it achieves state-of-the-art results on multilingual parsing, scoring higher than the best single-parser system from the SPMRL 2013 Shared Task on a range of languages, as well as on the competition’s average F1 metric. One advantage of a system that relies on surface features and a simple grammar is that it is portable not only across languages but also across tasks to an extent. For example, Socher et al. (2013) demonstrates that sentiment analysis, which is usually approached as a flat classification task, can be viewed as tree-structured. In their work, they propagate real-valued vectors up a tree using neural tensor nets and see gains from their recursive approach. Our parser can be easily adapted to this task by replacing the X-bar grammar over treebank symbols with a grammar over the sentiment values to encode the output variables and then adding n-gram indicators to our feature set to capture the bulk of the lexical effects. When applied to this task, our system generally matches their accuracy overall and is able to outperform it on the overall sentence-level subtask. 2 Parsing Model In order to exploit non-independent surface features of the input, we use a discriminative formulation. Our model is a conditional random field (Lafferty et al., 2001) over trees, in the same vein as Finkel et al. (2008) and Petrov and Klein (2008a). Formally, we define the probability of a tree T conditioned on a sentence w as p(T|w) ∝exp θ⊺X r∈T f(r, w) ! (1) where the feature domains r range over the (anchored) rules used in the tree. 
An anchored rule r is the conjunction of an unanchored grammar rule rule(r) and the start, stop, and split indexes where that rule is anchored, which we refer to as span(r). It is important to note that the richness of the backbone grammar is reflected in the structure of the trees T, while the features that condition directly on the input enter the equation through the anchoring span(r). To optimize model parameters, we use the Adagrad algorithm of Duchi et al. (2010) with L2 regularization. We start with a simple X-bar grammar whose only symbols are NP, NP-bar, VP, and so on. Our base model has no surface features: formally, on each anchored rule r we have only an indicator of the (unanchored) rule identity, rule(r). Because the X-bar grammar is so minimal, this grammar does not parse very accurately, scoring just 73 F1 on the standard English Penn Treebank task. In past work that has used tree-structured CRFs in this way, increased accuracy partially came from decorating trees T with additional annotations, giving a tree T ′ over a more complex symbol set. These annotations introduce additional context into the model, usually capturing linguistic intuition about the factors that influence grammaticality. For instance, we might annotate every constituent X in the tree with its parent Y , giving a tree with symbols X[ˆY ]. Finkel et al. (2008) used parent annotation, head tag annotation, and horizontal sibling annotation together in a single large grammar. In Petrov and Klein (2008a) and Petrov and Klein (2008b), these annotations were latent; they were inferred automatically during training. 229 Hall and Klein (2012) employed both kinds of annotations, along with lexicalized head word annotation. All of these past CRF parsers do also exploit span features, as did the structured margin parser of Taskar et al. (2004); the current work primarily differs in shifting the work from the grammar to the surface features. The problem with rich annotations is that they increase the state space of the grammar substantially. For example, adding parent annotation can square the number of symbols, and each subsequent annotation causes a multiplicative increase in the size of the state space. Hall and Klein (2012) attempted to reduce this state space by factoring these annotations into individual components. Their approach changed the multiplicative penalty of annotation into an additive penalty, but even so their individual grammar projections are much larger than the base X-bar grammar. In this work, we want to see how much of the expressive capability of annotations can be captured using surface evidence, with little or no annotation of the underlying grammar. To that end, we avoid annotating our trees at all, opting instead to see how far simple surface features will go in achieving a high-performance parser. We will return to the question of annotation in Section 5. 3 Surface Feature Framework To improve the performance of our X-bar grammar, we will add a number of surface feature templates derived only from the words in the sentence. We say that an indicator is a surface property if it can be extracted without reference to the parse tree. These features can be implemented without reference to structured linguistic notions like headedness; however, we will argue that they still capture a wide range of linguistic phenomena in a data-driven way. 
Throughout this and the following section, we will draw on motivating examples from the English Penn Treebank, though similar examples could be equally argued for other languages. For performance on other languages, see Section 6. Recall that our CRF factors over anchored rules r, where each r has identity rule(r) and anchoring span(r). The X-bar grammar has only indicators of rule(r), ignoring the anchoring. Let a surface property of r be an indicator function of span(r) and the sentence itself. For example, the first word in a constituent is a surface property, as averted financial disaster VP NP VBD JJ NN PARENT = VP FIRSTWORD = averted LENGTH = 3 RULE = VP → VBD NP PARENT = VP Span properties Rule backoffs Features ... 5 6 7 8 ... LASTWORD = disaster ⌦ FIRSTWORD = averted LASTWORD = disaster PARENT = VP ⌦ ⌦ FIRSTWORD = averted RULE = VP → VBD NP Figure 1: Features computed over the application of the rule VP →VBD NP over the anchored span averted financial disaster with the shown indices. Span properties are generated as described throughout Section 4; they are then conjoined with the rule and just the parent nonterminal to give the features fired over the anchored production. is the word directly preceding the constituent. As illustrated in Figure 1, the actual features of the model are obtained by conjoining surface properties with various abstractions of the rule identity. For rule abstractions, we use two templates: the parent of the rule and the identity of the rule. The surface features are somewhat more involved, and so we introduce them incrementally. One immediate computational and statistical issue arises from the sheer number of possible surface features. There are a great number of spans in a typical treebank; extracting features for every possible combination of span and rule is prohibitive. One simple solution is to only extract features for rule/span pairs that are actually observed in gold annotated examples during training. Because these “positive” features correspond to observed constituents, they are far less numerous than the set of all possible features extracted from all spans. As far as we can tell, all past CRF parsers have used “positive” features only. However, negative features—features that are not observed in any tree—are still powerful indicators of (un)grammaticality: if we have never seen a PRN that starts with “has,” or a span that begins with a quotation mark and ends with a close bracket, then we would like the model to be able to place negative weights on these features. Thus, we use a simple feature hashing scheme where positive features are indexed individually, while nega230 Features Section F1 RULE 4 73.0 + SPAN FIRST WORD + SPAN LAST WORD + LENGTH 4.1 85.0 + WORD BEFORE SPAN + WORD AFTER SPAN 4.2 89.0 + WORD BEFORE SPLIT + WORD AFTER SPLIT 4.3 89.7 + SPAN SHAPE 4.4 89.9 Table 1: Results for the Penn Treebank development set, reported in F1 on sentences of length ≤40 on Section 22, for a number of incrementally growing feature sets. We show that each feature type presented in Section 4 adds benefit over the previous, and in combination they produce a reasonably good yet simple parser. tive features are bucketed together. During training there are no collisions between positive features, which generally receive positive weight, and negative features, which generally receive negative weight; only negative features can collide. Early experiments indicated that using a number of negative buckets equal to the number of positive features was effective. 
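A small sketch may make this framework concrete: surface properties of the anchored span are conjoined with the two rule abstractions (the rule identity and its parent), and any conjunction never observed in a gold tree falls into a fixed pool of hashed "negative" buckets. Only the Figure 1 subset of properties is shown, length bucketing and the rare-word suffix backoff are omitted, and all names here are illustrative rather than taken from the implementation.

def span_properties(words, i, j):
    """Surface properties of span (i, j) -- the Figure 1 subset only.
    (The paper backs rare words off to frequent suffixes; omitted here.)"""
    return [
        "FIRSTWORD=" + words[i],
        "LASTWORD=" + words[j - 1],
        "LENGTH=" + str(j - i),      # the paper buckets lengths; not done here
    ]

def anchored_rule_features(words, rule, parent, i, j):
    """Conjoin each span property with the RULE and PARENT abstractions."""
    feats = []
    for prop in span_properties(words, i, j):
        feats.append(prop + "&RULE=" + rule)
        feats.append(prop + "&PARENT=" + parent)
    return feats

class HashedFeatureIndexer:
    """Positive (seen-in-gold) features get individual indices; everything
    else collides into a fixed number of hashed negative buckets.
    (Python's built-in hash is per-process; a real system would use a
    stable hash function.)"""
    def __init__(self, gold_features, num_negative_buckets):
        self.positive = {f: idx for idx, f in enumerate(sorted(gold_features))}
        self.num_neg = num_negative_buckets

    def index(self, feature):
        if feature in self.positive:
            return self.positive[feature]
        return len(self.positive) + hash(feature) % self.num_neg

# Example: the anchored rule VP -> VBD NP over "averted financial disaster".
words = ["averted", "financial", "disaster"]
feats = anchored_rule_features(words, "VP->VBD_NP", "VP", 0, 3)

Because negative features only ever collide with other negative features, the collisions cannot corrupt the individually indexed positive features, which is the property the hashing scheme relies on.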
4 Features

Our goal is to use surface features to replicate the functionality of other annotations, without increasing the state space of our grammar, meaning that the rules rule(r) remain simple, as does the state space used during inference.

Before we present our main features, we briefly discuss the issue of feature sparsity. While lexical features are a powerful driver of our parser, firing features on rare words would allow it to overfit the training data quite heavily. To that end, for the purposes of computing our features, a word is represented by its longest suffix that occurs 100 or more times in the training data (which will be the entire word, for common words). Experiments with the Brown clusters (Brown et al., 1992) provided by Turian et al. (2010) in lieu of suffixes were not promising; moreover, lowering this threshold did not improve performance.

Table 1 shows the results of incrementally building up our feature set on the Penn Treebank development set. RULE specifies that we use only indicators on rule identity for binary productions and nonterminal unaries. For this experiment and all others, we include a basic set of lexicon features, i.e. features on preterminal part-of-speech tags. A given preterminal unary at position i in the sentence includes features on the words (suffixes) at positions i − 1, i, and i + 1. Because the lexicon is especially sensitive to morphological effects, we also fire features on all prefixes and suffixes of the current word up to length 5, regardless of frequency.

Subsequent lines in Table 1 indicate additional surface feature templates computed over the span, which are then conjoined with the rule identity as shown in Figure 1 to give additional features. In the rest of the section, we describe the features of this type that we use. Note that many of these features have been used before (Taskar et al., 2004; Finkel et al., 2008; Petrov and Klein, 2008b); our goal here is not to amass as many feature templates as possible, but rather to examine the extent to which a simple set of features can replace a complicated state space.

4.1 Basic Span Features

We start with some of the most obvious properties available to us, namely the identity of the first and last words of a span. Because heads of constituents are often at the beginning or the end of a span, these feature templates can (noisily) capture monolexical properties of heads without having to incur the inferential cost of lexicalized annotations. For example, in English, the syntactic head of a verb phrase is typically at the beginning of the span, while the head of a simple noun phrase is the last word. Other languages, like Korean or Japanese, are more consistently head-final.

Structural contexts like those captured by parent annotation (Johnson, 1998) are more subtle. Parent annotation can capture, for instance, the difference in distribution between NPs that have S as a parent (that is, subjects) and NPs under VPs (objects). We try to capture some of this same intuition by introducing a feature on the length of a span. For instance, VPs embedded in NPs tend to be short, usually as embedded gerund phrases. Because constituents in the treebank can be quite long, we bin our length features into 8 buckets, of lengths 1, 2, 3, 4, 5, 10, 20, and ≥21 words.
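To make the lexical backoff and the length binning concrete, here is a minimal sketch. It is our own illustrative code, not the parser's implementation: the function names, the UNK fallback for words with no frequent suffix, and the exact bucketing arithmetic are assumptions; only the thresholds (100 occurrences, affixes up to length 5, the 8 length buckets) come from the text above.

```python
from collections import Counter
import bisect

def frequent_suffixes(training_words, min_count=100):
    """Collect every suffix that occurs at least min_count times in training."""
    counts = Counter()
    for w in training_words:
        for k in range(1, len(w) + 1):
            counts[w[-k:]] += 1
    return {s for s, c in counts.items() if c >= min_count}

def backoff(word, suffixes):
    """Longest frequent suffix of the word; the whole word for common words."""
    for k in range(len(word), 0, -1):
        if word[-k:] in suffixes:
            return word[-k:]
    return "UNK"   # assumption: fallback symbol when no suffix is frequent enough

def affix_features(word, max_len=5):
    """Prefix/suffix indicators up to length 5, fired regardless of frequency."""
    feats = []
    for k in range(1, min(max_len, len(word)) + 1):
        feats.append("PREFIX=" + word[:k])
        feats.append("SUFFIX=" + word[-k:])
    return feats

LENGTH_BINS = [1, 2, 3, 4, 5, 10, 20]   # the 8th bucket holds lengths >= 21

def length_bucket(n):
    """Map a span length onto one of the 8 coarse buckets."""
    return len(LENGTH_BINS) if n > 20 else bisect.bisect_left(LENGTH_BINS, n)

if __name__ == "__main__":
    train = ["walking", "talking", "sing"] * 20 + ["the"] * 60
    sfx = frequent_suffixes(train, min_count=50)   # lower threshold for this toy corpus
    print(backoff("hiking", sfx), backoff("the", sfx))
    print(affix_features("hindered")[:4])
    print([length_bucket(n) for n in (1, 4, 7, 15, 30)])
```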
Adding these simple features (first word, last word, and lengths) as span features of the X-bar grammar already gives us a substantial improvement over our baseline system, improving the parser's performance from 73.0 F1 to 85.0 F1 (see Table 1).

4.2 Span Context Features

Of course, there is no reason why we should confine ourselves to just the words within the span: words outside the span also provide a rich source of context. As an example, consider disambiguating the POS tag of the word read in Figure 2. A VP is most frequently preceded by a subject NP, whose rightmost word is often its head. Therefore, we fire features that (separately) look at the words immediately preceding and immediately following the span.

Figure 2: An example (“no read messages in his inbox”) showing the utility of span context. The ambiguity about whether read is an adjective or a verb is resolved when we construct a VP and notice that the word preceding it is unlikely.

4.3 Split Point Features

Another important source of features are the words at and around the split point of a binary rule application. Figure 3 shows an example of one instance of this feature template. impact is a noun that is more likely to take a PP than other nouns, and so we expect this feature to have high weight and encourage the attachment; this feature proves generally useful in resolving such cases of right-attachments to noun phrases, since the last word of the noun phrase is often the head. As another example, coordination can be represented by an indicator of the conjunction, which comes immediately after the split point. Finally, control structures with infinitival complements can be captured with a rule S → NP VP with the word “to” at the split point.

Figure 3: An example (“has an impact on the market”) showing split point features disambiguating a PP attachment. Because impact is likely to take a PP, the monolexical indicator feature that conjoins impact with the appropriate rule will help us parse this example correctly.

4.4 Span Shape Features

We add one final feature characterizing the span, which we call span shape. Figure 4 shows how this feature is computed (a code sketch also follows Table 2 below). For each word in the span, we indicate whether that word begins with a capital letter, lowercase letter, digit, or punctuation mark; if it begins with punctuation, we indicate the punctuation mark explicitly. For longer spans, we only use words sufficiently close to the span's beginning and end. Figure 4 shows that this is especially useful in characterizing constructions such as parentheticals and quoted expressions. Because this feature indicates capitalization, it can also capture properties of NP-internal structure relevant to named entities, and its sensitivity to capitalization and punctuation makes it useful for recognizing appositive constructions.

Figure 4: Computation of span shape features on two examples: the PRN “( CEO of Enron )” maps to (XxX), and the VP “said , “ Too bad , ”” maps to x,“Xx,”. Parentheticals, quotes, and other punctuation-heavy, short constituents benefit from being explicitly modeled by a descriptor like this.

5 Annotations

We have built up a strong set of features by this point, but have not yet answered the question of whether or not grammar annotation is useful on top of them. In this section, we examine two of the most commonly used types of additional annotation: structural annotation and lexical annotation.

Annotation       Dev, len ≤40
v = 0, h = 0     90.1
v = 1, h = 0     90.5
v = 0, h = 1     90.2
v = 1, h = 1     90.9
Lexicalized      90.3

Table 2: Results for the Penn Treebank development set, sentences of length ≤40, for different annotation schemes implemented on top of the X-bar grammar.
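As a concrete companion to the feature templates of Sections 4.2–4.4, the sketch below shows how these surface properties might be computed before being conjoined with the rule and parent indicators, as in the earlier hashing sketch. This is our own illustrative Python, not the parser's code; the boundary symbols, the feature-string format, and the edge cutoff of 5 words for long spans are assumptions (the paper states only that long spans keep words near their edges).

```python
def span_context_features(sent, i, j):
    """Words immediately before and after the span sent[i:j] (boundary-safe)."""
    before = sent[i - 1] if i > 0 else "<S>"      # assumed boundary symbols
    after = sent[j] if j < len(sent) else "</S>"
    return ["WORD-BEFORE=" + before, "WORD-AFTER=" + after]

def split_point_features(sent, split):
    """Words on either side of the split point of a binary rule application."""
    return ["SPLIT-LEFT=" + sent[split - 1], "SPLIT-RIGHT=" + sent[split]]

def span_shape(sent, i, j, max_edge=5):
    """First-character class of each word: X (capital), x (lowercase), d (digit);
    punctuation marks are kept verbatim. Long spans keep only words near the
    edges; the cutoff of 5 per side is our assumption."""
    words = sent[i:j]
    if len(words) > 2 * max_edge:
        words = words[:max_edge] + words[-max_edge:]
    def cls(w):
        c = w[0]
        return "X" if c.isupper() else "x" if c.islower() else "d" if c.isdigit() else c
    return "SHAPE=" + "".join(cls(w) for w in words)

if __name__ == "__main__":
    sent = '( CEO of Enron ) said'.split()
    print(span_shape(sent, 0, 5))                 # SHAPE=(XxX)
    print(span_context_features(sent, 0, 5))      # before <S>, after "said"
    print(split_point_features(sent, 5))          # ")" and "said"
```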
Recall from Section 3 that every span feature is conjoined with indicators over rules and rule parents to produce features over anchored rule productions; when we consider adding an annotation layer to the grammar, what that does is refine the rule indicators that are conjoined with every span feature. While this is a powerful way of refining features, we show that common successful annotation schemes provide at best modest benefit on top of the base parser. 5.1 Structural Annotation The most basic, well-understood kind of annotation on top of an X-bar grammar is structural annotation, which annotates each nonterminal with properties of its environment (Johnson, 1998; Klein and Manning, 2003). This includes vertical annotation (parent, grandparent, etc.) as well as horizontal annotation (only partially Markovizing rules as opposed to using an X-bar grammar). Table 2 shows the performance of our feature set in grammars with several different levels of structural annotation.3 Klein and Manning (2003) find large gains (6% absolute improvement, 20% relative improvement) going from v = 0, h = 0 to v = 1, h = 1; however, we do not find the same level of benefit. To the extent that our parser needs to make use of extra information in order to apply a rule correctly, simply inspecting the input to determine this information appears to be almost as effective as relying on information threaded through the parser. In Section 6 and Section 7, we use v = 1 and h = 0; we find that v = 1 provides a small, reliable improvement across a range of languages and tasks, whereas other annotations are less clearly beneficial. 3We use v = 0 to indicate no annotation, diverging from the notation in Klein and Manning (2003). Test ≤40 Test all Berkeley 90.6 90.1 This work 89.9 89.2 Table 3: Final Parseval results for the v = 1, h = 0 parser on Section 23 of the Penn Treebank. 5.2 Lexical Annotation Another commonly-used kind of structural annotation is lexicalization (Eisner, 1996; Collins, 1997; Charniak, 1997). By annotating grammar nonterminals with their headwords, the idea is to better model phenomena that depend heavily on the semantics of the words involved, such as coordination and PP attachment. Table 2 shows results from lexicalizing the Xbar grammar; it provides meager improvements. One probable reason for this is that our parser already includes monolexical features that inspect the first and last words of each span, which captures the syntactic or the semantic head in many cases or can otherwise provide information about what the constituent’s type may be and how it is likely to combine. Lexicalization allows us to capture bilexical relationships along dependency arcs, but it has been previously shown that these add only marginal benefit to Collins’s model anyway (Gildea, 2001). 5.3 English Evaluation Finally, Table 3 shows our final evaluation on Section 23 of the Penn Treebank. We use the v = 1, h = 0 grammar. While we do not do as well as the Berkeley parser, we will see in Section 6 that our parser does a substantially better job of generalizing to other languages. 6 Other Languages Historically, many annotation schemes for parsers have required language-specific engineering: for example, lexicalized parsers require a set of head rules and manually-annotated grammars require detailed analysis of the treebank itself (Klein and Manning, 2003). A key strength of a parser that does not rely heavily on an annotated grammar is that it may be more portable to other languages. 
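Since the v = 1, h = 0 configuration from Table 2 is the one carried into Sections 6 and 7, a minimal sketch of what v = 1 (parent) annotation does to a tree may be useful before moving on. The tuple tree encoding, the ^ separator, and the fact that preterminals are annotated along with phrasal categories are our own simplifications, not the parser's implementation.

```python
def parent_annotate(tree, parent="TOP"):
    """v = 1 structural annotation: append the parent label to each nonterminal.
    Trees are (label, child, ...) tuples; leaves are plain word strings."""
    if isinstance(tree, str):                      # a word: leave it unchanged
        return tree
    label, children = tree[0], tree[1:]
    annotated = tuple(parent_annotate(c, parent=label) for c in children)
    return (label + "^" + parent,) + annotated

t = ("S", ("NP", ("PRP", "it")), ("VP", ("VBZ", "is"), ("ADJP", ("JJ", "lethargic"))))
print(parent_annotate(t))
# ('S^TOP', ('NP^S', ('PRP^NP', 'it')),
#  ('VP^S', ('VBZ^VP', 'is'), ('ADJP^VP', ('JJ^ADJP', 'lethargic'))))
```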
We show that this is indeed the case: on nine languages, our system is competitive with or better than the Berkeley parser, which is the best single 233 Arabic Basque French German Hebrew Hungarian Korean Polish Swedish Avg Dev, all lengths Berkeley 78.24 69.17 79.74 81.74 87.83 83.90 70.97 84.11 74.50 78.91 Berkeley-Rep 78.70 84.33 79.68 82.74 89.55 89.08 82.84 87.12 75.52 83.28 Our work 78.89 83.74 79.40 83.28 88.06 87.44 81.85 91.10 75.95 83.30 Test, all lengths Berkeley 79.19 70.50 80.38 78.30 86.96 81.62 71.42 79.23 79.18 78.53 Berkeley-Tags 78.66 74.74 79.76 78.28 85.42 85.22 78.56 86.75 80.64 80.89 Our work 78.75 83.39 79.70 78.43 87.18 88.25 80.18 90.66 82.00 83.17 Table 4: Results for the nine treebanks in the SPMRL 2013 Shared Task; all values are F-scores for sentences of all lengths using the version of evalb distributed with the shared task. Berkeley-Rep is the best single parser from (Bj¨orkelund et al., 2013); we only compare to this parser on the development set because neither the system nor test set values are publicly available. Berkeley-Tags is a version of the Berkeley parser run by the task organizers where tags are provided to the model, and is the best single parser submitted to the official task. In both cases, we match or outperform the baseline parsers in aggregate and on the majority of individual languages. parser4 for the majority of cases we consider. We evaluate on the constituency treebanks from the Statistical Parsing of Morphologically Rich Languages Shared Task (Seddah et al., 2013). We compare to the Berkeley parser (Petrov and Klein, 2007) as well as two variants. First, we use the “Replaced” system of Bj¨orkelund et al. (2013) (Berkeley-Rep), which is their best single parser.5 The “Replaced” system modifies the Berkeley parser by replacing rare words with morphological descriptors of those words computed using language-specific modules, which have been hand-crafted for individual languages or are trained with additional annotation layers in the treebanks that we do not exploit. Unfortunately, Bj¨orkelund et al. (2013) only report results on the development set for the Berkeley-Rep model; however, the task organizers also use a version of the Berkeley parser provided with parts of speech from high-quality POS taggers for each language (Berkeley-Tags). These part-of-speech taggers often incorporate substantial knowledge of each language’s morphology. Both BerkeleyRep and Berkeley-Tags make up for some shortcomings of the Berkeley parser’s unknown word model, which is tuned to English. In Table 4, we see that our performance is overall substantially higher than that of the Berkeley parser. On the development set, we outperform the Berkeley parser and match the performance of the Berkeley-Rep parser. On the test set, we outper4I.e. it does not use a reranking step or post-hoc combination of parser results. 5Their best parser, and the best overall parser from the shared task, is a reranked product of “Replaced” Berkeley parsers. form both the Berkeley parser and the BerkeleyTags parser on seven of nine languages, losing only on Arabic and French. These results suggest that the Berkeley parser may be heavily fit to English, particularly in its lexicon. However, even when language-specific unknown word handling is added to the parser, our model still outperforms the Berkeley parser overall, showing that our model generalizes even better across languages than a parser for which this is touted as a strength (Petrov and Klein, 2007). 
Our span features appear to work well on both head-initial and head-final languages (see Basque and Korean in the table), and the fact that our parser performs well on such morphologicallyrich languages as Hungarian indicates that our suffix model is sufficient to capture most of the morphological effects relevant to parsing. Of course, a language that was heavily prefixing would likely require this feature to be modified. Likewise, our parser does not perform as well on Arabic and Hebrew. These closely related languages use templatic morphology, for which suffixing is not appropriate; however, using additional surface features based on the output of a morphological analyzer did not lead to increased performance. Finally, our high performance on languages such as Polish and Swedish, whose training treebanks consist of 6578 and 5000 sentences, respectively, show that our feature-rich model performs robustly even on treebanks much smaller than the Penn Treebank.6 6The especially strong performance on Polish relative to other systems is partially a result of our model being able to produce unary chains of length two, which occur frequently in the Polish treebank (Bj¨orkelund et al., 2013). 234 While “ Gangs ” is never lethargic , it is hindered by its plot . 4 1 2 2 → (4 While...) 1 Figure 5: An example of a sentence from the Stanford Sentiment Treebank which shows the utility of our span features for this task. The presence of “While” under this kind of rule tells us that the sentiment of the constituent to the right dominates the sentiment to the left. 7 Sentiment Analysis Finally, because the system is, at its core, a classifier of spans, it can be used equally well for tasks that do not normally use parsing algorithms. One example is sentiment analysis. While approaches to sentiment analysis often simply classify the sentence monolithically, treating it as a bag of ngrams (Pang et al., 2002; Pang and Lee, 2005; Wang and Manning, 2012), the recent dataset of Socher et al. (2013) imposes a layer of structure on the problem that we can exploit. They annotate every constituent in a number of training trees with an integer sentiment value from 1 (very negative) to 5 (very positive), opening the door for models such as ours to learn how syntax can structurally affect sentiment.7 Figure 5 shows an example that requires some analysis of sentence structure to correctly understand. The first constituent conveys positive sentiment with never lethargic and the second conveys negative sentiment with hindered, but to determine the overall sentiment of the sentence, we need to exploit the fact that while signals a discounting of the information that follows it. The grammar rule 2 →4 1 already encodes the notion of the sentiment of the right child being dominant, so when this is conjoined with our span feature on the first word (While), we end up with a feature that captures this effect. Our features can also lexicalize on other discourse connectives such as but or however, which often occur at the split point between two spans. 7Note that the tree structure is assumed to be given; the problem is one of labeling a fixed parse backbone. 7.1 Adapting to Sentiment Our parser is almost entirely unchanged from the parser that we used for syntactic analysis. 
Though the treebank grammar is substantially different, with the nonterminals consisting of five integers with very different semantics from syntactic nonterminals, we still find that parent annotation is effective and otherwise additional annotation layers are not useful. One structural difference between sentiment analysis and syntactic parsing lies in where the relevant information is present in a span. Syntax is often driven by heads of constituents, which tend to be located at the beginning or the end, whereas sentiment is more likely to depend on modifiers such as adjectives, which are typically present in the middle of spans. Therefore, we augment our existing model with standard sentiment analysis features that look at unigrams and bigrams in the span (Wang and Manning, 2012). Moreover, the Stanford Sentiment Treebank is unique in that each constituent was annotated in isolation, meaning that context never affects sentiment and that every word always has the same tag. We exploit this by adding an additional feature template similar to our span shape feature from Section 4.4 which uses the (deterministic) tag for each word as its descriptor. 7.2 Results We evaluated our model on the fine-grained sentiment analysis task presented in Socher et al. (2013) and compare to their released system. The task is to predict the root sentiment label of each parse tree; however, because the data is annotated with sentiment at each span of each parse tree, we can also evaluate how well our model does at these intermediate computations. Following their experimental conditions, we filter the test set so that it only contains trees with non-neutral sentiment labels at the root. Table 5 shows that our model outperforms the model of Socher et al. (2013)—both the published numbers and latest released version—on the task of root classification, even though the system was not explicitly designed for this task. Their model has high capacity to model complex interactions of words through a combinatory tensor, but it appears that our simpler, feature-driven model is just as effective at capturing the key effects of compositionality for sentiment analysis. 235 Root All Spans Non-neutral Dev (872 trees) Stanford CoreNLP current 50.7 80.8 This work 53.1 80.5 Non-neutral Test (1821 trees) Stanford CoreNLP current 49.1 80.2 Stanford EMNLP 2013 45.7 80.7 This work 49.6 80.4 Table 5: Fine-grained sentiment analysis results on the Stanford Sentiment Treebank of Socher et al. (2013). We compare against the printed numbers in Socher et al. (2013) as well as the performance of the corresponding release, namely the sentiment component in the latest version of the Stanford CoreNLP at the time of this writing. Our model handily outperforms the results from Socher et al. (2013) at root classification and edges out the performance of the latest version of the Stanford system. On all spans of the tree, our model has comparable accuracy to the others. 8 Conclusion To date, the most successful constituency parsers have largely been generative, and operate by refining the grammar either manually or automatically so that relevant information is available locally to each parsing decision. Our main contribution is to show that there is an alternative to such annotation schemes: namely, conditioning on the input and firing features based on anchored spans. We build up a small set of feature templates as part of a discriminative constituency parser and outperform the Berkeley parser on a wide range of languages. 
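A rough sketch of the sentiment-specific additions described in Section 7.1: unigram and bigram indicators over the words in a span, plus a shape-like descriptor built from each word's deterministic sentiment tag. The feature-string format and the toy lexicon are ours; in the actual setup the per-word tags are read off the Stanford Sentiment Treebank rather than a hand-built dictionary, and the resulting features are conjoined with rule indicators exactly as before.

```python
def sentiment_span_features(sent, i, j, word_tag=None):
    """Unigrams, bigrams, and (optionally) a deterministic-tag shape descriptor
    for the span sent[i:j]; these complement the syntactic span features."""
    span = sent[i:j]
    feats = ["UNI=" + w for w in span]
    feats += ["BI=" + a + "_" + b for a, b in zip(span, span[1:])]
    if word_tag is not None:
        # assumption: unknown words default to the neutral tag 3 on the 1-5 scale
        feats.append("TAGSHAPE=" + "".join(str(word_tag.get(w, 3)) for w in span))
    return feats

toy_lexicon = {"lethargic": 2, "never": 3, "hindered": 2, "is": 3, "it": 3}
print(sentiment_span_features("it is never lethargic".split(), 1, 4, toy_lexicon))
```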
Moreover, we show that our parser is adaptable to other tree-structured tasks such as sentiment analysis; we outperform the recent system of Socher et al. (2013) and obtain state of the art performance on their dataset. Our system is available as open-source at https://www.github.com/dlwh/epic. Acknowledgments This work was partially supported by BBN under DARPA contract HR0011-12-C-0014, by a Google PhD fellowship to the first author, and an NSF fellowship to the second. We further gratefully acknowledge a hardware donation by NVIDIA Corporation. References Anders Bj¨orkelund, Ozlem Cetinoglu, Rich´ard Farkas, Thomas Mueller, and Wolfgang Seeker. 2013. (Re)ranking Meets Morphosyntax: State-of-the-art Results from the SPMRL 2013 Shared Task. In Proceedings of the Fourth Workshop on Statistical Parsing of Morphologically-Rich Languages. Rens Bod. 1993. Using an Annotated Corpus As a Stochastic Grammar. In Proceedings of the Sixth Conference on European Chapter of the Association for Computational Linguistics. Peter F Brown, Peter V Desouza, Robert L Mercer, Vincent J Della Pietra, and Jenifer C Lai. 1992. Class-based n-gram models of natural language. Computational linguistics, 18(4):467–479. Eugene Charniak and Mark Johnson. 2005. Coarseto-fine N-best Parsing and MaxEnt Discriminative Reranking. In Proceedings of the 43rd Annual Meeting on Association for Computational Linguistics. Eugene Charniak. 1997. Statistical Techniques for Natural Language Parsing. AI Magazine, 18:33–44. Michael Collins and Terry Koo. 2005. Discriminative Reranking for Natural Language Parsing. Computational Linguistics, 31(1):25–70, March. Michael Collins. 1997. Three generative, lexicalised models for statistical parsing. In ACL, pages 16–23. John Duchi, Elad Hazan, and Yoram Singer. 2010. Adaptive Subgradient Methods for Online Learning and Stochastic Optimization. COLT. Jason Eisner. 1996. Three New Probabilistic Models for Dependency Parsing: An Exploration. In Proceedings of the 16th International Conference on Computational Linguistics (COLING-96). Jenny Rose Finkel, Alex Kleeman, and Christopher D. Manning. 2008. Efficient, feature-based, conditional random field parsing. In ACL 2008, pages 959–967. Daniel Gildea. 2001. Corpus variation and parser performance. In Proceedings of Empirical Methods in Natural Language Processing. David Hall and Dan Klein. 2012. Training factored PCFGs with expectation propagation. In EMNLP. James Henderson. 2003. Inducing History Representations for Broad Coverage Statistical Parsing. In Proceedings of the North American Chapter of the Association for Computational Linguistics on Human Language Technology - Volume 1. Liang Huang. 2008. Forest reranking: Discriminative parsing with non-local features. In Proceedings of ACL-08: HLT, pages 586–594, Columbus, Ohio, June. Association for Computational Linguistics. 236 Mark Johnson. 1998. PCFG Models of Linguistic Tree Representations. Computational Linguistics, 24(4):613–632, December. Dan Klein and Christopher D. Manning. 2003. Accurate unlexicalized parsing. In ACL, pages 423–430. John D. Lafferty, Andrew McCallum, and Fernando C. N. Pereira. 2001. Conditional Random Fields: Probabilistic Models for Segmenting and Labeling Sequence Data. In Proceedings of the Eighteenth International Conference on Machine Learning. Takuya Matsuzaki, Yusuke Miyao, and Jun’ichi Tsujii. 2005. Probabilistic CFG with latent annotations. In ACL, pages 75–82, Morristown, NJ, USA. Bo Pang and Lillian Lee. 2005. 
Seeing Stars: Exploiting Class Relationships for Sentiment Categorization with Respect to Rating Scales. In Proceedings of the 43rd Annual Meeting on Association for Computational Linguistics. Bo Pang, Lillian Lee, and Shivakumar Vaithyanathan. 2002. Thumbs Up?: Sentiment Classification Using Machine Learning Techniques. In Proceedings of the ACL-02 Conference on Empirical Methods in Natural Language Processing - Volume 10. Slav Petrov and Dan Klein. 2007. Improved inference for unlexicalized parsing. In NAACL-HLT. Slav Petrov and Dan Klein. 2008a. Discriminative log-linear grammars with latent variables. In NIPS, pages 1153–1160. Slav Petrov and Dan Klein. 2008b. Sparse multi-scale grammars for discriminative latent variable parsing. In Proceedings of the 2008 Conference on Empirical Methods in Natural Language Processing, pages 867–876, Honolulu, Hawaii, October. Association for Computational Linguistics. Slav Petrov, Leon Barrett, Romain Thibaux, and Dan Klein. 2006. Learning accurate, compact, and interpretable tree annotation. In Proceedings of the 21st International Conference on Computational Linguistics and 44th Annual Meeting of the Association for Computational Linguistics, pages 433–440, Sydney, Australia, July. Djam´e Seddah, Reut Tsarfaty, Sandra K¨ubler, Marie Candito, Jinho D. Choi, Rich´ard Farkas, Jennifer Foster, Iakes Goenaga, Koldo Gojenola Galletebeitia, Yoav Goldberg, Spence Green, Nizar Habash, Marco Kuhlmann, Wolfgang Maier, Joakim Nivre, Adam Przepi´orkowski, Ryan Roth, Wolfgang Seeker, Yannick Versley, Veronika Vincze, Marcin Woli´nski, and Alina Wr´oblewska. 2013. Overview of the SPMRL 2013 Shared Task: A Cross-Framework Evaluation of Parsing Morphologically Rich Languages. In Proceedings of the Fourth Workshop on Statistical Parsing of Morphologically-Rich Languages. Khalil Sima’an. 2000. Tree-gram Parsing Lexical Dependencies and Structural Relations. In Proceedings of the 38th Annual Meeting on Association for Computational Linguistics. Richard Socher, Alex Perelygin, Jean Wu, Jason Chuang, Christopher D. Manning, Andrew Ng, and Christopher Potts. 2013. Recursive Deep Models for Semantic Compositionality Over a Sentiment Treebank. In Proceedings of Empirical Methods in Natural Language Processing. Ben Taskar, Dan Klein, Michael Collins, Daphne Koller, and Christopher Manning. 2004. MaxMargin Parsing. In In Proceedings of Empirical Methods in Natural Language Processing. Kristina Toutanova, Dan Klein, Christopher D. Manning, and Yoram Singer. 2003. Feature-rich Partof-speech Tagging with a Cyclic Dependency Network. In Proceedings of the North American Chapter of the Association for Computational Linguistics on Human Language Technology - Volume 1. Joseph Turian, Lev Ratinov, and Yoshua Bengio. 2010. Word representations: a simple and general method for semi-supervised learning. In Proceedings of the 48th Annual Meeting of the Association for Computational Linguistics, pages 384–394. Association for Computational Linguistics. Sida Wang and Christopher Manning. 2012. Baselines and Bigrams: Simple, Good Sentiment and Topic Classification. In Proceedings of the 50th Annual Meeting of the Association for Computational Linguistics (Volume 2: Short Papers). 237
Proceedings of the 52nd Annual Meeting of the Association for Computational Linguistics, pages 238–247, Baltimore, Maryland, USA, June 23-25 2014. c⃝2014 Association for Computational Linguistics Don’t count, predict! A systematic comparison of context-counting vs. context-predicting semantic vectors Marco Baroni and Georgiana Dinu and Germ´an Kruszewski Center for Mind/Brain Sciences (University of Trento, Italy) (marco.baroni|georgiana.dinu|german.kruszewski)@unitn.it Abstract Context-predicting models (more commonly known as embeddings or neural language models) are the new kids on the distributional semantics block. Despite the buzz surrounding these models, the literature is still lacking a systematic comparison of the predictive models with classic, count-vector-based distributional semantic approaches. In this paper, we perform such an extensive evaluation, on a wide range of lexical semantics tasks and across many parameter settings. The results, to our own surprise, show that the buzz is fully justified, as the context-predicting models obtain a thorough and resounding victory against their count-based counterparts. 1 Introduction A long tradition in computational linguistics has shown that contextual information provides a good approximation to word meaning, since semantically similar words tend to have similar contextual distributions (Miller and Charles, 1991). In concrete, distributional semantic models (DSMs) use vectors that keep track of the contexts (e.g., co-occurring words) in which target terms appear in a large corpus as proxies for meaning representations, and apply geometric techniques to these vectors to measure the similarity in meaning of the corresponding words (Clark, 2013; Erk, 2012; Turney and Pantel, 2010). It has been clear for decades now that raw cooccurrence counts don’t work that well, and DSMs achieve much higher performance when various transformations are applied to the raw vectors, for example by reweighting the counts for context informativeness and smoothing them with dimensionality reduction techniques. This vector optimization process is generally unsupervised, and based on independent considerations (for example, context reweighting is often justified by information-theoretic considerations, dimensionality reduction optimizes the amount of preserved variance, etc.). Occasionally, some kind of indirect supervision is used: Several parameter settings are tried, and the best setting is chosen based on performance on a semantic task that has been selected for tuning. The last few years have seen the development of a new generation of DSMs that frame the vector estimation problem directly as a supervised task, where the weights in a word vector are set to maximize the probability of the contexts in which the word is observed in the corpus (Bengio et al., 2003; Collobert and Weston, 2008; Collobert et al., 2011; Huang et al., 2012; Mikolov et al., 2013a; Turian et al., 2010). The traditional construction of context vectors is turned on its head: Instead of first collecting context vectors and then reweighting these vectors based on various criteria, the vector weights are directly set to optimally predict the contexts in which the corresponding words tend to appear. Since similar words occur in similar contexts, the system naturally learns to assign similar vectors to similar words. 
This new way to train DSMs is attractive because it replaces the essentially heuristic stacking of vector transforms in earlier models with a single, well-defined supervised learning step. At the same time, supervision comes at no manual annotation cost, given that the context windows used for training can be automatically extracted from an unannotated corpus (indeed, they are the very same data used to build traditional DSMs). Moreover, at least some of the relevant methods can efficiently scale up to process very large amounts of input data.1 1The idea to directly learn a parameter vector based on an objective optimum function is shared by Latent Dirichlet 238 We will refer to DSMs built in the traditional way as count models (since they initialize vectors with co-occurrence counts), and to their trainingbased alternative as predict(ive) models.2 Now, the most natural question to ask, of course, is which of the two approaches is best in empirical terms. Surprisingly, despite the long tradition of extensive evaluations of alternative count DSMs on standard benchmarks (Agirre et al., 2009; Baroni and Lenci, 2010; Bullinaria and Levy, 2007; Bullinaria and Levy, 2012; Sahlgren, 2006; Pad´o and Lapata, 2007), the existing literature contains very little in terms of direct comparison of count vs. predictive DSMs. This is in part due to the fact that context-predicting vectors were first developed as an approach to language modeling and/or as a way to initialize feature vectors in neuralnetwork-based “deep learning” NLP architectures, so their effectiveness as semantic representations was initially seen as little more than an interesting side effect. Sociological reasons might also be partly responsible for the lack of systematic comparisons: Context-predictive models were developed within the neural-network community, with little or no awareness of recent DSM work in computational linguistics. Whatever the reasons, we know of just three works reporting direct comparisons, all limited in their scope. Huang et al. (2012) compare, in passing, one count model and several predict DSMs on the standard WordSim353 benchmark (Table 3 of their paper). In this experiment, the count model actually outperforms the best predictive approach. Instead, in a word-similarity-in-context task (Table 5), the best predict model outperforms the count model, albeit not by a large margin. Blacoe and Lapata (2012) compare count and predict representations as input to composition functions. Count vectors make for better inputs in a phrase similarity task, whereas the two representations are comparable in a paraphrase classification experiment.3 Allocation (LDA) models (Blei et al., 2003; Griffiths et al., 2007), where parameters are set to optimize the joint probability distribution of words and documents. However, the fully probabilistic LDA models have problems scaling up to large data sets. 2We owe the first term to Hinrich Sch¨utze (p.c.). Predictive DSMs are also called neural language models, because their supervised context prediction training is performed with neural networks, or, more cryptically, “embeddings”. 3We refer here to the updated results reported in the erratum at http://homepages.inf.ed.ac.uk/ s1066731/pdf/emnlp2012erratum.pdf Finally, Mikolov et al. (2013d) compare their predict models to “Latent Semantic Analysis” (LSA) count vectors on syntactic and semantic analogy tasks, finding that the predict models are highly superior. 
However, they provide very little details about the LSA count vectors they use.4 In this paper, we overcome the comparison scarcity problem by providing a direct evaluation of count and predict DSMs across many parameter settings and on a large variety of mostly standard lexical semantics benchmarks. Our title already gave away what we discovered. 2 Distributional semantic models Both count and predict models are extracted from a corpus of about 2.8 billion tokens constructed by concatenating ukWaC,5 the English Wikipedia6 and the British National Corpus.7 For both model types, we consider the top 300K most frequent words in the corpus both as target and context elements. 2.1 Count models We prepared the count models using the DISSECT toolkit.8 We extracted count vectors from symmetric context windows of two and five words to either side of target. We considered two weighting schemes: positive Pointwise Mutual Information and Local Mutual Information (akin to the widely used Log-Likelihood Ratio scheme) (Evert, 2005). We used both full and compressed vectors. The latter were obtained by applying the Singular Value Decomposition (Golub and Van Loan, 1996) or Non-negative Matrix Factorization (Lee and Seung, 2000), Lin (2007) algorithm, with reduced sizes ranging from 200 to 500 in steps of 100. In total, 36 count models were evaluated. Count models have such a long and rich history that we can only explore a small subset of the counting, weighting and compressing methods proposed in the literature. However, it is worth pointing out that the evaluated parameter subset encompasses settings (narrow context window, positive PMI, SVD reduction) that have been 4Chen et al. (2013) present an extended empirical evaluation, that is however limited to alternative context-predictive models, and does not include the word2vec variant we use here. 5http://wacky.sslmit.unibo.it 6http://en.wikipedia.org 7http://www.natcorp.ox.ac.uk 8http://clic.cimec.unitn.it/composes/ toolkit/ 239 found to be most effective in the systematic explorations of the parameter space conducted by Bullinaria and Levy (2007; 2012). 2.2 Predict models We trained our predict models with the word2vec toolkit.9 The toolkit implements both the skipgram and CBOW approaches of Mikolov et al. (2013a; 2013c). We experimented only with the latter, which is also the more computationallyefficient model of the two, following Mikolov et al. (2013b) which recommends CBOW as more suitable for larger datasets. The CBOW model learns to predict the word in the middle of a symmetric window based on the sum of the vector representations of the words in the window. We considered context windows of 2 and 5 words to either side of the central element. We vary vector dimensionality within the 200 to 500 range in steps of 100. The word2vec toolkit implements two efficient alternatives to the standard computation of the output word probability distributions by a softmax classifier. Hierarchical softmax is a computationally efficient way to estimate the overall probability distribution using an output layer that is proportional to log(unigram.perplexity(W)) instead of W (for W the vocabulary size). As an alternative, negative sampling estimates the probability of an output word by learning to distinguish it from draws from a noise distribution. The number of these draws (number of negative samples) is given by a parameter k. We test both hierarchical softmax and negative sampling with k values of 5 and 10. 
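The experiments here use the word2vec C toolkit directly; purely as an illustration of the configuration space being explored (CBOW, 2- or 5-word windows, 200-500 dimensions, hierarchical softmax vs. negative sampling with k = 5 or 10), the following shows roughly how one setting might be instantiated in the gensim reimplementation. This is not the authors' setup: gensim is a different implementation, the parameter names follow gensim 4.x, the toy corpus and min_count value are ours, and the subsampling value anticipates the next paragraph.

```python
from gensim.models import Word2Vec

# A toy corpus; the paper uses ~2.8 billion tokens and the 300K most frequent words.
corpus = [["distributional", "semantics", "uses", "context", "vectors"],
          ["similar", "words", "occur", "in", "similar", "contexts"]] * 100

model = Word2Vec(
    sentences=corpus,
    sg=0,              # CBOW rather than skip-gram
    vector_size=400,   # dimensionality, varied 200-500 in the experiments
    window=5,          # symmetric context window (2 or 5 in the experiments)
    hs=0,              # no hierarchical softmax ...
    negative=10,       # ... negative sampling with k = 10 instead
    sample=1e-5,       # subsampling of very frequent words (discussed next)
    min_count=1,       # toy setting; real runs restrict the vocabulary instead
    workers=2,
)

print(model.wv.most_similar("context", topn=3))
```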
Very frequent words such as the or a are not very informative as context features. The word2vec toolkit implements a method to downsize their effect (and simultaneously improve speed performance). More precisely, words in the training data are discarded with a probability that is proportional to their frequency (capturing the same intuition that motivates traditional count vector weighting measures such as PMI). This is controlled by a parameter t and words that occur with higher frequency than t are aggressively subsampled. We train models without subsampling and with subsampling at t = 1e−5 (the toolkit page suggests 1e−3 −1e−5 as a useful range based on empirical observations). In total, we evaluate 48 predict models, a num9https://code.google.com/p/word2vec/ ber comparable to that of the count models we consider. 2.3 Out-of-the-box models Baroni and Lenci (2010) make the vectors of their best-performing Distributional Memory (dm) model available.10 This model, based on the same input corpus we use, exemplifies a “linguistically rich” count-based DSM, that relies on lemmas instead or raw word forms, and has dimensions that encode the syntactic relations and/or lexicosyntactic patterns linking targets and contexts. Baroni and Lenci showed, in a large scale evaluation, that dm reaches near-state-of-the-art performance in a variety of semantic tasks. We also experiment with the popular predict vectors made available by Ronan Collobert.11 Following the earlier literature, with refer to them as Collobert and Weston (cw) vectors. These are 100-dimensional vectors trained for two months (!) on the Wikipedia. In particular, the vectors were trained to optimize the task of choosing the right word over a random alternative in the middle of an 11-word context window (Collobert et al., 2011). 3 Evaluation materials We test our models on a variety of benchmarks, most of them already widely used to test and compare DSMs. The following benchmark descriptions also explain the figures of merit and stateof-the-art results reported in Table 2. Semantic relatedness A first set of semantic benchmarks was constructed by asking human subjects to rate the degree of semantic similarity or relatedness between two words on a numerical scale. The performance of a computational model is assessed in terms of correlation between the average scores that subjects assigned to the pairs and the cosines between the corresponding vectors in the model space (following the previous art, we use Pearson correlation for rg, Spearman in all other cases). The classic data set of Rubenstein and Goodenough (1965) (rg) consists of 65 noun pairs. State of the art performance on this set has been reported by Hassan and Mihalcea (2011) using a technique that exploits the Wikipedia linking structure and word sense disambiguation techniques. Finkelstein et al. (2002) 10http://clic.cimec.unitn.it/dm/ 11http://ronan.collobert.com/senna/ 240 introduced the widely used WordSim353 set (ws) that, as the name suggests, consists of 353 pairs. The current state of the art is reached by Halawi et al. (2012) with a method that is in the spirit of the predict models, but lets synonymy information from WordNet constrain the learning process (by favoring solutions in which WordNet synonyms are near in semantic space). Agirre et al. (2009) split the ws set into similarity (wss) and relatedness (wsr) subsets. 
The first contains tighter taxonomic relations, such as synonymy and cohyponymy (king/queen) whereas the second encompasses broader, possibly topical or syntagmatic relations (family/planning). We report stateof-the-art performance on the two subsets from the work of Agirre and colleagues, who used different kinds of count vectors extracted from a very large corpus (orders of magnitude larger than ours). Finally, we use (the test section of) MEN (men), that comprises 1,000 word pairs. Bruni et al. (2013), the developers of this benchmark, achieve state-ofthe-art performance by extensive tuning on ad-hoc training data, and by using both textual and imageextracted features to represent word meaning. Synonym detection The classic TOEFL (toefl) set was introduced by Landauer and Dumais (1997). It contains 80 multiple-choice questions that pair a target term with 4 synonym candidates. For example, for the target levied one must choose between imposed (correct), believed, requested and correlated. The DSMs compute cosines of each candidate vector with the target, and pick the candidate with largest cosine as their answer. Performance is evaluated in terms of correct-answer accuracy. Bullinaria and Levy (2012) achieved 100% accuracy by a very thorough exploration of the count model parameter space. Concept categorization Given a set of nominal concepts, the task is to group them into natural categories (e.g., helicopters and motorcycles should go to the vehicle class, dogs and elephants into the mammal class). Following previous art, we tackle categorization as an unsupervised clustering task. The vectors produced by a model are clustered into n groups (with n determined by the gold standard partition) using the CLUTO toolkit (Karypis, 2003), with the repeated bisections with global optimization method and CLUTO’s default settings otherwise (these are standard choices in the literature). Performance is evaluated in terms of purity, a measure of the extent to which each cluster contains concepts from a single gold category. If the gold partition is reproduced perfectly, purity reaches 100%; it approaches 0 as cluster quality deteriorates. The Almuhareb-Poesio (ap) benchmark contains 402 concepts organized into 21 categories (Almuhareb, 2006). State-of-the-art purity was reached by Rothenh¨ausler and Sch¨utze (2009) with a count model based on carefully crafted syntactic links. The ESSLLI 2008 Distributional Semantic Workshop shared-task set (esslli) contains 44 concepts to be clustered into 6 categories (Baroni et al., 2008) (we ignore here the 3- and 2way higher-level partitions coming with this set). Katrenko and Adriaans (2008) reached top performance on this set using the full Web as a corpus and manually crafted, linguistically motivated patterns. Finally, the Battig (battig) test set introduced by Baroni et al. (2010) includes 83 concepts from 10 categories. Current state of the art was reached by the window-based count model of Baroni and Lenci (2010). Selectional preferences We experiment with two data sets that contain verb-noun pairs that were rated by subjects for the typicality of the noun as a subject or object of the verb (e.g., people received a high average score as subject of to eat, and a low score as object of the same verb). 
We follow the procedure proposed by Baroni and Lenci (2010) to tackle this challenge: For each verb, we use the corpus-based tuples they make available to select the 20 nouns that are most strongly associated to the verb as subjects or objects, and we average the vectors of these nouns to obtain a “prototype” vector for the relevant argument slot. We then measure the cosine of the vector for a target noun with the relevant prototype vector (e.g., the cosine of people with the eating subject prototype vector). Systems are evaluated by Spearman correlation of these cosines with the averaged human typicality ratings. Our first data set was introduced by Ulrike Pad´o (2007) and includes 211 pairs (up). Top-performance was reached by the supervised count vector system of Herda˘gdelen and Baroni (2009) (supervised in the sense that they directly trained a classifier on gold data, as opposed to the 0-cost supervision of the context-learning methods). The mcrae set (McRae et al., 1998) consists of 100 noun–verb pairs, with top performance reached by the DepDM system of Baroni and Lenci (2010), a count DSM relying on 241 syntactic information. Analogy While all the previous data sets are relatively standard in the DSM field to test traditional count models, our last benchmark was introduced in Mikolov et al. (2013a) specifically to test predict models. The data-set contains about 9K semantic and 10.5K syntactic analogy questions. A semantic question gives an example pair (brothersister), a test word (grandson) and asks to find another word that instantiates the relation illustrated by the example with respect to the test word (granddaughter). A syntactic question is similar, but in this case the relationship is of a grammatical nature (work–works, speak. . . speaks). Mikolov and colleagues tackle the challenge by subtracting the second example term vector from the first, adding the test term, and looking for the nearest neighbour of the resulting vector (what is the nearest neighbour of ⃗ brother − ⃗ sister + ⃗ grandson?). Systems are evaluated in terms of proportion of questions where the nearest neighbour from the whole semantic space is the correct answer (the given example and test vector triples are excluded from the nearest neighbour search). Mikolov et al. (2013a) reach top accuracy on the syntactic subset (ansyn) with a CBOW predict model akin to ours (but trained on a corpus twice as large). Top accuracy on the entire data set (an) and on the semantic subset (ansem) was reached by Mikolov et al. (2013c) using a skip-gram predict model. Note however that, because of the way the task is framed, performance also depends on the size of the vocabulary to be searched: Mikolov et al. (2013a) pick the nearest neighbour among vectors for 1M words, Mikolov et al. (2013c) among 700K words, and we among 300K words. Some characteristics of the benchmarks we use are summarized in Table 1. 4 Results Table 2 summarizes the evaluation results. The first block of the table reports the maximum pertask performance (across all considered parameter settings) for count and predict vectors. The latter emerge as clear winners, with a large margin over count vectors in most tasks. Indeed, the predictive models achieve an impressive overall performance, beating the current state of the art in several cases, and approaching it in many more. 
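Before looking at the numbers in detail, the two figure-of-merit computations that recur above can be made concrete with a short sketch: Spearman correlation between model cosines and human relatedness ratings, and the vector-offset analogy search (written here in the standard b − a + c arrangement, i.e. the relation vector sister − brother added to the test word grandson). The code and the toy vectors are our own illustration, included only to make the snippet runnable.

```python
import numpy as np
from scipy.stats import spearmanr

def cosine(u, v):
    return float(u @ v / (np.linalg.norm(u) * np.linalg.norm(v)))

def relatedness_correlation(vectors, pairs, human_ratings):
    """Spearman correlation of model cosines with averaged human ratings."""
    cosines = [cosine(vectors[a], vectors[b]) for a, b in pairs]
    rho, _ = spearmanr(cosines, human_ratings)
    return rho

def analogy(vectors, a, b, c):
    """Nearest neighbour of b - a + c, excluding the three query words."""
    target = vectors[b] - vectors[a] + vectors[c]
    candidates = ((w, cosine(target, v)) for w, v in vectors.items()
                  if w not in (a, b, c))
    return max(candidates, key=lambda x: x[1])[0]

toy = {"brother": np.array([1.0, 0.0]), "sister": np.array([1.0, 1.0]),
       "grandson": np.array([2.0, 0.0]), "granddaughter": np.array([2.0, 1.0]),
       "car": np.array([-3.0, 0.5])}
print(analogy(toy, "brother", "sister", "grandson"))        # granddaughter
print(relatedness_correlation(
    toy,
    [("brother", "sister"), ("brother", "car"), ("sister", "granddaughter")],
    [8.0, 1.0, 7.0]))
```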
It is worth stressing that, as reviewed in Section 3, the state-of-the-art results were obtained in almost all cases using specialized approaches that rely on external knowledge, manually-crafted rules, parsing, larger corpora and/or task-specific tuning. Our predict results were instead achieved by simply downloading the word2vec toolkit and running it with a range of parameter choices recommended by the toolkit developers. The success of the predict models cannot be blamed on poor performance of the count models. Besides the fact that this would not explain the near-state-of-the-art performance of the predict vectors, the count model results are actually quite good in absolute terms. Indeed, in several cases they are close, or even better than those attained by dm, a linguistically-sophisticated countbased approach that was shown to reach top performance across a variety of tasks by Baroni and Lenci (2010). Interestingly, count vectors achieve performance comparable to that of predict vectors only on the selectional preference tasks. The up task in particular is also the only benchmark on which predict models are seriously lagging behind stateof-the-art and dm performance. Recall from Section 3 that we tackle selectional preference by creating average vectors representing typical verb arguments. We conjecture that this averaging approach, that worked well for dm vectors, might be problematic for prediction-trained vectors, and we plan to explore alternative methods to build the prototypes in future research. Are our results robust to parameter choices, or are they due to very specific and brittle settings? The next few blocks of Table 2 address this question. The second block reports results obtained with single count and predict models that are best in terms of average performance rank across tasks (these are the models on the top rows of tables 3 and 4, respectively). We see that, for both approaches, performance is not seriously affected by using the single best setup rather than task-specific settings, except for a considerable drop in performance for the best predict model on esslli (due to the small size of this data set?), and an even more dramatic drop of the count model on ansem. A more cogent and interesting evaluation is reported in the third block of Table 2, where we see what happens if we use the single models with worst performance across tasks (recall from Section 2 above that, in any case, we are exploring a space of reasonable parameter settings, of the sort that an 242 name task measure source soa rg relatedness Pearson Rubenstein and Goodenough Hassan and Mihalcea (2011) (1965) ws relatedness Spearman Finkelstein et al. (2002) Halawi et al. (2012) wss relatedness Spearman Agirre et al. (2009) Agirre et al. (2009) wsr relatedness Spearman Agirre et al. (2009) Agirre et al. (2009) men relatedness Spearman Bruni et al. (2013) Bruni et al. (2013) toefl synonyms accuracy Landauer and Dumais Bullinaria and Levy (2012) (1997) ap categorization purity Almuhareb (2006) Rothenh¨ausler and Sch¨utze (2009) esslli categorization purity Baroni et al. (2008) Katrenko and Adriaans (2008) battig categorization purity Baroni et al. (2010) Baroni and Lenci (2010) up sel pref Spearman Pad´o (2007) Herda˘gdelen and Baroni (2009) mcrae sel pref Spearman McRae et al. (1998) Baroni and Lenci (2010) an analogy accuracy Mikolov et al. (2013a) Mikolov et al. (2013c) ansyn analogy accuracy Mikolov et al. (2013a) Mikolov et al. (2013a) ansem analogy accuracy Mikolov et al. (2013a) Mikolov et al. 
(2013c) Table 1: Benchmarks used in experiments, with type of task, figure of merit (measure), original reference (source) and reference to current state-of-the-art system (soa). rg ws wss wsr men toefl ap esslli battig up mcrae an ansyn ansem best setup on each task cnt 74 62 70 59 72 76 66 84 98 41 27 49 43 60 pre 84 75 80 70 80 91 75 86 99 41 28 68 71 66 best setup across tasks cnt 70 62 70 57 72 76 64 84 98 37 27 43 41 44 pre 83 73 78 68 80 86 71 77 98 41 26 67 69 64 worst setup across tasks cnt 11 16 23 4 21 49 24 43 38 -6 -10 1 0 1 pre 74 60 73 48 68 71 65 82 88 33 20 27 40 10 best setup on rg cnt (74) 59 66 52 71 64 64 84 98 37 20 35 42 26 pre (84) 71 76 64 79 85 72 84 98 39 25 66 70 61 other models soa 86 81 77 62 76 100 79 91 96 60 32 61 64 61 dm 82 35 60 13 42 77 76 84 94 51 29 NA NA NA cw 48 48 61 38 57 56 58 61 70 28 15 11 12 9 Table 2: Performance of count (cnt), predict (pre), dm and cw models on all tasks. See Section 3 and Table 1 for figures of merit and state-of-the-art results (soa). Since dm has very low coverage of the an* data sets, we do not report its performance there. 243 experimenter might be tempted to choose without tuning). The count model performance is severely affected by this unlucky choice (2-word window, Local Mutual Information, NMF, 400 dimensions, mean performance rank: 83), whereas the predict approach is much more robust: To put its worst instantiation (2-word window, hierarchical softmax, no subsampling, 200 dimensions, mean rank: 51) into perspective, its performance is more than 10% below the best count model only for the an and ansem tasks, and actually higher than it in 3 cases (note how on esslli the worst predict models performs much better than the best one, confirming our suspicion about the brittleness of this small data set). The fourth block reports performance in what might be the most realistic scenario, namely by tuning the parameters on a development task. Specifically, we pick the models that work best on the small rg set, and report their performance on all tasks (we obtained similar results by picking other tuning sets). The selected count model is the third best overall model of its class as reported in Table 3. The selected predict model is the fourth best model in Table 4. The overall count performance is not greatly affected by this choice. Again, predict models confirm their robustness, in that their rg-tuned performance is always close (and in 3 cases better) than the one achieved by the best overall setup. Tables 3 and 4 let us take a closer look at the most important count and predict parameters, by reporting the characteristics of the best models (in terms of average performance-based ranking across tasks) from both classes. For the count models, PMI is clearly the better weighting scheme, and SVD outperforms NMF as a dimensionality reduction technique. However, no compression at all (using all 300K original dimensions) works best. Compare this to the best overall predict vectors, that have 400 dimensions only, making them much more practical to use. For the predict models, we observe in Table 4 that negative sampling, where the task is to distinguish the target output word from samples drawn from the noise distribution, outperforms the more costly hierarchical softmax method. Subsampling frequent words, which downsizes the importance of these words similarly to PMI weighting in count models, is also bringing significant improvements. 
Finally, we go back to Table 2 to point out the poor performance of the out-of-the-box cw model. window weight compress dim. mean rank 2 PMI no 300K 35 5 PMI no 300K 38 2 PMI SVD 500 42 2 PMI SVD 400 46 5 PMI SVD 500 47 2 PMI SVD 300 50 5 PMI SVD 400 51 2 PMI NMF 300 52 2 PMI NMF 400 53 5 PMI SVD 300 53 Table 3: Top count models in terms of mean performance-based model ranking across all tasks. The first row states that the window-2, PMI, 300K count model was the best count model, and, across all tasks, its average rank, when ALL models are decreasingly ordered by performance, was 35. See Section 2.1 for explanation of the parameters. We must leave the investigation of the parameters that make our predict vectors so much better than cw (more varied training corpus? window size? objective function being used? subsampling? ...) to further work. Still, our results show that it’s not just training by context prediction that ensures good performance. The cw approach is very popular (for example both Huang et al. (2012) and Blacoe and Lapata (2012) used it in the studies we discussed in Section 1). Had we also based our systematic comparison of count and predict vectors on the cw model, we would have reached opposite conclusions from the ones we can draw from our word2vec-trained vectors! 5 Conclusion This paper has presented the first systematic comparative evaluation of count and predict vectors. As seasoned distributional semanticists with thorough experience in developing and using count vectors, we set out to conduct this study because we were annoyed by the triumphalist overtones often surrounding predict models, despite the almost complete lack of a proper comparison to count vectors.12 Our secret wish was to discover that it is all hype, and count vectors are far superior to their predictive counterparts. A more realistic expec12Here is an example, where word2vec is called the crown jewel of natural language processing: http://bit.ly/ 1ipv72M 244 win. hier. neg. subsamp. dim mean softm. samp. rank 5 no 10 yes 400 10 2 no 10 yes 300 13 5 no 5 yes 400 13 5 no 5 yes 300 13 5 no 10 yes 300 13 2 no 10 yes 400 13 2 no 5 yes 400 15 5 no 10 yes 200 15 2 no 10 yes 500 15 2 no 5 yes 300 16 Table 4: Top predict models in terms of mean performance-based model ranking across all tasks. See Section 2.2 for explanation of the parameters. tation was that a complex picture would emerge, with predict and count vectors beating each other on different tasks. Instead, we found that the predict models are so good that, while the triumphalist overtones still sound excessive, there are very good reasons to switch to the new architecture. However, due to space limitations we have only focused here on quantitative measures: It remains to be seen whether the two types of models are complementary in the errors they make, in which case combined models could be an interesting avenue for further work. The space of possible parameters of count DSMs is very large, and it’s entirely possible that some options we did not consider would have improved count vector performance somewhat. Still, given that the predict vectors also outperformed the syntax-based dm model, and often approximated state-of-the-art performance, a more proficuous way forward might be to focus on parameters and extensions of the predict models instead: After all, we obtained our already excellent results by just trying a few variations of the word2vec defaults. 
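As a concrete companion to the count-model settings in Table 3, the following sketch shows the core of such a pipeline: symmetric-window co-occurrence counts, positive PMI weighting, and truncated SVD. It is a toy illustration, not the DISSECT implementation the experiments relied on; the function names, the dense numpy matrices, and the tiny corpus are ours (real runs use sparse matrices over the 300K most frequent words).

```python
import numpy as np

def cooccurrence_matrix(sentences, window=2):
    """Symmetric-window co-occurrence counts over a toy corpus."""
    vocab = sorted({w for s in sentences for w in s})
    idx = {w: i for i, w in enumerate(vocab)}
    M = np.zeros((len(vocab), len(vocab)))
    for s in sentences:
        for i, w in enumerate(s):
            for j in range(max(0, i - window), min(len(s), i + window + 1)):
                if j != i:
                    M[idx[w], idx[s[j]]] += 1
    return M, vocab

def ppmi(M):
    """Positive pointwise mutual information weighting."""
    total = M.sum()
    row = M.sum(axis=1, keepdims=True)
    col = M.sum(axis=0, keepdims=True)
    with np.errstate(divide="ignore", invalid="ignore"):
        pmi = np.log((M * total) / (row * col))
    pmi[~np.isfinite(pmi)] = 0.0
    return np.maximum(pmi, 0.0)

def svd_reduce(M, k=2):
    """Truncated SVD compression to k dimensions."""
    U, S, _ = np.linalg.svd(M, full_matrices=False)
    return U[:, :k] * S[:k]

sentences = [["dogs", "chase", "cats"], ["cats", "chase", "mice"],
             ["dogs", "like", "bones"], ["cats", "like", "fish"]]
M, vocab = cooccurrence_matrix(sentences, window=2)
vectors = svd_reduce(ppmi(M), k=2)
for w, v in zip(vocab, np.round(vectors, 2)):
    print(w, v)
```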
Add to this that, beyond the standard lexical semantics challenges we tested here, predict models are currently been successfully applied in cutting-edge domains such as representing phrases (Mikolov et al., 2013c; Socher et al., 2012) or fusing language and vision in a common semantic space (Frome et al., 2013; Socher et al., 2013). Based on the results reported here and the considerations we just made, we would certainly recommend anybody interested in using DSMs for theoretical or practical applications to go for the predict models, with the important caveat that they are not all created equal (cf. the big difference between word2vec and cw models). At the same time, given the large amount of work that has been carried out on count DSMs, we would like to explore, in the near future, how certain questions and methods that have been considered with respect to traditional DSMs will transfer to predict models. For example, the developers of Latent Semantic Analysis (Landauer and Dumais, 1997), Topic Models (Griffiths et al., 2007) and related DSMs have shown that the dimensions of these models can be interpreted as general “latent” semantic domains, which gives the corresponding models some a priori cognitive plausibility while paving the way for interesting applications. Another important line of DSM research concerns “context engineering”: There has been for example much work on how to encode syntactic information into context features (Pad´o and Lapata, 2007), and more recent studies construct and combine feature spaces expressing topical vs. functional information (Turney, 2012). To give just one last example, distributional semanticists have looked at whether certain properties of vectors reflect semantic relations in the expected way: e.g., whether the vectors of hypernyms “distributionally include” the vectors of hyponyms in some mathematical precise sense. Do the dimensions of predict models also encode latent semantic domains? Do these models afford the same flexibility of count vectors in capturing linguistically rich contexts? Does the structure of predict vectors mimic meaningful semantic relations? Does all of this even matter, or are we on the cusp of discovering radically new ways to tackle the same problems that have been approached as we just sketched in traditional distributional semantics? Either way, the results of the present investigation indicate that these are important directions for future research in computational semantics. Acknowledgments We acknowledge ERC 2011 Starting Independent Research Grant n. 283554 (COMPOSES). References Eneko Agirre, Enrique Alfonseca, Keith Hall, Jana Kravalova, Marius Pasc¸a, and Aitor Soroa. 2009. 245 A study on similarity and relatedness using distributional and WordNet-based approaches. In Proceedings of HLT-NAACL, pages 19–27, Boulder, CO. Abdulrahman Almuhareb. 2006. Attributes in Lexical Acquisition. Phd thesis, University of Essex. Marco Baroni and Alessandro Lenci. 2010. Distributional Memory: A general framework for corpus-based semantics. Computational Linguistics, 36(4):673–721. Marco Baroni, Stefan Evert, and Alessandro Lenci, editors. 2008. Bridging the Gap between Semantic Theory and Computational Simulations: Proceedings of the ESSLLI Workshop on Distributional Lexical Semantic. FOLLI, Hamburg. Marco Baroni, Eduard Barbu, Brian Murphy, and Massimo Poesio. 2010. Strudel: A distributional semantic model based on properties and types. Cognitive Science, 34(2):222–254. 
Yoshua Bengio, R´ejean Ducharme, Pascal Vincent, and Christian Janvin. 2003. A neural probabilistic language model. Journal of Machine Learning Research, 3:1137–1155. William Blacoe and Mirella Lapata. 2012. A comparison of vector-based representations for semantic composition. In Proceedings of EMNLP, pages 546–556, Jeju Island, Korea. David Blei, Andrew Ng, and Michael Jordan. 2003. Latent Dirichlet allocation. Journal of Machine Learning Research, 3:993–1022. Elia Bruni, Nam Khanh Tran, and Marco Baroni. 2013. Multimodal distributional semantics. Journal of Artificial Intelligence Research. In press; http://clic.cimec.unitn.it/ marco/publications/mmds-jair.pdf. John Bullinaria and Joseph Levy. 2007. Extracting semantic representations from word co-occurrence statistics: A computational study. Behavior Research Methods, 39:510–526. John Bullinaria and Joseph Levy. 2012. Extracting semantic representations from word co-occurrence statistics: Stop-lists, stemming and SVD. Behavior Research Methods, 44:890–907. Yanqing Chen, Bryan Perozzi, Rami Al-Rfou’, and Steven Skiena. 2013. The expressive power of word embeddings. In Proceedings of the ICML Workshop on Deep Learning for Audio, Speech and Language Processing, Atlanta, GA. Published online: https://sites.google.com/site/ deeplearningicml2013/accepted_ papers. Stephen Clark. 2013. Vector space models of lexical meaning. In Shalom Lappin and Chris Fox, editors, Handbook of Contemporary Semantics, 2nd ed. Blackwell, Malden, MA. In press; http://www.cl.cam.ac.uk/ ˜sc609/pubs/sem_handbook.pdf. Ronan Collobert and Jason Weston. 2008. A unified architecture for natural language processing: Deep neural networks with multitask learning. In Proceedings of ICML, pages 160–167, Helsinki, Finland. Ronan Collobert, Jason Weston, L´eon Bottou, Michael Karlen, Koray Kavukcuoglu, and Pavel Kuksa. 2011. Natural language processing (almost) from scratch. Journal of Machine Learning Research, 12:2493–2537. Katrin Erk. 2012. Vector space models of word meaning and phrase meaning: A survey. Language and Linguistics Compass, 6(10):635–653. Stefan Evert. 2005. The Statistics of Word Cooccurrences. Ph.D dissertation, Stuttgart University. Lev Finkelstein, Evgeniy Gabrilovich, Yossi Matias, Ehud Rivlin, Zach Solan, Gadi Wolfman, and Eytan Ruppin. 2002. Placing search in context: The concept revisited. ACM Transactions on Information Systems, 20(1):116–131. Andrea Frome, Greg Corrado, Jon Shlens, Samy Bengio, Jeff Dean, Marc’Aurelio Ranzato, and Tomas Mikolov. 2013. DeViSE: A deep visual-semantic embedding model. In Proceedings of NIPS, pages 2121–2129, Lake Tahoe, Nevada. Gene Golub and Charles Van Loan. 1996. Matrix Computations (3rd ed.). JHU Press, Baltimore, MD. Tom Griffiths, Mark Steyvers, and Josh Tenenbaum. 2007. Topics in semantic representation. Psychological Review, 114:211–244. Guy Halawi, Gideon Dror, Evgeniy Gabrilovich, and Yehuda Koren. 2012. Large-scale learning of word relatedness with constraints. In Proceedings of KDD, pages 1406–1414. Samer Hassan and Rada Mihalcea. 2011. Semantic relatedness using salient semantic analysis. In Proceedings of AAAI, pages 884–889, San Francisco, CA. Amac¸ Herda˘gdelen and Marco Baroni. 2009. BagPack: A general framework to represent semantic relations. In Proceedings of GEMS, pages 33–40, Athens, Greece. Eric Huang, Richard Socher, Christopher Manning, and Andrew Ng. 2012. Improving word representations via global context and multiple word prototypes. In Proceedings of ACL, pages 873–882, Jeju Island, Korea. 
George Karypis. 2003. CLUTO: A clustering toolkit. Technical Report 02-017, University of Minnesota Department of Computer Science. 246 Sophia Katrenko and Pieter Adriaans. 2008. Qualia structures and their impact on the concrete noun categorization task. In Proceedings of the ESSLLI Workshop on Distributional Lexical Semantics, pages 17–24, Hamburg, Germany. Thomas Landauer and Susan Dumais. 1997. A solution to Plato’s problem: The latent semantic analysis theory of acquisition, induction, and representation of knowledge. Psychological Review, 104(2):211– 240. Daniel Lee and Sebastian Seung. 2000. Algorithms for Non-negative Matrix Factorization. In Proceedings of NIPS, pages 556–562. Chih-Jen Lin. 2007. Projected gradient methods for Nonnegative Matrix Factorization. Neural Computation, 19(10):2756–2779. Ken McRae, Michael Spivey-Knowlton, and Michael Tanenhaus. 1998. Modeling the influence of thematic fit (and other constraints) in on-line sentence comprehension. Journal of Memory and Language, 38:283–312. Tomas Mikolov, Kai Chen, Greg Corrado, and Jeffrey Dean. 2013a. Efficient estimation of word representations in vector space. http://arxiv.org/ abs/1301.3781/. Tomas Mikolov, Quoc Le, and Ilya Sutskever. 2013b. Exploiting similarities among languages for Machine Translation. http://arxiv.org/abs/ 1309.4168. Tomas Mikolov, Ilya Sutskever, Kai Chen, Greg Corrado, and Jeff Dean. 2013c. Distributed representations of words and phrases and their compositionality. In Proceedings of NIPS, pages 3111–3119, Lake Tahoe, Nevada. Tomas Mikolov, Wen-tau Yih, and Geoffrey Zweig. 2013d. Linguistic regularities in continuous space word representations. In Proceedings of NAACL, pages 746–751, Atlanta, Georgia. George Miller and Walter Charles. 1991. Contextual correlates of semantic similarity. Language and Cognitive Processes, 6(1):1–28. Sebastian Pad´o and Mirella Lapata. 2007. Dependency-based construction of semantic space models. Computational Linguistics, 33(2):161–199. Ulrike Pad´o. 2007. The Integration of Syntax and Semantic Plausibility in a Wide-Coverage Model of Sentence Processing. Dissertation, Saarland University, Saarbr¨ucken. Klaus Rothenh¨ausler and Hinrich Sch¨utze. 2009. Unsupervised classification with dependency based word spaces. In Proceedings of GEMS, pages 17– 24, Athens, Greece. Herbert Rubenstein and John Goodenough. 1965. Contextual correlates of synonymy. Communications of the ACM, 8(10):627–633. Magnus Sahlgren. 2006. The Word-Space Model. Ph.D dissertation, Stockholm University. Richard Socher, Brody Huval, Christopher Manning, and Andrew Ng. 2012. Semantic compositionality through recursive matrix-vector spaces. In Proceedings of EMNLP, pages 1201–1211, Jeju Island, Korea. Richard Socher, Milind Ganjoo, Christopher Manning, and Andrew Ng. 2013. Zero-shot learning through cross-modal transfer. In Proceedings of NIPS, pages 935–943, Lake Tahoe, Nevada. Joseph Turian, Lev-Arie Ratinov, and Yoshua Bengio. 2010. Word representations: A simple and general method for semi-supervised learning. In Proceedings of ACL, pages 384–394, Uppsala, Sweden. Peter Turney and Patrick Pantel. 2010. From frequency to meaning: Vector space models of semantics. Journal of Artificial Intelligence Research, 37:141–188. Peter Turney. 2012. Domain and function: A dualspace model of semantic relations and compositions. Journal of Artificial Intelligence Research, 44:533– 585. 247
2014
23
Proceedings of the 52nd Annual Meeting of the Association for Computational Linguistics, pages 248–258, Baltimore, Maryland, USA, June 23-25 2014. c⃝2014 Association for Computational Linguistics Metaphor Detection with Cross-Lingual Model Transfer Yulia Tsvetkov Leonid Boytsov Anatole Gershman Eric Nyberg Chris Dyer Language Technologies Institute Carnegie Mellon University Pittsburgh, PA 15213 USA {ytsvetko, srchvrs, anatoleg, ehn, cdyer}@cs.cmu.edu Abstract We show that it is possible to reliably discriminate whether a syntactic construction is meant literally or metaphorically using lexical semantic features of the words that participate in the construction. Our model is constructed using English resources, and we obtain state-of-the-art performance relative to previous work in this language. Using a model transfer approach by pivoting through a bilingual dictionary, we show our model can identify metaphoric expressions in other languages. We provide results on three new test sets in Spanish, Farsi, and Russian. The results support the hypothesis that metaphors are conceptual, rather than lexical, in nature. 1 Introduction Lakoff and Johnson (1980) characterize metaphor as reasoning about one thing in terms of another, i.e., a metaphor is a type of conceptual mapping, where words or phrases are applied to objects and actions in ways that do not permit a literal interpretation. They argue that metaphors play a fundamental communicative role in verbal and written interactions, claiming that much of our everyday language is delivered in metaphorical terms. There is empirical evidence supporting the claim: recent corpus studies have estimated that the proportion of words used metaphorically ranges from 5% to 20% (Steen et al., 2010), and Thibodeau and Boroditsky (2011) provide evidence that a choice of metaphors affects decision making. Given the prevalence and importance of metaphoric language, effective automatic detection of metaphors would have a number of benefits, both practical and scientific. Language processing applications that need to understand language or preserve meaning (information extraction, machine translation, dialog systems, sentiment analysis, and text analytics, etc.) would have access to a potentially useful high-level bit of information about whether something is to be understood literally or not. Second, scientific hypotheses about metaphoric language could be tested more easily at a larger scale with automation. However, metaphor detection is a hard problem. On one hand, there is a subjective component: humans may disagree whether a particular expression is used metaphorically or not, as there is no clear-cut semantic distinction between figurative and metaphorical language (Shutova, 2010). On the other, metaphors can be domain- and contextdependent.1 Previous work has focused on metaphor identification in English, using both extensive manuallycreated linguistic resources (Mason, 2004; Gedigian et al., 2006; Krishnakumaran and Zhu, 2007; Turney et al., 2011; Broadwell et al., 2013) and corpus-based approaches (Birke and Sarkar, 2007; Shutova et al., 2013; Neuman et al., 2013; Shutova and Sun, 2013; Hovy et al., 2013). We build on this foundation and also extend metaphor detection into other languages in which few resources may exist. 
Our work makes the following contributions: (1) we develop a new state-of-the-art English metaphor detection system that uses conceptual semantic features, such as a degree of abstractness and semantic supersenses;2 (2) we create new metaphor-annotated corpora for Russian and English;3 (3) using a paradigm of model transfer (McDonald et al., 2011; T¨ackstr¨om et al., 2013; Kozhenikov and Titov, 2013), we provide support for the hypothesis that metaphors are concep1For example, drowning students could be used metaphorically to describe the situation where students are overwhelmed with work, but in the sentence a lifeguard saved drowning students, this phrase is used literally. 2https://github.com/ytsvetko/metaphor 3http://www.cs.cmu.edu/˜ytsvetko/ metaphor/datasets.zip 248 tual (rather than lexical) in nature by showing that our English-trained model can detect metaphors in Spanish, Farsi, and Russian. 2 Methodology Our task in this work is to define features that distinguish between metaphoric and literal uses of two syntactic constructions: subject-verb-object (SVO) and adjective-noun (AN) tuples.4 We give examples of a prototypical metaphoric usage of each type: • SVO metaphors. A sentence containing a metaphoric SVO relation is my car drinks gasoline. According to Wilks (1978), this metaphor represents a violation of selectional preferences for the verb drink, which is normally associated with animate subjects (the car is inanimate and, hence, cannot drink in the literal sense of the verb). • AN metaphors. The phrase broken promise is an AN metaphor, where attributes from a concrete domain (associated with the concrete word broken) are transferred to a more abstract domain, which is represented by the relatively abstract word promise. That is, we map an abstract concept promise to a concrete domain of physical things, where things can be literally broken to pieces. Motivated by Lakoff’s (1980) argument that metaphors are systematic conceptual mappings, we will use coarse-grained conceptual, rather than fine-grained lexical features, in our classifier. Conceptual features pertain to concepts and ideas as opposed to individual words or phrases expressed in a particular language. In this sense, as long as two words in two different languages refer to the same concepts, their conceptual features should be the same. Furthermore, we hypothesize that our coarse semantic features give us a languageinvariant representation suitable for metaphor detection. To test this hypothesis, we use a crosslingual model transfer approach: we use bilingual dictionaries to project words from other syntactic constructions found in other languages into English and then apply the English model on the derived conceptual representations. 4Our decision to focus on SVO and AN metaphors is justified by corpus studies that estimate that verb- and adjectivebased metaphors account for a substantial proportion of all metaphoric expressions, approximately 60% and 24%, respectively (Shutova and Teufel, 2010; Gandy et al., 2013). 
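As a concrete illustration of the transfer setup just described, the sketch below computes one of the conceptual feature types, WordNet supersense proportions, for English words, and projects a non-English word into the same space through its dictionary translations (anticipating the head/brain example given in Section 3.3). It is a minimal sketch under stated assumptions: the toy bilingual dictionary stands in for the Babylon dictionary, translations are handled by pooling their synsets, and the abstractness/imageability, vector-space and conjunction features of the full system are omitted.

```python
# Minimal sketch of the cross-lingual transfer idea, using WordNet supersense
# proportions as the only conceptual feature; a toy dictionary replaces the
# Babylon dictionary and all other feature types are omitted for brevity.
from collections import Counter
import numpy as np
from nltk.corpus import wordnet as wn   # requires nltk.download("wordnet")

# Inventory of WordNet lexicographer classes ("supersenses")
SUPERSENSES = sorted({s.lexname() for s in wn.all_synsets()})

def supersense_vector(synsets):
    """Fraction of the given synsets that fall in each supersense class."""
    counts = Counter(s.lexname() for s in synsets)
    total = sum(counts.values()) or 1
    return np.array([counts[label] / total for label in SUPERSENSES])

def english_features(word, pos=wn.NOUN):
    # e.g. 3 of the 33 noun synsets of "head" are noun.body -> ~0.09
    return supersense_vector(wn.synsets(word, pos=pos))

def projected_features(foreign_word, bilingual_dict, pos=wn.NOUN):
    """Project a non-English word into the English feature space through its
    dictionary translations, pooling their synsets (cf. the Russian word for
    "head" in Section 3.3: 4 of the 38 pooled synsets are noun.body -> ~0.11)."""
    pooled = []
    for translation in bilingual_dict.get(foreign_word, []):
        pooled.extend(wn.synsets(translation, pos=pos))
    return supersense_vector(pooled)

def svo_features(subj, verb, obj, bilingual_dict):
    """Concatenated features for a non-English SVO candidate, ready to be
    fed to a classifier trained on English data."""
    return np.concatenate([
        projected_features(subj, bilingual_dict, pos=wn.NOUN),
        projected_features(verb, bilingual_dict, pos=wn.VERB),
        projected_features(obj, bilingual_dict, pos=wn.NOUN),
    ])

# Toy dictionary entry: the Russian word for "head" translates as head, brain
ru_en = {"голова": ["head", "brain"]}
features = projected_features("голова", ru_en)
```

The conjunction features described in the next section (derived from the outer products of word-pair feature vectors) would simply be appended to this representation.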
Each SVO (or AN) instance will be represented by a triple (duple) from which a feature vector will be extracted.5 The vector will consist of the concatenation of the conceptual features (which we discuss below) for all participating words, and conjunction features for word pairs.6 For example, to generate the feature vector for the SVO triple (car, drink, gasoline), we compute all the features for the individual words car, drink, gasoline and combine them with the conjunction features for the pairs car drink and drink gasoline. We define three main feature categories (1) abstractness and imageability, (2) supersenses, (3) unsupervised vector-space word representations; each category corresponds to a group of features with a common theme and representation. • Abstractness and imageability. Abstractness and imageability were shown to be useful in detection of metaphors (it is easier to invoke mental pictures of concrete and imageable words) (Turney et al., 2011; Broadwell et al., 2013). We expect that abstractness, used in conjunction features (e.g., a feature denoting that the subject is abstract and the verb is concrete), is especially useful: semantically, an abstract agent performing a concrete action is a strong signal of metaphorical usage. Although often correlated with abstractness, imageability is not a redundant property. While most abstract things are hard to visualize, some call up images, e.g., vengeance calls up an emotional image, torture calls up emotions and even visual images. There are concrete things that are hard to visualize too, for example, abbey is harder to visualize than banana (B. MacWhinney, personal communication). • Supersenses. Supersenses7 are coarse semantic categories originating in WordNet. For nouns and verbs there are 45 classes: 26 for nouns and 15 for verbs, for example, 5Looking at components of the syntactic constructions independent of their context has its limitations, as discussed above with the drowning students example; however, it simplifies the representation challenges considerably. 6If word one is represented by features u ∈Rn and word two by features v ∈Rm then the conjunction feature vector is the vectorization of the outer product uv⊤. 7Supersenses are called “lexicographer classes” in WordNet documentation (Fellbaum, 1998), http://wordnet. princeton.edu/man/lexnames.5WN.html 249 noun.body, noun.animal, verb.consumption, or verb.motion (Ciaramita and Altun, 2006). English adjectives do not, as yet, have a similar high-level semantic partitioning in WordNet, thus we use a 13-class taxonomy of adjective supersenses constructed by Tsvetkov et al. (2014) (discussed in §3.2). Supersenses are particularly attractive features for metaphor detection: coarse sense taxonomies can be viewed as semantic concepts, and since concept mapping is a process in which metaphors are born, we expect different supersense co-occurrences in metaphoric and literal combinations. In “drinks gasoline”, for example, mapping to supersenses would yield a pair <verb.consumption, noun.substance>, contrasted with <verb.consumption, noun.food> for “drinks juice”. In addition, this coarse semantic categorization is preserved in translation (Schneider et al., 2013), which makes supersense features suitable for cross-lingual approaches such as ours. • Vector space word representations. Vector space word representations learned using unsupervised algorithms are often effective features in supervised learning methods (Turian et al., 2010). 
In particular, many such representations are designed to capture lexical semantic properties and are quite effective features in semantic processing, including named entity recognition (Turian et al., 2009), word sense disambiguation (Huang et al., 2012), and lexical entailment (Baroni et al., 2012). In a recent study, Mikolov et al. (2013) reveal an interesting cross-lingual property of distributed word representations: there is a strong similarity between the vector spaces across languages that can be easily captured by linear mapping. Thus, vector space models can also be seen as vectors of (latent) semantic concepts, that preserve their “meaning” across languages. 3 Model and Feature Extraction In this section we describe a classification model, and provide details on mono- and cross-lingual implementation of features. 3.1 Classification using Random Forests To make classification decisions, we use a random forest classifier (Breiman, 2001), an ensemble of decision tree classifiers learned from many independent subsamples of the training data. Given an input, each tree classifier assigns a probability to each label; those probabilities are averaged to compute the probability distribution across the ensemble. Random forest ensembles are particularly suitable for our resource-scarce scenario: rather than overfitting, they produce a limiting value of the generalization error as the number of trees increases,8 and no hyperparameter tuning is required. In addition, decision-tree classifiers learn non-linear responses to inputs and often outperform logistic regression (Perlich et al., 2003).9 Our random forest classifier models the probability that the input syntactic relation is metaphorical. If this probability is above a threshold, the relation is classified as metaphoric, otherwise it is literal. We used the scikit-learn toolkit to train our classifiers (Pedregosa et al., 2011). 3.2 Feature extraction Abstractness and imageability. The MRC psycholinguistic database is a large dictionary listing linguistic and psycholinguistic attributes obtained experimentally (Wilson, 1988).10 It includes, among other data, 4,295 words rated by the degrees of abstractness and 1,156 words rated by the imageability. Similarly to Tsvetkov et al. (2013), we use a logistic regression classifier to propagate abstractness and imageability scores from MRC ratings to all words for which we have vector space representations. More specifically, we calculate the degree of abstractness and imageability of all English items that have a vector space representation, using vector elements as features. We train two separate classifiers for abstractness and imageability on a seed set of words from the MRC database. Degrees of abstractness and imageability are posterior probabilities of classifier predictions. We binarize these posteriors into abstractconcrete (or imageable-unimageable) boolean indicators using pre-defined thresholds.11 Perfor8See Theorem 1.2 in (Breiman, 2001) for details. 9In our experiments, random forests model slightly outperformed logistic regression and SVM classifiers. 10http://ota.oucs.ox.ac.uk/headers/ 1054.xml 11Thresholds are equal to 0.8 for abstractness and to 0.9 for imageability. They were chosen empirically based on ac250 mance of these classifiers, tested on a sampled held-out data, is 0.94 and 0.85 for the abstractness and imageability classifiers, respectively. Supersenses. 
In the case of SVO relations, we incorporate supersense features for nouns and verbs; noun and adjective supersenses are used in the case of AN relations. Supersenses of nouns and verbs. A lexical item can belong to several synsets, which are associated with different supersenses. Degrees of membership in different supersenses are represented by feature vectors, where each element corresponds to one supersense. For example, the word head (when used as a noun) participates in 33 synsets, three of which are related to the supersense noun.body. The value of the feature corresponding to this supersense is 3/33 ≈0.09. Supersenses of adjectives. WordNet lacks coarse-grained semantic categories for adjectives. To divide adjectives into groups, Tsvetkov et al. (2014) use 13 top-level classes from the adapted taxonomy of Hundsnurscher and Splett (1982), which is incorporated in GermaNet (Hamp and Feldweg, 1997). For example, the top-level classes in GermaNet include: adj.feeling (e.g., willing, pleasant, cheerful); adj.substance (e.g., dry, ripe, creamy); adj.spatial (e.g., adjacent, gigantic).12 For each adjective type in WordNet, they produce a vector with a classifier posterior probabilities corresponding to degrees of membership of this word in one of the 13 semantic classes,13 similar to the feature vectors we build for nouns and verbs. For example, for a word calm the top-2 categories (with the first and second highest degrees of membership) are adj.behavior and adj.feeling. Vector space word representations. We employ 64-dimensional vector-space word representations constructed by Faruqui and Dyer (2014).14 Vector construction algorithm is a variation on traditional latent semantic analysis (Deerwester et al., 1990) that uses multilingual information to produce representations in which synonymous words have similar vectors. The vectors were curacy during cross-validation. 12For the full taxonomy see http://www.sfs. uni-tuebingen.de/lsd/adjectives.shtml 13http://www.cs.cmu.edu/˜ytsvetko/ adj-supersenses.tar.gz 14http://www.cs.cmu.edu/˜mfaruqui/soft. html trained on the news commentary corpus released by WMT-2011,15 comprising 180,834 types. 3.3 Cross-lingual feature projection For languages other than English, feature vectors are projected to English features using translation dictionaries. We used the Babylon dictionary,16 which is a proprietary resource, but any bilingual dictionary can in principle be used. For a nonEnglish word in a source language, we first obtain all translations into English. Then, we average all feature vectors related to these translations. Consider an example related to projection of WordNet supersenses. A Russian word ãîëîâà is translated as head and brain. Hence, we select all the synsets of the nouns head and brain. There are 38 such synsets (33 for head and 5 for brain). Four of these synsets are associated with the supersense noun.body. Therefore, the value of the feature noun.body is 4/38 ≈0.11. 4 Datasets In this section we describe a training and testing dataset as well a data collection procedure. 4.1 English training sets To train an SVO metaphor classifier, we employ the TroFi (Trope Finder) dataset.17 TroFi includes 3,737 manually annotated English sentences from the Wall Street Journal (Birke and Sarkar, 2007). Each sentence contains either literal or metaphorical use for one of 50 English verbs. First, we use a dependency parser (Martins et al., 2010) to extract subject-verb-object (SVO) relations. 
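As an illustration of this extraction step, the following sketch pulls subject–verb–object triples out of parsed sentences. It uses spaCy purely as a convenient stand-in for the TurboParser dependency parser used here, and the dependency labels and lemma-based output are simplifying assumptions.

```python
# Illustrative SVO extraction; spaCy is a stand-in for the dependency parser
# actually used, and the label set below is a simplification.
import spacy

nlp = spacy.load("en_core_web_sm")   # assumes this English model is installed

def extract_svo(sentence):
    """Return (subject, verb, object) lemma triples found in the sentence."""
    triples = []
    for token in nlp(sentence):
        if token.pos_ == "VERB":
            subjects = [c for c in token.children if c.dep_ == "nsubj"]
            objects = [c for c in token.children if c.dep_ in ("dobj", "obj")]
            for s in subjects:
                for o in objects:
                    triples.append((s.lemma_, token.lemma_, o.lemma_))
    return triples

print(extract_svo("My car drinks gasoline."))
# expected output along the lines of [('car', 'drink', 'gasoline')]
```

Adjective–noun candidates could be collected analogously from amod arcs.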
Then, we filter extracted relations to eliminate parsing-related errors, and relations with verbs which are not in the TroFi verb list. After filtering, there are 953 metaphorical and 656 literal SVO relations which we use as a training set. In the case of AN relations, we construct and make publicly available a training set containing 884 metaphorical AN pairs and 884 pairs with literal meaning. It was collected by two annotators using public resources (collections of metaphors on the web). At least one additional person carefully examined and culled the collected metaphors, by removing duplicates, weak metaphors, and metaphorical phrases (such as 15http://www.statmt.org/wmt11/ 16http://www.babylon.com 17http://www.cs.sfu.ca/˜anoop/students/ jbirke/ 251 drowning students) whose interpretation depends on the context. 4.2 Multilingual test sets We collect and annotate metaphoric and literal test sentences in four languages. Thus, we compile eight test datasets, four for SVO relations, and four for AN relations. Each dataset has an equal number of metaphors and non-metaphors, i.e., the datasets are balanced. English (EN) and Russian (RU) datasets have been compiled by our team and are publicly available. Spanish (ES) and Farsi (FA) datasets are published elsewhere (Levin et al., 2014). Table 1 lists test set sizes. SVO AN EN 222 200 RU 240 200 ES 220 120 FA 44 320 Table 1: Sizes of the eight test sets. Each dataset is balanced, i.e., it has an equal number of metaphors and non-metaphors. For example, English SVO dataset has 222 relations: 111 metaphoric and 111 literal. We used the following procedure to compile the EN and RU test sets. A moderator started with seed lists of 1000 most common verbs and adjectives.18 Then she used the SketchEngine, which provides searching capability for the TenTen Web corpus,19 to extract sentences with words that frequently co-occurred with words from the seed lists. From these sentences, she removed sentences that contained more than one metaphor, and sentences with non-SVO and non-AN metaphors. Remaining sentences were annotated by several native speakers (five for English and six for Russian), who judged AN and SVO phrases in context. The annotation instructions were general: “Please, mark in bold all words that, in your opinion, are used non-literally in the following sentences. In many sentences, all the words may be used literally.” The Fleiss’ Kappas for 5 English and 6 Russian annotators are: EN-AN = .76, RU18Selection of 1000 most common verbs and adjectives achieves much broader lexical and domain coverage than what can be realistically obtained from continuous text. Our test sentence domains are, therefore, diverse: economic, political, sports, etc. 19http://trac.sketchengine.co.uk/wiki/ Corpora/enTenTen AN = .85, EN-SVO = .75, RU-SVO = .78. For the final selection, we filtered out low-agreement (<.8) sentences. The test candidate sentences were selected by a person who did not participate in the selection of the training samples. No English annotators of the test set, and only one Russian annotator out of 6 participated in the selection of the training samples. Thus, we trust that annotator judgments were not biased towards the cases that the system is trained to process. 5 Experiments 5.1 English experiments Our task, as defined in Section 2, is to classify SVO and AN relations as either metaphoric or literal. We first conduct a 10-fold cross-validation experiment on the training set defined in Section 4.1. 
We represent each candidate relation using the features described in Section 3.2, and evaluate performance of the three feature categories and their combinations. This is done by computing an accuracy in the 10-fold cross validation. Experimental results are given in Table 2, where we also provide the number of features in each feature set. SVO AN # FEAT ACC # FEAT ACC AbsImg 20 0.73∗ 16 0.76∗ Supersense 67 0.77∗ 116 0.79∗ AbsImg+Sup. 87 0.78∗ 132 0.80∗ VSM 192 0.81 228 0.84∗ All 279 0.82 360 0.86 Table 2: 10-fold cross validation results for three feature categories and their combination, for classifiers trained on English SVO and AN training sets. # FEAT column shows a number of features. ACC column reports an accuracy score in the 10fold cross validation. Statistically significant differences (p < 0.01) from the all-feature combination are marked with a star. These results show superior performance over previous state-of-the-art results, confirming our hypothesis that conceptual features are effective in metaphor classification. For the SVO task, the cross-validation accuracy is about 10% better than that of Tsvetkov et al. (2013). For the AN task, the cross validation accuracy is better by 8% than the result of Turney et al. (2011) (two baseline 252 methods are described in Section 5.2). We can see that all types of features have good performance on their own (VSM is the strongest feature type). Noun supersense features alone allows us to achieve an accuracy of 75%, i.e., adjective supersense features contribute 4% to adjective-noun supersense feature combination. Experiments with the pairs of features yield better results than individual features, implying that the feature categories are not redundant. Yet, combining all features leads to even higher accuracy during crossvalidation. In the case of the AN task, a difference between the All feature combination and any other combination of features listed in Table 2 is statistically significant (p < 0.01 for both the sign and the permutation test). Although the first experiment shows very high scores, the 10-fold cross-validation cannot fully reflect the generality of the model, because all folds are parts of the same corpus. They are collected by the same human judges and belong to the same domain. Therefore, experiments on out-ofdomain data are crucial. We carry out such experiments using held-out SVO and AN EN test sets, described in Section 4.2 and Table 1. In this experiment, we measure the f-score. We classify SVO and AN relations using a classifier trained on the All feature combination and balanced thresholds. The values of the f-score are 0.76, both for SVO and AN tasks. This out-of-domain experiment suggests that our classifier is portable across domains and genres. However, (1) different application may have different requirements for recall/precision, and (2) classification results may be skewed towards having high precision and low recall (or vice versa). It is possible to trade precision for recall by choosing a different threshold. Thus, in addition to giving a single f-score value for balanced thresholds, we present a Receiver Operator Characteristic (ROC) curve, where we plot a fraction of true positives against the fraction of false positives for 100 threshold values in the range from zero to one. 
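The evaluation machinery just described is straightforward to reproduce with the scikit-learn toolkit mentioned in Section 3.1; the sketch below shows the intended usage on placeholder data (random matrices whose 279 columns echo the all-feature SVO dimensionality of Table 2), not on the actual feature vectors.

```python
# Sketch of classifier training, a single fixed-threshold f-score, and the
# threshold sweep behind the ROC curve; the data here are random placeholders,
# with 1 marking metaphoric and 0 literal relations.
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.metrics import f1_score, roc_curve, roc_auc_score

rng = np.random.RandomState(0)
X_train, y_train = rng.rand(200, 279), rng.randint(0, 2, 200)
X_test, y_test = rng.rand(100, 279), rng.randint(0, 2, 100)

clf = RandomForestClassifier(n_estimators=100, random_state=0)
clf.fit(X_train, y_train)

# Posterior probability that each test relation is metaphoric
scores = clf.predict_proba(X_test)[:, 1]

# One operating threshold yields a single f-score ...
print("f-score:", f1_score(y_test, scores >= 0.5))

# ... while sweeping the threshold yields the ROC curve and its AUC
fpr, tpr, thresholds = roc_curve(y_test, scores)
print("AUC:", roc_auc_score(y_test, scores))
```

Plotting tpr against fpr for the swept thresholds reproduces curves of the kind shown in Figure 1.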
The area under the ROC curve (AUC) can be interpreted as the probability that a classifier will assign a higher score to a randomly chosen positive example than to a randomly chosen negative example.20 For a randomly guessing classifier, the ROC curve is a dashed diagonal line. A bad classi20Assuming that positive examples are labeled by ones, and negative examples are labeled by zeros. fier has an ROC curve that goes close to the dashed diagonal or even below it. 0.0 0.2 0.4 0.6 0.8 1.0 False Positive Rate 0.0 0.2 0.4 0.6 0.8 1.0 True Positive Rate Supersenses (area = 0.77) AbsImg (area = 0.73) VSM (area = 0.8) All (area = 0.79) (a) SVO 0.0 0.2 0.4 0.6 0.8 1.0 False Positive Rate 0.0 0.2 0.4 0.6 0.8 1.0 True Positive Rate AbsImg (area = 0.9) Supersenses (area = 0.86) VSM (area = 0.89) All (area = 0.92) (b) AN Figure 1: ROC curves for classifiers trained using different feature sets (English SVO and AN test sets). According to ROC plots in Figure 1, all three feature sets are effective, both for SVO and for AN tasks. Abstractness and Imageability features work better for adjectives and nouns, which is in line with previous findings (Turney et al., 2011; Broadwell et al., 2013). It can be also seen that VSM features are very effective. This is in line with results of Hovy et al. (2013), who found that it is hard to improve over the classifier that uses only VSM features. 5.2 Comparison to baselines In this section, we compare our method to state-ofthe-art methods of Tsvetkov et al. (2013) and of Turney et al. (2011), who focused on classifying SVO and AN relations, respectively. In the case of SVO relations, we use software 253 and datasets from Tsvetkov et al. (2013). These datasets, denoted as an SVO-baseline, consist of 98 English and 149 Russian sentences. We train SVO metaphor detection tools on SVO relations extracted from TroFi sentences and evaluate them on the SVO-baseline dataset. We also use the same thresholds for classifier posterior probabilities as Tsvetkov et al. (2013). Our approach is different from that of Tsvetkov et al. (2013) in that it uses additional features (vector space word representations) and a different classification method (we use random forests while Tsvetkov et al. (2013) use logistic regression). According to Table 3, we obtain higher performance scores for both Russian and English. EN RU SVO-baseline 0.78 0.76 This work 0.86 0.85 Table 3: Comparing f-scores of our SVO metaphor detection method to the baselines. In the case of AN relations, we use the dataset (denoted as an AN-baseline) created by Turney et al. (2011) (see Section 4.1 in the referred paper for details). Turney et al. (2011) manually annotated 100 pairs where an adjective was one of the following: dark, deep, hard, sweet, and worm. The pairs were presented to five human judges who rated each pair on a scale from 1 (very literal/denotative) to 4 (very nonliteral/connotative). Turney et al. (2011) train logistic-regression employing only abstractness ratings as features. Performance of the method was evaluated using the 10-fold cross-validation separately for each judge. We replicate the above described evaluation procedure of Turney et al. (2011) using their model and features. In our classifier, we use the All feature combination and the balanced threshold as described in Section 5.1. According to results in Table 4, almost all of the judge-specific f-scores are slightly higher for our system, as well as the overall average f-score. 
In both baseline comparisons, we obtain performance at least as good as in previously published studies. 5.3 Cross-lingual experiments In the next experiment we corroborate the main hypothesis of this paper: a model trained on EnAN-baseline This work Judge 1 0.73 0.75 Judge 2 0.81 0.84 Judge 3 0.84 0.88 Judge 4 0.79 0.81 Judge 5 0.78 0.77 average 0.79 0.81 Table 4: Comparing AN metaphor detection method to the baselines: accuracy of the 10fold cross validation on annotations of five human judges. glish data can be successfully applied to other languages. Namely, we use a trained English model discussed in Section 5.1 to classify literal and metaphoric SVO and AN relations in English, Spanish, Farsi and Russian test sets, listed in Section 4.2. This time we used all available features. Experimental results for all four languages, are given in Figure 2. The ROC curves for SVO and AN tasks are plotted in Figure 2a and Figure 2b, respectively. Each curve corresponds to a test set described in Table 1. In addition, we perform an oracle experiment, to obtain actual f-score values for best thresholds. Detailed results are shown in Table 5. Consistent results with high f-scores are obtained across all four languages. Note that higher scores are obtained for the Russian test set. We hypothesize that this happens due to a higher-quality translation dictionary (which allows a more accurate model transfer). Relatively lower (yet reasonable) results for Farsi can be explained by a smaller size of the bilingual dictionary (thus, fewer feature projections can be obtained). Also note that, in our experience, most of Farsi metaphors are adjective-noun constructions. This is why the AN FA dataset in Table 1 is significantly larger than SVO FA. In that, for the AN Farsi task we observe high performance scores. Figure 2 and Table 5 confirm, that we obtain similar, robust results on four very different languages, using the same English classifiers. We view this result as a strong evidence of language-independent nature of our metaphor detection method. In particular, this shows that proposed conceptual features can be used to detect selectional preferences violation across languages. To summarize the experimental section, our metaphor detection approach obtains state-of-the254 0.0 0.2 0.4 0.6 0.8 1.0 False Positive Rate 0.0 0.2 0.4 0.6 0.8 1.0 True Positive Rate EN (area = 0.79) ES (area = 0.71) FA (area = 0.69) RU (area = 0.89) (a) SVO 0.0 0.2 0.4 0.6 0.8 1.0 False Positive Rate 0.0 0.2 0.4 0.6 0.8 1.0 True Positive Rate EN (area = 0.92) ES (area = 0.73) FA (area = 0.83) RU (area = 0.8) (b) AN Figure 2: Cross-lingual experiment: ROC curves for classifiers trained on the English data using a combination of all features, and applied to SVO and AN metaphoric and literal relations in four test languages: English, Russian, Spanish, and Farsi. art performance in English, is effective when applied to out-of-domain English data, and works cross-lingually. 5.4 Examples Manual data analysis on adjective-noun pairs supports an abstractness-concreteness hypothesis formulated by several independent research studies. For example, in English we classify as metaphoric dirty word and cloudy future. Word pairs dirty diaper and cloudy weather have same adjectives. Yet they are classified as literal. Indeed, diaper is a more concrete term than word and weather is more concrete than future. Same pattern is observed in non-English datasets. 
In Russian, áîëüíîå îáùåñòâî “sick society” and ïóñòîé çâóê “empty sound” are classified as metaphoric, while SVO AN EN 0.79 0.85 RU 0.84 0.77 ES 0.76 0.72 FA 0.75 0.74 Table 5: Cross-lingual experiment: f-scores for classifiers trained on the English data using a combination of all features, and applied, with optimal thresholds, to SVO and AN metaphoric and literal relations in four test languages: English, Russian, Spanish, and Farsi. áîëüíàÿ áàáóøêà “sick grandmother” and ïóñòàÿ ÷àøêà “empty cup” are classified as literal. Spanish example of an adjective-noun metaphor is a well-known m´usculo econ´omico “economic muscle”. We also observe that non-metaphoric adjective noun pairs tend to have more imageable adjectives, such as literal derecho humano “human right”. In Spanish, human is more imageable than economic. Verb-based examples that are correctly classified by our model are: blunder escaped notice (metaphoric) and prisoner escaped jail (literal). We hypothesize that supersense features are instrumental in the correct classification of these examples: <noun.person,verb.motion> is usually used literally, while <noun.act,verb.motion> is used metaphorically. 6 Related Work For a historic overview and a survey of common approaches to metaphor detection, we refer the reader to recent reviews by Shutova et al. (Shutova, 2010; Shutova et al., 2013). Here we focus only on recent approaches. Shutova et al. (2010) proposed a bottom-up method: one starts from a set of seed metaphors and seeks phrases where verbs and/or nouns belong to the same cluster as verbs or nouns in seed examples. Turney et al. (2011) show how abstractness scores could be used to detect metaphorical AN phrases. Neuman et al. (2013) describe a Concrete Category Overlap algorithm, where co-occurrence statistics and Turney’s abstractness scores are used to determine WordNet supersenses that correspond to literal usage of a given adjective or verb. For example, given an adjective, we can learn that it modifies concrete nouns that usually have the 255 supersense noun.body. If this adjective modifies a noun with the supersense noun.feeling, we conclude that a metaphor is found. Broadwell et al. (2013) argue that metaphors are highly imageable words that do not belong to a discussion topic. To implement this idea, they extend MRC imageability scores to all dictionary words using links among WordNet supersenses (mostly hypernym and hyponym relations). Strzalkowski et al. (2013) carry out experiments in a specific (government-related) domain for four languages: English, Spanish, Farsi, and Russian. Strzalkowski et al. (2013) explain the algorithm only for English and say that is the same for Spanish, Farsi, and Russian. Because they heavily rely on WordNet and availability of imageability scores, their approach may not be applicable to low-resource languages. Hovy et al. (2013) applied tree kernels to metaphor detection. Their method also employs WordNet supersenses, but it is not clear from the description whether WordNet is essential or can be replaced with some other lexical resource. We cannot compare directly our model with this work because our classifier is restricted to detection of only SVO and AN metaphors. Tsvetkov et al. (2013) propose a cross-lingual detection method that uses only English lexical resources and a dependency parser. Their study focuses only on the verb-based metaphors. Tsvetkov et al. (2013) employ only English and Russian data. 
Current work builds on this study, and incorporates new syntactic relations as metaphor candidates, adds several new feature sets and different, more reliable datasets for evaluating results. We demonstrate results on two new languages, Spanish and Farsi, to emphasize the generality of the method. A words sense disambiguation (WSD) is a related problem, where one identifies meanings of polysemous words. The difference is that in the WSD task, we need to select an already existing sense, while for the metaphor detection, the goal is to identify cases of sense borrowing. Studies showed that cross-lingual evidence allows one to achieve a state-of-the-art performance in the WSD task, yet, most cross-lingual WSD methods employ parallel corpora (Navigli, 2009). 7 Conclusion The key contribution of our work is that we show how to identify metaphors across languages by building a model in English and applying it— without adaptation—to other languages: Spanish, Farsi, and Russian. This model uses languageindependent (rather than lexical or language specific) conceptual features. Not only do we establish benchmarks for Spanish, Farsi, and Russian, but we also achieve state-of-the-art performance in English. In addition, we present a comparison of relative contributions of several types of features. We concentrate on metaphors in the context of two kinds of syntactic relations: subjectverb-object (SVO) relations and adjective-noun (AN) relations, which account for a majority of all metaphorical phrases. Future work will expand the scope of metaphor identification by including nominal metaphoric relations as well as explore techniques for incorporating contextual features, which can play a key role in identifying certain kinds of metaphors. Second, cross-lingual model transfer can be improved with more careful cross-lingual feature projection. Acknowledgments We are extremely grateful to Shuly Wintner for a thorough review that helped us improve this draft; we also thank people who helped in creating the datasets and/or provided valuable feedback on this work: Ed Hovy, Vlad Niculae, Davida Fromm, Brian MacWhinney, Carlos Ram´ırez, and other members of the CMU METAL team. This work was supported by the U.S. Army Research Laboratory and the U.S. Army Research Office under contract/grant number W911NF-10-1-0533. References Marco Baroni, Raffaella Bernardi, Ngoc-Quynh Do, and Chung-chieh Shan. 2012. Entailment above the word level in distributional semantics. In Proc. of EACL, pages 23–32. Julia Birke and Anoop Sarkar. 2007. Active learning for the identification of nonliteral language. In Proc. of the Workshop on Computational Approaches to Figurative Language, FigLanguages ’07, pages 21– 28. Leo Breiman. 2001. Random forests. Machine Learning, 45(1):5–32. 256 George Aaron Broadwell, Umit Boz, Ignacio Cases, Tomek Strzalkowski, Laurie Feldman, Sarah Taylor, Samira Shaikh, Ting Liu, Kit Cho, and Nick Webb. 2013. Using imageability and topic chaining to locate metaphors in linguistic corpora. In Social Computing, Behavioral-Cultural Modeling and Prediction, pages 102–110. Springer. Massimiliano Ciaramita and Yasemin Altun. 2006. Broad-coverage sense disambiguation and information extraction with a supersense sequence tagger. In Proc. of EMNLP, pages 594–602. Scott C. Deerwester, Susan T Dumais, Thomas K. Landauer, George W. Furnas, and Richard A. Harshman. 1990. Indexing by latent semantic analysis. JASIS, 41(6):391–407. Manaal Faruqui and Chris Dyer. 2014. 
Improving vector space word representations using multilingual correlation. In Proc. of EACL. Association for Computational Linguistics. Christiane Fellbaum, editor. 1998. WordNet: An Electronic Lexical Database. Language, Speech and Communication. MIT Press. Lisa Gandy, Nadji Allan, Mark Atallah, Ophir Frieder, Newton Howard, Sergey Kanareykin, Moshe Koppel, Mark Last, Yair Neuman, and Shlomo Argamon. 2013. Automatic identification of conceptual metaphors with limited knowledge. In Proc. of the Twenty-Seventh AAAI Conference on Artificial Intelligence, pages 328–334. Matt Gedigian, John Bryant, Srini Narayanan, and Branimir Ciric. 2006. Catching metaphors. In Proceedings of the 3rd Workshop on Scalable Natural Language Understanding, pages 41–48. Birgit Hamp and Helmut Feldweg. 1997. Germaneta lexical-semantic net for German. In Proc. of ACL workshop Automatic Information Extraction and Building of Lexical Semantic Resources for NLP Applications, pages 9–15. Dirk Hovy, Shashank Srivastava, Sujay Kumar Jauhar, Mrinmaya Sachan, Kartik Goyal, Huiying Li, Whitney Sanders, and Eduard Hovy. 2013. Identifying metaphorical word use with tree kernels. In Proc. of the First Workshop on Metaphor in NLP, page 52. Eric H. Huang, Richard Socher, Christopher D. Manning, and Andrew Y. Ng. 2012. Improving word representations via global context and multiple word prototypes. In Proc. of ACL, pages 873–882. Franz Hundsnurscher and Jochen Splett. 1982. Semantik der Adjektive des Deutschen. Number 3137. Westdeutscher Verlag. Mikhail Kozhenikov and Ivan Titov. 2013. Crosslingual transfer of semantic role labeling models. In Proc. of ACL, pages 1190–1200. Saisuresh Krishnakumaran and Xiaojin Zhu. 2007. Hunting elusive metaphors using lexical resources. In Proc. of the Workshop on Computational approaches to Figurative Language, pages 13–20. George Lakoff and Mark Johnson. 1980. Conceptual metaphor in everyday language. The Journal of Philosophy, pages 453–486. Lori Levin, Teruko Mitamura, Davida Fromm, Brian MacWhinney, Jaime Carbonell, Weston Feely, Robert Frederking, Anatole Gershman, and Carlos Ramirez. 2014. Resources for the detection of conventionalized metaphors in four languages. In Proc. of LREC. Andr´e F. T. Martins, Noah A. Smith, Eric P. Xing, Pedro M. Q. Aguiar, and M´ario A. T. Figueiredo. 2010. Turbo parsers: dependency parsing by approximate variational inference. In Proc. of ENMLP, pages 34– 44. Zachary J Mason. 2004. CorMet: a computational, corpus-based conventional metaphor extraction system. Computational Linguistics, 30(1):23–44. Ryan McDonald, Slav Petrov, and Keith Hall. 2011. Multi-source transfer of delexicalized dependency parsers. In Proc. of EMNLP. Tomas Mikolov, Quoc V. Le, and Ilya Sutskever. 2013. Exploiting similarities among languages for Machine Translation. CoRR, abs/1309.4168. Roberto Navigli. 2009. Word sense disambiguation: A survey. ACM Comput. Surv., 41(2):10:1–10:69, February. Yair Neuman, Dan Assaf, Yohai Cohen, Mark Last, Shlomo Argamon, Newton Howard, and Ophir Frieder. 2013. Metaphor identification in large texts corpora. PloS one, 8(4):e62343. F. Pedregosa, G. Varoquaux, A. Gramfort, V. Michel, B. Thirion, O. Grisel, M. Blondel, P. Prettenhofer, R. Weiss, V. Dubourg, J. Vanderplas, A. Passos, D. Cournapeau, M. Brucher, M. Perrot, and E. Duchesnay. 2011. Scikit-learn: Machine learning in Python. Journal of Machine Learning Research, 12:2825–2830. Claudia Perlich, Foster Provost, and Jeffrey S. Simonoff. 2003. Tree induction vs. 
logistic regression: a learning-curve analysis. Journal of Machine Learning Research, 4:211–255. Nathan Schneider, Behrang Mohit, Chris Dyer, Kemal Oflazer, and Noah A Smith. 2013. Supersense tagging for Arabic: the MT-in-the-middle attack. In Proc. of NAACL-HLT, pages 661–667. Ekaterina Shutova and Lin Sun. 2013. Unsupervised metaphor identification using hierarchical graph factorization clustering. In Proc. of NAACL-HLT, pages 978–988. 257 Ekaterina Shutova and Simone Teufel. 2010. Metaphor corpus annotated for source-target domain mappings. In Proc. of LREC, pages 3255–3261. Ekaterina Shutova, Lin Sun, and Anna Korhonen. 2010. Metaphor identification using verb and noun clustering. In Proc. of COLING, pages 1002–1010. Ekaterina Shutova, Simone Teufel, and Anna Korhonen. 2013. Statistical metaphor processing. Computational Linguistics, 39(2):301–353. Ekaterina Shutova. 2010. Models of metaphor in NLP. In Proc. of ACL, pages 688–697. Gerard J Steen, Aletta G Dorst, J Berenike Herrmann, Anna A Kaal, and Tina Krennmayr. 2010. Metaphor in usage. Cognitive Linguistics, 21(4):765–796. Tomek Strzalkowski, George Aaron Broadwell, Sarah Taylor, Laurie Feldman, Boris Yamrom, Samira Shaikh, Ting Liu, Kit Cho, Umit Boz, Ignacio Cases, et al. 2013. Robust extraction of metaphors from novel data. In Proc. of the First Workshop on Metaphor in NLP, page 67. Oscar T¨ackstr¨om, Dipanjan Das, Slav Petrov, Ryan McDonald, and Joakim Nivre. 2013. Token and type constraints for cross-lingual part-of-speech tagging. TACL, 1:1–12. Paul H Thibodeau and Lera Boroditsky. 2011. Metaphors we think with: The role of metaphor in reasoning. PLoS One, 6(2):e16782. Yulia Tsvetkov, Elena Mukomel, and Anatole Gershman. 2013. Cross-lingual metaphor detection using common semantic features. In The 1st Workshop on Metaphor in NLP 2013, page 45. Yulia Tsvetkov, Nathan Schneider, Dirk Hovy, Archna Bhatia, Manaal Faruqui, and Chris Dyer. 2014. Augmenting English adjective senses with supersenses. In Proc. of LREC. Joseph Turian, Lev Ratinov, Yoshua Bengio, and Dan Roth. 2009. A preliminary evaluation of word representations for named-entity recognition. In NIPS Workshop on Grammar Induction, Representation of Language and Language Learning, pages 1–8. Joseph Turian, Lev Ratinov, and Yoshua Bengio. 2010. Word representations: a simple and general method for semi-supervised learning. In Proc. of ACL, pages 384–394. Peter D. Turney, Yair Neuman, Dan Assaf, and Yohai Cohen. 2011. Literal and metaphorical sense identification through concrete and abstract context. In Proc. of EMNL, pages 680–690. Yorick Wilks. 1978. Making preferences more active. Artificial Intelligence, 11(3):197–223. Michael Wilson. 1988. MRC Psycholinguistic Database: Machine-usable dictionary, version 2.00. Behavior Research Methods, Instruments, & Computers, 20(1):6–10. 258
2014
24
Proceedings of the 52nd Annual Meeting of the Association for Computational Linguistics, pages 259–270, Baltimore, Maryland, USA, June 23-25 2014. c⃝2014 Association for Computational Linguistics Learning Word Sense Distributions, Detecting Unattested Senses and Identifying Novel Senses Using Topic Models Jey Han Lau,♠Paul Cook,♥Diana McCarthy,♦Spandana Gella,♥and Timothy Baldwin♥ ♠Dept of Philosophy, King’s College London ♥Dept of Computing and Information Systems, The University of Melbourne ♦University of Cambridge [email protected], [email protected], [email protected], [email protected], [email protected] Abstract Unsupervised word sense disambiguation (WSD) methods are an attractive approach to all-words WSD due to their non-reliance on expensive annotated data. Unsupervised estimates of sense frequency have been shown to be very useful for WSD due to the skewed nature of word sense distributions. This paper presents a fully unsupervised topic modelling-based approach to sense frequency estimation, which is highly portable to different corpora and sense inventories, in being applicable to any part of speech, and not requiring a hierarchical sense inventory, parsing or parallel text. We demonstrate the effectiveness of the method over the tasks of predominant sense learning and sense distribution acquisition, and also the novel tasks of detecting senses which aren’t attested in the corpus, and identifying novel senses in the corpus which aren’t captured in the sense inventory. 1 Introduction The automatic determination of word sense information has been a long-term pursuit of the NLP community (Agirre and Edmonds, 2006; Navigli, 2009). Word sense distributions tend to be Zipfian, and as such, a simple but surprisingly highaccuracy back-off heuristic for word sense disambiguation (WSD) is to tag each instance of a given word with its predominant sense (McCarthy et al., 2007). Such an approach requires knowledge of predominant senses; however, word sense distributions — and predominant senses too — vary from corpus to corpus. Therefore, methods for automatically learning predominant senses and sense distributions for specific corpora are required (Koeling et al., 2005; Lapata and Brew, 2004). In this paper, we propose a method which uses topic models to estimate word sense distributions. This method is in principle applicable to all parts of speech, and moreover does not require a parser, a hierarchical sense representation or parallel text. Topic models have been used for WSD in a number of studies (Boyd-Graber et al., 2007; Li et al., 2010; Lau et al., 2012; Preiss and Stevenson, 2013; Cai et al., 2007; Knopp et al., 2013), but our work extends significantly on this earlier work in focusing on the acquisition of prior word sense distributions (and predominant senses). Because of domain differences and the skewed nature of word sense distributions, it is often the case that some senses in a sense inventory will not be attested in a given corpus. A system capable of automatically finding such senses could reduce ambiguity, particularly in domain adaptation settings, while retaining rare but nevertheless viable senses. We further propose a method for applying our sense distribution acquisition system to the task of finding unattested senses — i.e., senses that are in the sense inventory but not attested in a given corpus. In contrast to the previous work of McCarthy et al. (2004a) on this topic which uses the sense ranking score from McCarthy et al. 
(2004b) to remove low-frequency senses from WordNet, we focus on finding senses that are unattested in the corpus on the premise that, given accurate disambiguation, rare senses in a corpus contribute to correct interpretation. Corpus instances of a word can also correspond to senses that are not present in a given sense inventory. This can be due to, for example, words taking on new meanings over time (e.g. the rela259 tively recent senses of tablet and swipe related to touchscreen computers) or domain-specific terms not being included in a more general-purpose sense inventory. A system for automatically identifying such novel senses — i.e. senses that are attested in the corpus but not in the sense inventory — would be a very valuable lexicographical tool for keeping sense inventories up-to-date (Cook et al., 2013). We further propose an application of our proposed method to the identification of such novel senses. In contrast to McCarthy et al. (2004b), the use of topic models makes this possible, using topics as a proxy for sense (Brody and Lapata, 2009; Yao and Durme, 2011; Lau et al., 2012). Earlier work on identifying novel senses focused on individual tokens (Erk, 2006), whereas our approach goes further in identifying groups of tokens exhibiting the same novel sense. 2 Background and Related Work There has been a considerable amount of research on representing word senses and disambiguating usages of words in context (WSD) as, in order to produce computational systems that understand and produce natural language, it is essential to have a means of representing and disambiguating word sense. WSD algorithms require word sense information to disambiguate token instances of a given ambiguous word, e.g. in the form of sense definitions (Lesk, 1986), semantic relationships (Navigli and Velardi, 2005) or annotated data (Zhong and Ng, 2010). One extremely useful piece of information is the word sense prior or expected word sense frequency distribution. This is important because word sense distributions are typically skewed (Kilgarriff, 2004), and systems do far better when they take bias into account (Agirre and Martinez, 2004). Typically, word frequency distributions are estimated with respect to a sense-tagged corpus such as SemCor (Miller et al., 1993), a 220,000 word corpus tagged with WordNet (Fellbaum, 1998) senses. Due to the expense of hand tagging, and sense distributions being sensitive to domain and genre, there has been some work on trying to estimate sense frequency information automatically (McCarthy et al., 2004b; Chan and Ng, 2005; Mohammad and Hirst, 2006; Chan and Ng, 2006). Much of this work has been focused on ranking word senses to find the predominant sense in a given corpus (McCarthy et al., 2004b; Mohammad and Hirst, 2006), which is a very powerful heuristic approach to WSD. Most WSD systems rely upon this heuristic for back-off in the absence of strong contextual evidence (McCarthy et al., 2007). McCarthy et al. (2004b) proposed a method which relies on distributionally similar words (nearest neighbours) associated with the target word in an automatically acquired thesaurus (Lin, 1998). The distributional similarity scores of the nearest neighbours are associated with the respective target word senses using a WordNet similarity measure, such as those proposed by Jiang and Conrath (1997) and Banerjee and Pedersen (2002). 
The word senses are ranked based on these similarity scores, and the most frequent sense is selected for the corpus that the distributional similarity thesaurus was trained over. As well as sense ranking for predominant sense acquisition, automatic estimates of sense frequency distribution can be very useful for WSD for training data sampling purposes (Agirre and Martinez, 2004), entropy estimation (Jin et al., 2009), and prior probability estimates, all of which can be integrated within a WSD system (Chan and Ng, 2005; Chan and Ng, 2006; Lapata and Brew, 2004). Various approaches have been adopted, such as normalizing sense ranking scores to obtain a probability distribution (Jin et al., 2009), using subcategorisation information as an indication of verb sense (Lapata and Brew, 2004) or alternatively using parallel text (Chan and Ng, 2005; Chan and Ng, 2006; Agirre and Martinez, 2004). The work of Boyd-Graber and Blei (2007) is highly related in that it extends the method of McCarthy et al. (2004b) to provide a generative model which assumes the words in a given document are generated according to the topic distribution appropriate for that document. They then predict the most likely sense for each word in the document based on the topic distribution and the words in context (“corroborators”), each of which, in turn, depends on the document’s topic distribution. Using this approach, they get comparable results to McCarthy et al. when context is ignored (i.e. using a model with one topic), and at most a 1% improvement on SemCor when they use more topics in order to take context into account. Since the results do not improve on McCarthy et al. as regards sense distribution acquisition irrespective of context, we will compare our model with that proposed by McCarthy et al. 260 Recent work on finding novel senses has tended to focus on comparing diachronic corpora (Sagi et al., 2009; Cook and Stevenson, 2010; Gulordava and Baroni, 2011) and has also considered topic models (Lau et al., 2012). In a similar vein, Peirsman et al. (2010) considered the identification of words having a sense particular to one language variety with respect to another (specifically Belgian and Netherlandic Dutch). In contrast to these studies, we propose a model for comparing a corpus with a sense inventory. Carpuat et al. (2013) exploit parallel corpora to identify words in domain-specific monolingual corpora with previously-unseen translations; the method we propose does not require parallel data. 3 Methodology Our methodology is based on the WSI system described in Lau et al. (2012),1 which has been shown (Lau et al., 2012; Lau et al., 2013a; Lau et al., 2013b) to achieve state-of-the-art results over the WSI tasks from SemEval-2007 (Agirre and Soroa, 2007), SemEval-2010 (Manandhar et al., 2010) and SemEval-2013 (Navigli and Vannella, 2013; Jurgens and Klapaftis, 2013). The system is built around a Hierarchical Dirichlet Process (HDP: Teh et al. (2006)), a non-parametric variant of a Latent Dirichlet Allocation topic model (Blei et al., 2003) where the model automatically optimises the number of topics in a fully-unsupervised fashion over the training data. To learn the senses of a target lemma, we train a single topic model per target lemma. 
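As a concrete illustration of this per-lemma setup, the following Python sketch trains one topic model over the usages of a single target lemma using gensim. It is only a sketch: a fixed-topic LdaModel stands in for the non-parametric HDP used by HDP-WSI, the toy usages and the function name induce_topics_for_lemma are invented for illustration, and preprocessing is reduced to pre-tokenised input.

# Minimal per-lemma topic modelling sketch (fixed-topic LDA standing in for HDP).
from gensim import corpora, models

def induce_topics_for_lemma(usages, num_topics=10):
    """usages: list of token lists, one per usage of the target lemma."""
    dictionary = corpora.Dictionary(usages)          # map tokens to integer ids
    bows = [dictionary.doc2bow(u) for u in usages]   # bag-of-words per usage
    lda = models.LdaModel(bows, id2word=dictionary, num_topics=num_topics)
    # Per-usage topic distributions (probabilistic sense assignments).
    usage_topics = [lda.get_document_topics(bow, minimum_probability=0.0)
                    for bow in bows]
    return lda, usage_topics

# Toy usages of the lemma "network" (tokenised, stopwords already removed).
usages = [
    ["computer", "network", "server", "internet", "access"],
    ["social", "network", "family", "relationship", "community"],
    ["rail", "network", "transport", "road", "service"],
]
lda, usage_topics = induce_topics_for_lemma(usages, num_topics=2)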
The system reads in a collection of usages of that lemma, and automatically induces topics (= senses) in the form of a multinomial distribution over words, and per-usage topic assignments (= probabilistic sense assignments) in the form of a multinomial distribution over topics. Following Lau et al. (2012), we assign one topic to each usage by selecting the topic that has the highest cumulative probability density, based on the topic allocations of all words in the context window for that usage.2 Note that in their original work, Lau et al. (2012) experimented with the use of features extracted from a dependency parser. Due to the computational overhead associated with these features, and the fact that the empirical impact of the features was found to be 1Based on the implementation available at: https:// github.com/jhlau/hdp-wsi 2This includes all words in the usage sentence except stopwords, which were filtered in the preprocessing step. marginal, we make no use of parser-based features in this paper.3 The induced topics take the form of word multinomials, and are often represented by the top-N words in descending order of conditional probability. We interpret each topic as a sense of the target lemma.4 To illustrate this, we give the example of topics induced by the HDP model for network in Table 1. We refer to this method as HDP-WSI henceforth.5 In predominant sense acquisition, the task is to learn, for each target lemma, the most frequently occurring word sense in a particular domain or corpus, relative to a predefined sense inventory. The WSI system provides us with a topic allocation per usage of a given word, from which we can derive a distribution of topics over usages and a predominant topic. In order to map this onto the predominant sense, we need to have some way of aligning a topic with a sense. We design our topic– sense alignment methodology with portability in mind — it should be applicable to any sense inventory. As such, our alignment methodology assumes only that we have access to a conventional sense gloss or definition for each sense, and does not rely on ontological/structural knowledge (e.g. the WordNet hierarchy). To compute the similarity between a sense and a topic, we first convert the words in the gloss/definition into a multinomial distribution over words, based on simple maximum likelihood estimation.6 We then calculate the Jensen– Shannon divergence between the multinomial distribution (over words) of the gloss and that of the topic, and convert the divergence value into a similarity score by subtracting it from 1. Formally, the similarity sense si and topic tj is: sim(si, tj) = 1 −JS(S∥T) (1) where S and T are the multinomial distributions 3For hyper-parameters α and γ, we used 0.1 for both. We did not tune the parameters, and opted to use the default parameters introduced in Teh et al. (2006). 4To avoid confusion, we will refer to the HDP-induced topics as topics, and reserve the term sense to denote senses in a sense inventory. 5The code used to learn predominant sense and run all experiments described in this paper is available at: https: //github.com/jhlau/predom_sense. 6Words are tokenised using OpenNLP and lemmatised with Morpha (Minnen et al., 2001). We additionally remove the target lemma, stopwords and words that are less than 3 characters in length. 
261 Topic Num Top-10 Terms 1 network support @card@ information research service group development community member 2 service @card@ road company transport rail area government network public 3 network social model system family structure analysis form relationship neural 4 network @card@ computer system service user access internet datum server 5 system network management software support corp company service application product 6 @card@ radio news television show bbc programme call think film 7 police drug criminal terrorist intelligence network vodafone iraq attack cell 8 network atm manager performance craigavon group conference working modelling assistant 9 root panos comenius etd unipalm lse brazil telephone xxx discuss Table 1: An example to illustrate the topics induced for network by the HDP model. The top-10 highest probability terms are displayed to represent each topic (@card@ denotes a tokenised cardinal number). over words for sense si and topic tj, respectively, and JS(X∥Y ) is the Jensen–Shannon divergence for distribution X and Y . To learn the predominant sense, we compute the prevalence score of each sense and take the sense with the highest prevalence score as the predominant sense. The prevalence score for a sense is computed by summing the product of its similarity scores with each topic (i.e. sim(si, tj)) and the prior probability of the topic in question (based on maximum likelihood estimation). Formally, the prevalence score of sense si is given as follows: prevalence(si) = T X j (sim(si, tj) × P(tj)) (2) = T X j sim(si, tj) × f(tj) PT k f(tk) ! where f(tj) is the frequency of topic tj (i.e. the number of usages assigned to topic tj), and T is the number of topics. The intuition behind the approach is that the predominant sense should be the sense that has relatively high similarity (in terms of lexical overlap) with high-probability topic(s). 4 WordNet Experiments We first test the proposed method over the tasks of predominant sense learning and sense distribution induction, using the WordNet-tagged dataset of Koeling et al. (2005), which is made up of 3 collections of documents: a domain-neutral corpus (BNC), and two domain-specific corpora (SPORTS and FINANCE). For each domain, annotators were asked to sense-annotate a random selection of sentences for each of 40 target nouns, based on WordNet v1.7. The predominant sense and distribution across senses for each target lemma was obtained by aggregating over the sense annotations. The authors evaluated their method in terms of WSD accuracy over a given corpus, based on assigning all instances of a target word with the predominant sense learned from that corpus. For the remainder of the paper, we denote their system as MKWC. To compare our system (HDP-WSI) with MKWC, we apply it to the three datasets of Koeling et al. (2005). For each dataset, we use HDP to induce topics for each target lemma, compute the similarity between the topics and the WordNet senses (Equation (1)), and rank the senses based on the prevalence scores (Equation (2)). 
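As a rough, self-contained illustration of Equations (1) and (2), the Python sketch below computes the gloss-topic similarity via Jensen-Shannon divergence and the resulting prevalence scores from per-topic usage frequencies. The toy word multinomials, the function names and the use of base-2 logarithms (which bound the JS divergence by 1) are choices made here for illustration; in the actual system the distributions come from the sense glosses and the HDP topics.

import math
from collections import Counter

def word_multinomial(tokens):
    """Maximum-likelihood multinomial over words from a token list."""
    counts = Counter(tokens)
    total = sum(counts.values())
    return {w: c / total for w, c in counts.items()}

def js_divergence(p, q):
    """Jensen-Shannon divergence between two word multinomials (base-2 logs, so JS <= 1)."""
    vocab = set(p) | set(q)
    m = {w: 0.5 * (p.get(w, 0.0) + q.get(w, 0.0)) for w in vocab}
    def kl(a, b):
        return sum(a[w] * math.log2(a[w] / b[w]) for w in a if a[w] > 0)
    return 0.5 * kl(p, m) + 0.5 * kl(q, m)

def similarity(sense_dist, topic_dist):
    """Equation (1): sim(s, t) = 1 - JS(S || T)."""
    return 1.0 - js_divergence(sense_dist, topic_dist)

def prevalence_scores(sense_dists, topic_dists, topic_freqs):
    """Equation (2): prevalence(s) = sum_j sim(s, t_j) * P(t_j), with P(t_j)
    estimated from the number of usages assigned to each topic."""
    total = sum(topic_freqs)
    return {
        s: sum(similarity(sd, topic_dists[j]) * (topic_freqs[j] / total)
               for j in range(len(topic_dists)))
        for s, sd in sense_dists.items()
    }

# Toy example: two senses of "network" and two induced topics.
senses = {
    "computer_sense": word_multinomial("computer system connected share data".split()),
    "social_sense": word_multinomial("group people exchange information socially".split()),
}
topics = [
    word_multinomial("network computer server internet access user".split()),
    word_multinomial("network social family community relationship".split()),
]
topic_freqs = [70, 30]  # number of usages assigned to each topic

scores = prevalence_scores(senses, topics, topic_freqs)
predominant = max(scores, key=scores.get)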
In addition to the WSD accuracy based on the predominant sense inferred from a particular corpus, we additionally compute: (1) AccUB, the upper bound for the first sense-based WSD accuracy (using the gold standard predominant sense for disambiguation);7 and (2) ERR, the error rate reduction between the accuracy for a given system (Acc) and the upper bound (AccUB), calculated as follows: ERR = 1 −AccUB −Acc AccUB Looking at the results in Table 2, we see little difference in the results for the two methods, with MKWC performing better over two of the datasets (BNC and SPORTS) and HDP-WSI performing better over the third (FINANCE), but all differences are small. Based on the McNemar’s Test with Yates correction for continuity, MKWC is significantly better over BNC and HDP-WSI is significantly better over FINANCE (p < 0.0001 in both cases), but the difference over SPORTS is not statistically significance (p > 0.1). Note that there is still much room for improvement with 7The upper bound for a WSD approach which tags all token occurrences of a given word with the same sense, as a first step towards context-sensitive unsupervised WSD. 262 Dataset FSCORPUS MKWC HDP-WSI AccUB Acc ERR Acc ERR BNC 0.524 0.407 (0.777) 0.376 (0.718) FINANCE 0.801 0.499 (0.623) 0.555 (0.693) SPORTS 0.774 0.437 (0.565) 0.422 (0.545) Table 2: WSD accuracy for MKWC and HDP-WSI on the WordNet-annotated datasets, as compared to the upper-bound based on actual first sense in the corpus (higher values indicate better performance; the best system in each row [other than the FSCORPUS upper bound] is indicated in boldface). Dataset MKWC HDP-WSI BNC 0.226 0.214 FINANCE 0.426 0.375 SPORTS 0.420 0.363 Table 3: Sense distribution evaluation of MKWC and HDP-WSI on the WordNet-annotated datasets, evaluated using JS divergence (lower values indicate better performance; the best system in each row is indicated in boldface). both systems, as we see in the gap between the upper bound (based on perfect determination of the first sense) and the respective system accuracies. Given that both systems compute a continuousvalued prevalence score for each sense of a target lemma, a distribution of senses can be obtained by normalising the prevalence scores across all senses. The predominant sense learning task of McCarthy et al. (2007) evaluates the ability of a method to identify only the head of this distribution, but it is also important to evaluate the full sense distribution (Jin et al., 2009). To this end, we introduce a second evaluation metric: the Jensen–Shannon (JS) divergence between the inferred sense distribution and the gold-standard sense distribution, noting that smaller values are better in this case, and that it is now theoretically possible to obtain a JS divergence of 0 in the case of a perfect estimate of the sense distribution. Results are presented in Table 3. HDP-WSI consistently achieves lower JS divergence, indicating that the distribution of senses that it finds is closer to the gold standard distribution. Testing for statistical significance over the paired JS divergence values for each lemma using the Wilcoxon signed-rank test, the result for FINANCE is significant (p < 0.05) but the results for the other two datasets are not (p > 0.1 in each case). 
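For clarity, the two evaluation measures can be written out directly. The short sketch below recomputes the ERR values of the FINANCE row of Table 2 from its Acc and AccUB entries, and shows how prevalence scores are normalised into the sense distribution that Table 3 compares against the gold standard via JS divergence (the js_divergence function is as in the earlier sketch).

def err(acc, acc_ub):
    """Error rate reduction relative to the first-sense upper bound:
    ERR = 1 - (acc_ub - acc) / acc_ub."""
    return 1.0 - (acc_ub - acc) / acc_ub

# Reproducing the ERR values for the FINANCE row of Table 2.
print(round(err(acc=0.499, acc_ub=0.801), 3))  # MKWC    -> 0.623
print(round(err(acc=0.555, acc_ub=0.801), 3))  # HDP-WSI -> 0.693

def normalise(prevalence):
    """Normalise prevalence scores across senses to obtain a sense distribution,
    which is then compared against the gold-standard distribution via JS
    divergence (lower is better), as in Table 3."""
    total = sum(prevalence.values())
    return {sense: score / total for sense, score in prevalence.items()}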
Dataset FSCORPUS FSDICT HDP-WSI AccUB Acc ERR Acc ERR UKWAC 0.574 0.387 (0.674) 0.514 (0.895) TWITTER 0.468 0.297 (0.635) 0.335 (0.716) Table 4: WSD accuracy for HDP-WSI on the Macmillan-annotated datasets, as compared to the upper-bound based on actual first sense in the corpus (higher values indicate better performance; the best system in each row [other than the FSCORPUS upper bound] is indicated in boldface). Dataset FSCORPUS FSDICT HDP-WSI UKWAC 0.210 0.393 0.156 TWITTER 0.259 0.472 0.171 Table 5: Sense distribution evaluation of HDPWSI on the Macmillan-annotated datasets as compared to corpus- and dictionary-based first sense methods, evaluated using JS divergence (lower values indicate better performance; the best system in each row is indicated in boldface). To summarise, the results for MKWC and HDPWSI are fairly even for predominant sense learning (each outperforms the other at a level of statistical significance over one dataset), but HDP-WSI is better at inducing the overall sense distribution. It is important to bear in mind that MKWC in these experiments makes use of full-text parsing in calculating the distributional similarity thesaurus, and the WordNet graph structure in calculating the similarity between associated words and different senses. Our method, on the other hand, uses no parsing, and only the synset definitions (and not the graph structure) of WordNet.8 The non-reliance on parsing is significant in terms of portability to text sources which are less amenable to parsing (such as Twitter: (Baldwin et al., 2013)), and the non-reliance on the graph structure of WordNet is significant in terms of portability to conventional “flat” sense inventories. While comparable results on a different dataset have been achieved with a proximity thesaurus (McCarthy et al., 2007) compared to a dependency one,9 it is not stated how 8McCarthy et al. (2004b) obtained good results with definition overlap, but their implementation uses the relation structure alongside the definitions (Banerjee and Pedersen, 2002). Iida et al. (2008) demonstrate that further extensions using distributional data are required when applying the method to resources without hierarchical relations. 9The thesauri used in the reimplementation of MKWC in this paper were obtained from http://webdocs.cs. ualberta.ca/˜lindek/downloads.htm. 263 wide a window is needed for the proximity thesaurus. This could be a significant issue with Twitter data, where context tends to be limited. In the next section, we demonstrate the robustness of the method in experimenting with two new datasets, based on Twitter and a web corpus, and the Macmillan English Dictionary. 5 Macmillan Experiments In our second set of experiments, we move to a new dataset (Gella et al., to appear) based on text from ukWaC (Ferraresi et al., 2008) and Twitter, and annotated using the Macmillan English Dictionary10 (henceforth “Macmillan”). 
For the purposes of this research, the choice of Macmillan is significant in that it is a conventional dictionary with sense definitions and examples, but no linking between senses.11 In terms of the original research which gave rise to the sense-tagged dataset, Macmillan was chosen over WordNet for reasons including: (1) the well-documented difficulties of sense tagging with fine-grained WordNet senses (Palmer et al., 2004; Navigli et al., 2007); (2) the regular update cycle of Macmillan (meaning it contains many recently-emerged senses); and (3) the finding in a preliminary sense-tagging task that it better captured Twitter usages than WordNet (and also OntoNotes: Hovy et al. (2006)). The dataset is made up of 20 target nouns which were selected to span the high- to mid-frequency range in both Twitter and the ukWaC corpus, and have at least 3 Macmillan senses. The average sense ambiguity of the 20 target nouns in Macmillan is 5.6 (but 12.3 in WordNet). 100 usages of each target noun were sampled from each of Twitter (from a crawl over the time period Jan 3–Feb 28, 2013 using the Twitter Streaming API) and ukWaC, after language identification using langid.py (Lui and Baldwin, 2012) and POS tagging (based on the CMU ARK Twitter POS tagger v2.0 (Owoputi et al., 2012) for Twitter, and the POS tags provided with the corpus for ukWaC). Amazon Mechanical Turk (AMT) was then used to 5-way sense-tag each usage relative to Macmillan, including allowing the annotators the option to label a usage as “Other” in instances where the usage was not captured by any of the Macmillan senses. After quality control over the annotators/annotations (see 10http://www.macmillandictionary.com/ 11Strictly speaking, there is limited linking in the form of sets of synonyms in Macmillan, but we choose to not use this information in our research. Gella et al. (to appear) for details), and aggregation of the annotations into a single sense per usage (possibly “Other”), there were 2000 sense-tagged ukWaC sentences and Twitter messages over the 20 target nouns. We refer to these two datasets as UKWAC and TWITTER henceforth. To apply our method to the two datasets, we use HDP-WSI to train a model for each target noun, based on the combined set of usages of that lemma in each of the two background corpora, namely the original Twitter crawl that gave rise to the TWITTER dataset, and all of ukWaC. 5.1 Learning Sense Distributions As in Section 4, we evaluate in terms of WSD accuracy (Table 4) and JS divergence over the gold-standard sense distribution (Table 5). We also present the results for: (a) a supervised baseline (“FSCORPUS”), based on the most frequent sense in the corpus; and (b) an unsupervised baseline (“FSDICT”), based on the first-listed sense in Macmillan. In each case, the sense distribution is based on allocating all probability mass for a given word to the single sense identified by the respective method. We first notice that, despite the coarser-grained senses of Macmillan as compared to WordNet, the upper bound WSD accuracy using Macmillan is comparable to that of the WordNet-based datasets over the balanced BNC, and quite a bit lower than that of the two domain corpora of Koeling et al. (2005). This suggests that both datasets are diverse in domain and content. In terms of WSD accuracy, the results over UKWAC (ERR = 0.895) are substantially higher than those for BNC, while those over TWITTER (ERR = 0.716) are comparable. 
The accuracy is significantly higher than the dictionary-based first sense baseline (FSDICT) over both datasets (McNemar’s test; p < 0.0001), and the ERR is also considerably higher than for the two domain datasets in Section 4 (FINANCE and SPORTS). One cause of difficulty in sense-modelling TWITTER is large numbers of missing senses, with 12.3% of usages in TWITTER and 6.6% in UKWAC having no corresponding Macmillan sense.12 This challenges the assumption built into the sense prevalence calculation that all topics will align to a preexisting sense, a point we return to in Section 5.2. 12The relative occurrence of unlisted/unclear senses in the datasets of Koeling et al. (2005) is comparable to UKWAC. 264 Dataset P R F UKWAC 0.73 0.85 0.74 TWITTER 0.56 0.88 0.65 Table 6: Evaluation of our method for identifying unattested senses, averaged over 10 runs of 10fold cross validation The JS divergence results for both datasets are well below (= better than) the results for all three WordNet-based datasets, and also superior to both the supervised and unsupervised first-sense baselines. Part of the reason for this improvement is simply that the average polysemy in Macmillan (5.6 senses per target lemma) is slightly less than in WordNet (6.7 senses per target lemma),13 making the task slightly easier in the Macmillan case. 5.2 Identification of Unattested Senses We observed in Section 5.1 that there are relatively frequent occurrences of usages (e.g. 12.3% for TWITTER) which aren’t captured by Macmillan. Conversely, there are also senses in Macmillan which aren’t attested in the annotated sample of usages. Specifically, of the 112 senses defined for the 20 target lemmas, 25 (= 22.3%) of the senses are not attested in the 2000 usages in either corpora. Given that our methodology computes a prevalence score for each sense, it can equally be applied to the detection of these unattested senses, and it is this task that we address in this section: the identification of senses that are defined in the sense inventory but not attested in a given corpus. Intuitively, an unused sense should have low similarity with the HDP induced topics. As such, we introduce sense-to-topic affinity, a measure that estimates how likely a sense is not attested in the corpus: st-affinity(si) = PT j sim(si, tj) PS k PT l sim(sk, tl) (3) where sim(si, tj) is carried over from Equation (1), and T and S represent the number of topics and senses, respectively. We treat the task of identification of unused senses as a binary classification problem, where the goal is to find a sense-to-topic affinity threshold below which a sense will be considered to 13Note that the set of lemmas differs between the respective datasets, so this isn’t an accurate reflection of the relative granularity of the two dictionaries. be unused. We pool together all the senses and run 10-fold cross validation to learn the threshold for identifying unused senses,14 evaluated using sense-level precision (P), recall (R) and F-score (F) at detecting unattested senses. We repeat the experiment 10 times (partitioning the items randomly into folds) and collect the mean precision, recall and F-scores across the 10 runs. We found encouraging results for the task, as detailed in Table 6. For the threshold, the average value with standard deviation is 0.092 ± 0.044 over UKWAC and 0.125±0.052 over TWITTER, indicating relative stability in the value of the threshold both internally within a dataset, and also across datasets. 
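A minimal sketch of the unattested-sense detector follows: it computes the sense-to-topic affinity of Equation (3) from a sense-topic similarity matrix and flags senses whose affinity falls below a threshold. The similarity values are invented for illustration, and the threshold of 0.10 is simply chosen to lie in the range of the learned values reported above; in the experiments the threshold is tuned by cross-validation.

def sense_to_topic_affinity(sim):
    """Equation (3): st-affinity(s_i) = sum_j sim(s_i, t_j) / sum_k sum_l sim(s_k, t_l).
    `sim` maps each sense to a list of similarities, one per topic."""
    grand_total = sum(sum(row) for row in sim.values())
    return {sense: sum(row) / grand_total for sense, row in sim.items()}

def unattested_senses(sim, threshold):
    """Senses whose affinity to the induced topics falls below the threshold
    are predicted to be unattested in the corpus."""
    affinity = sense_to_topic_affinity(sim)
    return {sense for sense, a in affinity.items() if a < threshold}

# Illustrative sense-topic similarity matrix (3 senses x 4 topics).
sim = {
    "sense_1": [0.62, 0.55, 0.48, 0.51],
    "sense_2": [0.40, 0.44, 0.58, 0.37],
    "sense_3": [0.08, 0.05, 0.07, 0.06],   # low overlap with every topic
}
print(unattested_senses(sim, threshold=0.10))   # -> {'sense_3'}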
5.3 Identification of Novel Senses In both TWITTER and UKWAC, we observed frequent occurrences of usages of our target nouns which didn’t map onto a pre-existing Macmillan sense. A natural question to ask is whether our method can be used to predict word senses that are missing from our sense inventory, and identify usages associated with each such missing sense. We will term these “novel senses”, and define “novel sense identification” to be the task of identifying new senses that are not recorded in the inventory but are seen in the corpus. An immediate complication in evaluating novel sense identification is that we are attempting to identify senses which explicitly aren’t in our sense inventory. This contrasts with the identification of unattested senses, e.g., where we were attempting to identify which of the known senses wasn’t observed in the corpus. Also, while we have annotations of “Other” usages in TWITTER and UKWAC, there is no real expectation that all such usages will correspond to the same sense: in practice, they are attributable to a myriad of effects such as incorporation in a non-compositional multiword expression, and errors in POS tagging (i.e. the usage not being nominal). As such, we can’t use the “Other” annotations to evaluate novel sense identification. The evaluation of systems for this task is a known challenge, which we address similarly to Erk (2006) by artificially synthesising novel senses through removal of senses from the sense inventory. In this way, even if we remove multiple senses for a given word, we still have access to information about which usages correspond to 14We used a fixed step and increment at steps of 0.001, up to the max value of st-affinity when optimising the threshold. 265 No. Lemmas with Relative Freq Threshold P R F a Removed Sense of Removed Sense Mean±stdev 20 0.0–0.2 0.052±0.009 0.35 0.42 0.36 9 0.2–0.4 0.089±0.024 0.24 0.59 0.29 6 0.4–0.6 0.061±0.004 0.63 0.64 0.63 Table 7: Classification of usages with novel sense for all target lemmas. No. Lemmas with Relative Freq Threshold P R F a Removed Sense of Removed Sense Mean±stdev 9 0.2–0.4 0.093±0.023 0.50 0.66 0.52 6 0.4–0.6 0.099±0.018 0.73 0.90 0.80 Table 8: Classification of usages with novel sense for target lemmas with a removed sense. which novel sense. An additional advantage of this procedure is that it allows us to control an important property of novel senses: their frequency of occurrence. In the experiments that follow, we randomly select senses for removal from three frequency bands: low, medium and high frequency senses. Frequency is defined by relative occurrence in the annotated usages: low = 0.0–0.2; medium = 0.2– 0.4; and high = 0.4–0.6. Note that we do not consider high-frequency senses with frequency higher than 0.6, as it is rare for a medium- to highfrequency word to take on a novel sense which is then the predominant sense in a given corpus. Note also that not all target lemmas will have a novel sense through synthesis, as they may have no senses that fall within the indicated bounds of relative occurrence (e.g. if > 60% of usages are a single sense). For example, only 6 of our 20 target nouns have senses which are candidates for highfrequency novel senses. As before, we treat the novel sense identification task as a classification problem, although with a significantly different formulation: we are no longer attempting to identify pre-existing senses, as novel senses are by definition not included in the sense inventory. 
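By symmetry with the previous sketch, the topic-to-sense affinity of Equation (4) normalises the similarity mass of each topic over all sense-topic pairs, and usages assigned to a topic with low affinity to every inventory sense are flagged as candidate instances of a novel sense. The similarity matrix, topic assignments and threshold below are again invented for illustration.

def topic_to_sense_affinity(sim, num_topics):
    """Equation (4): ts-affinity(t_j) = sum_i sim(s_i, t_j) / sum_l sum_k sim(s_k, t_l).
    `sim` maps each sense to a list of similarities, one per topic."""
    grand_total = sum(sum(row) for row in sim.values())
    return [sum(row[j] for row in sim.values()) / grand_total
            for j in range(num_topics)]

def novel_sense_usages(usage_topics, sim, num_topics, threshold):
    """Flag usages whose assigned topic aligns poorly with all inventory senses."""
    affinity = topic_to_sense_affinity(sim, num_topics)
    return [i for i, t in enumerate(usage_topics) if affinity[t] < threshold]

# Toy data: 2 senses x 3 topics, and per-usage topic assignments.
sim = {
    "sense_1": [0.60, 0.10, 0.05],
    "sense_2": [0.15, 0.55, 0.04],
}
usage_topics = [0, 1, 2, 2, 0]          # topic assigned to each of 5 usages
print(novel_sense_usages(usage_topics, sim, num_topics=3, threshold=0.10))  # -> [2, 3]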
Instead, we are seeking to identify clusters of usages which are instances of a novel sense, e.g. for presentation to a lexicographer as part of a dictionary update process (Rundell and Kilgarriff, 2011; Cook et al., 2013). That is, for each usage, we want to classify whether it is an instance of a given novel sense. A usage that corresponds to a novel sense should have a topic that does not align well with any of the pre-existing senses in the sense inventory. Based on this intuition, we introduce topicto-sense affinity to estimate the similarity of a topic to the set of senses, as follows: ts-affinity(tj) = PS i sim(si, tj) PT l PS k sim(sk, tl) (4) where, once again, sim(si, tj) is defined as in Equation (1), and T and S represent the number of topics and senses, respectively. Using topic-to-sense affinity as the sole feature, we pool together all instances and optimise the affinity feature to classify instances that have novel senses. Evaluation is done by computing the mean precision, recall and F-score across 10 separate runs; results are summarised in Table 7. Note that we evaluate only over UKWAC in this section, for ease of presentation. The results show that instances with highfrequency novel senses are more easily identifiable than instances with medium/low-frequency novel senses. This is unsurprising given that highfrequency senses have a higher probability of generating related topics (sense-related words are observed more frequently in the corpus), and as such are more easily identifiable. We are interested in understanding whether pooling all instances — instances from target lemmas that have a sense artificially removed and those that do not — impacted the results (recall that not all target lemmas have a removed sense). To that end, we chose to include only instances from lemmas with a removed sense, and repeated the experiment for the medium- and high-frequency novel sense condition (for the lowfrequency condition, all target lemmas have a novel sense). In other words, we are assuming knowledge of which words have novel sense, and the task is to identify specifically what the novel sense is, as represented by novel usages. Results are presented in Table 8. 266 No. of Lemmas with No. of Lemmas without Relative Freq Wilcoxon Rank Sum a Removed Sense a Removed Sense of Removed Sense p-value 10 0 0.0–0.2 0.4543 9 11 0.2–0.4 0.0391 6 14 0.4–0.6 0.0247 Table 9: Wilcoxon Rank Sum p-value results for testing target lemmas with removed sense vs. target lemmas without removed sense using novelty. From the results, we see that the F-scores improved notably. This reveals that an additional step is necessary to determine whether a target lemma has a potential novel sense before feeding its instances to learn which of them contains the usage of the novel sense. In the last experiment, we propose a new measure to tackle this: the identification of target lemmas that have a novel sense. We introduce novelty, a measure of the likelihood of a target lemma w having a novel sense: novelty(w) = min tj  max si sim(si, tj) f(tj)  (5) where f(tj) is the frequency of topic tj in the corpus. The intuition behind novelty is that a target lemma with a novel sense should have a (somewhat-)frequent topic that has low association with any sense. 
That we use the frequency rather than the probability of the topic here is deliberate, as topics with a higher raw number of occurrences (whether as a low-probability topic for a high-frequency word, or a high-probability topic for a low-frequency word) are indicative of a novel word sense. For each of our three datasets (with low-, medium- and high-frequency novel senses, respectively), we compute the novelty of the target lemmas and the p-value of a one-tailed Wilcoxon rank sum test to test if the two groups of lemmas (i.e. lemmas with a novel sense vs. lemmas without a novel sense) are statistically different.15 Results are presented in Table 9. We see that the novelty measure can readily identify target lemmas with high- and medium-frequency novel senses (p < 0.05), but the results are less promising for the low-frequency novel senses. 6 Discussion Our methodologies for the two proposed tasks of identifying unused and novel senses are simple 15Note that the number of words with low-frequency novel senses here is restricted to 10 (cf. 20 in Table 7) to ensure we have both positive and negative lemmas in the dataset. extensions to demonstrate the flexibility and robustness of our methodology. Future work could pursue a more sophisticated methodology, using non-linear combinations of sim(si, tj) for computing the affinity measures or multiple features in a supervised context. We contend, however, that these extensions are ultimately a preliminary demonstration to the flexibility and robustness of our methodology. A natural next step for this research would be to couple sense distribution estimation and the detection of unattested senses with evidence from the context, using topics or other information about the local context (e.g. Agirre and Soroa (2009)) to carry out unsupervised WSD of individual token occurrences of a given word. In summary, we have proposed a topic modelling-based method for estimating word sense distributions, based on Hierarchical Dirichlet Processes and the earlier work of Lau et al. (2012) on word sense induction, in probabilistically mapping the automatically-learned topics to senses in a sense inventory. We evaluated the ability of the method to learn predominant senses and induce word sense distributions, based on a broad range of datasets and two separate sense inventories. In doing so, we established that our method is comparable to the approach of McCarthy et al. (2007) at predominant sense learning, and superior at inducing word sense distributions. We further demonstrated the applicability of the method to the novel tasks of detecting word senses which are unattested in a corpus, and identifying novel senses which are found in a corpus but not captured in a word sense inventory. Acknowledgements We wish to thank the anonymous reviewers for their valuable comments. This research was supported in part by funding from the Australian Research Council. 267 References Eneko Agirre and Philip Edmonds, editors. 2006. Word Sense Disambiguation: Algorithms and Applications. Springer, Dordrecht, Netherlands. Eneko Agirre and David Martinez. 2004. Unsupervised WSD based on automatically retrieved examples: The importance of bias. In Proceedings of EMNLP 2004, pages 25–32, Barcelona, Spain. Eneko Agirre and Aitor Soroa. 2007. SemEval-2007 task 02: Evaluating word sense induction and discrimination systems. In Proceedings of the 4th International Workshop on Semantic Evaluations, pages 7–12, Prague, Czech Republic. Eneko Agirre and Aitor Soroa. 2009. 
Personalizing PageRank for word sense disambiguation. In Proceedings of the 12th Conference of the EACL (EACL 2009), pages 33–41, Athens, Greece. Timothy Baldwin, Paul Cook, Marco Lui, Andrew MacKinlay, and Li Wang. 2013. How noisy social media text, how diffrnt social media sources? In Proceedings of the 6th International Joint Conference on Natural Language Processing (IJCNLP 2013), pages 356–364, Nagoya, Japan. Satanjeev Banerjee and Ted Pedersen. 2002. An adapted Lesk algorithm for word sense disambiguation using WordNet. In Proceedings of the 3rd International Conference on Intelligent Text Processing and Computational Linguistics (CICLing-2002), pages 136–145, Mexico City, Mexico. David M. Blei, Andrew Y. Ng, and Michael I. Jordan. 2003. Latent Dirichlet allocation. Journal of Machine Learning Research, 3:993–1022. Jordan Boyd-Graber and David Blei. 2007. Putop: Turning predominant senses into a topic model for word sense disambiguation. In Proc. of the Fourth International Workshop on Semantic Evaluations (SemEval-2007), pages 277–281, Prague, Czech Republic. Jordan Boyd-Graber, David Blei, and Xiaojin Zhu. 2007. A topic model for word sense disambiguation. In Proc. of the 2007 Joint Conference on Empirical Methods in Natural Language Processing and Computational Natural Language Learning (EMNLP-CoNLL), pages 1024–1033, Prague, Czech Republic. Samuel Brody and Mirella Lapata. 2009. Bayesian word sense induction. In Proceedings of the 12th Conference of the EACL (EACL 2009), pages 103– 111, Athens, Greece. Jun Fu Cai, Wee Sun Lee, and Yee Whye Teh. 2007. NUS-ML: Improving word sense disambiguation using topic features. In Proc. of the Fourth International Workshop on Semantic Evaluations (SemEval-2007), pages 249–252, Prague, Czech Republic. Marine Carpuat, Hal Daum´e III, Katharine Henry, Ann Irvine, Jagadeesh Jagarlamudi, and Rachel Rudinger. 2013. SenseSpotting: Never let your parallel data tie you to an old domain. In Proc. of the 51st Annual Meeting of the Association for Computational Linguistics (ACL 2013), pages 1435–1445, Sofia, Bulgaria. Yee Seng Chan and Hwee Tou Ng. 2005. Word sense disambiguation with distribution estimation. In Proc. of the 19th International Joint Conference on Artificial Intelligence (IJCAI 2005), pages 1010– 1015, Edinburgh, UK. Yee Seng Chan and Hwee Tou Ng. 2006. Estimating class priors in domain adaptation for word sense disambiguation. In Proc. of the 21st International Conference on Computational Linguistics and 44th Annual Meeting of the Association for Computational Linguistics, pages 89–96, Sydney, Australia. Paul Cook and Suzanne Stevenson. 2010. Automatically identifying changes in the semantic orientation of words. In Proceedings of the 7th International Conference on Language Resources and Evaluation (LREC 2010), pages 28–34, Valletta, Malta. Paul Cook, Jey Han Lau, Michael Rundell, Diana McCarthy, and Timothy Baldwin. 2013. A lexicographic appraisal of an automatic approach for detecting new word senses. In Proceedings of eLex 2013, pages 49–65, Tallinn, Estonia. Katrin Erk. 2006. Unknown word sense detection as outlier detection. In Proc. of the Main Conference on Human Language Technology Conference of the North American Chapter of the Association of Computational Linguistics, pages 128–135, New York City, USA. Christiane Fellbaum, editor. 1998. WordNet: An Electronic Lexical Database. MIT Press, Cambridge, USA. Adriano Ferraresi, Eros Zanchetta, Marco Baroni, and Silvia Bernardini. 2008. 
Introducing and evaluating ukWaC, a very large web-derived corpus of English. In Proc. of the 4th Web as Corpus Workshop: Can we beat Google, pages 47–54, Marrakech, Morocco. Spandana Gella, Paul Cook, and Timothy Baldwin. to appear. One sense per tweeter ... and other lexical semantic tales of Twitter. In Proceedings of the 14th Conference of the EACL (EACL 2014), Gothenburg, Sweden. Kristina Gulordava and Marco Baroni. 2011. A distributional similarity approach to the detection of semantic change in the Google Books Ngram corpus. In Proceedings of the GEMS 2011 Workshop on GEometrical Models of Natural Language Semantics, pages 67–71, Edinburgh, UK. Eduard Hovy, Mitchell Marcus, Martha Palmer, Lance Ramshaw, and Ralph Weischedel. 2006. OntoNotes: The 90% solution. In Proceedings of 268 the Main Conference on Human Language Technology Conference of the North American Chapter of the Association of Computational Linguistics, pages 57–60, New York City, USA. Ryu Iida, Diana McCarthy, and Rob Koeling. 2008. Gloss-based semantic similarity metrics for predominant sense acquisition. In Proc. of the Third International Joint Conference on Natural Language Processing, pages 561–568. Jay Jiang and David Conrath. 1997. Semantic similarity based on corpus statistics and lexical taxonomy. In Proceedings on International Conference on Research in Computational Linguistics, pages 19–33, Taipei, Taiwan. Peng Jin, Diana McCarthy, Rob Koeling, and John Carroll. 2009. Estimating and exploiting the entropy of sense distributions. In Proceedings of the North American Chapter of the Association for Computational Linguistics – Human Language Technologies 2009 (NAACL HLT 2009): Short Papers, pages 233– 236, Boulder, USA. David Jurgens and Ioannis Klapaftis. 2013. Semeval2013 task 13: Word sense induction for graded and non-graded senses. In Proceedings of the 7th International Workshop on Semantic Evaluation (SemEval 2013), pages 290–299, Atlanta, USA. Adam Kilgarriff. 2004. How dominant is the commonest sense of a word? Technical Report ITRI-04-10, Information Technology Research Institute, University of Brighton. Johannes Knopp, Johanna V¨olker, and Simone Paolo Ponzetto. 2013. Topic modeling for word sense induction. In Proc. of the International Conference of the German Society for Computational Linguistics and Language Technology, pages 97–103, Darmstadt, Germany. Rob Koeling, Diana McCarthy, and John Carroll. 2005. Domain-specific sense distributions and predominant sense acquisition. In Proceedings of the 2005 Conference on Empirical Methods in Natural Language Processing (EMNLP 2005), pages 419– 426, Vancouver, Canada. Mirella Lapata and Chris Brew. 2004. Verb class disambiguation using informative priors. Computational Linguistics, 30(1):45–75. Jey Han Lau, Paul Cook, Diana McCarthy, David Newman, and Timothy Baldwin. 2012. Word sense induction for novel sense detection. In Proceedings of the 13th Conference of the EACL (EACL 2012), pages 591–601, Avignon, France. Jey Han Lau, Paul Cook, and Timothy Baldwin. 2013a. unimelb: Topic modelling-based word sense induction. In Proceedings of the 7th International Workshop on Semantic Evaluation (SemEval 2013), pages 307–311, Atlanta, USA. Jey Han Lau, Paul Cook, and Timothy Baldwin. 2013b. unimelb: Topic modelling-based word sense induction for web snippet clustering. In Proceedings of the 7th International Workshop on Semantic Evaluation (SemEval 2013), pages 217–221, Atlanta, USA. Michael Lesk. 1986. 
Automatic sense disambiguation using machine readable dictionaries: How to tell a pine cone from an ice cream cone. In Proceedings of the 1986 SIGDOC Conference, pages 24–26, Ontario, Canada. Linlin Li, Benjamin Roth, and Caroline Sporleder. 2010. Topic models for word sense disambiguation and token-based idiom detection. In Proc. of the 48th Annual Meeting of the Association for Computational Linguistics, pages 1138–1147, Uppsala, Sweden. Dekang Lin. 1998. Automatic retrieval and clustering of similar words. In Proceedings of the 36th Annual Meeting of the ACL and 17th International Conference on Computational Linguistics (COLING/ACL98), pages 768–774, Montreal, Canada. Marco Lui and Timothy Baldwin. 2012. langid.py: An off-the-shelf language identification tool. In Proceedings of the 50th Annual Meeting of the Association for Computational Linguistics (ACL 2012) Demo Session, pages 25–30, Jeju, Republic of Korea. Suresh Manandhar, Ioannis Klapaftis, Dmitriy Dligach, and Sameer Pradhan. 2010. SemEval-2010 Task 14: Word sense induction & disambiguation. In Proceedings of the 5th International Workshop on Semantic Evaluation, pages 63–68, Uppsala, Sweden. Diana McCarthy, Rob Koeling, Julie Weeds, and John Carroll. 2004a. Automatic identification of infrequent word senses. In Proc. of the 20th International Conference of Computational Linguistics, COLING2004, pages 1220–1226, Geneva, Switzerland. Diana McCarthy, Rob Koeling, Julie Weeds, and John Carroll. 2004b. Finding predominant senses in untagged text. In Proceedings of the 42nd Annual Meeting of the Association for Computational Linguistics (ACL 2004), pages 280–287, Barcelona, Spain. Diana McCarthy, Rob Koeling, Julie Weeds, and John Carroll. 2007. Unsupervised acquisition of predominant word senses. Computational Linguistics, 4(33):553–590. George A. Miller, Claudia Leacock, Randee Tengi, and Ross T. Bunker. 1993. A semantic concordance. In Proc. of the ARPA Workshop on Human Language Technology, pages 303–308. Guido Minnen, John Carroll, and Darren Pearce. 2001. Applied morphological processing of English. Natural Language Engineering, 7(3):207–223. 269 Saif Mohammad and Graeme Hirst. 2006. Determining word sense dominance using a thesaurus. In Proc. of EACL-2006, pages 121–128, Trento, Italy. Roberto Navigli and Daniele Vannella. 2013. SemEval-2013 task 11: Word sense induction and disambiguation within an end-user application. In Proceedings of the 7th International Workshop on Semantic Evaluation (SemEval 2013), pages 193– 201, Atlanta, USA. Roberto Navigli and Paola Velardi. 2005. Structural semantic interconnections: a knowledge-based approach to word sense disambiguation. IEEE Transactions on Pattern Analysis and Machine Intelligence, 27(7):1075–1088. Roberto Navigli, Kenneth C. Litkowski, and Orin Hargraves. 2007. SemEval-2007 task 07: Coarsegrained English all-words task. In Proceedings of the 4th International Workshop on Semantic Evaluations, pages 30–35, Prague, Czech Republic. Roberto Navigli. 2009. Word sense disambiguation: A survey. ACM Computing Surveys, 41(2). Olutobi Owoputi, Brendan O’Connor, Chris Dyer, Kevin Gimpel, and Nathan Schneider. 2012. Partof-speech tagging for Twitter: Word clusters and other advances. Technical Report CMU-ML-12107, Machine Learning Department, Carnegie Mellon University. Martha Palmer, Olga Babko-Malaya, and Hoa Trang Dang. 2004. Different sense granularities for different applications. 
In Proceedings of the HLT-NAACL 2004 Workshop: 2nd Workshop on Scalable Natural Language Understanding, pages 49–56, Boston, USA. Yves Peirsman, Dirk Geeraerts, and Dirk Speelman. 2010. The automatic identification of lexical variation between language varieties. Natural Language Engineering, 16(4):469–491. Judita Preiss and Mark Stevenson. 2013. Unsupervised domain tuning to improve word sense disambiguation. In Proc. of the 2013 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, pages 680–684, Atlanta, USA. Michael Rundell and Adam Kilgarriff. 2011. Automating the creation of dictionaries: where will it all end? In Fanny Meunier, Sylvie De Cock, Ga¨etanelle Gilquin, and Magali Paquot, editors, A Taste for Corpora. In honour of Sylviane Granger, pages 257–282. John Benjamins, Amsterdam, Netherlands. Eyal Sagi, Stefan Kaufmann, and Brady Clark. 2009. Semantic density analysis: Comparing word meaning across time and space. In Proceedings of the EACL 2009 Workshop on GEMS: GEometrical Models of Natural Language Semantics, pages 104– 111, Athens, Greece. Yee Whye Teh, Michael I. Jordan, Matthew J. Beal, and David M. Blei. 2006. Hierarchical Dirichlet processes. Journal of the American Statistical Association, 101:1566–1581. Xuchen Yao and Benjamin Van Durme. 2011. Nonparametric Bayesian word sense induction. In Proceedings of TextGraphs-6: Graph-based Methods for Natural Language Processing, pages 10–14, Portland, USA. Zhi Zhong and Hwee Tou Ng. 2010. It makes sense: A wide-coverage word sense disambiguation system for free text. In Proc. of the ACL 2010 System Demonstrations, pages 78–83, Uppsala, Sweden. 270
Proceedings of the 52nd Annual Meeting of the Association for Computational Linguistics, pages 271–281, Baltimore, Maryland, USA, June 23-25 2014. c⃝2014 Association for Computational Linguistics Learning to Automatically Solve Algebra Word Problems Nate Kushman†, Yoav Artzi‡, Luke Zettlemoyer‡, and Regina Barzilay† † Computer Science and Articial Intelligence Laboratory, Massachusetts Institute of Technology {nkushman, regina}@csail.mit.edu ‡ Computer Science & Engineering, University of Washington {yoav, lsz}@cs.washington.edu Abstract We present an approach for automatically learning to solve algebra word problems. Our algorithm reasons across sentence boundaries to construct and solve a system of linear equations, while simultaneously recovering an alignment of the variables and numbers in these equations to the problem text. The learning algorithm uses varied supervision, including either full equations or just the final answers. We evaluate performance on a newly gathered corpus of algebra word problems, demonstrating that the system can correctly answer almost 70% of the questions in the dataset. This is, to our knowledge, the first learning result for this task. 1 Introduction Algebra word problems concisely describe a world state and pose questions about it. The described state can be modeled with a system of equations whose solution specifies the questions’ answers. For example, Figure 1 shows one such problem. The reader is asked to infer how many children and adults were admitted to an amusement park, based on constraints provided by ticket prices and overall sales. This paper studies the task of learning to automatically solve such problems given only the natural language.1 Solving these problems requires reasoning across sentence boundaries to find a system of equations that concisely models the described semantic relationships. For example, in Figure 1, the total ticket revenue computation in the second equation summarizes facts about ticket prices and total sales described in the second, third, and fifth 1The code and data for this work are available at http://groups.csail.mit.edu/rbg/code/ wordprobs/. Word problem An amusement park sells 2 kinds of tickets. Tickets for children cost $1.50. Adult tickets cost $4. On a certain day, 278 people entered the park. On that same day the admission fees collected totaled $792. How many children were admitted on that day? How many adults were admitted? Equations x + y = 278 1.5x + 4y = 792 Solution x = 128 y = 150 Figure 1: An example algebra word problem. Our goal is to map a given problem to a set of equations representing its algebraic meaning, which are then solved to get the problem’s answer. sentences. Furthermore, the first equation models an implicit semantic relationship, namely that the children and adults admitted are non-intersecting subsets of the set of people who entered the park. Our model defines a joint log-linear distribution over full systems of equations and alignments between these equations and the text. The space of possible equations is defined by a set of equation templates, which we induce from the training examples, where each template has a set of slots. Number slots are filled by numbers from the text, and unknown slots are aligned to nouns. For example, the system in Figure 1 is generated by filling one such template with four specific numbers (1.5, 4, 278, and 792) and aligning two nouns (“Tickets” in “Tickets for children”, and “tickets” in “Adult tickets”). 
These inferred correspondences are used to define cross-sentence features that provide global cues to the model. For instance, in our running example, the string 271 pairs (“$1.50”, “children”) and (“$4”,“adults”) both surround the word “cost,” suggesting an output equation with a sum of two constant-variable products. We consider learning with two different levels of supervision. In the first scenario, we assume access to each problem’s numeric solution (see Figure 1) for most of the data, along with a small set of seed examples labeled with full equations. During learning, a solver evaluates competing hypotheses to drive the learning process. In the second scenario, we are provided with a full system of equations for each problem. In both cases, the available labeled equations (either the seed set, or the full set) are abstracted to provide the model’s equation templates, while the slot filling and alignment decisions are latent variables whose settings are estimated by directly optimizing the marginal data log-likelihood. The approach is evaluated on a new corpus of 514 algebra word problems and associated equation systems gathered from Algebra.com. Provided with full equations during training, our algorithm successfully solves over 69% of the word problems from our test set. Furthermore, we find the algorithm can robustly handle weak supervision, achieving more than 70% of the above performance when trained exclusively on answers. 2 Related Work Our work is related to three main areas of research: situated semantic interpretation, information extraction, and automatic word problem solvers. Situated Semantic Interpretation There is a large body of research on learning to map natural language to formal meaning representations, given varied forms of supervision. Reinforcement learning can be used to learn to read instructions and perform actions in an external world (Branavan et al., 2009; Branavan et al., 2010; Vogel and Jurafsky, 2010). Other approaches have relied on access to more costly annotated logical forms (Zelle and Mooney, 1996; Thompson and Mooney, 2003; Wong and Mooney, 2006; Zettlemoyer and Collins, 2005; Kwiatkowski et al., 2010). These techniques have been generalized more recently to learn from sentences paired with indirect feedback from a controlled application. Examples include question answering (Clarke et al., 2010; Cai and Yates, 2013a; Cai and Yates, 2013b; Berant et al., 2013; Kwiatkowski et al., 2013), dialog systems (Artzi and Zettlemoyer, 2011), robot instruction (Chen and Mooney, 2011; Chen, 2012; Kim and Mooney, 2012; Matuszek et al., 2012; Artzi and Zettlemoyer, 2013), and program executions (Kushman and Barzilay, 2013; Lei et al., 2013). We focus on learning from varied supervision, including question answers and equation systems, both can be obtained reliably from annotators with no linguistic training and only basic math knowledge. Nearly all of the above work processed single sentences in isolation. Techniques that consider multiple sentences typically do so in a serial fashion, processing each in turn with limited cross-sentence reasoning (Branavan et al., 2009; Zettlemoyer and Collins, 2009; Chen and Mooney, 2011; Artzi and Zettlemoyer, 2013). We focus on analyzing multiple sentences simultaneously, as is necessary to generate the global semantic representations common in domains such as algebra word problems. 
Information Extraction Our approach is related to work on template-based information extraction, where the goal is to identify instances of event templates in text and extract their slot fillers. Most work has focused on the supervised case, where the templates are manually defined and data is labeled with alignment information, e.g. (Grishman et al., 2005; Maslennikov and Chua, 2007; Ji and Grishman, 2008; Reichart and Barzilay, 2012). However, some recent work has studied the automatic induction of the set of possible templates from data (Chambers and Jurafsky, 2011; Ritter et al., 2012). In our approach, systems of equations are relatively easy to specify, providing a type of template structure, and the alignment of the slots in these templates to the text is modeled primarily with latent variables during learning. Additionally, mapping to a semantic representation that can be executed allows us to leverage weaker supervision during learning. Automatic Word Problem Solvers Finally, there has been research on automatically solving various types of mathematical word problems. The dominant existing approach is to hand engineer rule-based systems to solve math problem in specific domains (Mukherjee and Garain, 2008; Lev et al., 2004). Our focus is on learning a model for the end-to-end task of solving word problems given only a training corpus of questions paired with equations or answers. 272 Derivation 1 Word problem An amusement park sells 2 kinds of tickets. Tickets for children cost $ 1.50 . Adult tickets cost $ 4 . On a certain day, 278 people entered the park. On that same day the admission fees collected totaled $ 792 . How many children were admitted on that day? How many adults were admitted? Aligned template u1 1 + u1 2 −n1 = 0 n2 × u2 1 + n3 × u2 2 −n4 = 0 Instantiated equations x + y −278 = 0 1.5x + 4y −792 = 0 Answer x = 128 y = 150 Derivation 2 Word problem A motorist drove 2 hours at one speed and then for 3 hours at another speed. He covered a distance of 252 kilometers. If he had traveled 4 hours at the first speed and 1 hour at the second speed , he would have covered 244 kilometers. Find two speeds? Aligned template n1 × u1 1 + n2 × u1 2 −n3 = 0 n4 × u2 1 + n5 × u2 2 −n6 = 0 Instantiated equations 2x + 3y −252 = 0 4x + 1y −244 = 0 Answer x = 48 y = 52 Figure 2: Two complete derivations for two different word problems. Derivation 1 shows an alignment where two instances of the same slot are aligned to the same word (e.g., u1 1 and u2 1 both are aligned to “Tickets”). Derivation 2 includes an alignment where four identical nouns are each aligned to different slot instances in the template (e.g., the first “speed” in the problem is aligned to u1 1). 3 Mapping Word Problems to Equations We define a two step process to map word problems to equations. First, a template is selected to define the overall structure of the equation system. Next, the template is instantiated with numbers and nouns from the text. During inference we consider these two steps jointly. Figure 2 shows both steps for two derivations. The template dictates the form of the equations in the system and the type of slots in each: u slots represent unknowns and n slots are for numbers that must be filled from the text. In Derivation 1, the selected template has two unknown slots, u1 and u2, and four number slots, n1 to n4. Slots can be shared between equations, for example, the unknown slots u1 and u2 in the example appear in both equations. 
A slot may have different instances, for example u^1_1 and u^2_1 are the two instances of u_1 in the example. We align each slot instance to a word in the problem. Each number slot n is aligned to a number, and each unknown slot u is aligned to a noun. For example, Derivation 1 aligns the number 278 to n_1, 1.50 to n_2, 4 to n_3, and 792 to n_4. It also aligns both instances of u_1 (i.e., u^1_1 and u^2_1) to "Tickets", and both instances of u_2 to "tickets". In contrast, in Derivation 2, instances of the same unknown slot (e.g., u^1_1 and u^2_1) are aligned to two different words in the problem (different occurrences of the word "speed"). This allows for a tighter mapping between the natural language and the system template, where the words aligned to the first equation in the template come from the first two sentences, and the words aligned to the second equation come from the third. Given an alignment, the template can then be instantiated: each number slot n is replaced with the aligned number, and each unknown slot u with a variable. This output system of equations is then automatically solved to generate the final answer.

3.1 Derivations

Definitions Let X be the set of all word problems. A word problem x ∈ X is a sequence of k words ⟨w_1, …, w_k⟩. Also, define an equation template t to be a formula A = B, where A and B are expressions. An expression A is one of the following:
• A number constant f.
• A number slot n.
• An unknown slot u.
• An application of a mathematical relation R to two expressions (e.g., n_1 × u_1).

We define a system template T to be a set of l equation templates {t_0, …, t_l}. 𝒯 is the set of all system templates. A slot may occur more than once in a system template, to allow variables to be reused in different equations. We denote a specific instance i of a slot, u for example, as u^i. For brevity, we omit the instance index when a slot appears only once. To capture a correspondence between the text of x and a template T, we define an alignment p to be a set of pairs (w, s), where w is a token in x and s is a slot instance in T. Given the above definitions, an equation e can be constructed from a template t where each number slot n is replaced with a real number, each unknown slot u is replaced with a variable, and each number constant f is kept as is. We call the process of turning a template into an equation template instantiation. Similarly, an equation system E is a set of l equations {e_0, …, e_l}, which can be constructed by instantiating each of the equation templates in a system template T. Finally, an answer a is a tuple of real numbers. We define a derivation y from a word problem to an answer as a tuple (T, p, a), where T is the selected system template, p is an alignment between T and x, and a is the answer generated by instantiating T using x through p and solving the generated equations. Let Y be the set of all derivations.

The Space of Possible Derivations We aim to map each word problem x to an equation system E. The space of equation systems considered is defined by the set of possible system templates 𝒯 and the words in the original problem x that are available for filling slots. In practice, we generate 𝒯 from the training data, as described in Section 4.1. Given a system template T ∈ 𝒯, we create an alignment p between T and x. The set of possible alignment pairs is constrained as follows: each number slot n ∈ T can be aligned to any number in the text; a number word can only be aligned to a single slot n, and must be aligned to all instances of that slot. Additionally, an unknown slot instance u ∈ T can only be aligned to a noun word. A complete derivation's alignment pairs all slots in T with words in x.

[Figure 3: The first example problem and selected system template from Figure 2 (u^1_1 + u^1_2 − n_1 = 0 and n_2 × u^2_1 + n_3 × u^2_2 − n_4 = 0) with all potential aligned words marked. Nouns (boldfaced) may be aligned to unknown slot instances u^j_i, and number words (highlighted) may be aligned to number slots n_i.]

Figure 3 illustrates the space of possible alignments for the first problem and system template from Figure 2. Nouns (shown in boldface) can be aligned to any of the unknown slot instances in the selected template (u^1_1, u^2_1, u^1_2, and u^2_2 for the template selected). Numbers (highlighted) can be aligned to any of the number slots (n_1, n_2, n_3, and n_4 in the template).

3.2 Probabilistic Model

Due to the ambiguity in selecting the system template and alignment, there will be many possible derivations y ∈ Y for each word problem x ∈ X. We discriminate between competing analyses using a log-linear model, which has a feature function φ : X × Y → R^d and a parameter vector θ ∈ R^d. The probability of a derivation y given a problem x is defined as:

p(y | x; θ) = exp(θ · φ(x, y)) / ∑_{y′ ∈ Y} exp(θ · φ(x, y′))

Section 6 defines the full set of features used. The inference problem at test time requires us to find the most likely answer a given a problem x, assuming the parameters θ are known:

f(x) = argmax_a p(a | x; θ)

Here, the probability of the answer is marginalized over template selection and alignment:

p(a | x; θ) = ∑_{y ∈ Y s.t. AN(y) = a} p(y | x; θ)   (1)

where AN(y) extracts the answer a out of derivation y. In this way, the distribution over derivations y is modeled as a latent variable. We use a beam search inference procedure to approximately compute Equation 1, as described in Section 5.

4 Learning

To learn our model, we need to induce the structure of system templates in 𝒯 and estimate the model parameters θ.

4.1 Template Induction

It is possible to generate system templates 𝒯 when provided access to a set of n training examples {(x_i, E_i) : i = 1, …, n}, where x_i is a word problem and E_i is a set of equations. We generalize each E to a system template T by (a) replacing each variable with an unknown slot, and (b) replacing each number mentioned in the text with a number slot. Numbers not mentioned in the problem text remain in the template as constants. This allows us to solve problems that require numbers that are implied by the problem semantics rather than appearing directly in the text, such as the percent problem in Figure 4.

4.2 Parameter Estimation

For parameter estimation, we assume access to n training examples {(x_i, V_i) : i = 1, …, n}, each containing a word problem x_i and a validation function V_i. The validation function V : Y → {0, 1} maps a derivation y ∈ Y to 1 if it is correct, or 0 otherwise. We can vary the validation function to learn from different types of supervision. In Section 8, we will use validation functions that check whether the derivation y has either (1) the correct system of equations E, or (2) the correct answer a.
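As a concrete illustration of how an aligned template is turned into an answer and then validated, the sketch below instantiates the linear template of Derivation 1 (Figure 2) and applies an answer-based validation function. This is a schematic rendering rather than our actual implementation: the function names are ours, the sketch is restricted to linear systems with two unknowns, and equation-based validation additionally requires the canonicalization described in Section 6, which is not shown here.

```python
import numpy as np

def instantiate_and_solve(coefficients, constants):
    """Instantiate a two-equation linear template and solve it.

    coefficients: 2x2 matrix of coefficients on the unknown slots (u_1, u_2),
                  already filled in from the aligned number slots.
    constants:    right-hand-side constants taken from the aligned numbers.
    Returns the answer tuple (one value per unknown slot).
    """
    A = np.array(coefficients, dtype=float)
    b = np.array(constants, dtype=float)
    return tuple(float(v) for v in np.linalg.solve(A, b))

def validate_by_answer(predicted, reference, tol=1e-6):
    """Answer-based validation V(y): every number in the reference answer
    must appear in the predicted answer; ordering is ignored."""
    remaining = list(predicted)
    for ref in reference:
        match = next((p for p in remaining if abs(p - ref) < tol), None)
        if match is None:
            return 0
        remaining.remove(match)
    return 1

# Derivation 1 from Figure 2: x + y = 278 and 1.5x + 4y = 792.
answer = instantiate_and_solve([[1.0, 1.0], [1.5, 4.0]], [278.0, 792.0])
print(answer)                                       # (128.0, 150.0)
print(validate_by_answer(answer, (150.0, 128.0)))   # 1
```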
Also, using different types of validation functions on different subsets of the data enables semi-supervised learning. This approach is related to Artzi and Zettlemoyer (2013).

[Figure 4 — Word problem: "A chemist has a solution that is 18% alcohol and one that is 50% alcohol. He wants to make 80 liters of a 30% solution. How many liters of the 18% solution should he add? How many liters of the 30% solution should he add?" Labeled equations: 18 × 0.01 × x + 50 × 0.01 × y = 30 × 0.01 × 80 and x + y = 80. Induced template system: n_1 × 0.01 × u^1_1 + n_2 × 0.01 × u^1_2 = n_3 × 0.01 × n_4 and u^2_1 + u^2_2 = n_5. Caption: During template induction, we automatically detect the numbers in the problem (highlighted in the original figure) to generalize the labeled equations to templates. Numbers not present in the text are considered part of the induced template.]

We estimate θ by maximizing the conditional log-likelihood of the data, marginalizing over all valid derivations:

O = ∑_i log ∑_{y ∈ Y s.t. V_i(y) = 1} p(y | x_i; θ)

We use L-BFGS (Nocedal and Wright, 2006) to optimize the parameters. The gradient of the individual parameter θ_j is given by:

∂O/∂θ_j = ∑_i E_{p(y | x_i, V_i(y)=1; θ)}[φ_j(x_i, y)] − E_{p(y | x_i; θ)}[φ_j(x_i, y)]   (2)

Section 5 describes how we approximate the two terms of the gradient using beam search.

5 Inference

Computing the normalization constant for Equation 1 requires summing over all templates and all possible ways to instantiate them. This results in a search space exponential in the number of slots in the largest template in 𝒯, the set of available system templates. Therefore, we approximate this computation using beam search. We initialize the beam with all templates in 𝒯 and iteratively align slots from the templates in the beam to words in the problem text. For each template, the next slot to be considered is selected according to a predefined canonicalized ordering for that template. After each iteration we prune the beam to keep the top-k partial derivations according to the model score. When pruning the beam, we allow at most l partial derivations for each template, to ensure that a small number of templates do not monopolize the beam. We continue this process until all templates in the beam are fully instantiated. During learning we compute the second term in the gradient (Equation 2) using our beam search approximation. Depending on the available validation function V (as defined in Section 4.2), we can also accurately prune the beam for the computation of the first term of the gradient. Specifically, when assuming access to labeled equations, we can constrain the search to consider only partial hypotheses that could possibly be completed to produce the labeled equations.

6 Model Details

Template Canonicalization There are many syntactically different but semantically equivalent ways to express a given system of equations. For example, the phrase "John is 3 years older than Bill" can be written as j = b + 3 or j − 3 = b. To avoid such ambiguity, we canonicalize templates into a normal form representation. We perform this canonicalization by obtaining the symbolic solution for the unknown slots in terms of the number slots and constants using the mathematical solver Maxima (Maxima, 2014).

Slot Signature In a template like s_1 + s_2 = s_3, the slot s_1 is distinct from the slot s_2, but we would like them to share many of the features used in deciding their alignment. To facilitate this, we generate signatures for each slot and slot pair.
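The following sentences spell out what these signatures encode; purely as an illustration, one possible encoding is sketched below. The data layout and names are ours rather than those of our implementation, and the example at the end mirrors the behaviour described next for Derivation 1.

```python
def term_shape(term, slot):
    # A term's "shape" from the viewpoint of one slot: the types ('n' or 'u')
    # of the other slots it is combined with, e.g. n2 x u1 -> ('u',) for n2.
    return tuple(sorted(s[0] for s in term if s != slot))

def slot_signature(template_id, equations, slot, eq_idx, scope="system"):
    """Signature of one slot instance.  `equations` lists each equation as a
    list of terms, each term a tuple of slot names, e.g.
    [[("u1",), ("u2",), ("n1",)], [("n2", "u1"), ("n3", "u2"), ("n4",)]]."""
    shapes = tuple(sorted(term_shape(t, slot)
                          for t in equations[eq_idx] if slot in t))
    if scope == "system":
        return ("sys", template_id, eq_idx, slot[0], shapes)
    return ("eq", slot[0], shapes)

def pair_signature(template_id, equations, a, eq_a, b, eq_b, scope="system"):
    # The pair signature concatenates the two slot signatures and records
    # whether the two slots share a term of the same equation.
    shared_term = eq_a == eq_b and any(a in t and b in t for t in equations[eq_a])
    return (slot_signature(template_id, equations, a, eq_a, scope),
            slot_signature(template_id, equations, b, eq_b, scope),
            shared_term)

# In the Derivation 1 template, n2 and n3 receive identical slot signatures,
# while the pairs (n2, u1) and (n3, u1) receive different pair signatures.
eqs = [[("u1",), ("u2",), ("n1",)], [("n2", "u1"), ("n3", "u2"), ("n4",)]]
assert slot_signature(0, eqs, "n2", 1) == slot_signature(0, eqs, "n3", 1)
assert pair_signature(0, eqs, "n2", 1, "u1", 1) != pair_signature(0, eqs, "n3", 1, "u1", 1)
```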
The signature for a slot indicates the system of equations it appears in, the specific equation it is in, and the terms of the equation it is a part of. Pairwise slot signatures concatenate the signatures for the two slots as well as indicating which terms are shared. This allows, for example, n_2 and n_3 in Derivation 1 in Figure 2 to have the same signature, while the pairs ⟨n_2, u_1⟩ and ⟨n_3, u_1⟩ have different ones. To share features across templates, slot and slot-pair signatures are generated both for the full template and for each of the constituent equations.

Features The features φ(x, y) are computed for a derivation y and problem x and cover all derivation decisions, including template and alignment selection. When required, we use standard tools to generate part-of-speech tags, lemmatizations, and dependency parses to compute features (in our experiments these are generated using the Stanford parser; de Marneffe et al., 2006). For each number word in y we also identify the closest noun in the dependency parse. For example, the noun for 278 in Derivation 1, Figure 2 would be "people." The features are calculated based on these nouns, rather than the number words. We use four types of features: document level features, features that look at a single slot entry, features that look at pairs of slot entries, and features that look at the numeric solutions. Table 1 lists all the features used. Unless otherwise noted, when computing slot and slot pair features, a separate feature is generated for each of the signature types discussed earlier.

Table 1: The features, divided into categories.
  Document level: Unigrams; Bigrams.
  Single slot: Has the same lemma as a question object; Is a question object; Is in a question sentence; Is equal to one or two (for numbers); Word lemma X nearby constant.
  Slot pair: Dep. path contains: Word; Dep. path contains: Dep. Type; Dep. path contains: Word X Dep. Type; Are the same word instance; Have the same lemma; In the same sentence; In the same phrase; Connected by a preposition; Numbers are equal; One number is larger than the other; Equivalent relationship.
  Solution: Is solution all positive; Is solution all integer.

Document level features Oftentimes the natural language in x will contain words or phrases which are indicative of a certain template, but are not associated with any of the words aligned to slots in the template. For example, the word "chemist" might indicate a template like the one seen in Figure 4. We include features that connect each template with the unigrams and bigrams in the word problem. We also include an indicator feature for each system template, providing a bias for its use.

Single Slot Features The natural language x always contains one or more questions or commands indicating the queried quantities. For example, the first problem in Figure 2 asks "How many children were admitted on that day?" The queried quantities, the number of children in this case, must be represented by an unknown in the system of equations. We generate a set of features which look at both the word overlap and the noun phrase overlap between slot words and the objects of a question or command sentence. We also compute a feature indicating whether a slot is filled from a word in a question sentence. Additionally, algebra problems frequently use phrases such as "2 kinds of tickets" (e.g., Figure 2). These numbers do not typically appear in the equations. To account for this, we add a single feature indicating whether a number is one or two.
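A hedged sketch of how the document-level and single-slot indicator features described so far might be collected into a sparse feature map is given below; all function names, feature strings, and inputs are illustrative rather than those of our feature extractor.

```python
from collections import Counter

def document_features(problem_words, template_id):
    feats = Counter()
    feats["template=%d" % template_id] += 1          # bias feature for this template
    for w in problem_words:                          # unigram x template features
        feats["template=%d_unigram=%s" % (template_id, w.lower())] += 1
    for w1, w2 in zip(problem_words, problem_words[1:]):   # bigram x template features
        feats["template=%d_bigram=%s_%s" % (template_id, w1.lower(), w2.lower())] += 1
    return feats

def single_slot_features(signature, lemma, question_object_lemmas,
                         in_question_sentence, number_value=None):
    feats = Counter()
    sig = str(signature)                             # keyed by the slot's signature
    if lemma in question_object_lemmas:
        feats[sig + "_same_lemma_as_question_object"] += 1
    if in_question_sentence:
        feats[sig + "_in_question_sentence"] += 1
    if number_value is not None and number_value in (1.0, 2.0):
        feats[sig + "_number_is_one_or_two"] += 1
    return feats

# Tiny illustrative call (inputs are made up, not taken from the corpus):
feats = document_features(["How", "many", "children", "were", "admitted"], template_id=3)
feats.update(single_slot_features(("sys", 3, 0, "u"), "child",
                                  {"child", "adult"}, in_question_sentence=True))
```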
Lastly, many templates contain constants which are identifiable from words used in nearby slots. For example, in Figure 4 the constant 0.01 is related to the use of "%" in the text. To capture such usage, we include a set of lexicalized features which concatenate the word lemma with nearby constants in the equation. These features do not include the slot signature.

Slot Pair Features The majority of features we compute account for relationships between slot words. This includes features that trigger for various equivalence relations between the words themselves, as well as features of the dependency path between them. We also include features that look at the numerical relationship of two numbers, where the numeric values of the unknowns are generated by solving the system of equations. This helps recognize that, for example, the total of a sum is typically larger than each of the (typically positive) summands. Additionally, we also have a single feature looking at shared relationships between pairs of slots. For example, in Figure 2 the relationship between "tickets for children" and "$1.50" is "cost". Similarly, the relationship between "Adult tickets" and "$4" is also "cost". Since the actual nature of this relationship is not important, this feature is not lexicalized; instead, it is only triggered for the presence of equality. We consider two cases: subject-object relationships where the intervening verb is equal, and noun-to-preposition object relationships where the intervening preposition is equal.

Solution Features By grounding our semantics in math, we are able to include features which look at the final answer, a, to learn which answers are reasonable for the algebra problems we typically see. For example, the solution to many, but not all, of the problems involves the size of some set of objects which must be both positive and integer.

7 Experimental Setup

Dataset We collected a new dataset of algebra word problems from Algebra.com, a crowdsourced tutoring website. The questions were posted by students for members of the community to respond with solutions. Therefore, the problems are highly varied, and are taken from real problems given to students. We heuristically filtered the data to get only linear algebra questions which did not require any explicit background knowledge. From these we randomly chose a set of 1024 questions. As the questions are posted to a web forum, the posts often contained additional comments which were not part of the word problems, and the solutions are embedded in long freeform natural language descriptions. To clean the data we asked Amazon Mechanical Turk workers to extract from the text: the algebra word problem itself, the solution equations, and the numeric answer. We manually verified both the equations and the numbers to ensure they were correct. To ensure each problem type is seen at least a few times in the training data, we removed the infrequent problem types. Specifically, we induced the system template from each equation system, as described in Section 4.1, and removed all problems for which the associated system template appeared less than 6 times in the dataset. This left us with 514 problems. Table 2 provides the data statistics.

Table 2: Dataset statistics.
  # problems: 514
  # sentences: 1616
  # words: 19357
  Vocabulary size: 2352
  Mean words per problem: 37
  Mean sentences per problem: 3.1
  Mean nouns per problem: 13.4
  # unique equation systems: 28
  Mean slots per system: 7
  Mean derivations per problem: 4M
Forms of Supervision We consider both semi-supervised and supervised learning. In the semi-supervised scenario, we assume access to the numerical answers of all problems in the training corpus and to a small number of problems paired with full equation systems. To select which problems to annotate with equations, we identified the five most common types of questions in the data and annotated a randomly sampled question of each type. 5EQ+ANS uses this form of weak supervision. To show the benefit of using the weakly supervised data, we also provide results for a baseline scenario 5EQ, where the training data includes only the five seed questions annotated with equation systems. In the fully supervised scenario ALLEQ, we assume access to full equation systems for the entire training set.

Evaluation Protocol We run all our experiments using 5-fold cross-validation. Since our model generates a solution for every problem, we report only accuracy. We report two metrics: equation accuracy to measure how often the system generates the correct equation system, and answer accuracy to evaluate how often the generated numerical answer is correct. When comparing equations, we avoid spurious differences by canonicalizing the equation system, as described in Section 6. To compare answer tuples we disregard the ordering and require each number appearing in the reference answer to appear in the generated answer.

Parameters and Solver In our experiments we set k in our beam search algorithm (Section 5) to 200, and l to 20. We run the L-BFGS computation for 50 iterations. We regularize our learning objective using the L2-norm and a λ value of 0.1. The set of mathematical relations supported by our implementation is {+, −, ×, /}. Our implementation uses the Gaussian Elimination function in the Efficient Java Matrix Library (EJML) (Abeles, 2014) to generate answers given a set of equations.

8 Results

8.1 Impact of Supervision

Table 3 summarizes the results. As expected, having access to the full system of equations (ALLEQ) at training time results in the best learned model, with nearly 69% accuracy. However, training from primarily answer annotations (5EQ+ANS) results in performance which is almost 70% of ALLEQ, demonstrating the value of weakly supervised data. In contrast, 5EQ, which cannot use this weak supervision, performs much worse.

Table 3: Cross-validation accuracy results for various forms of supervision.
              Equation accuracy   Answer accuracy
  5EQ               20.4               20.8
  5EQ+ANS           45.7               46.1
  ALLEQ             66.1               68.7

Table 4: Performance on different template frequencies for ALLEQ.
  Template frequency   Equation accuracy   Answer accuracy   % of data
  ≤ 10                      43.6                50.8            25.5
  11–15                     46.6                45.1            10.5
  16–20                     44.2                52.0            11.3
  > 20                      85.7                86.1            52.7

8.2 Performance and Template Frequency

To better understand the results, we also measured equation accuracy as a function of the frequency of each equation template in the data set. Table 4 reports results for ALLEQ after grouping the problems into four different frequency bins. We can see that the system correctly answers more than 85% of the question types which occur frequently while still achieving more than 50% accuracy on those that occur relatively infrequently. We do not include template frequency results for 5EQ+ANS since in this setup our system is given only the top five most common templates. This limited set of templates covers only those questions in the > 20 bin, or about 52% of the data.
However, on this subset 5EQ+ANS performs very well, answering 88% of them correctly, which is approximately the same as the 86% achieved by ALLEQ. Thus, while the weak supervision is not helpful in generating the space of possible equations, it is very helpful in learning to generate the correct answer when given an appropriate space of equations.

8.3 Ablation Analysis

Table 5 shows ablation results for each group of features. The results along the diagonal show the performance when a single group of features is ablated, while the off-diagonal numbers show the performance when two groups of features are ablated together.

Table 5: Cross-validation accuracy results with different feature groups ablated for ALLEQ. Results are for answer accuracy, which is 68.7% without any features ablated.
                  w/o pair   w/o document   w/o solution   w/o single
  w/o pair          42.8         25.7           19.0          39.6
  w/o document       –           63.8           50.4          57.6
  w/o solution       –            –             63.6          62.0
  w/o single         –            –              –            65.9

We can see that all of the features contribute to the overall performance, and that the pair features are the most important, followed by the document and solution features. We also see that the pair features can compensate for the absence of other features. For example, the performance drops only slightly when either the document or solution features are removed in isolation. However, the drop is much more dramatic when they are removed along with the pair features.

8.4 Qualitative Error Analysis

We examined our system output on one fold of ALLEQ and identified two main classes of errors. The first, accounting for approximately one quarter of the cases, includes mistakes where more background or world knowledge might have helped. For example, Problem 1 in Figure 5 requires understanding the relation between the dimensions of a painting, and how this relation is maintained when the painting is printed, and Problem 2 relies on understanding concepts of commerce, including cost, sale price, and profit. While these relationships could be learned in our model with enough data, as they are for percentage problems (e.g., Figure 4), various outside resources, such as knowledge bases (e.g., Freebase) or distributional statistics from a large text corpus, might help us learn them with less training data. The second category, which accounts for about half of the errors, includes mistakes that stem from compositional language. For example, the second sentence in Problem 3 in Figure 5 could generate the equation 2x − y = 5, with the phrase "twice of one of them" generating the expression 2x. Given the typically shallow nesting, it is possible to learn templates for these cases given enough data, and in the future it might also be possible to develop new, cross-sentence semantic parsers to enable better generalization from smaller datasets.

Figure 5: Examples of problems our system does not solve correctly.
  (1) A painting is 10 inches tall and 15 inches wide. A print of the painting is 25 inches tall, how wide is the print in inches?
  (2) A textbook costs a bookstore 44 dollars, and the store sells it for 55 dollars. Find the amount of profit based on the selling price.
  (3) The sum of two numbers is 85. The difference of twice of one of them and the other one is 5. Find both numbers.
  (4) The difference between two numbers is 6. If you double both numbers, the sum is 36. Find the two numbers.

9 Conclusion

We presented an approach for automatically learning to solve algebra word problems.
Our algorithm constructs systems of equations, while aligning their variables and numbers to the problem text. Using a newly gathered corpus we measured the effects of various forms of weak supervision on performance. To the best of our knowledge, we present the first learning result for this task. There are still many opportunities to improve the reported results, and extend the approach to related domains. We would like to develop techniques to learn compositional models of meaning for generating new equations. Furthermore, the general representation of mathematics lends itself to many different domains including geometry, physics, and chemistry. Eventually, we hope to extend the techniques to synthesize even more complex structures, such as computer programs, from natural language. Acknowledgments The authors acknowledge the support of Battelle Memorial Institute (PO#300662) and NSF (grant IIS-0835652). We thank Nicholas FitzGerald, the MIT NLP group, the UW NLP group and the ACL reviewers for their suggestions and comments. Any opinions, findings, conclusions, or recommendations expressed in this paper are those of the authors, and do not necessarily reflect the views of the funding organizations. 279 References Peter Abeles. 2014. Efficient java matrix library. https://code.google.com/p/efficient -java-matrix-library/. Yoav Artzi and Luke Zettlemoyer. 2011. Bootstrapping semantic parsers from conversations. In Proceedings of the Conference on Empirical Methods in Natural Language Processing. Yoav Artzi and Luke Zettlemoyer. 2013. Weakly supervised learning of semantic parsers for mapping instructions to actions. Transactions of the Association for Computational Linguistics. Jonathan Berant, Andrew Chou, Roy Frostig, and Percy Liang. 2013. Semantic parsing on freebase from question-answer pairs. In Proceedings of the Conference on Empirical Methods in Natural Language Processing. S.R.K Branavan, Harr Chen, Luke Zettlemoyer, and Regina Barzilay. 2009. Reinforcement learning for mapping instructions to actions. In Proceedings of the Annual Meeting of the Association for Computational Linguistics. S.R.K Branavan, Luke Zettlemoyer, and Regina Barzilay. 2010. Reading between the lines: Learning to map high-level instructions to commands. In Proceedings of the Annual Meeting of the Association for Computational Linguistics. Qingqing Cai and Alexander Yates. 2013a. Largescale semantic parsing via schema matching and lexicon extension. In Proceedings of the Annual Meeting of the Association for Computational Linguistics. Qingqing Cai and Alexander Yates. 2013b. Semantic parsing freebase: Towards open-domain semantic parsing. In Proceedings of the Joint Conference on Lexical and Computational Semantics. Nathanael Chambers and Dan Jurafsky. 2011. Template-based information extraction without the templates. In Proceedings of the Annual Meeting of the Association for Computational Linguistics. David Chen and Raymond Mooney. 2011. Learning to interpret natural language navigation instructions from observations. In Proceedings of the Conference on Artificial Intelligence. David Chen. 2012. Fast online lexicon learning for grounded language acquisition. In Proceedings of the Annual Meeting of the Association for Computational Linguistics. James Clarke, Dan Goldwasser, Ming-Wei Chang, and Dan Roth. 2010. Driving semantic parsing from the world’s response. In Proceedings of the Conference on Computational Natural Language Learning. Association for Computational Linguistics. 
Marie-Catherine de Marneffe, Bill MacCartney, and Christopher Manning. 2006. Generating typed dependency parses from phrase structure parses. In Proceedings of the Conference on Language Resources and Evaluation. Ralph Grishman, David Westbrook, and Adam Meyers. 2005. NYUs English ACE 2005 System Description. In Proceedings of the Automatic Content Extraction Evaluation Workshop. Heng Ji and Ralph Grishman. 2008. Refining event extraction through cross-document inference. In Proceedings of the Annual Meeting of the Association for Computational Linguistics. Joohyun Kim and Raymond Mooney. 2012. Unsupervised pcfg induction for grounded language learning with highly ambiguous supervision. In Proceedings of the Conference on Empirical Methods in Natural Language Processing. Nate Kushman and Regina Barzilay. 2013. Using semantic unification to generate regular expressions from natural language. In Proceeding of the Annual Meeting of the North American Chapter of the Association for Computational Linguistics. Tom Kwiatkowski, Luke Zettlemoyer, Sharon Goldwater, and Mark Steedman. 2010. Inducing probabilistic ccg grammars from logical form with higherorder unification. In Proceedings of the Conference on Empirical Methods on Natural Language Processing. Tom Kwiatkowski, Eunsol Choi, Yoav Artzi, and Luke Zettlemoyer. 2013. Scaling semantic parsers with on-the-fly ontology matching. In Proceedings of Empirical Methods in Natural Language Processing. Tao Lei, Fan Long, Regina Barzilay, and Martin Rinard. 2013. From natural language specifications to program input parsers. In Proceeding of the Association for Computational Linguistics. Iddo Lev, Bill MacCartney, Christopher Manning, and Roger Levy. 2004. Solving logic puzzles: From robust processing to precise semantics. In Proceedings of the Workshop on Text Meaning and Interpretation. Association for Computational Linguistics. Mstislav Maslennikov and Tat-Seng Chua. 2007. A multi-resolution framework for information extraction from free text. In Proceedings of the Annual Meeting of the Association for Computational Linguistics. Cynthia Matuszek, Nicholas FitzGerald, Luke Zettlemoyer, Liefeng Bo, and Dieter Fox. 2012. A joint model of language and perception for grounded attribute learning. In Proceedings of the International Conference on Machine Learning. Maxima. 2014. Maxima, a computer algebra system. version 5.32.1. 280 Anirban Mukherjee and Utpal Garain. 2008. A review of methods for automatic understanding of natural language mathematical problems. Artificial Intelligence Review, 29(2). Jorge Nocedal and Stephen Wright. 2006. Numerical optimization, series in operations research and financial engineering. Springer, New York. Roi Reichart and Regina Barzilay. 2012. Multi-event extraction guided by global constraints. In Proceedings of the Conference of the North American Chapter of the Association for Computational Linguistics. Alan Ritter, Mausam, Oren Etzioni, and Sam Clark. 2012. Open domain event extraction from twitter. In Proceedings of the Conference on Knowledge Discovery and Data Mining. Cynthia Thompson and Raymond Mooney. 2003. Acquiring word-meaning mappings for natural language interfaces. Journal of Artificial Intelligence Research, 18(1). Adam Vogel and Dan Jurafsky. 2010. Learning to follow navigational directions. In Proceedings of the Annual Meeting of the Association for Computational Linguistics. Yuk Wah Wong and Raymond Mooney. 2006. Learning for semantic parsing with statistical machine translation. 
In Proceedings of the Annual Meeting of the North American Chapter of the Association of Computational Linguistics. Association for Computational Linguistics. John Zelle and Raymond Mooney. 1996. Learning to parse database queries using inductive logic programming. In Proceedings of the Conference on Artificial Intelligence. Luke Zettlemoyer and Michael Collins. 2005. Learning to map sentences to logical form: Structured classification with probabilistic categorial grammars. In Proceedings of the Conference on Uncertainty in Artificial Intelligence. Luke Zettlemoyer and Michael Collins. 2009. Learning context-dependent mappings from sentences to logical form. In Proceedings of the Joint Conference of the Association for Computational Linguistics and International Joint Conference on Natural Language Processing. 281
Proceedings of the 52nd Annual Meeting of the Association for Computational Linguistics, pages 282–292, Baltimore, Maryland, USA, June 23-25 2014. c⃝2014 Association for Computational Linguistics Modelling function words improves unsupervised word segmentation Mark Johnson1,2, Anne Christophe3,4, Katherine Demuth2,6 and Emmanuel Dupoux3,5 1 Department of Computing, Macquarie University, Sydney, Australia 2 Santa Fe Institute, Santa Fe, New Mexico, USA 3 Ecole Normale Sup´erieure, Paris, France 4 Centre National de la Recherche Scientifique, Paris, France 5 Ecole des Hautes Etudes en Sciences Sociales, Paris, France 6 Department of Linguistics, Macquarie University, Sydney, Australia Abstract Inspired by experimental psychological findings suggesting that function words play a special role in word learning, we make a simple modification to an Adaptor Grammar based Bayesian word segmentation model to allow it to learn sequences of monosyllabic “function words” at the beginnings and endings of collocations of (possibly multi-syllabic) words. This modification improves unsupervised word segmentation on the standard BernsteinRatner (1987) corpus of child-directed English by more than 4% token f-score compared to a model identical except that it does not special-case “function words”, setting a new state-of-the-art of 92.4% token f-score. Our function word model assumes that function words appear at the left periphery, and while this is true of languages such as English, it is not true universally. We show that a learner can use Bayesian model selection to determine the location of function words in their language, even though the input to the model only consists of unsegmented sequences of phones. Thus our computational models support the hypothesis that function words play a special role in word learning. 1 Introduction Over the past two decades psychologists have investigated the role that function words might play in human language acquisition. Their experiments suggest that function words play a special role in the acquisition process: children learn function words before they learn the vast bulk of the associated content words, and they use function words to help identify context words. The goal of this paper is to determine whether computational models of human language acquisition can provide support for the hypothesis that function words are treated specially in human language acquisition. We do this by comparing two computational models of word segmentation which differ solely in the way that they model function words. Following Elman et al. (1996) and Brent (1999) our word segmentation models identify word boundaries from unsegmented sequences of phonemes corresponding to utterances, effectively performing unsupervised learning of a lexicon. For example, given input consisting of unsegmented utterances such as the following: j u w ɑ n t t u s i ð ə b ʊ k a word segmentation model should segment this as ju wɑnt tu si ðə bʊk, which is the IPA representation of “you want to see the book”. We show that a model equipped with the ability to learn some rudimentary properties of the target language’s function words is able to learn the vocabulary of that language more accurately than a model that is identical except that it is incapable of learning these generalisations about function words. 
This suggests that there are acquisition advantages to treating function words specially that human learners could take advantage of (at least to the extent that they are learning similar generalisations as our models), and thus supports the hypothesis that function words are treated specially in human lexical acquisition. As a reviewer points out, we present no evidence that children use function words in the way that our model does, and we want to emphasise we make no such claim. While absolute accuracy is not directly relevant to the main point of the paper, we note that the models that learn generalisations about function words perform unsupervised word segmentation at 92.5% token f-score on the standard BernsteinRatner (1987) corpus, which improves the previous state-of-the-art by more than 4%. As a reviewer points out, the changes we make to our models to incorporate function words can be viewed as “building in” substantive information about possible human languages. The model 282 that achieves the best token f-score expects function words to appear at the left edge of phrases. While this is true for languages such as English, it is not true universally. By comparing the posterior probability of two models — one in which function words appear at the left edges of phrases, and another in which function words appear at the right edges of phrases — we show that a learner could use Bayesian posterior probabilities to determine that function words appear at the left edges of phrases in English, even though they are not told the locations of word boundaries or which words are function words. This paper is structured as follows. Section 2 describes the specific word segmentation models studied in this paper, and the way we extended them to capture certain properties of function words. The word segmentation experiments are presented in section 3, and section 4 discusses how a learner could determine whether function words occur on the left-periphery or the rightperiphery in the language they are learning. Section 5 concludes and describes possible future work. The rest of this introduction provides background on function words, the Adaptor Grammar models we use to describe lexical acquisition and the Bayesian inference procedures we use to infer these models. 1.1 Psychological evidence for the role of function words in word learning Traditional descriptive linguistics distinguishes function words, such as determiners and prepositions, from content words, such as nouns and verbs, corresponding roughly to the distinction between functional categories and lexical categories of modern generative linguistics (Fromkin, 2001). Function words differ from content words in at least the following ways: 1. there are usually far fewer function word types than content word types in a language 2. function word types typically have much higher token frequency than content word types 3. function words are typically morphologically and phonologically simple (e.g., they are typically monosyllabic) 4. function words typically appear in peripheral positions of phrases (e.g., prepositions typically appear at the beginning of prepositional phrases) 5. each function word class is associated with specific content word classes (e.g., determiners and prepositions are associated with nouns, auxiliary verbs and complementisers are associated with main verbs) 6. 
semantically, content words denote sets of objects or events, while function words denote more complex relationships over the entities denoted by content words 7. historically, the rate of innovation of function words is much lower than the rate of innovation of content words (i.e., function words are typically “closed class”, while content words are “open class”) Properties 1–4 suggest that function words might play a special role in language acquisition because they are especially easy to identify, while property 5 suggests that they might be useful for identifying lexical categories. The models we study here focus on properties 3 and 4, in that they are capable of learning specific sequences of monosyllabic words in peripheral (i.e., initial or final) positions of phrase-like units. A number of psychological experiments have shown that infants are sensitive to the function words of their language within their first year of life (Shi et al., 2006; Hall´e et al., 2008; Shafer et al., 1998), often before they have experienced the “word learning spurt”. Crucially for our purpose, infants of this age were shown to exploit frequent function words to segment neighboring content words (Shi and Lepage, 2008; Hall´e et al., 2008). In addition, 14 to 18-month-old children were shown to exploit function words to constrain lexical access to known words - for instance, they expect a noun after a determiner (Cauvet et al., 2014; Kedar et al., 2006; Zangl and Fernald, 2007). In addition, it is plausible that function words play a crucial role in children’s acquisition of more complex syntactic phenomena (Christophe et al., 2008; Demuth and McCullough, 2009), so it is interesting to investigate the roles they might play in computational models of language acquisition. 1.2 Adaptor grammars Adaptor grammars are a framework for Bayesian inference of a certain class of hierarchical nonparametric models (Johnson et al., 2007b). They define distributions over the trees specified by a context-free grammar, but unlike probabilistic context-free grammars, they “learn” distributions over the possible subtrees of a user-specified set of “adapted” nonterminals. (Adaptor grammars are non-parametric, i.e., not characterisable by a finite 283 set of parameters, if the set of possible subtrees of the adapted nonterminals is infinite). Adaptor grammars are useful when the goal is to learn a potentially unbounded set of entities that need to satisfy hierarchical constraints. As section 2 explains in more detail, word segmentation is such a case: words are composed of syllables and belong to phrases or collocations, and modelling this structure improves word segmentation accuracy. Adaptor Grammars are formally defined in Johnson et al. (2007b), which should be consulted for technical details. Adaptor Grammars (AGs) are an extension of Probabilistic Context-Free Grammars (PCFGs), which we describe first. A Context-Free Grammar (CFG) G = (N, W, R, S) consists of disjoint finite sets of nonterminal symbols N and terminal symbols W, a finite set of rules R of the form A →α where A ∈N and α ∈(N ∪W)⋆, and a start symbol S ∈N. (We assume there are no “ϵ-rules” in R, i.e., we require that |α| ≥1 for each A →α ∈R). 
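As a purely illustrative rendering of this definition, a CFG can be written down directly as data; the toy grammar below is ours and plays no role in the experiments. The uniform rule choice in the generator is only a placeholder for the rule probabilities θ introduced by the PCFG defined next.

```python
import random

# Toy CFG: nonterminals N, terminals W, rules R (A -> alpha), start symbol S.
N = {"S", "NP", "VP"}
W = {"she", "sings", "dances"}
R = [("S", ("NP", "VP")),
     ("NP", ("she",)),
     ("VP", ("sings",)),
     ("VP", ("dances",))]
S = "S"

def well_formed():
    # Every rule A -> alpha must have A in N and a non-empty alpha over N ∪ W.
    return all(a in N and len(alpha) >= 1 and all(x in N or x in W for x in alpha)
               for a, alpha in R)

def generate(symbol=S):
    """Expand a symbol to a terminal string, choosing uniformly among its rules;
    a PCFG replaces this uniform choice with the probabilities theta."""
    if symbol in W:
        return [symbol]
    alpha = random.choice([rhs for lhs, rhs in R if lhs == symbol])
    return [w for x in alpha for w in generate(x)]

print(well_formed())            # True
print(" ".join(generate()))     # e.g. "she dances"
```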
A Probabilistic Context-Free Grammar (PCFG) is a quintuple (N, W, R, S, θ) where N, W, R and S are the nonterminals, terminals, rules and start symbol of a CFG respectively, and θ is a vector of non-negative reals indexed by R that satisfy ∑ α∈RA θA →α = 1 for each A ∈N, where RA = {A →α : A →α ∈R} is the set of rules expanding A. Informally, θA →α is the probability of a node labelled A expanding to a sequence of nodes labelled α, and the probability of a tree is the product of the probabilities of the rules used to construct each non-leaf node in it. More precisely, for each X ∈N ∪W a PCFG associates distributions GX over the set of trees TX generated by X as follows: If X ∈W (i.e., if X is a terminal) then GX is the distribution that puts probability 1 on the single-node tree labelled X. If X ∈N (i.e., if X is a nonterminal) then: GX = ∑ X →B1...Bn∈RX θX →B1...BnTDX(GB1, . . . , GBn) (1) where RX is the subset of rules in R expanding nonterminal X ∈N, and: TDX(G1, . . . , Gn) ( . X . t1 . tn . . . . ) = n ∏ i=1 Gi(ti). That is, TDX(G1, . . . , Gn) is a distribution over the set of trees TX generated by nonterminal X, where each subtree ti is generated independently from Gi. The PCFG generates the distribution GS over the set of trees TS generated by the start symbol S; the distribution over the strings it generates is obtained by marginalising over the trees. In a Bayesian PCFG one puts Dirichlet priors Dir(α) on the rule probability vector θ, such that there is one Dirichlet parameter αA →α for each rule A →α ∈R. There are Markov Chain Monte Carlo (MCMC) and Variational Bayes procedures for estimating the posterior distribution over rule probabilities θ and parse trees given data consisting of terminal strings alone (Kurihara and Sato, 2006; Johnson et al., 2007a). PCFGs can be viewed as recursive mixture models over trees. While PCFGs are expressive enough to describe a range of linguisticallyinteresting phenomena, PCFGs are parametric models, which limits their ability to describe phenomena where the set of basic units, as well as their properties, are the target of learning. Lexical acqusition is an example of a phenomenon that is naturally viewed as non-parametric inference, where the number of lexical entries (i.e., words) as well as their properties must be learnt from the data. It turns out there is a straight-forward modification to the PCFG distribution (1) that makes it suitably non-parametric. As Johnson et al. (2007b) explain, by inserting a Dirichlet Process (DP) or Pitman-Yor Process (PYP) into the generative mechanism (1) the model “concentrates” mass on a subset of trees (Teh et al., 2006). Specifically, an Adaptor Grammar identifies a subset A ⊆N of adapted nonterminals. In an Adaptor Grammar the unadapted nonterminals N \ A expand via (1), just as in a PCFG, but the distributions of the adapted nonterminals A are “concentrated” by passing them through a DP or PYP: HX = ∑ X →B1...Bn∈RX θX →B1...BnTDX(GB1, . . . , GBn) GX = PYP(HX, aX, bX) Here aX and bX are parameters of the PYP associated with the adapted nonterminal X. As Goldwater et al. (2011) explain, such Pitman-Yor Processes naturally generate power-law distributed data. Informally, Adaptor Grammars can be viewed as caching entire subtrees of the adapted nonterminals. Roughly speaking, the probability of generating a particular subtree of an adapted nonterminal is proportional to the number of times that subtree has been generated before. 
This “rich get 284 richer” behaviour causes the distribution of subtrees to follow a power-law (the power is specified by the aX parameter of the PYP). The PCFG rules expanding an adapted nonterminal X define the “base distribution” of the associated DP or PYP, and the aX and bX parameters determine how much mass is reserved for “new” trees. There are several different procedures for inferring the parse trees and the rule probabilities given a corpus of strings: Johnson et al. (2007b) describe a MCMC sampler and Cohen et al. (2010) describe a Variational Bayes procedure. We use the MCMC procedure here since this has been successfully applied to word segmentation problems in previous work (Johnson, 2008). 2 Word segmentation with Adaptor Grammars Perhaps the simplest word segmentation model is the unigram model, where utterances are modeled as sequences of words, and where each word is a sequence of segments (Brent, 1999; Goldwater et al., 2009). A unigram model can be expressed as an Adaptor Grammar with one adapted nonterminal Word (we indicate adapted nonterminals by underlining them in grammars here; regular expressions are expanded into right-branching productions). Sentence →Word+ (2) Word →Phone+ (3) The first rule (2) says that a sentence consists of one or more Words, while the second rule (3) states that a Word consists of a sequence of one or more Phones; we assume that there are rules expanding Phone into all possible phones. Because Word is an adapted nonterminal, the adaptor grammar memoises Word subtrees, which corresponds to learning the phone sequences for the words of the language. The more sophisticated Adaptor Grammars discussed below can be understood as specialising either the first or the second of the rules in (2–3). The next two subsections review the Adaptor Grammar word segmentation models presented in Johnson (2008) and Johnson and Goldwater (2009): section 2.1 reviews how phonotactic syllable-structure constraints can be expressed with Adaptor Grammars, while section 2.2 reviews how phrase-like units called “collocations” capture inter-word dependencies. Section 2.3 presents the major novel contribution of this paper by explaining how we modify these adaptor grammars to capture some of the special properties of function words. 2.1 Syllable structure and phonotactics The rule (3) models words as sequences of independently generated phones: this is what Goldwater et al. (2009) called the “monkey model” of word generation (it instantiates the metaphor that word types are generated by a monkey randomly banging on the keys of a typewriter). However, the words of a language are typically composed of one or more syllables, and explicitly modelling the internal structure of words typically improves word segmentation considerably. Johnson (2008) suggested replacing (3) with the following model of word structure: Word →Syllable1:4 (4) Syllable →(Onset) Rhyme (5) Onset →Consonant+ (6) Rhyme →Nucleus (Coda) (7) Nucleus →Vowel+ (8) Coda →Consonant+ (9) Here and below superscripts indicate iteration (e.g., a Word consists of 1 to 4 Syllables), while an Onset consists of an unbounded number of Consonants), while parentheses indicate optionality (e.g., a Rhyme consists of an obligatory Nucleus followed by an optional Coda). We assume that there are rules expanding Consonant and Vowel to the set of all consonants and vowels respectively (this amounts to assuming that the learner can distinguish consonants from vowels). 
Because Onset, Nucleus and Coda are adapted, this model learns the possible syllable onsets, nucleii and coda of the language, even though neither syllable structure nor word boundaries are explicitly indicated in the input to the model. The model just described assumes that wordinternal syllables have the same structure as wordperipheral syllables, but in languages such as English word-peripheral onsets and codas can be more complex than the corresponding wordinternal onsets and codas. For example, the word “string” begins with the onset cluster str, which is relatively rare word-internally. Johnson (2008) showed that word segmentation accuracy improves if the model can learn different consonant sequences for word-inital onsets and wordfinal codas. It is easy to express this as an Adaptor 285 Grammar: (4) is replaced with (10–11) and (12– 17) are added to the grammar. Word →SyllableIF (10) Word →SyllableI Syllable0:2 SyllableF (11) SyllableIF →(OnsetI) RhymeF (12) SyllableI →(OnsetI) Rhyme (13) SyllableF →(Onset) RhymeF (14) OnsetI →Consonant+ (15) RhymeF →Nucleus (CodaF) (16) CodaF →Consonant+ (17) In this grammar the suffix “I” indicates a wordinitial element, and “F” indicates a word-final element. Note that the model simply has the ability to learn that different clusters can occur wordperipherally and word-internally; it is not given any information about the relative complexity of these clusters. 2.2 Collocation models of inter-word dependencies Goldwater et al. (2009) point out the detrimental effect that inter-word dependencies can have on word segmentation models that assume that the words of an utterance are independently generated. Informally, a model that generates words independently is likely to incorrectly segment multiword expressions such as “the doggie” as single words because the model has no way to capture word-to-word dependencies, e.g., that “doggie” is typically preceded by “the”. Goldwater et al show that word segmentation accuracy improves when the model is extended to capture bigram dependencies. Adaptor grammar models cannot express bigram dependencies, but they can capture similiar inter-word dependencies using phrase-like units that Johnson (2008) calls collocations. Johnson and Goldwater (2009) showed that word segmentation accuracy improves further if the model learns a nested hierarchy of collocations. This can be achieved by replacing (2) with (18–21). Sentence →Colloc3+ (18) Colloc3 →Colloc2+ (19) Colloc2 →Colloc1+ (20) Colloc1 →Word+ (21) Informally, Colloc1, Colloc2 and Colloc3 define a nested hierarchy of phrase-like units. While not designed to correspond to syntactic phrases, by examining the sample parses induced by the Adaptor Grammar we noticed that the collocations often correspond to noun phrases, prepositional phrases or verb phrases. This motivates the extension to the Adaptor Grammar discussed below. 2.3 Incorporating “function words” into collocation models The starting point and baseline for our extension is the adaptor grammar with syllable structure phonotactic constraints and three levels of collocational structure (5-21), as prior work has found that this yields the highest word segmentation token f-score (Johnson and Goldwater, 2009). Our extension assumes that the Colloc1 − Colloc3 constituents are in fact phrase-like, so we extend the rules (19–21) to permit an optional sequence of monosyllabic words at the left edge of each of these constituents. 
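Schematically, the extension amounts to rewriting each collocation rule so that an optional run of "function words" may precede its usual expansion; the full grammar is given as rules (22–30) below. The sketch uses an ad-hoc rule notation of our own and is not the input format of the Adaptor Grammar software.

```python
# Baseline collocation rules (18)-(21), written as (lhs, rhs) pairs in an
# ad-hoc notation where "X+" means one or more X and "(X)" means optional X.
base_rules = [("Sentence", ["Colloc3+"]),
              ("Colloc3",  ["Colloc2+"]),
              ("Colloc2",  ["Colloc1+"]),
              ("Colloc1",  ["Word+"])]

def add_left_function_words(rules):
    """Prefix each collocation level with an optional run of monosyllabic
    'function words', in the spirit of rules (22)-(30) below."""
    extended = []
    for lhs, rhs in rules:
        if lhs.startswith("Colloc"):
            level = lhs[-1]
            extended.append((lhs, ["(FuncWords%s)" % level] + rhs))
            extended.append(("FuncWords%s" % level, ["FuncWord%s+" % level]))
            extended.append(("FuncWord%s" % level, ["SyllableIF"]))
        else:
            extended.append((lhs, rhs))
    return extended

for rule in add_left_function_words(base_rules):
    print(rule)   # e.g. ('Colloc3', ['(FuncWords3)', 'Colloc2+'])
```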
Our model thus captures two of the properties of function words discussed in section 1.1: they are monosyllabic (and thus phonologically simple), and they appear on the periphery of phrases. (We put “function words” in scare quotes below because our model only approximately captures the linguistic properties of function words). Specifically, we replace rules (19–21) with the following sequence of rules: Colloc3 →(FuncWords3) Colloc2+ (22) Colloc2 →(FuncWords2) Colloc1+ (23) Colloc1 →(FuncWords1) Word+ (24) FuncWords3 →FuncWord3+ (25) FuncWord3 →SyllableIF (26) FuncWords2 →FuncWord2+ (27) FuncWord2 →SyllableIF (28) FuncWords1 →FuncWord1+ (29) FuncWord1 →SyllableIF (30) This model memoises (i.e., learns) both the individual “function words” and the sequences of “function words” that modify the Colloc1 − Colloc3 constituents. Note also that “function words” expand directly to SyllableIF, which in turn expands to a monosyllable with a word-initial onset and word-final coda. This means that “function words” are memoised independently of the “content words” that Word expands to; i.e., the model learns distinct “function word” and “content word” vocabularies. Figure 1 depicts a sample parse generated by this grammar. 286 . Sentence . Colloc3 . FuncWords3 . FuncWord3 . you . FuncWord3 . want . FuncWord3 . to . Colloc2 . Colloc1 . Word . see . Colloc1 . FuncWords1 . FuncWord1 . the . Word . book Figure 1: A sample parse generated by the “function word” Adaptor Grammar with rules (10–18) and (22–30). To simplify the parse we only show the root node and the adapted nonterminals, and replace word-internal structure by the word’s orthographic form. This grammar builds in the fact that function words appear on the left periphery of phrases. This is true of languages such as English, but is not true cross-linguistically. For comparison purposes we also include results for a mirror-image model that permits “function words” on the right periphery, a model which permits “function words” on both the left and right periphery (achieved by changing rules 22–24), as well as a model that analyses all words as monosyllabic. Section 4 explains how a learner could use Bayesian model selection to determine that function words appear on the left periphery in English by comparing the posterior probability of the data under our “function word” Adaptor Grammar to that obtained using a grammar which is identical except that rules (22–24) are replaced with the mirror-image rules in which “function words” are attached to the right periphery. 3 Word segmentation results This section presents results of running our Adaptor Grammar models on subsets of the BernsteinRatner (1987) corpus of child-directed English. We use the Adaptor Grammar software available from http://web.science.mq.edu.au/˜mjohnson/ with the same settings as described in Johnson and Goldwater (2009), i.e., we perform Bayesian inference with “vague” priors for all hyperparameters (so there are no adjustable parameters in our models), and perform 8 different MCMC runs of each condition with table-label resampling for 2,000 sweeps of the training data. 
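The evaluations below are reported as token and lexicon f-scores against the gold-standard segmentation. Purely as a reference point, here is a hedged sketch of the token f-score computation over proposed segmentations; the helper names are ours and this is not the scoring script actually used.

```python
def spans(words):
    """Character-offset spans of the words in one segmented utterance."""
    out, start = set(), 0
    for w in words:
        out.add((start, start + len(w)))
        start += len(w)
    return out

def token_fscore(gold_utts, pred_utts):
    """A predicted word token counts as correct only if both of its
    boundaries match the gold segmentation of the same utterance."""
    tp = fp = fn = 0
    for gold, pred in zip(gold_utts, pred_utts):
        g, p = spans(gold), spans(pred)
        tp += len(g & p)
        fp += len(p - g)
        fn += len(g - p)
    precision, recall = tp / (tp + fp), tp / (tp + fn)
    return 2 * precision * recall / (precision + recall)

print(token_fscore([["ju", "wɑnt", "tu", "si", "ðə", "bʊk"]],
                   [["ju", "wɑnttu", "si", "ðə", "bʊk"]]))   # ~0.73
```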
At every 10th sweep of the last 1,000 sweeps we use the model to segment the entire corpus (even if it is only trained on a subset of it), so we collect 800 sample segmentations of each utterance. The most frequent segmentation in these 800 sample segmentations is the one we score in the evaluations below.

Table 1: Mean token f-scores and boundary precision and recall results averaged over 8 trials, each consisting of 8 MCMC runs of models trained and tested on the full Bernstein-Ratner (1987) corpus (the standard deviations of all values are less than 0.006; Wilcox sign tests show the means of all token f-scores differ p < 2e-4).
  Model                 Token f-score   Boundary precision   Boundary recall
  Baseline                  0.872             0.918               0.956
  + left FWs                0.924             0.935               0.990
  + left + right FWs        0.912             0.957               0.953

3.1 Word segmentation with "function word" models

Here we evaluate the word segmentations found by the "function word" Adaptor Grammar model described in section 2.3 and compare it to the baseline grammar with collocations and phonotactics from Johnson and Goldwater (2009). Figure 2 presents the standard token and lexicon (i.e., type) f-score evaluations for word segmentations proposed by these models (Brent, 1999), and Table 1 summarises the token and lexicon f-scores for the major models discussed in this paper. It is interesting to note that adding "function words" improves token f-score by more than 4%, corresponding to a 40% reduction in overall error rate. When the training data is very small the Monosyllabic grammar produces the highest accuracy results, presumably because a large proportion of the words in child-directed speech are monosyllabic. However, at around 25 sentences the more complex models that are capable of finding multisyllabic words start to become more accurate. It is interesting that after about 1,000 sentences the model that allows "function words" only on the right periphery is considerably less accurate than the baseline model. Presumably this is because it tends to misanalyse multi-syllabic words on the right periphery as sequences of monosyllabic words. The model that allows "function words" only on the left periphery is more accurate than the model that allows them on both the left and right periphery when the input data ranges from about 100 to about 1,000 sentences, but when the training data is larger than about 1,000 sentences both models are equally accurate.

[Figure 2: Token and lexicon (i.e., type) f-score on the Bernstein-Ratner (1987) corpus as a function of training data size for the baseline model, the model where "function words" can appear on the left periphery, a model where "function words" can appear on the right periphery, and a model where "function words" can appear on both the left and the right periphery. For comparison purposes we also include results for a model that assumes that all words are monosyllabic.]

3.2 Content and function words found by "function word" model

As noted earlier, the "function word" model generates function words via adapted nonterminals other than the Word category.
In order to better understand just how the model works, we give the 5 most frequent words in each word category found during 8 MCMC runs of the left-peripheral “function word” grammar above: Word : book, doggy, house, want, I FuncWord1 : a, the, your, little1, in FuncWord2 : to, in, you, what, put FuncWord3 : you, a, what, no, can Interestingly, these categories seem fairly reasonable. The Word category includes open-class nouns and verbs, the FuncWord1 category includes noun modifiers such as determiners, while the FuncWord2 and FuncWord3 categories include prepositions, pronouns and auxiliary verbs. 1The phone ‘l’ is generated by both Consonant and Vowel, so “little” can be (incorrectly) analysed as one syllable. Thus, the present model, initially aimed at segmenting words from continuous speech, shows three interesting characteristics that are also exhibited by human infants: it distinguishes between function words and content words (Shi and Werker, 2001), it allows learners to acquire at least some of the function words of their language (e.g. (Shi et al., 2006)); and furthermore, it may also allow them to start grouping together function words according to their category (Cauvet et al., 2014; Shi and Melanc¸on, 2010). 4 Are “function words” on the left or right periphery? We have shown that a model that expects function words on the left periphery performs more accurate word segmentation on English, where function words do indeed typically occur on the left periphery, leaving open the question: how could a learner determine whether function words generally appear on the left or the right periphery of phrases in the language they are learning? This question is important because knowing the side where function words preferentially occur is re288 lated to the question of the direction of syntactic headedness in the language, and an accurate method for identifying the location of function words might be useful for initialising a syntactic learner. Experimental evidence suggests that infants as young as 8 months of age already expect function words on the correct side for their language — left-periphery for Italian infants and right-periphery for Japanese infants (Gervain et al., 2008) — so it is interesting to see whether purely distributional learners such as the ones studied here can identify the correct location of function words in phrases. We experimented with a variety of approaches that use a single adaptor grammar inference process, but none of these were successful. For example, we hoped that given an Adaptor Grammar that permits “function words” on both the left and right periphery, the inference procedure would decide that the right-periphery rules simply are not used in a language like English. Unfortunately we did not find this in our experiments; the right-periphery rules were used almost as often as the left-periphery rules (recall that a large fraction of the words in English child-directed speech are monosyllabic). In this section, we show that learners could use Bayesian model selection to determine that function words appear on the left periphery in English by comparing the marginal probability of the data for the left-periphery and the right-periphery models. 
Instead, we used Bayesian model selection techniques to determine whether a left-peripheral or a right-peripheral model better fits the unsegmented utterances that constitute the training data.2 While Bayesian model selection is in principle straightforward, it turns out to require the ratio of two integrals (for the "evidence" or marginal likelihood) that are often intractable to compute. Specifically, given a training corpus D of unsegmented sentences and model families G1 and G2 (here the "function word" adaptor grammars with left-peripheral and right-peripheral attachment respectively), the Bayes factor K is the ratio of the marginal likelihoods of the data:

K = \frac{P(D \mid G_1)}{P(D \mid G_2)}

where the marginal likelihood or "evidence" for a model G is obtained by integrating over all of the hidden or latent structure and parameters θ:

P(D \mid G) = \int_{\Delta} P(D, \theta \mid G) \, d\theta   (31)

Here the variable θ ranges over the space ∆ of all possible parses for the utterances in D and all possible configurations of the Pitman-Yor processes and their parameters that constitute the "state" of the Adaptor Grammar G. While the probability of any specific Adaptor Grammar configuration θ is not too hard to calculate (the MCMC sampler for Adaptor Grammars can print this after each sweep through D), the integral in (31) is in general intractable. Textbooks such as Murphy (2012) describe a number of methods for calculating P(D | G), but most of them assume that the parameter space ∆ is continuous and so cannot be directly applied here. The Harmonic Mean estimator (32), which we used here, is a popular estimator for (31) because it only requires the ability to calculate P(D, θ | G) for samples from P(θ | D, G):

P(D \mid G) \approx \left( \frac{1}{n} \sum_{i=1}^{n} \frac{1}{P(D, \theta_i \mid G)} \right)^{-1}   (32)

where θ1, . . . , θn are n samples from P(θ | D, G), which can be generated by the MCMC procedure.

2 Note that neither the left-peripheral nor the right-peripheral model is correct: even strongly left-headed languages like English typically contain a few right-headed constructions. For example, "ago" is arguably the head of the phrase "ten years ago".

Figure 3: Bayes factor in favour of left-peripheral "function word" attachment as a function of the number of sentences in the training corpus, calculated using the Harmonic Mean estimator (see warning in text).

Figure 3 depicts how the Bayes factor in favour of left-peripheral attachment of "function words" varies as a function of the number of utterances in the training data D (calculated from the last 1000 sweeps of 8 MCMC runs of the corresponding adaptor grammars). As that figure shows, once the training data contains more than about 1,000 sentences the evidence for the left-peripheral grammar becomes very strong. On the full training data the estimated log Bayes factor is over 6,000, which would constitute overwhelming evidence in favour of left-peripheral attachment. Unfortunately, as Murphy and others warn, the Harmonic Mean estimator is extremely unstable (Radford Neal calls it "the worst MCMC method ever" in his blog), so we think it is important to confirm these results using a more stable estimator. However, given the magnitude of the differences and the fact that the two models being compared are of similar complexity, we believe that these results suggest that Bayesian model selection can be used to determine properties of the language being learned.
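Since P(D, θi | G) is astronomically small, the Harmonic Mean estimator in (32) has to be evaluated in log space. The following sketch shows one way to do this with a log-sum-exp; it implements the estimator itself (and therefore inherits its instability), and the sample values are made up purely for illustration.

```python
import math

def log_sum_exp(xs):
    m = max(xs)
    return m + math.log(sum(math.exp(x - m) for x in xs))

def log_harmonic_mean_evidence(log_joints):
    """log P(D | G) under the Harmonic Mean estimator (32).

    `log_joints` holds log P(D, theta_i | G) for MCMC samples theta_1..theta_n
    drawn from P(theta | D, G), e.g. the joint probabilities an Adaptor
    Grammar sampler can print after each sweep.
    """
    n = len(log_joints)
    # (32): P(D|G) ~ ( (1/n) * sum_i 1 / P(D, theta_i | G) )^-1, in log space.
    return -(log_sum_exp([-lj for lj in log_joints]) - math.log(n))

def log_bayes_factor(log_joints_left, log_joints_right):
    """log K in favour of the left-peripheral grammar."""
    return (log_harmonic_mean_evidence(log_joints_left)
            - log_harmonic_mean_evidence(log_joints_right))

# Toy illustration with made-up log joint probabilities:
left = [-10450.3, -10448.9, -10451.7, -10449.2]
right = [-10510.8, -10509.1, -10512.4, -10511.0]
print(log_bayes_factor(left, right))   # > 0 favours the left-peripheral model
```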
5 Conclusions and future work This paper showed that the word segmentation accuracy of a state-of-the-art Adaptor Grammar model is significantly improved by extending it so that it explicitly models some properties of function words. We also showed how Bayesian model selection can be used to identify that function words appear on the left periphery of phrases in English, even though the input to the model only consists of an unsegmented sequence of phones. Of course this work only scratches the surface in terms of investigating the role of function words in language acquisition. It would clearly be very interesting to examine the performance of these models on other corpora of child-directed English, as well as on corpora of child-directed speech in other languages. Our evaluation focused on wordsegmentation, but we could also evaluate the effect that modelling “function words” has on other aspects of the model, such as its ability to learn syllable structure. The models of “function words” we investigated here only capture two of the 7 linguistic properties of function words identified in section 1 (i.e., that function words tend to be monosyllabic, and that they tend to appear phrase-peripherally), so it would be interesting to develop and explore models that capture other linguistic properties of function words. For example, following the suggestion by Hochmann et al. (2010) that human learners use frequency cues to identify function words, it might be interesting to develop computational models that do the same thing. In an Adaptor Grammar the frequency distribution of function words might be modelled by specifying the prior for the Pitman-Yor Process parameters associated with the function words’ adapted nonterminals so that it prefers to generate a small number of high-frequency items. It should also be possible to develop models which capture the fact that function words tend not to be topic-specific. Johnson et al. (2010) and Johnson et al. (2012) show how Adaptor Grammars can model the association between words and non-linguistic “topics”; perhaps these models could be extended to capture some of the semantic properties of function words. It would also be interesting to further explore the extent to which Bayesian model selection is a useful approach to linguistic “parameter setting”. In order to do this it is imperative to develop better methods than the problematic “Harmonic Mean” estimator used here for calculating the evidence (i.e., the marginal probability of the data) that can handle the combination of discrete and continuous hidden structure that occur in computational linguistic models. As well as substantially improving the accuracy of unsupervised word segmentation, this work is interesting because it suggests a connection between unsupervised word segmentation and the induction of syntactic structure. It is reasonable to expect that hierarchical non-parametric Bayesian models such as Adaptor Grammars may be useful tools for exploring such a connection. Acknowledgments This work was supported in part by the Australian Research Council’s Discovery Projects funding scheme (project numbers DP110102506 and DP110102593), the European Research Council (ERC-2011-AdG-295810 BOOTPHON), the Agence Nationale pour la Recherche (ANR-10LABX-0087 IEC, and ANR-10-IDEX-0001-02 PSL*), and the Mairie de Paris, Ecole des Hautes Etudes en Sciences Sociales, the Ecole Normale Sup´erieure, and the Fondation Pierre Gilles de Gennes. 290 References N. Bernstein-Ratner. 1987. 
The phonology of parentchild speech. In K. Nelson and A. van Kleeck, editors, Children’s Language, volume 6, pages 159– 174. Erlbaum, Hillsdale, NJ. M. Brent. 1999. An efficient, probabilistically sound algorithm for segmentation and word discovery. Machine Learning, 34:71–105. E. Cauvet, R. Limissuri, S. Millotte, K. Skoruppa, D. Cabrol, and A. Christophe. 2014. Function words constrain on-line recognition of verbs and nouns in French 18-month-olds. Language Learning and Development, pages 1–18. A. Christophe, S. Millotte, S. Bernal, and J. Lidz. 2008. Bootstrapping lexical and syntactic acquisition. Language and Speech, 51(1-2):61–75. S. B. Cohen, D. M. Blei, and N. A. Smith. 2010. Variational inference for adaptor grammars. In Human Language Technologies: The 2010 Annual Conference of the North American Chapter of the Association for Computational Linguistics, pages 564–572, Los Angeles, California, June. Association for Computational Linguistics. K. Demuth and E. McCullough. 2009. The prosodic (re-)organization of childrens early English articles. Journal of Child Language, 36(1):173–200. J. Elman, E. Bates, M. H. Johnson, A. KarmiloffSmith, D. Parisi, and K. Plunkett. 1996. Rethinking Innateness: A Connectionist Perspective on Development. MIT Press/Bradford Books, Cambridge, MA. V. Fromkin, editor. 2001. Linguistics: An Introduction to Linguistic Theory. Blackwell, Oxford, UK. J. Gervain, M. Nespor, R. Mazuka, R. Horie, and J. Mehler. 2008. Bootstrapping word order in prelexical infants: A japaneseitalian cross-linguistic study. Cognitive Psychology, 57(1):56 – 74. S. Goldwater, T. L. Griffiths, and M. Johnson. 2009. A Bayesian framework for word segmentation: Exploring the effects of context. Cognition, 112(1):21– 54. S. Goldwater, T. L. Griffiths, and M. Johnson. 2011. Producing power-law distributions and damping word frequencies with two-stage language models. Journal of Machine Learning Research, 12:2335– 2382. P. A. Hall´e, C. Durand, and B. de Boysson-Bardies. 2008. Do 11-month-old French infants process articles? Language and Speech, 51(1-2):23–44. J.-R. Hochmann, A. D. Endress, and J. Mehler. 2010. Word frequency as a cue for identifying function words in infancy. Cognition, 115(3):444 – 457. M. Johnson and S. Goldwater. 2009. Improving nonparameteric Bayesian inference: experiments on unsupervised word segmentation with adaptor grammars. In Proceedings of Human Language Technologies: The 2009 Annual Conference of the North American Chapter of the Association for Computational Linguistics, pages 317–325, Boulder, Colorado, June. Association for Computational Linguistics. M. Johnson, T. Griffiths, and S. Goldwater. 2007a. Bayesian inference for PCFGs via Markov chain Monte Carlo. In Human Language Technologies 2007: The Conference of the North American Chapter of the Association for Computational Linguistics; Proceedings of the Main Conference, pages 139– 146, Rochester, New York. Association for Computational Linguistics. M. Johnson, T. L. Griffiths, and S. Goldwater. 2007b. Adaptor Grammars: A framework for specifying compositional nonparametric Bayesian models. In B. Sch¨olkopf, J. Platt, and T. Hoffman, editors, Advances in Neural Information Processing Systems 19, pages 641–648. MIT Press, Cambridge, MA. M. Johnson, K. Demuth, M. Frank, and B. Jones. 2010. Synergies in learning words and their referents. In J. Lafferty, C. K. I. Williams, J. ShaweTaylor, R. Zemel, and A. Culotta, editors, Advances in Neural Information Processing Systems 23, pages 1018–1026. M. 
Johnson, K. Demuth, and M. Frank. 2012. Exploiting social information in grounded language learning via grammatical reduction. In Proceedings of the 50th Annual Meeting of the Association for Computational Linguistics, pages 883–891, Jeju Island, Korea, July. Association for Computational Linguistics. M. Johnson. 2008. Using Adaptor Grammars to identify synergies in the unsupervised acquisition of linguistic structure. In Proceedings of the 46th Annual Meeting of the Association of Computational Linguistics, pages 398–406, Columbus, Ohio. Association for Computational Linguistics. Y. Kedar, M. Casasola, and B. Lust. 2006. Getting there faster: 18- and 24-month-old infants’ use of function words to determine reference. Child Development, 77(2):325–338. K. Kurihara and T. Sato. 2006. Variational Bayesian grammar induction for natural language. In Y. Sakakibara, S. Kobayashi, K. Sato, T. Nishino, and E. Tomita, editors, Grammatical Inference: Algorithms and Applications, pages 84–96. Springer. K. P. Murphy. 2012. Machine learning: a probabilistic perspective. The MIT Press. V. L. Shafer, D. W. Shucard, J. L. Shucard, and L. Gerken. 1998. An electrophysiological study of infants’ sensitivity to the sound patterns of English 291 speech. Journal of Speech, Language and Hearing Research, 41(4):874. R. Shi and M. Lepage. 2008. The effect of functional morphemes on word segmentation in preverbal infants. Developmental Science, 11(3):407–413. R. Shi and A. Melanc¸on. 2010. Syntactic categorization in French-learning infants. Infancy, 15(517– 533). R. Shi and J. Werker. 2001. Six-months old infants’ preference for lexical words. Psychological Science, 12:71–76. R. Shi, A. Cutler, J. Werker, and M. Cruickshank. 2006. Frequency and form as determinants of functor sensitivity in English-acquiring infants. The Journal of the Acoustical Society of America, 119(6):EL61–EL67. Y. W. Teh, M. Jordan, M. Beal, and D. Blei. 2006. Hierarchical Dirichlet processes. Journal of the American Statistical Association, 101:1566–1581. R. Zangl and A. Fernald. 2007. Increasing flexibility in children’s online processing of grammatical and nonce determiners in fluent speech. Language Learning and Development, 3(3):199–231. 292
Proceedings of the 52nd Annual Meeting of the Association for Computational Linguistics, pages 293–303, Baltimore, Maryland, USA, June 23-25 2014. c⃝2014 Association for Computational Linguistics Max-Margin Tensor Neural Network for Chinese Word Segmentation Wenzhe Pei Tao Ge Baobao Chang∗ Key Laboratory of Computational Linguistics, Ministry of Education School of Electronics Engineering and Computer Science, Peking University Beijing, P.R.China, 100871 {peiwenzhe,getao,chbb}@pku.edu.cn Abstract Recently, neural network models for natural language processing tasks have been increasingly focused on for their ability to alleviate the burden of manual feature engineering. In this paper, we propose a novel neural network model for Chinese word segmentation called Max-Margin Tensor Neural Network (MMTNN). By exploiting tag embeddings and tensorbased transformation, MMTNN has the ability to model complicated interactions between tags and context characters. Furthermore, a new tensor factorization approach is proposed to speed up the model and avoid overfitting. Experiments on the benchmark dataset show that our model achieves better performances than previous neural network models and that our model can achieve a competitive performance with minimal feature engineering. Despite Chinese word segmentation being a specific case, MMTNN can be easily generalized and applied to other sequence labeling tasks. 1 Introduction Unlike English and other western languages, Chinese do not delimit words by white-space. Therefore, word segmentation is a preliminary and important pre-process for Chinese language processing. Most previous systems address this problem by treating this task as a sequence labeling problem where each character is assigned a tag indicating its position in the word. These systems are effective because researchers can incorporate a large body of handcrafted features into the models. However, the ability of these models is restricted ∗Corresponding author by the design of features and the number of features could be so large that the result models are too large for practical use and prone to overfit on training corpus. Recently, neural network models have been increasingly focused on for their ability to minimize the effort in feature engineering. Collobert et al. (2011) developed the SENNA system that approaches or surpasses the state-of-the-art systems on a variety of sequence labeling tasks for English. Zheng et al. (2013) applied the architecture of Collobert et al. (2011) to Chinese word segmentation and POS tagging and proposed a perceptronstyle algorithm to speed up the training process with negligible loss in performance. Workable as previous neural network models seem, a limitation of them to be pointed out is that the tag-tag interaction, tag-character interaction and character-character interaction are not well modeled. In conventional feature-based linear (log-linear) models, these interactions are explicitly modeled as features. Take phrase “打篮 球(play basketball)” as an example, assuming we are labeling character C0=“篮”, possible features could be: f1 = ( 1 C−1=“打” and C1=“球” and y0=“B” 0 else f2 = ( 1 C0=“篮” and y0=“B” and y−1=“S” 0 else To capture more interactions, researchers have designed a large number of features based on linguistic intuition and statistical information. 
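Purely for illustration (feature templates vary from system to system), indicator features such as f1 and f2 above can be written as simple functions of the character sequence and tag history:

```python
def f1(chars, tags, i):
    """1 if C[i-1] == '打', C[i+1] == '球' and y[i] == 'B', else 0 (cf. f1 above)."""
    return int(0 < i < len(chars) - 1
               and chars[i - 1] == "打" and chars[i + 1] == "球"
               and tags[i] == "B")

def f2(chars, tags, i):
    """1 if C[i] == '篮', y[i] == 'B' and y[i-1] == 'S', else 0 (cf. f2 above)."""
    return int(i > 0 and chars[i] == "篮"
               and tags[i] == "B" and tags[i - 1] == "S")

chars = list("打篮球")      # "play basketball"
tags = ["S", "B", "E"]      # one plausible BMES tagging
print(f1(chars, tags, 1), f2(chars, tags, 1))   # -> 1 1
```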
In previous neural network models, however, hardly can such interactional effects be fully captured relying only on the simple transition score and the single non-linear transformation (See section 2). In order to address this problem, we propose a new model called Max-Margin Tensor Neural Network (MMTNN) that explicitly models the interactions 293 between tags and context characters by exploiting tag embeddings and tensor-based transformation. Moreover, we propose a tensor factorization approach that effectively improves the model efficiency and prevents from overfitting. We evaluate the performance of Chinese word segmentation on the PKU and MSRA benchmark datasets in the second International Chinese Word Segmentation Bakeoff (Emerson, 2005) which are commonly used for evaluation of Chinese word segmentation. Experiment results show that our model outperforms other neural network models. Although we focus on the question that how far we can go without using feature engineering in this paper, the study of deep learning for NLP tasks is still a new area in which it is currently challenging to surpass the state-of-the-art without additional features. Following Mansur et al. (2013), we wonder how well our model can perform with minimal feature engineering. Therefore, we integrate additional simple character bigram features into our model and the result shows that our model can achieve a competitive performance that other systems hardly achieve unless they use more complex task-specific features. The main contributions of our work are as follows: • We propose a Max-Margin Tensor Neural Network for Chinese word segmentation without feature engineering. The test results on the benchmark dataset show that our model outperforms previous neural network models. • We propose a new tensor factorization approach that models each tensor slice as the product of two low-rank matrices. Not only does this approach improve the efficiency of our model but also it avoids the risk of overfitting. • Compared with previous works that use a large number of handcrafted features, our model can achieve a competitive performance with minimal feature engineering. • Despite Chinese word segmentation being a specific case, our approach can be easily generalized to other sequence labeling tasks. The remaining part of this paper is organized as follows. Section 2 describes the details of conventional neural network architecture. Section 3 Figure 1: Conventional Neural Network describes the details of our model. Experiment results are reported in Section 4. Section 5 reviews the related work. The conclusions are given in Section 6. 2 Conventional Neural Network 2.1 Lookup Table The idea of distributed representation for symbolic data is one of the most important reasons why the neural network works. It was proposed by Hinton (1986) and has been a research hot spot for more than twenty years (Bengio et al., 2003; Collobert et al., 2011; Schwenk et al., 2012; Mikolov et al., 2013a). Formally, in the Chinese word segmentation task, we have a character dictionary D of size |D|. Unless otherwise specified, the character dictionary is extracted from the training set and unknown characters are mapped to a special symbol that is not used elsewhere. Each character c ∈D is represented as a real-valued vector (character embedding) Embed(c) ∈Rd where d is the dimensionality of the vector space. The character embeddings are then stacked into a embedding matrix M ∈Rd×|D|. 
For a character c ∈D that has an associated index k, the corresponding character embedding Embed(c) ∈Rd is retrieved by the Lookup Table layer as shown in Figure 1: Embed(c) = Mek (1) Here ek ∈R|D| is a binary vector which is zero in all positions except at k-th index. The Lookup Table layer can be seen as a simple projection layer where the character embedding for each context 294 character is achieved by table lookup operation according to their indices. The embedding matrix M is initialized with small random numbers and trained by back-propagation. We will analyze in more detail about the effect of character embeddings in Section 4. 2.2 Tag Scoring The most common tagging approach is the window approach. The window approach assumes that the tag of a character largely depends on its neighboring characters. Given an input sentence c[1:n], a window of size w slides over the sentence from character c1 to cn. We set w = 5 in all experiments. As shown in Figure 1, at position ci, 1 ≤i ≤n, the context characters are fed into the Lookup Table layer. The characters exceeding the sentence boundaries are mapped to one of two special symbols, namely “start” and “end” symbols. The character embeddings extracted by the Lookup Table layer are then concatenated into a single vector a ∈RH1, where H1 = w · d is the size of Layer 1. Then a is fed into the next layer which performs linear transformation followed by an element-wise activation function g such as tanh, which is used in our experiments: h = g(W1a + b1) (2) where W1 ∈RH2×H1, b1 ∈RH2×1, h ∈RH2. H2 is a hyper-parameter which is the number of hidden units in Layer 2. Given a set of tags T of size |T|, a similar linear transformation is performed except that no non-linear function is followed: f(t|c[i−2:i+2]) = W2h + b2 (3) where W2 ∈ R|T|×H2, b2 ∈ R|T|×1. f(t|c[i−2:i+2]) ∈R|T| is the score vector for each possible tag. In Chinese word segmentation, the most prevalent tag set T is BMES tag set, which uses 4 tags to carry word boundary information. It uses B, M, E and S to denote the Beginning, the Middle, the End of a word and a Single character forming a word respectively. We use this tag set in our method. 2.3 Model Training and Inference Despite sharing commonalities mentioned above, previous work models the segmentation task differently and therefore uses different training and inference procedure. Mansur et al. (2013) modeled Chinese word segmentation as a series of classification task at each position of the sentence in which the tag score is transformed into probability using softmax function: p(ti|c[i−2:i+2]) = exp(f(ti|c[i−2:i+2])) P t′ exp(f(t′|c[i−2:i+2])) The model is then trained in MLE-style which maximizes the log-likelihood of the tagged data. Obviously, it is a local model which cannot capture the dependency between tags and does not support to infer the tag sequence globally. To model the tag dependency, previous neural network models (Collobert et al., 2011; Zheng et al., 2013) introduce a transition score Aij for jumping from tag i ∈T to tag j ∈T. For a input sentence c[1:n] with a tag sequence t[1:n], a sentence-level score is then given by the sum of transition and network scores: s(c[1:n], t[1:n], θ) = n X i=1 (Ati−1ti+fθ(ti|c[i−2:i+2])) (4) where fθ(ti|c[i−2:i+2]) indicates the score output for tag ti at the i-th character by the network with parameters θ = (M, A, W1, b1, W2, b2). Given the sentence-level score, Zheng et al. (2013) proposed a perceptron-style training algorithm inspired by the work of Collins (2002). 
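The scoring pipeline of equations (1)–(4) can be sketched compactly in NumPy. Dimensions follow the text (window size w, embedding size d, H2 hidden units, BMES tags); the parameter values are random placeholders, and how the transition score is handled at the first position, which the text leaves implicit, is simply omitted here.

```python
import numpy as np

rng = np.random.default_rng(0)
D, T = 5000, 4              # character vocabulary size and tag set size (BMES)
d, w, H2 = 25, 5, 50        # embedding size, window size, hidden units
H1 = w * d

M = 0.01 * rng.standard_normal((d, D))      # character embedding matrix
W1 = 0.01 * rng.standard_normal((H2, H1)); b1 = np.zeros(H2)
W2 = 0.01 * rng.standard_normal((T, H2));  b2 = np.zeros(T)
A = 0.01 * rng.standard_normal((T, T))      # tag transition scores A_ij

def f(window_ids):
    """Tag scores for the centre character of a w-sized window, eqs. (1)-(3)."""
    a = np.concatenate([M[:, k] for k in window_ids])   # lookup + concatenate, eq. (1)
    h = np.tanh(W1 @ a + b1)                             # eq. (2)
    return W2 @ h + b2                                   # eq. (3)

def sentence_score(windows, tags):
    """Sentence-level score of eq. (4) for a given tag sequence."""
    score = 0.0
    for i, (win, t) in enumerate(zip(windows, tags)):
        score += f(win)[t]
        if i > 0:                    # transition at the first position omitted
            score += A[tags[i - 1], t]
    return score

# A 3-character "sentence": each row holds the w character indices of one window.
windows = [[0, 0, 17, 42, 256], [0, 17, 42, 256, 3], [17, 42, 256, 3, 0]]
print(sentence_score(windows, tags=[1, 2, 3]))
```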
Compared with Mansur et al. (2013), their model is a global one where the training and inference is performed at sentence-level. Workable as these methods seem, one of the limitations of them is that the tag-tag interaction and the neural network are modeled seperately. The simple tag-tag transition neglects the impact of context characters and thus limits the ability to capture flexible interactions between tags and context characters. Moreover, the simple nonlinear transformation in equation (2) is also poor to model the complex interactional effects in Chinese word segmentation. 3 Max-Margin Tensor Neural Network 3.1 Tag Embedding To better model the tag-tag interaction given the context characters, distributed representation for tags instead of traditional discrete symbolic representation is used in our model. Similar to character embeddings, given a fixed-sized tag set T, the tag embeddings for tags are stored in a tag embedding matrix L ∈Rd×|T|, where d is the dimensionality 295 Figure 2: Max-Margin Tensor Neural Network of the vector space (same with character embeddings). Then the tag embedding Embed(t) ∈Rd for tag t ∈T with index k can be retrieved by the lookup operation: Embed(t) = Lek (5) where ek ∈R|T|×1 is a binary vector which is zero in all positions except at k-th index. The tag embeddings start from a random initialization and can be automatically trained by back-propagation. Figure 2 shows the new Lookup Table layer with tag embeddings. Assuming we are at the i-th character of a sentence, besides the character embeddings, the tag embeddings of the previous tags are also considered1. For a fast tag inference, only the previous tag ti−1 is used in our model even though a longer history of tags can be considered. The concatenation operation in Layer 1 then concatenates the character embeddings and tag embedding together into a long vector a. In this way, the tag representation can be directly incorporated in the neural network so that the tag-tag interaction and tag-character interaction can be explicitly modeled in deeper layers (See Section 3.2). Moreover, the transition score in equation (4) is not necessary in our model, because, by incorporating tag embedding into the neural network, the effect of tag-tag interaction and tag-character interaction are covered uniformly in one same model. Now 1We also tried the architecture in which the tag embedding of current tag is also considered, but this did not bring much improvement and runs slower Figure 3: The tensor-based transformation in Layer 2. a is the input from Layer 1. V is the tensor parameter. Each dashed box represents one of the H2-many tensor slices, which defines the bilinear form on vector a. equation (4) can be rewritten as follows: s(c[1:n], t[1:n], θ) = n X i=1 fθ(ti|c[i−2:i+2], ti−1) (6) where fθ(ti|c[i−2:i+2], ti−1) is the score output for tag ti at the i-th character by the network with parameters θ. Like Collobert et al. (2011) and Zheng et al. (2013), our model is also trained at sentencelevel and carries out inference globally. 3.2 Tensor Neural Network A tensor is a geometric object that describes relations between vectors, scalars, and other tensors. It can be represented as a multi-dimensional array of numerical values. An advantage of the tensor is that it can explicitly model multiple interactions in data. As a result, tensor-based model have been widely used in a variety of tasks (Salakhutdinov et al., 2007; Krizhevsky et al., 2010; Socher et al., 2013b). 
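Before turning to the tensor layer, note that because only the previous tag enters each local score, the sentence-level maximisation in equation (6) can be carried out exactly with a first-order Viterbi search. The sketch below is a generic dynamic program over externally supplied local scores, not the authors' implementation:

```python
import numpy as np

def viterbi(local_score, n, num_tags, start_tag):
    """Maximise sum_i f(t_i | position i, t_{i-1}) over tag sequences (eq. 6).

    `local_score(i, prev_tag)` returns a length-`num_tags` array of scores for
    the tag at position i given the previous tag (for i == 0 the previous tag
    is a special `start_tag`).
    """
    delta = np.full((n, num_tags), -np.inf)   # best score ending in each tag
    back = np.zeros((n, num_tags), dtype=int)
    delta[0] = local_score(0, start_tag)
    for i in range(1, n):
        for prev in range(num_tags):
            cand = delta[i - 1, prev] + local_score(i, prev)
            better = cand > delta[i]
            delta[i][better] = cand[better]
            back[i][better] = prev
    tags = [int(np.argmax(delta[-1]))]
    for i in range(n - 1, 0, -1):
        tags.append(int(back[i, tags[-1]]))
    return list(reversed(tags))

# Toy check with random scores over 4 tags (BMES) for a 6-character sentence;
# the score table has an extra "previous tag" row for the start symbol.
rng = np.random.default_rng(1)
scores = rng.standard_normal((6, 5, 4))        # [position, previous tag, tag]
print(viterbi(lambda i, prev: scores[i, prev], n=6, num_tags=4, start_tag=4))
```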
In Chinese word segmentation, a proper modeling of the tag-tag interaction, tag-character interaction and character-character interaction is very important. In linear models, these kinds of interactions are usually modeled as features. In conventional neural network models, however, the input embeddings only implicitly interact through the non-linear function which can hardly model the complexity of the interactions. Given the advantage of tensors, we apply a tensor-based transformation to the input vector. Formally, we use a 3-way tensor V [1:H2] ∈RH2×H1×H1 to directly model the interactions, where H2 is the size of 296 Layer 2 and H1 = (w + 1) · d is the size of concatenated vector a in Layer 1 as shown in Figure 2. Figure 3 gives an example of the tensor-based transformation2. The output of a tensor product is a vector z ∈RH2 where each dimension zi is the result of the bilinear form defined by each tensor slice V [i] ∈RH1×H1: z = aT V [1:H2]a; zi = aT V [i]a = X j,k V [i] jk ajak (7) Since vector a is the concatenation of character embeddings and the tag embedding, equation (7) can be written in the following form: zi = X p,q X j,k V [i] (p,q,j,k)E[p] j E[q] k where E[p] j is the j-th element of the p-th embedding in Lookup Table layer and V [i] (p,q,j,k) is the corresponding coefficient for E[p] j and E[q] k in V [i]. As we can see, in each tensor slice i, the embeddings are explicitly related in a bilinear form which captures the interactions between characters and tags. The multiplicative operations between tag embeddings and character embeddings can somehow be seen as “feature combination”, which are hand-designed in feature-based models. Our model learns the information automatically and encodes them in tensor parameters and embeddings. Intuitively, we can interpret each slice of the tensor as capturing a specific type of tagcharacter interaction and character-character interaction. Combining the tensor product with linear transformation, the tensor-based transformation in Layer 2 is defined as: h = g(aT V [1:H2]a + W1a + b1) (8) where W1 ∈RH2×H1, b1 ∈RH2×1, h ∈RH2. In fact, equation (2) used in previous work is a special case of equation (8) when V is set to 0. 3.3 Tensor Factorization Despite tensor-based transformation being effective for capturing the interactions, introducing tensor-based transformation into neural network models to solve sequence labeling task is time prohibitive since the tensor product operation drastically slows down the model. Without considering matrix optimization algorithms, the complexity of the non-linear transformation in equation (2) 2The bias term is omitted in Figure 3 for simplicity Figure 4: Tensor product with tensor factorization is O(H1H2) while the tensor operation complexity in equation (8) is O(H2 1H2). The tensor-based transformation is H1 times slower. Moreover, the additional tensor could bring millions of parameters to the model which makes the model suffer from the risk of overfitting. To remedy this, we propose a tensor factorization approach that factorizes each tensor slice as the product of two low-rank matrices. Formally, each tensor slice V [i] ∈RH1×H1 is factorized into two low rank matrix P [i] ∈RH1×r and Q[i] ∈Rr×H1: V [i] = P [i]Q[i], 1 ≤i ≤H2 (9) where r ≪H1 is the number of factors. Substituting equation (9) into equation (8), we get the factorized tensor function: h = g(aT P [1:H2]Q[1:H2]a + W1a + b1) (10) Figure 4 illustrates the operation in each slice of the factorized tensor. 
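The factorised slice operation of equations (9)–(10) in code, with dimensions following Table 2 (H1 = (w + 1) · d = 150, H2 = 50, r = 10) and randomly initialised placeholder parameters; note that the cost per example drops from O(H1^2 H2) to O(r H1 H2):

```python
import numpy as np

rng = np.random.default_rng(0)
H1, H2, r = 150, 50, 10          # (w + 1) * d, hidden units, number of factors

P = 0.01 * rng.standard_normal((H2, H1, r))   # P[i] for each tensor slice i
Q = 0.01 * rng.standard_normal((H2, r, H1))   # Q[i] for each tensor slice i
W1 = 0.01 * rng.standard_normal((H2, H1)); b1 = np.zeros(H2)

def factorised_tensor_layer(a):
    """h = g(a^T P[1:H2] Q[1:H2] a + W1 a + b1), equation (10)."""
    f1 = np.einsum('ihr,h->ir', P, a)   # project a with P[i]^T: H2 vectors of size r
    f2 = np.einsum('irh,h->ir', Q, a)   # project a with Q[i]:   H2 vectors of size r
    z = np.sum(f1 * f2, axis=1)         # z_i = f1 . f2 = a^T P[i] Q[i] a, one per slice
    return np.tanh(z + W1 @ a + b1)

a = rng.standard_normal(H1)              # concatenated character + tag embeddings
print(factorised_tensor_layer(a).shape)  # -> (50,)
```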
First, vector a is projected into two r-dimension vectors f1 and f2. Then the output zi for each tensor slice i is the dot-product of f1 and f2. The complexity of the tensor operation is now O(rH1H2). As long as r is small enough, the factorized tensor operation would be much faster than the un-factorized one and the number of free parameters would also be much smaller, which prevent the model from overfitting. 3.4 Max-Margin Training We use the Max-Margin criterion to train our model. Intuitively, the Max-Margin criterion provides an alternative to probabilistic, likelihoodbased estimation methods by concentrating directly on the robustness of the decision boundary of a model (Taskar et al., 2005). We use Y (xi) to denote the set of all possible tag sequences for 297 a given sentence xi and the correct tag sequence for xi is yi. The parameters of our model are θ = {W1, b1, W2, b2, M, L, P [1:H2], Q[1:H2]}. We first define a structured margin loss △(yi, ˆy) for predicting a tag sequence ˆy for a given correct tag sequence yi: △(yi, ˆy) = n X j κ1{yi,j ̸= ˆyj} (11) where n is the length of sentence xi and κ is a discount parameter. The loss is proportional to the number of characters with an incorrect tag in the proposed tag sequence, which increases the more incorrect the proposed tag sequence is. For a given training instance (xi, yi), we search for the tag sequence with the highest score: y∗= arg max ˆy∈Y (x) s(xi, ˆy, θ) (12) where the tag sequence is found and scored by the Tensor Neural Network via the function s in equation (6). The object of Max-Margin training is that the highest scoring tag sequence is the correct one: y∗= yi and its score will be larger up to a margin to other possible tag sequences ˆy ∈Y (xi): s(x, yi, θ) ≥s(x, ˆy, θ) + △(yi, ˆy) This leads to the regularized objective function for m training examples: J(θ) = 1 m m X i=1 li(θ) + λ 2 ||θ||2 li(θ) = max ˆy∈Y (xi)(s(xi, ˆy, θ) + △(yi, ˆy)) −s(xi, yi, θ)) (13) By minimizing this object, the score of the correct tag sequence yi is increased and score of the highest scoring incorrect tag sequence ˆy is decreased. The objective function is not differentiable due to the hinge loss. We use a generalization of gradient descent called subgradient method (Ratliff et al., 2007) which computes a gradient-like direction. The subgradient of equation (13) is: ∂J ∂θ = 1 m X i (∂s(xi, ˆymax, θ) ∂θ −∂s(xi, yi, θ) ∂θ )+λθ where ˆymax is the tag sequence with the highest score in equation (13). Following Socher et al. (2013a), we use the diagonal variant of AdaGrad PKU MSRA Identical words 5.5 × 104 8.8 × 104 Total words 1.1 × 106 2.4 × 106 Identical characters 5 × 103 5 × 103 Total characters 1.8 × 106 4.1 × 106 Table 1: Details of the PKU and MSRA datasets Window size w = 5 Character(tag) embedding size d = 25 Hidden unit number H2 = 50 Number of factors r = 10 Initial learning rate α = 0.2 Margin loss discount κ = 0.2 Regularization λ = 10−4 Table 2: Hyperparameters of our model (Duchi et al., 2011) with minibatchs to minimize the objective. The parameter update for the i-th parameter θt,i at time step t is as follows: θt,i = θt−1,i − α qPt τ=1 g2 τ,i gt,i (14) where α is the initial learning rate and gτ ∈R|θi| is the subgradient at time step τ for parameter θi. 4 Experiment 4.1 Data and Model Selection We use the PKU and MSRA data provided by the second International Chinese Word Segmentation Bakeoff (Emerson, 2005) to test our model. They are commonly used by previous state-of-the-art models and neural network models. 
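Returning briefly to the training procedure of section 3.4: the structured margin of equation (11) and the diagonal AdaGrad update of equation (14) are straightforward to sketch. The loss-augmented decoding and subgradient computation are assumed to be supplied elsewhere, and the small epsilon is added only to avoid division by zero; it is not part of equation (14).

```python
import numpy as np

class DiagonalAdaGrad:
    """Per-parameter update of eq. (14): theta -= alpha * g / sqrt(sum of g^2)."""

    def __init__(self, alpha=0.2, eps=1e-8):       # alpha as in Table 2
        self.alpha, self.eps = alpha, eps
        self.sum_sq = {}

    def update(self, params, grads):
        for name, g in grads.items():
            acc = self.sum_sq.setdefault(name, np.zeros_like(g))
            acc += g * g
            params[name] -= self.alpha * g / (np.sqrt(acc) + self.eps)

def margin_loss(gold_tags, pred_tags, kappa=0.2):  # kappa as in Table 2
    """Structured margin of eq. (11): kappa per wrongly tagged character."""
    return kappa * sum(y != t for y, t in zip(gold_tags, pred_tags))

# Toy usage:
params = {"W1": np.ones((3, 3))}
opt = DiagonalAdaGrad()
opt.update(params, {"W1": 0.5 * np.ones((3, 3))})
print(margin_loss(["B", "E", "S"], ["B", "M", "S"]))   # -> 0.2
```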
Details of the data are listed in Table 1. For evaluation, we use the standard bake-off scoring program to calculate precision, recall, F1-score and out-of-vocabulary (OOV) word recall. For model selection, we use the first 90% sentences in the training data for training and the rest 10% sentences as development data. The minibatch size is set to 20. Generally, the number of hidden units has a limited impact on the performance as long as it is large enough. We found that 50 is a good trade-off between speed and model performance. The dimensionality of character (tag) embedding is set to 25 which achieved the best performance and faster than 50- or 100dimensional ones. We also validated on the number of factors for tensor factorization. The performance is not boosted and the training time in298 P R F OOV CRF 87.8 85.7 86.7 57.1 NN 92.4 92.2 92.3 60.0 NN+Tag Embed 93.0 92.7 92.9 61.0 MMTNN 93.7 93.4 93.5 64.2 Table 3: Test results with different configurations. NN stands for the conventional neural network. NN+Tag Embed stands for the neural network with tag embeddings. creases drastically when the number of factors is larger than 10. We hypothesize that larger factor size results in too many parameters to train and hence perform worse. The final hyperparameters of our model are set as in Table 2. 4.2 Experiment Results We first perform a close test3 on the PKU dataset to show the effect of different model configurations. We also compare our model with the CRF model (Lafferty et al., 2001), which is a widely used log-linear model for Chinese word segmentation. The input feature to the CRF model is simply the context characters (unigram feature) without any additional feature engineering. We use an open source toolkit CRF++4 to train the CRF model. All the neural networks are trained using the Max-Margin approach described in Section 3.4. Table 3 summarizes the test results. As we can see, by using Tag embedding, the Fscore is improved by +0.6% and OOV recall is improved by +1.0%, which shows that tag embeddings succeed in modeling the tag-tag interaction and tag-character interaction. Model performance is further boosted after using tensor-based transformation. The F-score is improved by +0.6% while OOV recall is improved by +3.2%, which denotes that tensor-based transformation captures more interactional information than simple nonlinear transformation. Another important result in Table 3 is that our neural network models perform much better than CRF-based model when only unigram features are used. Compared with CRF, there are two differences in neural network models. First, the discrete feature vector is replaced with dense character embeddings. Second, the non-linear transformation 3No other material or knowledge except the training data is allowed 4http://crfpp.googlecode.com/svn/ trunk/doc/index.html?source=navbar 一 一 一(one) 李 李 李(Li) 。 。 。(period) 二(two) 赵(Zhao) ,(comma) 三(three) 蒋(Jiang) :(colon) 四(four) 孔(Kong) ?(question mark) 五(five) 冯(Feng) “(quotation mark) 六(six) 吴(Wu) 、(Chinese comma) Table 4: Examples of character embeddings is used to discover higher level representation. In fact, CRF can be regarded as a special neural network without non-linear function (Wang and Manning, 2013). Wang and Manning (2013) conduct an empirical study on the effect of non-linearity and the results suggest that non-linear models are highly effective only when distributed representation is used. 
To explain why distributed representation captures more information than discrete features, we show in Table 4 the effect of character embeddings which are obtained from the lookup table of MMTNN after training. The first row lists three characters we are interested in. In each column, we list the top 5 characters that are nearest (measured by Euclidean distance) to the corresponding character in the first row according to their embeddings. As we can see, characters in the first column are all Chinese number characters and characters in the second column and the third column are all Chinese family names and Chinese punctuations respectively. Therefore, compared with discrete feature representations, distributed representation can capture the syntactic and semantic similarity between characters. As a result, the model can still perform well even if some words do not appear in the training cases. We further compare our model with previous neural network models on both PKU and MSRA datasets. Since Zheng et al. (2013) did not report the results on the these datasets, we reimplemented their model and tested it on the test data. The results are listed in the first three rows of Table 5, which shows that our model achieved higher F-score than the previous neural network models. 4.3 Unsupervised Pre-training Previous work found that the performance can be improved by pre-training the character embeddings on large unlabeled data and using the obtained embeddings to initialize the character lookup table instead of random initialization 299 Models PKU MSRA P R F OOV P R F OOV (Mansur et al., 2013) 87.1 87.9 87.5 48.9 92.3 92.2 92.2 53.7 (Zheng et al., 2013) 92.8 92.0 92.4 63.3 92.9 93.6 93.3 55.7 MMTNN 93.7 93.4 93.5 64.2 94.6 94.2 94.4 61.4 (Mansur et al., 2013) + Pre-training 91.2 92.7 92.0 68.8 93.1 93.1 93.1 59.7 (Zheng et al., 2013) + Pre-training 93.5 92.2 92.8 69.0 94.2 93.7 93.9 64.1 MMTNN + Pre-training 94.4 93.6 94.0 69.0 95.2 94.6 94.9 64.8 Table 5: Comparison with previous neural network models (Mansur et al., 2013; Zheng et al., 2013). Mikolov et al. (2013b) show that pre-trained embeddings can capture interesting semantic and syntactic information such as king−man+woman ≈queen on English data. There are several ways to learn the embeddings on unlabeled data. Mansur et al. (2013) used the model proposed by Bengio et al. (2003) which learns the embeddings based on neural language model. Zheng et al. (2013) followed the model proposed by Collobert et al. (2008). They constructed a neural network that outputs high scores for windows that occur in the corpus and low scores for windows where one character is replaced by a random one. Mikolov et al. (2013a) proposed a faster skip-gram model word2vec5 which tries to maximize classification of a word based on another word in the same sentence. In this paper, we use word2vec because preliminary experiments did not show differences between performances of these models but word2vec is much faster to train. We pre-train the embeddings on the Chinese Giga-word corpus (Graff and Chen, 2005). As shown in Table 5 (last three rows), both the F-score and OOV recall of our model boost by using pre-training. Our model still outperforms other models after pre-training. 
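The neighbour lists in Table 4 can be reproduced directly from a trained lookup table by ranking all characters by Euclidean distance to a query character's embedding. A sketch with placeholder data (in practice the embedding matrix and character-to-index dictionary come from the trained model):

```python
import numpy as np

def nearest_characters(query, M, char_to_id, k=5):
    """Top-k characters closest (Euclidean distance) to `query` in embedding space.

    M is the d x |D| embedding matrix from the lookup table; char_to_id maps
    each character to its column index in M.
    """
    id_to_char = {i: c for c, i in char_to_id.items()}
    q = M[:, char_to_id[query]]
    dists = np.linalg.norm(M - q[:, None], axis=0)
    order = np.argsort(dists)
    return [id_to_char[i] for i in order if i != char_to_id[query]][:k]

# Toy example with a random 25-dimensional embedding matrix:
rng = np.random.default_rng(0)
chars = list("一二三四五李赵。")
char_to_id = {c: i for i, c in enumerate(chars)}
M = rng.standard_normal((25, len(chars)))
print(nearest_characters("一", M, char_to_id, k=3))
```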
4.4 Minimal Feature Engineering Although we focus on the question that how far we can go without using feature engineering in this paper, the study of deep learning for NLP tasks is still a new area in which it is currently challenging to surpass the state-of-the-art without additional features. To incorporate features into the neural network, Mansur et al. (2013) proposed the feature-based neural network where each context feature is represented as feature embeddings. The idea of feature embeddings is similar to that of character embeddings described in section 2.1. 5https://code.google.com/p/word2vec/ Model PKU MSRA Best05(Chen et al., 2005) 95.0 96.0 Best05(Tseng et al., 2005) 95.0 96.4 (Zhang et al., 2006) 95.1 97.1 (Zhang and Clark, 2007) 94.5 97.2 (Sun et al., 2009) 95.2 97.3 (Sun et al., 2012) 95.4 97.4 (Zhang et al., 2013) 96.1 97.4 MMTNN 94.0 94.9 MMTNN + bigram 95.2 97.2 Table 6: Comparison with state-of-the-art systems Formally, we assume the extracted features form a feature dictionary Df. Then each feature f ∈Df is represented by a d-dimensional vector which is called feature embedding. Following their idea, we try to find out how well our model can perform with minimal feature engineering. A very common feature in Chinese word segmentation is the character bigram feature. Formally, at the i-th character of a sentence c[1:n], the bigram features are ckck+1(i −3 < k < i + 2). In our model, the bigram features are extracted in the window context and then the corresponding bigram embeddings are concatenated with character embeddings in Layer 1 and fed into Layer 2. In Mansur et al. (2013), the bigram embeddings are pre-trained on unlabeled data with character embeddings, which significantly improves the model performance. Given the long time for pre-training bigram embeddings, we only pre-train the character embeddings and the bigram embeddings are initialized as the average of character embeddings of ck and ck+1. Further improvement could be obtained if the bigram embeddings are also pre-trained. Table 6 lists the segmentation performances of our model as well as previous state-of-the-art systems. When bigram features are added, the F-score of our model improves 300 from 94.0% to 95.2% on PKU dataset and from 94.9% to 97.2% on MSRA dataset. It is a competitive result given that our model only use simple bigram features while other models use more complex features. For example, Sun et al. (2012) uses additional word-based features. Zhang et al. (2013) uses eight types of features such as Mutual Information and Accessor Variety and they extract dynamic statistical features from both an in-domain corpus and an out-of-domain corpus using co-training. Since feature engineering is not the main focus of this paper, we did not experiment with more features. 5 Related Work Chinese word segmentation has been studied with considerable efforts in the NLP community. The most popular approach treats word segmentation as a sequence labeling problem which was first proposed in Xue (2003). Most previous systems address this task by using linear statistical models with carefully designed features such as bigram features, punctuation information (Li and Sun, 2009) and statistical information (Sun and Xu, 2011). Recently, researchers have tended to explore new approaches for word segmentation which circumvent the feature engineering by automatically learning features with neural network models (Mansur et al., 2013; Zheng et al., 2013). 
Our study is consistent with this line of research, however, our model explicitly models the interactions between tags and context characters and accordingly captures more semantic information. Tensor-based transformation was also used in other neural network models for its ability to capture multiple interactions in data. For example, Socher et al. (2013b) exploited tensor-based function in the task of Sentiment Analysis to capture more semantic information from constituents. However, given the small size of their tensor matrix, they do not have the problem of high time cost and overfitting problem as we faced in modeling a sequence labeling task like Chinese word segmentation. That’s why we propose to decrease computational cost and avoid overfitting with tensor factorization. Various tensor factorization (decomposition) methods have been proposed recently for tensorbased dimension reduction (Cohen et al., 2013; Van de Cruys et al., 2013; Chang et al., 2013). For example, Chang et al. (2013) proposed the Multi-Relational Latent Semantic Analysis. Similar to LSA, a low rank approximation of the tensor is derived using a tensor decomposition approch. Similar ideas were also used for collaborative filtering (Salakhutdinov et al., 2007) and object recognition (Ranzato et al., 2010). Our tensor factorization is related to these work but uses a different tensor factorization approach. By introducing tensor factorization into the neural network model for sequence labeling tasks, the model training and inference are speeded up and overfitting is prevented. 6 Conclusion In this paper, we propose a new model called MaxMargin Tensor Neural Network that explicitly models the interactions between tags and context characters. Moreover, we propose a tensor factorization approach that effectively improves the model efficiency and avoids the risk of overfitting. Experiments on the benchmark datasets show that our model achieve better results than previous neural network models and that our model can achieve a competitive result with minimal feature engineering. In the future, we plan to further extend our model and apply it to other structure prediction problems. Acknowledgments This work is supported by National Natural Science Foundation of China under Grant No. 61273318 and National Key Basic Research Program of China 2014CB340504. References Yoshua Bengio, R´ejean Ducharme, Pascal Vincent, and Christian Jauvin. 2003. A neural probabilistic language model. Journal of Machine Learning Research, 3:1137–1155. Kai-Wei Chang, Wen-tau Yih, and Christopher Meek. 2013. Multi-relational latent semantic analysis. In Proceedings of the 2013 Conference on Empirical Methods in Natural Language Processing, pages 1602–1612, Seattle, Washington, USA, October. Association for Computational Linguistics. Aitao Chen, Yiping Zhou, Anne Zhang, and Gordon Sun. 2005. Unigram language model for chinese word segmentation. In Proceedings of the 4th SIGHAN Workshop on Chinese Language Processing, pages 138–141. Association for Computational Linguistics Jeju Island, Korea. 301 Shay B Cohen, Giorgio Satta, and Michael Collins. 2013. Approximate pcfg parsing using tensor decomposition. In Proceedings of NAACL-HLT, pages 487–496. Michael Collins. 2002. Discriminative training methods for hidden markov models: Theory and experiments with perceptron algorithms. In Proceedings of the ACL-02 conference on Empirical methods in natural language processing-Volume 10, pages 1–8. Association for Computational Linguistics. 
Ronan Collobert and Jason Weston. 2008. A unified architecture for natural language processing: Deep neural networks with multitask learning. In Proceedings of the 25th international conference on Machine learning, pages 160–167. ACM. Ronan Collobert, Jason Weston, L´eon Bottou, Michael Karlen, Koray Kavukcuoglu, and Pavel Kuksa. 2011. Natural language processing (almost) from scratch. The Journal of Machine Learning Research, 12:2493–2537. John Duchi, Elad Hazan, and Yoram Singer. 2011. Adaptive subgradient methods for online learning and stochastic optimization. The Journal of Machine Learning Research, 999999:2121–2159. Thomas Emerson. 2005. The second international chinese word segmentation bakeoff. In Proceedings of the Fourth SIGHAN Workshop on Chinese Language Processing, volume 133. David Graff and Ke Chen. 2005. Chinese gigaword. LDC Catalog No.: LDC2003T09, ISBN, 1:58563– 230. Geoffrey E Hinton. 1986. Learning distributed representations of concepts. In Proceedings of the eighth annual conference of the cognitive science society, pages 1–12. Amherst, MA. Alex Krizhevsky, Geoffrey E Hinton, et al. 2010. Factored 3-way restricted boltzmann machines for modeling natural images. In International Conference on Artificial Intelligence and Statistics, pages 621– 628. John Lafferty, Andrew McCallum, and Fernando CN Pereira. 2001. Conditional random fields: Probabilistic models for segmenting and labeling sequence data. Zhongguo Li and Maosong Sun. 2009. Punctuation as implicit annotations for chinese word segmentation. Computational Linguistics, 35(4):505–512. Mairgup Mansur, Wenzhe Pei, and Baobao Chang. 2013. Feature-based neural language model and chinese word segmentation. In Proceedings of the Sixth International Joint Conference on Natural Language Processing. Tomas Mikolov, Kai Chen, Greg Corrado, and Jeffrey Dean. 2013a. Efficient estimation of word representations in vector space. arXiv preprint arXiv:1301.3781. Tomas Mikolov, Wen-tau Yih, and Geoffrey Zweig. 2013b. Linguistic regularities in continuous space word representations. In Proceedings of NAACLHLT, pages 746–751. Marc’Aurelio Ranzato, Alex Krizhevsky, and Geoffrey E Hinton. 2010. Factored 3-way restricted boltzmann machines for modeling natural images. Nathan D Ratliff, J Andrew Bagnell, and Martin A Zinkevich. 2007. (online) subgradient methods for structured prediction. Ruslan Salakhutdinov, Andriy Mnih, and Geoffrey Hinton. 2007. Restricted boltzmann machines for collaborative filtering. In Proceedings of the 24th international conference on Machine learning, pages 791–798. ACM. Holger Schwenk, Anthony Rousseau, and Mohammed Attik. 2012. Large, pruned or continuous space language models on a gpu for statistical machine translation. In Proceedings of the NAACL-HLT 2012 Workshop: Will We Ever Really Replace the N-gram Model? On the Future of Language Modeling for HLT, pages 11–19. Association for Computational Linguistics. Richard Socher, John Bauer, Christopher D Manning, and Andrew Y Ng. 2013a. Parsing with compositional vector grammars. In Annual Meeting of the Association for Computational Linguistics (ACL). Richard Socher, Alex Perelygin, Jean Y Wu, Jason Chuang, Christopher D Manning, Andrew Y Ng, and Christopher Potts. 2013b. Recursive deep models for semantic compositionality over a sentiment treebank. EMNLP. Weiwei Sun and Jia Xu. 2011. Enhancing chinese word segmentation using unlabeled data. In Proceedings of the Conference on Empirical Methods in Natural Language Processing, pages 970–979. 
Association for Computational Linguistics. Xu Sun, Yaozhong Zhang, Takuya Matsuzaki, Yoshimasa Tsuruoka, and Jun’ichi Tsujii. 2009. A discriminative latent variable chinese segmenter with hybrid word/character information. In Proceedings of Human Language Technologies: The 2009 Annual Conference of the North American Chapter of the Association for Computational Linguistics, pages 56–64. Association for Computational Linguistics. Xu Sun, Houfeng Wang, and Wenjie Li. 2012. Fast online training with frequency-adaptive learning rates for chinese word segmentation and new word detection. In Proceedings of the 50th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers), pages 253–262, Jeju Island, 302 Korea, July. Association for Computational Linguistics. Ben Taskar, Vassil Chatalbashev, Daphne Koller, and Carlos Guestrin. 2005. Learning structured prediction models: A large margin approach. In Proceedings of the 22nd international conference on Machine learning, pages 896–903. ACM. Huihsin Tseng, Pichuan Chang, Galen Andrew, Daniel Jurafsky, and Christopher Manning. 2005. A conditional random field word segmenter for sighan bakeoff 2005. In Proceedings of the Fourth SIGHAN Workshop on Chinese Language Processing, volume 171. Tim Van de Cruys, Thierry Poibeau, and Anna Korhonen. 2013. A tensor-based factorization model of semantic compositionality. In Proceedings of NAACL-HLT, pages 1142–1151. Mengqiu Wang and Christopher D Manning. 2013. Effect of non-linear deep architecture in sequence labeling. In Proceedings of the Sixth International Joint Conference on Natural Language Processing. Nianwen Xue. 2003. Chinese word segmentation as character tagging. Computational Linguistics and Chinese Language Processing, 8(1):29–48. Yue Zhang and Stephen Clark. 2007. Chinese segmentation with a word-based perceptron algorithm. In ANNUAL MEETING-ASSOCIATION FOR COMPUTATIONAL LINGUISTICS, volume 45, page 840. Ruiqiang Zhang, Genichiro Kikui, and Eiichiro Sumita. 2006. Subword-based tagging by conditional random fields for chinese word segmentation. In Proceedings of the Human Language Technology Conference of the NAACL, Companion Volume: Short Papers, pages 193–196. Association for Computational Linguistics. Longkai Zhang, Houfeng Wang, Xu Sun, and Mairgup Mansur. 2013. Exploring representations from unlabeled data with co-training for Chinese word segmentation. In Proceedings of the 2013 Conference on Empirical Methods in Natural Language Processing, pages 311–321, Seattle, Washington, USA, October. Association for Computational Linguistics. Xiaoqing Zheng, Hanyang Chen, and Tianyu Xu. 2013. Deep learning for Chinese word segmentation and POS tagging. In Proceedings of the 2013 Conference on Empirical Methods in Natural Language Processing, pages 647–657, Seattle, Washington, USA, October. Association for Computational Linguistics. 303
Proceedings of the 52nd Annual Meeting of the Association for Computational Linguistics, pages 304–313, Baltimore, Maryland, USA, June 23-25 2014. c⃝2014 Association for Computational Linguistics An Empirical Study on the Effect of Negation Words on Sentiment Xiaodan Zhu, Hongyu Guo, Saif Mohammad and Svetlana Kiritchenko National Research Council Canada 1200 Montreal Road Ottawa, K1A 0R6, ON, Canada {Xiaodan.Zhu,Hongyu.Guo,Saif.Mohammad,Svetlana.Kiritchenko} @nrc-cnrc.gc.ca Abstract Negation words, such as no and not, play a fundamental role in modifying sentiment of textual expressions. We will refer to a negation word as the negator and the text span within the scope of the negator as the argument. Commonly used heuristics to estimate the sentiment of negated expressions rely simply on the sentiment of argument (and not on the negator or the argument itself). We use a sentiment treebank to show that these existing heuristics are poor estimators of sentiment. We then modify these heuristics to be dependent on the negators and show that this improves prediction. Next, we evaluate a recently proposed composition model (Socher et al., 2013) that relies on both the negator and the argument. This model learns the syntax and semantics of the negator’s argument with a recursive neural network. We show that this approach performs better than those mentioned above. In addition, we explicitly incorporate the prior sentiment of the argument and observe that this information can help reduce fitting errors. 1 Introduction Morante and Sporleder (2012) define negation to be “a grammatical category that allows the changing of the truth value of a proposition”. Negation is often expressed through the use of negative signals or negators–words like isn’t and never, and it can significantly affect the sentiment of its scope. Understanding the impact of negation on sentiment is essential in automatic analysis of sentiment. The literature contains interesting research attempting to model and understand the behavior (reviewed in Section 2). For example, Figure 1: Effect of a list of common negators in modifying sentiment values in Stanford Sentiment Treebank. The x-axis is s(⃗w), and y-axis is s(wn, ⃗w). Each dot in the figure corresponds to a text span being modified by (composed with) a negator in the treebank. The red diagonal line corresponds to the sentiment-reversing hypothesis that simply reverses the sign of sentiment values. a simple yet influential hypothesis posits that a negator reverses the sign of the sentiment value of the modified text (Polanyi and Zaenen, 2004; Kennedy and Inkpen, 2006). The shifting hypothesis (Taboada et al., 2011), however, assumes that negators change sentiment values by a constant amount. In this paper, we refer to a negation word as the negator (e.g., isn’t), a text span being modified by and composed with a negator as the argument (e.g., very good), and entire phrase (e.g., isn’t very good) as the negated phrase. The recently available Stanford Sentiment Treebank (Socher et al., 2013) renders manually annotated, real-valued sentiment scores for all phrases in parse trees. This corpus provides us with the data to further understand the quantitative behavior of negators, as the effect of negators can now be studied with arguments of rich syntactic and semantic variety. Figure 1 illustrates the effect of a common list of negators on sentiment as observed 304 on the Stanford Sentiment Treebank.1 Each dot in the figure corresponds to a negated phrase in the treebank. 
The x-axis is the sentiment score of its argument s(⃗w) and y-axis the sentiment score of the entire negated phrase s(wn, ⃗w). We can see that the reversing assumption (the red diagonal line) does capture some regularity of human perception, but rather roughly. Moreover, the figure shows that same or similar s(⃗w) scores (x-axis) can correspond to very different s(wn, ⃗w) scores (y-axis), which, to some degree, suggests the potentially complicated behavior of negators.2 This paper describes a quantitative study of the effect of a list of frequent negators on sentiment. We regard the negators’ behavior as an underlying function embedded in annotated data; we aim to model this function from different aspects. By examining sentiment compositions of negators and arguments, we model the quantitative behavior of negators in changing sentiment. That is, given a negated phrase (e.g., isn’t very good) and the sentiment score of its argument (e.g., s(“very good′′) = 0.5), we focus on understanding the negator’s quantitative behavior in yielding the sentiment score of the negated phrase s(“isn′t very good′′). We first evaluate the modeling capabilities of two influential heuristics and show that they capture only very limited regularity of negators’ effect. We then extend the models to be dependent on the negators and demonstrate that such a simple extension can significantly improve the performance of fitting to the human annotated data. Next, we evaluate a recently proposed composition model (Socher, 2013) that relies on both the negator and the argument. This model learns the syntax and semantics of the negator’s argument with a recursive neural network. This approach performs significantly better than those mentioned above. In addition, we explicitly incorporate the prior sentiment of the argument and observe that this information helps reduce fitting errors. 1The sentiment values have been linearly rescaled from the original range [0, 1] to [-0.5, 0.5]; in the figure a negative or positive value corresponds to a negative or a positive sentiment respectively; zero means neutral. The negator list will be discussed later in the paper. 2Similar distribution is observed in other data such as Tweets (Kiritchenko et al., 2014). 2 Related work Automatic sentiment analysis The expression of sentiment is an integral component of human language. In written text, sentiment is conveyed with word senses and their composition, and in speech also via prosody such as pitch (Mairesse et al., 2012). Early work on automatic sentiment analysis includes the widely cited work of (Hatzivassiloglou and McKeown, 1997; Pang et al., 2002; Turney, 2002), among others. Since then, there has been an explosion of research addressing various aspects of the problem, including detecting subjectivity, rating and classifying sentiment, labeling sentiment-related semantic roles (e.g., target of sentiment), and visualizing sentiment (see surveys by Pang and Lee (2008) and Liu and Zhang (2012)). Negation modeling Negation is a general grammatical category pertaining to the changing of the truth values of propositions; negation modeling is not limited to sentiment. For example, paraphrase and contradiction detection systems rely on detecting negated expressions and opposites (Harabagiu et al., 2006). In general, a negated expression and the opposite of the expression may or may not convey the same meaning. For example, not alive has the same meaning as dead, however, not tall does not always mean short. 
Some automatic methods to detect opposites were proposed by Hatzivassiloglou and McKeown (1997) and Mohammad et al. (2013). Negation modeling for sentiment An early yet influential reversing assumption conjectures that a negator reverses the sign of the sentiment value of the modified text (Polanyi and Zaenen, 2004; Kennedy and Inkpen, 2006), e.g., from +0.5 to 0.5, or vice versa. A different hypothesis, called the shifting hypothesis in this paper, assumes that negators change the sentiment values by a constant amount (Taboada et al., 2011; Liu and Seneff, 2009). Other approaches to negation modeling have been discussed in (Jia et al., 2009; Wiegand et al., 2010; Lapponi et al., 2012; Benamara et al., 2012). In the process of semantic composition, the effect of negators could depend on the syntax and semantics of the text spans they modify. The approaches of modeling this include bag-of-wordbased models. For example, in the work of (Kennedy and Inkpen, 2006), a feature not good will be created if the word good is encountered 305 within a predefined range after a negator. There exist different ways of incorporating more complicated syntactic and semantic information. Much recent work considers sentiment analysis from a semantic-composition perspective (Moilanen and Pulman, 2007; Choi and Cardie, 2008; Socher et al., 2012; Socher et al., 2013), which achieved the state-of-the-art performance. Moilanen and Pulman (2007) used a collection of hand-written compositional rules to assign sentiment values to different granularities of text spans. Choi and Cardie (2008) proposed a learning-based framework. The more recent work of (Socher et al., 2012; Socher et al., 2013) proposed models based on recursive neural networks that do not rely on any heuristic rules. Such models work in a bottom-up fashion over the parse tree of a sentence to infer the sentiment label of the sentence as a composition of the sentiment expressed by its constituting parts. The approach leverages a principled method, the forward and backward propagation, to learn a vector representation to optimize the system performance. In principle neural network is able to fit very complicated functions (Mitchell, 1997), and in this paper, we adapt the state-of-the-art approach described in (Socher et al., 2013) to help understand the behavior of negators specifically. 3 Negation models based on heuristics We begin with previously proposed methods that leverage heuristics to model the behavior of negators. We then propose to extend them to consider lexical information of the negators themselves. 3.1 Non-lexicalized assumptions and modeling In previous research, some influential, widely adopted assumptions posit the effect of negators to be independent of both the specific negators and the semantics and syntax of the arguments. In this paper, we call a model based on such assumptions a non-lexicalized model. In general, we can simply define this category of models in Equation 1. That is, the model parameters are only based on the sentiment value of the arguments. s(wn, ⃗w) def = f(s(⃗w)) (1) 3.1.1 Reversing hypothesis A typical model falling into this category is the reversing hypothesis discussed in Section 2, where a negator simply reverses the sentiment score s(⃗w) to be −s(⃗w); i.e., f(s(⃗w)) = −s(⃗w). 3.1.2 Shifting hypothesis Basic shifting Similarly, a shifting based model depends on s(⃗w) only, which can be written as: f(s(⃗w)) = s(⃗w) −sign(s(⃗w)) ∗C (2) where sign(.) 
is the standard sign function which determines if the constant C should be added to or deducted from s(wn): the constant is added to a negative s(⃗w) but deducted from a positive one. Polarity-based shifting As will be shown in our experiments, negators can have different shifting power when modifying a positive or a negative phrase. Thus, we explore the use of two different constants for these two situations, i.e., f(s(⃗w)) = s(⃗w)−sign(s(⃗w))∗C(sign(s(⃗w))). The constant C now can take one of two possible values. We will show that this simple modification improves the fitting performance statistically significantly. Note also that instead of determining these constants by human intuition, we use the training data to find the constants in all shifting-based models as well as for the parameters in other models. 3.2 Simple lexicalized assumptions The above negation hypotheses rely on s(⃗w). As intuitively shown in Figure 1, the capability of the non-lexicalized heuristics might be limited. Further semantic or syntactic information from either the negators or the phrases they modify could be helpful. The most straightforward way of expanding the non-lexicalized heuristics is probably to make the models to be dependent on the negators. s(wn, ⃗w) def = f(wn, s(⃗w)) (3) Negator-based shifting We can simply extend the basic shifting model above to consider the lexical information of negators: f(s(⃗w)) = s(⃗w) − sign(s(⃗w)) ∗C(wn). That is, each negator has its own C. We call this model negator-based shifting. We will show that this model also statistically significantly outperforms the basic shifting without overfitting, although the number of parameters have increased. 306 Combined shifting We further combine the negator-based shifting and polarity-based shifting above: f(s(⃗w)) = s(⃗w) −sign(s(⃗w)) ∗ C(wn, sign(s(⃗w))). This shifting model is based on negators and the polarity of the text they modify: constants can be different for each negator-polarity pair. The number of parameters in this model is the multiplication of number of negators by two (the number of sentiment polarities). This model further improves the fitting performance on the test data. 4 Semantics-enriched modeling Negators can interact with arguments in complex ways. Figure 1 shows the distribution of the effect of negators on sentiment without considering further semantics of the arguments. The question then is that whether and how much incorporating further syntax and semantic information can help better fit or predict the negation effect. Above, we have considered the semantics of the negators. Below, we further make the models to be dependent on the arguments. This can be written as: s(wn, ⃗w) def = f(wn, s(⃗w), r(⃗w)) (4) In the formula, r(⃗w) is a certain type of representation for the argument ⃗w and it models the semantics or/and syntax of the argument. There exist different ways of implementing r(⃗w). We consider two models in this study: one drops s(⃗w) in Equation 4 and directly models f(wn, r(⃗w)). That is, the non-uniform information shown in Figure 1 is not directly modeled. The other takes into account s(⃗w) too. For the former, we adopt the recursive neural tensor network (RNTN) proposed recently by Socher et al. (2013), which has showed to achieve the state-of-the-art performance in sentiment analysis. For the latter, we propose a prior sentimentenriched tensor network (PSTN) to take into account the prior sentiment of the argument s(⃗w). 
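Before moving to the neural models, the heuristic models of Section 3 are compact enough to state directly in code. The sketch below is illustrative only: sentiment scores are assumed to be centered at zero as in Figure 1, and the constants C are placeholders, whereas in the paper all constants are fit on the training data rather than set by hand.

```python
def sign(x):
    """Standard sign function: -1, 0 or +1."""
    return (x > 0) - (x < 0)

def reverse(s_arg):
    """Reversing hypothesis: f(s(w)) = -s(w)."""
    return -s_arg

def basic_shift(s_arg, C=0.3):
    """Basic shifting: f(s(w)) = s(w) - sign(s(w)) * C."""
    return s_arg - sign(s_arg) * C

def polarity_shift(s_arg, C_pos=0.35, C_neg=0.25):
    """Polarity-based shifting: separate constants for positive and negative arguments."""
    return s_arg - sign(s_arg) * (C_pos if s_arg > 0 else C_neg)

def negator_shift(s_arg, negator, C_by_negator):
    """Negator-based shifting: each negator w_n has its own constant C(w_n)."""
    return s_arg - sign(s_arg) * C_by_negator[negator]

def combined_shift(s_arg, negator, C_by_pair):
    """Combined shifting: one constant per (negator, argument polarity) pair."""
    return s_arg - sign(s_arg) * C_by_pair[(negator, sign(s_arg))]
```

These correspond to the models of Equations 1 to 3: the first three depend only on s(w), the last two also on the identity of the negator, and none of them look at the argument itself, which is exactly what the semantics-enriched models introduced above add.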
4.1 RNTN: Recursive neural tensor network A recursive neural tensor network (RNTN) is a specific form of feed-forward neural network based on syntactic (phrasal-structure) parse tree to conduct compositional sentiment analysis. For completeness, we briefly review it here. More details can be found in (Socher et al., 2013). As shown in the black portion of Figure 2, each instance of RNTN corresponds to a binary parse Figure 2: Prior sentiment-enriched tensor network (PSTN) model for sentiment analysis. tree of a given sentence. Each node of the parse tree is a fixed-length vector that encodes compositional semantics and syntax, which can be used to predict the sentiment of this node. The vector of a node, say p2 in Figure 2, is computed from the ddimensional vectors of its two children, namely a and p1 (a, p1 ∈Rd×1), with a non-linear function: p2 = tanh(  a p1 T V [1:d]  a p1  + W  a p1  ) (5) where, W ∈Rd×(d+d) and V ∈R(d+d)×(d+d)×d are the matrix and tensor for the composition function. A major difference of RNTN from the conventional recursive neural network (RRN) (Socher et al., 2012) is the use of the tensor V in order to directly capture the multiplicative interaction of two input vectors, although the matrix W implicitly captures the nonlinear interaction between the input vectors. The training of RNTN uses conventional forward-backward propagation. 4.2 PSTN: Prior sentiment-enriched tensor network The non-uniform distribution in Figure 1 has showed certain correlations between the sentiment values of s(wn, ⃗w) and s(⃗w), and such information has been leveraged in the models discussed in Section 3. We intend to devise a model that implements Equation 4. It bridges between the models we have discussed above that use either s(⃗w) or r(⃗w). We extend RNTN to directly consider the sentiment information of arguments. Consider the node p2 in Figure 2. When calculating its vector, we aim to directly engage the sentiment information of its right child, i.e., the argument. To this end, we make use of the sentiment class information of 307 p1, noted as psen 1 . As a result, the vector of p2 is calculated as follows: p2 = tanh(  a p1 T V [1:d]  a p1  + W  a p1  (6) +  a psen 1 T V sen[1:d]  a psen 1  + W sen  a psen 1  ) As shown in Equation 6, for the node vector p1 ∈Rd×1, we employ a matrix, namely W sen ∈ Rd×(d+m) and a tensor, V sen ∈R(d+m)×(d+m)×d, aiming at explicitly capturing the interplays between the sentiment class of p1, denoted as psen 1 (∈ Rm×1), and the negator a. Here, we assume the sentiment task has m classes. Following the idea of Wilson et al. (2005), we regard the sentiment of p1 as a prior sentiment as it has not been affected by the specific context (negators), so we denote our method as prior sentiment-enriched tensor network (PSTN). In Figure 2, the red portion shows the added components of PSTN. Note that depending on different purposes, psen 1 can take the value of the automatically predicted sentiment distribution obtained in forward propagation, the gold sentiment annotation of node p1, or even other normalized prior sentiment value or confidence score from external sources (e.g., sentiment lexicons or external training data). This is actually an interesting place to extend the current recursive neural network to consider extrinsic knowledge. However, in our current study, we focus on exploring the behavior of negators. As we have discussed above, we will use the human annotated sentiment for the arguments, same as in the models discussed in Section 3. 
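The forward composition of Equations 5 and 6 can be made concrete with a few lines of NumPy. This is only a sketch of a single composition step: the dimensions, the random initialization, and the one-hot prior sentiment vector are our illustrative assumptions, not the authors' implementation, in which all parameters are learned by backpropagation.

```python
import numpy as np

d, m = 25, 5                        # vector dimensionality and number of sentiment classes (assumed)
rng = np.random.default_rng(0)

# Composition parameters; slice k of each tensor is the bilinear form for output dimension k.
W     = rng.normal(scale=0.01, size=(d, 2 * d))
V     = rng.normal(scale=0.01, size=(d, 2 * d, 2 * d))
W_sen = rng.normal(scale=0.01, size=(d, d + m))
V_sen = rng.normal(scale=0.01, size=(d, d + m, d + m))

def bilinear(x, T):
    """Vector whose k-th entry is x^T T[k] x."""
    return np.array([x @ T[k] @ x for k in range(T.shape[0])])

def rntn_compose(a, p1):
    """Equation 5: parent vector p2 from the child vectors a (negator) and p1 (argument)."""
    x = np.concatenate([a, p1])
    return np.tanh(bilinear(x, V) + W @ x)

def pstn_compose(a, p1, p1_sen):
    """Equation 6: additionally uses the prior sentiment distribution p1_sen of the argument."""
    x, xs = np.concatenate([a, p1]), np.concatenate([a, p1_sen])
    return np.tanh(bilinear(x, V) + W @ x + bilinear(xs, V_sen) + W_sen @ xs)

# Toy usage: a negator vector, an argument vector, and a one-hot gold prior sentiment class.
a, p1, p1_sen = rng.normal(size=d), rng.normal(size=d), np.eye(m)[1]
p2 = pstn_compose(a, p1, p1_sen)     # d-dimensional vector for the negated phrase
```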
With the new matrix and tensor, we then have θ = (V, V sen, W, W sen, W label, L) as the PSTN model’s parameters. Here, L denotes the vector representations of the word dictionary. 4.2.1 Inference and Learning Inference and learning in PSTN follow a forwardbackward propagation process similar to that in (Socher et al., 2013), and for completeness, we depict the details as follows. To train the model, one first needs to calculate the predicted sentiment distribution for each node: psen i = W labelpi, psen i ∈Rm×1 and then compute the posterior probability over the m labels: yi = softmax(psen i ) During learning, following the method used by the RNTN model in (Socher et al., 2013), PSTN also aims to minimize the cross-entropy error between the predicted distribution yi ∈Rm×1 at node i and the target distribution ti ∈Rm×1 at that node. That is, the error for a sentence is calculated as: E(θ) = X i X j ti jlogyi j + λ ∥θ∥2 (7) where, λ represents the regularization hyperparameters, and j ∈m denotes the j-th element of the multinomial target distribution. To minimize E(θ), the gradient of the objective function with respect to each of the parameters in θ is calculated efficiently via backpropagation through structure, as proposed by Goller and Kchler (1996). Specifically, we first compute the prediction errors in all tree nodes bottom-up. After this forward process, we then calculate the derivatives of the softmax classifiers at each node in the tree in a top-down fashion. We will discuss the gradient computation for the V sen and W sen in detail next. Note that the gradient calculations for the V, W, W label, L are the same as that of presented in (Socher et al., 2013). In the backpropogation process of the training, each node (except the root node) in the tree carries two kinds of errors: the local softmax error and the error passing down from its parent node. During the derivative computation, the two errors will be summed up as the complement incoming error for the node. We denote the complete incoming error and the softmax error vector for node i as δi,com ∈Rd×1 and δi,s ∈Rd×1, respectively. With this notation, the error for the root node p2 can be formulated as follows. δp2,com = δp2,s = (W T (yp2 −tp2)) ⊗f ′([a; p1]) (8) where ⊗is the Hadamard product between the two vectors and f ′ is the element-wise derivative of f = tanh. With the results from Equation 8, we then can calculate the derivatives for the W sen at node p2 using the following equation: ∂Ep2 W sen = δp2,com([a; psen 1 ])T Similarly, for the derivative of each slice k(k = 308 1, . . . , d) of the V sen tensor, we have the following: ∂Ep2 V sen [k] = δp2,com k  a psen 1   a psen 1 T Now, let’s form the equations for computing the error for the two children of the p2 node. The difference for the error at p2 and its two children is that the error for the latter will need to compute the error message passing down from p2. We denote the error passing down as δp2,down, where the left child and the right child of p2 take the 1st and 2nd half of the error δp2,down, namely δp2,down[1 : d] and δp2,down[d + 1 : 2d], respectively. Following this notation, we have the error message for the two children of p2, provided that we have the δp2,down: δp1,com = δp1,s + δp2,down[d + 1 : 2d] = (W T (yp1 −tp1)) ⊗f ′([b; c]) + δp2,down[d + 1 : 2d] The incoming error message of node a can be calculated similarly. 
Finally, we can finish the above equations with the following formula for computing δp2,down: δp2,down = (W T δp2,com) ⊗f ′([a; p1]) + δtensor where δtensor = [δV [1 : d] + δV sen[1 : d], δV [d + 1 : 2d]] = d X k=1 δp2,com k (V[k] + (V[k])T ) ⊗f ′([a; p1])[1 : d] + d X k=1 δp2,com k (V sen [k] + (V sen [k] )T ) ⊗f ′([a; psen 1 ])[1 : d] + d X k=1 δp2,com k (V[k] + (V[k])T ) ⊗f ′([a; p1])[d + 1 : 2d] After the models are trained, they are applied to predict the sentiment of the test data. The original RNTN and the PSTN predict 5-class sentiment for each negated phrase; we map the output to real-valued scores based on the scale that Socher et al. (2013) used to map real-valued sentiment scores to sentiment categories. Specifically, we conduct the mapping with the formula: preal i = yi · [0.1 0.3 0.5 0.7 0.9]; i.e., we calculate the dot product of the posterior probability yi and the scaling vector. For example, if yi = [0.5 0.5 0 0 0], meaning this phrase has a 0.5 probability to be in the first category (strong negative) and 0.5 for the second category (weak negative), the resulting preal i will be 0.2 (0.5*0.1+0.5*0.3). 5 Experiment set-up Data As described earlier, the Stanford Sentiment Treebank (Socher et al., 2013) has manually annotated, real-valued sentiment values for all phrases in parse trees. This provides us with the training and evaluation data to study the effect of negators with syntax and semantics of different complexity in a natural setting. The data contain around 11,800 sentences from movie reviews that were originally collected by Pang and Lee (2005). The sentences were parsed with the Stanford parser (Klein and Manning, 2003). The phrases at all tree nodes were manually annotated with one of 25 sentiment values that uniformly span between the positive and negative poles. The values are normalized to the range of [0, 1]. In this paper, we use a list of most frequent negators that include the words not, no, never, and their combinations with auxiliaries (e.g., didn’t). We search these negators in the Stanford Sentiment Treebank and normalize the same negators to a single form; e.g., “is n’t”, “isn’t”, and “is not” are all normalized to “is not”. Each occurrence of a negator and the phrase it is directly composed with in the treebank, i.e., ⟨wn, ⃗w⟩, is considered a data point in our study. In total, we collected 2,261 pairs, including 1,845 training and 416 test cases. The split of training and test data is same as specified in (Socher et al., 2013). Evaluation metrics We use the mean absolute error (MAE) to evaluate the models, which measures the averaged absolute offsets between the predicted sentiment values and the gold standard. More specifically, MAE is calculated as: MAE = 1 N P ⟨wn,⃗w⟩|(ˆs(wn, ⃗w) −s(wn, ⃗w))|, where ˆs(wn, ⃗w) denotes the gold sentiment value and s(wn, ⃗w) the predicted one for the pair ⟨wn, ⃗w⟩, and N is the total number of test instances. Note that mean square error (MSE) is another widely used measure for regression, but it is less intuitive for out task here. 6 Experimental results Overall regression performance Table 1 shows the overall fitting performance of all models. The first row of the table is a random baseline, which 309 simply guesses the sentiment value for each test case randomly in the range [0,1]. The table shows that the basic reversing and shifting heuristics do capture negators’ behavior to some degree, as their MAE scores are higher than that of the baseline. 
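The MAE values compared here are computed as described in Section 5, after mapping each predicted 5-class posterior to a real-valued score as in Section 4.2.1. A minimal sketch of that evaluation step follows; the example reproduces the worked case above, and everything else is a plain restatement of the formulas.

```python
import numpy as np

SCALE = np.array([0.1, 0.3, 0.5, 0.7, 0.9])     # class centres used to map 5 classes to [0, 1]

def to_real_valued(posterior):
    """Dot product of the predicted class distribution y_i with the scaling vector."""
    return float(np.dot(posterior, SCALE))

def mean_absolute_error(gold_scores, predicted_scores):
    """MAE over all <negator, argument> test pairs."""
    gold = np.asarray(gold_scores)
    pred = np.asarray(predicted_scores)
    return float(np.mean(np.abs(gold - pred)))

print(to_real_valued([0.5, 0.5, 0.0, 0.0, 0.0]))  # 0.2, as in the worked example
```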
Making the basic shifting model to be dependent on the negators (model 4) reduces the prediction error significantly as compared with the error of the basic shifting (model 3). The same is true for the polarity-based shifting (model 5), reflecting that the roles of negators are different when modifying positive and negative phrases. Merging these two models yields additional improvement (model 6). Assumptions MAE Baseline (1) Random 0.2796 Non-lexicalized (2) Reversing 0.1480* (3) Basic shifting 0.1452* Simple-lexicalized (4) Negator-based shifting 0.1415† (5) Polarity-based shifting 0.1417† (6) Combined shifting 0.1387† Semantics-enriched (7) RNTN 0.1097** (8) PSTN 0.1062†† Table 1: Mean absolute errors (MAE) of fitting different models to Stanford Sentiment Treebank. Models marked with an asterisk (*) are statistically significantly better than the random baseline. Models with a dagger sign (†) significantly outperform model (3). Double asterisks ** indicates a statistically significantly different from model (6), and the model with the double dagger ††is significantly better than model (7). One-tailed paired t-test with a 95% significance level is used here. Furthermore, modeling the syntax and semantics with the state-of-the-art recursive neural network (model 7 and 8) can dramatically improve the performance over model 6. The PSTN model, which takes into account the human-annotated prior sentiment of arguments, performs the best. This could suggest that additional external knowledge, e.g., that from human-built resources or automatically learned from other data (e.g., as in (Kiritchenko et al., 2014)), including sentiment that cannot be inferred from its constituent expressions, might be incorporated to benefit the current is_never will_not is_not does_not barely was_not could_not not did_not unlikely do_not can_not no has_not superficial would_not should_not 0.05 0.10 0.15 0.20 0.25 0.30 Figure 3: Effect of different negators in shifting sentiment values. neural-network-based models as prior knowledge. Note that the two neural network based models incorporate the syntax and semantics by representing each node with a vector. One may consider that a straightforward way of considering the semantics of the modified phrases is simply memorizing them. For example, if a phrase very good modified by a negator not appears in the training and test data, the system can simply memorize the sentiment score of not very good in training and use this score at testing. When incorporating this memorizing strategy into model (6), we observed a MAE score of 0.1222. It’s not surprising that memorizing the phrases has some benefit, but such matching relies on the exact reoccurrences of phrases. Note that this is a special case of what the neural network based models can model. Discriminating negators The results in Table 1 has demonstrated the benefit of discriminating negators. To understand this further, we plot in Figure 3 the behavior of different negators: the x-axis is a subset of our negators and the y-axis denotes absolute shifting in sentiment values. For example, we can see that the negator “is never” on average shifts the sentiment of the arguments by 0.26, which is a significant change considering the range of sentiment value is [0, 1]. For each negator, a 95% confidence interval is shown by the boxes in the figure, which is calculated with the bootstrapping resampling method. 
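The bootstrap intervals in Figure 3 can be reproduced with a few lines of code; the percentile method and the resampling count below are our assumptions rather than details given in the paper.

```python
import numpy as np

def bootstrap_ci(shifts, n_resamples=10_000, alpha=0.05, seed=0):
    """95% percentile-bootstrap interval for a negator's mean absolute sentiment shift.

    `shifts` holds |s(w_n, w) - s(w)| for every occurrence of the negator in the treebank."""
    rng = np.random.default_rng(seed)
    shifts = np.asarray(shifts, dtype=float)
    means = [rng.choice(shifts, size=len(shifts), replace=True).mean()
             for _ in range(n_resamples)]
    lower, upper = np.percentile(means, [100 * alpha / 2, 100 * (1 - alpha / 2)])
    return shifts.mean(), (lower, upper)
```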
We can observe statistically significant differences of shifting abilities between many negator pairs such as that between “is never” and “do not” as well as between “does not” and “can not”. Figure 3 also includes three diminishers (the 310 is_not(nn) is_not(np) does_not(nn) does_not(np) not(nn) not(np) do_not(nn) do_not(np) no(nn) no(np) 0.15 0.20 0.25 0.30 Figure 4: The behavior of individual negators in negated negative (nn) and negated positive (np) context. white bars), i.e., barely, unlikely, and superficial. By following (Kennedy and Inkpen, 2006), we extracted 319 diminishers (also called understatement or downtoners) from General Inquirer3. We calculated their shifting power in the same manner as for the negators and found three diminishers having shifting capability in the shifting range of these negators. This shows that the boundary between negators and diminishers can by fuzzy. In general, we argue that one should always consider modeling negators individually in a sentiment analysis system. Alternatively, if the modeling has to be done in groups, one should consider clustering valence shifters by their shifting abilities in training or external data. Figure 4 shows the shifting capacity of negators when they modify positive (blue boxes) or negative phrases (red boxes). The figure includes five most frequently used negators found in the sentiment treebank. Four of them have significantly different shifting power when composed with positive or negative phrases, which can explain why the polarity-based shifting model achieves improvement over the basic shifting model. Modeling syntax and semantics We have seen above that modeling syntax and semantics through the-state-of-the-art neural networks help improve the fitting performance. Below, we take a closer look at the fitting errors made at different depths of the sentiment treebank. The depth here is defined as the longest distance between the root of a negator-phrase pair ⟨wn, ⃗w⟩and their descendant 3http://www.wjh.harvard.edu/ inquirer/ Figure 5: Errors made at different depths in the sentiment tree bank. leafs. Negators appearing at deeper levels of the tree tend to have more complicated syntax and semantics. In Figure 5, the x-axis corresponds to different depths and y-axis is the mean absolute errors (MAE). The figure shows that both RNTN and PSTN perform much better at all depths than the model 6 in Table 1. When the depths are within 4, the RNTN performs very well and the (human annotated) prior sentiment of arguments used in PSTN does not bring additional improvement over RNTN. PSTN outperforms RNTN at greater depths, where the syntax and semantics are more complicated and harder to model. The errors made by model 6 is bumpy, as the model considers no semantics and hence its errors are not dependent on the depths. On the other hand, the errors of RNTN and PSTN monotonically increase with depths, indicating the increase in the task difficulty. 7 Conclusions Negation plays a fundamental role in modifying sentiment. In the process of semantic composition, the impact of negators is complicated by the syntax and semantics of the text spans they modify. This paper provides a comprehensive and quantitative study of the behavior of negators through a unified view of fitting human annotation. 
We first measure the modeling capabilities of two influential heuristics on a sentiment treebank and find that they capture some effect of negation; however, extending these non-lexicalized models to be dependent on the negators improves the per311 formance statistically significantly. The detailed analysis reveals the differences in the behavior among negators, and we argue that they should always be modeled separately. We further make the models to be dependent on the text being modified by negators, through adaptation of a state-ofthe-art recursive neural network to incorporate the syntax and semantics of the arguments; we discover this further reduces fitting errors. References Farah Benamara, Baptiste Chardon, Yannick Mathieu, Vladimir Popescu, and Nicholas Asher. 2012. How do negation and modality impact on opinions? In Proceedings of the ACL-2012 Workshop on ExtraPropositional Aspects of Meaning in Computational Linguistics, pages 10–18, Jeju, Republic of Korea. Yejin Choi and Claire Cardie. 2008. Learning with compositional semantics as structural inference for subsentential sentiment analysis. In Proceedings of the Conference on Empirical Methods in Natural Language Processing, EMNLP ’08, pages 793–801, Honolulu, Hawaii. Christoph Goller and Andreas Kchler. 1996. Learning task-dependent distributed representations by backpropagation through structure. In In Proc. of the ICNN-96, pages 347–352, Bochum, Germany. IEEE. Sanda Harabagiu, Andrew Hickl, and Finley Lacatusu. 2006. Negation, contrast and contradiction in text processing. In AAAI, volume 6, pages 755–762. Vasileios Hatzivassiloglou and Kathleen R. McKeown. 1997. Predicting the semantic orientation of adjectives. In Proceedings of the 8th Conference of European Chapter of the Association for Computational Linguistics, EACL ’97, pages 174–181, Madrid, Spain. Lifeng Jia, Clement Yu, and Weiyi Meng. 2009. The effect of negation on sentiment analysis and retrieval effectiveness. In Proceedings of the 18th ACM Conference on Information and Knowledge Management, CIKM ’09, pages 1827–1830, Hong Kong, China. ACM. Alistair Kennedy and Diana Inkpen. 2006. Sentiment classification of movie reviews using contextual valence shifters. Computational Intelligence, 22(2):110–125. Svetlana Kiritchenko, Xiaodan Zhu, and Saif Mohammad. 2014. Sentiment analysis of short informal texts. (to appear) Journal of Artificial Intelligence Research. Dan Klein and Christopher D. Manning. 2003. Accurate unlexicalized parsing. In Proceedings of the 41st Annual Meeting on Association for Computational Linguistics - Volume 1, ACL ’03, pages 423– 430, Sapporo, Japan. Association for Computational Linguistics. Emanuele Lapponi, Jonathon Read, and Lilja Ovrelid. 2012. Representing and resolving negation for sentiment analysis. In Jilles Vreeken, Charles Ling, Mohammed Javeed Zaki, Arno Siebes, Jeffrey Xu Yu, Bart Goethals, Geoffrey I. Webb, and Xindong Wu, editors, ICDM Workshops, pages 687– 692. IEEE Computer Society. Jingjing Liu and Stephanie Seneff. 2009. Review sentiment scoring via a parse-and-paraphraseparadigm. In EMNLP, pages 161–169, Singapore. Bing Liu and Lei Zhang. 2012. A survey of opinion mining and sentiment analysis. In Charu C. Aggarwal and ChengXiang Zhai, editors, Mining Text Data, pages 415–463. Springer US. Franc¸ois Mairesse, Joseph Polifroni, and Giuseppe Di Fabbrizio. 2012. Can prosody inform sentiment analysis? experiments on short spoken reviews. In ICASSP, pages 5093–5096, Kyoto, Japan. Tom M Mitchell. 1997. Machine learning. 1997. 
Burr Ridge, IL: McGraw Hill, 45. Saif M. Mohammad, Bonnie J. Dorr, Graeme Hirst, and Peter D. Turney. 2013. Computing lexical contrast. Computational Linguistics, 39(3):555–590. Karo Moilanen and Stephen Pulman. 2007. Sentiment composition. In Proceedings of RANLP 2007, Borovets, Bulgaria. Roser Morante and Caroline Sporleder. 2012. Modality and negation: An introduction to the special issue. Computational linguistics, 38(2):223–260. Bo Pang and Lillian Lee. 2005. Seeing stars: Exploiting class relationships for sentiment categorization with respect to rating scales. In Proceedings of the Annual Meeting of the Association for Computational Linguistics, ACL ’05, pages 115–124. Bo Pang and Lillian Lee. 2008. Opinion mining and sentiment analysis. Foundations and Trends in Information Retrieval, 2(1–2):1–135. Bo Pang, Lillian Lee, and Shivakumar Vaithyanathan. 2002. Thumbs up? sentiment classification using machine learning techniques. In Proceedings of EMNLP, pages 79–86, Philadelphia, USA. Livia Polanyi and Annie Zaenen. 2004. Contextual valence shifters. In Exploring Attitude and Affect in Text: Theories and Applications (AAAI Spring Symposium Series). Richard Socher, Brody Huval, Christopher D. Manning, and Andrew Y. Ng. 2012. Semantic compositionality through recursive matrix-vector spaces. In 312 Proceedings of the Conference on Empirical Methods in Natural Language Processing, EMNLP ’12, Jeju, Korea. Association for Computational Linguistics. Richard Socher, Alex Perelygin, Jean Y. Wu, Jason Chuang, Christopher D. Manning, Andrew Y. Ng, and Christopher Potts. 2013. Recursive deep models for semantic compositionality over a sentiment treebank. In Proceedings of the Conference on Empirical Methods in Natural Language Processing, EMNLP ’13, Seattle, USA. Association for Computational Linguistics. Maite Taboada, Julian Brooke, Milan Tofiloski, Kimberly Voll, and Manfred Stede. 2011. Lexiconbased methods for sentiment analysis. Computational Linguistics, 37(2):267–307. Peter Turney. 2002. Thumbs up or thumbs down? semantic orientation applied to unsupervised classification of reviews. In ACL, pages 417–424, Philadelphia, USA. Michael Wiegand, Alexandra Balahur, Benjamin Roth, Dietrich Klakow, and Andr´es Montoyo. 2010. A survey on the role of negation in sentiment analysis. In Proceedings of the Workshop on Negation and Speculation in Natural Language Processing, NeSpNLP ’10, pages 60–68, Stroudsburg, PA, USA. Association for Computational Linguistics. Theresa Wilson, Janyce Wiebe, and Paul Hoffmann. 2005. Recognizing contextual polarity in phraselevel sentiment analysis. In Proceedings of the Conference on Human Language Technology and Empirical Methods in Natural Language Processing, HLT ’05, pages 347–354, Stroudsburg, PA, USA. Association for Computational Linguistics. 313
Proceedings of the 52nd Annual Meeting of the Association for Computational Linguistics, pages 25–35, Baltimore, Maryland, USA, June 23-25 2014. c⃝2014 Association for Computational Linguistics Text-level Discourse Dependency Parsing Sujian Li1 Liang Wang1 Ziqiang Cao1 Wenjie Li2 1 Key Laboratory of Computational Linguistics, Peking University, MOE, China 2 Department of Computing, The Hong Kong Polytechnic University, HongKong {lisujian,intfloat,ziqiangyeah}@pku.edu.cn [email protected] Abstract Previous researches on Text-level discourse parsing mainly made use of constituency structure to parse the whole document into one discourse tree. In this paper, we present the limitations of constituency based discourse parsing and first propose to use dependency structure to directly represent the relations between elementary discourse units (EDUs). The state-of-the-art dependency parsing techniques, the Eisner algorithm and maximum spanning tree (MST) algorithm, are adopted to parse an optimal discourse dependency tree based on the arcfactored model and the large-margin learning techniques. Experiments show that our discourse dependency parsers achieve a competitive performance on text-level discourse parsing. 1 Introduction It is widely agreed that no units of the text can be understood in isolation, but in relation to their context. Researches in discourse parsing aim to acquire such relations in text, which is fundamental to many natural language processing applications such as question answering, automatic summarization and so on. One important issue behind discourse parsing is the representation of discourse structure. Rhetorical Structure Theory (RST) (Mann and Thompson, 1988), one of the most influential discourse theories, posits a hierarchical generative tree representation, as illustrated in Figure 1. The leaves of a tree correspond to contiguous text spans called Elementary Discourse Units (EDUs)1. The adjacent EDUs are combined into 1 EDU segmentation is a relatively trivial step in discourse parsing. Since our work focus here is not EDU segmentation but discourse parsing. We assume EDUs are already known. the larger text spans by rhetorical relations (e.g., Contrast and Elaboration) and the larger text spans continue to be combined until the whole text constitutes a parse tree. The text spans linked by rhetorical relations are annotated as either nucleus or satellite depending on how salient they are for interpretation. It is attractive and challenging to parse the whole text into one tree. Since such a hierarchical discourse tree is analogous to a constituency based syntactic tree except that the constituents in the discourse trees are text spans, previous researches have explored different constituency based syntactic parsing techniques (eg. CKY and chart parsing) and various features (eg. length, position et al.) for discourse parsing (Soricut and Marcu, 2003; Joty et al., 2012; Reitter, 2003; LeThanh et al., 2004; Baldridge and Lascarides, 2005; Subba and Di Eugenio, 2009; Sagae, 2009; Hernault et al., 2010b; Feng and Hirst, 2012). However, the existing approaches suffer from at least one of the following three problems. First, it is difficult to design a set of production rules as in syntactic parsing, since there are no determinate generative rules for the interior text spans. Second, the different levels of discourse units (e.g. 
EDUs or larger text spans) occurring in the generative process are better represented with different features, and thus a uniform framework for discourse analysis is hard to develop. Third, to reduce the time complexity of the state-of-the-art constituency based parsing techniques, the approximate parsing approaches are prone to trap in local maximum. In this paper, we propose to adopt the dependency structure in discourse representation to overcome the limitations mentioned above. Here is the basic idea: the discourse structure consists of EDUs which are linked by the binary, asymmetrical relations called dependency relations. A dependency relation holds between a subordinate EDU called the dependent, and another EDU on 25 which it depends called the head, as illustrated in Figure 2. Each EDU has one head. So, the dependency structure can be seen as a set of headdependent links, which are labeled by functional relations. Now, we can analyze the relations between EDUs directly, without worrying about any interior text spans. Since dependency trees contain much fewer nodes and on average they are simpler than constituency based trees, the current dependency parsers can have a relatively low computational complexity. Moreover, concerning linearization, it is well known that dependency structures can deal with non-projective relations, while constituency-based models need the addition of complex mechanisms like transformations, movements and so on. In our work, we adopt the graph based dependency parsing techniques learned from large sets of annotated dependency trees. The Eisner (1996) algorithm and maximum spanning tree (MST) algorithm are used respectively to parse the optimal projective and non-projective dependency trees with the large-margin learning technique (Crammer and Singer, 2003). To the best of our knowledge, we are the first to apply the dependency structure and introduce the dependency parsing techniques into discourse analysis. The rest of this paper is organized as follows. Section 2 formally defines discourse dependency structure and introduces how to build a discourse dependency treebank from the existing RST corpus. Section 3 presents the discourse parsing approach based on the Eisner and MST algorithms. Section 4 elaborates on the large-margin learning technique as well as the features we use. Section 5 discusses the experimental results. Section 6 introduces the related work and Section 7 concludes the paper. e1 e2 e1-e2* e3 e1-e2*-e3 e1 e2 e1-e2* e3 e1-e2-e3* e1 e2 e1*-e2 e3 e1-e2-e3* e1 e2 e1*-e2 e3 e1*-e2-e3 e1 e2 e2*-e3 e3 e1-e2*-e3 e1 e2 e3 e1 e2 e3 e1*-e2-e3 e2*-e3 e2-e3* e1*-e2-e3 1 2 3 4 5 6 7 8 e1 e2 e3 e1-e2-e3* e2-e3* Figure 1: Headed Constituency based Discourse Tree Structure (e1,e2 and e3 denote three EDUs, and * denotes the NUCLEUS constituent) e1 e2 e3 e1 e2 e3 e1 e2 e3 e1 e2 e3 e1 e2 e3 e1 e2 e3 e1 e2 e3 e1 e2 e3 1' 2' 3' 4' 5' 6' 7' 8' 9' e1 e2 e3 e0 e0 e0 e0 e0 e0 e0 e0 e0 Figure 2: Discourse Dependency Tree Structures (e1,e2 and e3 denote three EDUS, and the directed arcs denote one dependency relations. The artificial e0 is also displayed here. ) 2 Discourse Dependency Structure and Tree Bank 2.1 Discourse Dependency Structure Similar to the syntactic dependency structure defined by McDonald (2005a, 2005b), we insert an artificial EDU e0 in the beginning for each document and label the dependency relation linking from e0 as ROOT. This treatment will simplify both formal definitions and computational implementations. 
Normally, we assume that each EDU should have one and only one head except for e0. A labeled directed arc is used to represent the dependency relation from one head to its dependent. Then, discourse dependency structure can be formalized as the labeled directed graph, where nodes correspond to EDUs and labeled arcs correspond to labeled dependency relations. 26 We assume that the text2 T is composed of n+1 EDUs including the artificial e0. That is T=e0 e1 e2 … en. Let R={r1,r2, … ,rm} denote a finite set of functional relations that hold between two EDUs. Then a discourse dependency graph can be denoted by G=<V, A> where V denotes a set of nodes and A denotes a set of labeled directed arcs, such that for the text T=e0 e1 e2 … en and the label set R the following holds: (1) V = { e0, e1, e2, … en } (2) A  V R  V, where <ei, r, ej>A represents an arc from the head ei to the dependent ej labeled with the relation r. (3) If <ei, r, ej>A then <ek, r’, ej>A for all ki (4) If <ei, r, ej>A then <ei, r’, ej>A for all r’r The third condition assures that each EDU has one and only one head and the fourth tells that only one kind of dependency relation holds between two EDUs. According to the definition, we illustrate all the 9 possible unlabeled dependency trees for a text containing three EDUs in Figure 2. The dependency trees 1’ to 7’ are projective while 8’ and 9’ are non-projective with crossing arcs. 2.2 Our Discourse Dependency Treebank To automatically conduct discourse dependency parsing, constructing a discourse dependency treebank is fundamental. It is costly to manually construct such a treebank from scratch. Fortunately, RST Discourse Treebank (RST-DT) (Carlson et al., 2001) is an available resource to help with. A RST tree constitutes a hierarchical structure for one document through rhetorical relations. A total of 110 fine-grained relations (e.g. Elaboration-part-whole and List) were used for tagging RST-DT. They can be categorized into 18 classes (e.g. Elaboration and Joint). All these relations can be hypotactic (“mononuclear”) or paratactic (“multi-nuclear”). A hypotactic relation holds between a nucleus span and an adjacent satellite span, while a paratactic relation connects two or more equally important adjacent nucleus spans. For convenience of computation, we convert the n-ary (n>2) RST trees3 to binary trees through adding a new node for the latter n-1 nodes and assume each relation is connected to only one nucleus4. This departure from the original theory 2 The two terms “text” and “document” are used interchangeably and represent the same meaning. 3 According to our statistics, there are totally 381 n-ary relations in RST-DT. 4 We set the first nucleus as the only nucleus. is not such a major step as it may appear, since any nucleus is known to contribute to the essential meaning. Now, each RST tree can be seen as a headed constituency based binary tree where the nuclei are heads and the children of each node are linearly ordered. Given three EDUs5, Figure 1 shows the possible 8 headed constituency based trees where the superscript * denotes the heads (nuclei). We use dependency trees to simulate the headed constituency based trees. Contrasting Figure 1 with Figure 2, we use dependency tree 1’ to simulate binary trees 1 and 8, and dependency tress 2’- 7’ to simulate binary trees 2-7 correspondingly. The rhetorical relations in RST trees are kept as the functional relations which link the two EDUs in dependency trees. 
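The structures produced by this conversion are easy to represent directly. The sketch below (field names are ours) encodes a graph G = <V, A> as a set of (head, relation, dependent) arcs and enforces the single-head and single-relation conditions (3) and (4) from Section 2.1.

```python
from dataclasses import dataclass, field

@dataclass
class DiscourseDependencyGraph:
    """A labeled dependency graph over EDUs e_0 ... e_n, with e_0 the artificial root."""
    n: int                                    # number of real EDUs
    arcs: set = field(default_factory=set)    # elements are (head, relation, dependent) triples

    def add_arc(self, head, relation, dependent):
        if dependent == 0:
            raise ValueError("the artificial EDU e0 cannot be a dependent")
        if any(dep == dependent for _, _, dep in self.arcs):
            # conditions (3) and (4): at most one incoming arc per EDU
            raise ValueError(f"EDU e{dependent} already has a head")
        self.arcs.add((head, relation, dependent))

    def is_fully_specified(self):
        """Every EDU except e0 has exactly one head (acyclicity is not checked here)."""
        return {dep for _, _, dep in self.arcs} == set(range(1, self.n + 1))

# A three-EDU example; the relation labels are chosen for illustration only.
g = DiscourseDependencyGraph(n=3)
g.add_arc(0, "ROOT", 2)
g.add_arc(2, "Background", 1)
g.add_arc(2, "Elaboration", 3)
assert g.is_fully_specified()
```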
With this kind of conversion, we can get our discourse dependency treebank. It is worth noting that the non-projective trees like 8’ and 9’ do not exist in our dependency treebank, though they are eligible according to the definition of discourse dependency graph. 3 Discourse Dependency Parsing 3.1 System Overview As stated above, T=e0 e1 …en represents an input text (document) where ei denotes the ith EDU of T. We use V to denote all the EDU nodes and VRV-0 (V-0 =V-{e0}) denote all the possible discourse dependency arcs. The goal of discourse dependency parsing is to parse an optimal spanning tree from VRV-0. Here we follow the arc factored method and define the score of a dependency tree as the sum of the scores of all the arcs in the tree. Thus, the optimal dependency tree for T is a spanning tree with the highest score and obtained through the function DT(T,w): 0 0 0 , , , , ( , ) ( , ) ( , , ) ( , , ) f T T i j T T i j T G V R V G V R V i j e r e G G V R V i j e r e G T DT T argmax argmax e r e argmax e r s ore T G e c                  w w where GT means a possible spanning tree with ( , ) T score T G and ( ) denotes the score of the arc <ei, r, ej> which is calculated according to its feature representation f(ei,r,ej) and a weight vector w. Next, two basic problems need to be solved: how to find the dependency tree with the highest 5 We can easily get all possible headed binary trees for one more complex text containing more than three EDUs, by extending the 8 possible situations for three EDUs. 27 score for T given all the arc scores (i.e. a parsing problem), and how to learn and compute the scores of arcs according to a set of arc features (i.e. a learning problem). The following of this section addresses the first problem. Given the text T, we first reduce the multi-digraph composed of all possible arcs to the digraph. The digraph keeps only one arc <ei, r, ej> between two nodes which satisfies ( ) . Thus, we can proceed with a reduction from labeled parsing to unlabeled parsing. Next, two algorithms, i.e. the Eisner algorithm and MST algorithm, are presented to parse the projective and non-projective unlabeled dependency trees respectively. 3.2 Eisner Algorithm It is well known that projective dependency parsing can be handled with the Eisner algorithm (1996) which is based on the bottom-up dynamic programming techniques with the time complexity of O(n3). The basic idea of the Eisner algorithm is to parse the left and right dependents of an EDU independently and combine them at a later stage. This reduces the overhead of indexing heads. Only two binary variables, i.e. c and d, are required to specify whether the heads occur leftmost or rightmost and whether an item is complete. Eisner(T, ) Input: Text T=e0 e1… en; Arc scores (ei,ej) 1 Instantiate E[i, i, d, c]=0.0 for all i, d, c 2 For m := 1 to n 3 For i := 1 to n 4 j = i + m 5 if j> n then break; 6 # Create subgraphs with c=0 by adding arcs 7 E[i, j, 0, 0]=maxiqj (E[i,q,1,1]+E[q+1,j,0,1]+(ej,ei)) 8 E[i, j, 1, 0]=maxiqj (E[i,q,1,1]+E[q+1,j,0,1]+(ei,ej)) 9 # Add corresponding left/right subgraphs 10 E[i, j, 0, 1]=maxiqj (E[i,q,0,1]+E[q,j,0,0] 11 E[i, j, 1, 1]=maxiqj (E[i,q,1,0]+E[q,j,1,1]) Figure 3: Eisner Algorithm Figure 3 shows the pseudo-code of the Eisner algorithm. A dynamic programming table E[i,j,d,c] is used to represent the highest scored subtree spanning ei to ej. d indicates whether ei is the head (d=1) or ej is head (d=0). 
c indicates whether the subtree will not take any more dependents (c=1) or it needs to be completed (c=0). The algorithm begins by initializing all lengthone subtrees to a score of 0.0. In the inner loop, the first two steps (Lines 7 and 8) are to construct the new dependency arcs by taking the maximum over all the internal indices (iqj) in the span, and calculating the value of merging the two subtrees and adding one new arc. The last two steps (Lines 10 and 11) attempt to achieve an optimal left/right subtree in the span by adding the corresponding left/right subtree to the arcs that have been added previously. This algorithm considers all the possible subtrees. We can then get the optimal dependency tree with the score E[0,n,1,1] . 3.3 Maximum Spanning Tree Algorithm As the bottom-up Eisner Algorithm must maintain the nested structural constraint, it cannot parse the non-projective dependency trees like 8’ and 9’ in Figure 2. However, the non-projective dependency does exist in real discourse. For example, the earlier text mainly talks about the topic A with mentioning the topic B, while the latter text gives a supplementary explanation for the topic B. This example can constitute a nonprojective tree and its pictorial diagram is exhibited in Figure 4. Following the work of McDonald (2005b), we formalize discourse dependency parsing as searching for a maximum spanning tree (MST) in a directed graph. ... ... A A A B B ... Figure 4: Pictorial Diagram of Non-projective Trees Chu and Liu (1965) and Edmonds (1967) independently proposed the virtually identical algorithm named the Chu-Liu/Edmonds algorithm, for finding MSTs on directed graphs (McDonald et al. 2005b). Figure 5 shows the details of the Chu-Liu/Edmonds algorithm for discourse parsing. Each node in the graph greedily selects the incoming arc with the highest score. If one tree results, the algorithm ends. Otherwise, there must exist a cycle. The algorithm contracts the identified cycle into a single node and recalculates the scores of the arcs which go in and out of the cycle. Next, the algorithm recursively call itself on the contracted graph. Finally, those arcs which go in or out of one cycle will recover themselves to connect with the original nodes in V. Like McDonald et al. (2005b), we adopt an efficient implementation of the ChuLiu/Edmonds algorithm that is proposed by Tarjan (1997) with O(n2) time complexity. 28 Chu-Liu-Edmonds(G, ) Input: Text T=e0 e1… en; Arc scores (ei,ej) 1 A’ = {<ei, ej>| ei = argmax (ei,ej); 1j|V|} 2 G’ = (V, A’) 3 If G’ has no cycles, then return G’ 4 Find an arc set AC that is a cycle in G’ 5 <GC, ep> = contract(G, AC, ) 6 G = (V, A)=Chu-Liu-Edmonds(GC, ) 7 For the arc <ei,eC> where ep(ei,eC)=ej: 8 A=AAC{<ei,ej)}-{<ei,eC>, <a(ej),ej>} 9 For the arc <eC, ei> where ep(eC ,ei)=ej: 10 A=A{<ej,ei>}-{<eC,ei>} 11 V = V 12 Return G Contract(G=(V,A), AC, ) 1 Let GC be the subgraph of G excluding nodes in C 2 Add a node eC to GC denoting the cycle C 3 For ej V-C : eiC <ei,ej>A 4 Add arc <eC,ej> to GC with ep(eC,ej)= (ei,ej) 5 (eC,ej) = (ep(eC,ej),ej) 6 For ei V-C: ejC (ei,ej)A 7 Add arc <ei,eC> to GC with ep(ei,eC)= = [(ei,ej)-(a(ei),ej)] 8 (ei,eC) =(ei,ej)-(a(ei),ej)+score(C) 9 Return <GC, ep> Figure 5: Chu-Liu/Edmonds MST Algorithm 4 Learning In Section 3, we assume that the arc scores are available. In fact, the score of each arc is calculated as a linear combination of feature weights. Thus, we need to determine the features for arc representation first. 
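Before turning to the features, the recurrences of Figure 3 translate almost line for line into code. The score-only sketch below assumes a dense table arc_score[h][d] of unlabeled arc scores (the reduction described in Section 3.1) and omits the back-pointers needed to recover the actual tree.

```python
def eisner_best_score(arc_score, n):
    """Score of the best projective unlabeled dependency tree over EDUs e_0 ... e_n.

    arc_score[h][d] is the score of an arc from head EDU h to dependent EDU d."""
    NEG = float("-inf")
    # E[i][j][d][c]: best subtree spanning e_i ... e_j;
    # d = 1 if e_i heads the span, d = 0 if e_j does;
    # c = 1 if the span is complete, c = 0 if it still needs to be completed.
    E = [[[[NEG, NEG], [NEG, NEG]] for _ in range(n + 1)] for _ in range(n + 1)]
    for i in range(n + 1):
        E[i][i] = [[0.0, 0.0], [0.0, 0.0]]
    for m in range(1, n + 1):                 # span length
        for i in range(0, n - m + 1):
            j = i + m
            # create a new arc between i and j (incomplete spans, c = 0)
            best_split = max(E[i][q][1][1] + E[q + 1][j][0][1] for q in range(i, j))
            E[i][j][0][0] = best_split + arc_score[j][i]   # j is the head of i
            E[i][j][1][0] = best_split + arc_score[i][j]   # i is the head of j
            # absorb an adjacent subtree (complete spans, c = 1)
            E[i][j][0][1] = max(E[i][q][0][1] + E[q][j][0][0] for q in range(i, j))
            E[i][j][1][1] = max(E[i][q][1][0] + E[q][j][1][1] for q in range(i + 1, j + 1))
    return E[0][n][1][1]
```

Each span is filled by scanning O(n) split points, which gives the O(n^3) time noted in Section 3.2.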
With referring to McDonald et al. (2005a; 2005b), we use the Margin Infused Relaxed Algorithm (MIRA) to learn the feature weights based on a training set of documents annotated with dependency structures     1 , N i i T  iy where yi denotes the correct dependency tree for the text Ti. 4.1 Features Following (Feng and Hirst, 2012; Lin et al., 2009; Hernault et al., 2010b), we explore the following 6 feature types combined with relations to represent each labeled arc <ei, r, ej> . (1) WORD: The first one word, the last one word, and the first bigrams in each EDU, the pair of the two first words and the pair of the two last words in the two EDUs are extracted as features. (2) POS: The first one and two POS tags in each EDU, and the pair of the two first POS tags in the two EDUs are extracted as features. (3) Position: These features concern whether the two EDUs are included in the same sentence, and the positions where the two EDUs are located in one sentence, one paragraph, or one document. (4) Length: The length of each EDU. (5) Syntactic: POS tags of the dominating nodes as defined in Soricut and Marcu (2003) are extracted as features. We use the syntactic trees from the Penn Treebank to find the dominating nodes,. (6) Semantic similarity: We compute the semantic relatedness between the two EDUs based on WordNet. The word pairs are extracted from (ei, ej) and their similarity is calculated. Then, we can get a weighted complete bipartite graph where words are deemed as nodes and similarity as weights. From this bipartite graph, we get the maximum weighted matching and use the averaged weight of the matches as the similarity between ei and ej. In particular, we use path_similarity, wup_similarity, res_similarity, jcn_similarity and lin_similarity provided by the nltk.wordnet.similarity (Bird et. al., 2009) package for calculating word similarity. As for relations, we experiment two sets of relation labels from RST-DT. One is composed of 19 coarse-grained relations and the other 111 fine-grained relations6. 4.2 MIRA based Learning Margin Infused Relaxed Algorithm (MIRA) is an online algorithm for multiclass classification and is extended by Taskar et al. (2003) to cope with structured classification. MIRA Input: a training set     1 , N i i T  iy 1 w0 = 0; v = 0; j = 0 2 For iter := 1 to K 3 For i := 1 to N 4 update w according to   , iT iy : 1 min j j  w w s.t. ( , ) ( , ') ( , ') where ' ( , ) i i i i i i j i i s T s T L DT T    y y y y y w 5 v = v + wj ; 6 j = j+1 7 w = v/(K*N) Figure 6: MIRA based Learning Figure 6 gives the pseudo-code of the MIRA algorithm (McDonld et al., 2005b). This algorithm is designed to update the parameters w using a single training instance   , iT iy in each iteration. On each update, MIRA attempts to keep the norm of the change to the weight vector 6 19 relations include the original 18 relation in RST-DT plus one artificial ROOT relation. The 111 relations also include the ROOT relation. 29 as small as possible, which is subject to constructing the correct dependency tree under consideration with a margin at least as large as the loss of the incorrect dependency trees. We define the loss of a discourse dependency tree ' iy (denoted by ( , ') i i L y y ) as the number of the EDUs that have incorrect heads. 
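When the constraint set is relaxed to the single best tree under the current weights, as the text describes next, the update in line 4 of Figure 6 reduces to a one-constraint quadratic program with a standard closed-form, passive-aggressive style solution. The sketch below is our reading of that step, not the authors' code; trees are sets of (head, relation, dependent) arcs and `feats` maps each arc to its feature vector f(e_i, r, e_j).

```python
import numpy as np

def tree_features(tree, feats):
    """Sum of the arc feature vectors over all arcs of a tree."""
    return sum(feats[arc] for arc in tree)

def tree_loss(gold_tree, pred_tree):
    """L(y, y'): the number of EDUs whose predicted head is incorrect."""
    gold = {dep: head for head, _, dep in gold_tree}
    pred = {dep: head for head, _, dep in pred_tree}
    return sum(gold[d] != pred.get(d) for d in gold)

def mira_update(w, gold_tree, pred_tree, feats):
    """Smallest change to w that scores the gold tree above the predicted one
    by a margin of at least the loss (single-constraint MIRA step)."""
    delta = tree_features(gold_tree, feats) - tree_features(pred_tree, feats)
    margin = tree_loss(gold_tree, pred_tree) - w @ delta
    if margin <= 0:
        return w                              # constraint already satisfied
    tau = margin / max(delta @ delta, 1e-12)  # guard against a zero feature difference
    return w + tau * delta
```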
Since there are exponentially many possible incorrect dependency trees and thus exponentially many margin constraints, here we relax the optimization and stay with a single best dependency tree ' ( , ) j i i DT T  y w which is parsed under the weight vector wj. In this algorithm, the successive updated values of w are accumulated and averaged to avoid overfitting. 5 Experiments 5.1 Preparation We test our methods experimentally using the discourse dependency treebank which is built as in Section 2. The training part of the corpus is composed of 342 documents and contains 18,765 EDUs, while the test part consists of 38 documents and 2,346 EDUs. The number of EDUs in each document ranges between 2 and 304. Two sets of relations are adopted. One is composed of 19 relations and Table 1 shows the number of each relation in the training and test corpus. The other is composed of 111 relations. Due to space limitation, Table 2 only lists the 10 highestdistributed relations with regard to their frequency in the training corpus. The following experiments are conducted: (1) to measure the parsing performance with different relation sets and different feature types; (2) to compare our parsing methods with the state-ofthe-art discourse parsing methods. Relations Train Test Relations Train Test Elaboration 6879 796 Temporal 426 73 Attribution 2641 343 ROOT 342 38 Joint 1711 212 Compari. 273 29 Same-unit 1230 127 Condition 258 48 Contrast 944 146 Manner. 191 27 Explanation 849 110 Summary 188 32 Background 786 111 Topic-Cha. 187 13 Cause 785 82 Textual 147 9 Evaluation 502 80 TopicCom. 126 24 Enablement 500 46 Total 18765 2346 Table 1: Coarse-grained Relation Distribution Relations Train Test Elaboration-additional 2912 312 Attribution 2474 329 Elaboration-object-attribute-e 2274 250 List 1690 206 Same-unit 1230 127 Elaboration-additional-e 747 69 Circumstance 545 80 Explanation-argumentative 524 70 Purpose 430 43 Contrast 358 64 Table 2: 10 Highest Distributed Fine-grained Relations 5.2 Feature Influence on Two Relation Sets So far, researches on discourse parsing avoid adopting too fine-grained relations and the relation sets containing around 20 labels are widely used. In our experiments, we observe that adopting a fine-grained relation set can even be helpful to building the discourse trees. Here, we conduct experiments on two relation sets that contain 19 and 111 labels respectively. At the same time, different feature types are tested their effects on discourse parsing. Method Features Unlabeled Acc. Labeled Acc. Eisner 1+2 0.3602 0.2651 1+2+3 0.7310 0.4855 1+2+3+4 0.7370 0.4868 1+2+3+4+5 0.7447 0.4957 1+2+3+4+5+6 0.7455 0.4983 MST 1+2 0.1957 0.1479 1+2+3 0.7246 0.4783 1+2+3+4 0.7280 0.4795 1+2+3+4+5 0.7340 0.4915 1+2+3+4+5+6 0.7331 0.4851 Table 3: Performance Using Coarse-grained Relations. Method Feature types Unlabeled Acc. Labeled Acc. Eisner 1+2 0.3743 0.2421 1+2+3 0.7451 0.4079 1+2+3+4 0.7472 0.4041 1+2+3+4+5 0.7506 0.4254 1+2+3+4+5+6 0.7485 0.4288 MST 1+2 0.2080 0.1300 1+2+3 0.7366 0.4054 1+2+3+4 0.7468 0.4071 1+2+3+4+5 0.7494 0.4288 1+2+3+4+5+6 0.7460 0.4309 Table 4: Performance Using Fine-grained Relations. Based on the MIRA leaning algorithm, the Eisner algorithm and MST algorithm are used to parse the test documents respectively. Referring to the evaluation of syntactic dependency parsing, 30 we use unlabeled accuracy to calculate the ratio of EDUs that correctly identify their heads, labeled accuracy the ratio of EDUs that have both correct heads and correct relations. 
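As a small sketch (not from the paper) of these two metrics, assuming each tree maps every EDU index to a (head, relation) pair:

def dependency_accuracies(gold, pred):
    # gold/pred: dicts mapping EDU index -> (head index, relation label).
    total = len(gold)
    unlabeled = sum(1 for e, (h, _) in gold.items() if pred[e][0] == h)
    labeled = sum(1 for e, hr in gold.items() if pred[e] == hr)
    return unlabeled / total, labeled / total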
Table 3 and Table 4 show the performance on two relation sets. The numbers (1-6) represent the corresponding feature types described in Section 4.1. From Table 3 and Table 4, we can see that the addition of more feature types, except the 6th feature type (semantic similarity), can promote the performance of relation labeling, whether using the coarse-grained 19 relations and the finegrained 111 relations. As expected, the first and second types of features (WORD and POS) are the ones which play an important role in building and labeling the discourse dependency trees. These two types of features attain similar performance on two relation sets. The Eisner algorithm can achieve unlabeled accuracy around 0.36 and labeled accuracy around 0.26, while MST algorithm achieves unlabeled accuracy around 0.20 and labeled accuracy around 0.14. The third feature type (Position) is also very helpful to discourse parsing. With the addition of this feature type, both unlabeled accuracy and labeled accuracy exhibit a marked increase. Especially, when applying MST algorithm on discourse parsing, unlabeled accuracy rises from around 0.20 to around 0.73. This result is consistent with Hernault’s work (2010b) whose experiments have exhibited the usefulness of those position-related features. The other two types of features which are related to length and syntactic parsing, only promote the performance slightly. As we employed the MIRA learning algorithm, it is possible to identify which specific features are useful, by looking at the weights learned to each feature using the training data. Table 5 selects 10 features with the highest weights in absolute value for the parser which uses the coarsegrained relations, while Table 6 selects the top 10 features for the parser using the fine-grained relations. Each row denotes one feature: the left part before the symbol “&” is from one of the 6 feature types and the right part denotes a specific relation. From Table 5 and Table 6, we can see that some features are reasonable. For example, The sixth feature in Table 5 represents that the dependency relation is preferred to be labeled Explanation with the fact that “because” is the first word of the dependent EDU. From these two tables, we also observe that most of the heavily weighted features are usually related to those highly distributed relations. When using the coarse-grained relations, the popular relations (eg. Elaboration, Attribution and Joint) are always preferred to be labeled. When using the fine-grained relations, the large relations including List and Elaboration-object-attribute-e are given the precedence of labeling. This phenomenon is mainly caused by the sparseness of the training corpus and the imbalance of relations. To solve this problem, the augment of training corpus is necessary. 
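The inspection behind Tables 5 and 6 below boils down to sorting the learned weight vector; a minimal sketch (not from the paper), assuming features are stored in a dict keyed by strings such as "first_word=because&Explanation":

def top_features(weights, k=10):
    # Return the k (feature, weight) pairs with the largest absolute weight.
    return sorted(weights.items(), key=lambda fw: abs(fw[1]), reverse=True)[:k]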
Feature description Weight 1 Last two words in dependent EDU are “appeals court” & Joint 0.475 2 First word in dependent EDU is “racked” & Elaboration 0.445 3 First two words in head EDU are “I ‘d” & Attribution 0.324 4 Last word in dependent EDU is “in” & Elaboration -0.323 5 The res_similarity between two EDUs is 0 & Elaboration 0.322 6 First word in dependent EDU is “because” & Explanation 0.306 7 First POS in head EDU is “DT” & Joint -0.299 8 First two words in dependent EDU are “that required” & Elaboration 0.287 9 First two words in dependent EDU are “that the” & Elaboration 0.277 10 First word in dependent EDU is “because” & Cause 0.265 Table 5: Top 10 Feature Weights for Coarsegrained Relation Labeling (Eisner Algorithm) Features Weight 1 Last two words in dependent EDU are “appeals court” & List 0.576 2 First two words in head EDU are “I ‘d” & Attribution 0.385 3 First two words in dependent EDU is “that the” & Elaboration-object-attribute-e 0.348 4 First POS in head EDU is “DT” & List -0.323 5 Last word in dependent EDU is “in” & List -0.286 6 First word in dependent EDU is “racked” & Elaboration-object-attribute-e 0.445 7 First two word pairs are <”In an”,”But even”> & List -0.252 8 Dependent EDU has a dominating node tagged “CD”& Elaboration-object-attribute-e -0.244 9 First two words in dependent EDU are “patents disputes” & Purpose 0.231 10 First word in dependent EDU is “to” & Purpose 0.230 Table 6: Top 10 Feature Weights for Coarsegrained Relation Labeling (Eisner Algorithm) Unlike previous discourse parsing approaches, our methods combine tree building and relation labeling into a uniform framework naturally. This means that relations play a role in building the dependency tree structure. From Table 3 and Table 4, we can see that fine-grained relations are more helpful to building unlabeled discourse 31 trees more than the coarse-grained relations. The best result of unlabeled accuracy using 111 relations is 0.7506, better than the best performance (0.7447) using 19 relations. We can also see that the labeled accuracy using the fine-grained relations can achieve 0.4309, only 0.06 lower than the best labeled accuracy (0.4915) using the coarse-grained relations. In addition, comparing the MST algorithm with the Eisner algorithm, Table 3 and Table 4 show that their performances are not significantly different from each other. But we think that MST algorithm has more potential in discourse dependency parsing, because our converted discourse dependency treebank contains only projective trees and somewhat suppresses the MST algorithm to exhibit its advantage of parsing nonprojective trees. In fact, we observe that some non-projective dependencies produced by the MST algorithm are even reasonable than what they are in the dependency treebank. Thus, it is important to build a manually labeled discourse dependency treebank, which will be our future work. 5.3 Comparison with Other Systems The state-of-the-art discourse parsing methods normally produce the constituency based discourse trees. To comprehensively evaluate the performance of a labeled constituency tree, the blank tree structure (‘S’), the tree structure with nuclearity indication (‘N’), and the tree structure with rhetorical relation indication but no nuclearity indication (‘R’) are evaluated respectively using the F measure (Marcu 2000). 
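A rough sketch of this span-based F-measure (not from the paper; the exact procedure follows Marcu (2000)), assuming each constituency tree has been reduced to a set of spans, optionally decorated with nuclearity or relation labels for the 'N' and 'R' settings:

def span_f1(gold_spans, pred_spans):
    # gold_spans/pred_spans: sets of tuples, e.g. (start, end) for 'S',
    # (start, end, nuclearity) for 'N', or (start, end, relation) for 'R'.
    correct = len(gold_spans & pred_spans)
    p = correct / len(pred_spans) if pred_spans else 0.0
    r = correct / len(gold_spans) if gold_spans else 0.0
    return 2 * p * r / (p + r) if p + r else 0.0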
To compare our discourse parsers with others, we adopt MIRA and Eisner algorithm to conduct discourse parsing with all the 6 types of features and then convert the produced projective dependency trees to constituency based trees through their correspondence as stated in Section 2. Our parsers using two relation sets are named Our-coarse and Our-fine respectively. The inputted EDUs of our parsers are from the standard segmentation of RST-DT. Other text-level discourse parsing methods include: (1) Percepcoarse: we replace MIRA with the averaged perceptron learning algorithm and the other settings are the same with Our-coarse; (2) HILDAmanual and HILDA-seg are from Hernault (2010b)’s work, and their inputted EDUs are from RST-DT and their own EDU segmenter respectively; (3) LeThanh indicates the results given by LeThanh el al. (2004), which built a multi-level rule based parser and used 14 relations evaluated on 21 documents from RST-DT; (4) Marcu denotes the results given by Marcu(2000)’s decision-tree based parser which used 15 relations evaluated on unspecified documents. Table 7 shows the performance comparison for all the parsers mentioned above. Human denotes the manual agreement between two human annotators. From this table, we can see that both our parsers perform better than all the other parsers as a whole, though our parsers are not developed directly for constituency based trees. Our parsers do not exhibit obvious advantage than HILDA-manual on labeling the blank tree structure, because our parsers and HILDAmanual all perform over 94% of Human and this performance level somewhat reaches a bottleneck to promote more. However, our parsers outperform the other parsers on both nuclearity and relation labeling. Our-coarse achieves 94.2% and 91.8% of the human F-scores, on labeling nuclearity and relation respectively, while Ourfine achieves 95.2% and 87.6%. We can also see that the averaged perceptron learning algorithm, though simple, can achieve a comparable performance, better than HILDA-manual. The parsers HILDA-seg, LeThanh and Marcu use their own automatic EDU segmenters and exhibit a relatively low performance. This means that EDU segmentation is important to a practical discourse parser and worth further investigation. S N R Our-coarse 82.9 73.0 60.6 Our-fine 83.4 73.8 57.8 Percep-coarse 82.3 72.6 59.4 HILDA-manual 83.0 68.4 55.3 HILDA-seg 72.3 59.1 47.8 LeThanh 53.7 47.1 39.9 Marcu 44.8 30.9 18.8 Human 88.1 77.5 66.0 Table 7: Full Parser Evaluation MAFS WAFS Acc Our-coarse 0.454 0.643 66.84 Percep-coarse 0.438 0.633 65.37 Feng 0.440 0.607 65.30 HILDA-manual 0.428 0.604 64.18 Baseline - - 35.82 Table 8: Relation Labeling Performance To further compare the performance of relation labeling, we follow Hernault el al. (2010a) and use Macro-averaged F-score (MAFS) to evaluate each relation. Due to space limitation, we do not list the F scores for each relation. Macro-averaged F-score is not influenced by the number of instances that are contained in each 32 relation. Weight-averaged F-score (WAFS) weights the performance of each relation by the number of its existing instances. Table 8 compares our parser Our-coarse with other parsers HILDA-manual, Feng (Feng and Hirst, 2012) and Baseline. Feng (Feng and Hirst, 2012) can be seen as a strengthened version of HILDA which adopts more features and conducts feature selection. Baseline always picks the most frequent relation (i.e. Elaboration). 
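The two summary scores used in Table 8 can be sketched as follows (not from the paper; per_relation_f maps each relation label to its F-score and counts gives the number of instances per relation):

def mafs_wafs(per_relation_f, counts):
    # MAFS treats every relation equally; WAFS weights each relation
    # by the number of its instances.
    labels = list(per_relation_f)
    mafs = sum(per_relation_f[r] for r in labels) / len(labels)
    total = sum(counts[r] for r in labels)
    wafs = sum(per_relation_f[r] * counts[r] for r in labels) / total
    return mafs, wafs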
From the results, we find that Our-coarse consistently provides superior performance for most relations over other parsers, and therefore results in higher MAFS and WAFS. 6 Related Work So far, the existing discourse parsing techniques are mainly based on two well-known treebanks. One is the Penn Discourse TreeBank (PDTB) (Prasad et al., 2007) and the other is RST-DT. PDTB adopts the predicate-arguments representation by taking an implicit/explicit connective as a predication of two adjacent sentences (arguments). Then the discourse relation between each pair of sentences is annotated independently to characterize its predication. A majority of researches regard discourse parsing as a classification task and mainly focus on exploiting various linguistic features and classifiers when using PDTB (Wellner et al., 2006; Pitler et al., 2009; Wang et al., 2010). However, the predicatearguments annotation scheme itself has such a limitation that one can only obtain the local discourse relations without knowing the rich context. In contrast, RST and its treebank enable people to derive a complete representation of the whole discourse. Researches have begun to investigate how to construct a RST tree for the given text. Since the RST tree is similar to the constituency based syntactic tree except that the constituent nodes are different, the syntactic parsing techniques have been borrowed for discourse parsing (Soricut and Marcu, 2003; Baldridge and Lascarides, 2005; Sagae, 2009; Hernault et al., 2010b; Feng and Hirst, 2012). Soricut and Marcu (2003) use a standard bottomup chart parsing algorithm to determine the discourse structure of sentences. Baldridge and Lascarides (2005) model the process of discourse parsing with the probabilistic head driven parsing techniques. Sagae (2009) apply a transition based constituent parsing approach to construct a RST tree for a document. Hernault et al. (2010b) develop a greedy bottom-up tree building strategy for discourse parsing. The two adjacent text spans with the closest relations are combined in each iteration. As the extension of Hernault’s work, Feng and Hirst (2012) further explore various features aiming to achieve better performance. However, as analyzed in Section 1, there exist three limitations with the constituency based discourse representation and parsing. We innovatively adopt the dependency structure, which can be benefited from the existing RSTDT, to represent the discourse. To the best of our knowledge, this work is the first to apply dependency structure and dependency parsing techniques in discourse analysis. 7 Conclusions In this paper, we present the benefits and feasibility of applying dependency structure in textlevel discourse parsing. Through the correspondence between constituency-based trees and dependency trees, we build a discourse dependency treebank by converting the existing RST-DT. Based on dependency structure, we are able to directly analyze the relations between the EDUs without worrying about the additional interior text spans, and apply the existing state-of-the-art dependency parsing techniques which have a relatively low time complexity. In our work, we use the graph based dependency parsing techniques learned from the annotated dependency trees. The Eisner algorithm and the MST algorithm are applied to parse the optimal projective and non-projective dependency trees respectively based on the arc-factored model. 
To calculate the score for each arc, six types of features are explored to represent the arcs and the feature weights are learned based on the MIRA learning technique. Experimental results exhibit the effectiveness of the proposed approaches. In the future, we will focus on non-projective discourse dependency parsing and explore more effective features. Acknowledgments This work was partially supported by National High Technology Research and Development Program of China (No. 2012AA011101), National Key Basic Research Program of China (No. 2014CB340504), National Natural Science Foundation of China (No. 61273278), and National Key Technology R&D Program (No: 2011BAH10B04-03). We also thank the three anonymous reviewers for their helpful comments. 33 References Jason Baldridge and Alex Lascarides. 2005. Probabilistic Head-driven Parsing for Discourse Structure. In Proceedings of the Ninth Conference on Computational Natural Language Learning, pages 96– 103. Steven Bird, Ewan Klein, and Edward Loper. 2009. Natural Language Processing with Python — Analyzing Text with the Natural Language Toolkit. O’Reilly. Lynn Carlson, Daniel Marcu, and Mary E. Okurowski. 2001. Building a Discourse-tagged Corpus in the Framework of Rhetorical Structure Theory. Proceedings of the Second SIGdial Workshop on Discourse and Dialogue-Volume 16, pages 1–10. Yoeng-Jin Chu and Tseng-Hong Liu. 1965. On the Shortest Arborescence of a Directed Graph, Science Sinica, v.14, pp.1396-1400. Koby Crammer and Yoram Singer. 2003. Ultraconservative Online Algorithms for Multiclass Problems. JMLR. Jack Edmonds. 1967. Optimum Branchings, J. Research of the National Bureau of Standards, 71B, pp.233-240. Jason Eisner. 1996. Three New Probabilistic Models for Dependency Parsing: An Exploration. In Proc. COLING. Vanessa Wei Feng and Graeme Hirst. Text-level Discourse Parsing with Rich Linguistic Features, Proceedings of the 50th Annual Meeting of the Association for Computational Linguistics, pages 60–68, Jeju, Republic of Korea, 8-14 July 2012. Hugo Hernault, Danushka Bollegala, and Mitsuru Ishizuka. 2010a. A Semi-supervised Approach to Improve Classification of Infrequent Discourse Relations Using Feature Vector Extension. In Proceedings of the 2010 Conference on Empirical Methods in Natural Language Processing, pages 399–409, Cambridge, MA, October. Association for Computational Linguistics. Hugo Hernault, Helmut Prendinger, David A. duVerle, and Mitsuru Ishizuka. 2010b. HILDA: A Discourse Parser Using Support Vector Machine Classification. Dialogue and Discourse, 1(3):1–33. Shafiq Joty, Giuseppe Carenini and Raymond T. Ng. A Novel Discriminative Framework for Sentencelevel Discourse Analysis. EMNLP-CoNLL '12 Proceedings of the 2012 Joint Conference on Empirical Methods in Natural Language Processing and Computational Natural Language Learning Stroudsburg, PA, USA. Huong LeThanh, Geetha Abeysinghe, and Christian Huyck. 2004. Generating Discourse Structures for Written Texts. In Proceedings of the 20th International Conference on Computational Linguistics, pages 329– 335. Ziheng Lin, Min-Yen Kan, and Hwee Tou Ng. 2009. Recognizing Implicit Discourse Relations in the Penn Discourse Treebank. In Proceedings of the 2009 Conference on Empirical Method in Natural Language Processing, Vol. 1, EMNLP’09, pages 343-351. William Mann and Sandra Thompson. 1988. Rhetorical Structure Theory: Toward a Functional Theory of Text Organization. Text, 8(3):243–281. Daniel Marcu. 2000. The Theory and Practice of Discourse Parsing and Summarization. 
MIT Press, Cambridge, MA, USA. Ryan McDonald, Koby Crammer, and Fernando Pereira. 2005a. Online Large-Margin Training of Dependency Parsers, 43rd Annual Meeting of the Association for Computational Linguistics (ACL 2005) . Ryan McDonald, Fernando Pereira, Kiril Ribarov, and Jan Hajic. 2005b. Non-projective Dependency Parsing using Spanning Tree Algorithms, Proceedings of HLT/EMNLP 2005. Emily Pitler, Annie Louis, and Ani Nenkova. 2009. Automatic Sense Prediction for Implicit Discourse Relations in Text, In Proc. of the 47th ACL. pages 683-691. Rashmi Prasad, Eleni Miltsakaki, Nikhil Dinesh, Alan Lee, Aravind Joshi, Livio Robaldo, and Bonnie Webber. 2007. The Penn Discourse Treebank 2.0 Annotation Manual. The PDTB Research Group, December. David Reitter. 2003. Simple Signals for Complex Rhetorics: On Rhetorical Analysis with Richfeature Support Vector Models. LDV Forum, 18(1/2):38–52. Kenji Sagae. 2009. Analysis of discourse structure with syntactic dependencies and data-driven shiftreduce parsing. In Proceedings of the 11th International Conference on Parsing Technologies, pages 81-84. Radu Soricut and Daniel Marcu. 2003. Sentence level discourse parsing using syntactic and lexical information. In Proceedings of the 2003 Conference 34 of the North American Chapter of the Association for Computational Linguistics on Human Language Technology, Volume 1, pages 149–156. Rajen Subba and Barbara Di Eugenio. 2009. An effective discourse parser that uses rich linguistic information. In Proceedings of Human Language Technologies: The 2009 Annual Conference of the North American Chapter of the Association for Computational Linguistics, pages 566–574. Robert Endre Tarjan, 1977. Finding Optimum Branchings, Networks, v.7, pp.25-35. Ben Taskar, Carlos Guestrin and Daphne Koller. 2003. Max-margin Markov Networks. In Proc. NIPS. Bonnie Webber. 2004. D-LTAG: Extending Lexicalized TAG to Discourse. Cognitive Science, 28(5):751–779. Wen Ting Wang, Jian Su and Chew Lim Tan. 2010. Kernel based Discourse Relation Recognition with Temporal Ordering Information, In Proc. of ACL’10. pages 710-719. Ben Wellner, James Pustejovsky, Catherine Havasi, Anna Rumshisky and Roser Sauri. 2006. Classification of Discourse Coherence Relations: an Exploratory Study Using Multiple Knowledge Sources. In Proc.of the 7th SIGDIAL Workshop on Discourse and Dialogue. pages 117-125. 35
2014
3
Proceedings of the 52nd Annual Meeting of the Association for Computational Linguistics, pages 314–324, Baltimore, Maryland, USA, June 23-25 2014. c⃝2014 Association for Computational Linguistics Extracting Opinion Targets and Opinion Words from Online Reviews with Graph Co-ranking Kang Liu, Liheng Xu and Jun Zhao National Laboratory of Pattern Recognition Institute of Automation, Chinese Academy of Sciences, Beijing, 100190, China {kliu, lhxu, jzhao}@nlpr.ia.ac.cn Abstract Extracting opinion targets and opinion words from online reviews are two fundamental tasks in opinion mining. This paper proposes a novel approach to collectively extract them with graph coranking. First, compared to previous methods which solely employed opinion relations among words, our method constructs a heterogeneous graph to model two types of relations, including semantic relations and opinion relations. Next, a co-ranking algorithm is proposed to estimate the confidence of each candidate, and the candidates with higher confidence will be extracted as opinion targets/words. In this way, different relations make cooperative effects on candidates’ confidence estimation. Moreover, word preference is captured and incorporated into our coranking algorithm. In this way, our coranking is personalized and each candidate’s confidence is only determined by its preferred collocations. It helps to improve the extraction precision. The experimental results on three data sets with different sizes and languages show that our approach achieves better performance than state-of-the-art methods. 1 Introduction In opinion mining, extracting opinion targets and opinion words are two fundamental subtasks. Opinion targets are objects about which users’ opinions are expressed, and opinion words are words which indicate opinions’ polarities. Extracting them can provide essential information for obtaining fine-grained analysis on customers’ opinions. Thus, it has attracted a lot of attentions (Hu and Liu, 2004b; Liu et al., 2012; Moghaddam and Ester, 2011; Mukherjee and Liu, 2012). To this end, previous work usually employed a collective extraction strategy (Qiu et al., 2009; Hu and Liu, 2004b; Liu et al., 2013b). Their intuition is: opinion words usually co-occur with opinion targets in sentences, and there are strong modification relationship between them (called opinion relation in (Liu et al., 2012)). If a word is an opinion word, other words with which that word having opinion relations will have highly probability to be opinion targets, and vice versa. In this way, extraction is alternatively performed and mutual reinforced between opinion targets and opinion words. Although this strategy has been widely employed by previous approaches, it still has several limitations. 1) Only considering opinion relations is insufficient. Previous methods mainly focused on employing opinion relations among words for opinion target/word co-extraction. They have investigated a series of techniques to enhance opinion relations identification performance, such as nearest neighbor rules (Liu et al., 2005), syntactic patterns (Zhang et al., 2010; Popescu and Etzioni, 2005), word alignment models (Liu et al., 2012; Liu et al., 2013b; Liu et al., 2013a), etc. However, we are curious that whether merely employing opinion relations among words is enough for opinion target/word extraction? We note that there are additional types of relations among words. 
For example, "LCD" and "LED" both denote the same aspect "screen" in the TV set domain, and they are topically related. We call such relations between homogeneous words semantic relations. If we know "LCD" to be an opinion target, "LED" is naturally likely to be an opinion target as well. Intuitively, besides opinion relations, semantic relations may provide additional rich clues for indicating opinion targets/words. Which kind of relation is more effective for opinion target/word extraction? Is it beneficial to consider these two types of relations together for the extraction? To the best of our knowledge, these problems have seldom been studied before (see Section 2).

2) Ignoring word preference. When employing opinion relations to perform mutually reinforcing extraction between opinion targets and opinion words, previous methods depended on opinion associations among words, but seldom considered word preference. Word preference denotes a word's preferred collocations. Intuitively, the confidence of a candidate being an opinion target (opinion word) should mostly be determined by its word preferences rather than by all words having opinion relations with it. For example:
"This camera's price is expensive for me."
"It's price is good."
"Canon 40D has a good price."
In these three sentences, "price" is modified by "good" more times than by "expensive". In the traditional extraction strategy, opinion associations are usually computed based on co-occurrence frequency. Thus, "good" has a stronger opinion association with "price" than "expensive", and it would contribute more to determining whether "price" is an opinion target. This is unreasonable. "Expensive" actually has more relatedness with "price" than "good", and "expensive" is likely to be a word preference for "price". The confidence of "price" being an opinion target should be influenced by "expensive" to a greater extent than by "good". We argue that, in this way, the extraction will be more precise.

Figure 1: Heterogeneous Graph: OC means opinion word candidates. TC means opinion target candidates. Solid curves and dotted lines respectively mean semantic relations and opinion relations between two candidates.

Thus, to resolve these two problems, we present a novel approach with graph co-ranking. The collective extraction of opinion targets/words is performed in a co-ranking process. First, we operate over a heterogeneous graph to model semantic relations and opinion relations in a unified model. Specifically, our heterogeneous graph is composed of three subgraphs which model different relation types and candidates, as shown in Figure 1. The first subgraph Gtt represents semantic relations among opinion target candidates, and the second subgraph Goo models semantic relations among opinion word candidates. The third part is a bipartite subgraph Gto, which models opinion relations between the different candidate types and connects the above two subgraphs together. Then we perform a random walk algorithm on Gtt, Goo and Gto separately to estimate all candidates' confidences, and the entries with higher confidence than a threshold are correspondingly extracted as opinion targets/words. The results can reflect which type of relation is more useful for the extraction. Second, a co-ranking algorithm, which incorporates three separate random walks on Gtt, Goo and Gto into a unified process, is proposed to perform candidate confidence estimation.
Different relations may cooperatively affect candidate confidence estimation and generate more global ranking results. Moreover, we discover each candidate’s preferences through topics. Such word preference will be different for different candidates. We add word preference information into our algorithm and make our co-ranking algorithm be personalized. A candidate’s confidence would mainly absorb the contributions from its word preferences rather than its all neighbors with opinion relations, which may be beneficial for improving extraction precision. We perform experiments on real-world datasets from different languages and different domains. Results show that our approach effectively improves extraction performance compared to the state-of-the-art approaches. 2 Related Work There are many significant research efforts on opinion targets/words extraction (sentence level and corpus level). In sentence level extraction, previous methods (Wu et al., 2009; Ma and Wan, 2010; Li et al., 2010; Yang and Cardie, 2013) mainly aimed to identify all opinion target/word mentions in sentences. They regarded it as a sequence labeling task, where several classical models were used, such as CRFs (Li et al., 2010) and SVM (Wu et al., 2009). This paper belongs to corpus level extraction, and aims to generate a sentiment lexicon and a target list rather than to identify mentions in sen315 tences. Most of previous corpus-level methods adopted a co-extraction framework, where opinion targets and opinion words reinforce each other according to their opinion relations. Thus, how to improve opinion relations identification performance was their main focus. (Hu and Liu, 2004a) exploited nearest neighbor rules to mine opinion relations among words. (Popescu and Etzioni, 2005) and (Qiu et al., 2011) designed syntactic patterns to perform this task. (Zhang et al., 2010) promoted Qiu’s method. They adopted some special designed patterns to increase recall. (Liu et al., 2012; Liu et al., 2013a; Liu et al., 2013b) employed word alignment model to capture opinion relations rather than syntactic parsing. The experimental results showed that these alignment-based methods are more effective than syntax-based approaches for online informal texts. However, all aforementioned methods only employed opinion relations for the extraction, but ignore considering semantic relations among homogeneous candidates. Moreover, they all ignored word preference in the extraction process. In terms of considering semantic relations among words, our method is related with several approaches based on topic model (Zhao et al., 2010; Moghaddam and Ester, 2011; Moghaddam and Ester, 2012a; Moghaddam and Ester, 2012b; Mukherjee and Liu, 2012). The main goals of these methods weren’t to extract opinion targets/words, but to categorize all given aspect terms and sentiment words. Although these models could be used for our task according to the associations between candidates and topics, solely employing semantic relations is still one-sided and insufficient to obtain expected performance. Furthermore, there is little work which considered these two types of relations globally (Su et al., 2008; Hai et al., 2012; Bross and Ehrig, 2013). They usually captured different relations using cooccurrence information. That was too coarse to obtain expected results (Liu et al., 2012). In addition, (Hai et al., 2012) extracted opinion targets/words in a bootstrapping process, which had an error propagation problem. 
In contrast, we perform extraction with a global graph co-ranking process, where error propagation can be effectively alleviated. (Su et al., 2008) used heterogeneous relations to find implicit sentiment associations among words. Their aim was only to perform aspect terms categorization but not to extract opinion targets/words. They extracted opinion targets/words in advanced through simple phrase detection. Thus, the extraction performance is far from expectation. 3 The Proposed Method In this section, we propose our method in detail. We formulate opinion targets/words extraction as a co-ranking task. All nouns/noun phrases are regarded as opinion target candidates, and all adjectives/verbs are regarded as opinion word candidates, which are widely adopted by pervious methods (Hu and Liu, 2004a; Qiu et al., 2011; Wang and Wang, 2008; Liu et al., 2012). Then each candidate will be assigned a confidence and ranked, and the candidates with higher confidence than a threshold will be extracted as the results. Different from traditional methods, besides opinion relations among words, we additionally capture semantic relations among homogeneous candidates. To this end, a heterogeneous undirected graph G = (V, E) is constructed. V = V t ∪V o denotes the vertex set, which includes opinion target candidates vt ∈V t and opinion word candidates vo ∈V o. E denotes the edge set, where eij ∈E means that there is a relation between two vertices. Ett ⊂E represents the semantic relations between two opinion target candidates. Eoo ⊂E represents the semantic relations between two opinion word candidates. Eto ⊂E represents the opinion relations between opinion target candidates and opinion word candidates. Based on different relation types, we used three matrices Mtt ∈R|V t|×|V t|, Moo ∈R|V o|×|V o| and Mto ∈R|V t|×|V o| to record the association weights between any two vertices, respectively. Section 3.4 will illustrate how to construct them. 3.1 Only Considering Opinion Relations To estimate the confidence of each candidate, we use a random walk algorithm on our graph to perform co-ranking. Most previous methods (Hu and Liu, 2004a; Qiu et al., 2011; Wang and Wang, 2008; Liu et al., 2012) only considered opinion relations among words. Their basic assumption is as follows. Assumption 1: If a word is likely to be an opinion word, the words which it has opinion relation with will have higher confidence to be opinion targets, and vice versa. 316 In this way, candidates’ confidences (vt or vo) are collectively determined by each other iteratively. It equals to making random walk on subgraph Gto = (V, Eto) of G. Thus we have Ct = (1 −µ) × Mto × Co + µ × It Co = (1 −µ) × MT to × Ct + µ × Io (1) where Ct and Co respectively represent confidences of opinion targets and opinion words. mto i,j ∈Mto means the association weight between the ith opinion target and the jth opinion word according to their opinion relations. It’s worthy noting that It and Io respectively denote prior confidences of opinion target candidates and opinion word candidates. We argue that opinion targets are usually domain-specific, and there are remarkably distribution difference of them on different domains (in-domain Din vs. out-domain Dout). If a candidate is salient in Din but common in Dout, it’s likely to be an opinion target in Din. Thus, we use a domain relevance measure (DR) (Hai et al., 2013) to compute It. 
DR(t) = R(t, Din) / R(t, Dout)        (2)

where R(t, D) = (w̄t / st) × Σ_{j=1..N} ( wtj − (1/Wj) × Σ_{k=1..Wj} wkj ) represents the relevance of candidate t to domain D. wtj = (1 + log TFtj) × log(N / DFt) is a TF-IDF-like weight of candidate t in document j, TFtj is the frequency of candidate t in the jth document, and DFt is its document frequency. N is the number of documents in domain D. R(t, D) includes two measures to reflect the salience of a candidate in D. 1) wtj − (1/Wj) × Σ_{k=1..Wj} wkj reflects how frequently a term is mentioned in a particular document, where Wj denotes the number of words in document j. 2) w̄t / st quantifies how significantly a term is mentioned across all documents in D, where w̄t = (1/N) × Σ_{k=1..N} wtk denotes the average weight of t across all documents and st = sqrt( (1/N) × Σ_{j=1..N} (wtj − w̄t)^2 ) denotes the standard deviation of term t. We use the given reviews as the in-domain collection Din and the Google n-gram corpus1 as the out-domain collection Dout. Finally, each entry in It is a normalized DR(t) score. In contrast, opinion words are usually domain-independent. Users may use the same words to express their opinions, like "good", "bad", etc. But there are still some domain-dependent opinion words, like "delicious" in the restaurant domain and "powerful" in the car domain. It is difficult to discriminate them from other words by using statistical information. So we simply set all entries in Io to be 1. µ ∈ [0, 1] in Eq. 1 determines the impact of the prior confidence on the results.

1 http://books.google.com/ngrams/datasets

3.2 Only Considering Semantic Relations

To estimate candidates' confidences by only considering semantic relations among words, we make two separate random walks on the subgraphs of G, Gtt = (V, Ett) and Goo = (V, Eoo). The basic assumption is as follows:
Assumption 2: If a word is likely to be an opinion target (opinion word), the words with which it has strong semantic relations will have higher confidence to be opinion targets (opinion words).
In this way, the confidence of a candidate is determined only by its homogeneous neighbours. There is no mutual reinforcement between opinion targets and opinion words. Thus we have

Ct = (1 − ν) × Mtt × Ct + ν × It
Co = (1 − ν) × Moo × Co + ν × Io        (3)

where ν has the same role as µ in Eq. 1.

3.3 Considering Semantic Relations and Opinion Relations Together

To jointly model semantic relations and opinion relations for opinion target/word extraction, we couple the two random walk algorithms mentioned above. Here, Assumption 1 and Assumption 2 are both satisfied. Thus, an opinion target/word candidate's confidence is collectively determined by its neighbours according to different relation types. Meanwhile, each item may influence its neighbours. It is an iterative reinforcement process. Thus, we have

Ct = (1 − λ − µ) × Mto × Co + λ × Mtt × Ct + µ × It
Co = (1 − λ − µ) × Mto^T × Ct + λ × Moo × Co + µ × Io        (4)

where λ ∈ [0, 1] determines which type of relation dominates candidate confidence estimation. λ = 0 means that each candidate's confidence is estimated by only considering opinion relations among words, which is equivalent to Eq. 1. Conversely, when λ = 1, candidate confidence estimation only considers semantic relations among words, which is equivalent to Eq. 3. µ, Io and It have the same meaning as in Eq. 1. Our algorithm runs iteratively until it converges or reaches a fixed number of iterations Iter. In experiments, we set Iter = 200.

Obtaining Word Preference.
The co-ranking algorithm in Eq.4 is based on a standard random walking algorithm, which randomly selects a link according to the association matrix Mto, Mtt and Moo, or jumps to a random node with prior confidence value. However, it generates a global ranking over all candidates without taking the node preference (word preference) into account. As mentioned in the first section, each opinion target/word has its preferred collocations, it’s reasonable that the confidence of an opinion target (opinion word) candidate should be preferentially determined by its preferences, rather than all of its neighbors with opinion relations. To obtain the word preference, we resort to topics. We believe that if an opinion word vio is topical related with a target word vjt, vio can be regarded as a word preference for vjt, and vice versa. For example, “price” and “expensive” are topically related in phone’s domain, so they are a word preference for each other. Specifically, we use a vector P Ti = [P Ti 1 , ..., P Ti k , ..., P Ti |V o|]1×|V o| to represent word preference of the ith opinion target candidate. P Ti k means the preferred probability of the ith potential opinion target for the kth potential opinion words. To compute P Ti k , we first use Kullback-Leibler divergence to measure the semantic distance between any two candidates on the bridge of topics. Thus, we have D(vi, vj) = 1 2Σz(KLz(vi||vj) + KLz(vj||vi)) where KLz(vi||vj) = p(z|vi)log p(z|vi) p(z|vj) means the KL-divergence from candidate vi to vj based on topic z. p(z|v) = p(v|z) p(z) p(v), where p(v|z) is the probability of the candidate v to topic z (see Section 3.4). p(z) is the probability that topic z in reviews. p(v) is the probability that a candidate occurs in reviews. Then, a logistic function is used to map D(vi, vj) into [0, 1]. SA(vi, vj) = 1 1 + eD(vi,vj) (5) Then, we calculate P Ti k by normalize SA(vi, vj) score, i.e. P Ti k = SA(vt i,vo k) P|V o| p=1 SA(vt i,vop). For demonstration, we give some examples in Table 1, where each entry denotes a SA(vi, vj) score between two candidates. We can see that using topics can successfully capture the preference information for each opinion target/word. expensive good long colorful price 0.265 0.043 0.003 0.000 LED 0.002 0.035 0.007 0.098 battery 0.000 0.015 0.159 0.001 Table 1: Examples of Calculated Word Preference And we use a vector P Oj = [P Oj 1 , ..., P Oj q , ..., P Oj |V t|]1×|V t| to represent the preference information of the jth opinion word candidate. Similarly, we have P Oj q = SA(vt q,vo j ) P|V t| k=1 SA(vt k,vo j ). Incorporating Word Preference into Coranking. To consider such word preference in our co-ranking algorithm, we incorporate it into the random walking on Gto. Intuitively, preference vectors will be different for different candidates. Thus, the co-ranking algorithm would be personalized. It allows that the candidate confidence propagates to other candidates only in its preference cluster. Specifically, we make modification on original transition matrix Mto = (Mto 1 , Mto 2 , ..., Mto |V t|) and add each candidate’s preference in it. Let ˆ Mto = ( ˆ Mto 1 , ˆ Mto 2 , ..., ˆ Mto |V t|) be the modified transition matrix, which records the associations between opinion target candidates and opinion word candidates. Here Mto k ∈ R1×|V o| and ˆ Mto k ∈R1×|V o| denotes the kth column vector in Mto and ˆ Mto, respectively. 
And let Diag(P Tk) denote a diagonal matrix whose eigenvalue is vector P Tk, we have ˆ Mto k = Mto k Diag(P Tk) Similarly, let Uto k ∈R1×|V t| and ˆUto k ∈R1×|V t| denotes the kth row vector in MT to and ˆ MT to, respectively. Diag(P Ok) denote a diagonal matrix whose eigenvalue is vector P Ok. Then we have ˆUto k = Uto k Diag(P Ok) In this way, each candidate’s preference is incorporated into original associations based on opinion relation Mto through Diag(P Ok) and Diag(P Tk). And candidates’ confidences will mainly come from the contributions of its preferences. Thus, Ct and Co in Eq.4 become: 318 Ct = (1 −λ −µ) × ˆ Mto × Co + λ × Mtt × Ct + µ × It Co = (1 −λ −µ) × ˆ MT to × Ct + λ × Moo × Co + µ × Io (6) 3.4 Capturing Semantic and Opinion Relations In this section, we explain how to capture semantic relations and opinion relations for constructing transition matrices Mtt, Moo and Mto. Capturing Semantic Relations: For capturing semantic relations among homogenous candidates, we employ topics. We believe that if two candidates share similar topics in the corpus, there is a strong semantic relation between them. Thus, we employ a LDA variation (Mukherjee and Liu, 2012), an extension of (Zhao et al., 2010), to discover topic distribution on words, which sampled all words into two separated observations: opinion targets and opinion words. It’s because that we are only interested in topic distribution of opinion targets/words, regardless of other useless words, including conjunctions, prepositions etc. This model has been proven to be better than the standard LDA model and other LDA variations for opinion mining (Mukherjee and Liu, 2012). After topic modeling, we obtain the probability of the candidates (vt and vo) to topic z, i.e. p(z|vt) and p(z|vo), and topic distribution p(z). Then, a symmetric Kullback-Leibler divergence as same as Eq.5 is used to calculate the semantical associations between any two homogenous candidates. Thus, we obtain SA(vt, vt) and SA(vo, vo), which correspond to the entries in Mtt and Moo, respectively. Capturing Opinion Relations: To capture opinion relations among words and construct the transition matrix Mto, we used an alignmentbased method proposed in (Liu et al., 2013b). This approach models capturing opinion relations as a monolingual word alignment process. Each opinion target can find its corresponding modifiers in sentences through alignment, in which multiple factors are considered globally, such as co-occurrence information, word position in sentence, etc. Moreover, this model adopted a partially supervised framework to combine syntactic information with alignment results, which has been proven to be more precise than the state-ofthe-art approaches for opinion relations identification (Liu et al., 2013b). After performing word alignment, we obtain a set of word pairs composed of a noun (noun phrase) and its corresponding modified word. Then, we simply employ Pointwise Mutual Information (PMI) to calculate the opinion associations among words as the entries in Mto. OA(vt, vo) = log p(vt,vo) p(vt)p(vo), where vt and vo denote an opinion target candidate and an opinion word candidate, respectively. p(vt, vo) is the co-occurrence probability of vt and vo based on the opinion relation identification results. p(vt) and p(vo) give the independent occurrence probability of of vt and vo, respectively 4 Experiments 4.1 Datasets and Evaluation Metrics Datasets: To evaluate the proposed method, we used three datasets. 
The first one is Customer Review Datasets (CRD), used in (Hu and Liu, 2004a), which contains reviews about five products. The second one is COAE2008 dataset22, which contains Chinese reviews about four products. The third one is Large, also used in (Wang et al., 2011; Liu et al., 2012; Liu et al., 2013a), where two domains are selected (Mp3 and Hotel). As mentioned in (Liu et al., 2012), Large contains 6,000 sentences for each domain. Opinion targets/words are manually annotated, where three annotators were involved. Two annotators were required to annotate out opinion words/targets in reviews. When conflicts occur, the third annotator make final judgement. In total, we respectively obtain 1,112, 1,241 opinion targets and 334, 407 opinion words in Hotel, MP3. Pre-processing: All sentences are tagged to obtain words’ part-of-speech tags using Stanford NLP tool3. And noun phrases are identified using the method in (Zhu et al., 2009) before extraction. Evaluation Metrics: We select precision(P), recall(R) and f-measure(F) as metrics. And a significant test is performed, i.e., a t-test with a default significant level of 0.05. 4.2 Our Method vs. The State-of-the-art Methods To prove the effectiveness of the proposed method, we select some state-of-the-art methods for comparison as follows: 2http://ir-china.org.cn/coae2008.html 3http://nlp.stanford.edu/software/tagger.shtml 319 Methods D1 D2 D3 D4 D5 Avg. P R F P R F P R F P R F P R F F Hu 0.75 0.82 0.78 0.71 0.79 0.75 0.72 0.76 0.74 0.69 0.82 0.75 0.74 0.80 0.77 0.758 DP 0.87 0.81 0.84 0.90 0.81 0.85 0.90 0.86 0.88 0.81 0.84 0.82 0.92 0.86 0.89 0.856 Zhang 0.83 0.84 0.83 0.86 0.85 0.85 0.86 0.88 0.87 0.80 0.85 0.82 0.86 0.86 0.86 0.846 SAS 0.80 0.79 0.79 0.82 0.76 0.79 0.79 0.74 0.76 0.77 0.78 0.77 0.80 0.76 0.78 0.778 Liu 0.84 0.85 0.84 0.87 0.85 0.86 0.88 0.89 0.88 0.81 0.85 0.83 0.89 0.87 0.88 0.858 Hai 0.77 0.87 0.83 0.79 0.86 0.82 0.79 0.89 0.84 0.72 0.88 0.79 0.74 0.88 0.81 0.818 CR 0.84 0.86 0.85 0.87 0.85 0.86 0.87 0.90 0.88 0.81 0.87 0.83 0.89 0.88 0.89 0.862 CR WP 0.86 0.86 0.86 0.88 0.86 0.87 0.89 0.90 0.89 0.81 0.87 0.83 0.91 0.89 0.90 0.870 Table 2: Results of Opinion Targets Extraction on Customer Review Dataset Methods Camera Car Laptop Phone Mp3 Hotel Avg. P R F P R F P R F P R F P R F P R F F Hu 0.63 0.65 0.64 0.62 0.58 0.60 0.51 0.67 0.58 0.69 0.60 0.64 0.61 0.68 0.64 0.60 0.65 0.62 0.587 DP 0.71 0.70 0.70 0.72 0.65 0.68 0.58 0.69 0.63 0.78 0.66 0.72 0.69 0.70 0.69 0.67 0.69 0.68 0.683 Zhang 0.71 0.78 0.74 0.69 0.68 0.68 0.57 0.80 0.67 0.80 0.71 0.75 0.67 0.77 0.72 0.67 0.76 0.71 0.712 SAS 0.72 0.72 0.72 0.71 0.64 0.67 0.59 0.72 0.65 0.78 0.69 0.73 0.69 0.75 0.72 0.69 0.74 0.71 0.700 Liu 0.75 0.81 0.78 0.71 0.71 0.71 0.61 0.85 0.71 0.83 0.74 0.78 0.70 0.82 0.76 0.71 0.80 0.75 0.749 Hai 0.68 0.84 0.76 0.69 0.75 0.72 0.58 0.86 0.72 0.75 0.76 0.76 0.65 0.83 0.74 0.62 0.82 0.75 0.742 CR 0.75 0.83 0.79 0.72 0.74 0.73 0.60 0.85 0.70 0.83 0.77 0.80 0.70 0.84 0.76 0.71 0.83 0.77 0.758 CR WP 0.78 0.84 0.81 0.74 0.75 0.74 0.64 0.85 0.73 0.84 0.76 0.80 0.74 0.84 0.79 0.74 0.82 0.78 0.773 Table 3: Results of Opinion Targets Extraction on COAE 2008 and Large Hu extracted opinion targets/words using association mining rules (Hu and Liu, 2004a). DP used syntax-based patterns to capture opinion relations in sentences, and then used a bootstrapping process to extract opinion targets/words (Qiu et al., 2011),. Zhang is proposed by (Zhang et al., 2010). They also used syntactic patterns to capture opinion relations between words. 
Then a HITS (Kleinberg, 1999) algorithm is employed to extract opinion targets. Liu is proposed by (Liu et al., 2013a), an extension of (Liu et al., 2012). They employed a word alignment model to capture opinion relations among words, and then used a random walking algorithm to extract opinion targets. Hai is proposed by (Hai et al., 2012), which is similar to our method. They employed both of semantic relations and opinion relations to extract opinion words/targets in a bootstrapping framework. But they captured relations only using cooccurrence statistics. Moreover, word preference was not considered. SAS is proposed by (Mukherjee and Liu, 2012), an extended lda-based model of (Zhao et al., 2010). The top K items for each aspect are extracted as opinion targets/words. It means that only semantic relations among words are considered in SAS. And we set aspects number to be 9 as same as (Mukherjee and Liu, 2012). CR: is the proposed method in this paper by using co-ranking, referring to Eq.4. CR doesn’t consider word preference. CR WP: is the full implementation of our method, referring to Eq.6. Hu, DP, Zhang and Liu are the methods which only consider opinion relations among words. SAS is the methods which only consider semantic relations among words. Hai, CR and CR WP consider these two types of relations together. The parameter settings of state-of-the-art methods are same as their original paper. In CR and CR WP, we set λ = 0.4 and µ = 0.1. The experimental results are shown in Table 2, 3, 4 and 5, where the last column presents the average F-measure scores for multiple domains. Since Liu and Zhang aren’t designed for opinion words extraction, we don’t present their results in Table 4 and 5. From experimental results, we can see. 1) Our methods (CR and CR WP) outperform other methods not only on opinion targets extraction but on opinion words extraction in most domains. It proves the effectiveness of the proposed method. 2) CR and CR WP have much better performance than Liu and Zhang, especially on Recall. Liu and Zhang also use a ranking framework like ours, but they only employ opinion relations for extraction. In contrast, besides opinion relations, CR and CR WP further take semantic relations into account. Thus, more opinion targets/words can be extracted. Furthermore, we observe that CR and CR WP outperform SAS. SAS only exploits semantic relations, but ignores opinion relations among words. Its extraction is performed separately and neglects the reinforcement between opinion targets and opinion words. Thus, SAS has worse performance than our methods. It demonstrates the usefulness of considering multiple relation types. 320 Methods D1 D2 D3 D4 D5 Avg. P R F P R F P R F P R F P R F F Hu 0.57 0.75 0.65 0.51 0.76 0.61 0.57 0.73 0.64 0.54 0.62 0.58 0.62 0.67 0.64 0.624 DP 0.64 0.73 0.68 0.57 0.79 0.66 0.65 0.70 0.67 0.61 0.65 0.63 0.70 0.68 0.69 0.666 SAS 0.64 0.68 0.66 0.55 0.70 0.62 0.62 0.65 0.63 0.60 0.61 0.60 0.68 0.63 0.65 0.632 Hai 0.62 0.77 0.69 0.52 0.80 0.64 0.60 0.74 0.67 0.56 0.69 0.62 0.66 0.70 0.68 0.660 CR 0.62 0.75 0.68 0.57 0.79 0.67 0.64 0.75 0.69 0.63 0.69 0.66 0.68 0.69 0.69 0.678 CR WP 0.65 0.75 0.70 0.59 0.80 0.68 0.65 0.74 0.70 0.66 0.68 0.67 0.71 0.70 0.70 0.690 Table 4: Results of Opinion Words Extraction on Customer Review Dataset Methods Camera Car Laptop Phone Mp3 Hotel Avg. 
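As a reference point for the CR and CR WP settings compared here, a compact sketch (not from the paper) of the co-ranking update of Eq. 6 is given below. It assumes NumPy arrays for the normalized association matrices Mtt, Moo, Mto, the priors It, Io, and row-wise word-preference matrices Pt (|V t| x |V o|) and Po (|V o| x |V t|); setting Pt and Po to all-ones recovers CR, and lam = 0 corresponds to using opinion relations only, as in the OnlyOA/OnlyOA WP baselines discussed below.

import numpy as np

def co_rank(Mtt, Moo, Mto, It, Io, Pt, Po, lam=0.4, mu=0.1, iters=200):
    # Personalize the opinion-relation matrix: each candidate's row is reweighted
    # by its preference vector over heterogeneous neighbours (Diag(PT)/Diag(PO)).
    Mto_hat = Mto * Pt
    Mot_hat = Mto.T * Po
    Ct, Co = It.copy(), Io.copy()
    for _ in range(iters):
        Ct = (1 - lam - mu) * Mto_hat.dot(Co) + lam * Mtt.dot(Ct) + mu * It
        Co = (1 - lam - mu) * Mot_hat.dot(Ct) + lam * Moo.dot(Co) + mu * Io
    return Ct, Co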
P R F P R F P R F P R F P R F P R F F Hu 0.72 0.74 0.73 0.70 0.71 0.70 0.66 0.70 0.68 0.70 0.70 0.70 0.48 0.67 0.56 0.52 0.69 0.59 0.660 DP 0.80 0.73 0.76 0.79 0.71 0.75 0.75 0.69 0.72 0.78 0.68 0.73 0.60 0.65 0.62 0.61 0.66 0.63 0.702 SAS 0.73 0.70 0.71 0.75 0.68 0.71 0.72 0.68 0.69 0.71 0.66 0.68 0.64 0.62 0.63 0.66 0.61 0.63 0.675 Hai 0.76 0.74 0.75 0.72 0.74 0.73 0.69 0.72 0.70 0.72 0.70 0.71 0.61 0.69 0.64 0.59 0.68 0.64 0.690 CR 0.80 0.75 0.77 0.77 0.74 0.75 0.73 0.71 0.72 0.75 0.71 0.73 0.63 0.69 0.64 0.63 0.68 0.66 0.710 CR WP 0.80 0.75 0.77 0.80 0.74 0.77 0.77 0.71 0.74 0.78 0.72 0.75 0.66 0.68 0.67 0.67 0.69 0.68 0.730 Table 5: Results of Opinion Words Extraction on COAE 2008 and Large 3) CR and CR WP both outperform Hai. We believe the reasons are as follows. First, CR and CR WP considers multiple relations in a unified process by using graph co-ranking. In contrast, Hai adopts a bootstrapping framework which performs extraction step by step and may have the problem of error propagation. It demonstrates that our graph co-ranking is more suitable for this task than bootstrapping-based strategy. Second, our method captures semantic relations using topic modeling and captures opinion relations through word alignments, which are more precise than Hai which merely uses co-occurrence information to indicate such relations among words. In addition, word preference is not handled in Hai, but processed in CR WP. The results show the usefulness of word preference for opinion targets/words extraction. 4) CR WP outperforms CR, especially on precision. The only difference between them is that CR WP considers word preference when performing graph ranking for candidate confidence estimation, but CR does not. Each candidate confidence estimation in CR WP gives more weights for this candidate’s preferred words than CR. Thus, the precision can be improved. 4.3 Semantic Relation vs. Opinion Relation In this section, we discuss which relation type is more effective for this task. For comparison, we design two baselines, called OnlySA and OnlyOA. OnlyOA only employs opinion relations among words, which equals to Eq.1. OnlySA only employs semantic relations among words, which equals to Eq.3. Moreover, Combine is our method which considers both of opinion relations and semantic relations together, referring to Eq.4 with MP3 Hotel Laptop Phone Recall .65 .70 .75 .80 .85 .90 .95 OnlySA OnlyOA Combine (a) Opinion Target Extraction Results MP3 Hotel Laptop Phone Recall .60 .65 .70 .75 .80 OnlySA OnlyOA Combine (b) Opinion Word Extraction Results Figure 2: Semantic Relations vs. Opinion Relations λ = 0.5. Figure 2 presents experimental results. The left graph presents opinion targets extraction results and the right graph presents opinion words extraction results. Because of space limitation, we only shown the results of four domains (MP3, Hotel, Laptop and Phone). From results, we observe that OnlyOA outperforms OnlySA in all domains. It demonstrates that employing opinion relations are more useful than semantic relations for co-extracting opinion targets/words. And it is necessary to utilize the mutual reinforcement relationship between opinion words and opinion targets. Moreover, Combine outperforms OnlySA and OnlyOA in all domains. It indicates that combining different relations among words together is effective. 4.4 The Effectiveness of Considering Word Preference In this section, we try to prove the necessity of considering word preference in Eq.6. 
Besides the comparison between CR and CR WP performed in the main experiment in Section 4.2, we further incorporate word preference into the aforementioned OnlyOA, denoted OnlyOA WP, which only employs opinion relations among words and corresponds to Eq. 6 with λ = 0. Experimental results are shown in Figure 3. Because of space limitations, we only show results for the same domains as in Section 4.3. From the results, we observe that CR WP outperforms CR, and OnlyOA WP outperforms OnlyOA, in all domains, especially on precision. These observations demonstrate that considering word preference is very important for opinion target/word extraction. We believe the reason is that exploiting word preference provides finer-grained information for estimating the confidence of opinion target/word candidates, and thus the performance can be improved.

[Figure 3: Experimental results when considering word preference. (a) Opinion target extraction results; (b) opinion word extraction results. Each panel plots precision or recall on the MP3, Hotel, Laptop and Phone domains for OnlyOA, OnlyOA_WP, CR and CR_WP.]

4.5 Parameter Sensitivity

In this subsection, we discuss how extraction performance varies when changing λ and µ in Eq. 6. Due to space limitations, we only show the F-measure of CR WP on four domains. Experimental results are shown in Figure 4 and Figure 5. The left graphs in Figures 4 and 5 show the performance of CR WP when varying λ from 0 to 0.9 with µ fixed at 0.1; the right graphs show the performance when varying µ from 0 to 0.6 with λ fixed at 0.4. In the left graphs, we observe that the best performance is obtained when λ = 0.4. This indicates that opinion relations and semantic relations are both useful for extracting opinion targets/words, and that extraction performance benefits from their combination. In the right graphs, the best performance is obtained when µ = 0.1, which indicates that prior knowledge is useful for extraction. However, when µ increases further, performance decreases. This suggests that injecting too much prior knowledge into our algorithm suppresses other useful clues for estimating candidate confidence and hurts performance.

[Figure 4: Opinion targets extraction results. F-measure of CR WP on MP3, Hotel, Laptop and Phone when varying λ from 0 to 0.9 (left) and µ from 0 to 0.6 (right).]

[Figure 5: Opinion words extraction results. F-measure of CR WP on MP3, Hotel, Laptop and Phone when varying λ from 0 to 0.9 (left) and µ from 0 to 0.6 (right).]

5 Conclusions

This paper presents a novel graph co-ranking method to co-extract opinion targets and opinion words. We model the extraction of opinion targets/words as a co-ranking process in which multiple heterogeneous relations are modeled in a unified framework so that they cooperatively reinforce the extraction. In addition, we incorporate word preference into the co-ranking process to perform more precise extraction. Compared to state-of-the-art methods, experimental results demonstrate the effectiveness of our method.

Acknowledgement

This work was sponsored by the National Basic Research Program of China (No. 2014CB340500), the National Natural Science Foundation of China (No. 61272332 and No.
61202329), the National High Technology Development 863 Program of China (No. 2012AA011102), and CCF-Tencent Open Research Fund. References Juergen Bross and Heiko Ehrig. 2013. Automatic construction of domain and aspect specific sentiment 322 lexicons for customer review mining. In Proceedings of the 22nd ACM international conference on Conference on information &#38; knowledge management, CIKM ’13, pages 1077–1086, New York, NY, USA. ACM. Zhen Hai, Kuiyu Chang, and Gao Cong. 2012. One seed to find them all: mining opinion features via association. In CIKM, pages 255–264. Zhen Hai, Kuiyu Chang, Jung-Jae Kim, and Christopher C. Yang. 2013. Identifying features in opinion mining via intrinsic and extrinsic domain relevance. IEEE Transactions on Knowledge and Data Engineering, 99(PrePrints):1. Mingqin Hu and Bing Liu. 2004a. Mining opinion features in customer reviews. In Proceedings of Conference on Artificial Intelligence (AAAI). Minqing Hu and Bing Liu. 2004b. Mining and summarizing customer reviews. In Proceedings of the tenth ACM SIGKDD international conference on Knowledge discovery and data mining, KDD ’04, pages 168–177, New York, NY, USA. ACM. Jon M. Kleinberg. 1999. Authoritative sources in a hyperlinked environment. J. ACM, 46(5):604–632, September. Fangtao Li, Chao Han, Minlie Huang, Xiaoyan Zhu, Yingju Xia, Shu Zhang, and Hao Yu. 2010. Structure-aware review mining and summarization. In Chu-Ren Huang and Dan Jurafsky, editors, COLING, pages 653–661. Tsinghua University Press. Bing Liu, Minqing Hu, and Junsheng Cheng. 2005. Opinion observer: analyzing and comparing opinions on the web. In Allan Ellis and Tatsuya Hagino, editors, WWW, pages 342–351. ACM. Kang Liu, Liheng Xu, and Jun Zhao. 2012. Opinion target extraction using word-based translation model. In Proceedings of the 2012 Joint Conference on Empirical Methods in Natural Language Processing and Computational Natural Language Learning, pages 1346–1356, Jeju Island, Korea, July. Association for Computational Linguistics. Kang Liu, Liheng Xu, Yang Liu, and Jun Zhao. 2013a. Opinion target extraction using partially supervised word alignment model. Kang Liu, Liheng Xu, and Jun Zhao. 2013b. Syntactic patterns versus word alignment: Extracting opinion targets from online reviews. Tengfei Ma and Xiaojun Wan. 2010. Opinion target extraction in chinese news comments. In ChuRen Huang and Dan Jurafsky, editors, COLING (Posters), pages 782–790. Chinese Information Processing Society of China. Samaneh Moghaddam and Martin Ester. 2011. Ilda: Interdependent lda model for learning latent aspects and their ratings from online product reviews. In Proceedings of the 34th International ACM SIGIR Conference on Research and Development in Information Retrieval, SIGIR ’11, pages 665–674, New York, NY, USA. ACM. Samaneh Moghaddam and Martin Ester. 2012a. Aspect-based opinion mining from product reviews. In Proceedings of the 35th International ACM SIGIR Conference on Research and Development in Information Retrieval, SIGIR ’12, pages 1184–1184, New York, NY, USA. ACM. Samaneh Moghaddam and Martin Ester. 2012b. On the design of lda models for aspect-based opinion mining. In CIKM, pages 803–812. Arjun Mukherjee and Bing Liu. 2012. Aspect extraction through semi-supervised modeling. In Proceedings of the 50th Annual Meeting of the Association for Computational Linguistics: Long Papers - Volume 1, ACL ’12, pages 339–348, Stroudsburg, PA, USA. Association for Computational Linguistics. Ana-Maria Popescu and Oren Etzioni. 2005. 
Extracting product features and opinions from reviews. In Proceedings of the conference on Human Language Technology and Empirical Methods in Natural Language Processing, HLT ’05, pages 339–346, Stroudsburg, PA, USA. Association for Computational Linguistics. Guang Qiu, Bing Liu, Jiajun Bu, and Chun Che. 2009. Expanding domain sentiment lexicon through double propagation. Guang Qiu, Bing Liu 0001, Jiajun Bu, and Chun Chen. 2011. Opinion word expansion and target extraction through double propagation. Computational Linguistics, 37(1):9–27. Qi Su, Xinying Xu, Honglei Guo, Zhili Guo, Xian Wu, Xiaoxun Zhang, Bin Swen, and Zhong Su. 2008. Hidden sentiment association in chinese web opinion mining. In Jinpeng Huai, Robin Chen, Hsiao-Wuen Hon, Yunhao Liu, Wei-Ying Ma, Andrew Tomkins, and Xiaodong Zhang 0001, editors, WWW, pages 959–968. ACM. Bo Wang and Houfeng Wang. 2008. Bootstrapping both product features and opinion words from chinese customer reviews with cross-inducing. Hongning Wang, Yue Lu, and ChengXiang Zhai. 2011. Latent aspect rating analysis without aspect keyword supervision. In Chid Apt, Joydeep Ghosh, and Padhraic Smyth, editors, KDD, pages 618–626. ACM. Yuanbin Wu, Qi Zhang, Xuanjing Huang, and Lide Wu. 2009. Phrase dependency parsing for opinion mining. In EMNLP, pages 1533–1541. ACL. Bishan Yang and Claire Cardie. 2013. Joint inference for fine-grained opinion extraction. In Proceedings of the 51st Annual Meeting of the Association for Computational Linguistics (Volume 1: Long 323 Papers), pages 1640–1649, Sofia, Bulgaria, August. Association for Computational Linguistics. Lei Zhang, Bing Liu, Suk Hwan Lim, and Eamonn O’Brien-Strain. 2010. Extracting and ranking product features in opinion documents. In ChuRen Huang and Dan Jurafsky, editors, COLING (Posters), pages 1462–1470. Chinese Information Processing Society of China. Wayne Xin Zhao, Jing Jiang, Hongfei Yan, and Xiaoming Li. 2010. Jointly modeling aspects and opinions with a maxent-lda hybrid. In Proceedings of the 2010 Conference on Empirical Methods in Natural Language Processing, EMNLP ’10, pages 56– 65, Stroudsburg, PA, USA. Association for Computational Linguistics. Jingbo Zhu, Huizhen Wang, Benjamin K. Tsou, and Muhua Zhu. 2009. Multi-aspect opinion polling from textual reviews. In David Wai-Lok Cheung, Il-Yeol Song, Wesley W. Chu, Xiaohua Hu, and Jimmy J. Lin, editors, CIKM, pages 1799–1802. ACM. 324
Proceedings of the 52nd Annual Meeting of the Association for Computational Linguistics, pages 325–335, Baltimore, Maryland, USA, June 23-25 2014. c⃝2014 Association for Computational Linguistics Context-aware Learning for Sentence-level Sentiment Analysis with Posterior Regularization Bishan Yang Department of Computer Science Cornell University [email protected] Claire Cardie Department of Computer Science Cornell University [email protected] Abstract This paper proposes a novel context-aware method for analyzing sentiment at the level of individual sentences. Most existing machine learning approaches suffer from limitations in the modeling of complex linguistic structures across sentences and often fail to capture nonlocal contextual cues that are important for sentiment interpretation. In contrast, our approach allows structured modeling of sentiment while taking into account both local and global contextual information. Specifically, we encode intuitive lexical and discourse knowledge as expressive constraints and integrate them into the learning of conditional random field models via posterior regularization. The context-aware constraints provide additional power to the CRF model and can guide semi-supervised learning when labeled data is limited. Experiments on standard product review datasets show that our method outperforms the state-of-theart methods in both the supervised and semi-supervised settings. 1 Introduction The ability to extract sentiment from text is crucial for many opinion-mining applications such as opinion summarization, opinion question answering and opinion retrieval. Accordingly, extracting sentiment at the fine-grained level (e.g. at the sentence- or phrase-level) has received increasing attention recently due to its challenging nature and its importance in supporting these opinion analysis tasks (Pang and Lee, 2008). In this paper, we focus on the task of sentencelevel sentiment classification in online reviews. Typical approaches to the task employ supervised machine learning algorithms with rich features and take into account the interactions between words to handle compositional effects such as polarity reversal (e.g. (Nakagawa et al., 2010; Socher et al., 2013)). Still, their methods can encounter difficulty when the sentence on its own does not contain strong enough sentiment signals (due to the lack of statistical evidence or the requirement for background knowledge). Consider the following review for example, 1. Hearing the music in real stereo is a true revelation. 2. You can feel that the music is no longer constrained by the mono recording. 3. In fact, it is more like the players are performing on a stage in front of you ... Existing feature-based classifiers may be effective in identifying the positive sentiment of the first sentence due to the use of the word revelation, but they could be less effective in the last two sentences due to the lack of explicit sentiment signals. However, if we examine these sentences within the discourse context, we can see that: the second sentence expresses sentiment towards the same aspect – the music – as the first sentence; the third sentence expands the second sentence with the discourse connective In fact. These discourse-level relations help indicate that sentence 2 and 3 are likely to have positive sentiment as well. The importance of discourse for sentiment analysis has become increasingly recognized. 
Most existing work considers discourse relations between adjacent sentences or clauses and incorporates them as constraints (Kanayama and Nasukawa, 2006; Zhou et al., 2011) or features in classifiers Trivedi and Eisenstein (2013; Lazaridou et al. (2013). Very little work has explored long-distance discourse relations for sentiment analysis. Somasundaran et al. (2008) defines coreference relations on opinion targets and applies them to constrain the polarity of sentences. 325 However, the discourse relations were obtained from fine-grained annotations and implemented as hard constraints on polarity. Obtaining sentiment labels at the fine-grained level is costly. Semi-supervised techniques have been proposed for sentence-level sentiment classification (T¨ackstr¨om and McDonald, 2011a; Qu et al., 2012). However, they rely on a large amount of document-level sentiment labels that may not be naturally available in many domains. In this paper, we propose a sentence-level sentiment classification method that can (1) incorporate rich discourse information at both local and global levels; (2) encode discourse knowledge as soft constraints during learning; (3) make use of unlabeled data to enhance learning. Specifically, we use the Conditional Random Field (CRF) model as the learner for sentence-level sentiment classification, and incorporate rich discourse and lexical knowledge as soft constraints into the learning of CRF parameters via Posterior Regularization (PR) (Ganchev et al., 2010). As a framework for structured learning with constraints, PR has been successfully applied to many structural NLP tasks (Ganchev et al., 2009; Ganchev et al., 2010; Ganchev and Das, 2013). Our work is the first to explore PR for sentiment analysis. Unlike most previous work, we explore a rich set of structural constraints that cannot be naturally encoded in the feature-label form, and show that such constraints can improve the performance of the CRF model. We evaluate our approach on the sentencelevel sentiment classification task using two standard product review datasets. Experimental results show that our model outperforms state-ofthe-art methods in both the supervised and semisupervised settings. We also show that discourse knowledge is highly useful for improving sentence-level sentiment classification. 2 Related Work There has been a large amount of work on sentiment analysis at various levels of granularity (Pang and Lee, 2008). In this paper, we focus on the study of sentence-level sentiment classification. Existing machine learning approaches for the task can be classified based on the use of two ideas. The first idea is to exploit sentiment signals at the sentence level by learning the relevance of sentiment and words while taking into account the context in which they occur: Nakagawa et al. (2010) uses tree-CRF to model word interactions based on dependency tree structures; Choi and Cardie (2008) applies compositional inference rules to handle polarity reversal; Socher et al. (2011) and Socher et al. (2013) compute compositional vector representations for words and phrases and use them as features in a classifier. The second idea is to exploit sentiment signals at the inter-sentential level. Polanyi and Zaenen (2006) argue that discourse structure is important in polarity classification. 
Various attempts have been made to incorporate discourse relations into sentiment analysis: Pang and Lee (2004) explored the consistency of subjectivity between neighboring sentences; Mao and Lebanon (2007),McDonald et al. (2007), and T¨ackstr¨om and McDonald (2011a) developed structured learning models to capture sentiment dependencies between adjacent sentences; Kanayama and Nasukawa (2006) and Zhou et al. (2011) use discourse relations to constrain two text segments to have either the same polarity or opposite polarities; Trivedi and Eisenstein (2013) and Lazaridou et al. (2013) encode the discourse connectors as model features in supervised classifiers. Very little work has explored long-distance discourse relations. Somasundaran et al. (2008) define opinion target relations and apply them to constrain the polarity of text segments annotated with target relations. Recently, Zhang et al. (2013) explored the use of explanatory discourse relations as soft constraints in a Markov Logic Network framework for extracting subjective text segments. Leveraging both ideas, our approach exploits sentiment signals from both intra-sentential and inter-sentential context. It has the advantages of utilizing rich discourse knowledge at different levels of context and encoding it as soft constraints during learning. Our approach is also semi-supervised. Compared to the existing work on semi-supervised learning for sentence-level sentiment classification (T¨ackstr¨om and McDonald, 2011a; T¨ackstr¨om and McDonald, 2011b; Qu et al., 2012), our work does not rely on a large amount of coarse-grained (document-level) labeled data, instead, distant supervision mainly comes from linguisticallymotivated constraints. Our work also relates to the study of posterior regularization (PR) (Ganchev et al., 2010). PR has been successfully applied to many structured NLP 326 tasks such as dependency parsing, information extraction and cross-lingual learning tasks (Ganchev et al., 2009; Bellare et al., 2009; Ganchev et al., 2010; Ganchev and Das, 2013). Most previous work using PR mainly experiments with featurelabel constraints. In contrast, we explore a rich set of linguistically-motivated constraints which cannot be naturally formulated in the feature-label form. We also show that constraints derived from the discourse context can be highly useful for disambiguating sentence-level sentiment. 3 Approach In this section, we present the details of our proposed approach. We formulate the sentence-level sentiment classification task as a sequence labeling problem. The inputs to the model are sentencesegmented documents annotated with sentencelevel sentiment labels (positive, negative or neutral) along with a set of unlabeled documents. During prediction, the model outputs sentiment labels for a sequence of sentences in the test document. We utilize conditional random fields and use Posterior Regularization (PR) to learn their parameters with a rich set of context-aware constraints. In what follows, we first briefly describe the framework of Posterior Regularization. Then we introduce the context-aware constraints derived based on intuitive discourse and lexical knowledge. Finally we describe how to perform learning and inference with these constraints. 3.1 Posterior Regularization PR is a framework for structured learning with constraints (Ganchev et al., 2010). In this work, we apply PR in the context of CRFs for sentencelevel sentiment classification. 
Denote x as a sequence of sentences within a document and y as a vector of sentiment labels associated with x. The CRF models the conditional probability

$$p_\theta(\mathbf{y}\mid\mathbf{x}) = \frac{\exp(\theta \cdot \mathbf{f}(\mathbf{x}, \mathbf{y}))}{Z_\theta(\mathbf{x})}$$

where f(x, y) are the model features, θ are the model parameters, and $Z_\theta(\mathbf{x}) = \sum_{\mathbf{y}} \exp(\theta \cdot \mathbf{f}(\mathbf{x}, \mathbf{y}))$ is a normalization constant. The objective function for a standard CRF is to maximize the log-likelihood over a collection of labeled documents plus a regularization term:

$$\max_\theta L(\theta) = \max_\theta \sum_{(\mathbf{x}, \mathbf{y})} \log p_\theta(\mathbf{y}\mid\mathbf{x}) - \frac{\lVert\theta\rVert_2^2}{2\delta^2}$$

PR makes the assumption that the labeled data we have is not enough for learning good model parameters, but that we have a set of constraints on the posterior distribution of the labels. We can define the set of desirable posterior distributions as

$$Q = \{\, q(\mathbf{Y}) : E_q[\phi(\mathbf{X}, \mathbf{Y})] = \mathbf{b} \,\} \qquad (1)$$

where φ is a constraint function and b is a vector of desired values for the expectations of the constraint functions under the distribution q.[1] Note that the distribution q is defined over a collection of unlabeled documents where the constraint functions apply, and we assume independence between documents. The PR objective can be written as the original model objective penalized with a regularization term, which minimizes the KL-divergence between the desired model posteriors and the learned model posteriors, with an L2 penalty[2] for constraint violations:

$$\max_\theta L(\theta) - \min_{q \in Q} \Big\{ \mathrm{KL}\big(q(\mathbf{Y}) \,\|\, p_\theta(\mathbf{Y}\mid\mathbf{X})\big) + \beta \,\lVert E_q[\phi(\mathbf{X}, \mathbf{Y})] - \mathbf{b} \rVert_2^2 \Big\} \qquad (2)$$

The objective can be optimized by an EM-like scheme that iteratively solves the minimization problem and the maximization problem. Since the objective is convex, solving the minimization problem is equivalent to solving its dual:

$$\arg\max_\lambda \; \lambda \cdot \mathbf{b} - \log Z_\lambda(\mathbf{X}) - \frac{1}{4\beta}\lVert\lambda\rVert_2^2 \qquad (3)$$

We optimize objective (2) using stochastic projected gradient, and compute the learning rate using AdaGrad (Duchi et al., 2010).

3.2 Context-aware Posterior Constraints

We develop a rich set of context-aware posterior constraints for sentence-level sentiment analysis by exploiting lexical and discourse knowledge. Specifically, we construct the lexical constraints by extracting sentiment-bearing patterns within sentences, and construct the discourse-level constraints by extracting discourse relations that indicate sentiment coherence or sentiment changes both within and across sentences. Each constraint is formulated as an equality between the expectation of a constraint function value and a desired value set by prior knowledge. The equality is not strictly enforced (due to the regularization in the PR objective (2)), so all the constraints are applied as soft constraints. Table 1 provides an intuitive description and examples for all the constraints used in our model.

Lexical Patterns. The existence of a polarity-carrying word alone may not correctly indicate the polarity of the sentence, as the polarity can be reversed by other polarity-reversing words. We extract lexical patterns that consist of polar words and negators,[3] and apply the heuristics based on compositional semantics (Choi and Cardie, 2008) to assign a sentiment value to each pattern. We encode the extracted lexical patterns along with their sentiment values as feature-label constraints.

[1] In general, inequality constraints can also be used. We focus on equality constraints since we found them to express the sentiment-relevant constraints well.
[2] Other convex functions can be used for the penalty. We use the L2 norm because it works well in practice; β is a regularization constant.
The constraint function can be written as

$$\phi_w(\mathbf{x}, \mathbf{y}) = \sum_i f_w(x_i, y_i)$$

where $f_w(x_i, y_i)$ is a feature function that has value 1 when sentence $x_i$ contains the lexical pattern w and its sentiment label $y_i$ equals the expected sentiment value, and value 0 otherwise. The constraint expectation value is set to the prior probability of associating w with its sentiment value. Note that sentences with neutral sentiment can also contain such lexical patterns. Therefore we allow the lexical patterns to be assigned a neutral sentiment with a prior probability $r_0$ (we compute this value as the empirical probability of neutral sentiment in the training documents). Using the polarity indicated by lexical patterns to constrain the sentiment of sentences is quite aggressive. Therefore we only consider lexical patterns that are strongly discriminative (many opinion words in the lexicon indicate sentiment with only weak strength). The selected lexical patterns include a handful of seed patterns (such as “pros” and “cons”) and the lexical patterns that have high precision (larger than 0.9) for predicting sentiment in the training data.

[3] The polar words are identified using the MPQA lexicon, and the negators are identified using a handful of seed words extended by the General Inquirer dictionary and WordNet, as described in (Choi and Cardie, 2008).

Discourse Connectives. Lexical patterns are limited in capturing contextual information since they only look at interactions between words within an expression. To capture context at the clause or sentence level, we consider discourse connectives, which are cue phrases or words that indicate discourse relations between adjacent sentences or clauses. To identify discourse connectives, we apply a discourse tagger trained on the Penn Discourse Treebank (Prasad et al., 2008)[4] to our data. Discourse connectives are tagged with four senses: Expansion, Contingency, Comparison and Temporal.

Discourse connectives can operate at both the intra-sentential and the inter-sentential level. For example, the word “although” is often used to connect two polar clauses within a sentence, while the word “however” is often used at the beginning of a sentence to connect two polar sentences. It is important to distinguish these two types of discourse connectives. We consider a discourse connective to be intra-sentential if it has the Comparison sense and connects two polar clauses with opposite polarities (determined by the lexical patterns). We construct a feature-label constraint for each intra-sentential discourse connective and set its expected sentiment value to be neutral.

Unlike the intra-sentential discourse connectives, the inter-sentential discourse connectives can indicate sentiment transitions between sentences. Intuitively, discourse connectives with the senses of Expansion (e.g. also, for example, furthermore) and Contingency (e.g. as a result, hence, because) are likely to indicate sentiment coherence; discourse connectives with the sense of Comparison (e.g. but, however, nevertheless) are likely to indicate sentiment changes. This intuition is reasonable, but it assumes that the two sentences connected by the discourse connective are both polar sentences. In general, discourse connectives can also connect non-polar (neutral) sentences. Thus it is hard to directly constrain the posterior expectation for each type of sentiment transition using inter-sentential discourse connectives. Instead, we impose constraints on the model posteriors by reducing constraint violations. We define the following constraint function:

$$\phi_{c,s}(\mathbf{x}, \mathbf{y}) = \sum_i f_{c,s}(x_i, y_i, y_{i-1})$$

where c denotes a discourse connective, s indicates its sense, and $f_{c,s}$ is a penalty function that takes value 1.0 when $y_i$ and $y_{i-1}$ form a contradictory sentiment transition, that is, $y_i \neq_{polar} y_{i-1}$ if $s \in \{\text{Expansion}, \text{Contingency}\}$, or $y_i =_{polar} y_{i-1}$ if $s = \text{Comparison}$. The desired value for the constraint expectation is set to 0, so that the model is encouraged to have fewer constraint violations.

[4] http://www.cis.upenn.edu/~epitler/discourse.html

Table 1: Summarization of Posterior Constraints for Sentence-level Sentiment Classification. (Columns: constraint type; description and examples; whether the constraint is inter-sentential.)
- Lexical patterns: The sentence containing a polar lexical pattern w tends to have the polarity indicated by w. Example lexical patterns: annoying, hate, amazing, not disappointed, no concerns, favorite, recommend. (inter-sentential: no)
- Discourse connectives (clause): A sentence containing a discourse connective c that connects two of its clauses with opposite polarities (as indicated by the lexical patterns) tends to have neutral sentiment. Example connectives: while, although, though, but. (inter-sentential: no)
- Discourse connectives (sentence): Two adjacent sentences connected by a discourse connective c tend to have the same polarity if c indicates an Expansion or Contingency relation (e.g. also, for example, in fact, because), and opposite polarities if c indicates a Comparison relation (e.g. otherwise, nevertheless, however). (inter-sentential: yes)
- Coreference: Sentences that contain coreferential entities appearing as targets of opinion expressions tend to have the same polarity. (inter-sentential: yes)
- Listing patterns: A series of sentences connected via a listing tend to have the same polarity. (inter-sentential: yes)
- Global labels: The sentence-level polarity tends to be consistent with the document-level polarity. (inter-sentential: yes)

Opinion Coreference. Sentences in a discourse can be linked by many types of coherence relations (Jurafsky et al., 2000). Coreference is one of the commonly used relations in written text. In this work, we explore coreference in the context of sentence-level sentiment analysis. We consider a set of polar sentences to be linked by the opinion coreference relation if they contain coreferring opinion-related entities. For example, the following sentences express opinions towards “the speaker phone”, “The speaker phone” and “it”, respectively. As these opinion targets are coreferential (referring to the same entity, “the speaker phone”), the sentences are linked by the opinion coreference relation.[5]

  My favorite features are the speaker phone and the radio. The speaker phone is very functional. I use it in the car, very audible even with freeway noise.

[5] In general, the opinion-related entities include both the opinion targets and the opinion holders. In this work, we only consider the targets since we experiment with single-author product reviews. The opinion holders can be included in a similar way as the opinion targets.

Our coreference relations indicated by opinion targets overlap with the same target relation introduced in (Somasundaran et al., 2009). The differences are: (1) we encode the coreference relations as soft constraints during learning instead of applying them as hard constraints at inference time; (2) our constraints can apply to both polar and non-polar sentences; (3) our identification of coreference relations is automatic, requiring no fine-grained annotations for opinion targets.
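Both the inter-sentential connective constraint above and the coreference and listing constraints described next are violation counters of this kind. As a concrete illustration, the Python sketch below (our own toy example, not the authors' code) counts connective-based violations $f_{c,s}$ for a candidate labeling; the label strings and the `transitions` input format are assumptions made for the example.

```python
# Illustrative sketch (not the paper's implementation) of the connective-based
# constraint phi_{c,s}: count adjacent-sentence transitions that contradict the
# sense of the connecting discourse connective.

POLAR = {"positive", "negative"}              # assumed label strings
COHERENT_SENSES = {"Expansion", "Contingency"}

def connective_violations(labels, transitions):
    """labels: list of sentence labels ("positive"/"negative"/"neutral").
    transitions: list of (i, sense) pairs meaning sentence i is linked to
    sentence i-1 by an inter-sentential connective of the given sense."""
    violations = 0
    for i, sense in transitions:
        prev, cur = labels[i - 1], labels[i]
        if prev not in POLAR or cur not in POLAR:
            continue                           # neutral sentences are not penalized
        same = (prev == cur)
        if sense in COHERENT_SENSES and not same:
            violations += 1                    # coherence sense but polarity flips
        elif sense == "Comparison" and same:
            violations += 1                    # contrast sense but polarity repeats
    return violations

# Example: sentence 1 linked by "also" (Expansion), sentence 2 by "however" (Comparison).
labels = ["positive", "positive", "negative"]
transitions = [(1, "Expansion"), (2, "Comparison")]
print(connective_violations(labels, transitions))  # -> 0 (no contradictory transitions)
```

Since the desired expectation of this count is 0, labelings with fewer such contradictions are preferred under the regularized posterior.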
Instead, we impose constraints on the model posteriors by reducing constraint violations. We 4http://www.cis.upenn.edu/˜epitler/ discourse.html 328 Types Description and Examples Inter-sentential Lexical patterns The sentence containing a polar lexical pattern w tends to have the polarity indicated by w. Example lexical patterns are annoying, hate, amazing, not disappointed, no concerns, favorite, recommend. Discourse Connectives (clause) The sentence containing a discourse connective c which connects its two clauses that have opposite polarities indicated by the lexical patterns tends to have neutral sentiment. Example connectives are while, although, though, but. Discourse Connectives (sentence) Two adjacent sentences which are connected by a discourse connective c tends to have the same polarity if c indicates a Expansion or Contingency relation, e.g. also, for example, in fact, because ; opposite polarities if c indicates a Comparison relation, e.g. otherwise, nevertheless, however. ✓ Coreference The sentences which contain coreferential entities appeared as targets of opinion expressions tend to have the same polarity. ✓ Listing patterns A series of sentences connected via a listing tend to have the same polarity. ✓ Global labels The sentence-level polarity tends to be consistent with the document-level polarity. ✓ Table 1: Summarization of Posterior Constraints for Sentence-level Sentiment Classification define the following constraint function: φc,s(x, y) = X i fc,s(xi, yi, yi−1) where c denotes a discourse connective, s indicates its sense, and fc,s is a penalty function that takes value 1.0 when yi and yi−1 form a contradictory sentiment transition, that is, yi ̸=polar yi−1 if s ∈{Expansion, Contingency}, or yi =polar yi−1 if s = Comparison. The desired value for the constraint expectation is set to 0 so that the model is encouraged to have less constraint violations. Opinion Coreference Sentences in a discourse can be linked by many types of coherence relations (Jurafsky et al., 2000). Coreference is one of the commonly used relations in written text. In this work, we explore coreference in the context of sentence-level sentiment analysis. We consider a set of polar sentences to be linked by the opinion coreference relation if they contain coreferring opinion-related entities. For example, the following sentences express opinions towards “the speaker phone”, “The speaker phone” and “it” respectively. As these opinion targets are coreferential (referring to the same entity “the speaker phone”), they are linked by the opinion coreference relation 5. My favorite features are the speaker phone and the radio. The speaker phone is very functional. I use it in the car, very audible even with freeway noise. 5In general, the opinion-related entities include both the opinion targets and the opinion holders. In this work, we only consider the targets since we experiment with singleauthor product reviews. The opinion holders can be included in a similar way as the opinion targets. Our coreference relations indicated by opinion targets overlap with the same target relation introduced in (Somasundaran et al., 2009). The differences are: (1) we encode the coreference relations as soft constraints during learning instead of applying them as hard constraints during inference time; (2) our constraints can apply to both polar and non-polar sentences; (3) our identification of coreference relations is automatic without any fine-grained annotations for opinion targets. 
To extract coreferential opinion targets, we apply Stanford’s coreference system (Lee et al., 2013) to extract coreferential mentions in the document, and then apply a set of syntactic rules to identify opinion targets from the extracted mentions. The syntactic rules correspond to the shortest dependency paths between an opinion word and an extracted mention. We consider the 10 most frequent dependency paths in the training data. Example dependency paths include nsubj(opinion, mention), nobj(opinion, mention), and amod(mention, opinion). For sentences connected by the opinion coreference relation, we expect their sentiment to be consistent. To encode this intuition, we define the following constraint function: φcoref(x, y) = X i,ant(i)=j,j≥0 fcoref(xi, xj, yi, yj) where ant(i) denotes the index of the sentence which contains an antecedent target of the target mentioned in sentence i (the antecedent relations over pairs of opinion targets can be constructed using the coreference resolver), and fcoref is a penalty function which takes value 1.0 when the expected sentiment coherency is violated, that is, yi ̸=polar yj. Similar to the inter-sentential dis329 course connectives, modeling opinion coreference via constraint violations allows the model to handle neutral sentiment. The expected value of the constraint functions is set to 0. Listing Patterns Another type of coherence relations we observe in online reviews is listing, where a reviewer expresses his/her opinions by listing a series of statements followed by a sequence of numbers. For example, “1. It’s smaller than the ipod mini .... 2. It has a removable battery ....”. We expect sentences connected by a listing to have consistent sentiment. We implement this constraint in the same form as the coreference constraint (the antecedent assignments are constructed from the numberings). Global Sentiment Previous studies have demonstrated the value of document-level sentiment in guiding the semi-supervised learning of sentence-level sentiment (T¨ackstr¨om and McDonald, 2011b; Qu et al., 2012). In this work, we also take into account this information and encode it as posterior constraints. Note that these constraints are not necessary for our model and can be applied when the document-level sentiment labels are naturally available. Based on an analysis of the Amazon review data, we observe that sentence-level sentiment usually doesn’t conflict with the document-level sentiment in terms of polarity. For example, the proportion of negative sentences in the positive documents is very small compared to the proportion of positive sentences. To encode this intuition, we define the following constraint function: φg(x, y) = n X i δ(yi ̸=polar g)/n where g ∈{positive, negative} denotes the sentiment value of a polar document, n is the total number of sentences in x, and δ is an indicator function. We hope the expectation of the constraint function takes a small value. In our experiments, we set the expected value to be the empirical estimate of the probability of “conflicting” sentiment in polar documents using the training data. 3.3 Training and Inference During training, we need to compute the constraint expectations and the feature expectations under the auxiliary distribution q at each gradient step. We can derive q by solving the dual problem in 3: q(y|x) = exp(θ · f(x, y) + λ · φ(x, y)) Zλ,θ(X) (4) where Zλ,θ(X) is a normalization constant. 
Most of our constraints can be factorized in the same way as factorizing the model features in the firstorder CRF model, and we can compute the expectations under q very efficiently using the forwardbackward algorithm. However, some of our discourse constraints (opinion coreference and listing) can break the tractable structure of the model. For constraints with higher-order structures, we use Gibbs Sampling (Geman and Geman, 1984) to approximate the expectations. Given a sequence x, we sample a label yi at each position i by computing the unnormalized conditional probabilities p(yi = l|y−i) ∝exp(θ · f(x, yi = l, y−i) + λ · φ(x, yi = l, y−i)) and renormalizing them. Since the possible label assignments only differ at position i, we can make the computation efficient by maintaining the structure of the coreference clusters and precomputing the constraint function for different types of violations. During inference, we find the best label assignment by computing arg maxy q(y|x). For documents where the higher-order constraints apply, we use the same Gibbs sampler as described above to infer the most likely label assignment, otherwise, we use the Viterbi algorithm. 4 Experiments We experimented with two product review datasets for sentence-level sentiment classification: the Customer Review (CR) data (Hu and Liu, 2004)6 which contains 638 reviews of 14 products such as cameras and cell phones, and the Multi-domain Amazon (MD) data from the test set of T¨ackstr¨om and McDonald (2011a) which contains 294 reivews from 5 different domains. As in Qu et al. (2012), we chose the books, electronics and music domains for evaluation. Each domain also comes with 33,000 extra reviews with only document-level sentiment labels. We evaluated our method in two settings: supervised and semi-supervised. In the supervised setting, we treated the test data as unlabeled data and performed transductive learning. In the semisupervised setting, our unlabeled data consists of 6Available at http://www.cs.uic.edu/˜liub/ FBS/sentiment-analysis.html. 330 both the available unlabeled data and the test data. For each domain in the MD dataset, we made use of no more than 100 unlabeled documents in which our posterior constraints apply. We adopted the evaluation schemes used in previous work: 10fold cross validation for the CR dataset and 3-fold cross validation for the MD dataset. We also report both two-way classification (positive vs. negative) and three-way classification results (positive, negative or neutral). We use accuracy as the performance measure. In our tables, boldface numbers are statistically significant by paired t-test for p < 0.05 against the best baseline developed in this paper 7. We trained our model using a CRF incorporated with the proposed posterior constraints. For the CRF features, we include the tokens, the partof-speech tags, the prior polarities of lexical patterns indicated by the opinion lexicon and the negator lexicon, the number of positive and negative tokens and the output of the vote-flip algorithm (Choi and Cardie, 2009). In addition, we include the discourse connectives as local or transition features and the document-level sentiment labels as features (only available in the MD dataset). We set the CRF regularization parameter σ = 1 and set the posterior regularization parameter β and γ (a trade-off parameter we introduce to balance the supervised objective and the posterior regularizer in 2) by using grid search 8. 
For approximation inference with higher-order constraints, we perform 2000 Gibbs sampling iterations where the first 1000 iterations are burn-in iterations. To make the results more stable, we construct three Markov chains that run in parallel, and select the sample with the largest objective value. All posterior constraints were developed using the training data on each training fold. For the MD dataset, we also used the dvd domain as additional labeled data for developing the constraints. Baselines. We compared our method to a number of baselines: (1) CRF: CRF with the same set of model features as in our method. (2) CRFINF: CRF augmented with inference constraints. We can incorporate the proposed constraints (constraints derived from lexical patterns and discourse connectives) as hard constraints into CRF during 7Significance test was not conducted over the previous methods as we do not have their results for each fold. 8We conducted 10-fold cross-validation on each training fold with the parameter space: β : [0.01, 0.05, 0.1, 0.5, 1.0] and γ : [0.1, 0.5, 1.0, 5.0, 10.0]. Methods CR MD CRF 81.1 67.0 CRF-inflex 80.9 66.4 CRF-infdisc 81.1 67.2 PRlex 81.8 69.7 PR 82.7 70.6 Previous work TreeCRF (Nakagawa et al., 2010) 81.4 Dropout LR (Wang and Manning, 2013) 82.1 Table 2: Accuracy results (%) for supervised sentiment classification (two-way) Books Electronics Music Avg VoteFlip 44.6 45.0 47.8 45.8 DocOracle 53.6 50.5 63.0 55.7 CRF 57.4 57.5 61.8 58.9 CRF-inflex 56.7 56.4 60.4 57.8 CRF-infdisc 57.2 57.6 62.1 59.0 PRlex 60.3 59.9 63.2 61.1 PR 61.6 61.0 64.4 62.3 Previous work HCRF 55.9 61.0 58.7 58.5 MEM 59.7 59.6 63.8 61.0 Table 3: Accuracy results (%) for semi-supervised sentiment classification (three-way) on the MD dataset inference by manually setting λ in equation 4 to a large value,9. When λ is large enough, it is equivalent to adding hard constraints to the viterbi inference. To better understand the different effects of lexical and discourse constraints, we report results for applying only the lexical constraints (CRFINFlex) as well as results for applying only the discourse constraints (CRF-INFdisc). (3) PRlex: a variant of our PR model which only applies the lexical constraints. For the three-way classification task on the MD dataset, we also implemented the following baselines: (4) VOTEFLIP: a rulebased algorithm that leverages the positive, negative and neutral cues along with the effect of negation to determine the sentence sentiment (Choi and Cardie, 2009). (5) DOCORACLE: assigns each sentence the label of its corresponding document. 4.1 Results We first report results on a binary (positive or negative) sentence-level sentiment classification task. For this task, we used the supervised setting and performed transductive learning for our model. Table 2 shows the accuracy results. 
We can see 9We set λ to 1000 for the lexical constraints and -1000 to the discourse connective constraints in the experiments 331 Books Electronics Music pos/neg/neu pos/neg/neu pos/neg/neu VoteFlip 43/42/47 45/46/44 50/46/46 DocOracle 54/60/49 57/54/42 72/65/52 CRF 47/51/64 60/61/52 67/60/58 CRF-inflex 46/52/63 59/61/50 65/59/57 CRF-infdisc 47/51/64 60/61/52 67/61/59 PRlex 50/56/66 64/63/53 67/64/59 PR 52/56/68 64/66/53 69/65/60 Table 4: F1 scores for each sentiment category (positive, negative and neutral) for semisupervised sentiment classification on the MD dataset that PR significantly outperforms all other baselines in both the CR dataset and the MD dataset (average accuracy across domains is reported). The poor performance of CRF-INFlex indicates that directly applying lexical constraints as hard constraints during inference could only hurt the performance. CRF-INFdisc slightly outperforms CRF but the improvement is not significant. In contrast, both PRlex and PR significantly outperform CRF, which implies that incorporating lexical and discourse constraints as posterior constraints is much more effective. The superior performance of PR over PRlex further suggests that the proper use of discourse information can significantly improve accuracy for sentence-level sentiment classification. We also analyzed the model’s performance on a three-way sentiment classification task. By introducing the “neutral” category, the sentiment classification problem becomes harder. Table 4 shows the results in terms of accuracy for each domain in the MD dataset. We can see that both PR and PRlex significantly outperform all other baselines in all domains. The rule-based baseline VOTEFLIP gave the weakest performance because it has no prediction power on sentences with no opinion words. DOCORACLE performs much better than VOTEFLIP and performs especially well on the Music domain. This indicates that the documentlevel sentiment is a very strong indicator of the sentence-level sentiment label. For the CRF baseline and its invariants, we observe a similar performance trend as in the two-way classification task: there is nearly no performance improvement from applying the lexical and discourseconnective-based constraints during CRF inference. In contrast, both PRlex and PR provide substantial improvements over CRF. This confirms that encoding lexical and discourse knowledge as posterior constraints allows the featurebased model to gain additional learning power for sentence-level sentiment prediction. In particular, incorporating discourse constraints leads to consistent improvements to our model. This demonstrates that our modeling of discourse information is effective and that taking into account the discourse context is important for improving sentence-level sentiment analysis. We also compare our results to the previously published results on the same dataset. HCRF (T¨ackstr¨om and McDonald, 2011a) and MEM (Qu et al., 2012) are two state-of-the-art semi-supervised methods for sentence-level sentiment classification. We can see that our best model PR gives the best results in most categories. Table 4 shows the results in terms of F1 scores for each sentiment category (positive, negative and neutral). We can see that the PR models are able to provide improvements over all the sentiment categories compared to all the baselines in general. 
We observe that the DOCORACLE baseline provides very strong F1 scores on the positive and negative categories especially in the Books and Music domains, but very poor F1 on the neutral category. This is because it over-predicts the polar sentences in the polar documents, and predicts no polar sentences in the neutral documents. In contrast, our PR models provide more balanced F1 scores among all the sentiment categories. Compared to the CRF baseline and its variants, we found that the PR models can greatly improve the precision of predicting positive and negative sentences, resulting in a significant improvement on the positive/negative F1 scores. However, the improvement on the neutral category is modest. A plausible explanation is that most of our constraints focus on discriminating polar sentences. They can help reduce the errors of misclassifying polar sentences, but the model needs more constraints in order to distinguish neutral sentences from polar sentences. We plan to address this issue in future work. 4.2 Discussion We analyze the errors to better understand the merits and limitations of the PR model. We found that the PR model is able to correct many CRF errors caused by the lack of labeled data. The first row in Table 5 shows an example of such errors. 332 Example Sentences CRF PR Example 1: ⟨neg⟩If I could, I would like to return it or exchange for something better.⟨/neg⟩ ⟨neu⟩× ✓ Example 2: ⟨neg⟩Things I wasn’t a fan of – the ending was to cutesy for my taste.⟨/neg⟩⟨neg⟩Also, all of the side characters (particularly the mom, vee, and the teacher) were incredibly flat and stereotypical to me.⟨/neg⟩ ⟨neu⟩⟨pos⟩× ✓ Example 3: ⟨neg⟩I also have excessive noise when I talk and have phone in my pocket while walking.⟨/neg⟩⟨neu⟩But other models are no better.⟨/neu⟩ ⟨neg⟩⟨pos⟩× ⟨neg⟩⟨pos⟩× Table 5: Example sentences where PR succeeds and fails to correct the mistakes of CRF The lexical features return and exchange may be good indicators of negative sentiment for the sentence. However, with limited labeled data, the CRF learner can only associate very weak sentiment signals to these features. In contrast, the PR model is able to associate stronger sentiment signals to these features by leveraging unlabeled data for indirect supervision. A simple lexicon-based constraint during inference time may also correct this case. However, hard-constraint baselines can hardly improve the performance in general because the contributions of different constraints are not learned and their combination may not lead to better predictions. This is also demonstrated by the limited performance of CRF-INF in our experiments. We also found that the discourse constraints play an important role in improving the sentiment prediction. The lexical constraints alone are often not sufficient since their coverage is limited by the sentiment lexicon and they can only constrain sentiment locally. On the contrary, discourse constraints are not dependent on sentiment lexicons, and more importantly, they can provide sentiment preferences on multiple sentences at the same time. When combining discourse constraints with features from different sentences, the PR model becomes more powerful in disambiguating sentiment. The second example in Table 5 shows that the PR model learned with discourse constraints correctly predicts the sentiment of two sentences where no lexical constraints apply. However, discourse constraints are not always helpful. One reason is that they do not constrain the neutral sentiment. 
As a result they could not help disambiguate neutral sentiment from polar sentiment, such as the third example in Table 5. This is also a problem for most of our lexical constraints. In general, it is hard to learn reliable indicators for the neutral sentiment. In the MD dataset, a neutral label may be given because the sentence contains mixed sentiment or no sentiment or it is off-topic. We plan to explore more refined constraints that can deal with the neutral sentiment in future work. Another limitation of the discourse constraints is that they could be affected by the errors of the discourse parser and the coreference resolver. A potential way to address this issue is to learn discourse constraints jointly with sentiment. We plan to study this in future research. 5 Conclusion In this paper, we propose a context-aware approach for learning sentence-level sentiment. Our approach incorporates intuitive lexical and discourse knowledge as expressive constraints while training a conditional random field model via posterior regularization. We explore a rich set of context-aware constraints at both intra- and intersentential levels, and demonstrate their effectiveness in the analysis of sentence-level sentiment. While we focus on the sentence-level task, our approach can be easily extended to handle sentiment analysis at finer levels of granularity. Our experiments show that our model achieves better accuracy than existing supervised and semi-supervised models for the sentence-level sentiment classification task. Acknowledgments This work was supported in part by DARPA-BAA12-47 DEFT grant #12475008 and NSF grant BCS-0904822. We thank Igor Labutov for helpful discussion and suggestions; Oscar T¨ackstr¨om and Lizhen Qu for providing their Amazon review datasets; and the anonymous reviewers for helpful comments and suggestions. References Kedar Bellare, Gregory Druck, and Andrew McCallum. 2009. Alternating projections for learning 333 with expectation constraints. In Proceedings of the Twenty-Fifth Conference on Uncertainty in Artificial Intelligence, pages 43–50. AUAI Press. Yejin Choi and Claire Cardie. 2008. Learning with compositional semantics as structural inference for subsentential sentiment analysis. In Proceedings of the Conference on Empirical Methods in Natural Language Processing, pages 793–801. Association for Computational Linguistics. Yejin Choi and Claire Cardie. 2009. Adapting a polarity lexicon using integer linear programming for domain-specific sentiment classification. In Proceedings of the 2009 Conference on Empirical Methods in Natural Language Processing: Volume 2Volume 2, pages 590–598. Association for Computational Linguistics. John Duchi, Elad Hazan, and Yoram Singer. 2010. Adaptive subgradient methods for online learning and stochastic optimization. Journal of Machine Learning Research, 12:2121–2159. Kuzman Ganchev and Dipanjan Das. 2013. Crosslingual discriminative learning of sequence models with posterior regularization. Kuzman Ganchev, Jennifer Gillenwater, and Ben Taskar. 2009. Dependency grammar induction via bitext projection constraints. In Proceedings of the ACL-IJCNLP, pages 369–377. Kuzman Ganchev, Joao Grac¸a, Jennifer Gillenwater, and Ben Taskar. 2010. Posterior regularization for structured latent variable models. The Journal of Machine Learning Research, 99:2001–2049. Stuart Geman and Donald Geman. 1984. Stochastic relaxation, gibbs distributions, and the bayesian restoration of images. 
Pattern Analysis and Machine Intelligence, IEEE Transactions on, (6):721–741. Minqing Hu and Bing Liu. 2004. Mining and summarizing customer reviews. In Proceedings of the tenth ACM SIGKDD international conference on Knowledge discovery and data mining, pages 168–177. ACM. Dan Jurafsky, James H Martin, Andrew Kehler, Keith Vander Linden, and Nigel Ward. 2000. Speech and language processing: An introduction to natural language processing, computational linguistics, and speech recognition, volume 2. MIT Press. Hiroshi Kanayama and Tetsuya Nasukawa. 2006. Fully automatic lexicon expansion for domainoriented sentiment analysis. In Proceedings of the 2006 Conference on Empirical Methods in Natural Language Processing, pages 355–363. Association for Computational Linguistics. Angeliki Lazaridou, Ivan Titov, and Caroline Sporleder. 2013. A bayesian model for joint unsupervised induction of sentiment, aspect and discourse representations. In To Appear in Proceedings of the 51th Annual Meeting of the Association for Computational Linguistics, Sofia, Bulgaria, August. Association for Computational Linguistics. Heeyoung Lee, Angel Chang, Yves Peirsman, Nathanael Chambers, Mihai Surdeanu, and Dan Jurafsky. 2013. Deterministic coreference resolution based on entity-centric, precision-ranked rules. Yi Mao and Guy Lebanon. 2007. Isotonic conditional random fields and local sentiment flow. Advances in neural information processing systems, 19:961. Ryan McDonald, Kerry Hannan, Tyler Neylon, Mike Wells, and Jeff Reynar. 2007. Structured models for fine-to-coarse sentiment analysis. In Annual Meeting-Association For Computational Linguistics, volume 45, page 432. Tetsuji Nakagawa, Kentaro Inui, and Sadao Kurohashi. 2010. Dependency tree-based sentiment classification using crfs with hidden variables. In Human Language Technologies: The 2010 Annual Conference of the North American Chapter of the Association for Computational Linguistics, pages 786–794. Association for Computational Linguistics. Bo Pang and Lillian Lee. 2004. A sentimental education: Sentiment analysis using subjectivity summarization based on minimum cuts. In Proceedings of the 42nd annual meeting on Association for Computational Linguistics, page 271. Association for Computational Linguistics. Bo Pang and Lillian Lee. 2008. Opinion mining and sentiment analysis. Now Pub. Livia Polanyi and Annie Zaenen. 2006. Contextual valence shifters. In Computing attitude and affect in text: Theory and applications, pages 1–10. Springer. Rashmi Prasad, Nikhil Dinesh, Alan Lee, Eleni Miltsakaki, Livio Robaldo, Aravind K Joshi, and Bonnie L Webber. 2008. The penn discourse treebank 2.0. In LREC. Citeseer. Lizhen Qu, Rainer Gemulla, and Gerhard Weikum. 2012. A weakly supervised model for sentence-level semantic orientation analysis with multiple experts. In Proceedings of the 2012 Joint Conference on Empirical Methods in Natural Language Processing and Computational Natural Language Learning, pages 149–159. Association for Computational Linguistics. Richard Socher, Jeffrey Pennington, Eric H Huang, Andrew Y Ng, and Christopher D Manning. 2011. Semi-supervised recursive autoencoders for predicting sentiment distributions. In Proceedings of the Conference on Empirical Methods in Natural Language Processing, pages 151–161. Association for Computational Linguistics. 334 Richard Socher, Alex Perelygin, Jean Y Wu, Jason Chuang, Christopher D Manning, Andrew Y Ng, and Christopher Potts. 2013. 
Recursive deep models for semantic compositionality over a sentiment treebank. In Proceedings of EMNLP. Swapna Somasundaran, Janyce Wiebe, and Josef Ruppenhofer. 2008. Discourse level opinion interpretation. In Proceedings of the 22nd International Conference on Computational Linguistics-Volume 1, pages 801–808. Association for Computational Linguistics. Swapna Somasundaran, Galileo Namata, Janyce Wiebe, and Lise Getoor. 2009. Supervised and unsupervised methods in employing discourse relations for improving opinion polarity classification. In Proceedings of the 2009 Conference on Empirical Methods in Natural Language Processing: Volume 1-Volume 1, pages 170–179. Association for Computational Linguistics. Oscar T¨ackstr¨om and Ryan McDonald. 2011a. Discovering fine-grained sentiment with latent variable structured prediction models. In Advances in Information Retrieval, pages 368–374. Springer. Oscar T¨ackstr¨om and Ryan McDonald. 2011b. Semisupervised latent variable models for sentence-level sentiment analysis. Rakshit Trivedi and Jacob Eisenstein. 2013. Discourse connectors for latent subjectivity in sentiment analysis. In Proceedings of NAACL-HLT, pages 808–813. Sida Wang and Christopher Manning. 2013. Fast dropout training. In Proceedings of the 30th International Conference on Machine Learning (ICML13), pages 118–126. Qi Zhang, Jin Qian, Huan Chen, Jihua Kang, and Xuanjing Huang. 2013. Discourse level explanatory relation extraction from product reviews using firstorder logic. Lanjun Zhou, Binyang Li, Wei Gao, Zhongyu Wei, and Kam-Fai Wong. 2011. Unsupervised discovery of discourse relations for eliminating intra-sentence polarity ambiguities. In Proceedings of the Conference on Empirical Methods in Natural Language Processing, pages 162–171. Association for Computational Linguistics. 335
Proceedings of the 52nd Annual Meeting of the Association for Computational Linguistics, pages 336–346, Baltimore, Maryland, USA, June 23-25 2014. c⃝2014 Association for Computational Linguistics Product Feature Mining: Semantic Clues versus Syntactic Constituents Liheng Xu, Kang Liu, Siwei Lai and Jun Zhao National Laboratory of Pattern Recognition Institute of Automation, Chinese Academy of Sciences, Beijing, 100190, China {lhxu, kliu, swlai, jzhao}@nlpr.ia.ac.cn Abstract Product feature mining is a key subtask in fine-grained opinion mining. Previous works often use syntax constituents in this task. However, syntax-based methods can only use discrete contextual information, which may suffer from data sparsity. This paper proposes a novel product feature mining method which leverages lexical and contextual semantic clues. Lexical semantic clue verifies whether a candidate term is related to the target product, and contextual semantic clue serves as a soft pattern miner to find candidates, which exploits semantics of each word in context so as to alleviate the data sparsity problem. We build a semantic similarity graph to encode lexical semantic clue, and employ a convolutional neural model to capture contextual semantic clue. Then Label Propagation is applied to combine both semantic clues. Experimental results show that our semantics-based method significantly outperforms conventional syntaxbased approaches, which not only mines product features more accurately, but also extracts more infrequent product features. 1 Introduction In recent years, opinion mining has helped customers a lot to make informed purchase decisions. However, with the rapid growth of e-commerce, customers are no longer satisfied with the overall opinion ratings provided by traditional sentiment analysis systems. The detailed functions or attributes of products, which are called product features, receive more attention. Nevertheless, a product may have thousands of features, which makes it impractical for a customer to investigate them all. Therefore, mining product features automatically from online reviews is shown to be a key step for opinion summarization (Hu and Liu, 2004; Qiu et al., 2009) and fine-grained sentiment analysis (Jiang et al., 2011; Li et al., 2012). Previous works often mine product features via syntactic constituent matching (Popescu and Etzioni, 2005; Qiu et al., 2009; Zhang et al., 2010). The basic idea is that reviewers tend to comment on product features in similar syntactic structures. Therefore, it is natural to mine product features by using syntactic patterns. For example, in Figure 1, the upper box shows a dependency tree produced by Stanford Parser (de Marneffe et al., 2006), and the lower box shows a common syntactic pattern from (Zhang et al., 2010), where <feature/NN> is a wildcard to be fit in reviews and NN denotes the required POS tag of the wildcard. Usually, the product name mp3 is specified, and when screen matches the wildcard, it is likely to be a product feature of mp3. Figure 1: An example of syntax-based product feature mining procedure. The word screen matches the wildcard <feature/NN>. Therefore, screen is likely to be a product feature of mp3. Generally, such syntactic patterns extract product features well but they still have some limitations. For example, the product-have-feature pattern may fail to find the fm tuner in a very similar case in Example 1(a), where the product is mentioned by using player instead of mp3. 
Similarly, it may also fail on Example 1(b), just with have replaced by support. In essence, syntactic pattern is 336 a kind of one-hot representation for encoding the contexts, which can only use partial and discrete features, such as some key words (e.g., have) or shallow information (e.g., POS tags). Therefore, such a representation often suffers from the data sparsity problem (Turian et al., 2010). One possible solution for this problem is using a more general pattern such as NP-VB-feature, where NP represents a noun or noun phrase and VB stands for any verb. However, this pattern becomes too general that it may find many irrelevant cases such as the one in Example 1(c), which is not talking about the product. Consequently, it is very difficult for a pattern designer to balance between precision and generalization. Example 1: (a) This player has an :: fm::::: tuner. (b) This mp3 supports :::: wma::: file. (c) This review has helped ::::: people a lot. (d) This mp3 has some::::: flaws. To solve the problems stated above, it is argued that deeper semantics of contexts shall be exploited. For example, we can try to automatically discover that the verb have indicates a part-whole relation (Zhang et al., 2010) and support indicates a product-function relation, so that both sth. have and sth. support suggest that terms following them are product features, where sth. can be replaced by any terms that refer to the target product (e.g., mp3, player, etc.). This is called contextual semantic clue. Nevertheless, only using contexts is not sufficient enough. As in Example 1(d), we can see that the word flaws follows mp3 have, but it is not a product feature. Thus, a noise term may be extracted even with high contextual support. Therefore, we shall also verify whether a candidate is really related to the target product. We call it lexical semantic clue. This paper proposes a novel bootstrapping approach for product feature mining, which leverages both semantic clues discussed above. Firstly, some reliable product feature seeds are automatically extracted. Then, based on the assumption that terms that are more semantically similar to the seeds are more likely to be product features, a graph which measures semantic similarities between terms is built to capture lexical semantic clue. At the same time, a semi-supervised convolutional neural model (Collobert et al., 2011) is employed to encode contextual semantic clue. Finally, the two kinds of semantic clues are combined by a Label Propagation algorithm. In the proposed method, words are represented by continuous vectors, which capture latent semantic factors of the words (Turian et al., 2010). The vectors can be unsupervisedly trained on large scale corpora, and words with similar semantics will have similar vectors. This enables our method to be less sensitive to lexicon change, so that the data sparsity problem can be alleviated . The contributions of this paper include: • It uses semantics of words to encode contextual clues, which exploits deeper level information than syntactic constituents. As a result, it mines product features more accurately than syntaxbased methods. • It exploits semantic similarity between words to capture lexical clues, which is shown to be more effective than co-occurrence relation between words and syntactic patterns. In addition, experiments show that the semantic similarity has the advantage of mining infrequent product features, which is crucial for this task. 
For example, one may say “This hotel has low water pressure”, where low water pressure is seldom mentioned, but fatal to someone’s taste. • We compare the proposed semantics-based approach with three state-of-the-art syntax-based methods. Experiments show that our method achieves significantly better results. The rest of this paper is organized as follows. Section 2 introduces related work. Section 3 describes the proposed method in details. Section 4 gives the experimental results. Lastly, we conclude this paper in Section 5. 2 Related Work In product feature mining task, Hu and Liu (2004) proposed a pioneer research. However, the association rules they used may potentially introduce many noise terms. Based on the observation that product features are often commented on by similar syntactic structures, it is natural to use patterns to capture common syntactic constituents around product features. Popescu and Etzioni (2005) designed some syntactic patterns to search for product feature candidates and then used Pointwise Mutual Information (PMI) to remove noise terms. Qiu et al. (2009) proposed eight heuristic syntactic rules to jointly extract product features and sentiment lexicons, where a bootstrapping algorithm named Double 337 Propagation was applied to expand a given seed set. Zhang et al. (2010) improved Qiu’s work by adding more feasible syntactic patterns, and the HITS algorithm (Kleinberg, 1999) was employed to rank candidates. Moghaddam and Ester (2010) extracted product features by automatical opinion pattern mining. Zhuang et al. (2006) used various syntactic templates from an annotated movie corpus and applied them to supervised movie feature extraction. Wu et al. (2009) proposed a phrase level dependency parsing for mining aspects and features of products. As discussed in the first section, syntactic patterns often suffer from data sparsity. Furthermore, most pattern-based methods rely on term frequency, which have the limitation of finding infrequent but important product features. A recent research (Xu et al., 2013) extracted infrequent product features by a semi-supervised classifier, which used word-syntactic pattern co-occurrence statistics as features for the classifier. However, this kind of feature is still sparse for infrequent candidates. Our method adopts a semantic word representation model, which can train dense features unsupervisedly on a very large corpus. Thus, the data sparsity problem can be alleviated. 3 The Proposed Method We propose a semantics-based bootstrapping method for product feature mining. Firstly, some product feature seeds are automatically extracted. Then, a semantic similarity graph is created to capture lexical semantic clue, and a Convolutional Neural Network (CNN) (Collobert et al., 2011) is trained in each bootstrapping iteration to encode contextual semantic clue. Finally we use Label Propagation to find some reliable new seeds for the training of the next bootstrapping iteration. 3.1 Automatic Seed Generation The seed set consists of positive labeled examples (i.e. product features) and negative labeled examples (i.e. noise terms). Intuitively, popular product features are frequently mentioned in reviews, so they can be extracted by simply mining frequently occurring nouns (Hu and Liu, 2004). However, this strategy will also find many noise terms (e.g., commonly used nouns like thing, one, etc.). 
To produce high quality seeds, we employ a Domain Relevance Measure (DRM) (Jiang and Tan, 2010), which combines term frequency with a domain-specific measuring metric called the Likelihood Ratio Test (LRT) (Dunning, 1993). Let λ(t) denote the LRT score of a product feature candidate t:

λ(t) = \frac{p^{k_1}(1-p)^{n_1-k_1} \, p^{k_2}(1-p)^{n_2-k_2}}{p_1^{k_1}(1-p_1)^{n_1-k_1} \, p_2^{k_2}(1-p_2)^{n_2-k_2}}  (1)

where k_1 and k_2 are the frequencies of t in the review corpus R and in a background corpus B (Google-n-Gram, http://books.google.com/ngrams, is used as the background corpus), n_1 and n_2 are the total numbers of terms in R and B, p = (k_1 + k_2)/(n_1 + n_2), p_1 = k_1/n_1 and p_2 = k_2/n_2. A modified DRM is then proposed (the df(t) part of the original DRM is slightly modified because we want a tf × idf-like scheme (Liu et al., 2012)):

DRM(t) = \frac{tf(t)}{\max[tf(\cdot)]} \times \frac{1}{\log df(t)} \times \frac{|\log λ(t)| - \min|\log λ(\cdot)|}{\max|\log λ(\cdot)| - \min|\log λ(\cdot)|}  (2)

where tf(t) is the frequency of t in R and df(t) is the frequency of t in B. All nouns in R are ranked by DRM(t) in descending order, and the top N nouns are taken as the positive example set V_s^+. On the other hand, Xu et al. (2013) show that a set of general nouns seldom appear as product features. Therefore, we employ their General Noun Corpus to create the negative example set V_s^-, from which the N most frequent terms are selected. It is also guaranteed that V_s^+ ∩ V_s^- = ∅, i.e., conflicting terms are taken as negative examples.
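As an illustration of the seed-scoring step above, the sketch below computes the LRT score of Equation (1) in log space and the DRM score of Equation (2) from raw frequency counts. It is only a sketch under our own assumptions: the count dictionaries tf and df, the corpus sizes, the log-space formulation and the small clamping constant are hypothetical implementation choices, not part of the original method.

import math

def log_lrt(k1, n1, k2, n2):
    """log of the likelihood ratio lambda(t) in Equation (1).

    k1/n1: frequency of t and total term count in the review corpus R;
    k2/n2: the same quantities in the background corpus B.
    """
    p = (k1 + k2) / (n1 + n2)
    p1, p2 = k1 / n1, k2 / n2

    def ll(k, n, q):
        # binomial log-likelihood; clamp q away from 0/1 to avoid log(0)
        q = min(max(q, 1e-12), 1 - 1e-12)
        return k * math.log(q) + (n - k) * math.log(1 - q)

    return (ll(k1, n1, p) + ll(k2, n2, p)) - (ll(k1, n1, p1) + ll(k2, n2, p2))

def drm_scores(tf, df, n1, n2):
    """DRM(t) of Equation (2) for every candidate noun t.

    tf: dict mapping t to its frequency in R; df: dict mapping t to its
    frequency in B (assumed > 1 so that log df(t) is positive).
    """
    abs_llr = {t: abs(log_lrt(tf[t], n1, df[t], n2)) for t in tf}
    max_tf = max(tf.values())
    lo, hi = min(abs_llr.values()), max(abs_llr.values())
    scores = {}
    for t in tf:
        norm_lrt = (abs_llr[t] - lo) / (hi - lo) if hi > lo else 0.0
        scores[t] = (tf[t] / max_tf) * (1.0 / math.log(df[t])) * norm_lrt
    return scores

# Positive seeds would then be the top-N nouns by DRM score, e.g.
# seeds = sorted(scores, key=scores.get, reverse=True)[:N]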
3.2 Capturing Lexical Semantic Clue in a Semantic Similarity Graph
To capture lexical semantic clue, each word is first converted into a word embedding, a continuous vector whose dimensions correspond to semantic or grammatical interpretations (Turian et al., 2010). Since learning large-scale word embeddings is very time-consuming (Collobert et al., 2011), we employ a faster method, the Skip-gram model (Mikolov et al., 2013).
3.2.1 Learning Word Embedding for Semantic Representation
Given a sequence of training words W = {w_1, w_2, ..., w_m}, the goal of the Skip-gram model is to learn a continuous vector space EB = {e_1, e_2, ..., e_m}, where e_i is the word embedding of w_i. The training objective is to maximize the average log probability of using word w_t to predict a surrounding word w_{t+j}:

\hat{E}_B = \arg\max_{e_t \in E_B} \frac{1}{m} \sum_{t=1}^{m} \sum_{-c \le j \le c, j \ne 0} \log p(w_{t+j} | w_t; e_t)  (3)

where c is the size of the training window. Basically, p(w_{t+j} | w_t; e_t) is defined as

p(w_{t+j} | w_t; e_t) = \frac{\exp(e'^{T}_{t+j} e_t)}{\sum_{w=1}^{m} \exp(e'^{T}_{w} e_t)}  (4)

where e'_i is an additional training vector associated with e_i. This basic formulation is impractical because it is proportional to m. A hierarchical softmax approximation can be applied to reduce the computational cost to log_2(m); see (Morin and Bengio, 2005) for details. To alleviate the data sparsity problem, EB is first trained on a very large corpus (denoted by C; Wikipedia, http://www.wikipedia.org, is used in practice), and then fine-tuned on the target review corpus R. In particular, for phrasal product features, a statistic-based method in (Zhu et al., 2009) is used to detect noun phrases in R. Then, an Unfolding Recursive Autoencoder (Socher et al., 2011) is trained on C to obtain embedding vectors for noun phrases. In this way, the semantics of infrequent terms in R can be well captured. Finally, the phrase-based Skip-gram model in (Mikolov et al., 2013) is applied on R.
3.2.2 Building the Semantic Similarity Graph
Lexical semantic clue is captured by measuring semantic similarity between terms. The underlying motivation is that if we know some product feature seeds, then terms that are more semantically similar to these seeds are more likely to be product features. For example, if screen is known to be a product feature of mp3, and lcd has high semantic similarity with screen, we can infer that lcd is also a product feature. Analogously, terms that are semantically similar to negatively labeled seeds are not product features. Word embedding naturally meets this demand: words that are more semantically similar to each other are located closer in the embedding space (Collobert et al., 2011). Therefore, we can use the cosine distance between two embedding vectors as the semantic distance measuring metric. Thus, our method does not rely on term frequency to rank candidates. This could potentially improve the ability to mine infrequent product features. Formally, we create a semantic similarity graph G = (V, E, W), where V = {V_s ∪ V_c} is the vertex set, which contains the labeled seed set V_s and the unlabeled candidate set V_c; E is the edge set, which connects every vertex pair (u, v) with u, v ∈ V; and W = {w_uv = cos(EB_u, EB_v)} is a function which associates a weight with each edge.
3.3 Encoding Contextual Semantic Clue Using Convolutional Neural Network
The CNN is trained on each occurrence of the seeds found in review texts. Then, for a candidate term t, the CNN classifies all of its occurrences. Since seed terms tend to have high frequency in review texts, only a few seeds are enough to provide plenty of occurrences for training.
3.3.1 The Architecture of the Convolutional Neural Network
The architecture of the Convolutional Neural Network is shown in Figure 2. For a product feature candidate t in sentence s, every consecutive subsequence q_i of s that contains t within a window of length l is fed to the CNN. For example, as in Figure 2, if t = {screen} and l = 3, there are three inputs: q_1 = [the, ipod, screen], q_2 = [ipod, screen, is], q_3 = [screen, is, impressive]. In particular, t is replaced by a token "*PF*" to remove its lexical influence (otherwise, the CNN would quickly overfit on t, because very few seed lexicons are used for training).
Figure 2: The architecture of the Convolutional Neural Network.
To get the output score, q_i is first converted into a concatenated vector x_i = [e_1; e_2; ...; e_l], where e_j is the word embedding of the j-th word. In this way, the CNN serves as a soft pattern miner: since words that have similar semantics have similar low-dimensional embedding vectors, the CNN is less sensitive to lexicon change. The network is computed by

y^{(1)}_i = \tanh(W^{(1)} x_i + b^{(1)})  (5)
y^{(2)} = \max_i(y^{(1)}_i)  (6)
y^{(3)} = W^{(3)} y^{(2)} + b^{(3)}  (7)

where y^{(i)} is the output of the i-th layer and b^{(i)} is the bias of the i-th layer; W^{(1)} ∈ R^{h×(nl)} and W^{(3)} ∈ R^{2×h} are parameter matrices, where n is the dimension of the word embedding and h is the number of nodes in the hidden layer. In conventional neural models, the candidate term t is placed in the center of the window. However, from Example 2, when l = 5, we can see that the best windows are the bracketed texts (because, intuitively, the windows should contain mp3, which is strong evidence for finding the product feature), where t = {screen} is at the boundary. Therefore, we use Equation 6 to formulate a max-convolutional layer, which aims to enable the CNN to find more evidence in the context than conventional neural models.
Example 2: (a) The [screen of this mp3 is] great. (b) This [mp3 has a great screen].
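As a concrete illustration of Equations (5)-(7) and the max-convolutional layer, the following minimal numpy sketch computes the CNN score for one candidate occurrence. The window construction, the embedding lookup table and all shapes are hypothetical assumptions made for this example; it is not the authors' implementation.

import numpy as np

def candidate_windows(tokens, idx, l, embed, pf_token="*PF*"):
    """All length-l windows of `tokens` that contain position `idx`,
    with the candidate term replaced by the *PF* token, each returned as a
    concatenated embedding vector x_i (assumes len(tokens) >= l)."""
    tokens = tokens[:idx] + [pf_token] + tokens[idx + 1:]
    starts = range(max(0, idx - l + 1), min(idx, len(tokens) - l) + 1)
    return [np.concatenate([embed[w] for w in tokens[s:s + l]]) for s in starts]

def cnn_score(window_vectors, W1, b1, W3, b3):
    """Forward pass of Equations (5)-(7) for one candidate occurrence.
    W1: (h, n*l), b1: (h,), W3: (2, h), b3: (2,)."""
    hidden = [np.tanh(W1 @ x + b1) for x in window_vectors]  # Equation (5)
    pooled = np.max(np.stack(hidden), axis=0)                # Equation (6): max over windows
    return W3 @ pooled + b3                                  # Equation (7): 2-way output score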
3.3.2 Training
Let θ = {EB, W^{(·)}, b^{(·)}} denote all the trainable parameters. The softmax function is used to convert the output score of the CNN into a probability:

p(t | X; θ) = \frac{\exp(y^{(3)})}{\sum_{j=1}^{|C|} \exp(y^{(3)}_j)}  (8)

where X is the input set for term t, and C = {0, 1} is the label set representing product feature and non-product feature, respectively. To train the CNN, we first use V_s to collect each occurrence of the seeds in R to form a training set T_s. The training criterion is then to minimize the cross-entropy over T_s:

\hat{θ} = \arg\min_{θ} \sum_{i=1}^{|T_s|} -\log δ_i \, p(t_i | X_i; θ)  (9)

where δ_i is the binomial target label distribution for one entry. The backpropagation algorithm with mini-batch stochastic gradient descent is used to solve this optimization problem. In addition, some useful tricks can be applied during training. The weight matrices W^{(·)} are initialized by normalized initialization (Glorot and Bengio, 2010). W^{(1)} is pre-trained by an autoencoder (Hinton, 1989) to capture semantic compositionality. To speed up learning, a momentum method is applied (Sutskever et al., 2013).
3.4 Combining Lexical and Contextual Semantic Clues by Label Propagation
We propose a Label Propagation algorithm to combine both semantic clues in a unified process. Each term t ∈ V is assumed to have a label distribution L_t = (p_t^+, p_t^-), where p_t^+ denotes the probability of the candidate being a product feature and, conversely, p_t^- = 1 - p_t^+. The classification results of the CNN, which encode contextual semantic clue, serve as the prior knowledge:

I_t = \begin{cases} (1, 0), & \text{if } t \in V_s^+ \\ (0, 1), & \text{if } t \in V_s^- \\ (r_t^+, r_t^-), & \text{if } t \in V_c \end{cases}  (10)

where (r_t^+, r_t^-) is estimated by

r_t^+ = \frac{count^+(t)}{count^+(t) + count^-(t)}  (11)

where count^+(t) is the number of occurrences of term t that are classified as positive by the CNN, and count^-(t) is the corresponding negative count. Label Propagation is applied to propagate the prior knowledge distribution I to the product feature distribution L via the semantic similarity graph G, so that a product feature candidate is determined by exploring its semantic relations to all of the seeds and the other candidates globally. We propose an adapted version based on the random-walk view of the Adsorption algorithm (Baluja et al., 2008), which updates the following formula until L converges:

L_{i+1} = (1 - α) M^T L_i + α D I  (12)

where M is the semantic transition matrix built from G; D = Diag[log tf(t)] is a diagonal matrix of log frequencies, which is designed to assign higher "confidence" scores to more frequent seeds; and α is a balancing parameter. In particular, when α = 0, we can set the prior knowledge I without V_c to L_0 so that only lexical semantic clue is used; when α = 1, only contextual semantic clue is used.
3.5 The Bootstrapping Framework
We summarize the bootstrapping framework of the proposed method in Algorithm 1. During bootstrapping, the CNN is enhanced by Label Propagation, which finds more labeled examples for training, and the performance of Label Propagation is in turn improved because the CNN outputs a more accurate prior distribution. After running for several iterations, the algorithm obtains enough seeds, and a final Label Propagation is conducted to produce the results.
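The sketch below is a minimal numpy rendering of the Label Propagation update in Equation (12), as used in Steps 5 and 7 of Algorithm 1 below. The assumption that M is row-normalized and the convergence constants are our own choices for the example.

import numpy as np

def label_propagation(M, I, D, alpha, eps=1e-7, max_iter=1000):
    """Iterate Equation (12) until convergence.

    M: (|V|, |V|) semantic transition matrix built from graph G
       (here assumed to hold row-normalized cosine similarities).
    I: (|V|, 2) prior label distribution from Equation (10).
    D: (|V|,) diagonal of log term frequencies.
    alpha: balance between lexical (graph) and contextual (CNN prior) clues.
    """
    L = I.copy()
    DI = D[:, None] * I                       # D is diagonal, so scale the rows of I
    for _ in range(max_iter):
        L_next = (1 - alpha) * M.T @ L + alpha * DI
        if np.linalg.norm(L_next - L) < eps:  # ||L_{i+1} - L_i||_2 < eps
            return L_next
        L = L_next
    return L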
Algorithm 1: Bootstrapping using semantic clues Input: The review corpus R, a large corpus C Output: The mined product feature list P Initialization: Train word embedding set EB first on C, and then on R Step 1: Generate product feature seeds Vs (Section 3.1) Step 2: Build semantic similarity graph G (Section 3.2) while iter < MAX ITER do Step 3: Use Vs to collect occurrence set Ts from R for training Step 4: Train a CNN N on Ts (Section 3.3) Apply mini-batch SGD on Equ. 9; Step 5: Run Label Propagation (Section 3.4) Classify candidates using N to setup I; L0 ←I; repeat Li+1 ←(1 −α)MT Li + αDI; until ||Li+1 −Li||2 < ε; Step 6: Expand product feature seeds Move top T terms from Vc to Vs; iter++ end Step 7: Run Label Propagation for a final result Lf Rank terms by L+ f to get P, where L+ f > L− f ; 4 Experiments 4.1 Datasets and Evaluation Metrics Datasets: We select two real world datasets to evaluate the proposed method. The first one is a benchmark dataset in Wang et al. (2011), which contains English review sets on two domains (MP3 and Hotel)5. The second dataset is proposed by Chinese Opinion Analysis Evaluation 2008 (COAE 2008)6, where two review sets (Camera and Car) are selected. Xu et al. (2013) had manually annotated product features on these four domains, so we directly employ their annotation as the gold standard. The detailed information can be found in their original paper. 5http://timan.cs.uiuc.edu/downloads.html 6http://ir-china.org.cn/coae2008.html Evaluation Metrics: We evaluate the proposed method in terms of precision(P), recall(R) and Fmeasure(F). The English results are evaluated by exact string match. And for Chinese results, we use an overlap matching metric, because determining the exact boundaries is hard even for human (Wiebe et al., 2005). 4.2 Experimental Settings For English corpora, the pre-processing are the same as that in (Qiu et al., 2009), and for Chinese corpora, the Stanford Word Segmenter (Chang et al., 2008) is used to perform word segmentation. We select three state-of-the-art syntax-based methods to be compared with our method: DP uses a bootstrapping algorithm named as Double Propagation (Qiu et al., 2009), which is a conventional syntax-based method. DP-HITS is an enhanced version of DP proposed by Zhang et al. (2010), which ranks product feature candidates by s(t) = log tf(t) ∗importance(t) (13) where importance(t) is estimated by the HITS algorithm (Kleinberg, 1999). SGW is the Sentiment Graph Walking algorithm proposed in (Xu et al., 2013), which first extracts syntactic patterns and then uses random walking to rank candidates. Afterwards, wordsyntactic pattern co-occurrence statistic is used as feature for a semi-supervised classifier TSVM (Joachims, 1999) to further refine the results. This two-stage method is denoted as SGW-TSVM. LEX only uses lexical semantic clue. Label Propagation is applied alone in a self-training manner. The dimension of word embedding n = 100, the convergence threshold ε = 10−7, and the number of expanded seeds T = 40. The size of the seed set N is 40. To output product features, it ranks candidates in descent order by using the positive score L+ f (t). CONT only uses contextual semantic clue, which only contains the CNN. The window size l is 5. The CNN is trained with a mini-batch size of 50. The hidden layer size h = 250. Finally, importance(t) in Equ. 13 is replaced with r+ t in Equ. 11 to rank candidates. LEX&CONT leverages both semantic clues. 341 Method MP3 Hotel Camera Car Avg. 
P R F P R F P R F P R F F DP 0.66 0.57 0.61 0.66 0.60 0.63 0.71 0.70 0.70 0.72 0.65 0.68 0.66 DP-HITS 0.65 0.62 0.63 0.64 0.66 0.65 0.71 0.78 0.74 0.69 0.68 0.68 0.68 SGW 0.62 0.68 0.65 0.63 0.71 0.67 0.69 0.80 0.74 0.66 0.71 0.68 0.69 LEX 0.64 0.74 0.69 0.65 0.75 0.70 0.69 0.84 0.76 0.68 0.78 0.73 0.72 CONT 0.68 0.65 0.66 0.69 0.68 0.68 0.74 0.77 0.75 0.74 0.70 0.72 0.71 SGW-TSVM 0.73 0.71 0.72 0.75 0.73 0.74 0.78 0.81 0.79 0.76 0.73 0.74 0.75 LEX&CONT 0.74 0.75 0.74 0.75 0.77 0.76 0.80 0.84 0.82 0.79 0.79 0.79 0.78 Table 1: Experimental results of product feature mining. The precision or recall of CONT is the average performance over five runs with different random initialization of parameters of the CNN. Avg. stands for the average score. 4.3 The Semantics-based Methods vs. State-of-the-art Syntax-based Methods The experimental results are shown in Table 1, from which we have the following observations: (i) Our method achieves the best performance among all of the compared methods. We also equally split the dataset into five subsets, and perform one-tailed t-test (p ≤0.05), which shows that the proposed semanticsbased method (LEX&CONT) significantly outperforms the three syntax-based strong competitors (DP, DP-HITS and SGW-TSVM). (ii) LEX&CONT which leverages both lexical and contextual semantic clues outperforms approaches that only use one kind of semantic clue (LEX and CONT), showing that the combination of the semantic clues is helpful. (iii) Our methods which use only one kind of semantic clue (LEX and CONT) outperform syntax-based methods (DP, DP-HITS and SGW). Comparing DP-HITS with LEX and CONT, the difference between them is that DP-HITS uses a syntax-pattern-based algorithm to estimate importance(t) in Equ. 13, while our methods use lexical or contextual semantic clue instead. We believe the reason that LEX or CONT is better is that syntactic patterns only use discrete and local information. In contrast, CONT exploits latent semantics of each word in context, and LEX takes advantage of word embedding, which is induced from global word co-occurrence statistic. Furthermore, comparing SGW and LEX, both methods are base on random surfer model, but LEX gets better results than SGW. Therefore, the wordword semantic similarity relation used in LEX is more reliable than the word-syntactic pattern relation used in SGW. (iv) LEX&CONT achieves the highest recall among all of the evaluated methods. Since DP and DP-HITS rely on frequency for ranking product features, infrequent candidates are ranked low in their extracted list. As for SGWTSVM, the features they used for the TSVM suffer from the data sparsity problem for infrequent terms. In contrast, LEX&CONT is frequency-independent to the review corpus. Further discussions on this observation are given in the next section. 4.4 The Results on Extracting Infrequent Product Features We conservatively regard 30% product features with the highest frequencies in R as frequent features, so the remaining terms in the gold standard are infrequent features. In product feature mining task, frequent features are relatively easy to find. Table 2 shows the recall of all the four approaches for mining frequent product features. We can see that the performance are very close among different methods. Therefore, the recall mainly depends on mining the infrequent features. Method MP3 Hotel Camera Car DP 0.89 0.92 0.86 0.84 DP-HITS 0.89 0.91 0.86 0.85 SGW-TSVM 0.87 0.92 0.88 0.87 LEX&CONT 0.89 0.91 0.89 0.87 Table 2: The recall of frequent product features. 
Figure 3 gives the recall of infrequent product features, where LEX&CONT achieves the best performance, so our method is less influenced by term frequency. Furthermore, LEX gets better recall than CONT and all syntax-based methods, which indicates that lexical semantic clue does help mine more infrequent features, as expected.
Figure 4: Accuracy (y-axis) of product feature seed expansion at each bootstrapping iteration (x-axis), on the MP3, Hotel, Camera and Car domains. The error bar shows the standard deviation over five runs.
Method  MP3 (P/R/F)       Hotel (P/R/F)     Camera (P/R/F)    Car (P/R/F)
FW-5    0.62/0.63/0.62    0.64/0.64/0.64    0.68/0.73/0.70    0.67/0.66/0.66
FW-9    0.64/0.65/0.64    0.66/0.68/0.67    0.70/0.76/0.73    0.71/0.70/0.70
CONT    0.68/0.65/0.66    0.69/0.68/0.68    0.74/0.77/0.75    0.74/0.70/0.72
Table 3: The results of the convolutional method vs. the non-convolutional methods.
Figure 3: The recall of infrequent features. The error bar shows the standard deviation over five different runs.
4.5 Lexical Semantic Clue vs. Contextual Semantic Clue
This section studies the effects of lexical semantic clue and contextual semantic clue during seed expansion (Step 6 in Algorithm 1), which is controlled by α. When α = 1, we get CONT; when α = 0, we get LEX. To take into account the correctly expanded terms for both positive and negative seeds, we use Accuracy as the evaluation metric:

Accuracy = \frac{\#TP + \#TN}{\#\text{Extracted Seeds}}

where TP denotes the true positive seeds and TN denotes the true negative seeds. Figure 4 shows the performance of seed expansion during bootstrapping, in which the accuracy is computed on the 40 seeds (20 positive and 20 negative) expanded in each iteration. We can see that the accuracies of CONT and LEX&CONT remain at a high level, which shows that they can find reliable new product feature seeds. However, the performance of LEX oscillates sharply and is very low at some points, which indicates that using lexical semantic clue alone is infeasible. On the other hand, comparing CONT with LEX in Table 1, we can see that LEX performs generally better than CONT. Although LEX is not as accurate as CONT during seed expansion, its final performance surpasses CONT. Consequently, we can conclude that CONT is more suitable for seed expansion, while LEX is more robust for producing the final results. To combine the advantages of the two kinds of semantic clues, we set α = 0.7 in Step 5 of Algorithm 1, so that contextual semantic clue plays the key role in finding new seeds accurately. For Step 7, we set α = 0.3, so that lexical semantic clue is emphasized when producing the final results.
4.6 The Effect of Convolutional Layer
Two non-convolutional variations of the proposed method are compared with the convolutional method in CONT. FW-5 uses a traditional neural network with a fixed window size of 5 to replace the CNN in CONT, with the candidate term to be classified placed in the center of the window. Similarly, FW-9 uses a fixed window size of 9. Note that CONT uses a 5-term dynamic window containing the candidate term, so the number of context words exploited is equivalent to FW-9. Table 3 shows the experimental results.
We can see that the performance of FW-5 is much worse than CONT. The reason is that FW-5 only exploits half of the context that CONT does, which is not sufficient. Meanwhile, although FW-9 exploits an equivalent range of context to CONT, it obtains lower precision. This is because FW-9 has approximately twice as many parameters in the parameter matrix W^{(1)} as that in Equation 5 of CONT, which makes it more difficult to train with the same amount of data. Also, many sentences in the review corpora are shorter than 9 words. Therefore, the convolutional approach in CONT is the most effective among these settings.
4.7 Parameter Study
We investigate two key parameters of the proposed method: the initial number of seeds N, and the size of the window l used by the CNN. Figure 5 shows the performance under different N, where the F-Measure saturates when N reaches 40 and beyond. Hence, very few seeds are needed to start our algorithm.
Figure 5: F-Measure vs. N for the final results.
Figure 6 shows the F-Measure under different window sizes l. We can see that the performance improves little when l is larger than 5. Therefore, l = 5 is a proper window size for these datasets.
Figure 6: F-Measure vs. l for the final results.
5 Conclusion and Future Work
This paper proposes a product feature mining method that leverages contextual and lexical semantic clues. A semantic similarity graph is built to capture lexical semantic clue, and a convolutional neural network is used to encode contextual semantic clue. Then, a Label Propagation algorithm is applied to combine both semantic clues. Experimental results prove the effectiveness of the proposed method, which not only mines product features more accurately than conventional syntax-based methods, but also extracts more infrequent product features. In future work, we plan to extend the proposed method to jointly mine product features along with customers' opinions on them. The learnt semantic representations of words may also be utilized to predict fine-grained sentiment distributions over product features.
Acknowledgement
This work was sponsored by the National Basic Research Program of China (No. 2012CB316300), the National Natural Science Foundation of China (No. 61272332 and No. 61202329), the National High Technology Development 863 Program of China (No. 2012AA011102), and the CCF-Tencent Open Research Fund. This work was also supported in part by Noah's Ark Lab of Huawei Tech. Ltd.
References
Shumeet Baluja, Rohan Seth, D. Sivakumar, Yushi Jing, Jay Yagnik, Shankar Kumar, Deepak Ravichandran, and Mohamed Aly. 2008. Video suggestion and discovery for youtube: Taking random walks through the view graph. In Proceedings of the 17th International Conference on World Wide Web, WWW '08, pages 895–904, New York, NY, USA. ACM.
Pi-Chuan Chang, Michel Galley, and Christopher D. Manning. 2008. Optimizing Chinese word segmentation for machine translation performance. In Proceedings of the Third Workshop on Statistical Machine Translation, StatMT '08, pages 224–232.
Ronan Collobert, Jason Weston, Léon Bottou, Michael Karlen, Koray Kavukcuoglu, and Pavel Kuksa. 2011. Natural language processing (almost) from scratch. J. Mach. Learn. Res., 12:2493–2537, November.
Marie-Catherine de Marneffe, Bill MacCartney, and Christopher D. Manning. 2006. Generating typed dependency parses from phrase structure parses.
In Proceedings of the IEEE / ACL’06 Workshop on Spoken Language Technology. Ted Dunning. 1993. Accurate methods for the statistics of surprise and coincidence. Comput. Linguist., 19(1):61–74, March. Xavier Glorot and Yoshua Bengio. 2010. Understanding the difficulty of training deep feedforward neural networks. In Proceedings of the International Conference on Artificial Intelligence and Statistics. Geoffrey E. Hinton. 1989. Connectionist learning procedures. Artificial Intelligence, 40(1C3):185 – 234. Minqing Hu and Bing Liu. 2004. Mining and summarizing customer reviews. In Proceedings of the Tenth ACM SIGKDD International Conference on Knowledge Discovery and Data Mining, KDD ’04, pages 168–177, New York, NY, USA. ACM. Xing Jiang and Ah-Hwee Tan. 2010. Crctol: A semantic-based domain ontology learning system. Journal of the American Society for Information Science and Technology, 61(1):150–168. Long Jiang, Mo Yu, Ming Zhou, Xiaohua Liu, and Tiejun Zhao. 2011. Target-dependent twitter sentiment classification. In Proceedings of the 49th Annual Meeting of the Association for Computational Linguistics: Human Language Technologies - Volume 1, HLT ’11, pages 151–160, Stroudsburg, PA, USA. Association for Computational Linguistics. Thorsten Joachims. 1999. Transductive inference for text classification using support vector machines. In Proceedings of the 16th International Conference on Machine Learning, pages 200–209. Jon M. Kleinberg. 1999. Authoritative sources in a hyperlinked environment. J. ACM, 46(5):604–632, September. Fangtao Li, Sinno Jialin Pan, Ou Jin, Qiang Yang, and Xiaoyan Zhu. 2012. Cross-domain co-extraction of sentiment and topic lexicons. In Proceedings of the 50th Annual Meeting of the Association for Computational Linguistics: Long Papers - Volume 1, ACL ’12, pages 410–419, Stroudsburg, PA, USA. Association for Computational Linguistics. Kang Liu, Liheng Xu, and Jun Zhao. 2012. Opinion target extraction using word-based translation model. In Proceedings of the 2012 Joint Conference on Empirical Methods in Natural Language Processing and Computational Natural Language Learning, pages 1346–1356, Jeju Island, Korea, July. Association for Computational Linguistics. Tomas Mikolov, Ilya Sutskever, Kai Chen, Greg S. Corrado, and Jeff Dean. 2013. Distributed representations of words and phrases and their compositionality. In Advances in Neural Information Processing Systems, pages 3111–3119. Samaneh Moghaddam and Martin Ester. 2010. Opinion digger: An unsupervised opinion miner from unstructured product reviews. In Proceedings of the 19th ACM International Conference on Information and Knowledge Management, CIKM ’10, pages 1825–1828, New York, NY, USA. ACM. Frederic Morin and Yoshua Bengio. 2005. Hierarchical probabilistic neural network language model. In Proceedings of the international workshop on artificial intelligence and statistics, AISTATS05, pages 246–252. Ana-Maria Popescu and Oren Etzioni. 2005. Extracting product features and opinions from reviews. In Proceedings of the conference on Human Language Technology and Empirical Methods in Natural Language Processing, HLT ’05, pages 339–346. Guang Qiu, Bing Liu, Jiajun Bu, and Chun Chen. 2009. Expanding domain sentiment lexicon through double propagation. In Proceedings of the 21st international jont conference on Artifical intelligence, IJCAI’09, pages 1199–1204. Richard Socher, Eric H Huang, Jeffrey Pennington, Andrew Y Ng, and Christopher D Manning. 2011. 
Dynamic pooling and unfolding recursive autoencoders for paraphrase detection. In NIPS’2011, volume 24, pages 801–809. Ilya Sutskever, James Martens, George Dahl, and Geoffrey Hinton. 2013. Distributed representations of words and phrases and their compositionality. In Proceedings of the 30 th International Conference on Machine Learning. Joseph Turian, Lev Ratinov, and Yoshua Bengio. 2010. Word representations: A simple and general method for semi-supervised learning. In Proceedings of the 48th Annual Meeting of the Association for Computational Linguistics, ACL ’10, pages 384–394, Stroudsburg, PA, USA. Association for Computational Linguistics. Hongning Wang, Yue Lu, and ChengXiang Zhai. 2011. Latent aspect rating analysis without aspect keyword supervision. In Proceedings of the 17th ACM SIGKDD International Conference on Knowledge Discovery and Data Mining, KDD ’11, pages 618– 626, New York, NY, USA. ACM. Janyce Wiebe, Theresa Wilson, and Claire Cardie. 2005. Annotating expressions of opinions and emotions in language. Language Resources and Evaluation, 39(2-3):165–210. Yuanbin Wu, Qi Zhang, Xuanjing Huang, and Lide Wu. 2009. Phrase dependency parsing for opinion mining. In Proceedings of the 2009 Conference on Empirical Methods in Natural Language Processing: Volume 3 - Volume 3, EMNLP ’09, pages 1533– 1541, Stroudsburg, PA, USA. Association for Computational Linguistics. 345 Liheng Xu, Kang Liu, Siwei Lai, Yubo Chen, and Jun Zhao. 2013. Mining opinion words and opinion targets in a two-stage framework. In Proceedings of the 51st Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers), pages 1764–1773, Sofia, Bulgaria, August. Association for Computational Linguistics. Lei Zhang, Bing Liu, Suk Hwan Lim, and Eamonn O’Brien-Strain. 2010. Extracting and ranking product features in opinion documents. In Proceedings of the 23rd International Conference on Computational Linguistics: Posters, COLING ’10, pages 1462–1470, Stroudsburg, PA, USA. Association for Computational Linguistics. Jingbo Zhu, Huizhen Wang, Benjamin K. Tsou, and Muhua Zhu. 2009. Multi-aspect opinion polling from textual reviews. In Proceedings of the 18th ACM Conference on Information and Knowledge Management, CIKM ’09, pages 1799–1802, New York, NY, USA. ACM. Li Zhuang, Feng Jing, and Xiao-Yan Zhu. 2006. Movie review mining and summarization. In Proceedings of the 15th ACM International Conference on Information and Knowledge Management, CIKM ’06, pages 43–50, New York, NY, USA. ACM. 346
Proceedings of the 52nd Annual Meeting of the Association for Computational Linguistics, pages 347–358, Baltimore, Maryland, USA, June 23-25 2014. c⃝2014 Association for Computational Linguistics Aspect Extraction with Automated Prior Knowledge Learning Zhiyuan Chen Arjun Mukherjee Bing Liu Department of Computer Science University of Illinois at Chicago Chicago, IL 60607, USA {czyuanacm,arjun4787}@gmail.com,[email protected] Abstract Aspect extraction is an important task in sentiment analysis. Topic modeling is a popular method for the task. However, unsupervised topic models often generate incoherent aspects. To address the issue, several knowledge-based models have been proposed to incorporate prior knowledge provided by the user to guide modeling. In this paper, we take a major step forward and show that in the big data era, without any user input, it is possible to learn prior knowledge automatically from a large amount of review data available on the Web. Such knowledge can then be used by a topic model to discover more coherent aspects. There are two key challenges: (1) learning quality knowledge from reviews of diverse domains, and (2) making the model fault-tolerant to handle possibly wrong knowledge. A novel approach is proposed to solve these problems. Experimental results using reviews from 36 domains show that the proposed approach achieves significant improvements over state-of-the-art baselines. 1 Introduction Aspect extraction aims to extract target entities and their aspects (or attributes) that people have expressed opinions upon (Hu and Liu, 2004, Liu, 2012). For example, in “The voice is not clear,” the aspect term is “voice.” Aspect extraction has two subtasks: aspect term extraction and aspect term resolution. Aspect term resolution groups extracted synonymous aspect terms together. For example, “voice” and “sound” should be grouped together as they refer to the same aspect of phones. Recently, topic models have been extensively applied to aspect extraction because they can perform both subtasks at the same time while other existing methods all need two separate steps (see Section 2). Traditional topic models such as LDA (Blei et al., 2003) and pLSA (Hofmann, 1999) are unsupervised methods for extracting latent topics in text documents. Topics are aspects in our task. Each aspect (or topic) is a distribution over (aspect) terms. However, researchers have shown that fully unsupervised models often produce incoherent topics because the objective functions of topic models do not always correlate well with human judgments (Chang et al., 2009). To tackle the problem, several semi-supervised topic models, also called knowledge-based topic models, have been proposed. DF-LDA (Andrzejewski et al., 2009) can incorporate two forms of prior knowledge from the user: must-links and cannot-links. A must-link implies that two terms (or words) should belong to the same topic whereas a cannot-link indicates that two terms should not be in the same topic. In a similar but more generic vein, must-sets and cannot-sets are used in MC-LDA (Chen et al., 2013b). Other related works include (Andrzejewski et al., 2011, Chen et al., 2013a, Chen et al., 2013c, Mukherjee and Liu, 2012, Hu et al., 2011, Jagarlamudi et al., 2012, Lu et al., 2011, Petterson et al., 2010). They all allow prior knowledge to be specified by the user to guide the modeling process. In this paper, we take a major step further. 
We mine the prior knowledge directly from a large amount of relevant data without any user intervention, and thus make this approach fully automatic. We hypothesize that it is possible to learn quality prior knowledge from the big data (of reviews) available on the Web. The intuition is that although every domain is different, there is a decent amount of aspect overlapping across domains. For example, every product domain has the aspect/topic of “price,” most electronic products share the aspect “battery” and some also share “screen.” Thus, the shared aspect knowl347 edge mined from a set of domains can potentially help improve aspect extraction in each of these domains, as well as in new domains. Our proposed method aims to achieve this objective. There are two major challenges: (1) learning quality knowledge from a large number of domains, and (2) making the extraction model fault-tolerant, i.e., capable of handling possibly incorrect learned knowledge. We briefly introduce the proposed method below, which consists of two steps. Learning quality knowledge: Clearly, learned knowledge from only a single domain can be erroneous. However, if the learned knowledge is shared by multiple domains, the knowledge is more likely to be of high quality. We thus propose to first use LDA to learn topics/aspects from each individual domain and then discover the shared aspects (or topics) and aspect terms among a subset of domains. These shared aspects and aspect terms are more likely to be of good quality. They can serve as the prior knowledge to guide a model to extract aspects. A piece of knowledge is a set of semantically coherent (aspect) terms which are likely to belong to the same topic or aspect, i.e., similar to a must-link, but mined automatically. Extraction guided by learned knowledge: For reliable aspect extraction using the learned prior knowledge, we must account for possible errors in the knowledge. In particular, a piece of automatically learned knowledge may be wrong or domain specific (i.e., the words in the knowledge are semantically coherent in some domains but not in others). To leverage such knowledge, the system must detect those inappropriate pieces of knowledge. We propose a method to solve this problem, which also results in a new topic model, called AKL (Automated Knowledge LDA), whose inference can exploit the automatically learned prior knowledge and handle the issues of incorrect knowledge to produce superior aspects. In summary, this paper makes the following contributions: 1. It proposes to exploit the big data to learn prior knowledge and leverage the knowledge in topic models to extract more coherent aspects. The process is fully automatic. To the best of our knowledge, none of the existing models for aspect extraction is able to achieve this. 2. It proposes an effective method to learn quality knowledge from raw topics produced using review corpora from many different domains. 3. It proposes a new inference mechanism for topic modeling, which can handle incorrect knowledge in aspect extraction. 
2 Related Work Aspect extraction has been studied by many researchers in sentiment analysis (Liu, 2012, Pang and Lee, 2008), e.g., using supervised sequence labeling or classification (Choi and Cardie, 2010, Jakob and Gurevych, 2010, Kobayashi et al., 2007, Li et al., 2010, Yang and Cardie, 2013) and using word frequency and syntactic patterns (Hu and Liu, 2004, Ku et al., 2006, Liu et al., 2013, Popescu and Etzioni, 2005, Qiu et al., 2011, Somasundaran and Wiebe, 2009, Wu et al., 2009, Xu et al., 2013, Yu et al., 2011, Zhao et al., 2012, Zhou et al., 2013, Zhuang et al., 2006). However, these works only perform extraction but not aspect term grouping or resolution. Separate aspect term grouping has been done in (Carenini et al., 2005, Guo et al., 2009, Zhai et al., 2011). They assume that aspect terms have been extracted beforehand. To extract and group aspects simultaneously, topic models have been applied by researchers (Branavan et al., 2008, Brody and Elhadad, 2010, Chen et al., 2013b, Fang and Huang, 2012, He et al., 2011, Jo and Oh, 2011, Kim et al., 2013, Lazaridou et al., 2013, Li et al., 2011, Lin and He, 2009, Lu et al., 2009, Lu et al., 2012, Lu and Zhai, 2008, Mei et al., 2007, Moghaddam and Ester, 2013, Mukherjee and Liu, 2012, Sauper and Barzilay, 2013, Titov and McDonald, 2008, Wang et al., 2010, Zhao et al., 2010). Our proposed AKL model belongs to the class of knowledge-based topic models. Besides the knowledge-based topic models discussed in Section 1, document labels are incorporated as implicit knowledge in (Blei and McAuliffe, 2007, Ramage et al., 2009). Geographical region knowledge has also been considered in topic models (Eisenstein et al., 2010). All of these models assume that the prior knowledge is correct. GK-LDA (Chen et al., 2013a) is the only knowledge-based topic model that deals with wrong lexical knowledge to some extent. As we will see in Section 6, AKL outperformed GKLDA significantly due to AKL’s more effective error handling mechanism. Furthermore, GK-LDA does not learn any prior knowledge. Our work is also related to transfer learning to some extent. Topic models have been used to help 348 Input: Corpora DL for knowledge learning Test corpora DT 1: // STEP 1: Learning prior knowledge. 2: for r = 0 to R do // Iterate R + 1 times. 3: for each domain corpus Di ∈DL do 4: if r = 0 then 5: Ai ←LDA(Di); 6: else 7: Ai ←AKL(Di, K); 8: end if 9: end for 10: A ←∪iAi; 11: TC ←Clustering(A); 12: for each cluster Tj ∈TC do 13: Kj ←FPM(Tj); 14: end for 15: K ←∪jKj; 16: end for 17: // STEP 2: Using the learned knowledge. 18: for each test corpus Di ∈DT do 19: Ai ←AKL(Di, K); 20: end for Figure 1: The proposed overall algorithm. transfer learning (He et al., 2011, Pan and Yang, 2010, Xue et al., 2008). However, transfer learning in these papers is for traditional classification rather than topic/aspect extraction. In (Kang et al., 2012), labeled documents from source domains are transferred to the target domain to produce topic models with better fitting. However, we do not use any labeled data. In (Yang et al., 2011), a user provided parameter indicating the technicality degree of a domain was used to model the language gap between topics. In contrast, our method is fully automatic without human intervention. 3 Overall Algorithm This section introduces the proposed overall algorithm. It consists of two main steps: learning quality knowledge and using the learned knowledge. Figure 1 gives the algorithm. 
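To make the control flow of Figure 1 concrete, the following Python skeleton mirrors its two steps. It is only an illustrative sketch: the four callables stand in for components described in Sections 4 and 5 and are hypothetical placeholders, not an actual implementation.

def learn_and_apply(learning_corpora, test_corpora, R,
                    run_lda, run_akl, cluster_topics, mine_frequent_patterns):
    """Skeleton of the overall algorithm in Figure 1 (Steps 1 and 2).

    run_lda / run_akl return the topics extracted from one domain corpus;
    cluster_topics groups similar topics; mine_frequent_patterns extracts the
    shared knowledge from one topic cluster. All four are placeholders.
    """
    knowledge = None
    for r in range(R + 1):                       # Step 1: learn prior knowledge
        topics = []
        for corpus in learning_corpora:          # one topic model per domain
            if r == 0:
                topics.extend(run_lda(corpus))   # first pass: plain LDA
            else:
                topics.extend(run_akl(corpus, knowledge))
        clusters = cluster_topics(topics)        # group semantically similar topics
        knowledge = [mine_frequent_patterns(c) for c in clusters]
    # Step 2: apply AKL with the learned knowledge to the test corpora
    return [run_akl(corpus, knowledge) for corpus in test_corpora]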
Step 1 (learning quality knowledge, Lines 116): The input is the review corpora DL from multiple domains, from which the knowledge is automatically learned. Lines 3 and 5 run LDA on each review domain corpus Di ∈DL to generate a set of aspects/topics Ai (lines 2, 4, and 69 will be discussed below). Line 10 unions the topics from all domains to give A. Lines 11-14 cluster the topics in A into some coherent groups (or clusters) and then discover knowledge Kj from each group of topics using frequent pattern mining (FPM) (Han et al., 2007). We will detail these in Section 4. Each piece of the learned knowledge is a set of terms which are likely to belong to the same aspect. Iterative improvement: The above process can actually run iteratively because the learned knowledge K can help the topic model learn better topics in each domain Di ∈DL, which results in better knowledge K in the next iteration. This iterative process is reflected in lines 2, 4, 6-9 and 16. We will examine the performance of the process at different iterations in Section 6.2. From the second iteration, we can use the knowledge learned from the previous iteration (lines 6-8). The learned knowledge is leveraged by the new model AKL, which is discussed below in Step 2. Step 2 (using the learned knowledge, Lines 1720): The proposed model AKL is employed to use the learned knowledge K to help topic modeling in test domains DT , which can be DL or other unseen domains. The key challenge of this step is how to use the learned prior knowledge K effectively in AKL and deal with possible errors in K. We will elaborate them in Section 5. Scalability: the proposed algorithm is naturally scalable as both LDA and AKL run on each domain independently. Thus, for all domains, the algorithm can run in parallel. Only the resulting topics need to be brought together for knowledge learning (Step 1). These resulting topics used in learning are much smaller than the domain corpus as only a list of top terms from each topic are utilized due to their high reliability. 4 Learning Quality Knowledge This section details Step 1 in the overall algorithm, which has three sub-steps: running LDA (or AKL) on each domain corpus, clustering the resulting topics, and mining frequent patterns from the topics in each cluster. Since running LDA is simple, we will not discuss it further. The proposed AKL model will be discussed in Section 5. Below we focus on the other two sub-steps. 4.1 Topic Clustering After running LDA (or AKL) on each domain corpus, a set of topics is obtained. Each topic is a distribution over terms (or words), i.e., terms with their associated probabilities. Here, we use only the top terms with high probabilities. As discussed earlier, quality knowledge should be shared 349 by topics across several domains. Thus, it is natural to exploit a frequency-based approach to discover frequent set of terms as quality knowledge. However, we need to deal with two issues. 1. Generic aspects, such as price with aspect terms like cost and pricy, are shared by many (even all) product domains. But specific aspects such as screen, occur only in domains with products having them. It means that different aspects may have distinct frequencies. Thus, using a single frequency threshold in the frequency-based approach is not sufficient to extract both generic and specific aspects because the generic aspects will result in numerous spurious aspects (Han et al., 2007). 2. A term may have multiple senses in different domains. 
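To illustrate the knowledge-learning sub-steps just described, the sketch below computes the symmetrised KL-Divergence used as the clustering distance and then counts pairs of top terms shared across the topics of one cluster (the frequent 2-patterns detailed in Sections 4.2-4.3 below). The data structures, the smoothing constant and the support handling are hypothetical choices for the example; the actual clustering uses k-medoids as described above.

import math
from itertools import combinations
from collections import Counter

def sym_kl(p, q, smooth=1e-6):
    """Symmetrised KL-Divergence between two topics, each given as a dict
    mapping its top terms to their (re-normalized) probabilities."""
    vocab = set(p) | set(q)
    pp = {v: p.get(v, smooth) for v in vocab}
    qq = {v: q.get(v, smooth) for v in vocab}

    def kl(a, b):
        return sum(a[v] * math.log(a[v] / b[v]) for v in vocab)

    return kl(pp, qq) + kl(qq, pp)

def frequent_2_patterns(cluster_topics, min_support):
    """Frequent 2-patterns within one topic cluster: pairs of top terms that
    co-occur in at least `min_support` topics (each topic is a 'transaction')."""
    counts = Counter()
    for topic in cluster_topics:                 # topic = set of its top terms
        counts.update(frozenset(pair) for pair in combinations(sorted(topic), 2))
    return [set(pair) for pair, n in counts.items() if n >= min_support]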
For example, light can mean “of little weight” or “something that makes things visible”. A good knowledge base should have the capacity of handling this ambiguity. To deal with these two issues, we propose to discover knowledge in two stages: topic clustering and frequent pattern mining (FPM). The purpose of clustering is to group raw topics from a topic model (LDA or AKL) into clusters. Each cluster contains semantically related topics likely to indicate the same real-world aspect. We then mine knowledge from each cluster using an FPM technique. Note that the multiple senses of a term can be distinguished by the semantic meanings represented by the topics in different clusters. For clustering, we tried k-means and kmedoids (Kaufman and Rousseeuw, 1990), and found that k-medoids performs slightly better. One possible reason is that k-means is more sensitive to outliers. In our topic clustering, each data point is a topic represented by its top terms (with their probabilities normalized). The distance between two data points is measured by symmetrised KL-Divergence. 4.2 Frequent Pattern Mining Given topics within each cluster, this step finds sets of terms that appear together in multiple topics, i.e., shared terms among similar topics across multiple domains. Terms in such a set are likely to belong to the same aspect. To find such sets of terms within each cluster, we use frequent pattern mining (FPM) (Han et al., 2007), which is suited for the task. The probability of each term is ignored in FPM. FPM is stated as follows: Given a set of transactions T, where each transaction ti ∈T is a set of items from a global item set I, i.e., ti ∈I. In our context, ti is the topic vector comprising the top terms of a topic (no probability attached). T is the collection of all topics within a cluster and I is the set of all terms in T. The goal of FPM is to find all patterns that satisfy some user-specified frequency threshold (also called minimum support count), which is the minimum number of times that a pattern should appear in T. Such patterns are called frequent patterns. In our context, a pattern is a set of terms which have appeared multiple times in the topics within a cluster. Such patterns compose our knowledge base as shown below. 4.3 Knowledge Representation As the knowledge is extracted from each cluster individually, we represent our knowledge base as a set of clusters, where each cluster consists of a set of frequent 2-patterns mined using FPM, e.g., Cluster 1: {battery, life}, {battery, hour}, {battery, long}, {charge, long} Cluster 2: {service, support}, {support, customer}, {service, customer} Using two terms in a set is sufficient to cover the semantic relationship of the terms belonging to the same aspect. Longer patterns tend to contain more errors since some terms in a set may not belong to the same aspect as others. Such partial errors hurt performance in the downstream model. 5 AKL: Using the Learned Knowledge We now present the proposed topic model AKL, which is able to use the automatically learned knowledge to improve aspect extraction. 5.1 Plate Notation Differing from most topic models based on topicterm distribution, AKL incorporates a latent cluster variable c to connect topics and terms. The plate notation of AKL is shown in Figure 2. The inputs of the model are M documents, T topics and C clusters. Each document m has Nm terms. We model distribution P(cluster|topic) as ψ and distribution P(term|topic, cluster) as ϕ with Dirichlet priors β and γ respectively. 
P(topic|document) is modeled by θ with a Dirichlet prior α. The terms in each document are assumed to be generated by first sampling a topic z, then a cluster c given topic z, and finally a term w given topic z and cluster c.
Figure 2: Plate notation for AKL.
This plate notation of AKL and its associated generative process are similar to those of MC-LDA (Chen et al., 2013b). However, there are three key differences. 1. Our knowledge is automatically mined and may contain errors (or noise), while the prior knowledge for MC-LDA is manually provided and assumed to be correct. As we will see in Section 6, using our knowledge, MC-LDA does not generate as coherent aspects as AKL. 2. Our knowledge is represented as clusters. Each cluster contains a set of frequent 2-patterns with semantically correlated terms, which differ from the must-sets used in MC-LDA. 3. Most importantly, due to the use of this new form of knowledge, AKL's inference mechanism (Gibbs sampler) is entirely different from that of MC-LDA (Section 5.2), which results in superior performance (Section 6). Note that the inference mechanism and the prior knowledge cannot be reflected in the plate notation for AKL in Figure 2. In short, our modeling contributions are (1) the capability of handling more expressive knowledge in the form of clusters, and (2) a novel Gibbs sampler to deal with inappropriate knowledge.
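To make the generative story above concrete, here is a minimal sketch that samples one document under AKL's process (topic z, then cluster c given z, then term w given z and c). The parameter arrays theta_m, psi and phi are hypothetical placeholders standing in for the model's estimated distributions.

import numpy as np

rng = np.random.default_rng(0)

def generate_document(theta_m, psi, phi, n_terms):
    """Generate one document under AKL's generative story.

    theta_m: (T,) topic distribution of document m.
    psi:     (T, C) cluster distribution per topic, P(cluster|topic).
    phi:     (T, C, V) term distribution per (topic, cluster).
    """
    words = []
    for _ in range(n_terms):
        z = rng.choice(len(theta_m), p=theta_m)    # sample topic z
        c = rng.choice(psi.shape[1], p=psi[z])     # sample cluster c given z
        w = rng.choice(phi.shape[2], p=phi[z, c])  # sample term w given z and c
        words.append((z, c, w))
    return words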
For the second factor, if cluster ci and topic zi agree, the intuition is that the terms in ci (the union of all frequent 2-patterns of ci) should appear as top terms under zi (i.e., ranked near the top according to the term probability under zi). We define the agreement using the symmetrised KL-divergence between the two distributions (DISTc and DISTz) corresponding to ci and zi respectively. As there is no prior preference among the terms of ci, we use the uniform distribution over all terms in ci for DISTc. For DISTz, as only the top 20 terms under zi are usually reliable, we use these top terms with their probabilities (re-normalized) to represent the topic. Note that a smoothing probability (i.e., a very small value) is also given to every term when calculating the KL-divergence. Given DISTc and DISTz, the agreement is computed as:

\[
\text{Agreement}(c, z) = \frac{1}{KL(DIST_c, DIST_z)} \tag{2}
\]

The rationale of Equation 2 is that less divergence between DISTc and DISTz implies more agreement between ci and zi.

We further employ the Generalized Pólya urn (GPU) model (Mahmoud, 2008), which has been shown to be effective in leveraging semantically related words (Chen et al., 2013a; Chen et al., 2013b; Mimno et al., 2011). The GPU model here basically states that assigning topic zi and cluster ci to term wi will not only increase the probability of connecting zi and ci with wi, but will also make it more likely to associate zi and ci with a term w' that shares a 2-pattern with wi in ci. The amount of probability increase is determined by the matrix Ac,w',w defined as:

\[
A_{c,w',w} =
\begin{cases}
1, & \text{if } w = w' \\
\sigma, & \text{if } (w, w') \in c,\ w \neq w' \\
0, & \text{otherwise}
\end{cases} \tag{3}
\]

where the value 1 controls the probability increase of w by seeing w itself, and σ controls the probability increase of w' by seeing w. Please refer to (Chen et al., 2013b) for more details.

Putting Equations 1, 2 and 3 together in a blocked Gibbs sampler, we define the following sampling distribution, which provides helpful guidance in determining the usefulness of the prior knowledge and in selecting the semantically coherent topic:

\[
P(z_i = t, c_i = c \mid z^{-i}, c^{-i}, w, \alpha, \beta, \gamma, A) \propto
\sum_{(w, w') \in c} \text{Co-Doc}(w, w') \times \text{Agreement}(c, t)
\times \frac{n^{-i}_{m,t} + \alpha}{\sum_{t'=1}^{T} \left(n^{-i}_{m,t'} + \alpha\right)}
\times \frac{\sum_{w'=1}^{V} \sum_{v'=1}^{V} A_{c,v',w'} \times n^{-i}_{t,c,v'} + \beta}{\sum_{c'=1}^{C} \left(\sum_{w'=1}^{V} \sum_{v'=1}^{V} A_{c',v',w'} \times n^{-i}_{t,c',v'} + \beta\right)}
\times \frac{\sum_{w'=1}^{V} A_{c,w',w_i} \times n^{-i}_{t,c,w'} + \gamma}{\sum_{v'=1}^{V} \left(\sum_{w'=1}^{V} A_{c,w',v'} \times n^{-i}_{t,c,w'} + \gamma\right)} \tag{4}
\]

where n^{-i} denotes the count excluding the current assignment of zi and ci, i.e., z^{-i} and c^{-i}; nm,t denotes the number of times that topic t was assigned to terms in document m; nt,c denotes the number of times that cluster c occurs under topic t; and nt,c,v refers to the number of times that term v appears in cluster c under topic t. α, β and γ are predefined Dirichlet hyperparameters.

Note that although the above Gibbs sampler is able to distinguish useful knowledge from wrong knowledge, it is possible that no knowledge cluster corroborates a particular term. For every term w, apart from its knowledge clusters, we therefore also add a singleton cluster for w, i.e., a cluster with the single pattern {w, w}. When no knowledge cluster is applicable, this singleton cluster is used. As a singleton cluster does not contain any knowledge information but only the word itself, Equations 1 and 2 cannot be computed for it. For singleton clusters, we set the values of these two equations to the averages of the corresponding values over all non-singleton knowledge clusters.
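The following is a minimal, unoptimized sketch of one blocked sampling step for a single token, in the spirit of Equation 4. It assumes the document-topic counts, the GPU-weighted cluster and word masses, and the per-cluster Co-Doc and Agreement values have already been computed and maintained; folding the A matrix of Equation 3 into the count masses (by adding σ-weighted counts for words sharing a 2-pattern) is a common implementation shortcut for GPU samplers, and all names here are our own illustrative choices, not the authors' implementation.

```python
import random

def sample_z_c(w_i, m, candidate_clusters, n_mt, n_tc_mass, n_tcw_mass,
               co_doc, agreement, alpha, beta, gamma, rng=random):
    """One blocked Gibbs step for token w_i in document m (a sketch of Equation 4).

    n_mt[m][t]           -- topic counts in document m (current token excluded)
    n_tc_mass[t][c]      -- GPU-weighted mass of cluster c under topic t
    n_tcw_mass[t][c][w]  -- GPU-weighted mass of word w in cluster c under topic t
    co_doc[c]            -- sum of Co-Doc over the 2-patterns of cluster c (Equation 1)
    agreement[(c, t)]    -- Agreement(c, t) of Equation 2
    """
    T = len(n_mt[m])
    C = len(n_tc_mass[0])
    V = len(n_tcw_mass[0][0])
    doc_norm = sum(n_mt[m]) + T * alpha

    pairs, scores = [], []
    for t in range(T):
        doc_factor = (n_mt[m][t] + alpha) / doc_norm
        cluster_norm = sum(n_tc_mass[t]) + C * beta
        for c in candidate_clusters:
            word_norm = sum(n_tcw_mass[t][c]) + V * gamma
            score = (co_doc[c] * agreement[(c, t)]
                     * doc_factor
                     * (n_tc_mass[t][c] + beta) / cluster_norm
                     * (n_tcw_mass[t][c][w_i] + gamma) / word_norm)
            pairs.append((t, c))
            scores.append(score)

    # Draw one (topic, cluster) pair proportionally to the unnormalized scores.
    r = rng.random() * sum(scores)
    acc = 0.0
    for pair, s in zip(pairs, scores):
        acc += s
        if acc >= r:
            return pair
    return pairs[-1]
```

In a full sampler this step would be preceded by decrementing the counts for the token's current assignment and followed by incrementing them (with GPU weighting) for the newly sampled pair.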
6 Experiments

This section evaluates and compares the proposed AKL model with three baseline models: LDA, MC-LDA, and GK-LDA. LDA (Blei et al., 2003) is the most popular unsupervised topic model. MC-LDA (Chen et al., 2013b) is a recent knowledge-based model for aspect extraction. GK-LDA (Chen et al., 2013a) handles wrong knowledge by setting prior weights using the ratio of word probabilities. Our automatically extracted knowledge is provided to these models. Note that the cannot-set of MC-LDA is not used in AKL.

6.1 Experimental Settings

Dataset. We created a large dataset containing reviews from 36 product domains or types from Amazon.com. The product domain names are listed in Table 1. Each domain contains 1,000 reviews, giving us 36 domain corpora. We have made the dataset publicly available at the website of the first author.

Table 1: List of 36 domain names: Amplifier, Blu-Ray Player, Camera, CD Player, Cell Phone, Computer, DVD Player, GPS, Hard Drive, Headphone, Home Theater System, Keyboard, Kindle, Laptop, Media Player, Microphone, Monitor, Mouse, MP3 Player, Network Adapter, Printer, Projector, Radar Detector, Remote Control, Scanner, Speaker, Subwoofer, Tablet, Telephone, TV, Video Player, Video Recorder, Watch, Webcam, Wireless Router, Xbox.

Pre-processing. We followed (Chen et al., 2013b) in employing standard pre-processing such as lemmatization and stopword removal. For a fair comparison, we also treat each sentence as a document, as in (Chen et al., 2013a; Chen et al., 2013b).

Parameter Settings. For all models, posterior estimates of latent variables were taken with a sampling lag of 20 iterations in the post burn-in phase (the first 200 iterations are used for burn-in), with 2,000 iterations in total. The model parameters were tuned on the development set in our pilot experiments and set to α = 1, β = 0.1, T = 15, and σ = 0.2. Furthermore, for each cluster, γ is set proportional to the number of terms in it. The other parameters for MC-LDA and GK-LDA were set as in their original papers. For the parameters of AKL, we used the top 15 terms of each topic in the clustering phase. The number of clusters is set to the number of domains. We will test the sensitivity of these clustering parameters in Section 6.4. The minimum support count for frequent pattern mining was set empirically to min(5, 0.4 × #T), where #T is the number of transactions (i.e., the number of topics from all domains) in a cluster.

Test Settings. We use two test settings:
1. (Sections 6.2, 6.3 and 6.4) Test on the same corpora as those used in learning the prior knowledge. This is meaningful as the learning phase is automatic and unsupervised (Figure 1). We call this self-learning-and-improvement.
2. (Section 6.5) Test on new/unseen domain corpora after knowledge learning.

6.2 Topic Coherence

This sub-section evaluates the topics/aspects generated by each model based on Topic Coherence (Mimno et al., 2011) in test setting 1. Traditionally, topic models have been evaluated using perplexity. However, perplexity on the held-out test set does not reflect the semantic coherence of topics and may be contrary to human judgments (Chang et al., 2009).

Figure 3: Average Topic Coherence of each model at different learning iterations (Iteration 0 is equivalent to LDA).

Instead, the metric Topic Coherence has been shown in (Mimno
et al., 2011) to correlate well with human judgments. Recently, it has become a standard practice to use Topic Coherence for evaluation of topic models (Arora et al., 2013). A higher Topic Coherence value indicates a better topic interpretability, i.e., semantically more coherent topics. Figure 3 shows the average Topic Coherence of each model using knowledge learned at different learning iterations (Figure 1). For MC-LDA or GK-LDA, this is done by replacing AKL in lines 7 and 19 of Figure 1 with MC-LDA or GK-LDA. Each value is the average over all 36 domains. From Figure 3, we can observe the followings: 1. AKL performs the best with the highest Topic Coherence values at all iterations. It is actually the best in all 36 domains. These show that AKL finds more interpretable topics than the baselines. Its values stabilize after iteration 3. 2. Both GK-LDA and MC-LDA perform slightly better than LDA in iterations 1 and 2. MCLDA does not handle wrong knowledge. This shows that the mined knowledge is of good quality. Although GK-LDA uses large word probability differences under a topic to detect wrong lexical knowledge, it is not as effective as AKL. The reason is that as the lexical knowledge is from general dictionaries rather than mined from relevant domain data, the words in a wrong piece of knowledge usually have a very large probability difference under a topic. However, our knowledge is mined from top words in related topics including topics from the current domain. The words in a piece of incorrect (or correct) knowledge often have similar probabilities under a topic. The proposed dynamic knowledge adjusting mechanism in AKL is superior. Paired t-test shows that AKL outperforms all baselines significantly (p < 0.0001). 6.3 User Evaluation As our objective is to discover more coherent aspects, we recruited two human judges. Here we also use the test setting 1. Each topic is annotated as coherent if the judge feels that most of its top 353 0.6 0.7 0.8 0.9 1.0 Camera Computer Headphone GPS Precision @ 5 AKL GK-LDA MC-LDA LDA 0.6 0.7 0.8 0.9 1.0 Camera Computer Headphone GPS Precision @ 10 AKL GK-LDA MC-LDA LDA Figure 4: Average Precision@5 (Left) and Precision@10 (Right) of coherent topics from four models in each domain. (Headphone has a lot of overlapping topics in other domains while GPS has little.) terms coherently represent a real-world product aspect; otherwise incoherent. For a coherent topic, each top term is annotated as correct if it reflects the aspect represented by the topic; otherwise incorrect. We labeled the topics of each model at learning iteration 1 where the same pieces of knowledge (extracted from LDA topics at learning iteration 0) are provided to each model. After learning iteration 1, the gap between AKL and the baseline models tends to widen. To be consistent, the results later in Sections 6.4 and 6.5 also show each model at learning iteration 1. We also notice that after a few learning iterations, the topics from AKL model tend to have some resemblance across domains. We found that AKL with 2 learning iterations achieved the best topics. Note that LDA cannot use any prior knowledge. We manually labeled results from four domains, i.e., Camera, Computer, Headphone, and GPS. We chose Headphone as it has a lot of overlapping of topics with other domains because many electronic products use headphone. GPS was chosen because it does not have much topic overlapping with other domains as its aspects are mostly about Navigation and Maps. 
Domains Camera and Computer lay in between. We want to see how domain overlapping influences the performance of AKL. Cohen’s Kappa scores for annotator agreement are 0.918 (for topics) and 0.872 (for terms). To measure the results, we compute Precision@n (or p@n) based on the annotations, which was also used in (Chen et al., 2013b, Mukherjee and Liu, 2012). Figure 4 shows the precision@n results for n = 5 and 10. We can see that AKL makes improvements in all 4 domains. The improvement varies in domains with the most increase in Headphone and the least in GPS as Headphone overlaps more with other domains than GPS. Note that if a domain shares aspects with many other domains, its model should benefit more; otherwise, it is reasonable to expect lesser improvements. For the baselines, GK-LDA and MC-LDA perform similarly to LDA with minor variations, all of which are inferior to AKL. AKL’s improvements over other models are statistically significant based on paired t-test (p < 0.002). In terms of the number of coherent topics, AKL discovers one more coherent topic than LDA in Computer and one more coherent topic than GKLDA and MC-LDA in Headphone. For the other domains, the numbers of coherent topics are the same for all models. Table 2 shows an example aspect (battery) and its top 10 terms produced by AKL and LDA for each domain to give a flavor of the kind of improvements made by AKL. The results for GKLDA and MC-LDA are about the same as LDA (see also Figure 4). Table 2 focuses on the aspects generated by AKL and LDA. From Table 2, we can see that AKL discovers more correct and meaningful aspect terms at the top. Note that those terms marked in red and italicized are errors. Apart from Table 2, many aspects are dramatically improved by AKL, including some commonly shared aspects such as Price, Screen, and Customer Service. 6.4 Sensitivity to Clustering Parameters This sub-section investigates the sensitivity of the clustering parameters of AKL (again in test setting 1). The top sub-figure in Figure 5 shows the average Topic Coherence values versus the top k terms per topic used in topic clustering (Section 4.1). The number of clusters is set to the number of domains (see below). We can observe that using k = 15 top terms gives the highest value. This is intuitive as too few (or too many) top terms may generate insufficient (or noisy) knowledge. The bottom sub-figure in Figure 5 shows the average Topic Coherence given different number 354 Camera Computer Headphone GPS AKL LDA AKL LDA AKL LDA AKL LDA battery battery battery battery hour long battery trip life card hour cable long battery hour battery hour memory life speaker battery hour long hour long life long dvi life comfortable model mile charge usb speaker sound charge easy life long extra hour sound hour amp uncomfortable charge life minute minute charge connection uncomfortable headset trip destination charger sd dvi life comfortable life purchase phone short extra tv hdmus period money older charge aa device hdmus tv output hard compass mode Table 2: Example aspect Battery from AKL and LDA in each domain. Errors are italicized in red. -1510 -1490 -1470 -1450 -1430 5 10 15 20 25 30 Topic Coherence #Top Terms for Clustering -1510 -1490 -1470 -1450 -1430 20 30 40 50 60 70 Topic Coherence #Clusters Figure 5: Average topic coherence of AKL versus #top k terms (Top) and #clusters (Bottom). -1490 -1480 -1470 -1460 -1450 AKL GK-LDA MC-LDA LDA Topic Coherence Figure 6: Average topic coherence of each model tested on new/unseen domain. 
of clusters. We fix the number of top terms per topic to 15 as it yields the best result (see the top sub-figure in Figure 5). We can see that the performance is not very sensitive to the number of clusters. The model performs similarly for 30 to 50 clusters, with lower Topic Coherence for less than 30 or more than 50 clusters. The significance test indicates that using 30, 40, and 50 clusters, AKL achieved significant improvements over all baseline models (p < 0.0001). With more domains, we should expect a larger number of clusters. However, it is difficult to obtain the optimal number of clusters. Thus, we empirically set the number of clusters to the number of domains in our experiments. Note that the number of clusters (C) is expected to be larger than the number of topics in one domain (T) because C is for all domains while T is for one particular domain. 6.5 Test on New Domains We now evaluate AKL in test setting 2, i.e., the automatically extracted knowledge K (Figure 1) is applied in new/unseen domains other than those in domains DL used in knowledge learning. The aim is to see how K can help modeling in an unseen domain. In this set of experiments, each domain is tested by using the learned knowledge from the rest 35 domains. Figure 6 shows the average Topic Coherence of each model. The values are also averaged over the 36 tested domains. We can see that AKL achieves the highest Topic Coherence value while LDA has the lowest. The improvements of AKL over all baseline models are significant with p < 0.0001. 7 Conclusions This paper proposed an advanced aspect extraction framework which can learn knowledge automatically from a large number of review corpora and exploit the learned knowledge in extracting more coherent aspects. It first proposed a technique to learn knowledge automatically by clustering and FPM. Then a new topic model with an advanced inference mechanism was proposed to exploit the learned knowledge in a fault-tolerant manner. Experimental results using review corpora from 36 domains showed that the proposed method outperforms state-of-the-art methods significantly. Acknowledgments This work was supported in part by a grant from National Science Foundation (NSF) under grant no. IIS-1111092. 355 References David Andrzejewski, Xiaojin Zhu, and Mark Craven. 2009. Incorporating domain knowledge into topic modeling via Dirichlet Forest priors. In Proceedings of ICML, pages 25–32. David Andrzejewski, Xiaojin Zhu, Mark Craven, and Benjamin Recht. 2011. A framework for incorporating general domain knowledge into latent Dirichlet allocation using first-order logic. In Proceedings of IJCAI, pages 1171–1177. Sanjeev Arora, Rong Ge, Yonatan Halpern, David Mimno, Ankur Moitra, David Sontag, Yichen Wu, and Michael Zhu. 2013. A Practical Algorithm for Topic Modeling with Provable Guarantees. In Proceedings of ICML, pages 280–288. David M. Blei and Jon D McAuliffe. 2007. Supervised Topic Models. In Proceedings of NIPS, pages 121– 128. David M. Blei, Andrew Y. Ng, and Michael I. Jordan. 2003. Latent Dirichlet Allocation. Journal of Machine Learning Research, 3:993–1022. S R K Branavan, Harr Chen, Jacob Eisenstein, and Regina Barzilay. 2008. Learning Document-Level Semantic Properties from Free-Text Annotations. In Proceedings of ACL, pages 263–271. Samuel Brody and Noemie Elhadad. 2010. An unsupervised aspect-sentiment model for online reviews. In Proceedings of NAACL, pages 804–812. Giuseppe Carenini, Raymond T Ng, and Ed Zwart. 2005. Extracting knowledge from evaluative text. 
In Proceedings of K-CAP, pages 11–18. Jonathan Chang, Jordan Boyd-Graber, Wang Chong, Sean Gerrish, and David Blei, M. 2009. Reading Tea Leaves: How Humans Interpret Topic Models. In Proceedings of NIPS, pages 288–296. Zhiyuan Chen, Arjun Mukherjee, Bing Liu, Meichun Hsu, Malu Castellanos, and Riddhiman Ghosh. 2013a. Discovering Coherent Topics Using General Knowledge. In Proceedings of CIKM, pages 209– 218. Zhiyuan Chen, Arjun Mukherjee, Bing Liu, Meichun Hsu, Malu Castellanos, and Riddhiman Ghosh. 2013b. Exploiting Domain Knowledge in Aspect Extraction. In Proceedings of EMNLP, pages 1655– 1667. Zhiyuan Chen, Arjun Mukherjee, Bing Liu, Meichun Hsu, Malu Castellanos, and Riddhiman Ghosh. 2013c. Leveraging Multi-Domain Prior Knowledge in Topic Models. In Proceedings of IJCAI, pages 2071–2077. Yejin Choi and Claire Cardie. 2010. Hierarchical Sequential Learning for Extracting Opinions and their Attributes. In Proceedings of ACL, pages 269–274. Jacob Eisenstein, Brendan O’Connor, Noah A Smith, and Eric P Xing. 2010. A Latent Variable Model for Geographic Lexical Variation. In Proceedings of EMNLP, pages 1277–1287. Lei Fang and Minlie Huang. 2012. Fine Granular Aspect Analysis using Latent Structural Models. In Proceedings of ACL, pages 333–337. Honglei Guo, Huijia Zhu, Zhili Guo, Xiaoxun Zhang, and Zhong Su. 2009. Product feature categorization with multilevel latent semantic association. In Proceedings of CIKM, pages 1087–1096. Jiawei Han, Hong Cheng, Dong Xin, and Xifeng Yan. 2007. Frequent pattern mining: current status and future directions. Data Mining and Knowledge Discovery, 15(1):55–86. Yulan He, Chenghua Lin, and Harith Alani. 2011. Automatically Extracting Polarity-Bearing Topics for Cross-Domain Sentiment Classification. In Proceedings of ACL, pages 123–131. Thomas Hofmann. 1999. Probabilistic Latent Semantic Analysis. In Proceedings of UAI, pages 289–296. Minqing Hu and Bing Liu. 2004. Mining and Summarizing Customer Reviews. In Proceedings of KDD, pages 168–177. Yuening Hu, Jordan Boyd-Graber, and Brianna Satinoff. 2011. Interactive Topic Modeling. In Proceedings of ACL, pages 248–257. Jagadeesh Jagarlamudi, Hal Daum´e III, and Raghavendra Udupa. 2012. Incorporating Lexical Priors into Topic Models. In Proceedings of EACL, pages 204– 213. Niklas Jakob and Iryna Gurevych. 2010. Extracting Opinion Targets in a Single- and Cross-Domain Setting with Conditional Random Fields. In Proceedings of EMNLP, pages 1035–1045. Yohan Jo and Alice H. Oh. 2011. Aspect and sentiment unification model for online review analysis. In Proceedings of WSDM, pages 815–824. Jeon-hyung Kang, Jun Ma, and Yan Liu. 2012. Transfer Topic Modeling with Ease and Scalability. In Proceedings of SDM, pages 564–575. L Kaufman and P J Rousseeuw. 1990. Finding groups in data: an introduction to cluster analysis. John Wiley and Sons. Suin Kim, Jianwen Zhang, Zheng Chen, Alice Oh, and Shixia Liu. 2013. A Hierarchical Aspect-Sentiment Model for Online Reviews. In Proceedings of AAAI, pages 526–533. Nozomi Kobayashi, Kentaro Inui, and Yuji Matsumoto. 2007. Extracting Aspect-Evaluation and Aspect-of Relations in Opinion Mining. In Proceedings of EMNLP, pages 1065–1074. 356 Lun-Wei Ku, Yu-Ting Liang, and Hsin-Hsi Chen. 2006. Opinion Extraction, Summarization and Tracking in News and Blog Corpora. In Proceedings of AAAI-CAAW, pages 100–107. Angeliki Lazaridou, Ivan Titov, and Caroline Sporleder. 2013. A Bayesian Model for Joint Unsupervised Induction of Sentiment, Aspect and Discourse Representations. 
In Proceedings of ACL, pages 1630–1639. Fangtao Li, Chao Han, Minlie Huang, Xiaoyan Zhu, Yingju Xia, Shu Zhang, and Hao Yu. 2010. Structure-Aware Review Mining and Summarization. In Proceedings of COLING, pages 653–661. Peng Li, Yinglin Wang, Wei Gao, and Jing Jiang. 2011. Generating Aspect-oriented Multi-Document Summarization with Event-aspect model. In Proceedings of EMNLP, pages 1137–1146. Chenghua Lin and Yulan He. 2009. Joint sentiment/topic model for sentiment analysis. In Proceedings of CIKM, pages 375–384. Kang Liu, Liheng Xu, and Jun Zhao. 2013. Syntactic Patterns versus Word Alignment: Extracting Opinion Targets from Online Reviews. In Proceedings of ACL, pages 1754–1763. Bing Liu. 2012. Sentiment Analysis and Opinion Mining. Morgan & Claypool Publishers. Yue Lu and Chengxiang Zhai. 2008. Opinion integration through semi-supervised topic modeling. In Proceedings of WWW, pages 121–130. Yue Lu, ChengXiang Zhai, and Neel Sundaresan. 2009. Rated aspect summarization of short comments. In Proceedings of WWW, pages 131–140. Bin Lu, Myle Ott, Claire Cardie, and Benjamin K Tsou. 2011. Multi-aspect Sentiment Analysis with Topic Models. In Proceedings of ICDM Workshops, pages 81–88. Yue Lu, Hongning Wang, ChengXiang Zhai, and Dan Roth. 2012. Unsupervised discovery of opposing opinion networks from forum discussions. In Proceedings of CIKM, pages 1642–1646. Hosam Mahmoud. 2008. Polya Urn Models. Chapman & Hall/CRC Texts in Statistical Science. Qiaozhu Mei, Xu Ling, Matthew Wondra, Hang Su, and ChengXiang Zhai. 2007. Topic sentiment mixture: modeling facets and opinions in weblogs. In Proceedings of WWW, pages 171–180. David Mimno, Hanna M. Wallach, Edmund Talley, Miriam Leenders, and Andrew McCallum. 2011. Optimizing semantic coherence in topic models. In Proceedings of EMNLP, pages 262–272. Samaneh Moghaddam and Martin Ester. 2013. The FLDA Model for Aspect-based Opinion Mining: Addressing the Cold Start Problem. In Proceedings of WWW, pages 909–918. Arjun Mukherjee and Bing Liu. 2012. Aspect Extraction through Semi-Supervised Modeling. In Proceedings of ACL, pages 339–348. Sinno Jialin Pan and Qiang Yang. 2010. A Survey on Transfer Learning. IEEE Trans. Knowl. Data Eng., 22(10):1345–1359. Bo Pang and Lillian Lee. 2008. Opinion mining and sentiment analysis. Foundations and Trends in Information Retrieval, 2(1-2):1–135. James Petterson, Alex Smola, Tib´erio Caetano, Wray Buntine, and Shravan Narayanamurthy. 2010. Word Features for Latent Dirichlet Allocation. In Proceedings of NIPS, pages 1921–1929. AM Popescu and Oren Etzioni. 2005. Extracting product features and opinions from reviews. In Proceedings of HLT, pages 339–346. Guang Qiu, Bing Liu, Jiajun Bu, and Chun Chen. 2011. Opinion Word Expansion and Target Extraction through Double Propagation. Computational Linguistics, 37(1):9–27. Daniel Ramage, David Hall, Ramesh Nallapati, and Christopher D. Manning. 2009. Labeled LDA: a supervised topic model for credit attribution in multilabeled corpora. In Proceedings of EMNLP, pages 248–256. Michal Rosen-Zvi, Chaitanya Chemudugunta, Thomas Griffiths, Padhraic Smyth, and Mark Steyvers. 2010. Learning author-topic models from text corpora. ACM Transactions on Information Systems, 28(1):1–38. Christina Sauper and Regina Barzilay. 2013. Automatic Aggregation by Joint Modeling of Aspects and Values. J. Artif. Intell. Res. (JAIR), 46:89–127. Swapna Somasundaran and J Wiebe. 2009. Recognizing stances in online debates. In Proceedings of ACL, pages 226–234. Ivan Titov and Ryan McDonald. 2008. 
A joint model of text and aspect ratings for sentiment summarization. In Proceedings of ACL, pages 308–316. Hongning Wang, Yue Lu, and Chengxiang Zhai. 2010. Latent aspect rating analysis on review text data: a rating regression approach. In Proceedings of KDD, pages 783–792. Yuanbin Wu, Qi Zhang, Xuanjing Huang, and Lide Wu. 2009. Phrase dependency parsing for opinion mining. In Proceedings of EMNLP, pages 1533–1541. Liheng Xu, Kang Liu, Siwei Lai, Yubo Chen, and Jun Zhao. 2013. Mining Opinion Words and Opinion Targets in a Two-Stage Framework. In Proceedings of ACL, pages 1764–1773. GR Xue, Wenyuan Dai, Q Yang, and Y Yu. 2008. Topic-bridged PLSA for cross-domain text classification. In Proceedings of SIGIR, pages 627–634. 357 Bishan Yang and Claire Cardie. 2013. Joint Inference for Fine-grained Opinion Extraction. In Proceedings of ACL, pages 1640–1649. Shuang Hong Yang, Steven P. Crain, and Hongyuan Zha. 2011. Bridging the language gap: Topic adaptation for documents with different technicality. In Proceedings of AISTATS, pages 823–831. Jianxing Yu, Zheng-Jun Zha, Meng Wang, and TatSeng Chua. 2011. Aspect Ranking: Identifying Important Product Aspects from Online Consumer Reviews. In Proceedings of ACL, pages 1496–1505. Zhongwu Zhai, Bing Liu, Hua Xu, and Peifa Jia. 2011. Constrained LDA for grouping product features in opinion mining. In Proceedings of PAKDD, pages 448–459. Wayne Xin Zhao, Jing Jiang, Hongfei Yan, and Xiaoming Li. 2010. Jointly Modeling Aspects and Opinions with a MaxEnt-LDA Hybrid. In Proceedings of EMNLP, pages 56–65. Yanyan Zhao, Bing Qin, and Ting Liu. 2012. Collocation polarity disambiguation using web-based pseudo contexts. In Proceedings of EMNLPCoNLL, pages 160–170. Xinjie Zhou, Xiaojun Wan, and Jianguo Xiao. 2013. Collective Opinion Target Extraction in Chinese Microblogs. In Proceedings of EMNLP, pages 1840– 1850. Li Zhuang, Feng Jing, and Xiao-Yan Zhu. 2006. Movie review mining and summarization. In Proceedings of CIKM, pages 43–50. 358
Proceedings of the 52nd Annual Meeting of the Association for Computational Linguistics, pages 359–369, Baltimore, Maryland, USA, June 23-25 2014. c⃝2014 Association for Computational Linguistics Anchors Regularized: Adding Robustness and Extensibility to Scalable Topic-Modeling Algorithms Thang Nguyen iSchool and UMIACS, University of Maryland and National Library of Medicine, National Institutes of Health [email protected] Yuening Hu Computer Science University of Maryland [email protected] Jordan Boyd-Graber iSchool and UMIACS University of Maryland [email protected] Abstract Spectral methods offer scalable alternatives to Markov chain Monte Carlo and expectation maximization. However, these new methods lack the rich priors associated with probabilistic models. We examine Arora et al.’s anchor words algorithm for topic modeling and develop new, regularized algorithms that not only mathematically resemble Gaussian and Dirichlet priors but also improve the interpretability of topic models. Our new regularization approaches make these efficient algorithms more flexible; we also show that these methods can be combined with informed priors. 1 Introduction Topic models are of practical and theoretical interest. Practically, they have been used to understand political perspective (Paul and Girju, 2010), improve machine translation (Eidelman et al., 2012), reveal literary trends (Jockers, 2013), and understand scientific discourse (Hall et al., 2008). Theoretically, their latent variable formulation has served as a foundation for more robust models of other linguistic phenomena (Brody and Lapata, 2009). Modern topic models are formulated as a latent variable model. Like hidden Markov models (Rabiner, 1989, HMM), each token comes from one of K unknown distributions. Unlike a HMM, topic models assume that each document is an admixture of these hidden components called topics. Posterior inference discovers the hidden variables that best explain a dataset. Typical solutions use MCMC (Griffiths and Steyvers, 2004) or variational EM (Blei et al., 2003), which can be viewed as local optimization: searching for the latent variables that maximize the data likelihood. An exciting vein of new research provides provable polynomial-time alternatives. These approaches provide solutions to hidden Markov models (Anandkumar et al., 2012), mixture models (Kannan et al., 2005), and latent variable grammars (Cohen et al., 2013). The key insight is not to directly optimize observation likelihood but to instead discover latent variables that can reconstruct statistics of the assumed generative model. Unlike search-based methods, which can be caught in local minima, these techniques are often guaranteed to find global optima. These general techniques can be improved by making reasonable assumptions about the models. For example, Arora et al. (2012b)’s approach for inference in topic models assume that each topic has a unique “anchor” word (thus, we call this approach anchor). This approach is fast and effective; because it only uses word co-occurrence information, it can scale to much larger datasets than MCMC or EM alternatives. We review the anchor method in Section 2. Despite their advantages, these techniques are not a panacea. They do not accommodate the rich priors that modelers have come to expect. Priors can improve performance (Wallach et al., 2009), provide domain adaptation (Daum´e III, 2007; Finkel and Manning, 2009), and guide models to reflect users’ needs (Hu et al., 2013). 
In Section 3, we regularize the anchor method to trade-off the reconstruction fidelity with the penalty terms that mimic Gaussian and Dirichlet priors. Another shortcoming is that these models have not been scrutinized using standard NLP evaluations. Because these approaches emerged from the theory community, anchor’s evaluations, when present, typically use training reconstruction. In Section 4, we show that our regularized models can generalize to previously unseen data—as measured by held-out likelihood (Blei et al., 2003)—and are more interpretable (Chang et al., 2009; Newman et al., 2010). We also show that our extension to the anchor method enables new applications: for 359 K number of topics V vocabulary size M document frequency: minimum documents an anchor word candidate must appear in Q word co-occurrence matrix Qi,j = p(w1 = i, w2 = j) ¯ Q conditional distribution of Q ¯Qi,j = p(w1 = j | w2 = i) ¯ Qi,· row i of ¯Q A topic matrix, of size V × K Aj,k = p(w = j | z = k) C anchor coefficient of size K × V Cj,k = p(z = k | w = j) S set of anchor word indexes {s1, . . . sK} λ regularization weight Table 1: Notation used. Matrices are in bold (Q, C), sets are in script S example, using an informed priors to discover concepts of interest. Having shown that regularization does improve performance, in Section 5 we explore why. We discuss the trade-off of training data reconstruction with sparsity and why regularized topics are more interpretable. 2 Anchor Words: Scalable Topic Models In this section, we briefly review the anchor method and place it in the context of topic model inference. Once we have established the anchor objective function, in the next section we regularize the objective function. Rethinking Data: Word Co-occurrence Inference in topic models can be viewed as a black box: given a set of documents, discover the topics that best explain the data. The difference between anchor and conventional inference is that while conventional methods take a collection of documents as input, anchor takes word co-occurrence statistics. Given a vocabulary of size V , we represent this joint distribution as Qi,j = p(w1 = i, w2 = j), each cell represents the probability of words appearing together in a document. Like other topic modeling algorithms, the output of the anchor method is the topic word distributions A with size V ∗K, where K is the total number of topics desired, a parameter of the algorithm. The kth column of A will be the topic distribution over all words for topic k, and Aw,k is the probability of observing type w given topic k. Anchors: Topic Representatives The anchor method (Arora et al., 2012a) is based on the separability assumption (Donoho and Stodden, 2003), which assumes that each topic contains at least one namesake “anchor word” that has non-zero probability only in that topic. Intuitively, this means that each topic has unique, specific word that, when used, identifies that topic. For example, while “run”, “base”, “fly”, and “shortstop” are associated with a topic about baseball, only “shortstop” is unambiguous, so it could serve as this topic’s anchor word. Let’s assume that we knew what the anchor words were: a set S that indexes rows in Q. Now consider the conditional distribution of word i, the probability of the rest of the vocabulary given an observation of word i; we represent this as ¯Qi,·, as we can construct this by normalizing the rows of Q. 
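To make the input representation concrete, here is a small sketch of building a word co-occurrence matrix Q from a bag-of-words corpus and row-normalizing it to obtain the conditional distributions Q̄ just described. This is a simplified empirical estimate rather than the exact estimator used in the original anchor implementation, and the function names are ours.

```python
import numpy as np

def cooccurrence_matrix(docs, V):
    """Empirical word co-occurrence Q[i, j] ~ p(w1 = i, w2 = j) from bag-of-words docs.

    docs: list of lists of word ids; V: vocabulary size.
    """
    Q = np.zeros((V, V))
    for doc in docs:
        counts = np.bincount(doc, minlength=V).astype(float)
        n = counts.sum()
        if n < 2:
            continue
        # Counts of ordered pairs of distinct tokens, normalized per document.
        pair_counts = np.outer(counts, counts) - np.diag(counts)
        Q += pair_counts / (n * (n - 1))
    return Q / len(docs)

def row_normalize(Q, eps=1e-12):
    """Q_bar[i, j] = p(w2 = j | w1 = i), obtained by normalizing each row of Q."""
    return Q / (Q.sum(axis=1, keepdims=True) + eps)
```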
For an anchor word sa ∈S, this will look like a topic; ¯Q“shortstop”,· will have high probability for words associated with baseball. The key insight of the anchor algorithm is that the conditional distribution of polysemous nonanchor words can be reconstructed as a linear combination of the conditional distributions of anchor words. For example, ¯Q“fly”,· could be reconstructed by combining the anchor words “insecta”, “boeing”, and “shortshop”. We represent the coefficients of this reconstruction as a matrix C, where Ci,k = p(z = k | w = i). Thus, for any word i, ¯Qi,· ≈ X sk∈S Ci,k ¯Qsk,·. (1) The coefficient matrix is not the usual output of a topic modeling algorithm. The usual output is the probability of a word given a topic. The coefficient matrix C is the probability of a topic given a word. We use Bayes rule to recover the topic distribution p(w = i|z = k) ≡ Ai,k ∝p(z = k|w = i)p(w = i) = Ci,k X j ¯Qi,j (2) where p(w) is the normalizer of Q to obtain ¯Qw,·. The geometric argument for finding the anchor words is one of the key contributions of Arora et al. (2012a) and is beyond the scope of this paper. The algorithms in Section 3 use the anchor selection subroutine unchanged. The difference in our approach is in how we discover the anchor coefficients C. From Anchors to Topics After we have the anchor words, we need to find the coefficients that 360 best reconstruct the data ¯Q (Equation 1). Arora et al. (2012a) chose the C that minimizes the KL divergence between ¯Qi,· and the reconstruction based on the anchor word’s conditional word vectors P sk∈S Ci,k ¯Qsk,·, Ci,· = argminCi,·DKL  ¯Qi,· || X sk∈S Ci,k ¯Qsk,·  . (3) The anchor method is fast, as it only depends on the size of the vocabulary once the cooccurrence statistics Q are obtained. However, it does not support rich priors for topic models, while MCMC (Griffiths and Steyvers, 2004) and variational EM (Blei et al., 2003) methods can. This prevents models from using priors to guide the models to discover particular themes (Zhai et al., 2012), or to encourage sparsity in the models (Yao et al., 2009). In the rest of this paper, we correct this lacuna by adding regularization inspired by Bayesian priors to the anchor algorithm. 3 Adding Regularization In this section, we add regularizers to the anchor objective (Equation 3). In this section, we briefly review regularizers and then add two regularizers, inspired by Gaussian (L2, Section 3.1) and Dirichlet priors (Beta, Section 3.2), to the anchor objective function (Equation 3). Regularization terms are ubiquitous. They typically appear as an additional term in an optimization problem. Instead of optimizing a function just of the data x and parameters β, f(x, β), one optimizes an objective function that includes a regularizer that is only a function of parameters: f(w, β) + r(β). Regularizers are critical in staid methods like linear regression (Ng, 2004), in workhorse methods such as maximum entropy modeling (Dud´ık et al., 2004), and also in emerging fields such as deep learning (Wager et al., 2013). In addition to being useful, regularization terms are appealing theoretically because they often correspond to probabilistic interpretations of parameters. 
For example, if we are seeking the MLE of a probabilistic model parameterized by β, p(x|β), adding a regularization term r(β) = PL i=1 β2 i corresponds to adding a Gaussian prior f(βi) = 1 √ 2πσ2 exp  −β2 i 2σ2  (4) Corpus Train Dev Test Vocab NIPS 1231 247 262 12182 20NEWS 11243 3760 3726 81604 NYT 9255 2012 1959 34940 Table 2: The number of documents in the train, development, and test folds in our three datasets. and maximizing log probability of the posterior (ignoring constant terms) (Rennie, 2003). 3.1 L2 Regularization The simplest form of regularization we can add is L2 regularization. This is similar to assuming that probability of a word given a topic comes from a Gaussian distribution. While the distribution over topics is typically Dirichlet, Dirichlet distributions have been replaced by logistic normals in topic modeling applications (Blei and Lafferty, 2005) and for probabilistic grammars of language (Cohen and Smith, 2009). Augmenting the anchor objective with an L2 penalty yields Ci,· =argminCi,·DKL  ¯Qi,· || X sk∈S Ci,k ¯Qsk,·   + λ∥Ci,· −µi,·∥2 2, (5) where regularization weight λ balances the importance of a high-fidelity reconstruction against the regularization, which encourages the anchor coefficients to be close to the vector µ. When the mean vector µ is zero, this encourages the topic coefficients to be zero. In Section 4.3, we use a non-zero mean µ to encode an informed prior to encourage topics to discover specific concepts. 3.2 Beta Regularization The more common prior for topic models is a Dirichlet prior (Minka, 2000). However, we cannot apply this directly because the optimization is done on a row-by-row basis of the anchor coefficient matrix C, optimizing C for a fixed word w for and all topics. If we want to model the probability of a word, it must be the probability of word w in a topic versus all other words. Modeling this dichotomy (one versus all others in a topic) is possible. The constructive definition of the Dirichlet distribution (Sethuraman, 1994) states that if one has a V -dimensional multinomial θ ∼Dir(α1 . . . αV ), then the marginal distribution 361 of θw follows θw ∼Beta(αw, P i̸=w αi). This is the tool we need to consider the distribution of a single word’s probability. This requires including the topic matrix as part of the objective function. The topic matrix is a linear transformation of the coefficient matrix (Equation 2). The objective for beta regularization becomes Ci,· =argminCi,·DKL  ¯Qi,· || X sk∈S Ci,k ¯Qsk,·   −λ X sk∈S log (Beta(Ai,k; a, b)), (6) where λ again balances reconstruction against the regularization. To ensure the tractability of this algorithm, we enforce a convex regularization function, which requires that a > 1 and b > 1. If we enforce a uniform prior—EBeta(a,b) [Ai,k] = 1 V — and that the mode of the distribution is also 1 V ,1 this gives us the following parametric form for a and b: a = x V + 1, and b = (V −1)x V + 1 (7) for real x greater than zero. 3.3 Initialization and Convergence Equation 5 and Equation 6 are optimized using LBFGS gradient optimization (Galassi et al., 2003). We initialize C randomly from Dir(α) with α = 60 V (Wallach et al., 2009). We update C after optimizing all V rows. The newly updated C replaces the old topic coefficients. We track how much the topic coefficients C change between two consecutive iterations i and i + 1 and represent it as ∆C ≡∥Ci+1 −Ci∥2. We stop optimization when ∆C ≤δ. 
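A minimal sketch of the regularized recovery step for a single word row is shown below. It optimizes the L2-regularized objective of Equation 5 with SciPy's L-BFGS-B on softmax-parameterized coefficients so that each row of C stays on the simplex; this is one convenient stand-in for the GSL-based L-BFGS setup described above, and the beta-regularized objective of Equation 6 can be handled the same way by swapping the penalty term. All names here are illustrative.

```python
import numpy as np
from scipy.optimize import minimize

def recover_row_l2(Qbar_i, Qbar_anchors, lam, mu=None, eps=1e-12):
    """Recover anchor coefficients C_i for one word (Equation 5).

    Qbar_i:       (V,) conditional distribution of word i
    Qbar_anchors: (K, V) conditional distributions of the K anchor words
    lam:          regularization weight; mu: (K,) prior mean (defaults to zeros)
    """
    K = Qbar_anchors.shape[0]
    mu = np.zeros(K) if mu is None else mu

    def objective(theta):
        c = np.exp(theta - theta.max())
        c /= c.sum()                      # softmax keeps C_i on the simplex
        recon = c @ Qbar_anchors + eps
        kl = np.sum(Qbar_i * np.log((Qbar_i + eps) / recon))
        return kl + lam * np.sum((c - mu) ** 2)

    res = minimize(objective, np.zeros(K), method="L-BFGS-B")
    c = np.exp(res.x - res.x.max())
    return c / c.sum()

def coefficients_to_topics(C, Q):
    """Recover A[w, k] = p(w | z = k) from C[w, k] = p(z = k | w) via Bayes rule (Equation 2)."""
    p_w = Q.sum(axis=1, keepdims=True)    # unnormalized p(w)
    A = C * p_w
    return A / A.sum(axis=0, keepdims=True)
```

In a full run, this row-wise recovery would be repeated for every word and iterated until the change in C falls below the threshold δ described above.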
When δ = 0.1, the L2 and unregularized anchor algorithm converges after a single iteration, while beta regularization typically converges after fewer than ten iterations (Figure 4). 4 Regularization Improves Topic Models In this section, we measure the performance of our proposed regularized anchor word algorithms. We will refer to specific algorithms in bold. For example, the original anchor algorithm is anchor. Our L2 regularized variant is anchor-L2, 1For a, b < 1, the expected value is still the uniform distribution but the mode lies at the boundaries of the simplex. This corresponds to a sparse Dirichlet distribution, which our optimization cannot at present model. and our beta regularized variant is anchor-beta. To provide conventional baselines, we also compare our methods against topic models from variational inference (Blei et al., 2003, variational) and MCMC (Griffiths and Steyvers, 2004; McCallum, 2002, MCMC). We apply these inference strategies on three diverse corpora: scientific articles from the Neural Information Processing Society (NIPS),2 Internet newsgroups postings (20NEWS),3 and New York Times editorials (Sandhaus, 2008, NYT). Statistics for the datasets are summarized in Table 2. We split each dataset into a training fold (70%), development fold (15%), and a test fold (15%): the training data are used to fit models; the development set are used to select parameters (anchor threshold M, document prior α, regularization weight λ); and final results are reported on the test fold. We use two evaluation measures, held-out likelihood (Blei et al., 2003, HL) and topic interpretability (Chang et al., 2009; Newman et al., 2010, TI). Held-out likelihood measures how well the model can reconstruct held-out documents that the model has never seen before. This is the typical evaluation for probabilistic models. Topic interpretability is a more recent metric to capture how useful the topics can be to human users attempting to make sense of a large datasets. Held-out likelihood cannot be computed with existing anchor algorithms, so we use the topic distributions learned from anchor as input to a reference variational inference implementation (Blei et al., 2003) to compute HL. This requires an additional parameter, the Dirichlet prior α for the per-document distribution over topics. We select α using grid search on the development set. To compute TI and evaluate topic coherence, we use normalized pairwise mutual information (NPMI) (Lau et al., 2014) over topics’ twenty most probable words. Topic coherence is computed against the NPMI of a reference corpus. For coherence evaluations, we use both intrinsic and extrinsic text collections to compute NPMI. Intrinsic coherence (TI-i) is computed on training and development data at development time and on training and test data at test time. Extrinsic coherence (TI-e) is computed from English Wikipedia articles, with disjoint halves (1.1 million pages each) for distinct development and testing TI-e evaluation. 
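For concreteness, the following sketch shows one common way to score topic interpretability with NPMI over each topic's top words, using document-level co-occurrence counts from a reference corpus; the exact co-occurrence window and weighting in Lau et al. (2014) may differ, and all function names are ours.

```python
import math
from itertools import combinations
from collections import Counter

def build_doc_counts(reference_docs):
    """Document frequencies for single words and word pairs from a reference corpus."""
    df, co_df = Counter(), Counter()
    for doc in reference_docs:               # each doc: iterable of tokens
        terms = set(doc)
        for w in terms:
            df[w] += 1
        for pair in combinations(sorted(terms), 2):
            co_df[pair] += 1
    return df, co_df, len(reference_docs)

def npmi(w1, w2, df, co_df, n_docs, eps=1e-12):
    """Normalized pointwise mutual information under document co-occurrence."""
    p1, p2 = df[w1] / n_docs, df[w2] / n_docs
    p12 = co_df[tuple(sorted((w1, w2)))] / n_docs
    if p12 == 0.0 or p1 == 0.0 or p2 == 0.0:
        return -1.0                           # conventional floor when words never co-occur
    return math.log(p12 / (p1 * p2)) / (-math.log(p12) + eps)

def topic_interpretability(topics_top_words, df, co_df, n_docs):
    """Average pairwise NPMI over each topic's top words, then average over topics."""
    scores = []
    for top_words in topics_top_words:        # e.g., the twenty most probable words
        pairs = list(combinations(top_words, 2))
        scores.append(sum(npmi(a, b, df, co_df, n_docs) for a, b in pairs) / len(pairs))
    return sum(scores) / len(scores)
```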
2http://cs.nyu.edu/˜roweis/data.html 3http://qwone.com/˜jason/20Newsgroups/ 362 G G G G G G G G G G G G G G G G G G G G G G G G G −392 −390 −388 −4720 −4710 −4700 −4690 −4680 −890.0 −887.5 −885.0 −882.5 20NEWS NIPS NYT 100 300 500 700 900 Document Frequency M HL Score G G G G G G G G G G G G G G G G G G G G G G G G G 0.02 0.03 0.04 0.05 0.06 0.07 0.055 0.060 0.065 0.06 0.07 0.08 0.09 0.10 20NEWS NIPS NYT 100 300 500 700 900 Document Frequency M TI−i Score Figure 1: Grid search for document frequency M for our datasets with 20 topics (other configurations not shown) on development data. The performance on both HL and TI score indicate that the unregularized anchor algorithm is very sensitive to M. The M selected here is applied to subsequent models. Topics G 20 40 60 80 Beta L2 G G G G G GGGGGG G G G G GGGG G G G G G G GGGGGG G G G G GGGGG G G G G G GGGGGG G G G G GGGGG G G G G G GGGGGG G G G G GGGGG G G G G G GGGGGG G G G G GGGGG G G G G G GGGGGG G G G G GGGGG −410 −405 −400 −395 −390 −4800 −4750 −4700 −4650 −920 −910 −900 −890 −880 20NEWS NIPS NYT 00.01 0.1 0.5 1 00.01 0.1 0.5 1 Regularization Weight λ HL Score Topics G 20 40 60 80 Beta L2 G G G G G GGGGGG G G G G GGGG G G G G G G GGGGGG G G G G G G GGG G G G G G GGGGGG G G G G G GGGG G G G G G GGGGGG G G G G GGGGG G G G G G G GGGGG G G G G GGGGG G G G G G GGGGGG G G G G GGGGG 0.02 0.04 0.06 0.08 0.10 0.02 0.04 0.06 0.08 0.06 0.09 0.12 0.15 20NEWS NIPS NYT 00.01 0.1 0.5 1 00.01 0.1 0.5 1 Regularization Weight λ TI−i Score Figure 2: Selection of λ based on HL and TI scores on the development set. The value of λ = 0 is equivalent to the original anchor algorithm; regularized versions find better solutions as the regularization weight λ becomes non-zero. 4.1 Grid Search for Parameters on Development Set Anchor Threshold A good anchor word must have a unique, specific context but also explain other words well. A word that appears only once will have a very specific cooccurence pattern but will explain other words’ coocurrence poorly because the observations are so sparse. As discussed in Section 2, the anchor method uses document frequency M as a threshold to only consider words with robust counts. Because all regularizations benefit equally from higher-quality anchor words, we use crossvalidation to select the document frequency cutoff M using the unregularized anchor algorithm. Figure 1 shows the performance of anchor with different M on our three datasets with 20 topics for our two measures HL and TI-i. Regularization Weight Once we select a cutoff M for each combination of dataset, number of topics K and a evaluation measure, we select a regularization weight λ on the development set. Figure 2 shows that beta regularization framework improves topic interpretability TI-i on all datasets and improved the held-out likelihood HL on 20NEWS. The L2 regularization also improves held-out likelihood HL for the 20NEWS corpus (Figure 2). In the interests of space, we do not show the figures for selecting M and λ using TI-e, which is similar to TI-i: anchor-beta improves TI-e score on all datasets, anchor-L2 improves TI-e on 20NEWS and NIPS with 20 topics and NYT with 40 topics. 4.2 Evaluating Regularization With document frequency M and regularization weight λ selected from the development set, we 363 compare the performance of those models on the test set. We also compare with standard implementations of Latent Dirichlet Allocation: Blei’s LDAC (variational) and Mallet (mcmc). We run 100 iterations for LDAC and 5000 iterations for Mallet. 
Each result is averaged over three random runs and appears in Figure 3. The highly-tuned, widelyused implementations uniformly have better heldout likelihood than anchor-based methods, but the much faster anchor methods are often comparable. Within anchor-based methods, L2-regularization offers comparable held-out likelihood as unregularized anchor, while anchor-beta often has better interpretability. Because of the mismatch between the specialized vocabulary of NIPS and the generalpurpose language of Wikipedia, TI-e has a high variance. 4.3 Informed Regularization A frequent use of priors is to add information to a model. This is not possible with the existing anchor method. An informed prior for topic models seeds a topic with words that describe a topic of interest. In a topic model, these seeds will serve as a “magnet”, attracting similar words to the topic (Zhai et al., 2012). We can achieve a similar goal with anchor-L2. Instead of encouraging anchor coefficients to be zero in Equation 5, we can instead encourage word probabilities to close to an arbitrary mean µi,k. This vector can reflect expert knowledge. One example of a source of expert knowledge is Linguistic Inquiry and Word Count (Pennebaker and Francis, 1999, LIWC), a dictionary of keywords related to sixty-eight psychological concepts such as positive emotions, negative emotions, and death. For example, it associates “excessive, estate, money, cheap, expensive, living, profit, live, rich, income, poor, etc.” for the concept materialism. We associate each anchor word with its closest LIWC category based on the cooccurrence matrix Q. This is computed by greedily finding the anchor word that has the highest cooccurrence score for any LIWC category: we define the score of a category to anchor word wsk as P i Qsk,i, where i ranges over words in this category; we compute the scores of all categories to all anchor words; then we find the highest score and assign the category to that anchor word; we greedily repeat this process until all anchor words have a category. Given these associations, we create a goal mean µi,k. If there are Li anchor words associated with LIWC word i, µi,k = 1 Li if this keyword i is associated with anchor word wsk and zero otherwise. We apply anchor-L2 with informed priors on NYT with twenty topics and compared the topics against the original topics from anchor. Table 3 shows that the topic with anchor word “soviet”, when combined with LIWC, draws in the new words “bush” and “nuclear”; reflecting the threats of force during the cold war. For the topic with topic word “arms”, when associated with the LIWC category with the terms “agree” and “agreement”, draws in “clinton”, who represented a more conciliatory foreign policy compared to his republican predecessors. 5 Discussion Having shown that regularization can improve the anchor topic modeling algorithm, in this section we discuss why these regularizations can improve the model and the implications for practitioners. Efficiency Efficiency is a function of the number of iterations and the cost of each iteration. Both anchor and anchor-L2 require a single iteration, although the latter’s iteration is slightly more expensive. For beta, as described in Section 3.2, we update anchor coefficients C row by row, and then repeat the process over several iterations until it converges. 
However, it often converges within ten iterations (Figure 4) on all three datasets: this requires much fewer iterations than MCMC or variational inference, and the iterations are less expensive. In addition, since we optimize each row Ci,· independently, the algorithm can be easily parallelized. Sensitivity to Document Frequency While the original anchor is sensitive to the document frequency M (Figure 1), adding regularization makes this less critical. Both anchor-L2 and anchor-beta are less sensitive to M than anchor. To highlight this, we compare the topics of anchor and anchor-beta when M = 100. As Table 4 shows, the words “article”, “write”, “don” and “doe” appear in most of anchor’s topics. While anchor-L2 also has some bad topics, it still can find reasonable topics, demonstrating anchor-beta’s greater robustness to suboptimal M. L2 (Sometimes) Improves Generalization As Figure 2 shows, anchor-L2 sometimes improves held-out development likelihood for the smaller 364 Algorithm G anchor anchor−beta anchor−L2 MCMC variational 20NEWS G G G G G G G G G G G G −410 −405 −400 −395 −390 0.03 0.04 0.05 0.06 0.07 0.06 0.08 0.10 HL TI−e TI−i 20 40 60 80 topic number NIPS G G G G G G G G G G G G −4580 −4560 −4540 −4520 −4500 −4480 −4460 0.08 0.09 0.10 0.11 0.06 0.07 0.08 0.09 HL TI−e TI−i 20 40 60 80 topic number NYT G G G G G G G G G G G G −880 −870 −860 0.07 0.08 0.09 0.08 0.10 0.12 0.14 HL TI−e TI−i 20 40 60 80 topic number Figure 3: Comparing anchor-beta and anchor-L2 against the original anchor and the traditional variational and MCMC on HL score and TI score. variational and mcmc provide the best held-out generalization. anchor-beta sometimes gives the best TI score and is consistently better than anchor. The specialized vocabulary of NIPS causes high variance for the extrinsic interpretability evaluation (TI-e). Topic Shared Words Original (Top, green) vs. Informed L2 (Bottom, orange) soviet american make president soviet union war years gorbachev moscow russian force economic world europe political communist lead reform germany country military state service washington bush army unite chief troops officer nuclear time week district assembly board city county district member state york representative manhattan brooklyn queens election bronx council island local incumbent housing municipal people party group social republican year make years friend vote compromise million peace american force government israel peace political president state unite washington war military country minister leaders nation world palestinian israeli election offer justice aid deserve make bush years fair clinton hand arms arms bush congress force iraq make north nuclear president state washington weapon administration treaty missile defense war military korea reagan agree agreement american accept unite share clinton years trade administration america american country economic government make president state trade unite washington world market japan foreign china policy price political business economy congress year years clinton bush buy Table 3: Examples of topic comparison between anchor and informed anchor-L2. A topic is labeled with the anchor word for that topic. The bold words are the informed prior from LIWC. With an informed prior, relevant words appear in the top words of a topic; this also draws in other related terms (red). 20NEWS corpus. However, the λ selected on development data does not always improve test set performance. This, in Figure 3, anchor-beta closely tracks anchor. 
Thus, L2 regularization does not hurt generalization while imparting expressiveness and robustness to parameter settings. Beta Improves Interpretability Figure 3 shows that anchor-beta improves topic interpretability (TI) compared to unregularized anchor methods. In this section, we try to understand why. We first compare the topics from the original anchor against anchor-beta to analyze the topics qualitatively. Table 5 shows that beta regularization promotes rarer words within a topic and demotes common words. For example, in the topic about hockey with the anchor word game, “run” and “good”—ambiguous, polysemous words—in the unregularized topic are replaced by “playoff” 365 Topic anchor anchor-beta frequently article write don doe make time people good file question article write don doe make people time good email file debate write article people make don doe god key government time people make god article write don doe key point government wings game team write wings article win red play hockey year game team wings win red hockey play season player fan stats player team write game article stats year good play doe stats player season league baseball fan team individual playoff nhl compile program file write email doe windows call problem run don compile program code file ftp advance package error windows sun Table 4: Topics from anchor and anchor-beta with M = 100 on 20NEWS with 20 topics. Each topic is identified with its associated anchor word. When M = 100, the topics of anchor suffer: the four colored words appear in almost every topic. anchor-beta, in contrast, is less sensitive to suboptimal M. G G G G G G G G G G G G G G G G G G G G G 0 10 20 30 40 0 5 10 15 20 Iteration ∆C Dataset G 20NEWS NIPS NYT Figure 4: Convergence of anchor coefficient C for anchor-beta. ∆C is the difference of current C from the C at the previous iteration. C is converged within ten iterations for all three datasets. and “trade” in the regularized topic. These words are less ambiguous and more likely to make sense to a consumer of topic models. Figure 5 shows why this happens. Compared to the unregularized topics from anchor, the beta regularized topic steals from the rich and creates a more uniform distribution. Thus, highly frequent words do not as easily climb to the top of the distribution, and the topics reflect topical, relevant words rather than globally frequent terms. 6 Conclusion A topic model is a popular tool for quickly getting the gist of large corpora. However, running such an analysis on these large corpora entail a substantial computational cost. While techniques such as anchor algorithms offer faster solutions, it comes at the cost of the expressive priors common in Bayesian formulations. This paper introduces two different regularizations that offer users more interpretable models and the ability to inject prior knowledge without sacrificing the speed and generalizability of the underlying approach. However, one sacrifice that this approach does make is the beautiful theoretical guarantees of previous work. An important piece of future work is a theoretical understanding of generalizability in extensible, regularized models. Incorporating other regularizations could further improve performance or unlock new applications. Our regularizations do not explicitly encourage sparsity; applying other regularizations such as L1 could encourage true sparsity (Tibshirani, 1994), and structured priors (Andrzejewski et al., 2009) could efficiently incorporate constraints on topic models. 
These regularizations could improve spectral algorithms for latent variables models, improving the performance for other NLP tasks such as latent variable PCFGs (Cohen et al., 2013) and HMMs (Anandkumar et al., 2012), combining the flexibility and robustness offered by priors with the speed and accuracy of new, scalable algorithms. Acknowledgments We would like to thank the anonymous reviewers, Hal Daum´e III, Ke Wu, and Ke Zhai for their helpful comments. This work was supported by NSF Grant IIS-1320538. Boyd-Graber is also supported by NSF Grant CCF-1018625. Any opinions, findings, conclusions, or recommendations expressed here are those of the authors and do not necessarily reflect the view of the sponsor. 366 computer drive game god power −20 −15 −10 −5 0 −20 −15 −10 −5 0 anchor anchor−beta Rank of word in topic (topic shown by anchor word) log p(word | topic) Figure 5: How beta regularization influences the topic distribution. Each topic is identified with its associated anchor word. Compared to the unregularized anchor method, anchor-beta steals probability mass from the “rich” and prefers a smoother distribution of probability mass. These words often tend to be unimportant, polysemous words common across topics. Topic Shared Words anchor (Top, green) vs. anchor-beta (Bottom, orange) computer computer means science screen system phone university problem doe work windows internet software chip mac set fax technology information data quote mhz pro processor ship remote print devices complex cpu electrical transfer ray engineering serial reduce power power play period supply ground light battery engine car good make high problem work back turn control current small time circuit oil wire unit water heat hot ranger input total joe plug god god jesus christian bible faith church life christ belief religion hell word lord truth love people make things true doe sin christianity atheist peace heaven game game team player play win fan hockey season baseball red wings score division league goal leaf cup toronto run good playoff trade drive drive disk hard scsi controller card floppy ide mac bus speed monitor switch apple cable internal port meg problem work ram pin Table 5: Comparing topics—labeled by their anchor word—from anchor and anchor-beta. With beta regularization, relevant words are promoted, while more general words are suppressed, improving topic coherence. References Animashree Anandkumar, Daniel Hsu, and Sham M. Kakade. 2012. A method of moments for mixture models and hidden markov models. In Proceedings of Conference on Learning Theory. David Andrzejewski, Xiaojin Zhu, and Mark Craven. 2009. Incorporating domain knowledge into topic modeling via Dirichlet forest priors. In Proceedings of the International Conference of Machine Learning. Sanjeev Arora, Rong Ge, Yoni Halpern, David M. Mimno, Ankur Moitra, David Sontag, Yichen Wu, and Michael Zhu. 2012a. A practical algorithm for topic modeling with provable guarantees. CoRR, abs/1212.4777. Sanjeev Arora, Rong Ge, and Ankur Moitra. 2012b. Learning topic models - going beyond svd. CoRR, abs/1204.1956. David M. Blei and John D. Lafferty. 2005. Correlated topic models. In Proceedings of Advances in Neural Information Processing Systems. David M. Blei, Andrew Ng, and Michael Jordan. 2003. Latent Dirichlet allocation. Journal of Machine Learning Research, 3. Samuel Brody and Mirella Lapata. 2009. Bayesian word sense induction. In Proceedings of the European Chapter of the Association for Computational Linguistics, Athens, Greece. 
Jonathan Chang, Jordan Boyd-Graber, Chong Wang, Sean Gerrish, and David M. Blei. 2009. Reading tea leaves: How humans interpret topic models. In Proceedings of Advances in Neural Information Processing Systems. Shay B. Cohen and Noah A. Smith. 2009. Shared logistic normal distributions for soft parameter tying in unsupervised grammar induction. In Conference of the North American Chapter of the Association for Computational Linguistics. 367 Shay Cohen, Karl Stratos, Michael Collins, Dean P. Foster, and Lyle Ungar. 2013. Experiments with spectral learning of latent-variable PCFGs. In Conference of the North American Chapter of the Association for Computational Linguistics. Hal Daum´e III. 2007. Frustratingly easy domain adaptation. In Proceedings of the Association for Computational Linguistics. David Donoho and Victoria Stodden. 2003. When does non-negative matrix factorization give correct decomposition into parts? page 2004. MIT Press. Miroslav Dud´ık, Steven J. Phillips, and Robert E. Schapire. 2004. Performance guarantees for regularized maximum entropy density estimation. In Proceedings of Conference on Learning Theory. Vladimir Eidelman, Jordan Boyd-Graber, and Philip Resnik. 2012. Topic models for dynamic translation model adaptation. In Proceedings of the Association for Computational Linguistics. Jenny Rose Finkel and Christopher D. Manning. 2009. Hierarchical bayesian domain adaptation. In Conference of the North American Chapter of the Association for Computational Linguistics, Morristown, NJ, USA. Mark Galassi, Jim Davies, James Theiler, Brian Gough, Gerard Jungman, Michael Booth, and Fabrice Rossi. 2003. Gnu Scientific Library: Reference Manual. Network Theory Ltd. Thomas L. Griffiths and Mark Steyvers. 2004. Finding scientific topics. Proceedings of the National Academy of Sciences, 101(Suppl 1):5228–5235. David Hall, Daniel Jurafsky, and Christopher D. Manning. 2008. Studying the history of ideas using topic models. In Proceedings of Emperical Methods in Natural Language Processing. Yuening Hu, Jordan Boyd-Graber, Brianna Satinoff, and Alison Smith. 2013. Interactive topic modeling. Machine Learning Journal. Matt L. Jockers. 2013. Macroanalysis: Digital Methods and Literary History. Topics in the Digital Humanities. University of Illinois Press. Ravindran Kannan, Hadi Salmasian, and Santosh Vempala. 2005. The spectral method for general mixture models. In Proceedings of Conference on Learning Theory. Ken Lang. 2007. 20 newsgroups data set. Jey Han Lau, David Newman, and Timothy Baldwin. 2014. Machine reading tea leaves: Automatically evaluating topic coherence and topic model quality. In Proceedings of the European Chapter of the Association for Computational Linguistics. Andrew Kachites McCallum. 2002. Mallet: A machine learning for language toolkit. http://www.cs.umass.edu/ mccallum/mallet. Thomas P. Minka. 2000. Estimating a dirichlet distribution. Technical report, Microsoft. http://research.microsoft.com/enus/um/people/minka/papers/dirichlet/. David Newman, Jey Han Lau, Karl Grieser, and Timothy Baldwin. 2010. Automatic evaluation of topic coherence. In Conference of the North American Chapter of the Association for Computational Linguistics. Andrew Y. Ng. 2004. Feature selection, l1 vs. l2 regularization, and rotational invariance. In Proceedings of the International Conference of Machine Learning. Michael Paul and Roxana Girju. 2010. A twodimensional topic-aspect model for discovering multi-faceted topics. In Association for the Advancement of Artificial Intelligence. 
James W. Pennebaker and Martha E. Francis. 1999. Linguistic Inquiry and Word Count. Lawrence Erlbaum, 1 edition, August. Lawrence R. Rabiner. 1989. A tutorial on hidden Markov models and selected applications in speech recognition. Proceedings of the IEEE, 77(2):257– 286. Jason Rennie. 2003. On l2-norm regularization and the Gaussian prior. Sam Roweis. 2002. NIPS 1-12 Dataset. Evan Sandhaus. 2008. The New York Times annotated corpus. http://www.ldc.upenn.edu/Catalog/CatalogEntry.jsp? catalogId=LDC2008T19. Jayaram Sethuraman. 1994. A constructive definition of Dirichlet priors. Statistica Sinica, 4:639–650. Robert Tibshirani. 1994. Regression shrinkage and selection via the lasso. Journal of the Royal Statistical Society, Series B, 58:267–288. Stefan Wager, Sida Wang, and Percy Liang. 2013. Dropout training as adaptive regularization. In Proceedings of Advances in Neural Information Processing Systems, pages 351–359. Hanna Wallach, David Mimno, and Andrew McCallum. 2009. Rethinking LDA: Why priors matter. In Proceedings of Advances in Neural Information Processing Systems. Limin Yao, David Mimno, and Andrew McCallum. 2009. Efficient methods for topic model inference on streaming document collections. In Knowledge Discovery and Data Mining. 368 Ke Zhai, Jordan Boyd-Graber, Nima Asadi, and Mohamad Alkhouja. 2012. Mr. LDA: A flexible large scale topic modeling package using variational inference in mapreduce. In Proceedings of World Wide Web Conference. 369
Proceedings of the 52nd Annual Meeting of the Association for Computational Linguistics, pages 370–379, Baltimore, Maryland, USA, June 23-25 2014. c⃝2014 Association for Computational Linguistics A Bayesian Mixed Effects Model of Literary Character David Bamman School of Computer Science Carnegie Mellon University Pittsburgh, PA 15213, USA [email protected] Ted Underwood Department of English University of Illinois Urbana, IL 61801, USA [email protected] Noah A. Smith School of Computer Science Carnegie Mellon University Pittsburgh, PA 15213, USA [email protected] Abstract We consider the problem of automatically inferring latent character types in a collection of 15,099 English novels published between 1700 and 1899. Unlike prior work in which character types are assumed responsible for probabilistically generating all text associated with a character, we introduce a model that employs multiple effects to account for the influence of extra-linguistic information (such as author). In an empirical evaluation, we find that this method leads to improved agreement with the preregistered judgments of a literary scholar, complementing the results of alternative models. 1 Introduction Recent work in NLP has begun to exploit the potential of entity-centric modeling for a variety of tasks: Chambers (2013) places entities at the center of probabilistic frame induction, showing gains over a comparable event-centric model (Cheung et al., 2013); Bamman et al. (2013) explicitly learn character types (or “personas”) in a dataset of Wikipedia movie plot summaries; and entity-centric models form one dominant approach in coreference resolution (Durrett et al., 2013; Haghighi and Klein, 2010). One commonality among all of these very different probabilistic approaches is that each learns statistical regularities about how entities are depicted in text (whether for the sake of learning a set of semantic roles, character types, or linking anaphora to the entities to which they refer). In each case, the text we observe associated with an entity in a document is directly dependent on the class of entity—and only that class. This relationship between entity and text is a theoretical assumption, with important consequences for learning: entity types learned in this way will be increasingly similar the more similar the domain, author, and other extra-linguistic effects are between them.1 While in many cases the topically similar types learned under this assumption may be desirable, we explore here the alternative, in which entity types are learned in a way that controls for such effects. In introducing a model based on different assumptions, we provide a method that complements past work and provides researchers with more flexible tools to infer different kinds of character types. We focus here on the literary domain, exploring a large collection of 15,099 English novels published in the 18th and 19th centuries. By accounting for the influence of individual authors while inferring latent character types, we are able to learn personas that cut across different authors more effectively than if we learned types conditioned on the text alone. Modeling the language used to describe a character as the joint result of that character’s latent type and of other formal variables allows us to test multiple models of character and assess their value for different interpretive problems. 
As a test case, we focus on separating character from authorial diction, but this approach can readily be generalized to produce models that provisionally distinguish character from other factors (such as period, genre, or point of view) as well. 2 Literary Background Inferring character is challenging from a literary perspective partly because scholars have not reached consensus about the meaning of the term. It may seem obvious that a “character” is a representation of a (real or imagined) person, and many humanists do use the term that way. But there is 1For example, many entities in Early Modern English texts may be judged to be more similar to each other than to entities from later texts simply by virtue of using hath and other archaic verb forms. 370 an equally strong critical tradition that treats character as a formal dimension of narrative. To describe a character as a “blocking figure” or “firstperson narrator,” for instance, is a statement less about the attributes of an imagined person than about a narrative function (Keen, 2003). Characters are in one sense collections of psychological or moral attributes, but in another sense “wordmasses” (Forster, 1927). This tension between “referential” and “formalist” models of character has been a centrally “divisive question in ... literary theory” (Woloch, 2003). Considering primary source texts (as distinct from plot summaries) forces us to confront new theoretical questions about character. In a plot summary (such as those explored by Bamman et al., 2013), a human reader may already have used implicit models of character to extract high-level features. To infer character types from raw narrative text, researchers need to explicitly model the relationship of character to narrative form. This is not a solved problem, even for human readers. For instance, it has frequently been remarked that the characters of Charles Dickens share certain similarities—including a reliance on tag phrases and recurring tics. A referential model of character might try to distinguish this common stylistic element from underlying “personalities.” A strictly formalist model might refuse to separate authorial diction from character at all. In practice, human readers can adopt either perspective: we recognize that characters have a “Dickensian” quality but also recognize that a Dickens villain is (in one sense) more like villains in other authors than like a Dickensian philanthropist. Our goal is to show that computational methods can support the same range of perspectives—allowing a provisional, flexible separation between the referential and formal dimensions of narrative. 3 Data The dataset for this work consists of 15,099 distinct narratives drawn from HathiTrust Digital Library.2 From an initial collection of 469,200 volumes written in English and published between 1700 and 1899 (including poetry, drama, and nonfiction as well as prose narrative), we extract 32,209 volumes of prose fiction, remove duplicates and fuse multi-volume works to create the final dataset. Since the original texts were produced 2http://www.hathitrust.org by scanning and running OCR on physical books, we automatically correct common OCR errors and trim front and back matter from the volumes using the page-level classifiers and HMM of Underwood et al. (2013) Many aspects of this process would be simpler if we used manually-corrected texts, such as those drawn from Project Gutenberg. 
But we hope to produce research that has historical as well as computational significance, and doing so depends on the provenance of a collection. Gutenberg’s decentralized selection process tends to produce exceptionally good coverage of currently-popular genres like science fiction, whereas HathiTrust aggregates university libraries. Library collections are not guaranteed to represent the past perfectly, but they are larger, and less strongly shaped by contemporary preferences. The goal of this work is to provide a method to infer a set of character types in an unsupervised fashion from the data. As with prior work (Bamman et al., 2013), we define this target, a character persona, as a distribution over several categories of typed dependency relations:3 1. agent: the actions of which a character is the agent (i.e., verbs for which the character holds an nsubj or agent relation). 2. patient: the actions of which a character is the patient (i.e., verbs for which the character holds a dobj or nsubjpass relation). 3. possessive: the objects that a character possesses (i.e., all words for which the character holds a poss relation). 4. predicative: attributes predicated of a character (i.e., adjectives or nouns holding an nsubj relation to the character, with an inflection of be as a child). This set captures the constellation of what a character does and has done to them, what they possess, and what they are described as being. While previous work uses the Stanford CoreNLP toolkit to identify characters and extract typed dependencies for them, we found this approach to be too slow for the scale of our data (a total of 1.8 billion tokens); in particular, syntactic parsing, with cubic complexity in sentence length, and out-of-the-box coreference resolution (with thousands of potential antecedents) prove to be 3All categories are described using the Stanford typed dependencies (de Marneffe and Manning, 2008), but any syntactic formalism is equally applicable. 371 the biggest bottlenecks. Before addressing character inference, we present here a prerequisite NLP pipeline that scales well to book-length documents.4 This pipeline uses the Stanford POS tagger (Toutanova et al., 2003), the linear-time MaltParser (Nivre et al., 2007) for dependency parsing (trained on Stanford typed dependencies), and the Stanford named entity recognizer (Finkel et al., 2005). It includes the following components for clustering character name mentions, resolving pronominal coreference, and reducing vocabulary dimensionality. 3.1 Character Clustering First, let us terminologically distinguish between a character mention in a text (e.g., the token Tom on page 141 of The Adventures of Tom Sawyer) and a character entity (e.g., TOM SAWYER the character, to which that token refers). To resolve the former to the latter, we largely follow Davis et al. (2003) and Elson et al. (2010): we define a set of initial characters corresponding to each unique character name that is not a subset of another (e.g., Mr. Tom Sawyer) and deterministically create a set of allowable variants for each one (Mr. Tom Sawyer →Tom, Sawyer, Tom Sawyer, Mr. Sawyer, and Mr. Tom); then, from the beginning of the book to the end, we greedily assign each mention to the most recently linked entity for whom it is a variant. The result constitutes our set of characters, with all mentions partitioned among them. 
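As a concrete illustration of this clustering step, the following is a minimal Python sketch of deterministic variant generation and the greedy, recency-based assignment of mentions to entities. The TITLES set, the specific variant rules, and all function names are simplifying assumptions made for this example; they are not the authors' code.

```python
# A minimal sketch of the character-clustering heuristic described above.
# Variant generation here only covers a "title + given name + family name"
# pattern; the TITLES list and the variant rules are illustrative
# assumptions, not the exact implementation from the paper.

TITLES = {"mr", "mrs", "ms", "miss", "dr", "sir", "lady"}

def name_variants(full_name):
    """Deterministically expand a character name into allowable variants."""
    tokens = full_name.split()
    title = tokens[0] if tokens[0].lower().rstrip(".") in TITLES else None
    names = tokens[1:] if title else tokens
    variants = {full_name, " ".join(names)}
    variants.update(names)                      # e.g., "Tom", "Sawyer"
    if title:
        for n in names:
            variants.add(f"{title} {n}")        # e.g., "Mr. Sawyer", "Mr. Tom"
    return variants

def cluster_mentions(mentions, initial_characters):
    """Greedily assign each mention (in book order) to the most recently
    linked entity for which it is an allowable variant."""
    variant_table = {c: name_variants(c) for c in initial_characters}
    last_position = {c: -1 for c in initial_characters}   # recency of each entity
    assignments = []
    for i, mention in enumerate(mentions):
        candidates = [c for c, vs in variant_table.items() if mention in vs]
        if candidates:
            entity = max(candidates, key=lambda c: last_position[c])
            last_position[entity] = i
            assignments.append((mention, entity))
        else:
            assignments.append((mention, None))   # no known entity matches
    return assignments

if __name__ == "__main__":
    chars = ["Mr. Tom Sawyer", "Becky Thatcher"]
    mentions = ["Tom Sawyer", "Becky", "Tom", "Mr. Sawyer"]
    for mention, entity in cluster_mentions(mentions, chars):
        print(f"{mention!r} -> {entity}")
```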
3.2 Pronominal Coreference Resolution While the character clustering stage is essentially performing proper noun coreference resolution, approximately 74% of references to characters in books come in the form of pronouns.5 To resolve this more difficult class at the scale of an entire book, we train a log-linear discriminative classifier only on the task of resolving pronominal anaphora (i.e., ignoring generic noun phrases such as the paint or the rascal). For this task, we annotated a set of 832 coreference links in 3 books (Pride and Prejudice, The Turn of the Screw, and Heart of Darkness) and featurized coreference/antecedent pairs with: 4All code is available at http://www.ark.cs.cmu. edu/literaryCharacter 5Over all 15,099 narratives, the average number of character proper name mentions is 1,673; the average number of gendered singular pronouns (he, she, him, his, her) is 4,641. 1. The syntactic dependency path from a pronoun to its potential antecedent (e.g., dobj↑pred→↓pred↓nsubj (where →denotes movement across sentence boundaries). 2. The salience of the antecedent character (defined as the count of that character’s named mentions in the previous 500 words). 3. The antecedent part of speech. 4. Whether or not the pronoun and antecedent appear in the same quotation scope (false if one appears in a quotation and one outside). 5. Whether or not the two agree for gender. 6. The syntactic tree distance between the two. 7. The linear (word) distance between the two. With this featurization and training data, we train a binary logistic regression classifier with ℓ1 regularization (where negative examples are comprised of all character entities in the previous 100 words not labeled as the true antecedent). In a 10-fold cross-validation on predicting the true nearest antecedent for a pronominal anaphor, this method achieves an average accuracy of 82.7%. With this trained model, we then select the highest-scoring antecedent within 100 words for each pronominal anaphor in our data. 3.3 Dimensionality Reduction To manage the degrees of freedom in the model described in §4, we perform dimensionality reduction on the vocabulary by learning word embeddings with a log-linear continuous skip-gram language model (Mikolov et al., 2013) on the entire collection of 15,099 books. This method learns a low-dimensional real-valued vector representation of each word to predict all of the words in a window around it; empirically, we find that with a sufficient window size (we use n = 10), these word embeddings capture semantic similarity (placing topically similar words near each other in vector space).6 We learn a 100-dimensional embedding for each of the 512,344 words in our vocabulary. To create a partition over the vocabulary, we use hard K-means clustering (with Euclidean distance) to group the 512,344 word types into 1,000 clusters. We then agglomeratively cluster those 1,000 groups to assign bitstring representations to each one, forming a balanced binary tree by only merging existing clusters at equal levels in the hi6In comparison, Brown et al. (1992) clusters learned from the same data capture syntactic similarity (placing functionally similar words in the same cluster). 
372 0 1 0 1 0 1 0111001110: hat coat cap cloak handkerchief 0111001111: pair boots shoes gloves leather 0111001100: dressed costume uniform clad clothed 0111001101: dress clothes wore worn wear 01110011 → Figure 1: Bitstring representations of neural agglomerative clusters, illustrating the leaf nodes in a binary tree rooted in the prefix 01110011. Bitstring encodings of intermediate nodes and terminal leaves result by following the left (0) and right (1) branches of the merge tree created through agglomerative clustering. erarchy. We use Euclidean distance as a fundamental metric and a group-average similarity function for calculating the distance between groups. Fig. 1 illustrates four of the 1,000 learned clusters. 4 Model In order to separate out the effects that a character’s persona has on the words that are associated with them (as opposed to other factors, such as time period, genre, or author), we adopt a hierarchical Bayesian approach in which the words we observe are generated conditional on a combination of different effects captured in a log-linear (or “maximum entropy”) distribution. Maximum entropy approaches to language modeling have been used since Rosenfeld (1996) to incorporate long-distance information, such as previously-mentioned trigger words, into n-gram language models. This work has since been extended to a Bayesian setting by applying both a Gaussian prior (Chen and Rosenfeld, 2000), which dampens the impact of any individual feature, and sparsity-inducing priors (Kazama and Tsujii, 2003; Goodman, 2004), which can drive many feature weights to 0. The latter have been applied specifically to the problem of estimating word probabilities with sparse additive generative (SAGE) models (Eisenstein et al., 2011), where sparse extra-linguistic effects can influence a word probability in a larger generative setting. In contrast to previous work in which the probability of a word linked to a character is dependent entirely on the character’s latent persona, in our model, we see the probability of a word as dependent on: (i) the background likelihood of the word, (ii) the author, so that a word becomes more probable if a particular author tends to use it more, and (iii) the character’s persona, so that a word is more probable if appearing with a particular persona. Intuitively, if the author Jane Austen is associated with a high weight for the word manners, and all personas have little effect for this word, then manners will have little impact on deciding which persona a particular Austen character embodies, since its presence is explained largely by Austen having penned the word. While we address only the author as an observed effect, this model is easily extended to other features as well, including period, genre, point of view, and others. The generative story runs as follows (Figure 2 depicts the full graphical model): Let there be M unique authors in the data, P latent personas (a hyperparameter to be set), and V words in the vocabulary (in the general setting these may be word types; in our data the vocabulary is the set of 1,000 unique cluster IDs). Each role type r ∈{agent, patient, possessive, predicative} and vocabulary word v (here, a cluster ID) is associated with a real-valued vector ηr,v = [ηmeta r,v , ηpers r,v , η0 r,v] of length M + P + 1. The first M + P elements are drawn from a Laplace prior with mean µ = 0 and scale λ = 1; the last element η0 r,v is an unregularized bias term accounting for the background. 
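To make the parameter bookkeeping concrete, here is a small numpy sketch of the η block just described: one vector of length M + P + 1 per role type and word cluster, with Laplace-distributed author and persona effects and a final unregularized bias entry. The sizes and the helper function name are illustrative assumptions, not values or code from the paper.

```python
# A minimal numpy sketch of the eta parameter layout described above.
# The sizes below are illustrative, not those used in the paper.
import numpy as np

rng = np.random.default_rng(0)

M, P, V, R = 500, 50, 1000, 4   # authors, personas, word clusters, role types
mu, lam = 0.0, 1.0              # Laplace mean and scale

# eta[r, v] = [author effects (M), persona effects (P), background bias (1)]
eta = np.zeros((R, V, M + P + 1))
eta[:, :, : M + P] = rng.laplace(loc=mu, scale=lam, size=(R, V, M + P))
# eta[:, :, -1] is the bias term, left at 0 here and fit without regularization.

def score(r, v, author, persona):
    """Unnormalized log-probability of cluster v in role r for a given
    author index and persona index (the sum of the three effects)."""
    vec = eta[r, v]
    return vec[author] + vec[M + persona] + vec[-1]

print(score(r=0, v=42, author=3, persona=7))
```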
Each element in this vector captures the log-additive effect of each author, persona, and the background distribution on the word's probability (Eq. 1, below). Much like latent Dirichlet allocation (Blei et al., 2003), each document d in our dataset draws a multinomial distribution θd over personas from a shared Dirichlet prior α, which captures the proportion of each character type in that particular document. Every character c in the document draws its persona p from this document-specific multinomial. Given document metadata m (here, one of a set of M authors) and persona p, each tuple of a role r with word w is assumed to be drawn from Eq. 1 in Fig. 3. This SAGE model can be understood as a log-linear distribution with three kinds of features (metadata, persona, and background bias).

\[
P(w \mid m, p, r, \eta) = \frac{\exp\big(\eta^{\text{meta}}_{r,w}[m] + \eta^{\text{pers}}_{r,w}[p] + \eta^{0}_{r,w}\big)}{\sum_{v=1}^{V} \exp\big(\eta^{\text{meta}}_{r,v}[m] + \eta^{\text{pers}}_{r,v}[p] + \eta^{0}_{r,v}\big)} \tag{1}
\]

\[
P(b \mid m, p, r, \eta) = \prod_{j=0}^{n-1}
\begin{cases}
\operatorname{logit}^{-1}\big(\eta^{\text{meta}}_{r,b_{1:j}}[m] + \eta^{\text{pers}}_{r,b_{1:j}}[p] + \eta^{0}_{r,b_{1:j}}\big) & \text{if } b_{j+1} = 1\\
1 - \operatorname{logit}^{-1}\big(\eta^{\text{meta}}_{r,b_{1:j}}[m] + \eta^{\text{pers}}_{r,b_{1:j}}[p] + \eta^{0}_{r,b_{1:j}}\big) & \text{otherwise}
\end{cases} \tag{2}
\]

Figure 3: Parameterizations of the SAGE word distribution. Eq. 1 is a "flat" multinomial logistic regression with one η-vector per role and word. Eq. 2 uses the hierarchical softmax formulation, with one η-vector per role and node in the binary tree of word clusters, giving a distribution over bit strings (b) with the same number of parameters as Eq. 1.

4.1 Hierarchical Softmax The partition function in Eq. 1 can lead to slow inference for any reasonably-sized vocabulary. To address this, we reparameterize the model by exploiting the structure of the agglomerative clustering in §3.3 to perform a hierarchical softmax, following Goodman (2001), Morin and Bengio (2005) and Mikolov et al. (2013). The bitstring representations by which we encode each word in the vocabulary serve as natural, and inherently meaningful, intermediate classes that correspond to semantically related subsets of the vocabulary, with each bitstring prefix denoting one such class. Longer bitstrings correspond to more fine-grained classes. In the example shown in Figure 1, 011100111 is one such intermediate class, containing the union of pair, boots, shoes, gloves, leather and hat, coat, cap, cloak, handkerchief. Because these classes recursively partition the vocabulary, they offer a convenient way to reparameterize the model through the chain rule of probability. Consider, for example, a word represented as the bitstring c = 01011; calculating P(c = 01011)—we suppress conditioning variables for clarity—involves the product: P(c1 = 0) × P(c2 = 1 | c1 = 0) × P(c3 = 0 | c1:2 = 01) × P(c4 = 1 | c1:3 = 010) × P(c5 = 1 | c1:4 = 0101). Since each multiplicand involves a binary prediction, we can avoid partition functions and use the classic binary logistic regression (recall that logistic regression lets P_LR(y = 1 | x, β) = logit⁻¹(x⊤β) = 1/(1 + exp(−x⊤β)) for binary dependent variable y, independent variables x, and coefficients β). We have converted the V-way multiclass logistic regression problem of Eq. 1 into a sequence of log V evaluations (assuming a perfectly balanced tree). Given m, p, and r (as above), we let b = b1b2 · · · bn denote the bitstring representation of a word cluster, and the distribution is given by Eq. 2 in Fig. 3. In this parameterization, rather than one η-vector for each role and vocabulary term, we have one η-vector for each role and conditional binary decision in the tree (each bitstring prefix).
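A minimal Python sketch of this hierarchical softmax is given below: it walks a word's bitstring from the root and multiplies one binary logistic decision per prefix, mirroring Eq. 2. The toy dimensions, the dictionary keyed by (role, prefix), and the zero-initialized coefficients are assumptions made for illustration only, not the authors' implementation.

```python
# A minimal sketch of the hierarchical softmax in Eq. 2: the probability
# of a word's bitstring is a product of binary logistic decisions, one per
# prefix of the bitstring.
import math
from collections import defaultdict

M, P = 3, 2                      # toy numbers of authors and personas

def logistic(x):
    return 1.0 / (1.0 + math.exp(-x))

# One coefficient vector (length M + P + 1) per (role, bitstring prefix).
# Here they are simply initialized to zeros, so every binary decision is 0.5.
eta = defaultdict(lambda: [0.0] * (M + P + 1))

def p_bitstring(bits, role, author, persona):
    """P(b | m, p, r, eta): walk the tree from the root, multiplying the
    probability of taking branch bits[j] given the prefix bits[:j]."""
    prob = 1.0
    for j in range(len(bits)):
        prefix = bits[:j]                       # '' is the root
        vec = eta[(role, prefix)]
        score = vec[author] + vec[M + persona] + vec[-1]
        p_right = logistic(score)               # probability of branch '1'
        prob *= p_right if bits[j] == "1" else (1.0 - p_right)
    return prob

# With all-zero coefficients, a 5-bit cluster ID has probability 0.5 ** 5.
print(p_bitstring("01011", role="agent", author=1, persona=0))
```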
Since the tree is binary with V leaves, this yields the same total number of parameters. As Goodman (2001) points out, while this reparameterization is exact for true probabilities, it remains an approximation for estimated models (with generalization behavior dependent on how well the class hierarchy is supported by the data). In addition to enabling faster inference, one advantage of the bitstring representation and the hierarchical softmax parameterization is that we can easily calculate probabilities of clusters at different granularities.

Figure 2: Above: Probabilistic graphical model. Observed variables are shaded, latent variables are clear, and collapsed variables are dotted. Below: Definition of variables: P, number of personas (hyperparameter); D, number of documents; Cd, number of characters in document d; Wd,c, number of (cluster, role) tuples for character c; md, metadata for document d (ranges over M authors); θd, document d's distribution over personas; pd,c, character c's persona; j, an index for a ⟨r, w⟩ tuple in the data; wj, word cluster ID for tuple j; rj, role for tuple j ∈ {agent, patient, poss, pred}; η, coefficients for the log-linear language model; µ, λ, Laplace mean and scale (for regularizing η); α, Dirichlet concentration parameter.

4.2 Inference Our primary quantities of interest in this model are p (the personas for each character) and η, the effects that each author and persona have on the probability of a word. Rather than adopting a fully Bayesian approach (e.g., sampling all variables), we infer these values using stochastic EM, alternating between collapsed Gibbs sampling for each p and maximizing with respect to η.

Collapsed Gibbs for personas. (We assume the reader is familiar with collapsed Gibbs sampling as used in latent-variable NLP models.) At each step, the required quantity is the probability that character c in document d has persona z, given everything else. This is proportional to the number of other characters in document d who also (currently) have that persona (plus the Dirichlet hyperparameter, which acts as a smoother) times the probability (under pd,c = z) of all of the words observed in each role r for that character:

\[
\big(\operatorname{count}(z;\, p_{d,-c}) + \alpha_z\big) \times \prod_{r=1}^{R} \; \prod_{j:\, r_j = r} P(b_j \mid m, p, r, \eta) \tag{3}
\]

The metadata features (like author, etc.) influence this probability by being constant for all choices of z; e.g., if the coefficient learned for Austen for vocabulary term manners is high and all coefficients for all z are close to zero, then the probability of manners will change little under different choices of z. Eq. 3 contains one multiplicand for every word associated with a character, and only one term reflecting the influence of the shared document multinomial. The implication is that, for major characters with many observed words, the words will dominate the choice of persona; where the document influence would have a bigger effect is with characters for whom we don't have much data. In that case, it can act as a kind of informed background; given what little data we have for that character, it would nudge us toward the character types that the other characters in the book embody. Given an assignment of all p, we choose η to maximize the conditional log-likelihood of the words, as represented by their bitstring cluster IDs, given the observed author and background effects and the sampled personas. This equates to solving 4V ℓ1-regularized logistic regressions (see Eq.
2 in Figure 3), one for each role type and bitstring prefix, each with M + P + 1 parameters. We apply OWL-QN (Andrew and Gao, 2007) to minimize the ℓ1-regularized objective with an absolute convergence threshold of 10−5. 5 Evaluation While standard NLP and machine learning practice is to evaluate the performance of an algorithm on a held-out gold standard, articulating what a true “persona” might be for a character is inherently problematic. Rather, we evaluate the performance and output of our model by preregistering a set of 29 hypotheses of varying scope and difficulty and comparing the performance of different models in either confirming, or failing to confirm, those hypotheses. This kind of evaluation was previously applied to a subjective text measurement problem by Sim et al. (2013). All hypotheses were created by a literary scholar with specialization in the period to not only give an empirical measure of the strengths and weaknesses of different models, but also to help explore exactly what the different models may, or may not, be learning. All preregistered hypotheses establish the degrees of similarity among three characters, taking the form: “character X is more similar to character Y than either X or Y is to a distractor character Z”; for a given model and definition of distance under that model, each hypothesis yields two yes/no decisions that we can evaluate: • distance(X, Y ) < distance(X, Z) • distance(X, Y ) < distance(Y, Z) To tease apart the different kinds of similarities we hope to explore, we divide the hypotheses into four classes: 375 A. This class constitutes sanity checks: character X and Y are more similar to each other in every way than to character Z. E.g.: Elizabeth Bennet in Pride and Prejudice resembles Elinor Dashwood in Sense and Sensibility (Jane Austen) more than either character resembles Allen Quatermain in Allen Quatermain (H. Rider Haggard). (Austenian protagonists should resemble each other more than they resemble a grizzled hunter.) B. This class captures our ability to identify two characters in the same author as being more similar to each other than to a closely related character in a different author. E.g.: Wickham in Pride and Prejudice resembles Willoughby in Sense and Sensibility (Jane Austen) more than either character resembles Mr. Rochester in Jane Eyre (Charlotte Bront¨e). C. This class captures our ability to discriminate among similar characters in the same author. In these hypotheses, two characters X and Y from the same author are more similar to each other than to a third character Z from that same author. E.g.: Wickham in Pride and Prejudice (Jane Austen) resembles Willoughby in Sense and Sensibility more than either character resembles Mr. Darcy in Pride and Prejudice. D. This class constitutes more difficult, exploratory hypotheses, including differences among point of view. E.g.: Montoni in Mysteries of Udolpho (Radcliffe) resembles Heathcliff in Wuthering Heights (Emily Bront¨e) more than either resembles Mr. Bennet in Pride and Prejudice. (Testing our model’s ability to discern similarities in spite of elapsed time.) All 29 hypotheses can be found in a supplementary technical report (Bamman et al., 2014). We emphasize that the full set of hypotheses was locked before the model was estimated. 
6 Experiments Part of the motivation of our mixed effects model is to be able to tackle hypothesis class C—by factoring out the influence of a particular author on the learning of personas, we would like to be able to discriminate between characters that all have a common authorial voice. In contrast, the Persona Regression model of Bamman et al. (2013), which uses metadata variables (like authorship) to encourage entities with similar covariates to have similar personas, reflects an assumption that makes it likely to perform well at class B. To judge their respective strengths on different hypothesis classes, we evaluate three models: 1. The mixed-effects Author/Persona model (described above), which includes author information as a metadata effect; here, each η-vector (of length M + P + 1) contains a parameter for each of the distinct authors in our data, a parameter for each persona, and a background parameter. 2. A Basic persona model, which ablates author information but retains the same loglinear architecture; here, the η-vector is of size P +1 and does not model author effects. 3. The Persona Regression model of Bamman et al. (2013). All models are run with P ∈{10, 25, 50, 100, 250} personas; Persona Regression additionally uses K = 25 latent topics. All configurations use the full dataset of 15,099 novels, and all characters with at least 25 total roles (a total of 257,298 entities). All experiments are run with 50 iterations of Gibbs sampling to collect samples for the personas p, alternating with maximization steps for η. The value of α is optimized using slice sampling (with a non-informative prior) every 5 iterations. The value of λ is held constant at 1. At the end of inference, we calculate the posterior distributions over personas for all characters as the sampling probability of the final iteration. To formally evaluate “similarity” between two characters, we measure the Jensen-Shannon divergence between personas (calculated as the average JS distance over the cluster distributions for each role type), marginalizing over the characters’ posterior distributions over personas; two characters with a lower JS divergence are judged to be more similar than two characters with a higher one. As a Baseline, we also evaluate all hypotheses on a model with no latent variables whatsoever, which instead measures similarity as the average JS divergence between the empirical word distributions over each role type. Table 1 presents the results of this comparison; for all models with latent variables, we report the average of 5 sampling runs with different random initializations. Figure 4 provides a syn376 P Model Hypothesis Class A B C D 250 Author/Persona 1.00 0.58 0.75 0.42 Basic Persona 1.00 0.73 0.58 0.53 Persona Reg. 0.90 0.70 0.58 0.44 100 Author/Persona 0.98 0.68 0.70 0.46 Basic Persona 0.95 0.73 0.53 0.47 Persona Reg. 0.93 0.78 0.63 0.49 50 Author/Persona 0.95 0.73 0.63 0.50 Basic Persona 0.98 0.75 0.48 0.53 Persona Reg. 1.00 0.75 0.65 0.38 25 Author/Persona 1.00 0.63 0.65 0.50 Basic Persona 1.00 0.63 0.50 0.50 Persona Reg. 0.90 0.78 0.60 0.39 10 Author/Persona 0.95 0.63 0.70 0.51 Basic Persona 0.78 0.80 0.48 0.46 Persona Reg. 0.90 0.73 0.43 0.41 Baseline 1.00 0.63 0.58 0.37 Table 1: Agreement rates with preregistered hypotheses, averaged over 5 sampling runs with different initializations. 0 25 50 75 100 A B C D Hypothesis class Accuracy Author/Persona Basic Persona Reg. Baseline Figure 4: Synopsis of table 1: average accuracy across all P. 
Persona regression is best able to judge characters in one author to be more similar to each other than to characters in another (B), while our mixed-effects Author/Persona model outperforms other models at discriminating characters in the same author (C). opsis of this table by illustrating the average accuracy across all choice of P. All models, including the baseline, perform well on the sanity checks (A). As expected, the Persona Regression model performs best at hypothesis class B (correctly judging two characters from the same author to be more similar to each other than to a character from a different author); this behavior is encouraged in this model by allowing an author (as an external metadata variable) to directly influence the persona choice, which has the effect of pushing characters from the same author to embody the same character type. Our mixed effects Author/Persona model, in contrast, outperforms the other models at hypothesis class C (correctly discriminating different character types present in the same author). By discounting author-specific lexical effects during persona inference, we are better able to detect variation among the characters of a single author that we are not able to capture otherwise. While these different models complement each other in this manner, we note that there is no absolute separation among them, which may be suggestive of the degree to which the formal and referential dimensions are fused in novels. Nevertheless, the strengths of these different models on these different hypothesis classes gives us flexible alternatives to use depending on the kinds of character types we are looking to infer. 7 Analysis The latent personas inferred from this model will support further exploratory analysis of literary history. Figure 2 illustrates this with a selection of three character types learned, displaying characteristic clusters for all role types, along with the distribution of that persona’s use across time and the gender distribution of characters embodying that persona. In general, the personas learned so far do not align neatly with character types known to literary historians. But they do have legible associations both with literary genres and with social categories. Even though gender is not an observable variable known to the model during inference, personas tend to be clearly gendered. This is not in itself surprising (since literary scholars know that assumptions about character are strongly gendered), but it does suggest that diachronic analysis of latent character types might cast new light on the history of gender in fiction. This is especially true since the distribution of personas across the time axis similarly reveals coherent trends. Table 3 likewise illustrates what our model learns by presenting a sample of the fixed effects learned for a set of five major 19th-century authors. 
These are clusters that are conditionally more likely to appear associated with a character in a work by the given author than they are in the overall data; by factoring this information out of the inference process for learning character types (by attributing its presence in a text to the author 377 1800 1820 1840 1860 1880 1900 1800 1820 1840 1860 1880 1900 1800 1820 1840 1860 1880 1900 Agent carried ran threw sent received arrived turns begins returns rose fell suddenly appeared struck showed thinks loves calls is seems returned immediately waiting does knows comes Patient wounded killed murdered wounded killed murdered thinks loves calls suffer yield acknowledge destroy bind crush love hope true free saved unknown attend haste proceed turn hold show Poss death happiness future army officers troops lips cheek brow lips cheek brow soldiers band armed eyes face eye mouth fingers tongue party join camp table bed chair Pred crime guilty murder king emperor throne beautiful fair fine youth lover hers general officer guard good kind ill dead living died soldier knight hero dead living died % Female 12.2 3.7 54.7 Table 2: Snapshots of three personas learned from the P = 50, Author/Persona model. Gender and time proportions are calculated by summing and normalizing the posterior distributions over all characters with that feature. We truncate time series at 1800 due to data sparsity before that date; the y-axis illustrates the frequency of its use in a given year, relative to its lifetime. Author clusters Jane Austen praise gift consolation letter read write character natural taste Charlotte Bront¨e lips cheek brow book paper books hat coat cap Charles Dickens hat coat cap table bed chair hand head hands Herman Melville boat ship board hat coat cap feet ground foot Jules Verne journey travel voyage master company presence success plan progress Table 3: Characteristic possessive clusters in a sample of major 19th-century authors. rather than the persona), we are able to learn personas that cut across different topics more effectively than if a character type is responsible for explaining the presence of these terms as well. 8 Conclusion Our method establishes the possibility of representing the relationship between character and narrative form in a hierarchical Bayesian model. Postulating an interaction between authorial diction and character allows models that consider the effect of the author to more closely reproduce a human reader’s judgments, especially by learning to distinguish different character types within a single author’s oeuvre. This opens the door to considering other structural and formal dimensions of narration. For instance, representation of character is notoriously complicated by narrative point of view (Booth, 1961); and indeed, comparisons between first-person narrators and other characters are a primary source of error for all models tested above. The strategy we have demonstrated suggests that it might be productive to address this by modeling the interaction of character and point of view as a separate effect analogous to authorship. It is also worth noting that the models tested above diverge from many structuralist theories of narrative (Propp, 1998) by allowing multiple instances of the same persona in a single work. Learning structural limitations on the number of “protagonists” likely to coexist in a single story, for example, may be another fruitful area to explore. 
In all cases, the machinery of hierarchical models gives us the flexibility to incorporate such effects at will, while also being explicit about the theoretical assumptions that attend them. 9 Acknowledgments We thank the reviewers for their helpful comments. The research reported here was supported by a National Endowment for the Humanities start-up grant to T.U., U.S. National Science Foundation grant CAREER IIS-1054319 to N.A.S., and an ARCS scholarship to D.B. This work was made possible through the use of computing resources made available by the Pittsburgh Supercomputing Center. Eleanor Courtemanche provided advice about the history of narrative theory. 378 References Galen Andrew and Jianfeng Gao. 2007. Scalable training of l1-regularized log-linear models. In Proc. of ICML. David Bamman, Brendan O’Connor, and Noah A. Smith. 2013. Learning latent personas of film characters. Proc. of ACL. David Bamman, Ted Underwood, and Noah A. Smith. 2014. Appendix to ‘A Bayesian mixed effects model of literary character’. Technical report, Carnegie Mellon University, University of IllinoisUrbana Champaign. David M. Blei, Andrew Y. Ng, and Michael I. Jordan. 2003. Latent Dirichlet allocation. Journal of Machine Learning Research, 3:993–1022. Wayne Booth. 1961. The Rhetoric of Fiction. University of Chicago Press, Chicago. Peter F. Brown, Peter V. deSouza, Robert L. Mercer, Vincent J. Della Pietra, and Jenifer C. Lai. 1992. Class-based n-gram models of natural language. Computational Linguistics, 18(4):467–479. Nathanael Chambers. 2013. Event schema induction with a probabilistic entity-driven model. In Proc. of EMNLP, Seattle, Washington, USA. Stanley F. Chen and Roni Rosenfeld. 2000. A survey of smoothing techniques for me models. IEEE Transactions on Speech and Audio Processing, 8(1):37–50. Jackie Chi Kit Cheung, Hoifung Poon, and Lucy Vanderwende. 2013. Probabilistic frame induction. In Proc. of NAACL. Peter T. Davis, David K. Elson, and Judith L. Klavans. 2003. Methods for precise named entity matching in digital collections. In Proc. of JCDL, Washington, DC, USA. Marie-Catherine de Marneffe and Christopher D. Manning. 2008. Stanford typed dependencies manual. Technical report, Stanford University. Greg Durrett, David Hall, and Dan Klein. 2013. Decentralized entity-level modeling for coreference resolution. In Proc. of ACL. Jacob Eisenstein, Amr Ahmed, and Eric P. Xing. 2011. Sparse additive generative models of text. In Proc. of ICML. David K. Elson, Nicholas Dames, and Kathleen R. McKeown. 2010. Extracting social networks from literary fiction. In Proc. of ACL, Stroudsburg, PA, USA. Jenny Rose Finkel, Trond Grenager, and Christopher Manning. 2005. Incorporating non-local information into information extraction systems by gibbs sampling. In Proc. of ACL. E. M. Forster. 1927. Aspects of the Novel. Harcourt, Brace & Co. Joshua Goodman. 2001. Classes for fast maximum entropy training. In Proc. of ICASSP. Joshua Goodman. 2004. Exponential priors for maximum entropy models. In Proc. of NAACL. Aria Haghighi and Dan Klein. 2010. Coreference resolution in a modular, entity-centered model. In Proc. of NAACL. Jun’ichi Kazama and Jun’ichi Tsujii. 2003. Evaluation and extension of maximum entropy models with inequality constraints. In Proc. of EMNLP. Suzanne Keen. 2003. Narrative Form. Palgrave Macmillan, Basingstoke. Tomas Mikolov, Kai Chen, Greg Corrado, and Jeffrey Dean. 2013. Efficient estimation of word representations in vector space. Proc. of ICLR. Frederic Morin and Yoshua Bengio. 2005. 
Hierarchical probabilistic neural network language model. In Proc. of AISTATS. Joakim Nivre, Johan Hall, Jens Nilsson, Atanas Chanev, G¨ulsen Eryigit, Sandra K¨ubler, Svetoslav Marinov, and Erwin Marsi. 2007. Maltparser: A language-independent system for data-driven dependency parsing. Natural Language Engineering, 13:95–135, 5. Vladimir Propp. 1998. Morphology of the Folktale. University of Texas Press, 2nd edition. Roni Rosenfeld. 1996. A maximum entropy approach to adaptive statistical language modelling. Computer Speech and Language, 10(3):187 – 228. Yanchuan Sim, Brice D. L. Acree, Justin H. Gross, and Noah A. Smith. 2013. Measuring ideological proportions in political speeches. In Proc. of EMNLP, Seattle, Washington, USA. Kristina Toutanova, Dan Klein, Christopher D. Manning, and Yoram Singer. 2003. Feature-rich part-ofspeech tagging with a cyclic dependency network. In Proc. of NAACL. Ted Underwood, Michael L Black, Loretta Auvil, and Boris Capitanu. 2013. Mapping mutable genres in structurally complex volumes. In Proc. of IEEE International Conference on Big Data. Alex Woloch. 2003. The One vs. the Many: Minor Characters and the Space of the Protagonist in the Novel. Princeton University Press, Princeton NJ. 379
Proceedings of the 52nd Annual Meeting of the Association for Computational Linguistics, pages 380–390, Baltimore, Maryland, USA, June 23-25 2014. c⃝2014 Association for Computational Linguistics Collective Tweet Wikification based on Semi-supervised Graph Regularization Hongzhao Huang1, Yunbo Cao2, Xiaojiang Huang2, Heng Ji1, Chin-Yew Lin2 1Computer Science Department, Rensselaer Polytechnic Institute, Troy, NY 12180, USA 2Microsoft Research Asia, Beijing 100080, P.R.China {huangh9,jih}@rpi.edu1, {yunbo.cao,xiaojih,cyl}@microsoft.com2 Abstract Wikification for tweets aims to automatically identify each concept mention in a tweet and link it to a concept referent in a knowledge base (e.g., Wikipedia). Due to the shortness of a tweet, a collective inference model incorporating global evidence from multiple mentions and concepts is more appropriate than a noncollecitve approach which links each mention at a time. In addition, it is challenging to generate sufficient high quality labeled data for supervised models with low cost. To tackle these challenges, we propose a novel semi-supervised graph regularization model to incorporate both local and global evidence from multiple tweets through three fine-grained relations. In order to identify semanticallyrelated mentions for collective inference, we detect meta path-based semantic relations through social networks. Compared to the state-of-the-art supervised model trained from 100% labeled data, our proposed approach achieves comparable performance with 31% labeled data and obtains 5% absolute F1 gain with 50% labeled data. 1 Introduction With millions of tweets posted daily, Twitter enables both individuals and organizations to disseminate information, from current affairs to breaking news in a timely fashion. In this work, we study the wikification (Disambiguation to Wikipedia) task (Mihalcea and Csomai, 2007) for tweets, which aims to automatically identify each concept mention in a tweet, and link it to a concept referent in a knowledge base (KB) (e.g., Wikipedia). For example, as shown in Figure 1, Hawks is an identified mention, and its correct referent concept in Wikipedia is Atlanta Hawks. An end-to-end wikification system needs to solve two sub-problems: (i) concept mention detection, (ii) concept mention disambiguation. Wikification is a particularly useful task for short messages such as tweets because it allows a reader to easily grasp the related topics and enriched information from the KB. From a systemto-system perspective, wikification has demonstrated its usefulness in a variety of applications, including coreference resolution (Ratinov and Roth, 2012) and classification (Vitale et al., 2012). Sufficient labeled data is crucial for supervised models. However, manual wikification annotation for short documents is challenging and timeconsuming (Cassidy et al., 2012). The challenges are: (i) unlinkability, a valid concept may not exist in the KB. (ii) ambiguity, it is impossible to determine the correct concept due to the dearth of information within a single tweet or multiple correct answer. For instance, it would be difficult to determine the correct referent concept for “Gators” in t1 in Figure 1. Linking “UCONN” in t3 to University of Connecticut may also be acceptable since Connecticut Huskies is the athletic team of the university. (iii) prominence, it is challenging to select a set of linkable mentions that are important and relevant. 
It is not tricky to select “Fans”, “slump”, and “Hawks” as linkable mentions, but other mentions such as “stay up” and “stay positive” are not prominent. Therefore, it is challenging to create sufficient high quality labeled tweets for supervised models and worth considering semi-supervised learning with the exploration of unlabeled data. 380 Stay up Hawk Fans. We are going through a slump now, but we have to stay positive. Go Hawks! Congrats to UCONN and Kemba Walker. 5 wins in 5 days, very impressive... Just getting to the Arena, we play the Bucks tonight. Let's get it! Fan (person); Mechanical fan Slump (geology); Slump (sports) Atlanta Hawks; Hawks (film) University of Connecticut; Connecticut Huskies Kemba Walker Arena; Arena (magazine); Arena (TV series) Bucks County, Pennsylvania; Milwaukee Bucks Tweets Concept Candidates Go Gators!!! Florida Gators football; Florida Gators men's basketball t1 t2 t3 t4 Figure 1: An illustration of Wikification Task for Tweets. Concept mentions detected in tweets are marked as bold, and correctly linked concepts are underlined. The concept candidates are ranked by their prior popularity which will be explained in section 4.1, and only top 2 ranked concepts are listed. However, when selecting semi-supervised learning frameworks, we noticed another unique challenge that tweets pose to wikification due to their informal writing style, shortness and noisiness. The context of a single tweet usually cannot provide enough information for prominent mention detection and similarity computing for disambiguation. Therefore, a collective inference model over multiple tweets in the semi-supervised setting is desirable. For instance, the four tweets in Figure 1 are posted by the same author within a short time period. If we perform collective inference over them we can reliably link ambiguous mentions such as “Gators”, “Hawks”, and “Bucks” to basketball teams instead of other concepts such as the county Bucks County. In order to address these unique challenges for wikification for the short tweets, we employ graph-based semi-supervised learning algorithms (Zhu et al., 2003; Smola and Kondor, 2003; Blum et al., 2004; Zhou et al., 2004; Talukdar and Crammer, 2009) for collective inference by exploiting the manifold (cluster) structure in both unlabeled and labeled data. These approaches normally assume label smoothness over a defined graph, where the nodes represent a set of labeled and unlabeled instances, and the weighted edges reflect the closeness of each pair of instances. In order to construct a semantic-rich graph capturing the similarity between mentions and concepts for the model, we introduce three novel fine-grained relations based on a set of local features, social networks and meta paths. The main contributions of this paper are summarized as follows: • To the best of our knowledge, this is the first effort to explore graph-based semi-supervised learning algorithms for the wikification task. • We propose a novel semi-supervised graph regularization model performing collective inference for joint mention detection and disambiguation. Our approach takes advantage of three proposed principles to incorporate both local and global evidence from multiple tweets. • We propose a meta path-based unified framework to detect both explicitly and implicitly relevant mentions. 2 Preliminaries Concept and Concept Mention We define a concept c as a Wikipedia article (e.g., Atlanta Hawks), and a concept mention m as an n-gram from a specific tweet. 
Each concept has a set of textual representation fields (Meij et al., 2012), including title (the title of the article), sentence (the first sentence of the article), paragraph (the first paragraph of the article), content (the entire content of the article), and anchor (the set of all anchor texts with incoming links to the article). Wikipedia Lexicon Construction We first construct an offline lexicon with each entry as ⟨m, {c1, ..., ck}⟩, where {c1, ..., ck} is the set of possible referent concepts for the mention m. Following the previous work (Bunescu, 2006; Cucerzan, 2007; Hachey et al., 2013), we extract the possible mentions for a given concept c using the following resources: the title of c; the aliases appearing in the introduction and infoboxes of c (e.g., The Evergreen State is an alias of Washington state); the titles of pages redirecting to c (e.g., State of Washington is a redirecting page of Washington (state)); the titles of the disambigua381 tion pages containing c; and all the anchor texts appearing in at least 5 pages with hyperlinks to c (e.g., WA is a mention for the concept Washington (state) in the text “401 5th Ave N [[Seattle]], [[Washington (state)—WA]] 98109 USA”. We also propose three heuristic rules to extract mentions (i.e., different combinations of the family name and given name for a person, the headquarters of an organization, and the city name for a sports team). Concept Mention Extraction Based on the constructed lexicon, we then consider all n-grams of size ≤n (n=7 in this paper) as concept mention candidates if their entries in the lexicon are not empty. We first segment @usernames and #hashtags into regular tokens (e.g., @amandapalmer is segmented as amanda palmer and #WorldWaterDay is split as World Water Day) using the approach proposed by (Wang et al., 2011). Segmentation assists finding concept candidates for these non-regular mentions. 3 Principles and Approach Overview Relational Graph Construction Knowledge Base (Wikipedia) Labeled and Unlabeled Tweets Wikipedia Lexicon Construction Concept Mention and Concept Candidate Extraction Local Compatibility (local features, cosine similarity) Coreference (meta path, mention similarity) Semantic Relatedness (meta path, concept semantic relatedness) Semi-Supervised Graph Regularization <Mention, Concept> Pairs Figure 2: Approach Overview. 3.1 Principles A single tweet may not provide enough evidence to identify prominent mentions and infer their correct referent concepts due to the lack of contextual information. To tackle this problem, we propose to incorporate global evidence from multiple tweets and performing collective inference for both mention identification and disambiguation. We first introduce the following three principles that our approach relies on. Principle 1 (Local compatibility): Two pairs of ⟨m, c⟩with strong local compatibility tend to have similar labels. Mentions and their correct referent concepts usually tend to share a set of characteristics such as string similarity between m and c (e.g., ⟨Chicago, Chicago⟩and ⟨Facebook, Facebook⟩). We define the local compatibility to model such set of characteristics. Principle 2 (Coreference): Two coreferential mentions should be linked to the same concept. For example, if we know “nc” and “North Carolina” are coreferential, then they should both be linked to North Carolina. Principle 3 (Semantic Relatedness): Two highly semantically-related mentions are more likely to be linked to two highly semanticallyrelated concepts. 
For instance, when “Sweet 16” and “Hawks” often appear together within relevant contexts, they can be reliably linked to two baseketball-related concepts NCAA Men’s Division I Basketball Championship and Atlanta Hawks, respectively. 3.2 Approach Overview Given a set of tweets ⟨t1, ..., t|T|⟩, our system first generates a set of candidate concept mentions, and then extracts a set of candidate concept referents for each mention based on the Wikipedia lexicon. Given a pair of mention and its candidate referent concept ⟨m, c⟩, the remaining task of wikification is to assign either a positive label if m should be selected as a prominently linkable mention and c is its correct referent concept, or otherwise a negative label. The label assignment is obtained by our semi-supervised graph regularization framework based on a relational graph, which is constructed from local compatibility, coreference, and semantic relatedness relations. The overview of our approach is as illustrated in Figure 2. 4 Relational Graph Construction We first construct the relational graph G = ⟨V, E⟩, where V = {v1, ..., vn} is a set of nodes and E = {e1, ..., em} is a set of edges. Each vi = ⟨mi, ci⟩ represents a tuple of mention mi and its referent concept candidate ci. An edge is added between two nodes vi and vj if there is a proposed relation based on the three principles described in section 3.1. 4.1 Local Compatibility We first compute local compatibility (Principle 1) by considering a set of novel local features to cap382 ture the importance and relevance of a mention m to a tweet t, as well as the correctness of its linkage to a concept c. We have designed a number of features which are similar to those commonly used in wikification and entity linking work (Meij et al., 2012; Guo et al., 2013; Mihalcea and Csomai, 2007). Mention Features We define the following features based on information from mentions. • IDFf(m) = log( |C| df(m)), where |C| is the total number of concepts in Wikipedia and df(m) is the total number of concepts in which m occurs, and f indicates the field property, including title, content, and anchor. • Keyphraseness(m) = |Ca(m)| df(m) to measure how likely m is used as an anchor in Wikipedia, where Ca(m) is the set of concepts where m appears as an anchor. • LinkProb(m) = P c∈Ca(m) count(m,c) P c∈C count(m,c) , where count(m, c) indicates the number of occurrence of m in c. • SNIL(m) and SNCL(m) to count the number of concepts that are equal to or contain a sub-ngram of m, respectively (Meij et al., 2012). Concept Features The concept features are solely based on Wikipedia, including the number of incoming and outgoing links for c, and the number of words and characters in c. Mention + Concept Features This set of features considers information from both mentions and concepts: • prior popularity prior(m, c) = count(m,c) P c′ count(m,c′), where count(m, c) measures the frequency of the anchor links from m to c in Wikipedia. • TFf(m, c) = countf(m,c) |f| to measure the relative frequency of m in each field representation f of c, normalized by the length of f. The fields include title, sentence, paragraph, content and anchor. • NCT(m, c), TCN(m, c), and TEN(m, c) to measure whether m contains the title of c, whether the title of c contains m, and whether m equals to the title of c, respectively. Context Features This set of features include (i) Context Capitalization features, which indicate whether the current mention, the token before, and the token after are capitalized. 
(ii) tf-idf based features, which include the dot product of two word vectors vc and vt, and the average tf-idf value of common items in vc and vt, where vc and vt are the top 100 tf-idf word vectors in c and t. Local Compatibility Computation For each node vi = ⟨mi, ci⟩, we collect its local features as a feature vector Fi = ⟨f1, f2, ..., fd⟩. To avoid features with large numerical values that dominate other features, the value of each feature is re-scaled using feature standardization approach. The cosine similarity is then adopted to compute the local compatibility of two nodes and construct a k nearest neighbor (kNN) graph, where each node is connected to its k nearest neighboring nodes. We compute the weight matrix that represents the local compatibility relation as: W loc ij =  cosine(Fi, Fj) j ∈kNN(i) 0 Otherwise 4.2 Meta Path Mention Hashtag Tweet User post-1 post contain-1 contain contain-1 contain Figure 3: Schema of the Twitter network. In this subsection, we introduce the concept meta path which will be used to detect coreference (section 4.3) and semantic relatedness relations (section 4.4). A meta-path is a path defined over a network and composed of a sequence of relations between different object types (Sun et al., 2011b). In our experimental setting, we can construct a natural Twitter network summarized by the network schema in Figure 3. The network contains four types of objects: Mention (M), Tweet (T), User (U), and Hashtag (H). Tweets and mentions are connected by links “contain” and “contained by” (denoted as “contain−1”); and other linked relationships can be described similarly. We then define the following five types of meta paths to connect two mentions as: • “M - T - M”, • “M - T - U - T - M”, • “M - T - H - T - M”, • “M - T - U - T - M - T - H - T - M”, • “M - T - H - T - M - T - U - T - M”. 383 Each meta path represents one particular semantic relation. For instance, the first three paths are basic ones expressing the explicit relations that two mentions are from the same tweet, posted by the same user, and share the same #hashtag, respectively. The last two paths are concatenated ones which are constructed by concatenating the first three simple paths to express the implicit relations that two mentions co-occur with a third mention sharing either the same authorship or #hashtag. Such complicated paths can be exploited to detect more semantically-related mentions from wider contexts. For example, the relational link between “narita airport” and “Japan” would be missed without using the path “narita airport - t1 - u1 - t2 - american - t3 - h1 - t4 - Japan” since they don’t directly share any authorships or #hashtags. 4.3 Coreference A coreference relation (Principle 2) usually occurs across multiple tweets due to the highly redundant information in Twitter. To ensure high precision, we propose a simple yet effective approach utilizing the rich social network relations in Twitter. We consider two mentions mi and mj coreferential if mi and mj share the same surface form or one is an abbreviation of the other, and at least one meta path exists between mi and mj. 
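To make the coreference test above concrete, the following is a minimal Python sketch (an editorial illustration, not the authors' implementation). It checks only the three basic meta paths (shared tweet, shared author, or shared #hashtag) and uses a naive initial-letters heuristic for the abbreviation condition; the Tweet and Mention containers and that heuristic are assumptions introduced here for readability.

from dataclasses import dataclass
from typing import Set

@dataclass
class Tweet:
    tweet_id: str
    user: str
    hashtags: Set[str]

@dataclass
class Mention:
    surface: str
    tweet: Tweet

def is_abbreviation(short: str, full: str) -> bool:
    """Naive heuristic: 'nc' abbreviates 'North Carolina' (initial letters)."""
    initials = "".join(w[0] for w in full.lower().split())
    return short.lower() == initials

def basic_meta_path_exists(m1: Mention, m2: Mention) -> bool:
    """True if the mentions are connected by a basic meta path:
    M-T-M (same tweet), M-T-U-T-M (same author), or M-T-H-T-M (shared hashtag)."""
    t1, t2 = m1.tweet, m2.tweet
    return (t1.tweet_id == t2.tweet_id
            or t1.user == t2.user
            or bool(t1.hashtags & t2.hashtags))

def coreferential(m1: Mention, m2: Mention) -> bool:
    """Principle 2: same surface form or one abbreviates the other,
    plus at least one connecting meta path."""
    s1, s2 = m1.surface.lower(), m2.surface.lower()
    surface_match = (s1 == s2
                     or is_abbreviation(s1, m2.surface)
                     or is_abbreviation(s2, m1.surface))
    return surface_match and basic_meta_path_exists(m1, m2)

# Example: "nc" and "North Carolina" posted by the same user
t1 = Tweet("t1", "user_a", {"ncaa"})
t2 = Tweet("t2", "user_a", set())
print(coreferential(Mention("nc", t1), Mention("North Carolina", t2)))  # True

In the full model, the concatenated meta paths of Section 4.2 would also be consulted before the coreference weight matrix below is built.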
Then we define the weight matrix representing the coreferential relation as: W coref ij =    1.0 if mi and mj are coreferential, and ci = cj 0 Otherwise 4.4 Semantic Relatedness Ensuring topical coherence (Principle 3) has been beneficial for wikification on formal texts (e.g., News) by linking a set of semantically-related mentions to a set of semantically-related concepts simultaneously (Han et al., 2011; Ratinov et al., 2011; Cheng and Roth, 2013). However, the shortness of a single tweet means that it may not provide enough topical clues. Therefore, it is important to extend this evidence to capture semantic relatedness information from multiple tweets. We define the semantic relatedness score between two mentions as SR(mi, mj) = 1.0 if at least one meta path exists between mi and mj, otherwise SR(mi, mj) = 0. In order to compute the semantic relatedness of two concepts ci and cj, we adopt the approach proposed by (Milne and Witten, 2008a): SR(ci, cj) = 1−log max(|Ci|, |Cj|) −log |Ci ∩Cj| log(|C|) −log min(|Ci|, |Cj|) , where |C| is the total number of concepts in Wikipedia, and Ci and Cj are the set of concepts that have links to ci and cj, respectively. Then we compute a weight matrix representing the semantic relatedness relation as: W rel ij =  SR(Ni, Nj) if SR(Ni, Nj) ≥δ 0 Otherwise where SR(Ni, Nj) = SR(mi, mj) × SR(ci, cj) and δ = 0.3, which is optimized from a development set. 4.5 The Combined Relational Graph hawks, Atlanta Hawks uconn, Connecticut Huskies bucks, Milwaukee Bucks kemba walker, Kemba Walker 0.404 gators, Florida Gators men's basketball now, Now days, Day tonight, Tonight 0.932 0.764 0.665 0.467 0.563 0.538 0.447 Figure 4: A example of the relational graph constructed for the example tweets in Figure 1. Each node represents a pair of ⟨m, c⟩, separated by a comma. The edge weight is obtained from the linear combination of the weights of the three proposed relations. Not all mentions are included due to the space limitations. Based on the above three weight matrices W loc, W coref, and W rel, we first obtain their corresponding transition matrices P loc, P coref, and P rel, respectively. The entry Pij of the transition matrix P for a weight matrix W is computed as Pij = Wij P k Wik such that P k Pik = 1. Then we obtain the combined graph G with weight matrix W, where Wij = αP loc ij + βP coref ij + γP rel ij . α, β, and γ are three coefficients between 0 and 1 with the constraint that α + β + γ = 1. They control the contributions of these three relations in our semi-supervised graph regularization model. We choose transition matrix to avoid the domination of one relation over others. An example graph of G is shown in Figure 4. Compared to the referent graph which considers each mention or concept as a node in previous graph-based re-ranking approaches (Han et al., 2011; Shen et al., 2013), our 384 novel graph representation has two advantages: (i) It can easily incorporate more features related to both mentions and concepts. (ii) It is more appropriate for our graph-based semi-supervised model since it is difficult to assign labels to a pair of mention and concept in the referent graph. 5 Semi-supervised Graph Regularization Given the constructed relational graph with the weighted matrix W and the label vector Y of all nodes, we assume the first l nodes are labeled as Yl and the remaining u nodes (u = n −l) are initialized with labels Y 0 u . Then our goal is to refine Y 0 u and obtain the final label vector Yu. 
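For reference, the weight matrix W used here can be assembled from the three relation graphs as sketched below (a simplified NumPy illustration, not the released system). The Milne-Witten relatedness is computed from Wikipedia in-link sets, each relation's weight matrix is row-normalized into a transition matrix, and the matrices are mixed with the coefficients α, β, and γ; the sample coefficient values follow the footnote in Section 6.2, and dense arrays are assumed purely for clarity.

import numpy as np

def milne_witten_relatedness(inlinks_i, inlinks_j, total_concepts):
    """Milne & Witten (2008a) relatedness from the sets of concepts linking to ci and cj."""
    common = len(inlinks_i & inlinks_j)
    if common == 0:
        return 0.0
    a, b = len(inlinks_i), len(inlinks_j)
    num = np.log(max(a, b)) - np.log(common)
    den = np.log(total_concepts) - np.log(min(a, b))
    return max(0.0, 1.0 - num / den)

def to_transition(W):
    """Row-normalize a weight matrix: P_ij = W_ij / sum_k W_ik."""
    row_sums = W.sum(axis=1, keepdims=True)
    row_sums[row_sums == 0] = 1.0          # leave all-zero rows as zeros
    return W / row_sums

def combine_graph(W_loc, W_coref, W_rel, alpha=0.4, beta=0.5, gamma=0.1):
    """W = alpha*P_loc + beta*P_coref + gamma*P_rel, with alpha+beta+gamma = 1."""
    return (alpha * to_transition(W_loc)
            + beta * to_transition(W_coref)
            + gamma * to_transition(W_rel))

# Toy example with three <mention, concept> nodes
W_loc = np.array([[0, .8, .2], [.8, 0, 0], [.2, 0, 0]], dtype=float)
W_coref = np.zeros((3, 3)); W_coref[0, 1] = W_coref[1, 0] = 1.0
W_rel = np.zeros((3, 3)); W_rel[0, 2] = W_rel[2, 0] = 0.5
W = combine_graph(W_loc, W_coref, W_rel)

Row-normalizing before mixing is what prevents any single relation from dominating the combined graph, as noted above.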
Intuitively, if two nodes are strongly connected, they tend to hold the same label. We propose a novel semi-supervised graph regularization framework based on the graph-based semi-supervised learning algorithm (Zhu et al., 2003): Q(Y) = µ n X i=l+1 (yi −y0 i )2 + 1 2 X i,j Wij(yi −yj)2. The first term is a loss function that incorporates the initial labels of unlabeled examples into the model. In our method, we adopt prior popularity (section 4.1) to initialize the labels of the unlabeled examples. The second term is a regularizer that smoothes the refined labels over the constructed graph. µ is a regularization parameter that controls the trade-off between initial labels and the consistency of labels on the graph. The goal of the proposed framework is to ensure that the refined labels of unlabeled nodes are consistent with their strongly connected nodes, as well as not too far away from their initial labels. The above optimization problem can be solved directly since Q(Y) is convex (Zhu et al., 2003; Zhou et al., 2004). Let I be an identity matrix and DW be a diagonal matrix with entries Dii = P j Wij. We can split the weighted matrix W into four blocks as W = Wll Wlu Wul Wuu  , where Wmn is an m×n matrix. Dw is split similarly. We assume that the vector of the labeled examples Yl is fixed, so we only need to infer the refined label vector of the unlabeled examples Yu. In order to minimize Q(Y), we need to find Y ∗ u such that ∂Q ∂Yu Yu=Y ∗ u = (Duu + µIuu)Yu −WuuYu − WulYl −µY 0 u = 0. Therefore, a closed form solution can be derived as Y ∗ u = (Duu + µIuu −Wuu)−1(WulYl + µY 0 u ). However, for practical application to a largescale data set, an iterative solution would be more efficient to solve the optimization problem. Let Y t u be the refined labels after the tth iteration, the iterative solution can be derived as: Y t+1 u = (Duu+µIuu)−1(WuuY t u+WulYl+µY 0 u ). The iterative solution is more efficient since (Duu + µIuu) is a diagonal matrix and its inverse is very easy to compute. 6 Experiments In this section we compare our approach with state-of-the-art methods as shown in Table 1. 6.1 Data and Scoring Metric For our experiments we use a public data set (Meij et al., 2012) including 502 tweets posted by 28 verified users. The data set was annotated by two annotators. We randomly sample 102 tweets for development and the remaining for evaluation. We use a Wikipedia dump on May 3, 2013 as our knowledge base, which includes 30 million pages. For computational efficiency, we also filter some mention candidates by applying the preprocessing approach proposed in (Ferragina and Scaiella, 2010), and remove all the concepts with prior popularity less than 2% from an mention’s concept set for each mention, similar to (Guo et al., 2013). A mention and concept pair ⟨m, c⟩is judged as correct if and only if m is linkable and c is the correct referent concept for m. To evaluate the performance of a wikification system, we use the standard precision, recall and F1 measures. 6.2 Experimental Results The overall performance of various approaches is shown in Table 2. The results of the supervised method proposed by (Meij et al., 2012) are obtained from 5-fold cross validation. For our semi-supervised setting, we experimentally sample 200 tweets for training and use the remaining set as unlabeled and testing sets. In our semisupervised regularization model, the matrix W loc is constructed by a kNN graph (k = 20). 
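As a concrete illustration of the iterative solution just derived, the update can be written in a few lines of NumPy (a sketch under the assumptions that the labeled nodes are ordered first and that W fits in a dense array; neither is required by the model):

import numpy as np

def graph_regularization(W, Y_l, Y0_u, mu=0.1, n_iters=100):
    """Iterative update: Y_u^{t+1} = (D_uu + mu*I)^{-1} (W_uu Y_u^t + W_ul Y_l + mu*Y0_u).

    W    : (n, n) combined weight matrix; the first l nodes are labeled.
    Y_l  : (l,) fixed labels of the labeled nodes.
    Y0_u : (u,) initial labels (prior popularity) of the unlabeled nodes.
    """
    l = len(Y_l)
    W_uu = W[l:, l:]
    W_ul = W[l:, :l]
    D_uu = W[l:, :].sum(axis=1)      # degrees D_ii = sum_j W_ij for unlabeled nodes
    denom = D_uu + mu                # diagonal of (D_uu + mu*I)
    Y_u = Y0_u.copy()
    for _ in range(n_iters):
        Y_u = (W_uu @ Y_u + W_ul @ Y_l + mu * Y0_u) / denom
    return Y_u

Because (Duu + µIuu) is diagonal, each iteration reduces to an element-wise division, which is why the iterative form is preferable to the closed-form matrix inverse on large graphs, as argued above.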
The regularization parameter µ is empirically set to 0.1, and the coefficients α, β, and γ are learnt from the development set by considering all the combina385 Methods Descriptions TagMe The same approach that is described in (Ferragina and Scaiella, 2010), which aims to annotate short texts based on prior popularity and semantic relatedness of concepts. It is basically an unsupervised approach, except that it needs a development set to tune the probability threshold for linkable mentions. Meij A state-of-the-art system described in (Meij et al., 2012), which is a supervised approach based on the random forest model. It performs mention detection and disambiguation jointly, and it is trained from 400 labeled tweets. SSRegu1 Our proposed model based on Principle 1, using 200 labeled tweets. SSRegu12 Our proposed model based on Principle 1 and 2, using 200 labeled tweets. SSRegu13 Our proposed model based on Principle 1 and 3, using 200 labeled tweets. SSRegu123 Our proposed full model based on Principle 1, 2 and 3, using 200 labeled tweets. Table 1: Description of Methods. Methods Precision Recall F1 TagMe 0.329 0.423 0.370 Meij 0.393 0.598 0.475 SSRegu1 0.538 0.435 0.481 SSRegu12 0.638 0.438 0.520 SSRegu13 0.541 0.457 0.495 SSRegu123 0.650 0.441 0.525 Table 2: Overall Performance. tions of values from 0 to 1 at 0.1 intervals1. In order to randomize the experiments and make the comparison fair, we conduct 20 test runs for each method and report the average scores across the 20 trials. The relatively low performance of the baseline system TagMe demonstrates that only relying on prior popularity and topical information within a single tweet is not enough for an end-to-end wikification system for the short tweets. As an example, it is difficult to obtain topical clues in order to link the mention “Clinton” to Hillary Rodham Clinton by relying on the single tweet “wolfblitzercnn: Behind the scenes on Clinton’s Mideast trip #cnn”. Therefore, the system mistakenly links it to the most popular concept Bill Clinton. In comparision with the supervised baseline proposed by (Meij et al., 2012), our model SSRegu1 relying on local compatibility already achieves comparable performance with 50% of labeled data. This is because that our model performs collective inference by making use of the manifold (cluster) structure of both labeled and unlabeled data, and that the local compatibility relation is detected with high precision2 (89.4%). For example, the following three pairs of mentions and concepts ⟨pelosi, Nancy Pelosi⟩, ⟨obama, Barack Obama⟩, and ⟨gaddafi, Muam1These three coefficients are slightly different with different training data, a sample of them is: α = 0.4, β = 0.5, and γ = 0.1 2Here we define precision as the percentage of links that holds the same label. mar Gaddafi⟩have strong local compatibility with each other since they share many similar characteristics captured by the local features such as string similarity between the mention and the concept. Suppose the first pair is labeled, then its positive label will be propagated to other unlabeled nodes through the local compatibility relation, and correctly predict the labels of other nodes. Incorporating coreferential or semantic relatedness relation into SSRegu1 provides further gains, demonstrating the effectiveness of these two relations. For instance, “wh” is correctly linked to White House by incorporating evidence from its coreferential mention “white house”. 
The coreferential relation (Principle 2) is demonstrated to be more beneficial than the semantic relatedness relation (Principle 3) because the former is detected with much higher precision (99.7%) than the latter (65.4%). Our full model SSRegu123 achieves significant improvement over the supervised baseline (5% absolute F1 gain with 95.0% confidence level by the Wilcoxon Matched-Pairs Signed-Ranks Test), showing that incorporating global evidence from multiple tweets with fine-grained relations is beneficial. For instance, the supervised baseline fails to link “UCONN” and “Bucks” in our examples to Connecticut Huskies and Milwaukee Bucks, respectively. Our full model corrects these two wrong links by propagating evidence through the semantic links as shown in Figure 4 to obtain mutual ranking improvement. The best performance of our full model also illustrates that the three relations complement each other. We also study the disambiguation performance for the annotated mentions, as shown in Table 3. We can easily see that our proposed approach using 50% labeled data achieves similar performance with the state-of-the-art supervised model with 100% labeled data. When the mentions are given, the unpervised approach TagMe has already 386 Methods TagMe Meij SSRegu123 Accuracy 0.710 0.779 0.772 Table 3: Disambiguation Performance. Methods Precision Recall F1 SSRegu12 0.644 0.423 0.510 SSRegu13 0.543 0.441 0.486 SSRegu123 0.657 0.419 0.512 Table 4: The Performance of Systems Without Using Concatenated Meta Paths. achieved reasonable performance. In fact, mention detection actually is the performance bottleneck of a tweet wikification system (Guo et al., 2013). Our system performs better in identifying the prominent mention. 6.3 Effect of Concatenated Meta Paths In this work, we propose a unified framework utilizing meta path-based semantic relations to explore richer relevant context. Beyond the basic meta paths, we introduce concatenated ones by concatenating the basic ones. The performance of the system without using the concatenated meta paths is shown in Table 4. In comparison with the system based on all defined meta paths, we can clearly see that the systems using concatenated ones outperform those relying on the simple ones. This is because the concatenated meta paths can incorporate more relevant information with implicit relations into the models by increasing 1.6% coreference links and 9.3% semantic relatedness links. For example, the mention “narita airport” is correctly disambiguated to the concept “Narita International Airport” with higher confidence since its semantic relatedness relation with “Japan” is detected with the concatenated meta path as described in section 4.2. 6.4 Effect of Labeled Data Size 50 100 150 200 250 300 350 400 0.30 0.35 0.40 0.45 0.50 0.55 0.60 F1 Labeled Tweet Size SSRegu123 Meij Figure 5: The effect of Labeled Tweet Size. In previous experiments, we experimentally set the number of labeled tweets to be 200 for overall performance comparision with the baselines. In this subsection, we study the effect of labeled data size on our full model. We randomly sample 100 tweets as testing data, and randomly select 50, 100, 150, 200, 250, and 300 tweets as labeled data. 20 test runs are conducted and the average results are reported across the 20 trials, as shown in Figure 5. We find that as the size of the labeled data increases, our proposed model achieves better performance. 
It is encouraging to see that our approach, with only 31.3% labeled tweets (125 out of 400), already achieves a performance that is comparable to the state-of-the-art supervised model trained from 100% labeled tweets. 6.5 Parameter Analysis 0.1 0.5 1 2 5 10 20 30 40 50 0.30 0.35 0.40 0.45 0.50 0.55 0.60 F1 Regularization Parameter µ SSRegu123 Figure 6: The effect of parameter µ. In previous experiments, we empirically set the parameter µ = 0.1. µ is the regularization parameter that controls the trade-off between initial labels and the consistency of labels on the graph. When µ increases, the model tends to trust more in the initial labels. Figure 6 shows the performance of our models by varying µ from 0.02 to 50. We can easily see that the system performce is stable when µ < 0.4. However, when µ ≥0.4, the system performance dramatically decreases, showing that prior popularity is not enough for an end-toend wikification system. 7 Related Work The task of linking concept mentions to a knowledge base has received increased attentions over the past several years, from the linking of concept mentions in a single text (Mihalcea and Csomai, 2007; Milne and Witten, 2008b; Milne and Witten, 2008a; Kulkarni et al., 2009; He et al., 2011; Ratinov et al., 2011; Cassidy et al., 2012; Cheng and Roth, 2013), to the linking of a cluster of corefer387 ent named entity mentions spread throughout different documents (Entity Linking) (McNamee and Dang, 2009; Ji et al., 2010; Zhang et al., 2010; Ji et al., 2011; Zhang et al., 2011; Han and Sun, 2011; Han et al., 2011; Gottipati and Jiang, 2011; He et al., 2013; Li et al., 2013; Guo et al., 2013; Shen et al., 2013; Liu et al., 2013). A significant portion of recent work considers the two sub-problems mention detection and mention disambiguation separately and focus on the latter by first defining candidate concepts for a deemed mention based on anchor links. Mention disambiguation is then formulated as a ranking problem, either by resolving one mention at each time (non-collective approaches), or by disambiguating a set of relevant mentions simultaneously (collective approaches). Non-collective methods usually rely on prior popularity and context similarity with supervised models (Mihalcea and Csomai, 2007; Milne and Witten, 2008b; Han and Sun, 2011), while collective approaches further leverage the global coherence between concepts normally through supervised or graph-based re-ranking models (Cucerzan, 2007; Milne and Witten, 2008b; Han and Zhao, 2009; Kulkarni et al., 2009; Pennacchiotti and Pantel, 2009; Ferragina and Scaiella, 2010; Fernandez et al., 2010; Radford et al., 2010; Cucerzan, 2011; Guo et al., 2011; Han and Sun, 2011; Han et al., 2011; Ratinov et al., 2011; Chen and Ji, 2011; Kozareva et al., 2011; Cassidy et al., 2012; Shen et al., 2013; Liu et al., 2013). Especially note that when applying the collective methods to short messages from social media, evidence from other messages usually needs to be considered (Cassidy et al., 2012; Shen et al., 2013; Liu et al., 2013). Our method is a collective approach with the following novel advancements: (i) A novel graph representation with fine-grained relations, (ii) A unified framework based on meta paths to explore richer relevant context, (iii) Joint identification and linking of mentions under semi-supervised setting. Two most similar methods to ours were proposed by (Meij et al., 2012; Guo et al., 2013) by performing joint detection and disambiguation of mentions. 
(Meij et al., 2012) studied several supervised machine learning models, but without considering any global evidence either from a single tweet or other relevant tweets. (Guo et al., 2013) explored second order entity-to-entity relations but did not incorporate evidence from multiple tweets. This work is also related to graph-based semisupervised learning (Zhu et al., 2003; Smola and Kondor, 2003; Zhou et al., 2004; Talukdar and Crammer, 2009), which has been successfully applied in many Natural Language Processing tasks (Niu et al., 2005; Chen et al., 2006). We introduce a novel graph that incorporates three fine-grained relations. Our work is further related to meta path-based heterogeneous information network analysis (Sun et al., 2011b; Sun et al., 2011a; Kong et al., 2012; Huang et al., 2013), which has demonstrated advantages over homogeneous information network analysis without differentiating object types and relational links. 8 Conclusions We have introduced a novel semi-supervised graph regularization framework for wikification to simultaneously tackle the unique challenges of annotation and information shortage in short tweets. To the best of our knowledge, this is the first work to explore the semi-supervised collective inference model to jointly perform mention detection and disambiguation. By studying three novel finegrained relations, detecting semantically-related information with semantic meta paths, and exploiting the data manifolds in both unlabeled and labeled data for collective inference, our work can dramatically save annotation cost and achieve better performance, thus shed light on the challenging wikification task for tweets. Acknowledgments This work was supported by the U.S. Army Research Laboratory under Cooperative Agreement No. W911NF-09-2-0053 (NS-CTA), U.S. NSF CAREER Award under Grant IIS-0953149, U.S. DARPA Award No. FA8750-13-2-0041 in the Deep Exploration and Filtering of Text (DEFT) Program, IBM Faculty Award, Google Research Award and RPI faculty start-up grant. The views and conclusions contained in this document are those of the authors and should not be interpreted as representing the official policies, either expressed or implied, of the U.S. Government. The U.S. Government is authorized to reproduce and distribute reprints for Government purposes notwithstanding any copyright notation here on. 388 References A. Blum, J. Lafferty, M. Rwebangira, and R. Reddy. 2004. Semi-supervised learning using randomized mincuts. In Proceedings of the Twenty-first International Conference on Machine Learning, ICML ’04. Razvan Bunescu. 2006. Using encyclopedic knowledge for named entity disambiguation. In EACL, pages 9–16. T. Cassidy, H. Ji, L. Ratinov, A. Zubiaga, and H. Huang. 2012. Analysis and enhancement of wikification for microblogs with context expansion. In Proceedings of COLING 2012. Z. Chen and H. Ji. 2011. Collaborative ranking: A case study on entity linking. In Proc. EMNLP2011. J. Chen, D. Ji, C Tan, and Z. Niu. 2006. Relation extraction using label propagation based semisupervised learning. In Proceedings of the 21st International Conference on Computational Linguistics and 44th Annual Meeting of the Association for Computational Linguistics. X. Cheng and D. Roth. 2013. Relational inference for wikification. In Proceedings of the 2013 Conference on Empirical Methods in Natural Language Processing. Silviu Cucerzan. 2007. Large-scale named entity disambiguation based on wikipedia data. In EMNLPCoNLL 2007. S. Cucerzan. 2011. 
Tac entity linking by performing full-document entity extraction and disambiguation. In Proc. TAC 2011 Workshop. N. Fernandez, J. A. Fisteus, L. Sanchez, and E. Martin. 2010. Webtlab: A cooccurence-based approach to kbp 2010 entity-linking task. In Proc. TAC 2010 Workshop. P. Ferragina and U. Scaiella. 2010. Tagme: on-thefly annotation of short text fragments (by wikipedia entities). In Proceedings of the 19th ACM international conference on Information and knowledge management, CIKM ’10. S. Gottipati and J. Jiang. 2011. Linking entities to a knowledge base with query expansion. In Proceedings of the 2011 Conference on Empirical Methods in Natural Language Processing. Y. Guo, W. Che, T. Liu, and S. Li. 2011. A graphbased method for entity linking. In Proc. IJCNLP2011. S. Guo, M. Chang, and E. Kiciman. 2013. To link or not to link? a study on end-to-end tweet entity linking. In Proceedings of the 2013 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies. B. Hachey, W. Radford, J. Nothman, M. Honnibal, and J. Curran. 2013. Evaluating entity linking with wikipedia. Artif. Intell. X. Han and L. Sun. 2011. A generative entity-mention model for linking entities with knowledge base. In Proc. ACL2011. X. Han and J. Zhao. 2009. Named entity disambiguation by leveraging wikipedia semantic knowledge. In Proceedings of the 18th ACM conference on Information and knowledge management, CIKM 2009. X. Han, L. Sun, and J. Zhao. 2011. Collective entity linking in web text: A graph-based method. In Proc. SIGIR2011. J. He, M. de Rijke, M. Sevenster, R. van Ommering, and Y. Qian. 2011. Generating links to background knowledge: A case study using narrative radiology reports. In Proceedings of the 20th ACM international conference on Information and knowledge management. ACM. Z. He, S. Liu, Y. Song, M. Li, M. Zhou, and H. Wang. 2013. Efficient collective entity linking with stacking. In Proceedings of the 2013 Conference on Empirical Methods in Natural Language Processing. H. Huang, Z. Wen, D. Yu, H. Ji, Y. Sun, J. Han, and H. Li. 2013. Resolving entity morphs in censored data. In Proceedings of the 51st Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers). H. Ji, R. Grishman, H.T. Dang, K. Griffitt, and J. Ellis. 2010. Overview of the tac 2010 knowledge base population track. In Text Analysis Conference (TAC) 2010. H. Ji, R. Grishman, and H.T. Dang. 2011. Overview of the tac 2011 knowledge base population track. In Text Analysis Conference (TAC) 2011. X. Kong, P. Yu, Y. Ding, and J. Wild. 2012. Meta path-based collective classification in heterogeneous information networks. In Proceedings of the 21st ACM International Conference on Information and Knowledge Management, CIKM ’12. Z. Kozareva, K. Voevodski, and S. Teng. 2011. Class label enhancement via related instances. In Proc. EMNLP2011. S. Kulkarni, A. Singh, G. Ramakrishnan, and S. Chakrabarti. 2009. Collective annotation of wikipedia entities in web text. In KDD. Y. Li, C. Wang, F. Han, J. Han, D. Roth, and X. Yan. 2013. Mining evidences for named entity disambiguation. In Proceedings of the 19th ACM SIGKDD International Conference on Knowledge Discovery and Data Mining, KDD ’13. 389 X. Liu, Y. Li, H. Wu, M. Zhou, F. Wei, and Y. Lu. 2013. Entity linking for tweets. In Proceedings of the 51st Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers). P. McNamee and H.T. Dang. 2009. 
Overview of the tac 2009 knowledge base population track. In Text Analysis Conference (TAC) 2009. E. Meij, W. Weerkamp, and M. de Rijke. 2012. Adding semantics to microblog posts. In Proceedings of the fifth ACM international conference on Web search and data mining, WSDM ’12. R. Mihalcea and A. Csomai. 2007. Wikify!: linking documents to encyclopedic knowledge. In Proceedings of the sixteenth ACM conference on Conference on information and knowledge management, CIKM ’07. D. Milne and I.H. Witten. 2008a. Learning to link with wikipedia. In An effective, low-cost measure of semantic relatedness obtained from wikipedia links. the Wikipedia and AI Workshop of AAAI. D. Milne and I.H. Witten. 2008b. Learning to link with wikipedia. In Proceeding of the 17th ACM conference on Information and knowledge management, pages 509–518. ACM. Z. Niu, D. Ji, and C. Tan. 2005. Word sense disambiguation using label propagation based semisupervised learning. In Proceedings of the 43rd Annual Meeting of the Association for Computational Linguistics (ACL’05). M. Pennacchiotti and P. Pantel. 2009. Entity extraction via ensemble semantics. In Proc. EMNLP2009. W. Radford, B. Hachey, J. Nothman, M. Honnibal, and J. R. Curran. 2010. Cmcrc at tac10: Documentlevel entity linking with graph-based re-ranking. In Proc. TAC 2010 Workshop. L. Ratinov and D. Roth. 2012. Learning-based multisieve co-reference resolution with knowledge. In EMNLP. L. Ratinov, D. Roth, D. Downey, and M. Anderson. 2011. Local and global algorithms for disambiguation to wikipedia. In Proc. of the Annual Meeting of the Association of Computational Linguistics (ACL). W. Shen, J. Wang, P. Luo, and M. Wang. 2013. Linking named entities in tweets with knowledge base via user interest modeling. In Proceedings of the 19th ACM SIGKDD International Conference on Knowledge Discovery and Data Mining, KDD ’13. A. Smola and R. Kondor. 2003. Kernels and regularization on graphs. COLT. Y. Sun, R. Barber, M. Gupta, C. Aggarwal, and J. Han. 2011a. Co-author relationship prediction in heterogeneous bibliographic networks. In Proceedings of the 2011 International Conference on Advances in Social Networks Analysis and Mining, ASONAM ’11. Y. Sun, J. Han, X. Yan, P. Yu, and T. Wu. 2011b. Pathsim: Meta path-based top-k similarity search in heterogeneous information networks. PVLDB, 4(11). P. Talukdar and K. Crammer. 2009. New regularized algorithms for transductive learning. In Proceedings of the European Conference on Machine Learning and Knowledge Discovery in Databases: Part II, ECML PKDD ’09. D. Vitale, P. Ferragina, and U. Scaiella. 2012. Classification of short texts by deploying topical annotations. In ECIR, pages 376–387. K. Wang, C. Thrasher, and B. Hsu. 2011. Web scale nlp: A case study on url word breaking. In Proceedings of the 20th International Conference on World Wide Web, WWW ’11. W. Zhang, J. Su, C. Tan, and W. Wang. 2010. Entity linking leveraging automatically generated annotation. In Proceedings of the 23rd International Conference on Computational Linguistics (Coling 2010). W. Zhang, J. Su, and C. L. Tan. 2011. A wikipedia-lda model for entity linking with batch size changing. In Proc. IJCNLP2011. D. Zhou, O. Bousquet, T. Lal, J. Weston, and B. Sch¨olkopf. 2004. Learning with local and global consistency. In Advances in Neural Information Processing Systems 16. X. Zhu, Z. Ghahramani, and J. Lafferty. 2003. Semisupervised learning using gaussian fields and harmonic functions. In ICML. 390
Proceedings of the 52nd Annual Meeting of the Association for Computational Linguistics, pages 391–401, Baltimore, Maryland, USA, June 23-25 2014. c⃝2014 Association for Computational Linguistics Zero-shot Entity Extraction from Web Pages Panupong Pasupat Computer Science Department Stanford University [email protected] Percy Liang Computer Science Department Stanford University [email protected] Abstract In order to extract entities of a fine-grained category from semi-structured data in web pages, existing information extraction systems rely on seed examples or redundancy across multiple web pages. In this paper, we consider a new zero-shot learning task of extracting entities specified by a natural language query (in place of seeds) given only a single web page. Our approach defines a log-linear model over latent extraction predicates, which select lists of entities from the web page. The main challenge is to define features on widely varying candidate entity lists. We tackle this by abstracting list elements and using aggregate statistics to define features. Finally, we created a new dataset of diverse queries and web pages, and show that our system achieves significantly better accuracy than a natural baseline. 1 Introduction We consider the task of extracting entities of a given category (e.g., hiking trails) from web pages. Previous approaches either (i) assume that the same entities appear on multiple web pages, or (ii) require information such as seed examples (Etzioni et al., 2005; Wang and Cohen, 2009; Dalvi et al., 2012). These approaches work well for common categories but encounter data sparsity problems for more specific categories, such as the products of a small company or the dishes at a local restaurant. In this context, we may have only a single web page that contains the information we need and no seed examples. In this paper, we propose a novel task, zeroshot entity extraction, where the specification of the desired entities is provided as a natural language query. Given a query (e.g., hiking seeds Avalon Super Loop Hilton Area traditional answers Avalon Super Loop Hilton Area Wildlands Loop ... query hiking trails near Baltimore our system answers Avalon Super Loop Hilton Area Wildlands Loop ... web pages web pages web pages web page Figure 1: Entity extraction typically requires additional knowledge such as a small set of seed examples or depends on multiple web pages. In our setting, we take as input a natural language query and extract entities from a single web page. trails near Baltimore) and a web page (e.g., http://www.everytrail.com/best/ hiking-baltimore-maryland), the goal is to extract all entities corresponding to the query on that page (e.g., Avalon Super Loop, etc.). Figure 1 summarizes the task setup. The task introduces two challenges. Given a single web page to extract entities from, we can no longer rely on the redundancy of entities across multiple web pages. Furthermore, in the zero-shot learning paradigm (Larochelle et al., 2008), where entire categories might be unseen during training, the system must generalize to new queries and web pages without the additional aid of seed examples. To tackle these challenges, we cast the task as a structured prediction problem where the input is the query and the web page, and the output is a list of entities, mediated by a latent extraction predicate. 
To generalize across different inputs, we rely on two types of features: structural features, which look at the layout and placement of the entities being extracted; and denotation fea391 tures, which look at the list of entities as a whole and assess their linguistic coherence. When defining features on lists, one technical challenge is being robust to widely varying list sizes. We approach this challenge by defining features over a histogram of abstract tokens derived from the list elements. For evaluation, we created the OPENWEB dataset comprising natural language queries from the Google Suggest API and diverse web pages returned from web search. Despite the variety of queries and web pages, our system still achieves a test accuracy of 40.5% and an accuracy at 5 of 55.8%. 2 Problem statement We define the zero-shot entity extraction task as follows: let x be a natural language query (e.g., hiking trails near Baltimore), and w be a web page. Our goal is to construct a mapping from (x, w) to a list of entities y (e.g., [Avalon Super Loop, Patapsco Valley State Park, . . . ]) which are extracted from the web page. Ideally, we would want our data to be annotated with the correct entity lists y, but this would be very expensive to obtain. We instead define each training and test example as a triple (x, w, c), where the compatibility function c maps each y to c(y) ∈{0, 1} denoting the (approximate) correctness of the list y. In this paper, an entity list y is compatible (c(y) = 1) when the first, second, and last elements of y match the annotation; otherwise, it is incompatible (c(y) = 0). 2.1 Dataset To experiment with a diverse set of queries and web pages, we created a new dataset, OPENWEB, using web pages from Google search results.1 We use the method from Berant et al. (2013) to generate search queries by performing a breadth-first search over the query space. Specifically, we use the Google Suggest API, which takes a partial query (e.g., “list of movies”) and outputs several complete queries (e.g., “list of horror movies”). We start with seed partial queries “list of • ” where • is one or two initial letters. In each step, we call the Google Suggest API on the partial queries to obtain complete queries, 1The OPENWEB dataset and our code base are available for download at http://www-nlp.stanford.edu/ software/web-entity-extractor-ACL2014. Full query New partial queries list of X IN Y list of X where IN is a preposition list of X (list of [hotels]X in [Guam]Y ) list of X IN list of IN Y list of X CC Y list of X where CC is a conjunction list of X (list of [food]X and [drink]Y ) list of Y list of Y list of X w list of w (list of [good 2012]X [movies]w) list of w list of X Table 1: Rules for generating new partial queries from complete queries. (X and Y are sequences of words; w is a single word.) and then apply the transformation rules in Table 1 to generate more partial queries from complete queries. We run the procedure until we obtained 100K queries. Afterwards, we downloaded the top 2–3 Google search results of each query, sanitized the web pages, and randomly submitted 8000 query / web page pairs to Amazon Mechanical Turk (AMT). Each AMT worker must either mark the web page as irrelevant or extract the first, second, and last entities from the page. We only included examples where at least two AMT workers agreed on the answer. The resulting OPENWEB dataset consists of 2773 examples from 2269 distinct queries. 
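The compatibility function c described above can be stated in a few lines (a minimal sketch; exact string matching and the handling of very short lists are simplifying assumptions, since the annotation normalization is not specified here):

def compatible(predicted, annotation):
    """c(y) = 1 iff the first, second, and last entities of the predicted
    list match the annotated (first, second, last) triple."""
    first, second, last = annotation
    return (len(predicted) >= 2
            and predicted[0] == first
            and predicted[1] == second
            and predicted[-1] == last)

# Example, using the hiking-trails page of Figure 4
annotation = ("Avalon Super Loop", "Hilton Area", "Governer Bridge Natural Area")
prediction = ["Avalon Super Loop", "Hilton Area", "Avalon Loop",
              "Wildlands Loop", "Governer Bridge Natural Area"]
print(compatible(prediction, annotation))  # True

This is exactly the information the AMT workers provide (first, second, and last entities), which is what makes compatibility cheap to annotate at scale.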
Among these queries, there are 894 headwords ranging from common categories (e.g., movies, companies, characters) to more specific ones (e.g., enzymes, proverbs, headgears). The dataset contains web pages from 1438 web domains, of which 83% appear only once in our dataset. Figure 2 shows some queries and web pages from the OPENWEB dataset. Besides the wide range of queries, another main challenge of the dataset comes from the diverse data representation formats, including complex tables, grids, lists, headings, and paragraphs. 3 Approach Figure 3 shows the framework of our system. Given a query x and a web page w, the system generates a set Z(w) of extraction predicates z which can extract entities from semi-structured data in w. Section 3.1 describes extraction predicates in more detail. Afterwards, the system chooses z ∈Z(w) that maximizes the model probability pθ(z | x, w), and then executes z on 392 Queries airlines of italy natural causes of global warming lsu football coaches bf3 submachine guns badminton tournaments foods high in dha technical colleges in south carolina songs on glee season 5 singers who use auto tune san francisco radio stations actors from boston Examples (web page, query) airlines of italy natural causes of global warming lsu football coaches Figure 2: Some examples illustrating the diversity of queries and web pages from the OPENWEB dataset. x Generation w Z Model z Execution y hiking trails near Baltimore html head ... body ... /html[1]/body[1]/table[2]/tr/td[1] [Avalon Super Loop, Hilton Area, ...] Figure 3: An overview of our system. The system uses the input query x and web page w to produce a list of entities y via an extraction predicate z. w to get the list of entities y = JzKw. Section 3.2 describes the model and the training procedure, while Section 3.3 presents the features used in our model. 3.1 Extraction predicates We represent each web page w as a DOM tree, a common representation among wrapper induction and web information extraction systems (Sahuguet and Azavant, 1999; Liu et al., 2000; Crescenzi et al., 2001). The text of any DOM tree node that is shorter than 140 characters is a candidate entity. However, without further restrictions, the number of possible entity lists grows exponentially with the number of candidate entities. To make the problem tractable, we introduce an extraction predicate z as an intermediate representation for extracting entities from w. In our system, we let an extraction predicate be a simplified XML path (XPath) such as /html[1]/body[1]/table[2]/tr/td[1] Informally, an extraction predicate is a list of path entries. Each path entry is either a tag (e.g., tr), which selects all children with that tag; or a tag and an index i (e.g., td[1]), which selects only the ith child with that tag. The denotation y = JzKw of an extraction predicate z is the list of entities selected by the XPath. Figure 4 illustrates the execution of the extraction predicate above on a DOM tree. In the literature, many information extraction systems employ more versatile extraction predicates (Wang and Cohen, 2009; Fumarola et al., 2011). However, despite the simplicity, we are able to find an extraction predicate that extracts a compatible entity list in 69.7% of the development examples. In some examples, we cannot extract a compatible list due to unrecoverable issues such as incorrect annotation. Section 4.4 provides a detailed analysis of these issues. Additionally, extraction predicates can be easily extended to increase the coverage. 
For example, by introducing new index types [1:] (selects all but the first node) and [:-1] (selects all but the last node), we can increase the coverage to 76.2%. Extraction predicate generation. We generate a set Z(w) of extraction predicates for a given web page w as follows. For each node in the DOM tree, we find an extraction predicate which selects only that node, and then generalizes the predicate by removing any subset of the indices of the last k path entries. For instance, when k = 2, an extraction predicate ending in .../tr[5]/td[2] will be generalized to .../tr[5]/td[2], .../tr/td[2], .../tr[5]/td, and .../tr/td. In all experiments, we use k = 8, which gives at most 28 generalized predicates for each original predicate. This generalization step allows the system to select multiple nodes with the same structure (e.g., 393 DOM tree w html head body table tr td Home.. td Expl.. td Mobi.. td Crea.. h1 Hiki.. table tr th Name.. th Loca.. tr td Aval.. td 12.7.. ... tr td Gove.. td 3.1 .. Extraction predicate z /html[1]/body[1]/table[2]/tr/td[1] Rendered web page Home Explore Mobile Apps Create Trip Hiking near Baltimore, Maryland Name Length Avalon Super Loop 12.7 miles Hilton Area 7.8 miles Avalon Loop 9.4 miles Wildlands Loop 4.4 miles Mckeldin Area 16.7 miles Greenbury Point 3.7 miles Governer Bridge Natural Area 3.1 miles Figure 4: A simplified example of a DOM tree w and an extraction predicate z, which selects a list of entity strings y = JzKw from the page (highlighted in red). table cells from the same column or list items from the same section of the page). Out of all generalized extraction predicates, we retain the ones that extract at least two entities from w. Note that several extraction predicates may select the same list of nodes and thus produce the same list of entities. The procedure above gives a manageable number of extraction predicates. Among the development examples of the OPENWEB dataset, we generate an average of 8449 extraction predicates per example, which evaluate to an average of 1209 unique entity lists. 3.2 Modeling Given a query x and a web page w, we define a log-linear distribution over all extraction predicates z ∈Z(w) as pθ(z | x, w) ∝exp{θ⊤φ(x, w, z)}, (1) where θ ∈ Rd is the parameter vector and φ(x, w, z) is the feature vector, which will be defined in Section 3.3. To train the model, we find a parameter vector θ that maximizes the regularized log marginal probability of the compatibility function being satisfied. In other words, given training data D = {(x(i), w(i), c(i))}n i=1, we find θ that maximizes n X i=1 log pθ(c(i) = 1 | x(i), w(i)) −λ 2 ∥θ∥2 2 where pθ(c = 1 | x, w) = X z∈Z(w) pθ(z | x, w) · c(JzKw). Note that c(JzKw) = 1 when the entity list y = JzKw selected by z is compatible with the annotation; otherwise, c(JzKw) = 0. We use AdaGrad, an online gradient descent with an adaptive per-feature step size (Duchi et al., 2010), making 5 passes over the training data. We use λ = 0.01 obtained from cross-validation for all experiments. 3.3 Features To construct the log-linear model, we define a feature vector φ(x, w, z) for each query x, web page w, and extraction predicate z. The final feature vector is the concatenation of structural features φs(w, z), which consider the selected nodes in the DOM tree, and denotation features φd(x, y), which look at the extracted entities. We will use the query hiking trails near Baltimore and the web page in Figure 4 as a running example. Figure 5 lists some features extracted from the example. 
3.3.1 Recipe for defining features on lists One main focus of our work is finding good feature representations for a list of objects (DOM tree nodes for structural features and entity strings for denotation features). One approach is to define the feature vector of a list to be the sum of the feature vectors of individual elements. This is commonly done in structured prediction, where the elements are local configurations (e.g., rule applications in parsing). However, this approach raises a normalization issue when we have to compare and rank lists of drastically different sizes. As an alternative, we propose a recipe for generating features from a list as follows: 394 html head body table ... h1 table tr th th tr td td tr td td ... tr td td Structural feature Value Features on selected nodes: TAG-MAJORITY = td 1 INDEX-ENTROPY 0.0 Features on parent nodes: CHILDRENCOUNT-MAJORITY = 2 1 PARENT-SINGLE 1 INDEX-ENTROPY 1.0 HEADHOLE (The first node is skipped) 1 Features on grandparent nodes: PAGECOVERAGE 0.6 . . . . . . Selected entities Avalon Super Loop Hilton Area Avalon Loop Wildlands Loop Mckeldin Area Greenbury Point Governer Bridge Natural Area Denotation feature Value WORDSCOUNT-MEAN 2.42 PHRASESHAPE-MAJORITY = Aa Aa 1 PHRASESHAPE-MAJORITYRATIO 0.71 WORDSHAPE-MAJORITY = Aa 1 PHRASEPOS-MAJORITY = NNP NN 1 LASTWORD-ENTROPY 0.74 WORDPOS = NN (normalized count) 0.53 . . . . . . Figure 5: A small subset of features from the example hiking trails near Baltimore in Figure 4. A B C D E 1 2 0 1 0 0 1 2 histogram Abstraction Aggregation Entropy Majority MajorityRatio Single (Mean) (Variance) Figure 6: The recipe for defining features on a list of objects: (i) the abstraction step converts list elements into abstract tokens; (ii) the aggregation step defines features using the histogram of the abstract tokens. Step 1: Abstraction. We map each list element into an abstract token. For example, we can map each DOM tree node onto an integer equal to the number of children, or map each entity string onto its part-of-speech tag sequence. Step 2: Aggregation. We create a histogram of the abstract tokens and define features on properties of the histogram. Generally, we use ENTROPY (entropy normalized to the maximum value of 1), MAJORITY (mode), MAJORITYRATIO (percentage of tokens sharing the majority value), and SINGLE (whether all tokens are identical). For abstract tokens with finitely many possible values (e.g., part-of-speech), we also use the normalized histogram count of each possible value as a feature. And for real-valued abstract tokens, we also use the mean and the standard deviation. In the actual system, we convert real-valued features (entropy, histogram count, mean, and standard deviation) into indicator features by binning. Figure 6 summarizes the steps explained above. We use this recipe for defining both structural and denotation features, which are discussed below. 3.3.2 Structural features Although different web pages represent data in different formats, they still share some common hierarchical structures in the DOM tree. To capture this, we define structural features φs(w, z), which consider the properties of the selected nodes in the DOM tree, as follows: Features on selected nodes. We apply our recipe on the list of nodes in w selected by z using the following abstract tokens: • TAG, ID, CLASS, etc. 
(HTML attributes) • CHILDRENCOUNT and SIBLINGSCOUNT (number of children and siblings) • INDEX (position among its siblings) • PARENT (parent node; e.g., PARENT-SINGLE means that all nodes share the same parent.) Additionally, we define the following features based on the coverage of all selected nodes: • NOHOLE, HEADHOLE, etc. (node coverage in the same DOM tree level; e.g., HEADHOLE activates when the first sibling of the selected nodes is not selected.) 395 • PAGECOVERAGE (node coverage relative to the entire tree; we use depth-first traversal timestamps to estimate the fraction of nodes in the subtrees of the selected nodes.) Features on ancestor nodes. We also define the same feature set on the list of ancestors of the selected nodes in the DOM tree. In our experiments, we traverse up to 5 levels of ancestors and define features from the nodes in each level. 3.3.3 Denotation features Structural features are not powerful enough to distinguish between entity lists appearing in similar structures such as columns of the same table or fields of the same record. To solve this ambiguity, we introduce denotation features φd(x, y) which considers the coherence or appropriateness of the selected entity strings y = JzKw. We observe that the correct entities often share some linguistic statistics. For instance, entities in many categories (e.g., people and place names) usually have only 2–3 word tokens, most of which are proper nouns. On the other hand, random words on the web page tend to have more diverse lengths and part-of-speech tags. We apply our recipe on the list of selected entities using the following abstract tokens: • WORDSCOUNT (number of words) • PHRASESHAPE (abstract shape of the phrase; e.g., Barack Obama becomes Aa Aa) • WORDSHAPE (abstract shape of each word; the number of abstract tokens will be the total number of words over all selected entities) • FIRSTWORD and LASTWORD • PHRASEPOS and WORDPOS (part-ofspeech tags for whole phrases and individual words) 4 Experiments In this section we evaluate our system on the OPENWEB dataset. 4.1 Evaluation metrics Accuracy. As the main metric, we use a notion of accuracy based on compatibility; specifically, we define the accuracy as the fraction of examples where the system predicts a compatible entity list as defined in Section 2. We also report accuracy at 5, the fraction of examples where the top five predictions contain a compatible entity list. Path suffix pattern (multiset) Count {a, table, tbody, td[*], tr} 1792 {a, tbody, td[*], text, tr} 1591 {a, table[*], tbody, td[*], tr} 1325 {div, table, tbody, td[*], tr} 1259 {b, div, div, div, div[*]} 1156 {div[*], table, tbody, td[*], tr} 1059 {div, table[*], tbody, td[*], tr} 844 {table, tbody, td[*], text, tr} 828 {div[*], table[*], tbody, td[*], tr} 793 {a, table, tbody, td, tr} 743 Table 2: Top 10 path suffix patterns found by the baseline learner in the development data. Since we allow path entries to be permuted, each suffix pattern is represented by a multiset of path entries. The notation [*] denotes any path entry index. To see how our compatibility-based accuracy tracks exact correctness, we sampled 100 web pages which have at least one valid extraction predicate and manually annotated the full list of entities. We found that in 85% of the examples, the longest compatible list y is the correct list of entities, and many lists in the remaining 15% miss the correct list by only a few entities. Oracle. 
In some examples, our system cannot find any list of entities that is compatible with the gold annotation. The oracle score is the fraction of examples in which the system can find at least one compatible list. 4.2 Baseline As a baseline, we list the suffixes of the correct extraction predicates in the training data, and then sort the resulting suffix patterns by frequency. To improve generalization, we treat path entries with different indices (e.g., td[1] vs. td[2]) as equivalent and allow path entries to be permuted. Table 2 lists the top 10 suffix patterns from the development data. At test time, we choose an extraction predicate with the most frequent suffix pattern. The baseline should work considerably well if the web pages were relatively homogeneous. 4.3 Main results We held out 30% of the dataset as test data. For the results on development data, we report the average across 10 random 80-20 splits. Table 3 shows the results. The system gets an accuracy of 41.1% and 40.5% for the development and test data, respectively. If we consider the top 5 lists of entities, the accuracy increases to 58.4% on the development data and 55.8% on the test data. 396 Development data Test data Acc A@5 Acc A@5 Baseline 10.8 ± 1.3 25.6 ± 2.0 10.3 20.9 Our system 41.1 ± 3.4 58.4 ± 2.7 40.5 55.8 Oracle 68.7 ± 2.4 68.7 ± 2.4 66.6 66.6 Table 3: Main results on the OPENWEB dataset using the default set of features. (Acc = accuracy, A@5 = accuracy at 5) 4.4 Error analysis We now investigate the errors made by our system using the development data. We classify the errors into two types: (i) coverage errors, which are when the system cannot find any entity list satisfying the compatibility function; and (ii) ranking errors, which are when a compatible list of entities exists, but the system outputs an incompatible list. Tables 4 and 5 show the breakdown of coverage and ranking errors from an experiment on the development data. Analysis of coverage errors. From Table 4, about 36% of coverage errors happen when the extraction predicate for the correct entities also captures unrelated parts of the web page (Reason C1). For example, many Wikipedia articles have the See Also section that lists related articles in an unordered list (/ul/li/a), which causes a problem when the entities are also represented in the same format. Another main source of errors is the inconsistency in HTML tag usage (Reason C2). For instance, some web pages use <b> and <strong> tags for bold texts interchangeably, or switch between <b><a>...</a></b> and <a><b>...</b></a> across entities. We expect that this problem can be solved by normalizing the web page, using an alternative web page representation (Cohen et al., 2002; Wang and Cohen, 2009; Fumarola et al., 2011), or leveraging more expressive extraction predicates (Dalvi et al., 2011). One interesting source of errors is Reason C3, where we need to filter the selected entities to match the complex requirement in the query. For example, the query tech companies in China requires the system to select only the company names with China in the corresponding location column. To handle such queries, we need a deeper understanding of the relation between the linguistic structure of the query and the hierarchical structure of the web page. 
Tackling this error requires compositionality and is critical for generalizing to more complex queries.
Setting Acc A@5
All features 41.1 ± 3.4 58.4 ± 2.7
Oracle 68.7 ± 2.4 68.7 ± 2.4
(Section 4.5)
Structural features only 36.2 ± 1.9 54.5 ± 2.5
Denotation features only 19.8 ± 2.5 41.7 ± 2.7
(Section 4.6)
Structural + query-denotation 41.7 ± 2.5 58.1 ± 2.4
Query-denotation features only 25.0 ± 2.3 48.0 ± 2.7
Concat. a random web page + structural + denotation 19.3 ± 2.6 41.2 ± 2.3
Concat. a random web page + structural + query-denotation 29.2 ± 1.7 49.2 ± 2.2
(Section 4.7)
Add 1 seed entity 52.9 ± 3.0 66.5 ± 2.5
Table 6: System accuracy with different feature and input settings on the development data. (Acc = accuracy, A@5 = accuracy at 5)
Analysis of ranking errors. From Table 5, a large number of errors are attributed to the system selecting non-content elements such as navigation links and content headings (Reason R1). Feature analysis reveals that both structural and linguistic statistics of these non-content elements can be more coherent than those of the correct entities. We suspect that since many of our features try to capture the coherence of entities, the system sometimes erroneously favors the more homogeneous non-content parts of the page. One possible solution for disfavoring these parts is to add visual features that capture how the web page is rendered and favor the more salient parts of the page (Liu et al., 2003; Song et al., 2004; Zhu et al., 2005; Zheng et al., 2007). 4.5 Feature variations We now investigate the contribution of each feature type. The ablation results on the development set over 10 random splits are shown in Table 6. We observe that denotation features improve accuracy on top of structural features. Table 7 shows an example of an error that is eliminated by each feature type. Generally, if the entities are represented as records (e.g., rows of a table), then denotation features will help the system select the correct field from each record. On the other hand, structural features prevent the system from selecting random entities outside the main part of the page. Reason Short example Count C1 Answers and contextual elements are selected by the same extraction predicate. Select entries in See Also section in addition to the content because they are all list entries. 48 C2 HTML tag usage is inconsistent. The page uses both b and strong for headers. 16 C3 The query applies to only some sections of the matching entities. Need to select only companies in China from the table of all Asian companies. 20 C4 Answers are embedded in running text. Answers are in a comma-separated list. 13 C5 Text normalization issues. Selected Silent Night Lyrics instead of Silent Night. 19 C6 Other issues. Incorrect annotation. / Entities are permuted when the web page is rendered. / etc. 18 Total 134 Table 4: Breakdown of coverage errors from the development data. Reason Short example Count R1 Select non-content strings. Select navigation links, headers, footers, or sidebars. 25 R2 Select entities from a wrong field. Select book authors instead of book names. 22 R3 Select entities from the wrong section(s). For the query schools in Texas, select all schools on the page, or select the schools in Alabama instead. 19 R4 Also select headers or footers. Select the table header in addition to the answers. 7 R5 Select only entities with a particular formatting. From a list of answers, select only anchored (a) entities. 4 R6 Select headings instead of the contents or vice versa.
Select the categories of rums in h2 tags instead of the rum names in the tables. 2 R7 Other issues. Incorrect annotation. / Multiple sets of answers appear on the same page. / etc. 9 Total 88 Table 5: Breakdown of ranking errors from the development data. All features Structural only Denotation only The Sun CIRC: 2,279,492 Paperboy Australia Daily Mail CIRC: 1,821,684 Paperboy UK Daily Mirror CIRC: 1,032,144 Paperboy Home Page . . . ... ... Table 7: System outputs for the query UK newspapers with different feature sets. Without denotation features, the system selects the daily circulation of each newspaper instead of the newspaper names. And without structural features, the system selects the hidden navigation links from the top of the page. 4.6 Incorporating query information So far, note that all our features depend only on the extraction predicate z and not the input query x. Remarkably, we were still able to obtain reasonable results. One explanation is that since we obtained the web pages from a search engine, the most prominent entities on the web pages, such as entities in table cells in the middle of the page, are likely to be good independent of the query. However, different queries often denote entities with different linguistic properties. For example, queries mayors of Chicago and universities in Chicago will produce entities of different lengths, part-of-speech sequences, and word distributions. This suggests incorporating features that depend on the query. To explore the potential of query information, we conduct the following oracle experiment. We replace each denotation feature f(y) with a corresponding query-denotation feature (f(y), g(x)), where g(x) is the category of the query x. We manually classified all queries in our dataset into 7 categories: person, media title, location/organization, abtract entity, word/phrase, object name, and miscellaneous. Table 8 shows some examples where adding these query-denotation features improves the selected entity lists by favoring answers that are more suitable to the query category. However, Table 6 shows that these new features do not significantly improve the accuracy of our original system on the development data. We suspect that any gains offered by the querydenotation features are subsumed by the structural features. To test this hypothesis, we conducted two experiments, the results of which are shown in Table 6. First, we removed structural features and found that using query-denotation features improves accuracy significantly over using denotation features alone from 19.8% to 25.0%. Second, we created a modified dataset where the web page in each example is a concatenation of the original web page and an unrelated web page. On 398 Query euclid’s elements book titles soft drugs professional athletes with concussions Default features “Prematter”, “Book I.”, “Book II.”, “Book III.”, ... “Hard drugs”, “Soft drugs”, “Some drugs cannot be classified that way”, . . . “Pistons-Knicks Game Becomes Site of Incredible Dance Battle”, “Toronto Mayor Rob Ford Attends . . . ”, . . . Structural + QueryDenotation (category = media title) “Book I. The fundamentals ...”, “Book II. Geometric algebra”, ... (category = object name) “methamphetamine”, “psilocybin”, “caffeine” (category = person) “Mike Richter”, “Stu Grimson”, “Geoff Courtnall”, . . . Table 8: System outputs after changing denotation features into query-denotation features. this modified dataset, the prominent entities may not be the answers to the query. 
Here, querydenotation features improves accuracy over denotation features alone from 19.3% to 29.2%. 4.7 Comparison with other problem settings Since zero-shot entity extraction is a new task, we cannot directly compare our system with other systems. However, we can mimic the settings of other tasks. In one experiment, we augment each input query with a single seed entity (the second annotated entity in our experiments); this setting is suggestive of Wang and Cohen (2009). Table 6 shows that this augmentation increases accuracy from 41.1% to 52.9%, suggesting that our system can perform substantially better with a small amount of additional supervision. 5 Discussion Our work shares a base with the wrapper induction literature (Kushmerick, 1997) in that it leverages regularities of web page structures. However, wrapper induction usually focuses on a small set of web domains, where the web pages in each domain follow a fixed template (Muslea et al., 2001; Crescenzi et al., 2001; Cohen et al., 2002; Arasu and Garcia-Molina, 2003). Later work in web data extraction attempts to generalize across different web pages, but relies on either restricted data formats (Wong et al., 2009) or prior knowledge of web page structures with respect to the type of data to extract (Zhang et al., 2013). In our case, we only have the natural language query, which presents the more difficult problem of associating the entity class in the query (e.g., hiking trails) to concrete entities (e.g., Avalon Super Loop). In contrast to information extraction systems that extract homogeneous records from web pages (Liu et al., 2003; Zheng et al., 2009), our system must choose the correct field from each record and also identify the relevant part of the page based on the query. Another related line of work is information extraction from text, which relies on natural language patterns to extract categories and relations of entities. One classic example is Hearst patterns (Hearst, 1992; Etzioni et al., 2005), which can learn new entities and extraction patterns from seed examples. More recent approaches also leverage semi-structured data to obtain more robust extraction patterns (Mintz et al., 2009; Hoffmann et al., 2011; Surdeanu et al., 2012; Riedel et al., 2013). Although our work focuses on semistructured web pages rather than raw text, we use linguistic patterns of queries and entities as a signal for extracting appropriate answers. Additionally, our efforts can be viewed as building a lexicon on the fly. In recent years, there has been a drive to scale semantic parsing to large databases such as Freebase (Cai and Yates, 2013; Berant et al., 2013; Kwiatkowski et al., 2013). However, despite the best efforts of information extraction, such databases will always lag behind the open web. For example, Berant et al. (2013) found that less than 10% of naturally occurring questions are answerable by a simple Freebase query. By using the semi-structured data from the web as a knowledge base, we hope to increase fact coverage for semantic parsing. Finally, as pointed out in the error analysis, we need to filter or aggregate the selected entities for complex queries (e.g., tech companies in China for a web page with all Asian tech companies). In future work, we would like to explore the issue of compositionality in queries by aligning linguistic structures in natural language with the relative position of entities on web pages. 
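To make the kind of compositional filtering discussed above concrete, the sketch below shows one simple way such a post-filter could work. It is purely illustrative and not part of the system described in this paper; the record structure, field names, and the modifier_value argument are our own assumptions. The idea is to keep an extracted entity only if the query's modifier value (e.g., "China") appears in a sibling field of the same record.

```python
from typing import Dict, List

def filter_entities_by_modifier(records: List[Dict[str, str]],
                                entity_field: str,
                                modifier_value: str) -> List[str]:
    """Hypothetical post-filter: keep an entity only if some other field
    of its record mentions the query modifier (e.g., "China")."""
    kept = []
    for record in records:
        entity = record.get(entity_field, "")
        # Concatenate the sibling fields of the same record and look for the modifier.
        sibling_text = " ".join(v for k, v in record.items() if k != entity_field)
        if modifier_value.lower() in sibling_text.lower():
            kept.append(entity)
    return kept

# Toy usage for the query "tech companies in China" on a page listing Asian companies.
records = [
    {"name": "Lenovo", "location": "Beijing, China"},
    {"name": "Samsung", "location": "Seoul, South Korea"},
]
print(filter_entities_by_modifier(records, "name", "China"))  # ['Lenovo']
```

A full solution would of course need to align the query modifier with the right field automatically rather than scanning all sibling fields, which is exactly the compositionality question raised above.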
Acknowledgements We gratefully acknowledge the support of the Google Natural Language Understanding Focused Program. In addition, we would like to thank anonymous reviewers for their helpful comments. 399 References A. Arasu and H. Garcia-Molina. 2003. Extracting structured data from web pages. In ACM SIGMOD international conference on Management of data, pages 337–348. J. Berant, A. Chou, R. Frostig, and P. Liang. 2013. Semantic parsing on Freebase from question-answer pairs. In Empirical Methods in Natural Language Processing (EMNLP). Q. Cai and A. Yates. 2013. Large-scale semantic parsing via schema matching and lexicon extension. In Association for Computational Linguistics (ACL). W. W. Cohen, M. Hurst, and L. S. Jensen. 2002. A flexible learning system for wrapping tables and lists in HTML documents. In World Wide Web (WWW), pages 232–241. V. Crescenzi, G. Mecca, P. Merialdo, et al. 2001. Roadrunner: Towards automatic data extraction from large web sites. In VLDB, volume 1, pages 109–118. N. Dalvi, R. Kumar, and M. Soliman. 2011. Automatic wrappers for large scale web extraction. Proceedings of the VLDB Endowment, 4(4):219–230. B. Dalvi, W. Cohen, and J. Callan. 2012. Websets: Extracting sets of entities from the web using unsupervised information extraction. In Web Search and Data Mining (WSDM), pages 243–252. J. Duchi, E. Hazan, and Y. Singer. 2010. Adaptive subgradient methods for online learning and stochastic optimization. In Conference on Learning Theory (COLT). O. Etzioni, M. Cafarella, D. Downey, A. Popescu, T. Shaked, S. Soderland, D. S. Weld, and A. Yates. 2005. Unsupervised named-entity extraction from the web: An experimental study. Artificial Intelligence, 165(1):91–134. F. Fumarola, T. Weninger, R. Barber, D. Malerba, and J. Han. 2011. Extracting general lists from web documents: A hybrid approach. Modern Approaches in Applied Intelligence Springer. M. A. Hearst. 1992. Automatic acquisition of hyponyms from large text corpora. In Interational Conference on Computational linguistics, pages 539–545. R. Hoffmann, C. Zhang, X. Ling, L. S. Zettlemoyer, and D. S. Weld. 2011. Knowledge-based weak supervision for information extraction of overlapping relations. In Association for Computational Linguistics (ACL), pages 541–550. N. Kushmerick. 1997. Wrapper induction for information extraction. Ph.D. thesis, University of Washington. T. Kwiatkowski, E. Choi, Y. Artzi, and L. Zettlemoyer. 2013. Scaling semantic parsers with on-the-fly ontology matching. In Empirical Methods in Natural Language Processing (EMNLP). H. Larochelle, D. Erhan, and Y. Bengio. 2008. Zerodata learning of new tasks. In AAAI, volume 8, pages 646–651. L. Liu, C. Pu, and W. Han. 2000. XWRAP: An XMLenabled wrapper construction system for web information sources. In Data Engineering, 2000. Proceedings. 16th International Conference on, pages 611–621. B. Liu, R. Grossman, and Y. Zhai. 2003. Mining data records in web pages. In Proceedings of the ninth ACM SIGKDD international conference on Knowledge discovery and data mining, pages 601–606. M. Mintz, S. Bills, R. Snow, and D. Jurafsky. 2009. Distant supervision for relation extraction without labeled data. In Association for Computational Linguistics (ACL), pages 1003–1011. I. Muslea, S. Minton, and C. A. Knoblock. 2001. Hierarchical wrapper induction for semistructured information sources. Autonomous Agents and MultiAgent Systems, 4(1):93–114. S. Riedel, L. Yao, and A. McCallum. 2013. Relation extraction with matrix factorization and universal schemas. 
In North American Association for Computational Linguistics (NAACL). A. Sahuguet and F. Azavant. 1999. WysiWyg web wrapper factory (W4F). In WWW Conference. R. Song, H. Liu, J. Wen, and W. Ma. 2004. Learning block importance models for web pages. In World Wide Web (WWW), pages 203–211. M. Surdeanu, J. Tibshirani, R. Nallapati, and C. D. Manning. 2012. Multi-instance multi-label learning for relation extraction. In Empirical Methods in Natural Language Processing and Computational Natural Language Learning (EMNLP/CoNLL), pages 455–465. R. C. Wang and W. W. Cohen. 2009. Character-level analysis of semi-structured documents for set expansion. In Empirical Methods in Natural Language Processing (EMNLP), pages 1503–1512. Y. W. Wong, D. Widdows, T. Lokovic, and K. Nigam. 2009. Scalable attribute-value extraction from semistructured text. In IEEE International Conference on Data Mining Workshops, pages 302–307. Z. Zhang, K. Q. Zhu, H. Wang, and H. Li. 2013. Automatic extraction of top-k lists from the web. In International Conference on Data Engineering. S. Zheng, R. Song, and J. Wen. 2007. Templateindependent news extraction based on visual consistency. In AAAI, volume 7, pages 1507–1513. 400 S. Zheng, R. Song, J. Wen, and C. L. Giles. 2009. Efficient record-level wrapper induction. In Proceedings of the 18th ACM conference on Information and knowledge management, pages 47–56. J. Zhu, Z. Nie, J. Wen, B. Zhang, and W. Ma. 2005. 2D conditional random fields for web information extraction. In International Conference on Machine Learning (ICML), pages 1044–1051. 401
Proceedings of the 52nd Annual Meeting of the Association for Computational Linguistics, pages 402–412, Baltimore, Maryland, USA, June 23-25 2014. c⃝2014 Association for Computational Linguistics Incremental Joint Extraction of Entity Mentions and Relations Qi Li Heng Ji Computer Science Department Rensselaer Polytechnic Institute Troy, NY 12180, USA {liq7,jih}@rpi.edu Abstract We present an incremental joint framework to simultaneously extract entity mentions and relations using structured perceptron with efficient beam-search. A segment-based decoder based on the idea of semi-Markov chain is adopted to the new framework as opposed to traditional token-based tagging. In addition, by virtue of the inexact search, we developed a number of new and effective global features as soft constraints to capture the interdependency among entity mentions and relations. Experiments on Automatic Content Extraction (ACE)1 corpora demonstrate that our joint model significantly outperforms a strong pipelined baseline, which attains better performance than the best-reported end-to-end system. 1 Introduction The goal of end-to-end entity mention and relation extraction is to discover relational structures of entity mentions from unstructured texts. This problem has been artificially broken down into several components such as entity mention boundary identification, entity type classification and relation extraction. Although adopting such a pipelined approach would make a system comparatively easy to assemble, it has some limitations: First, it prohibits the interactions between components. Errors in the upstream components are propagated to the downstream components without any feedback. Second, it over-simplifies the problem as multiple local classification steps without modeling long-distance and cross-task dependencies. By contrast, we re-formulate this task as a structured prediction problem to reveal the linguistic and logical properties of the hidden 1http://www.itl.nist.gov/iad/mig//tests/ace structures. For example, in Figure 1, the output structure of each sentence can be interpreted as a graph in which entity mentions are nodes and relations are directed arcs with relation types. By jointly predicting the structures, we aim to address the aforementioned limitations by capturing: (i) The interactions between two tasks. For example, in Figure 1a, although it may be difficult for a mention extractor to predict “1,400” as a Person (PER) mention, the context word “employs” between “tire maker” and “1,400” strongly indicates an Employment-Organization (EMP-ORG) relation which must involve a PER mention. (ii) The global features of the hidden structure. Various entity mentions and relations share linguistic and logical constraints. For example, we can use the triangle feature in Figure 1b to ensure that the relations between “forces”, and each of the entity mentions “Somalia/GPE”, “Haiti/GPE” and “Kosovo/GPE”, are of the same type (Physical (PHYS), in this case). Following the above intuitions, we introduce a joint framework based on structured perceptron (Collins, 2002; Collins and Roark, 2004) with beam-search to extract entity mentions and relations simultaneously. With the benefit of inexact search, we are also able to use arbitrary global features with low cost. The underlying learning algorithm has been successfully applied to some other Natural Language Processing (NLP) tasks. 
Our task differs from dependency parsing (such as (Huang and Sagae, 2010)) in that relation structures are more flexible, where each node can have arbitrary relation arcs. Our previous work (Li et al., 2013) used perceptron model with token-based tagging to jointly extract event triggers and arguments. By contrast, we aim to address a more challenging task: identifying mention boundaries and types together with relations, which raises the issue that assignments for the same sentence with different mention boundaries are difficult to syn402 The tire maker | {z } ORG still employs 1,400 | {z } PER . EMP-ORG (a) Interactions between Two Tasks ... US |{z} GPE forces | {z } PER in Somalia | {z } GPE , Haiti |{z} GPE and Kosovo | {z } GPE . EMP-ORG PHYS conj and GPE PER GPE PHYS PHYS conj and (b) Example of Global Feature Figure 1: End-to-End Entity Mention and Relation Extraction. chronize during search. To tackle this problem, we adopt a segment-based decoding algorithm derived from (Sarawagi and Cohen, 2004; Zhang and Clark, 2008) based on the idea of semi-Markov chain (a.k.a, multiple-beam search algorithm). Most previous attempts on joint inference of entity mentions and relations (such as (Roth and Yih, 2004; Roth and Yih, 2007)) assumed that entity mention boundaries were given, and the classifiers of mentions and relations are separately learned. As a key difference, we incrementally extract entity mentions together with relations using a single model. The main contributions of this paper are as follows: 1. This is the first work to incrementally predict entity mentions and relations using a single joint model (Section 3). 2. Predicting mention boundaries in the joint framework raises the challenge of synchronizing different assignments in the same beam. We solve this problem by detecting entity mentions on segment-level instead of traditional tokenbased approaches (Section 3.1.1). 3. We design a set of novel global features based on soft constraints over the entire output graph structure with low cost (Section 4). Experimental results show that the proposed framework achieves better performance than pipelined approaches, and global features provide further significant gains. 2 Background 2.1 Task Definition The entity mention extraction and relation extraction tasks we are addressing are those of the Automatic Content Extraction (ACE) program2. ACE defined 7 main entity types including Person (PER), Organization (ORG), Geographical Entities (GPE), Location (LOC), 2http://www.nist.gov/speech/tests/ace Facility (FAC), Weapon (WEA) and Vehicle (VEH). The goal of relation extraction3 is to extract semantic relations of the targeted types between a pair of entity mentions which appear in the same sentence. ACE’04 defined 7 main relation types: Physical (PHYS), PersonSocial (PER-SOC), Employment-Organization (EMP-ORG), Agent-Artifact (ART), PER/ORG Affiliation (Other-AFF), GPE-Affiliation (GPE-AFF) and Discourse (DISC). ACE’05 kept PER-SOC, ART and GPE-AFF, split PHYS into PHYS and a new relation type Part-Whole, removed DISC, and merged EMP-ORG and Other-AFF into EMP-ORG. Throughout this paper, we use ⊥to denote nonentity or non-relation classes. We consider relation asymmetric. The same relation type with opposite directions is considered to be two classes, which we refer to as directed relation types. 
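As a small illustrative sketch of the label space defined in Section 2.1 (our own naming, not the authors' code), the directed relation alphabet R used later in decoding can be enumerated by pairing each ACE'05 relation type with both argument orders, with a reserved symbol standing in for ⊥:

```python
# Entity and directed-relation label alphabets, following Section 2.1.
ENTITY_TYPES = ["PER", "ORG", "GPE", "LOC", "FAC", "WEA", "VEH"]
ACE05_RELATIONS = ["PHYS", "PART-WHOLE", "PER-SOC", "EMP-ORG", "ART", "GPE-AFF"]
NONE = "NONE"  # stands in for the non-entity / non-relation class (⊥)

# The same relation type with opposite argument orders counts as two classes.
DIRECTED_RELATIONS = [f"{r}:{d}" for r in ACE05_RELATIONS
                      for d in ("arg1->arg2", "arg2->arg1")]

ENTITY_ALPHABET = ENTITY_TYPES + [NONE]           # T with ⊥
RELATION_ALPHABET = DIRECTED_RELATIONS + [NONE]   # R with ⊥
print(len(RELATION_ALPHABET))  # 13 candidate labels per ordered mention pair
```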
Most previous research on relation extraction assumed that entity mentions were given In this work we aim to address the problem of end-to-end entity mention and relation extraction from raw texts. 2.2 Baseline System In order to develop a baseline system representing state-of-the-art pipelined approaches, we trained a linear-chain Conditional Random Fields model (Lafferty et al., 2001) for entity mention extraction and a Maximum Entropy model for relation extraction. Entity Mention Extraction Model We re-cast the problem of entity mention extraction as a sequential token tagging task as in the state-of-theart system (Florian et al., 2006). We applied the BILOU scheme, where each tag means a token is the Beginning, Inside, Last, Outside, and Unit of an entity mention, respectively. Most of our features are similar to the work of (Florian et al., 3Throughout this paper we refer to relation mention as relation since we do not consider relation mention coreference. 403 2004; Florian et al., 2006) except that we do not have their gazetteers and outputs from other mention detection systems as features. Our additional features are as follows: • Governor word of the current token based on dependency parsing (Marneffe et al., 2006). • Prefix of each word in Brown clusters learned from TDT5 corpus (Sun et al., 2011). Relation Extraction Model Given a sentence with entity mention annotations, the goal of baseline relation extraction is to classify each mention pair into one of the pre-defined relation types with direction or ⊥(non-relation). Most of our relation extraction features are based on the previous work of (Zhou et al., 2005) and (Kambhatla, 2004). We designed the following additional features: • The label sequence of phrases covering the two mentions. For example, for the sentence in Figure 1a, the sequence is “NP VP NP”. We also augment it by head words of each phrase. • Four syntactico - semantic patterns described in (Chan and Roth, 2010). • We replicated each lexical feature by replacing each word with its Brown cluster. 3 Algorithm 3.1 The Model Our goal is to predict the hidden structure of each sentence based on arbitrary features and constraints. Let x ∈X be an input sentence, y′ ∈Y be a candidate structure, and f(x, y′) be the feature vector that characterizes the entire structure. We use the following linear model to predict the most probable structure ˆy for x: ˆy = argmax y′∈Y(x) f(x, y′) · w (1) where the score of each candidate assignment is defined as the inner product of the feature vector f(x, y′) and feature weights w. Since the structures contain both entity mentions relations, and we also aim to exploit global features. There does not exist a polynomial-time algorithm to find the best structure. In practice we apply beam-search to expand partial configurations for the input sentence incrementally to find the structure with the highest score. 3.1.1 Joint Decoding Algorithm One main challenge to search for entity mentions and relations incrementally is the alignment of different assignments. Assignments for the same sentence can have different numbers of entity mentions and relation arcs. The entity mention extraction task is often re-cast as a token-level sequential labeling problem with BIO or BILOU scheme (Ratinov and Roth, 2009; Florian et al., 2006). A naive solution to our task is to adopt this strategy by treating each token as a state. However, different assignments for the same sentence can have various mention boundaries. 
It is unfair to compare the model scores of a partial mention and a complete mention. It is also difficult to synchronize the search process of relations. For example, consider the two hypotheses ending at “York” for the same sentence: AllanU-PER from? NewB-ORG YorkI-ORG Stock Exchange AllanU-PER from? NewB-GPE YorkL-GPE Stock Exchange PHYS PHYS The model would bias towards the incorrect assignment “New/B-GPE York/L-GPE” since it can have more informative features as a complete mention (e.g., a binary feature indicating if the entire mention appears in a GPE gazetter). Furthermore, the predictions of the two PHYS relations cannot be synchronized since “New/B-FAC York/I-FAC” is not yet a complete mention. To tackle these problems, we employ the idea of semi-Markov chain (Sarawagi and Cohen, 2004), in which each state corresponds to a segment of the input sequence. They presented a variant of Viterbi algorithm for exact inference in semi-Markov chain. We relax the max operation by beam-search, resulting in a segment-based decoder similar to the multiple-beam algorithm in (Zhang and Clark, 2008). Let ˆd be the upper bound of entity mention length. The k-best partial assignments ending at the i-th token can be calculated as: B[i] = k-BEST y′∈{y[1..i]|y[1:i−d]∈B[i−d], d=1... ˆd} f(x, y′) · w where y[1:i−d] stands for a partial configuration ending at the (i-d)-th token, and y[i−d+1,i] corresponds to the structure of a new segment (i.e., subsequence of x) x[i−d+1,i]. Our joint decoding algorithm is shown in Figure 2. For each token index i, it maintains a beam for the partial assignments whose last segments end at the i-th token. There are two types of actions during the search: 404 Input: input sentence x = (x1, x2, ..., xm). k: beam size. T ∪{⊥}: entity mention type alphabet. R ∪{⊥}: directed relation type alphabet.4 dt: max length of type-t segment, t ∈T ∪{⊥}. Output: best configuration ˆy for x 1 initialize m empty beams B[1..m] 2 for i ←1...m do 3 for t ∈T ∪{⊥} do 4 for d ←1...dt, y′ ∈B[i −d] do 5 k ←i −d + 1 6 B[i] ←B[i] ∪APPEND(y′, t, k, i) 7 B[i] ←k-BEST(B[i]) 8 for j ←(i −1)...1 do 9 buf ←∅ 10 for y′ ∈B[i] do 11 if HASPAIR(y′, i, j) then 12 for r ∈R ∪{⊥} do 13 buf ←buf ∪LINK(y′, r, i, j) 14 else 15 buf ←buf ∪{y′} 16 B[i] ←k-BEST(buf) 17 return B[m][0] Figure 2: Joint Decoding for Entity Mentions and Relations. HASPAIR(y′, i, j) checks if there are two entity mentions in y′ that end at token xi and token xj, respectively. APPEND(y′, t, k, i) appends y′ with a type-t segment spanning from xk to xi. Similarly LINK(y′, r, i, j) augments y′ by assigning a directed relation r to the pair of entity mentions ending at xi and xj respectively. 1. APPEND (Lines 3-7). First, the algorithm enumerates all possible segments (i.e., subsequences) of x ending at the current token with various entity types. A special type of segment is a single token with non-entity label (⊥). Each segment is then appended to existing partial assignments in one of the previous beams to form new assignments. Finally the top k results are recorded in the current beam. 2. LINK (Lines 8-16). After each step of APPEND, the algorithm looks backward to link the newly identified entity mentions and previous ones (if any) with relation arcs. At the j-th sub-step, it only considers the previous mention ending at the j-th previous token. Therefore different 4The same relation type with opposite directions is considered to be two classes in R. configurations are guaranteed to have the same number of sub-steps. 
Finally, all assignments are re-ranked with new relation information. There are m APPEND actions, each is followed by at most (i−1) LINK actions (line 8). Therefore the worst-case time complexity is O( ˆd·k ·m2), where ˆd is the upper bound of segment length. 3.1.2 Example Demonstration th e tire maker still emp loy s 1 ,4 00 . ? PER ORG ... x y EMP-ORG Figure 3: Example of decoding steps. x-axis and y-axis represent the input sentence and entity types, respectively. The rectangles denote segments with entity types, among which the shaded ones are three competing hypotheses ending at “1,400”. The solid lines and arrows indicate correct APPEND and LINK actions respectively, while the dashed indicate incorrect actions. Here we demonstrate a simple but concrete example by considering again the sentence described in Figure 1a. Suppose we are at the token “1,400”. At this point we can propose multiple entity mentions with various lengths. Assuming “1,400/PER”, “1,400/⊥” and “(employs 1,400)/PER” are possible assignments, the algorithm appends these new segments to the partial assignments in the beams of the tokens “employs” and “still”, respectively. Figure 3 illustrates this process. For simplicity, only a small part of the search space is presented. The algorithm then links the newly identified mentions to the previous ones in the same configuration. In this example, the only previous mention is “(tire maker)/ORG”. Finally, “1,400/PER” will be preferred by the model since there are more indicative context features for EMP-ORG relation between “(tire maker)/PER” and “1,400/PER”. 405 3.2 Structured-Perceptron Learning To estimate the feature weights, we use structured perceptron (Collins, 2002), an extension of the standard perceptron for structured prediction, as the learning framework. Huang et al. (2012) proved the convergency of structured perceptron when inexact search is applied with violation-fixing update methods such as earlyupdate (Collins and Roark, 2004). Since we use beam-search in this work, we apply early-update. In addition, we use averaged parameters to reduce overfitting as in (Collins, 2002). Figure 4 shows the pseudocode for structured perceptron training with early-update. Here BEAMSEARCH is identical to the decoding algorithm described in Figure 2 except that if y′, the prefix of the gold standard y, falls out of the beam after each execution of the k-BEST function (line 7 and 16), then the top assignment z and y′ are returned for parameter update. It is worth noting that this can only happen if the gold-standard has a segment ending at the current token. For instance, in the example of Figure 1a, B[2] cannot trigger any early-update since the gold standard does not contain any segment ending at the second token. Input: training set D = {(x(j), y(j))}N i=1, maximum iteration number T Output: model parameters w 1 initialize w ←0 2 for t ←1...T do 3 foreach (x, y) ∈D do 4 (x, y′, z) ←BEAMSEARCH (x, y, w) 5 if z ̸= y then 6 w ←w + f(x, y′) −f(x, z) 7 return w Figure 4: Perceptron algorithm with beamsearch and early-update. y′ is the prefix of the gold-standard and z is the top assignment. 3.3 Entity Type Constraints Entity type constraints have been shown effective in predicting relations (Roth and Yih, 2007; Chan and Roth, 2010). We automatically collect a mapping table of permissible entity types for each relation type from our training data. 
Instead of applying the constraints in post-processing inference, we prune the branches that violate the type constraints during search. This type of pruning can reduce search space as well as make the input for parameter update less noisy. In our experiments, only 7 relation mentions (0.5%) in the dev set and 5 relation mentions (0.3%) in the test set violate the constraints collected from the training data. 4 Features An advantage of our framework is that we can easily exploit arbitrary features across the two tasks. This section describes the local features (Section 4.1) and global features (Section 4.2) we developed in this work. 4.1 Local Features We design segment-based features to directly evaluate the properties of an entity mention instead of the individual tokens it contains. Let ˆy be a predicted structure of a sentence x. The entity segments of ˆy can be expressed as a list of triples (e1, ..., em), where each segment ei = ⟨ui, vi, ti⟩ is a triple of start index ui, end index vi, and entity type ti. The following is an example of segmentbased feature: f001(x, ˆy, i) =      1 if x[ˆy.ui,ˆy.vi] = tire maker ˆy.t(i−1), ˆy.ti = ⊥,ORG 0 otherwise This feature is triggered if the labels of the (i−1)th and the i-th segments are “⊥,ORG”, and the text of the i-th segment is “tire maker”. Our segmentbased features are described as follows: Gazetteer features Entity type of each segment based on matching a number of gazetteers including persons, countries, cities and organizations. Case features Whether a segment’s words are initial-capitalized, all lower cased, or mixture. Contextual features Unigrams and bigrams of the text and part-of-speech tags in a segment’s contextual window of size 2. Parsing-based features Features derived from constituent parsing trees, including (a) the phrase type of the lowest common ancestor of the tokens contained in the segment, (b) the depth of the lowest common ancestor, (c) a binary feature indicating if the segment is a base phrase or a suffix of a base phrase, and (d) the head words of the segment and its neighbor phrases. In addition, we convert each triple ⟨ui, vi, ti⟩to BILOU tags for the tokens it contains to implement token-based features. The token-based men406 tion features and local relation features are identical to those of our pipelined system (Section 2.2). 4.2 Global Entity Mention Features By virtue of the efficient inexact search, we are able to use arbitrary features from the entire structure of ˆy to capture long-distance dependencies. The following features between related entity mentions are extracted once a new segment is appended during decoding. Coreference consistency Coreferential entity mentions should be assigned the same entity type. We determine high-recall coreference links between two segments in the same sentence using some simple heuristic rules: • Two segments exactly or partially string match. • A pronoun (e.g., “their”,“it”) refers to previous entity mentions. For example, in “they have no insurance on their cars”, “they” and “their” should have the same entity type. • A relative pronoun (e.g., “which”,“that”, and “who”) refers to the noun phrase it modifies in the parsing tree. For example, in “the starting kicker is nikita kargalskiy, who may be 5,000 miles from his hometown”, “nikita kargalskiy” and “who” should both be labeled as persons. Then we encode a global feature to check whether two coreferential segments share the same entity type. 
This feature is particularly effective for pronouns because their contexts alone are often not informative. Neighbor coherence Neighboring entity mentions tend to have coherent entity types. For example, in “Barbara Starr was reporting from the Pentagon”, “Barbara Starr” and “Pentagon” are connected by a dependency link prep from and thus they are unlikely to be a pair of PER mentions. Two types of neighbor are considered: (i) the first entity mention before the current segment, and (ii) the segment which is connected by a single word or a dependency link with the current segment. We take the entity types of the two segments and the linkage together as a global feature. For instance, “PER prep from PER” is a feature for the above example when “Barbara Starr” and “Pentagon” are both labeled as PER mentions. Part-of-whole consistency If an entity mention is semantically part of another mention (connected by a prep of dependency link), they should be assigned the same entity type. For example, in “some of Iraq’s exiles”, “some” and “exiles” are both PER mentions; in “one of the town’s two meat-packing plants”, “one” and “plants” are both FAC mentions; in “the rest of America”, “rest” and “America” are both GPE mentions. 4.3 Global Relation Features Relation arcs can also share inter-dependencies or obey soft constraints. We extract the following relation-centric global features when a new relation hypothesis is made during decoding. Role coherence If an entity mention is involved in multiple relations with the same type, then its roles should be coherent. For example, a PER mention is unlikely to have more than one employer. However, a GPE mention can be a physical location for multiple entity mentions. We combine the relation type and the entity mention’s argument roles as a global feature, as shown in Figure 5a. Triangle constraint Multiple entity mentions are unlikely to be fully connected with the same relation type. We use a negative feature to penalize any configuration that contains this type of structure. An example is shown in Figure 5b. Inter-dependent compatibility If two entity mentions are connected by a dependency link, they tend to have compatible relations with other entities. For example, in Figure 5c, the conj and dependency link between “Somalia” and “Kosovo” indicates they may share the same relation type with the third entity mention “forces”. Neighbor coherence Similar to the entity mention neighbor coherence feature, we also combine the types of two neighbor relations in the same sentence as a bigram feature. 5 Experiments 5.1 Data and Scoring Metric Most previous work on ACE relation extraction has reported results on ACE’04 data set. As we will show later in our experiments, ACE’05 made significant improvement on both relation type definition and annotation quality. Therefore we present the overall performance on ACE’05 data. We removed two small subsets in informal genres - cts and un, and then randomly split the remaining 511 documents into 3 parts: 351 for training, 80 for development, and the rest 80 for blind test. In order to compare with state-of-the-art we also performed the same 5-fold cross-validation on bnews and nwire subsets of ACE’04 corpus as in previous work. The statistics of these data sets 407 (GPE Somalia) (PER forces) (GPE US) EMP-ORG EMP-ORG ⇥ (a) (GPE Somalia) (PER forces) (GPE Haiti) PHYS PHYS PHYS ⇥ (b) (GPE Somalia) (PER forces) (GPE Kosovo) PHYS PHYS conj and (c) Figure 5: Examples of Global Relation Features. 
0 5 10 15 20 25 # of training iterations 0.70 0.72 0.74 0.76 0.78 0.80 F_1 score mention local+global mention local (a) Entity Mention Performance 0 5 10 15 20 25 # of training iterations 0.30 0.35 0.40 0.45 0.50 0.55 F_1 score relation local+global relation local (b) Relation Performance Figure 6: Learning Curves on Development Set. are summarized in Table 1. We ran the Stanford CoreNLP toolkit5 to automatically recover the true cases for lowercased documents. Data Set # sentences # mentions # relations ACE’05 Train 7,273 26,470 4,779 Dev 1,765 6,421 1,179 Test 1,535 5,476 1,147 ACE’04 6,789 22,740 4,368 Table 1: Data Sets. We use the standard F1 measure to evaluate the performance of entity mention extraction and relation extraction. An entity mention is considered correct if its entity type is correct and the offsets of its mention head are correct. A relation mention is considered correct if its relation type is correct, and the head offsets of two entity mention arguments are both correct. As in Chan and 5http://nlp.stanford.edu/software/corenlp.shtml Roth (2011), we excluded the DISC relation type, and removed relations in the system output which are implicitly correct via coreference links for fair comparison. Furthermore, we combine these two criteria to evaluate the performance of end-to-end entity mention and relation extraction. 5.2 Development Results In general a larger beam size can yield better performance but increase training and decoding time. As a tradeoff, we set the beam size as 8 throughout the experiments. Figure 6 shows the learning curves on the development set, and compares the performance with and without global features. From these figures we can clearly see that global features consistently improve the extraction performance of both tasks. We set the number of training iterations as 22 based on these curves. 5.3 Overall Performance Table 2 shows the overall performance of various methods on the ACE’05 test data. We compare our proposed method (Joint w/ Global) with the pipelined system (Pipeline), the joint model with only local features (Joint w/ Local), and two human annotators who annotated 73 documents in ACE’05 corpus. We can see that our approach significantly outperforms the pipelined approach for both tasks. As a real example, for the partial sentence “a marcher from Florida” from the test data, the pipelined approach failed to identify “marcher” as a PER mention, and thus missed the GEN-AFF relation between “marcher” and “Florida”. Our joint model correctly identified the entity mentions and their relation. Figure 7 shows the details when the joint model is applied to this sentence. At the token “marcher”, the top hypothesis in the beam is “⟨⊥, ⊥⟩”, while the correct one is ranked second best. After the decoder processes the token “Florida”, the correct hypothesis is promoted to the top in the beam by the Neighbor Coherence features for PER-GPE pair. Furthermore, after 408 Model Entity Mention (%) Relation (%) Entity Mention + Relation (%) Score P R F1 P R F1 P R F1 Pipeline 83.2 73.6 78.1 67.5 39.4 49.8 65.1 38.1 48.0 Joint w/ Local 84.5 76.0 80.0 68.4 40.1 50.6 65.3 38.3 48.3 Joint w/ Global 85.2 76.9 80.8 68.9 41.9 52.1 65.4 39.8 49.5 Annotator 1 91.8 89.9 90.9 71.9 69.0 70.4 69.5 66.7 68.1 Annotator 2 88.7 88.3 88.5 65.2 63.6 64.4 61.8 60.2 61.0 Inter-Agreement 85.8 87.3 86.5 55.4 54.7 55.0 52.3 51.6 51.9 Table 2: Overall performance on ACE’05 corpus. step s h y p o th eses ran k (a) ha? marcher?i 1 ha? marcherPERi 2 (b ) ha? marcher? from?i 1 ha? 
marcherPER from?i 4 (c) ha? marcherPER from? FloridaGPEi 1 ha? marcher? from? FloridaGPEi 2 (d ) ha? marcherPER from? FloridaGPEi GEN-AFF 1 ha? marcher? from? FloridaGPEi 4 Figure 7: Two competing hypotheses for “a marcher from Florida” during decoding. linking the two mentions by GEN-AFF relation, the ranking of the incorrect hypothesis “⟨⊥, ⊥⟩” is dropped to the 4-th place in the beam, resulting in a large margin from the correct hypothesis. The human F1 score on end-to-end relation extraction is only about 70%, which indicates it is a very challenging task. Furthermore, the F1 score of the inter-annotator agreement is 51.9%, which is only 2.4% above that of our proposed method. Compared to human annotators, the bottleneck of automatic approaches is the low recall of relation extraction. Among the 631 remaining missing relations, 318 (50.3%) of them were caused by missing entity mention arguments. A lot of nominal mention heads rarely appear in the training data, such as persons (“supremo”, “shepherd”, “oligarchs”, “rich”), geo-political entity mentions (“stateside”), facilities (“roadblocks”, “cells”), weapons (“sim lant”, “nukes”) and vehicles (“prams”). In addition, relations are often implicitly expressed in a variety of forms. Some examples are as follows: • “Rice has been chosen by President Bush to become the new Secretary of State” indicates “Rice” has a PER-SOC relation with “Bush”. • “U.S. troops are now knocking on the door of Baghdad” indicates “troops” has a PHYS relation with “Baghdad”. • “Russia and France sent planes to Baghdad” indicates “Russia” and “France” are involved in an ART relation with “planes” as owners. In addition to contextual features, deeper semantic knowledge is required to capture such implicit semantic relations. 5.4 Comparison with State-of-the-art Table 3 compares the performance on ACE’04 corpus. For entity mention extraction, our joint model achieved 79.7% on 5-fold cross-validation, which is comparable with the best F1 score 79.2% reported by (Florian et al., 2006) on singlefold. However, Florian et al. (2006) used some gazetteers and the output of other Information Extraction (IE) models as additional features, which provided significant gains ((Florian et al., 2004)). Since these gazetteers, additional data sets and external IE models are all not publicly available, it is not fair to directly compare our joint model with their results. For end-to-end entity mention and relation extraction, both the joint approach and the pipelined baseline outperform the best results reported by (Chan and Roth, 2011) under the same setting. 6 Related Work Entity mention extraction (e.g., (Florian et al., 2004; Florian et al., 2006; Florian et al., 2010; Zitouni and Florian, 2008; Ohta et al., 2012)) and relation extraction (e.g., (Reichartz et al., 2009; Sun et al., 2011; Jiang and Zhai, 2007; Bunescu and Mooney, 2005; Zhao and Grishman, 2005; Culotta and Sorensen, 2004; Zhou et al., 2007; Qian and Zhou, 2010; Qian et al., 2008; Chan and Roth, 2011; Plank and Moschitti, 2013)) have drawn much attention in recent years but were 409 Model Entity Mention (%) Relation (%) Entity Mention + Relation (%) Score P R F1 P R F1 P R F1 Chan and Roth (2011) 42.9 38.9 40.8 Pipeline 81.5 74.1 77.6 62.5 36.4 46.0 58.4 33.9 42.9 Joint w/ Local 82.7 75.2 78.8 64.2 37.0 46.9 60.3 34.8 44.1 Joint w/ Global 83.5 76.2 79.7 64.7 38.5 48.3 60.8 36.1 45.3 Table 3: 5-fold cross-validation on ACE’04 corpus. 
Bolded scores indicate highly statistical significant improvement as measured by paired t-test (p < 0.01) usually studied separately. Most relation extraction work assumed that entity mention boundaries and/or types were given. Chan and Roth (2011) reported the best results using predicted entity mentions. Some previous work used relations and entity mentions to enhance each other in joint inference frameworks, including re-ranking (Ji and Grishman, 2005), Integer Linear Programming (ILP) (Roth and Yih, 2004; Roth and Yih, 2007; Yang and Cardie, 2013), and Card-pyramid Parsing (Kate and Mooney, 2010). All these work noted the advantage of exploiting crosscomponent interactions and richer knowledge. However, they relied on models separately learned for each subtask. As a key difference, our approach jointly extracts entity mentions and relations using a single model, in which arbitrary soft constraints can be easily incorporated. Some other work applied probabilistic graphical models for joint extraction (e.g., (Singh et al., 2013; Yu and Lam, 2010)). By contrast, our work employs an efficient joint search algorithm without modeling joint distribution over numerous variables, therefore it is more flexible and computationally simpler. In addition, (Singh et al., 2013) used goldstandard mention boundaries. Our previous work (Li et al., 2013) used structured perceptron with token-based decoder to jointly predict event triggers and arguments based on the assumption that entity mentions and other argument candidates are given as part of the input. In this paper, we solve a more challenging problem: take raw texts as input and identify the boundaries, types of entity mentions and relations all together in a single model. Sarawagi and Cohen (2004) proposed a segment-based CRFs model for name tagging. Zhang and Clark (2008) used a segment-based decoder for word segmentation and pos tagging. We extended the similar idea to our end-to-end task by incrementally predicting relations along with entity mention segments. 7 Conclusions and Future Work In this paper we introduced a new architecture for more powerful end-to-end entity mention and relation extraction. For the first time, we addressed this challenging task by an incremental beam-search algorithm in conjunction with structured perceptron. While detecting mention boundaries jointly with other components raises the challenge of synchronizing multiple assignments in the same beam, a simple yet effective segmentbased decoder is adopted to solve this problem. More importantly, we exploited a set of global features based on linguistic and logical properties of the two tasks to predict more coherent structures. Experiments demonstrated our approach significantly outperformed pipelined approaches for both tasks and dramatically advanced state-of-the-art. In future work, we plan to explore more soft and hard constraints to reduce search space as well as improve accuracy. In addition, we aim to incorporate other IE components such as event extraction into the joint model. Acknowledgments We thank the three anonymous reviewers for their insightful comments. This work was supported by the U.S. Army Research Laboratory under Cooperative Agreement No. W911NF-09-2-0053 (NSCTA), U.S. NSF CAREER Award under Grant IIS-0953149, U.S. DARPA Award No. FA875013-2-0041 in the Deep Exploration and Filtering of Text (DEFT) Program, IBM Faculty Award, Google Research Award and RPI faculty start-up grant. 
The views and conclusions contained in this document are those of the authors and should not be interpreted as representing the official policies, either expressed or implied, of the U.S. Government. The U.S. Government is authorized to reproduce and distribute reprints for Government purposes notwithstanding any copyright notation here on. 410 References Razvan C. Bunescu and Raymond J. Mooney. 2005. A shortest path dependency kernel for relation extraction. In Proc. HLT/EMNLP, pages 724–731. Yee Seng Chan and Dan Roth. 2010. Exploiting background knowledge for relation extraction. In Proc. COLING, pages 152–160. Yee Seng Chan and Dan Roth. 2011. Exploiting syntactico-semantic structures for relation extraction. In Proc. ACL, pages 551–560. Michael Collins and Brian Roark. 2004. Incremental parsing with the perceptron algorithm. In Proc. ACL, pages 111–118. Michael Collins. 2002. Discriminative training methods for hidden markov models: Theory and experiments with perceptron algorithms. In Proc. EMNLP, pages 1–8. Aron Culotta and Jeffrey Sorensen. 2004. Dependency tree kernels for relation extraction. In Proc. ACL, pages 423–429. Radu Florian, Hany Hassan, Abraham Ittycheriah, Hongyan Jing, Nanda Kambhatla, Xiaoqiang Luo, Nicolas Nicolov, and Salim Roukos. 2004. A statistical model for multilingual entity detection and tracking. In Proc. HLT-NAACL, pages 1–8. Radu Florian, Hongyan Jing, Nanda Kambhatla, and Imed Zitouni. 2006. Factorizing complex models: A case study in mention detection. In Proc. ACL. Radu Florian, John F. Pitrelli, Salim Roukos, and Imed Zitouni. 2010. Improving mention detection robustness to noisy input. In Proc. EMNLP, pages 335– 345. Liang Huang and Kenji Sagae. 2010. Dynamic programming for linear-time incremental parsing. In ACL, pages 1077–1086. Liang Huang, Suphan Fayong, and Yang Guo. 2012. Structured perceptron with inexact search. In Proc. HLT-NAACL, pages 142–151. Heng Ji and Ralph Grishman. 2005. Improving name tagging by reference resolution and relation detection. In Proc. ACL, pages 411–418. Jing Jiang and ChengXiang Zhai. 2007. A systematic exploration of the feature space for relation extraction. In Proc. HLT-NAACL. Nanda Kambhatla. 2004. Combining lexical, syntactic, and semantic features with maximum entropy models for information extraction. In Proc. ACL, pages 178–181. Rohit J. Kate and Raymond Mooney. 2010. Joint entity and relation extraction using card-pyramid parsing. In Proc. ACL, pages 203–212. John D. Lafferty, Andrew McCallum, and Fernando C. N. Pereira. 2001. Conditional random fields: Probabilistic models for segmenting and labeling sequence data. In Proc. ICML, pages 282–289. Qi Li, Heng Ji, and Liang Huang. 2013. Joint event extraction via structured prediction with global features. In Proc. ACL, pages 73–82. Marie-Catherine De Marneffe, Bill Maccartney, and Christopher D. Manning. 2006. Generating typed dependency parses from phrase structure parses. In Proc. LREC, pages 449,454. Tomoko Ohta, Sampo Pyysalo, Jun’ichi Tsujii, and Sophia Ananiadou. 2012. Open-domain anatomical entity mention detection. In Proc. ACL Workshop on Detecting Structure in Scholarly Discourse, pages 27–36. Barbara Plank and Alessandro Moschitti. 2013. Embedding semantic similarity in tree kernels for domain adaptation of relation extraction. In Proc. ACL, pages 1498–1507. Longhua Qian and Guodong Zhou. 2010. Clusteringbased stratified seed sampling for semi-supervised relation classification. In Proc. EMNLP, pages 346– 355. 
Longhua Qian, Guodong Zhou, Fang Kong, Qiaoming Zhu, and Peide Qian. 2008. Exploiting constituent dependencies for tree kernel-based semantic relation extraction. In Proc. COLING, pages 697–704. Lev Ratinov and Dan Roth. 2009. Design challenges and misconceptions in named entity recognition. In Proc. CONLL, pages 147–155. Frank Reichartz, Hannes Korte, and Gerhard Paass. 2009. Composite kernels for relation extraction. In Proc. ACL-IJCNLP (Short Papers), pages 365–368. Dan Roth and Wen-tau Yih. 2004. A linear programming formulation for global inference in natural language tasks. In Proc. CoNLL. Dan Roth and Wen-tau Yih. 2007. Global inference for entity and relation identification via a lin- ear programming formulation. In Introduction to Statistical Relational Learning. MIT. Sunita Sarawagi and William W. Cohen. 2004. Semimarkov conditional random fields for information extraction. In Proc. NIPS. Sameer Singh, Sebastian Riedel, Brian Martin, Jiaping Zheng, and Andrew McCallum. 2013. Joint inference of entities, relations, and coreference. In Proc. CIKM Workshop on Automated Knowledge Base Construction. Ang Sun, Ralph Grishman, and Satoshi Sekine. 2011. Semi-supervised relation extraction with large-scale word clustering. In Proc. ACL, pages 521–529. 411 Bishan Yang and Claire Cardie. 2013. Joint inference for fine-grained opinion extraction. In Proc. ACL, pages 1640–1649. Xiaofeng Yu and Wai Lam. 2010. Jointly identifying entities and extracting relations in encyclopedia text via a graphical model approach. In Proc. COLING (Posters), pages 1399–1407. Yue Zhang and Stephen Clark. 2008. Joint word segmentation and pos tagging using a single perceptron. In Proc. ACL, pages 1147–1157. Shubin Zhao and Ralph Grishman. 2005. Extracting relations with integrated information using kernel methods. In Proc. ACL, pages 419–426. Guodong Zhou, Jian Su, Jie Zhang, and Min Zhang. 2005. Exploring various knowledge in relation extraction. In Proc. ACL, pages 427–434. Guodong Zhou, Min Zhang, Dong-Hong Ji, and Qiaoming Zhu. 2007. Tree kernel-based relation extraction with context-sensitive structured parse tree information. In Proc. EMNLP-CoNLL, pages 728–736. Imed Zitouni and Radu Florian. 2008. Mention detection crossing the language barrier. In Proc. EMNLP, pages 600–609. 412
2014
38
Proceedings of the 52nd Annual Meeting of the Association for Computational Linguistics, pages 413–423, Baltimore, Maryland, USA, June 23-25 2014. c⃝2014 Association for Computational Linguistics That’s Not What I Meant! Using Parsers to Avoid Structural Ambiguities in Generated Text Manjuan Duan and Michael White Department of Linguistics The Ohio State University Columbus, OH 43210, USA {duan,mwhite}@ling.osu.edu Abstract We investigate whether parsers can be used for self-monitoring in surface realization in order to avoid egregious errors involving “vicious” ambiguities, namely those where the intended interpretation fails to be considerably more likely than alternative ones. Using parse accuracy in a simple reranking strategy for selfmonitoring, we find that with a stateof-the-art averaged perceptron realization ranking model, BLEU scores cannot be improved with any of the well-known Treebank parsers we tested, since these parsers too often make errors that human readers would be unlikely to make. However, by using an SVM ranker to combine the realizer’s model score together with features from multiple parsers, including ones designed to make the ranker more robust to parsing mistakes, we show that significant increases in BLEU scores can be achieved. Moreover, via a targeted manual analysis, we demonstrate that the SVM reranker frequently manages to avoid vicious ambiguities, while its ranking errors tend to affect fluency much more often than adequacy. 1 Introduction Rajkumar & White (2011; 2012) have recently shown that some rather egregious surface realization errors—in the sense that the reader would likely end up with the wrong interpretation—can be avoided by making use of features inspired by psycholinguistics research together with an otherwise state-of-the-art averaged perceptron realization ranking model (White and Rajkumar, 2009), as reviewed in the next section. However, one is apt to wonder: could one use a parser to check whether the intended interpretation is easy to recover, either as an alternative or to catch additional mistakes? Doing so would be tantamount to selfmonitoring in Levelt’s (1989) model of language production. Neumann & van Noord (1992) pursued the idea of self-monitoring for generation in early work with reversible grammars. As Neumann & van Noord observed, a simple, brute-force way to generate unambiguous sentences is to enumerate possible realizations of an input logical form, then to parse each realization to see how many interpretations it has, keeping only those that have a single reading; they then went on to devise a more efficient method of using self-monitoring to avoid generating ambiguous sentences, targeted to the ambiguous portion of the output. We might question, however, whether it is really possible to avoid ambiguity entirely in the general case, since Abney (1996) and others have argued that nearly every sentence is potentially ambiguous, though we (as human comprehenders) may not notice the ambiguities if they are unlikely. Taking up this issue, Khan et al. (2008)—building on Chantree et al.’s (2006) approach to identifying “innocuous” ambiguities—conducted several experiments to test whether ambiguity could be balanced against length or fluency in the context of generating referring expressions involving coordinate structures. 
Though Khan et al.’s study was limited to this one kind of structural ambiguity, they do observe that generating the brief variants when the intended interpretation is clear instantiates Van Deemter’s (2004) general strategy of only avoiding vicious ambiguities—that is, ambiguities where the intended interpretation fails to be considerably more likely than any other distractor interpretations—rather than trying to avoid all ambiguities. In this paper, we investigate whether Neumann & van Noord’s brute-force strategy for avoid413 ing ambiguities in surface realization can be updated to only avoid vicious ambiguities, extending (and revising) Van Deemter’s general strategy to all kinds of structural ambiguity, not just the one investigated by Khan et al. To do so—in a nutshell—we enumerate an n-best list of realizations and rerank them if necessary to avoid vicious ambiguities, as determined by one or more automatic parsers. A potential obstacle, of course, is that automatic parsers may not be sufficiently representative of human readers, insofar as errors that a parser makes may not be problematic for human comprehension; moreover, parsers are rarely successful in fully recovering the intended interpretation for sentences of moderate length, even with carefully edited news text. Consequently, we examine two reranking strategies, one a simple baseline approach and the other using an SVM reranker (Joachims, 2002). Our simple reranking strategy for selfmonitoring is to rerank the realizer’s n-best list by parse accuracy, preserving the original order in case of ties. In this way, if there is a realization in the n-best list that can be parsed more accurately than the top-ranked realization—even if the intended interpretation cannot be recovered with 100% accuracy—it will become the preferred output of the combined realization-with-selfmonitoring system. With this simple reranking strategy and each of three different Treebank parsers, we find that it is possible to improve BLEU scores on Penn Treebank development data with White & Rajkumar’s (2011; 2012) baseline generative model, but not with their averaged perceptron model. In inspecting the results of reranking with this strategy, we observe that while it does sometimes succeed in avoiding egregious errors involving vicious ambiguities, common parsing mistakes such as PP-attachment errors lead to unnecessarily sacrificing conciseness or fluency in order to avoid ambiguities that would be easily tolerated by human readers. Therefore, to develop a more nuanced self-monitoring reranker that is more robust to such parsing mistakes, we trained an SVM using dependency precision and recall features for all three parses, their n-best parsing results, and per-label precision and recall for each type of dependency, together with the realizer’s normalized perceptron model score as a feature. With the SVM reranker, we obtain a significant improvement in BLEU scores over White & Rajkumar’s averaged perceptron model on both development and test data. Additionally, in a targeted manual analysis, we find that in cases where the SVM reranker improves the BLEU score, improvements to fluency and adequacy are roughly balanced, while in cases where the BLEU score goes down, it is mostly fluency that is made worse (with reranking yielding an acceptable paraphrase roughly one third of the time in both cases). The paper is structured as follows. In Section 2, we review the realization ranking models that serve as a starting point for the paper. 
In Section 3, we report on our experiments with the simple reranking strategy, including a discussion of the ways in which this method typically fails. In Section 4, we describe how we trained an SVM reranker and report our results using BLEU scores (Papineni et al., 2002). In Section 5, we present a targeted manual analysis of the development set sentences with the greatest change in BLEU scores, discussing both successes and errors. In Section 6, we briefly review related work on broad coverage surface realization. Finally, in Section 7, we sum up and discuss opportunities for future work in this direction. 2 Background We use the OpenCCG1 surface realizer for the experiments reported in this paper. The OpenCCG realizer generates surface strings for input semantic dependency graphs (or logical forms) using a chart-based algorithm (White, 2006) for Combinatory Categorial Grammar (Steedman, 2000) together with a “hypertagger” for probabilistically assigning lexical categories to lexical predicates in the input (Espinosa et al., 2008). An example input appears in Figure 1. In the figure, nodes correspond to discourse referents labeled with lexical predicates, and dependency relations between nodes encode argument structure (gold standard CCG lexical categories are also shown); note that semantically empty function words such as infinitival-to are missing. The grammar is extracted from a version of the CCGbank (Hockenmaier and Steedman, 2007) enhanced for realization; the enhancements include: better analyses of punctuation (White and Rajkumar, 2008); less error prone handling of named entities (Rajkumar et al., 2009); re-inserting quotes into the CCGbank; 1http://openccg.sf.net 414 a a1 he h3 he h2 <Det> <Arg0> <Arg1> <TENSE>pres <NUM>sg <Arg0> w1 want.01 m1 <Arg1> <GenRel> <Arg1> <TENSE>pres p1 point h1 have.03 make.03 <Arg0> s[b]\np/np np/n np n s[dcl]\np/np s[dcl]\np/(s[to]\np) np Figure 1: Example OpenCCG semantic dependency input for he has a point he wants to make, with gold standard lexical categories for each node and assignment of consistent semantic roles across diathesis alternations (Boxwell and White, 2008), using PropBank (Palmer et al., 2005). To select preferred outputs from the chart, we use White & Rajkumar’s (2009; 2012) realization ranking model, recently augmented with a largescale 5-gram model based on the Gigaword corpus. The ranking model makes choices addressing all three interrelated sub-tasks traditionally considered part of the surface realization task in natural language generation research (Reiter and Dale, 2000; Reiter, 2010): inflecting lemmas with grammatical word forms, inserting function words and linearizing the words in a grammatical and natural order. The model takes as its starting point two probabilistic models of syntax that have been developed for CCG parsing, Hockenmaier & Steedman’s (2002) generative model and Clark & Curran’s (2007) normal-form model. Using the averaged perceptron algorithm (Collins, 2002), White & Rajkumar (2009) trained a structured prediction ranking model to combine these existing syntactic models with several n-gram language models. This model improved upon the state-of-the-art in terms of automatic evaluation scores on heldout test data, but nevertheless an error analysis revealed a surprising number of word order, function word and inflection errors. For each kind of error, subsequent work investigated the utility of employing more linguistically motivated features to improve the ranking model. 
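Before turning to those features, the averaged perceptron ranking scheme itself can be made concrete with a small sketch. This is not the OpenCCG implementation: it assumes each candidate realization has already been reduced to a handful of numeric features (for instance, log probabilities from the syntactic and n-gram models), the feature names below are hypothetical, and it simply shows the standard promote-gold, demote-prediction update followed by weight averaging.

def dot(w, feats):
    # Dot product between a sparse weight map and a sparse feature map.
    return sum(w.get(f, 0.0) * v for f, v in feats.items())

def train_averaged_perceptron(instances, epochs=5):
    # instances: list of (candidates, gold_index) pairs, where candidates is a list of
    # {feature_name: value} dicts for an n-best list and gold_index marks the reference.
    w, w_sum, t = {}, {}, 0
    for _ in range(epochs):
        for candidates, gold in instances:
            t += 1
            best = max(range(len(candidates)), key=lambda i: dot(w, candidates[i]))
            if best != gold:
                # Promote the gold candidate's features, demote the model's current choice.
                for f, v in candidates[gold].items():
                    w[f] = w.get(f, 0.0) + v
                for f, v in candidates[best].items():
                    w[f] = w.get(f, 0.0) - v
            # Accumulate the weight vector after every instance for averaging.
            for f, v in w.items():
                w_sum[f] = w_sum.get(f, 0.0) + v
    return {f: v / t for f, v in w_sum.items()}

# Toy usage with two candidates described by two hypothetical features.
data = [([{"ngram": -2.5, "syntax": -4.0}, {"ngram": -3.1, "syntax": -2.0}], 1)]
averaged_weights = train_averaged_perceptron(data)

Averaging the weights over all updates is what gives the averaged perceptron its robustness to the order of training examples.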
To improve word ordering decisions, White & Rajkumar (2012) demonstrated that incorporating a feature into the ranker inspired by Gibson’s (2000) dependency locality theory can deliver statistically significant improvements in automatic evaluation scores, better match the distributional characteristics of sentence orderings, and significantly reduce the number of serious ordering errors (some involving vicious ambiguities) as confirmed by a targeted human evaluation. Supporting Gibson’s theory, comprehension and corpus studies have found that the tendency to minimize dependency length has a strong influence on constituent ordering choices; see Temperley (2007) and Gildea and Temperley (2010) for an overview. Table 1 shows examples from White and Rajkumar (2012) of how the dependency length feature (DEPLEN) affects the OpenCCG realizer’s output even in comparison to a model (DEPORD) with a rich set of discriminative syntactic and dependency ordering features, but no features directly targeting relative weight. In wsj 0015.7, the dependency length model produces an exact match, while the DEPORD model fails to shift the short temporal adverbial next year next to the verb, leaving a confusingly repetitive this year next year at the end of the sentence. Note how shifting next year from its canonical VP-final position to appear next to the verb shortens its dependency length considerably, while barely lengthening the dependency to based on; at the same time, it avoids ambiguity in what next year is modifying. In wsj 0020.1 we see the reverse case: the dependency length model produces a nearly exact match with just an equally acceptable inversion of closely watching, keeping the direct object in its canonical position. By contrast, the DEPORD model mistakenly shifts the direct object South Korea, Taiwan and Saudia Arabia to the end of the sentence where it is difficult to understand following two very long intervening phrases. With function words, Rajkumar and White (2011) showed that they could improve upon the earlier model’s predictions for when to employ that-complementizers using features inspired by Jaeger’s (2010) work on using the principle of uniform information density, which holds that human language use tends to keep information density relatively constant in order to optimize communicative efficiency. In news text, com415 wsj 0015.7 the exact amount of the refund will be determined next year based on actual collections made until Dec. 31 of this year . DEPLEN [same] DEPORD the exact amount of the refund will be determined based on actual collections made until Dec. 31 of this year next year . wsj 0020.1 the U.S. , claiming some success in its trade diplomacy , removed South Korea , Taiwan and Saudi Arabia from a list of countries it is closely watching for allegedly failing to honor U.S. patents , copyrights and other intellectual-property rights . DEPLEN the U.S. claiming some success in its trade diplomacy , removed South Korea , Taiwan and Saudi Arabia from a list of countries it is watching closely for allegedly failing to honor U.S. patents , copyrights and other intellectual-property rights . DEPORD the U.S. removed from a list of countries it is watching closely for allegedly failing to honor U.S. patents , copyrights and other intellectual-property rights , claiming some success in its trade diplomacy , South Korea , Taiwan and Saudi Arabia . 
Table 1: Examples of realized output for full models with and without the dependency length feature (White and Rajkumar, 2012) plementizers are left out two times out of three, but in some cases the presence of that is crucial to the interpretation. Generally, inserting a complementizer makes the onset of a complement clause more predictable, and thus less information dense, thereby avoiding a potential spike in information density that is associated with comprehension difficulty. Rajkumar & White’s experiments confirmed the efficacy of the features based on Jaeger’s work, including information density– based features, in a local classification model.2 Their experiments also showed that the improvements in prediction accuracy apply to cases in which the presence of a that-complementizer arguably makes a substantial difference to fluency or intelligiblity. For example, in (1), the presence of that avoids a local ambiguity, helping the reader to understand that for the second month in a row modifies the reporting of the shortage; without that, it is very easy to mis-parse the sentence as having for the second month in a row modifying the saying event. (1) He said that/∅? for the second month in a row, food processors reported a shortage of nonfat dry milk. (PTB WSJ0036.61) Finally, to reduce the number of subject-verb agreement errors, Rajkumar and White (2010) extended the earlier model with features enabling it to make correct verb form choices in sentences involving complex coordinate constructions and 2Note that the features from the local classification model for that-complementizer choice have not yet been incorporated into OpenCCG’s global realization ranking model, and thus do not inform the baseline realization choices in this work. with expressions such as a lot of where the correct choice is not determined solely by the head noun. They also improved animacy agreement with relativizers, reducing the number of errors where that or which was chosen to modify an animate noun rather than who or whom (and vice-versa), while also allowing both choices where corpus evidence was mixed. 3 Simple Reranking 3.1 Methods We ran two OpenCCG surface realization models on the CCGbank dev set (derived from Section 00 of the Penn Treebank) and obtained n-best (n = 10) realizations. The first one is the baseline generative model (hereafter, generative model) used in training the averaged perceptron model. This model ranks realizations using the product of the Hockenmaier syntax model, n-gram models over words, POS tags and supertags in the training sections of the CCGbank, and the large-scale 5-gram model from Gigaword. The second one is the averaged perceptron model (hereafter, perceptron model), which uses all the features reviewed in Section 2. In order to experiment with multiple parsers, we used the Stanford dependencies (de Marneffe et al., 2006), obtaining gold dependencies from the gold-standard PTB parses and automatic dependencies from the automatic parses of each realization. Using dependencies allowed us to measure parse accuracy independently of word order. We chose the Berkeley parser (Petrov et al., 2006), Brown parser (Charniak and Johnson, 2005) and Stanford parser (Klein and Manning, 2003) to parse the realizations generated by the 416 Berkeley Brown Stanford No reranking 87.93 87.93 87.93 Labeled 87.77 87.87 87.12 Unlabeled 87.90 87.97 86.97 Table 2: Devset BLEU scores for simple ranking on top of n-best perceptron model realizations That’s Not What I Meant! 
Using Parsers to Avoid Structural Ambiguities in Generated Text Manjuan Duan and Michael White Department of Linguistics The Ohio State University Columbus, OH 43210, USA {duan,mwhite}@ling.osu.edu . . . is propelling the region toward economic integration aux dobj prep pobj (a) gold dependency . . . is propelling toward economic integration the region aux pobj prep dobj (b) simple ranker . . . is propelling the region toward economic integration aux dobj prep pobj (c) perceptron best Figure 1: Example parsing mistake in PPattachment (wsj 0043.1) Abstract We investigate . . . Figure 2: Example parsing mistake in PPattachment (wsj 0043.1) two realization models and calculated precision, recall and F1 of the dependencies for each realization by comparing them with the gold dependencies. We then ranked the realizations by their F1 score of parse accuracy, keeping the original ranking in case of ties. We also tried using unlabeled (and unordered) dependencies, in order to possibly make better use of parses that were close to being correct. In this setting, as long as the right pair of tokens occur in a dependency relation, it was counted as a correctly recovered dependency. 3.2 Results Simple ranking with the Berkeley parser of the generative model’s n-best realizations raised the BLEU score from 85.55 to 86.07, well below the averaged perceptron model’s BLEU score of 87.93. However, as shown in Table 2, none of the parsers yielded significant improvements on the top of the perceptron model. Inspecting the results of simple ranking revealed that while simple ranking did successfully avoid vicious ambiguities in some cases, parser mistakes with PP-attachments, noun-noun compounds and coordinate structures too often blocked the gold realization from emerging on top. To illustrate, Figure 2 shows an example with a PP-attachment mistake. In the figure, the key gold dependencies of the reference sentence are shown in (a), the dependencies of the realization selected by the simple ranker are shown in (b), and the dependencies of the realization selected by the perceptron ranker (same as gold) appear in (c), with the parsing mistake indicated by the dashed line. The simple ranker ends up choosing (b) as the best realization because it has the most accurate parse compared to the reference sentence, given the mistake with (c). Other common parse errors are illustrated in Figure 3. Here, (b) ends up getting chosen by the simple ranker as the realization with the most accurate parse given the failures in (c), where the additional technology, personnel training is mistakenly analyzed as one noun phrase, a reading unlikely to be considered by human readers. In sum, although simple ranking helps to avoid vicious ambiguity in some cases, the overall results of simple ranking are no better than the perceptron model (according to BLEU, at least), as parse failures that are not reflective of human intepretive tendencies too often lead the ranker to choose dispreferred realizations. As such, we turn now to a more nuanced model for combining the results of multiple parsers in a way that is less sensitive to such parsing mistakes, while also letting the perceptron model have a say in the final ranking. 4 Reranking with SVMs 4.1 Methods Since different parsers make different errors, we conjectured that dependencies in the intersection of the output of multiple parsers may be more reliable and thus may more reliably reflect human comprehension preferences. 
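For reference, the simple parse-accuracy ranker evaluated above can be sketched as follows. The sketch assumes the parser calls have already been made, so each candidate realization is represented by a set of (head, dependent, label) triples; dropping the labels gives the unlabeled variant. It is an illustration of the procedure in Section 3.1, not the authors' code.

def prf(pred, gold):
    # Precision, recall and F1 between predicted and gold dependency sets.
    if not pred or not gold:
        return 0.0, 0.0, 0.0
    tp = len(pred & gold)
    p, r = tp / len(pred), tp / len(gold)
    f = 2 * p * r / (p + r) if p + r > 0 else 0.0
    return p, r, f

def simple_rerank(nbest_deps, gold_deps):
    # nbest_deps holds one dependency set per candidate, in the realizer's original order.
    # Python's sort is stable, so candidates with equal F1 keep their original ranking (ties).
    scored = [(prf(deps, gold_deps)[2], i) for i, deps in enumerate(nbest_deps)]
    return [i for _, i in sorted(scored, key=lambda x: -x[0])]

# Toy usage: the second candidate recovers both gold dependencies and is ranked first.
gold = {(2, 1, "dobj"), (2, 4, "prep")}
candidates = [{(2, 1, "dobj")}, {(2, 1, "dobj"), (2, 4, "prep")}]
order = simple_rerank(candidates, gold)   # -> [1, 0]

Pooling such dependency overlaps across several parsers, rather than trusting any one of them, is what motivates the feature set of the SVM ranker described next.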
Similarly, we conjectured that large differences in the realizer’s perceptron model score may more reliably reflect human fluency preferences than small ones, and thus we combined this score with features for parser accuracy in an SVM ranker. Additionally, given that parsers may more reliably recover some kinds of dependencies than others, we included features for each dependency type, so that the SVM ranker might learn how to weight them appropriately. Finally, since the differences among the n-best parses reflect the least certain parsing decisions, 417 the additional technology, personnel training and promotional efforts det amod nn conj cc conj amod (a) gold dependency the additional technology, training personnel and promotional efforts det amod nn conj cc conj amod (b) simple ranker the additional technology, personnel training and promotional efforts det amod nn dep cc conj amod (c) perceptron best Figure 2: Example parsing mistakes in a noun-noun compound and a coordinate structure (wsj 0085.45) Figure 3: Example parsing mistakes in a noun-noun compound and a coordinate structure (wsj 0085.45) and thus ones that may require more common sense inference that is easy for humans but not machines, we conjectured that including features from the n-best parses may help to better match human performance. In more detail, we made use of the following feature classes for each candidate realization: perceptron model score the score from the realizer’s model, normalized to [0,1] for the realizations in the n-best list precision and recall labeled and unlabeled precision and recall for each parser’s best parse per-label precision and recall (dep) precision and recall for each type of dependency obtained from each parser’s best parse (using zero if not defined for lack of predicted or gold dependencies with a given label) n-best precision and recall (nbest) labeled and unlabeled precision and recall for each parser’s top five parses, along with the same features for the most accurate of these parses In training, we used the BLEU scores of each realization compared with its reference sentence to establish a preference order over pairs of candidate realizations, assuming that the original corpus sentences are generally better than related alternatives, and that BLEU can somewhat reliably predict human preference judgments. We trained the SVM ranker (Joachims, 2002) with a linear kernel and chose the hyper-parameter c, which tunes the trade-off between training error and margin, with 6-fold cross-validation on the devset. We trained different models to investigate the contribution made by different parsers and different types of features, with the perceptron model score included as a feature in all models. For each parser, we trained a model with its overall precision and recall features, as shown at the top of Table 3. Then we combined these three models to get a new model (Bkl+Brw+St in the table) . Next, to this combined model we separately added (i) the per-label precision and recall features from all the parsers (BBS+dep), and (ii) the n-best features from the parsers (BBS+nbest). The full model (BBS+dep+nbest) includes all the features listed above. Finally, since the Berkeley parser yielded the best results on its own, we also tested models using all the feature classes but only using this parser by itself. 4.2 Results Table 3 shows the results of different SVM ranking models on the devset. 
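As a concrete illustration of the training setup of Section 4.1 (the paper itself uses SVM-rank with a linear kernel), the BLEU-ordered preferences over an n-best list can be turned into pairwise difference vectors and fed to any linear classifier. The four feature values below, standing in for the normalized perceptron score and parser precision/recall figures, are made-up placeholders for the feature classes listed above.

import numpy as np
from sklearn.svm import LinearSVC

def pairwise_examples(nbest_feats, bleu_scores):
    # For one n-best list: every pair in which candidate i has a higher BLEU score than
    # candidate j yields a preferred-minus-dispreferred difference vector (and its mirror).
    X, y = [], []
    n = len(bleu_scores)
    for i in range(n):
        for j in range(n):
            if bleu_scores[i] > bleu_scores[j]:
                X.append(nbest_feats[i] - nbest_feats[j])
                y.append(1)
                X.append(nbest_feats[j] - nbest_feats[i])
                y.append(-1)
    return X, y

# Toy n-best list: 3 candidates, 4 hypothetical features
# (normalized perceptron score, labeled precision, labeled recall, unlabeled F1).
feats = np.array([[0.9, 0.80, 0.78, 0.85],
                  [1.0, 0.70, 0.66, 0.75],
                  [0.4, 0.90, 0.88, 0.92]])
bleu = [0.62, 0.55, 0.70]
X, y = pairwise_examples(feats, bleu)
ranker = LinearSVC(C=1.0).fit(X, y)        # C plays a role analogous to SVM-rank's c parameter
scores = feats.dot(ranker.coef_.ravel())   # higher score = preferred realization
best = int(np.argmax(scores))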
We calculated significance using paired bootstrap resampling (Koehn, 2004).3 Both the per-label precision & recall fea3Kudos to Kevin Gimpel for making his implementation available: http://www.ark.cs.cmu.edu/MT/ paired_bootstrap_v13a.tar.gz 418 BLEU sig. perceptron baseline 87.93 – Berkeley 88.45 * Brown 88.34 Stanford 88.18 Bkl+Brw+St 88.44 * BBS+dep 88.63 ** BBS+nbest 88.60 ** BBS+dep+nbest 88.73 ** Bkl+dep 88.63 ** Bkl+nbest 88.48 * Bkl +dep+nbest 88.68 ** Table 3: Devset results of SVM ranking on top of perceptron model. Significance codes: ∗∗for p < 0.05, ∗for p < 0.1. BLEU sig. perceptron baseline 86.94 – BBS+dep+nbest 87.64 ** Table 4: Final test results of SVM ranking on top of perceptron model. Significance codes: ∗∗for p < 0.05, ∗for p < 0.1. tures and the n-best parse features contributed to achieving a significant improvement compared to the perceptron model. Somewhat surprisingly, the Berkeley parser did as well as all three parsers using just the overall precision and recall features, but not quite as well using all features. The complete model, BBS+dep+nbest, achieved a BLEU score of 88.73, significantly improving upon the perceptron model (p < 0.02). We then confirmed this result on the final test set, Section 23 of the CCGbank, as shown in Table 4 (p < 0.02 as well). 5 Analysis and Discussion 5.1 Targeted Manual Analysis In order to gain a better understanding of the successes and failures of our SVM ranker, we present here a targeted manual analysis of the development set sentences with the greatest change in BLEU scores, carried out by the second author (a native speaker). In this analysis, we consider whether the reranked realization improves upon or detracts from realization quality—in terms of adequacy, fluency, both or neither—along with a linguistic categorization of the differences between the reranked realization and the original top-ranked realization according to the averaged perceptron model. Unlike the broad-based and objective evaluation in terms of BLEU scores presented above, this analysis is narrowly targeted and subjective, though the interested reader is invited to review the complete set of analyzed examples that accompany the paper as a supplement. We leave a more broad-based human evaluation by naive subjects for future work. Table 5 shows the results of the analysis, both overall and for the most frequent categories of changes. Of the 50 sentences where the BLEU score went up the most, 15 showed an improvement in adequacy (i.e., in conveying the intended meaning), 22 showed an improvement in fluency (with 3 cases also improving adequacy), and 16 yielded no discernible change in fluency or adequacy. By contrast, with the 50 sentences where the BLEU score went down the most, adequacy was only affected 4 times, though fluency was affected 32 times, and 15 remained essentially unchanged.4 The table also shows that differences in the order of VP constituents usually led to a change in adequacy or fluency, as did ordering changes within NPs, with noun-noun compounds and named entities as the most frequent subcategories of NP-ordering changes. Of the cases where adequacy and fluency were not affected, contractions and subject-verb inversions were the most frequent differences. Examples of the changes yielded by the SVM ranker appear in Table 6. 
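Before walking through the Table 6 examples, the paired bootstrap test used for the significance figures above can be sketched in a few lines. The metric function here is an arbitrary stand-in; the paper applies Koehn's procedure to BLEU, but the resampling logic is the same for any corpus-level score.

import random

def paired_bootstrap(metric, sys_a, sys_b, refs, n_samples=1000, seed=0):
    # metric(outputs, references) -> corpus-level score; sys_a, sys_b and refs are
    # parallel lists over the same test segments.
    rng = random.Random(seed)
    n, wins = len(refs), 0
    for _ in range(n_samples):
        idx = [rng.randrange(n) for _ in range(n)]    # resample segments with replacement
        a = metric([sys_a[i] for i in idx], [refs[i] for i in idx])
        b = metric([sys_b[i] for i in idx], [refs[i] for i in idx])
        if a > b:
            wins += 1
    # A win rate above 0.95 is conventionally read as A beating B at p < 0.05.
    return wins / n_samples

# Toy usage with a trivial word-overlap score standing in for BLEU.
def overlap(outputs, references):
    return sum(len(set(o.split()) & set(r.split())) for o, r in zip(outputs, references))

rate = paired_bootstrap(overlap, ["the cat sat", "a dog ran"],
                        ["the cat slept", "a dog ran"],
                        ["the cat sat", "a dog ran fast"], n_samples=200)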
With wsj 0036.54, the averaged perceptron model selects a realization that regrettably (though amusingly) swaps purchasing and more than 250—yielding a sentence that suggests that the executives have been purchased!—while the SVM ranker succeeds in ranking the original sentence above all competing realizations. With wsj 0088.25, self-monitoring with the SVM ranker yields a realization nearly identical to the original except for an extra comma, where it is clear that in public modifies do this; by contrast, in the perceptron-best realization, in public mistakenly appears to modify be disclosed. With wsj 0041.18, the SVM ranker unfortunately prefers a realization where presumably seems to modify shows rather than of two politicians as 4The difference in the distribution of adequacy change, fluency change and no change counts between the two conditions is highly significant statistically (χ2 = 9.3, df = 2, p < 0.01). In this comparison, items where both fluency and adequacy were affected were counted as adequacy cases. 419 ±adq ±flu =eq ±vpord ±npord ±nn ±ne =vpord =sbjinv =cntrc BLEU wins 15 22 16 10 9 7 3 4 11 BLEU losses 4 32 15 8 13 5 5 4 7 Table 5: Manual analysis of devset sentences where the SVM ranker achieved the greatest increase/decrease in BLEU scores (50 each of wins/losses) compared to the averaged perceptron baseline model in terms of positive or negative changes in adequacy (±adq), fluency (±flu) or neither (=eq); changes in VP ordering (±vpord), NP ordering (±npord), noun-noun compound ordering (±nn) and named entities (±ne); and neither positive nor negative changes in VP ordering (=vpord), subjectinversion (=sbjinv) and contractions (=cntrc). In all but one case (counted as =eq here), the BLEU wins saw positive changes and the BLEU losses saw negative changes. wsj 0036.54 the purchasing managers ’ report is based on data provided by more than 250 purchasing executives . SVM RANKER [same] PERCEP BEST the purchasing managers ’ report is based on data provided by purchasing more than 250 executives . wsj 0088.25 Markey said we could have done this in public because so little sensitive information was disclosed , the aide said . SVM RANKER Markey said , we could have done this in public because so little sensitive information was disclosed , the aide said . PERCEP BEST Markey said , we could have done this because so little sensitive information was disclosed in public , the aide said . wsj 0041.18 the screen shows two distorted , unrecognizable photos , presumably of two politicians . SVM RANKER the screen shows two distorted , unrecognizable photos presumably , of two politicians . PERCEP BEST [same as original] wsj 0044.111 “ I was dumbfounded ” , Mrs. Ward recalls . SVM RANKER “ I was dumbfounded ” , recalls Mrs. Ward . PERCEP BEST [same as original] Table 6: Examples of devset sentences where the SVM ranker improved adequacy (top), made it worse (middle) or left it the same (bottom) in the original, which the averaged perceptron model prefers. Finally, wsj 0044.111 is an example where a subject-inversion makes no difference to adequacy or fluency. 5.2 Discussion The BLEU evaluation and targeted manual analysis together show that the SVM ranker increases the similarity to the original corpus of realizations produced with self-monitoring, often in ways that are crucial for the intended meaning to be apparent to human readers. 
A limitation of the experiments reported in this paper is that OpenCCG’s input semantic dependency graphs are not the same as the Stanford dependencies used with the Treebank parsers, and thus we have had to rely on the gold parses in the PTB to derive gold dependencies for measuring accuracy of parser dependency recovery. In a realistic application scenario, however, we would need to measure parser accuracy relative to the realizer’s input. We initially tried using OpenCCG’s parser in a simple ranking approach, but found that it did not improve upon the averaged perceptron model, like the three parsers used subsequently. Given that with the more refined SVM ranker, the Berkeley parser worked nearly as well as all three parsers together using the complete feature set, the prospects for future work on a more realistic scenario using the OpenCCG parser in an SVM ranker for self-monitoring now appear much more promising, either using OpenCCG’s reimplementation of Hockenmaier & Steedman’s generative CCG model, or using the Berkeley parser trained on OpenCCG’s enhanced version of the CCGbank, along the lines of Fowler and Penn (2010). 6 Related Work Approaches to surface realization have been developed for LFG, HPSG, and TAG, in addition to CCG, and recently statistical dependency-based approaches have been developed as well; see the report from the first surface realization shared 420 task (Belz et al., 2010; Belz et al., 2011) for an overview. To our knowledge, however, a comprehensive investigation of avoiding vicious structural ambiguities with broad coverage statistical parsers has not been previously explored. As our SVM ranking model does not make use of CCG-specific features, we would expect our selfmonitoring method to be equally applicable to realizers using other frameworks. 7 Conclusion In this paper, we have shown that while using parse accuracy in a simple reranking strategy for self-monitoring fails to improve BLEU scores over a state-of-the-art averaged perceptron realization ranking model, it is possible to significantly increase BLEU scores using an SVM ranker that combines the realizer’s model score together with features from multiple parsers, including ones designed to make the ranker more robust to parsing mistakes that human readers would be unlikely to make. Additionally, via a targeted manual analysis, we showed that the SVM reranker frequently manages to avoid egregious errors involving “vicious” ambiguities, of the kind that would mislead human readers as to the intended meaning. As noted in Reiter’s (2010) survey, many NLG systems use surface realizers as off-the-shelf components. In this paper, we have focused on broad coverage surface realization using widelyavailable PTB data—where there are many sentences of varying complexity with gold-standard annotations—following the common assumption that experiments with broad coverage realization are (or eventually will be) relevant for NLG applications. Of course, the kinds of ambiguity that can be problematic in news text may or may not be the same as the ones encountered in particular applications. Moreover, for certain applications (e.g. ones with medical or legal implications), it may be better to err on the side of ambiguity avoidance, even at some expense to fluency, thereby requiring training data reflecting the desired trade-off to adapt the methods described here. We leave these application-centered issues for investigation in future work. 
The current approach is primarily suitable for offline use, for example in report generation where there are no real-time interaction demands. In future work, we also plan to investigate ways that self-monitoring might be implemented more efficiently as a combined process, rather than running independent parsers as a post-process following realization. Acknowledgments We thank Mark Johnson, Micha Elsner, the OSU Clippers Group and the anonymous reviewers for helpful comments and discussion. This work was supported in part by NSF grants IIS-1143635 and IIS-1319318. References S. Abney. 1996. Statistical methods and linguistics. In Judith Klavans and Philip Resnik, editors, The balancing act: Combining symbolic and statistical approaches to language, pages 1–26. MIT Press, Cambridge, MA. Anja Belz, Mike White, Josef van Genabith, Deirdre Hogan, and Amanda Stent. 2010. Finding common ground: Towards a surface realisation shared task. In Proceedings of INLG-10, Generation Challenges, pages 267–272. Anja Belz, Michael White, Dominic Espinosa, Eric Kow, Deirdre Hogan, and Amanda Stent. 2011. The first surface realisation shared task: Overview and evaluation results. In Proceedings of the Generation Challenges Session at the 13th European Workshop on Natural Language Generation, pages 217–226, Nancy, France, September. Association for Computational Linguistics. Stephen Boxwell and Michael White. 2008. Projecting Propbank roles onto the CCGbank. In Proc. LREC08. F. Chantree, B. Nuseibeh, A. De Roeck, and A. Willis. 2006. Identifying nocuous ambiguities in natural language requirements. In Requirements Engineering, 14th IEEE International Conference, pages 59– 68. IEEE. Eugene Charniak and Mark Johnson. 2005. Coarseto-fine n-best parsing and maxent discriminative reranking. In Proceedings of ACL, pages 173–180, Ann Arbor, Michigan. Association for Computational Linguistics. Stephen Clark and James R. Curran. 2007. WideCoverage Efficient Statistical Parsing with CCG and Log-Linear Models. Computational Linguistics, 33(4):493–552. Michael Collins. 2002. Discriminative training methods for hidden Markov models: theory and experiments with perceptron algorithms. In Proc. EMNLP-02. 421 Marie-Catherine de Marneffe, Bill MacCartney, and Christopher Manning. 2006. Generating typed dependency parses from phrase structure parses. In Proceedings of LREC. Dominic Espinosa, Michael White, and Dennis Mehay. 2008. Hypertagging: Supertagging for surface realization with CCG. In Proceedings of ACL-08: HLT, pages 183–191, Columbus, Ohio, June. Association for Computational Linguistics. Timothy A. D. Fowler and Gerald Penn. 2010. Accurate context-free parsing with Combinatory Categorial Grammar. In Proceedings of the 48th Annual Meeting of the Association for Computational Linguistics, pages 335–344, Uppsala, Sweden, July. Association for Computational Linguistics. Edward Gibson. 2000. Dependency locality theory: A distance-based theory of linguistic complexity. In Alec Marantz, Yasushi Miyashita, and Wayne O’Neil, editors, Image, Language, brain: Papers from the First Mind Articulation Project Symposium. MIT Press, Cambridge, MA. Daniel Gildea and David Temperley. 2010. Do grammars minimize dependency length? Cognitive Science, 34(2):286–310. Julia Hockenmaier and Mark Steedman. 2002. Generative models for statistical parsing with Combinatory Categorial Grammar. In Proc. ACL-02. Julia Hockenmaier and Mark Steedman. 2007. CCGbank: A Corpus of CCG Derivations and Dependency Structures Extracted from the Penn Treebank. 
Computational Linguistics, 33(3):355–396. T. Florian Jaeger. 2010. Redundancy and reduction: Speakers manage information density. Cognitive Psychology, 61(1):23–62, August. Thorsten Joachims. 2002. Optimizing search engines using clickthrough data. In Proc. KDD. I.H. Khan, K. Van Deemter, and G. Ritchie. 2008. Generation of referring expressions: Managing structural ambiguities. In Proceedings of the 22nd International Conference on Computational Linguistics, pages 433–440. Association for Computational Linguistics. Dan Klein and Christopher Manning. 2003. Accurate unlexicalized parsing. In Proceedings of the 41st Meeting of the Association for Computational Linguistics, pages 423–430. Philipp Koehn. 2004. Statistical significance tests for machine translation evaluation. In Dekang Lin and Dekai Wu, editors, Proceedings of EMNLP 2004, pages 388–395, Barcelona, Spain, July. Association for Computational Linguistics. Willem J. M. Levelt. 1989. Speaking: From Intention to Articulation. MIT Press. G¨unter Neumann and Gertjan van Noord. 1992. Selfmonitoring with reversible grammars. In Proceedings of the 14th conference on Computational linguistics - Volume 2, COLING ’92, pages 700–706, Stroudsburg, PA, USA. Association for Computational Linguistics. Martha Palmer, Dan Gildea, and Paul Kingsbury. 2005. The proposition bank: A corpus annotated with semantic roles. Computational Linguistics, 31(1). Kishore Papineni, Salim Roukos, Todd Ward, and WeiJing Zhu. 2002. BLEU: a method for automatic evaluation of machine translation. In Proc. ACL-02. Slav Petrov, Leon Barrett, Romain Thibaux, and Dan Klein. 2006. Learning accurate, compact, and interpretable tree annotation. In Proceedings of COLING-ACL. Rajakrishnan Rajkumar and Michael White. 2010. Designing agreement features for realization ranking. In Proc. Coling 2010: Posters, pages 1032–1040, Beijing, China, August. Rajakrishnan Rajkumar and Michael White. 2011. Linguistically motivated complementizer choice in surface realization. In Proceedings of the UCNLG+Eval: Language Generation and Evaluation Workshop, pages 39–44, Edinburgh, Scotland, July. Association for Computational Linguistics. Rajakrishnan Rajkumar, Michael White, and Dominic Espinosa. 2009. Exploiting named entity classes in CCG surface realization. In Proc. NAACL HLT 2009 Short Papers. Ehud Reiter and Robert Dale. 2000. Building natural generation systems. Studies in Natural Language Processing. Cambridge University Press. Ehud Reiter. 2010. Natural language generation. In Alexander Clark, Chris Fox, and Shalom Lappin, editors, The Handbook of Computational Linguistics and Natural Language Processing (Blackwell Handbooks in Linguistics), Blackwell Handbooks in Linguistics, chapter 20. Wiley-Blackwell, 1 edition. Mark Steedman. 2000. The syntactic process. MIT Press, Cambridge, MA, USA. David Temperley. 2007. Minimization of dependency length in written English. Cognition, 105(2):300– 333. K. Van Deemter. 2004. Towards a probabilistic version of bidirectional OT syntax and semantics. Journal of Semantics, 21(3):251–280. Michael White and Rajakrishnan Rajkumar. 2008. A more precise analysis of punctuation for broadcoverage surface realization with CCG. In Coling 2008: Proceedings of the workshop on Grammar Engineering Across Frameworks, pages 17–24. 422 Michael White and Rajakrishnan Rajkumar. 2009. Perceptron reranking for CCG realization. In Proceedings of the 2009 Conference on Empirical Methods in Natural Language Processing, pages 410– 419, Singapore, August. 
Association for Computational Linguistics. Michael White and Rajakrishnan Rajkumar. 2012. Minimal dependency length in realization ranking. In Proceedings of the 2012 Joint Conference on Empirical Methods in Natural Language Processing and Computational Natural Language Learning, pages 244–255, Jeju Island, Korea, July. Association for Computational Linguistics. Michael White. 2006. Efficient Realization of Coordinate Structures in Combinatory Categorial Grammar. Research on Language & Computation, 4(1):39–75. 423
2014
39
Proceedings of the 52nd Annual Meeting of the Association for Computational Linguistics, pages 36–46, Baltimore, Maryland, USA, June 23-25 2014. c⃝2014 Association for Computational Linguistics Discovering Latent Structure in Task-Oriented Dialogues Ke Zhai∗ Computer Science, University of Maryland College Park, MD 20740 [email protected] Jason D. Williams Microsoft Research Redmond, WA 98052 [email protected] Abstract A key challenge for computational conversation models is to discover latent structure in task-oriented dialogue, since it provides a basis for analysing, evaluating, and building conversational systems. We propose three new unsupervised models to discover latent structures in task-oriented dialogues. Our methods synthesize hidden Markov models (for underlying state) and topic models (to connect words to states). We apply them to two real, non-trivial datasets: human-computer spoken dialogues in bus query service, and humanhuman text-based chats from a live technical support service. We show that our models extract meaningful state representations and dialogue structures consistent with human annotations. Quantitatively, we show our models achieve superior performance on held-out log likelihood evaluation and an ordering task. 1 Introduction Modeling human conversation is a fundamental scientific pursuit. In addition to yielding basic insights into human communication, computational models of conversation underpin a host of real-world applications, including interactive dialogue systems (Young, 2006), dialogue summarization (Murray et al., 2005; Daum´e III and Marcu, 2006; Liu et al., 2010), and even medical applications such as diagnosis of psychological conditions (DeVault et al., 2013). Computational models of conversation can be broadly divided into two genres: modeling and control. Control is concerned with choosing actions in interactive settings—for example to maximize task completion—using reinforcement learn∗Work done at Microsoft Research. ing (Levin et al., 2000), supervised learning (Hurtado et al., 2010), hand-crafted rules (Larsson and Traum, 2000), or mixtures of these (Henderson and Lemon, 2008). By contrast, modeling—the genre of this paper—is concerned with inferring a phenomena in an existing corpus, such as dialogue acts in two-party conversations (Stolcke et al., 2000) or topic shifts in multi-party dialogues (Galley et al., 2003; Purver et al., 2006; Hsueh et al., 2006; Banerjee and Rudnicky, 2006). Many past works rely on supervised learning or human annotations, which usually requires manual labels and annotation guidelines (Jurafsky et al., 1997). It constrains scaling the size of training examples, and application domains. By contrast, unsupervised methods operate only on the observable signal (e.g. words) and are estimated without labels or their attendant limitations (Crook et al., 2009). They are particularly relevant because conversation is a temporal process where models are trained to infer a latent state which evolves as the dialogue progresses (Bangalore et al., 2006; Traum and Larsson, 2003). Our basic approach is to assume that each utterance in the conversation is in a latent state, which has a causal effect on the words the conversants produce. 
Inferring this model yields basic insights into the structure of conversation and also has broad practical benefits, for example, speech recognition (Williams and Balakrishnan, 2009), natural language generation (Rieser and Lemon, 2010), and new features for dialogue policy optimization (Singh et al., 2002; Young, 2006). There has been limited past work on unsupervised methods for conversation modeling. Chotimongkol (2008) studies task-oriented conversation and proposed a model based on a hidden Markov model (HMM). Ritter et al. (2010) extends it by introducing additional word sources, and applies to non-task-oriented conversations— social interactions on Twitter, where the subjects 36 discussed are very diffuse. The additional word sources capture the subjects, leaving the statespecific models to express common dialogue flows such as question/answer pairs. In this paper, we retain the underlying HMM, but assume words are emitted using topic models (TM), exemplified by latent Dirichlet allocation (Blei et al., 2003, LDA). LDA assumes each word in an utterance is drawn from one of a set of latent topics, where each topic is a multinomial distribution over the vocabulary. The key idea is that the set of topics is shared across all states, and each state corresponds to a mixture of topics. We propose three model variants that link topics and states in different ways. Sharing topics across states is an attractive property in task-oriented dialogue, where a single concept can be discussed at many points in a dialogue, yet different topics often appear in predictable sequences. Compared to past works, the decoupling of states and topics gives our models more expressive power and the potential to be more data efficient. Empirically, we find that our models outperform past approaches on two realworld corpora of task-oriented dialogues. This paper is organized as follows: Section 2 introduces two task-oriented domains and corpora; Section 3 details three new unsupervised generative models which combine HMMs and LDA and efficient inference schemes; Section 4 evaluates our models qualitatively and quantitatively, and finally conclude in Section 5. 2 Data To test the generality of our models, we study two very different datasets: a set of human-computer spoken dialogues in quering bus timetable (BusTime), and a set of human-human text-based dialogues in the technical support domain (TechSupport). In BusTime, the conversational structure is known because the computer followed a deterministic program (Williams, 2012), making it possible to directly compare an inferred model to ground truth on this corpus.1 In TechSupport, there is no known flowchart,2 making this a realistic application of unsupervised methods. 1Available for download at http://research.microsoft. com/en-us/events/dstc/ 2Technical support human agents use many types of documentation—mainly checklists and guidelines, but in general, there are no flowcharts. BusTime This corpus consists of logs of telephone calls between a spoken dialogue system and real bus users in Pittsburgh, USA (Black et al., 2010). For the user side, the words logged are the words recognized by the automatic speech recognizer. The vocabulary of the recognizer was constrained to the bus timetable task, so only words known to the recognizer in advance are output. Even so, the word error rate is approximately 3040%, due to the challenging audio conditions of usage—with traffic noise and extraneous speech. 
The system asked users sequentially for a bus route, origin and destination, and optionally date and time. The system confirmed low-confidence speech recognition results. Due to the speech recognition channel, system and user turns always alternate. An example dialogue is given below: System: Say a route like ⟨bus-route⟩, or say I’m not sure. User: ⟨bus-route⟩. System: I thought you said ⟨bus-route⟩, is that right? User: Yes. System: Say where’re you leaving from, like ⟨location⟩. User: ⟨location⟩. System: Okay, ⟨location⟩, where are you going to? ... We discard dialogues with fewer than 20 utterances. We also map all named entities (e.g., “downtown” and “28X”) to their semantic types (resp. ⟨location⟩and ⟨bus-route⟩) to reduce vocabulary size. The corpus we use consists of approximately 850 dialogue sessions or 30, 000 utterances. It contains 370, 000 tokens (words or semantic types) with vocabulary size 250. TechSupport This corpus consists of logs of real web-based human-human text “chat” conversations between clients and technical support agents at a large corporation. Usually, clients and agents first exchange names and contact information; after that, dialogues are quite free-form, as agents ask questions and suggest fixes. Most dialogues ultimately end when the client’s issue has been resolved; some clients are provided with a reference number for future follow-up. An example dialogue is given below: Agent: Welcome to the answer desk! My name is ⟨agentname⟩. How can I help you today? Agent: May I have your name, email and phone no.? Client: Hi, ⟨agent-name⟩. I recently installed new software but I kept getting error, can you help me? Agent: Sorry to hear that. Let me help you with that. Agent: May I have your name, email and phone no.? Client: The error code is ⟨error-code⟩. Client: It appears every time when I launch it. Client: Sure. My name is ⟨client-name⟩. Client: My email and phone are ⟨email⟩, ⟨phone⟩. Agent: Thanks, ⟨client-name⟩, please give me a minute. 37 w0,i N0 s0 w0,i N0 s1 w1,i N1 M s0 w0,i N0 s1 w1,i N1 ... sn wn,i Nn M (a) LM-HMM s0 w0,i N0 s1 ... sn M w1,i N1 wn,i Nn r1,i r0,i rn,i πm ψE φm s0 w0,i N0 s1 ... sn M w1,i N1 wn,i Nn r1,i r0,i rn,i tm gE um s0 w0,i N0 s1 M w1,i N1 r1,i r0,i tm gE um s0 w0,i N0 s1 M r0,i tm gE um s0 w0,i N0 s1 M N1 r1,i r0,i tm gE um s0 N0 M r0,i tm gE um s0 w0,i N0 M r0,i tm gE um M tm gE um s0 N0 M tm gE um (b) LM-HMMS Figure 1: Plate diagrams of baseline models, from existing work (Chotimongkol, 2008; Ritter et al., 2010). Variable definitions are given in the text. ... This data is less structured than BusTime; clients’ issues span software, hardware, networking, and other topics. In addition, clients use common internet short-hand (e.g., “thx”, “gtg”, “ppl”, “hv”, etc), with mis-spellings (e.g., “ofice”, “offfice”, “erorr”, etc). In addition, chats from the web interface are segmented into turns when a user hits “Enter” on a keyboard. Therefore, clients’ input and agents’ responses do not necessarily alternate consecutively, e.g., an agent’s response may take multiple turns as in the above example. Also, it is unreasonable to group consecutive chats from the same party to form a “alternating” structure like BusTime dataset due to the asynchronism of different states. For instance, the second block of client inputs clearly comes from two different states which should not be merged together. We discard dialogues with fewer than 30 utterances. 
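A minimal sketch of the kind of corpus preprocessing described for both datasets, namely entity mapping and length filtering (the footnote below notes that regular expressions and the NLTK Porter stemmer were used); the patterns, semantic type names and threshold here are illustrative placeholders rather than the authors' actual rules.

import re

# Hypothetical entity patterns; the real system would use a fuller lexicon of routes,
# locations, error codes, and so on.
ENTITY_PATTERNS = [
    (re.compile(r"\b\d{1,3}[A-Z]\b"), "<bus-route>"),          # e.g. "28X"
    (re.compile(r"\bdowntown\b", re.IGNORECASE), "<location>"),
]

def map_entities(utterance):
    for pattern, semantic_type in ENTITY_PATTERNS:
        utterance = pattern.sub(semantic_type, utterance)
    return utterance

def preprocess(dialogues, min_utterances=20):
    # Keep only dialogues long enough to be informative, mapping entities in each turn.
    kept = []
    for dialogue in dialogues:
        if len(dialogue) >= min_utterances:
            kept.append([map_entities(utterance) for utterance in dialogue])
    return kept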
We map named entities to their semantic types, apply stemming, and remove stop words.3 The corpus we use contains approximately 2, 000 dialogue sessions or 80, 000 conversation utterances. It consists of 770, 000 tokens, with a a vocabulary size of 6, 600. 3 Latent Structure in Dialogues In this work, our goal is to infer latent structure presented in task-oriented conversation. We assume that the structure can be encoded in a probabilistic state transition diagram, where the dialogue is in one state at each utterance, and states have a causal effect on the words observed. We assume the boundaries between utterances are given, which is trivial in many corpora. The simplest formulation we consider is an HMM where each state contains a unigram language model (LM), proposed by Chotimongkol (2008) for task-oriented dialogue and originally 3We used regular expression to map named entities, and Porter stemmer in NLTK to stem all tokens. developed for discourse analysis by Barzilay and Lee (2004). We call it LM-HMM as in Figure 1(a). For a corpus of M dialogues, the m-th dialogue contains n utterances, each of which contains Nn words (we omit index m from terms because it will be clear from context). At n-th utterance, we assume the dialogue is in some latent state sn. Words in n-th utterance wn,1, . . . , wn,Nn are generated (independently) according to the LM. When an utterance is complete, the next state is drawn according to HMM, i.e., P(s′|s). While LM-HMM captures the basic intuition of conversation structure, it assumes words are conditioned only on state. Ritter et al. (2010) extends LM-HMM to allow words to be emitted from two additional sources: the topic of current dialogue φ, or a background LM ψ shared across all dialogues. A multinomial π indicates the expected fraction of words from these three sources. For every word in an utterance, first draw a source indicator r from π, and then generate the word from the corresponding source. We call it LM-HMMS (Figure 1(b)). Ritter et al. (2010) finds these alternate sources are important in non-task-oriented domains, where events are diffuse and fleeting. For example, Twitter exchanges often focus on a particular event (labeled X), and follow patterns like “saw X last night?”, “X was amazing”. Here X appears throughout the dialogue but does not help to distinguish conversational states in social media. We also explore similar variants. In this paper, these two models form our baselines. For all models, we use Markov chain Monte Carlo (MCMC) inference (Neal, 2000) to find latent variables that best fit observed data. We also assume symmetric Dirichlet priors on all multinomial distributions and apply collapsed Gibbs sampling. In the rest of this section, we present our models and their inference algorithms in turn. 3.1 TM-HMM Our approach is to modify the emission probabilities of states to be distributions over topics rather than distributions over words. In other words, instead of generating words via a LM, we generate words from a topic model (TM), where each state maps to a mixture of topics. The key benefit of this additional layer of abstraction is to enable states to express higher-level concepts through pooling of topics across states. For example, topics might be inferred for content like “bus-route” or “lo38 s0 w0,i N0 s1 ... sn M z0,i w1,i N1 z1,i wn,i Nn zn,i T K θt φk s0 w0,i N0 s1 ... 
3.1 TM-HMM

Our approach is to modify the emission probabilities of states to be distributions over topics rather than distributions over words. In other words, instead of generating words from an LM, we generate words from a topic model (TM), where each state maps to a mixture of topics. The key benefit of this additional layer of abstraction is that it enables states to express higher-level concepts through pooling of topics across states. For example, some topics might be inferred for content like "bus-route" or "locations", and other topics for dialogue acts, like to "ask" or "confirm" information. States could then be combinations of these, e.g., a state might express "ask bus route" or "confirm location". This approach also decouples the number of topics from the number of states. Throughout this paper, we denote the number of topics as K and the number of states as T. We index words, turns and dialogues in the same way as the baseline models.

Figure 2: Plate diagrams of proposed models. TM-HMM is an HMM with state-wise topic distributions. TM-HMMS adds a session-wise topic distribution and a source generator. TM-HMMSS adds a state-wise source generator. Variable definitions are given in the text.

We develop three generative models. In the first variant (TM-HMM, Figure 2(a)), we assume every state s in the HMM is associated with a distribution over topics θ, and topics generate the words w in each utterance. The other two models additionally allow words to be generated from different sources, akin to the LM-HMMS model. TM-HMM generates a dialogue as follows:

1: For each utterance n in the dialogue, sample a state sn based on the previous state sn−1.
2: For each word in utterance n, first draw a topic z from the state-specific distribution over topics θsn, then generate the word w from the topic-specific distribution over the vocabulary φz.

We assume the θ's and φ's are drawn from corresponding Dirichlet priors, as in LDA. The posterior distributions of the state assignment sn and the topic assignment zn,i are

p(sn | s−n, z, α, γ) ∝ p(sn | s−n, γ) · p(zn | s, z−n, α),    (1)
p(zn,i | s, w, z−(n,i), α, β) ∝ p(zn,i | s, z−(n,i), α) · p(wn,i | sn, w−(n,i), z, β),

where α, β, γ are symmetric Dirichlet priors on the state-wise topic distributions θt, the topic-wise word distributions φk and the state transition multinomials, respectively. All probabilities can be computed with the collapsed Gibbs samplers for LDA (Griffiths and Steyvers, 2004) and the HMM (Goldwater and Griffiths, 2007). We iteratively sample all parameters until convergence.
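To illustrate the second line of Equation (1), the following is a minimal sketch of the collapsed Gibbs update for a single topic assignment zn,i under TM-HMM. It assumes count matrices maintained incrementally during sampling; the array names and toy setup are ours, not the authors'.

```python
import numpy as np

def resample_topic(w, s, z_old, n_state_topic, n_topic_word, n_topic, alpha, beta, rng):
    """One collapsed Gibbs step for z_{n,i} in TM-HMM.

    n_state_topic[t, k] : number of words in state t currently assigned to topic k
    n_topic_word[k, v]  : number of times word v is assigned to topic k (corpus-wide)
    n_topic[k]          : total number of words assigned to topic k (corpus-wide)
    """
    K, V = n_topic_word.shape
    # remove the word's current assignment from the counts
    n_state_topic[s, z_old] -= 1
    n_topic_word[z_old, w] -= 1
    n_topic[z_old] -= 1
    # p(z = k | s) * p(w | z = k), both collapsed Dirichlet-multinomial terms
    p = (n_state_topic[s] + alpha) * (n_topic_word[:, w] + beta) / (n_topic + V * beta)
    z_new = rng.choice(K, p=p / p.sum())
    # record the new assignment
    n_state_topic[s, z_new] += 1
    n_topic_word[z_new, w] += 1
    n_topic[z_new] += 1
    return z_new
```

The state assignment sn is resampled analogously, combining the transition term with the probability of the utterance's topic assignments under each candidate state (first line of Equation (1)).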
3.2 TM-HMMS

TM-HMMS (Figure 2(b)) extends TM-HMM to allow words to be generated either from a state LM (as in LM-HMM) or from a set of dialogue topics (akin to LM-HMMS). Because task-oriented dialogues usually focus on a specific domain, a set of words appears repeatedly throughout a given dialogue. Therefore, the topic distribution is often stable throughout the entire dialogue and does not vary from turn to turn. For example, in the troubleshooting domain, dialogues about network connections, desktop productivity, and anti-virus software could each map to different session-wide topics. To express this, words in the TM-HMMS model are generated either from a dialogue-specific topic distribution or from a state-specific language model. (Note that a TM-HMMS model with state-specific topic models, instead of state-specific language models, would be subsumed by TM-HMM, since one topic could be used as the background topic in TM-HMMS.) A distribution over sources is sampled once at the beginning of each dialogue and determines the expected fraction of words generated from each source. The generative story for a dialogue session is:

1: At the beginning of the session, draw a distribution over topics θ and a distribution over word sources τ.
2: For each utterance n in the conversation, draw a state sn based on the previous state sn−1.
3: For each word in utterance n, first choose a word source r according to τ, and then, depending on r, generate a word w either from the session-wide topic distribution θ or from the language model specified by the state sn.

Again, we impose Dirichlet priors on the distributions over topics θ and the distributions over words φ, as in LDA. We also assume the distributions over sources τ are governed by a Beta distribution. The session-wide topics are used slightly differently than in LM-HMMS: LM-HMMS was developed for social chats on Twitter, where topics are diffuse, unlikely to repeat, and hence often unique to each dialogue. By contrast, our models are designed for task-oriented dialogues in a given domain, where topics are more tightly clustered; thus, in TM-HMMS the session-wide topics are shared across the corpus. The posterior distributions of the state assignment sn, the word source rn,i and the topic assignment zn,i are

p(sn | r, s−n, w, γ, π) ∝ p(sn | s−n, γ) · p(wn | r, s, π),
p(rn,i | r−(n,i), s, w, π) ∝ p(rn,i | r−(n,i), π) · p(wn,i | r, s, w−(n,i), z, β),    (2)
p(zn,i | r, w, z−(n,i), α, β) ∝ p(zn,i | r, z−(n,i), α) · p(wn,i | r, w−(n,i), z, β),

where π is a symmetric Dirichlet prior on the session-wise word source distributions τm, and the other symbols are defined above. All these probabilities are Dirichlet-multinomial distributions and can therefore be computed efficiently.
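The TM-HMMS generative story can be simulated directly. The sketch below is ours (illustrative parameter shapes, not the authors' code), with corpus-wide topics shared across sessions and per-state language models as described above.

```python
import numpy as np

def sample_tm_hmms_dialogue(start, trans, state_lms, topics, alpha, a_pi, b_pi,
                            n_utts, rng, utt_len=8):
    """topics[k, v]: corpus-wide topics; state_lms[t, v]: per-state unigram LMs."""
    theta = rng.dirichlet(alpha * np.ones(topics.shape[0]))  # session-wide topic mixture
    tau = rng.beta(a_pi, b_pi)        # expected fraction of words from the topic source
    s = rng.choice(len(start), p=start)
    dialogue = []
    for _ in range(n_utts):
        utt = []
        for _ in range(utt_len):
            if rng.random() < tau:     # source r = session-wide topics
                z = rng.choice(topics.shape[0], p=theta)
                w = rng.choice(topics.shape[1], p=topics[z])
            else:                      # source r = language model of the current state
                w = rng.choice(state_lms.shape[1], p=state_lms[s])
            utt.append(int(w))
        dialogue.append((int(s), utt))
        s = rng.choice(trans.shape[1], p=trans[s])
    return dialogue
```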
3.3 TM-HMMSS

The TM-HMMSS model (Figure 2(c)) modifies TM-HMMS to re-sample the distribution over word sources τ at every utterance, instead of once at the beginning of each session. This modification allows the fraction of words drawn from the session-wide topics to vary over the course of the dialogue. This is attractive in task-oriented dialogue, where some sections of the dialogue always follow a similar script regardless of the session topic (for example, the opening, the closing, or asking the user whether they will take a survey). To support these patterns, TM-HMMSS conditions the source generator distribution on the current state. The generative story of TM-HMMSS is very similar to that of TM-HMMS, except that the distributions over word sources τ are sampled per state. A dialogue is generated as follows:

1: For each session, draw a topic distribution θ.
2: For each utterance n in the conversation, draw a state sn based on the previous state sn−1, and retrieve the state-specific distribution over word sources τsn.
3: For each word in utterance n, first sample a word source r according to τsn, and then, depending on r, generate a word w either from the session-wide topic distribution θ or from the language model specified by the state sn.

As in TM-HMMS, we assume the multinomial distributions θ and φ are drawn from Dirichlet priors, and the τ's are governed by Beta distributions. Inference for TM-HMMSS is exactly the same as for TM-HMMS, except that the posterior distribution over the word source rn,i is now

p(rn,i | r−(n,i), s, w, π) ∝ p(rn,i | r−(n,i), sn, π) · p(wn,i | r, s, w−(n,i), z, β),    (3)

where the first term is integrated over all sessions and conditioned on the state assignment.

3.4 Supporting Multiple Parties

Since our primary focus is task-oriented dialogues between two parties, we assume every word source is associated with two sets of LMs: one for the system/agent and another for the user/client. This configuration is similar to PolyLDA (Mimno et al., 2009) or LinkLDA (Yano et al., 2009), in that utterances from different parties are treated as different languages or as blog-post and comment pairs. We implement all our models under this setting, but omit the details from the plate diagrams for simplicity. In settings where the agent and client always alternate, each state emits both parties' text before transitioning to the next state. This is the case in the BusTime dataset, where the spoken dialogue system enforces strict turn-taking. In settings where the agent or client may produce more than one utterance in a row, each state emits either agent text or client text and then transitions to the next state. This is the case in the TechSupport corpus, where either conversant may send a message at any time.

3.5 Likelihood Estimation

To evaluate performance across different models, we compute the likelihood on a held-out test set. For the TM-HMM model, there are no local dependencies, and we therefore compute the marginal likelihood using the forward algorithm. However, for the TM-HMMS and TM-HMMSS models, the latent topic distribution θ creates local dependencies, rendering computation of the marginal likelihood intractable. Hence, we use a Chib-style estimator (Wallach et al., 2009). Although it is computationally more expensive, it gives a less biased approximation of the marginal likelihood, even for finite samples. This ensures that likelihood measurements are comparable across models.
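For reference, the log-space forward recursion used for this kind of marginal likelihood can be sketched as follows; the per-utterance emission log-probabilities are assumed to be available as a matrix, and the code is our own illustration rather than the authors' implementation.

```python
import numpy as np
from scipy.special import logsumexp

def hmm_log_marginal(log_start, log_trans, log_emit):
    """Marginal log-likelihood of one dialogue under an HMM.

    log_start[t]    : log of the initial state distribution
    log_trans[t, u] : log P(next state = u | current state = t)
    log_emit[n, t]  : log p(utterance n | state t), e.g. from the state's topic mixture
    """
    n_utts, _ = log_emit.shape
    log_alpha = log_start + log_emit[0]
    for n in range(1, n_utts):
        # alpha_n(u) = sum_t alpha_{n-1}(t) * P(u | t) * p(utt_n | u), in log space
        log_alpha = logsumexp(log_alpha[:, None] + log_trans, axis=0) + log_emit[n]
    return logsumexp(log_alpha)
```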
4 Experiments

In this section, we examine the effectiveness of our models. We first evaluate the models qualitatively by exploring the inferred state diagrams. We then perform a quantitative analysis using log likelihood measurements and an ordering task on a held-out test set. We train all models on 80% of the data and use the rest for testing. We run the Gibbs samplers for 1000 iterations and update all hyper-parameters using slice sampling (Neal, 2003; Wallach, 2008) every 10 iterations. The training likelihood suggests all models converge within 500–800 iterations. For all Chib-style estimators, we collect 100 samples along the Markov chain to approximate the marginal likelihood.

4.1 Qualitative Evaluation

Figure 3 shows the state diagram inferred for the BusTime corpus by TM-HMM without any supervision. (Recall that in BusTime, state transitions occur after each pair of system/user utterances, so we display them synchronously.) Every dialogue is opened by asking the user to say a bus route, or to say "I'm not sure." It then transitions to a state about locations, e.g., origin and destination. Either of these two states may be followed immediately by a confirmation step. After verifying all the necessary information, the system asks whether the user wants "the next few buses". (The system was designed this way because most users say "yes" to this question, obviating the date and time.) Otherwise, the system follows up with the user for the particular date and time. After the system reads out bus times, the user has the option to "repeat" or to ask for subsequent schedules.

For reference, we also include the human-annotated dialogue flow in Figure 4 (Williams, 2012); it illustrates only the most common design of system actions, without showing edge cases. Comparing these two figures, the dialogue flow inferred by our model along the most probable path (highlighted in bold red in Figure 3) is consistent with the underlying design. Furthermore, our models are able to capture edge cases (omitted for space) in a more general, probabilistic fashion. In summary, our models yield a flowchart very similar to the underlying design in a completely unsupervised way. (We considered various ways of making a quantitative evaluation of the inferred state diagram, but it proved difficult. Rather than attempt to justify a particular sub-division of the "design states", we instead give several straightforward quantitative evaluations in the next section.)

Figure 5 shows part of the flowchart for the TechSupport corpus, generated by the TM-HMMSS model. (Recall that in this corpus, state transitions occur after each agent or client utterance, and the two parties do not necessarily alternate, so we display client requests and agent responses separately.) A conversation usually starts with a welcome message from a customer support agent. Next, clients sometimes report a problem; otherwise, the agent gathers the client's identity. After these preliminaries, the agent usually checks the system version or platform settings. Then, information about the problem is exchanged, and a cycle ensues in which agents propose solutions and clients attempt them, reporting results. Usually, a conversation loops among these states until either the problem is resolved (as in the case shown in the figure) or the client is left with a reference number for future follow-up (not shown due to space limits). Although technical support is task-oriented, the scope of possible issues is vast and not prescribed. The table in Figure 5 lists the top ranked words of selected topics, the categories in which clients often report problems. It illustrates that, qualitatively, TM-HMMSS discovers both problem categories and conversation structure in our data.

As one of the baselines, we also include part of the flowchart generated by the LM-HMM model with a similar setting of T = 20 states. As illustrated by the highlighted state in Figure 6, the LM-HMM model conflates interactions that commonly occur at the beginning and end of a dialogue, i.e., "acknowledge agent" and "resolve problem", since their underlying language models are likely to produce similar probability distributions over words. By incorporating topic information, our proposed models (e.g., TM-HMMSS in Figure 5) are able to steer the state transitions towards more frequent flow patterns, which helps to overcome this weakness of the language model.

4.2 Quantitative Evaluation

In this section, we evaluate our models using log likelihood and an ordering task on a held-out test set. Both evaluation metrics measure the predictive power of a conversation model.
Figure 3: (Upper) Part of the flowchart inferred on BusTime by the TM-HMM model with K = 10 topics and T = 10 states. The most probable path is highlighted and is consistent with the underlying design (Figure 4). Cyan blocks are system actions and yellow blocks are user responses. In every block, the upper cell shows the top ranked words marginalized over all topics and the lower cell shows some examples of that state. The transition probability cut-off is 0.1. States are labelled manually.

Figure 4: (Left) Hand-crafted reference flowchart for BusTime (Williams, 2012). Only the most common dialogue flows are displayed. System prompts shown are example paraphrases. Edge cases are not included.
Log Likelihood The likelihood metric measures the probability of generating the test set under a given model. As shown in Figure 7, our models yield as good or better likelihood than the LM-HMM and LM-HMMS models on both datasets under all settings. Among our proposed models, TM-HMMS and TM-HMMSS perform better than TM-HMM on TechSupport, but not necessarily on BusTime. In addition, we notice that the marginal benefit of TM-HMMSS over TM-HMM is greater on the TechSupport dataset, where each dialogue focuses on one of many possible tasks. This coincides with our belief that, in the customer support data, topics are more conversation dependent and shared across the entire corpus; that is, different clients in different sessions might ask about similar issues.

Ordering Test Ritter et al. (2010) propose an evaluation based on a rank correlation coefficient, which measures the degree of similarity between any two orderings of sequential data. They use Kendall's τ as the evaluation metric, which is based on the agreement between the pairwise orderings of two sequences (Kendall, 1938). It ranges from −1 to +1, where +1 indicates an identical ordering and −1 indicates a reversed ordering. The idea is to generate all permutations of the utterances in a dialogue (including the true ordering) and compute the log likelihood of each under the model. Then, Kendall's τ is computed between the most probable permutation and the true ordering. The result is the average of the τ values over all dialogues in the test corpus. Ritter et al. (2010) limit their dataset to Twitter dialogues containing 3 to 6 posts (utterances), making it tractable to enumerate all permutations. However, our dialogues are much longer, and enumerating all possible permutations of dialogues with more than 20 or 30 utterances is infeasible. Instead, we incrementally build up the permutation set by adding one random permutation at a time and taking the most probable permutation after each addition. If this process were continued (intractably!) until all permutations were enumerated, the true value of Kendall's τ would be reached. In practice, the value appears to plateau after a few dozen measurements.
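A minimal sketch of this incremental ordering test is given below. It assumes a scorer model_loglik(utterance_list) returning the log likelihood of a candidate ordering under a trained model; that helper, like the rest of the snippet, is a hypothetical stand-in rather than the evaluation code used here.

```python
import numpy as np
from scipy.stats import kendalltau

def incremental_ordering_tau(utterances, model_loglik, n_perms=100, seed=0):
    """Grow the permutation set one random permutation at a time; after each
    addition, report Kendall's tau between the currently most probable
    permutation and the true ordering (the identity permutation)."""
    rng = np.random.default_rng(seed)
    true_order = np.arange(len(utterances))
    best_perm, best_score = true_order, model_loglik(list(utterances))
    taus = []
    for _ in range(n_perms):
        perm = rng.permutation(len(utterances))
        score = model_loglik([utterances[i] for i in perm])
        if score > best_score:              # keep the most probable permutation so far
            best_perm, best_score = perm, score
        tau, _ = kendalltau(best_perm, true_order)
        taus.append(tau)
    return taus                             # tends to plateau after a few dozen additions
```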
Figure 5: Part of the flowchart (left) and topic table (right) on the TechSupport dataset, generated by the TM-HMMSS model under settings of K = 20 topics and T = 20 states. The topic table lists the top ranked words of the issues discussed in the chats (topics labelled: purchase, browser, backup, boot, update, network, anti-virus, hardware, windows, office, outlook, license, facility). Cyan blocks are system actions and yellow blocks are user responses. In every block, the upper cell shows the top ranked words and the lower cell shows example string patterns of that state. The transition probability cut-off is 0.05. States and topics are labelled manually.
Figure 6: Part of the flowchart on the TechSupport dataset, generated by the LM-HMM model with T = 20 states. Cyan blocks are system actions and yellow blocks are user responses. In every block, the upper cell shows the top ranked words, and the lower cell shows example word sequences or string patterns of that state. The transition probability cut-off is 0.05. States are labelled manually. A poorly-inferred state is highlighted, which seems to conflate the "acknowledge agent" and "resolve problem" states that the TM-HMMSS model properly disentangles (Figure 5).

We present our results in Figure 8. Our models consistently perform as well as or better than the baseline models. For the BusTime data, all models perform relatively well except LM-HMM, which indicates only weak correlations. TM-HMM outperforms all other models under all settings, and this is also true for the TechSupport dataset. The LM-HMMS, TM-HMMS and TM-HMMSS models perform considerably well on BusTime, but not on the TechSupport data. These three models allow words to be generated from sources other than the states; although this improves log likelihood, it is possible that these models encode less information about the state sequences, at least in the more diffuse TechSupport data. In summary, under both quantitative evaluation measures our models advance the state of the art; however, which of our models is best depends on the application.
Figure 7: Negative log likelihood on the BusTime (upper) and TechSupport (lower) datasets (smaller is better) under different settings of the number of topics K and states T.

Figure 8: Average Kendall's τ on the BusTime (upper) and TechSupport (lower) datasets (larger is better) against the number of random permutations, under various settings of the number of topics K and states T.

5 Conclusion and Future Work

We have presented three new unsupervised models to discover latent structure in task-oriented dialogues. We evaluated them on two very different corpora: logs of spoken, human-computer dialogues about bus times, and logs of textual, human-human dialogues about technical support. We have shown that our models yield superior performance both qualitatively and quantitatively. One possible avenue for future work is scalability. Parallelization (Asuncion et al., 2012) or online learning (Doucet et al., 2001) could significantly speed up inference. In addition to MCMC, another class of inference methods is variational Bayesian analysis (Blei et al., 2003; Beal, 2003), which is inherently easier to distribute (Zhai et al., 2012) and to update online (Hoffman et al., 2010).

Acknowledgments

We would like to thank the anonymous reviewers and Jordan Boyd-Graber for their valuable comments. We are also grateful to Alan Ritter and Bill Dolan for helpful discussions, and to Kai (Anthony) Lui for providing the TechSupport dataset.

References

Arthur Asuncion, Padhraic Smyth, Max Welling, David Newman, Ian Porteous, and Scott Triglia. 2012. Distributed Gibbs sampling for latent variable models. Satanjeev Banerjee and Alexander I. Rudnicky. 2006. A TextTiling based approach to topic boundary detection in meetings. In INTERSPEECH. Srinivas Bangalore, Giuseppe Di Fabbrizio, and Amanda Stent. 2006. Learning the structure of task-driven human-human dialogs. In ACL, Stroudsburg, PA, USA. Regina Barzilay and Lillian Lee. 2004. Catching the drift: Probabilistic content models, with applications to generation and summarization. In NAACL, pages 113–120. Matthew J. Beal. 2003. Variational Algorithms for Approximate Bayesian Inference. Ph.D. thesis. Alan W. Black, Susanne Burger, Alistair Conkie, Helen Hastie, Simon Keizer, Nicolas Merigaud, Gabriel Parent, Gabriel Schubiner, Blaise Thomson, D. Williams, Kai Yu, Steve Young, and Maxine Eskenazi. 2010. Spoken dialog challenge 2010: Comparison of live and control test results. In SIGDIAL. David M. Blei, Andrew Ng, and Michael Jordan. 2003. Latent Dirichlet allocation. JMLR. Ananlada Chotimongkol. 2008. Learning the Structure of Task-oriented Conversations from the Corpus of In-domain Dialogs. Ph.D. thesis. Nigel Crook, Ramón Granell, and Stephen G. Pulman. 2009.
Unsupervised classification of dialogue acts using a dirichlet process mixture model. In SIGDIAL. Hal Daum´e III and Daniel Marcu. 2006. Bayesian query-focused summarization. In ACL-44: Proceedings of the 21st International Conference on Computational Linguistics and the 44th annual meeting of the Association for Computational Linguistics, pages 305–312, Morristown, NJ, USA. Association for Computational Linguistics. David DeVault, Kallirroi Georgila, Ron Artstein, Fabrizio Morbini, David Traum, Stefan Scherer, Albert Rizzo, and Louis-Philippe Morency. 2013. Verbal indicators of psychological distress in interactive dialogue with a virtual human. In SIGDIAL. Arnaud Doucet, Nando De Freitas, and Neil Gordon, editors. 2001. Sequential Monte Carlo methods in practice. Springer Texts in Statistics. Michel Galley, Kathleen McKeown, Eric FoslerLussier, and Hongyan Jing. 2003. Discourse segmentation of multi-party conversation. In ACL. Sharon Goldwater and Thomas L. Griffiths. 2007. A fully Bayesian approach to unsupervised part-ofspeech tagging. In ACL. Thomas L. Griffiths and Mark Steyvers. 2004. Finding scientific topics. PNAS, 101(Suppl 1):5228–5235. James Henderson and Oliver Lemon. 2008. Mixture model POMDPs for efficient handling of uncertainty in dialogue management. In ACL. Matthew Hoffman, David M. Blei, and Francis Bach. 2010. Online learning for latent Dirichlet allocation. In NIPS. Pei-yun Hsueh, Johanna D. Moore, and Steve Renals. 2006. Automatic segmentation of multiparty dialogue. In EACL. Llu´ıs F. Hurtado, Joaquin Planells, Encarna Segarra, Emilio Sanchis, and David Griol. 2010. A stochastic finite-state transducer approach to spoken dialog management. In INTERSPEECH. Dan Jurafsky, Elizabeth Shriberg, and Debra Biasca. 1997. Switchboard SWBD-DAMSL shallowdiscourse-function annotation coders manual. Institute of Cognitive Science Technical Report, pages 97–02. Maurice G. Kendall. 1938. A new measure of rank correlation. Biometrika Trust. Staffan Larsson and David R. Traum. 2000. Information state and dialogue management in the TRINDI dialogue move engine toolkit. Natural Language Engineering, 5(3/4):323–340. Esther Levin, Roberto Pieraccini, and Wieland Eckert. 2000. A stochastic model of human-machine interaction for learning dialogue strategies. IEEE Trans on Speech and Audio Processing, 8(1):11–23. Jingjing Liu, Stephanie Seneff, and Victor Zue. 2010. Dialogue-oriented review summary generation for spoken dialogue recommendation systems. In NAACL. David Mimno, Hanna Wallach, Jason Naradowsky, David Smith, and Andrew McCallum. 2009. Polylingual topic models. In EMNLP. Gabriel Murray, Steve Renals, and Jean Carletta. 2005. Extractive summarization of meeting recordings. In European Conference on Speech Communication and Technology. Radford M. Neal. 2000. Markov chain sampling methods for Dirichlet process mixture models. Journal of Computational and Graphical Statistics, 9(2):249– 265. Radford M. Neal. 2003. Slice sampling. Annals of Statistics, 31:705–767. 45 Matthew Purver, Konrad K¨ording, Thomas L. Griffiths, and Joshua Tenenbaum. 2006. Unsupervised topic modelling for multi-party spoken discourse. In ACL. Verena Rieser and Oliver Lemon. 2010. Natural language generation as planning under uncertainty for spoken dialogue systems. In EMNLP. Alan Ritter, Colin Cherry, and Bill Dolan. 2010. Unsupervised modeling of twitter conversations. In NAACL. Satinder Singh, Diane Litman, Michael Kearns, and Marilyn Walker. 2002. 
Optimizing dialogue management with reinforcement learning: Experiments with the NJFun system. Journal of Artificial Intelligence Research. Andreas Stolcke, Noah Coccaro, Rebecca Bates, Paul Taylor, Carol Van Ess-Dykema, Klaus Ries, Elizabeth Shriberg, Daniel Jurafsky, Rachel Martin, and Marie Meteer. 2000. Dialogue act modeling for automatic tagging and recognition of conversational speech. Computational Linguistics, September. David R Traum and Staffan Larsson. 2003. The information state approach to dialogue management. In Current and new directions in discourse and dialogue, pages 325–353. Hanna M. Wallach, Iain Murray, Ruslan Salakhutdinov, and David Mimno. 2009. Evaluation methods for topic models. In ICML. Hanna M. Wallach. 2008. Structured Topic Models for Language. Ph.D. thesis, University of Cambridge. Jason D. Williams and Suhrid Balakrishnan. 2009. Estimating probability of correctness for ASR N-best lists. In SIGDIAL. Jason D. Williams. 2012. Challenges and opportunities for state tracking in statistical spoken dialog systems: Results from two public deployments. Journal of Selected Topics in Signal Processing. Tae Yano, William W. Cohen, and Noah A. Smith. 2009. Predicting response to political blog posts with topic models. In NAACL, pages 477–485, Stroudsburg, PA, USA. ACL. Steve Young. 2006. Using POMDPs for dialog management. In Proceedings of the 1st IEEE/ACL Workshop on Spoken Language Technologies (SLT06). Ke Zhai, Jordan Boyd-Graber, Nima Asadi, and Mohamad Alkhouja. 2012. Mr. LDA: A flexible large scale topic modeling package using variational inference in mapreduce. In WWW. 46
2014
4
Proceedings of the 52nd Annual Meeting of the Association for Computational Linguistics, pages 424–434, Baltimore, Maryland, USA, June 23-25 2014. c⃝2014 Association for Computational Linguistics Surface Realisation from Knowledge-Bases Bikash Gyawali Universit´e de Lorraine, LORIA Villers-l`es-Nancy, F-54600, France [email protected] Claire Gardent CNRS, LORIA, UMR 7503 Vandoeuvre-l`es-Nancy, F-54500, France [email protected] Abstract We present a simple, data-driven approach to generation from knowledge bases (KB). A key feature of this approach is that grammar induction is driven by the extended domain of locality principle of TAG (Tree Adjoining Grammar); and that it takes into account both syntactic and semantic information. The resulting extracted TAG includes a unification based semantics and can be used by an existing surface realiser to generate sentences from KB data. Experimental evaluation on the KBGen data shows that our model outperforms a data-driven generate-and-rank approach based on an automatically induced probabilistic grammar; and is comparable with a handcrafted symbolic approach. 1 Introduction In this paper we present a grammar based approach for generating from knowledge bases (KB) which is linguistically principled and conceptually simple. A key feature of this approach is that grammar induction is driven by the extended domain of locality principle of TAG (Tree Adjoining Grammar) and takes into account both syntactic and semantic information. The resulting extracted TAGs include a unification based semantics and can be used by an existing surface realiser to generate sentences from KB data. To evaluate our approach, we use the benchmark provided by the KBGen challenge (Banik et al., 2012; Banik et al., 2013), a challenge designed to evaluate generation from knowledge bases; where the input is a KB subset; and where the expected output is a complex sentence conveying the meaning represented by the input. When compared with two other systems having taken part in the KBGen challenge, our system outperforms a data-driven, generate-and-rank approach based on an automatically induced probabilistic grammar; and produces results comparable to those obtained by a symbolic, rule based approach. Most importantly, we obtain these results using a general purpose approach that we believe is simpler and more transparent than current state of the art surface realisation systems generating from KB or DB data. 2 Related Work Our work is related to work on concept to text generation. Earlier work on concept to text generation mainly focuses on generation from logical forms using rule-based methods. (Wang, 1980) uses hand-written rules to generate sentences from an extended predicate logic formalism; (Shieber et al., 1990) introduces a head-driven algorithm for generating from logical forms; (Kay, 1996) defines a chart based algorithm which enhances efficiency by minimising the number of semantically incomplete phrases being built; and (Shemtov, 1996) presents an extension of the chart based generation algorithm presented in (Kay, 1996) which supports the generation of multiple paraphrases from underspecified semantic input. In all these approaches, grammar and lexicon are developed manually and it is assumed that the lexicon associates semantic sub-formulae with natural language expressions. Our approach is similar to these approaches in that it assumes a grammar encoding a compositional semantics. 
It differs from them however in that, in our approach, grammar and lexicon are automatically acquired from the data. With the development of the semantic web and the proliferation of knowledge bases, generation from knowledge bases has attracted increased interest and so called ontology verbalisers have been proposed which support the generation of text from (parts of) knowledge bases. One main 424 strand of work maps each axiom in the knowledge base to a clause. Thus the OWL verbaliser integrated in the Prot´eg´e tool (Kaljurand and Fuchs, 2007) provides a verbalisation of every axiom present in the ontology under consideration and (Wilcock, 2003) describes an ontology verbaliser using XML-based generation. As discussed in (Power and Third, 2010), one important limitation of these approaches is that they assume a simple deterministic mapping between knowledge representation languages and some controlled natural language (CNL). Specifically, the assumption is that each atomic term (individual, class, property) maps to a word and each axiom maps to a sentence. As a result, the verbalisation of larger ontology parts can produce very unnatural text such as, Every cat is an animal. Every dog is an animal. Every horse is an animal. Every rabbit is an animal. More generally, the CNL based approaches to ontology verbalisation generate clauses (one per axiom) rather than complex sentences and thus cannot adequately handle the verbalisation of more complex input such as the KBGen data where the KB input often requires the generation of a complex sentence rather than a sequence of base clauses. To generate more complex output from KB data, several alternative approaches have been proposed. The MIAKT project (Bontcheva and Wilks., 2004) and the ONTOGENERATION project (Aguado et al., 1998) use symbolic NLG techniques to produce textual descriptions from some semantic information contained in a knowledge base. Both systems require some manual input (lexicons and domain schemas). More sophisticated NLG systems such as TAILOR (Paris, 1988), MIGRAINE (Mittal et al., 1994), and STOP (Reiter et al., 2003) offer tailored output based on user/patient models. While offering more flexibility and expressiveness, these systems are difficult to adapt by non-NLG experts because they require the user to understand the architecture of the NLG systems (Bontcheva and Wilks., 2004). Similarly, the NaturalOWL system (Galanis et al., 2009) has been proposed to generate fluent descriptions of museum exhibits from an OWL ontology. This approach however relies on extensive manual annotation of the input data. The SWAT project has focused on producing descriptions of ontologies that are both coherent and efficient (Williams and Power, 2010). For instance, instead of the above output, the SWAT system would generate the sentence: The following are kinds of animals: cats, dogs, horses and rabbits. . In this approach too however, the verbaliser output is strongly constrained by a simple Definite Clause Grammar covering simple clauses and sentences verbalising aggregation patterns such as the above. More generally, the sentences generated by ontology verbalisers cover a limited set of linguistics constructions; the grammar used is manually defined; and the mapping between semantics and strings is assumed to be deterministic (e.g., a verb maps to a relation and a noun to a concept). 
In contrast, we propose an approach which can generate complex sentences from KB data, where the grammar is acquired from the data, and where no assumption is made about the mapping between semantics and NL expressions. Recent work has focused on data-driven generation from frames, lambda terms and database entries. (DeVault et al., 2008) describes an approach for generating from the frames produced by a dialog system. They induce a probabilistic Tree Adjoining Grammar from a training set aligning frames and sentences using the grammar induction technique of (Chiang, 2000), and use a beam search with weighted features learned from the training data to rank alternative expansions at each step. (Lu and Ng, 2011) focuses on generating natural language sentences from logical forms (i.e., lambda terms) using a synchronous context-free grammar. They introduce a novel synchronous context-free grammar formalism for generating from lambda terms; induce such a synchronous grammar using a generative model; and extract the best output sentence from the generated forest using a log linear model. (Wong and Mooney, 2007; Lu et al., 2009) focus on generating from variable-free tree-structured representations such as the CLANG formal language used in the ROBOCUP competition, the database entries collected by (Liang et al., 2009) for weather forecast generation, and the air travel domain (ATIS dataset) of (Dahl et al., 1994). (Wong and Mooney, 2007) uses synchronous grammars to transform a variable-free tree-structured meaning representation into sentences, while (Lu et al., 2009) uses a Conditional Random Field to generate from the same meaning representations. Finally, more recent papers propose approaches which perform both surface realisation and content selection. (Angeli et al., 2010) proposes a log linear model which decomposes into a sequence of discriminative local decisions: the first classifier determines which records to mention; the second, which fields of these records to select; and the third, which words to use to verbalise the selected fields. (Kim and Mooney, 2010) uses a generative model for content selection and verbalises the selected input using WASP−1, an existing generator. Finally, (Konstas and Lapata, 2012b; Konstas and Lapata, 2012a) develop a joint optimisation approach for content selection and surface realisation using a generic, domain-independent probabilistic grammar which captures the structure of the database and the mapping from fields to strings. They intersect the grammar with a language model to improve fluency; use a weighted hypergraph to pack the derivations; and find the best derivation tree using the Viterbi algorithm.

The function of a gated channel is to release particles from the endoplasmic reticulum
:TRIPLES (
  (|Release-Of-Calcium646| |object| |Particle-In-Motion64582|)
  (|Release-Of-Calcium646| |base| |Endoplasmic-Reticulum64603|)
  (|Gated-Channel64605| |has-function| |Release-Of-Calcium646|)
  (|Release-Of-Calcium646| |agent| |Gated-Channel64605|))
:INSTANCE-TYPES (
  (|Particle-In-Motion64582| |instance-of| |Particle-In-Motion|)
  (|Endoplasmic-Reticulum64603| |instance-of| |Endoplasmic-Reticulum|)
  (|Gated-Channel64605| |instance-of| |Gated-Channel|)
  (|Release-Of-Calcium646| |instance-of| |Release-Of-Calcium|))
:ROOT-TYPES (
  (|Release-Of-Calcium646| |instance-of| |Event|)
  (|Particle-In-Motion64582| |instance-of| |Entity|)
  (|Endoplasmic-Reticulum64603| |instance-of| |Entity|)
  (|Gated-Channel64605| |instance-of| |Entity|))
Figure 1: Example KBGen Scenario
Our approach differs from the approaches which assume variable-free tree-structured representations (Wong and Mooney, 2007; Lu et al., 2009) or database entries (Kim and Mooney, 2010; Konstas and Lapata, 2012b; Konstas and Lapata, 2012a) in that it handles graph-based KB input and assumes a compositional semantics. It is closest to (DeVault et al., 2008) and (Lu and Ng, 2011), who extract a grammar encoding syntax and semantics from frames and lambda terms respectively. It differs from the former, however, in that it enforces a tighter syntax/semantics integration by requiring that the elementary trees of our extracted grammar encode the appropriate linking information. While (DeVault et al., 2008) extracts a TAG associating each elementary tree with a semantics, we additionally require that these trees encode the appropriate linking between syntactic and semantic arguments, thereby restricting the space of possible tree combinations and drastically reducing the search space. Although conceptually related to (Lu and Ng, 2011), our approach extracts a unification-based grammar rather than one with lambda terms. The extraction process and the generation algorithms are also fundamentally different: we use a simple, mainly symbolic approach, whereas they use a generative approach for grammar induction and a discriminative approach for sentence generation.

3 The KBGen Task

The KBGen task was introduced as a new shared task at Generation Challenges 2013 (Banik et al., 2013; see http://www.kbgen.org) and aimed to compare different generation systems on KB data. Specifically, the task is to verbalise a subset of a knowledge base. For instance, the KB input shown in Figure 1 can be verbalised as:

(1) The function of a gated channel is to release particles from the endoplasmic reticulum

The KB subsets forming the KBGen input data were pre-selected from the AURA biology knowledge base (Gunning et al., 2010), which was manually encoded by biology teachers and encodes knowledge about events, entities, properties and relations, where relations include event-to-entity, event-to-event, event-to-property and entity-to-property relations. AURA uses a frame-based knowledge representation and reasoning system called Knowledge Machine (Clark and Porter, 1997), which was translated into first-order logic with equality and, from there, into multiple different formats including SILK (Grosof, 2012) and OWL2 (Motik et al., 2009). It is available for download in various formats including OWL2 (http://www.ai.sri.com/halo/halobook2010/exported-kb/biokb.html).

Figure 2: Example FB-LTAG with Unification-Based Semantics. Dotted lines indicate substitution and adjunction operations between trees. The variables decorating the tree nodes (e.g., GC) abbreviate feature structures of the form [idx : V] where V is a unification variable shared with the semantics.

4 Generating from the KBGen Knowledge-Base

To generate from the KBGen data, we induce a Feature-Based Lexicalised Tree Adjoining Grammar (FB-LTAG, (Vijay-Shanker and Joshi, 1988)) augmented with a unification-based semantics (Gardent and Kallmeyer, 2003) from the training data.
We then use this grammar and an existing surface realiser to generate from the test data.

4.1 Feature-Based Lexicalised Tree Adjoining Grammar

Figure 2 shows an example FB-LTAG augmented with a unification-based semantics. Briefly, an FB-LTAG consists of a set of elementary trees which can be either initial or auxiliary. Initial trees are trees whose leaves are labeled with substitution nodes (marked with a down arrow) or terminal categories. Auxiliary trees are distinguished by a foot node (marked with a star) whose category must be the same as that of the root node. In addition, in an FB-LTAG, each elementary tree is anchored by a lexical item (lexicalisation), and the nodes of the elementary trees are decorated with two feature structures, called top and bottom, which are unified during derivation. Two tree-composition operations are used to combine trees, namely substitution and adjunction. While substitution inserts a tree at a substitution node of another tree, adjunction inserts an auxiliary tree into a tree. In terms of unifications, substitution unifies the top feature structure of the substitution node with the top feature structure of the root of the tree being substituted in. Adjunction unifies the top feature structure of the root of the auxiliary tree with the top feature structure of the node being adjoined to, and the bottom feature structure of the foot node of the auxiliary tree with the bottom feature structure of the node being adjoined to.

In an FB-LTAG augmented with a unification-based semantics, each tree is associated with a semantics, i.e., a set of literals whose arguments may be constants or unification variables. The semantics of a derived tree is the union of the semantics of the trees contributing to its derivation, modulo unification. Importantly, semantic variables are shared with syntactic variables (i.e., variables occurring in the feature structures decorating the tree nodes) so that when trees are combined, the appropriate syntax/semantics linking is enforced. For instance, given the semantics

instance-of(RoC,Release-Of-Calcium), object(RoC,PM), agent(RoC,GC), base(RoC,ER), instance-of(ER,Endoplasmic-Reticulum), instance-of(GC,Gated-Channel), instance-of(PM,Particle-In-Motion)

the grammar will generate A gated channel releases particles from the endoplasmic reticulum but not, e.g., Particles releases a gated channel from the endoplasmic reticulum.

4.2 Grammar Extraction

We extract our FB-LTAG with unification semantics from the KBGen training data in two main steps. First, we align the KB data with the input string. Second, we induce a Tree Adjoining Grammar augmented with a unification-based semantics from the aligned data.

4.2.1 Alignment

Given a Sentence/Input pair (S, I) provided by the KBGen Challenge, the alignment procedure associates each entity and event variable in I with a substring of S. To do this, we use the entity and event lexicons provided by the KBGen organisers. The event lexicon maps event types to verbs, their inflected forms and nominalisations, while the entity lexicon maps entity types to a noun and its plural form. For instance, the lexicon entries for the event and entity types shown in Figure 1 are as shown in Figure 3.
For each entity and each event variable V in I, we retrieve the corresponding type (e.g., Particle-In-Motion for Particle-In-Motion64582); search the KBGen lexicon for the corresponding phrases (e.g., molecule in motion, molecules in motion); and associate V with the phrase in S which matches one of these phrases. Figure 3 shows an example lexicon and the resulting alignment obtained for the scenario shown in Figure 1. Note that there is not always an exact match between the phrase associated with a type in the KBGen lexicon and the phrase occurring in the training sentence. To account for this, we use some additional similarity-based heuristics to identify the phrase in the input string that is most likely to be associated with a variable lacking an exact match. For example, for entity variables (e.g., Particle-In-Motion64582), we search the input string for nouns (e.g., particles) whose overlap with the variable type (e.g., Particle-In-Motion) is not empty.

4.2.2 Inducing an FB-LTAG from the aligned data

To extract a Feature-Based Lexicalised Tree Adjoining Grammar (FB-LTAG) from the KBGen data, we parse the sentences of the training corpus; project the entity and event variables onto the syntactic projection of the strings they are aligned with; and extract the elementary trees of the resulting FB-LTAG from the parse trees using semantic information. Figure 4 shows the trees extracted from the scenario given in Figure 1.

To associate each training example sentence with a syntactic parse, we use the Stanford parser. After alignment, the entity and event variables occurring in the input semantics are associated with substrings of the yield of the syntactic parse tree. We project these variables up the syntactic tree to reflect headedness. A variable aligned with a noun is projected to the NP level, or to the immediately dominating PP if it occurs in the subtree dominated by the leftmost daughter of that PP. A variable aligned with a verb is projected to the first S node immediately dominating that verb or, in the case of a predicative sentence, to the root of that sentence. (Initially, we used the head information provided by the Stanford parser. In practice, however, we found that the heuristics we defined to project semantic variables to the corresponding syntactic projection were more accurate and better supported our grammar extraction process.)

Once entity and event variables have been projected up the parse trees, we extract elementary FB-LTAG trees and their semantics from the input scenario as follows. First, the subtrees whose root node is indexed with an entity variable are extracted. This results in a set of NP and PP trees anchored with entity names and associated with the predication true of the indexing variable. Second, the subtrees capturing relations between variables are extracted. To perform this extraction, each input variable X is associated with a set of dependent variables, i.e., the set of variables Y such that X is related to Y (R(X, Y)). The minimal tree containing all and only the dependent variables D(X) of a variable X is then extracted and associated with the set of literals Φ such that Φ = {R(Y, Z) | (Y = X ∧ Z ∈ D(X)) ∨ (Y, Z ∈ D(X))}. This procedure extracts the subtrees relating the argument variables of a semantic functor such as an event or a role, e.g., a tree describing a verb and its arguments, as shown in the top part of Figure 4.
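The dependent-variable bookkeeping described above is easy to picture when the KB input is flattened into (subject, relation, object) triples; the sketch below follows the definitions of D(X) and Φ given in the text (the helper names and the triple encoding are ours, not the system's).

```python
def dependent_variables(triples, x):
    """D(X): all variables Y such that some relation R(X, Y) holds in the input."""
    return {o for s, r, o in triples if s == x and r != "instance-of"}

def literals_for_functor(triples, x):
    """Phi = { R(Y, Z) : (Y = X and Z in D(X)) or (Y in D(X) and Z in D(X)) }."""
    dep = dependent_variables(triples, x)
    return [(s, r, o) for s, r, o in triples
            if r != "instance-of"                 # type literals are not relations
            and ((s == x and o in dep) or (s in dep and o in dep))]

triples = [("Release-Of-Calcium646", "object", "Particle-In-Motion64582"),
           ("Release-Of-Calcium646", "base", "Endoplasmic-Reticulum64603"),
           ("Gated-Channel64605", "has-function", "Release-Of-Calcium646"),
           ("Release-Of-Calcium646", "agent", "Gated-Channel64605")]
print(literals_for_functor(triples, "Release-Of-Calcium646"))
```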
Particle-In-Motion: molecule in motion, molecules in motion
Endoplasmic-Reticulum: endoplasmic reticulum, endoplasmic reticulum
Gated-Channel: gated Channel, gated Channels
Release-Of-Calcium: releases, release, released, release
The function of a (gated channel, Gated-Channel64605) is to (release, Release-Of-Calcium646) (particles, Particle-In-Motion64582) from the (endoplasmic reticulum, Endoplasmic-Reticulum64603)
Figure 3: Example Entries from the KBGen Lexicon and example alignment

Figure 4: Extracted Grammar for "The function of a gated channel is to release particles from the endoplasmic reticulum". Variable names have been abbreviated and the KBGen tuple notation converted to terms so as to fit the input format expected by our surface realiser.

Note that such a tree may capture a verb occurring in a relative or a subordinate clause (together with its arguments), thus allowing for complex sentences including a relative clause or relating a main and a subordinate clause. The resulting grammar extracted from the parse trees (cf. e.g., Figure 4) is a Feature-Based Tree Adjoining Grammar with a unification-based compositional semantics, as described in (Gardent and Kallmeyer, 2003). In particular, our grammars differ from the traditional probabilistic Tree Adjoining Grammars extracted as described in, e.g., (Chiang, 2000) in that they encode both syntax and semantics rather than just syntax. They also differ from the semantic FB-TAG extracted by (DeVault et al., 2008) in that (i) they encode the linking between syntactic and semantic arguments; (ii) they allow for elementary trees spanning discontiguous strings (e.g., The function of X is to release Y); and (iii) they enforce the semantic principle underlying TAG, namely that an elementary tree containing a syntactic functor also contains its syntactic arguments.

4.3 Generation

To generate with the grammar extracted from the KBGen data, we use the GenI surface realiser (Gardent et al., 2007). Briefly, given an input semantics and an FB-LTAG with a unification-based semantics, GenI selects all grammar entries whose semantics subsumes the input semantics; combines these entries using the FB-LTAG combination operations (i.e., adjunction and substitution); and outputs the yield of all derived trees which are syntactically complete and whose semantics is the input semantics. To rank the generator output, we train a language model on the GENIA corpus (http://www.nactem.ac.uk/genia/), a corpus of 2000 MEDLINE abstracts about biology containing more than 400,000 words (Kim et al., 2003), and use this model to rank the generated sentences by decreasing probability. Thus, for instance, given the input semantics shown in Figure 1 and the grammar depicted in Figure 4, the surface realiser will select all of these trees; combine them using the FB-LTAG substitution operation; and output as the generated sentence the yield of the resulting derived tree, namely The function of a gated channel is to release particles from the endoplasmic reticulum.
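The ranking step can be pictured with a simple add-one-smoothed bigram model standing in for the GENIA-trained language model; the class below is our own sketch, not the toolkit actually used in the system.

```python
import math
from collections import Counter

class BigramLM:
    """Add-one-smoothed bigram language model trained on raw sentences."""
    def __init__(self, sentences):
        self.unigrams, self.bigrams = Counter(), Counter()
        for sent in sentences:
            toks = ["<s>"] + sent.lower().split() + ["</s>"]
            self.unigrams.update(toks[:-1])          # context counts
            self.bigrams.update(zip(toks, toks[1:]))
        self.vocab = len(self.unigrams) + 1

    def logprob(self, sentence):
        toks = ["<s>"] + sentence.lower().split() + ["</s>"]
        return sum(math.log((self.bigrams[(a, b)] + 1) /
                            (self.unigrams[a] + self.vocab))  # add-one smoothing
                   for a, b in zip(toks, toks[1:]))

def rank_realisations(candidates, lm):
    """Order the surface realiser's output sentences by decreasing LM probability."""
    return sorted(candidates, key=lm.logprob, reverse=True)
```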
However, this procedure only works if the entries necessary to generate from the given input are present in the grammar. To handle new, unseen input, we proceed in two ways. First, we try to guess a grammar entry from the shape of the input and the existing grammar. Second, we expand the grammar by decomposing the extracted trees into simpler ones.
4.4 Guessing new grammar entries
Given the limited size of the training data, it is often the case that input from the test data will have no matching grammar unit. To handle such previously unseen input, we start by partitioning the input semantics into sub-semantics corresponding to events, entities and roles. For each entity variable X of type Type, we create a default NP tree whose semantics is a literal of the form instance-of(X,Type). For event variables, we search the lexicon for an entry with a matching or similar semantics, i.e., an entry with the same number and same type of literals (literals with the same arity and with identical relations). When one is found, a grammar entry is constructed for the unseen event variable by substituting the event type of the matching entry with the type of the event variable. For instance, given the input semantics instance-of(C,Carry), object(C,X), base(C,Y), has-function(Z,C), agent(C,Z), this procedure will create a grammar entry identical to that shown at the top of Figure 4, except that the event type Release-of-Calcium is changed to Carry and the terminal release to the word form associated in the KBGen lexicon with this concept, namely the verb carry.
4.5 Expanding the Grammar
While the extracted grammar nicely captures predicate/argument dependencies, it is very specific to the items seen in the training data. To reduce overfitting, we generalise the extracted grammar by extracting, from each event tree, subtrees that capture structures with fewer arguments and optional modifiers. For each event tree τ extracted from the training data which contains a subject-verb-object subtree τ′, we add τ′ to the grammar and associate it with the semantics of τ minus the relations associated with the arguments that have been removed. For instance, given the extracted tree for the sentence "Aquaporin facilitates the movement of water molecules through hydrophilic channels.", this procedure will construct a new grammar tree corresponding to the subphrase "Aquaporin facilitates the movement of water molecules". We also construct both simpler event trees and optional modifier trees by extracting, from event trees, PP trees which are associated with a relational semantics. For instance, given the tree shown in Figure 4, the PP tree associated with the relation base(RoC,ER) is removed, thus creating two new trees as illustrated in Figure 5: an S tree corresponding to the sentence "The function of a gated channel is to release particles" and an auxiliary PP tree corresponding to the phrase "from the endoplasmic reticulum". Similarly, in the above example, a PP tree corresponding to the phrase "through hydrophilic channels" will be extracted. As with the base grammar, missing grammar entries are guessed from the expanded grammar. However, we do this only in cases where a correct grammar entry cannot be guessed from the base grammar.
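The signature-based entry guessing of Section 4.4 (which, as just noted, is also applied to the expanded grammar) can be sketched as follows; the data structures and helper names are ours and purely illustrative, assuming each entry records its event type, its anchor word and its flat semantics.

```python
def signature(literals):
    """Semantic signature: sorted (relation, arity) pairs of the non-type
    literals, e.g. agent/2, base/2, has-function/2, object/2."""
    return sorted((rel, len(args)) for rel, *args in literals
                  if rel != "instance-of")

def guess_entry(event_type, literals, grammar, kbgen_lexicon):
    """Clone an entry with the same signature, swapping in the unseen event
    type and its word form from the KBGen lexicon (the tree is reused as is)."""
    target = signature(literals)
    for entry in grammar:
        if signature(entry["semantics"]) == target:
            guessed = dict(entry)
            guessed["event_type"] = event_type
            guessed["anchor"] = kbgen_lexicon[event_type][0]
            return guessed
    return None

release_entry = {"event_type": "Release-Of-Calcium", "anchor": "release",
                 "semantics": [("instance-of", "RoC", "Release-Of-Calcium"),
                               ("object", "RoC", "PM"), ("base", "RoC", "ER"),
                               ("has-function", "GC", "RoC"),
                               ("agent", "RoC", "GC")]}
carry_literals = [("instance-of", "C", "Carry"), ("object", "C", "X"),
                  ("base", "C", "Y"), ("has-function", "Z", "C"),
                  ("agent", "C", "Z")]
guessed = guess_entry("Carry", carry_literals, [release_entry], {"Carry": ["carry"]})
print(guessed["anchor"])   # carry
```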
5 Experimental Setup
We evaluate our approach on the KBGen data and compare it with the KBGen reference and two other systems that took part in the KBGen challenge.
5.1 Training and test data
Following a practice introduced by Angeli et al. (2010), we use the term scenario to denote a KB subset paired with a sentence. The KBGen benchmark contains 207 scenarios for training and 72 for testing. Each KB subset consists of a set of triples, and each scenario contains on average 16 triples and 17 words.
5.2 Systems
We evaluate three configurations of our approach on the KBGen test data: one without grammar expansion (BASE); a second with a manual grammar expansion (MANEXP); and a third one with automated grammar expansion (AUTEXP). We compare the results obtained with those obtained by two other systems participating in the KBGen challenge, namely the UDEL system, a symbolic rule-based system developed by a group of students at the University of Delaware, and the IMS system, a statistical system using a probabilistic grammar induced from the training data.
5.3 Metrics
We evaluate system output automatically, using the BLEU-4 modified precision score (Papineni et al., 2002) with the human-written sentences as reference. We also report results from a human-based evaluation. In this evaluation, participants were asked to rate sentences along three dimensions: fluency (Is the text easy to read?), grammaticality, and meaning similarity or adequacy (Does the meaning conveyed by the generated sentence correspond to the meaning conveyed by the reference sentence?). The evaluation was done online using the LG-Eval toolkit (Kow and Belz, 2012); subjects used a sliding scale from -50 to +50, and a Latin Square Experimental Design was used to ensure that each evaluator saw the same number of outputs from each system and for each test set item. Twelve subjects participated in the evaluation and 3 judgments were collected for each output.
6 Results and Discussion

System    All    Covered   Coverage   # Trees
IMS       0.12   0.12      100%       -
UDEL      0.32   0.32      100%       -
Base      0.04   0.39      30.5%      371
ManExp    0.28   0.34      83%        412
AutExp    0.29   0.29      100%       477
Figure 6: BLEU scores and grammar size (number of elementary TAG trees).

Figure 6 summarises the results of the automatic evaluation and shows the size (number of elementary TAG trees) of the grammars extracted from the KBGen data. The average BLEU score is given with respect to all inputs (All) and to those inputs for which the systems generate at least one sentence (Covered). While both the IMS and the UDEL system have full coverage, our BASE system strongly undergenerates, failing to account for 69.5% of the test data. However, because the extracted grammar is linguistically principled and relatively compact, it is possible to manually edit it. Indeed, the MANEXP results show that, by adding 41 trees to the grammar, coverage can be increased by 52.5 points, reaching a coverage of 83%. Finally, the AUTEXP results demonstrate that the automated expansion mechanism permits achieving full coverage while keeping a relatively small grammar (477 trees).

[Figure 5: Trees added by the expansion process: an S tree for "The function of a gated channel is to release particles", carrying the semantics instance-of(RoC, Release-of-Calcium), object(RoC, PM), has-function(GC, RoC) and agent(RoC, GC), and an auxiliary VP-adjoining PP tree for "from the endoplasmic reticulum", carrying base(RoC, ER).]

          Fluency                      Grammaticality               Meaning Similarity
System    Mean   Homogeneous Subsets   Mean   Homogeneous Subsets   Mean   Homogeneous Subsets
UDEL      4.36   A                     4.48   A                     3.69   A
AutExp    3.45   B                     3.55   B                     3.65   A
IMS       1.91   C                     2.05   C                     1.31   B
Figure 7: Human evaluation results on a scale of 0 to 5. Homogeneous subsets are determined using Tukey's post-hoc test with p < 0.05.
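The two BLEU columns in Figure 6 differ only in how uncovered inputs are handled. One way to compute both figures, sketched below under the assumption that references and system outputs are tokenised and that uncovered inputs are represented as None, uses NLTK's corpus_bleu; the actual evaluation used BLEU-4 as in Papineni et al. (2002), so this snippet is illustrative rather than the challenge's scoring script.

```python
from nltk.translate.bleu_score import corpus_bleu

def all_and_covered_bleu(references, outputs):
    """references: one tokenised reference sentence per scenario.
    outputs: tokenised system output per scenario, or None when the system
    produced nothing for that scenario (an uncovered input)."""
    # "All": score every scenario, treating uncovered ones as empty output.
    refs_all = [[ref] for ref in references]
    hyps_all = [out if out is not None else [] for out in outputs]
    # "Covered": restrict to scenarios for which a sentence was produced.
    covered = [(ref, out) for ref, out in zip(references, outputs)
               if out is not None]
    refs_cov = [[ref] for ref, _ in covered]
    hyps_cov = [out for _, out in covered]
    coverage = len(covered) / len(references)
    return corpus_bleu(refs_all, hyps_all), corpus_bleu(refs_cov, hyps_cov), coverage
```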
In terms of BLEU score, the best version of our system (AUTEXP) outperforms the probabilistic approach of IMS by a large margin (+0.17) and produces results similar to the fully handcrafted UDEL system (-0.03). In sum, our approach yields BLEU scores and a coverage similar to those obtained by a handcrafted system, and outperforms a probabilistic approach. One key feature of our approach is that the grammar extracted from the training data is linguistically principled in that it obeys the extended domain of locality principle of Tree Adjoining Grammars. As a result, the extracted grammar is compact and can be manually modified to fit the needs of an application, as shown by the good results obtained when using the MANEXP configuration.
We now turn to the results of the human evaluation. Figure 7 summarises the results, in which systems are grouped by letters when there is no significant difference between them (significance level: p < 0.05). We used ANOVAs and post-hoc Tukey tests to test for significance. The differences between systems are statistically significant throughout, except for meaning similarity (adequacy), where UDEL and our system are on the same level. Across the metrics, our system consistently ranks second, behind the symbolic UDEL system and before the statistical IMS system, thus confirming the ranking based on BLEU.
7 Conclusion
In Tree Adjoining Grammar, the extended domain of locality principle ensures that TAG trees group together in a single structure a syntactic predicate and its arguments. Moreover, the semantic principle requires that each elementary tree captures a single semantic unit. Together, these two principles ensure that TAG elementary trees capture basic semantic units and their dependencies. In this paper, we presented a grammar extraction approach which ensures that extracted grammars comply with these two basic TAG principles. Using the KBGen benchmark, we then showed that the resulting induced FB-LTAG compares favorably with competing symbolic and statistical approaches when used to generate from knowledge base data.
In the current version of the generator, the output is ranked using a simple language model trained on the GENIA corpus. We observed that this often fails to return the best output in terms of BLEU score, fluency, grammaticality and/or meaning. In the future, we plan to remedy this using a ranking approach such as those proposed in (Velldal and Oepen, 2006; White and Rajkumar, 2009).
References
G. Aguado, A. Bañón, J. Bateman, S. Bernardos, M. Fernández, A. Gómez-Pérez, E. Nieto, A. Olalla, R. Plaza, and A. Sánchez. 1998. Ontogeneration: Reusing domain and linguistic ontologies for Spanish text generation. In Workshop on Applications of Ontologies and Problem Solving Methods, ECAI, volume 98.
Gabor Angeli, Percy Liang, and Dan Klein. 2010. A simple domain-independent probabilistic approach to generation. In Proceedings of the 2010 Conference on Empirical Methods in Natural Language Processing, pages 502–512. Association for Computational Linguistics.
Eva Banik, Claire Gardent, Donia Scott, Nikhil Dinesh, and Fennie Liang. 2012. KBGen: Text generation from knowledge bases as a new shared task. In Proceedings of the seventh International Natural Language Generation Conference, pages 141–145. Association for Computational Linguistics.
Eva Banik, Claire Gardent, Eric Kow, et al. 2013. The KBGen challenge.
In Proceedings of the 14th European Workshop on Natural Language Generation (ENLG), pages 94–97. K. Bontcheva and Y. Wilks. 2004. Automatic report generation from ontologies: the miakt approach. In Ninth International Conference on Applications of Natural Language to Information Systems (NLDB’2004). Lecture Notes in Computer Science 3136, Springer, Manchester, UK. David Chiang. 2000. Statistical parsing with an automatically-extracted tree adjoining grammar. In Proceedings of the 38th Annual Meeting on Association for Computational Linguistics, pages 456–463. Association for Computational Linguistics. Peter Clark and Bruce Porter. 1997. Building concept representations from reusable components. In AAAI/IAAI, pages 369–376. Citeseer. Deborah A Dahl, Madeleine Bates, Michael Brown, William Fisher, Kate Hunicke-Smith, David Pallett, Christine Pao, Alexander Rudnicky, and Elizabeth Shriberg. 1994. Expanding the scope of the atis task: The atis-3 corpus. In Proceedings of the workshop on Human Language Technology, pages 43–48. Association for Computational Linguistics. David DeVault, David Traum, and Ron Artstein. 2008. Making grammar-based generation easier to deploy in dialogue systems. In Proceedings of the 9th SIGdial Workshop on Discourse and Dialogue, pages 198–207. Association for Computational Linguistics. D. Galanis, G. Karakatsiotis, G. Lampouras, and I. Androutsopoulos. 2009. An open-source natural language generator for owl ontologies and its use in prot´eg´e and second life. In Proceedings of the 12th Conference of the European Chapter of the Association for Computational Linguistics: Demonstrations Session, pages 17–20. Association for Computational Linguistics. Claire Gardent and Laura Kallmeyer. 2003. Semantic construction in feature-based tag. In Proceedings of the tenth conference on European chapter of the Association for Computational Linguistics-Volume 1, pages 123–130. Association for Computational Linguistics. Claire Gardent, Eric Kow, et al. 2007. A symbolic approach to near-deterministic surface realisation using tree adjoining grammar. In ACL, volume 7, pages 328–335. B. Grosof. 2012. The silk project: Semantic inferencing on large knowledge. Technical report, SRI. http://silk.semwebcentral.org/. D. Gunning, V. K. Chaudhri, P. Clark, K. Barker, ShawYi Chaw, M. Greaves, B. Grosof, A. Leung, D. McDonald, S. Mishra, J. Pacheco, B. Porter, A. Spaulding, D. Tecuci, and J. Tien. 2010. Project halo update - progress toward digital aristotle. AI Magazine, Fall:33–58. K. Kaljurand and N.E. Fuchs. 2007. Verbalizing owl in attempto controlled english. Proceedings of OWLED07. Martin Kay. 1996. Chart generation. In Proceedings of the 34th annual meeting on Association for Computational Linguistics, pages 200–204. Association for Computational Linguistics. Joohyun Kim and Raymond J Mooney. 2010. Generative alignment and semantic parsing for learning from ambiguous supervision. In Proceedings of the 23rd International Conference on Computational Linguistics: Posters, pages 543–551. Association for Computational Linguistics. J-D Kim, Tomoko Ohta, Yuka Tateisi, and Junichi Tsujii. 2003. Genia corpusa semantically annotated corpus for bio-textmining. Bioinformatics, 19(suppl 1):i180–i182. Ioannis Konstas and Mirella Lapata. 2012a. Conceptto-text generation via discriminative reranking. In Proceedings of the 50th Annual Meeting of the Association for Computational Linguistics: Long Papers-Volume 1, pages 369–378. Association for Computational Linguistics. 
Ioannis Konstas and Mirella Lapata. 2012b. Unsupervised concept-to-text generation with hypergraphs. In Proceedings of the 2012 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, pages 752–761. Association for Computational Linguistics. Eric Kow and Anja Belz. 2012. Lg-eval: A toolkit for creating online language evaluation experiments. In LREC, pages 4033–4037. 433 Percy Liang, Michael I Jordan, and Dan Klein. 2009. Learning semantic correspondences with less supervision. In Proceedings of the Joint Conference of the 47th Annual Meeting of the ACL and the 4th International Joint Conference on Natural Language Processing of the AFNLP: Volume 1-Volume 1, pages 91–99. Association for Computational Linguistics. Wei Lu and Hwee Tou Ng. 2011. A probabilistic forest-to-string model for language generation from typed lambda calculus expressions. In Proceedings of the Conference on Empirical Methods in Natural Language Processing, pages 1611–1622. Association for Computational Linguistics. Wei Lu, Hwee Tou Ng, and Wee Sun Lee. 2009. Natural language generation with tree conditional random fields. In Proceedings of the 2009 Conference on Empirical Methods in Natural Language Processing: Volume 1-Volume 1, pages 400–409. Association for Computational Linguistics. VO Mittal, G. Carenini, and JD Moore. 1994. Generating patient specific explanations in migraine. In Proceedings of the eighteenth annual symposium on computer applications in medical care. McGrawHill Inc. Boris Motik, Peter F Patel-Schneider, Bijan Parsia, Conrad Bock, Achille Fokoue, Peter Haase, Rinke Hoekstra, Ian Horrocks, Alan Ruttenberg, Uli Sattler, et al. 2009. Owl 2 web ontology language: Structural specification and functional-style syntax. W3C recommendation, 27:17. Kishore Papineni, Salim Roukos, Todd Ward, and WeiJing Zhu. 2002. Bleu: a method for automatic evaluation of machine translation. In Proceedings of the 40th annual meeting on association for computational linguistics, pages 311–318. Association for Computational Linguistics. C.L. Paris. 1988. Tailoring object descriptions to a user’s level of expertise. Computational Linguistics, 14(3):64–78. R. Power and A. Third. 2010. Expressing owl axioms by english sentences: dubious in theory, feasible in practice. In Proceedings of the 23rd International Conference on Computational Linguistics: Posters, pages 1006–1013. Association for Computational Linguistics. E. Reiter, R. Robertson, and L.M. Osman. 2003. Lessons from a failure: Generating tailored smoking cessation letters. Artificial Intelligence, 144(1):41– 58. Hadar Shemtov. 1996. Generation of paraphrases from ambiguous logical forms. In Proceedings of the 16th conference on Computational linguistics-Volume 2, pages 919–924. Association for Computational Linguistics. Stuart M Shieber, Gertjan Van Noord, Fernando CN Pereira, and Robert C Moore. 1990. Semantichead-driven generation. Computational Linguistics, 16(1):30–42. Erik Velldal and Stephan Oepen. 2006. Statistical ranking in tactical generation. In Proceedings of the 2006 Conference on Empirical Methods in Natural Language Processing, pages 517–525. Association for Computational Linguistics. K. Vijay-Shanker and AK Joshi. 1988. Feature structures based tree adjoining grammars. In Proceedings of the 12th International Conference on Computational Linguistics, Budapest, Hungary. Juen-tin Wang. 1980. On computational sentence generation from logical form. 
In Proceedings of the 8th conference on Computational linguistics, pages 405–411. Association for Computational Linguistics.
Michael White and Rajakrishnan Rajkumar. 2009. Perceptron reranking for CCG realization. In Proceedings of the 2009 Conference on Empirical Methods in Natural Language Processing: Volume 1 - Volume 1, pages 410–419. Association for Computational Linguistics.
G. Wilcock. 2003. Talking OWLs: Towards an ontology verbalizer. Human Language Technology for the Semantic Web and Web Services, ISWC, 3:109–112.
Sandra Williams and Richard Power. 2010. Grouping axioms for more coherent ontology descriptions. In Proceedings of the 6th International Natural Language Generation Conference (INLG 2010), pages 197–202, Dublin.
Yuk Wah Wong and Raymond J Mooney. 2007. Generation by inverting a semantic parser that uses statistical machine translation. In HLT-NAACL, pages 172–179.
Proceedings of the 52nd Annual Meeting of the Association for Computational Linguistics, pages 435–445, Baltimore, Maryland, USA, June 23-25 2014. c⃝2014 Association for Computational Linguistics Hybrid Simplification using Deep Semantics and Machine Translation Shashi Narayan Universit´e de Lorraine, LORIA Villers-l`es-Nancy, F-54600, France [email protected] Claire Gardent CNRS, LORIA, UMR 7503 Vandoeuvre-l`es-Nancy, F-54500, France [email protected] Abstract We present a hybrid approach to sentence simplification which combines deep semantics and monolingual machine translation to derive simple sentences from complex ones. The approach differs from previous work in two main ways. First, it is semantic based in that it takes as input a deep semantic representation rather than e.g., a sentence or a parse tree. Second, it combines a simplification model for splitting and deletion with a monolingual translation model for phrase substitution and reordering. When compared against current state of the art methods, our model yields significantly simpler output that is both grammatical and meaning preserving. 1 Introduction Sentence simplification maps a sentence to a simpler, more readable one approximating its content. Typically, a simplified sentence differs from a complex one in that it involves simpler, more usual and often shorter, words (e.g., use instead of exploit); simpler syntactic constructions (e.g., no relative clauses or apposition); and fewer modifiers (e.g., He slept vs. He also slept). In practice, simplification is thus often modeled using four main operations: splitting a complex sentence into several simpler sentences; dropping and reordering phrases or constituents; substituting words/phrases with simpler ones. As has been argued in previous work, sentence simplification has many potential applications. It is useful as a preprocessing step for a variety of NLP systems such as parsers and machine translation systems (Chandrasekar et al., 1996), summarisation (Knight and Marcu, 2000), sentence fusion (Filippova and Strube, 2008) and semantic role labelling (Vickrey and Koller, 2008). It also has wide ranging potential societal application as a reading aid for people with aphasis (Carroll et al., 1999), for low literacy readers (Watanabe et al., 2009) and for non native speakers (Siddharthan, 2002). There has been much work recently on developing computational frameworks for sentence simplification. Synchronous grammars have been used in combination with linear integer programming to generate and rank all possible rewrites of an input sentence (Dras, 1999; Woodsend and Lapata, 2011). Machine Translation systems have been adapted to translate complex sentences into simple ones (Zhu et al., 2010; Wubben et al., 2012; Coster and Kauchak, 2011). And handcrafted rules have been proposed to model the syntactic transformations involved in simplifications (Siddharthan et al., 2004; Siddharthan, 2011; Chandrasekar et al., 1996). In this paper, we present a hybrid approach to sentence simplification which departs from this previous work in two main ways. First, it combines a model encoding probabilities for splitting and deletion with a monolingual machine translation module which handles reordering and substitution. 
In this way, we exploit the ability of statistical machine translation (SMT) systems to capture phrasal/lexical substitution and reordering while relying on a dedicated probabilistic module to capture the splitting and deletion operations which are less well (deletion) or not at all (splitting) captured by SMT approaches. Second, our approach is semantic based. While previous simplification approaches starts from either the input sentence or its parse tree, our model takes as input a deep semantic representation namely, the Discourse Representation Structure (DRS, (Kamp, 1981)) assigned by Boxer (Curran et al., 2007) to the input complex sentence. As we 435 shall see in Section 4, this permits a linguistically principled account of the splitting operation in that semantically shared elements are taken to be the basis for splitting a complex sentence into several simpler ones; this facilitates completion (the re-creation of the shared element in the split sentences); and this provide a natural means to avoid deleting obligatory arguments. When compared against current state of the art methods (Zhu et al., 2010; Woodsend and Lapata, 2011; Wubben et al., 2012), our model yields significantly simpler output that is both grammatical and meaning preserving. 2 Related Work Earlier work on sentence simplification relied on handcrafted rules to capture syntactic simplification e.g., to split coordinated and subordinated sentences into several, simpler clauses or to model active/passive transformations (Siddharthan, 2002; Chandrasekar and Srinivas, 1997; Bott et al., 2012; Canning, 2002; Siddharthan, 2011; Siddharthan, 2010). While these handcrafted approaches can encode precise and linguistically well-informed syntactic transformation (using e.g., detailed morphological and syntactic information), they are limited in scope to purely syntactic rules and do not account for lexical simplifications and their interaction with the sentential context. Using the parallel dataset formed by Simple English Wikipedia (SWKP)1 and traditional English Wikipedia (EWKP)2, more recent work has focused on developing machine learning approaches to sentence simplification. Zhu et al. (2010) constructed a parallel corpus (PWKP) of 108,016/114,924 complex/simple sentences by aligning sentences from EWKP and SWKP and used the resulting bitext to train a simplification model inspired by syntax-based machine translation (Yamada and Knight, 2001). Their simplification model encodes the probabilities for four rewriting operations on the parse tree of an input sentences namely, substitution, reordering, splitting and deletion. It is combined with a language model to improve grammaticality and the decoder translates sentences into sim1SWKP (http://simple.wikipedia.org) is a corpus of simple texts targeting “children and adults who are learning English Language” and whose authors are requested to “use easy words and short sentences”. 2http://en.wikipedia.org pler ones by greedily selecting the output sentence with highest probability. Using both the PWKP corpus developed by Zhu et al. (2010) and the edit history of Simple Wikipedia, Woodsend and Lapata (2011) learn a quasi synchronous grammar (Smith and Eisner, 2006) describing a loose alignment between parse trees of complex and of simple sentences. Following Dras (1999), they then generate all possible rewrites for a source tree and use integer linear programming to select the most appropriate simplification. They evaluate their model on the same dataset used by Zhu et al. 
(2010) namely, an aligned corpus of 100/131 EWKP/SWKP sentences and show that they achieve better BLEU score. They also conducted a human evaluation on 64 of the 100 test sentences and showed again a better performance in terms of simplicity, grammaticality and meaning preservation. In (Wubben et al., 2012; Coster and Kauchak, 2011), simplification is viewed as a monolingual translation task where the complex sentence is the source and the simpler one is the target. To account for deletions, reordering and substitution, Coster and Kauchak (2011) trained a phrase based machine translation system on the PWKP corpus while modifying the word alignment output by GIZA++ in Moses to allow for null phrasal alignments. In this way, they allow for phrases to be deleted during translation. No human evaluation is provided but the approach is shown to result in statistically significant improvements over a traditional phrase based approach. Similarly, Wubben et al. (2012) use Moses and the PWKP data to train a phrase based machine translation system augmented with a post-hoc reranking procedure designed to rank the output based on their dissimilarity from the source. A human evaluation on 20 sentences randomly selected from the test data indicates that, in terms of fluency and adequacy, their system is judged to outperform both Zhu et al. (2010) and Woodsend and Lapata (2011) systems. 3 Simplification Framework We start by motivating our approach and explaining how it relates to previous proposals w.r.t., the four main operations involved in simplification namely, splitting, deletion, substitution and reordering. We then introduce our framework. 436 Sentence Splitting. Sentence splitting is arguably semantic based in that in many cases, splitting occurs when the same semantic entity participates in two distinct eventualities. For instance, in example (1) below, the split is on the noun bricks which is involved in two eventualities namely, “being resistant to cold” and “enabling the construction of permanent buildings”. (1) C. Being more resistant to cold, bricks enabled the construction of permanent buildings. S. Bricks were more resistant to cold. Bricks enabled the construction of permanent buildings. While splitting opportunities have a clear counterpart in syntax (i.e., splitting often occurs whenever a relative, a subordinate or an appositive clause occurs in the complex sentence), completion i.e., the reconstruction of the shared element in the second simpler clause, is arguably semantically governed in that the reconstructed element corefers with its matching phrase in the first simpler clause. While our semantic based approach naturally accounts for this by copying the phrase corresponding to the shared entity in both phrases, syntax based approach such as Zhu et al. (2010) and Woodsend and Lapata (2011) will often fail to appropriately reconstruct the shared phrase and introduce agreement mismatches because the alignment or rules they learn are based on syntax alone. For instance, in example (2), Zhu et al. (2010) fails to copy the shared argument “The judge” to the second clause whereas Woodsend and Lapata (2011) learns a synchronous rule matching (VP and VP) to (VP. NP(It) VP) thereby failing to produce the correct subject pronoun (“he” or “she”) for the antecedent “The judge”. (2) C. The judge ordered that Chapman should receive psychiatric treatment in prison and sentenced him to twenty years to life. S1. The judge ordered that Chapman should get psychiatric treatment. 
In prison and sentenced him to twenty years to life. (Zhu et al., 2010) S2. The judge ordered that Chapman should receive psychiatric treatment in prison. It sentenced him to twenty years to life. (Woodsend and Lapata, 2011) Deletion. By handling deletion using a probabilistic model trained on semantic representations, we can avoid deleting obligatory arguments. Thus in our approach, semantic subformulae which are related to a predicate by a core thematic roles (e.g., agent and patient) are never considered for deletion. By contrast, syntax based approaches (Zhu et al., 2010; Woodsend and Lapata, 2011) do not distinguish between optional and obligatory arguments. For instance Zhu et al. (2010) simplifies (3C) to (3S) thereby incorrectly deleting the obligatory theme (gifts) of the complex sentence and modifying its meaning to giving knights and warriors (instead of giving gifts to knights and warriors). (3) C. Women would also often give knights and warriors gifts that included thyme leaves as it was believed to bring courage to the bearer. S. Women also often give knights and warriors. Gifts included thyme leaves as it was thought to bring courage to the saint. (Zhu et al., 2010) We also depart from Coster and Kauchak (2011) who rely on null phrasal alignments for deletion during phrase based machine translation. In their approach, deletion is constrained by the training data and the possible alignments, independent of any linguistic knowledge. Substitution and Reordering SMT based approaches to paraphrasing (Barzilay and Elhadad, 2003; Bannard and Callison-Burch, 2005) and to sentence simplification (Wubben et al., 2012) have shown that by utilising knowledge about alignment and translation probabilities, SMT systems can account for the substitutions and the reorderings occurring in sentence simplification. Following on these approaches, we therefore rely on phrase based SMT to learn substitutions and reordering. In addition, the language model we integrate in the SMT module helps ensuring better fluency and grammaticality. 3.1 An Example Figure 1 shows how our approach simplifies (4C) into (4S). (4) C. In 1964 Peter Higgs published his second paper in Physical Review Letters describing Higgs mechanism which predicted a new massive spin-zero boson for the first time. S. Peter Higgs wrote his paper explaining Higgs mechanism in 1964. Higgs mechanism predicted a new elementary particle. The DRS for (4C) produced using Boxer (Curran et al., 2007) is shown at the top of the Figure and a graph representation3 of the dependencies between its variables is shown immediately below. Each DRS variable labels a node in the graph and each edge is labelled with the relation holding between the variables labelling its end vertices. The 3The DRS to graph conversion goes through several preprocessing steps: the relation nn is inverted making modifier noun (higgs) dependent of modified noun (mechanism), named and timex are converted to unary predicates, e.g., named(x,peter) is mapped to peter(x) and timex(x) = 1964 is mapped to 1964(x); and nodes are introduced for orphan words (e.g., which). 
[Figure 1: Simplification of "In 1964 Peter Higgs published his second paper in Physical Review Letters describing Higgs mechanism which predicted a new massive spin-zero boson for the first time." The figure shows the Discourse Representation Structure produced by Boxer, its graph representation together with the two tables mapping nodes and edges to predicates, relation labels and sentence positions, and the successive SPLIT, DELETION and PBMT+LM steps leading to the output "Peter Higgs wrote his paper explaining Higgs mechanism in 1964. Higgs mechanism predicted a new elementary particle."]

The two tables to the right of the figure show the predicates (top table) associated with each variable and the relation label (bottom table) associated with each edge. Boxer also outputs the associated positions in the complex sentence for each predicate (not shown in the DRS but in the graph tables). Orphan words (OW), i.e., words which have no corresponding material in the DRS (e.g., which at position 16), are added to the graph (node O1), thus ensuring that the position set associated with the graph exactly matches the positions in the input sentence and thus derives the input sentence.

Split Candidate                                      isSplit   prob.
(agent, for, patient) - (agent, in, in, patient)     true      0.63
                                                     false     0.37
Table 1: Simplification: SPLIT

Given the input DRS shown in Figure 1, simplification proceeds as follows.
Splitting. The splitting candidates of a DRS are event pairs contained in that DRS. More precisely, the splitting candidates are pairs of event variables associated with at least one of the core thematic roles (e.g., agent and patient). (Splitting candidates could more generally be sets of event variables, depending on the number of splits required; here, we consider pairs, for two splits.) The features conditioning a split are the set of thematic roles associated with each event variable. The DRS shown in Figure 1 contains three such event variables, X3, X11 and X10, with associated thematic role sets {agent, in, in, patient}, {agent, patient} and {agent, for, patient} respectively. Hence, there are 3 splitting candidates (X3-X11, X3-X10 and X10-X11) and 4 split options: no split or split at one of the splitting candidates.
Here the split with the highest probability (cf. Table 1) is chosen and the DRS is split into two sub-DRSs, one containing X3 and the other containing X10. After splitting, dangling subgraphs are attached to the root of the new subgraph maximizing either proximity or position overlap. Here the graph rooted in X11 is attached to the root dominating X3, and the orphan word O1 to the root dominating X10.
Deletion. The deletion model (cf. Table 2) regulates the deletion of relations and their associated subgraph; of adjectives and adverbs; and of orphan words. Here, the relations "in" between X3 and X4 and "for" between X10 and X12 are deleted, resulting in the deletion of the phrases "in Physical Review Letters" and "for the first time", as well as the adjectives second, massive, spin-zero and the orphan word which.
Substitution and Reordering. Finally, the translation and language models ensure that published, describing and boson are simplified to wrote, explaining and elementary particle respectively, and that the phrase "In 1964" is moved from the beginning of the sentence to its end.
3.2 The Simplification Model
Our simplification framework consists of a probabilistic model for splitting and dropping which we call the DRS simplification model (DRS-SM); a phrase-based translation model for substitution and reordering (PBMT); and a language model learned on Simple English Wikipedia (LM) for fluency and grammaticality. Given a complex sentence c, we split the simplification process into two steps. First, DRS-SM is applied to D_c (the DRS representation of the complex sentence c) to produce one or more (in case of splitting) intermediate simplified sentence(s) s′. Second, the simplified sentence(s) s′ is further simplified to s using a phrase-based machine translation system (PBMT+LM). Hence, our model can be formally defined as:

ŝ = argmax_s p(s | c)
  = argmax_s p(s′ | c) p(s | s′)
  = argmax_s p(s′ | D_c) p(s′ | s) p(s)

where the probabilities p(s′ | D_c), p(s′ | s) and p(s) are given by the DRS simplification model, the phrase-based machine translation model and the language model respectively. To get the DRS simplification model, we combine the probability of splitting with the probability of deletion:

p(s′ | D_c) = Σ_{θ : str(θ(D_c)) = s′} p(D_split | D_c) p(D_del | D_split)

where θ is a sequence of simplification operations and str(θ(D_c)) is the sequence of words associated with the DRS resulting from simplifying D_c using θ. The probability of a splitting operation for a given DRS D_c is:

p(D_split | D_c) = SPLIT(sp_cand^true)                  if the split is made at sp_cand
                 = Π_{sp_cand} SPLIT(sp_cand^false)     otherwise
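As a concrete and purely illustrative sketch of how these quantities could be computed, assume the SPLIT and DEL tables are plain dictionaries keyed by the feature values described above (role-set pairs for SPLIT; word plus length-range or boundary features for the deletion tables). The helper names and the toy probabilities below are ours, not the released system.

```python
def split_probability(split_table, candidates, chosen=None):
    """p(D_split | D_c): probability of splitting at `chosen`, or of making
    no split at all when chosen is None."""
    if chosen is not None:
        return split_table[chosen][True]
    prob = 1.0
    for cand in candidates:
        prob *= split_table[cand][False]
    return prob

def deletion_probability(decisions):
    """p(D_del | D_split): product, over every deletion candidate, of the
    probability of the decision taken for it.
    decisions: list of (table, feature_key, dropped) triples."""
    prob = 1.0
    for table, feats, dropped in decisions:
        prob *= table[feats][dropped]
    return prob

# Toy tables in the style of Tables 1 and 2 (numbers are illustrative only).
cand = (("agent", "for", "patient"), ("agent", "in", "in", "patient"))
SPLIT = {cand: {True: 0.63, False: 0.37}}
DEL_REL = {("in", "0-2"): {True: 0.28, False: 0.72}}
DEL_MOD = {"massive": {True: 0.833, False: 0.167}}
DEL_OW = {("which", False): {True: 0.833, False: 0.167}}

p_split = split_probability(SPLIT, [cand], chosen=cand)
p_del = deletion_probability([(DEL_REL, ("in", "0-2"), True),
                              (DEL_MOD, "massive", True),
                              (DEL_OW, ("which", False), True)])
print(p_split * p_del)   # probability of this split + deletion sequence
```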
As mentioned above, the features used for determining the split operation are the role sets associated with pairs of event variables (cf. Table 1). The deletion probability is given by three models: a model for relations determining the deletion of prepositional phrases; a model for modifiers (adjectives and adverbs) and a model for orphan words (Table 2). All three deletion models use the associated word itself as a feature. In addition, the model for relations uses the PP length-range as a feature while the model for orphan words relies on boundary information i.e., whether or not, the OW occurs at the associated sentence boundary. p(Ddel|Dsplit) = Y relcand DELrel(relcand) Y modcand DELmod(modcand) Y owcand DELow(owcand) 3.3 Estimating the parameters We use the EM algorithm (Dempster et al., 1977) to estimate our split and deletion model parameters. For an efficient implementation of EM algorithm, we follow the work of Yamada and Knight (2001) and Zhu et al. (2010); and build training graphs (Figure 2) from the pair of complex and simple sentence pairs in the training data. Each training graph represents a complexsimple sentence pair and consists of two types of nodes: major nodes (M-nodes) and operation nodes (O-nodes). An M-node contains the DRS representation Dc of a complex sentence c and the associated simple sentence(s) si while O-nodes determine split and deletion operations on their parent M-node. Only the root M-node is considered for the split operations. For example, given fin del-rel∗; del-mod∗; del-ow∗ split root Figure 2: An example training graph the root M-node (Dc, (s1, s2)), multiple successful split O-nodes will be created, each one further creating two M-nodes (Dc1, s1) and (Dc2, s2). For the training pair (c, s), the root M-node (Dc, s) is followed by a single split O-node producing an Mnode (Dc, s) and counting all split candidates in Dc for failed split. The M-nodes created after split operations are then tried for multiple deletion operations of relations, modifiers and OW respectively. Each deletion candidate creates a deletion O-node marking successful or failed deletion of the candidate and a result M-node. The deletion process continues on the result M-node until there is no deletion candidate left to process. The governing criteria for the construction of the training graph is that, at each step, it tries to minimize the Levenshtein edit distance between the complex and the simple sentences. Moreover, for the splitting operation, we introduce a split only if the reference sentence consists of several sentences (i.e., there is a split in the training data); and only consider splits which maximises the overlap between split and simple reference sentences. We initialize our probability tables Table 1 and Table 2 with the uniform distribution, i.e., 0.5 because all our features are binary. The EM algorithm iterates over training graphs counting model features from O-nodes and updating our probability tables. Because of the space constraints, we do not describe our algorithm in details. We refer the reader to (Yamada and Knight, 2001) for more details. 440 Our phrase based translation model is trained using the Moses toolkit5 with its default command line options on the PWKP corpus (except the sentences from the test set) considering the complex sentence as the source and the simpler one as the target. Our trigram language model is trained using the SRILM toolkit6 on the SWKP corpus7. Decoding. 
We explore the decoding graph similar to the training graph but in a greedy approach always picking the choice with maximal probability. Given a complex input sentence c, a split Onode will be selected corresponding to the decision of whether to split and where to split. Next, deletion O-nodes are selected indicating whether or not to drop each of the deletion candidate. The DRS associated with the final M-node Dfin is then mapped to a simplified sentence s′ fin which is further simplified using the phrase-based machine translation system to produce the final simplified sentence ssimple. 4 Experiments We trained our simplification and translation models on the PWKP corpus. To evaluate performance, we compare our approach with three other state of the art systems using the test set provided by Zhu et al. (2010) and relying both on automatic metrics and on human judgments. 4.1 Training and Test Data The DRS-Based simplification model is trained on PWKP, a bi-text of complex and simple sentences provided by Zhu et al. (2010). To construct this bi-text, Zhu et al. (2010) extracted complex and simple sentences from EWKP and SWKP respectively and automatically aligned them using TF*IDF as a similarity measure. PWKP contains 108016/114924 complex/simple sentence pairs. We tokenize PWKP using Stanford CoreNLP toolkit8. We then parse all complex sentences in PWKP using Boxer9 to produce their DRSs. Finally, our DRS-Based simplification model is trained on 97.75% of PWKP; we drop out 2.25% of the complex sentences in PWKP which are repeated in the test set or for which Boxer fails to produce DRSs. 5http://www.statmt.org/moses/ 6http://www.speech.sri.com/projects/srilm/ 7We downloaded the snapshots of Simple Wikipedia dated 2013-10-30 available at http://dumps.wikimedia.org/. 8http://nlp.stanford.edu/software/corenlp.shtml 9http://svn.ask.it.usyd.edu.au/trac/candc, Version 1.00 We evaluate our model on the test set used by Zhu et al. (2010) namely, an aligned corpus of 100/131 EWKP/SWKP sentences. Boxer produces a DRS for 96 of the 100 input sentences. These input are simplified using our simplification system namely, the DRS-SM model and the phrase-based machine translation system (Section 3.2). For the remaining four complex sentences, Boxer fails to produce DRSs. These four sentences are directly sent to the phrase-based machine translation system to produce simplified sentences. 4.2 Automatic Evaluation Metrics To assess and compare simplification systems, two main automatic metrics have been used in previous work namely, BLEU and the Flesch-Kincaid Grade Level Index (FKG). The FKG index is a readability metric taking into account the average sentence length in words and the average word length in syllables. In its original context (language learning), it was applied to well formed text and thus measured the simplicity of a well formed sentence. In the context of the simplification task however, the automatically generated sentences are not necessarily well formed so that the FKG index reduces to a measure of the sentence length (in terms of words and syllables) approximating the simplicity level of an output sentence irrespective of the length of the corresponding input. 
To assess simplification, we instead use metrics that are directly related to the simplification task, namely: the number of splits in the overall (test and training) data and on average per sentence; the number of generated sentences with no edits, i.e., which are identical to the original complex one; and the average Levenshtein distance between the system's output and both the complex and the simple reference sentences.
BLEU gives a measure of how close a system's output is to the gold standard simple sentence. Because there are many possible ways of simplifying a sentence, BLEU alone fails to correctly assess the appropriateness of a simplification. Moreover, BLEU does not capture the degree to which the system's output differs from the complex input sentence. We therefore use BLEU as a means to evaluate how close the systems' outputs are to the reference corpus, but complement it with further manual metrics capturing other important factors when evaluating simplifications, such as the fluency and the adequacy of the output sentences and the degree to which the output sentence simplifies the input.
4.3 Results and Discussion
Number of Splits. Table 3 shows the proportion of inputs whose simplification involved a splitting operation. While our system splits in a proportion similar to that observed in the training data, the other systems either split very often (80% of the time for Zhu and 63% of the time for Woodsend) or not at all (Wubben). In other words, when compared to the other systems, our system performs splits in a proportion closest to the reference, both in terms of total number of splits and of average number of splits per sentence.

Data       Total number of sentences   % split   average split / sentence
PWKP       108,016                     6.1       1.06
GOLD       100                         28        1.30
Zhu        100                         80        1.80
Woodsend   100                         63        2.05
Wubben     100                         1         1.01
Hybrid     100                         10        1.10
Table 3: Proportion of split sentences (% split) in the training/test data and average number of splits per sentence (average split / sentence). GOLD is the test data with the gold standard SWKP sentences; Zhu, Woodsend and Wubben are the best outputs of the models of Zhu et al. (2010), Woodsend and Lapata (2011) and Wubben et al. (2012) respectively; Hybrid is our model.

Number of Edits. Table 4 indicates the edit distance of the output sentences w.r.t. both the complex and the simple reference sentences, as well as the number of inputs for which no simplification occurs. The right part of the table shows that our system generates simplifications which are closest to the reference sentence (in terms of edits) compared to those output by the other systems. It also produces the highest number of simplifications which are identical to the reference. Conversely, our system only ranks third in terms of dissimilarity with the input complex sentences (6.32 edits away from the input sentence), behind the Woodsend (8.63 edits) and the Zhu (7.87 edits) systems. This is in part due to the difference in splitting strategies noted above: the many splits applied by these latter two systems correlate with a high number of edits.

System     BLEU   Edits (Complex to System)   Edits (System to Simple)
                  LD      No edit             LD      No edit
GOLD       100    12.24   3                   0       100
Zhu        37.4   7.87    2                   14.64   0
Woodsend   42     8.63    24                  16.03   2
Wubben     41.4   3.33    6                   13.57   2
Hybrid     53.6   6.32    4                   11.53   3
Table 4: Automated metrics for simplification: average Levenshtein distance (LD) to the complex and simple reference sentences per system; number of input sentences for which no simplification occurs (No edit).
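The edit-based figures in Table 4 are straightforward to reproduce with a standard word-level Levenshtein distance; the sketch below uses our own helper functions, not the authors' evaluation code, and assumes tokenised sentences.

```python
def levenshtein(a, b):
    """Word-level Levenshtein distance between token lists a and b."""
    prev = list(range(len(b) + 1))
    for i, x in enumerate(a, 1):
        curr = [i]
        for j, y in enumerate(b, 1):
            curr.append(min(prev[j] + 1,              # deletion
                            curr[j - 1] + 1,          # insertion
                            prev[j - 1] + (x != y)))  # substitution
        prev = curr
    return prev[-1]

def table4_metrics(complex_sents, system_sents, simple_refs):
    """Average LD to the complex inputs and to the simple references, plus the
    number of outputs identical to their input / to their reference (No edit)."""
    n = len(system_sents)
    ld_complex = sum(levenshtein(c, s) for c, s in zip(complex_sents, system_sents)) / n
    ld_simple = sum(levenshtein(s, r) for s, r in zip(system_sents, simple_refs)) / n
    no_edit_complex = sum(c == s for c, s in zip(complex_sents, system_sents))
    no_edit_simple = sum(s == r for s, r in zip(system_sents, simple_refs))
    return ld_complex, no_edit_complex, ld_simple, no_edit_simple
```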
BLEU score. We used the Moses support tool multi-bleu (http://www.statmt.org/moses/?n=Moses.SupportTools) to calculate BLEU scores. The BLEU scores in Table 4 show that our system produces simplifications that are closest to the reference.
In sum, the automatic metrics indicate that our system produces simplifications that are consistently closest to the reference in terms of edit distance, number of splits and BLEU score.
4.4 Human Evaluation
The human evaluation was done online using the LG-Eval toolkit (Kow and Belz, 2012; http://www.nltg.brighton.ac.uk/research/lg-eval/). The evaluators were allocated a trial set using a Latin Square Experimental Design (LSED) such that each evaluator saw the same number of outputs from each system and for each test set item. During the experiment, the evaluators were presented with a pair consisting of a complex sentence and its simplified sentence(s) and asked to rate this pair w.r.t. adequacy (Does the simplified sentence(s) preserve the meaning of the input?) and simplification (Does the generated sentence(s) simplify the complex input?). They were also asked to rate the second (simplified) sentence(s) of the pair w.r.t. fluency (Is the simplified output fluent and grammatical?).
Similarly to the human evaluation setup of Wubben et al. (2012), we randomly selected 20 complex sentences from Zhu's test corpus and included in the evaluation corpus: the corresponding simple (Gold) sentence from Zhu's test corpus, the output of our system (Hybrid), and the outputs of the three other systems (Zhu, Woodsend and Wubben), which were provided to us by the system authors. The evaluation data thus consisted of 100 complex/simple pairs. We collected ratings from 27 participants.
For this system however, the high proportion of non simplified sentences probably counterbalances these incorrect splits, allowing for a good fluency score overall. Regarding adequacy, our system is against closest to the reference (3.50 for our system vs. 3.66 for manual simplification). Our system, the Wubben system and the manual simplifications are in the same group (the differences between these systems are not significant). The Woodsend system comes second and the Zhu system third (the difference between the two is significant). Wubben’s high fluency, high adequacy but low simplicity could be explained with their minimal number of edit (3.33 edits) from the source sentence. In sum, if we group together systems for which there is no significant difference, our system ranks first (together with GOLD) for simplicity; first for fluency (together with GOLD and Wubben); and first for adequacy (together with GOLD and Wubben). 5 Conclusion A key feature of our approach is that it is semantically based. Typically, discourse level simplification operations such as sentence splitting, sentence reordering, cue word selection, referring expression generation and determiner choice are semantically constrained. As argued by Siddharthan (2006), correctly capturing the interactions between these phenomena is essential to ensuring text cohesion. In the future, we would like to investigate how our framework deals with such discourse level simplifications i.e., simplifications which involves manipulation of the coreference and of the discourse structure. In the PWKP data, the proportion of split sentences is rather low (6.1 %) and many of the split sentences are simple sentence coordination splits. A more adequate but small corpus is that used in (Siddharthan, 2006) which consists of 95 cases of discourse simplification. Using data from the language learning or the children reading community, it would be interesting to first construct a similar, larger scale corpus; and to then train and test our approach on more complex cases of sentence splitting. Acknowledgments We are grateful to Zhemin Zhu, Kristian Woodsend and Sander Wubben for sharing their data. We would like to thank our annotators for participating in our human evaluation experiments and to anonymous reviewers for their insightful comments. 443 References Colin Bannard and Chris Callison-Burch. 2005. Paraphrasing with bilingual parallel corpora. In Proceedings of the 43rd Annual Meeting on Association for Computational Linguistics (ACL), pages 597– 604. Association for Computational Linguistics. Regina Barzilay and Noemie Elhadad. 2003. Sentence alignment for monolingual comparable corpora. In Proceedings of the 2003 conference on Empirical Methods in Natural Language Processing (EMNLP), pages 25–32. Association for Computational Linguistics. Stefan Bott, Horacio Saggion, and Simon Mille. 2012. Text simplification tools for spanish. In Proceedings of the 8th International Conference on Language Resources and Evaluation (LREC), pages 1665– 1671. Yvonne Margaret Canning. 2002. Syntactic simplification of Text. Ph.D. thesis, University of Sunderland. John Carroll, Guido Minnen, Darren Pearce, Yvonne Canning, Siobhan Devlin, and John Tait. 1999. Simplifying text for language-impaired readers. In Proceedings of 9th Conference of the European Chapter of the Association for Computational Linguistics (EACL), volume 99, pages 269–270. Citeseer. Raman Chandrasekar and Bangalore Srinivas. 1997. Automatic induction of rules for text simplification. 
Knowledge-Based Systems, 10(3):183–190. Raman Chandrasekar, Christine Doran, and Bangalore Srinivas. 1996. Motivations and methods for text simplification. In Proceedings of the 16th International conference on Computational linguistics (COLING), pages 1041–1044.Association for Computational Linguistics. William Coster and David Kauchak. 2011. Learning to simplify sentences using wikipedia. In Proceedings of the Workshop on Monolingual Text-To-Text Generation, pages 1–9. Association for Computational Linguistics. James R Curran, Stephen Clark, and Johan Bos. 2007. Linguistically motivated large-scale NLP with C&C and Boxer. In Proceedings of the 45th Annual Meeting of the Association for Computational Linguistics (ACL) on Interactive Poster and Demonstration Sessions, pages 33–36. Association for Computational Linguistics. A. P. Dempster, N. M. Laird, and D. B. Rubin. 1977. Maximum likelihood from incomplete data via the em algorithm. Journal of the Royal Statistical Society, Series B, 39(1):1–38. Mark Dras. 1999. Tree adjoining grammar and the reluctant paraphrasing of text. Ph.D. thesis, Macquarie University NSW 2109 Australia. Katja Filippova and Michael Strube. 2008. Dependency tree based sentence compression. In Proceedings of the Fifth International Natural Language Generation Conference (INLG), pages 25–32. Association for Computational Linguistics. Hans Kamp. 1981. A theory of truth and semantic representation. In J.A.G. Groenendijk, T.M.V. Janssen, B.J. Stokhof, and M.J.B. Stokhof, editors, Formal methods in the study of language, number pt. 1 in Mathematical Centre tracts. Mathematisch Centrum. Kevin Knight and Daniel Marcu. 2000. Statisticsbased summarization-step one: Sentence compression. In Proceedings of the Seventeenth National Conference on Artificial Intelligence (AAAI) and Twelfth Conference on Innovative Applications of Artificial Intelligence (IAAI), pages 703–710. AAAI Press. Eric Kow and Anja Belz. 2012. LG-Eval: A Toolkit for Creating Online Language Evaluation Experiments. In Proceedings of the 8th International Conference on Language Resources and Evaluation (LREC), pages 4033–4037. Advaith Siddharthan, Ani Nenkova, and Kathleen McKeown. 2004. Syntactic simplification for improving content selection in multi-document summarization. In Proceedings of the 20th International Conference on Computational Linguistics (COLING), page 896. Association for Computational Linguistics. Advaith Siddharthan. 2002. An architecture for a text simplification system. In Proceedings of the Language Engineering Conference (LEC), pages 64–71. IEEE Computer Society. Advaith Siddharthan. 2006. Syntactic simplification and text cohesion. Research on Language and Computation, 4(1):77–109. Advaith Siddharthan. 2010. Complex lexico-syntactic reformulation of sentences using typed dependency representations. In Proceedings of the 6th International Natural Language Generation Conference (INLG), pages 125–133. Association for Computational Linguistics. Advaith Siddharthan. 2011. Text simplification using typed dependencies: a comparison of the robustness of different generation strategies. In Proceedings of the 13th European Workshop on Natural Language Generation (ENLG), pages 2–11. Association for Computational Linguistics. David A Smith and Jason Eisner. 2006. Quasisynchronous grammars: Alignment by soft projection of syntactic dependencies. In Proceedings of the HLT-NAACL Workshop on Statistical Machine Translation, pages 23–30. Association for Computational Linguistics. 
444 David Vickrey and Daphne Koller. 2008. Sentence simplification for semantic role labeling. In Proceedings of the 46th Annual Meeting of the Association for Computational Linguistics (ACL) and the Human Language Technology Conference (HLT), pages 344–352. Willian Massami Watanabe, Arnaldo Candido Junior, Vin´ıcius Rodriguez Uzˆeda, Renata Pontin de Mattos Fortes, Thiago Alexandre Salgueiro Pardo, and Sandra Maria Alu´ısio. 2009. Facilita: reading assistance for low-literacy readers. In Proceedings of the 27th ACM international conference on Design of communication, pages 29–36. ACM. Kristian Woodsend and Mirella Lapata. 2011. Learning to simplify sentences with quasi-synchronous grammar and integer programming. In Proceedings of the Conference on Empirical Methods in Natural Language Processing (EMNLP), pages 409–420. Association for Computational Linguistics. Sander Wubben, Antal van den Bosch, and Emiel Krahmer. 2012. Sentence simplification by monolingual machine translation. In Proceedings of the 50th Annual Meeting of the Association for Computational Linguistics (ACL): Long Papers-Volume 1, pages 1015–1024. Association for Computational Linguistics. Kenji Yamada and Kevin Knight. 2001. A syntaxbased statistical translation model. In Proceedings of the 39th Annual Meeting on Association for Computational Linguistics (ACL), pages 523–530. Association for Computational Linguistics. Zhemin Zhu, Delphine Bernhard, and Iryna Gurevych. 2010. A monolingual tree-based translation model for sentence simplification. In Proceedings of the 23rd International Conference on Computational Linguistics (COLING), pages 1353–1361, Stroudsburg, PA, USA. Association for Computational Linguistics. 445
Proceedings of the 52nd Annual Meeting of the Association for Computational Linguistics, pages 446–456, Baltimore, Maryland, USA, June 23-25 2014. c⃝2014 Association for Computational Linguistics Grammatical Relations in Chinese: GB-Ground Extraction and Data-Driven Parsing Weiwei Sun, Yantao Du, Xin Kou, Shuoyang Ding, Xiaojun Wan∗ Institute of Computer Science and Technology, Peking University The MOE Key Laboratory of Computational Linguistics, Peking University {ws,duyantao,kouxin,wanxiaojun}@pku.edu.cn, [email protected] Abstract This paper is concerned with building linguistic resources and statistical parsers for deep grammatical relation (GR) analysis of Chinese texts. A set of linguistic rules is defined to explore implicit phrase structural information and thus build high-quality GR annotations that are represented as general directed dependency graphs. The reliability of this linguistically-motivated GR extraction procedure is highlighted by manual evaluation. Based on the converted corpus, we study transition-based, datadriven models for GR parsing. We present a novel transition system which suits GR graphs better than existing systems. The key idea is to introduce a new type of transition that reorders top k elements in the memory module. Evaluation gauges how successful GR parsing for Chinese can be by applying datadriven models. 1 Introduction Grammatical relations (GRs) represent functional relationships between language units in a sentence. They are exemplified in traditional grammars by the notions of subject, direct/indirect object, etc. GRs have assumed an important role in linguistic theorizing, within a variety of approaches ranging from generative grammar to functional theories. For example, several computational grammar formalisms, such as Lexical Function Grammar (LFG; Bresnan and Kaplan, 1982; Dalrymple, 2001) and Head-driven Phrase Structure Grammar (HPSG; Pollard and Sag, 1994) encode grammatical functions directly. In particular, GRs can be viewed as the dependency backbone of an LFG analysis that provide general linguistic insights, and have great potential advantages for NLP applications, (Kaplan et al., 2004; Briscoe and Carroll, 2006; Clark and Curran, 2007a; Miyao et al., 2007). ∗Email correspondence. In this paper, we address the question of analyzing Chinese sentences with deep GRs. To acquire high-quality GR corpus, we propose a linguistically-motivated algorithm to translate a Government and Binding (GB; Chomsky, 1981; Carnie, 2007) grounded phrase structure treebank, i.e. Chinese Treebank (CTB; Xue et al., 2005) to a deep dependency bank where GRs are explicitly represented. Different from popular shallow dependency parsing that focus on tree-shaped structures, our GR annotations are represented as general directed graphs that express not only local but also various long-distance dependencies, such as coordinations, control/raising constructions, topicalization, relative clauses and many other complicated linguistic phenomena that goes beyond shallow syntax (see Fig. 1 for example.). Manual evaluation highlights the reliability of our linguistically-motivated GR extraction algorithm: The overall dependency-based precision and recall are 99.17 and 98.87. The automatically-converted corpus would be of use for a wide variety of NLP tasks. Recent years have seen the introduction of a number of treebank-guided statistical parsers capable of generating considerably accurate parses for Chinese. 
With the high-quality GR resource at hand, we study data-driven GR parsing. Previous work on dependency parsing mainly focused on structures that can be represented in terms of directed trees. We notice two exceptions. Sagae and Tsujii (2008) and Titov et al. (2009) individually studied two transition systems that can generate more general graphs rather than trees. Inspired by their work, we study transition-based models for building deep dependency structures. The existence of a large number of crossing arcs in GR graphs makes left-to-right, incremental graph spanning computationally hard. Applied to our data, the two existing systems cover only 51.0% and 76.5% GR graphs respectively. To better suit 446 浦东 近年 来 颁布 实行 了 涉及 经济 领域 的 法规性 文件 Pudong recently issue practice involve economic field regulatory document root root comp temp temp subj subj prt prt obj obj comp subj*ldd obj nmod relative nmod Figure 1: An example: Pudong recently enacted regulatory documents involving the economic field. The symbol “*ldd” indicates long-distance dependencies; “subj*ldd” between the word “涉及/involve” and the word “文件/documents” represents a long-range subject-predicate relation. The arguments and adjuncts of the coordinated verbs, namely “颁布/issue” and “实行/practice,” are separately yet distributively linked the two heads. our problem, we extend Titov et al.’s work and study what we call K-permutation transition system. The key idea is to introduce a new type of transition that reorders top k (2 ≤k ≤K) elements in the memory module of a stack-based transition system. With the increase of K, the expressiveness of the corresponding system strictly increases. We propose an oracle deriving method which is guaranteed to find a sound transition sequence if one exits. Moreover, we introduce an effective approximation of that oracle, which decreases decoding ambiguity but practically covers almost exactly the same graphs for our data. Based on the stronger transition system, we build a GR parser with a discriminative model for disambiguation and a beam decoder for inference. We conduct experiments on CTB 6.0 to profile this parser. With the increase of the K, the parser is able to utilize more GR graphs for training and the numeric performance is improved. Evaluation gauges how successful GR parsing for Chinese can be by applying data-driven models. Detailed analysis reveal some important factors that may possibly boost the performance. To our knowledge, this work provides the first result of extensive experiments of parsing Chinese with GRs. We release our GR processing kit and goldstandard annotations for research purposes. These resources can be downloaded at http://www. icst.pku.edu.cn/lcwm/omg. 2 GB-grounded GR Extraction In this section, we discuss the construction of the GR annotations. Basically, the annotations are automatically converted from a GB-grounded phrasestructure treebank, namely CTB. Conceptually, this conversion is similar to the conversions from CTB structures to representations in deep grammar formalisms (Tse and Curran, 2010; Yu et al., 2010; Guo et al., 2007; Xia, 2001). However, our work is grounded in GB, which is the linguistic basis of the construction of CTB. We argue that this theoretical choice makes the conversion process more compatible with the original annotations and therefore more accurate. 
We use directed graphs to explicitly encode bi-lexical dependencies involved in coordination, raising/control constructions, extraction, topicalization, and many other complicated phenomena. Fig. 1 shows an example of such a GR graph and its original CTB annotation. 2.1 Linguistic Basis GRs are encoded in different ways in different languages. In some languages, e.g. Turkish, grammatical function is encoded by means of morphological marking, while in highly configurational languages, e.g. Chinese, the grammatical function of a phrase is heavily determined by its constituent structure position. Dominant Chomskyan theories, including GB, have defined GRs as configurations at phrase structures. Following this principle, CTB groups words into constituents through the use of a limited set of fundamental grammatical functions. Transformational grammar utilizes empty categories (ECs) to represent long-distance dependencies. In CTB, traces are provided by relating displaced linguistic material to where it should be interpreted semantically. By exploiting configurational information, traces and functional tag annotations, GR information can be hopefully 447 IP VP ↓=↑ VP ↓=↑ NP ↓=(↑OBJ) NP ↓=↑ NN ↓=↑ 文件 NN ↓∈(↑NMOD) 法规性 CP ↓∈(↑REL) DEC ↓=↑ 的 IP ↓=(↑COMP) VP ↓=↑ NP ↓=(↑OBJ) NN ↓=↑ 领域 NP ↓∈(↑NMOD) NN ↓=↑ 经济 VV ↓=↑ 涉及 NP ↓=(↑SBJ) -NONE*T* AS ↓=(↑PRT) 了 VCD ↓=↑ VV ↓∈↑ 实行 VV ↓∈↑ 颁布 LCP ↓∈(↑TMP) LC ↓=↑ 来 NP ↓=(↑COMP) NT ↓=↑ 近年 NP ↓=(↑SBJ) NR 浦东 Figure 2: The original CTB annotation augmented with LFG-like f-structure annotations of the running example. derived from CTB trees with high accuracy. 2.2 The Extraction Algorithm Our treebank conversion algorithm borrows key insights from Lexical Functional Grammar (LFG; Bresnan and Kaplan, 1982; Dalrymple, 2001). LFG posits two levels of representation: c(onstituent)-structure and f(unctional)-structure minimally. C-structure is represented by phrasestructure trees, and captures surface syntactic configurations such as word order, while f-structure encodes grammatical functions. It is easy to extract a dependency backbone which approximates basic predicate-argument-adjunct structures from f-structures. The construction of the widely used PARC DepBank (King et al., 2003) is a good example. LFG relates c-structure and f-structure through f-structure annotations, which compositionally map every constituent to a corresponding fstructure. Borrowing this key idea, we translate CTB trees to dependency graphs by first augmenting each constituency with f-structure annotations, then propagating the head words of the head or conjunct daughter(s) upwards to their parents, and finally creating a dependency graph. The following presents details step-by-step. Tapping implicit information. Xue (2007) introduced a systematic study to tap the implicit functional information of CTB. This gives us a very good start to extract GRs. We slightly modify their method to enrich a CTB tree with f-structure annotations: Each node in a resulting tree is annotated with one and only one corresponding equation. See Fig. 2 for example. Comparing the original annotation and enriched one, we can see that the functionality of this step is to explicitly represent and regulate grammatical functions. Beyond CTB annotations: tracing more. Natural languages do not always interpret linguistic material locally. 
In order to obtain accurate and complete GR, predicate-argument, or logical form representations, a hallmark of deep grammars is that they usually involve a non-local dependency resolution mechanism. CTB trees utilize ECs and coindexed materials to represent long-distance dependencies. An EC is a nominal element that does not have any phonological content and is therefore unpronounced. Two kinds of anaphoric ECs, i.e. big PRO and trace, are annotated in CTB. Theoretically speaking, only trace is generated as the result of movement and therefore annotated with antecedents in CTB. We carefully check the annotation and find that a considerable number of antecedents are not labeled, and hence a lot of important non-local information is missing. In addition, since the big PRO is also anaphoric, it is sometimes possible to find coindexed components. Such non-local information is also very valuable. Beyond CTB annotations, we introduce a number of phrase-structure patterns to extract more non-local dependencies. The method heavily leverages linguistic rules to exploit structural information. We take into account both theoretical assumptions and analysis practices to enrich coindexation information according to phrase-structure patterns. In particular, we try to link an anaphoric EC e with its c-commanders if no non-empty antecedent has already been coindexed with e. Because CTB is deeply influenced by X-bar syntax, which strongly regulates constituent analysis, the number of our linguistic rules is quite modest. For the development of conversion rules, we used the first 9 files of CTB, which contain about 100 sentences. Readers can refer to the well-documented Perl script for details. See Fig. 2 for an example. The noun phrase "法规性 文件/regulatory documents" is related to the trace "*T*." This coindexation is not labeled in the original annotation.

Passing head words and linking ECs. Based on an enriched tree, our algorithm propagates the head word of the head daughter upwards to its parent, links coindexed units, and finally creates a GR graph. The partial result after head word passing of the running example is shown in Fig. 3.

Figure 3: An example of a lexicalized tree after upward head word passing; only a partial result is shown. The long-distance dependency between "涉及/involve" and "文件/document" is created by copying the dependent to a coindexed anaphoric EC position.

There are two differences in head word passing between our GR extraction and a "normal" dependency tree extraction. First, the GR extraction procedure may pass multiple head words to its parent, especially in a coordination construction. Second, long-distance dependencies are created by linking ECs and their coindexed phrases.

2.3 Manual Evaluation

To have a precise understanding of whether our extraction algorithm works well, we have selected 20 files that contain 209 sentences in total for manual evaluation. Linguistic experts carefully examine the corresponding GR graphs derived by our extraction algorithm and correct all errors. In other words, a gold-standard GR annotation set is created.

             Precision  Recall  F-score
  Unlabeled  99.48      99.17   99.32
  Labeled    99.17      98.87   99.02
Table 1: Manual evaluation of 209 sentences.

The measure for comparing two dependency graphs is precision/recall of GR tokens, which are defined as ⟨wh, wd, l⟩ tuples, where wh is the head, wd is the dependent and l is the relation. A schematic computation of these measures is sketched below.
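To make the metric concrete, the following is a minimal sketch (not the authors' evaluation script) that computes labeled and unlabeled precision, recall and F-score over GR tuples; the function name and data layout are illustrative assumptions.

def prf(gold, system, labeled=True):
    # gold, system: sets of (head, dependent, label) GR tuples for one graph
    if not labeled:
        gold = {(h, d) for (h, d, l) in gold}       # drop the relation for UP/UR
        system = {(h, d) for (h, d, l) in system}
    correct = len(gold & system)
    p = correct / len(system) if system else 0.0    # precision
    r = correct / len(gold) if gold else 0.0        # recall
    f = 2 * p * r / (p + r) if p + r else 0.0       # harmonic mean of p and r
    return p, r, f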
Labeled precision/recall (LP/LR) is the ratio of tuples correctly identified by the automatic generator, while unlabeled precision/recall (UP/UR) is the ratio regardless of l. F-score is a harmonic mean of precision and recall. These measures correspond to attachment scores (LAS/UAS) in dependency tree parsing. To evaluate our GR parsing models that will be introduced later, we also report these metrics. The overall performance is summarized in Tab. 1. We can see that the automatical GR extraction achieves relatively high performance. There are two sources of errors in treebank conversion: (1) inadequate conversion rules and (2) wrong or inconsistent original annotations. During the creation of the gold standard corpus, we find that the former is mainly caused by complicated unbounded dependencies and the lack of internal structure for some kinds of phrases. Such problems are very hard to solve through rules only, if not possible, since original annotations do not provide sufficient information. The latter problem is more scattered and unpredictable. 2.4 Statistics Allowing non-projective dependencies generally makes parsing either by graph-based or transitionbased dependency parsing harder. Substantial research effort has been devoted in recent years to the design of elegant solutions for this problem. There are much more crossing arcs in the GR 449 graphs than syntactic dependency trees. In the training data (defined in Section 4.1), there are 558132 arcs and 86534 crossing pairs, About half of the sentences have crossing arcs (10930 out of 22277). The wide existence of crossing arcs poses an essential challenge for GR parsing, namely, to find methods for handling crossing arcs without a significant loss in accuracy and efficiency. 3 Transition-based GR Parsing The availability of large-scale treebanks has contributed to the blossoming of statistical approaches to build accurate shallow constituency and dependency parsers. With high-quality GR resources at hand, it is possible to study statistical approaches to automatically parse GR graphs. In this section, we investigate the feasibility of applying a datadriven, grammar-free approach to build GRs directly. In particular, transition-based dependency parsing method is studied. 3.1 Data-Driven Dependency Parsing Data-driven, grammar-free dependency parsing has received an increasing amount of attention in the past decade. Such approaches, e.g. transitionbased (Yamada and Matsumoto, 2003; Nivre, 2008) and graph-based (McDonald, 2006; Torres Martins et al., 2009) models have attracted the most attention of dependency parsing in recent years. Transition-based parsers utilize transition systems to derive dependency trees together with treebank-induced statistical models for predicting transitions. This approach was pioneered by (Yamada and Matsumoto, 2003) and (Nivre et al., 2004). Most research concentrated on surface syntactic structures, and the majority of existing approaches are limited to producing only trees. We notice two exceptions. Sagae and Tsujii (2008) and Titov et al. (2009) individually introduced two transition systems that can generate specific graphs rather than trees. Inspired by their work, we study transition-based approach to build GR graphs. 3.2 Transition Systems Following (Nivre, 2008), we define a transition system for dependency parsing as a quadruple S = (C, T, cs, Ct), where 1. 
C is a set of configurations, each of which contains a buffer β of (remaining) words and a set A of dependency arcs,

2. T is a set of transitions, each of which is a (partial) function t : C ↦ C,

3. cs is an initialization function, mapping a sentence x to a configuration, with β = [1, . . . , n],

4. Ct ⊆ C is a set of terminal configurations.

Given a sentence x = w1, . . . , wn and a graph G = (V, A) on it, if there is a sequence of transitions t1, . . . , tm and a sequence of configurations c0, . . . , cm such that c0 = cs(x), ti(ci−1) = ci (i = 1, . . . , m), cm ∈ Ct, and Acm = A, we say the sequence of transitions is an oracle sequence. We also define Āci = A − Aci, the set of arcs still to be built in ci. In a typical transition-based parsing process, the input words are put into a queue and partially built structures are organized by a stack. A set of SHIFT/REDUCE actions is performed sequentially to consume words from the queue and update the partial parsing results.

3.3 Online Reordering

Among existing systems, Sagae and Tsujii's is designed for projective graphs (denoted by G1 in Definition 1), and Titov et al.'s handles only a specific subset of non-projective graphs as well as projective graphs (G2). Applied to our data, only 51.0% and 76.5% of the extracted graphs are parsable with their systems. Obviously, it is necessary to investigate new transition systems for the parsing task in our study. To deal with crossing arcs, Titov et al. (2009) and Nivre (2009) designed a SWAP transition that switches the position of the two topmost nodes on the stack. Inspired by their work, we extend this approach to parse more general graphs. The basic idea is to provide our new system with the ability to reorder more nodes during decoding in an online fashion, which we refer to as online reordering.

3.4 K-Permutation System

We define a K-permutation transition system SK = (C, T, cs, Ct), where a configuration c = (σ, β, A) ∈ C contains a stack σ of nodes besides β and A. We set the initial configuration for a sentence x = w1, . . . , wn to be cs(x) = ([], [1, . . . , n], {}), and take Ct to be the set of all configurations of the form ct = (σ, [], A) (for any arc set A). The set of transitions T contains five types of actions, as shown in Tab. 2:

1. SHIFT removes the front element from β and pushes it onto σ.

2. LEFT-ARC_l / RIGHT-ARC_l updates a configuration by adding (j, l, i) / (i, l, j) to A, where i is the top of σ and j is the front of β.

3. POP deletes the top element of σ.

4. ROTATE_k updates a configuration with stack σ|i_k| . . . |i_2|i_1 by rotating the top k nodes of the stack left by one index, obtaining σ|i_1|i_k| . . . |i_2, with the constraint 2 ≤ k ≤ K.

Table 2: The K-permutation system.
  SHIFT        (σ, j|β, A) ⇒ (σ|j, β, A)
  LEFT-ARC_l   (σ|i, j|β, A) ⇒ (σ|i, j|β, A ∪ {(j, l, i)})
  RIGHT-ARC_l  (σ|i, j|β, A) ⇒ (σ|i, j|β, A ∪ {(i, l, j)})
  POP          (σ|i, β, A) ⇒ (σ, β, A)
  ROTATE_k     (σ|i_k| . . . |i_2|i_1, β, A) ⇒ (σ|i_1|i_k| . . . |i_2, β, A)

We refer to this system as K-permutation because by rotating the top k (2 ≤ k ≤ K) nodes in the stack, we can obtain all the permutations of the top K nodes. Note that S2 is identical to Titov et al.'s; S∞ is complete with respect to the class of all directed graphs without self-loops, since we can arbitrarily permute the nodes in the stack. The K-permutation system exhibits a nice property: the sets of corresponding graphs are strictly monotonic with respect to the ⊂ operation.

Definition 1. If a graph G can be parsed with transition system SK, we say G is a K-perm graph.

A schematic implementation of the five transitions is sketched below.
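The sketch below is a minimal Python rendering of the K-permutation transitions, assuming arcs are stored as (head, label, dependent) triples as in Tab. 2; the class and function names are illustrative and not the authors' released implementation.

class Config:
    def __init__(self, words):
        self.stack = []                                   # sigma: partially processed nodes
        self.buffer = list(range(1, len(words) + 1))      # beta: remaining word indices 1..n
        self.arcs = set()                                 # A: dependency arcs built so far

    def is_terminal(self):
        return not self.buffer                            # terminal once the buffer is empty

def shift(c):
    c.stack.append(c.buffer.pop(0))                       # move the buffer front onto the stack

def left_arc(c, label):
    i, j = c.stack[-1], c.buffer[0]
    c.arcs.add((j, label, i))                             # head is the buffer front j, dependent is stack top i

def right_arc(c, label):
    i, j = c.stack[-1], c.buffer[0]
    c.arcs.add((i, label, j))                             # head is the stack top i, dependent is buffer front j

def pop(c):
    c.stack.pop()                                         # discard the stack top

def rotate(c, k):
    # rotate the top k stack nodes left by one index:
    # sigma|i_k|...|i_2|i_1  ->  sigma|i_1|i_k|...|i_2
    assert 2 <= k <= len(c.stack)
    top = c.stack[-k:]
    c.stack[-k:] = [top[-1]] + top[:-1]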
We use GK to denote the set of all K-perm graphs. In particular, G0 = ∅, G1 is the set of all projective graphs, and G∞ = ⋃_{k=0}^{∞} Gk.

Theorem 1. Gi ⊊ Gi+1, ∀i ≥ 0.

Proof. It is obvious that Gi ⊆ Gi+1 and G0 ⊊ G1. Fig. 4 gives an example which is in Gi+1 but not in Gi for all i > 0, indicating Gi ≠ Gi+1.

Figure 4: A graph which is in Gi+1, but not in Gi.

Theorem 2. G∞ is the set of all graphs without self-loops.

Proof. It follows immediately from the fact that G ∈ G|V| for any G = ⟨V, E⟩.

The transition systems introduced in (Sagae and Tsujii, 2008) and (Titov et al., 2009) can be viewed as S1 and S2, respectively. (Though Sagae and Tsujii (2008) introduced additional constraints to exclude cyclic paths, the fundamental transition mechanism of their system is the same as that of S1.)

3.5 Normal Form Oracle

The K-permutation transition system may allow multiple oracle transition sequences for one graph, but trying to sum over all the possible oracles is usually computationally expensive. Here we give a construction procedure which is guaranteed to find an oracle sequence if one exists. We refer to it as the normal form oracle (NFO). Let L(j) be the ordered list of nodes connected to j in Āci−1 for j ∈ σci−1, and let LK(σci−1) = [L(j1), . . . , L(jmax{l,K})]. If σci−1 is empty, then we set ti to SHIFT; if there is no arc linked to j1 in Āci−1, then we set ti to POP; if there exists an arc a ∈ Āci−1 linking j1 and b, then we set ti to LEFT-ARC or RIGHT-ARC correspondingly. When only SHIFT and ROTATE are left, we first apply a sequence of ROTATEs to make LK(σ) completely ordered by lexicographical order, and then apply a SHIFT. Letting ci = ti(ci−1), we continue to compute ti+1 until βci is empty.

Theorem 3. If a graph is parsable with the transition system SK, then the construction procedure is guaranteed to find an oracle transition sequence.

Proof. During the construction, all the arcs are built by LEFT-ARC or RIGHT-ARC, which link the top of the stack and the front of the buffer. Therefore, we prefer L(σ) to be as orderly as possible, so that the words to be linked reach the top of the stack sooner. The construction procedure above does the best possible within the power of the system SK.

3.6 An Approximation for NFO

In the construction of NFO transitions, we exhaustively use ROTATEs to make L(σ) completely ordered. We also observe that the transitions LEFT-ARC, RIGHT-ARC and SHIFT only change the relative order between the first element of L(σ) and the remaining elements. Based on this observation, we explore an approximate procedure to determine the ROTATEs. We call it the approximate NFO (ANFO). Using the notation defined in Section 3.5, the approximate procedure goes as follows. When it comes to the determination of the ROTATE sequence, let k be the largest m such that 0 ≤ m ≤ min{K, l} and L(jm) strictly precedes L(j1) by the lexicographical order (here we assume L(j0) strictly precedes any L(j), j ∈ σ). If k > 0, we set ti to ROTATE_k; else we set ti to SHIFT. The approximation assumes that L(σ) is completely ordered except for its first element, and inserts the first element into its proper place each time.

Figure 5: A graph that can be parsed with S3 with the transition sequence SSSSR3SR3APAPR2R3SR3SR3APAPAPAPAP, where S stands for SHIFT, R for ROTATE, A for LEFT-ARC, and P for POP. The approximate procedure fails to find this oracle, since the R2R3 steps in the sequence would not be applied.

Definition 2. We define ĜK as the set of graphs for which an oracle can be extracted by SK with the approximation procedure.
It can be inferred similarly that Theorem 1 and Theorem 2 also hold for ˆG’s. However, the ˆGK is not equal to GK in non-trivial cases. Theorem 4. ˆGi ⊊Gi, ∀i ≥3. Proof. It is trivial that ˆGi ⊆Gi. An example graph that is in G3 but not in ˆG3 is shown in Figure 5, examples for arbitrary i > 3 can be constructed similarly. The above theorem indicates the inadequacy of the ANFO deriving procedure. Nevertheless, empirical evaluation (Section 4.2) shows that the coverage of AFO and ANFO deriving procedures are almost identical when applying to linguistic data. 3.7 Statistical Parsing When we parse a sentence w1w2 · · · wn, we start with the initial configuration c0 = cs(x), and choose next transition ti = C(ci−1) iteratively according to a discriminative classifier trained on oracle sequences. To build a parser, we use a structured classifier to approximate the oracle, and apply the Passive-Aggressive (PA) algorithm (Crammer et al., 2006) for parameter estimation. The PA algorithm is similar to the Perceptron algorithm, the difference from which is the update of weight vector. We also use parameter averaging and early update to achieve better training. Developing features has been shown crucial to advancing the state-of-the-art in dependency tree parsing (Koo and Collins, 2010; Zhang and Nivre, 2011). To build accurate deep dependency parsers, we utilize a large set of features for disambiguation. See the notes included in the supplymentary material for details. To improve the performance, we also apply the technique of beam search, which keep a beam of transition sequences with highest scores when parsing. 4 Experiments 4.1 Experimental setup CTB is a segmented, part-of-speech (POS) tagged, and fully bracketed corpus in the constituency formalism, and very popular to evaluate fundamental NLP tasks, including word segmentation (Sun and Xu, 2011), POS tagging (Sun and Uszkoreit, 2012), and syntactic parsing (Zhang and Clark, 2009; Sun and Wan, 2013). We use CTB 6.0 and define the training, development and test sets according to the CoNLL 2009 shared task. We use gold-standard word segmentation and POS taging results as inputs. All transition-based parsing models are trained with beam 16 and iteration 30. Overall precision/recall/f-score with respect to dependency tokens is reported. To evaluate the ability to recover non-local dependencies, the recall of such dependencies are reported too. 4.2 Coverage and Accuracy There is a dual effect of the increase of the parameter k to our transition-based dependency parser. On one hand, the higher k is, the more expressivity the corresponding transition system has. A system with higher k covers more structures and allows to use more data for training. On the other hand, higher k brings more ambiguities to the corresponding parser, and the parsing performance may thus suffer. Note that the ambiguity exists not only in each step for transition decision, but also in selecting the training oracle. The left-most columns of Tab. 3 shows the coverage of K-permutation transition system with respect to different K and different oracle deriving algorithms. Readers may be surprised that the coverage of NFO and ANFO deriving procedures is the same. 
Actually, all the graphs covered by the two oracle deriving procedures are exactly the same, except for S3: only 1 of the 22277 sentences has an NFO but not an ANFO. This number demonstrates the effectiveness of ANFO. In the following experiments, we use the ANFO oracles to train our parser.

System  NFO   ANFO  UP     UR     UF     LP     LR     LF     URL    LRL    URNL   LRNL
S2      76.5  76.5  85.88  81.00  83.37  83.98  79.21  81.53  81.93  80.34  58.88  52.17
S3      89.0  89.0  86.02  81.72  83.82  84.07  79.86  81.91  82.61  80.94  60.46  54.28
S4      95.6  95.6  86.28  82.06  84.12  84.35  80.22  82.23  82.92  81.29  61.48  54.77
S5      98.4  98.4  86.44  82.21  84.27  84.51  80.37  82.39  83.15  81.51  59.80  53.30
Table 3: Coverage and accuracy of the GR parser on the development data.

Applied to our data, S2, i.e. the exact system introduced by Titov et al. (2009), covers only 76.5% of the GR graphs. This is very different from the result obtained on the CoNLL shared task data for English semantic role labeling (SRL): according to Titov et al. (2009), 99% of the semantic-role-labelled graphs can be generated by S2. We think there are two main reasons accounting for the difference, and they highlight the importance of the expressiveness of transition systems for solving deep dependency parsing problems. First, the SRL task only focuses on finding arguments and adjuncts of verbal (and nominal) predicates, while dependencies headed by other words are not contained in its graph representation. In contrast, a deep dependency structure, such as a GR graph, approximates deep syntactic or semantic information of a sentence as a whole, and is therefore much denser. As a result, a permutation system with a very low k is incapable of handling many cases. The other reason concerns the Chinese language. Some language-specific properties result in complex crossing arcs. For example, serial verb constructions are widely used in Chinese to describe several separate events without conjunctions. The verbal heads in such constructions share subjects and adjuncts, both of which appear before the heads. The distributive dependencies between verbal heads and subjects/adjuncts usually produce crossing arcs (see Fig. 6). To test our assumption, we evaluate the coverage of S2 over the functor-argument dependency graphs provided by the English and Chinese CCGBank (Hockenmaier and Steedman, 2007; Tse and Curran, 2010). The result is 96.9% vs. 89.0%, which confirms our linguistic intuition under another grammar formalism.

Figure 6: A simplified example to illustrate crossing arcs in serial verb constructions (a subject and an adjunct shared by two verbal heads).

Tab. 3 summarizes the performance of the transition-based parser with different configurations, revealing how well data-driven parsing can be performed in realistic situations. We can see that with the increase of K, the overall parsing accuracy incrementally goes up. The high complexity of Chinese deep dependency structures demonstrates the importance of the expressiveness of a transition system, while the improved numeric accuracies practically confirm the benefits. These two points suggest that more expressive transition systems merit further exploration for deep dependency parsing, at least for Chinese. The labeled evaluation scores on the final test data are presented in Tab. 4.

Test  UP     UR     UF     LRL    LRNL
S5    83.93  79.82  81.82  80.94  54.38
Table 4: Performance on the test data.

4.3 Precision vs. Recall

A noteworthy aspect of the overall performance is that the precision is promising but the recall lags well behind.
This difference is consistent with the result obtained by a shift-reduce CCG parser (Zhang and Clark, 2011). The functor-argument dependencies generated by that parser also has a relatively high precision but considerably low recall. There are two similarities between our parser and theirs: 1) both parsers produce dependency graphs rather trees; 2) both parser employ a beam decoder that does not guarantee global optimality. To build NLP application, e.g. information extraction, systems upon GR parsing, such property merits attention. A good trade-off between the precision and the recall may have a great impact on final results. 453 4.4 Local vs. Non-local Although the micro accuracy of all dependencies are considerably good, the ability of current stateof-the-art statistical parsers to find difficult nonlocal materials is far from satisfactory, even for English (Rimell et al., 2009; Bender et al., 2011). We report the accuracy in terms of local and nonlocal dependencies respectively to show the difficulty of the recovery of non-local dependencies. The last four columns of Tab. 3 demonstrates the labeled/unlabeled recall of local (URL/LRL) and non-local dependencies (URNL/LRNL). We can clearly see that non-local dependency recovery is extremely difficult for Chinese parsing. 4.5 Deep vs. Deep CCG and HPSG parsers also favor the dependencybased metrics for evaluation (Clark and Curran, 2007b; Miyao and Tsujii, 2008). Previous work on Chinese CCG and HPSG parsing unanimously agrees that obtaining the deep analysis of Chinese is more challenging (Yu et al., 2011; Tse and Curran, 2012). The successful C&C and Enju parsers provide very inaccurate results for Chinese texts. Though the numbers profiling the qualities of deep dependency structures under different formalisms are not directly comparable, all empirical evaluation indicates that the state-of-the-art of deep linguistic processing for Chinese lag behind very much. 5 Related Work Wide-coverage in-depth and accurate linguistic processing is desirable for many practical NLP applications, such as machine translation (Wu et al., 2010) and information extraction (Miyao et al., 2008). Parsing in deep formalisms, e.g. CCG, HPSG, LFG and TAG, provides valuable, richer linguistic information, and researchers thus draw more and more attention to it. Very recently, study on deep linguistic processing for Chinese has been initialized. Our work is one of them. To quickly construct deep annotations, corpusdriven grammar engineering has been studied. Phrase structure trees in CTB have been semiautomatically converted to deep derivations in the CCG (Tse and Curran, 2010), LFG (Guo et al., 2007), TAG (Xia, 2001) and HPSG (Yu et al., 2010) formalisms. Our GR extraction work is similar, but grounded in GB, which is more consistent with the construction of the original annotations. Based on converted fine-grained linguistic annotations, successful English deep parsers, such as C&C (Clark and Curran, 2007b) and Enju (Miyao and Tsujii, 2008), have been evaluated (Yu et al., 2011; Tse and Curran, 2012). We also borrow many ideas from recent advances in deep syntactic or semantic parsing for English. In particular, Sagae and Tsujii (2008)’s and Titov et al. (2009)’s studies on transition-based deep dependency parsing motivated our work very much. However, simple adoption of their systems does not resolve Chinese GR parsing well because the GR graphs are much more complicated. 
Our investigation on the K-permutation transition system advances the capacity of existing methods. 6 Conclusion Recent years witnessed rapid progress made on deep linguistic processing for English, and initial attempts for Chinese. Our work stands in between traditional dependency tree parsing and deep linguistic processing. We introduced a system for automatically extracting grammatical relations of Chinese sentences from GB phrase structure trees. The present work remedies the resource gap by facilitating the accurate extraction of GR annotations from GB trees. Manual evaluation demonstrate the effectiveness of our method. With the availability of high-quality GR resources, transition-based methods for GR parsing was studied. A new formal system, namely K-permutation system, is well theoretically discussed and practically implemented as the core module of a deep dependency parser. Empirical evaluation and analysis were presented to give better understanding of the Chinese GR parsing problem. Detailed analysis reveals some important directions for future investigation. Acknowledgement The work was supported by NSFC (61300064, 61170166 and 61331011) and National High-Tech R&D Program (2012AA011101). References Emily M. Bender, Dan Flickinger, Stephan Oepen, and Yi Zhang. 2011. Parser evaluation over local and nonlocal deep dependencies in a large corpus. In Proceedings of the 2011 Conference on Empirical Methods in Natural Language Processing, pages 397–408. Association for Computational Linguistics, Edinburgh, Scotland, UK. URL http://www.aclweb.org/ anthology/D11-1037. 454 J. Bresnan and R. M. Kaplan. 1982. Introduction: Grammars as mental representations of language. In J. Bresnan, editor, The Mental Representation of Grammatical Relations, pages xvii–lii. MIT Press, Cambridge, MA. Ted Briscoe and John Carroll. 2006. Evaluating the accuracy of an unlexicalized statistical parser on the parc depbank. In Proceedings of the COLING/ACL 2006 Main Conference Poster Sessions, pages 41–48. Association for Computational Linguistics, Sydney, Australia. URL http://www.aclweb.org/anthology/P/ P06/P06-2006. Andrew Carnie. 2007. Syntax: A Generative Introduction. Blackwell Publishing, Blackwell Publishing 350 Main Street, Malden, MA 02148-5020, USA, second edition. Noam Chomsky. 1981. Lectures on Government and Binding. Foris Publications, Dordecht. Stephen Clark and James Curran. 2007a. Formalismindependent parser evaluation with ccg and depbank. In Proceedings of the 45th Annual Meeting of the Association of Computational Linguistics, pages 248–255. Association for Computational Linguistics, Prague, Czech Republic. URL http://www.aclweb.org/anthology/ P07-1032. Stephen Clark and James R. Curran. 2007b. Wide-coverage efficient statistical parsing with CCG and log-linear models. Comput. Linguist., 33(4):493–552. URL http:// dx.doi.org/10.1162/coli.2007.33.4.493. Koby Crammer, Ofer Dekel, Joseph Keshet, Shai ShalevShwartz, and Yoram Singer. 2006. Online passiveaggressive algorithms. JOURNAL OF MACHINE LEARNING RESEARCH, 7:551–585. M. Dalrymple. 2001. Lexical-Functional Grammar, volume 34 of Syntax and Semantics. Academic Press, New York. Yuqing Guo, Josef van Genabith, and Haifeng Wang. 2007. Treebank-based acquisition of lfg resources for Chinese. In Proceedings of the LFG07 Conference. CSLI Publications, California, USA. Julia Hockenmaier and Mark Steedman. 2007. CCGbank: A corpus of CCG derivations and dependency structures extracted from the penn treebank. 
Computational Linguistics, 33(3):355–396. Ron Kaplan, Stefan Riezler, Tracy H King, John T Maxwell III, Alex Vasserman, and Richard Crouch. 2004. Speed and accuracy in shallow and deep stochastic parsing. In Daniel Marcu Susan Dumais and Salim Roukos, editors, HLT-NAACL 2004: Main Proceedings, pages 97– 104. Association for Computational Linguistics, Boston, Massachusetts, USA. Tracy Holloway King, Richard Crouch, Stefan Riezler, Mary Dalrymple, and Ronald M. Kaplan. 2003. The PARC 700 dependency bank. In In Proceedings of the 4th International Workshop on Linguistically Interpreted Corpora (LINC-03), pages 1–8. Terry Koo and Michael Collins. 2010. Efficient third-order dependency parsers. In Proceedings of the 48th Annual Meeting of the Association for Computational Linguistics, pages 1–11. Association for Computational Linguistics, Uppsala, Sweden. URL http://www.aclweb.org/ anthology/P10-1001. Ryan McDonald. 2006. Discriminative learning and spanning tree algorithms for dependency parsing. Ph.D. thesis, University of Pennsylvania, Philadelphia, PA, USA. Yusuke Miyao, Rune Sætre, Kenji Sagae, Takuya Matsuzaki, and Jun’ichi Tsujii. 2008. Task-oriented evaluation of syntactic parsers and their representations. In Proceedings of ACL-08: HLT, pages 46–54. Association for Computational Linguistics, Columbus, Ohio. URL http://www.aclweb.org/anthology/P/ P08/P08-1006. Yusuke Miyao, Kenji Sagae, and Jun’ichi Tsujii. 2007. Towards framework-independent evaluation of deep linguistic parsers. In Ann Copestake, editor, Proceedings of the GEAF 2007 Workshop, CSLI Studies in Computational Linguistics Online, page 21 pages. CSLI Publications. URL http://www.cs.cmu.edu/˜sagae/ docs/geaf07miyaoetal.pdf. Yusuke Miyao and Jun’ichi Tsujii. 2008. Feature forest models for probabilistic hpsg parsing. Comput. Linguist., 34(1):35–80. URL http://dx.doi.org/10. 1162/coli.2008.34.1.35. Joakim Nivre. 2008. Algorithms for deterministic incremental dependency parsing. Comput. Linguist., 34:513– 553. URL http://dx.doi.org/10.1162/coli. 07-056-R1-07-027. Joakim Nivre. 2009. Non-projective dependency parsing in expected linear time. In Proceedings of the Joint Conference of the 47th Annual Meeting of the ACL and the 4th International Joint Conference on Natural Language Processing of the AFNLP, pages 351– 359. Association for Computational Linguistics, Suntec, Singapore. URL http://www.aclweb.org/ anthology/P/P09/P09-1040. Joakim Nivre, Johan Hall, and Jens Nilsson. 2004. Memorybased dependency parsing. In Hwee Tou Ng and Ellen Riloff, editors, HLT-NAACL 2004 Workshop: Eighth Conference on Computational Natural Language Learning (CoNLL-2004), pages 49–56. Association for Computational Linguistics, Boston, Massachusetts, USA. Carl Pollard and Ivan A. Sag. 1994. Head-Driven Phrase Structure Grammar. The University of Chicago Press, Chicago. Laura Rimell, Stephen Clark, and Mark Steedman. 2009. Unbounded dependency recovery for parser evaluation. In Proceedings of the 2009 Conference on Empirical Methods in Natural Language Processing, pages 813–821. Association for Computational Linguistics, Singapore. URL http://www.aclweb.org/anthology/D/ D09/D09-1085. Kenji Sagae and Jun’ichi Tsujii. 2008. Shift-reduce dependency DAG parsing. In Proceedings of the 22nd International Conference on Computational Linguistics, pages 753–760. Coling 2008 Organizing Committee, Manchester, UK. URL http://www.aclweb.org/ anthology/C08-1095. Weiwei Sun and Hans Uszkoreit. 2012. 
Capturing paradigmatic and syntagmatic lexical relations: Towards accurate Chinese part-of-speech tagging. In Proceedings of the 50th Annual Meeting of the Association for Computational Linguistics. Association for Computational Linguistics. Weiwei Sun and Xiaojun Wan. 2013. Data-driven, pcfg-based and pseudo-pcfg-based models for Chinese dependency parsing. Transactions of the Association for Computational Linguistics (TACL). Weiwei Sun and Jia Xu. 2011. Enhancing Chinese word segmentation using unlabeled data. In Proceedings of the 2011 Conference on Empirical Methods in Natural Language Processing, pages 970–979. Association for Computational Linguistics, Edinburgh, 455 Scotland, UK. URL http://www.aclweb.org/ anthology/D11-1090. Ivan Titov, James Henderson, Paola Merlo, and Gabriele Musillo. 2009. Online graph planarisation for synchronous parsing of semantic and syntactic dependencies. In Proceedings of the 21st international jont conference on Artifical intelligence, pages 1562–1567. Morgan Kaufmann Publishers Inc., San Francisco, CA, USA. URL http://dl.acm.org/citation.cfm?id= 1661445.1661696. Andre Torres Martins, Noah Smith, and Eric Xing. 2009. Concise integer linear programming formulations for dependency parsing. In Proceedings of the Joint Conference of the 47th Annual Meeting of the ACL and the 4th International Joint Conference on Natural Language Processing of the AFNLP, pages 342–350. Association for Computational Linguistics, Suntec, Singapore. URL http://www.aclweb.org/anthology/P/ P09/P09-1039. Daniel Tse and James R. Curran. 2010. Chinese CCGbank: extracting CCG derivations from the penn Chinese treebank. In Proceedings of the 23rd International Conference on Computational Linguistics (Coling 2010), pages 1083–1091. Coling 2010 Organizing Committee, Beijing, China. URL http://www.aclweb.org/ anthology/C10-1122. Daniel Tse and James R. Curran. 2012. The challenges of parsing Chinese with combinatory categorial grammar. In Proceedings of the 2012 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, pages 295–304. Association for Computational Linguistics, Montr´eal, Canada. URL http://www.aclweb. org/anthology/N12-1030. Xianchao Wu, Takuya Matsuzaki, and Jun’ichi Tsujii. 2010. Fine-grained tree-to-string translation rule extraction. In Proceedings of the 48th Annual Meeting of the Association for Computational Linguistics, pages 325–334. Association for Computational Linguistics, Uppsala, Sweden. URL http://www.aclweb.org/ anthology/P10-1034. Fei Xia. 2001. Automatic grammar generation from two different perspectives. Ph.D. thesis, University of Pennsylvania. Naiwen Xue, Fei Xia, Fu-dong Chiou, and Marta Palmer. 2005. The penn Chinese treebank: Phrase structure annotation of a large corpus. Natural Language Engineering, 11:207–238. URL http://portal.acm.org/ citation.cfm?id=1064781.1064785. Nianwen Xue. 2007. Tapping the implicit information for the PS to DS conversion of the Chinese treebank. In Proceedings of the Sixth International Workshop on Treebanks and Linguistics Theories. Hiroyasu Yamada and Yuji Matsumoto. 2003. Statistical dependency analysis with support vector machines. In The 8th International Workshop of Parsing Technologies (IWPT2003), pages 195–206. Kun Yu, Yusuke Miyao, Takuya Matsuzaki, Xiangli Wang, and Junichi Tsujii. 2011. Analysis of the difficulties in Chinese deep parsing. In Proceedings of the 12th International Conference on Parsing Technologies, pages 48–57. 
Association for Computational Linguistics, Dublin, Ireland. URL http://www.aclweb.org/ anthology/W11-2907. Kun Yu, Miyao Yusuke, Xiangli Wang, Takuya Matsuzaki, and Junichi Tsujii. 2010. Semi-automatically developing Chinese hpsg grammar from the penn Chinese treebank for deep parsing. In Coling 2010: Posters, pages 1417–1425. Coling 2010 Organizing Committee, Beijing, China. URL http://www.aclweb.org/ anthology/C10-2162. Yue Zhang and Stephen Clark. 2009. Transition-based parsing of the Chinese treebank using a global discriminative model. In Proceedings of the 11th International Conference on Parsing Technologies (IWPT’09), pages 162–171. Association for Computational Linguistics, Paris, France. URL http://www.aclweb.org/ anthology/W09-3825. Yue Zhang and Stephen Clark. 2011. Shift-reduce CCG parsing. In Proceedings of the 49th Annual Meeting of the Association for Computational Linguistics: Human Language Technologies, pages 683–692. Association for Computational Linguistics, Portland, Oregon, USA. URL http://www.aclweb.org/ anthology/P11-1069. Yue Zhang and Joakim Nivre. 2011. Transition-based dependency parsing with rich non-local features. In Proceedings of the 49th Annual Meeting of the Association for Computational Linguistics: Human Language Technologies, pages 188–193. Association for Computational Linguistics, Portland, Oregon, USA. URL http://www. aclweb.org/anthology/P11-2033. 456
Proceedings of the 52nd Annual Meeting of the Association for Computational Linguistics, pages 457–467, Baltimore, Maryland, USA, June 23-25 2014. c⃝2014 Association for Computational Linguistics Ambiguity-aware Ensemble Training for Semi-supervised Dependency Parsing Zhenghua Li , Min Zhang∗, Wenliang Chen Provincial Key Laboratory for Computer Information Processing Technology Soochow University {zhli13,minzhang,wlchen}@suda.edu.cn Abstract This paper proposes a simple yet effective framework for semi-supervised dependency parsing at entire tree level, referred to as ambiguity-aware ensemble training. Instead of only using 1best parse trees in previous work, our core idea is to utilize parse forest (ambiguous labelings) to combine multiple 1-best parse trees generated from diverse parsers on unlabeled data. With a conditional random field based probabilistic dependency parser, our training objective is to maximize mixed likelihood of labeled data and auto-parsed unlabeled data with ambiguous labelings. This framework offers two promising advantages. 1) ambiguity encoded in parse forests compromises noise in 1-best parse trees. During training, the parser is aware of these ambiguous structures, and has the flexibility to distribute probability mass to its preferred parse trees as long as the likelihood improves. 2) diverse syntactic structures produced by different parsers can be naturally compiled into forest, offering complementary strength to our single-view parser. Experimental results on benchmark data show that our method significantly outperforms the baseline supervised parser and other entire-tree based semi-supervised methods, such as self-training, co-training and tri-training. 1 Introduction Supervised dependency parsing has made great progress during the past decade. However, it is very difficult to further improve performance ∗Correspondence author of supervised parsers. For example, Koo and Collins (2010) and Zhang and McDonald (2012) show that incorporating higher-order features into a graph-based parser only leads to modest increase in parsing accuracy. In contrast, semi-supervised approaches, which can make use of large-scale unlabeled data, have attracted more and more interest. Previously, unlabeled data is explored to derive useful local-context features such as word clusters (Koo et al., 2008), subtree frequencies (Chen et al., 2009; Chen et al., 2013), and word co-occurrence counts (Zhou et al., 2011; Bansal and Klein, 2011). A few effective learning methods are also proposed for dependency parsing to implicitly utilize distributions on unlabeled data (Smith and Eisner, 2007; Wang et al., 2008; Suzuki et al., 2009). All above work leads to significant improvement on parsing accuracy. Another line of research is to pick up some high-quality auto-parsed training instances from unlabeled data using bootstrapping methods, such as self-training (Yarowsky, 1995), co-training (Blum and Mitchell, 1998), and tri-training (Zhou and Li, 2005). However, these methods gain limited success in dependency parsing. Although working well on constituent parsing (McClosky et al., 2006; Huang and Harper, 2009), self-training is shown unsuccessful for dependency parsing (Spreyer and Kuhn, 2009). The reason may be that dependency parsing models are prone to amplify previous mistakes during training on self-parsed unlabeled data. Sagae and Tsujii (2007) apply a variant of co-training to dependency parsing and report positive results on out-of-domain text. 
Søgaard and Rishøj (2010) combine tri-training and parser ensemble to boost parsing accuracy. Both work employs two parsers to process the unlabeled data, and only select as extra training data sentences on which the 1-best parse trees of the two parsers are identical. In this way, the autoparsed unlabeled data becomes more reliable. 457 w0 He1 saw2 a3 deer4 riding5 a6 bicycle7 in8 the9 park10 .11 Figure 1: An example sentence with an ambiguous parse forest. However, one obvious drawback of these methods is that they are unable to exploit unlabeled data with divergent outputs from different parsers. Our experiments show that unlabeled data with identical outputs from different parsers tends to be short (18.25 words per sentence on average), and only has a small proportion of 40% (see Table 6). More importantly, we believe that unlabeled data with divergent outputs is equally (if not more) useful. Intuitively, an unlabeled sentence with divergent outputs should contain some ambiguous syntactic structures (such as preposition phrase attachment) that are very hard to resolve and lead to the disagreement of different parsers. Such sentences can provide more discriminative instances for training which may be unavailable in labeled data. To solve above issues, this paper proposes a more general and effective framework for semi-supervised dependency parsing, referred to as ambiguity-aware ensemble training. Different from traditional self/co/tri-training which only use 1-best parse trees on unlabeled data, our approach adopts ambiguous labelings, represented by parse forest, as gold-standard for unlabeled sentences. Figure 1 shows an example sentence with an ambiguous parse forest. The forest is formed by two parse trees, respectively shown at the upper and lower sides of the sentence. The differences between the two parse trees are highlighted using dashed arcs. The upper tree take “deer” as the subject of “riding”, whereas the lower one indicates that “he” rides the bicycle. The other difference is where the preposition phrase (PP) “in the park” should be attached, which is also known as the PP attachment problem, a notorious challenge for parsing. Reserving such uncertainty has three potential advantages. First, noise in unlabeled data is largely alleviated, since parse forest encodes only a few highly possible parse trees with high oracle score. Please note that the parse forest in Figure 1 contains four parse trees after combination of the two different choices. Second, the parser is able to learn useful features from the unambiguous parts of the parse forest. Finally, with sufficient unlabeled data, it is possible that the parser can learn to resolve such uncertainty by biasing to more reasonable parse trees. To construct parse forest on unlabeled data, we employ three supervised parsers based on different paradigms, including our baseline graph-based dependency parser, a transition-based dependency parser (Zhang and Nivre, 2011), and a generative constituent parser (Petrov and Klein, 2007). The 1-best parse trees of these three parsers are aggregated in different ways. Evaluation on labeled data shows the oracle accuracy of parse forest is much higher than that of 1-best outputs of single parsers (see Table 3). Finally, using a conditional random field (CRF) based probabilistic parser, we train a better model by maximizing mixed likelihood of labeled data and auto-parsed unlabeled data with ambiguous labelings. 
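As one simple way of aggregating 1-best outputs into an ambiguous forest, the sketch below unions the head choices proposed for each token; this per-token representation is an illustrative assumption, since the paper combines the parsers' outputs in several different ways.

def build_forest(trees):
    # trees: list of 1-best trees, each a dict {modifier_index: (head_index, label)}
    forest = {}
    for tree in trees:
        for mod, (head, label) in tree.items():
            forest.setdefault(mod, set()).add((head, label))   # union candidate heads per token
    return forest

def is_unambiguous(forest):
    # True when all parsers agreed on every token (the classical tri-training case)
    return all(len(cands) == 1 for cands in forest.values())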
Experimental results on both English and Chinese datasets demonstrate that the proposed ambiguity-aware ensemble training outperforms other entire-tree based methods such as self/co/tri-training. In summary, we make the following contributions.

1. We propose a generalized ambiguity-aware ensemble training framework for semi-supervised dependency parsing, which can make better use of unlabeled data, especially when parsers from different views produce divergent syntactic structures.

2. We are the first to employ a generative constituent parser for semi-supervised dependency parsing. Experiments show that the constituent parser is very helpful since it produces more divergent structures for our semi-supervised parser than discriminative dependency parsers.

3. We build the first state-of-the-art CRF-based dependency parser. Using the probabilistic parser, we benchmark and conduct systematic comparisons among our approach and all previous bootstrapping methods, including self/co/tri-training.

2 Supervised Dependency Parsing

Given an input sentence x = w0w1...wn, the goal of dependency parsing is to build a dependency tree as depicted in Figure 1, denoted by d = {(h, m) : 0 ≤ h ≤ n, 0 < m ≤ n}, where (h, m) indicates a directed arc from the head word wh to the modifier wm, and w0 is an artificial node linking to the root of the sentence. In the parsing community, two mainstream methods tackle the dependency parsing problem from different perspectives but achieve comparable accuracy on a variety of languages. The graph-based method views the problem as finding an optimal tree from a fully-connected directed graph (McDonald et al., 2005; McDonald and Pereira, 2006; Carreras, 2007; Koo and Collins, 2010), while the transition-based method tries to find a highest-scoring transition sequence that leads to a legal dependency tree (Yamada and Matsumoto, 2003; Nivre, 2003; Zhang and Nivre, 2011).

2.1 Graph-based Dependency Parser (GParser)

In this work, we adopt the graph-based paradigm because it allows us to naturally derive the conditional probability of a dependency tree d given a sentence x, which is required to compute the likelihood of both labeled and unlabeled data. Under the graph-based model, the score of a dependency tree is factored into the scores of small subtrees p:

Score(x, d; w) = w · f(x, d) = Σ_{p⊆d} Score(x, p; w)

Figure 2: Two types of scoring subtrees in our second-order graph-based parsers: (a) a single dependency (h, m); (b) an adjacent sibling structure (h, s, m).

Dependency features fdep(x, h, m):  wh, wm, th, tm, th±1, tm±1, tb, dir(h, m), dist(h, m)
Sibling features fsib(x, h, s, m):  wh, ws, wm, th, tm, ts, th±1, tm±1, ts±1, dir(h, m), dist(h, m)
Table 1: Brief illustration of the syntactic features. ti denotes the POS tag of wi; b is an index between h and m; dir(i, j) and dist(i, j) denote the direction and distance of the dependency (i, j).

We adopt the second-order graph-based dependency parsing model of McDonald and Pereira (2006) as our core parser, which incorporates features from the two kinds of subtrees in Fig. 2. (Higher-order models of Carreras (2007) and Koo and Collins (2010) can achieve higher accuracy, but have a much higher time cost of O(n^4); our approach is applicable to these higher-order models, which we leave for future work.) Then the score of a dependency tree is:

Score(x, d; w) = Σ_{{(h,m)}⊆d} wdep · fdep(x, h, m) + Σ_{{(h,s),(h,m)}⊆d} wsib · fsib(x, h, s, m)

where fdep(x, h, m) and fsib(x, h, s, m) are the feature vectors of the two subtrees in Fig. 2, and wdep/sib are the corresponding feature weight vectors; the dot products give the scores contributed by the corresponding subtrees. For syntactic features, we adopt those of Bohnet (2010), which include two categories corresponding to the two types of scoring subtrees in Fig. 2. A schematic computation of this factored score is sketched below.
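The factored score can be pictured with a small sketch: the tree score is the sum of per-dependency and per-adjacent-sibling part scores, each a dot product of a sparse feature set with a weight vector. The feature extractors f_dep and f_sib stand in for the Bohnet (2010)-style templates and are left as placeholders (illustrative assumptions, not the authors' code).

from collections import defaultdict

def part_score(weights, features):
    # dot product of a sparse binary feature set with a weight dictionary
    return sum(weights.get(f, 0.0) for f in features)

def adjacent_siblings(tree):
    # pair up neighbouring modifiers of the same head on the same side,
    # moving outward from the head
    mods = defaultdict(list)
    for (h, m) in tree:
        mods[h].append(m)
    pairs = []
    for h, ms in mods.items():
        left = sorted(m for m in ms if m < h)[::-1]   # closest-to-head first
        right = sorted(m for m in ms if m > h)
        for side in (left, right):
            pairs.extend((h, s, m) for s, m in zip(side, side[1:]))
    return pairs

def tree_score(sentence, tree, w_dep, w_sib, f_dep, f_sib):
    # tree: a list of (head, modifier) arcs
    score = sum(part_score(w_dep, f_dep(sentence, h, m)) for (h, m) in tree)
    score += sum(part_score(w_sib, f_sib(sentence, h, s, m))
                 for (h, s, m) in adjacent_siblings(tree))
    return score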
We summarize the atomic features used in each feature category in Table 1. These atomic features are concatenated in different combinations to compose rich feature sets. Please refer to Table 4 of Bohnet (2010) for the complete feature list.

2.2 CRF-based GParser

Previous work on graph-based dependency parsing mostly adopts linear models and perceptron-based training procedures, which lack probabilistic explanations of dependency trees and do not need to compute the likelihood of the labeled training data. Instead, we build a log-linear CRF-based dependency parser, which is similar to the CRF-based constituent parser of Finkel et al. (2008).

(Footnote 1: Higher-order models of Carreras (2007) and Koo and Collins (2010) can achieve higher accuracy, but have a much higher time cost (O(n^4)). Our approach is applicable to these higher-order models, which we leave for future work.)

Assuming the feature weights w are known, the probability of a dependency tree d given an input sentence x is defined as:

p(d|x; \mathbf{w}) = \frac{\exp\{\mathrm{Score}(x, d; \mathbf{w})\}}{Z(x; \mathbf{w})}, \qquad Z(x; \mathbf{w}) = \sum_{d' \in \mathcal{Y}(x)} \exp\{\mathrm{Score}(x, d'; \mathbf{w})\}   (1)

where Z(x; w) is the normalization factor and Y(x) is the set of all legal dependency trees for x. Suppose the labeled training data is D = \{(x_i, d_i)\}_{i=1}^{N}. Then the log likelihood of D is:

\mathcal{L}(D; \mathbf{w}) = \sum_{i=1}^{N} \log p(d_i|x_i; \mathbf{w})

The training objective is to maximize the log likelihood of the training data L(D). The partial derivative with respect to the feature weights w is:

\frac{\partial \mathcal{L}(D; \mathbf{w})}{\partial \mathbf{w}} = \sum_{i=1}^{N} \Big( \mathbf{f}(x_i, d_i) - \sum_{d' \in \mathcal{Y}(x_i)} p(d'|x_i; \mathbf{w})\, \mathbf{f}(x_i, d') \Big)   (2)

where the first term is the empirical feature counts and the second term is the model expectations. Since Y(x_i) contains exponentially many dependency trees, direct calculation of the second term is prohibitive. Instead, we can use the classic inside-outside algorithm to efficiently compute the model expectations in O(n^3) time, where n is the input sentence length.

3 Ambiguity-aware Ensemble Training

In standard entire-tree based semi-supervised methods such as self/co/tri-training, automatically parsed unlabeled sentences are used as additional training data, and noisy 1-best parse trees are treated as gold standard. To alleviate the noise, the tri-training method only uses unlabeled data on which multiple parsers from different views produce identical parse trees. However, unlabeled data with divergent syntactic structures should be more useful. Intuitively, if several parsers disagree on an unlabeled sentence, it implies that the sentence contains some difficult syntactic phenomena which are not sufficiently covered in the manually labeled data. Therefore, exploiting such unlabeled data may introduce more discriminative syntactic knowledge, largely complementing the labeled training data. To address the above issues, we propose ambiguity-aware ensemble training, which can be interpreted as a generalized tri-training framework. The key idea is the use of ambiguous labelings for the purpose of aggregating multiple 1-best parse trees produced by several diverse parsers. Here, "ambiguous labelings" means that an unlabeled sentence may have multiple parse trees as its gold-standard reference, represented by a parse forest (see Figure 1). The training procedure aims to maximize the mixed likelihood of both the manually labeled data and the auto-parsed unlabeled data with ambiguous labelings. For an unlabeled instance, the model is updated to maximize the probability of its parse forest, instead of a single parse tree as in traditional tri-training.
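As an illustration of the gradient in Eq. (2), the sketch below computes the contribution of a single labeled sentence for a first-order (arc-factored) simplification of the model; the helper arc_marginals, assumed here, stands for the inside-outside computation of arc marginals. The forest-constrained gradient of Eq. (3) in the next section differs only in that the first term becomes an expectation restricted to the parse forest.

```python
from collections import defaultdict

def crf_gradient_one_sentence(words, tags, gold_heads, w_dep, f_dep, arc_marginals):
    """Gradient contribution of one labeled sentence (Eq. 2), arc-factored case.

    arc_marginals(words, tags, w_dep, f_dep) is assumed to return a dict
    {(h, m): p((h, m) in d' | x; w)} computed with the inside-outside algorithm.
    """
    grad = defaultdict(float)

    # Empirical feature counts: features fired by the gold tree.
    for m, h in enumerate(gold_heads):
        if m == 0:
            continue  # skip the artificial root
        for f in f_dep(words, tags, h, m):
            grad[f] += 1.0

    # Model expectations: features of every possible arc, weighted by its marginal.
    marginals = arc_marginals(words, tags, w_dep, f_dep)
    for (h, m), p in marginals.items():
        for f in f_dep(words, tags, h, m):
            grad[f] -= p

    return grad
```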
In other words, for an unlabeled instance the model is free to distribute probability mass among the trees in its parse forest as it likes, as long as the likelihood improves (Täckström et al., 2013).

3.1 Likelihood of the Unlabeled Data

The auto-parsed unlabeled data with ambiguous labelings is denoted as D' = \{(u_i, V_i)\}_{i=1}^{M}, where u_i is an unlabeled sentence and V_i is the corresponding parse forest. Then the log likelihood of D' is:

\mathcal{L}(D'; \mathbf{w}) = \sum_{i=1}^{M} \log \Big( \sum_{d' \in V_i} p(d'|u_i; \mathbf{w}) \Big)

where p(d'|u_i; w) is the conditional probability of d' given u_i, as defined in Eq. (1). For an unlabeled sentence u_i, the probability of its parse forest V_i is the summation of the probabilities of all the parse trees contained in the forest. Then we can derive the partial derivative of the log likelihood with respect to w:

\frac{\partial \mathcal{L}(D'; \mathbf{w})}{\partial \mathbf{w}} = \sum_{i=1}^{M} \Big( \sum_{d' \in V_i} \tilde{p}(d'|u_i, V_i; \mathbf{w})\, \mathbf{f}(u_i, d') - \sum_{d' \in \mathcal{Y}(u_i)} p(d'|u_i; \mathbf{w})\, \mathbf{f}(u_i, d') \Big)   (3)

where \tilde{p}(d'|u_i, V_i; w) is the probability of d' under the space constrained by the parse forest V_i:

\tilde{p}(d'|u_i, V_i; \mathbf{w}) = \frac{\exp\{\mathrm{Score}(u_i, d'; \mathbf{w})\}}{Z(u_i, V_i; \mathbf{w})}, \qquad Z(u_i, V_i; \mathbf{w}) = \sum_{d' \in V_i} \exp\{\mathrm{Score}(u_i, d'; \mathbf{w})\}

The second term in Eq. (3) is the same as the second term in Eq. (2). The first term in Eq. (3) can be efficiently computed by running the inside-outside algorithm in the search space constrained by V_i.

3.2 Stochastic Gradient Descent (SGD) Training

We apply L2-norm regularized SGD training to iteratively learn the feature weights w for our CRF-based baseline and semi-supervised parsers. We follow the implementation in CRFsuite (http://www.chokkan.org/software/crfsuite/). At each step, the algorithm approximates the gradient with a small subset of the training examples, and then updates the feature weights. Finkel et al. (2008) show that SGD achieves optimal test performance with far fewer iterations than other optimization routines such as L-BFGS. Moreover, it is very convenient to parallelize SGD, since the computations for examples in the same batch are mutually independent.

When training with the combined labeled and unlabeled data, the objective is to maximize the mixed likelihood:

\mathcal{L}(D; D') = \mathcal{L}(D) + \mathcal{L}(D')

Since D' contains many more instances than D (1.7M vs. 40K for English, and 4M vs. 16K for Chinese), it is likely that the unlabeled data may overwhelm the labeled data during SGD training. Therefore, we propose a simple corpus-weighting strategy, as shown in Algorithm 1, where D^b_{i,k} is the subset of training data used in the kth update, b is the batch size, and η_k is the update step, which is adjusted following the simulated annealing procedure (Finkel et al., 2008). The idea is to use a fraction of the training data (D_i) at each iteration, and to do corpus weighting by randomly sampling labeled and unlabeled instances in a certain proportion (N_1 vs. M_1).

Algorithm 1 SGD training with mixed labeled and unlabeled data.
1: Input: labeled data D = \{(x_i, d_i)\}_{i=1}^{N} and unlabeled data D' = \{(u_i, V_i)\}_{i=1}^{M}; parameters: I, N_1, M_1, b
2: Output: w
3: Initialization: w_0 = 0, k = 0
4: for i = 1 to I do {iterations}
5:   Randomly select N_1 instances from D and M_1 instances from D' to compose a new dataset D_i, and shuffle it.
6:   Traverse D_i, one small batch D^b_{i,k} ⊆ D_i at each step.
7:   w_{k+1} = w_k + η_k (1/b) ∇L(D^b_{i,k}; w_k)
8:   k = k + 1
9: end for

Once the feature weights w are learnt, we can parse the test data to find the optimal parse tree:

d^{*} = \arg\max_{d' \in \mathcal{Y}(x)} p(d'|x; \mathbf{w}) = \arg\max_{d' \in \mathcal{Y}(x)} \mathrm{Score}(x, d'; \mathbf{w})

This can be done with the Viterbi decoding algorithm described in McDonald and Pereira (2006) in O(n^3) parsing time.
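The corpus-weighting strategy of Algorithm 1 can be sketched as follows. This is a simplified illustration, not the authors' code: the per-instance gradient functions for labeled data (Eq. 2) and for unlabeled data with parse forests (Eq. 3) are assumed as black boxes, the L2 regularization term is omitted, and the step-size decay is only a stand-in for the simulated-annealing schedule.

```python
import random
from collections import defaultdict

def sgd_mixed(labeled, unlabeled, grad_labeled, grad_unlabeled,
              iterations=100, n1=20000, m1=50000, batch_size=100, eta0=0.1):
    """SGD over a mixture of labeled data and unlabeled data with parse forests.

    labeled:   list of (sentence, gold_tree) pairs
    unlabeled: list of (sentence, parse_forest) pairs
    grad_*:    return a feature->value dict for one instance given current weights
    """
    w = defaultdict(float)
    k = 0
    for _ in range(iterations):
        # Corpus weighting: sample N1 labeled and M1 unlabeled instances per iteration.
        pool = ([("L", x) for x in random.sample(labeled, min(n1, len(labeled)))] +
                [("U", x) for x in random.sample(unlabeled, min(m1, len(unlabeled)))])
        random.shuffle(pool)

        for start in range(0, len(pool), batch_size):
            batch = pool[start:start + batch_size]
            grad = defaultdict(float)
            for kind, inst in batch:
                g = grad_labeled(inst, w) if kind == "L" else grad_unlabeled(inst, w)
                for f, v in g.items():
                    grad[f] += v
            eta = eta0 / (1.0 + 1e-4 * k)      # simplified annealing-style decay
            for f, v in grad.items():
                w[f] += eta * v / len(batch)   # gradient ascent on the log likelihood
            k += 1
    return w
```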
3.3 Forest Construction with Diverse Parsers To construct parse forests for unlabeled data, we employ three diverse parsers, i.e., our baseline GParser, a transition-based parser (ZPar3) (Zhang and Nivre, 2011), and a generative constituent parser (Berkeley Parser4) (Petrov and Klein, 2007). These three parsers are trained on labeled data and then used to parse each unlabeled sentence. We aggregate the three parsers’ outputs on unlabeled data in different ways and evaluate the effectiveness through experiments. 4 Experiments and Analysis To verify the effectiveness of our proposed approach, we conduct experiments on Penn Treebank (PTB) and Penn Chinese Treebank 5.1 (CTB5). For English, we follow the popular practice to split data into training (sections 2-21), development (section 22), and test (section 23). For CTB5, we adopt the data split of (Duan et al., 2007). We convert original bracketed structures into dependency structures using Penn2Malt with its default head-finding rules. For unlabeled data, we follow Chen et al. (2013) and use the BLLIP WSJ corpus (Charniak et al., 2000) for English and Xinhua portion of Chinese 3http://people.sutd.edu.sg/˜yue_zhang/doc/ 4https://code.google.com/p/berkeleyparser/ 461 Train Dev Test Unlabeled PTB 39,832 1,700 2,416 1.7M CTB5 16,091 803 1,910 4M Table 2: Data sets (in sentence number). Gigaword Version 2.0 (LDC2009T14) (Huang, 2009) for Chinese. We build a CRF-based bigram part-of-speech (POS) tagger with the features described in (Li et al., 2012), and produce POS tags for all train/development/test/unlabeled sets (10way jackknifing for training sets). The tagging accuracy on test sets is 97.3% on English and 94.0% on Chinese. Table 2 shows the data statistics. We measure parsing performance using the standard unlabeled attachment score (UAS), excluding punctuation marks. For significance test, we adopt Dan Bikel’s randomized parsing evaluation comparator (Noreen, 1989).5 4.1 Parameter Setting When training our CRF-based parsers with SGD, we use the batch size b = 100 for all experiments. We run SGD for I = 100 iterations and choose the model that performs best on development data. For the semi-supervised parsers trained with Algorithm 1, we use N1 = 20K and M1 = 50K for English, and N1 = 15K and M1 = 50K for Chinese, based on a few preliminary experiments. To accelerate the training, we adopt parallelized implementation of SGD and employ 20 threads for each run. For semi-supervised cases, one iteration takes about 2 hours on an IBM server having 2.0 GHz Intel Xeon CPUs and 72G memory. Default parameter settings are used for training ZPar and Berkeley Parser. We run ZPar for 50 iterations, and choose the model that achieves highest accuracy on the development data. For Berkeley Parser, we use the model after 5 splitmerge iterations to avoid over-fitting the training data according to the manual. The phrasestructure outputs of Berkeley Parser are converted into dependency structures using the same headfinding rules. 4.2 Methodology Study on Development Data Using three supervised parsers, we have many options to construct parse forest on unlabeled data. To examine the effect of different ways for forest construction, we conduct extensive methodology study on development data. Table 3 presents the 5http://www.cis.upenn.edu/˜dbikel/software.html results. We divide the systems into three types: 1) supervised single parsers; 2) CRF-based GParser with conventional self/co/tri-training; 3) CRFbased GParser with our approach. 
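To make the forest construction of Section 3.3 concrete, the sketch below merges the 1-best head sequences of several parsers into per-word candidate head sets and computes the head-per-word and oracle statistics used in the following analysis. It is a simplified illustration (punctuation handling and dependency labels are ignored), not the authors' code.

```python
def build_forest(head_predictions):
    """Merge 1-best trees (lists of heads, one list per parser) into a parse forest.

    Returns, for each modifier position, the set of candidate heads.
    """
    n = len(head_predictions[0])
    return [set(preds[m] for preds in head_predictions) for m in range(n)]

def forest_statistics(forests, gold_trees=None):
    """Average number of candidate heads per word and, if gold trees are given,
    the oracle accuracy (a gold arc is recoverable if its head is in the candidate set)."""
    total_heads, total_words, oracle_hits = 0, 0, 0
    for i, forest in enumerate(forests):
        for m in range(1, len(forest)):      # skip the artificial root w0
            total_heads += len(forest[m])
            total_words += 1
            if gold_trees is not None and gold_trees[i][m] in forest[m]:
                oracle_hits += 1
    head_per_word = total_heads / total_words
    oracle = oracle_hits / total_words if gold_trees is not None else None
    return head_per_word, oracle

# Example: combine Berkeley Parser and ZPar outputs ("Unlabeled <- B+Z"):
# forest = build_forest([berkeley_heads, zpar_heads])
```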
For the latter two cases, we also present the oracle accuracy and averaged head number per word (“Head/Word”) of parse forest when applying different ways to construct forests on development datasets. The first major row presents performance of the three supervised parsers. We can see that the three parsers achieve comparable performance on English, but the performance of ZPar is largely inferior on Chinese. The second major row shows the results when we use single 1-best parse trees on unlabeled data. When using the outputs of GParser itself (“Unlabeled ←G”), the experiment reproduces traditional self-training. The results on both English and Chinese re-confirm that self-training may not work for dependency parsing, which is consistent with previous studies (Spreyer and Kuhn, 2009). The reason may be that dependency parsers are prone to amplify previous mistakes on unlabeled data during training. The next two experiments in the second major row reimplement co-training, where another parser’s 1-best results are projected into unlabeled data to help the core parser. Using unlabeled data with the results of ZPar (“Unlabeled ←Z”) significantly outperforms the baseline GParser by 0.30% (93.15-82.85) on English. However, the improvement on Chinese is not significant. Using unlabeled data with the results of Berkeley Parser (“Unlabeled ←B”) significantly improves parsing accuracy by 0.55% (93.40-92.85) on English and 1.06% (83.34-82.28) on Chinese. We believe the reason is that being a generative model designed for constituent parsing, Berkeley Parser is more different from discriminative dependency parsers, and therefore can provide more divergent syntactic structures. This kind of syntactic divergence is helpful because it can provide complementary knowledge from a different perspective. Surdeanu and Manning (2010) also show that the diversity of parsers is important for performance improvement when integrating different parsers in the supervised track. Therefore, we can conclude that co-training helps dependency parsing, especially when using a more divergent parser. The last experiment in the second major row is known as tri-training, which only uses unla462 English Chinese UAS Oracle Head/Word UAS Oracle Head/Word GParser 92.85 — — 82.28 — — Supervised ZPar 92.50 81.04 Berkeley 92.70 82.46 Unlabeled ←G (self-train) 92.88 92.85 1.000 82.14 82.28 1.000 Semi-supervised GParser Unlabeled ←Z (co-train) 93.15 † 92.50 82.54 81.04 with Single 1-best Trees Unlabeled ←B (co-train) 93.40 † 92.70 83.34 † 82.46 Unlabeled ←B=Z (tri-train) 93.50 † 97.52 83.10 † 95.05 Unlabeled ←Z+G 93.18 † 94.97 1.053 82.78 86.66 1.136 Unlabeled ←B+G 93.35 † 96.37 1.080 83.24 † 89.72 1.188 Semi-supervised GParser Unlabeled ←B+Z 93.78 †‡ 96.18 1.082 83.86 †‡ 89.54 1.199 Ambiguity-aware Ensemble Unlabeled ←B+(Z∩G) 93.77 †‡ 95.60 1.050 84.26 †‡ 87.76 1.106 Unlabeled ←B+Z+G 93.50 † 96.95 1.112 83.30 † 91.50 1.281 Table 3: Main results on development data. G is short for GParser, Z for ZPar, and B for Berkeley Parser. † means the corresponding parser significantly outperforms supervised parsers, and ‡ means the result significantly outperforms co/tri-training at confidence level of p < 0.01. beled sentences on which Berkeley Parser and ZPar produce identical outputs (“Unlabeled ← B=Z”). We can see that with the verification of two views, the oracle accuracy is much higher than using single parsers (97.52% vs. 92.85% on English, and 95.06% vs. 82.46% on Chinese). 
Although using less unlabeled sentences (0.7M for English and 1.2M for Chinese), tri-training achieves comparable performance to co-training (slightly better on English and slightly worse on Chinese). The third major row shows the results of the semi-supervised GParser with our proposed approach. We experiment with different combinations of the 1-best parse trees of the three supervised parsers. The first three experiments combine 1-best outputs of two parsers to compose parse forest on unlabeled data. “Unlabeled ← B+(Z∩G)” means that the parse forest is initialized with the Berkeley parse and augmented with the intersection of dependencies of the 1-best outputs of ZPar and GParser. In the last setting, the parse forest contains all three 1-best results. When the parse forests of the unlabeled data are the union of the outputs of GParser and ZPar, denoted as “Unlabeled ←Z+G”, each word has 1.053 candidate heads on English and 1.136 on Chinese, and the oracle accuracy is higher than using 1-best outputs of single parsers (94.97% vs. 92.85% on English, 86.66% vs. 82.46% on Chinese). However, we find that although the parser significantly outperforms the supervised GParser on English, it does not gain significant improvement over co-training with ZPar (“Unlabeled ←Z”) on both English and Chinese. Combining the outputs of Berkeley Parser and GParser (“Unlabeled ←B+G”), we get higher oracle score (96.37% on English and 89.72% on Chinese) and higher syntactic divergence (1.085 candidate heads per word on English, and 1.188 on Chinese) than “Unlabeled ←Z+G”, which verifies our earlier discussion that Berkeley Parser produces more different structures than ZPar. However, it leads to slightly worse accuracy than co-training with Berkeley Parser (“Unlabeled ← B”). This indicates that adding the outputs of GParser itself does not help the model. Combining the outputs of Berkeley Parser and ZPar (“Unlabeled ←B+Z”), we get the best performance on English, which is also significantly better than both co-training (“Unlabeled ←B”) and tri-training (“Unlabeled ←B=Z”) on both English and Chinese. This demonstrates that our proposed approach can better exploit unlabeled data than traditional self/co/tri-training. More analysis and discussions are in Section 4.4. During experimental trials, we find that “Unlabeled ←B+(Z∩G)” can further boost performance on Chinese. A possible explanation is that by using the intersection of the outputs of GParser and ZPar, the size of the parse forest is better controlled, which is helpful considering that ZPar performs worse on this data than both Berkeley Parser and GParser. Adding the output of GParser itself (“Unlabeled ←B+Z+G”) leads to accuracy drop, although the oracle score is higher (96.95% on English and 91.50% on Chinese) than “Unlabeled ←B+Z”. We suspect the reason is that the model is likely to distribute the probability mass to these parse trees produced by itself instead of those by Berkeley Parser or ZPar under this setting. 463 Sup Semi McDonald and Pereira (2006) 91.5 — Koo and Collins (2010) [higher-order] 93.04 Zhang and McDonald (2012) [higher-order] 93.06 Zhang and Nivre (2011) [higher-order] 92.9 Koo et al. (2008) [higher-order] 92.02 93.16 Chen et al. (2009) [higher-order] 92.40 93.16 Suzuki et al. (2009) [higher-order,cluster] 92.70 93.79 Zhou et al. (2011) [higher-order] 91.98 92.64 Chen et al. (2013) [higher-order] 92.76 93.77 This work 92.34 93.19 Table 4: UAS comparison on English test data. 
In summary, we can conclude that our proposed ambiguity-aware ensemble training is significantly better than both the supervised approaches and the semi-supervised approaches that use 1-best parse trees. Appropriately composing the forest parse, our approach outperforms the best results of co-training or tri-training by 0.28% (93.78-93.50) on English and 0.92% (84.26-83.34) on Chinese. 4.3 Comparison with Previous Work We adopt the best settings on development data for semi-supervised GParser with our proposed approach, and make comparison with previous results on test data. Table 4 shows the results. The first major row lists several state-of-theart supervised methods. McDonald and Pereira (2006) propose a second-order graph-based parser, but use a smaller feature set than our work. Koo and Collins (2010) propose a third-order graphbased parser. Zhang and McDonald (2012) explore higher-order features for graph-based dependency parsing, and adopt beam search for fast decoding. Zhang and Nivre (2011) propose a feature-rich transition-based parser. All work in the second major row adopts semi-supervised methods. The results show that our approach achieves comparable accuracy with most previous semi-supervised methods. Both Suzuki et al. (2009) and Chen et al. (2013) adopt the higherorder parsing model of Carreras (2007), and Suzuki et al. (2009) also incorporate word cluster features proposed by Koo et al. (2008) in their system. We expect our approach may achieve higher performance with such enhancements, which we leave for future work. Moreover, our method may be combined with other semi-supervised approaches, since they are orthogonal in methodology and utilize unlabeled data from different perspectives. Table 5 make comparisons with previous results UAS Supervised Li et al. (2012) [joint] 82.37 Bohnet and Nivre (2012) [joint] 81.42 Chen et al. (2013) [higher-order] 81.01 This work 81.14 Semi Chen et al. (2013) [higher-order] 83.08 This work 82.89 Table 5: UAS comparison on Chinese test data. Unlabeled data UAS #Sent Len Head/Word Oracle NULL 92.34 0 — — — Consistent (tri-train) 92.94 0.7M 18.25 1.000 97.65 Low divergence 92.94 0.5M 28.19 1.062 96.53 High divergence 93.03 0.5M 27.85 1.211 94.28 ALL 93.19 1.7M 24.15 1.087 96.09 Table 6: Performance of our semi-supervised GParser with different sets of “Unlabeled ← B+Z” on English test set. “Len” means averaged sentence length. on Chinese test data. Li et al. (2012) and Bohnet and Nivre (2012) use joint models for POS tagging and dependency parsing, significantly outperforming their pipeline counterparts. Our approach can be combined with their work to utilize unlabeled data to improve both POS tagging and parsing simultaneously. Our work achieves comparable accuracy with Chen et al. (2013), although they adopt the higher-order model of Carreras (2007). Again, our method may be combined with their work to achieve higher performance. 4.4 Analysis To better understand the effectiveness of our proposed approach, we make detailed analysis using the semi-supervised GParser with “Unlabeled ← B+Z” on English datasets. Contribution of unlabeled data with regard to syntactic divergence: We divide the unlabeled data into three sets according to the divergence of the 1-best outputs of Berkeley Parser and ZPar. The first set contains those sentences that the two parsers produce identical parse trees, denoted by “consistent”, which corresponds to the setting for tri-training. 
Other sentences are split into two sets according to the average number of heads per word in their parse forests, denoted by "low divergence" and "high divergence" respectively. Then we train the semi-supervised GParser using the three sets of unlabeled data. Table 6 illustrates the results and statistics. We can see that unlabeled data with identical outputs from Berkeley Parser and ZPar tends to consist of short sentences (18.25 words per sentence on average). The results show that all three sets of unlabeled data can help the parser. In particular, the unlabeled data with highly divergent structures leads to a slightly larger improvement. This demonstrates that our approach can better exploit unlabeled data on which parsers of different views produce divergent structures.

Impact of unlabeled data size: To understand how our approach performs with regard to the unlabeled data size, we train the semi-supervised GParser with different sizes of unlabeled data. Fig. 3 shows the accuracy curve on the test set. We can see that the parser consistently achieves higher accuracy with more unlabeled data, demonstrating the effectiveness of our approach. We expect that our approach has the potential to achieve higher accuracy with additional data.

Figure 3: Performance of GParser with different sizes of "Unlabeled ← B+Z" on the English test set (UAS plotted against unlabeled data sizes from 0 to 1.7M sentences).

5 Related Work

Our work is originally inspired by the work of Täckström et al. (2013). They first apply the idea of ambiguous labelings to multilingual parser transfer in the unsupervised parsing field, which aims to build a dependency parser for a resource-poor target language by making use of source-language treebanks. Different from their work, we explore the idea for semi-supervised dependency parsing, where a certain amount of labeled training data is available. Moreover, we are the first to build a state-of-the-art CRF-based dependency parser and to conduct in-depth comparisons with previous methods. Similar ideas of learning with ambiguous labelings have previously been explored for classification (Jin and Ghahramani, 2002) and sequence labeling problems (Dredze et al., 2009). Our work is also related to parser ensemble approaches such as stacked learning and re-parsing in the supervised track. Stacked learning uses one parser's outputs as guide features for another parser, leading to improved performance (Nivre and McDonald, 2008; Torres Martins et al., 2008). Re-parsing merges the outputs of several parsers into a dependency graph, and then applies Viterbi decoding to find a better tree (Sagae and Lavie, 2006; Surdeanu and Manning, 2010). One possible drawback of parser ensembles is that several parsers are required to parse the same sentence during the test phase. Moreover, our approach can benefit from these methods, in that we can obtain parse forests of higher quality on unlabeled data (Zhou, 2009).

6 Conclusions

This paper proposes a generalized training framework for semi-supervised dependency parsing based on ambiguous labelings. For each unlabeled sentence, we combine the 1-best parse trees of several diverse parsers to compose ambiguous labelings, represented by a parse forest. The training objective is to maximize the mixed likelihood of both the labeled data and the auto-parsed unlabeled data with ambiguous labelings.
Experiments show that our framework can make better use of the unlabeled data, especially those with divergent outputs from different parsers, than traditional tri-training. Detailed analysis demonstrates the effectiveness of our approach. Specifically, we find that our approach is very effective when using divergent parsers such as the generative parser, and it is also helpful to properly balance the size and oracle accuracy of the parse forest of the unlabeled data. For future work, among other possible extensions, we would like to see how our approach performs when employing more diverse parsers to compose the parse forest of higher quality for the unlabeled data, such as the easyfirst non-directional dependency parser (Goldberg and Elhadad, 2010) and other constituent parsers (Collins and Koo, 2005; Charniak and Johnson, 2005; Finkel et al., 2008). Acknowledgments The authors would like to thank the critical and insightful comments from our anonymous reviewers. This work was supported by National Natural Science Foundation of China (Grant No. 61373095, 61333018). 465 References Mohit Bansal and Dan Klein. 2011. Web-scale features for full-scale parsing. In Proceedings of ACL, pages 693–702. Avrim Blum and Tom Mitchell. 1998. Combining labeled and unlabeled data with co-training. In Proceedings of the 11th Annual Conference on Computational Learning Theory, pages 92–100. Bernd Bohnet and Joakim Nivre. 2012. A transitionbased system for joint part-of-speech tagging and labeled non-projective dependency parsing. In Proceedings of EMNLP 2012, pages 1455–1465. Bernd Bohnet. 2010. Top accuracy and fast dependency parsing is not a contradiction. In Proceedings of COLING, pages 89–97. Xavier Carreras. 2007. Experiments with a higherorder projective dependency parser. In Proceedings of EMNLP/CoNLL, pages 141–150. Eugene Charniak and Mark Johnson. 2005. Coarseto-fine n-best parsing and maxent discriminative reranking. In Proceedings of ACL, pages 173–180. Eugene Charniak, Don Blaheta, Niyu Ge, Keith Hall, John Hale, and Mark Johnson. 2000. BLLIP 1987-89 WSJ Corpus Release 1, LDC2000T43. Linguistic Data Consortium. Wenliang Chen, Jun’ichi Kazama, Kiyotaka Uchimoto, and Kentaro Torisawa. 2009. Improving dependency parsing with subtrees from auto-parsed data. In Proceedings of EMNLP, pages 570–579. Wenliang Chen, Min Zhang, and Yue Zhang. 2013. Semi-supervised feature transformation for dependency parsing. In Proceedings of EMNLP, pages 1303–1313. Michael J. Collins and Terry Koo. 2005. Discriminative reranking for natural language parsing. Computational Linguistics, pages 25–70. Mark Dredze, Partha Pratim Talukdar, and Koby Crammer. 2009. Sequence learning from data with multiple labels. In ECML/PKDD Workshop on Learning from Multi-Label Data. Xiangyu Duan, Jun Zhao, and Bo Xu. 2007. Probabilistic models for action-based Chinese dependency parsing. In Proceedings of ECML/ECPPKDD, pages 559–566. Jenny Rose Finkel, Alex Kleeman, and Christopher D. Manning. 2008. Efficient, feature-based, conditional random field parsing. In Proceedings of ACL, pages 959–967. Yoav Goldberg and Michael Elhadad. 2010. An efficient algorithm for easy-first non-directional dependency parsing. In Proceedings of NAACL. Zhongqiang Huang and Mary Harper. 2009. Selftraining PCFG grammars with latent annotations across languages. In Proceedings of EMNLP 2009, pages 832–841. Chu-Ren Huang. 2009. Tagged Chinese Gigaword Version 2.0, LDC2009T14. Linguistic Data Consortium. Rong Jin and Zoubin Ghahramani. 2002. 
Learning with multiple labels. In Proceedings of NIPS. Terry Koo and Michael Collins. 2010. Efficient thirdorder dependency parsers. In ACL, pages 1–11. Terry Koo, Xavier Carreras, and Michael Collins. 2008. Simple semi-supervised dependency parsing. In Proceedings of ACL, pages 595–603. Zhenghua Li, Min Zhang, Wanxiang Che, and Ting Liu. 2012. A separately passive-aggressive training algorithm for joint POS tagging and dependency parsing. In COLING 2012, pages 1681–1698. David McClosky, Eugene Charniak, and Mark Johnson. 2006. Effective self-training for parsing. In Proceedings of the Human Language Technology Conference of the NAACL, pages 152–159. Ryan McDonald and Fernando Pereira. 2006. Online learning of approximate dependency parsing algorithms. In Proceedings of EACL, pages 81–88. Ryan McDonald, Koby Crammer, and Fernando Pereira. 2005. Online large-margin training of dependency parsers. In Proceedings of ACL, pages 91–98. Joakim Nivre and Ryan McDonald. 2008. Integrating graph-based and transition-based dependency parsers. In Proceedings of ACL, pages 950–958. Joakim Nivre. 2003. An efficient algorithm for projective dependency parsing. In Proceedings of IWPT, pages 149–160. Eric W. Noreen. 1989. Computer-intensive methods for testing hypotheses: An introduction. John Wiley & Sons, Inc., New York. Slav Petrov and Dan Klein. 2007. Improved inference for unlexicalized parsing. In Proceedings of NAACL. Kenji Sagae and Alon Lavie. 2006. Parser combination by reparsing. In Proceedings of NAACL, pages 129–132. Kenji Sagae and Jun’ichi Tsujii. 2007. Dependency parsing and domain adaptation with LR models and parser ensembles. In Proceedings of the CoNLL Shared Task Session of EMNLP-CoNLL, pages 1044–1050. David A. Smith and Jason Eisner. 2007. Bootstrapping feature-rich dependency parsers with entropic priors. In Proceedings of EMNLP, pages 667–677. 466 Anders Søgaard and Christian Rishøj. 2010. Semisupervised dependency parsing using generalized tri-training. In Proceedings of ACL, pages 1065– 1073. Kathrin Spreyer and Jonas Kuhn. 2009. Datadriven dependency parsing of new languages using incomplete and noisy training data. In CoNLL, pages 12–20. Mihai Surdeanu and Christopher D. Manning. 2010. Ensemble models for dependency parsing: Cheap and good? In Proceedings of NAACL, pages 649– 652. Jun Suzuki, Hideki Isozaki, Xavier Carreras, and Michael Collins. 2009. An empirical study of semi-supervised structured conditional models for dependency parsing. In Proceedings of EMNLP, pages 551–560. Oscar T¨ackstr¨om, Ryan McDonald, and Joakim Nivre. 2013. Target language adaptation of discriminative transfer parsers. In Proceedings of NAACL, pages 1061–1071. Andr´e Filipe Torres Martins, Dipanjan Das, Noah A. Smith, and Eric P. Xing. 2008. Stacking dependency parsers. In Proceedings of EMNLP, pages 157–166. Qin Iris Wang, Dale Schuurmans, and Dekang Lin. 2008. Semi-supervised convex training for dependency parsing. In Proceedings of ACL, pages 532–540. Hiroyasu Yamada and Yuji Matsumoto. 2003. Statistical dependency analysis with support vector machines. In Proceedings of IWPT, pages 195–206. David Yarowsky. 1995. Unsupervised word sense disambiguation rivaling supervised methods. In Proceedings of ACL, pages 189–196. Hao Zhang and Ryan McDonald. 2012. Generalized higher-order dependency parsing with cube pruning. In Proceedings of EMNLP-CoNLL, pages 320–331. Yue Zhang and Joakim Nivre. 2011. Transition-based dependency parsing with rich non-local features. In Proceedings of ACL, pages 188–193. 
Zhi-Hua Zhou and Ming Li. 2005. Tri-training: Exploiting unlabeled data using three classifiers. In IEEE Transactions on Knowledge and Data Engineering, pages 1529–1541. Guangyou Zhou, Jun Zhao, Kang Liu, and Li Cai. 2011. Exploiting web-derived selectional preference to improve statistical dependency parsing. In Proceedings of ACL, pages 1556–1565. Zhi-Hua Zhou. 2009. When semi-supervised learning meets ensemble learning. In MCS. 467
Proceedings of the 52nd Annual Meeting of the Association for Computational Linguistics, pages 468–478, Baltimore, Maryland, USA, June 23-25 2014. c⃝2014 Association for Computational Linguistics A Robust Approach to Aligning Heterogeneous Lexical Resources Mohammad Taher Pilehvar and Roberto Navigli Department of Computer Science Sapienza University of Rome {pilehvar,navigli}@di.uniroma1.it Abstract Lexical resource alignment has been an active field of research over the last decade. However, prior methods for aligning lexical resources have been either specific to a particular pair of resources, or heavily dependent on the availability of hand-crafted alignment data for the pair of resources to be aligned. Here we present a unified approach that can be applied to an arbitrary pair of lexical resources, including machine-readable dictionaries with no network structure. Our approach leverages a similarity measure that enables the structural comparison of senses across lexical resources, achieving state-of-the-art performance on the task of aligning WordNet to three different collaborative resources: Wikipedia, Wiktionary and OmegaWiki. 1 Introduction Lexical resources are repositories of machinereadable knowledge that can be used in virtually any Natural Language Processing task. Notable examples are WordNet, Wikipedia and, more recently, collaboratively-curated resources such as OmegaWiki and Wiktionary (Hovy et al., 2013). On the one hand, these resources are heterogeneous in design, structure and content, but, on the other hand, they often provide complementary knowledge which we would like to see integrated. Given the large scale this intrinsic issue can only be addressed automatically, by means of lexical resource alignment algorithms. Owing to its ability to bring together features like multilinguality and increasing coverage, over the past few years resource alignment has proven beneficial to a wide spectrum of tasks, such as Semantic Parsing (Shi and Mihalcea, 2005), Semantic Role Labeling (Palmer et al., 2010), and Word Sense Disambiguation (Navigli and Ponzetto, 2012). Nevertheless, when it comes to aligning textual definitions in different resources, the lexical approach (Ruiz-Casado et al., 2005; de Melo and Weikum, 2010; Henrich et al., 2011) falls short because of the potential use of totally different wordings to define the same concept. Deeper approaches leverage semantic similarity to go beyond the surface realization of definitions (Navigli, 2006; Meyer and Gurevych, 2011; Niemann and Gurevych, 2011). While providing good results in general, these approaches fail when the definitions of a given word are not of adequate quality and expressiveness to be distinguishable from one another. When a lexical resource can be viewed as a semantic graph, as with WordNet or Wikipedia, this limit can be overcome by means of alignment algorithms that exploit the network structure to determine the similarity of concept pairs. However, not all lexical resources provide explicit semantic relations between concepts and, hence, machine-readable dictionaries like Wiktionary have first to be transformed into semantic graphs before such graph-based approaches can be applied to them. To do this, recent work has proposed graph construction by monosemous linking, where a concept is linked to all the concepts associated with the monosemous words in its definition (Matuschek and Gurevych, 2013). 
However, this alignment method still involves tuning of parameters which are highly dependent on the characteristics of the generated graphs and, hence, requires hand-crafted sense alignments for the specific pair of resources to be aligned, a task which has to be replicated every time the resources are updated. In this paper we propose a unified approach to aligning arbitrary pairs of lexical resources which is independent of their specific structure. Thanks to a novel modeling of the sense entries and an effective ontologization algorithm, our approach also fares well when resources lack relational structure or pair-specific training data is absent, meaning that it is applicable to arbitrary pairs 468 without adaptation. We report state-of-the-art performance when aligning WordNet to Wikipedia, OmegaWiki and Wiktionary. 2 Resource Alignment Preliminaries. Our approach for aligning lexical resources exploits the graph structure of each resource. Therefore, we assume that a lexical resource L can be represented as an undirected graph G = (V, E) where V is the set of nodes, i.e., the concepts defined in the resource, and E is the set of undirected edges, i.e., semantic relations between concepts. Each concept c ∈V is associated with a set of lexicalizations LG(c) = {w1, w2, ..., wn}. For instance, WordNet can be readily represented as an undirected graph G whose nodes are synsets and edges are modeled after the relations between synsets defined in WordNet (e.g., hypernymy, meronymy, etc.), and LG is the mapping between each synset node and the set of synonyms which express the concept. However, other resources such as Wiktionary do not provide semantic relations between concepts and, therefore, have first to be transformed into semantic networks before they can be aligned using our alignment algorithm. We explain in Section 3 how a semi-structured resource which does not exhibit a graph structure can be transformed into a semantic network. Alignment algorithm. Given a pair of lexical resources L1 and L2, we align each concept in L1 by mapping it to its corresponding concept(s) in the target lexicon L2. Algorithm 1 formalizes the alignment process: the algorithm takes as input the semantic graphs G1 and G2 corresponding to the two resources, as explained above, and produces as output an alignment in the form of a set A of concept pairs. The algorithm iterates over all concepts c1 ∈V1 and, for each of them, obtains the set of concepts C ⊂V2, which can be considered as alignment candidates for c1 (line 3). For a concept c1, alignment candidates in G2 usually consist of every concept c2 ∈V2 that shares at least one lexicalization with c1 in the same part of speech tag, i.e., LG1(c1) ∩LG2(c2) ̸= ∅(Reiter et al., 2008; Meyer and Gurevych, 2011). Once the set of target candidates C for a source concept c1 is obtained, the alignment task can be cast as that of identifying those concepts in C to which c1 should be aligned. To do this, the algorithm calculates the similarity between c1 and each c2 ∈C (line 5). 
If their similarity score exceeds a certain value denoted by θ Algorithm 1 Lexical Resource Aligner Input: graphs H = (VH, EH), G1 = (V1, E1) and G2 = (V2, E2), the similarity threshold θ, and the combination parameter β Output: A, the set of all aligned concept pairs 1: A ←∅ 2: for each concept c1 ∈V1 3: C ←getCandidates(c1, V2) 4: for each concept c2 ∈C 5: sim ←calculateSimilarity(H, G1, G2, c1, c2, β) 6: if sim > θ then 7: A ←A ∪{(c1, c2)} 8: return A (line 6), the two concepts c1 and c2 are aligned and the pair (c1, c2) is added to A (line 7). Different resource alignment techniques usually vary in the way they compute the similarity of a pair of concepts across two resources (line 5 in Algorithm 1). In the following, we present our novel approach for measuring the similarity of concept pairs. 2.1 Measuring the Similarity of Concepts Figure 1 illustrates the procedure underlying our cross-resource concept similarity measurement technique. As can be seen, the approach consists of two main components: definitional similarity and structural similarity. Each of these components gets, as its input, a pair of concepts belonging to two different semantic networks and produces a similarity score. These two scores are then combined into an overall score (part (e) of Figure 1) which quantifies the semantic similarity of the two input concepts c1 and c2. The definitional similarity component computes the similarity of two concepts in terms of the similarity of their definitions, a method that has also been used in previous work for aligning lexical resources (Niemann and Gurevych, 2011; Henrich et al., 2012). In spite of its simplicity, the mere calculation of the similarity of concept definitions provides a strong baseline, especially for cases where the definitional texts for a pair of concepts to be aligned are lexically similar, yet distinguishable from the other definitions. However, as mentioned in the introduction, definition similarity-based techniques fail at identifying the correct alignments in cases where different wordings are used or definitions are not of high quality. The structural similarity component, instead, is a novel graph-based similarity measurement technique which calculates the similarity between a pair of concepts across the semantic networks of the two resources by leveraging the semantic 469 Figure 1: The process of measuring the similarity of a pair of concepts across two resources. The method consists of two components: definitional and structural similarities, each measuring a similarity score for the given concept pair. The two scores are combined by means of parameter β in the last stage. structure of those networks. This component goes beyond the surface realization of concepts, thus providing a deeper measure of concept similarity. The two components share the same backbone (parts (b) and (d) of Figure 1), but differ in some stages (parts (a) and (c) in Figure 1). In the following, we explain all the stages involved in the two components (gray blocks in the figure). 2.1.1 Semantic signature generation The aim of this stage is to model a given concept or set of concepts through a vectorial semantic representation, which we refer to as the semantic signature of the input. We utilized Personalized PageRank (Haveliwala, 2002, PPR), a random walk graph algorithm, for calculating semantic signatures. 
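Before detailing how the semantic signatures are computed, the outer loop of Algorithm 1 can be sketched as follows. This is a simplified illustration: the names get_candidates, lex1/lex2 and similarity are hypothetical, candidate retrieval is shown as a linear scan rather than an index lookup, and part-of-speech matching is assumed to be folded into the lexicalization keys.

```python
def get_candidates(c1, lex1, concepts2, lex2):
    """Candidate targets: concepts of L2 sharing at least one lexicalization with c1."""
    words1 = set(lex1[c1])
    return [c2 for c2 in concepts2 if words1 & set(lex2[c2])]

def align(concepts1, concepts2, lex1, lex2, similarity, theta=0.5):
    """Align each concept of L1 to the L2 candidates whose similarity exceeds theta.

    similarity(c1, c2) is assumed to implement the combined definitional/structural
    measure described in Section 2.1.
    """
    alignments = set()
    for c1 in concepts1:
        for c2 in get_candidates(c1, lex1, concepts2, lex2):
            if similarity(c1, c2) > theta:
                alignments.add((c1, c2))
    return alignments
```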
The original PageRank (PR) algorithm (Brin and Page, 1998) computes, for a given graph, a single vector wherein each node is associated with a weight denoting its structural importance in that graph. PPR is a variation of PR where the computation is biased towards a set of initial nodes in order to capture the notion of importance with respect to those particular nodes. PPR has been previously used in a wide variety of tasks such as definition similarity-based resource alignment (Niemann and Gurevych, 2011), textual semantic similarity (Hughes and Ramage, 2007; Pilehvar et al., 2013), Word Sense Disambiguation (Agirre and Soroa, 2009; Faralli and Navigli, 2012) and semantic text categorization (Navigli et al., 2011). When applied to a semantic graph by initializing the random walks from a set of concepts (nodes), PPR yields a vector in which each concept is associated with a weight denoting its semantic relevance to the initial concepts. Formally, we first represent a semantic network consisting of N concepts as a row-stochastic transition matrix M ∈RN×N. The cell (i, j) in the matrix denotes the probability of moving from a concept i to j in the graph: 0 if no edge exists from i to j and 1/degree(i) otherwise. Then the PPR vector, hence the semantic signature Sv of vector v is the unique solution to the linear system: Sv = (1 −α) v + α M Sv, where v is the personalization vector of size N in which all the probability mass is put on the concepts for which a semantic signature is to be computed and α is the damping factor, which is usually set to 0.85 (Brin and Page, 1998). We used the UKB1 off-the-shelf implementation of PPR. Definitional similarity signature. In the definitional similarity component, the two concepts c1 and c2 are first represented by their corresponding definitions d1 and d2 in the respective resources L1 and L2 (Figure 1(a), top). To improve expressiveness, we follow Niemann and Gurevych (2011) and further extend di with all the word forms associated with concept ci and its neighbours, i.e., the union of all lexicalizations LGi(x) for all concepts x ∈{c′ ∈Vi : (c, c′) ∈Ei} ∪{c}, where Ei is the set of edges in Gi. In this component the personalization vector vi is set by uniformly distributing the probability mass over the nodes corresponding to the senses of all the content words in the extended definition of di according to the sense inventory of a semantic network H. We use the same semantic graph H for computing the semantic signatures of both definitions. Any semantic network with a dense relational structure, providing good coverage of the words appearing in the definitions, is a suitable candidate for H. For this purpose we used the WordNet (Fellbaum, 1998) graph which was further enriched by connecting 1http://ixa2.si.ehu.es/ukb/ 470 each concept to all the concepts appearing in its disambiguated gloss.2 Structural similarity signature. In the structural similarity component (Figure 1(b), bottom), the semantic signature for each concept ci is computed by running the PPR algorithm on its corresponding graph Gi, hence a different Mi is built for each of the two concepts. 2.1.2 Signature unification As mentioned earlier, semantic signatures are vectors with dimension equal to the number of nodes in the semantic graph. Since the structural similarity signatures Sv1 and Sv2 are calculated on different graphs and thus have different dimensions, we need to make them comparable by unifying them. 
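A minimal sketch of the signature-generation step described above, assuming a small adjacency-list graph; the authors use the UKB implementation of PPR, so this toy power-iteration version is only meant to illustrate the computation of S_v = (1 − α)v + αMS_v.

```python
def ppr_signature(graph, seeds, alpha=0.85, iterations=30):
    """Personalized PageRank over an adjacency-list graph {node: [neighbours]}.

    The personalization vector puts uniform mass on the seed nodes (e.g. the senses
    of the content words of an extended definition, or a single concept for the
    structural signature); the result is the semantic signature S_v.
    All neighbours are assumed to appear as keys of the graph as well.
    """
    nodes = list(graph)
    v = {n: 0.0 for n in nodes}
    for s in seeds:
        v[s] = 1.0 / len(seeds)
    s_v = dict(v)                                  # start from the personalization vector
    for _ in range(iterations):
        nxt = {n: (1.0 - alpha) * v[n] for n in nodes}
        for n in nodes:
            if graph[n]:
                share = alpha * s_v[n] / len(graph[n])
                for m in graph[n]:
                    nxt[m] += share                # distribute mass to neighbours
        s_v = nxt
    return s_v
```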
To unify them, we propose an approach (part (c) of Figure 1) that finds a common ground between the two signatures: to this end we consider all the concepts associated with monosemous words in the two signatures as landmarks and restrict the two signatures exclusively to those common concepts. Leveraging monosemous words as bridges between two signatures is a particularly reliable technique, as typically a significant portion of all words in a lexicon are monosemous (Footnote 3). Formally, let I_G(w) be an inventory mapping function that maps a term w to the set of concepts which are expressed by w in graph G. Then, given two signatures S_{v_1} and S_{v_2}, computed on the respective graphs G_1 and G_2, we first obtain the set M of words that are monosemous according to both semantic networks, i.e., M = \{w : |I_{G_1}(w)| = 1 \wedge |I_{G_2}(w)| = 1\}. We then transform each of the two signatures S_{v_i} into a new sub-signature S'_{v_i} whose dimension is |M|: the kth component of S'_{v_i} corresponds to the weight in S_{v_i} of the only concept of w_k in I_{G_i}(w_k). As an example, assume we are given two semantic signatures computed for two concepts in WordNet and Wiktionary. Also, consider the noun tradeoff, which is monosemous according to both these resources. Then, each of the two unified sub-signatures will contain a component whose weight is determined by the weight of the only concept associated with the noun tradeoff in the corresponding semantic signature. As a result of the unification process, we obtain a pair of equally-sized semantic signatures with comparable components.

(Footnote 2: http://wordnet.princeton.edu)
(Footnote 3: For instance, we calculated that more than 80% of the words in WordNet are monosemous, with over 60% of all the synsets containing at least one of them.)

2.1.3 Signature comparison

Having at hand the semantic signatures for the two input concepts, we proceed to comparing them (part (d) in Figure 1). We leverage a non-parametric measure proposed by Pilehvar et al. (2013) which first transforms each signature into a list of sorted elements and then calculates the similarity on the basis of the average ranking of elements across the two lists:

\mathrm{Sim}(S_{v_1}, S_{v_2}) = \frac{\sum_{i=1}^{|T|} (r_i^1 + r_i^2)^{-1}}{\sum_{i=1}^{|T|} (2i)^{-1}}   (1)

where T is the intersection of all concepts with non-zero probability in the two signatures and r_i^j is the rank of the ith entry in the jth sorted list. The denominator is a normalization factor that guarantees a maximum value of one. The method penalizes differences in the higher rankings more than it does those in the lower ones. The measure was shown to outperform the conventional cosine distance when comparing different semantic signatures in multiple textual similarity tasks (Pilehvar et al., 2013).

2.1.4 Score combination

Finally (part (e) of Figure 1), we calculate the overall similarity between two concepts as a linear combination of their definitional and structural similarities: \beta\, \mathrm{Sim}_{def}(S_{v_1}, S_{v_2}) + (1 - \beta)\, \mathrm{Sim}_{str}(S_{v_1}, S_{v_2}). In Section 4.2.1, we explain how we set, in our experiments, the values of \beta and the similarity threshold \theta (cf. the alignment algorithm in Section 2).

3 Lexical Resource Ontologization

In Section 2, we presented our approach for aligning lexical resources. However, the approach assumes that the input resources can be viewed as semantic networks, which seems to limit its applicability to structured resources only.
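Before turning to the ontologization procedure that addresses this limitation, the following sketch pulls together Sections 2.1.2–2.1.4: unifying two structural signatures via shared monosemous words, comparing the resulting sub-signatures with the rank-based measure of Eq. (1), and combining definitional and structural scores with β. All names are illustrative and the inventory mappings are assumed inputs.

```python
def unify(sig1, sig2, inv1, inv2):
    """Restrict two signatures to the concepts of words monosemous in both graphs.

    inv1/inv2: word -> list of concepts expressing it (the inventory mapping I_G).
    Returns two equally-sized sub-signatures as parallel lists of weights.
    """
    shared = [w for w in set(inv1) & set(inv2)
              if len(inv1[w]) == 1 and len(inv2[w]) == 1]
    sub1 = [sig1.get(inv1[w][0], 0.0) for w in shared]
    sub2 = [sig2.get(inv2[w][0], 0.0) for w in shared]
    return sub1, sub2

def rank_similarity(sub1, sub2):
    """Eq. (1): compare the rankings of the components non-zero in both signatures."""
    common = [i for i in range(len(sub1)) if sub1[i] > 0 and sub2[i] > 0]
    rank1 = {i: r + 1 for r, i in enumerate(sorted(common, key=lambda i: -sub1[i]))}
    rank2 = {i: r + 1 for r, i in enumerate(sorted(common, key=lambda i: -sub2[i]))}
    num = sum(1.0 / (rank1[i] + rank2[i]) for i in common)
    den = sum(1.0 / (2 * (r + 1)) for r in range(len(common)))
    return num / den if den else 0.0

def combined_similarity(sim_def, sim_str, beta=0.5):
    """Section 2.1.4: linear combination of definitional and structural similarity."""
    return beta * sim_def + (1.0 - beta) * sim_str
```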
In order to address this issue and hence generalize our alignment approach to any given lexical resource, we propose a method for transforming a given machine-readable dictionary into a semantic network, a process we refer to as ontologization. Our ontologization algorithm takes as input a lexicon L and outputs a semantic graph G = (V, E) where, as already defined in Section 2, V is the set of concepts in L and E is the set of semantic relations between these concepts. Introducing relational links into a lexicon can be achieved in different ways. A first option is to extract binary 471 relations between pairs of words from raw text. Both words in these relations, however, should be disambiguated according to the given lexicon (Pantel and Pennacchiotti, 2008), making the task particularly prone to mistakes due to the high number of possible sense pairings. Here, we take an alternative approach which requires disambiguation on the target side only, hence reducing the size of the search space significantly. We first create the empty undirected graph GL = (V, E) such that V is the set of concepts in L and E = ∅. For each source concept c ∈V we create a bag of content words W = {w1, . . . , wn} which includes all the content words in its definition d and, if available, additional related words obtained from lexicon relations (e.g., synonyms in Wiktionary). The problem is then cast as a disambiguation task whose goal is to identify the intended sense of each word wi ∈W according to the sense inventory of L: if wi is monosemous, i.e., |{IGL(wi)}| = 1, we connect our source concept c to the only sense cwi of wi and set E := E ∪{{c, cwi}}; else, wi has multiple senses in L. In this latter case, we choose the most appropriate concept ci ∈IGL(wi) by finding the maximal similarity between the definition of c and the definitions of each sense of wi. To do this, we apply our definitional similarity measure introduced in Section 2.1. Having found the intended sense ˆcwi of wi, we add the edge {c, ˆcwi} to E. As a result of this procedure, we obtain a semantic graph representation G for the lexicon L. As an example, consider the 4th sense of the noun cone in Wiktionary (i.e., cone4 n) which is defined as “The fruit of a conifer”. The definition contains two content words: fruitn and conifern. The latter word is monosemous in Wiktionary, hence we directly connect cone4 n to the only sense of conifern. The noun fruit, however, has 5 senses in Wiktionary. We therefore measure the similarity between the definition of cone4 n and all the 5 definitions of fruit and introduce a link from cone4 n to the sense of fruit which yields the maximal similarity value (defined as “(botany) The seedbearing part of a plant...”). 4 Experiments Lexical resources. To enable a comparison with the state of the art, we followed Matuschek and Gurevych (2013) and performed an alignment of WordNet synsets (WN) to three different collaboratively-constructed resources: Wikipedia (WP), Wiktionary (WT), and OmegaWiki (OW). We utilized the DKPro software (Zesch et al., 2008; Gurevych et al., 2012) to access the information in the foregoing three resources. For WP, WT, OW we used the dump versions 20090822, 20131002, and 20131115, respectively. Evaluation measures. We followed previous work (Navigli and Ponzetto, 2012; Matuschek and Gurevych, 2013) and evaluated the alignment performance in terms of four measures: precision, recall, F1, and accuracy. 
Precision is the fraction of correct alignment judgments returned by the system and recall is the fraction of alignment judgments in the gold standard dataset that are correctly returned by the system. F1 is the harmonic mean of precision and recall. We also report results for accuracy which, in addition to true positives, takes into account true negatives, i.e., pairs which are correctly judged as unaligned. Lexicons and semantic graphs. Here, we describe how the four semantic graphs for our four lexical resources (i.e., WN, WP, WT, OW) were constructed. As mentioned in Section 2.1.1, we build the WN graph by including all the synsets and semantic relations defined in WordNet (e.g., hypernymy and meronymy) and further populate the relation set by connecting a synset to all the other synsets that appear in its disambiguated gloss. For WP, we used the graph provided by Matuschek and Gurevych (2013), constructed by directly connecting an article (concept) to all the hyperlinks in its first paragraph, together with the category links. Our WN and WP graphs have 118K and 2.8M nodes, respectively, with the average node degree being roughly 9 in both resources. The other two resources, i.e., WT and OW, do not provide a reliable network of semantic relations, therefore we used our ontologization approach to construct their corresponding semantic graphs. We report, in the following subsection, the experiments carried out to assess the accuracy of our ontologization method, together with the statistics of the obtained graphs for WT and OW. 4.1 Ontologization Experiments For ontologizing WT and OW, the bag of content words W is given by the content words in sense definitions and, if available, additional related words obtained from lexicon relations (see Section 3). In WT, both of these are in word surface form and hence had to be disambiguated. For OW, however, the encoded relations, though rela472 Source Type WT OW Definition Ambiguous 76.6% 50.7% Unambiguous 18.3% 32.9% Relation Ambiguous 2.8% Unambiguous 2.3% 16.4% Total number of edges 2.1M 255K Table 1: The statistics of the generated graphs for WT and OW. We report the distribution of the edges across types (i.e., ambiguous and unambiguous) and sources (i.e., definitions and relations) from which candidate words were obtained. tively small in number, are already disambiguated and, therefore, the ontologization was just performed on the definition’s content words. The resulting graphs for WT and OW contain 430K and 48K nodes, respectively, each providing more than 95% coverage of concepts, with the average node degree being around 10 for both resources. We present in Table 1, for WT and OW, the total number of edges together with their distribution across types (i.e., ambiguous and unambiguous) and sources (i.e., definitions and relations) from which candidate words were obtained. The edges obtained from unambiguous entries are essentially sense disambiguated on both sides whereas those obtained from ambiguous terms are a result of our similarity-based disambiguation. Hence, given that a large portion of edges came from ambiguous words (see Table 1), we carried out an experiment to evaluate the accuracy of our disambiguation method. To this end, we took as our benchmark the dataset provided by Meyer and Gurevych (2010) for evaluating relation disambiguation in WT. The dataset contains 394 manually-disambiguated relations. 
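The disambiguation step evaluated here is the similarity-based procedure used for ontologization (Section 3); a minimal sketch of it, assuming a definitional-similarity function between sense definitions, is given below (all names are illustrative, not the authors' code).

```python
def ontologize(concepts, definitions, content_words, inventory, def_similarity):
    """Build an undirected semantic graph over the concepts of a lexicon L.

    content_words[c]: content words of c's definition (plus related words, if any)
    inventory[w]:     concepts of L that word w can express (the mapping I_GL)
    def_similarity:   similarity between two definitions (definitional component, Sec. 2.1)
    """
    edges = set()
    for c in concepts:
        for w in content_words[c]:
            senses = inventory.get(w, [])
            if not senses:
                continue
            if len(senses) == 1:                 # monosemous: link directly
                target = senses[0]
            else:                                # polysemous: pick the closest definition
                target = max(senses,
                             key=lambda s: def_similarity(definitions[c], definitions[s]))
            if target != c:
                edges.add(frozenset((c, target)))   # undirected edge {c, target}
    return edges
```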
We compared our similarity-based disambiguation approach against the state of the art on this dataset, i.e., the WKTWSD system, which is a WT relation disambiguation algorithm based on a series of rules (Meyer and Gurevych, 2012b). Table 2 shows the performance of our disambiguation method, together with that of WKTWSD, in terms of Precision (P), Recall (R), F1, and accuracy. The “Human” row corresponds to the inter-rater F1 and accuracy scores, i.e., the upperbound performance on this dataset, as calculated by Meyer and Gurevych (2010). As can be seen, our method proves to be very accurate, surpassing the performance of the WKTWSD system in terms of precision, F1, and accuracy. This is particularly Approach P R F1 A WKTWSD 0.780 0.800 0.790 0.840 Our method 0.852 0.767 0.807 0.857 Human 0.890 0.910 Table 2: The performance of relation disambiguation for our similarity-based disambiguation method, as well as for the WKTWSD system. interesting as the WKTWSD system uses a rulebased technique specific to relation disambiguation in WT, whereas our method is resource independent and can be applied to arbitrary words in the definition of any concept. We also note that the graph constructed by Meyer and Gurevych (2010) had an average node degree of around 1. More recently, Matuschek and Gurevych (2013) leveraged monosemous linking (cf. Section 5) in order to create denser semantic graphs for OW and WT. Our approach, however, thanks to the connections obtained through ambiguous words, can provide graphs with significantly higher coverage. As an example, for WT, Matuschek and Gurevych (2013) generated a graph where around 30% of the nodes were in isolation, whereas this number drops to around 5% in our corresponding graph. These results show that our ontologization approach can be used to obtain dense semantic graph representations of lexical resources, while at the same time preserving a high level of accuracy. Now that all the four resources are transformed into semantic graphs, we move to our alignment experiments. 4.2 Alignment Experiments 4.2.1 Experimental setup Datasets. As our benchmark we tested on the gold standard datasets used in Matuschek and Gurevych (2013) for three alignment tasks: WordNet-Wikipedia (WN-WP), WordNetWiktionary (WN-WT), and WordNet-OmegaWiki (WN-OW). However, the dataset for WN-OW was originally built for the German language and, hence, was missing many English OW concepts that could be considered as candidate target alignments. We therefore fixed the dataset for the English language and reproduced the performance of previous work on the new dataset. The three datasets contained 320, 484, and 315 WN concepts that were manually mapped to their corresponding concepts in WP, WT, and OW, respectively. 473 Approach Training type WN-WP WN-WT WN-OW P R F1 A P R F1 A P R F1 A SB Cross-val. 0.780 0.780 0.780 0.950 0.670 0.650 0.660 0.910 0.749 0.691 0.716 0.886 DWSA Tuning on subset 0.750 0.670 0.710 0.930 0.680 0.270 0.390 0.890 0.651 0.372 0.473 0.830 SB+DWSA Cross-val. + tuning 0.750 0.870 0.810 0.950 0.680 0.710 0.690 0.920 0.794 0.688 0.735 0.898 SemAlign Unsupervised 0.709 0.929 0.805 0.943 0.642 0.799 0.712 0.923 0.664 0.761 0.709 0.872 Tuning on subset 0.877 0.792 0.833 0.960 0.672 0.799 0.730 0.930 0.750 0.717 0.733 0.893 Cross-val. 
0.852 0.835 0.840 0.965 0.680 0.769 0.722 0.931 0.778 0.725 0.749 0.900 Tuning on WN-WP 0.754 0.627 0.684 0.931 0.825 0.584 0.684 0.889 Tuning on WN-WT 0.738 0.934 0.824 0.950 0.805 0.677 0.736 0.900 Tuning on WN-OW 0.744 0.925 0.824 0.950 0.684 0.766 0.723 0.930 Table 3: The performance of different systems on the task of aligning WordNet to Wikipedia (WN-WP), Wiktionary (WN-WT), and OmegaWiki (WN-OW) in terms of Precision (P), Recall (R), F1, and Accuracy (A). We present results for different configurations of our system (SemAlign), together with the state of the art in definition similarity-based alignment approaches (SB) and the best configuration of the stateof-the-art graph-based system, Dijkstra-WSA (Matuschek and Gurevych, 2013, DWSA). Configurations. Recall from Section 2 that our resource alignment technique has two parameters: the similarity threshold θ and the combination parameter β, both defined in [0, 1]. We performed experiments with three different configurations: • Unsupervised, where the two parameters are set to their middle values (i.e., 0.5), hence, no tuning is performed for either of the parameters. In this case, both the definitional and structural similarity scores are treated as equally important and two concepts are aligned if their overall similarity exceeds the middle point of the similarity scale. • Tuning, where we follow Matuschek and Gurevych (2013) and tune the parameters on a subset of the dataset comprising 100 items. • Cross-validation, where a 5-fold cross validation is carried out to find the optimal values for the parameters, a technique used in most of the recent alignment methods (Niemann and Gurevych, 2011; Meyer and Gurevych, 2012a; Matuschek and Gurevych, 2013). 4.2.2 Results We show in Table 3 the alignment performance of different systems on the task of aligning WN-WP, WN-WT, and WN-OW in terms of Precision (P), Recall (R), F1, and Accuracy. The SB system corresponds to the state-of-the-art definition similarity approaches for WN-WP (Niemann and Gurevych, 2011), WN-WT (Meyer and Gurevych, 2011), and WN-OW (Gurevych et al., 2012). DWSA stands for Dijkstra-WSA, the state-of-the-art graph-based alignment approach of Matuschek and Gurevych (2013). The authors also provided results for SB+Dijkstra-WSA, a hybrid system where DWSA was tuned for high precision and, in the case when no alignment target could be found, the algorithm fell back on SB judgments. We also show the results for this system as SB+DWSA in the table. For our approach (SemAlign) we show the results of six different runs each corresponding to a different setting. The first three (middle part of the table) correspond to the results obtained with the three configurations of SemAlign: unsupervised, with tuning on subset, and cross-validation (see Section 4.2.1). In addition to these, we performed experiments where the two parameters of SemAlign were tuned on pair-independent training data, i.e., a training dataset for a pair of resources different from the one being aligned. For this setting, we used the whole dataset of the corresponding resource pair to tune the two parameters of our system. We show the results for this setting in the bottom part of the table (last three lines). 
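To make the three configurations concrete, the sketch below assumes that the overall concept similarity is a linear interpolation of the definitional and structural scores weighted by β, with alignment decided by the threshold θ; Section 2 gives the paper's exact combination, so this should be read only as an approximation that matches the unsupervised setting where both parameters sit at their middle value of 0.5. The small grid search mirrors the tuning-on-subset configuration; all names are ours.

```python
from itertools import product

def align(def_sim, str_sim, beta=0.5, theta=0.5):
    """Align two concepts if the beta-weighted combination of their
    definitional and structural similarities reaches the threshold theta."""
    return beta * def_sim + (1.0 - beta) * str_sim >= theta

def tune(pairs, gold, grid=(0.1, 0.3, 0.5, 0.7, 0.9)):
    """Pick (beta, theta) maximising F1 on a small labelled subset, as in
    the tuning configuration (100 items in the paper). `pairs` maps each
    candidate pair to its (definitional, structural) similarity scores and
    `gold` maps the same pairs to True/False alignment judgments."""
    def f1(beta, theta):
        pred = {p: align(d, s, beta, theta) for p, (d, s) in pairs.items()}
        tp = sum(pred[p] and gold[p] for p in gold)
        fp = sum(pred[p] and not gold[p] for p in gold)
        fn = sum(not pred[p] and gold[p] for p in gold)
        return 2 * tp / (2 * tp + fp + fn) if tp else 0.0
    return max(product(grid, grid), key=lambda bt: f1(*bt))
```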
The main feature worth remarking upon is the consistency of the results across different resource pairs: the unsupervised system obtains the best recall among the three configurations (with the improvement over SB+DWSA being always statistically significant; all significance tests are z-tests at p < 0.05), whereas tuning, whether on a subset or through cross-validation, consistently leads to the best performance in terms of F1 and accuracy (with the latter being statistically significant with respect to SB+DWSA on WN-WP and WN-WT). Moreover, the unsupervised system proves to be very robust inasmuch as it provides competitive results on all three datasets, while it surpasses the performance of SB+DWSA on WN-WT. This is particularly interesting as the latter system involves the tuning of several parameters, whereas SemAlign, in its unsupervised configuration, needs no training data and involves no tuning. In addition, as can be seen in the table, SemAlign benefits from pair-independent training data in most cases across the three resource pairs, with performance surpassing that of SB+DWSA, a system which depends on pair-specific training data. The consistency in the performance of SemAlign in its different configurations and across different resource pairs indicates its robustness and shows that our system can be used effectively for aligning any pair of lexical resources, irrespective of their structure or the availability of training data.

The system performance is generally higher on the alignment task for WP compared to WT and OW. We attribute this difference to the dictionary nature of the latter two, where sense distinctions are more fine-grained, as opposed to the relatively concrete concepts in the WP encyclopedia.

4.3 Similarity Measure Analysis

We explained in Section 2.1 that our concept similarity measure consists of two components: the definitional and the structural similarities. Measuring the similarity of two concepts in terms of their definitions has been investigated in previous work (Niemann and Gurevych, 2011; Henrich et al., 2012). The structural similarity component of our approach, however, is novel, and at the same time one of the very few measures that enables the computation of the similarity of concepts across two resources directly and independently of the similarity of their definitions. A comparable approach is Dijkstra-WSA, proposed by Matuschek and Gurevych (2013), which, as mentioned earlier in the Introduction, first connects the two resources' graphs by leveraging monosemous linking and then aligns two concepts across the two graphs on the basis of their shortest distance. To gain more insight into the effectiveness of our structural similarity measure in comparison with the Dijkstra-WSA method, we carried out an experiment in which our alignment system used only the structural similarity component, a variant of our system we refer to as SemAlignstr.

Approach        WN-WP                       WN-WT                       WN-OW
                P      R      F1     A      P      R      F1     A      P      R      F1     A
Dijkstra-WSA    0.750  0.670  0.710  0.930  0.680  0.270  0.390  0.890  0.651  0.372  0.473  0.830
SemAlignstr     0.877  0.788  0.830  0.959  0.604  0.643  0.623  0.907  0.654  0.602  0.627  0.853

Table 4: Performance of SemAlign when using only the structural similarity component (SemAlignstr) compared to the state-of-the-art graph-based alignment approach, Dijkstra-WSA (Matuschek and Gurevych, 2013), for our three resource pairs: WordNet to Wikipedia (WN-WP), Wiktionary (WN-WT), and OmegaWiki (WN-OW).
Both systems (i.e., SemAlignstr and Dijkstra-WSA) were tuned on 100-item subsets of the corresponding datasets. We show in Table 4 the performance of the two systems on our three datasets. As can be seen in the table, SemAlignstr consistently improves over Dijkstra-WSA according to recall, F1 and accuracy with all the differences in recall and accuracy being statistically significant (p < 0.05). The improvement is especially noticeable for pairs involving either WT or OW where, thanks to the relatively denser semantic graphs obtained by means of our ontologization technique, the gap in F1 is about 0.23 (WN-WT) and 0.15 (WN-OW). In addition, as we mentioned earlier, for WN-WP we used the same graph as that of Dijkstra-WSA, since both WN and WP provide a full-fledged semantic network and thus neither needed to be ontologized. Therefore, the considerable performance improvement over Dijkstra-WSA on this resource pair shows the effectiveness of our novel concept similarity measure independently of the underlying semantic network. 5 Related Work Resource ontologization. Having lexical resources represented as semantic networks is highly beneficial. A good example is WordNet, which has been exploited as a semantic network in dozens of NLP tasks (Fellbaum, 1998). A recent prominent case is Wikipedia (Medelyan et al., 2009; Hovy et al., 2013) which, thanks to its inter-article hyperlink structure, provides a rich backbone for structuring additional information (Auer et al., 2007; Suchanek et al., 2008; Moro and Navigli, 2013; Flati et al., 2014). However, there are many large-scale resources, such as Wiktionary for instance, which by their very nature are not in the form of a graph. This is 475 usually the case with machine-readable dictionaries, where structuring the resource involves the arduous task of connecting lexicographic senses by means of semantic relations. Surprisingly, despite their vast potential, little research has been conducted on the automatic ontologization of collaboratively-constructed dictionaries like Wiktionary and OmegaWiki. Meyer and Gurevych (2012a) and Matuschek and Gurevych (2013) provided approaches for building graph representations of Wiktionary and OmegaWiki. The resulting graphs, however, were either sparse or had a considerable portion of the nodes left in isolation. Our approach, in contrast, aims at transforming a lexical resource into a full-fledged semantic network, hence providing a denser graph with most of its nodes connected. Resource alignment. Aligning lexical resources has been a very active field of research in the last decade. One of the main objectives in this area has been to enrich existing ontologies by means of complementary information from other resources. As a matter of fact, most efforts have been concentrated on aligning the de facto community standard sense inventory, i.e. WordNet, to other resources. These include: the Roget’s thesaurus and Longman Dictionary of Contemporary English (Kwong, 1998), FrameNet (Laparra and Rigau, 2009), VerbNet (Shi and Mihalcea, 2005) or domain-specific terminologies such as the Unified Medical Language System (Burgun and Bodenreider, 2001). More recently, the growth of collaboratively-constructed resources has seen the development of alignment approaches with Wikipedia (Ruiz-Casado et al., 2005; Auer et al., 2007; Suchanek et al., 2008; Reiter et al., 2008; Navigli and Ponzetto, 2012), Wiktionary (Meyer and Gurevych, 2011) and OmegaWiki (Gurevych et al., 2012). 
Last year Matuschek and Gurevych (2013) proposed Dijkstra-WSA, a graph-based approach relying on shortest paths between two concepts when the two corresponding resources graphs were combined by leveraging monosemous linking. Their method when backed off with other definition similarity based approaches (Niemann and Gurevych, 2011; Meyer and Gurevych, 2011), achieved state-of-the-art results on the mapping of WordNet to different collaboratively-constructed resources. This approach, however, in addition to setting the threshold for the definition similarity component by means of cross validation, also required other parameters to be tuned, such as the allowed path length (λ) and the maximum number of edges in a graph. The optimal value for the λ parameter varied from one resource pair to another, and even for a specific resource pair it had to be tuned for each configuration. This made the approach dependent on the training data for the specific pair of resources that were to be aligned. Instead of measuring the similarity of two concepts on the basis of their distance in the combined graph, our approach models each concept through a rich vectorial representation we refer to as semantic signature and compares the two concepts in terms of the similarity of their semantic signatures. This rich representation leads to our approach having a good degree of robustness such that it can achieve competitive results even in the absence of training data. This enables our system to be applied effectively for aligning new pairs of resources for which no training data is available, with state-of-the-art performance. 6 Conclusions This paper presents a unified approach for aligning lexical resources. Our method leverages a novel similarity measure which enables a direct structural comparison of concepts across different lexical resources. Thanks to an effective ontologization method, our alignment approach can be applied to any pair of lexical resources independently of whether they provide a full-fledged network structure. We demonstrate that our approach achieves state-of-theart performance on aligning WordNet to three collaboratively-constructed resources with different characteristics, i.e., Wikipedia, Wiktionary, and OmegaWiki. We also show that our approach is robust across its different configurations, even when the training data is absent, enabling it to be used effectively for aligning new pairs of lexical resources for which no resource-specific training data is available. In future work, we plan to extend our concept similarity measure across different natural languages. We release all our data at http://lcl.uniroma1.it/semalign. Acknowledgments The authors gratefully acknowledge the support of the ERC Starting Grant MultiJEDI No. 259234. We would like to thank Michael Matuschek for providing us with Wikipedia graphs and alignment datasets. 476 References Eneko Agirre and Aitor Soroa. 2009. Personalizing PageRank for Word Sense Disambiguation. In Proceedings of the 12th Conference of the European Chapter of the Association for Computational Linguistics, pages 33–41, Athens, Greece. S¨oren Auer, Christian Bizer, Georgi Kobilarov, Jens Lehmann, Richard Cyganiak, and Zachary Ive. 2007. DBpedia: A nucleus for a web of open data. In Proceedings of 6th International Semantic Web Conference joint with 2nd Asian Semantic Web Conference (ISWC+ASWC 2007), pages 722–735, Busan, Korea. Sergey Brin and Michael Page. 1998. Anatomy of a large-scale hypertextual Web search engine. 
In Proceedings of the 7th Conference on World Wide Web, pages 107–117, Brisbane, Australia. Anita Burgun and Olivier Bodenreider. 2001. Comparing terms, concepts and semantic classes in WordNet and the Unified Medical Language System. In Proceedings of NAACL Workshop, WordNet and Other Lexical Resources: Applications, Extensions and Customizations, pages 77–82, Pittsburgh, USA. Gerard de Melo and Gerhard Weikum. 2010. Providing multilingual, multimodal answers to lexical database queries. In Proceedings of the Seventh International Conference on Language Resources and Evaluation (LREC’10), pages 348–355, Valletta, Malta. Stefano Faralli and Roberto Navigli. 2012. A New Minimally-supervised Framework for Domain Word Sense Disambiguation. In Proceedings of the 2012 Joint Conference on Empirical Methods in Natural Language Processing and Computational Natural Language Learning, pages 1411– 1422, Jeju, Korea. Christiane Fellbaum, editor. 1998. WordNet: An Electronic Database. MIT Press, Cambridge, MA. Tiziano Flati, Daniele Vannella, Tommaso Pasini, and Roberto Navigli. 2014. Two is bigger (and better) than one: the Wikipedia Bitaxonomy Project. In Proceedings of the 52nd Annual Meeting of the Association for Computational Linguistics (ACL 2014), Baltimore, Maryland. Iryna Gurevych, Judith Eckle-Kohler, Silvana Hartmann, Michael Matuschek, Christian M. Meyer, and Christian Wirth. 2012. UBY - a large-scale unified lexical-semantic resource based on LMF. In Proceedings of the 13th Conference of the European Chapter of the Association for Computational Linguistics, pages 580–590, Avignon, France. Taher H. Haveliwala. 2002. Topic-sensitive PageRank. In Proceedings of the 11th international conference on World Wide Web, pages 517–526, Hawaii, USA. Verena Henrich, Erhard Hinrichs, and Tatiana Vodolazova. 2011. Semi-automatic extension of GermaNet with sense definitions from Wiktionary. In Proceedings of 5th Language & Technology Conference (LTC 2011), pages 126–130, Pozna, Poland. Verena Henrich, Erhard W. Hinrichs, and Klaus Suttner. 2012. Automatically linking GermaNet to Wikipedia for harvesting corpus examples for GermaNet senses. In Journal for Language Technology and Computational Linguistics (JLCL), 27(1):1–19. Eduard H. Hovy, Roberto Navigli, and Simone Paolo Ponzetto. 2013. Collaboratively built semistructured content and Artificial Intelligence: The story so far. Artificial Intelligence, 194:2–27. Thad Hughes and Daniel Ramage. 2007. Lexical semantic relatedness with random graph walks. In Proceedings of the 2007 Joint Conference on Empirical Methods in Natural Language Processing, pages 581–589, Prague, Czech Republic. Oi Yee Kwong. 1998. Aligning WordNet with additional lexical resources. In COLING-ACL98 Workshop on Usage of WordNet in Natural Language Processing Systems, pages 73–79, Montreal, Canada. Egoitzand Laparra and German Rigau. 2009. Integrating WordNet and FrameNet using a knowledgebased Word Sense Disambiguation algorithm. In Proceedings of Recent Advances in Natural Language Processing (RANLP09), pages 1–6, Borovets, Bulgaria. Michael Matuschek and Iryna Gurevych. 2013. Dijkstra-WSA: A graph-based approach to word sense alignment. Transactions of the Association for Computational Linguistics (TACL), 1:151–164. Olena Medelyan, David Milne, Catherine Legg, and Ian H. Witten. 2009. Mining meaning from Wikipedia. International Journal of HumanComputer Studies, 67(9):716–754. Christian M. Meyer and Iryna Gurevych. 2010. 
“worth its weight in gold or yet another resource”; a comparative study of Wiktionary, OpenThesaurus and GermaNet. In Proceedings of the 11th International Conference on Computational Linguistics and Intelligent Text Processing, CICLing’10, pages 38–49, Iasi, Romania. Christian M. Meyer and Iryna Gurevych. 2011. What psycholinguists know about Chemistry: Aligning Wiktionary and WordNet for increased domain coverage. In Proceedings of the 5th International Joint Conference on Natural Language Processing, pages 883–892, Chiang Mai, Thailand. Christian M. Meyer and Iryna Gurevych. 2012a. OntoWiktionary: Constructing an ontology from the collaborative online dictionary Wiktionary. In SemiAutomatic Ontology Development: Processes and Resources, pages 131–161. IGI Global. 477 Christian M. Meyer and Iryna Gurevych. 2012b. To exhibit is not to loiter: A multilingual, sensedisambiguated Wiktionary for measuring verb similarity. In Proceedings of the 24th International Conference on Computational Linguistics (COLING 2012), pages 1763–1780, Mumbai, India. Andrea Moro and Roberto Navigli. 2013. Integrating syntactic and semantic analysis into the Open Information Extraction paradigm. In Proceedings of the 23rd International Joint Conference on Artificial Intelligence (IJCAI 2013), pages 2148–2154, Beijing, China. Roberto Navigli and Simone Paolo Ponzetto. 2012. BabelNet: The automatic construction, evaluation and application of a wide-coverage multilingual semantic network. Artificial Intelligence, 193:217– 250. Roberto Navigli, Stefano Faralli, Aitor Soroa, Oier de Lacalle, and Eneko Agirre. 2011. Two birds with one stone: Learning semantic models for text categorization and Word Sense Disambiguation. In Proceedings of the 20th ACM Conference on Information and Knowledge Management (CIKM), pages 2317–2320, Glasgow, UK. Roberto Navigli. 2006. Meaningful clustering of senses helps boost word sense disambiguation performance. In Proceedings of the 44th Annual Meeting of the Association for Computational Linguistics joint with the 21st International Conference on Computational Linguistics (COLING-ACL 2006), pages 105–112, Sydney, Australia. Elisabeth Niemann and Iryna Gurevych. 2011. The people’s web meets linguistic knowledge: Automatic sense alignment of Wikipedia and WordNet. In Proceedings of the Ninth International Conference on Computational Semantics, pages 205–214, Oxford, United Kingdom. Martha Palmer, Daniel Gildea, and Nianwen Xue. 2010. Semantic Role Labeling. Synthesis Lectures on Human Language Technologies. Morgan & Claypool Publishers. Patrick Pantel and Marco Pennacchiotti. 2008. Automatically harvesting and ontologizing semantic relations. In Proceedings of the 2008 Conference on Ontology Learning and Population: Bridging the Gap Between Text and Knowledge, pages 171–195, Amsterdam, The Netherlands. Mohammad Taher Pilehvar, David Jurgens, and Roberto Navigli. 2013. Align, Disambiguate and Walk: a Unified Approach for Measuring Semantic Similarity. In Proceedings of the 51st Annual Meeting of the Association for Computational Linguistics, pages 1341–1351, Sofia, Bulgaria. Nils Reiter, Matthias Hartung, and Anette Frank. 2008. A resource-poor approach for linking ontology classes to Wikipedia articles. In Johan Bos and Rodolfo Delmonte, editors, Semantics in Text Processing, volume 1 of Research in Computational Semantics, pages 381–387. College Publications, London, England. Maria Ruiz-Casado, Enrique Alfonseca, and Pablo Castells. 2005. 
Automatic assignment of Wikipedia encyclopedic entries to WordNet synsets. In Proceedings of the Third International Conference on Advances in Web Intelligence, pages 380–386, Lodz, Poland. Lei Shi and Rada Mihalcea. 2005. Putting pieces together: Combining FrameNet, VerbNet and WordNet for robust semantic parsing. In Proceedings of the 6th International Conference on Computational Linguistics and Intelligent Text Processing, pages 100–111, Mexico City, Mexico. Fabian M. Suchanek, Gjergji Kasneci, and Gerhard Weikum. 2008. Yago: A large ontology from Wikipedia and WordNet. Journal of Web Semantics, 6(3):203–217. Torsten Zesch, Christof M¨uller, and Iryna Gurevych. 2008. Using Wiktionary for computing semantic relatedness. In Proceedings of the 23rd national conference on Artificial intelligence - Volume 2, pages 861–866, Chicago, Illinois. 478
Proceedings of the 52nd Annual Meeting of the Association for Computational Linguistics, pages 479–488, Baltimore, Maryland, USA, June 23-25 2014. c⃝2014 Association for Computational Linguistics Predicting the relevance of distributional semantic similarity with contextual information Philippe Muller IRIT, Toulouse University Universit´e Paul Sabatier 118 Route de Narbonne 31062 Toulouse Cedex 04 [email protected] C´ecile Fabre CLLE, Toulouse University Universit´e Toulouse-Le Mirail 5 alles A. Machado 31058 Toulouse Cedex [email protected] Cl´ementine Adam CLLE, Toulouse University Universit´e Toulouse-Le Mirail 5 alles A. Machado 31058 Toulouse Cedex [email protected] Abstract Using distributional analysis methods to compute semantic proximity links between words has become commonplace in NLP. The resulting relations are often noisy or difficult to interpret in general. This paper focuses on the issues of evaluating a distributional resource and filtering the relations it contains, but instead of considering it in abstracto, we focus on pairs of words in context. In a discourse, we are interested in knowing if the semantic link between two items is a byproduct of textual coherence or is irrelevant. We first set up a human annotation of semantic links with or without contextual information to show the importance of the textual context in evaluating the relevance of semantic similarity, and to assess the prevalence of actual semantic relations between word tokens. We then built an experiment to automatically predict this relevance, evaluated on the reliable reference data set which was the outcome of the first annotation. We show that in-document information greatly improve the prediction made by the similarity level alone. 1 Introduction The goal of the work presented in this paper is to improve distributional thesauri, and to help evaluate the content of such resources. A distributional thesaurus is a lexical network that lists semantic neighbours, computed from a corpus and a similarity measure between lexical items, which generally captures the similarity of contexts in which the items occur. This way of building a semantic network has been very popular since (Grefenstette, 1994; Lin, 1998), even though the nature of the information it contains is hard to define, and its evaluation is far from obvious. A distributional thesaurus includes a lot of “noise” from a semantic point of view, but also lists relevant lexical pairs that escape classical lexical relations such as synonymy or hypernymy. There is a classical dichotomy when evaluating NLP components between extrinsic and intrinsic evaluations (Jones, 1994), and this applies to distributional thesauri (Curran, 2004; Poibeau and Messiant, 2008). Extrinsic evaluations measure the capacity of a system in which a resource or a component to evaluate has been used, for instance in this case information retrieval (van der Plas, 2008) or word sense disambiguation (Weeds and Weir, 2005). Intrinsic evaluations try to measure the resource itself with respect to some human standard or judgment, for instance by comparing a distributional resource with respect to an existing synonym dictionary or similarity judgment produced by human subjects (Pado and Lapata, 2007; Baroni and Lenci, 2010). The shortcomings of these methods have been underlined in (Baroni and Lenci, 2011). Lexical resources designed for other objectives put the spotlight on specific areas of the distributional thesaurus. 
They are not suitable for the evaluation of the whole range of semantic relatedness that is exhibited by distributional similarities, which exceeds the limits of classical lexical relations, even though researchers have tried to collect equivalent resources manually, to be used as a gold standard (Weeds, 2003; Bordag, 2008; Anguiano et al., 2011). One advantage of distributional similarities is to exhibit a lot of different semantic relations, not necessarily standard lexical relations. Even with respect to established lexical resources, distributional approaches may improve coverage, complicating the evaluation even more. The method we propose here has been designed as an intrinsic evaluation with a view to validate semantic proximity links in a broad per479 spective, to cover what (Morris and Hirst, 2004) call “non classical lexical semantic relations”. For instance, agentive relations (author/publish, author/publication) or associative relations (actor/cinema) should be considered. At the same time, we want to filter associations that can be considered as accidental in a semantic perspective (e.g. flag and composer are similar because they appear a lot with nationality names). We do this by judging the relevance of a lexical relation in a context where both elements of a lexical pair occur. We show not only that this improves the reliability of human judgments, but also that it gives a framework where this relevance can be predicted automatically. We hypothetize that evaluating and filtering semantic relations in texts where lexical items occur would help tasks that naturally make use of semantic similarity relations, but assessing this goes beyond the present work. In the rest of this paper, we describe the resource we used as a case study, and the data we collected to evaluate its content (section 2). We present the experiments we set up to automatically filter semantic relations in context, with various groups of features that take into account information from the corpus used to build the thesaurus and contextual information related to occurrences of semantic neighbours 3). Finally we discuss some related work on the evaluation and improvement of distributional resources (section 4). 2 Evaluation of lexical similarity in context 2.1 Data We use a distributional resource for French, built on a 200M word corpus extracted from the French Wikipedia, following principles laid out in (Bourigault, 2002) from a structured model (Baroni and Lenci, 2010), i.e. using syntactic contexts. In this approach, contexts are triples (governor,relation,dependent) derived from syntactic dependency structures. Governors and dependents are verbs, adjectives and nouns. Multiword units are available, but they form a very small subset of the resulting neighbours. Base elements in the thesaurus are of two types: arguments (dependents’ lemma) and predicates (governor+relation). This is to keep the predicate/argument distinction since similarities will be computed between predicate pairs or argument pairs, and a lexical item can appear in many predicates and as an argument (e.g. interest as argument, interest for as one predicate). The similarity of distributions was computed with Lin’s score (Lin, 1998). We will talk of lexical neighbours or distributional neighbours to label pairs of predicates or arguments, and in the rest of the paper we consider only lexical pairs with a Lin score of at least 0.1, which means about 1.4M pairs. 
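For readers unfamiliar with this family of scores, the sketch below shows one standard way of instantiating it: each (word, syntactic context) pair is weighted by pointwise mutual information, and two words are compared through the information carried by their shared contexts, a common variant of Lin's (1998) proposal. It is a generic illustration, not the exact pipeline used to build the French resource, whose weighting and preprocessing choices may differ; the 0.1 cut-off mentioned above would then be applied to the resulting scores.

```python
import math
from collections import defaultdict

def pmi_weights(triples):
    """`triples` maps (item, syntactic_context) pairs to corpus counts,
    e.g. ('interest', ('hold', 'object')) -> 12 (a made-up example).
    Returns, for each item, its contexts weighted by positive PMI."""
    total = sum(triples.values())
    item_count = defaultdict(float)
    ctx_count = defaultdict(float)
    for (item, ctx), c in triples.items():
        item_count[item] += c
        ctx_count[ctx] += c
    weights = defaultdict(dict)
    for (item, ctx), c in triples.items():
        pmi = math.log(c * total / (item_count[item] * ctx_count[ctx]))
        if pmi > 0:                       # keep only informative contexts
            weights[item][ctx] = pmi
    return weights

def lin_similarity(weights, a, b):
    """Lin-style score: information shared by a's and b's contexts,
    relative to the total information of their context sets."""
    wa, wb = weights.get(a, {}), weights.get(b, {})
    shared = set(wa) & set(wb)
    num = sum(wa[c] + wb[c] for c in shared)
    den = sum(wa.values()) + sum(wb.values())
    return num / den if den else 0.0
```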
This somewhat arbitrary level is an a priori threshold to limit the resulting database, and it is conservative enough not to exclude potential interesting relations. The distribution of scores is given figure 1; 97% of the selected pairs have a score between 0.1 and 0.29. Figure 1: Histogram of Lin scores for pairs considered. To ease the use of lexical neighbours in our experiments, we merged together predicates that include the same lexical unit, a posteriori. Thus there is no need for a syntactic analysis of the context considered when exploiting the resource, and sparsity is less of an issue1. 2.2 Annotation In order to evaluate the resource, we set up an annotation in context: pairs of lexical items are to be judged in their context of use, in texts where they occur together. To verify that this methodology is useful, we did a preliminary annotation to contrast judgment on lexical pairs with or without this contextual information. Then we made a larger annotation in context once we were assured of the reliability of the methodology. For the preliminary test, we asked three annotators to judge the similarity of pairs of lexical items without any context (no-context), and to judge the 1Whenever two predicates with the same lemma have common neighbours, we average the score of the pairs. 480 [...] Le ventre de l’impala de mˆeme que ses l`evres et sa queue sont blancs. Il faut aussi mentionner leurs lignes noires uniques `a chaque individu au bout des oreilles , sur le dos de la queue et sur le front. Ces lignes noires sont tr`es utiles aux impalas puisque ce sont des signes qui leur permettent de se reconnaitre entre eux. Ils poss`edent aussi des glandes s´ecr´etant des odeurs sur les pattes arri`eres et sur le front. Ces odeurs permettent ´egalement aux individus de se reconnaitre entre eux. Il a ´egalement des coussinets noirs situ´es, `a l’arri`ere de ses pattes . Les impalas mˆales et femelles ont une morphologie diff´erente. En effet, on peut facilement distinguer un mˆale par ses cornes en forme de S qui mesurent de 40 `a 90 cm de long. Les impalas vivent dans les savanes o`u l’ herbe (courte ou moyenne) abonde. Bien qu’ils appr´ecient la proximit´e d’une source d’eau, celle-ci n’est g´en´eralement pas essentielle aux impalas puisqu’ils peuvent se satisfaire de l’eau contenue dans l’ herbe qu’ils consomment. Leur environnement est relativement peu accident´e et n’est compos´e que d’ herbes , de buissons ainsi que de quelques arbres. [...] Figure 2: Example excerpt during the annotation of lexical pairs: annotators focus on a target item (here corne, horn, in blue) and must judge yellow words (pending: oreille/queue, ear/tail), either validating their relevance (green words: pattes, legs) or rejecting them (red words: herbe, grass). The text describes the morphology of the impala, and its habitat. similarity of pairs presented within a paragraph where they both occur (in context). The three annotators were linguists, and two of them (1 and 3) knew about the resource and how it was built. For each annotation, 100 pairs were randomly selected, with the following constraints: • for the no-context annotation, candidate pairs had a Lin score above 0.2, which placed them in the top 14% of lexical neighbours with respect to the similarity level. • for the in context annotation, the only constraint was that the pairs occur in the same paragraph somewhere in the corpus used to build the resource. The example paragraph was chosen at random. 
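The a-posteriori merging of predicates that share the same lexical unit, with the score averaging mentioned in footnote 1, can be pictured as follows. This is our own schematic reading of that step, with invented names, not the authors' code.

```python
from collections import defaultdict

def merge_predicates(pair_scores, lemma_of):
    """Collapse predicate entries that contain the same lexical unit.
    `pair_scores` maps (predicate, neighbour) pairs to Lin scores, e.g.
    ('interest_for', 'passion') -> 0.21 (illustrative values); `lemma_of`
    maps a predicate to its lexical unit ('interest_for' -> 'interest').
    When several predicates of the same lemma share a neighbour, their
    scores for that neighbour are averaged."""
    pooled = defaultdict(list)
    for (pred, neighbour), score in pair_scores.items():
        pooled[(lemma_of.get(pred, pred), neighbour)].append(score)
    return {pair: sum(s) / len(s) for pair, s in pooled.items()}
```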
The guidelines given in both cases were the same: "Do you think the two words are semantically close? In other words, is there a semantic relation between them, either classical (synonymy, hypernymy, co-hyponymy, meronymy, co-meronymy) or not (the relation can be paraphrased but does not belong to the previous cases)?" For the pre-test, agreement was rather moderate without context (the average of pairwise kappas was .46), and much better with a context (average = .68), with agreement rates above 90%. This seems to validate the feasibility of a reliable annotation of relatedness in context, so we went on to a larger annotation with two of the previous annotators. For the larger annotation, the protocol was slightly changed: two annotators were given 42 full texts from the original corpus where lexical neighbours occurred. They were asked to judge the relation between two item types, regardless of the number of occurrences in the text. This time there was no filtering of the lexical pairs beyond the 0.1 threshold of the original resource. We followed the well-known postulate (Gale et al., 1992) that all occurrences of a word in the same discourse tend to have the same sense ("one sense per discourse"), in order to decrease the annotator workload. We also assumed that the relation between these items remains stable within the document, an arguably strong hypothesis that needed to be checked against inter-annotator agreement before beginning the final annotation. It turns out that the kappa score (0.80) shows a better inter-annotator agreement than during the preliminary test, which can be explained by the larger context given to the annotators (the whole text), and thus more occurrences of each element in the pair to judge, and also by the fact that the annotators were more experienced after the preliminary test. Agreement measures are summed up in Table 1. An excerpt of an example text, as it was presented to the annotators, is shown in Figure 2.

Annotators   Non-contextual               Contextual
             Agreement rate   Kappa       Agreement rate   Kappa
N1+N2        77%              0.52        91%              0.66
N1+N3        70%              0.36        92%              0.69
N2+N3        79%              0.50        92%              0.69
Average      75.3%            0.46        91.7%            0.68
Experts      NA               NA          90.8%            0.80

Table 1: Inter-annotator agreements with Cohen's Kappa for contextual and non-contextual annotations. N1, N2, N3 were annotators during the pre-test; expert annotation was made on a different dataset from the same corpus, only with the full discourse context.

Overall, it took only a few days to annotate 9885 pairs of lexical items. Among the pairs that were presented to the annotators, about 11% were judged as relevant. It is not easy to decide whether the non-relevant pairs are just noise, context-dependent associations that were not present in the actual text considered (for polysemy reasons, for instance), or just low-level associations. An important aspect is thus to guarantee that there is a correlation between the similarity score (Lin's score here) and the evaluated relevance of the neighbour pairs. The Pearson correlation factor shows that the Lin score is indeed significantly correlated with the annotated relevance of lexical pairs, albeit not strongly (r = 0.159). The produced annotation2 can be used as a reference to explore various aspects of distributional resources, with the caveat that it is as such somewhat dependent on the particular resource used. We nonetheless assume that some of the relevant pairs would appear in other thesauri, or would be of interest in an evaluation of another resource.
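For reference, the kappa values in Table 1 are Cohen's kappa; the sketch below shows the standard computation for two annotators giving binary relevance judgments. It is our own illustrative code, not the scripts used for the paper.

```python
def cohen_kappa(labels_a, labels_b):
    """Cohen's kappa for two annotators labelling the same lexical pairs
    (e.g. True = relevant link, False = not relevant)."""
    assert len(labels_a) == len(labels_b) and labels_a
    n = len(labels_a)
    observed = sum(a == b for a, b in zip(labels_a, labels_b)) / n
    # Chance agreement from each annotator's marginal label distribution.
    expected = sum(
        (labels_a.count(lab) / n) * (labels_b.count(lab) / n)
        for lab in set(labels_a) | set(labels_b)
    )
    return 1.0 if expected == 1 else (observed - expected) / (1 - expected)
```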
The first thing we can analyse from the annotated data is the impact of a threshold on Lin’s score to select relevant lexical pairs. The resource itself is built by choosing a cut-off which is supposed to keep pairs with a satisfactory similarity, but this threshold is rather arbitrary. Figure 3 shows the influence of the threshold value to select relevant pairs, when considering precision and recall of the pairs that are kept when choosing the threshold, evaluated against the human annotation of relevance in context. In case one wants to optimize the F-score (the harmonic mean of precision and recall) when extracting relevant pairs, we can see that the optimal point is at .24 for a threshold of .22 on Lin’s score. This can be considered as a baseline for extraction of relevant lexical pairs, to which we turn in the following section. 3 Experiments: predicting relevance in context The outcome of the contextual annotation presented above is a rather sizeable dataset of validated semantic links, and we showed these linguistic judgments to be reliable. We used this 2Freely available here http://www.irit.fr/ ˜Philippe.Muller/resources.html. Figure 3: Precision and recall on relevant links with respect to a threshold on the similarity measure (Lin’s score) dataset to set up a supervised classification experiment in order to automatically predict the relevance of a semantic link in a given discourse. We present now the list of features that were used for the model. They can be divided in three groups, according to their origin: they are computed from the whole corpus, gathered from the distributional resource, or extracted from the considered text which contains the semantic pair to be evaluated. 3.1 Features For each pair neighboura/neighbourb, we computed a set of features from Wikipedia (the corpus used to derive the distributional similarity): We first computed the frequencies of each item in the corpus, freqa and freqb, from which we derive • freqmin, freqmax : the min and max of freqa and freqb ; • freq×: the combination of the two, or log(freqa × freqb) 482 We also measured the syntagmatic association of neighboura and neighbourb, with a mutual information measure (Church and Hanks, 1990), computed from the cooccurrence of two tokens within the same paragraph in Wikipedia. This is a rather large window, and thus gives a good coverage with respect to the neighbour database (70% of all pairs). A straightforward parameter to include to predict the relevance of a link is of course the similarity measure itself, here Lin’s information measure. But this can be complemented by additional information on the similarity of the neighbours, namely: • each neighbour productivity : proda and prodb are defined as the numbers of neighbours of respectively neighboura and neighbourb in the database (thus related tokens with a similarity above the threshold), from which we derive three features as for frequencies: the min, the max, and the log of the product. The idea is that neighbours whith very high productivity give rise to less reliable relations. • the ranks of tokens in other related items neighbours: ranka−b is defined as the rank of neighboura among neighbours of neighbourb ordered with respect to Lin’s score; rangb−a is defined similarly and again we consider as features the min, max and log-product of these ranks. We add two categorial features, of a more linguistic nature: • cats is the pair of part-of-speech for the related items, e.g. to distinguish the relevance of NN or VV pairs. 
• predarg is related to the predicate/argument distinction: are the related items predicates or arguments ? The last set of features derive from the occurrences of related tokens in the considered discourses: First, we take into account the frequencies of items within the text, with three features as before: the min of the frequencies of the two related items, the max, and the log-product. Then we consider a tf·idf (Salton et al., 1975) measure, to evaluate the specificity and arguably the importance of a word Feature Description freqmin min(freqa, freqb) freqmax max(freqa, freqb) freq× log(freqa × freqb) im im = log P(a,b) P(a)·P(b) lin Lin’s score rankmin min(ranka−b, rankb−a) rankmax max(ranka−b, rankb−a) rank× log(ranka−b × rankb−a) prodmin min(proda, prodb) prodmax max(proda, prodb) prod× log(proda × prodb) cats neighbour pos pair predarg predicate or argument freqtxtmin min(freqtxta, freqtxtb) freqtxtmax max(freqtxta, freqtxtb) freqtxt× log(freqtxta × freqstxtb) tf·ipf tf·ipf(neighboura)×tf·ipf(neighbourb) coprph copresence in a sentence coprpara copresence in a paragraph sd smallest distance between neighboura and neighbourb gd highest distance between neighboura and neighbourb ad average distance between neighboura and neighbourb prodtxtmin min(proda, prodb) prodtxtmax max(proda, prodb) prodtxt× log(proda × prodb) cc belong to the same lexical connected component Table 2: Summary of features used in the supervised model, with respect to two lexical items a and b. The first group is corpus related, the second group is related to the distributional database, the third group is related to the textual context. Freq is related to the frequencies in the corpus, Freqtext the frequencies in the considered text. 483 in a document or within a document. Several variants of tf·idf have been proposed to adapt the measure to more local areas in a text with respect to the whole document. For instance (Dias et al., 2007) propose a tf·isf(term frequency · inverse sentence frequency), for topic segmentation. We similarly defined a tf·ipfmeasure based on the frequency of a word within a paragraph with respect to its frequency within the text. The resulting feature we used is the product of this measure for neighboura and neighbourb. A few other contextual features are included in the model: the distances between pairs of related items, instantiated as: • distance in words between occurrences of related word types: – minimal distance between two occurrences (sd) – maximal distance between two occurrences (gd) – average distance (ad) ; • boolean features indicating whether neighboura and neighbourb appear in the same sentence (coprs) or the same paragraph (coprpara). Finally, we took into account the network of related lexical items, by considering the largest sets of words present in the text and connected in the database (self-connected components), by adding the following features: • the degree of each lemma, seen as a node in this similarity graph, combined as above in minimal degree of the pair, maximal degree, and product of degrees (prodtxtmin, prodtxtmax, prodtxt×). This is the number of pairs (present in the text) where a lemma appears in. • a boolean feature cc saying whether a lexical pair belongs to a connected component of the text, except the largest. This reflects the fact that a small component may concern a lexical field which is more specific and thus more relevant to the text. Figure 4 shows examples of self-connected components in an excerpt of the page on Gorille (gorilla), e.g. 
the set {pelage, dos, fourrure} (coat, back, fur). The last feature is probably not entirely independent from the productivity of an item, or from the tf.ipf measure. Table 2 sums up the features used in our model. 3.2 Model Our task is to identify relevant similarities between lexical items, between all possible related pairs, and we want to train an inductive model, a classifier, to extract the relevant links. We have seen that the relevant/not relevant classification is very imbalanced, biased towards the “not relevant” category (about 11%/89%), so we applied methods dedicated to counter-balance this, and will focus on the precision and recall of the predicted relevant links. Following a classical methodology, we made a 10-fold cross-validation to evaluate robustly the performance of the classifiers. We tested a few popular machine learning methods, and report on two of them, a naive bayes model and the best method on our dataset, the Random Forest classifier (Breiman, 2001). Other popular methods (maximum entropy, SVM) have shown slightly inferior combined F-score, even though precision and recall might yield more important variations. As a baseline, we can also consider a simple threshold on the lexical similarity score, in our case Lin’s measure, which we have shown to yield the best F-score of 24% when set at 0.22. To address class imbalance, two broad types of methods can be applied to help the model focus on the minority class. The first one is to resample the training data to balance the two classes, the second one is to penalize differently the two classes during training when the model makes a mistake (a mistake on the minority class being made more costly than on the majority class). We tested the two strategies, by applying the classical Smote method of (Chawla et al., 2002) as a kind of resampling, and the ensemble method MetaCost of (Domingos, 1999) as a cost-aware learning method. Smote synthetizes and adds new instances similar to the minority class instances and is more efficient than a mere resampling. MetaCost is an interesting meta-learner that can use any classifier as a base classifier. We used Weka’s implementations of these methods (Frank et al., 2004), and our experiments and comparisons are thus easily replicated on our dataset, provided with this paper, even though they can be improved by 484 Le gorille est apr`es le bonobo et le chimpanz´e , du point de vue g´en´etique , l’ animal le plus proche de l’ humain . Cette parent´e a ´et´e confirm´ee par les similitudes entre les chromosomes et les groupes sanguins . Notre g´enome ne diff`ere que de 2 % de celui du gorille . Redress´es , les gorilles atteignent une taille de 1,75 m`etre , mais ils sont en fait un peu plus grands car ils ont les genoux fl´echis . L’ envergure des bras d´epasse la longueur du corps et peut atteindre 2,75 m`etres . Il existe une grande diff´erence de masse entre les sexes : les femelles p`esent de 90 `a 150 kilogrammes et les mˆales jusqu’ `a 275. En captivit´e , particuli`erement bien nourris , ils atteignent 350 kilogrammes . Le pelage d´epend du sexe et de l’ ˆage . Chez les mˆales les plus ˆag´es se d´eveloppe sur le dos une fourrure gris argent´e , d’ o`u leur nom de “dos argent´es” . Le pelage des gorilles de montagne est particuli`erement long et soyeux . Comme tous les anthropodes , les gorilles sont d´epourvus de queue . Leur anatomie est puissante , le visage et les oreilles sont glabres et ils pr´esentent des torus supra-orbitaires marqu´es . 
Figure 4: A few connected lexical components of the similarity graph, projected on a text, each in a different color. The groups are, in order of appearance of the first element: {genetic, close, human}, {similarity, kinship}, {chromosome, genome}, {male, female}, {coat, back, fur}, {age/N, aged/A}, {ear, tail, face}. The text describes the gorilla species, more particularly its morphology. Gray words are other lexical elements in the neighbour database.

refinements of these techniques. We chose the following settings for the different models: naive Bayes uses a kernel density estimation for numerical features, as this generally improves performance. For Random Forests, we chose to have ten trees, with each decision taken on a randomly chosen set of five features. For resampling, SMOTE advises doubling the number of instances of the minority class, and we observed that a larger resampling degrades performance. For cost-aware learning, a sensible choice is to invert the class ratio for the cost ratio: here the cost of a mistake on a relevant link (false negative) is exactly 8.5 times higher than the cost on a non-relevant link (false positive), as non-relevant instances are 8.5 times more frequent than relevant ones.

3.3 Results

We are interested in the precision and recall for the "relevant" class. If we take the best simple classifier (random forests), the precision and recall are 68.1% and 24.2%, for an F-score of 35.7%, and this is significantly beaten by the naive Bayes method, whose precision and recall are more even (F-score of 41.5%). This is already a big improvement on the use of the similarity measure alone (24%). Also note that predicting every link as relevant would result in a 2.6% precision, and thus a 5% F-score. The random forest model is significantly improved by the balancing techniques: the overall best F-score of 46.3% is reached with Random Forests and the cost-aware learning method. Table 3 sums up the scores for the different configurations, with precision, recall, F-score and the confidence interval on the F-score.

Method                      Precision   Recall   F-score   CI
Baseline (Lin threshold)    24.0        24.0     24.0
RF                          68.1        24.2     35.7      ±3.4
NB                          34.8        51.3     41.5      ±2.6
RF + resampling             56.6        32.0     40.9      ±3.3
NB + resampling             32.8        54.0     40.7      ±2.5
RF + cost-aware learning    40.4        54.3     46.3      ±2.7
NB + cost-aware learning    27.3        61.5     37.8      ±2.2

Table 3: Classification scores (%) on the relevant class. CI is the confidence interval on the F-score (RF = Random Forest, NB = naive Bayes).

We analysed the learning curve by running the cross-validation on reduced sets of instances (from 10% to 90% of the data); F1-scores start at 37.3% with 10% of the instances and stabilize once 80% of the instances are used, with small increments at each step. The filtering approach we propose thus seems to yield good results, augmenting the similarity built on the whole corpus with signals from the local contexts and documents where related lexical items appear together. To analyse the role of each set of features, we repeated the experiment while changing the set of features used during training; results are shown in Table 4 for the best method (RF with cost-aware learning). We can see that the similarity-related features (measures, ranks) have the biggest impact, but the other ones also seem to play a significant role. We can draw the tentative conclusion that the quality of distributional relations depends on the contextualizing of the related lexical items, beyond just the similarity score and the ranks of items as neighbours of other items.
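The experiments above were run in Weka. As an analogous, but not identical, setup, the sketch below approximates the best configuration (a ten-tree Random Forest with cost-aware learning, evaluated with 10-fold cross-validation) in Python with scikit-learn. Class weighting only approximates MetaCost's relabelling scheme, the feature files are placeholders for a matrix built from the Table 2 features, and the scores will not exactly reproduce Table 3.

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import StratifiedKFold, cross_val_predict
from sklearn.metrics import precision_recall_fscore_support

# Placeholder files: rows = candidate lexical pairs, columns = features
# of Table 2; y holds 1 for "relevant" links and 0 otherwise.
X = np.load("pair_features.npy")
y = np.load("pair_labels.npy")

# Cost-aware learning approximated by class weights: errors on the minority
# "relevant" class cost roughly 8.5 times more, the inverse class ratio.
clf = RandomForestClassifier(
    n_estimators=10,                 # ten trees, as in the paper's setting
    max_features=5,                  # five randomly chosen features per split
    class_weight={0: 1.0, 1: 8.5},
    random_state=0,
)

cv = StratifiedKFold(n_splits=10, shuffle=True, random_state=0)
pred = cross_val_predict(clf, X, y, cv=cv)
p, r, f, _ = precision_recall_fscore_support(y, pred, labels=[1], average=None)
print(f"relevant class: P={p[0]:.3f} R={r[0]:.3f} F1={f[0]:.3f}")
```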
Features                    Prec.   Recall   F-score
all                         40.4    54.3     46.3
all − corpus feat.          37.4    52.8     43.8
all − similarity feat.      36.1    49.5     41.8
all − contextual feat.      36.5    54.8     43.8

Table 4: Impact of each group of features on the best scores (%): the lower the results, the bigger the impact of the removed group of features.

4 Related work

Our work is related to two issues: evaluating distributional resources, and improving them. Evaluating distributional resources is the subject of a lot of methodological reflection (Sahlgren, 2006), and as we said in the introduction, evaluations can be divided between extrinsic and intrinsic evaluations. In extrinsic evaluations, models are evaluated against benchmarks focusing on a single task or a single aspect of a resource: either discriminative, TOEFL-like tests (Freitag et al., 2005), analogy production (Turney, 2008), or synonym selection (Weeds, 2003; Anguiano et al., 2011; Ferret, 2013; Curran and Moens, 2002). In intrinsic evaluations, association norms are used, such as the 353 word-similarity dataset (Finkelstein et al., 2002), e.g. (Pado and Lapata, 2007; Agirre et al., 2009), or specifically designed test cases, as in (Baroni and Lenci, 2011). We differ from all these evaluation procedures in that we do not focus on an essential view of the relatedness of two lexical items, but evaluate the link in a context where the relevance of the link is in question, an "existential" view of semantic relatedness. As for improving distributional thesauri, apart from the numerous alternative approaches to their construction, there is a body of work focusing on improving an existing resource, for instance by re-weighting context features once an initial thesaurus is built (Zhitomirsky-Geffet and Dagan, 2009), or by post-processing the resource to filter bad neighbours or re-rank the neighbours of a given target (Ferret, 2013). These still use "essential" evaluation measures (mostly synonym extraction), although the latter comes close to our work since it also trains a model to detect (intrinsically) bad neighbours by using example sentences containing the words to discriminate. We are not aware of any work that tries to evaluate semantic neighbours differently according to the context they appear in.

5 Conclusion

We proposed a method to reliably evaluate distributional semantic similarity in a broad sense by considering the validation of lexical pairs in contexts where they both appear. This helps cover non-classical semantic relations, which are hard to evaluate with classical resources. We also presented a supervised learning model which combines global features from the corpus used to build a distributional thesaurus and local features from the text where similarities are to be judged as relevant or not to the coherence of a document. It seems from these experiments that the quality of distributional relations depends on the contextualizing of the related lexical items, beyond just the similarity score and the ranks of items as neighbours of other items. This can hopefully help filter out lexical pairs when word lexical similarity is used as an information source where context is important: lexical disambiguation (Miller et al., 2012), topic segmentation (Guinaudeau et al., 2012). This can also be a preprocessing step when looking for similarities at higher levels, for instance at the sentence level (Mihalcea et al., 2006) or at other macro-textual levels (Agirre et al., 2013), since these are always aggregation functions of word similarities.
There are limits to what is presented here: we need to evaluate the importance of the level of noise in the distributional neighbours database, or at least the quantity of non-semantic relations present, and this depends on the way the database is built. Our starting corpus is relatively small compared to current efforts in this framework. We are confident that the same methodology can be followed, even though the quantitative results may vary, since it is independent of the particular distributional thesaurus we used, and the way the similarities are computed. References E. Agirre, E. Alfonseca, K. Hall, J. Kravalova, M. Pas¸ca, and A. Soroa. 2009. A study on similarity and relatedness using distributional and wordnetbased approaches. In Proceedings of Human Language Technologies: The 2009 Annual Conference of the North American Chapter of the Association for Computational Linguistics, pages 19–27. Association for Computational Linguistics. Eneko Agirre, Daniel Cer, Mona Diab, Aitor GonzalezAgirre, and Weiwei Guo. 2013. *sem 2013 shared task: Semantic textual similarity. In Second Joint Conference on Lexical and Computational Semantics (*SEM), Volume 1: Proceedings of the Main Conference and the Shared Task: Semantic Textual Similarity, pages 32–43, Atlanta, Georgia, USA, June. Association for Computational Linguistics. E.H. Anguiano, P. Denis, et al. 2011. FreDist: Automatic construction of distributional thesauri for French. In Actes de la 18eme conf´erence sur le traitement automatique des langues naturelles, pages 119–124. M. Baroni and A. Lenci. 2010. Distributional memory: A general framework for corpus-based semantics. Computational Linguistics, 36(4):673–721. M. Baroni and A. Lenci. 2011. How we BLESSed distributional semantic evaluation. GEMS 2011, pages 1–10. Stefan Bordag. 2008. A comparison of co-occurrence and similarity measures as simulations of context. In Alexander F. Gelbukh, editor, CICLing, volume 4919 of Lecture Notes in Computer Science, pages 52–63. Springer. D. Bourigault. 2002. UPERY : un outil d’analyse distributionnelle tendue pour la construction d’ontologies partir de corpus. In Actes de la 9e confrence sur le Traitement Automatique de la Langue Naturelle, pages 75–84, Nancy. Leo Breiman. 2001. Random forests. Machine Learning, 45(1):5–32. Nitesh V. Chawla, Kevin W. Bowyer, Lawrence O. Hall, and W. Philip Kegelmeyer. 2002. Smote: Synthetic minority over-sampling technique. J. Artif. Intell. Res. (JAIR), 16:321–357. Kenneth Church and Patrick Hanks. 1990. Word association norms, mutual information, and lexicography. Computational Linguistics, 16(1):pp. 22–29. James R. Curran and Marc Moens. 2002. Improvements in automatic thesaurus extraction. In Proceedings of the ACL-02 Workshop on Unsupervised Lexical Acquisition, pages 59–66. J.R. Curran. 2004. From distributional to semantic similarity. Ph.D. thesis, University of Edinburgh. Ga¨el Dias, Elsa Alves, and Jos´e Gabriel Pereira Lopes. 2007. Topic segmentation algorithms for text summarization and passage retrieval: an exhaustive evaluation. In Proceedings of the 22nd national conference on Artificial intelligence - Volume 2, AAAI’07, pages 1334–1339. AAAI Press. Pedro Domingos. 1999. Metacost: A general method for making classifiers cost-sensitive. In Usama M. Fayyad, Surajit Chaudhuri, and David Madigan, editors, KDD, pages 155–164. ACM. Olivier Ferret. 2013. Identifying bad semantic neighbors for improving distributional thesauri. 
In Proceedings of the 51st Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers), pages 561–571, Sofia, Bulgaria, August. Association for Computational Linguistics. Lev Finkelstein, Evgeniy Gabrilovich, Yossi Matias, Ehud Rivlin, Zach Solan, Gadi Wolfman, and Eytan Ruppin. 2002. Placing search in context: the concept revisited. ACM Trans. Inf. Syst., 20(1):116– 131. Eibe Frank, Mark Hall, , and Len Trigg. 2004. Weka 3.3: Data mining software in java. www.cs.waikato.ac.nz/ml/weka/. Dayne Freitag, Matthias Blume, John Byrnes, Edmond Chow, Sadik Kapadia, Richard Rohwer, and Zhiqiang Wang. 2005. New experiments in distributional representations of synonymy. In Proceedings of CoNLL, pages 25–32, Ann Arbor, Michigan, June. Association for Computational Linguistics. 487 W. Gale, K. Church, and D. Yarowsky. 1992. One sense per discourse. In In Proceedings of the 4th DARPA Speech and Natural Language Workshop, New-York, pages 233–237. G. Grefenstette. 1994. Explorations in automatic thesaurus discovery. Kluwer Academic Pub., Boston. Camille Guinaudeau, Guillaume Gravier, and Pascale S´ebillot. 2012. Enhancing lexical cohesion measure with confidence measures, semantic relations and language model interpolation for multimedia spoken content topic segmentation. Computer Speech & Language, 26(2):90–104. Karen Sparck Jones. 1994. Towards better NLP system evaluation. In Proceedings of the Human Language Technology Conference, pages 102–107. Association for Computational Linguistics. D. Lin. 1998. An information-theoretic definition of similarity. In Proceedings of the 15th International Conference on Machine Learning, pages 296–304, Madison. Rada Mihalcea, Courtney Corley, and Carlo Strapparava. 2006. Corpus-based and knowledge-based measures of text semantic similarity. In Proceedings of the 21st national conference on Artificial intelligence, AAAI06, volume 1, pages 775–780. AAAI Press. Tristan Miller, Chris Biemann, Torsten Zesch, and Iryna Gurevych. 2012. Using distributional similarity for lexical expansion in knowledge-based word sense disambiguation. In Proceedings of COLING 2012, pages 1781–1796, Mumbai, India, December. The COLING 2012 Organizing Committee. J. Morris and G. Hirst. 2004. Non-classical lexical semantic relations. In Proceedings of the HLT Workshop on Computational Lexical Semantics, pages 46–51, Boston. Sebastian Pado and Mirella Lapata. 2007. Dependency-based construction of semantic space models. Computational Linguistics, 33(2):161–199. Thierry Poibeau and C´edric Messiant. 2008. Do we still Need Gold Standards for Evaluation? In Proceedings of the Language Resource and Evaluation Conference. Magnus Sahlgren. 2006. Towards pertinent evaluation methodologies for word-space models. In In Proceedings of the 5th International Conference on Language Resources and Evaluation. G. Salton, C. S. Yang, and C. T. Yu. 1975. A theory of term importance in automatic text analysis. Journal of the American Society for Information Science, 26(1):33–44. Peter D. Turney. 2008. A uniform approach to analogies, synonyms, antonyms, and associations. In Proceedings of the 22nd International Conference on Computational Linguistics - Volume 1, COLING ’08, pages 905–912, Stroudsburg, PA, USA. Association for Computational Linguistics. L. van der Plas. 2008. Automatic Lexico-Semantic Acquisition for Question Answering. Ph.D. thesis, University of Groningen. J. Weeds and D. Weir. 2005. Co-occurrence retrieval: A flexible framework for lexical distributional similarity. 
Computational Linguistics, 31(4):439–475. Julie Elizabeth Weeds. 2003. Measures and Applications of Lexical Distributional Similarity. Ph.D. thesis, University of Sussex. Maayan Zhitomirsky-Geffet and Ido Dagan. 2009. Bootstrapping distributional feature vector quality. Computational Linguistics, 35(3):435–461. 488
Proceedings of the 52nd Annual Meeting of the Association for Computational Linguistics, pages 489–499, Baltimore, Maryland, USA, June 23-25 2014. c⃝2014 Association for Computational Linguistics Interpretable Semantic Vectors from a Joint Model of Brain- and TextBased Meaning Alona Fyshe1, Partha P. Talukdar1, Brian Murphy2, Tom M. Mitchell1 1Machine Learning Department, Carnegie Mellon University 2School of Electronics, Electrical Engineering and Computer Science Queen’s University Belfast [afyshe,partha.talukdar,tom.mitchell]@cs.cmu.edu [email protected] Abstract Vector space models (VSMs) represent word meanings as points in a high dimensional space. VSMs are typically created using a large text corpora, and so represent word semantics as observed in text. We present a new algorithm (JNNSE) that can incorporate a measure of semantics not previously used to create VSMs: brain activation data recorded while people read words. The resulting model takes advantage of the complementary strengths and weaknesses of corpus and brain activation data to give a more complete representation of semantics. Evaluations show that the model 1) matches a behavioral measure of semantics more closely, 2) can be used to predict corpus data for unseen words and 3) has predictive power that generalizes across brain imaging technologies and across subjects. We believe that the model is thus a more faithful representation of mental vocabularies. 1 Introduction Vector Space Models (VSMs) represent lexical meaning by assigning each word a point in high dimensional space. Beyond their use in NLP applications, they are of interest to cognitive scientists as an objective and data-driven method to discover word meanings (Landauer and Dumais, 1997). Typically, VSMs are created by collecting word usage statistics from large amounts of text data and applying some dimensionality reduction technique like Singular Value Decomposition (SVD). The basic assumption is that semantics drives a person’s language production behavior, and as a result co-occurrence patterns in written text indirectly encode word meaning. The raw co-occurrence statistics are unwieldy, but in the compressed VSM the distance between any two words is conceived to represent their mutual semantic similarity (Sahlgren, 2006; Turney and Pantel, 2010), as perceived and judged by speakers. This space then reflects the “semantic ground truth” of shared lexical meanings in a language community’s vocabulary. However corpus-based VSMs have been criticized as being noisy or incomplete representations of meaning (Glenberg and Robertson, 2000). For example, multiple word senses collide in the same vector, and noise from mis-parsed sentences or spam documents can interfere with the final semantic representation. When a person is reading or writing, the semantic content of each word will be necessarily activated in the mind, and so in patterns of activity over individual neurons. In principle then, brain activity could replace corpus data as input to a VSM, and contemporary imaging techniques allow us to attempt this. 
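To ground the corpus-only construction described above, here is a minimal sketch that builds a small VSM from a word-by-context count matrix with positive pointwise mutual information (PPMI) weighting followed by truncated SVD, roughly the recipe later used for the corpus features in Section 4.1; the toy counts, vocabulary size, and number of retained dimensions are placeholders rather than the actual data or settings.

```python
import numpy as np

def ppmi(counts):
    """Positive pointwise mutual information for a word-by-context count matrix."""
    total = counts.sum()
    row = counts.sum(axis=1, keepdims=True)   # word marginals
    col = counts.sum(axis=0, keepdims=True)   # context marginals
    with np.errstate(divide="ignore", invalid="ignore"):
        pmi = np.log((counts * total) / (row * col))
    pmi[~np.isfinite(pmi)] = 0.0              # zero counts contribute nothing
    return np.maximum(pmi, 0.0)               # keep only positive associations

# Toy word-by-context counts (rows: words, columns: contexts) -- illustrative only.
rng = np.random.default_rng(0)
counts = rng.poisson(lam=2.0, size=(50, 200)).astype(float)

X = ppmi(counts)
# Truncated SVD: keep the top k dimensions as the corpus-based word vectors.
U, S, Vt = np.linalg.svd(X, full_matrices=False)
k = 10
word_vectors = U[:, :k] * S[:k]               # one k-dimensional vector per word
print(word_vectors.shape)                     # (50, 10)
```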
Functional Magnetic Resonance Imaging (fMRI) and Magnetoencephalography (MEG) are two brain activation recording technologies that measure neuronal activation in aggregate, and have been shown to have a predictive relationship with models of word meaning (Mitchell et al., 2008; Palatucci et al., 2009; Sudre et al., 2012; Murphy et al., 2012b).1 If brain activation data encodes semantics, we theorized that including brain data in a model of semantics could result in a model more consistent with semantic ground truth. However, the inclusion of brain data will only improve a text-based model if brain data contains semantic information not readily available in the corpus. In addition, if a semantic test involves another subject’s brain activation data, performance can improve only if the additional semantic information is consistent across brains. Of course, brains differ in shape, size and in connectivity, so additional information encoded in one brain might not translate to an1For more details on fMRI and MEG, see Section 4.2 489 other. Furthermore, different brain imaging technologies measure very different correlates of neuronal activity. Due to these differences, it is possible that one subject’s brain activation data cannot improve a model’s performance on another subject’s brain data, or for brain data collected using a different recording technology. Indeed, intersubject models of brain activation is an open research area (Conroy et al., 2013), as is learning the relationship between recording technologies (Engell et al., 2012; Hall et al., 2013). Brain data can also be corrupted by many types of noise (e.g. recording room interference, movement artifacts), another possible hindrance to the use of brain data in VSMs. VSMs are interesting from both engineering and scientific standpoints. In this work we focus on the scientific question: Can the inclusion of brain data improve semantic representations learned from corpus data? What can we learn from such a model? From an engineering perspective, brain activation data will likely never replace text data. Brain activation recordings are both expensive and time consuming to collect, whereas textual data is vast and much of it is free to download. However, from a scientific perspective, combining text and brain data could lead to more consistent semantic models, in turn leading to a better understanding of semantics and semantic modeling generally. In this paper, we leverage both kinds of data to build a hybrid VSM using a new matrix factorization method (JNNSE). Our hypothesis is that the noise of brain and corpus derived statistics will be largely orthogonal, and so the two data sources will have complementary strengths as input to VSMs. If this hypothesis is correct, we should find that the resulting VSM is more successful in modeling word semantics as encoded in human judgements, as well as separate corpus and brain data that was not used in the derivation of the model. We will show that our method: 1. creates a VSM that is more correlated to an independent measure of word semantics. 2. produces word vectors that are more predictable from the brain activity of different people, even when brain data is collected with a different recording technology. 3. predicts corpus representations of withheld words more accurately than a model that does not combine data sources. 4. directly maps semantic concepts onto the brain by jointly learning neural representations. 
Together, these results suggest that corpus and brain activation data measure semantics in compatible and complementary ways. Our results are evidence that a joint model of brain- and text-based semantics may be closer to semantic ground truth than text-only models. Our findings also indicate that there is additional semantic information available in brain activation data that is not present in corpus data, and that there are elements of semantics currently lacking in text-based VSMs. We have made available the top performing VSMs created with brain and text data (http://www.cs.cmu.edu/~afyshe/papers/acl2014/). In the following sections we will review NNSE, and our extension, JNNSE. We will describe the data used and the experiments that support our position that brain data is a valuable source of semantic information that complements text data. 2 Non-Negative Sparse Embedding Non-Negative Sparse Embedding (NNSE) (Murphy et al., 2012a) is an algorithm that produces a latent representation using matrix factorization. Standard NNSE begins with a matrix X ∈ R^{w×c} made of c corpus statistics for w words. NNSE solves the following objective function:
\[
\operatorname*{argmin}_{A,D}\; \sum_{i=1}^{w} \big\lVert X_{i,:} - A_{i,:} \times D \big\rVert^{2} + \lambda \lVert A \rVert_{1} \tag{1}
\]
subject to:
\[
D_{i,:} D_{i,:}^{T} \leq 1,\;\; \forall\, 1 \leq i \leq \ell \tag{2}
\]
\[
A_{i,j} \geq 0,\;\; 1 \leq i \leq w,\; 1 \leq j \leq \ell \tag{3}
\]
The solution will find a matrix A ∈ R^{w×ℓ} that is sparse, non-negative, and represents word semantics in an ℓ-dimensional latent space. D ∈ R^{ℓ×c} gives the encoding of corpus statistics in the latent space. Together, they factor the original corpus statistics matrix X in a way that minimizes the reconstruction error. The L1 constraint encourages sparsity in A; λ is a hyperparameter. Equation 2 constrains D to eliminate solutions where A is made arbitrarily small by making D arbitrarily large. Equation 3 ensures that A is non-negative. We may increase ℓ to give more dimensional space to represent word semantics, or decrease ℓ for more compact representations. The sparse and non-negative representation in A produces a more interpretable semantic space, where interpretability is quantified with a behavioral task (Chang et al., 2009; Murphy et al., 2012a). To illustrate the interpretability of NNSE, we describe a word by selecting the word's top scoring dimensions, and selecting the top scoring words in those dimensions. For example, the word chair has the following top scoring dimensions: 1. chairs, seating, couches; 2. mattress, futon, mattresses; 3. supervisor, coordinator, advisor. These dimensions cover two of the distinct meanings of the word chair (furniture and person of power). NNSE's sparsity constraint dictates that each word can have a non-zero score in only a few dimensions, which aligns well with previous feature elicitation experiments in psychology. In feature elicitation, participants are asked to name the characteristics (features) of an object. The number of characteristics named is usually small (McRae et al., 2005), which supports the requirement of sparsity in the learned latent space. 3 Joint Non-Negative Sparse Embedding We extend NNSEs to incorporate an additional source of data for a subset of the words in X, and call the approach Joint Non-Negative Sparse Embeddings (JNNSEs). The JNNSE algorithm is general enough to incorporate any new information about a word w, but for this study we will focus on brain activation recordings of a human subject reading single words.
We will incorporate either fMRI or MEG data, and call the resulting models JNNSE(fMRI+Text) and JNNSE(MEG+Text) and refer to them generally as JNNSE(Brain+Text). For clarity, from here on, we will refer to NNSE as NNSE(Text), or NNSE(Brain) depending on the single source of input data used. Let us order the rows of the corpus data X so that the first 1 . . . w′ rows have both corpus statistics and brain activation recordings. Each brain activation recording is a row in the brain data matrix Y ∈ R^{w′×v} where v is the number of features derived from the recording. For MEG recordings, v = sensors × time points = 306 × 150. For fMRI, v = grey-matter voxels ≃ 20,000, depending on the brain anatomy of each individual subject. The new objective function is:
\[
\operatorname*{argmin}_{A,D^{(c)},D^{(b)}}\; \sum_{i=1}^{w} \big\lVert X_{i,:} - A_{i,:} \times D^{(c)} \big\rVert^{2} + \sum_{i=1}^{w'} \big\lVert Y_{i,:} - A_{i,:} \times D^{(b)} \big\rVert^{2} + \lambda \lVert A \rVert_{1} \tag{4}
\]
subject to:
\[
D^{(c)}_{i,:} \big(D^{(c)}_{i,:}\big)^{T} \leq 1,\;\; \forall\, 1 \leq i \leq \ell \tag{5}
\]
\[
D^{(b)}_{i,:} \big(D^{(b)}_{i,:}\big)^{T} \leq 1,\;\; \forall\, 1 \leq i \leq \ell \tag{6}
\]
\[
A_{i,j} \geq 0,\;\; 1 \leq i \leq w,\; 1 \leq j \leq \ell \tag{7}
\]
We have introduced an additional constraint on the rows 1 . . . w′, requiring that some of the learned representations in A also reconstruct the brain activation recordings (Y) through representations in D(b) ∈ R^{ℓ×v}. Let us use A′ to refer to the brain-constrained rows of A. Words that are close in “brain space” must have similar representations in A′, which can further percolate to affect the representations of other words in A via closeness in “corpus space”. With A or D fixed, the objective function for NNSE(Text) and JNNSE(Brain+Text) is convex. However, we are solving for A and D, so the problem is non-convex. To solve for this objective, we use the online algorithm of Section 3 from Mairal et al. (Mairal et al., 2010). This algorithm is guaranteed to converge, and in practice we found that JNNSE(Brain+Text) converged as quickly as NNSE(Text) for the same ℓ. We used the SPAMS package (http://spams-devel.gforge.inria.fr/) to solve, and set λ = 0.025. This algorithm was a very easy extension to NNSE(Text) and required very little additional tuning. We also consider learning shared representations in the case where data X and Y contain the effects of known disjoint features. For example, when a person reads a word, the recorded brain activation data Y will contain the physiological response to viewing the stimulus, which is unrelated to the semantics of the word. These signals can be attributed to, for example, the number of letters in the word and the number of white pixels on the screen (Sudre et al., 2012). To account for such effects in the data, we augment A′ with a set of n fixed, manually defined features (e.g. word length) to create A′_percept ∈ R^{w×(ℓ+n)}. D(b) ∈ R^{(ℓ+n)×v} is used with A′_percept to reconstruct the brain data Y. More generally, one could instead allocate a certain number of latent features specific to X or Y, both of which could be learned, as explored in some related work (Gupta et al., 2013). We use 11 perceptual features that characterize the non-semantic features of the word stimulus (for a list, see supplementary material at http://www.cs.cmu.edu/~afyshe/papers/acl2014/). The JNNSE algorithm is advantageous in that it can handle partially paired data. That is, the algorithm does not require that every row in X also have a row in Y. Fully paired data is a requirement of many other approaches (White et al., 2012; Jia and Darrell, 2010). Our approach allows us to leverage the semantic information in corpus data even for words without brain activation recordings.
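As a concrete (if simplified) reference point for Equations 4–7, the sketch below optimizes the joint objective with alternating projected-gradient steps; the paper itself uses the online dictionary-learning algorithm of Mairal et al. (2010) via the SPAMS package, so the update scheme, learning rate, and iteration count here are assumptions made only for illustration.

```python
import numpy as np

def jnnse_sketch(X, Y, n_dims, lam=0.025, lr=1e-3, iters=500, seed=0):
    """Alternating projected-gradient sketch of the JNNSE objective (Eqs. 4-7).

    X: (w, c) corpus statistics for all w words.
    Y: (w_prime, v) brain recordings for the first w_prime rows of X.
    Returns A (w, n_dims), D_c (n_dims, c), D_b (n_dims, v).
    """
    w = X.shape[0]
    wp = Y.shape[0]
    rng = np.random.default_rng(seed)
    A = np.abs(rng.normal(scale=0.01, size=(w, n_dims)))
    D_c = rng.normal(scale=0.01, size=(n_dims, X.shape[1]))
    D_b = rng.normal(scale=0.01, size=(n_dims, Y.shape[1]))

    for _ in range(iters):
        # Step on A with the current dictionaries, then project to A >= 0 (Eq. 7).
        R_c = A @ D_c - X              # corpus reconstruction residual
        R_b = A[:wp] @ D_b - Y         # brain reconstruction residual (first w_prime rows)
        grad_A = 2 * R_c @ D_c.T + lam * np.sign(A)
        grad_A[:wp] += 2 * R_b @ D_b.T
        A = np.maximum(A - lr * grad_A, 0.0)

        # Dictionary step with the updated A, then project rows onto the unit L2 ball (Eqs. 5-6).
        R_c = A @ D_c - X
        R_b = A[:wp] @ D_b - Y
        D_c -= lr * 2 * A.T @ R_c
        D_b -= lr * 2 * A[:wp].T @ R_b
        D_c /= np.maximum(np.linalg.norm(D_c, axis=1, keepdims=True), 1.0)
        D_b /= np.maximum(np.linalg.norm(D_b, axis=1, keepdims=True), 1.0)
    return A, D_c, D_b

# Example call with random placeholders for the corpus and brain matrices.
rng = np.random.default_rng(1)
X = rng.normal(size=(200, 300))        # 200 words x 300 corpus features
Y = rng.normal(size=(60, 400))         # brain data for the first 60 words only
A, D_c, D_b = jnnse_sketch(X, Y, n_dims=50)
```

A full implementation would replace the inner gradient steps with exact sparse coding and block-coordinate dictionary updates, which is what the online algorithm of Mairal et al. (2010) provides.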
JNNSE(Brain+Text) does not require brain data to be mapped to a common average brain, which is often the case when one wants to generalize between human subjects. Such mappings can blur and distort data, making it less useful for subsequent prediction steps. We avoid these mappings, and instead use the fact that similar words elicit similar brain activation within a subject. In the JNNSE algorithm, it is this closeness in “brain space” that guides the creation of the latent space A. Leveraging intra-subject distance measures to study inter-subject encodings has been studied previously (Kriegeskorte et al., 2008a; Raizada and Connolly, 2012), and has even been used across species (humans and primates) (Kriegeskorte et al., 2008b). Though we restrict ourselves to using one subject per JNNSE(Brain+Text) model, the JNNSE algorithm could easily be extended to include data from multiple brain imaging experiments by adding a new squared loss term for additional brain data. 3.1 Related Work Perhaps the most well known related approach to joining data sources is Canonical Correlation Analysis (CCA) (Hotelling, 1936), which has been applied to brain activation data in the past (Rustandi et al., 2009). CCA seeks two linear transformations that maximally correlate two data sets in the transformed form. CCA requires that the data sources be paired (all rows in the corpus data must have a corresponding brain data), as correlation between points is integral to the objective. To apply CCA to our data we would need to discard the vast majority of our corpus data, and use only the 60 rows of X with corresponding rows in Y. While CCA holds the input data fixed and maximally correlates the transformed form, we hold the transformed form fixed and seek a solution that maximally correlates the reconstruction (AD(c) or A′D(b)) with the data (X and Y respectively). This shift in error compensation is what allows our data to be only partially paired. While a Bayesian formulation of CCA can handle missing data, our model has missing data for > 97% of the full w × (v + c) brain and corpus data matrix. To our knowledge, this extreme amount of missing data has not been explored with Bayesian CCA. One could also use a topic model style formulation to represent this semantic representation task. Supervised topic models (Blei and McAuliffe, 2007) use a latent topic to generate two observed outputs: words in a document and a categorical label for the document. The same idea could be applied here: the latent semantic representation generates the observed brain activity and corpus statistics. Generative and discriminative models both have their own strengths and weaknesses, generative models being particularly strong when data sources are limited (Ng and Jordan, 2002). Our task is an interesting blend of data-limited and data-rich problem scenarios. In the past, various pieces of additional information have been incorporated into semantic models. For example, models with behavioral data (Silberer and Lapata, 2012) and models with visual information (Bruni et al., 2011; Silberer et al., 2013) have both shown to improve semantic representations. Other works have correlated VSMs built with text or images with brain activation data (Murphy et al., 2012b; Anderson et al., 2013). To our knowledge, this work is the first to integrate brain activation data into the construction of the VSM. 4 Data 4.1 Corpus Data The corpus statistics used here are the downloadable vectors from Fyshe et al. (2013)3. 
They are compiled from a 16 billion word subset of ClueWeb09 (Callan and Hoy, 2009) and contain two types of corpus features: dependency and document features, found to be complimentary for 3http://www.cs.cmu.edu/˜afyshe/papers/ conll2013/ 492 most tasks. Dependency statistics were derived by dependency parsing the corpus and compiling counts for all dependencies incident on the word. Document statistics are word-document co-occurrence counts. Count thresholding was applied to reduce noise, and positive pointwisemutual-information (PPMI) (Church and Hanks, 1990) was applied to the counts. SVD was applied to the document and dependency statistics and the top 1000 dimensions of each type were retained. We selected the rows corresponding to noun-tagged words (approx. 17000 words). 4.2 Brain Activation Data We have MEG and fMRI data at our disposal. MEG measures the magnetic field caused by many thousands of neurons firing together, and has good time resolution (1000 Hz) but poor spatial resolution. fMRI measures the change in blood oxygenation that results from differential neural activity, and has good spatial resolution but poor time resolution (0.5-1 Hz). We have fMRI data and MEG data for 18 subjects (9 in each imaging modality) viewing 60 concrete nouns (Mitchell et al., 2008; Sudre et al., 2012). The 60 words span 12 word categories (animals, buildings, tools, insects, body parts, furniture, building parts, utensils, vehicles, objects, clothing, food). Each of the 60 words was presented with a line drawing, so word ambiguity is not an issue. For both recording modalities, all trials for a particular word were averaged together to create one training instance per word, with 60 training instances in all for each subject and imaging modality. More preprocessing details appear in the supplementary material. 5 Experimental Results Here we explore several variations of JNNSE and NNSE formulations. For a comparison of the models used, see Table 1. 5.1 Correlation to Behavioral Data To test if our joint model of Brain+Text is closer to semantic ground truth we compared the latent representation A learned via JNNSE(Brain+Text) or NNSE(Text) to an independent behavioral measure of semantics. We collected behavioral data for the 60 nouns in the form of answers to 218 semantic questions. Answers were gathered with Mechanical Turk. The full list of questions appear in the supplementary material. Some example questions are:“Is it alive?”, and “Can it bend?”. Mechanical Turk users were asked to respond to each question for each word on a scale of 1-5. At least 3 respondents answered each question and the median score was used. This gives us a semantic representation of each of the 60 words in a 218-dimensional behavioral space. Because we required answers to each of the questions for all words, we do not have the problems of sparsity that exist for feature production norms from other studies (McRae et al., 2005). In addition, our answers are ratings, rather than binary yes/no answers. For a given value of ℓwe solve the NNSE(Text) and JNNSE(Brain+Text) objective function as detailed in Equation 1 and 4 respectively. We compared JNNSE(Brain+Text) and NNSE(Text) models by measuring the correlation of all pairwise distances in JNNSE(Brain+Text) and NNSE(Text) space to the pairwise distances in the 218dimensional semantic space. Distances were calculated using normalized Euclidean distance (equivalent in rank-ordering to cosine distance, but more suitable for sparse vectors). 
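A small sketch of this comparison: compute all pairwise normalized Euclidean distances in a candidate semantic space and in the 218-dimensional behavioral space, then correlate the two sets of distances. The random matrices below stand in for the learned vectors and the behavioral ratings, and the use of Pearson correlation and of L2-normalization before the Euclidean distance are assumptions about details the text leaves implicit.

```python
import numpy as np
from scipy.spatial.distance import pdist
from scipy.stats import pearsonr

def normalized_euclidean_distances(M):
    """Pairwise Euclidean distances after L2-normalizing each row.

    With unit-length rows this is monotonically related to cosine distance,
    matching the description in the text (interpretation assumed)."""
    norms = np.linalg.norm(M, axis=1, keepdims=True)
    unit = M / np.maximum(norms, 1e-12)
    return pdist(unit, metric="euclidean")     # condensed vector over all word pairs

# Stand-ins for the real data: 60 words in a learned semantic space and
# in the 218-dimensional behavioral-question space.
rng = np.random.default_rng(0)
A_latent = np.abs(rng.normal(size=(60, 500)))       # e.g. JNNSE or NNSE vectors
behavior = rng.integers(1, 6, size=(60, 218)).astype(float)  # 1-5 question ratings

d_model = normalized_euclidean_distances(A_latent)
d_behav = normalized_euclidean_distances(behavior)
r, _ = pearsonr(d_model, d_behav)
print(f"correlation of pairwise distances: {r:.3f}")
```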
Figure 1 shows the results of this correlation test. The error bars for the JNNSE(Brain+Text) models represent a 95% confidence interval calculated using the standard error of the mean (SEM) over the 9 person-specific JNNSE(Brain+Text) models. Because there is only one NNSE(Text) model for each dimension setting, no SEM can be calculated, but it suffices to show that the NNSE(Text) correlation does not fall into the 95% confidence interval of the JNNSE(Brain+Text) models. The SVD matrix for the original corpus data has correlation 0.4279 to the behavioral data, also below the 95% confidence interval for all JNNSE models. The results show that a model that incorporates brain activation data is more faithful to a behavioral measure of semantics.
Figure 1: Correlation of JNNSE(Brain+Text) and NNSE(Text) models with the distances in a semantic space constructed from behavioral data. Error bars indicate SEM. (Plot of correlation against the number of latent dimensions, 250/500/1000, for JNNSE(fMRI+Text), JNNSE(MEG+Text), NNSE(Text), and SVD(Text).)
Table 1: A Comparison of the models explored in this paper, and the data upon which they operate.
Model Name                        | Section(s)    | Text Data | Brain Data | Withheld Data
NNSE(Text)                        | 2, 5          | ✓         | x          | —
NNSE(Brain)                       | 2, 5.2.1, 5.3 | x         | ✓          | —
JNNSE(Brain+Text)                 | 3, 5          | ✓         | ✓          | —
JNNSE(Brain+Text): Dropout task   | 5.2.2         | ✓         | ✓          | subset of brain data
JNNSE(Brain+Text): Predict corpus | 5.3           | ✓         | ✓          | subset of text data
5.2 Word Prediction from Brain Activation We now show that the JNNSE(Brain+Text) vectors are more consistent with independent samples of brain activity collected from different subjects, even when recorded using different recording technologies. As previously mentioned, because there is a large degree of variation between brains and because MEG and fMRI measure very different correlates of neuronal activity, this type of generalization has proven to be very challenging and is an open research question in the neuroscience community. The output A of the JNNSE(Brain+Text) or NNSE(Text) algorithm can be used as a VSM, which we use for the task of word prediction from fMRI or MEG recordings. A JNNSE(Brain+Text) created with a particular human subject's data is never used in the prediction framework with that same subject. For example, if we use fMRI data from subject 1 to create a JNNSE(fMRI+Text), we will test it with the remaining 8 fMRI subjects, but all 9 MEG subjects (fMRI and MEG subjects are disjoint). Let us call the VSM learned with JNNSE(Brain+Text) or NNSE(Text) the semantic vectors. We can train a weight matrix W that predicts the semantic vector a of a word from that word's brain activation vector x: a = Wx. W can be learned with a variety of methods; we will use L2 regularized regression. One can also train regressors that predict the brain activation data from the semantic vector: x = Wa, but we have found this to give lower predictive accuracy. Note that we must re-train our weight matrix W for each subject (instead of re-using D(b) from Equation 4) because testing always occurs on a different subject, and the brain activation data is not inter-subject aligned. We train ℓ independent L2 regularized regressors to predict the ℓ-dimensional vectors a = {a1 . . . aℓ}. The predictions are concatenated to produce a predicted semantic vector: ˆa = {ˆa1, . . . , ˆaℓ}. We assess word prediction performance by testing if the model can differentiate between two unseen words, a task named 2 vs. 2 prediction (Mitchell et al., 2008; Sudre et al., 2012).
We choose the assignment of the two held out semantic vectors (a(1), a(2)) to predicted semantic vectors (ˆa(1), ˆa(2)) that minimizes the sum of the two normalized Euclidean distances. 2 vs. 2 accuracy is the percentage of tests where the correct assignment is chosen. The 60 nouns fall into 12 word categories. Words in the same word category (e.g. screwdriver and hammer) are closer in semantic space than words in different word categories, which makes some 2 vs. 2 tests more difficult than others. We choose 150 random pairs of words (with each word represented equally) to estimate the difficulty of a typical word pair, without having to test all $\binom{60}{2}$ word pairs. The same 150 random pairs are used for all subjects and all VSMs. Expected chance performance on the 2 vs. 2 test is 50%. Results for testing on fMRI data in the 2 vs. 2 framework appear in Figure 2. JNNSE(fMRI+Text) performed on average 6% better than the best NNSE(Text), exceeding even the original SVD corpus representations while maintaining interpretability. These results generalize across brain activity recording types; JNNSE(MEG+Text) performs as well as JNNSE(fMRI+Text) when tested on fMRI data. The results are consistent when testing on MEG data: JNNSE(MEG+Text) or JNNSE(fMRI+Text) outperforms NNSE(Text) (see Figure 3).
Figure 2: Average 2 vs. 2 accuracy for NNSE(Text) and JNNSE(Brain+Text), tested on fMRI data (2 vs. 2 accuracy plotted against 250, 500, and 1000 latent dimensions for JNNSE(fMRI+Text), JNNSE(MEG+Text), NNSE(Text), and SVD(Text)). Models created with one subject's fMRI data were not used to compute 2 vs. 2 accuracy for that same subject.
Figure 3: Average 2 vs. 2 accuracy for NNSE(Text) and JNNSE(Brain+Text), tested on MEG data (same layout as Figure 2). Models created with one subject's MEG data were not used to compute 2 vs. 2 accuracy for that same subject.
NNSE(Text) performance decreases as the number of latent dimensions increases. This implies that without the regularizing effect of brain activation data, the extra NNSE(Text) dimensions are being used to overfit to the corpus data, or possibly to fit semantic properties not detectable with current brain imaging technologies. However, when brain activation data is included, increasing the number of latent dimensions strictly increases performance for JNNSE(fMRI+Text). JNNSE(MEG+Text) has peak performance with 500 latent dimensions, with ∼1% decrease in performance at 1000 latent dimensions. In previous work, the ability to decode words from brain activation data was found to improve with added latent dimensions (Murphy et al., 2012a). Our results may differ because our words are POS tagged, and we included only nouns for the final NNSE(Text) model. We found that the original λ = 0.05 setting from Murphy et al. (2012a) produced vectors that were too sparse; four of the 60 test words had all-zero vectors (JNNSE(Brain+Text) models did not have any all-zero vectors). To improve the NNSE(Text) vectors for a fair comparison, we reduced λ to 0.025, under which NNSE(Text) did not produce any all-zero vectors for the 60 words.
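The 2 vs. 2 procedure can be summarized in a few lines; the sketch below trains a ridge regressor from brain features to semantic vectors for each held-out pair and checks whether the correct assignment has the smaller summed distance. The placeholder data, the single multi-output ridge model (rather than ℓ separate regressors), and the regularization strength are simplifications of the actual experimental setup.

```python
import numpy as np
from sklearn.linear_model import Ridge

def two_vs_two_accuracy(X_brain, A_sem, pairs, alpha=1.0):
    """Leave-two-out 2 vs. 2 test: for each held-out word pair, fit a ridge
    mapping brain -> semantic vectors on the remaining words, then check whether
    the correct assignment of predictions to true vectors has the smaller
    summed (normalized Euclidean) distance."""
    def norm_rows(M):
        return M / np.maximum(np.linalg.norm(M, axis=1, keepdims=True), 1e-12)

    correct = 0
    for i, j in pairs:
        train = [k for k in range(len(A_sem)) if k not in (i, j)]
        model = Ridge(alpha=alpha).fit(X_brain[train], A_sem[train])
        pred = norm_rows(model.predict(X_brain[[i, j]]))
        true = norm_rows(A_sem[[i, j]])
        right = np.linalg.norm(pred[0] - true[0]) + np.linalg.norm(pred[1] - true[1])
        wrong = np.linalg.norm(pred[0] - true[1]) + np.linalg.norm(pred[1] - true[0])
        correct += right < wrong
    return correct / len(pairs)

# Placeholder data: 60 words, fMRI-like features, 500-dimensional semantic vectors.
rng = np.random.default_rng(0)
X_brain = rng.normal(size=(60, 2000))
A_sem = np.abs(rng.normal(size=(60, 500)))
pairs = [tuple(rng.choice(60, size=2, replace=False)) for _ in range(150)]
print(two_vs_two_accuracy(X_brain, A_sem, pairs))
```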
Our results show that brain activation data contributes additional information, which leads to an increase in performance for the task of word prediction from brain activation data. This suggests that corpus-only models may not capture all relevant semantic information. This conflicts with previous studies which found that semantic vectors culled from corpus statistics contain all of the semantic information required to predict brain activation (Bullinaria and Levy, 2013). 5.2.1 Prediction from a Brain-only Model How much predictive power does the corpus data provide to this word prediction task? To test this, we calculated the 2 vs. 2 accuracy for a NNSE(Brain) model trained on brain activation data only. We train NNSE(Brain) with one subject’s data and use the resulting vectors to calculate 2 vs. 2 accuracy for the remaining subjects. We have brain data for only 60 words, so using ℓ≥60 latent dimensions leads to an under-constrained system and a degenerate solution wherein only one latent dimension is active for any word (and where the brain data can be perfectly reconstructed). The degenerate solution makes it impossible to generalize across words and leads to performance at chance levels. An NNSE(MEG) trained on MEG data gave maximum 2 vs. 2 accuracy of 67% when ℓ= 20. The reduced performance may be due to the limited training data and the low SNR of the data, but could also be attributed to the lack of corpus information, which provides another piece of semantic information. 495 5.2.2 Effect on Rows Without Brain Data It is possible that some JNNSE(Brain+Text) dimensions are being used exclusively to fit brain activation data, and not the semantics represented in both brain and corpus data. If a particular dimension j is solely used for brain data, the sparsity constraint will favor solutions that sets A(i,j) = 0 for i > w′ (no brain data constraint), and A(i,j) > 0 for some 0 ≤i ≤w′ (brain data constrained). We found that there were no such dimensions in the JNNSE(Brain+Text). In fact for the ℓ= 1000 JNNSE(Brain+Text), all latent dimensions had greater than ∼25% non-zero entries, which implies that all dimensions are being shared between the two data inputs (corpus and brain activation), and are used to reconstruct both. To test that the brain activation data is truly influencing rows of A not constrained by brain activation data, we performed a dropout test. We split the original 60 words into two 30 word groups (as evenly as possible across word categories). We trained JNNSE(fMRI+Text) with 30 words, and tested word prediction with the remaining 8 subjects and the other 30 words. Thus, the training and testing word sets are disjoint. Because of the reduced size of the training data, we did see a drop in performance, but JNNSE(fMRI+Text) vectors still gave word prediction performance 7% higher than NNSE(Text) vectors. Full results appear in the supplementary material. 5.3 Predicting Corpus Data Here we ask: can an accurate latent representation of a word be constructed using only brain activation data? This task simulates the scenario where there is no reliable corpus representation of a word, but brain data is available. This scenario may occur for seldom-used words that fall below the thresholds used for the compilation of corpus statistics. It could also be useful for acronym tokens (lol, omg) found in social media contexts where the meaning of the token is actually a full sentence. 
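The sparsity-pattern diagnostic of Section 5.2.2 is straightforward to express; the sketch below flags latent dimensions that are non-zero only in the brain-constrained rows of A and reports the fraction of non-zero entries per dimension, assuming A is a dense array whose first w′ rows correspond to the words with brain data (the matrix and its size here are random placeholders, much smaller than the real A).

```python
import numpy as np

def brain_only_dimensions(A, w_prime, tol=0.0):
    """Indices of latent dimensions that are non-zero for at least one
    brain-constrained row (the first w_prime rows) but zero for every row
    without brain data; such dimensions would be fitting brain data alone."""
    used_by_brain_rows = (A[:w_prime] > tol).any(axis=0)
    used_by_other_rows = (A[w_prime:] > tol).any(axis=0)
    return np.where(used_by_brain_rows & ~used_by_other_rows)[0]

def nonzero_fraction_per_dimension(A, tol=0.0):
    """Fraction of words with a non-zero weight in each latent dimension
    (the paper reports roughly 25%+ non-zero entries for every dimension)."""
    return (A > tol).mean(axis=0)

# Placeholder A: 1700 words, 100 dimensions, first 60 rows brain-constrained.
rng = np.random.default_rng(0)
A = np.abs(rng.normal(size=(1700, 100))) * (rng.random((1700, 100)) > 0.6)
print(brain_only_dimensions(A, w_prime=60))
print(nonzero_fraction_per_dimension(A)[:5])
```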
We trained a JNNSE(fMRI+Text) with brain data for all 60 words, but withhold the corpus data for 30 of the 60 words (as evenly distributed as possible amongst the 12 word categories). The brain activation data for the 30 withheld words will allow us to create latent representations in A for withheld words. Simultaneously, we will learn a mapping from the latent representation to the corpus data (D(c)). This task cannot be perTable 2: Mean rank accuracy over 30 words using corpus representations predicted by a JNNSE(MEG+Text) model trained with some rows of the corpus data withheld. Significance is calculated using Fisher’s method to combine pvalues for each of the subject-dependent models. Latent Dim size Rank Accuracy p-value 250 65.30 < 10−19 500 67.37 < 10−24 1000 63.47 < 10−15 formed with a NNSE(Text) model because one cannot learn a latent representation of a word without data of some kind. This further emphasizes the impact of brain imaging data, which will allow us to generalize to previously unseen words in corpus space. We use the latent representations in A for each of the words without corpus data and the mapping to corpus space D(c) to predict the withheld corpus data in X. We then rank the withheld rows of X by their distance to the predicted row of X and calculate the mean rank accuracy of the held out words. Results in Table 2 show that we can recreate the withheld corpus data using brain activation data. Peak mean rank accuracy (67.37) is attained at ℓ= 500 latent dimensions. This result shows that neural semantic representations can create a latent representation that is faithful to unseen corpus statistics, providing further evidence that the two data sources share a strong common element. How much power is the remaining corpus data supplying in scenarios where we withhold corpus data? To answer this question, we trained an NNSE(Brain) model on 30 words of brain activation, and then trained a regressor to predict corpus data from those latent brain-only representations. We use the trained regressor to predict the corpus data for the remaining 30 words. Peak performance is attained at ℓ= 10 latent dimensions, giving mean rank accuracy of 62.37, significantly worse than the model that includes both corpus and brain activation data (67.37). 5.4 Mapping Semantics onto the Brain Because our method incorporates brain data into an interpretable semantic model, we can directly map semantic concepts onto the brain. To do this, we examined the mappings from the latent space to the brain space via D(b). We found that the most interpretable mappings come from mod496 !"#$%&'() (a) D(b) matrix, subject P3, dimension with top words bathroom, balcony, kitchen. MNI coordinates z=-12 (left) and z=-18 (right). Fusiform is associated with shelter words. !"#$%&'$()*+ !(&%&'$()*+ (b) D(b) matrix; subject P1; dimension with top words ankle, elbow, knee. MNI coordinates z=60 (left) and z=54 (right). Preand post-central areas are activated for body part words. !"#$% &'(#)*+"#,$% (c) D(b) matrix; subject P1; dimension with top scoring words buffet, brunch, lunch. MNI coordinates z=30 (left) and z=24 (right). Pars opercularis is believed to be part of the gustatory cortex, which responds to food related words. Figure 4: The mappings (D(b)) from latent semantic space (A) to brain space (Y ) for fMRI and words from three semantic categories. Shown are representations of the fMRI slices such that the back of the head is at the top of the image, the front of the head is at the bottom. 
els where the perceptual features had been scaled down (divided by a constant factor), which encourages more of the data to be explained by the semantic features in A. Figure 4 shows the mappings (D(b)) for dimensions related to shelter, food and body parts. The red areas align with areas of the brain previously known to be activated by the corresponding concepts (Mitchell et al., 2008; Just et al., 2010). Our model has learned these mappings in an unsupervised setting by relating semantic knowledge gleaned from word usage to patterns of activation in the brain. This illustrates how the interpretability of JNNSE can allow one to explore semantics in the human brain. The mappings for one subject are available for download (http://www.cs. cmu.edu/˜afyshe/papers/acl2014/). 6 Future Work and Conclusion We are interested in pursuing many future projects inspired by the success of this model. We would like to extend the JNNSE algorithm to incorporate data from multiple subjects, multiple modalities and multiple experiments with non-overlapping words. Including behavioral data and image data is another possibility. We have explored a model of semantics that incorporates text and brain activation data. Though the number of words for which we have brain activation data is comparatively small, we have shown that including even this small amount of data has a positive impact on the learned latent representations, including for words without brain data. We have provided evidence that the latent representations are closer to the neural representation of semantics, and possibly, closer to semantic ground truth. Our results reveal that there are aspects of semantics not currently represented in text-based VSMs, indicating that there may be room for improvement in either the data or algorithms used to create VSMs. Our findings also indicate that using the brain as a semantic test can separate models that capture this additional semantic information from those that do not. Thus, the brain is an important source of both training and testing data. Acknowledgments This work was supported in part by NIH under award 5R01HD075328-02, by DARPA under award FA8750-13-2-0005, and by a fellowship to Alona Fyshe from the Multimodal Neuroimaging Training Program funded by NIH awards T90DA022761 and R90DA023420. References Andrew J Anderson, Elia Bruni, Ulisse Bordignon, Massimo Poesio, and Marco Baroni. 2013. Of words , eyes and brains : Correlating image-based distributional semantic models with neural representations of concepts. In Proceedings of the Conference on Empirical Methods on Natural Language Processing. David M Blei and Jon D. McAuliffe. 2007. Supervised topic models. In Advances in Neural Information Processing Systems, pages 1–22. 497 Elia Bruni, Giang Binh Tran, and Marco Baroni. 2011. Distributional semantics from text and images. In Proceedings of the EMNLP 2011 Geometrical Models for Natural Language Semantics (GEMS). John A Bullinaria and Joseph P Levy. 2013. Limiting factors for mapping corpus-based semantic representations to brain activity. PloS one, 8(3):e57191, January. Jamie Callan and Mark Hoy. 2009. The ClueWeb09 Dataset. Jonathan Chang, Jordan Boyd-Graber, Sean Gerrish, Chong Wang, and David M Blei. 2009. Reading Tea Leaves : How Humans Interpret Topic Models. In Advances in Neural Information Processing Systems, pages 1–9. Kenneth Ward Church and Patrick Hanks. 1990. Word association norms, mutual information, and lexicography. Computational linguistics, 16(1):22–29. 
Bryan R Conroy, Benjamin D Singer, J Swaroop Guntupalli, Peter J Ramadge, and James V Haxby. 2013. Inter-subject alignment of human cortical anatomy using functional connectivity. NeuroImage, 81:400– 11, November. Andrew D Engell, Scott Huettel, and Gregory McCarthy. 2012. The fMRI BOLD signal tracks electrophysiological spectral perturbations, not eventrelated potentials. NeuroImage, 59(3):2600–6, February. Alona Fyshe, Partha Talukdar, Brian Murphy, and Tom Mitchell. 2013. Documents and Dependencies : an Exploration of Vector Space Models for Semantic Composition. In Computational Natural Language Learning, Sofia, Bulgaria. Arthur M Glenberg and David a Robertson. 2000. Symbol Grounding and Meaning: A Comparison of High-Dimensional and Embodied Theories of Meaning. Journal of Memory and Language, 43(3):379–401, October. Sunil Kumar Gupta, Dinh Phung, Brett Adams, and Svetha Venkatesh. 2013. Regularized nonnegative shared subspace learning. Data Mining and Knowledge Discovery, 26(1):57–97. Emma L Hall, Siˆan E Robson, Peter G Morris, and Matthew J Brookes. 2013. The relationship between MEG and fMRI. NeuroImage, November. Harold Hotelling. 1936. Relations between two sets of variates. Biometrika, 28(3/4):321–377. Yangqing Jia and Trevor Darrell. 2010. Factorized Latent Spaces with Structured Sparsity. In Advances in Neural Information Processing Systems, volume 23. Marcel Adam Just, Vladimir L Cherkassky, Sandesh Aryal, and Tom M Mitchell. 2010. A neurosemantic theory of concrete noun representation based on the underlying brain codes. PloS one, 5(1):e8622, January. Nikolaus Kriegeskorte, Marieke Mur, and Peter Bandettini. 2008a. Representational similarity analysis - connecting the branches of systems neuroscience. Frontiers in systems neuroscience, 2(November):4, January. Nikolaus Kriegeskorte, Marieke Mur, Douglas A Ruff, Roozbeh Kiani, Jerzy Bodurka, Hossein Esteky, Keiji Tanaka, and Peter A Bandettin. 2008b. Matching Categorical Object Representations in Inferior Temporal Cortex of Man and Monkey. Neuron, 60(6):1126–1141. TK Landauer and ST Dumais. 1997. A solution to Plato’s problem: The latent semantic analysis theory of acquisition, induction, and representation of knowledge. Psychological review, 1(2):211–240. Julien Mairal, Francis Bach, J Ponce, and Guillermo Sapiro. 2010. Online learning for matrix factorization and sparse coding. The Journal of Machine Learning Research, 11:19–60. Ken McRae, George S Cree, Mark S Seidenberg, and Chris McNorgan. 2005. Semantic feature production norms for a large set of living and nonliving things. Behavior research methods, 37(4):547–59, November. Tom M Mitchell, Svetlana V Shinkareva, Andrew Carlson, Kai-Min Chang, Vicente L Malave, Robert A Mason, and Marcel Adam Just. 2008. Predicting human brain activity associated with the meanings of nouns. Science (New York, N.Y.), 320(5880):1191–5, May. Brian Murphy, Partha Talukdar, and Tom Mitchell. 2012a. Learning Effective and Interpretable Semantic Models using Non-Negative Sparse Embedding. In Proceedings of Conference on Computational Linguistics (COLING). Brian Murphy, Partha Talukdar, and Tom Mitchell. 2012b. Selecting Corpus-Semantic Models for Neurolinguistic Decoding. In First Joint Conference on Lexical and Computational Semantics (*SEM), pages 114–123, Montreal, Quebec, Canada. Andrew Y. Ng and Michael I. Jordan. 2002. On discriminative vs. generative classifiers: A comparison of logistic regression and naive bayes. In Advances in neural information processing systems, volume 14. 
Mark Palatucci, Geoffrey Hinton, Dean Pomerleau, and Tom M Mitchell. 2009. Zero-Shot Learning with Semantic Output Codes. Advances in Neural Information Processing Systems, 22:1410–1418. Rajeev D S Raizada and Andrew C Connolly. 2012. What Makes Different People’s Representations Alike : Neural Similarity Space Solves the Problem of Across-subject fMRI Decoding. Journal of Cognitive Neuroscience, 24(4):868–877. 498 Indrayana Rustandi, Marcel Adam Just, and Tom M Mitchell. 2009. Integrating Multiple-Study Multiple-Subject fMRI Datasets Using Canonical Correlation Analysis. In MICCAI 2009 Workshop: Statistical modeling and detection issues in intraand inter-subject functional MRI data analysis. Magnus Sahlgren. 2006. The Word-Space Model Using distributional analysis to represent syntagmatic and paradigmatic relations between words. Doctor of philosophy, Stockholm University. Carina Silberer and Mirella Lapata. 2012. Grounded models of semantic representation. In Proceedings of the 2012 Joint Conference on Empirical Methods in Natural Language Processing and Computational Natural Language Learning, pages 1423–1433. Carina Silberer, Vittorio Ferrari, and Mirella Lapata. 2013. Models of Semantic Representation with Visual Attributes. In Association for Computational Linguistics 2013, Sofia, Bulgaria. Gustavo Sudre, Dean Pomerleau, Mark Palatucci, Leila Wehbe, Alona Fyshe, Riitta Salmelin, and Tom Mitchell. 2012. Tracking Neural Coding of Perceptual and Semantic Features of Concrete Nouns. NeuroImage, 62(1):463–451, May. Peter D Turney and Patrick Pantel. 2010. From Frequency to Meaning : Vector Space Models of Semantics. Journal of Artificial Intelligence Research, 37:141–188. Martha White, Yaoliang Yu, Xinhua Zhang, and Dale Schuurmans. 2012. Convex multi-view subspace learning. In Advances in Neural Information Processing Systems, pages 1–14. 499
Proceedings of the 52nd Annual Meeting of the Association for Computational Linguistics, pages 500–510, Baltimore, Maryland, USA, June 23-25 2014. c⃝2014 Association for Computational Linguistics Single-Agent vs. Multi-Agent Techniques for Concurrent Reinforcement Learning of Negotiation Dialogue Policies Kallirroi Georgila, Claire Nelson, David Traum University of Southern California Institute for Creative Technologies 12015 Waterfront Drive, Playa Vista, CA 90094, USA {kgeorgila,traum}@ict.usc.edu Abstract We use single-agent and multi-agent Reinforcement Learning (RL) for learning dialogue policies in a resource allocation negotiation scenario. Two agents learn concurrently by interacting with each other without any need for simulated users (SUs) to train against or corpora to learn from. In particular, we compare the Qlearning, Policy Hill-Climbing (PHC) and Win or Learn Fast Policy Hill-Climbing (PHC-WoLF) algorithms, varying the scenario complexity (state space size), the number of training episodes, the learning rate, and the exploration rate. Our results show that generally Q-learning fails to converge whereas PHC and PHC-WoLF always converge and perform similarly. We also show that very high gradually decreasing exploration rates are required for convergence. We conclude that multiagent RL of dialogue policies is a promising alternative to using single-agent RL and SUs or learning directly from corpora. 1 Introduction The dialogue policy of a dialogue system decides on which actions the system should perform given a particular dialogue state (i.e., dialogue context). Building a dialogue policy can be a challenging task especially for complex applications. For this reason, recently much attention has been drawn to machine learning approaches to dialogue management and in particular Reinforcement Learning (RL) of dialogue policies (Williams and Young, 2007; Rieser et al., 2011; Jurˇc´ıˇcek et al., 2012). Typically there are three main approaches to the problem of learning dialogue policies using RL: (1) learn against a simulated user (SU), i.e., a model that simulates the behavior of a real user (Georgila et al., 2006; Schatzmann et al., 2006); (2) learn directly from a corpus (Henderson et al., 2008; Li et al., 2009); or (3) learn via live interaction with human users (Singh et al., 2002; Gaˇsi´c et al., 2011; Gaˇsi´c et al., 2013). We propose a fourth approach: concurrent learning of the system policy and the SU policy using multi-agent RL techniques. Both agents are trained simultaneously and there is no need for building a SU separately or having access to a corpus.1 As we discuss below, concurrent learning could potentially be used for learning via live interaction with human users. Moreover, for negotiation in particular there is one more reason in favor of concurrent learning as opposed to learning against a SU. Unlike slot-filling domains, in negotiation the behaviors of the system and the user are symmetric. They are both negotiators, thus building a good SU is as difficult as building a good system policy. So far research on using RL for dialogue policy learning has focused on single-agent RL techniques. Single-agent RL methods make the assumption that the system learns by interacting with a stationary environment, i.e., an environment that does not change over time. Here the environment is the user. Generally the assumption that users do not significantly change their behavior over time holds for simple information providing tasks (e.g., reserving a flight). 
But this is not necessarily the case for other genres of dialogue, including negotiation. Imagine a situation where a negotiator is so uncooperative and arrogant that the other negotiators decide to completely change their negotiation strategy in order to punish her. Therefore it is important to investigate RL approaches that do not make such assumptions about the user/environment. 1Though corpora or SUs may still be useful for bootstrapping the policies and encoding real user behavior (see section 6). 500 Multi-agent RL is designed to work for nonstationary environments. In this case the environment of a learning agent is one or more other agents that can also be learning at the same time. Therefore, unlike single-agent RL, multi-agent RL can handle changes in user behavior or in the behavior of other agents participating in the interaction, and thus potentially lead to more realistic dialogue policies in complex dialogue scenarios. This ability of multi-agent RL can also have important implications for learning via live interaction with human users. Imagine a system that learns to change its strategy as it realizes that a particular user is no longer a novice user, or that a user no longer cares about five star restaurants. We apply multi-agent RL to a resource allocation negotiation scenario. Two agents with different preferences negotiate about how to share resources. We compare Q-learning (a singleagent RL algorithm) with two multi-agent RL algorithms: Policy Hill-Climbing (PHC) and Win or Learn Fast Policy Hill-Climbing (PHC-WoLF) (Bowling and Veloso, 2002). We vary the scenario complexity (i.e., the quantity of resources to be shared and consequently the state space size), the number of training episodes, the learning rate, and the exploration rate. Our research contributions are as follows: (1) we propose concurrent learning using multi-agent RL as a way to deal with some of the issues of current approaches to dialogue policy learning (i.e., the need for SUs and corpora), which may also potentially prove useful for learning via live interaction with human users; (2) we show that concurrent learning can address changes in user behavior over time, and requires multi-agent RL techniques and variable exploration rates; (3) to our knowledge this is the first time that PHC and PHCWoLF are used for learning dialogue policies; (4) for the first time, the above techniques are applied to a negotiation domain; and (5) this is the first study that compares Q-learning, PHC, and PHCWoLF in such a variety of situations (varying a large number of parameters). The paper is structured as follows. Section 2 presents related work. Section 3 provides a brief introduction to single-agent RL and multi-agent RL. Section 4 describes our negotiation domain and experimental setup. In section 5 we present our results. Finally, section 6 concludes and provides some ideas for future work. 2 Related Work Most research in RL for dialogue management has been done in the framework of slot-filling applications such as restaurant recommendations (Lemon et al., 2006; Thomson and Young, 2010; Gaˇsi´c et al., 2012; Daubigney et al., 2012), flight reservations (Henderson et al., 2008), sightseeing recommendations (Misu et al., 2010), appointment scheduling (Georgila et al., 2010), etc. RL has also been applied to question-answering (Misu et al., 2012), tutoring domains (Tetreault and Litman, 2008; Chi et al., 2011), and learning negotiation dialogue policies (Heeman, 2009; Georgila and Traum, 2011; Georgila, 2013). 
As mentioned in section 1, there are three main approaches to the problem of learning dialogue policies using RL. In the first approach, a SU is hand-crafted or learned from a small corpus of human-human or human-machine dialogues. Then the dialogue policy can be learned by having the system interact with the SU for a large number of dialogues (usually thousands of dialogues). Depending on the application, building a realistic SU can be just as difficult as building a good dialogue policy. Furthermore, it is not clear what constitutes a good SU for dialogue policy learning. Should the SU resemble real user behavior as closely as possible, or should it exhibit some degree of randomness to explore a variety of interaction patterns? Despite much research on the issue, these are still open questions (Schatzmann et al., 2006; Ai and Litman, 2008; Pietquin and Hastie, 2013). In the second approach, no SUs are required. Instead the dialogue policy is learned directly from a corpus of human-human or human-machine dialogues. For example, Henderson et al. (2008) used a combination of RL and supervised learning to learn a dialogue policy in a flight reservation domain, whereas Li et al. (2009) used Least-Squares Policy Iteration (Lagoudakis and Parr, 2003), an RL-based technique that can learn directly from corpora, in a voice dialer application. However, collecting such corpora is not trivial, especially in new domains. Typically, data are collected in a Wizard-of-Oz setup where human users think that they interact with a system while in fact they interact with a human pretending to be the system, or by having human users interact with a preliminary version of the dialogue system. In both cases the resulting interactions are expected to be quite dif501 ferent from the interactions of human users with the final system. In practice this means that dialogue policies learned from such data could be far from optimal. The first experiment on learning via live interaction with human users (third approach) was reported by Singh et al. (2002). They used RL to help the system with two choices: how much initiative it should allow the user, and whether or not to confirm information provided by the user. Recently, learning of “full” dialogue policies (not just choices at specific points in the dialogue) via live interaction with human users has become possible with the use of Gaussian processes (Engel et al., 2005; Rasmussen and Williams, 2006). Typically learning a dialogue policy is a slow process requiring thousands of dialogues, hence the need for SUs. Gaussian processes have been shown to speed up learning. This fact together with easy access to a large number of human users through crowd-sourcing has allowed dialogue policy learning via live interaction with human users (Gaˇsi´c et al., 2011; Gaˇsi´c et al., 2013). Space constraints prevent us from providing an exhaustive list of previous work on using RL for dialogue management. Thus below we focus only on research that is directly related to our work, specifically research on concurrent learning of the policies of multiple agents, and the application of RL to negotiation domains. So far research on RL in the dialogue community has focused on using single-agent RL techniques where the stationary environment is the user. Most approaches assume that the user goal is fixed and that the behavior of the user is rational. Other approaches account for changes in user goals (Ma, 2013). 
In either case, one can build a user simulation model that is the average of different user behaviors or learn a policy from a corpus that contains a variety of interaction patterns, and thus safely assume that single-agent RL techniques will work. However, in the latter case if the behavior of the user changes significantly over time then the assumption that the environment is stationary will no longer hold. There has been a lot of research on multi-agent RL in the optimal control and robotics communities (Littman, 1994; Hu and Wellman, 1998; Busoniu et al., 2008). Here two or more agents learn simultaneously. Thus the environment of an agent is one or more other agents that continuously change their behavior because they are also learning at the same time. Therefore the environment is no longer stationary and single-agent RL techniques do not work well or do not work at all. We are particularly interested in the work of Bowling and Veloso (2002) who proposed the PHC and PHC-WoLF algorithms that we use in this paper. We chose these two algorithms because, unlike other multi-agent RL methods (Littman, 1994; Hu and Wellman, 1998), they do not make assumptions that do not always hold and do not require quadratic or linear programming that does not always scale. English and Heeman (2005) were the first in the dialogue community to explore the idea of concurrent learning of dialogue policies. However, English and Heeman (2005) did not use multiagent RL but only standard single-agent RL, in particular an on-policy Monte Carlo method (Sutton and Barto, 1998). But single-agent RL techniques are not well suited for concurrent learning where each agent is trained against a continuously changing environment. Indeed, English and Heeman (2005) reported problems with convergence. Chandramohan et al. (2012) proposed a framework for co-adaptation of the dialogue policy and the SU using single-agent RL. They applied Inverse Reinforcement Learning (IRL) (Abbeel and Ng, 2004) to a corpus in order to learn the reward functions of both the system and the SU. Furthermore, Cuay´ahuitl and Dethlefs (2012) used hierarchical multi-agent RL for co-ordinating the verbal and non-verbal actions of a robot. Cuay´ahuitl and Dethlefs (2012) did not use PHC or PHCWoLF and did not compare against single-agent RL methods. With regard to using RL for learning negotiation policies, the amount of research that has been performed is very limited compared to slot-filling. English and Heeman (2005) learned negotiation policies for a furniture layout task. Then Heeman (2009) extended this work by experimenting with different representations of the RL state in the same domain (this time learning against a hand-crafted SU). In both cases, to reduce the search space, the RL state included only information about e.g., whether there was a pending proposal rather than the actual value of this proposal. Paruchuri et al. (2009) performed a theoretical study on how Partially Observable Markov Decision Processes (POMDPs) can be applied to negotiation domains. 502 Georgila and Traum (2011) built argumentation dialogue policies for negotiation against users of different cultural norms in a one-issue negotiation scenario. To learn these policies they trained SUs on a spoken dialogue corpus in a florist-grocer negotiation domain, and then tweaked these SUs towards a particular cultural norm using handcrafted rules. 
Georgila (2013) learned argumentation dialogue policies from a simulated corpus in a two-issue negotiation scenario (organizing a party). Finally, Nouri et al. (2012) used IRL to learn a model for cultural decision-making in a simple negotiation game (the Ultimatum Game).

3 Single-Agent vs. Multi-Agent Reinforcement Learning

Reinforcement Learning (RL) is a machine learning technique used to learn the policy of an agent, i.e., which action the agent should perform given its current state (Sutton and Barto, 1998). The goal of an RL-based agent is to maximize the reward it gets during an interaction. Because it is very difficult for the agent to know what will happen in the rest of the interaction, the agent must select an action based on the average reward it has previously observed after having performed that action in similar contexts. This average reward is called the expected future reward.

Single-agent RL is used in the framework of Markov Decision Processes (MDPs) (Sutton and Barto, 1998) or Partially Observable Markov Decision Processes (POMDPs) (Williams and Young, 2007). Here we focus on MDPs. An MDP is defined as a tuple (S, A, T, R, γ), where S is the set of states (representing different contexts) which the agent may be in, A is the set of actions of the agent, T is the transition function S × A × S → [0, 1], which defines the transition probabilities between states after taking an action, R is the reward function S × A → ℜ, which defines the reward received when taking an action in a given state, and γ is a factor that discounts future rewards. Solving the MDP means finding a policy π : S → A. The quality of the policy π is measured by the expected discounted (with discount factor γ) future reward, also called the Q-value, Qπ : S × A → ℜ.

A stochastic game is defined as a tuple (n, S, A1...n, T, R1...n, γ), where n is the number of agents, S is the set of states, Ai is the set of actions available to agent i (and A is the joint action space A1 × A2 × ... × An), T is the transition function S × A × S → [0, 1], which defines the transition probabilities between states after taking a joint action, Ri is the reward function of the i-th agent S × A → ℜ, and γ is a factor that discounts future rewards. The goal is for each agent i to learn a mixed policy πi : S × Ai → [0, 1] that maps states to mixed strategies, i.e., probability distributions over the agent's actions, so that the agent's expected discounted (with discount factor γ) future reward is maximized. Stochastic games are a generalization of MDPs for multi-agent RL: there are many agents that select actions, and the next state and rewards depend on the joint action of all these agents. The agents can have different reward functions. Partially Observable Stochastic Games (POSGs) are the equivalent of POMDPs for multi-agent RL. In POSGs, the agents have different observations, and uncertainty both about the state they are in and about the beliefs of their interlocutors. POSGs are very hard to solve, but new algorithms continuously emerge in the literature.

In this paper we use three algorithms: Q-learning, Policy Hill-Climbing (PHC), and Win or Learn Fast Policy Hill-Climbing (PHC-WoLF). PHC is an extension of Q-learning. For all three algorithms, Q-values are updated as follows:

Q(s, a) ← (1 − α) Q(s, a) + α [ r + γ max_{a′} Q(s′, a′) ]   (1)

In Q-learning, for a given state s, the agent performs the action with the highest Q-value for that state. In addition to Q-values, PHC and PHC-WoLF also maintain the current mixed policy π(s, a).
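As a concrete illustration, the following is a minimal sketch of a tabular Q-learning agent implementing the update of equation (1). The dictionary-backed Q-table and the ε-greedy action selection shown here are illustrative assumptions, not a description of the actual experimental code.

from collections import defaultdict
import random

class QLearner:
    # Tabular Q-learning agent; Q(s, a) is stored in a dictionary keyed by (state, action).
    def __init__(self, actions, alpha=0.1, gamma=0.95, epsilon=0.3):
        self.q = defaultdict(float)
        self.actions = list(actions)
        self.alpha, self.gamma, self.epsilon = alpha, gamma, epsilon

    def choose_action(self, state, rng=random):
        # epsilon-greedy exploration: with probability epsilon pick a random action.
        if rng.random() < self.epsilon:
            return rng.choice(self.actions)
        return max(self.actions, key=lambda a: self.q[(state, a)])

    def update(self, state, action, reward, next_state):
        # Equation (1): Q(s,a) <- (1 - alpha) Q(s,a) + alpha (r + gamma max_a' Q(s',a')).
        best_next = max(self.q[(next_state, a)] for a in self.actions)
        self.q[(state, action)] = ((1 - self.alpha) * self.q[(state, action)]
                                   + self.alpha * (reward + self.gamma * best_next))

PHC and PHC-WoLF reuse this Q-value update and additionally maintain the mixed policy π(s, a), whose update is described next.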
In each step the mixed policy is updated by increasing the probability of selecting the highest valued action according to a learning rate δ (see equations (2), (3), and (4) below). π(s, a) ←π(s, a) + ∆sa (2) ∆sa = ( −δsa if a ̸= argmaxa′Q(s, a ′) Σa′̸=aδsa′ otherwise (3) δsa = min  π(s, a), δ |Ai| −1  (4) The difference between PHC and PHC-WoLF is that PHC uses a constant learning rate δ whereas 503 PHC-WoLF uses a variable learning rate (see equation (5) below). The main idea is that when the agent is “winning” the learning rate δW should be low so that the opponents have more time to adapt to the agent’s policy, which helps with convergence. On the other hand when the agent is “losing” the learning rate δLF should be high so that the agent has more time to adapt to the other agents’ policies, which also facilitates convergence. Thus PHC-WoLF uses two learning rates δW and δLF . PHC-WoLF determines whether the agent is “winning” or “losing” by comparing the current policy’s π(s, a) expected payoff with that of the average policy ˜π(s, a) over time. If the current policy’s expected payoff is greater then the agent is “winning”, otherwise it is “losing”. δ =        δW if ( Σα′π(s, α ′)Q(s, α ′) > Σα′ ˜π(s, α ′)Q(s, α ′) δLF otherwise (5) More details about Q-learning, PHC, and PHCWoLF can be found in (Sutton and Barto, 1998; Bowling and Veloso, 2002). As discussed in sections 1 and 2, single-agent RL techniques, such as Q-learning, are not suitable for multi-agent RL. Nevertheless, despite its shortcomings Q-learning has been used successfully for multi-agent RL (Claus and Boutilier, 1998). Indeed, as we see in section 5, Q-learning can converge to the optimal policy for small state spaces. However, as the state space size increases the performance of Q-learning drops (compared to PHC and PHC-WoLF). 4 Domain and Experimental Setup Our domain is a resource allocation negotiation scenario. Two agents negotiate about how to share resources. For the sake of readability from now on we will refer to apples and oranges. The two agents have different goals. Also, they have human-like constraints of imperfect information about each other; they do not know each other’s reward function or degree of rationality (during learning our agents can be irrational). Thus a Nash equilibrium (if there exists one) cannot be computed in advance. Agent 1 cares more about apples and Agent 2 cares more about oranges. Table 1 shows the points that Agents 1 and 2 earn for each apple and each orange that they have at the end of the negotiation. Agent 1 Agent 2 apple 300 200 orange 200 300 Table 1: Points earned by Agents 1 and 2 for each apple and each orange that they have at the end of the negotiation. Agent 1: offer-2-2 (I offer you 2 A and 2 O) Agent 2: offer-3-0 (I offer you 3 A and 0 O) Agent 1: offer-0-3 (I offer you 0 A and 3 O) Agent 2: offer-4-0 (I offer you 4 A and 0 O) Agent 1: accept (I accept your offer) Figure 1: Example interaction between Agents 1 and 2 (A: apples, O: oranges). We use a simplified dialogue model with two types of speech acts: offers and acceptances. The dialogue proceeds as follows: one agent makes an offer, e.g., “I give you 3 apples and 1 orange”, and the other agent may choose to accept it or make a new offer. The negotiation finishes when one of the agents accepts the other agent’s offer or time runs out. We compare Q-learning with PHC and PHCWoLF. 
For all algorithms and experiments each agent is rewarded only at the end of the dialogue based on the negotiation outcome (see Table 1). Thus the two agents have different reward functions. There is also a penalty of -10 for each agent action to ensure that dialogues are not too long. Also, to avoid long dialogues, if none of the agents accepts the other agent’s offers, the negotiation finishes after 20 pairs of exchanges between the two agents (20 offers from Agent 1 and 20 offers from Agent 2). An example interaction between the two agents is shown in Figure 1. As we can see, each agent can offer any combination of apples and oranges. So if we have X apples and Y oranges for sharing, there can be (X + 1) × (Y + 1) possible offers. For example if we have 2 apples and 2 oranges for sharing, there can be 9 possible offers: “offer0-0”, “offer-0-1”, ..., “offer-2-2”. For our experiments we vary the number of fruits to be shared and choose to keep X equal to Y . Table 2 shows our state representation, i.e., the state variables that we keep track of with all the possible values they can take, where X is the num504 Current offer: (X + 1) × (Y + 1) possible values How many times the current offer has already been rejected: (0, 1, 2, 3, or 4) Is the current offer accepted: yes, no Table 2: State variables. ber of apples and Y is the number of oranges to be shared. The third variable is always set to “no” until one of the agents accepts the other agent’s offer. Table 3 shows the state and action space sizes for different numbers of apples and oranges to be shared used in our experiments below. The number of actions includes the acceptance of an offer. Table 3 also shows the number of state-action pairs (Q-values). As we will see in section 5, even though the number of states for each agent is not large, it takes many iterations and high exploration rates for convergence due to the fact that both agents are learning at the same time and the assumption of interacting with a stationary environment no longer holds. For comparison, in (English and Heeman, 2005) the state specification for each agent included 5 binary variables resulting in 32 possible states. English and Heeman (2005) kept track of whether there was an offer on the table but not of the actual value of the offer. For our task it is essential to keep track of the offer values, which of course results in much larger state spaces. Also, in (English and Heeman, 2005) there were 5 possible actions resulting in 160 state-action pairs. Our state and action spaces are much larger and furthermore we explore the effect of different state and action space sizes on convergence. During learning the two agents interact for 5 epochs. Each epoch contains N number of episodes. We vary N from 25,000 up to 400,000 with a step of 25,000 episodes. English and Heeman (2005) trained their agents for 200 epochs, where each epoch contained 200 episodes. We also vary the exploration rate per epoch. In particular, in the experiments reported in section 5.1 the exploration rate is set as follows: 0.95 for epoch 1, 0.8 for epoch 2, 0.5 for epoch 3, 0.3 for epoch 4, and 0.1 for epoch 5. Section 5.2 reports results again with 5 epochs of training but a constant exploration rate per epoch set to 0.3. An exploration rate of 0.3 means that 30% of the time the agent will select an action randomly. Finally, we vary the learning rate. 
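The state and action space sizes of Table 3 below follow directly from the state variables of Table 2; the short sketch below reproduces them, assuming X = Y fruits and treating the offer enumeration as illustrative code rather than the actual experimental implementation.

def space_sizes(X, Y):
    # State variables (Table 2): the current offer, how many times it has been rejected
    # (0-4), and whether it has been accepted (yes/no); actions are all offers plus "accept".
    offers = (X + 1) * (Y + 1)
    n_states = offers * 5 * 2
    n_actions = offers + 1
    return n_states, n_actions, n_states * n_actions

for n in range(1, 8):
    print(n, space_sizes(n, n))   # e.g. 4 apples & 4 oranges -> (250, 26, 6500), as in Table 3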
            #States   #Actions   #State-Action Pairs
1 A & O        40         5            200
2 A & O        90        10            900
3 A & O       160        17           2720
4 A & O       250        26           6500
5 A & O       360        37          13320
6 A & O       490        50          24500
7 A & O       640        65          41600

Table 3: State space, action space, and state-action space sizes for different numbers of apples and oranges to be shared (A: apples, O: oranges).

For PHC-WoLF we set δW = 0.05 and δLF = 0.2 (see section 3). These values were chosen with experimentation and the basic idea is that the agent should learn faster when "losing" and slower when "winning". For PHC we explore two cases. In the first case which from now on will be referred to as PHC-W, we set δ to be equal to δW (also used for PHC-WoLF). In the second case which from now on will be referred to as PHC-LF, we set δ to be equal to δLF (also used for PHC-WoLF). So unlike PHC-WoLF, PHC-W and PHC-LF do not use a variable learning rate. PHC-W always learns slowly and PHC-LF always learns fast.

In all the above cases, training stops after 5 epochs. Then we test the learned policies against each other for one more epoch the size of which is the same as the size of the epochs used for training. For example, if the policies were learned for 5 epochs with each epoch containing 25,000 episodes, then for testing the two policies will interact for another 25,000 episodes. For comparison, English and Heeman (2005) had their agents interact for 5,000 dialogues during testing. To ensure that the policies do not converge by chance, we run the training and test sessions 20 times each and we report averages. Thus all results presented in section 5 are averages of 20 runs.

5 Results

Given that Agent 1 is more interested in apples and Agent 2 cares more about oranges, the maximum total utility solution would be the case where each agent offers to get all the fruits it cares about and to give its interlocutor all the fruits it does not care about, and the other agent accepts this offer. Thus, when converging to the maximum total utility solution, in the case of 4 fruits (4 apples and 4 oranges), the average reward of the two agents should be 1200 minus 10 for making or accepting an offer. For 5 fruits the average reward should be 1500 minus 10, and so forth. We call 1200 (or 1500) the convergence reward, i.e., the reward after converging to the maximum total utility solution if we do not take into account the action penalty. For example, in the case of 4 fruits, if Agent 1 starts the negotiation, after converging to the maximum total utility solution the optimal interaction should be: Agent 1 makes an offer to Agent 2, namely 0 apples and 4 oranges, and Agent 2 accepts. Thus the reward for Agent 1 is 1190, the reward for Agent 2 is 1190, and the average reward of the two agents is also 1190. Also, the convergence reward for Agent 1 is 1200 and the convergence reward for Agent 2 is also 1200.

Below, in all the graphs that we provide, we show the average distance from the convergence reward. This is to make all graphs comparable because in all cases the optimal average distance from the convergence reward of the two agents should be equal to 10 (make the optimal offer or accept the optimal offer that the other agent makes). The formulas for calculating the average distance from the convergence reward are:

AD1 = ( Σ_{j=1}^{nr} |CR1 − R1j| ) / nr   (6)

AD2 = ( Σ_{j=1}^{nr} |CR2 − R2j| ) / nr   (7)

AD = (AD1 + AD2) / 2   (8)

where CR1 is the convergence reward for Agent 1, R1j is the reward of Agent 1 for run j, CR2 is the convergence reward for Agent 2, and R2j is the reward of Agent 2 for run j.
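A small sketch of how equations (6)-(8) are computed from the per-run rewards; the reward lists below are illustrative placeholders, not actual experimental output.

def average_distance(cr1, cr2, rewards1, rewards2):
    # Equations (6)-(8): average distance from the convergence reward over nr runs.
    nr = len(rewards1)
    ad1 = sum(abs(cr1 - r) for r in rewards1) / nr
    ad2 = sum(abs(cr2 - r) for r in rewards2) / nr
    return (ad1 + ad2) / 2.0

# 4 fruits: CR1 = CR2 = 1200; if every run ends with the optimal single exchange
# (reward 1190 for each agent), the result is the best possible value, 10.
print(average_distance(1200, 1200, [1190] * 20, [1190] * 20))   # -> 10.0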
Moreover, AD1 is the average distance from the convergence reward for Agent 1, AD2 is the average distance from the convergence reward for Agent 2, and AD is the average of AD1 and AD2. All graphs of section 5 show AD values. Also, nr is the number of runs (in our case always equal to 20). Thus in the case of 4 fruits, we will have CR1=CR2=1200, and if for all runs R1j=R2j=1190, then AD=10. 5.1 Variable Exploration Rate In this section we report results with different exploration rates per training epoch (see section 4). QPHCPHCPHClearning LF W WoLF 1 A & O 10.5 10 10 10 2 A & O 10.3 10.3 10 10 3 A & O 11.7 10 10 10 4 A & O 15 11.8 11.7 11.7 5 A & O 45.4 29.5 26.5 22.9 6 A & O 60.8 33.4 46.1 33.9 7 A & O 95 56 187.8 88.6 Table 4: Average distance from convergence reward over 20 runs for 100,000 episodes per epoch and for different numbers of fruits to be shared (A: apples, O: oranges). The best possible value is 10. Table 4 shows the average distance from the convergence reward over 20 runs for 100,000 episodes per epoch, for different numbers of fruits, and for all four methods (Q-learning, PHC-LF, PHCW, and PHC-WoLF). It is clear that as the state space becomes larger 100,000 training episodes per epoch are not enough for convergence. Also, for 1, 2, and 3 fruits all algorithms converge and perform comparably. As the number of fruits increases, Q-learning starts performing worse than the multi-agent RL algorithms. For 7 fruits PHCW appears to perform worse than Q-learning but this is because, as we can see in Figure 5, in this case more than 400,000 episodes per epoch are required for convergence. Thus after only 100,000 episodes per epoch all policies still behave somewhat randomly. Figures 2, 3, 4, and 5 show the average distance from the convergence reward as a function of the number of episodes per epoch during training, for 4, 5, 6, and 7 fruits respectively. For 4 fruits it takes about 125,000 episodes per epoch and for 5 fruits it takes about 225,000 episodes per epoch for the policies to converge. This number rises to approximately 350,000 for 6 fruits and becomes even higher for 7 fruits. Q-learning consistently performs worse than the rest of the algorithms. The differences between PHC-LF, PHC-W, and PHCWoLF are insignificant, which is a bit surprising given that Bowling and Veloso (2002) showed that PHC-WoLF performed better than PHC in a series of benchmark tasks. In Figures 2 and 3, PHC-LF appears to be reaching convergence slightly faster than PHC-W and PHC-WoLF but this is not statistically significant. 506 Figure 2: 4 fruits and variable exploration rate: Average distance from convergence reward during testing (20 runs). The best possible value is 10. Figure 3: 5 fruits and variable exploration rate: Average distance from convergence reward during testing (20 runs). The best possible value is 10. 5.2 Constant Exploration Rate In this section we report results with a constant exploration rate for all training epochs (see section 4). Figures 6 and 7 show the average distance from the convergence reward as a function of the number of episodes per epoch during training, for 4 and 5 fruits respectively. Clearly having a constant exploration rate in all epochs is problematic. For 4 fruits, after 225,000 episodes per epoch there is still no convergence. For comparison, with a variable exploration rate it took about 125,000 episodes per epoch for the policies to converge. Likewise for 5 fruits. After 400,000 episodes per epoch there is still no convergence. 
For comparison, with a variable exploration rate it took about 225,000 episodes per epoch for convergence. Figure 4: 6 fruits and variable exploration rate: Average distance from convergence reward during testing (20 runs). The best possible value is 10. Figure 5: 7 fruits and variable exploration rate: Average distance from convergence reward during testing (20 runs). The best possible value is 10. The above results show that, unlike single-agent RL where having a constant exploration rate is perfectly acceptable, here a constant exploration rate does not work. 6 Conclusion and Future Work We used single-agent RL and multi-agent RL for learning dialogue policies in a resource allocation negotiation scenario. Two agents interacted with each other and both learned at the same time. The advantage of this approach is that it does not require SUs to train against or corpora to learn from. We compared a traditional single-agent RL algorithm (Q-learning) against two multi-agent RL algorithms (PHC and PHC-WoLF) varying the scenario complexity (state space size), the number 507 Figure 6: 4 fruits and constant exploration rate: Average distance from convergence reward during testing (20 runs). The best possible value is 10. Figure 7: 5 fruits and constant exploration rate: Average distance from convergence reward during testing (20 runs). The best possible value is 10. of training episodes, and the learning and exploration rates. Our results showed that Q-learning is not suitable for concurrent learning given that it is designed for learning against a stationary environment. Q-learning failed to converge in all cases, except for very small state space sizes. On the other hand, both PHC and PHC-WoLF always converged (or in the case of 7 fruits they needed more training episodes) and performed similarly. We also showed that in concurrent learning very high gradually decreasing exploration rates are required for convergence. We conclude that multiagent RL of dialogue policies is a promising alternative to using single-agent RL and SUs or learning directly from corpora. The focus of this paper is on comparing singleagent RL and multi-agent RL for concurrent learning, and studying the implications for convergence and exploration/learning rates. Our next step is testing with human users. We are particularly interested in users whose behavior changes during the interaction and continuous testing against expert repeat users, which has never been done before. Another interesting question is whether corpora or SUs may still be required for designing the state and action spaces and the reward functions of the interlocutors, bootstrapping the policies, and ensuring that information about the behavior of human users is encoded in the resulting learned policies. Gaˇsi´c et al. (2013) showed that it is possible to learn “full” dialogue policies just via interaction with human users (without any bootstrapping using corpora or SUs). Similarly, concurrent learning could be used in an on-line fashion via live interaction with human users. Or alternatively concurrent learning could be used offline to bootstrap the policies and then these policies could be improved via live interaction with human users (again using concurrent learning to address possible changes in user behavior). These are open research questions for future work. 
Furthermore, we intend to apply multi-agent RL to more complex negotiation domains, e.g., experiment with more than two types of resources (not just apples and oranges) and more types of actions (not just offers and acceptances). We would also like to compare policies learned with multi-agent RL techniques with policies learned with SUs or from corpora both in simulation and with human users. Finally, we aim to experiment with different feature-based representations of the state and action spaces. Currently all possible deal combinations are listed as possible actions and as elements of the state, which can quickly lead to very large state and action spaces as the application becomes more complex (in our case as the number of fruits increases). However, abstraction is not trivial because the agents have no guarantee that the value of a deal is a simple function of the value of its parts, and values may differ for different agents. Acknowledgments Claire Nelson sadly died in May 2013. We continued and completed this work after her passing away. She is greatly missed. This work was funded by the NSF grant #1117313. 508 References Pieter Abbeel and Andrew Y. Ng. 2004. Apprenticeship learning via inverse reinforcement learning. In Proc. of the International Conference on Machine Learning, Bannf, Alberta, Canada. Hua Ai and Diane Litman. 2008. Assessing dialog system user simulation evaluation measures using human judges. In Proc. of the Annual Meeting of the Association for Computational Linguistics, Columbus, Ohio, USA. Michael Bowling and Manuela Veloso. 2002. Multiagent learning using a variable learning rate. Artificial Intelligence, 136(2):215–250. L. Busoniu, R. Babuska, and B. De Schutter. 2008. A comprehensive survey of multiagent reinforcement learning. IEEE Transactions on Systems, Man, and Cybernetics, Part C: Applications and Reviews, 38(2):156–172. Senthilkumar Chandramohan, Matthieu Geist, Fabrice Lef`evre, and Olivier Pietquin. 2012. Co-adaptation in spoken dialogue systems. In Proc. of the International Workshop on Spoken Dialogue Systems, Paris, France. Min Chi, Kurt VanLehn, Diane Litman, and Pamela Jordan. 2011. Empirically evaluating the application of reinforcement learning to the induction of effective and adaptive pedagogical strategies. User Modeling and User-Adapted Interaction, 21(12):137–180. Caroline Claus and Craig Boutilier. 1998. The dynamics of reinforcement learning in cooperative multiagent systems. In Proc. of the National Conference on Artificial Intelligence. Heriberto Cuay´ahuitl and Nina Dethlefs. 2012. Hierarchical multiagent reinforcement learning for coordinating verbal and nonverbal actions in robots. In Proc. of the ECAI Workshop on Machine Learning for Interactive Systems, Montpellier, France. Lucie Daubigney, Matthieu Geist, Senthilkumar Chandramohan, and Olivier Pietquin. 2012. A comprehensive reinforcement learning framework for dialogue management optimization. IEEE Journal of Selected Topics in Signal Processing, 6(8):891–902. Yaakov Engel, Shie Mannor, and Ron Meir. 2005. Reinforcement learning with Gaussian processes. In Proc. of the International Conference on Machine Learning, Bonn, Germany. Michael S. English and Peter A. Heeman. 2005. Learning mixed initiative dialogue strategies by using reinforcement learning on both conversants. In Proc. of the Conference on Empirical Methods in Natural Language Processing, Vancouver, Canada. M. Gaˇsi´c, Filip Jurˇc´ıˇcek, Blaise Thomson, Kai Yu, and Steve Young. 2011. 
On-line policy optimisation of spoken dialogue systems via live interaction with human subjects. In Proc. of the IEEE Automatic Speech Recognition and Understanding Workshop, Big Island, Hawaii, USA. Milica Gaˇsi´c, Matthew Henderson, Blaise Thomson, Pirros Tsiakoulis, and Steve Young. 2012. Policy optimisation of POMDP-based dialogue systems without state space compression. In Proc. of the IEEE Workshop on Spoken Language Technology, Miami, Florida, USA. M. Gaˇsi´c, C. Breslin, M. Henderson, D. Kim, M. Szummer, B. Thomson, P. Tsiakoulis, and S. Young. 2013. On-line policy optimisation of Bayesian spoken dialogue systems via human interaction. In Proc. of the International Conference on Acoustics, Speech and Signal Processing, Vancouver, Canada. Kallirroi Georgila and David Traum. 2011. Reinforcement learning of argumentation dialogue policies in negotiation. In Proc. of Interspeech, Florence, Italy. Kallirroi Georgila, James Henderson, and Oliver Lemon. 2006. User simulation for spoken dialogue systems: Learning and evaluation. In Proc. of Interspeech, Pittsburgh, Pennsylvania, USA. Kallirroi Georgila, Maria K. Wolters, and Johanna D. Moore. 2010. Learning dialogue strategies from older and younger simulated users. In Proc. of the Annual SIGdial Meeting on Discourse and Dialogue, Tokyo, Japan. Kallirroi Georgila. 2013. Reinforcement learning of two-issue negotiation dialogue policies. In Proc. of the Annual SIGdial Meeting on Discourse and Dialogue, Metz, France. Peter A. Heeman. 2009. Representing the reinforcement learning state in a negotiation dialogue. In Proc. of the IEEE Automatic Speech Recognition and Understanding Workshop, Merano, Italy. James Henderson, Oliver Lemon, and Kallirroi Georgila. 2008. Hybrid reinforcement/supervised learning of dialogue policies from fixed datasets. Computational Linguistics, 34(4):487–511. Junling Hu and Michael P. Wellman. 1998. Multiagent reinforcement learning: Theoretical framework and an algorithm. In Proc. of the International Conference on Machine Learning, Madison, Wisconsin, USA. Filip Jurˇc´ıˇcek, Blaise Thomson, and Steve Young. 2012. Reinforcement learning for parameter estimation in statistical spoken dialogue systems. Computer Speech and Language, 26(3):168–192. Michail G. Lagoudakis and Ronald Parr. 2003. Leastsquares policy iteration. Journal of Machine Learning Research, 4:1107–1149. 509 Oliver Lemon, Kallirroi Georgila, and James Henderson. 2006. Evaluating effectiveness and portability of reinforcement learned dialogue strategies with real users: The TALK TownInfo evaluation. In Proc. of the IEEE Workshop on Spoken Language Technology, Palm Beach, Aruba. Lihong Li, Jason D. Williams, and Suhrid Balakrishnan. 2009. Reinforcement learning for dialog management using least-squares policy iteration and fast feature selection. In Proc. of Interspeech, Brighton, United Kingdom. Michael L. Littman. 1994. Markov games as a framework for multi-agent reinforcement learning. In Proc. of the International Conference on Machine Learning, New Brunswick, New Jersey, USA. Yi Ma. 2013. User goal change model for spoken dialog state tracking. In Proc. of the NAACL-HLT Student Research Workshop, Atlanta, Georgia, USA. Teruhisa Misu, Komei Sugiura, Kiyonori Ohtake, Chiori Hori, Hideki Kashioka, Hisashi Kawai, and Satoshi Nakamura. 2010. Modeling spoken decision making dialogue and optimization of its dialogue strategy. In Proc. of the Annual SIGdial Meeting on Discourse and Dialogue, Tokyo, Japan. 
Teruhisa Misu, Kallirroi Georgila, Anton Leuski, and David Traum. 2012. Reinforcement learning of question-answering dialogue policies for virtual museum guides. In Proc. of the Annual SIGdial Meeting on Discourse and Dialogue, Seoul, South Korea. Elnaz Nouri, Kallirroi Georgila, and David Traum. 2012. A cultural decision-making model for negotiation based on inverse reinforcement learning. In Proc. of the Cognitive Science Conference, Sapporo, Japan. P. Paruchuri, N. Chakraborty, R. Zivan, K. Sycara, M. Dudik, and G. Gordon. 2009. POMDP based negotiation modeling. In IJCAI Workshop on Modeling Intercultural Collaboration and Negotiation, Pasadena, California, USA. Olivier Pietquin and Helen Hastie. 2013. A survey on metrics for the evaluation of user simulations. Knowledge Engineering Review, 28(1):59–73. Carl Edward Rasmussen and Christopher K. I. Williams. 2006. Gaussian Processes for Machine Learning. MIT Press. Verena Rieser, Simon Keizer, Xingkun Liu, and Oliver Lemon. 2011. Adaptive information presentation for spoken dialogue systems: Evaluation with human subjects. In Proc. of the European Workshop on Natural Language Generation, Nancy, France. Jost Schatzmann, Karl Weilhammer, Matt Stuttle, and Steve Young. 2006. A survey of statistical user simulation techniques for reinforcement-learning of dialogue management strategies. Knowledge Engineering Review, 21(2):97–126. Satinder Singh, Diane Litman, Michael Kearns, and Marilyn Walker. 2002. Optimizing dialogue management with reinforcement learning: Experiments with the NJFun system. Journal of Artificial Intelligence Research, 16:105–133. Richard S. Sutton and Andrew G. Barto. 1998. Reinforcement Learning: An Introduction. MIT Press. Joel R. Tetreault and Diane J. Litman. 2008. A reinforcement learning approach to evaluating state representations in spoken dialogue systems. Speech Communication, 50(8-9):683–696. Blaise Thomson and Steve Young. 2010. Bayesian update of dialogue state: A POMDP framework for spoken dialogue systems. Computer Speech and Language, 24(4):562–588. Jason D. Williams and Steve Young. 2007. Scaling POMDPs for spoken dialog management. IEEE Transactions on Audio, Speech, and Language Processing, 15(7):2116–2129. 510
2014
47
Proceedings of the 52nd Annual Meeting of the Association for Computational Linguistics, pages 511–521, Baltimore, Maryland, USA, June 23-25 2014. c⃝2014 Association for Computational Linguistics A Linear-Time Bottom-Up Discourse Parser with Constraints and Post-Editing Vanessa Wei Feng Department of Computer Science University of Toronto Toronto, ON, Canada [email protected] Graeme Hirst Department of Computer Science University of Toronto Toronto, ON, Canada [email protected] Abstract Text-level discourse parsing remains a challenge. The current state-of-the-art overall accuracy in relation assignment is 55.73%, achieved by Joty et al. (2013). However, their model has a high order of time complexity, and thus cannot be applied in practice. In this work, we develop a much faster model whose time complexity is linear in the number of sentences. Our model adopts a greedy bottom-up approach, with two linear-chain CRFs applied in cascade as local classifiers. To enhance the accuracy of the pipeline, we add additional constraints in the Viterbi decoding of the first CRF. In addition to efficiency, our parser also significantly outperforms the state of the art. Moreover, our novel approach of post-editing, which modifies a fully-built tree by considering information from constituents on upper levels, can further improve the accuracy. 1 Introduction Discourse parsing is the task of identifying the presence and the type of the discourse relations between discourse units. While research in discourse parsing can be partitioned into several directions according to different theories and frameworks, Rhetorical Structure Theory (RST) (Mann and Thompson, 1988) is probably the most ambitious one, because it aims to identify not only the discourse relations in a small local context, but also the hierarchical tree structure for the full text: from the relations relating the smallest discourse units (called elementary discourse units, EDUs), to the ones connecting paragraphs. For example, Figure 1 shows a text fragment consisting of two sentences with four EDUs in total (e1-e4). Its discourse tree representation is shown below the text, following the notation convention of RST: the two EDUs e1 and e2 are related by a mononuclear relation CONSEQUENCE, where e2 is the more salient span (called nucleus, and e1 is called satellite); e3 and e4 are related by another mononuclear relation CIRCUMSTANCE, with e4 as the nucleus; the two spans e1:2 and e3:4 are further related by a multi-nuclear relation SEQUENCE, with both spans as the nucleus. Conventionally, there are two major sub-tasks related to text-level discourse parsing: (1) EDU segmentation: to segment the raw text into EDUs, and (2) tree-building: to build a discourse tree from EDUs, representing the discourse relations in the text. Since the first sub-task is considered relatively easy, with the state-of-art accuracy at above 90% (Joty et al., 2012), the recent research focus is on the second sub-task, and often uses manual EDU segmentation. The current state-of-the-art overall accuracy of the tree-building sub-task, evaluated on the RST Discourse Treebank (RST-DT, to be introduced in Section 8), is 55.73% by Joty et al. (2013). However, as an optimal discourse parser, Joty et al.’s model is highly inefficient in practice, with respect to both their DCRF-based local classifiers, and their CKY-like bottom-up parsing algorithm. 
DCRF (Dynamic Conditional Random Fields) is a generalization of linear-chain CRFs, in which each time slice contains a set of state variables and edges (Sutton et al., 2007). CKY parsing is a bottom-up parsing algorithm which searches all possible parsing paths by dynamic programming. Therefore, despite its superior performance, their model is infeasible in most realistic situations. The main objective of this work is to develop a more efficient discourse parser, with similar or even better performance with respect to Joty et al.’s optimal parser, but able to produce parsing results in real time. Our contribution is three-fold. First, with a 511 [On Aug. 1, the state tore up its controls,]e1 [and food prices leaped]e2 [Without buffer stocks,]e3 [inflation exploded.]e4 wsj 1146 e1 e2 consequence e1:4 e3 e4 circumstance sequence e1:2 e3:4 Figure 1: An example text fragment composed of two sentences and four EDUs, with its RST discourse tree representation shown below. greedy bottom-up strategy, we develop a discourse parser with a time complexity linear in the total number of sentences in the document. As a result of successfully avoiding the expensive nongreedy parsing algorithms, our discourse parser is very efficient in practice. Second, by using two linear-chain CRFs to label a sequence of discourse constituents, we can incorporate contextual information in a more natural way, compared to using traditional discriminative classifiers, such as SVMs. Specifically, in the Viterbi decoding of the first CRF, we include additional constraints elicited from common sense, to make more effective local decisions. Third, after a discourse (sub)tree is fully built from bottom up, we perform a novel post-editing process by considering information from the constituents on upper levels. We show that this post-editing can further improve the overall parsing performance. 2 Related work 2.1 HILDA discourse parser The HILDA discourse parser by Hernault et al. (2010) is the first attempt at RST-style text-level discourse parsing. It adopts a pipeline framework, and greedily builds the discourse tree from the bottom up. In particular, starting from EDUs, at each step of the tree-building, a binary SVM classifier is first applied to determine which pair of adjacent discourse constituents should be merged to form a larger span, and another multi-class SVM classifier is then applied to assign the type of discourse relation that holds between the chosen pair. The strength of HILDA’s greedy tree-building strategy is its efficiency in practice. Also, the employment of SVM classifiers allows the incorporation of rich features for better data representation (Feng and Hirst, 2012). However, HILDA’s approach also has obvious weakness: the greedy algorithm may lead to poor performance due to local optima, and more importantly, the SVM classifiers are not well-suited for solving structural problems due to the difficulty of taking context into account. 2.2 Joty et al.’s joint model Joty et al. (2013) approach the problem of textlevel discourse parsing using a model trained by Conditional Random Fields (CRF). Their model has two distinct features. First, they decomposed the problem of textlevel discourse parsing into two stages: intrasentential parsing to produce a discourse tree for each sentence, followed by multi-sentential parsing to combine the sentence-level discourse trees and produce the text-level discourse tree. Specifically, they employed two separate models for intra- and multi-sentential parsing. 
Their choice of two-stage parsing is well motivated for two reasons: (1) it has been shown that sentence boundaries correlate very well with discourse boundaries, and (2) the scalability issue of their CRFbased models can be overcome by this decomposition. Second, they jointly modeled the structure and the relation for a given pair of discourse units. For example, Figure 2 shows their intra-sentential model, in which they use the bottom layer to represent discourse units; the middle layer of binary nodes to predict the connection of adjacent discourse units; and the top layer of multi-class nodes to predict the type of the relation between two units. Their model assigns a probability to each possible constituent, and a CKY-like parsing algorithm finds the globally optimal discourse tree, given the computed probabilities. The strength of Joty et al.’s model is their joint modeling of the structure and the relation, such that information from each aspect can interact with the other. However, their model has a major defect in its inefficiency, or even infeasibility, for application in practice. The inefficiency lies in both their DCRF-based joint model, on which inference is usually slow, and their CKY-like parsing algorithm, whose issue is more prominent. Due to the O(n3) time complexity, where n is the number 512 R2 S2 U2 U1 R3 S3 U3 Rj Sj Uj Rt-1 St-1 Ut-1 Relation sequence Structure sequence Unit sequence at level i Figure 2: Joty et al. (2013)’s intra-sentential Condition Random Fields. of input discourse units, for large documents, the parsing simply takes too long1. 3 Overall work flow Figure 3 demonstrates the overall work flow of our discourse parser. The general idea is that, similar to Joty et al. (2013), we perform a sentence-level parsing for each sentence first, followed by a textlevel parsing to generate a full discourse tree for the whole document. However, in addition to efficiency (to be shown in Section 6), our discourse parser has a distinct feature, which is the postediting component (to be introduced in Section 5), as outlined in dashes. Our discourse parser works as follows. A document D is first segmented into a list of sentences. Each sentence Si, after being segmented into EDUs (not shown in the figure), goes through an intra-sentential bottom-up tree-building model Mintra, to form a sentence-level discourse tree TSi, with the EDUs as leaf nodes. After that, we apply the intra-sentential post-editing model Pintra to modify the generated tree TSi to T p Si , by considering upper-level information. We then combine all sentence-level discourse tree T p Si ’s using our multi-sentential bottom-up tree-building model Mmulti to generate the textlevel discourse tree TD. Similar to sentence-level parsing, we also post-edit TD using Pmulti to produce the final discourse tree T p D. 1The largest document in the RST-DT contains over 180 sentences, i.e., n > 180 for their multi-sentential CKY parsing. Intuitively, suppose the average time to compute the probability of each constituent is 0.01 second, then in total, the CKY-like parsing takes over 16 hours. It is possible to optimize Joty et al.’s CKY-like parsing by replacing their CRFbased computation for upper-level constituents with some local computation based on the probabilities of lower-level constituents. However, such optimization is beyond the scope of this paper. 
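The work flow of Figure 3 can be summarized by the following skeleton. The four model objects and their build_tree / post_edit methods are illustrative stand-ins for the CRF-based components described in Sections 4 and 5, not the actual interface of our implementation.

def parse_document(sentence_edus, m_intra, p_intra, m_multi, p_multi):
    # sentence_edus: a list of sentences, each given as its list of EDUs.
    sentence_trees = []
    for edus in sentence_edus:
        tree = m_intra.build_tree(edus)                  # intra-sentential bottom-up tree-building
        sentence_trees.append(p_intra.post_edit(tree))   # intra-sentential post-editing
    doc_tree = m_multi.build_tree(sentence_trees)        # combine sentence-level trees
    return p_multi.post_edit(doc_tree)                   # multi-sentential post-editing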
4 Bottom-up tree-building For both intra- and multi-sentential parsing, our bottom-up tree-building process adopts a similar greedy pipeline framework like the HILDA discourse parser (discussed in Section 2.1), to guarantee efficiency for large documents. In particular, starting from the constituents on the bottom level (EDUs for intra-sentential parsing and sentence-level discourse trees for multi-sentential parsing), at each step of the tree-building, we greedily merge a pair of adjacent discourse constituents such that the merged constituent has the highest probability as predicted by our structure model. The relation model is then applied to assign the relation to the new constituent. 4.1 Linear-chain CRFs as Local models Now we describe the local models we use to make decisions for a given pair of adjacent discourse constituents in the bottom-up tree-building. There are two dimensions for our local models: (1) scope of the model: intra- or multi-sentential, and (2) purpose of the model: for determining structures or relations. So we have four local models, Mstruct intra , Mrel intra, Mstruct multi , and Mrel multi. While our bottom-up tree-building shares the greedy framework with HILDA, unlike HILDA, our local models are implemented using CRFs. In this way, we are able to take into account the sequential information from contextual discourse constituents, which cannot be naturally represented in HILDA with SVMs as local classifiers. Therefore, our model incorporates the strengths of both HILDA and Joty et al.’s model, i.e., the efficiency of a greedy parsing algorithm, and the ability to incorporate sequential information with CRFs. As shown by Feng and Hirst (2012), for a pair of discourse constituents of interest, the sequential information from contextual constituents is crucial for determining structures. Therefore, it is well motivated to use Conditional Random Fields (CRFs) (Lafferty et al., 2001), which is a discriminative probabilistic graphical model, to make predictions for a sequence of constituents surrounding the pair of interest. In this sense, our local models appear similar to Joty et al.’s non-greedy parsing models. However, the major distinction between our models and theirs is that we do not jointly model the structure and the relation; rather, we use two linear513 D S1 Si Sn ... ... Mintra Mintra Mintra Mintra Pintra ... ... ... Pintra Pintra Pmulti Mmulti ... ... 1 S T iS T n S T p Sn T p Si T p S T 1 D T p D T Figure 3: The work flow of our proposed discourse parser. In the figure, Mintra and Mmulti stand for the intra- and multi-sentential bottom-up tree-building models, and Pintra and Pmulti stand for the intra- and multi-sentential post-editing models. chain CRFs to model the structure and the relation separately. Although joint modeling has shown to be effective in various NLP and computer vision applications (Sutton et al., 2007; Yang et al., 2009; Wojek and Schiele, 2008), our choice of using two separate models is for the following reasons: First, it is not entirely appropriate to model the structure and the relation at the same time. For example, with respect to Figure 2, it is unclear how the relation node R j is represented for a training instance whose structure node Sj = 0, i.e., the units Uj−1 and Uj are disjoint. Assume a special relation NO-REL is assigned for R j. 
Then, in the tree-building process, we will have to deal with the situations where the joint model yields conflicting predictions: it is possible that the model predicts Sj = 1 and R j = NO-REL, or vice versa, and we will have to decide which node to trust (and thus in some sense, the structure and the relation is no longer jointly modeled). Secondly, as a joint model, it is mandatory to use a dynamic CRF, for which exact inference is usually intractable or slow. In contrast, for linearchain CRFs, efficient algorithms and implementations for exact inference exist. 4.2 Structure models 4.2.1 Intra-sentential structure model Figure 4a shows our intra-sentential structure model Mstruct intra in the form of a linear-chain CRF. Similar to Joty et al.’s intra-sentential model, the first layer of the chain is composed of discourse constituents Uj’s, and the second layer is composed of binary nodes Sj’s to indicate the probability of merging adjacent discourse constituents. S2 U2 U1 S3 U3 Sj Uj St Ut Structure sequence All units in sentence at level i (a) Intra-sentential structure model Mstruct intra . Sj-1 Uj-1 Uj-2 Sj Uj Structure sequence Adjacent units at level i Uj+1 Sj+1 Sj-1 Uj-3 Sj+2 Uj+2 C1 C2 C3 (b) Multi-sentential structure model Mstruct multi . C1, C2, and C3 denote the three chains for predicting Uj and Uj+1. Figure 4: Local structure models. At each step in the bottom-up tree-building process, we generate a single sequence E, consisting of U1,U2,...,Uj,...,Ut, which are all the current discourse constituents in the sentence that need to be processed. For instance, initially, we have the sequence E1 = {e1,e2,...,em}, which are the EDUs of the sentence; after merging e1 and e2 on the second level, we have E2 = {e1:2,e3,...,em}; after merging e4 and e5 on the third level, we have E3 = {e1:2,e3,e4:5,...,em}, and so on. Because the structure model is the first component in our pipeline of local models, its accuracy is crucial. Therefore, to improve its accuracy, we enforce additional commonsense constraints in its Viterbi decoding. In particular, we disallow 11 transitions between adjacent labels (a discourse unit can be merged with at most one adjacent unit), and we disallow all-zero sequences (at least one 514 pair must be merged). Since the computation of Ei does not depend on a particular pair of constituents, we can use the same sequence Ei to compute structural probabilities for all adjacent constituents. In contrast, Joty et al.’s computation of intra-sentential sequences depends on the particular pair of constituents: the sequence is composed of the pair in question, with other EDUs in the sentence, even if those EDUs have already been merged. Thus, different CRF chains have to be formed for different pairs of constituents. In addition to efficiency, our use of a single CRF chain for all constituents can better capture the sequential dependencies among context, by taking into account the information from partially built discourse constituents, rather than bottom-level EDUs only. 4.2.2 Multi-sentential structure model For multi-sentential parsing, where the smallest discourse units are single sentences, as argued by Joty et al. (2013), it is not feasible to use a long chain to represent all constituents, due to the fact that it takes O(TM2) time to perform the forwardbackward exact inference on a chain with T units and an output vocabulary size of M, thus the overall complexity for all possible sequences in their model is O(M2n3)2. 
Instead, we choose to take a sliding-window approach to form CRF chains for a particular pair of constituents, as shown in Figure 4b. For example, suppose we wish to compute the structural probability for the pair Uj−1 and Uj, we form three chains, each of which contains two contextual constituents: C1 = {Uj−3,Uj−2,Uj−1,Uj}, C2 = {Uj−2,Uj−1,Uj,Uj+1}, and C3 = {Uj−1,Uj,Uj+1,Uj+2}. We then find the chain Ct,1 ≤t ≤3, with the highest joint probability over the entire sequence, and assign its marginal probability P(St j = 1) to P(Sj = 1). Similar to Mstruct intra , for Mstruct multi , we also include additional constraints in the Viterbi decoding, by disallowing transitions between two ones, and disallowing the sequence to be all zeros if it contains all the remaining constituents in the document. 4.3 Relation models 4.3.1 Intra-sentential relation model The intra-sentential relation model Mrel intra, shown in Figure 5a, works in a similar way to Mstruct intra , as 2The time complexity will be reduced to O(M2n2), if we use the same chain for all constituents as in our Mstruct intra . described in Section 4.2.1. The linear-chain CRF contains a first layer of all discourse constituents Uj’s in the sentence on level i, and a second layer of relation nodes R j’s to represent the relation between a pair of discourse constituents. However, unlike the structure model, adjacent relation nodes do not share discourse constituents on the first layer. Rather, each relation node R j attempts to model the relation of one single constituent Uj, by taking Uj’s left and right subtrees Uj,L and Uj,R as its first-layer nodes; if Uj is a single EDU, then the first-layer node of R j is simply Uj, and R j is a special relation symbol LEAF3. Since we know, a priori, that the constituents in the chains are either leaf nodes or the ones that have been merged by our structure model, we never need to worry about the NO-REL issue as outlined in Section 4.1. In the bottom-up tree-building process, after merging a pair of adjacent constituents using Mstruct intra into a new constituent, say Uj, we form a chain consisting of all current constituents in the sentence to decide the relation label for Uj, i.e., the R j node in the chain. In fact, by performing inference on this chain, we produce predictions not only for R j, but also for all other R nodes in the chain, which correspond to all other constituents in the sentence. Since those non-leaf constituents are already labeled in previous steps in the tree-building, we can now re-assign their relations if the model predicts differently in this step. Therefore, this re-labeling procedure can compensate for the loss of accuracy caused by our greedy bottom-up strategy to some extent. 4.3.2 Multi-sentential relation model Figure 5b shows our multi-sentential relation model. Like Mrel intra, the first layer consists of adjacent discourse units, and the relation nodes on the second layer model the relation of each constituent separately. Similar to Mstruct multi introduced in Section 4.2.2, Mrel multi also takes a sliding-window approach to predict labels for constituents in a local context. For a constituent Uj to be predicted, we form three chains, and use the chain with the highest joint probability to assign or re-assign relations to constituents in that chain. 3These leaf constituents are represented using a special feature vector is leaf = True; thus the CRF never labels them with relations other than LEAF. 
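The constrained decoding used by the structure models (Sections 4.2.1 and 4.2.2) can be sketched as follows for the intra-sentential case: given per-position scores for the binary merge labels (e.g., log-potentials from the CRF), we decode the best sequence that never merges a unit with both of its neighbours and that contains at least one merge. This is a simplified stand-alone sketch, not the actual CRF implementation used in our experiments.

def constrained_decode(scores):
    # scores: list of (score_for_0, score_for_1) per adjacent pair, e.g. log-potentials.
    # Constraints: no two adjacent 1s (a unit merges with at most one neighbour) and
    # the sequence may not be all zeros (at least one pair must be merged).
    # dp maps (previous label, any 1 seen so far) -> (best score, best path).
    dp = {(0, False): (scores[0][0], [0]), (1, True): (scores[0][1], [1])}
    for s0, s1 in scores[1:]:
        new_dp = {}
        for (prev, seen), (score, path) in dp.items():
            for label, label_score in ((0, s0), (1, s1)):
                if prev == 1 and label == 1:
                    continue                              # disallow 1 -> 1 transitions
                key = (label, seen or label == 1)
                candidate = (score + label_score, path + [label])
                if key not in new_dp or candidate[0] > new_dp[key][0]:
                    new_dp[key] = candidate
        dp = new_dp
    feasible = [v for (label, seen), v in dp.items() if seen]
    return max(feasible, key=lambda x: x[0])[1] if feasible else None

print(constrained_decode([(0.9, 0.1), (0.2, 0.8), (0.3, 0.7)]))   # -> [0, 1, 0]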
515 Relation sequence All units in sentence at level i R1 U1,R U1,L R2 U2 Rj Uj,R Uj,L Rt Ut,R Ut,L (a) Intra-sentential relation model Mrel intra. Relation sequence Adjacent units at level i R1 Uj-2,R Uj-2,L Rj-1 Uj-1 Rj Uj,R Uj,L Rj+1 Uj+1,R Uj+1,L Rj+2 Uj+2 C1 C2 C3 (b) Multi-sentential relation model Mrel multi. C1, C2, and C3 denote the three sliding windows for predictingUj,L andUj,R. Figure 5: Local relation models. 5 Post-editing After an intra- or multi-sentential discourse tree is fully built, we perform a post-editing to consider possible modifications to the current tree, by considering useful information from the discourse constituents on upper levels, which is unavailable in the bottom-up tree-building process. The motivation for post-editing is that, some particular discourse relations, such as TEXTUALORGANIZATION, tend to occur on the top levels of the discourse tree; thus, information such as the depth of the discourse constituent can be quite indicative. However, the exact depth of a discourse constituent is usually unknown in the bottom-up tree-building process; therefore, it might be beneficial to modify the tree by including top-down information after the tree is fully built. The process of post-editing is shown in Algorithm 1. For each input discourse tree T, which is already fully built by bottom-up tree-building models, we do the following: Lines 3 – 9: Identify the lowest level of T on which the constituents can be modified according to the post-editing structure component, Pstruct. To do so, we maintain a list L to store the discourse constituents that need to be examined. Initially, L consists of all the bottom-level constituents in T. At each step of the loop, we consider merging the pair of adjacent units in L with the highest probability predicted by Pstruct. If the predicted pair is not merged in the original tree T, then a possible modification is located; otherwise, we merge the pair, and proceed to the next iteration. Lines 10 – 12: If modifications have been proposed in the previous step, we build a new tree Algorithm 1 Post-editing algorithm. Input: A fully built discourse tree T. 1: if |T| = 1 then 2: return T ▷Do nothing if it is a single EDU. 3: L ←[U1,U2,...,Ut] ▷The bottom-level constituents in T. 4: while |L| > 2 do 5: i ←PREDICTMERGING(L,Pstruct) 6: p ←PARENT(L[i],L[i+1],T) 7: if p = NULL then 8: break 9: Replace L[i] and L[i+1] with p 10: if |L| = 2 then 11: L ←[U1,U2,...,Ut] 12: T p ←BUILDTREE(L,Pstruct,Prel,T) Output: T p T p using Pstruct as the structure model, and Prel as the relation model, from the constituents on which modifications are proposed. Otherwise, T p is built from the bottom-level constituents of T. The upper-level information, such as the depth of a discourse constituent, is derived from the initial tree T. 5.1 Local models The local models, P{struct|rel} {intra|multi}, for post-editing is almost identical to their counterparts of the bottom-up tree-building, except that the linearchain CRFs in post-editing includes additional features to represent information from constituents on higher levels (to be introduced in Section 7). 6 Linear time complexity Here we analyze the time complexity of each component in our discourse parser, to quantitatively demonstrate the time efficiency of our model. The following analysis is focused on the bottom-up tree-building process, but a similar analysis can be carried out for the post-editing process. 
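Before turning to the per-component analysis, Algorithm 1 above can be rendered in Python as follows. The tree and model interfaces (bottom_constituents, parent, predict_merge, and the build_tree callable) are illustrative assumptions rather than the actual API of our parser.

def post_edit(tree, p_struct, p_rel, build_tree):
    # Python rendering of Algorithm 1; build_tree stands for the bottom-up tree-building
    # of Section 4, run with the post-editing models and depth features from `tree`.
    if tree.size == 1:
        return tree                                  # a single EDU: nothing to do
    units = list(tree.bottom_constituents())
    while len(units) > 2:
        i = p_struct.predict_merge(units)            # pair with highest merge probability
        parent = tree.parent(units[i], units[i + 1])
        if parent is None:
            break                                    # proposed merge disagrees with tree T
        units[i:i + 2] = [parent]                    # otherwise follow the original tree
    if len(units) == 2:                              # no disagreement was found
        units = list(tree.bottom_constituents())
    return build_tree(units, p_struct, p_rel, tree)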
Since the number of operations in the post-editing process is roughly the same (1.5 times in the worst case) as in the bottom-up tree-building, post-editing shares the same complexity as the tree-building. 6.1 Intra-sentential parsing Suppose the input document is segmented into n sentences, and each sentence Sk contains mk EDUs. For each sentence Sk with mk EDUs, the 516 overall time complexity to perform intra-sentential parsing is O(m2 k). The reason is the following. On level i of the bottom-up tree-building, we generate a single chain to represent the structure or relation for all the mk −i constituents that are currently in the sentence. The time complexity for performing forward-backward inference on the single chain is O((mk −i)×M2) = O(mk −i), where the constant M is the size of the output vocabulary. Starting from the EDUs on the bottom level, we need to perform inference for one chain on each level during the bottom-up tree-building, and thus the total time complexity is Σmk i=1O(mk −i) = O(m2 k). The total time to generate sentence-level discourse trees for n sentences is Σn k=1O(m2 k). It is fairly safe to assume that each mk is a constant, in the sense that mk is independent of the total number of sentences in the document. Therefore, the total time complexity Σn k=1O(m2 k) ≤n × O(max1≤j≤n(m2 j)) = n×O(1) = O(n), i.e., linear in the total number of sentences. 6.2 Multi-sentential parsing For multi-sentential models, Mstruct multi and Mrel multi, as shown in Figures 4b and 5b, for a pair of constituents of interest, we generate multiple chains to predict the structure or the relation. By including a constant number k of discourse units in each chain, and considering a constant number l of such chains for computing each adjacent pair of discourse constituents (k = 4 for Mstruct multi and k = 3 for Mrel multi; l = 3), we have an overall time complexity of O(n). The reason is that it takes l ×O(kM2) = O(1) time, where l,k,M are all constants, to perform exact inference for a given pair of adjacent constituents, and we need to perform such computation for all n−1 pairs of adjacent sentences on the first level of the treebuilding. Adopting a greedy approach, on an arbitrary level during the tree-building, once we decide to merge a certain pair of constituents, say Uj and Uj+1, we only need to recompute a small number of chains, i.e., the chains which originally include Uj or Uj+1, and inference on each chain takes O(1). Therefore, the total time complexity is (n−1)×O(1)+(n−1)×O(1) = O(n), where the first term in the summation is the complexity of computing all chains on the bottom level, and the second term is the complexity of computing the constant number of chains on higher levels. We have thus showed that the time complexity is linear in n, which is the number of sentences in the document. In fact, under the assumption that the number of EDUs in each sentence is independent of n, it can be shown that the time complexity is also linear in the total number of EDUs4. 7 Features In our local models, to encode two adjacent units, Uj and Uj+1, within a CRF chain, we use the following 10 sets of features, some of which are modified from Joty et al.’s model. Organization features: Whether Uj (or Uj+1) is the first (or last) constituent in the sentence (for intra-sentential models) or in the document (for multi-sentential models); whether Uj (or Uj+1) is a bottom-level constituent. Textual structure features: Whether Uj contains more sentences (or paragraphs) than Uj+1. 
N-gram features: The beginning (or end) lexical n-grams in each unit; the beginning (or end) POS n-grams in each unit, where n ∈{1,2,3}. Dominance features: The PoS tags of the head node and the attachment node; the lexical heads of the head node and the attachment node; the dominance relationship between the two units. Contextual features: The feature vector of the previous and the next constituent in the chain. Substructure features: The root node of the left and right discourse subtrees of each unit. Syntactic features: whether each unit corresponds to a single syntactic subtree, and if so, the top PoS tag of the subtree; the distance of each unit to their lowest common ancestor in the syntax tree (intra-sentential only). Entity transition features: The type and the number of entity transitions across the two units. We adopt Barzilay and Lapata (2008)’s entitybased local coherence model to represent a document by an entity grid, and extract local transitions among entities in continuous discourse constituents. We use bigram and trigram transitions with syntactic roles attached to each entity. 4We implicitly made an assumption that the parsing time is dominated by the time to perform inference on CRF chains. However, for complex features, the time required for feature computation might be dominant. Nevertheless, a careful caching strategy can accelerate feature computation, since a large number of multi-sentential chains overlap with each other. 517 Cue phrase features: Whether a cue phrase occurs in the first or last EDU of each unit. The cue phrase list is based on the connectives collected by Knott and Dale (1994) Post-editing features: The depth of each unit in the initial tree. 8 Experiments For pre-processing, we use the Stanford CoreNLP (Klein and Manning, 2003; de Marneffe et al., 2006; Recasens et al., 2013) to syntactically parse the texts and extract coreference relations, and we use Penn2Malt5 to lexicalize syntactic trees to extract dominance features. For local models, our structure models are trained using MALLET (McCallum, 2002) to include constraints over transitions between adjacent labels, and our relation models are trained using CRFSuite (Okazaki, 2007), which is a fast implementation of linear-chain CRFs. The data that we use to develop and evaluate our discourse parser is the RST Discourse Treebank (RST-DT) (Carlson et al., 2001), which is a large corpus annotated in the framework of RST. The RST-DT consists of 385 documents (347 for training and 38 for testing) from the Wall Street Journal. Following previous work on the RST-DT (Hernault et al., 2010; Feng and Hirst, 2012; Joty et al., 2012; Joty et al., 2013), we use 18 coarsegrained relation classes, and with nuclearity attached, we have a total set of 41 distinct relations. Non-binary relations are converted into a cascade of right-branching binary relations. 9 Results and Discussion 9.1 Parsing accuracy We compare four different models using manual EDU segmentation. In Table 1, the jCRF model in the first row is the optimal CRF model proposed by Joty et al. (2013). gSVMFH in the second row is our implementation of HILDA’s greedy parsing algorithm using Feng and Hirst (2012)’s enhanced feature set. The third model, gCRF, represents our greedy CRF-based discourse parser, and the last row, gCRFPE, represents our parser with the postediting component included. In order to conduct a direct comparison with Joty et al.’s model, we use the same set of eval5http://stp.lingfil.uu.se/˜nivre/ research/Penn2Malt.html. 
Model Span Nuc Relation Acc MAFS jCRF 82.5 68.4 55.7 N/A gSVMFH 82.8 67.1 52.0 27.4/23.3 gCRF 84.9∗ 69.9∗ 57.2∗ 35.3/31.3 gCRFPE 85.7∗† 71.0∗† 58.2∗† 36.2/32.3 Human 88.7 77.7 65.8 N/A ∗: significantly better than gSVMFH (p < .01) †: significantly better than gCRF (p < .01) Table 1: Performance of different models using gold-standard EDU segmentation, evaluated using the constituent accuracy (%) for span, nuclearity, and relation. For relation, we also report the macro-averaged F1-score (MAFS) for correctly retrieved constituents (before the slash) and for all constituents (after the slash). Statistical significance is verified using Wilcoxon’s signed-rank test. uation metrics, i.e., the unlabeled and labeled precision, recall, and F-score6 as defined by Marcu (2000). For evaluating relations, since there is a skewed distribution of different relation types in the corpus, we also include the macro-averaged F1-score (MAFS)7 as another metric, to emphasize the performance of infrequent relation types. We report the MAFS separately for the correctly retrieved constituents (i.e., the span boundary is correct) and all constituents in the reference tree. As demonstrated by Table 1, our greedy CRF models perform significantly better than the other two models. Since we do not have the actual output of Joty et al.’s model, we are unable to conduct significance testing between our models and theirs. But in terms of overall accuracy, our gCRF model outperforms their model by 1.5%. Moreover, with post-editing enabled, gCRFPE significantly (p < .01) outperforms our initial model gCRF by another 1% in relation assignment, and this overall accuracy of 58.2% is close to 90% of human performance. With respect to the macroaveraged F1-scores, adding the post-editing component also obtains about 1% improvement. However, the overall MAFS is still at the lower 6For manual segmentation, precision, recall, and F-score are the same. 7MAFS is the F1-score averaged among all relation classes by equally weighting each class. Therefore, we cannot conduct significance test between different MAFS. 518 Avg Min Max # of EDUs 61.74 4 304 # of Sentences 26.11 2 187 # of EDUs per sentence 2.36 1 10 Table 2: Characteristics of the 38 documents in the test data. end of 30% for all constituents. Our error analysis shows that, for two relation classes, TOPICCHANGE and TEXTUAL-ORGANIZATION, our model fails to retrieve any instance, and for TOPIC-COMMENT and EVALUATION, our model scores a class-wise F1 score lower than 5%. These four relation classes, apart from their infrequency in the corpus, are more abstractly defined, and thus are particularly challenging. 9.2 Parsing efficiency We further illustrate the efficiency of our parser by demonstrating the time consumption of different models. First, as shown in Table 2, the average number of sentences in a document is 26.11, which is already too large for optimal parsing models, e.g., the CKY-like parsing algorithm in jCRF, let alone the fact that the largest document contains several hundred of EDUs and sentences. Therefore, it should be seen that non-optimal models are required in most cases. In Table 3, we report the parsing time8 for the last three models, since we do not know the time of jCRF. Note that the parsing time excludes the time cost for any necessary pre-processing. 
As can be seen, our gCRF model is considerably faster than gSVMFH, because, on one hand, feature computation is expensive in gSVMFH, since gSVMFH utilizes a rich set of features; on the other hand, in gCRF, we are able to accelerate decoding by multi-threading MALLET (we use four threads). Even for the largest document with 187 sentences, gCRF is able to produce the final tree after about 40 seconds, while jCRF would take over 16 hours assuming each DCRF decoding takes only 0.01 second. Although enabling post-editing doubles the time consumption, the overall time is still acceptable in practice, and the loss of efficiency can be compensated by the improvement in accuracy. 8Tested on a Linux system with four duo-core 3.0GHz processors and 16G memory. Model Parsing Time (seconds) Avg Min Max gSVMFH 11.19 0.42 124.86 gCRF 5.52 0.05 40.57 gCRFPE 10.71 0.12 84.72 Table 3: The parsing time (in seconds) for the 38 documents in the test set of RST-DT. Time cost of any pre-processing is excluded from the analysis. 10 Conclusions In this paper, we presented an efficient text-level discourse parser with time complexity linear in the total number of sentences in the document. Our approach was to adopt a greedy bottomup tree-building, with two linear-chain CRFs as local probabilistic models, and enforce reasonable constraints in the first CRF’s Viterbi decoding. While significantly outperforming the stateof-the-art model by Joty et al. (2013), our parser is much faster in practice. In addition, we propose a novel idea of post-editing, which modifies a fully-built discourse tree by considering information from upper-level constituents. We show that, although doubling the time consumption, postediting can further boost the parsing performance to close to 90% of human performance. In future work, we wish to further explore the idea of post-editing, since currently we use only the depth of the subtrees as upper-level information. Moreover, we wish to study whether we can incorporate constraints into the relation models, as we do to the structure models. For example, it might be helpful to train the relation models using additional criteria, such as Generalized Expectation (Mann and McCallum, 2008), to better take into account some prior knowledge about the relations. Last but not least, as reflected by the low MAFS in our experiments, some particularly difficult relation types might need specifically designed features for better recognition. Acknowledgments We thank Professor Gerald Penn and the reviewers for their valuable advice and comments. This work was financially supported by the Natural Sciences and Engineering Research Council of Canada and by the University of Toronto. 519 References Jason Baldridge and Alex Lascarides. 2005. Probabilistic head-driven parsing for discourse structure. In Proceedings of the Ninth Conference on Computational Natural Language Learning (CoNLL-2005), pages 96–103, Ann Arbor, Michigan, June. Association for Computational Linguistics. Regina Barzilay and Mirella Lapata. 2008. Modeling local coherence: an entity-based approach. Computational Linguistics, 34(1):1–34. Lynn Carlson, Daniel Marcu, and Mary Ellen Okurowski. 2001. Building a discourse-tagged corpus in the framework of Rhetorical Structure Theory. In Proceedings of Second SIGDial Workshop on Discourse and Dialogue (SIGDial 2001), pages 1–10. Marie-Catherine de Marneffe, Bill MacCartney, and Christopher D. Manning. 2006. Generating typed dependency parses from phrase structure parses. 
In Proceedings of the 5th International Conference on Language Resources and Evaluation (LREC 2006). Vanessa Wei Feng and Graeme Hirst. 2012. Text-level discourse parsing with rich linguistic features. In Proceedings of the 50th Annual Meeting of the Association for Computational Linguistics: Human Language Technologies (ACL 2012), pages 60–68, Jeju, Korea. Hugo Hernault, Helmut Prendinger, David A. duVerle, and Mitsuru Ishizuka. 2010. HILDA: A discourse parser using support vector machine classification. Dialogue and Discourse, 1(3):1–33. Shafiq Joty, Giuseppe Carenini, and Raymond T. Ng. 2012. A novel discriminative framework for sentence-level discourse analysis. In Proceedings of the 2012 Joint Conference on Empirical Methods in Natural Language Processing and Computational Natural Language Learning, EMNLPCoNLL 2012, pages 904–915. Shafiq Joty, Giuseppe Carenini, Raymond Ng, and Yashar Mehdad. 2013. Combining intra- and multisentential rhetorical parsing for document-level discourse analysis. In Proceedings of the 51st Annual Meeting of the Association for Computational Linguistics (ACL 2013), pages 486–496, Sofia, Bulgaria, August. Dan Klein and Christopher D. Manning. 2003. Accurate unlexicalized parsing. In Proceedings of the 41st Annual Meeting on Association for Computational Linguistics (ACL 2003), ACL 2003, pages 423–430, Stroudsburg, PA, USA. Association for Computational Linguistics. Alistair Knott and Robert Dale. 1994. Using linguistic phenomena to motivate a set of coherence relations. Discourse Processes, 18(1):35–64. John D. Lafferty, Andrew McCallum, and Fernando C. N. Pereira. 2001. Conditional random fields: Probabilistic models for segmenting and labeling sequence data. In Proceedings of the Eighteenth International Conference on Machine Learning, ICML 2001, pages 282–289, San Francisco, CA, USA. Morgan Kaufmann Publishers Inc. Gideon S. Mann and Andrew McCallum. 2008. Generalized Expectation Criteria for semi-supervised learning of Conditional Random Fields. In Proceedings of the 46th Annual Meeting of the Association for Computational Linguistics: Human Language Technologies (ACL 2008), pages 870–878, Columbus, Ohio, June. Association for Computational Linguistics. William Mann and Sandra Thompson. 1988. Rhetorical structure theory: Toward a functional theory of text organization. Text, 8(3):243–281. Daniel Marcu. 2000. The Theory and Practice of Discourse Parsing and Summarization. The MIT Press. Andrew Kachites McCallum. 2002. MALLET: A machine learning for language toolkit. http://mallet.cs.umass.edu. Philippe Muller, Stergos Afantenos, Pascal Denis, and Nicholas Asher. 2012. Constrained decoding for text-level discourse parsing. In Proceedings of COLING 2012, pages 1883–1900, Mumbai, India, December. The COLING 2012 Organizing Committee. Naoaki Okazaki. 2007. CRFsuite: a fast implementation of conditional random fields (CRFs). http://www.chokkan.org/software/crfsuite/. Marta Recasens, Marie-Catherine de Marneffe, and Christopher Potts. 2013. The life and death of discourse entities: Identifying singleton mentions. In Proceedings of the 2013 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, pages 627–633, Atlanta, Georgia, June. Association for Computational Linguistics. Rajen Subba and Barbara Di Eugenio. 2009. An effective discourse parser that uses rich linguistic information. 
In Proceedings of Human Language Technologies: The 2009 Annual Conference of the North American Chapter of the Association for Computational Linguistics, pages 566–574, Boulder, Colorado, June. Association for Computational Linguistics. Charles Sutton, Andrew McCallum, and Khashayar Rohanimanesh. 2007. Dynamic conditional random fields: Factorized probabilistic models for labeling and segmenting sequence data. The Journal of Machine Learning Research, 8:693–723, May. Christian Wojek and Bernt Schiele. 2008. A dynamic conditional random field model for joint labeling of object and scene classes. In European Conference 520 on Computer Vision (ECCV 2008), pages 733–747, Marseille, France. Dong Yang, Paul Dixon, Yi-Cheng Pan, Tasuku Oonishi, Masanobu Nakamura, and Sadaoki Furui. 2009. Combining a two-step conditional random field model and a joint source channel model for machine transliteration. In Proceedings of the 2009 Named Entities Workshop: Shared Task on Transliteration (NEWS 2009), pages 72–75, Suntec, Singapore, August. Association for Computational Linguistics. 521
Proceedings of the 52nd Annual Meeting of the Association for Computational Linguistics, pages 522–530, Baltimore, Maryland, USA, June 23-25 2014. c⃝2014 Association for Computational Linguistics Negation Focus Identification with Contextual Discourse Information Bowei Zou Qiaoming Zhu Guodong Zhou* Natural Language Processing Lab, School of Computer Science and Technology Soochow University, Suzhou, 215006, China [email protected], {qmzhu, gdzhou}@suda.edu.cn Abstract Negative expressions are common in natural language text and play a critical role in information extraction. However, the performances of current systems are far from satisfaction, largely due to its focus on intrasentence information and its failure to consider inter-sentence information. In this paper, we propose a graph model to enrich intrasentence features with inter-sentence features from both lexical and topic perspectives. Evaluation on the *SEM 2012 shared task corpus indicates the usefulness of contextual discourse information in negation focus identification and justifies the effectiveness of our graph model in capturing such global information. * 1 Introduction Negation is a grammatical category which comprises various kinds of devices to reverse the truth value of a proposition (Morante and Sporleder, 2012). For example, sentence (1) could be interpreted as it is not the case that he stopped. (1) He didn't stop. Negation expressions are common in natural language text. According to the statistics on biomedical literature genre (Vincze et al., 2008), 19.44% of sentences contain negative expressions. The percentage rises to 22.5% on Conan Doyle stories (Morante and Daelemans, 2012). It is interesting that a negative sentence may have both negative and positive meanings. For example, sentence (2) could be interpreted as He stopped, but not until he got to Jackson Hole with positive part he stopped and negative part until he got to Jackson Hole. Moreover, a nega * Corresponding author tive expression normally interacts with some special part in the sentence, referred as negation focus in linguistics. Formally, negation focus is defined as the special part in the sentence, which is most prominently or explicitly negated by a negative expression. Hereafter, we denote negative expression in boldface and negation focus underlined. (2) He didn't stop until he got to Jackson Hole. While people tend to employ stress or intonation in speech to emphasize negation focus and thus it is easy to identify negation focus in speech corpora, such stress or intonation information often misses in the dominating text corpora. This poses serious challenges on negation focus identification. Current studies (e.g., Blanco and Moldovan, 2011; Rosenberg and Bergler, 2012) sort to various kinds of intra-sentence information, such as lexical features, syntactic features, semantic role features and so on, ignoring less-obvious inter-sentence information. This largely defers the performance of negation focus identification and its wide applications, since such contextual discourse information plays a critical role on negation focus identification. Take following sentence as an example. (3) Helen didn’t allow her youngest son to play the violin. 
In sentence (3), there are several scenarios on identification of negation focus, with regard to negation expression n’t, given different contexts: Scenario A: Given sentence But her husband did as next sentence, the negation focus should be Helen, yielding interpretation the person who didn’t allow the youngest son to play the violin is Helen but not her husband. Scenario B: Given sentence She thought that he didn’t have the artistic talent like her eldest son as next sentence, the negation focus should be the youngest son, yielding interpretation Helen 522 thought that her eldest son had the talent to play the violin, but the youngest son didn’t. Scenario C: Given sentence Because of her neighbors’ protests as previous sentence, the negation focus should be play the violin, yielding interpretation Helen didn’t allow her youngest son to play the violin, but it didn’t show whether he was allowed to do other things. In this paper, to well accommodate such contextual discourse information in negation focus identification, we propose a graph model to enrich normal intra-sentence features with various kinds of inter-sentence features from both lexical and topic perspectives. Besides, the standard PageRank algorithm is employed to optimize the graph model. Evaluation on the *SEM 2012 shared task corpus (Morante and Blanco, 2012) justifies our approach over several strong baselines. The rest of this paper is organized as follows. Section 2 overviews the related work. Section 3 presents several strong baselines on negation focus identification with only intra-sentence features. Section 4 introduces our topic-driven word-based graph model with contextual discourse information. Section 5 reports the experimental results and analysis. Finally, we conclude our work in Section 6. 2 Related Work Earlier studies of negation were almost in linguistics (e.g. Horn, 1989; van der Wouden, 1997), and there were only a few in natural language processing with focus on negation recognition in the biomedical domain. For example, Chapman et al. (2001) developed a rule-based negation recognition system, NegEx, to determine whether a finding mentioned within narrative medical reports is present or absent. Since the release of the BioScope corpus (Vincze et al., 2008), a freely available resource consisting of medical and biological texts, machine learning approaches begin to dominate the research on negation recognition (e.g. Morante et al., 2008; Li et al., 2010). Generally, negation recognition includes three subtasks: cue detection, which detects and identifies possible negative expressions in a sentence, scope resolution, which determines the grammatical scope in a sentence affected by a negative expression, and focus identification, which identifies the constituent in a sentence most prominently or explicitly negated by a negative expression. This paper concentrates on the third subtask, negation focus identification. Due to the increasing demand on deep understanding of natural language text, negation recognition has been drawing more and more attention in recent years, with a series of shared tasks and workshops, however, with focus on cue detection and scope resolution, such as the BioNLP 2009 shared task for negative event detection (Kim et al., 2009) and the ACL 2010 Workshop for scope resolution of negation and speculation (Morante and Sporleder, 2010), followed by a special issue of Computational Linguistics (Morante and Sporleder, 2012) for modality and negation. 
The research on negation focus identification was pioneered by Blanco and Moldovan (2011), who investigated the negation phenomenon in semantic relations and proposed a supervised learning approach to identify the focus of a negation expression. However, although Morante and Blanco (2012) proposed negation focus identification as one of the *SEM’2012 shared tasks, only one team (Rosenberg and Bergler, 2012) 1 participated in this task. They identified negation focus using three kinds of heuristics and achieved 58.40 in F1-measure. This indicates great expectation in negation focus identification. The key problem in current research on negation focus identification is its focus on intrasentence information and large ignorance of inter-sentence information, which plays a critical role in the success of negation focus identification. For example, Ding (2011) made a qualitative analysis on implied negations in conversation and attempted to determine whether a sentence was negated by context information, from the linguistic perspective. Moreover, a negation focus is always associated with authors’ intention in article. This indicates the great challenges in negation focus identification. 3 Baselines Negation focus identification in *SEM’2012 shared tasks is restricted to verbal negations annotated with MNEG in PropBank, with only the constituent belonging to a semantic role selected as negation focus. Normally, a verbal negation expression (not or n’t) is grammatically associated with its corresponding verb (e.g., He didn’t stop). For details on annotation guidelines and 1 In *SEM’2013, the shared task is changed with focus on "Semantic Textual Similarity". 523 examples for verbal negations, please refer to Blanco and Moldovan (2011). For comparison, we choose the state-of-the-art system described in Blanco and Moldovan (2011), which employed various kinds of syntactic features and semantic role features, as one of our baselines. Since this system adopted C4.5 for training, we name it as BaselineC4.5. In order to provide a stronger baseline, besides those features adopted in BaselineC4.5, we added more refined intra-sentence features and adopted ranking Support Vector Machine (SVM) model for training. We name it as BaselineSVM. Following is a list of features adopted in the two baselines, for both BaselineC4.5 and BaselineSVM,  Basic features: first token and its part-ofspeech (POS) tag of the focus candidate; the number of tokens in the focus candidate; relative position of the focus candidate among all the roles present in the sentence; negated verb and its POS tag of the negative expression;  Syntactic features: the sequence of words from the beginning of the governing VP to the negated verb; the sequence of POS tags from the beginning of the governing VP to the negated verb; whether the governing VP contains a CC; whether the governing VP contains a RB.  Semantic features: the syntactic label of semantic role A1; whether A1 contains POS tag DT, JJ, PRP, CD, RB, VB, and WP, as defined in Blanco and Moldovan (2011); whether A1 contains token any, anybody, anymore, anyone, anything, anytime, anywhere, certain, enough, full, many, much, other, some, specifics, too, and until, as defined in Blanco and Moldovan (2011); the syntactic label of the first semantic role in the sentence; the semantic label of the last semantic role in the sentence; the thematic role for A0/A1/A2/A3/A4 of the negated predicate. 
and for BaselineSVM only,  Basic features: the named entity and its type in the focus candidate; relative position of the focus candidate to the negative expression (before or after).  Syntactic features: the dependency path and its depth from the focus candidate to the negative expression; the constituent path and its depth from the focus candidate to the negative expression; 4 Exploring Contextual Discourse Information for Negation Focus Identification While some of negation focuses could be identified by only intra-sentence information, others must be identified by contextual discourse information. Section 1 illustrates the necessity of such contextual discourse information in negation focus identification by giving three scenarios of different discourse contexts for negation expression n’t in sentence (3). For better illustration of the importance of contextual discourse information, Table 1 shows the statistics of intra- and inter-sentence information necessary for manual negation focus identification with 100 instances randomly extracted from the held-out dataset of *SEM'2012 shared task corpus. It shows that only 17 instances can be identified by intra-sentence information. It is surprising that inter-sentence information is indispensable in 77 instances, among which 42 instances need only inter-sentence information and 35 instances need both intra- and intersentence information. This indicates the great importance of contextual discourse information on negation focus identification. It is also interesting to note 6 instances are hard to determine even given both intra- and inter-sentence information. Info Number #Intra-Sentence Only 17 #Inter-Sentence Only 42 #Both 35 #Hard to Identify 6 (Note: "Hard to Identify" means that it is hard for a human being to identify the negation focus even given both intra- and inter-sentence information.) Table 1. Statistics of intra- and inter-sentence information on negation focus identification. Statistically, we find that negation focus is always related with what authors repeatedly states in discourse context. This explains why contextual discourse information could help identify negation focus. While inter-sentence information provides the global characteristics from the discourse context perspective and intra-sentence information provides the local features from lexical, syntactic and semantic perspectives, both have their own contributions on negation focus identification. In this paper, we first propose a graph model to gauge the importance of contextual discourse 524 information. Then, we incorporate both intra- and inter-sentence features into a machine learning-based framework for negation focus identification. 4.1 Graph Model Graph models have been proven successful in many NLP applications, especially in representing the link relationships between words or sentences (Wan and Yang, 2008; Li et al., 2009). Generally, such models could construct a graph to compute the relevance between document theme and words. In this paper, we propose a graph model to represent the contextual discourse information from both lexical and topic perspectives. In particular, a word-based graph model is proposed to represent the explicit relatedness among words in a discourse from the lexical perspective, while a topic-driven word-based model is proposed to enrich the implicit relatedness between words, by adding one more layer to the word-based graph model in representing the global topic distribution of the whole dataset. 
Besides, the PageRank algorithm (Page et al., 1998) is adopted to optimize the graph model.
Word-based Graph Model: A word-based graph model can be defined as Gword(W, E), where W = {wi} is the set of words in one document and E = {eij | wi, wj ∈ W} is the set of directed edges between these words, as shown in Figure 1.
Figure 1. Word-based graph model.
In the word-based graph model, word node wi is weighted to represent the correlation of the word with the authors' intention. Since such correlation is more a matter of semantics than of grammar, only content words are considered in our graph model, ignoring function words (e.g., the, to, ...). In particular, the content words are limited to those with part-of-speech tags JJ, NN, PRP, and VB. For simplicity, the weight of word node wi is initialized to 1. In addition, directed edge eij is weighted to represent the relatedness between word wi and word wj in a document with transition probability P(j|i) from i to j, which is normalized as follows:
P(j|i) = Sim(wi, wj) / Σ_k Sim(wi, wk)    (1)
where k ranges over the word nodes in the discourse, and Sim(wi, wj) denotes the similarity between wi and wj. In this paper, two kinds of information are used to calculate the similarity between words. One is word co-occurrence (if word wi and word wj occur in the same sentence or in adjacent sentences, Sim(wi, wj) is increased by 1), and the other is WordNet-based similarity (Miller, 1995). Please note that Sim(wi, wi) = 0 to avoid self-transition, and that Sim(wi, wj) and Sim(wj, wi) may not be equal. Finally, the weights of the word nodes are calculated using the PageRank algorithm as follows:
Score^(0)(wi) = 1
Score^(n+1)(wi) = d · Σ_{j≠i} Score^(n)(wj) × P(j|i) + (1 − d)    (2)
where d is the damping factor as in the PageRank algorithm.
Topic-driven Word-based Graph Model: While the above word-based graph model can well capture the relatedness between content words, it can only partially model the focus of a negation expression, since negation focus is more directly related to topic than to content. In order to reduce this gap, we propose a topic-driven word-based model by adding one more layer that refines the word-based graph model with the global topic distribution, as shown in Figure 2.
Figure 2. Topic-driven word-based graph model.
Here, the topics are extracted from all the documents in the *SEM 2012 shared task using the LDA Gibbs Sampling algorithm (Griffiths, 2002). In the topic-driven word-based graph model, the first layer denotes the relatedness among content words as captured in the above word-based graph model, and the second layer denotes the topic distribution, with the dashed lines between these two layers indicating the word-topic model returned by LDA.
Formally, the topic-driven word-based two-layer graph is defined as Gtopic(W, T, Ew, Et), where W = {wi} is the set of words in one document and T = {ti} is the set of topics in all documents; Ew = {ewij | wi, wj ∈ W} is the set of directed edges between words, and Et = {etij | wi ∈ W, tj ∈ T} is the set of undirected edges between words and topics. The transition probability Pw(j|i) of ewij is defined in the same way as P(j|i) of the word-based graph model. Besides, the transition probability Pt(i, m) of etij in the word-topic model is defined as:
Pt(i, m) = Rel(wi, tm) / Σ_k Rel(wi, tk)    (3)
where Rel(wi, tm) is the weight of word wi in topic tm calculated by the LDA Gibbs Sampling algorithm.
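Before adding the topic layer's influence on the transitions, the word-level machinery of Eqs. (1)–(2) can be sketched in a few lines of Python. The sketch uses only the co-occurrence similarity; the toy document (assumed already POS-filtered), the damping factor d = 0.85, and the fixed iteration count are illustrative assumptions rather than the paper's settings, and the score update mirrors Eq. (2) exactly as written.

```python
# Minimal sketch of the word-based graph model (Eqs. 1-2) with the
# co-occurrence similarity only. Sentences, damping factor and iteration
# count are illustrative choices, not the paper's exact settings.
from collections import defaultdict

def cooccurrence_sim(sentences):
    """Sim(wi, wj): +1 whenever wi and wj occur in the same or adjacent sentences."""
    sim = defaultdict(float)
    for k, sent in enumerate(sentences):
        window = set(sent) | set(sentences[k + 1] if k + 1 < len(sentences) else [])
        for wi in sent:
            for wj in window:
                if wi != wj:                      # Sim(wi, wi) = 0
                    sim[(wi, wj)] += 1.0
    return sim

def transition_probs(words, sim):
    """P(j|i) = Sim(wi, wj) / sum_k Sim(wi, wk)   (Eq. 1)"""
    probs = {}
    for wi in words:
        z = sum(sim[(wi, wk)] for wk in words)
        for wj in words:
            probs[(wi, wj)] = sim[(wi, wj)] / z if z > 0 else 0.0
    return probs

def pagerank(words, probs, d=0.85, iters=30):
    """Score^(n+1)(wi) = d * sum_{j != i} Score^(n)(wj) * P(j|i) + (1 - d)   (Eq. 2)"""
    score = {w: 1.0 for w in words}               # Score^(0)(wi) = 1
    for _ in range(iters):
        score = {wi: d * sum(score[wj] * probs[(wi, wj)]   # probs[(wi, wj)] = P(j|i)
                             for wj in words if wj != wi) + (1 - d)
                 for wi in words}
    return score

# Usage on a toy "document" of content words (purely illustrative):
sentences = [["helen", "allow", "son", "play", "violin"],
             ["neighbors", "protests"],
             ["son", "violin", "talent"]]
words = sorted({w for s in sentences for w in s})
scores = pagerank(words, transition_probs(words, cooccurrence_sim(sentences)))
print(sorted(scores.items(), key=lambda kv: -kv[1])[:5])
```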
On this basis, the transition probability Pw(j|i) of ewij is updated as follows:
P'w(j|i) = θ · Pw(j|i) + (1 − θ) · [Pt(i, m) × Pt(j, m)] / [Σ_k Pt(i, k) × Pt(j, k)]    (4)
where k ranges over all topics linked to both word wi and word wj, and θ ∈ [0,1] is the coefficient controlling the relative contributions of the lexical information in the current document and the topic information in all documents. Finally, the weights of the word nodes are calculated using the PageRank algorithm as follows:
Score^(0)(wi) = 1
Score^(n+1)(wi) = d · Σ_{j≠i} Score^(n)(wj) × P'w(j|i) + (1 − d)    (5)
where d is the damping factor as in the PageRank algorithm.
4.2 Negation Focus Identification via Graph Model
Given the graph models and the PageRank optimization algorithm discussed above, four kinds of contextual discourse information are extracted as inter-sentence features (Table 2). In particular, the total weight and the max weight of words in the focus candidate are calculated as follows:
Weight_total = Σ_i Score^(final)(wi)    (6)
Weight_max = max_i Score^(final)(wi)    (7)
where i ranges over the content words in the focus candidate. These two kinds of weights capture different aspects of the focus candidate: the former accumulates the contributions of all content words, which is more beneficial for a long focus candidate, while the latter is biased towards a focus candidate that contains some critical word in the discourse.
No.  Feature
1    Total weight of words in the focus candidate using the co-occurrence similarity.
2    Max weight of words in the focus candidate using the co-occurrence similarity.
3    Total weight of words in the focus candidate using the WordNet similarity.
4    Max weight of words in the focus candidate using the WordNet similarity.
Table 2. Inter-sentence features extracted from the graph model.
To evaluate the contribution of contextual discourse information to negation focus identification directly, we incorporate the four inter-sentence features from the topic-driven word-based graph model into a negation focus identifier.
5 Experimentation
In this section, we describe the experimental settings and systematically evaluate our negation focus identification approach, with a focus on exploring the effectiveness of contextual discourse information.
5.1 Experimental Settings
Dataset. In all our experiments, we employ the *SEM'2012 shared task corpus (Morante and Blanco, 2012; http://www.clips.ua.ac.be/sem2012-st-neg/). As a freely downloadable resource, the *SEM shared task corpus is annotated on top of PropBank, which uses the WSJ section of the Penn TreeBank. In particular, negation focus annotation on this corpus is restricted to verbal negations (with corresponding mark MNEG in PropBank). On 50% of the corpus annotated by two annotators, the inter-annotator agreement was 0.72 (Blanco and Moldovan, 2011). Along with negation focus annotation, this corpus also contains other annotations, such as POS tags, named entities, chunks, constituent trees, dependency trees, and semantic roles. In total, this corpus provides 3,544 instances of negation focus annotations. For fair comparison, we adopt the same partition as the *SEM'2012 shared task in all our experiments, i.e., 2,302 instances for training, 530 for development, and 712 for testing. Although for each instance the corpus only provides the current sentence and the previous and next sentences as its context, we resort to the Penn TreeBank3 to obtain the corresponding document as its discourse context.
Evaluation Metrics. As in the *SEM'2012 shared task, the evaluation is made using precision, recall, and F1-score.
Especially, a true positive (TP) requires an exact match for the negation focus, a false positive (FP) occurs when a system predicts a non-existing negation focus, and a false negative (FN) occurs when the gold annotations specify a negation focus but the system makes no prediction. For each instance, the predicted focus is considered correct if it is a complete match with a gold annotation. Beside, to show whether an improvement is significant, we conducted significance testing using z-test, as described in Blanco and Moldovan (2011). Toolkits In our experiments, we report not only the default performance with gold additional annotated features provided by the *SEM'2012 shared task corpus and the Penn TreeBank, but also the performance with various kinds of features extracted automatically, using following toolkits:  Syntactic Parser: We employ the Stanford Parser4 (Klein and Manning, 2003; De Marneffe et al., 2006) for tokenization, constituent and dependency parsing.  Named Entity Recognizer: We employ the Stanford NER5 (Finkel et al., 2005) to obtain named entities. 3 http://www.cis.upenn.edu/~treebank/ 4 http://nlp.stanford.edu/software/lex-parser.shtml 5 http://nlp.stanford.edu/ner/  Semantic Role Labeler: We employ the semantic role labeler, as described in Punyakanok et al (2008).  Topic Modeler: For estimating transition probability Pt(i,m), we employ GibbsLDA++6, an LDA model using Gibbs Sampling technique for parameter estimation and inference.  Classifier: We employ SVMLight 7 with default parameters as our classifier. 5.2 Experimental Results With Only Intra-sentence Information Table 3 shows the performance of the two baselines, the decision tree-based classifier as in Blanco and Moldovan (2011) and our ranking SVM-based classifier. It shows that our ranking SVM-based baseline slightly improves the F1measure by 2.52% over the decision tree-based baseline, largely due to the incorporation of more refined features. System P(%) R(%) F1 BaselineC4.5 66.73 49.93 57.12 BaselineSVM 60.22 59.07 59.64 Table 3. Performance of baselines with only intra-sentence information. Error analysis of the ranking SVM-based baseline on development data shows that 72% of them are caused by the ignorance of intersentence information. For example, among the 42 instances listed in the category of “#InterSentence Only” in Table 1, only 7 instances can be identified correctly by the ranking SVMbased classifier. With about 4 focus candidates in one sentence on average, this percentage is even lower than random. With Only Inter-sentence Information For exploring the usefulness of pure contextual discourse information in negation focus identification, we only employ inter-sentence features into ranking SVM-based classifier. First of all, we estimate two parameters for our topic-driven word-based graph model: topic number T for topic model and coefficient θ between Pw(j|i) and Pt (i,m) in Formula 4. Given the LDA Gibbs Sampling model with parameters α = 50/T and β = 0.1, we vary T from 20 to 100 with an interval of 10 to find the opti 6 http://gibbslda.sourceforge.net/ 7 http://svmlight.joachims.org 527 mal T. Figure 3 shows the experiment results of varying T (with θ = 0.5) on development data. It shows that the best performance is achieved when T = 50 with 51.11 in F1). Therefore, we set T as 50 in our following experiments. Figure 3. Performance with varying T. 
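The second tuning parameter, θ, enters the model through Eq. (4). The following minimal sketch shows that blending step together with the candidate-level weights of Eqs. (6)–(7); representing Pw, Pt, and the final word scores as plain dictionaries is an illustrative assumption, and the choice of the shared topic m is left to the caller, as in Eq. (4).

```python
# Minimal sketch of the blended transition probability (Eq. 4) and the
# candidate-level features (Eqs. 6-7). Pw, Pt and the final word scores are
# assumed to be plain dictionaries; how the shared topic m is chosen is left
# to the caller, following Eq. (4).

def blended_transition(wi, wj, m, topics, p_word, p_topic, theta=0.6):
    """P'_w(j|i) = theta*P_w(j|i) + (1-theta)*Pt(i,m)*Pt(j,m)/sum_k Pt(i,k)*Pt(j,k)"""
    denom = sum(p_topic[(wi, k)] * p_topic[(wj, k)] for k in topics)
    topical = (p_topic[(wi, m)] * p_topic[(wj, m)] / denom) if denom else 0.0
    return theta * p_word[(wi, wj)] + (1 - theta) * topical

def candidate_features(candidate_words, score):
    """Weight_total (Eq. 6) and Weight_max (Eq. 7) over one focus candidate;
    computed once with scores from the co-occurrence-based graph and once with
    scores from the WordNet-based graph, this yields the four features of Table 2."""
    weights = [score[w] for w in candidate_words if w in score]
    return {"weight_total": sum(weights),
            "weight_max": max(weights, default=0.0)}
```

The default θ = 0.6 in the sketch corresponds to the development-set optimum reported next.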
For parameter θ, a trade-off between the transition probability Pw(j|i) (word to word) and the transition probability Pt (i,m) (word and topic) to update P’w(j|i), we vary it from 0 to 1 with an interval of 0.1. Figure 4 shows the experiment results of varying θ (with T=50) on development data. It shows that the best performance is achieved when θ = 0.6, which are adopted hereafter in all our experiments. This indicates that direct lexical information in current document contributes more than indirect topic information in all documents on negation focus identification. It also shows that direct lexical information in current document and indirect topic information in all documents are much complementary on negation focus identification. Figure 4. Performance with varying θ. System P(%) R(%) F1 using word-based graph model 45.62 42.02 43.75 using topic-driven wordbased graph model 54.59 50.76 52.61 Table 4. Performance with only inter-sentence information. Table 4 shows the performance of negation focus identification with only inter-sentence features. It also shows that the system with intersentence features from the topic-driven wordbased graph model significantly improves the F1-measure by 8.86 over the system with intersentence features from the word-based graph model, largely due to the usefulness of topic information. In comparison with Table 3, it shows that the system with only intra-sentence features achieves better performance than the one with only intersentence features (59.64 vs. 52.61 in F1measure). With both Intra- and Inter-sentence Information Table 5 shows that enriching intra-sentence features with inter-sentence features significantly (p<0.01) improve the performance by 9.85 in F1measure than the better baseline. This indicates the usefulness of such contextual discourse information and the effectiveness of our topicdriven word-based graph model in negation focus identification. System P(%) R(%) F1 BaselineC4.5 with intra feat. only 66.73 49.93 57.12 BaselineSVM with intra feat. only 60.22 59.07 59.64 Ours with Both feat. using word-based GM 64.93 62.47 63.68 Ours with Both feat. using topic-driven word-based GM 71.67 67.43 69.49 (Note: “feat.” denotes features; “GM” denotes graph model.) Table 5. Performance comparison of systems on negation focus identification. System P(%) R(%) F1 BaselineC4.5 with intra feat. only (auto) 60.94 44.62 51.52 BaselineSVM with intra feat. Only (auto) 53.81 51.67 52.72 Ours with Both feat. using word-based GM (auto) 58.77 57.19 57.97 Ours with Both feat. using topic-driven word-based GM (auto) 66.74 64.53 65.62 Table 6. Performance comparison of systems on negation focus identification with automatically extracted features. 528 Besides, Table 6 shows the performance of our best system with all features automatically extracted using the toolkits as described in Section 5.1. Compared with our best system employing gold additional annotated features (the last line in Table 5), the homologous system with automatically extracted features (the last line in Table 6) only decrease of less than 4 in F1measure. This demonstrates the achievability of our approach. In comparison with the best-reported performance on the *SEM’2012 shared task (Rosenberg and Bergler, 2012), our system performs better by about 11 in F-measure. 
5.3 Discussion While this paper verifies the usefulness of contextual discourse information on negation focus identification, the performance with only intersentence features is still weaker than that with only intra-sentence features. There are two main reasons. On the one hand, the former employs an unsupervised approach without prior knowledge for training. On the other hand, the usefulness of inter-sentence features depends on the assumption that a negation focus relates to the meaning of which is most relevant to authors’ intention in a discourse. If there lacks relevant information in a discourse context, negation focus will become difficult to be identified only by inter-sentence features. Error analysis also shows that some of the negation focuses are very difficult to be identified, even for a human being. Consider the sentence (3) in Section 1, if given sentence because of her neighbors' protests, but her husband doesn’t think so as its following context, both Helen and to play the violin can become the negation focus. Moreover, the inter-annotator agreement in the first round of negation focus annotation can only reach 0.72 (Blanco and Moldovan, 2011). This indicates inherent difficulty in negation focus identification. 6 Conclusion In this paper, we propose a graph model to enrich intra-sentence features with inter-sentence features from both lexical and topic perspectives. In this graph model, the relatedness between words is calculated by word co-occurrence, WordNetbased similarity, and topic-driven similarity. Evaluation on the *SEM 2012 shared task corpus indicates the usefulness of contextual discourse information on negation focus identification and our graph model in capturing such global information. In future work, we will focus on exploring more contextual discourse information via the graph model and better ways of integrating intra- and inter-sentence information on negation focus identification. Acknowledgments This research is supported by the National Natural Science Foundation of China, No.61272260, No.61331011, No.61273320, the Natural Science Foundation of Jiangsu Province, No. BK2011282, the Major Project of College Natural Science Foundation of Jiangsu Province, No.11KIJ520003, and the Graduates Project of Science and Innovation, No. CXZZ12_0818. The authors would like to thank the anonymous reviewers for their insightful comments and suggestions. Our sincere thanks are also extended to Dr. Zhongqing Wang for his valuable discussions during this study. Reference Eduardo Blanco and Dan Moldovan. 2011. Semantic Representation of Negation Using Focus Detection. In Proceedings of the 49th Annual Meeting of the Association for Computational Linguistics, pages 581-589, Portland, Oregon, June 19-24, 2011. Wendy W. Chapman, Will Bridewell, Paul Hanbury, Gregory F. Cooper, and Bruce G. Buchanan. 2001. A simple algorithm for identifying negated findings and diseases in discharge summaries. Journal of Biomedical Informatics, 34:301-310. Marie-Catherine De Marneffe, Bill MacCartney and Christopher D. Manning. 2006. Generating Typed Dependency Parses from Phrase Structure Parses. In Proceedings of LREC’2006. Yun Ding. 2011. Implied Negation in Discourse. Journal of Theory and Practice in Language Studies, 1(1): 44-51, Jan 2011. Jenny Rose Finkel, Trond Grenager, and Christopher Manning. 2005. Incorporating non-local information into information extraction systems by gibbs sampling. 
In Proceedings of the 43rd Annual Meeting on Association for Computational Linguistics, pages 363-370, Stroudsburg, PA, USA. Tom Griffiths. 2002. Gibbs sampling in the generative model of Latent Dirichlet Allocation. Tech. rep., Stanford University. Laurence R Horn. 1989. A Natural History of Negation. Chicago University Press, Chicago, IL. 529 Fangtao Li, Yang Tang, Minlie Huang, and Xiaoyan Zhu. 2009. Answering Opinion Questions with Random Walks on Graphs. In Proceedings of the 47th Annual Meeting of the ACL and the 4th IJCNLP of the AFNLP, pages 737-745, Suntec, Singapore, 2-7 Aug 2009. Junhui Li, Guodong Zhou, Hongling Wang, and Qiaoming Zhu. 2010. Learning the Scope of Negation via Shallow Semantic Parsing. In Proceedings of the 23rd International Conference on Computational Linguistics. Stroudsburg, PA, USA: Association for Computational Linguistics, 671-679. Jin-Dong Kim, Tomoko Ohta, Sampo Pyysalo, Yoshinobu Kano, and Jun'ichi Tsujii. 2009. Overview of BioNLP'09 Shared Task on Event Extraction. In Proceedings of the BioNLP'2009 Workshop Companion Volume for Shared Task. Stroudsburg, PA, USA: Association for Computational Linguistics, 1-9. Dan Klein and Christopher D. Manning. 2003. Accurate Unlexicalized Parsing. In Proceedings of the 41st Meeting of the Association for Computational Linguistics, pages 423-430. George A. Miller. 1995. Wordnet: a lexical database for english. Commun. ACM, 38(11):39-41. Roser Morante, Anthony Liekens and Walter Daelemans. 2008. Learning the Scope of Negation in Biomedical Texts. In Proceedings of the 2008 Conference on Empirical Methods in Natural Language Processing, pages 715-724, Honolulu, October 2008. Roser Morante and Caroline Sporleder, editors. 2010. In Proceedings of the Workshop on Negation and Speculation in Natural Language Processing. University of Antwerp, Uppsala, Sweden. Roser Morante and Eduardo Blanco. 2012. *SEM 2012 Shared Task: Resolving the Scope and Focus of Negation. In Proceedings of the First Joint Conference on Lexical and Computational Semantics (*SEM), pages 265-274, Montreal, Canada, June 78, 2012. Roser Morante and Caroline Sporleder. 2012. Modality and Negation: An Introduction to the Special Issue. Computational Linguistics, 2012, 38(2): 223260. Roser Morante and Walter Daelemans. 2012. Conan Doyle-neg: Annotation of negation cues and their scope in Conan Doyle stories. In Proceedings of LREC 2012, Istambul. Lawrence Page, Sergey Brin, Rajeev Motwani, and Terry Winograd. 1998. The pagerank citation ranking: Bringing order to the web. Technical report, Stanford University. Vasin Punyakanok, Dan Roth, and Wen-tau Yih. 2008. The importance of syntactic parsing and inference in semantic role labeling. Computational Linguistics, 34(2):257-287, June. Sabine Rosenberg and Sabine Bergler. 2012. UConcordia: CLaC Negation Focus Detection at *Sem 2012. In Proceedings of the First Joint Conference on Lexical and Computational Semantics (*SEM), pages 294-300, Montreal, Canada, June 7-8, 2012. Ton van der Wouden. 1997. Negative Contexts: Collocation, Polarity, and Multiple Negation. Routledge, London. Veronika Vincze, György Szarvas, Richárd Farkas, György Móra, and János Csirik. 2008. The BioScope corpus: biomedical texts annotated for uncertainty,negation and their scopes. BMC Bioinformatics, 9(Suppl 11):S9. Xiaojun Wan and Jianwu Yang. 2008. Multidocument summarization using cluster-based link analysis. 
In Proceedings of the 31st Annual International ACM SIGIR Conference on Research and Development in Information Retrieval, pages 299–306.
Proceedings of the 52nd Annual Meeting of the Association for Computational Linguistics, pages 47–57, Baltimore, Maryland, USA, June 23-25 2014. c⃝2014 Association for Computational Linguistics Learning Structured Perceptrons for Coreference Resolution with Latent Antecedents and Non-local Features Anders Bj¨orkelund and Jonas Kuhn Institute for Natural Language Processing University of Stuttgart {anders,jonas}@ims.uni-stuttgart.de Abstract We investigate different ways of learning structured perceptron models for coreference resolution when using non-local features and beam search. Our experimental results indicate that standard techniques such as early updates or Learning as Search Optimization (LaSO) perform worse than a greedy baseline that only uses local features. By modifying LaSO to delay updates until the end of each instance we obtain significant improvements over the baseline. Our model obtains the best results to date on recent shared task data for Arabic, Chinese, and English. 1 Introduction This paper studies and extends previous work using the structured perceptron (Collins, 2002) for complex NLP tasks. We show that for the task of coreference resolution the straightforward combination of beam search and early update (Collins and Roark, 2004) falls short of more limited feature sets that allow for exact search. This contrasts with previous work on, e.g., syntactic parsing (Collins and Roark, 2004; Huang, 2008; Zhang and Clark, 2008) and linearization (Bohnet et al., 2011), and even simpler structured prediction problems, where early updates are not even necessary, such as part-of-speech tagging (Collins, 2002) and named entity recognition (Ratinov and Roth, 2009). The main reason why early updates underperform in our setting is that the task is too difficult and that the learning algorithm is not able to profit from all training data. Put another way, early updates happen too early, and the learning algorithm rarely reaches the end of the instances as it halts, updates, and moves on to the next instance. An alternative would be to continue decoding the same instance after the early updates, which is equivalent to Learning as Search Optimization (LaSO; Daum´e III and Marcu (2005b)). The learning task we are tackling is however further complicated since the target structure is under-determined by the gold standard annotation. Coreferent mentions in a document are usually annotated as sets of mentions, where all mentions in a set are coreferent. We adopt the recently popularized approach of inducing a latent structure within these sets (Fernandes et al., 2012; Chang et al., 2013; Durrett and Klein, 2013). This approach provides a powerful boost to the performance of coreference resolvers, but we find that it does not combine well with the LaSO learning strategy. We therefore propose a modification to LaSO, which delays updates until after each instance. The combination of this modification with non-local features leads to further improvements in the clustering accuracy, as we show in evaluation results on all languages from the CoNLL 2012 Shared Task – Arabic, Chinese, and English. We obtain the best results to date on these data sets.1 2 Background Coreference resolution is the task of grouping referring expressions (or mentions) in a text into disjoint clusters such that all mentions in a cluster refer to the same entity. 
An example is given in Figure 1 below, where mentions from two clusters are marked with brackets: [Drug Emporium Inc.]a1 said [Gary Wilber]b1 was named CEO of [this drugstore chain]a2. [He]b2 succeeds his father, Philip T. Wilber, who founded [the company]a3 and remains chairman. Robert E. Lyons III, who headed the [company]a4’s Philadelphia region, was appointed president and chief operating officer, succeeding [Gary Wilber]b3. Figure 1: An excerpt of a document with the mentions from two clusters marked. 1Our system is available at http://www.ims. uni-stuttgart.de/˜anders/coref.html 47 In recent years much work on coreference resolution has been devoted to increasing the expressivity of the classical mention-pair model, in which each coreference classification decision is limited to information about two mentions that make up a pair. This shortcoming has been addressed by entity-mention models, which relate a candidate mention to the full cluster of mentions predicted to be coreferent so far (for more discussion on the model types, see, e.g., (Ng, 2010)). Nevertheless, the two best systems in the latest CoNLL Shared Task on coreference resolution (Pradhan et al., 2012) were both variants of the mention-pair model. While the second best system (Bj¨orkelund and Farkas, 2012) followed the widely used baseline of Soon et al. (2001), the winning system (Fernandes et al., 2012) proposed the use of a tree representation. The tree-based model of Fernandes et al. (2012) construes the representation of coreference clusters as a rooted tree. Figure 2 displays an example tree over the clusters from Figure 1. Every mention corresponds to a node in the tree, and arcs between mentions indicate that they are coreferent. The tree additionally has a dummy root node. Every subtree under the root node corresponds to a cluster of coreferent mentions. Since coreference training data is typically not annotated with trees, Fernandes et al. (2012) proposed the use of latent trees that are induced during the training phase of a coreference resolver. The latent tree provides more meaningful antecedents for training.2 For instance, the popular pair-wise instance creation method suggested by Soon et al. (2001) assumes non-branching trees, where the antecedent of every mention is its linear predecessor (i.e., heb2 is the antecedent of Gary Wilberb3). Comparing the two alternative antecedents of Gary Wilberb3, the tree in Figure 2 provides a more reliable basis for training a coreference resolver, as the two mentions of Gary Wilber are both proper names and have an exact string match. 3 Representation and Learning Let M = {m0, m1, ..., mn} denote the set of mentions in a document, including the artificial root mention (denoted by m0). We assume that the 2We follow standard practice and overload the terms anaphor and antecedent to be any type of mention, i.e., names as well as pronouns. An antecedent is simply the mention to the left of the anaphor. Drug Emporium Inc.a1 the companya3 this drugstore chaina2 Gary Wilberb1 Heb2 Gary Wilberb3 root companya4 Figure 2: A tree representation of Figure 1. mentions are ordered ascendingly with respect to the linear order of the document, where the document root precedes all other mentions.3 For each mention mj, let Aj denote the set of potential antecedents. That is, the set of all mentions that precede mj according to the linear order including the root node, or, Aj = {mi | i < j}. Finally, let A denote the set of all antecedent sets {A0, A1, ..., An}. 
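As a concrete illustration of these definitions, the following is a minimal sketch of the mention ordering and of the antecedent-candidate sets Aj; the Mention record and the toy spans (with made-up token offsets) are illustrative assumptions, not the system's actual data structures.

```python
# Minimal sketch of the mention ordering and the antecedent sets A_j.
# The Mention record and the toy spans below are illustrative assumptions.
from collections import namedtuple

Mention = namedtuple("Mention", ["index", "start", "end", "text"])

def order_mentions(spans):
    """Order mentions by start token; of two mentions starting at the same
    token, the longer one precedes. m0 is the artificial document root."""
    spans = sorted(spans, key=lambda s: (s[0], -(s[1] - s[0])))
    root = Mention(0, -1, -1, "<ROOT>")
    return [root] + [Mention(i + 1, s, e, t) for i, (s, e, t) in enumerate(spans)]

def antecedent_sets(mentions):
    """A_j = {m_i | i < j}: all preceding mentions, including the root."""
    return {m.index: [a for a in mentions if a.index < m.index] for m in mentions}

# Usage on a fragment of the running example (token offsets are made up):
spans = [(0, 3, "Drug Emporium Inc."), (5, 7, "Gary Wilber"),
         (12, 15, "this drugstore chain"), (16, 17, "He")]
mentions = order_mentions(spans)
A = antecedent_sets(mentions)
print([a.text for a in A[3]])   # candidate antecedents for the third mention
```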
In the tree model, each mention corresponds to a node, and an antecedent-anaphor pair ⟨ai, mi⟩, where ai ∈ Ai, corresponds to a directed edge (or arc) pointing from antecedent to anaphor. The score of an arc ⟨ai, mi⟩ is defined as the scalar product between a weight vector w and a feature vector Φ(⟨ai, mi⟩), where Φ is a feature extraction function over an arc (thus extracting features from the antecedent and the anaphor). The score of a coreference tree y = {⟨a1, m1⟩, ⟨a2, m2⟩, ..., ⟨an, mn⟩} is defined as the sum of the scores of all the mention pairs:
score(⟨ai, mi⟩) = w · Φ(⟨ai, mi⟩)    (1)
score(y) = Σ_{⟨ai,mi⟩∈y} score(⟨ai, mi⟩)
The objective is to find the output ŷ that maximizes the scoring function:
ŷ = arg max_{y∈Y(A)} score(y)    (2)
where Y(A) denotes the set of possible trees given the antecedent sets A. By treating the mentions as nodes in a directed graph and assigning scores to the arcs according to (1), Fernandes et al. (2012) solved the search problem using the Chu-Liu-Edmonds (CLE) algorithm (Chu and Liu, 1965; Edmonds, 1967), which is a maximum spanning tree algorithm that finds the optimal tree over a connected directed graph. CLE, however, has the drawback that the scores of the arcs must remain fixed and cannot change depending on other arcs, and it is not clear how to include non-local features in a CLE decoder.
3 We impose a total order on mentions. In case of nested mentions, the mention that begins first is assumed to precede the embedded one. If two mentions begin at the same token, the longer one is taken to precede the shorter one.
3.1 Online learning
We find the weight vector w by online learning using a variant of the structured perceptron (Collins, 2002). Specifically, we use the passive-aggressive (PA) algorithm (Crammer et al., 2006), since we found that this performed slightly better in preliminary experiments.4 The structured perceptron iterates over training instances ⟨xi, yi⟩, where xi are inputs and yi are outputs. For each instance it uses the current weight vector w to make a prediction ŷi given the input xi. If the prediction is incorrect, the weight vector is updated in favor of the correct structure. Otherwise the weight vector is left untouched.
In our setting, inputs xi correspond to documents and outputs yi are trees over mentions in a document. The training data is, however, not annotated with trees, but only with clusters of mentions. That is, the yi's are not defined a priori.
4 We also implement the feature mapping function Φ as a hash kernel (Bohnet, 2010) and apply averaging (Collins, 2002), though for brevity we omit this from the pseudocode.
3.2 Latent antecedents
In order to have a tree structure to update against, we use the current weight vector and apply the decoder to a constrained antecedent set and obtain a latent tree over the mentions in a document, where each mention is assigned a single correct antecedent (Fernandes et al., 2012). We constrain the antecedent sets such that only trees that correspond to the correct clustering can be built. Specifically, let Ãj denote the set of correct antecedents for a mention mj, or
Ãj = {m0}                              if mj has no correct antecedent
Ãj = {ai | COREF(ai, mj), ai ∈ Aj}     otherwise
that is, if mention mj is non-referential or the first mention of its cluster, Ãj contains only the document root. Otherwise it is the set of all mentions to the left that belong to the same cluster as mj. Analogously to A, let Ã denote the set of constrained antecedent sets. The latent tree ỹ needed
for updates is then defined to be the optimal tree over Y( ˜ A), subject to the current weight vector: ˜y = arg max y∈Y( ˜ A) score(y) The intuition behind the latent tree is that during online learning, the weight vector will start favoring latent trees that are easier to learn (such as the one in Figure 2). Algorithm 1 PA algorithm with latent trees Input: Training data D, number of iterations T Output: Weight vector w 1: w = −→0 2: for t ∈1..T do 3: for ⟨Mi, Ai, ˜ Ai⟩∈D do 4: ˆyi = arg maxY(A) score(y) ▷Predict 5: if ¬ CORRECT(ˆyi) then 6: ˜yi = arg maxY( ˜ A) score(y) ▷Latent tree 7: ∆= Φ(ˆyi) −Φ(˜yi) 8: τ = ∆·w+LOSS(ˆyi) ∥∆∥2 ▷PA weight 9: w = w + τ∆ ▷PA update 10: return w Algorithm 1 shows pseudocode for the learning algorithm, which we will refer to as the baseline learning algorithm. Instead of looping over pairs ⟨x, y⟩of documents and trees, it loops over triples ⟨M, A, ˜ A⟩that comprise the set of mentions M and the two sets of antecedent candidates (line 3). Moreover, rather than checking that the tree is identical to the latent tree, it only requires the tree to correctly encode the gold clustering (line 5). The update that occurs in lines 7-9 is the passive-aggressive update. A loss function LOSS that quantifies the error in the prediction is used to compute a scalar τ that controls how much the weights are moved in each update. If τ is set to 1, the update reduces to the standard structured perceptron update. The loss function can be an arbitrarily complex function that returns a numerical value of how bad the prediction is. In the simplest case, Hamming loss can be used, i.e., for each incorrect arc add 1. We follow Fernandes et al. (2012) and penalize erroneous root attachments, i.e., mentions that erroneously get the root node as their antecedent, with a loss of 1.5. For all other arcs we use Hamming loss. 4 Incremental Search We now show that the search problem in (2) can equivalently be solved by the more intuitive bestfirst decoder (Ng and Cardie, 2002), rather than using the CLE decoder. The best-first decoder 49 works incrementally by making a left-to-right pass over the mentions, selecting for each mention the highest scoring antecedent. The key aspect that makes the best-first decoder equivalent to the CLE decoder is that all arcs point from left to right, both in this paper and in the work of Fernandes et al. (2012). We sketch a proof that this decoder also returns the highest scoring tree. First, note that this algorithm indeed returns a tree. This can be shown by assuming the opposite, in which case the tree has to have a cycle. Then there must be a mention that has its antecedent to the right. Though this is not possible since all arcs point from left to right. Second, this tree is the highest scoring tree. Again, assume the contrary, i.e., that there is a higher scoring tree in Y(A). This implies that for some mention there is a higher scoring antecedent than the one selected by the decoder. This contradicts the fact that the best-first decoder selects the highest scoring antecedent for each mention.5 5 Introducing Non-local Features Since the best-first decoder makes a left-to-right pass, it is possible to extract features on the partial structure on the left. Such non-local features are able to capture information beyond that of a mention and its potential antecedent, e.g., the size of a partially built cluster, or features extracted from the antecedent of the antecedent. 
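As a concrete companion to Sections 4 and 5, here is a minimal Python sketch of the left-to-right best-first decoder, written with a partial-structure argument so the scoring function can also look at the arcs built so far; all names are our own illustrative choices, not the authors' implementation.

```python
# Greedy best-first decoding: for each mention (left to right), choose the
# highest-scoring antecedent among the mentions to its left, including the
# artificial root. score_arc is assumed to return w . Phi(<a_i, m_j>, state).

def best_first_decode(mentions, antecedent_sets, score_arc):
    arcs = []                                    # partial tree built so far
    for j in range(1, len(mentions)):            # skip the root m_0
        best_i, best_score = 0, float("-inf")
        for i in range(len(antecedent_sets[j])):  # all m_i with i < j
            s = score_arc(i, j, arcs)             # may inspect non-local state
            if s > best_score:
                best_i, best_score = i, s
        arcs.append((best_i, j))                  # arcs always point rightwards
    return arcs
```

With purely local features (a score_arc that ignores its third argument), this single greedy pass returns the maximum spanning tree, as argued above; with non-local features it is no longer exact, which is what motivates the beam-search extension that follows.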
When only local features are used, greedy search (either with CLE or the best-first decoder) suffices to find the highest scoring tree. That is, greedy search provides an exact solution to equation 2. Non-local features, however, render the exact search problem intractable. This is because with non-local features, locally suboptimal (i.e., non-greedy) antecedents for some mentions may lead to a higher total score over a whole document. In order to keep some options around during search, we extend the best-first decoder with beam search. Beam search works incrementally by keeping an agenda of state items. At each step, all items on the agenda are expanded. The subset of size k (the beam size) of the highest scoring expansions are retained and put back into the agenda for the next step. The feature extraction function Φ 5In case there are multiple maximum spanning trees, the best-first decoder will return one of them. This also holds for the CLE algorithm. With proper definitions, the proof can be constructed to show that both search algorithms return trees belonging to the set of maximum spanning trees over a graph. is also extended such that it also receives the current state s as an argument: Φ(⟨mi, mj⟩, s). The state encodes the previous decisions and enables Φ to extract features from the partial tree on the left. We now outline three different ways of learning the weight vector w with non-local features. 5.1 Early updates The beam search decoder can be plugged into the training algorithm, replacing the calls to arg max. Since state items leading to the best tree may be pruned from the agenda before the decoder reaches the end of the document, the introduction of non-local features may cause the decoder to return a non-optimal tree. This is problematic as it might cause updates although the correct tree has a higher score than the predicted one. It has previously been observed (Huang et al., 2012) that substantial gains can be made by applying an early update strategy (Collins and Roark, 2004): if the correct item is pruned before reaching the end of the document, then stop and update. While beam search and early updates have been successfully applied to other NLP applications, our task differs in two important aspects: First, coreference resolution is a much more difficult task, which relies on more (world) knowledge than what is available in the training data. In other words, it is unlikely that we can devise a feature set that is informative enough to allow the weight vector to converge towards a solution that lets the learning algorithm see the entire documents during training, at least in the situation when no external knowledge sources are used. Second, our gold structure is not known but is induced latently, and may vary from iteration to iteration. With non-local features this is troublesome since the best latent tree of a complete document may not necessarily coincide with the best partial tree at some intermediate mention mj, j < n, i.e., a mention before the last in a document. We therefore also apply beam search to find the latent tree to have a partial gold structure for every mention in a document. Algorithm 2 shows pseudocode for the beam search and early update training procedure. The algorithm maintains two parallel agendas, one for gold items and one for predicted items. At every mention, both agendas are expanded and thus cover the same set of mentions. 
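Before the description continues, a small illustrative sketch of the agenda expansion step assumed here (the EXPAND operation of the pseudocode): every partial tree on an agenda is extended with every antecedent candidate of the current mention, and only the k best expansions survive. The item layout and helper names are our own assumptions.

```python
import heapq

# One beam-search step over the antecedent candidates of mention m_j.
# An agenda item is a pair (partial_arcs, running_score); score_arc is the
# same state-aware scoring function used by the greedy decoder above.

def expand(agenda, candidates, j, k, score_arc):
    if not agenda:                          # seed with the empty partial tree
        agenda = [([], 0.0)]
    expansions = []
    for partial_arcs, total in agenda:
        for i in candidates:                # indices of mentions left of m_j
            s = total + score_arc(i, j, partial_arcs)
            expansions.append((partial_arcs + [(i, j)], s))
    # keep only the k highest-scoring partial trees
    return heapq.nlargest(k, expansions, key=lambda item: item[1])
```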
Then the predicted agenda is checked to see if it contains any correct 50 Algorithm 2 Beam search and early update Input: Data set D, epochs T, beam size k Output: weight vector w 1: w = −→0 2: for t ∈1..T do 3: for ⟨Mi, Ai, ˜ Ai⟩∈D do 4: AgendaG = {} 5: AgendaP = {} 6: for j ∈1..n do 7: AgendaG = EXPAND(AgendaG, ˜Aj, mj, k) 8: AgendaP = EXPAND(AgendaP, Aj, mj, k) 9: if ¬ CONTAINSCORRECT(AgendaP) then 10: ˜y = EXTRACTBEST(AgendaG) 11: ˆy = EXTRACTBEST(AgendaP) 12: update ▷PA update 13: GOTO 3 ▷Skip and move to next instance 14: ˆy = EXTRACTBEST(AgendaP) 15: if ¬ CORRECT(ˆy) then 16: ˜y = EXTRACTBEST(AgendaG) 17: update ▷PA update item. If there is no correct item in the predicted agenda, search is halted and an update is made against the best item from the gold agenda. The algorithm then moves on to the next document. If the end of a document is reached, the top scoring predicted item is checked for correctness. If it is not, an update is made against the best gold item. A drawback of early updates is that the remainder of the document is skipped when an early update is applied, effectively discarding some training data.6 An alternative strategy that makes better use of the training data is to apply the maxviolation procedure suggested by Huang et al. (2012). However, since our gold trees change from iteration to iteration, and even inside of a single document, it is not entirely clear with respect to what gold tree the maximum violation should be computed. Initial experiments with max-violation updates indicated that they did not improve much over early updates, and also had a tendency to only consider a smaller portion of the training data. 5.2 LaSO To make full use of the training data we implemented Learning as Search Optimization (LaSO; Daum´e III and Marcu, 2005b). It is very similar to early updates, but differs in one crucial respect: When an early update is made, search is continued rather than aborted. Thus the learning algorithm always reaches the end of a document, avoiding the problem that early updates discard parts of the training data. 6In fact, after 50 iterations about 70% of the mentions in the training data are still being ignored due to early updates. Correct items are computed the same way as with early updates, where an agenda of gold items is maintained in parallel. When search is resumed after an intermediate LaSO update, the prediction agenda is re-seeded with gold items (i.e., items that are all correct). This is necessary since the update influences what the partial gold structure looks like, and the gold agenda therefore needs to be recreated from the beginning of the document. Specifically, after each intermediate LaSO update, the gold agenda is expanded repeatedly from the beginning of the document to the point where the update was made, and is then copied over to seed the prediction agenda. In terms of pseudocode, this is accomplished by replacing lines 12 and 13 in Algorithm 2 with the following: 12: update ▷PA update 13: AgendaG = {} 14: for mi ∈{m1, ..., mj} ▷Recreate gold agenda 15: AgendaG = EXPAND(AgendaG, ˜Ai, mi, k) 16: AgendaP = COPY(AgendaG) 17: GOTO 6 ▷Continue 5.3 Delayed LaSO updates When we applied LaSO, we noticed that it performed worse than the baseline learning algorithm when only using local features. We believe that the reason is that updates are made in the middle of documents which means that lexical forms of antecedents are “fresh in memory” of the weight vector. 
This results in fewer mistakes during training and leads to fewer updates. While this feedback makes it easier during training, such feedback is not available during test time, and the LaSO learning setting therefore mimics the testing setting to a lesser extent. We also found that LaSO updates change the shape of the latent tree and that the average distance between mentions connected by an arc increased. This problem can also be attributed to how lexical items are fresh in memory. Such trees tend to deviate from the intuition that the latent trees are easier to learn. They also render distancebased features (which are standard practice and generally rather useful) less powerful, as distance in sentences or mentions becomes less of a reliable indicator for coreference. To cope with this problem, we devised the delayed LaSO update, which differs from LaSO only in the respect that it postpones the actual updates until the end of a document. This is accomplished by summing the distance vectors ∆at every point where LaSO would make an update. At 51 Algorithm 3 Delayed LaSO update Input: Data set D, iterations T, beam size k Output: weight vector w 1: w = −→0 2: for t ∈1..T do 3: for ⟨Mi, Ai, ˜ Ai⟩∈D do 4: AgendaG = {} 5: AgendaP = {} 6: ∆acc = −→0 7: lossacc = 0 8: for j ∈1..n do 9: AgendaG = EXPAND(AgendaG, ˜Aj, mj, k) 10: AgendaP = EXPAND(AgendaP, Aj, mj, k) 11: if ¬ CONTAINSCORRECT(AgendaP) then 12: ˜y = EXTRACTBEST(AgendaG) 13: ˆy = EXTRACTBEST(AgendaP) 14: ∆acc = ∆acc + Φ(ˆy) −Φ(˜y) 15: lossacc = lossacc + LOSS(ˆy) 16: AgendaP = AgendaG 17: ˆy = EXTRACTBEST(AgendaP) 18: if ¬ CORRECT(ˆy) then 19: ˜y = EXTRACTBEST(AgendaG) 20: ∆acc = ∆acc + Φ(ˆy) −Φ(˜y) 21: lossacc = lossacc + LOSS(ˆy) 22: if ∆acc ̸= −→0 then 23: update w.r.t. ∆acc and lossacc the end of a document, an update is made with respect to the sum of all ∆’s. Similarly, a running sum of the partial loss is maintained within a document. Since the PA update only depends on the distance vector ∆and the loss, it can be applied with respect to these sums at the end of the document. When only local features are used, this update is equivalent to the updates in the baseline learning algorithm. This follows because greedy search finds the optimal tree when only local features are used. Similarly, using only local features, the beam-based best-first decoder will also return the optimal tree. Algorithm 3 shows the pseudocode for the delayed LaSO learning algorithm. 6 Features In this section we briefly outline the type of features we use. The feature sets are customized for each language. As a baseline we use the features from Bj¨orkelund and Farkas (2012), who ranked second in the 2012 CoNLL shared task and is publicly available. The exact definitions and feature sets that we use are available as part of the download package of our system. 6.1 Local features Basic features that can be extracted on one or both mentions in a pair include (among others): Mention type, which is either root, pronoun, name, or common; Distance features, e.g., the distance in sentences or mentions; Rule-based features, e.g., StringMatch or SubStringMatch; Syntax-based features, e.g., category labels or paths in the syntax tree; Lexical features, e.g., the head word of a mention or the last word of a mention. In order to have a strong local baseline, we applied greedy forward/backward feature selection on the training data using a large set of local feature templates. 
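A schematic sketch of the greedy forward half of this selection procedure is given below; train_and_score is an assumed helper that trains a resolver with the given templates on the training part of the split described next and returns the CoNLL average on the held-out part.

```python
# Greedy forward template selection: repeatedly add the single feature
# template that most improves the held-out CoNLL average, stopping when no
# remaining template yields any improvement.

def greedy_forward_selection(all_templates, train_and_score):
    selected = []
    best_score = train_and_score(selected)
    while True:
        best_gain, best_template = 0.0, None
        for template in all_templates:
            if template in selected:
                continue
            gain = train_and_score(selected + [template]) - best_score
            if gain > best_gain:
                best_gain, best_template = gain, template
        if best_template is None:
            return selected, best_score
        selected.append(best_template)
        best_score += best_gain
```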
Specifically, the training set of each language was split into two parts where 75% was used for training, and 25% for testing. Feature templates were incrementally added or removed in order to optimize the mean of MUC, B3, and CEAFe (i.e., the CoNLL average). 6.2 Non-local Features We experimented with non-local features drawn from previous work on entity-mention models (Luo et al., 2004; Rahman and Ng, 2009), however they did not improve performance in preliminary experiments. The one exception is the size of a cluster (Culotta et al., 2007). Additional features we use are Shape encodes the linear “shape” of a cluster in terms of mention type. For instance, the clusters representing Gary Wilber and Drug Emporium Inc. from the example in Figure 1, would be represented as RNPN and RNCCC, respectively. Where R, N, P, and C denote the root node, names, pronouns, and common noun phrases, respectively. Local syntactic context is inspired by the Entity Grid (Barzilay and Lapata, 2008), where the basic assumption is that references to an entity follow particular syntactic patterns. For instance, an entity may be introduced as an object in one sentence, whereas in subsequent sentences it is referred to in subject position. Grammatical functions are approximated by the path in the syntax tree from a mention to its closest S node. The partial paths of a mention and its linear predecessor, given the cluster of the current antecedent, informs the model about the local syntactic context. Cluster start distance denotes the distance in mentions from the beginning of the document where the cluster of the antecedent in consideration begins. Additionally, the non-local model also has access to the basic properties of other mentions in the partial tree structure, such as head words. The 52 non-local features were selected with the same greedy forward strategy as the local features, starting from the optimized local feature sets. 7 Experimental Setup We apply our model to the CoNLL 2012 Shared Task data, which includes a training, development, and test set split for three languages: Arabic, Chinese and English. We follow the closed track setting where systems may only be trained on the provided training data, with the exception of the English gender and number data compiled by Bergsma and Lin (2006). We use automatically extracted mentions using the same mention extraction procedure as Bj¨orkelund and Farkas (2012). We evaluate our system using the CoNLL 2012 scorer, which computes several coreference metrics: MUC (Vilain et al., 1995), B3 (Bagga and Baldwin, 1998), and CEAFe and CEAFm (Luo, 2005). We also report the CoNLL average (also known as MELA; Denis and Baldridge (2009)), i.e., the arithmetic mean of MUC, B3, and CEAFe. It should be noted that for B3 and the CEAF metrics, multiple ways of handling twinless mentions7 have been proposed (Rahman and Ng, 2009; Stoyanov et al., 2009). We use the most recent version of the CoNLL scorer (version 7), which implements the original definitions of these metrics.8 Our system is evaluated on the version of the data with automatic preprocessing information (e.g., predicted parse trees). Unless otherwise stated we use 25 iterations of perceptron training and a beam size of 20. We did not attempt to tune either of these parameters. We experiment with two feature sets for each language: the optimized local feature sets (denoted local), and the optimized local feature sets extended with non-local features (denoted non-local). 8 Results Learning strategies. 
We begin by looking at the different learning strategies. Since early updates do not always make use of the complete documents during training, it can be expected that it will require either a very wide beam or more iterations to get up to par with the baseline learning algorithm. Figure 3 shows the CoNLL average on 7i.e., mentions that appear in the prediction but not in gold, or the other way around 8Available at http://conll.cemantix.org/ 2012/software.html 54 56 58 60 62 64 0 10 20 30 40 50 CoNLL avg. Iterations Baseline Early (local), k=20 Early (local), k=100 Early (non-local), k=20 Early (non-local), k=100 Figure 3: Comparing early update training with the baseline training algorithm. the English development set as a function of number of training iterations with two different beam sizes, 20 and 100, over the local and non-local feature sets. The figure shows that even after 50 iterations, early update falls short of the baseline, even when the early update system has access to more informative non-local features.9 In Figure 4 we compare early update with LaSO and delayed LaSO on the English development set. The left half uses the local feature set, and the right the extended non-local feature set. Recall that with only local features, delayed LaSO is equivalent to the baseline learning algorithm. As before, early update is considerably worse than other learning strategies. We also see that delayed LaSO outperforms LaSO, both with and without non-local features. Note that plain LaSO with non-local features only barely outperforms the delayed LaSO with only local features (i.e., the baseline), which indicates that only delayed LaSO is able to fully leverage non-local features. From these results we conclude that we are better off when the learning algorithm handles one document at a time, instead of getting feedback within documents. Local vs. Non-local feature sets. Table 1 displays the differences in F-measures and CoNLL average between the local and non-local systems when applied to the development sets for each language. All metrics improve when more informative non-local features are added to the local feature set. Arabic and English show considerable improvements, and the CoNLL average increases 9Although the Early systems still seem to show slight increases after 50 iterations, it needs a considerable number of iterations to catch up with the baseline – after 100 iterations the best early system is still more than half a point behind the baseline. 53 58 59 60 61 62 63 64 65 Local Non-local CoNLL avg. Early LaSO Delayed LaSO Figure 4: Comparison of learning algorithms evaluated on the English development set. MUC B3 CEAFm CEAFe CoNLL Arabic local 47.33 42.51 49.71 46.49 45.44 non-local 49.31 43.52 50.96 47.18 46.67 Chinese local 65.84 57.94 62.23 57.05 60.27 non-local 66.4 57.99 62.37 57.12 60.5 English local 69.95 58.7 62.91 56.03 61.56 non-local 70.74 60.03 65.01 56.8 62.52 Table 1: Comparison of local and non-local feature sets on the development sets. about one point. For Chinese the gains are generally not as pronounced, though the MUC metric goes up by more than half a point. Final results. In Table 2 we compare the results of the non-local system (This paper) to the best results from the CoNLL 2012 Shared Task.10 Specifically, this includes Fernandes et al.’s (2012) system for Arabic and English (denoted Fernandes), and Chen and Ng’s (2012) system for Chinese (denoted C&N). 
For English we also compare it to the Berkeley system (Durrett and Klein, 2013), which, to our knowledge, is the best publicly available system for English coreference resolution (denoted D&K). As a general baseline, we also include Bj¨orkelund and Farkas’ (2012) system (denoted B&F), which was the second best system in the shared task. For almost all metrics our system is significantly better than the best competitor. For a few metrics the best competitor outperforms our results for either precision or recall, but in terms of F-measures and the CoNLL average our system is the best for all languages. 10Thanks to Sameer Pradhan for providing us with the outputs of the other systems for significance testing. 9 Related Work On the machine learning side Collins and Roark’s (2004) work on the early update constitutes our starting point. The LaSO framework was introduced by Daum´e III and Marcu (2005b), but has, to our knowledge, only been applied to the related task of entity detection and tracking (Daum´e III and Marcu, 2005a). The theoretical motivation for early updates was only recently explained rigorously (Huang et al., 2012). The delayed LaSO update that we propose decomposes the prediction task of a complex structure into a number of subproblems, each of which guarantee violation, using Huang et al.’s (2012) terminology. We believe this is an interesting novelty, as it leverages the complete structures for every training instance during every iteration, and expect it to be applicable also to other structured prediction tasks. Our approach also resembles imitation learning techniques such as SEARN (Daum´e III et al., 2009) and DAGGER (Ross et al., 2011), where the search problem is reduced to a sequence of classification steps that guide the search algorithm through the search space. These frameworks, however, rely on the notion of an expert policy which provides an optimal decision at each point during search. In our context that would require antecedents for every mention to be given a priori, rather than using latent antecedents as we do. Perceptrons for coreference. The perceptron has previously been used to train coreference resolvers either by casting the problem as a binary classification problem that considers pairs of mentions in isolation (Bengtson and Roth, 2008; Stoyanov et al., 2009; Chang et al., 2012, inter alia) or in the structured manner, where a clustering for an entire document is predicted in one go (Fernandes et al., 2012). However, none of these works use non-local features. Stoyanov and Eisner (2012) train an Easy-First coreference system with the perceptron to learn a sequence of join operations between arbitrary mentions in a document and accesses non-local features through previous merge operations in later stages. Culotta et al. (2007) also apply online learning in a first-order logic framework that enables non-local features, though using a greedy search algorithm. Latent antecedents. The use of latent antecedents goes back to the work of Yu and Joachims (2009), although the idea of determining 54 MUC B3 CEAFm CEAFe CoNLL Rec Prec F1 Rec Prec F1 Rec Prec F1 Rec Prec F1 avg. 
Arabic B&F 43.9 52.51 47.82 35.7 49.77 41.58 43.8 50.03 46.71 40.45 41.86 41.15 43.51 Fernandes 43.63 49.69 46.46 38.39 47.7 42.54 47.6 50.85 49.17 48.16 45.03 46.54 45.18 This paper 47.53 53.3 50.25 44.14 49.34 46.6 50.94 55.19 52.98 49.2 49.45 49.33 48.72 Chinese B&F 58.72 58.49 58.61 49.17 53.2 51.11 56.68 51.86 54.14 55.36 41.8 47.63 52.45 C&N 59.92 64.69 62.21 51.76 60.26 55.69 59.58 60.45 60.02 58.84 51.61 54.99 57.63 This paper 62.57 69.39 65.8 53.87 61.64 57.49 58.75 64.76 61.61 54.65 59.33 56.89 60.06 English B&F 65.23 70.1 67.58 49.51 60.69 54.47 56.93 59.51 58.19 51.34 49.14 59.21 57.42 Fernandes 65.83 75.91 70.51 51.55 65.19 57.58 57.48 65.93 61.42 50.82 57.28 53.86 60.65 D&K 66.58 74.94 70.51 53.2 64.56 58.33 59.19 66.23 62.51 52.9 58.06 55.36 61.4 This paper 67.46 74.3 70.72 54.96 62.71 58.58 60.33 66.92 63.45 52.27 59.4 55.61 61.63 Table 2: Comparison with other systems on the test sets. Bold numbers indicate significance at the p < 0.05 level between the best and the second best systems (according to the CoNLL average) using a Wilcoxon signed rank sum test. We refrain from significance tests on the CoNLL average, as it is an average over other F-measures. meaningful antecedents for mentions can be traced back to Ng and Cardie (2002) who used a rulebased approach. Latent antecedents have recently gained popularity and were used by two systems in the CoNLL 2012 Shared Task, including the winning system (Fernandes et al., 2012; Chang et al., 2012). Durrett and Klein (2013) present a coreference resolver with latent antecedents that predicts clusterings over entire documents and fit a loglinear model with a custom task-specific loss function using AdaGrad (Duchi et al., 2011). Chang et al. (2013) use a max-margin approach to learn a pairwise model and rely on stochastic gradient descent to circumvent the costly operation of decoding the entire training set in order to compute the gradients and the latent antecedents. None of the aforementioned works use non-local features in their models, however. Entity-mention models. Entity-mention models that compare a single mention to a (partial) cluster have been studied extensively and several works have evaluated non-local entity-level features (Luo et al., 2004; Yang et al., 2008; Rahman and Ng, 2009). Luo et al. (2004) also apply beam search at test time, but use a static assignment of antecedents and learns log-linear model using batch learning. Moreover, these works alter the basic feature definitions from their pairwise models when introducing entity-level features. This contrasts with our work, as our mention-pair model simply constitutes a special case of the non-local system. 10 Conclusion We presented experiments with a coreference resolver that leverages non-local features to improve its performance. The application of non-local features requires the use of an approximate search algorithm to keep the problem tractable. We evaluated standard perceptron learning techniques for this setting both using early updates and LaSO. We found that the early update strategy is considerably worse than a local baseline, as it is unable to exploit all training data. LaSO resolves this issue by giving feedback within documents, but still underperforms compared to the baseline as it distorts the choice of latent antecedents. We introduced a modification to LaSO, where updates are delayed until each document is processed. 
In the special case where only local features are used, this method coincides with standard structured perceptron learning that uses exact search. Moreover, it is also able to profit from nonlocal features resulting in improved performance. We evaluated our system on all three languages from the CoNLL 2012 Shared Task and present the best results to date on these data sets. Acknowledgments We are grateful to the anonymous reviewers as well as Christian Scheible and Wolfgang Seeker for comments on earlier versions of this paper. This research has been funded by the DFG via SFB 732, project D8. 55 References Amit Bagga and Breck Baldwin. 1998. Algorithms for scoring coreference chains. In In The First International Conference on Language Resources and Evaluation Workshop on Linguistics Coreference, pages 563–566. Regina Barzilay and Mirella Lapata. 2008. Modeling local coherence: An entity-based approach. Computational Linguistics, 34(1):1–34. Eric Bengtson and Dan Roth. 2008. Understanding the value of features for coreference resolution. In Proceedings of the 2008 Conference on Empirical Methods in Natural Language Processing, pages 294–303, Honolulu, Hawaii, October. Association for Computational Linguistics. Shane Bergsma and Dekang Lin. 2006. Bootstrapping path-based pronoun resolution. In Proceedings of the 21st International Conference on Computational Linguistics and 44th Annual Meeting of the Association for Computational Linguistics, pages 33–40, Sydney, Australia, July. Association for Computational Linguistics. Anders Bj¨orkelund and Rich´ard Farkas. 2012. Datadriven multilingual coreference resolution using resolver stacking. In Joint Conference on EMNLP and CoNLL - Shared Task, pages 49–55, Jeju Island, Korea, July. Association for Computational Linguistics. Bernd Bohnet, Simon Mille, Benoˆıt Favre, and Leo Wanner. 2011. <stumaba >: From deep representation to surface. In Proceedings of the Generation Challenges Session at the 13th European Workshop on Natural Language Generation, pages 232–235, Nancy, France, September. Association for Computational Linguistics. Bernd Bohnet. 2010. Top accuracy and fast dependency parsing is not a contradiction. In Proceedings of the 23rd International Conference on Computational Linguistics (Coling 2010), pages 89–97, Beijing, China, August. Kai-Wei Chang, Rajhans Samdani, Alla Rozovskaya, Mark Sammons, and Dan Roth. 2012. Illinoiscoref: The ui system in the conll-2012 shared task. In Joint Conference on EMNLP and CoNLL - Shared Task, pages 113–117, Jeju Island, Korea, July. Association for Computational Linguistics. Kai-Wei Chang, Rajhans Samdani, and Dan Roth. 2013. A constrained latent variable model for coreference resolution. In Proceedings of the 2013 Conference on Empirical Methods in Natural Language Processing, pages 601–612, Seattle, Washington, USA, October. Association for Computational Linguistics. Chen Chen and Vincent Ng. 2012. Combining the best of two worlds: A hybrid approach to multilingual coreference resolution. In Joint Conference on EMNLP and CoNLL - Shared Task, pages 56–63, Jeju Island, Korea, July. Association for Computational Linguistics. Yoeng-jin Chu and Tseng-hong Liu. 1965. On the shortest aborescence of a directed graph. Science Sinica, 14:1396–1400. Michael Collins and Brian Roark. 2004. Incremental parsing with the perceptron algorithm. In Proceedings of the 42nd Meeting of the Association for Computational Linguistics (ACL’04), Main Volume, pages 111–118, Barcelona, Spain, July. Michael Collins. 2002. 
Discriminative training methods for hidden markov models: Theory and experiments with perceptron algorithms. In Proceedings of the 2002 Conference on Empirical Methods in Natural Language Processing, pages 1–8. Association for Computational Linguistics, July. Koby Crammer, Ofer Dekel, Joseph Keshet, Shai Shalev-Shwartz, and Yoram Singer. 2006. Online passive–aggressive algorithms. Journal of Machine Learning Reseach, 7:551–585, March. Aron Culotta, Michael Wick, and Andrew McCallum. 2007. First-order probabilistic models for coreference resolution. In Human Language Technologies 2007: The Conference of the North American Chapter of the Association for Computational Linguistics; Proceedings of the Main Conference, pages 81–88, Rochester, New York, April. Association for Computational Linguistics. Hal Daum´e III and Daniel Marcu. 2005a. A largescale exploration of effective global features for a joint entity detection and tracking model. In Proceedings of Human Language Technology Conference and Conference on Empirical Methods in Natural Language Processing, pages 97–104, Vancouver, British Columbia, Canada, October. Association for Computational Linguistics. Hal Daum´e III and Daniel Marcu. 2005b. Learning as search optimization: approximate large margin methods for structured prediction. In ICML, pages 169–176. Hal Daum´e III, John Langford, and Daniel Marcu. 2009. Search-based structured prediction. Machine Learning, 75(3):297–325. Pascal Denis and Jason Baldridge. 2009. Global Joint Models for Coreference Resolution and Named Entity Classification. In Procesamiento del Lenguaje Natural 42, pages 87–96, Barcelona: SEPLN. John Duchi, Elad Hazan, and Yoram Singer. 2011. Adaptive subgradient methods for online learning and stochastic optimization. J. Mach. Learn. Res., 12:2121–2159, July. Greg Durrett and Dan Klein. 2013. Easy victories and uphill battles in coreference resolution. In Proceedings of the 2013 Conference on Empirical Methods in Natural Language Processing, pages 1971–1982, 56 Seattle, Washington, USA, October. Association for Computational Linguistics. Jack Edmonds. 1967. Optimum branchings. Journal of Research of the National Bureau of Standards, 71(B):233–240. Eraldo Fernandes, C´ıcero dos Santos, and Ruy Milidi´u. 2012. Latent structure perceptron with feature induction for unrestricted coreference resolution. In Joint Conference on EMNLP and CoNLL - Shared Task, pages 41–48, Jeju Island, Korea, July. Association for Computational Linguistics. Liang Huang, Suphan Fayong, and Yang Guo. 2012. Structured perceptron with inexact search. In Proceedings of the 2012 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, pages 142–151, Montr´eal, Canada, June. Association for Computational Linguistics. Liang Huang. 2008. Forest reranking: Discriminative parsing with non-local features. In Proceedings of ACL-08: HLT, pages 586–594, Columbus, Ohio, June. Association for Computational Linguistics. Xiaoqiang Luo, Abe Ittycheriah, Hongyan Jing, Nanda Kambhatla, and Salim Roukos. 2004. A mentionsynchronous coreference resolution algorithm based on the bell tree. In Proceedings of the 42nd Meeting of the Association for Computational Linguistics, pages 135–142, Barcelona, Spain, July. Xiaoqiang Luo. 2005. On coreference resolution performance metrics. 
In Proceedings of Human Language Technology Conference and Conference on Empirical Methods in Natural Language Processing, pages 25–32, Vancouver, British Columbia, Canada, October. Association for Computational Linguistics. Vincent Ng and Claire Cardie. 2002. Improving machine learning approaches to coreference resolution. In Proceedings of 40th Annual Meeting of the Association for Computational Linguistics, pages 104– 111, Philadelphia, Pennsylvania, USA, July. Association for Computational Linguistics. Vincent Ng. 2010. Supervised noun phrase coreference research: The first fifteen years. In Proceedings of the 48th Annual Meeting of the Association for Computational Linguistics, pages 1396– 1411, Uppsala, Sweden, July. Association for Computational Linguistics. Sameer Pradhan, Alessandro Moschitti, Nianwen Xue, Olga Uryupina, and Yuchen Zhang. 2012. Conll2012 shared task: Modeling multilingual unrestricted coreference in ontonotes. In Joint Conference on EMNLP and CoNLL - Shared Task, pages 1–40, Jeju Island, Korea, July. Association for Computational Linguistics. Altaf Rahman and Vincent Ng. 2009. Supervised models for coreference resolution. In Proceedings of the 2009 Conference on Empirical Methods in Natural Language Processing, pages 968–977, Singapore, August. Association for Computational Linguistics. Lev Ratinov and Dan Roth. 2009. Design challenges and misconceptions in named entity recognition. In Proceedings of the Thirteenth Conference on Computational Natural Language Learning (CoNLL-2009), pages 147–155, Boulder, Colorado, June. Association for Computational Linguistics. St´ephane Ross, Geoffrey J. Gordon, and J. Andrew Bagnell. 2011. A reduction of imitation learning and structured prediction to no-regret online learning. In AISTATS, pages 627–635. Wee Meng Soon, Hwee Tou Ng, and Daniel Chung Yong Lim. 2001. A machine learning approach to coreference resolution of noun phrases. Computational Linguistics, 27(4):521–544. Veselin Stoyanov and Jason Eisner. 2012. Easy-first coreference resolution. In Proceedings of COLING 2012, pages 2519–2534, Mumbai, India, December. Veselin Stoyanov, Nathan Gilbert, Claire Cardie, and Ellen Riloff. 2009. Conundrums in noun phrase coreference resolution: Making sense of the stateof-the-art. In Proceedings of the Joint Conference of the 47th Annual Meeting of the ACL and the 4th International Joint Conference on Natural Language Processing of the AFNLP, pages 656–664, Suntec, Singapore, August. Association for Computational Linguistics. Marc Vilain, John Burger, John Aberdeen, Dennis Connolly, and Lynette Hirschman. 1995. A model theoretic coreference scoring scheme. In Proceedings MUC-6, pages 45–52, Columbia, Maryland. Xiaofeng Yang, Jian Su, Jun Lang, Chew Lim Tan, Ting Liu, and Sheng Li. 2008. An entitymention model for coreference resolution with inductive logic programming. In Proceedings of ACL08: HLT, pages 843–851, Columbus, Ohio, June. Association for Computational Linguistics. Chun-Nam Yu and T. Joachims. 2009. Learning structural svms with latent variables. In International Conference on Machine Learning (ICML). Yue Zhang and Stephen Clark. 2008. A tale of two parsers: Investigating and combining graph-based and transition-based dependency parsing. In Proceedings of the 2008 Conference on Empirical Methods in Natural Language Processing, pages 562– 571, Honolulu, Hawaii, October. Association for Computational Linguistics. 57
Proceedings of the 52nd Annual Meeting of the Association for Computational Linguistics, pages 531–541, Baltimore, Maryland, USA, June 23-25 2014. c⃝2014 Association for Computational Linguistics New Word Detection for Sentiment Analysis Minlie Huang, Borui Ye*, Yichen Wang, Haiqiang Chen**, Junjun Cheng**, Xiaoyan Zhu State Key Lab. of Intelligent Technology and Systems, National Lab. for Information Science and Technology, Dept. of Computer Science and Technology, Tsinghua University, Beijing 100084, PR China *Dept. of Communication Engineering, Beijing University of Posts and Telecommunications **China Information Technology Security Evaluation Center [email protected] Abstract Automatic extraction of new words is an indispensable precursor to many NLP tasks such as Chinese word segmentation, named entity extraction, and sentiment analysis. This paper aims at extracting new sentiment words from large-scale user-generated content. We propose a fully unsupervised, purely data-driven framework for this purpose. We design statistical measures respectively to quantify the utility of a lexical pattern and to measure the possibility of a word being a new word. The method is almost free of linguistic resources (except POS tags), and requires no elaborated linguistic rules. We also demonstrate how new sentiment word will benefit sentiment analysis. Experiment results demonstrate the effectiveness of the proposed method. 1 Introduction New words on the Internet have been emerging all the time, particularly in user-generated content. Users like to update and share their information on social websites with their own language styles, among which new political/social/cultural words are constantly used. However, such new words have made many natural language processing tasks more challenging. Automatic extraction of new words is indispensable to many tasks such as Chinese word segmentation, machine translation, named entity extraction, question answering, and sentiment analysis. New word detection is one of the most critical issues in Chinese word segmentation. Recent studies (Sproat and Emerson, 2003) (Chen, 2003) have shown that more than 60% of word segmentation errors result from new words. Statistics show that more than 1000 new Chinese words appear every year (Thesaurus Research Center, 2003). These words are mostly domain-specific technical terms and time-sensitive political/social /cultural terms. Most of them are not yet correctly recognized by the segmentation algorithm, and remain as out of vocabulary (OOV) words. New word detection is also important for sentiment analysis such as opinionated phrase extraction and polarity classification. A sentiment phrase with complete meaning should have a correct boundary, however, characters in a new word may be broken up. For example, in a sentence " 表演/ n 非常/ adv 给/ v 力/ n(artists' performance is very impressive)" the two Chinese characters“给/v 力/n(cool; powerful)”should always be extracted together. In polarity classification, new words can be informative features for classification models. In the previous example, "给 力(cool; powerful)" is a strong feature for classification models while each single character is not. Adding new words as feature in classification models will improve the performance of polarity classification, as demonstrated later in this paper. This paper aims to detect new word for sentiment analysis. 
We are particulary interested in extracting new sentiment word that can express opinions or sentiment, which is of high value towards sentiment analysis. New sentiment word, as exemplified in Table 1, is a sub-class of multi-word expressions which is a sequence of neighboring words "whose exact and unambiguous meaning or connotation cannot be derived from the meaning or connotation of its components" (Choueka, 1988). Such new words cannot be directly identified using grammatical rules, which poses a major challenge to automatic analysis. Moreover, existing lexical resources never have adequate and timely coverage since new words appear constantly. People thus resort to statistical methods such as Pointwise Mutual Information (Church and Hanks, 1990), Symmetrical Conditional Probability 531 (da Silva and Lopes, 1999), Mutual Expectation (Dias et al., 2000), Enhanced Mutual Information (Zhang et al., 2009), and Multi-word Expression Distance (Bu et al., 2010). New word English Translation Polarity 口爱 lovely positive 杯具 tragic/tragedy negative 给力 very cool; powerful positive 坑爹 reverse one's expectation negative Table 1: Examples of new sentiment word. Our central idea for new sentiment word detection is as follows: Starting from very few seed words (for example, just one seed word), we can extract lexical patterns that have strong statistical association with the seed words; the extracted lexical patterns can be further used in finding more new words, and the most probable new words can be added into the seed word set for the next iteration; and the process can be run iteratively until a stop condition is met. The key issues are to measure the utility of a pattern and to quantify the possibility of a word being a new word. The main contributions of this paper are summarized as follows: • We propose a novel framework for new word detection from large-scale user-generated data. This framework is fully unsupervised and purely data-driven, and requires very lightweight linguistic resources (i.e., only POS tags). • We design statistical measures to quantify the utility of a pattern and to quantify the possibility of a word being a new word, respectively. No elaborated linguistic rules are needed to filter undesirable results. This feature may enable our approach to be portable to other languages. • We investigate the problem of polarity prediction of new sentiment word and demonstrate that inclusion of new sentiment word benefits sentiment classification tasks. The rest of the paper is structured as follows: we will introduce related work in the next section. We will describe the proposed method in Section 3, including definitions, the overview of the algorithm, and the statistical measures for addressing the two key issues. We then present the experiments in Section 4. Finally, the work is summarized in Section 5. 2 Related Work New word detection has been usually interweaved with word segmentation, particularly in Chinese NLP. In these works, new word detection is considered as an integral part of segmentation, where new words are identified as the most probable segments inferred by the probabilistic models; and the detected new word can be further used to improve word segmentation. Typical models include conditional random fields proposed by (Peng et al., 2004), and a joint model trained with adaptive online gradient descent based on feature frequency information (Sun et al., 2012). Another line is to treat new word detection as a separate task, usually preceded by part-of-speech tagging. 
The first genre of such studies is to leverage complex linguistic rules or knowledge. For example, Justeson and Katz (1995) extracted technical terminologies from documents using a regular expression. Argamon et al. (1998) segmented the POS sequence of a multi-word into small POS tiles, counted tile frequency in the new word and non-new-word on the training set respectively, and detected new words using these counts. Chen and Ma (2002) employed morphological and statistical rules to extract Chinese new word. The second genre of the studies is to treat new word detection as a classification problem. Zhou (2005) proposed a discriminative Markov Model to detect new words by chunking one or more separated words. In (Li et al., 2005), new word detection was viewed as a binary classification problem. However, these supervised models requires not only heavy engineering of linguistic features, but also expensive annotation of training data. User behavior data has recently been explored for finding new words. Zheng et al. (2009) explored user typing behaviors in Sogou Chinese Pinyin input method to detect new words. Zhang et al. (2010) proposed to use dynamic time warping to detect new words from query logs. However, both of the work are limited due to the public unavailability of expensive commercial resources. Statistical methods for new word detection have been extensively studied, and in some sense exhibit advantages over linguistics-based methods. In this setting, new word detection is mostly 532 known as multi-word expression extraction. To measure multi-word association, the first model is Pointwise Mutual Information (PMI) (Church and Hanks, 1990). Since then, a variety of statistical methods have been proposed to measure bi-gram association, such as Log-likelihood (Dunning, 1993) and Symmetrical Conditional Probability (SCP) (da Silva and Lopes, 1999). Among all the 84 bi-gram association measures, PMI has been reported to be the best one in Czech data (Pecina, 2005). In order to measure arbitrary ngrams, most common strategies are to separate ngram into two parts X and Y so that existing bigram methods can be used (da Silva and Lopes, 1999; Dias et al., 2000; Schone and Jurafsky, 2001). Zhang et al. (2009) proposed Enhanced Mutual Information (EMI) which measures the cohesion of n-gram by the frequency of itself and the frequency of each single word. Based on the information distance theory, Bu et al. (2010) proposed multi-word expression distance (MED) and the normalized version, and reported superior performance to EMI, SCP, and other measures. 3 Methodology 3.1 Definitions Definition 3.1 (Adverbial word). Words that are used mainly to modify a verb or an adjective, such as "太(too)", "非常(very)", "十分(very)", and "特 别(specially)". Definition 3.2 (Auxiliary word). Words that are auxiliaries, model particles, or punctuation marks. In Chinese, such words are like "着,了,啦,的,啊", and punctuation marks include ",。!?;:" and so on. Definition 3.3 (Lexical Pattern). A lexical pattern is a triplet < AD, ∗, AU >, where AD is an adverbial word, the wildcard ∗means an arbitrary number of words 1, and AU denotes an auxiliary word. Table 2 gives some examples of lexical patterns. In order to obtain lexical patterns, we can define regular expressions with POS tags 2 and apply the regular expressions on POS tagged texts. Since the tags of adverbial and auxiliary words are 1We set the number to 3 words in this work considering computation costs. 
2Such expressions are very simple and easy to write because we only need to consider POS tags of adverbial and auxiliary word. relatively static and can be easily identified, such a method can safely obtain lexical patterns. Pattern Frequency <"都",*,"了"> 562,057 <"都",*,"的"> 387,649 <"太",*,"了"> 380,470 <"不",*,","> 369,702 Table 2: Examples of lexical pattern. The frequency is counted on 237,108,977 Weibo posts. 3.2 The Algorithm Overview The algorithm works as follows: starting from very few seed words (for example, a word in Table 1), the algorithm can find lexical patterns that have strong statistical association with the seed words in which the likelihood ratio test (LRT) is used to quantify the degree of association. Subsequently, the extracted lexical patterns can be further used in finding more new words. We design several measures to quantify the possibility of a candidate word being a new word, and the topranked words will be added into the seed word set for the next iteration. The process can be run iteratively until a stop condition is met. Note that we do not augment the pattern set (P) at each iteration, instead, we keep a fixed small number of patterns during iteration because this strategy produces optimal results. From linguistic perspectives, new sentiment words are commonly modified by adverbial words and thus can be extracted by lexical patterns. This is the reason why the algorithm will work. Our algorithm is in spirit to double propagation (Qiu et al., 2011), however, the differences are apparent in that: firstly, we use very lightweight linguistic information (except POS tags); secondly, our major contributions are to propose statistical measures to address the following key issues: first, to measure the utility of lexical patterns; second, to measure the possibility of a candidate word being a new word. 3.3 Measuring the Utility of a Pattern The first key issue is to quantify the utility of a pattern at each iteration. This can be measured by the association of a pattern to the current word set used in the algorithm. The likelihood ratio tests (Dunning, 1993) is used for this purpose. This association model has also been used to model association between opinion target words by (Hai et 533 Algorithm 1: New word detection algorithm Input: D: a large set of POS tagged posts Ws: a set of seed words kp: the number of patterns chosen at each iteration kc: the number of patterns in the candidate pattern set kw: the number of words added at each iteration K: the number of words returned Output: A list of ranked new words W 1 Obtain all lexical patterns using regular expressions on D; 2 Count the frequency of each lexical pattern and extract words matched by each pattern ; 3 Obtain top kc frequent patterns as candidate pattern set Pc and top 5,000 frequent words as candidate word set Wc ; 4 P = Φ; W=Ws; t = 0 ; 5 for |W| < K do 6 Use W to score each pattern in Pc with U(p) ; 7 P = {top kp patterns} ; 8 Use P to extract new words and if the words are in Wc, score them with F(w) ; 9 W = W ∪{top kw words} ; 10 Wc = Wc - W ; 11 Sort words in W with F(w) ; 12 Output the ranked list of words in W ; al., 2012). The LRT is well known for not relying critically on the assumption of normality, instead, it uses the asymptotic assumption of the generalized likelihood ratio. In practice, the use of likelihood ratios tends to result in significant improvements in text-analysis performance. 
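As a compact companion to Algorithm 1, the following Python sketch shows the pattern/word bootstrapping loop; pattern_utility and word_score stand for the measures U(p) and F(w) defined below, and the names and the simplified candidate handling are our own assumptions.

```python
# Simplified sketch of Algorithm 1: alternately re-rank candidate patterns
# against the current word set and candidate words against the chosen
# patterns, growing the word set by k_w words per iteration until K words
# have been collected.

def detect_new_words(seed_words, candidate_patterns, candidate_words,
                     pattern_utility, word_score, k_p, k_w, K):
    words = list(seed_words)
    remaining = [w for w in candidate_words if w not in words]
    patterns = list(candidate_patterns)[:k_p]
    while len(words) < K and remaining:
        # score each candidate pattern with U(p) against the current words
        patterns = sorted(candidate_patterns,
                          key=lambda p: pattern_utility(p, words),
                          reverse=True)[:k_p]
        # score the remaining candidate words with F(w) against the patterns
        remaining.sort(key=lambda w: word_score(w, patterns), reverse=True)
        words.extend(remaining[:k_w])
        remaining = remaining[k_w:]
    # final ranking of the collected words (step 11 of Algorithm 1)
    return sorted(words, key=lambda w: word_score(w, patterns), reverse=True)
```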
In our problem, LRT computes a contingency table of a pattern p and a word w, derived from the corpus statistics, as given in Table 3, where k1(w, p) is the number of documents in which w matches pattern p, k2(w, p̄) is the number of documents in which w occurs but p does not, k3(w̄, p) is the number of documents in which p occurs but w does not, and k4(w̄, p̄) is the number of documents containing neither p nor w.

Statistics   p           p̄
w            k1(w, p)    k2(w, p̄)
w̄            k3(w̄, p)    k4(w̄, p̄)

Table 3: Contingency table for the likelihood ratio test (LRT).

Based on the statistics shown in Table 3, the likelihood ratio test (LRT) model captures the statistical association between a pattern p and a word w with the following formula:

$$\mathrm{LRT}(p, w) = \log \frac{L(\rho_1, k_1, n_1) \cdot L(\rho_2, k_2, n_2)}{L(\rho, k_1, n_1) \cdot L(\rho, k_2, n_2)} \qquad (1)$$

where $L(\rho, k, n) = \rho^{k}(1-\rho)^{n-k}$, $n_1 = k_1 + k_3$, $n_2 = k_2 + k_4$, $\rho_1 = k_1/n_1$, $\rho_2 = k_2/n_2$, and $\rho = (k_1 + k_2)/(n_1 + n_2)$.

Thus, the utility of a pattern is measured as:

$$U(p) = \sum_{w_i \in W} \mathrm{LRT}(p, w_i) \qquad (2)$$

where W is the current word set used in the algorithm (see Algorithm 1).

3.4 Measuring the Possibility of Being New Words

Another key issue in the proposed algorithm is to quantify the possibility of a candidate word being a new word. We consider several factors for this purpose.

3.4.1 Likelihood Ratio Test

Very similar to the pattern utility measure, LRT can also be used to measure the association of a candidate word to a given pattern set:

$$\mathrm{LRT}(w) = \sum_{p_i \in P} \mathrm{LRT}(w, p_i) \qquad (3)$$

where P is the current pattern set used in the algorithm (see Algorithm 1) and pi is a lexical pattern. This measure only quantifies the association of a candidate word to the given pattern set; by itself it says nothing about whether the word is new. However, a new sentiment word should have close association with the lexical patterns, which has a linguistic interpretation: new sentiment words are commonly modified by adverbial words and thus should be matched frequently by lexical patterns. Our experiments in Section 4.3 show that this measure is an influential factor.

3.4.2 Left Pattern Entropy

If a candidate word is a new word, it will be used with more diversified lexical patterns, since the non-compositionality of a new word means that it can appear in many different linguistic scenarios. This can be measured by information entropy:

$$\mathrm{LPE}(w) = -\sum_{l_i \in L(P_c, w)} \frac{c(l_i, w)}{N(w)} \log \frac{c(l_i, w)}{N(w)} \qquad (4)$$

where L(Pc, w) is the set of left words of all patterns in Pc by which word w can be matched, c(li, w) is the number of times w is matched by patterns whose left word is li, and N(w) is the number of times w is matched by the patterns in Pc. Note that we use Pc instead of P because the latter set is very small, while computing entropy needs a large number of patterns. Tuning the size of Pc is discussed further in Section 4.4.

3.4.3 New Word Probability

Some words occur very frequently and can be widely matched by lexical patterns, but they are not new words. For example, "爱吃 (love to eat)" and "爱说 (love to talk)" can be matched by many lexical patterns, yet they are not new words because they lack non-compositionality. In such words, each single character has a high probability of being a word itself. We therefore design the following measure to capture this observation:

$$\mathrm{NWP}(w) = \prod_{i=1}^{n} \frac{p(w_i)}{1 - p(w_i)} \qquad (5)$$

where w = w1w2 . . .
wn, each wi is a single character, and p(wi) is the probability of the character wi being a word, as computed as follows: p(wi) = all(wi) −s(wi) all(wi) where all(wi) is the total frequency of wi, and s(wi) is the frequency of wi being a single character word. Obviously, in order to obtain the value of s(wi), some particular Chinese word segmentation tool is required. In this work, we resort to ICTCLAS (Zhang et al., 2003), a widely used tool in the literature. 3.4.4 Non-compositionality Measures New words are usually multi-word expressions, where a variety of statistical measures have been proposed to detect multi-word expressions. Thus, such measures can be naturally incorporated into our algorithm. The first measure is enhanced mutual information (EMI) (Zhang et al., 2009): EMI(w) = log2 F/N ∏n i=1 Fi−F N (6) where F is the number of posts in which a multiword expression w = w1w2 . . . wn occurs, Fi is the number of posts where wi occurs, and N is the total number of posts. The key idea of EMI is to measure word pair’s dependency as the ratio of its probability of being a multi-word to its probability of not being a multi-word. The larger the value, the more possible the expression will be a multi-word expression. The second measure we take into account is normalized multi-word expression distance (Bu et al., 2010), which has been proposed to measure the non-compositionality of multi-word expressions. NMED(w) = log|µ(w)| −log|ϕ(w)| logN −log|ϕ(w)| (7) where µ(w) is the set of documents in which all single words in w = w1w2 . . . wn co-occur, ϕ(w) is the set of documents in which word w occurs as a whole, and N is the total number of documents. Different from EMI, this measure is a strict distance metric, meaning that a smaller value indicates a larger possibility of being a multi-word expression. As can be seen from the formula, the key idea of this metric is to compute the ratio of the co-occurrence of all words in a multi-word expressions to the occurrence of the whole expression. 3.4.5 Configurations to Combine Various Factors Taking into account the aforementioned factors, we have different settings to score a new word, as follows: FLRT (w) = LRT(w) (8) FLP E(w) = LRT(w) ∗LPE(w) (9) FNW P (w) = LRT(w) ∗LPE(w) ∗NWP(w) (10) FEMI(w) = LRT(w) ∗LPE(w) ∗EMI(w) (11) FNMED(w) = LRT(w) ∗LPE(w) NMED(w) (12) 535 4 Experiment In this section, we will conduct the following experiments: first, we will compare our method to several baselines, and perform parameter tuning with extensive experiments; second, we will classify polarity of new sentiment words using two methods; third, we will demonstrate how new sentiment words will benefit sentiment classification. 4.1 Data Preparation We crawled 237,108,977 Weibo posts from http://www.weibo.com, the largest social website in China. These posts range from January of 2011 to December of 2012. The posts were then part-ofspeech tagged using a Chinese word segmentation tool named ICTCLAS (Zhang et al., 2003). Then, we asked two annotators to label the top 5,000 frequent words that were extracted by lexical patterns as described in Algorithm 1. The annotators were requested to judge whether a candidate word is a new word, and also to judge the polarity of a new word (positive, negative, and neutral). If there is a disagreement on either of the two tasks, discussions are required to make the final decision. The annotation led to 323 new words, among which there are 116 positive words, 112 negative words, and 95 neutral words3. 
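To connect the measures of Sections 3.4.4 and 3.4.5 to code, here is a small sketch of EMI (Eq. 6), NMED (Eq. 7), and the best-performing combined score FNMED (Eq. 12); the count arguments are assumed to come from a document-frequency index over the post collection, and LRT(w) and LPE(w) are supplied precomputed.

```python
import math

# F        : number of posts containing the whole candidate expression w
# F_parts  : per-part post counts F_i for the single words/characters of w
# N        : total number of posts
# cooc     : |mu(w)|, posts in which all parts of w co-occur
# Degenerate counts (e.g. F_i == F or F == N) would need smoothing in practice.

def emi(F, F_parts, N):
    denom = 1.0
    for F_i in F_parts:
        denom *= (F_i - F) / N
    return math.log2((F / N) / denom)

def nmed(cooc, F, N):
    # a strict distance: smaller means more likely a multi-word expression
    return (math.log(cooc) - math.log(F)) / (math.log(N) - math.log(F))

def f_nmed(lrt_w, lpe_w, cooc, F, N):
    # combined configuration of Eq. 12: LRT(w) * LPE(w) / NMED(w)
    return lrt_w * lpe_w / nmed(cooc, F, N)
```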
4.2 Evaluation Metric As our algorithm outputs a ranked list of words, we adapt average precision to evaluate the performance of new sentiment word detection. The metric is computed as follows: AP(K) = ∑K k=1 P(k) ∗rel(k) ∑K k=1 rel(k) where P(k) is the precision at cut-off k, rel(k) is 1 if the word at position k is a new word and 0 otherwise, and K is the number of words in the ranked list. A perfect list (all top K items are correct) has an AP value of 1.0. 4.3 Evaluation of Different Measures and Comparison to Baselines First, we assess the influence of likelihood ratio test, which measures the association of a word to the pattern set. As can be seen from Table 4, the association model (LRT) remarkably boosts the 3All the resources are available upon request. performance of new word detection, indicating LRT is a key factor for new sentiment word extraction. From linguistic perspectives, new sentiment words are commonly modified by adverbial words and thus should have close association with lexical patterns. Second, we compare different settings of our method to two baselines. The first one is enhanced mutual information (EMI) where we set F(w) = EMI(w) (Zhang et al., 2009) and the second baseline is normalized multi-word expression distance (NMED) (Bu et al., 2010) where we set F(w) = NMED(w). The results are shown in Figure 1. As can be seen, all the proposed measures outperform the two baselines (EMI and NMED) remarkably and consistently. The setting of FNMED produces the best performance. Adding NMED or EMI leads to remarkable improvements because of their capability of measuring non-compositionality of new words. Only using LRT can obtain a fairly good results when K is small, however, the performance drops sharply because it's unable to measure non-compositionality. Comparison between LRT + LPE (or LRT + LPE + NWP) and LRT shows that inclusion of left pattern entropy also boosts the performance apparently. However, the new word probability (NWP) has only marginal contribution to improvement. In the above experiments, we set kp = 5 (the number of patterns chosen at each iteration) and kw = 10 (the number of words added at each iteration), which is the optimal setting and will be discussed in the next subsection. And only one seed word "坑爹(reverse one's expectation)" is used. Figure 1: Comparative results of different measure settings. X-axis is the number of words returned (K), and Y-axis is average precision (AP(K)). 536 top K words ⇒ 100 200 300 400 500 LPE 0.366 0.324 0.286 0.270 0.259 LRT+LPE 0.743 0.652 0.613 0.582 0.548 LPE+NWP 0.467 0.400 0.350 0.330 0.320 LRT+LPE+NWP 0.755 0.680 0.612 0.571 0.543 LPE+EMI 0.608 0.551 0.519 0.486 0.467 LRT+LPE+EMI 0.859 0.759 0.717 0.662 0.632 LPE+NMED 0.749 0.690 0.641 0.612 0.576 LRT+LPE+NMED 0.907 0.808 0.741 0.723 0.699 Table 4: Results with vs. without likelihood ratio test (LRT). 4.4 Parameter Tuning Firstly, we will show how to obtain the optimal settings of kp and kw. The measure setting we take here is FNMED(w), as shown in Formula (12). Again, we choose only one seed word "坑 爹(reverse one's expectation)", and the number of words returned is set to K = 300. Results in Table 5 show that the performance drops consistently across different kw settings when the number of patterns increases. Note that at the early stage of Algorithm 1, larger kp (perhaps with noisy patterns) may lead to lower quality of new words; while larger kw (perhaps with noisy seed words) may lead to lower quality of lexical patterns. 
Therefore, we choose the optimal setting to small numbers, as kp = 5, kw = 10. Secondly, we justify whether the proposed algorithm is sensitive to the number of seed words. We set kp = 5 and kw = 10, and take FNMED as the weighting measure of new word. We experimented with only one seed word, two, three, and four seed words, respectively. The results in Table 6 show very stable performance when different numbers of seed words are chosen. It's interesting that the performance is totally the same with different numbers of seed words. By looking into the pattern set and the selected words at each iteration, we found that the pattern set (P) converges soon to the same set after a few iterations; and at the beginning several iterations, the selected words are almost the same although the order of adding the words is different. Since the algorithm will finally sort the words at step (11) and P is the same, the ranking of the words becomes all the same. Lastly, we need to decide the optimal number of patterns in Pc (that is, kc in Algorithm 1) because the set has been used in computing left pattern entropy, see Formula (4). Too small size of Pc may lead to insufficient estimation of left pattern entropy. Results in Table 7 shows that larger Pc decrease the performance, particularly when the number of words returned (K) becomes larger. Therefore, we set |Pc| = 100. 4.5 Polarity Prediction of New Sentiment Words In this section, we attempt to classifying the polarity of the annotated 323 new words. Two methods are adapted with different settings for this purpose. The first one is majority vote (MV), and the second one is pointwise mutual information, similar to (Turney and Littman, 2003). The majority vote method is formulated as below: MV (w) = ∑ wp∈P W #(w, wp) |PW| − ∑ wn∈NW #(w, wn) |NW| where PW and NW are a positive and negative set of emoticons (or seed words) respectively, and #(w, wp) is the co-occurrence count of the input word w and the item wp. The polarity is judged according to this rule: if MV (w) > th1, the word w is positive; if MV (w) < −th1 the word negative; otherwise neutral. The threshold th1 is manually tuned. And PMI is computed as follows: PMI(w) = ∑ wp∈P W PMI(w, wp) |PW| − ∑ wn∈NW PMI(w, wn) |NW| where PMI(x, y) = log2( Pr(x,y) Pr(x)∗Pr(y)), and Pr(·) denotes probability. The polarity is judged according to the rule: if PMI(w) > th2, w is positive; if PMI(w) < −th2 negative; otherwise neutral. The threshold th2 is manually tuned. As for the resources PW and NW, we have three settings. The first setting (denoted by 537 HHHHHH kw kp 2 3 4 5 10 20 50 5 0.753 0.738 0.746 0.741 0.741 0.734 0.715 10 0.753 0.738 0.746 0.741 0.741 0.728 0.712 15 0.753 0.738 0.746 0.741 0.754 0.734 0.718 20 0.763 0.738 0.744 0.749 0.749 0.735 0.717 Table 5: Parameter tuning results for kp and kw. The measure setting is FNMED(w), the seed word set is {"坑爹(reverse one's expectation)"}, and the number of words returned is K = 300. # seeds ⇒ 1 2 3 4 K=100 0.907 0.907 0.907 0.907 K=200 0.808 0.808 0.808 0.808 K=300 0.741 0.741 0.741 0.741 K=400 0.709 0.709 0.709 0.709 K=500 0.685 0.685 0.685 0.685 Table 6: Performance with different numbers of seed words. The measure setting is FNMED(w), and kp = 5, kw = 10. The seed words are chosen from Table 1. Large_Emo) is a set of most frequent 36 emoticons in which there are 21 positive and 15 negative emoticons respectively. The second one (denoted by Small_Emo) is a set of 10 emoticons, which are chosen from the 36 emoticons, as shown in Table 8. 
The third one (denoted by Opin_Words) is two sets of seed opinion words, where PW={ 高兴(happy),大方(generous),漂亮(beautiful), 善 良(kind), 聪明(smart)} and NW ={伤心(sad),小 气(mean),难看(ugly), 邪恶(wicked), 笨(stupid)}. The performance of polarity prediction is shown in Table 9. In two-class polarity classification, we remove neutral words and only make prediction with positive/negative classes. The first observation is that the performance of using emoticons is much better than that of using seed opinion words. We conjecture that this may be because new sentiment words are more frequently co-occurring with emoticons than with these opinion words. The second observation is that threeclass polarity classification is much more difficult than two-class polarity classification because many extracted new words are nouns such as "基 友(gay)","菇凉(girl)", and "盆友(friend)". Such nouns are more difficult to classify sentiment orientation. 4.6 Application of New Sentiment Words to Sentiment Classification In this section, we justify whether inclusion of new sentiment word would benefit sentiment classification. For this purpose, we randomly sampled and annotated 4,500 Weibo posts that contain at least one opinion word in the union of the Hownet 4 opinion lexicons and our annotated new words. We apply two models for polarity classification. The first model is a lexicon-based model (denoted by Lexicon) that counts the number of positive and negative opinion words in a post respectively, and classifies a post to be positive if there are more positive words than negative ones, and to be negative otherwise. The second model is a SVM model in which opinion words are used as feature, and 5-fold cross validation is conducted. We experiment with different settings of Hownet lexicon resources: • Hownet opinion words (denoted by Hownet): After removing some obviously inappropriate words, the left lexicons have 627 positive opinion words and 1,038 negative opinion words, respectively. • Compact Hownet opinion words (denoted by cptHownet): we count the frequency of the above opinion words on the training data and remove words whose document frequency is less than 2. This results in 138 positive words and 125 negative words. Then, we add into the above resources the labeled new polar words(denoted by NW, including 116 positive and 112 negative words) and the top 100 words produced by the algorithm (denoted by T100), respectively. Note that the lexicon-based model requires the sentiment orientation of each dictionary entry 5, we thus manually label the po4http://www.keenage.com/html/c_index.html. 5This is not necessary for the SVM model. All words in the top 100 words can be used as feature. 538 |Pc| ⇒ 50 100 200 300 400 500 K=100 0.907 0.905 0.916 0.916 0.888 0.887 K=200 0.808 0.810 0.778 0.776 0.766 0.764 K=300 0.741 0.731 0.722 0.726 0.712 0.713 K=400 0.709 0.708 0.677 0.675 0.656 0.655 K=500 0.685 0.683 0.653 0.646 0.626 0.627 Table 7: Tuning the number of patterns in Pc. The measure setting is FNMED(w), kp = 5, kw = 10, and the seed word set is {"坑爹(reverse one's expectation)"}. Emoticon Polarity Emoticon Polarity positive negative positive negative positive negative positive negative positive negative Table 8: The ten emoticons used for polarity prediction. 
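As a rough illustration of the two polarity classifiers compared in this section, the sketch below implements the majority-vote and PMI decision rules described above. The co-occurrence and probability estimators (`cooc`, `p_joint`, `p_marg`) are placeholders for whatever corpus statistics are actually used, and the thresholds stand in for the manually tuned th1 and th2.

```python
import math

def mv_polarity(word, pos_seeds, neg_seeds, cooc, th1):
    """Majority vote: mean co-occurrence with positive seeds (emoticons or
    opinion words) minus mean co-occurrence with negative seeds."""
    score = (sum(cooc(word, s) for s in pos_seeds) / len(pos_seeds)
             - sum(cooc(word, s) for s in neg_seeds) / len(neg_seeds))
    return "positive" if score > th1 else "negative" if score < -th1 else "neutral"

def pmi(x, y, p_joint, p_marg):
    """Pointwise mutual information: log2( Pr(x, y) / (Pr(x) * Pr(y)) )."""
    return math.log2(p_joint(x, y) / (p_marg(x) * p_marg(y)))

def pmi_polarity(word, pos_seeds, neg_seeds, p_joint, p_marg, th2):
    """PMI rule: average PMI against the positive and negative seed sets."""
    score = (sum(pmi(word, s, p_joint, p_marg) for s in pos_seeds) / len(pos_seeds)
             - sum(pmi(word, s, p_joint, p_marg) for s in neg_seeds) / len(neg_seeds))
    return "positive" if score > th2 else "negative" if score < -th2 else "neutral"
```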
Methods ⇒ Majority vote PMI Two-class polarity classification Large_Emo 0.861 0.865 Small_Emo 0.846 0.851 Opin_Words 0.697 0.654 Three-class polarity classification Large_Emo 0.598 0.632 Small_Emo 0.551 0.635 Opin_Words 0.449 0.486 Table 9: The accuracy of two/three-class polarity classification. larity of all top 100 words (we did NOT remove incorrect new word). This results in 52 positive and 34 negative words. Results in Table 10 show that inclusion of new words in both models improves the performance remarkably. In the setting of the original lexicon (Hownet), both models obtain 2-3% gains from the inclusion of new words. Similar improvement is observed in the setting of the compact lexicon. Note, that T100 is automatically obtained from Algorithm 1 so that it may contain words that are not new sentiment words, but the resource also improves performance remarkably. 5 Conclusion In order to extract new sentiment words from large-scale user-generated content, this paper proposes a fully unsupervised, purely data-driven, and # Pos/Neg Lexicon SVM Hownet 627/1,038 0.737 0.756 Hownet+NW 743/1,150 0.770 0.779 Hownet+T100 679/1,172 0.761 0.774 cptHownet 138/125 0.738 0.758 cptHownet+NW 254/237 0.774 0.782 cptHownet+T100 190/159 0.764 0.775 Table 10: The accuracy of polarity classfication of Weibo post with/without new sentiment words. NW includes 116/112 positive/negative words, and T100 contains 52/34 positive/negative words. almost knowledge-free (except POS tags) framework. We design statistical measures to quantify the utility of a lexical pattern and to measure the possibility of a word being a new word, respectively. The method is almost free of linguistic resources (except POS tags), and does not rely on elaborated linguistic rules. We conduct extensive experiments to reveal the influence of different statistical measures in new word finding. Comparative experiments show that our proposed method outperforms baselines remarkably. Experiments also demonstrate that inclusion of new sentiment words benefits sentiment classification definitely. From linguistic perspectives, our framework is capable to extract adjective new words because the lexical patterns usually modify adjective words. As future work, we are considering how to extract other types of new sentiment words, such as nounal new words that can express sentiment. Acknowledgments This work was partly supported by the following grants from: the National Basic Research Program (973 Program) under grant No. 2012CB316301 and 2013CB329403, the National Science Foundation of China project under grant No. 61332007 and No. 60803075, and the Beijing Higher Education Young Elite Teacher Project. 539 References Shlomo Argamon, Ido Dagan, and Yuval Krymolowski. 1998. A memory-based approach to learning shallow natural language patterns. In Proceedings of the 17th International Conference on Computational Linguistics - Volume 1, COLING '98, pages 67--73, Stroudsburg, PA, USA. Association for Computational Linguistics. Fan Bu, Xiaoyan Zhu, and Ming Li. 2010. Measuring the non-compositionality of multiword expressions. In Proceedings of the 23rd International Conference on Computational Linguistics, COLING '10, pages 116--124, Stroudsburg, PA, USA. Association for Computational Linguistics. Keh-Jiann Chen and Wei-Yun Ma. 2002. Unknown word extraction for chinese documents. In Proceedings of the 19th International Conference on Computational Linguistics - Volume 1, COLING '02, pages 1--7, Stroudsburg, PA, USA. 
Association for Computational Linguistics. Aitao Chen. 2003. Chinese word segmentation using minimal linguistic knowledge. In Proceedings of the Second SIGHAN Workshop on Chinese Language Processing - Volume 17, SIGHAN '03, pages 148--151, Stroudsburg, PA, USA. Association for Computational Linguistics. Yaacov Choueka. 1988. Looking for needles in a haystack or locating interesting collocation expressions in large textual databases. In Proceeding of the RIAO'88 Conference on User-Oriented Content-Based Text and Image Handling, pages 21--24. Kenneth Ward Church and Patrick Hanks. 1990. Word association norms, mutual information, and lexicography. Comput. Linguist., 16(1): 22--29, March. J Ferreira da Silva and G Pereira Lopes. 1999. A local maxima method and a fair dispersion normalization for extracting multi-word units from corpora. In Sixth Meeting on Mathematics of Language, pages 369--381. Gaël Dias, Sylvie Guilloré, and José Gabriel Pereira Lopes. 2000. Mining textual associations in text corpora. 6th ACM SIGKDD Work. Text Mining. Ted Dunning. 1993. Accurate methods for the statistics of surprise and coincidence. Comput. Linguist., 19(1):61--74, March. Zhen Hai, Kuiyu Chang, and Gao Cong. 2012. One seed to find them all: Mining opinion features via association. In Proceedings of the 21st ACM International Conference on Information and Knowledge Management, CIKM '12, pages 255--264, New York, NY, USA. ACM. John S Justeson and Slava M Katz. 1995. Technical terminology: some linguistic properties and an algorithm for identification in text. Natural language engineering, 1(1):9--27. Hongqiao Li, Chang-Ning Huang, Jianfeng Gao, and Xiaozhong Fan. 2005. The use of svm for chinese new word identification. In Natural Language Processing--IJCNLP 2004, pages 723-732. Springer. Pavel Pecina. 2005. An extensive empirical study of collocation extraction methods. In Proceedings of the ACL Student Research Workshop, ACLstudent '05, pages 13--18, Stroudsburg, PA, USA. Association for Computational Linguistics. Fuchun Peng, Fangfang Feng, and Andrew McCallum. 2004. Chinese segmentation and new word detection using conditional random fields. In Proceedings of the 20th International Conference on Computational Linguistics, COLING '04, Stroudsburg, PA, USA. Association for Computational Linguistics. Guang Qiu, Bing Liu, Jiajun Bu, and Chun Chen. 2011. Opinion word expansion and target extraction through double propagation. Computational linguistics, 37(1):9--27. Patrick Schone and Daniel Jurafsky. 2001. Is knowledge-free induction of multiword unit dictionary headwords a solved problem. In Proc. of the 6th Conference on Empirical Methods in Natural Language Processing (EMNLP 2001), pages 100--108. Richard Sproat and Thomas Emerson. 2003. The first international chinese word segmentation bakeoff. In Proceedings of the Second SIGHAN Workshop on Chinese Language Processing - Volume 17, SIGHAN '03, pages 133--143, Stroudsburg, PA, USA. Association for Computational Linguistics. Xu Sun, Houfeng Wang, and Wenjie Li. 2012. Fast online training with frequency-adaptive learning rates for chinese word segmentation and new word detection. In Proceedings of the 50th Annual Meeting of the Association for Computational Linguistics: Long Papers Volume 1, ACL '12, pages 253--262, Stroudsburg, PA, USA. Association for Computational Linguistics. Beijing Thesaurus Research Center. 2003. Xinhua Xin Ciyu Cidian. Commercial Press, Beijing. Peter D. Turney and Michael L. Littman. 2003. 
Measuring praise and criticism: Inference of semantic orientation from association. ACM Trans. Inf. Syst., 21(4):315--346, October. Hua-Ping Zhang, Hong-Kui Yu, De-Yi Xiong, and Qun Liu. 2003. Hhmm-based chinese lexical analyzer ictclas. In Proceedings of the Second SIGHAN Workshop on Chinese Language Processing 540 Volume 17, SIGHAN '03, pages 184--187, Stroudsburg, PA, USA. Association for Computational Linguistics. Wen Zhang, Taketoshi Yoshida, Xijin Tang, and TuBao Ho. 2009. Improving effectiveness of mutual information for substantival multiword expression extraction. Expert Systems with Applications, 36(8):10919--10930. Yan Zhang, Maosong Sun, and Yang Zhang. 2010. Chinese new word detection from query logs. In Advanced Data Mining and Applications, pages 233--243. Springer. Yabin Zheng, Zhiyuan Liu, Maosong Sun, Liyun Ru, and Yang Zhang. 2009. Incorporating user behaviors in new word detection. In Proceedings of the 21st International Jont Conference on Artifical Intelligence, IJCAI'09, pages 2101--2106, San Francisco, CA, USA. Morgan Kaufmann Publishers Inc. GuoDong Zhou. 2005. A chunking strategy towards unknown word detection in chinese word segmentation. In Natural Language Processing--IJCNLP 2005, pages 530--541. Springer. 541
2014
50
Proceedings of the 52nd Annual Meeting of the Association for Computational Linguistics, pages 542–551, Baltimore, Maryland, USA, June 23-25 2014. c⃝2014 Association for Computational Linguistics ReNew: A Semi-Supervised Framework for Generating Domain-Specific Lexicons and Sentiment Analysis Zhe Zhang Department of Computer Science North Carolina State University Raleigh, NC 27695-8206 [email protected] Munindar P. Singh Department of Computer Science North Carolina State University Raleigh, NC 27695-8206 [email protected] Abstract The sentiment captured in opinionated text provides interesting and valuable information for social media services. However, due to the complexity and diversity of linguistic representations, it is challenging to build a framework that accurately extracts such sentiment. We propose a semi-supervised framework for generating a domain-specific sentiment lexicon and inferring sentiments at the segment level. Our framework can greatly reduce the human effort for building a domainspecific sentiment lexicon with high quality. Specifically, in our evaluation, working with just 20 manually labeled reviews, it generates a domain-specific sentiment lexicon that yields weighted average FMeasure gains of 3%. Our sentiment classification model achieves approximately 1% greater accuracy than a state-of-the-art approach based on elementary discourse units. 1 Introduction Automatically extracting sentiments from usergenerated opinionated text is important in building social media services. However, the complexity and diversity of the linguistic representations of sentiments make this problem challenging. High-quality sentiment lexicons can improve the performance of sentiment analysis models over general-purpose lexicons (Choi and Cardie, 2009). More advanced methods such as (Kanayama and Nasukawa, 2006) adopt domain knowledge by extracting sentiment words from the domain-specific corpus. However, depending on the context, the same word can have different polarities even in the same domain (Liu, 2012). In respect to sentiment classification, Pang et al. (2002) infer the sentiments using basic features, such as bag-of-words. To capture more complex linguistic phenomena, leading approaches (Nakagawa et al., 2010; Jo and Oh, 2011; Kim et al., 2013) apply more advanced models but assume one document or sentence holds one sentiment. However, this is often not the case. Sentiments can change within one document, one sentence, or even one clause. Also, existing approaches infer sentiments without considering the changes of sentiments within or between clauses. However, these changes can be successfully exploited for inferring fine-grained sentiments. To address the above shortcomings of lexicon and granularity, we propose a semi-supervised framework named ReNew. (1) Instead of using sentences, ReNew uses segments as the basic units for sentiment classification. Segments can be shorter than sentences and therefore help capture fine-grained sentiments. (2) ReNew leverages the relationships between consecutive segments to infer their sentiments and automatically generates a domain-specific sentiment lexicon in a semi-supervised fashion. (3) To capture the contextual sentiment of words, ReNew uses dependency relation pairs as the basic elements in the generated sentiment lexicon. Sentiment Segment 1 2 3 4 5 NEG NEU POS transition cue transition cue Figure 1: Segments in a Tripadvisor review. 
Consider a part of a review from Tripadvisor.1 We split it into six segments with sentiment labels. 1http://www.tripadvisor.com/ShowUserReviews-g32655d81765-r100000013 542 “. . . (1: POS) The hotel was clean and comfortable. (2: POS) Service was friendly (3: POS) even providing us a late-morning check-in. (4: POS) The room was quiet and comfortable, (5: NEG) but it was beginning to show a few small signs of wear and tear. . . . ” Figure 1 visualizes the sentiment changes within the text. The sentiment remains the same across Segments 1 to 4. The sentiment transition between Segments 4 and 5 is indicated by the transition cue “but”—which signals conflict and contradiction. Assuming we know Segment 4 is positive, given the fact that Segment 5 starts with “but,” we can infer with high confidence that the sentiment in Segment 5 changes to neutral or negative even without looking at its content. After classifying the sentiment of Segment 5 as NEG, we associate the dependency relation pairs {“sign”, “wear”} and {“sign”, “tear”} with that sentiment. ReNew can greatly reduce the human effort for building a domain-specific sentiment lexicon with high quality. Specifically, in our evaluation on two real datasets, working with just 20 manually labeled reviews, ReNew generates a domainspecific sentiment lexicon that yields weighted average F-Measure gains of 3%. Additionally, our sentiment classification model achieves approximately 1% greater accuracy than a state-of-theart approach based on elementary discourse units (Lazaridou et al., 2013). The rest of this paper is structured as follows. Section 2 introduces some essential background. Section 3 illustrates ReNew. Section 4 presents our experiments and results. Section 5 reviews some related work. Section 6 concludes this paper and outlines some directions for future work. 2 Background Let us introduce some of the key terminology used in ReNew. A segment is a sequence of words that represents at most one sentiment. A segment can consist of multiple consecutive clauses, up to a whole sentence. Or, it can be shorter than a clause. A dependency relation defines a binary relation that describes whether a pairwise syntactic relation among two words holds in a sentence. In ReNew, we exploit the Stanford typed dependency representations (de Marneffe et al., 2006) that use triples to formalize dependency relations. A domain-specific sentiment lexicon contains three lists of dependency relations, associated respectvely with positive, neutral, or negative sentiment. Given a set of reviews, the tasks of sentiment analysis in ReNew are (1) splitting each review into segments, (2) associating each segment with a sentiment label (positive, neutral, negative), and (3) automatically generating a domainspecific sentiment lexicon. We employ Conditional Random Fields (Lafferty et al., 2001) to predict the sentiment label for each segment. Given a sequence of segments ¯x = (x1, · · · , xn) and a sequence of sentiment labels ¯y = (y1, · · · , yn), the CRFs model p(¯y|¯x) as follows. p(¯y|¯x) = 1 Z(¯x) exp J X j (ωj · Fj(¯x, ¯y)) Fj(¯x, ¯y) = n X i=1 fj(yi−1, yi, ¯x, i) where ω is a set of weights learned in the training process to maximize p(¯y|¯x). Z(¯x) is a normalization constant that is the sum of all possible label sequences. And, Fj is a feature function that sums fj over i ∈(1, n), where n is the length of ¯y, and fj can have arbitrary dependencies on the observation sequence ¯x and neighboring labels. 
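A compact sketch of the linear-chain CRF scoring defined above may be useful: p(ȳ|x̄) is the exponentiated weighted sum of feature functions, normalised over all label sequences. This is a brute-force illustrative version, not ReNew's implementation; the partition function is enumerated directly (only feasible for short reviews), and the example feature function is a hypothetical one inspired by the transition-cue behaviour just discussed.

```python
import math
from itertools import product

LABELS = ("POS", "NEG", "NEU")

def crf_score(segments, labels, feature_fns, weights):
    """Unnormalised score: sum_j w_j * F_j(x, y), where F_j sums f_j over positions."""
    return sum(w * sum(f(labels[i - 1] if i > 0 else None, labels[i], segments, i)
                       for i in range(len(segments)))
               for w, f in zip(weights, feature_fns))

def crf_prob(segments, labels, feature_fns, weights):
    """p(y | x) = exp(score(x, y)) / Z(x); Z(x) is enumerated exhaustively here."""
    z = sum(math.exp(crf_score(segments, list(y), feature_fns, weights))
            for y in product(LABELS, repeat=len(segments)))
    return math.exp(crf_score(segments, labels, feature_fns, weights)) / z

def f_contrast_flip(prev_label, label, segments, i):
    """Example feature: a segment opening with a contrast cue after a positive
    segment tends not to stay positive."""
    return 1.0 if (prev_label == "POS" and label != "POS"
                   and segments[i].lower().startswith("but")) else 0.0
```

For the review in Figure 1, for instance, a large positive weight on `f_contrast_flip` raises the probability of the positive-to-negative transition between Segments 4 and 5.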
3 Framework Bootstrapping Process Sentiment Labeling or Learner Retraining Seed Information Lexicon Generator Domain Specific Lexicon General Lexicon Segmentation Labeled Data Unlabeled Data Figure 2: The ReNew framework schematically. Figure 2 illustrates ReNew. Its inputs include 543 a general sentiment lexicon and a small labeled training dataset. We use a general sentiment lexicon and the training dataset as prior knowledge to build the initial learners. On each iteration in the bootstrapping process, additional unlabeled data is first segmented. Second, the learners predict labels for segments based on current knowledge. Third, the lexicon generator determines which newly learned dependency relation triples to promote to the lexicon. At the end of each iteration, the learners are retrained via the updated lexicon so as to classify better on the next iteration. After labeling all of the data, we obtain the final version of our learners along with a domain-specific lexicon. 3.1 Rule-Based Segmentation Algorithm Algorithm 1 Rule-based segmentation. Require: Review dataset T 1: for all review r in T do 2: Remove HTML tags 3: Expand typical abbreviations 4: Mark special name-entities 5: for all sentence m in r do 6: while m contains a transition cue and m is not empty do 7: Extract subclause p that contains the transition cue 8: Add p as segment s into segment list 9: Remove p from m 10: end while 11: Add the remaining part in m as segment s into segment list 12: end for 13: end for The algorithm starts with a review dataset T. Each review r from dataset T is first normalized by a set of hard-coded rules (lines 2–4) to remove unnecessary punctuations and HTML tags, expand typical abbreviations, and mark special name entities (e.g., replace a URL by #LINK# and replace a monetary amount “$78.99” by #MONEY#). After the normalization step, it splits each review r into sentences, and each sentence into subclauses (lines 6–10) provided transition cues occur. In effect, the algorithm converts each review into a set of segments. Note that ReNew captures and uses the sentiment changes. Therefore, our segmentation algorithm considers only two specific types of transition cues including contradiction and emphasis. 3.2 Sentiment Labeling ReNew starts with a small labeled training set. Knowledge from this initial training set is not sufficient to build an accurate sentiment classification model or to generate a domain-specific sentiment lexicon. Unlabeled data contains rich knowledge, and it can be easily obtained. To exploit this resource, on each iteration, the sentiment labeling component, as shown in Figure 3, labels the data by using multiple learners and a label integrator. We have developed a forward (FR) and a backward relationship (BR) learner to learn relationships among segments. Sentiment Labeling Label Integrator reverse order Forward Relationship Learner Backward Relationship Learner Unlabeled segments Labeled segments Figure 3: Sentiment labeling. 3.2.1 FR and BR Learners The FR learner learns the relationship between the current segment and the next. Given the sentiment label and content of a segment, it tries to find the best possible sentiment label of the next segment. The FR Learner tackles the following situation where two segments are connected by a transition word, but existing knowledge is insufficient to infer the sentiment of the second segment. 
For instance, consider the following review sentence.2 (1) The location is great, (2) but the staff was pretty ho-hum about everything from checking in, to AM hot coffee, to PM bar. The sentence contains two segments. We can easily infer the sentiment polarity of Segment 1 based on the word “great” that is commonly included in many general sentiment lexicons. For Segment 2, without any context information, it is difficult to infer its sentiment. Although the 2http://www.tripadvisor.com/ShowUserReviews-g60763d93589-r10006597 544 word “ho-hum” indicates a negative polarity, it is not a frequent word. However, the conjunction “but” clearly signals a contrast. So, given the fact that the former segment is positive, a pretrained FR learner can classify the latter as negative. The Backward Relationship (BR) learner does the same but with the segments in each review in reverse order. 3.2.2 Label Integrator Given the candidate sentiment labels suggested by the two learners, the label integrator first selects the label with confidence greater than or equal to a preset threshold. Segments are left unlabeled if their candidate labels belong to mutually exclusive categories with the same confidence. 3.3 Lexicon Generator In each iteration, after labeling a segment, the lexicon generator identifies new triples automatically. As shown in Figure 4, this module contains two parts: a Triple Extractor and a Lexicon Integrator. For each sentiment, the Triple Extractor (TE) extracts candidate dependency relation triples using a novel rule-based approach. The Lexicon Integrator (LI) evaluates the proposed candidates and promotes the most supported candidates to the corresponding sentiment category in the domainspecific lexicon. Lexicon Generator Triple Extractor Lexicon Integrator Domain Specific Lexicon Labeled segments Figure 4: Lexicon generator module. 3.3.1 Triple Extractor (TE) The TE follows the steps below, for segments that contain only one clause, as demonstrated in Figure 5 for “The staff was slow and definitely not very friendly.” The extracted triples are root nsubj(slow, staff), nsubj(slow, staff), and nsubj(not friendly, staff). 1. Generate a segment’s dependency parse tree. 2. Identify the root node of each clause in the segment. 3. Remove all triples except those marked E in Table 1. 4. Apply the rules in Table 2 to add or modify triples. 5. Suggest the types of triples marked L in Table 1 to the lexicon integrator. Table 1: Dependency relation types used in extracting (E) and domain-specific lexicon (L). Types Explanation E L amod adjectival modifier √ √ acomp adjectival complement √ √ nsubj nominal subject √ √ neg negation modifier √ conj and words coordinated by “and” √ or similar prep with words coordinated by “with” √ root root node √ root amod amod root node √ root acomp acomp root node √ root nsubj nsubj root node √ neg pattern “neg” pattern √ Table 1 describes all seven types of triples used in the domain-specific lexicon. Among them, amod, acomp, and nsubj are as in (de Marneffe et al., 2006). And, root amod captures the root node of a sentence when it also appears in the adjectival modifier triple, similarly for root acomp and root nsubj. We observe that the word of the root node is often related to the sentiment of a sentence and this is especially true when this word also appears in the adjectival modifier, adjectival complement, or negation modifier triple. Zhang et al. 
(2010) propose the no pattern that describes a word pair whose first word is “No” followed by a noun or noun phrase. They show that this pattern is a useful indicator for sentiment analysis. In our dataset, in addition to “No,” we observe the frequent usage of “Nothing” followed by an adjective. For example, users may express a negative feeling about a hotel using sentence such as “Nothing special.” Therefore, we create the neg pattern to capture a larger range of possible word pairs. In ReNew, neg pattern is “No” or “Nothing” followed by a noun or noun phrase or an adjective. 3.3.2 Lexicon Integrator (LI) The Lexicon Integrator promotes candidate triples with a frequency greater than or equal to a preset 545 The staff was slow nsubj definitely not very friendly det cop advmod advmod neg conj_and root staff slow nsubj not friendly neg conj_and root staff slow nsubj not_friendly conj_and root staff slow nsubj not_friendly root (a) (b) (c) (d) nsubj Figure 5: Extracting sentiment triples from a segment that contains one clause. (a) The initial dependency parse tree. (b) Remove nonsentiment triples. (c) Handle negation triples. (d) Build relationships. threshold. The frequency list is updated in each iteration. The LI first examines the prior knowledge represented as an ordered list of the governors of all triples, each is attached with an ordered list of its dependents. Then, based on the triples promoted in this iteration, the order of the governors and their dependents is updated. Triples are not promoted if their governors or dependents appear in a predetermined list of stopwords. The LI promotes triples by respecting mutual exclusion and the existing lexicon. In particular, it does not promote triples if they exist in multiple sentiment categories or if they already belong to a different sentiment category. Finally, for each sentiment, we obtain seven sorted lists corresponding to amod, acomp, nsubj, root amod, root acomp, root nsubj, and neg pattern. These lists form the domain-specific sentiment lexicon. Table 2: Rules for extracting sentiment triples. Rule Function Condition Result R1 Handle Negation word wi; wi = wdep + “ ′′ neg(wgov, wdep); + wi wi = wgov; R2 Build Relationships word wi and wj; amod(wgov, wi) (conj and, amod) conj and(wi, wj); amod(wgov, wj) amod(wgov, wi); R3 Build Relationships word wi and wj; acomp(wgov, wi) (conj and, acomp) conj and(wi, wj); acomp(wgov, wj) acomp(wgov, wi); R4 Build Relationships word wi and wj; nsubj(wi, wdep) (conj and, nsubj) conj and(wi, wj); nsubj(wj, wdep) nsubj(wi, wdep); 3.4 Learner Retraining At the end of each iteration, ReNew retrains each learner as shown in Figure 6. Newly labeled segments are selected by a filter. Then, given an updated lexicon, learners are retrained to perform better on the next iteration. Detailed description of the filter and learner are presented below. 3.4.1 Filter The filter seeks to prevent labeling errors from accumulating during bootstrapping. In ReNew, newly acquired training samples are segments with labels that are predicted by old learners. Each predicted label is associated with a confidence value. The filter is applied to select those labeled segments with confidence greater than or equal to a preset threshold. Learner Retraining Filter Domain Specific Lexicon Learner Feature Extractor Classification Model Labeled segments Figure 6: Retrain a relationship learner. 
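The promotion logic of the Lexicon Integrator described above can be sketched as follows. The triple representation (relation, governor, dependent) and the container types are assumptions made for illustration; the frequency threshold, stopword filter, and mutual-exclusion checks mirror the constraints described in Section 3.3.2, and the default threshold reflects the setting reported later in Section 4.

```python
def promote_triples(candidates, lexicon, stopwords, min_freq=4):
    """Promote candidate dependency triples into the domain-specific lexicon.

    candidates: sentiment -> {(relation, governor, dependent): frequency}
    lexicon:    sentiment -> set of triples already promoted
    """
    for sentiment, freqs in candidates.items():
        for triple, freq in freqs.items():
            _, governor, dependent = triple
            if freq < min_freq:
                continue                                   # frequency threshold
            if governor in stopwords or dependent in stopwords:
                continue                                   # stopword filter
            # Mutual exclusion: skip triples proposed for several sentiments or
            # already present under a different sentiment category.
            if any(s != sentiment and triple in other
                   for s, other in candidates.items()):
                continue
            if any(s != sentiment and triple in lexicon[s] for s in lexicon):
                continue
            lexicon[sentiment].add(triple)
    return lexicon
```

The sketch collapses ReNew's per-relation sorted lists into plain sets; only the promotion decisions themselves are modelled.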
3.4.2 Learner As Section 3.2 describes, ReNew uses learners to capture different types of relationships among segments to classify sentiment by leveraging these relationships. Each learner contains two components: a feature extractor and a classification model. To train a learner, the feature extractor first converts labeled segments into feature vectors 546 Table 3: A list of transition types used in ReNew. Transition Types Examples Agreement, Addition, and Similarity also, similarly, as well as, ... Opposition, Limitation, and Contradiction but, although, in contrast, ... Cause, Condition, and Purpose if, since, as/so long as, .. . Examples, Support, and Emphasis including, especially, such as, ... Effect, Consequence, and Result therefore, thus, as a result .. . Conclusion, Summary, and Restatement overall, all in all, to sum up, ... Time, Chronology, and Sequence until, eventually, as soon as, ... for training a CRF-based sentiment classification model. The feature extractor generates five kinds of features as below. Grammar: part-of-speech tag of every word, the type of phrases and clauses (if known). Opinion word: To exploit a general sentiment lexicon, we use two binary features indicating the presence or absence of a word in the positive or negative list in a general sentiment lexicon. Dependency relation: The lexicon generated by ReNew uses the Stanford typed dependency representation as its structure. Transition cue: For tracking the changes of the sentiment, we exploit seven types of transition cues, as shown in Table 3. Punctuation, special name-entity, and segment position: Some punctuation symbols, such as “!”, are reliable carriers of sentiments. We mark special named-entities, such as time, money, and so on. In addition, we use segment positions (beginning, middle, and end) in reviews as features. 4 Experiments To assess ReNew’s effectiveness, we prepare two hotel review datasets crawled from Tripadvisor. One dataset contains a total of 4,017 unlabeled reviews regarding 802 hotels from seven US cities. The reviews are posted by 340 users, each of whom contributes at least ten reviews. The other dataset contains 200 reviews randomly selected from Tripadvisor. We collected ground-truth labels for this dataset by inviting six annotators in two groups of three. Each group labeled the same 100 reviews. We obtained the labels for each segment consist as positive, neutral, or negative. Fleiss’ kappa scores for the two groups were 0.70 and 0.68, respectively, indicating substantial agreement between our annotators. The results we present in the remainder of this section rely upon the following parameter values. The confidence thresholds used in the Label Integrator and filter are both set to 0.9 for positive labels and 0.7 for negative and neutral labels. The minimum frequency used in the Lexicon Integrator for selecting triples is set to 4. 4.1 Feature Function Evaluation Our first experiment evaluates the effects of different combinations of features. To do this, we first divide all features into four basic feature sets: T (transition cues), P (punctuations, special nameentities, and segment positions), G (grammar), and OD (opinion words and dependency relations). We train 15 sentiment classification models using all basic features and their combinations. Figure 7 shows the results of a 10-fold cross validation on the 200-review dataset (light grey bars show the accuracy of the model trained without using transition cue features). 
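As a rough sketch of how the five feature families of Section 3.4.2 might be realised for a single segment, consider the following. The feature names, inputs, and the cue dictionary are all illustrative rather than ReNew's actual feature templates.

```python
def extract_features(tokens, pos_tags, dep_triples, position,
                     pos_words, neg_words, transition_cues):
    """Sparse features for one segment: grammar, opinion words, dependency
    relations, transition cues, and punctuation/position indicators."""
    feats = {}
    for tok, tag in zip(tokens, pos_tags):
        feats[f"pos={tag}"] = feats.get(f"pos={tag}", 0) + 1        # grammar
        low = tok.lower()
        if low in pos_words:
            feats["general_lexicon_positive"] = 1                   # opinion word
        if low in neg_words:
            feats["general_lexicon_negative"] = 1
        if low in transition_cues:
            feats[f"cue_type={transition_cues[low]}"] = 1           # transition cue
    for rel, gov, dep in dep_triples:
        feats[f"dep={rel}({gov},{dep})"] = 1                        # dependency relation
    if "!" in tokens:
        feats["exclamation"] = 1                                    # punctuation
    feats[f"segment_position={position}"] = 1                       # beginning/middle/end
    return feats
```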
0.5 0.52 0.54 0.56 0.58 0.6 0.62 0.64 0.66 0.68 OD+G+P OD+P OD+G G+P OD G P T Accuracy Feature w/o transition cues (T) w/ transition cues (T) Figure 7: Accuracy using different features. The feature OD yields the best accuracy, followed by G, P, and T. Although T yields the worst accuracy, incorporating it improves the resulting accuracy of the other features, as shown by the dark grey bars. In particular, the accuracy of OD is markedly improved by adding T. The model trained using all the feature sets yields the best accuracy. 4.2 Relationship Learners Evaluation Our second experiment evaluates the impact of the relationship learners and the label integrator. To this end, we train and compare sentiment classification models using three configurations. The first configuration (FW-L) uses only the FR learner; the second (BW-L) only the BR learner. ALL-L uses both the FR and BR learners, together with a label integrator. We evaluate them with 10-fold cross 547 validation on the 200-review dataset. Accuracy Macro F-score Micro F-score 0.46 0.48 0.5 0.52 0.54 0.56 0.58 0.6 0.62 0.64 0.66 0.68 FW-L BW-L Both Figure 8: Comparison among the learners. Figure 8 reports the accuracy, macro F-score, and micro F-score. It shows that the BR learner produces better accuracy and a micro F-score than the FR learner but a slightly worse macro F-score. Jointly considering both learners with the label integrator achieves better results than either alone. The results demonstrate the effectiveness of our sentiment labeling component. 4.3 Domain-Specific Lexicon Assessment Our third experiment evaluates the quality of the domain-specific lexicon automatically generated by ReNew. To do this, we first transform each of the 200 labeled reviews into feature vectors. Then we retrain Logistic Regression models using WEKA (Hall et al., 2009). Note that we use only the features extracted from the lexicons themselves. This is important because to compare only the lexicons’ impact on sentiment classification, we need to avoid the effect of other factors, such as syntax, transition cues, and so on. We compare models trained using (1) our domain-specific lexicon, (2) Affective Norms for English Words (ANEW) (Bradley and Lang, 1999), and (3) Linguistic Inquiry and Word Count (LIWC) (Tausczik and Pennebaker, 2010). ANEW and LIWC are well-known general sentiment lexicons. Table 4 shows the results obtained by 10-fold cross validation. Each weighted average is computed according to the number of segments in each class. The table shows the significant advantages of the lexicon generated by ReNew. ANEW achieves the highest recall for the positive class, but the lowest recalls in the negative and neutral classes. Regarding the neutral class, both ANEW and LIWC achieve poor results. The weighted average measures indicate our lexicon has the highest overall quality. Our domain-specific lexicon contains distinguishable aspects associated with sentiment words. For example, the aspect “staff” is associated with positive words (e.g., “nice,” “friendli,” “help,” “great,” and so on) and negative words (e.g., “okai,” “anxiou,” “moodi,” “effici,” and so on). We notice that some positive words also occur on the negative side. This may be for two reasons. First, some sentences that contain positive words may convey a negative sentiment, such as “The staff should be more efficient.” Second, the bootstrapping process in ReNew may introduce some wrong words by mistakenly labeling the sentiment of the segments. 
These challenges suggest useful directions for the future work. 4.4 Lexicon Generation and Sentiment Classification Our fourth experiment evaluates the robustness of ReNew’s lexicon generation process as well as the performance of the sentiment classification models using these lexicons. We first generate ten domain-specific lexicons by repeatedly following these steps: For the first iteration, (1) build a training dataset by randomly selecting 20 labeled reviews (about 220 segments) and (2) train the learners using the training dataset and LIWC. For each iteration thereafter, (1) label 400 reviews from the unlabeled dataset (4,071 reviews) and (2) update the lexicon and retrain the learners. After labeling all of the data, output a domain-specific lexicon. To evaluate the benefit of using domain-specific sentiment lexicons, we train ten sentiment classification models using the ten lexicons and then compare them, pairwise, against models trained with the general sentiment lexicon LIWC. Each model consists of an FR learner, a BR learner, and a label integrator. Each pairwise comparison is evaluated on a testing dataset with 10-fold cross validation. Each testing dataset consists of 180 randomly selected reviews (about 1,800 segments). For each of the pairwise comparisons, we conduct a paired t-test to determine if the domain-specific sentiment lexicon can yield better results. Figure 9 shows the pairwise comparisons of accuracy between the two lexicons. Each group of bars represents the accuracy of two sentiment classification models trained using LIWC (CRFsGeneral) and the generated domain-specific lexicon (CRFs-Domain), respectively. The solid line corresponds to a baseline model that takes the ma548 Table 4: Comparison results of different lexicons. ANEW LIWC ReNew Precision Recall F-Measure Precision Recall F-Measure Precision Recall F-Measure Positive 0.59 0.994 0.741 0.606 0.975 0.747 0.623 0.947 0.752 Negative 0.294 0.011 0.021 0.584 0.145 0.232 0.497 0.202 0.288 Neutral 0 0 0 0 0 0 0.395 0.04 0.073 Weighted average 0.41 0.587 0.44 0.481 0.605 0.489 0.551 0.608 0.518 jority classification strategy. Based on the distribution of the datasets, the majority class of all datasets is positive. We can see that models using either the general lexicon or the domain-specific lexicon achieve higher accuracy than the baseline model. Domain-specific lexicons produce significantly higher accuracy than general lexicons. In the figures below, we indicate significance to 10%, 5%, and 1% as ‘·’, ‘∗’, and ‘∗∗’, respectively. P1(∗∗) P2(∗∗) P3(∗) P4(·) P5(∗) P6(·) P7(∗∗) P8(·) P9(∗) P10(∗∗) 0.57 0.59 0.61 0.63 0.65 0.67 0.69 Comparing Pairs Accuracy CRFs-General CRFs-Domain Baseline Figure 9: Accuracy with different lexicons. P1(∗) P2(∗∗) P3(∗) P4() P5(·) P6(·) P7(∗∗) P8(·) P9() P10(∗∗) 0.42 0.43 0.44 0.45 0.46 0.47 0.48 Comparing Pairs Macro F-score CRFs-General CRFs-Domain Figure 10: Macro F-score with different lexicons. Figure 10 and 11 show the pairwise comparisons of macro and micro F-score together with the results of the paired t-tests. We can see that the domain-specific lexicons (dark-grey bars) consistently yield better results than their corresponding general lexicons (light-grey bars). P1(∗∗) P2(∗∗) P3(∗) P4(·) P5(∗) P6(·) P7(∗∗) P8(·) P9(∗) P10(∗∗) 0.54 0.55 0.56 0.57 0.58 0.59 0.6 Comparing Pairs Micro F-score CRFs-General CRFs-Domain Figure 11: Micro F-score with different lexicons. 
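The generation protocol just described (seed with 20 labeled reviews plus LIWC, then repeatedly label 400-review batches, update the lexicon, and retrain) is essentially the bootstrapping loop of Figure 2; a schematic version is sketched below. All of the called functions (`segment`, `integrate_labels`, `update_lexicon`, `train_learners`) are placeholders standing in for the components of Section 3, not actual ReNew code, and the default confidence thresholds are the 0.9/0.7 values reported earlier.

```python
def bootstrap(seed_segments, unlabeled_reviews, general_lexicon,
              batch_size=400, conf_pos=0.9, conf_other=0.7):
    """Schematic of one ReNew run (Figure 2 / Section 4.4)."""
    lexicon = dict(general_lexicon)
    training = list(seed_segments)                   # (segment, label) pairs
    fr, br = train_learners(training, lexicon)       # forward / backward learners
    remaining = list(unlabeled_reviews)
    while remaining:
        batch, remaining = remaining[:batch_size], remaining[batch_size:]
        segments = [s for review in batch for s in segment(review)]
        for seg, label, conf in integrate_labels(fr, br, segments):
            threshold = conf_pos if label == "POS" else conf_other
            if conf >= threshold:                    # the filter of Section 3.4.1
                training.append((seg, label))
        lexicon = update_lexicon(lexicon, training)  # lexicon generator
        fr, br = train_learners(training, lexicon)   # retrain for the next pass
    return lexicon, (fr, br)
```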
ReNew starts with LIWC and a labeled dataset and generates ten lexicons and sentiment classification models by iteratively learning 4,017 unlabeled reviews without any human guidance. The above results show that the generated lexicons contain more domain-related information than the general sentiment lexicons. Also, note that the labeled datasets we used contain only 20 labeled reviews. This is an easy requirement to meet. 4.5 Comparison with Previous Work Our fifth experiment compares ReNew with Lazaridou et al.’s (2013) approach for sentiment classification using discourse relations. Like ReNew, Lazaridou et al.’s approach works on the sub sentential level. However, it differs from ReNew in three aspects. First, the basic units of their model are elementary discourse units (EDUs) from Rhetorical Structure Theory (RST) (Mann and Thompson, 1988). Second, their model considers the forward relationship between EDUs, whereas ReNew captures both forward and backward relationship between segments. Third, they use a generative model to capture the transition distributions over EDUs whereas ReNew uses a discriminative model to capture the transition sequences of segments. EDUs are defined as minimal units of text and consider many more relations than the two types 549 Table 5: Comparison of our framework with previous work on sentiment classification. Method Accuracy EDU-Model (Lazaridou et al.) 0.594 ReNew (our method) 0.605 of transition cues underlying our segments. We posit that EDUs are too fine-grained for sentiment analysis. Consider the following sentence from Lazaridou et al.’s dataset with its EDUs identified. (1) My husband called the front desk (2) to complain. Unfortunately, EDU (1) lacks sentiment and EDU (2) lacks the topic. Although Lazaridou et al.’s model can capture the forward relationship between any two consecutive EDUs, it cannot handle such cases because their model assumes that each EDU is associated with a topic and a sentiment. In contrast, ReNew finds just one segment in the above sentence. Just to compare with Lazaridou et al., we apply our sentiment labeling component at the level of EDUs. Their labeled dataset contains 65 reviews, corresponding to 1,541 EDUs. Since this dataset is also extracted from Tripadvisor, we use the domain-specific lexicon automatically learned by ReNew based on our 4,071 unlabeled reviews. Follow the same training and testing regimen (10fold cross validation), we compare ReNew with their model. As shown in Table 5, ReNew outperforms their approach on their dataset: Although ReNew is not optimized for EDUs, it achieves better accuracy. 5 Related Work Two bodies of work are relevant. First, to generate sentiment lexicons, existing approaches commonly generate a sentiment lexicon by extending dictionaries or sentiment lexicons. Hu and Liu (2004), manually collect a small set of sentiment words and expand it iteratively by searching synonyms and antonyms in WordNet (Miller, 1995). Rao and Ravichandran (2009) formalize the problem of sentiment detection as a semisupervised label propagation problem in a graph. Each node represents a word, and a weighted edge between any two nodes indicates the strength of the relationship between them. Esuli and Sebastiani (2006) use a set of classifiers in a semisupervised fashion to iteratively expand a manually defined lexicon. Their lexicon, named SentiWordNet, comprises the synset of each word obtained from WordNet. Each synset is associated with three sentiment scores: positive, negative, and objective. 
Second, for sentiment classification, Nakagawa et al. (2010) introduce a probabilistic model that uses the interactions between words within one sentence for inferring sentiments. Socher et al. (2011) introduce a semi-supervised approach that uses recursive autoencoders to learn the hierarchical structure and sentiment distribution of a sentence. Jo and Oh (2011) propose a probabilistic generative model named ASUM that can extract aspects coupled with sentiments. Kim et al. (2013) extend ASUM by enabling its probabilistic model to discover a hierarchical structure of the aspect-based sentiments. The above works apply sentence-level sentiment classification and their models are not able to capture the relationships between or among clauses. 6 Conclusions and Future Work The leading lexical approaches to sentiment analysis from text are based on fixed lexicons that are painstakingly built by hand. There is little a priori justification that such lexicons would port across application domains. In contrast, ReNew seeks to automate the building of domain-specific lexicons beginning from a general sentiment lexicon and the iterative application of CRFs. Our results are promising. ReNew greatly reduces the human effort for generating high-quality sentiment lexicons together with a classification model. In future work, we plan to apply ReNew to additional sentiment analysis problems such as review quality analysis and sentiment summarization. Acknowledgments Thanks to Chung-Wei Hang, Chris Healey, James Lester, Steffen Heber, and the anonymous reviewers for helpful comments. This work is supported by the Army Research Laboratory in its Network Sciences Collaborative Technology Alliance (NS-CTA) under Cooperative Agreement Number W911NF-09-2-0053 and by an IBM Ph.D. Scholarship and an IBM Ph.D. fellowship. 550 References Margaret M. Bradley and Peter J. Lang. 1999. Affective norms for English words (ANEW): Instruction manual and affective ratings. Technical Report C-1, The Center for Research in Psychophysiology, University of Florida, Gainesville, FL. Yejin Choi and Claire Cardie. 2009. Adapting a polarity lexicon using integer linear programming for domain-specific sentiment classification. In Proceedings of the 14th Conference on Empirical Methods in Natural Language Processing (EMNLP), pages 590–598, Singapore. Marie-Catherine de Marneffe, Bill MacCartney, and Christopher D. Manning. 2006. Generating typed dependency parses from phrase structure parses. In Proceedings of the 5th Conference on Language Resources and Evaluation (LREC), pages 449–454, Genoa, Italy. Andrea Esuli and Fabrizio Sebastiani. 2006. SentiWordNet: A publicly available lexical resource for opinion mining. In Proceedings of the 5th Conference on Language Resources and Evaluation (LREC), pages 417–422, Genoa, Italy. Mark Hall, Eibe Frank, Geoffrey Holmes, Bernhard Pfahringer, Peter Reutemann, and Ian H. Witten. 2009. The WEKA data mining software: An update. SIGKDD Explorations Newsletter, 11(1):10– 18, November. Minqing Hu and Bing Liu. 2004. Mining and summarizing customer reviews. In Proceedings of the 10th International Conference on Knowledge Discovery and Data Mining (KDD), pages 168–177, Seattle. Yohan Jo and Alice Haeyun Oh. 2011. Aspect and sentiment unification model for online review analysis. In Proceedings of the 4th ACM International Conference on Web Search and Data Mining (WSDM), pages 815–824, Hong Kong. Hiroshi Kanayama and Tetsuya Nasukawa. 2006. 
Fully automatic lexicon expansion for domainoriented sentiment analysis. In Proceedings of the 11th Conference on Empirical Methods in Natural Language Processing (EMNLP), pages 355–363, Sydney. Suin Kim, Jianwen Zhang, Zheng Chen, Alice H. Oh, and Shixia Liu. 2013. A hierarchical aspectsentiment model for online reviews. In Proceedings of the 27th AAAI Conference on Artificial Intelligence (AAAI), pages 804–812, Bellevue. John D. Lafferty, Andrew McCallum, and Fernando C. N. Pereira. 2001. Conditional random fields: Probabilistic models for segmenting and labeling sequence data. In Proceedings of the 18th International Conference on Machine Learning (ICML), pages 282–289, San Francisco. Angeliki Lazaridou, Ivan Titov, and Caroline Sporleder. 2013. A Bayesian model for joint unsupervised induction of sentiment, aspect and discourse representations. In Proceedings of the 51st Annual Meeting of the Association for Computational Linguistics (ACL), pages 1630–1639, Sofia, Bulgaria. Bing Liu. 2012. Sentiment Analysis and Opinion Mining. Synthesis Lectures on Human Language Technologies. Morgan & Claypool Publishers, San Rafael, CA. William C. Mann and Sandra A. Thompson. 1988. Rhetorical structure theory: Toward a functional theory of text organization. Text, 8(3):243–281. George A. Miller. 1995. WordNet: A lexical database for English. Communications of the ACM, 38(11):39–41. Tetsuji Nakagawa, Kentaro Inui, and Sadao Kurohashi. 2010. Dependency tree-based sentiment classification using CRFs with hidden variables. In Proceedings of the 11th Annual Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies (NAACL-HLT), pages 786–794, Los Angeles. Bo Pang, Lillian Lee, and Shivakumar Vaithyanathan. 2002. Thumbs up?: Sentiment classification using machine learning techniques. In Proceedings of the 7th Conference on Empirical Methods in Natural Language Processing (EMNLP), pages 79–86, Philadelphia. Delip Rao and Deepak Ravichandran. 2009. Semisupervised polarity lexicon induction. In Proceedings of the 12th Conference of the European Chapter of the Association for Computational Linguistics (EACL), pages 675–682, Athens. Richard Socher, Jeffrey Pennington, Eric H. Huang, Andrew Y. Ng, and Christopher D. Manning. 2011. Semi-supervised recursive autoencoders for predicting sentiment distributions. In Proceedings of the 16th Conference on Empirical Methods in Natural Language Processing (EMNLP), pages 151–161, Edinburgh. Yla R. Tausczik and James W. Pennebaker. 2010. The psychological meaning of words: LIWC and computerized text analysis methods. Journal of Language and Social Psychology, 29(1):24–54, March. Lei Zhang, Bing Liu, Suk Hwan Lim, and Eamonn O’Brien-Strain. 2010. Extracting and ranking product features in opinion documents. In Proceedings of the 23rd International Conference on Computational Linguistics (COLING), pages 1462–1470, Beijing. 551
2014
51
Proceedings of the 52nd Annual Meeting of the Association for Computational Linguistics, pages 552–561, Baltimore, Maryland, USA, June 23-25 2014. c⃝2014 Association for Computational Linguistics A Decision-Theoretic Approach to Natural Language Generation Nathan McKinley Department of EECS Case Western Reserve University Cleveland, OH, USA [email protected] Soumya Ray Department of EECS Case Western Reserve University Cleveland, OH, USA [email protected] Abstract We study the problem of generating an English sentence given an underlying probabilistic grammar, a world and a communicative goal. We model the generation problem as a Markov decision process with a suitably defined reward function that reflects the communicative goal. We then use probabilistic planning to solve the MDP and generate a sentence that, with high probability, accomplishes the communicative goal. We show empirically that our approach can generate complex sentences with a speed that generally matches or surpasses the state of the art. Further, we show that our approach is anytime and can handle complex communicative goals, including negated goals. 1 Introduction Suppose someone wants to tell their friend that they saw a dog chasing a cat. Given such a communicative goal, most people can formulate a sentence that satisfies the goal very quickly. Further, they can easily provide multiple similar sentences, differing in details but all satisfying the general communicative goal, with no or very little error. Natural language generation (NLG) develops techniques to extend similar capabilities to automated systems. In this paper, we study the restricted NLG problem: given a grammar, lexicon, world and a communicative goal, output a valid English sentence that satisfies this goal. The problem is restricted because in our work, we do not consider the issue of how to fragment a complex goal into multiple sentences (discourse planning). Though restricted, this NLG problem is still difficult. A key source of difficulty is the nature of the grammar, which is generally large, probabilistic and ambiguous. Some NLG techniques use sampling strategies (Knight and Hatzivassiloglou, 1995) where a set of sentences is sampled from a data structure created from an underlying grammar and ranked according to how well they meet the communicative goal. Such approaches naturally handle statistical grammars, but do not solve the generation problem in a goal-directed manner. Other approaches view NLG as a planning problem (Koller and Stone, 2007). Here, the communicative goal is treated as a predicate to be satisfied, and the grammar and vocabulary are suitably encoded as logical operators. Then automated classical planning techniques are used to derive a plan which is converted into a sentence. This is an elegant formalization of NLG, however, restrictions on what current planning techniques can do limit its applicability. A key limitation is the logical nature of automated planning systems, which do not handle probabilistic grammars, or force ad-hoc approaches for doing so (Bauer and Koller, 2010). A second limitation comes from restrictions on the goal: it may be difficult to ensure that some specific piece of information should not be communicated, or to specify preferences over communicative goals, or specify general conditions, like that the sentence should be readable by a sixth grader. 
A third limitation comes from the search process: without strong heuristics, most planners get bogged down when given communicative goals that require chaining together long sequences of operators (Koller and Petrick, 2011). In our work, we also view NLG as a planning problem. However, we differ in that our underlying formalism for NLG is a suitably defined Markov decision process (MDP). This setting allows us to address the limitations outlined 552 above: it is naturally probabilistic, and handles probabilistic grammars; we are able to specify complex communicative goals and general criteria through a suitably-defined reward function; and, as we show in our experiments, recent developments in fast planning in large MDPs result in a generation system that can rapidly deal with very specific communicative goals. Further, our system has several other desirable properties: it is an anytime approach; with a probabilistic grammar, it can naturally be used to sample and generate multiple sentences satisfying the communicative goal; and it is robust to large grammar sizes. Finally, the decision-theoretic setting allows for a precise tradeoff between exploration of the grammar and vocabulary to find a better solution and exploitation of the current most promising (partial) solution, instead of a heuristic search through the solution space as performed by standard planning approaches. Below, we first describe related work, followed by a detailed description of our approach. We then empirically evaluate our approach and a state-ofthe-art baseline in several different experimental settings and demonstrate its effectiveness at solving a variety of NLG tasks. Finally, we discuss future extensions and conclude. 2 Related Work Two broad lines of approaches have been used to attack the general NLG problem. One direction can be thought of as “overgeneration and ranking.” Here some (possibly probabilistic) structure is used to generate multiple candidate sentences, which are then ranked according to how well they satisfy the generation criteria. This includes work based on chart generation and parsing (Shieber, 1988; Kay, 1996). These generators assign semantic meaning to each individual token, then use a set of rules to decide if two words can be combined. Any combination which contains a semantic representation equivalent to the input at the conclusion of the algorithm is a valid output from a chart generation system. Other examples of this idea are the HALogen/Nitrogen systems (Langkilde-Geary, 2002). HALogen uses a two-phase architecture where first, a “forest” data structure that compactly summarizes possible expressions is constructed. The structure allows for a more efficient and compact representation compared to lattice structures that were previously used in statistical sentence generation approaches. Using dynamic programming, the highest ranked sentence from this structure is then output. Many other systems using similar ideas exist, e.g. (White and Baldridge, 2003; Lu et al., 2009). A second line of attack formalizes NLG as an AI planning problem. SPUD (Stone et al., 2003), a system for NLG through microplanning, considers NLG as a problem which requires realizing a deliberative process of goal-directed activity. Many such NLG-as-planning systems use a pipeline architecture, working from their communicative goal through discourse planning and sentence generation. In discourse planning, information to be conveyed is selected and split into sentence-sized chunks. 
These sentence-sized chunks are then sent to a sentence generator, which itself is usually split into two tasks, sentence planning and surface realization (Koller and Petrick, 2011). The sentence planner takes in a sentence-sized chunk of information to be conveyed and enriches it in some way. This is then used by a surface realization module which encodes the enriched semantic representation into natural language. This chain is sometimes referred to as the “NLG Pipeline” (Reiter and Dale, 2000). Another approach, called integrated generation, considers both sentence generation portions of the pipeline together (Koller and Stone, 2007). This is the approach taken in some modern generators like CRISP (Koller and Stone, 2007) and PCRISP (Bauer and Koller, 2010). In these generators, the input semantic requirements and grammar are encoded in PDDL (Fox and Long, 2003), which an off-the-shelf planner such as Graphplan (Blum and Furst, 1997) uses to produce a list of applications of rules in the grammar. These generators generate parses for the sentence at the same time as the sentence, which keeps them from generating realizations that are grammatically incorrect, and keeps them from generating grammatical structures that cannot be realized properly. In the NLG-as-planning framework, the choice of grammar representation is crucial in treating NLG as a planning problem; the grammar provides the actions that the planner will use to generate a sentence. Tree Adjoining Grammars (TAGs) are a common choice (Koller and Stone, 2007; Bauer and Koller, 2010). TAGs are tree-based grammars consisting of two sets of trees, called initial trees and auxiliary or adjoining trees. An 553 entire initial tree can replace a leaf node in the sentence tree whose label matches the label of the root of the initial tree in a process called “substitution.” Auxiliary trees, on the other hand, encode recursive structures of language. Auxiliary trees have, at a minimum, a root node and a foot node whose labels match. The foot node must be a leaf of the auxiliary tree. These trees are used in a three-step process called “adjoining”. The first step finds an adjoining location by searching through our sentence to find any subtree with a root whose label matches the root node of the auxiliary tree. In the second step, the target subtree is removed from the sentence tree, and placed in the auxiliary tree as a direct replacement for the foot node. Finally, the modified auxiliary tree is placed back in the sentence tree in the original target location. We use a variation of TAGs in our work, called a lexicalized TAG (LTAG), where each tree is associated with a lexical item called an anchor. Though the NLG-as-planning approaches are elegant and appealing, a key drawback is the difficulty of handling probabilistic grammars, which are readily handled by the overgeneration and ranking strategies. Recent approaches such as PCRISP (Bauer and Koller, 2010) attempt to remedy this, but do so in a somewhat ad-hoc way, by transforming the probabilities into costs, because they rely on deterministic planning to actually realize the output. In this work, we directly address this by using a more expressive underlying formalism, a Markov decision process (MDP). We show empirically that this modification has other benefits as well, such as being anytime and an ability to handle complex communicative goals beyond those that deterministic planners can handle. We note that prior work exists that uses MDPs for NLG (Lemon, 2011). 
That work differs from ours in several key respects: (i) it considers NLG at a coarse level, for example choosing the type of utterance (in a dialog context) and how to fill in specific slots in a template, (ii) the source of uncertainty is not language-related but comes from things like uncertainty in speech recognition, and (iii) the MDPs are solved using reinforcement learning and not planning, which is impractical in our setting. However, that work does consider NLG in the context of the broader task of dialog management, which we leave for future work. 3 Sentence Tree Realization with UCT In this section, we describe our approach, called Sentence Tree Realization with UCT (STRUCT). We describe the inputs to STRUCT, followed by the underlying MDP formalism and the probabilistic planning algorithm we use to generate sentences in this MDP. 3.1 Inputs to STRUCT STRUCT takes three inputs in order to generate a single sentence. These inputs are a grammar (including a lexicon), a communicative goal, and a world specification. STRUCT uses a first-order logic-based semantic model in its communicative goal and world specification. This model describes named “entities,” representing general things in the world. Entities with the same name are considered to be the same entity. These entities are described using first-order logic predicates, where the name of the predicate represents a statement of truth about the given entities. In this semantic model, the communicative goal is a list of these predicates with variables used for the entity names. For instance, a communicative goal of ‘red(d), dog(d)’ (in English, “say anything about a dog which is red.”) would match a sentence with the semantic representation ‘red(subj), dog(subj), cat(obj), chased(subj, obj)’, like “The red dog chased the cat”, for instance. A grammar contains a set of PTAG trees, divided into two sets (initial and adjoining). These trees are annotated with the entities in them. Entities are defined as any element anchored by precisely one node in the tree which can appear in a statement representing the semantic content of the tree. In addition to this set of trees, the grammar contains a list of words which can be inserted into those trees, turning the PTAG into an PLTAG. We refer to this list as a lexicon. Each word in the lexicon is annotated with its first-order logic semantics with any number of entities present in its subtree as the arguments. A world specification is simply a list of all statements which are true in the world surrounding our generation. Matching entity names refer to the same entity. We use the closed world assumption, that is, any statement not present in our world is false. Before execution begins, our grammar is pruned to remove entries which cannot possibly be used in generation for the given problem, by tran554 chased(subj,obj), dog(subj) chased(subj,obj) S VP NP−subj det N−self "dog" "chased" V−self NP−obj "dog" N−self det NP dog(self) V−self NP−obj NP−subj "chased" VP S Figure 1: An example tree substitution operation in STRUCT. sitively discovering all predicates that hold about the entities mentioned in the goal in the world, and eliminating all trees not about any of these. This often allows STRUCT to be resilient to large grammar sizes, as our experiments will show. 3.2 Specification of the MDP We formulate NLG as a planning problem on a Markov decision process (MDP) (Puterman, 1994). 
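As a concrete illustration of the grammar-pruning step described at the end of Section 3.1, a minimal Python sketch is given below. The entry representation (a `predicates` attribute per grammar/lexicon entry) and the assumption that goal arguments are already grounded to world entities are illustrative assumptions, not the authors' implementation.

    def prune_grammar(entries, goal, world):
        """entries: grammar/lexicon entries, each assumed to carry a set
        `predicates` naming the predicates in its semantics.
        goal, world: sets of (predicate, args) facts; goal arguments are
        assumed to be grounded to world entities here."""
        # Entities mentioned in the communicative goal.
        reachable = {a for _, args in goal for a in args}
        relevant = set()
        # Transitively discover all world facts that hold about reachable entities.
        changed = True
        while changed:
            changed = False
            for pred, args in world:
                if (pred, args) not in relevant and reachable & set(args):
                    relevant.add((pred, args))
                    reachable |= set(args)
                    changed = True
        relevant_preds = {pred for pred, _ in relevant}
        # Eliminate entries whose semantics mention none of the relevant predicates.
        return [e for e in entries if e.predicates & relevant_preds]

With the inputs and pruning in place, the specification of the MDP itself follows.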
An MDP is a tuple (S, A, T, R, γ) where S is a set of states, A is a set of actions available to an agent, T : S × A × S →[0, 1] is a possibly stochastic function defining the probability T(s, a, s′) with which the environment transitions to s′ when the agent does a in state s. R : S × A × S →R is a real-valued reward function that specifies the utility of performing action a in state s to reach another state. Finally, γ is a discount factor that allows planning over infinite horizons to converge. In such an MDP, the agent selects actions at each state to optimize the expected infinite-horizon discounted reward. In the MDP we use for NLG, we must define each element of the tuple in such a way that a plan in the MDP becomes a sentence in a natural language. Our set of states, therefore, will be partial sentences which are in the language defined by our PLTAG input. There are an infinite number of these states, since TAG adjoins can be repeated indefinitely. Nonetheless, given a specific world and communicative goal, only a fraction of this MDP needs to be explored, and, as we show below, a good solution can often be found quickly using a variation of the UCT algorithm (Kocsis and Szepesvari, 2006). Our set of actions consist of all single substitutions or adjoins at a particular valid location in the tree (example shown in Figure 1). Since we are using PLTAGs in this work, this means every action adds a word to the partial sentence. In situations where the sentence is complete (no nonterminals without children exist), we add a dummy action that the algorithm may choose to stop generation and emit the sentence. Based on these state and action definitions, the transition function takes a mapping between a partial sentence / action pair and the partial sentences which can result from one particular PLTAG adjoin / substitution, and returns the probability of that rule in the grammar. In order to control the search space, we restrict the structure of the MDP so that while substitutions are available, only those operations are considered when determining the distribution over the next state, without any adjoins. We do this is in order to generate a complete and valid sentence quickly. This allows STRUCT to operate as an anytime algorithm, described further below. The immediate value of a state, intuitively, describes closeness of an arbitrary partial sentence to our communicative goal. Each partial sentence is annotated with its semantic information, built up using the semantic annotations associated with the PLTAG trees. Thus we use as a reward a measure of the match between the semantic annotation of the partial tree and the communicative goal. That is, the larger the overlap between the predicates, the higher the reward. For an exact reward signal, when checking this overlap, we need to substitute each combination of entities in the goal into predicates in the sentence so we can return a high value if there are any mappings which are both possible (contain no statements which are not present in the grounded world) and mostly fulfill the goal (contain most of the goal predicates). However, this is combinatorial; also, most entities within sentences do not interact (e.g. if we say “the white rabbit jumped on the orange carrot,” the whiteness of the rabbit has nothing to do with the carrot), and finally, an approximate reward signal generally works well enough unless we need to emit nested subclauses. 
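To make the exact reward signal concrete, the following is a minimal sketch of scoring a partial sentence's semantics against the goal by enumerating entity groundings. The data layout and function names are assumptions for illustration, not the authors' code.

    from itertools import product

    def exact_reward(sentence_sem, goal, world, world_entities):
        """sentence_sem, goal, world: sets of predicates such as
        ('chased', ('subj', 'obj')). Arguments of sentence_sem/goal that are
        not world entities are treated as placeholders to be grounded."""
        def ground(preds, m):
            return {(p, tuple(m.get(a, a) for a in args)) for p, args in preds}

        placeholders = sorted({a for _, args in sentence_sem | goal for a in args
                               if a not in world_entities})
        best = 0.0
        for image in product(sorted(world_entities), repeat=len(placeholders)):
            m = dict(zip(placeholders, image))
            # "possible": no grounded sentence statement absent from the world
            # (closed-world assumption)
            if not ground(sentence_sem, m) <= world:
                continue
            # "mostly fulfills the goal": fraction of goal predicates satisfied
            satisfied = len(ground(goal, m) & ground(sentence_sem, m))
            best = max(best, float(satisfied) / max(len(goal), 1))
        return best

The enumeration over all placeholder-to-entity assignments is exactly the combinatorial cost noted above.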
Thus as an approximation, we use a reward signal where we simply count how many individual predicates overlap with the goal with some entity substitution. In the experiments, we illustrate the difference between the exact and approximate reward signals. The final component of the MDP is the discount factor. We generally use a discount factor of 1; this is because we are willing to generate lengthy sentences in order to ensure we match our goal. A discount factor of 1 can be problematic in general since it can cause rewards to diverge, but since there are a finite number of terms in our reward function (determined by the communicative goal and the fact that because of lexicalization we do not loop), this is not a problem for us.

3.3 The Probabilistic Planner

We now describe our approach to solving the MDP above to generate a sentence. Determining the optimal policy at every state in an MDP is polynomial in the size of the state-action space (Brafman and Tennenholtz, 2003), which is intractable in our case. But for our application, we do not need to find the optimal policy. Rather we just need to plan in an MDP to achieve a given communicative goal. Is it possible to do this without exploring the entire state-action space? Recent work answers this question affirmatively. New techniques such as sparse sampling (Kearns et al., 1999) and UCT (Kocsis and Szepesvari, 2006) show how to generate near-optimal plans in large MDPs with a time complexity that is independent of the state space size. Using the UCT approach with a suitably defined MDP (explained above) allows us to naturally handle probabilistic grammars as well as formulate NLG as a planning problem, unifying the distinct lines of attack described in Section 2. Further, the theoretical guarantees of UCT translate into fast generation in many cases, as we demonstrate in our experiments. Online planning in MDPs as done by UCT follows two steps. From each state encountered, we construct a lookahead tree and use it to estimate the utility of each action in this state. Then, we take the best action, the system transitions to the next state and the procedure is repeated. In order to build a lookahead tree, we use a "rollout policy." This policy has two components: if it encounters a state already in the tree, it follows a "tree policy," discussed further below. If it encounters a new state, the policy reverts to a "default" policy that randomly samples an action. In all cases, any rewards received during the rollout search are backed up. Because this is a Monte Carlo estimate, typically, we run several simultaneous trials, and we keep track of the rewards received by each choice and use this to select the best action at the root. The tree policy needed by UCT for a state s is the action a in that state which maximizes:

P(s, a) = Q(s, a) + c·√( ln N(s) / N(s, a) )    (1)

Algorithm 1 STRUCT algorithm.
Require: Number of simulations numTrials, depth of lookahead maxDepth, time limit T
Ensure: Generated sentence tree
    bestSentence ← nil
    while time limit not reached do
        state ← empty sentence tree
        while state not terminal do
            for numTrials do
                testState ← state
                currentDepth ← 0
                if testState has unexplored actions then
                    Apply one unexplored PLTAG production sampled from the PLTAG distribution to testState
                    currentDepth++
                end if
                while currentDepth < maxDepth do
                    Apply PLTAG production selected by tree policy (Equation 1) or default policy as required
                    currentDepth++
                end while
                calculate reward for testState
                associate reward with first action taken
            end for
            state ← maximum reward testState
            if state score > bestSentence score and state has no nonterminal leaf nodes then
                bestSentence ← state
            end if
        end while
    end while
    return bestSentence

Here Q(s, a) is the estimated value of a as observed in the tree search, computed as a sum over future rewards observed after (s, a). N(s) and N(s, a) are visit counts for the state and state-action pair. Thus the second term is an exploration term that biases the algorithm towards visiting actions that have not been explored enough. c is a constant that trades off exploration and exploitation. This essentially treats each action decision as a bandit problem; previous work shows that this approach can efficiently select near-optimal actions at each state. We use a modified version of UCT in order to increase its usability in the MDP we have defined. First, because we receive frequent, reasonably accurate feedback, we favor breadth over depth in the tree search. That is, it is more important in our case to try a variety of actions than to pursue a single action very deep. Second, UCT was originally used in an adversarial environment, and so is biased to select actions leading to the best average reward rather than the action leading to the best overall reward. This is not true for us, however, so we choose the latter action instead. With the MDP definition above, we use our modified UCT to find a solution sentence (Algorithm 1). After every action is selected and applied, we check to see if we are in a state in which the algorithm could terminate (i.e. the sentence has no nonterminals yet to be expanded). If so, we determine if this is the best possibly-terminal state we have seen so far. If so, we store it, and continue the generation process. Whenever we reach a terminal state, we begin again from the start state of the MDP. Because of the structure restriction above (substitution before adjoin), STRUCT generates a valid sentence quickly. This enables STRUCT to perform as an anytime algorithm, which if interrupted will return the highest-value complete and valid sentence it has found. This also allows partial completion of communicative goals if not all goals can be achieved simultaneously in the time given.

4 Empirical Evaluation

In this section, we compare STRUCT to a state-of-the-art NLG system, CRISP,1 and evaluate three hypotheses: (i) STRUCT is comparable in speed and generation quality to CRISP as it generates increasingly large referring expressions, (ii) STRUCT is comparable in speed and generation quality to CRISP as the size of the grammar which they use increases, and (iii) STRUCT is capable of communicating complex propositions, including multiple concurrent goals, negated goals, and nested subclauses. For these experiments, STRUCT was implemented in Python 2.7.
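Since the system is reported to be implemented in Python 2.7, the tree policy of Equation (1), together with the two modifications just described (favoring unexplored actions and backing up the best rather than the average reward), might be sketched as follows. The node layout is hypothetical; this is not the released code.

    import math
    import random

    def tree_policy_action(node, c=1.0):
        """Pick the action maximizing Q(s, a) + c * sqrt(ln N(s) / N(s, a)) (Eq. 1).
        node.stats maps each action to {'best_reward': ..., 'visits': ...};
        node.visits is N(s)."""
        untried = [a for a, st in node.stats.items() if st["visits"] == 0]
        if untried:
            # Favor breadth: always try an unexplored action first.
            return random.choice(untried)
        def score(a):
            st = node.stats[a]
            # Best observed reward rather than the average, per the second modification.
            exploit = st["best_reward"]
            explore = c * math.sqrt(math.log(node.visits) / st["visits"])
            return exploit + explore
        return max(node.stats, key=score)

    def backup(path, reward):
        """Back the observed reward up the visited (node, action) pairs,
        keeping the best reward seen so far for each action."""
        for node, action in path:
            node.visits += 1
            st = node.stats[action]
            st["visits"] += 1
            st["best_reward"] = max(st["best_reward"], reward)

Keeping the maximum rather than the mean reward in the backup reflects the non-adversarial setting discussed above.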
We used a 2010 version of CRISP which uses a Java-based GraphPlan implementation. All of our experiments were run on a 4-core AMD Phenom II X4 995 processor clocked at 3.2 GHz. Both systems were given access to 8 1We were unfortunately unable to get the PCRISP system to compile, and so we could not evaluate it. 0.01 0.1 1 10 100 1000 10000 0 2 4 6 8 10 12 14 16 Time to Generate (seconds) Referring Expression Length CRISP STRUCT, final STRUCT, initial Figure 2: Experimental comparison between STRUCT and CRISP: Generation time vs. length of referring expression GB of RAM. The times reported are from the start of the generation process, eliminating variations due to interpreter startup, input parsing, etc. 4.1 Comparison to CRISP We begin by describing experiments comparing STRUCT to CRISP. For these experiments, we use the approximate reward function for STRUCT. Referring Expressions We first evaluate CRISP and STRUCT on their ability to generate referring expressions. Following prior work (Koller and Petrick, 2011), we consider a series of sentence generation problems which require the planner to generate a sentence like “The Adj1 Adj2 ... Adjk dog chased the cat.”, where the string of adjectives is a string that distinguishes one dog (whose identity is specified in the problem description) from all other entities in the world. In this experiment, maxDepth was set equal to 1, since each action taken improved the sentence in a way measurable by our reward function. numTrials was set equal to k(k + 1), since this is the number of adjoining sites available in the final step of generation, times the number of potential words to adjoin. This allows us to ensure successful generation in a single loop of the STRUCT algorithm. The experiment has two parameters: j, the number of adjectives in the grammar, and k, the number of adjectives necessary to distinguish the entity in question from all other entities. We set j = k and show the results in Figure 2. We observe that CRISP was able to achieve sub-second or similar times for all expressions of less than length 5, but its generation times increase exponentially past that point, exceeding 100 seconds for some plans at length 10. At length 15, CRISP failed to generate a referring expression; 557 after 90 minutes the Java garbage collector terminated the process. STRUCT (the “STRUCT final” line) performs much better and is able to generate much longer referring expressions without failing. Later experiments had successful referring expression generation of lengths as high as 25. The “STRUCT initial” curve shows the time taken by STRUCT to come up with the first complete sentence, which partially solves the goal and which (at least) could be output if generation was interrupted and no better alternative was found. As can be seen, this always happens very quickly. Grammar Size. We next evaluate STRUCT and CRISP’s ability to handle larger grammars. This experiment is set up in the same way as the one above, with the exception of l “distracting” words, words which are not useful in the sentence to be generated. l is defined as j −k. In these experiments, we vary l between 0 and 50. Figure 3a shows the results of these experiments. We observe that CRISP using GraphPlan, as previously reported in (Koller and Petrick, 2011), handles an increase in number of unused actions very well. Prior work reported a difference on the order of single milliseconds moving from j = 1 to j = 10. 
We report similar variations in CRISP runtime as j increases from 10 to 60: runtime increases by approximately 10% over that range. No Pruning. If we do not prune the grammar (as described in Section 3.1), STRUCT’s performance is similar to CRISP using the FF planner (Hoffmann and Nebel, 2001), also profiled in (Koller and Petrick, 2011), which increased from 27 ms to 4.4 seconds over the interval from j = 1 to j = 10. STRUCT’s performance is less sensitive to larger grammars than this, but over the same interval where CRISP increases from 22 seconds of runtime to 27 seconds of runtime, STRUCT increases from 4 seconds to 32 seconds. This is due almost entirely to the required increase in the value of numTrials as the grammar size increases. At the low end, we can use numTrials = 20, but at l = 50, we must use numTrials = 160 in order to ensure perfect generation as soon as possible. Note that, as STRUCT is an anytime algorithm, valid sentences are available very early in the generation process, despite the size of the set of adjoining trees. This time does not change substantially with increases in grammar size. However, the time to perfect this solution does. With Pruning. STRUCT’s performance improves significantly if we allow for pruning. This experiment involving distracting words is an example of a case where pruning will perform well. When we apply pruning, we find that STRUCT is able to ignore the effect of additional distracting words. Experiments showed roughly constant times for generation for j = 1 through j = 5000. Our experiments do not show any significant impact on runtime due to the pruning procedure itself, even on large grammars. 4.2 Complex Communicative Goals In the next set of experiments, we illustrate that STRUCT can solve a variety of complex communicative goals such as negated goals, conjuctions and goals requiring nested subclauses to be output. Multiple Goals. We first evaluate STRUCT’s ability to accomplish multiple communicative goals when generating a single sentence. In this experiment, we modify the problem from the previous section. In that section, the referred-to dog was unique, and it was therefore possible to produce a referring expression which identified it unambiguously. In this experiment, we remove this condition by creating a situation in which the generator will be forced to ambiguously refer to several dogs. We then add to the world a number of adjectives which are common to each of these possible referents. Since these adjectives do not further disambiguate their subject, our generator should not use them in its output. We then encode these adjectives into communicative goals, so that they will be included in the output of the generator despite not assisting in the accomplishment of disambiguation. For example, assume we had two black cats, and we wanted to say that one of them was sleeping, but we wanted to emphasize that it was a black cat. We would have as our goal both “sleeps(c)” and “black(c)”. We want the generator to say “the black cat sleeps”, instead of simply “the cat sleeps.” We find that, in all cases, these otherwise useless adjectives are included in the output of our generator, indicating that STRUCT is successfully balancing multiple communicative goals. As we show in figure 3b (the “Positive Goals” curve) , the presence of additional satisfiable semantic goals does not substantially affect the time required for generation. 
We are able to accomplish this task with the same very high frequency as the CRISP 558 0 5 10 15 20 25 30 35 10 20 30 40 50 60 Time to Generate (seconds) Adjoining Grammar Size CRISP STRUCT STRUCT (pruning) (a) Effect of grammar size 3.5 4 4.5 5 5.5 6 6.5 7 7.5 8 1 2 3 4 5 6 7 8 9 Time to Generate (seconds) Number of Goals Positive Goals Negative Goals (b) Effect of multiple/ negated goals 0 200 400 600 800 1000 1200 2 4 6 8 10 12 14 16 18 Score Time (seconds) Generated Score Best Available Score (c) Effect of nested subclauses Figure 3: STRUCT experiments (see text for details). 0 1 2 3 4 5 6 1 2 3 4 5 Time to Generate (seconds) Number of Sentences STRUCT (1 entity) CRISP (1 entity) (a) One entity (“The man sat and the girl sat and ...”). 0 20 40 60 80 100 120 140 160 1 2 3 4 5 Time to Generate (seconds) Number of Sentences STRUCT (2 entities) CRISP (2 entities) (b) Two entities (“The dog chased the cat and ...”). 0 10 20 30 40 50 60 70 1 2 3 4 5 Time to Generate (seconds) Number of Sentences STRUCT, 3 entities (c) Three entities (“The man gave the girl the book and ...”). Figure 4: Time taken by STRUCT to generate sentences with conjunctions with varying numbers of entities. comparisons, as we use the same parameters. Negated Goals. We now evaluate STRUCT’s ability to generate sentences given negated communicative goals. We again modify the problem used earlier by adding to our lexicon several new adjectives, each applicable only to the target of our referring expression. Since our target can now be referred to unambiguously using only one adjective, our generator should just select one of these new adjectives (we experimentally confirmed this). We then encode these adjectives into negated communicative goals, so that they will not be included in the output of the generator, despite allowing a much shorter referring expression. For example, assume we have a tall spotted black cat, a tall solid-colored white cat, and a short spotted brown cat, but we wanted to refer to the first one without using the word “black”. We find that these adjectives which should have been selected immediately are omitted from the output, and that the sentence generated is the best possible under the constraints. This demonstrates that STRUCT is balancing these negated communicative goals with its positive goals. Figure 3b (the “Negative Goals” curve) shows the impact of negated goals on the time to generation. Since this experiment alters the grammar size, we see the time to final generation growing linearly with grammar size. The increased time to generate can be traced directly to this increase in grammar size. This is a case where pruning does not help us in reducing the grammar size; we cannot optimistically prune out words that we do not plan to use. Doing so might reduce the ability of STRUCT to produce a sentence which partially fulfills its goals. Nested subclauses. Next, we evaluate STRUCT’s ability to generate sentences with nested subclauses. An example of such a sentence is “The dog which ate the treat chased the cat.” This is a difficult sentence to generate for several reasons. The first, and clearest, is that there are words in the sentence which do not help to increase the score assigned to the partial sentence. Notably, we must adjoin the word “which” to “the dog” during the portion of generation where the sentence reads “the dog chased the cat”. 
This decision requires us to do planning deeper than one level in the MDP, which increases the number of simulations STRUCT requires in order to get the correct result. In this case, we require lookahead further into the tree than depth 1. We need to know that using “which” will allow us to further specify which dog is chasing the cat; in order to do this we must use at least d = 3. Our reward function must determine this with, at a minimum, the actions corresponding to “which”, “ate”, and “treat”. For these experiments, we use the exact reward function for STRUCT. 559 Despite this issue, STRUCT is capable of generating these sentences. Figure 3c shows the score of STRUCT’s generated output over time for two nested clauses. Notice that, because the exact reward function is being used, the time to generate is longer in this experiment. To the best of our knowledge, CRISP is not able to generate sentences of this form due to an insufficiency in the way it handles TAGs, and consequently we present our results without this baseline. Conjunctions. Finally, we evaluate STRUCT’s ability to generate sentences including conjunctions. We introduce the conjunction “and”, which allows for the root nonterminal of a new sentence (‘S’) to be adjoined to any other sentence. We then provide STRUCT with multiple goals. Given sufficient depth for the search (d = 3 was sufficient for our experiments, as our reward signal is fine-grained), STRUCT will produce two sentences joined by the conjunction “and”. Again, we follow prior work in our experiment design (Koller and Petrick, 2011). As we can see in Figures 4a, 4b, and 4c, STRUCT successfully generates results for conjunctions of up to five sentences. This is not a hard upper bound, but generation times begin to be impractically large at that point. Fortunately, human language tends toward shorter sentences than these unwieldy (but technically grammatical) sentences. STRUCT increases in generation time both as the number of sentences increases and as the number of objects per sentences increases. We compare our results to those presented in (Koller and Petrick, 2011) for CRISP with the FF Planner. They attempted to generate sentences with three entities and failed to find a result within their 4 GB memory limit. As we can see, CRISP generates a result slightly faster than STRUCT when we are working with a single entity, but works much much slower for two entities and cannot generate results for a third entity. According to Koller’s findings, this is because the search space grows by a factor of the universe size with the addition of another entity (Koller and Petrick, 2011). 5 Conclusion We have proposed STRUCT, a general-purpose natural language generation system which is comparable to current state-of-the-art generators. STRUCT formalizes the generation problem as an MDP and applies a version of the UCT algorithm, a fast online MDP planner, to solve it. Thus, STRUCT naturally handles probabilistic grammars. We demonstrate empirically that STRUCT is anytime, comparable to existing generation-asplanning systems in certain NLG tasks, and is also capable of handling other, more complex tasks such as negated communicative goals. Though STRUCT has many interesting properties, many directions for exploration remain. Among other things, it would be desirable to integrate STRUCT with discourse planning and dialog systems. 
Fortunately, reinforcement learning has already been investigated in such contexts (Lemon, 2011), indicating that an MDPbased generation procedure could be a natural fit in more complex generation systems. This is a primary direction for future work. A second direction is that, due to the nature of the approach, STRUCT is highly amenable to parallelization. None of the experiments reported here use parallelization, however, to be fair to CRISP. We plan to parallelize STRUCT in future work, to take advantage of current multicore architectures. This should obviously further reduce generation time. STRUCT is open source and available from github.com upon request. Acknowledgments This work was supported in part by NSF CNS1035602. SR was supported in part by CWRU award OSA110264. The authors are grateful to Umang Banugaria for help with the STRUCT implementation. References D. Bauer and A. Koller. 2010. Sentence generation as planning with probabilistic LTAG. Proceedings of the 10th International Workshop on Tree Adjoining Grammar and Related Formalisms, New Haven, CT. A.L. Blum and M.L. Furst. 1997. Fast planning through planning graph analysis. Artificial intelligence, 90(1):281–300. R. I. Brafman and M. Tennenholtz. 2003. R-MAX-a general polynomial time algorithm for near-optimal reinforcement learning. Journal of Machine Learning Research, 3:213–231. M. Fox and D. Long. 2003. PDDL2.1: An extension to PDDL for expressing temporal planning domains. Journal of Artificial Intelligence Research, 20:61– 124. 560 Jorg Hoffmann and Bernhard Nebel. 2001. The FF planning system: fast plan generation through heuristic search. Journal of Artificial Intelligence Research, 14(1):253–302, May. Martin Kay. 1996. Chart generation. In Proceedings of the 34th annual meeting on Association for Computational Linguistics, ACL ’96, pages 200– 204, Stroudsburg, PA, USA. Association for Computational Linguistics. M. Kearns, Y. Mansour, and A.Y. Ng. 1999. A sparse sampling algorithm for near-optimal planning in large Markov decision processes. In International Joint Conference on Artificial Intelligence, volume 16, pages 1324–1331. Lawrence Erlbaum Associates Ltd. K. Knight and V. Hatzivassiloglou. 1995. Two-level, many-paths generation. In Proceedings of the 33rd annual meeting on Association for Computational Linguistics, pages 252–260. Association for Computational Linguistics. Levente Kocsis and Csaba Szepesvari. 2006. Bandit based Monte-Carlo planning. In Proceedings of the Seventeenth European Conference on Machine Learning, pages 282–293. Springer. Alexander Koller and Ronald P. A. Petrick. 2011. Experiences with planning for natural language generation. Computational Intelligence, 27(1):23–40. A. Koller and M. Stone. 2007. Sentence generation as a planning problem. In Proceedings of the 45th Annual Meeting of the Association for Computational Linguistics, volume 45, page 336. I. Langkilde-Geary. 2002. An empirical verification of coverage and correctness for a general-purpose sentence generator. In Proceedings of the 12th International Natural Language Generation Workshop, pages 17–24. Oliver Lemon. 2011. Learning what to say and how to say it: joint optimization of spoken dialogue management and natural language generation. Computer Speech and Language, 25(2):210–221. W. Lu, H.T. Ng, and W.S. Lee. 2009. Natural language generation with tree conditional random fields. In Proceedings of the 2009 Conference on Empirical Methods in Natural Language Processing: Volume 1-Volume 1, pages 400–409. 
Association for Computational Linguistics. M.L. Puterman. 1994. Markov decision processes: Discrete stochastic dynamic programming. John Wiley & Sons, Inc. Ehud Reiter and Robert Dale. 2000. Building Natural Language Generation Systems. Cambridge University Press, January. Stuart M. Shieber. 1988. A uniform architecture for parsing and generation. In Proceedings of the 12th conference on Computational linguistics - Volume 2, COLING ’88, pages 614–619, Stroudsburg, PA, USA. Association for Computational Linguistics. Matthew Stone, Christine Doran, Bonnie Webber, Tonia Bleam, and Martha Palmer. 2003. Microplanning with communicative intentions: The SPUD system. Computational Intelligence, 19(4):311– 381. M. White and J. Baldridge. 2003. Adapting chart realization to CCG. In Proceedings of the 9th European Workshop on Natural Language Generation, pages 119–126. 561
2014
52
Proceedings of the 52nd Annual Meeting of the Association for Computational Linguistics, pages 562–571, Baltimore, Maryland, USA, June 23-25 2014. c⃝2014 Association for Computational Linguistics Generating Code-switched Text for Lexical Learning Igor Labutov Cornell University [email protected] Hod Lipson Cornell University [email protected] Abstract A vast majority of L1 vocabulary acquisition occurs through incidental learning during reading (Nation, 2001; Schmitt et al., 2001). We propose a probabilistic approach to generating code-mixed text as an L2 technique for increasing retention in adult lexical learning through reading. Our model that takes as input a bilingual dictionary and an English text, and generates a code-switched text that optimizes a defined “learnability” metric by constructing a factor graph over lexical mentions. Using an artificial language vocabulary, we evaluate a set of algorithms for generating code-switched text automatically by presenting it to Mechanical Turk subjects and measuring recall in a sentence completion task. 1 Introduction Today, an adult trying to learn a new language is likely to embrace an age-old and widely accepted practice of learning vocabulary through curated word lists and rote memorization. Yet, it is not uncommon to find yourself surrounded by speakers of a foreign language and instinctively pick up words and phrases without ever seeing the definition in your native tongue. Hearing “pass le sale please” at the dinner table from your in-laws visiting from abroad, is unlikely to make you think twice about passing the salt. Humans are extraordinarily good at inferring meaning from context, whether this context is your physical surrounding, or the surrounding text in the paragraph of the word that you don’t yet understand. Recently, a novel method of L2 language teaching had been shown effective in improving adult lexical acquisition rate and retention 1. This tech1authors’ unpublished work nique relies on a phenomenon that elicits a natural simulation of L1-like vocabulary learning in adults — significantly closer to L1 learning for L2 learners than any model studied previously. By infusing foreign words into text in the learner’s native tongue into low-surprisal contexts, the lexical acquisition process is facilitated naturally and non-obtrusively. Incidentally, this phenomenon occurs “in the wild” and is termed code-switching or code-mixing, and refers to the linguistic pattern of bilingual speakers swapping words and phrases between two languages during speech. While this phenomenon had received significant attention from both a socio-linguistic (Milroy and Muysken, 1995) and theoretical linguistic perspectives (Belazi et al., 1994; Bhatt, 1997) (including some computational studies), only recently has it been hypothesizes that “code-switching” is a marking of bilingual proficiency, rather than deficiency (Genesee, 2001). Until recently it was widely believed that incidental lexical acquisition through reading can only occur for words that occur at sufficient density in a single text, so as to elicit the “noticing” effect needed for lexical acquisition to occur (Cobb, 2007). Recent neurophysiological findings, however, indicate that even a single incidental exposure to a novel word in a sufficiently constrained context is sufficient to trigger an early integration of the word in the brain’s semantic network (Borovsky et al., 2012). 
An approach explored in this paper, and motivated by the above findings, exploits “constraining” contexts in text to introduce novel words. A state-of-the-art approach for generating such text is based on an expert annotator whose job is to decide which words to “switch out” with novel foreign words (from hereon we will refer to the “switched out” word as the source word and to the “switched in” word as the target word). Consequently the process is labor-intensive and leads to 562 a “one size fits all solution” that is insensitive to the learner’s skill level or vocabulary proficiency. This limitation is also cited in literature as a significant roadblock to the widespread adaptation of graded reading series (Hill, 2008). A readingbased tool that follows the same principle, i.e. by systematic exposure of a learner to an incrementally more challenging text, will result in more effective learning (Lantolf and Appel, 1994). To address the above limitation, we develop an approach for automatically generating such “codeswitched” text with an explicit goal of maximizing the lexical acquisition rate in adults. Our method is based on a global optimization approach that incorporates a “knowledge model” of a user with the content of the text, to generate a sequence of lexical “switches”. To facilitate the selection of “switch points”, we learn a discriminative model for predicting switch point locations on a corpus that we collect for this purpose (and release to the community). Below is a high-level outline of this paper. • We formalize our approach within a probabilistic graphical model framework, inference in which yields “code-switched” text that maximizes a surrogate to the acquisition rate objective. • We compare this global method to several baseline techniques, including the strong “high-frequency” baseline. • We analyze the operating range in which our model is effective and motivate the nearfuture extension of this approach with the proposed improvements. 2 Related Work Our proposed approach to the computational generation of code-switched text, for the purpose of L2 pedagogy, is influenced by a number of fields that studied aspects of this phenomenon from distinct perspectives. In this section, we briefly describe a motivation from the areas of socio- and psycho- linguistics and language pedagogy research that indicate the promise of this approach. 2.1 Code-switching as a natural phenomenon Code-switching (or code-mixing) is a widely studied phenomenon that received significant attention over the course of the last three decades, across the disciplines of sociolinguistics, theoretical and psycholinguistics and even literary and cultural studies (predominantly in the domain of SpanishEnglish code-switching) (Lipski, 2005). Code-switching that occurs naturally in bilingual populations, and especially in children, has for a long time been considered a marking of incompetency in the second language. A more recent view on this phenomenon, however, suggests that due to the underlying syntactic complexity of code-switching, code-switching is actually a marking of bilingual fluency (Genesee, 2001). More recently, the idea of employing code-switching in the classroom, in a form of conversation-based exercises, has attracted the attention of multiple researchers and educators (Moodley, 2010; Macaro, 2005), yielding promising results in an elementary school study in SouthAfrica. 
2.2 Computational Approaches to Code-switching Additionally, there has been a limited number of studies of the computational approaches to code-switching, and in particular code-switched text generation. Solorio and Liu (2008), record and transcribe a corpus of Spanish-English codemixed conversation to train a generative model (Naive Bayes) for the task of predicting codeswitch points in conversation. Additionally they test their trained model in its ability to generate code-switched text with convincing results. Building on their work, (Adel et al., 2012) employ additional features and a recurrent network language model for modeling code-switching in conversational speech. Adel and collegues (2011) propose a statistical machine translation-based approach for generating code-switched text. We note, however, that the primary goal of these methods is in the faithful modeling of the natural phenomenon of code-switching in bilingual populations, and not as a tool for language teaching. While useful in generating coherent, syntactically constrained code-switched texts in its own right, none of these methods explicitly consider code-switching as a vehicle for teaching language, and thus do not take on an optimization-based view with an objective of improving lexical acquisition through the reading of the generated text. More recently, and concurrently with our work, Google’s Language Immersion app employs the principle of 563 code-switching for language pedagogy, by generating code-switched web content, and allowing its users to tune it to their skill level. It does not, however, seem to model the user explicitly, nor is it clear if it performs any optimization in generating the text, as no studies have been published to date. 2.3 Computational Approaches to Sentence Simplification Although not explicitly for teaching language, computational approaches that facilitate accessibility to texts that might otherwise be too difficult for its readers, either due to physical or learning disabilities, or language barriers, are relevant. In the recent work of (Kauchak, 2013), for example demonstrates an approach to increasing readability of texts by learning from unsimplified texts. Approaches in this area span methods for simplifying lexis (Yatskar et al., 2010; Biran et al., 2011), syntax (Siddharthan, 2006; Siddharthan et al., 2004), discourse properties (Hutchinson, 2005), and making technical terminology more accessible to non-experts (Elhadad and Sutaria, 2007). While the resulting texts are of great potential aid to language learners and may implicitly improve upon a reader’s language proficiency, they do not explicitly attempt to promote learning as an objective in generating the simplified text. 2.4 Recent Neurophysiological findings Evidence for the potential effectiveness of codeswitching for language acquisition, stem from the recent findings of (Borovsky et al., 2012), who have shown that even a single exposure to a novel word in a constrained context, results in the integration of the word within your existing semantic base, as indicated by a change in the N400 electrophysiological response recorded from the subjects’ scalps. N400 ERP marker has been found to correlate with the semantic “expectedness” of a word (Kutas and Hillyard, 1984), and is believed to be an early indicator of word learning. 
Furthermore, recent work of (Frank et al., 2013), show that word surprisal predicts N400, providing concrete motivation for artificial manipulation of text to explicitly elicit word learning through natural reading, directly motivating our approach. Prior to the above findings, it was widely believed that for evoking “incidental” word learning through reading alone, the word must appear with sufficiently high frequency within the text, such as to elicit the “noticing” effect — a prerequisite to lexical acquisition (Schmidt and Schmidt, 1995; Cobb, 2007). 3 Model 3.1 Overview The formulation of our model is primarily motivated by two hypotheses that have been validated experimentally in the cognitive science literature. We re-state these hypotheses in the language of “surprisal”: 1. Inserting a target word into a low surprisal context increases the rate of that word’s integration into a learner’s lexicon. 2. Multiple exposures to the word in low surprisal contexts increases rate of that word’s integration. Hypothesis 1 is supported by evidence from (Borovsky et al., 2012; Frank et al., 2013), and hypothesis 2 is supported by evidence from (Schmidt and Schmidt, 1995). We adopt the term “lowsurprisal” context to identify contexts (e.g. ngrams) that are highly predictive of the target word (e.g. trailing word in the n-gram). The motivation stems from the recent evidence (Frank et al., 2013) that low-surprisal contexts affect the N400 response and thus correlate with word acquisition. To realize a “code-switched” mixture that adheres maximally to the above postulates, it is self-evident that a non-trivial optimization problem must be solved. For example, naively selecting a few words that appear in low-surprisal contexts may facilitate their acquisition, but at the expense of other words within the same context that may appear in a larger number of low-surprisal contexts further in the text. To address this problem, we approach it with a formulation of a factor graph that takes global structure of the text into account. Factor graph formalism allows us to capture local features of individual contexts, such as lexical and syntactic surprisal, while inducing dependencies between consequent “switching decisions” in the text. Maximizing likelihood of the joint probability under the factorization of this graph yields an optimal sequence of these “switching decisions” in the entirety of the text. Maximizing joint likelihood, as we will show in the next section, is a surrogate to maximizing the probability of the learner acquiring novel words through the process of reading the generated text. 564 wi Known + Constrained Unknown + Constrained Unknown + Unconstrained Known + Unconstrained ... w1 w2 w3 w4 w5 w|V | KNOW DON’T KNOW θi k Meaning of malhela? Existing knowledge of word User’s lexical knowledge model zi k The door malhela to the beach wi infused word KNOW DON’T KNOW Contextual Interpretation of word Updated knowledge belief Updated Knowldge Model LEGEND Mixed-Language Content Figure 1: Overview of the approach. Probabilistic learner model (PLM) provides the current value of the belief in the learner’s knowledge of any given word. Local contextual model provides the value of the belief in learning the word from the context alone. Upon exposure of the learner to the word in the given context, PLM is updated with the posterior belief in the user’s knowledge of the word. 
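The hypotheses above are stated in terms of surprisal. As a minimal illustration (the language-model interface here is an assumption, and the full system instead trains the classifier described in Section 3.6), a context's constraint on a target word can be scored by the word's surprisal under an n-gram model:

    import math

    def surprisal(word, context, lm):
        """Surprisal, in bits, of `word` given its preceding context under a
        language model exposing conditional probabilities (hypothetical interface)."""
        p = lm.cond_prob(word, context)   # P(word | context)
        return float("inf") if p == 0 else -math.log(p, 2)

    def is_low_surprisal(word, context, lm, threshold=4.0):
        # Low surprisal = highly constraining context; the threshold is illustrative.
        return surprisal(word, context, lm) <= threshold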
3.2 Language Learner Model

A simplified model of the learner, which we shall term a Probabilistic Learner Model (PLM), serves as a basis for our approach. The PLM is a model of a learner's lexical knowledge at any given time. It models the learner as a vector of independent Bernoulli distributions, where each component represents a probability of the learner knowing the corresponding word. We motivate a probabilistic approach by taking the perspective of measuring our belief in the learner's knowledge of any given word, rather than the learner's uncertainty in her own knowledge. Formally, we can fully specify this model for learner i as follows:

U^i = (π^i_0, π^i_1, . . . , π^i_{|V|})    (1)

where V is the vocabulary set — identical across all users, and π^i_j is our degree of belief in learner i's knowledge of a target word w_j ∈ V. Statistical estimation techniques exist for estimating an individual's vocabulary size, such as (Bhat and Sproat, 2009; Beglar, 2010), and can be di-
Intuitively, an update to the parameter of zi k from zi k−1 occurs after the learner observes word wi in a context (this may be an n-gram, an entire sentence or paragraph containing wi, but we will restrict our attention to fixed-length n-grams). Intuitively an update to the parameter of zi k−1 will depend on how “constrained” the meaning of wi is in the given context. We will refer to it as the “learnability”, denoted by Lk i , of word wi on its kth appearance, given its context. Formally, we will define “learnability” as follows: P(Li k = 1|wi, w\i, z\i k ) = P(constrained(wi) = 1|w) Y i̸=j P(zj k = 1) (2) where w\i represents the set of words that comprise the context window of wi, not including wi, and z\i k are the states corresponding to each of the words in w\i. P(constrained(wi) = 1|w) is a real value (scaled between 0 and 1) that represents the degree of constraint imposed on the meaning of word wi by its context. This value comes from a binary prediction model trained to predict the “predictability” of a word in its context, and is based on the dataset that we collected (described later in the paper). Generally, this value may come directly from the surprisal quantity given by a language model, or may incorporate additional features that are found informative in predicting the constraint on the word. Finally, the quantity is weighted by the parameters of the state variables corresponding to the words other than wi contained in the context. This encodes an intuition that a degree of predictability of a given word given its context is related to the learner’s knowledge of the other words in that context. If, for example, in the sentence “pass me the salt and pepper, please”, both “salt” and “pepper” are substituted with their foreign translations that the learner is unlikely to know, it’s equally unlikely that she will learn them after being exposed to this context, as the context itself will not offer sufficient 566 information for both words to be inferred simultaneously. On the other hand, substituting “salt” and “pepper” individually, is likely to make it much easier to infer the meaning of the other. zi k−1 zi k Li k Figure 2: A noisy-OR combination of the learner’s previous state of knowledge of the word zi k−1 and the word’s “learnability” in the observed context Li k The updated parameter of zi k is obtained from a noisy-OR combination of the parameters of zi k−1 and Li k: P(zi k = 1|zi k−1, Li k) = 1 −[1 −P(Li k = 1)][1 −P(zk−1 = 1)] A noisy-OR-based CPD provides a convenient and tractable approximation in capturing the intended intuition: updated state of knowledge of a given word will increase if the word is observed in a “good” context, or if the learner already knows the word. Combining Equation 2 for each word in the context using the noisy-OR, the updated state for word wi will now be conditioned on zi k−1, z\i k , wk. Because of the dependence of each z in the context on all other hidden variables in that context, we can capture the dependence using a single factor per context, with all of the z variables taking part in a clique, whose dimension is the size of the context. We will now introduce a dual interpretation of the z variables: as “switching” variables that decide whether a given word will be replaced with its translation in the foreign language. If, for example, all of the words have high probability of being known by a learner, than maximizing the joint likelihood of the model will result in most of the words “switched-out” — a desired result. 
For an arbitrary prior PLM and input text, maximizing the joint likelihood will result in the selection of "switched-out" words that have the highest final probability of being "known" by the learner.

3.5 Inference

The problem of selecting "switch-points" reduces to the problem of inference in the resulting factor graph. Unfortunately, without a fairly strong constraint on the collocation of switched words, the resulting graph will contain loops, requiring approximate inference techniques. To find the optimal settings of the z variables, we apply the loopy max-sum algorithm. While variants of loopy belief propagation are, in general, not guaranteed to converge, we found that convergence does occur in our experiments.

Figure 3: The sequence of sentences in the text (left) is mapped into a factor graph whose nodes correspond to specific occurrences of individual words, connected in a clique corresponding to the context in which each word occurs.

3.6 Predicting "predictable" words

We carried out experiments to determine which words are likely to be inferred from their context. The collected dataset is then used to train a logistic regression classifier that predicts which words are likely to be easily inferred from their context. We believe that this dataset may also be useful to researchers studying related phenomena, and we therefore make it publicly available. For this task, we focus only on the following context features for predicting the "predictability" of words: n-gram probability, vector-space similarity score (we employ C&W word embeddings from http://metaoptimize.com/projects/wordreprs/), and co-referring mentions. The n-gram probability and vector-space similarity score are both computed within a fixed-size window of the word (trigrams, using the Microsoft N-gram service). The coreference feature is a binary feature indicating whether the word has a co-referring mention in a 3-sentence window preceding the given context (obtained using Stanford's CoreNLP package). We train an L2-regularized logistic regression to predict a binary label L ∈ {Constrained, Unconstrained} using the crowd-sourced corpus described below.

3.7 Corpus Construction

To collect data about which words are likely to be "predicted" given their context, we developed an Amazon Mechanical Turk task that presented turkers with excerpts of a short story (an English translation of "The Man Who Repented" by Ana Maria Matute), with some sentences containing a blank in place of a word. Only content words were considered for the task. Turkers were required to type in their best guess for each blank, and the guesses were then judged for semantic similarity by an average of 6 other turkers. The ratio of the median number of semantically similar guesses to the total number of guesses was taken as the score representing the "predictability" of the word in the given context. All words corresponding to blanks whose scores were equal to or above 0.6 were then given a positive label (Constrained), and those with scores below 0.6 a negative label (Unconstrained). Turkers who judged the semantic similarity of the guesses of other turkers achieved an average Cohen's kappa agreement of 0.44, indicating fair to poor agreement.

4 Experiments

We carried out experiments on the effectiveness of our approach using the Amazon Mechanical Turk platform.
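Before turning to the experimental procedure, the sketch below illustrates the binary "predictability" classifier of Section 3.6, trained on crowd-sourced labels of the kind just described. The feature values and the constraint_score helper are invented for the example; extracting n-gram probabilities, embedding similarities and coreference flags is assumed to happen upstream.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

# Each row: [trigram log-probability, embedding similarity to context, has_coref]
# The values are made up purely to keep the example runnable.
X = np.array([
    [-2.1, 0.62, 1],
    [-5.8, 0.10, 0],
    [-3.0, 0.55, 1],
    [-6.5, 0.05, 0],
    [-2.7, 0.48, 0],
    [-7.2, 0.12, 0],
])
y = np.array([1, 0, 1, 0, 1, 0])  # 1 = Constrained, 0 = Unconstrained

# L2-regularized logistic regression (C controls the regularization strength).
clf = LogisticRegression(penalty="l2", C=1.0)
clf.fit(X, y)

def constraint_score(features):
    """P(constrained = 1 | context features), usable in Equation 2."""
    return clf.predict_proba(np.asarray(features).reshape(1, -1))[0, 1]

print(constraint_score([-2.5, 0.50, 1]))
```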
Our experimental procedure was as follows: 162 turkers were partitioned into four groups, each corresponding to a treatment condition: OPT (N=34), HF (N=41), RANDOM (N=43) and MAN (N=44). Each condition corresponded to a model used to generate the presented code-switched text.

Figure 4: Visualization of the most "predictable" words in an excerpt from "The Man Who Repented" by Ana Maria Matute (English translation). Font size correlates with the score given by judge turkers when evaluating the guesses of other turkers who were presented with the same text but with the word replaced by a blank. A snippet of the dataset that we release publicly.

For all experiments, the text used was the short story "The Lottery" by Shirley Jackson, and the total number of replaced words was held constant (34). The target vocabulary consisted of words from an artificial language, generated statically as a mix of words from several languages. Below we describe the individual treatment conditions:

RANDOM (Baseline): words for switching are selected at random from content words only.

HF (High Frequency) Baseline: words for switching are selected at random from a ranked list of the words that occur most frequently in the presented text.

MAN (Manual) Baseline: words for switching are selected manually by the author, based on intuition about which words are most likely to be guessed in context.

OPT (Optimization-based): the factor-graph-based model proposed in this paper is used to generate the code-switched content. The total number of switched words generated by this method is used as the constant for all baselines.

Turkers were solicited to participate in a study that involved "reading a short story with a twist" (the title of the HIT). Neither the title nor the description gave away the purpose of the study, nor that it would be followed by a quiz. Time was not controlled for this study, but on average turkers took 27 minutes to complete the reading. Upon completing the reading portion of the task, turkers were presented with pairs of novel sentences featuring the words observed during reading, where only one of the sentences used the word in a semantically correct way. Turkers were asked to select the sentence that "made the most sense". Examples of the sentences presented during the test:

Example 1
✓ My edzino loves to go shopping every weekend.
The edzino was too big to explore on our own, so we went with a group.
English word: wife

Example 2
✓ His unpreadvers were utterly confusing and useless.
The unpreadvers was so strong that he had to go to a hospital.
English word: directions

A "recall" metric was computed for each turker, defined as the ratio of correctly selected sentences to the total number of sentences presented. The "grand-average recall" across all turkers was then computed and is reported here.

5 Results

We performed a one-way ANOVA across the four groups listed above, yielding F = 11.38 and p = 9.7e-7. Consequently, multiple pairwise comparisons of the models were performed with Bonferroni-corrected pairwise t-tests, yielding significantly different recall means only between HF and MAN (p = 0.00018), RANDOM and MAN (p = 2.8e-6), and RANDOM and OPT (p = 0.00587). The results indicate that, while none of the automated methods (RANDOM, HF, OPT) outperform manually generated code-switched text, OPT outperforms the RANDOM baseline (no decisive conclusion can be drawn with respect to the HF vs. RANDOM pair).
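The statistical analysis above is straightforward to reproduce once per-turker recall scores are available. The sketch below uses SciPy's one-way ANOVA and independent-samples t-tests with a manual Bonferroni correction; the recall arrays are placeholders rather than the real per-group scores.

```python
from itertools import combinations
import numpy as np
from scipy import stats

# Placeholder per-turker recall scores for each treatment condition.
groups = {
    "OPT": np.array([0.55, 0.60, 0.48, 0.70, 0.52]),
    "HF": np.array([0.58, 0.66, 0.61, 0.73, 0.57]),
    "RANDOM": np.array([0.35, 0.42, 0.30, 0.44, 0.38]),
    "MAN": np.array([0.72, 0.80, 0.68, 0.77, 0.74]),
}

# Grand-average recall per condition.
for name, scores in groups.items():
    print(f"{name}: mean recall = {scores.mean():.3f}")

# One-way ANOVA across the four conditions.
f_stat, p_value = stats.f_oneway(*groups.values())
print(f"ANOVA: F = {f_stat:.2f}, p = {p_value:.2g}")

# Bonferroni-corrected pairwise t-tests.
pairs = list(combinations(groups, 2))
for a, b in pairs:
    t, p = stats.ttest_ind(groups[a], groups[b])
    p_corrected = min(p * len(pairs), 1.0)
    print(f"{a} vs. {b}: corrected p = {p_corrected:.4f}")
```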
Additionally, we note that for words with frequency less than 4, OPT produces recall that is on average higher than that of the HF baseline (p = 0.043, Welch's t-test), but at the expense of higher-frequency words.

Figure 5: Results for the 4 groups subjected to the 4 treatment conditions: RANDOM, HF, MAN and OPT. Recall for each group is the average ratio of selected sentences that correctly use code-switched words in novel contexts, across all turkers.

Figure 6: Subset of the results for 2 of the 4 treatment conditions (HF and OPT), showing recall only for words whose item frequency in the presented text is below 4.

6 Discussion

We observe from our experiments that the optimization-based approach does not in general outperform the HF baseline. The strength of the frequency-based baseline can be attributed to the well-known phenomenon that item frequency promotes the "noticing" effect during reading, which is critical for triggering incidental lexical acquisition. Generating code-switched text by replacing high-frequency content words is thus, in general, a simple and viable approach to producing effective reading-based L2 curriculum aids. However, this method is fundamentally less flexible than the optimization-based method proposed in this paper, for several reasons:

• The optimization-based method explicitly models the learner and thus generates code-switched text that is progressively better fit to a given individual, even across a sequence of multiple texts. A frequency-based baseline alone would consistently generate content at approximately the same level of difficulty, with words that have high frequency in the language in general tending to be the ones that are "switched out" most often.

• An optimization-based approach is able to elicit higher recall for low-frequency words, as the mechanism for their selection is driven by the context in which these words appear rather than by frequency alone, favoring those that are learned more readily through context.

Moreover, the proposed method is extensible to more sophisticated learner models, with the potential to surpass the results presented here. Another worthwhile application of this method is as a nested component within a larger optimization-based tool that, in addition to generating code-switched text as demonstrated here, aids in selecting content (such as popular books) as units in the code-switched curriculum.

7 Future Work

In this work we demonstrated a pilot implementation of a model-based, optimization-driven approach to content generation for assisting reading-based L2 language acquisition. Our approach is based on static optimization, and while it would in theory progress in difficulty with more reading, its open-loop nature precludes it from maintaining an accurate model of the learner in the long term. To generate effective L2 content, it is important that the user be kept in a "zone of proximal development", a tight region where the level of the taught content is at just the right difficulty. Maintaining an accurate internal model of the learner is the single most important requirement for achieving this functionality. Closed-loop learning with active user feedback is thus going to be a functionally critical component of any system of this type that is designed to function in the long term.
Additionally, our approach is currently a proofof-concept of an automated method for generating content for assisted L2 acquisition, and is limited to artificial language and only isolated lexical items. The next step would be to integrate bitext alignment across texts in two natural languages, inevitably introducing another stochastic component into the pipeline. Extending this method to larger units, like chunks and simple grammar is another important avenue along which we are taking this work. Early results from concurrent research indicate that “code-switched based” method proposed here is also effective in eliciting acquisition of multi-word chunks. References Heike Adel, Ngoc Thang Vu, Franziska Kraus, Tim Schlippe, Haizhou Li, and Tanja Schultz. 2012. Re570 current neural network language modeling for code switching conversational speech. ICASSP. David Beglar. 2010. A rasch-based validation of the vocabulary size test. Language Testing, 27(1):101– 118. Hedi M Belazi, Edward J Rubin, and Almeida Jacqueline Toribio. 1994. Code switching and x-bar theory: The functional head constraint. Linguistic inquiry, pages 221–237. Suma Bhat and Richard Sproat. 2009. Knowing the unseen: estimating vocabulary size over unseen samples. In Proceedings of the Joint Conference of the 47th Annual Meeting of the ACL and the 4th International Joint Conference on Natural Language Processing of the AFNLP: Volume 1-Volume 1, pages 109–117. Association for Computational Linguistics. Rakesh Mohan Bhatt. 1997. Code-switching, constraints, and optimal grammars. Lingua, 102(4):223–251. Or Biran, Samuel Brody, and Noemie Elhadad. 2011. Putting it simply: a context-aware approach to lexical simplification. Fabian Blaicher. 2011. SMT-based Text Generation for Code-Switching Language Models. Ph.D. thesis, Nanyang Technological University, Singapore. Arielle Borovsky, Jeffrey L Elman, and Marta Kutas. 2012. Once is enough: N400 indexes semantic integration of novel word meanings from a single exposure in context. Language Learning and Development, 8(3):278–302. Tom Cobb. 2007. Computing the vocabulary demands of l2 reading. Language Learning & Technology, 11(3):38–63. Noemie Elhadad and Komal Sutaria. 2007. Mining a lexicon of technical terms and lay equivalents. In Proceedings of the Workshop on BioNLP 2007: Biological, Translational, and Clinical Language Processing, pages 49–56. Association for Computational Linguistics. Stefan L Frank, Leun J Otten, Giulia Galli, and Gabriella Vigliocco. 2013. Word surprisal predicts n400 amplitude during reading. In Proceedings of the 51st annual meeting of the Association for Computational Linguistics, pages 878–883. Fred Genesee. 2001. Bilingual first language acquisition: Exploring the limits of the language faculty. Annual Review of Applied Linguistics, 21:153–168. David R Hill. 2008. Graded readers in english. ELT journal, 62(2):184–204. Ben Hutchinson. 2005. Modelling the substitutability of discourse connectives. In Proceedings of the 43rd Annual Meeting on Association for Computational Linguistics, pages 149–156. Association for Computational Linguistics. David Kauchak. 2013. Improving text simplification language modeling using unsimplified text data. In Proceedings of ACL. Marta Kutas and Steven A Hillyard. 1984. Brain potentials during reading reflect word expectancy and semantic association. Nature. James P Lantolf and Gabriela Appel. 1994. Vygotskian approaches to second language research. Greenwood Publishing Group. John M Lipski. 2005. Code-switching or borrowing? 
no s´e so no puedo decir, you know. In Selected Proceedings of the Second Workshop on Spanish Sociolinguistics, pages 1–15. Ernesto Macaro. 2005. Codeswitching in the l2 classroom: A communication and learning strategy. In Non-native language teachers, pages 63–84. Springer. Lesley Milroy and Pieter Muysken. 1995. One speaker, two languages: Cross-disciplinary perspectives on code-switching. Cambridge University Press. Visvaganthie Moodley. 2010. Code-switching and communicative competence in the language classroom. Journal for Language Teaching, 44(1):7–22. Ian SP Nation. 2001. Learning vocabulary in another language. Ernst Klett Sprachen. Richard C Schmidt and Richard W Schmidt. 1995. Attention and awareness in foreign language learning, volume 9. Natl Foreign Lg Resource Ctr. Norbert Schmitt, Diane Schmitt, and Caroline Clapham. 2001. Developing and exploring the behaviour of two new versions of the vocabulary levels test. Language testing, 18(1):55–88. Advaith Siddharthan, Ani Nenkova, and Kathleen McKeown. 2004. Syntactic simplification for improving content selection in multi-document summarization. In Proceedings of the 20th international conference on Computational Linguistics, page 896. Association for Computational Linguistics. Advaith Siddharthan. 2006. Syntactic simplification and text cohesion. Research on Language and Computation, 4(1):77–109. Thamar Solorio and Yang Liu. 2008. Learning to predict code-switching points. In Proceedings of the Conference on Empirical Methods in Natural Language Processing, pages 973–981. Association for Computational Linguistics. Mark Yatskar, Bo Pang, Cristian Danescu-NiculescuMizil, and Lillian Lee. 2010. For the sake of simplicity: Unsupervised extraction of lexical simplifications from wikipedia. In Human Language Technologies: The 2010 Annual Conference of the North American Chapter of the Association for Computational Linguistics, pages 365–368. Association for Computational Linguistics. 571
Proceedings of the 52nd Annual Meeting of the Association for Computational Linguistics, pages 572–581, Baltimore, Maryland, USA, June 23-25 2014. ©2014 Association for Computational Linguistics

Omni-word Feature and Soft Constraint for Chinese Relation Extraction

Yanping Chen†, Qinghua Zheng†, Wei Zhang‡
†MOEKLINNS Lab, Department of Computer Science and Technology, Xi'an Jiaotong University, China
‡Amazon.com, Inc.
[email protected], [email protected], [email protected]

Abstract

Chinese is an ancient language written with ideographic characters, and its written form provides few explicit structural cues. Segmenting and parsing Chinese are therefore more difficult and less accurate. In this paper, we propose an Omni-word feature and a soft constraint method for Chinese relation extraction. The Omni-word feature uses every potential word in a sentence as a lexical feature, reducing errors caused by word segmentation. To utilize the structure information of a relation instance, we discuss how soft constraints can be used to capture local dependencies. Both the Omni-word feature and the soft constraints make better use of sentence information and minimize the influence of errors introduced by Chinese word segmentation and parsing. We test these methods on the ACE 2005 RDC Chinese corpus. The results show a significant improvement in Chinese relation extraction, outperforming other methods in F-score by 10% on 6 relation types and 15% on 18 relation subtypes.

1 Introduction

Information Extraction (IE) aims at extracting syntactic or semantic units with concrete concepts or linguistic functions (Grishman, 2012; McCallum, 2005). Instead of dealing with whole documents, most IE systems focus on designated information, extracting named entities, relations, quantifiers or events from sentences. The relation recognition task is to find the relationships between two entities. Successful recognition of a relation implies correctly detecting both the relation arguments and the relation type. Although this task has received extensive research attention, the performance of relation extraction is still unsatisfactory, with an F-score of 67.5% for English (23 subtypes) (Zhou et al., 2010). Chinese relation extraction also performs weakly, with an F-score of about 66.6% on 18 subtypes (Dandan et al., 2012).

The difficulty of Chinese IE is that Chinese words are written next to each other without delimiters in between. The lack of orthographic word boundaries makes Chinese word segmentation difficult. In Chinese, a single sentence often has several segmentation paths, leading to the segmentation ambiguity problem (Liang, 1984). The lack of delimiters also causes the Out-of-Vocabulary problem (OOV, also known as new word detection) (Huang and Zhao, 2007). These problems are worsened by the fact that Chinese has a large number of characters and words. Currently, the state-of-the-art Chinese OOV recognition systems reach only about 75% recall (Zhong et al., 2012). The errors caused by segmentation and OOV accumulate and propagate to subsequent processing (e.g. part-of-speech (POS) tagging or parsing). Therefore, Chinese relation extraction is more difficult. According to our survey, compared to the same work in English, research on Chinese relation extraction has made less progress.

Based on the characteristics of Chinese, in this paper, an Omni-word feature and a soft constraint method are proposed for Chinese relation extraction. We apply these approaches in a maximum entropy based system to extract relations from the ACE 2005 corpus.
Experimental results show that our method has made a significant improvement. The contributions of this paper include 1. Propose a novel Omni-word feature for Chinese relation extraction. Unlike the traditional segmentation based method, which is a partition of the sentence, the Omni-word feature uses every potential word in a sentence as lexicon feature. 2. Aiming at the Chinese inattentive structure, we utilize the soft constraint to capture the local dependency in a relation instance. Four constraint conditions are proposed to gener572 ate combined features to capture the local dependency and maximize the classification determination. The rest of this paper is organized as follows. Section 2 introduces the related work. The Omniword feature and soft constrain are proposed in Section 3. We give the experimental results in Section 3.2 and analyze the performance in Section 4. Conclusions are given in Section 5. 2 Related Work There are two paradigms extracting the relationship between two entities: the Open Relation Extraction (ORE) and the Traditional Relation Extraction (TRE) (Banko et al., 2008). Based on massive and heterogeneous corpora, the ORE systems deal with millions or billions of documents. Even strict filtrations or constrains are employed to filter the redundancy information, they often generate tens of thousands of relations dynamically (Hoffmann et al., 2010). The practicability of ORE systems depends on the adequateness of information in a big corpus (Brin, 1999). Most of the ORE systems utilize weak supervision knowledge to guide the extracting process, such as: Databases (Craven and Kumlien, 1999), Wikipedia (Wu and Weld, 2007; Hoffmann et al., 2010), Regular expression (Brin, 1999; Agichtein and Gravano, 2000), Ontology (Carlson et al., 2010; Mohamed et al., 2011) or Knowledge Base extracted automatically from Internet (Mintz et al., 2009; Takamatsu et al., 2012). However, when iteratively coping with large heterogeneous data, the ORE systems suffer from the “semantic drift” problem, caused by error accumulation (Curran et al., 2007). Agichtein, Carlson and Fader et al. (2010; 2011; 2000) propose syntactic and semantic constraints to prevent this deficiency. The soft constraints, proposed in this paper, are combined features like these syntactic or semantic constraints, which will be discussed in Section 3.2. The TRE paradigm takes hand-tagged examples as input, extracting predefined relation types (Banko et al., 2008). The TRE systems use techniques such as: Rules (Regulars, Patterns and Propositions) (Miller et al., 1998), Kernel method (Zhang et al., 2006b; Zelenko et al., 2003), Belief network (Roth and Yih, 2002), Linear programming (Roth and Yih, 2007), Maximum entropy (Kambhatla, 2004) or SVM (GuoDong et al., 2005). Compared to the ORE systems, the TRE systems have a robust performance. Disadvantages of the TRE systems are that the manually annotated corpus is required, which is timeconsuming and costly in human labor. And migrating between different applications is difficult. However, the TRE systems are evaluable and comparable. Different systems running on the same corpus can be evaluated appropriately. In the field of Chinese relation extraction, Liu et al. (2012) proposed a convolution tree kernel. Combining with external semantic resources, a better performance was achieved. Che et al. (2005) introduced a feature based method, which utilized lexicon information around entities and was evaluated on Winnow and SVM classifiers. Li and Zhang et al. 
(2008; 2008) explored the position feature between two entities. For each type of these relations, a SVM was trained and tested independently. Based on Deep Belief Network, Chen et al. (2010) proposed a model handling the high dimensional feature space. In addition, there are mixed models. For example, Lin et al. (2010) employed a model, combining both the feature based and the tree kernel based methods. Despite the popularity of kernel based method, Huang et al. (2008) experimented with different kernel methods and inferred that simply migrating from English kernel methods can result in a bad performance in Chinese relation extraction. Chen and Li et al. (2008; 2010) also pointed out that, due to the inaccuracy of Chinese word segmentation and parsing, the tree kernel based approach is inappropriate for Chinese relation extraction. The reason of the tree kernel based approach not achieve the same level of accuracy as that from English may be that segmenting and parsing Chinese are more difficult and less accurate than processing English. In our research, we proposed an Omni-word feature and a soft constraint method. Both approaches are based on the Chinese characteristics. Therefore, better performance is expected. In the following, we introduce the feature construction, which discusses the proposed two approaches. 3 Feature Construction In this section, the employed candidate features are discussed. And four constraint conditions are proposed to transform the candidate features into combined features. The soft constraint is the 573 method to generate the combine features1. 3.1 Candidate Feature Set In the ACE corpus, an entity is an object or set of objects in the world. An entity mention is a reference to an entity. The entity mention is annotated with its full extent and its head, referred to as the extend mention and the head mention respectively. The extent mention includes both the head and its modifiers. Each relation has two entities as arguments: Arg-1 and Arg-2, referred to as E1 and E2. A relation mention (or instance) is the embodiment of a relation. It is referred by the sentence (or clause) in which the relation is located in. In our work, we focus on the detection and recognition of relation mention. Relation identification is handled as a classification problem. Entity-related information (e.g. head noun, entity type, subtype, CLASS, LDCTYPE, etc.) are supposed to be known and provided by the corpus. In our experiment, the entity type, subtype and the head noun are used. All the employed features are simply classified into five categories: Entity Type and Subtype, Head Noun, Position Feature, POS Tag and Omniword Feature. The first four are widely used. The last one is proposed in this paper and is discussed in detail. Entity Type and Subtype: In ACE 2005 RDC Chinese corpus, there are 7 entity types (Person, Organization, GPE, Location, Facility, Weapon and Vehicle) and 44 subtypes (e.g. Group, Government, Continent, etc.). Head Noun: The head noun (or head mention) of entity mention is manually annotated. This feature is useful and widely used. Position Feature: The position structure between two entity mentions (extend mentions). Because the entity mentions can be nested, two entity mentions may have four coarse structures: “E1 is before E2”, “E1 is after E2”, “E1 nests in E2” and “E2 nests in E1”, encoded as: ‘E1_B_E2’, ‘E1_A_E2’, ‘E1_N_E2’ and ‘E2_N_E1’. POS Tag: In our model, we use only the adjacent entity POS tags, which lie in two sides of the entity mention. 
These POS tags are labelled by the ICTCLAS package2. The POS tags are not used independently. It is encoded by combining 1If without ambiguity, we also use the terminology of “soft constraint” denoting features generated by the employed constraint conditions. 2http://ictclas.org/ the POS tag with the adjacent entity mention information. For example ‘E1_Right_n’ means that the right side of the first entity is a noun (“n”). Omni-word Feature: The notion of “word” in Chinese is vague and has never played a role in the Chinese philological tradition (Sproat et al., 1996). Some Chinese segmentation performance has been reported precision scores above 95% (Peng et al., 2004; Xue, 2003; Zhang et al., 2003). However, for the same sentence, even native peoples in China often disagree on word boundaries (Hoosain, 1992; Yan et al., 2010). Sproat et al. (1996) has showed that there is a consistence of 75% on the segmentation among different native Chinese speakers. The word-formation of Chinese also implies that the meanings of a compound word are made up, usually, by the meanings of words that contained in it (Hu and Du, 2012). So, fragments of phrase are also informative. Because high precision can be received by using simple lexical features (Kambhatla, 2004; Li et al., 2008). Making better use of such information is beneficial. In consideration of the Chinese characteristics, we use every potential word in a relation mention as the lexical features. For example, relation mention ‘台北大安森林公园’ (Taipei Daan Forest Park) has a ”PART-WHOLE” relation type. The traditional segmentation method may generate four lexical features {‘台北’, ‘大安’, ‘森 林’, ‘公园’}, which is a partition of the relation mention. On the other hand, the Omni-word feature denoting all the possible words in the relation mention may generate features as: {‘台’, ‘北’, ‘大’, ‘安’, ‘森’, ‘林’, ‘公’, ‘园’, ‘台北’, ‘大安’, ‘森林’, ‘公园’, ‘森林公园’, ‘大安森林公园’}3 Most of these features are nested or overlapped mutually. So, the traditional character-based or word-based feature is only a subset of the Omniword feature. To extract the Omni-word feature, only a lexicon is required, then scan the sentence to collect every word. Because the number of lexicon entry determines the dimension of the feature space, performance of Omni-word feature is influenced by the lexicon being employed. In this paper, we generate the lexicon by merging two lexicons. The first lexicon 3The generated Omni-word features dependent on the employed lexicon. 574 is obtained by segmenting every relation instance using the ICTCLAS package, collecting very word produced by ICTCLAS. Because the ICTCLAS package was trained on annotated corpus containing many meaningful lexicon entries. We expect this lexicon to improve the performance. The second lexicon is the Lexicon Common Words in Contemporary Chinese4. Despite the Omni-word can be seen as a subset of n-Gram feature. It is not the same as the n-Gram feature. N-Gram features are more fragmented. In most of the instances, the n-Gram features have no semantic meanings attached to them, thus have varied distributions. Furthermore, for a single Chinese word, occurrences of 4 characters are frequent. Even 7 or more characters are not rare. Because Chinese has plenty of characters5, when the corpus becoming larger, the nGram (n¿4) method is difficult to be adopted. On the other hand, the Omni-word can avoid these problems and take advantages of Chinese characteristics (the word-formation and the ambiguity of word segmentation). 
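As an illustration, the Omni-word feature can be extracted with nothing more than a lexicon and a substring scan over the relation mention, as in the sketch below. The lexicon here is a toy stand-in for the merged ICTCLAS-derived and Contemporary Chinese lexicons described above, and the function name and maximum-length cutoff are our own choices rather than details from the paper.

```python
def extract_omni_words(mention, lexicon, max_len=8):
    """Collect every lexicon entry (and every single character) that occurs
    anywhere in the relation mention, regardless of segmentation boundaries."""
    features = set(mention)  # single characters are always candidate features
    n = len(mention)
    for i in range(n):
        for j in range(i + 2, min(i + max_len, n) + 1):
            candidate = mention[i:j]
            if candidate in lexicon:
                features.add(candidate)
    return sorted(features)

# Toy lexicon; a real system would merge the ICTCLAS output vocabulary with
# the Lexicon of Common Words in Contemporary Chinese.
lexicon = {"台北", "大安", "森林", "公园", "森林公园", "大安森林公园"}
print(extract_omni_words("台北大安森林公园", lexicon))
# yields the 14 features listed in the example above
```

Under this scheme the traditional segmentation output is always a subset of the extracted features.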
3.2 Soft Constraint The structure information (or dependent information) of relation instance is critical for recognition. However, even in English, “deeper” analysis (e.g. logical syntactic relations or predicate-argument structure) may suffer from a worse performance caused by inaccurate chunking or parsing. Hence, the local dependency contexts around the relation arguments are more helpful (Zhao and Grishman, 2005). Zhang et al. (2006a) also showed that Path-enclosed Tree (PT) achieves the best performance in the kernel based relation extraction. In this field, the tree kernel based method commonly uses the parse tree to capture the structure information (Zelenko et al., 2003; Culotta and Sorensen, 2004). On the other hand, the feature based method usually uses the combined feature to capture such structure information (GuoDong et al., 2005; Kambhatla, 2004). In the open relation extraction domain, syntactic and semantic constraints are widely employed to prevent the “semantic drift” problem. Such constraints can also be seen as structural constraint. 4Published by Ministry of Education of the People’s Republic of China in 2008, containing 56,008 entries. 5Currently, at least 13000 characters are used by native Chinese people. Modern Chinese Dictionary: http: //www.cp.com.cn/ Most of these constraints are hard constraints. Any relation instance violating these constraints (or below a predefined threshold) will be abandoned. For example, Agichtein and Gravano (2000) generates patterns according to a confidence threshold (τt). Fader et al. (2011) utilizes a confidence function. And Carlson et al. (2010) filters candidate instances and patterns using the number of times they co-occurs. Deleting of relation instances is acceptable for open relation extraction because it always deals with a big data set. But it’s not suitable for traditional relation extraction, and will result in a low recall. Utilizing the notion of combined feature (GuoDong et al., 2005; Kambhatla, 2004), we replace the hard constraint by the soft constraint. Each soft constraint (combined feature) has a parameter trained by the classifier indicating the discrimination ability it has. No subjective or priori judgement is adopted to delete any potential determinative constraint (except for the reason of dimensionality reduction). Most of the researches make use of the combined feature, but rarely analyze the influence of the approaches we combine them. In this paper, we use the soft constraint to model the local dependency. It is a subset of the combined feature, generated by four constraint conditions: singleton, position sensitive, bin sensitive and semantic pair . For every employed candidate feature, an appropriate constraint condition is selected to combine them with additional information to maximize the classification determination. Singleton: A feature is employed as a singleton feature when it is used without combining with any information. In our experiments, only the position feature is used as singleton feature. Position Sensitive: A position sensitive feature has a label indicating which entity mention it depends on. In our experiment, the Head noun and POS Tag are utilized as position sensitive features, which has been introduced in Section 3.1. For example, ‘台北_E1’ means that the head noun ‘台 北’ depend on the first entity mention. Semantic Pair: Semantic pair is generated by combining two semantic units. Two kinds of semantic pair are employed. 
Those are generated by combining two entity types or two entity subtypes into a semantic pair. For example, 'Person_Location' denotes that the type of the first relation argument is "Person" (entity type) and that of the second is "Location" (entity type). Semantic pairs can capture both the semantic and the structure information in a relation mention.

Bin Sensitive: In our study, the Omni-word feature is not added as a "bag of words". To use the Omni-word feature, we segment each relation mention by its two entity mentions. Together with the two entity mentions, we get five parts: "FIRST", "MIDDLE", "END", "E1" and "E2" (or fewer, if the two entity mentions are nested). Each part is taken as an independent bin, and a flag is used to distinguish them. For example, '台北_Bin_F', '台北_Bin_E1' and '台北_Bin_E' mean that the lexicon entry '台北' appears in three bins: the FIRST bin, the first entity mention (E1) bin and the END bin. They are used as three independent features.

To sum up, among the five candidate feature sets, the position feature is used as a singleton feature. Both the head noun and the POS tag are position sensitive. Entity types and subtypes are employed as semantic pairs. Only the Omni-word feature is bin sensitive. In the following experiments, focusing on Chinese relation extraction, we analyze the performance of the candidate feature sets and study the influence of the constraint conditions.

Experiments

In this section, the methodologies of the Omni-word feature and the soft constraint are tested and then compared with state-of-the-art methods.

3.3 Settings and Results

We use the ACE 2005 RDC Chinese corpus, which was collected from newswires, broadcasts and weblogs and contains 633 documents with 6 major relation types and 18 subtypes. There are 8,023 relations and 9,317 relation mentions. After deleting 5 documents containing wrong annotations (DAVYZW {20041230.1024, 20050110.1403, 20050111.1514, 20050127.1720, 20050201.1538}), we keep 9,244 relation mentions as positive instances. To get the negative instances, each document is segmented into sentences, using five punctuation marks as sentence boundaries: period (。), question mark (?), exclamation mark (!), semicolon (;) and comma (,). Sentences that do not contain any entity mention pair are deleted. For each of the remaining sentences, we iteratively extract every entity mention pair as the arguments of a relation instance to be predicted. For example, suppose a sentence has three entity mentions: A, B and C. Because the relation arguments are order sensitive, six entity mention pairs can be generated: [A,B], [A,C], [B,C], [B,A], [C,A] and [C,B]. After discarding the entity mention pairs that were used as positive instances, we generated 93,283 negative relation instances labelled as "OTHER". We thus have 7 relation types and 19 subtypes.

A maximum entropy multi-class classifier is trained and tested on the generated relation instances. We adopt five-fold cross validation for training and testing. Because we are interested in the 6 annotated major relation types and the 18 subtypes, we average the results of the five runs on the 6 positive relation types (and 18 subtypes) as the final performance. F-score is computed as

F = (2 × Precision × Recall) / (Precision + Recall)
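The candidate-generation step just described, enumerating ordered entity-mention pairs per sentence and labelling unannotated pairs "OTHER", can be sketched as follows; the data structures are illustrative, and gold_pairs stands in for the annotated positive instances.

```python
from itertools import permutations

def generate_instances(sentences, gold_pairs):
    """Enumerate ordered entity-mention pairs per sentence.

    sentences:  list of (sentence_text, [entity_mentions]) tuples
    gold_pairs: set of (sentence_text, arg1, arg2) tuples annotated with a relation
    Returns (pair, label) instances; unannotated pairs get the label "OTHER".
    """
    instances = []
    for text, mentions in sentences:
        if len(mentions) < 2:
            continue  # no entity mention pair in this sentence
        # Relation arguments are order sensitive: (A, B) and (B, A) both count.
        for arg1, arg2 in permutations(mentions, 2):
            if (text, arg1, arg2) in gold_pairs:
                continue  # already a positive instance
            instances.append(((text, arg1, arg2), "OTHER"))
    return instances

# Example: a sentence with three mentions A, B, C yields six ordered pairs,
# one of which is annotated, leaving five negatives.
sents = [("...A...B...C...", ["A", "B", "C"])]
gold = {("...A...B...C...", "A", "B")}
print(len(generate_instances(sents, gold)))  # 5
```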
The entity type and subtype, head noun, position feature are referred to as Fthp8. The POS tags are referred to as Fpos. The Omni-word feature set is denoted by Fow. Table 1 gives the performance of our system on the 6 types and 18 subtypes. Note that, in this paper, bare numbers and numbers in the parentheses represent the results of the 6 types and the 18 subtypes respectively. Table 1: Performance on Type (Subtype) Features P R F Fthp 61.51 48.85 54.46 (52.92) (36.92) (43.49) Fow 80.16 75.45 77.74 (66.98) (54.85) (60.31) Fthp ∪Fpos 83.93 77.81 80.76 (69.83) (61.63) (65.47) Fthp ∪Fow 92.40 88.37 90.34 (81.94) (70.69) (75.90) Fthp ∪Fpos ∪Fow 92.26 88.51 90.35 (80.52) (70.96) (75.44) In Row 1, because Fthp are features directly obtained from annotated corpus, we take this per8“thp” is an acronym of “type, head, position”. Features in Fthp are the candidate features combined with the corresponding constraint conditions. The following Fpos and Fow are the same. 576 formance as our referential performance. In Row 2, with only the Fow feature, the F-score already reaches 77.74% in 6 types and 60.31% in 18 subtypes. The last row shows that adding the Fpos almost has no effect on the performance when both the Fthp and Fow are in use. The results show that Fow is effective for Chinese relation extraction. The superiorities of Owni-word feature depend on three reasons. First, the specificity of Chinese word-formation indicates that the subphrases of Chinese word (or phrase) are also informative. Second, most of relation instances have limited context. The Owni-word feature, utilizing every possible word in them, is a better way to capture more information. Third, the entity mentions are manually annotated. They can precisely segment the relation instance into corresponding bins. Segmentation of bins bears the sentence structure information. Therefore, the Owni-word feature with bin information can make a better use of both the syntactic information and the local dependency. 3.4 Comparison Various systems were proposed for Chinese relation extraction. We mainly focus on systems trained and tested on the ACE corpus. Table 2 lists three systems. Table 2: Survey of Other Systems System P R F Che et al. (2005) 76.13 70.18 73.27 Zhang et al. (2011) 80.71 62.48 70.43 (77.75) (60.20) (67.86) Liu et al. (2012) 81.1 61.0 69.0 (79.1) (57.5) (66.6) Che et al. (2005) was implemented on the ACE 2004 corpus, with 2/3 data for training and 1/3 for testing. The performance was reported on 7 relation types: 6 major relation types and the none relation (or negative instance). Zhang et al. (2011) was based on the ACE 2005 corpus with 75% data for training and 25% for testing. Performances about the 7 types and 19 subtypes were given. Both of them are feature based methods. Liu et al. (2012) is a kernel based method evaluated on the ACE 2005 corpus. The five-fold cross validation was used and declared the performances on 6 relation types and 18 subtypes. The data preprocessing makes differences from our experiments to others. In order to give a better comparison with the state-of-the-art methods, based on our experiment settings and data, we implement the two feature based methods proposed by Che et al. (2005) and Zhang et al. (2011) in Table 2. The results are shown in Table 3. In Table 3, Ei (i ∈1, 2) represents entity mention. “Order” in Che et al. (2005) denotes the position structure of entity mention pair. Four types of order are employed (the same as ours). 
WordEi+ −k and POSEi+ −k are the words and POS of Ei, “+−k” means that it is the kth word (of POS) after (+) or before (-) the corresponding entity mention. In this paper, k = 1 and k = 2 were set. In Row 2, the “Uni-Gram” represents the Unigram features of internal and external character sequences. Internal character sequences are the four entity extend and head mentions. Five kinds of external character sequences are used: one InBetween character sequence between E1 and E2 and four character sequences around E1 and E2 in a given window size w s. The w s is set to 4. The “Bi-Gram” is the 2-gram feature of internal and external character sequences. Instead of the 4 position structures, the 9 position structures are used. Please refer to Zhang et al. (2011) for the details of these 9 position structures. In Table 3, it is shown that our system outperforms other systems, in F-score, by 10% on 6 relation types and by 15% on 18 subtypes. For researchers who are interested in our work, the source code of our system and our implementations of Che et al. (2005) and Zhang et al. (2011) are available at https://github. com/YPench/CRDC. 4 Discussion In this section, we analyze the influences of employed feature sets and constraint conditions on the performances. Most papers in relation extraction try to augment the number of employed features. In our experiment, we found that this does not always guarantee the best performance, despite the classifier being adopted is claimed to control these features independently. Because features may interact mutually in an indirect way, even with the same feature set, different constraint conditions can have significant influences on the final performance. In Section 3, we introduced five candidate feature sets. Instead of using them as independent features, we combined them with additional in577 Table 3: Comparing With the State-of-the-Art Methods System Feature Set P R F (Che et al., 2005) Ei.Type, Ei.Subtype, Order, WordEi+ −1, WordEi+ −2, POSEi+ −1, POSEi+ −2 84.81 75.69 79.99 (64.89) (52.99) (58.34) (Zhang et al., 2011) Ei.Type, Ei.Subtype, 9 Position Feature, Uni-Gram, Bi-Gram 79.56 72.99 76.13 (66.78) (54.56) (60.06) Ours Fthp ∪Fpos ∪Fow 92.26 88.51 90.35 (80.52) (70.96) (75.44) formation. We proposed four constraint conditions to generate the soft constraint features. In Table 4, the performances of candidate features are compared when different constraint conditions was employed. In Column 3 of Table 4 (Constraint Condition), (1), (2), (3), (4) and (5) stand for the referential feature sets9 in Table 1. Symbol “/” means that the corresponding candidate features in the referential feature set are substituted by the new constraint condition. Par in Column 4 is the number of parameters in the trained maximum entropy model, which indicate the model complexity. I in Column 5 is the influence on performance. “-” and “+” mean that the performance is decreased or increased. The first observation is that the combined features are more powerful than used as singletons. Model parameters are increased by the combined features. Increasing of parameters projects the relation extraction problem into a higher dimensional space, making the decision boundaries become more flexible. The named entities in the ACE corpus are also annotated with the CLASS and LDCTYPE labels. Zhou et al. (2010) has shown that these labels can result in a weaker performance. Row 1, 2 and 3 show that, no matter how they are used, the performances decrease obviously. 
The reason of the performance degradation may be caused by the problem of over-fitting or data sparseness. At most of the time, increase of model parameters can result in a better performance. Except in Row 8 and Row 11, when two head nouns of entity pair were combined as semantic pair and when POS tag were combined with the entity type, the performances are decreased. There are 7356 head nouns in the training set. Combining two head nouns may increase the feature space 9(1), (2), (3), (4) and (5) denote Fthp, Fow, Fthp ∪Fpos, Fthp ∪Fow and Fthp ∪Fpos ∪Fow respectively. by 7356 × (7356 −1). Such a large feature space makes the occurrence of features close to a random distribution, leading to a worse data sparseness. In Row 4, 10 and 13, these features are used as singleton, the performance degrades considerably. This means that, the missing of sentence structure information on the employed features can lead to a bad performance. Row 9 and 12 show an interesting result. Comparing the reference set (5) with the reference set (3), the Head noun and adjacent entity POS tag get a better performance when used as singletons. These results reflect the interactions between different features. Discussion of this issue is beyond this paper’s scope. In this paper, for a better demonstration of the constraint condition, we still use the Position Sensitive as the default setting to use the Head noun and the adjacent entity POS tag. Row 13 and 14 compare the Omni-word feature (By-Omni-word) with the traditional segmentation based feature (By-Segmentation). BySegmentation denotes the traditional segmentation based feature set generated by a segmentation tool, collecting every output of relation mention. In this place, the ICTCLAS package is adopted too. Conventionally, if a sentence is perfectly segmented, By-Segmentation is straightforward and effective. But, our experiment shows different observations. Row 13 and 14 show that the Omniword method outperforms the traditional method. Especially, when the bin information is used (Row 15), the performance of Omni-word feature increases considerably. Row 14 shows that, compared with the traditional method, the Omni-word feature improves the performance by about 8.79% in 6 relation types and 11.83% in 18 subtypes in F-core. Such improvement may reside in the three reasons discussed in Section 3.3. In short, from Table 4 we have seen that the en578 Table 4: Influence of Feature Set No. 
Feature Constraint Condition Par P R F I 1 entity CLASS and LDCTYPE (1)/as singleton 21,112 60.29 42.82 50.07 -4.39 21,910 (41.70) (25.18) (31.40) -12.09 2 (1)/combined with positional Info 21,159 63.02 44.47 52.15 -2.31 22,013 (41.61) (26.31) (32.24) -11.25 3 (1)/as semantic pair 21,207 63.35 47.67 54.40 -0.06 22,068 (42.98) (31.34) (36.25) -7.24 4 Type, Subtype semantic pair (1)/as singleton 19,390 51.37 29.16 37.20 -17.26 147,435 (32.8) (18.97) (24.06) -19.43 5 (1)/combined with positional info 19,524 61.77 43.67 51.17 -3.29 20,297 (41.13) (26.83) (32.47) -11.02 6 (5)/as singleton 105,865 91.39 87.92 89.62 -0.73 121,218 (79.32) (68.73) (73.65) -1.79 7 head noun (3)/as singleton 21,450 85.66 75.74 80.40 -0.36 22,409 (64.38) (57.14) (60.55) -0.34 8 (3)/as semantic pair 77,333 83.05 73.14 77.78 -2.54 77,947 (59.70) (51.70) (55.41) -5.48 9 (5)/as singleton 100,963 92.50 88.90 90.66 +0.31 115,499 (82.63) (71.67) (76.76) +1.32 10 adjacent entity POS tag (3)/as singleton 21,450 72.66 61.16 66.41 -13.91 22,409 (62.42) (45.69) (52.76) -8.13 11 (3)/combined with entity type 22,151 80.66 71.67 75.90 -4.42 23,357 (63.41) (53.16) (57.83) -3.06 12 (5)/as singleton 106,931 92.50 88.66 90.54 +0.19 121,194 (82.04) (71.36) (76.33) +0.89 13 Omni-word feature (2)/By-Segmentation as singleton 36,916 67.19 60.12 63.46 -14.28 41,652 (55.85) (44.50) (49.54) -10.77 14 (2)/By-Segmentation with bins 79,430 71.12 66.90 68.95 -8.79 84,715 (54.76) (43.50) (48.48) -11.83 15 (2)/By-Omni-word as singleton 47,428 69.67 63.77 66.59 -11.15 57,702 (54.85) (48.84) (51.67) -8.64 16 (5)/as singleton 57,321 91.43 86.37 88.83 -1.52 67,722 (76.43) (69.57) (72.84) -2.60 tity type and subtype maximize the performance when used as semantic pair. Head noun and adjacent entity POS tag are employed to combine with positional information. Omni-word feature with bins information can increase the performance considerably. Our model (in Section 3.3) uses these settings. This insures that the performances of the candidate features are optimized. 5 Conclusion In this paper, We proposed a novel Omni-word feature taking advantages of Chinese sub-phrases. We also introduced the soft constraint method for Chinese relation recognition. The soft constraint utilizes four constraint conditions to catch the structure information in a relation instance. Both the Omni-word feature and soft constrain make better use of information a sentence has, and minimize the deficiency caused by Chinese segmentation and parsing. The size of the employed lexicon determines the dimension of the feature space. The first impression is that more lexicon entries result in more power. However, more lexicon entries also increase the computational complexity and bring in noises. In our future work, we will study this issue. The notion of soft constraints can also be extended to include more patterns, rules, regexes or syntac579 tic constraints that have been used for information extraction. The usability of these strategies is also left for future work. Acknowledgments The research was supported in part by NSF of China (91118005, 91218301, 61221063); 863 Program of China (2012AA011003); Cheung Kong Scholar’s Program; Pillar Program of NST (2012BAH16F02); Ministry of Education of China Humanities and Social Sciences Project (12YJC880117); The Ministry of Education Innovation Research Team (IRT13035). References Eugene Agichtein and Luis Gravano. 2000. Snowball: Extracting relations from large plain-text collections. In Proceedings of DL ’00, pages 85–94. ACM. 
Michele Banko, Oren Etzioni, and Turing Center. 2008. The tradeoffs between open and traditional relation extraction. Proceedings of ACL-HLT ’08, pages 28–36. Sergey Brin. 1999. Extracting patterns and relations from the world wide web. The World Wide Web and Databases, pages 172–183. Andrew Carlson, Justin Betteridge, Richard C. Wang, Estevam R. Hruschka Jr, and Tom M. Mitchell. 2010. Coupled semi-supervised learning for information extraction. In Proceedings of WSDM ’10, pages 101–110. Wanxiang Che, Ting Liu, and Sheng Li. 2005. Automatic entity relation extraction. Journal of Chinese Information Processing, 19(2):1–6. Yu Chen, Wenjie Li, Yan Liu, Dequan Zheng, and Tiejun Zhao. 2010. Exploring deep belief network for chinese relation extraction. In Proceedings of CLP ’10, pages 28–29. Mark Craven and Johan Kumlien. 1999. Constructing biological knowledge bases by extracting information from text sources. In Proceedings of ISMB ’99, pages 77–86. Aron Culotta and Jeffrey Sorensen. 2004. Dependency tree kernels for relation extraction. In Proceedings of ACL ’04, page 423. James R. Curran, Tara Murphy, and Bernhard Scholz. 2007. Minimising semantic drift with mutual exclusion bootstrapping. In Proceedings of PACLING ’07, pages 172–180. Liu Dandan, Hu Yanan, and Qian Longhua. 2012. Exploiting lexical semantic resource for tree kernelbased chinese relation extraction. Natural Language Processing and Chinese Computing, pages 213–224. Anthony Fader, Stephen Soderland, and Oren Etzioni. 2011. Identifying relations for open information extraction. In Proceedings of EMNLP ’11, pages 1535–1545. Ralph Grishman. 2012. Information extraction: Capabilities and challenges. Notes prepared for the. Zhou GuoDong, Su Jian, Zhang Jie, and Zhang Min. 2005. Exploring various knowledge in relation extraction. In Proceedings of ACL ’05, pages 427–434. Raphael Hoffmann, Congle Zhang, and Daniel S. Weld. 2010. Learning 5000 relational extractors. In Proceedings of ACL ’10, volume 10, pages 286– 295. Rumjahn Hoosain. 1992. Psychological reality of the word in chinese. Advances in psychology, 90:111– 130. He Hu and Xiaoyong Du. 2012. Radical features for chinese text classification. In Proceedings of FSKD ’12, pages 720–724. Changning Huang and Hai Zhao. 2007. Chinese word segmentation : A decade review. Journal of Chinese Information Processing, 21(3):8–19. Ruihong Huang, Le Sun, and Yuanyong Feng. 2008. Study of kernel-based methods for chinese relation extraction. Information Retrieval Technology, pages 598–604. Nanda Kambhatla. 2004. Combining lexical, syntactic, and semantic features with maximum entropy models for extracting relations. In Proceedings of ACL-demo ’04, page 22. Zhang Le. 2004. Maximum entropy modeling toolkit for python and c++. Natural Language Processing Lab, Northeastern University, China. Wenjie Li, Peng Zhang, Furu Wei, Yuexian Hou, and Qin Lu. 2008. A novel feature-based approach to chinese entity relation extraction. In Proceedings of HLT-Short ’08, pages 89–92. Nanyuan Liang. 1984. Written chinese word segmentation system-cdws. Journal of Beijing Institute of Aeronautics and Astronautics, 4. Ruqi Lin, Jinxiu Chen, Xiaofang Yang, and Honglei Xu. 2010. Research on mixed model-based chinese relation extraction. In Proceedings of ICCSIT ’10, volume 1, pages 687–691. Andrew McCallum. 2005. Information extraction: Distilling structured data from unstructured text. Queue, 3(9):48–57. 
580 Scott Miller, Michael Crystal, Heidi Fox, Lance Ramshaw, Richard Schwartz, Rebecca Stone, and Ralph Weischedel. 1998. Algorithms that learn to extract information: Bbn: Tipster phase iii. In Proceedings of TIPSTER ’98, pages 75–89. Mike Mintz, Steven Bills, Rion Snow, and Dan Jurafsky. 2009. Distant supervision for relation extraction without labeled data. In Proceedings of ACL ’09, pages 1003–1011. Thahir P Mohamed, Estevam R Hruschka Jr., and Tom M Mitchell. 2011. Discovering relations between noun categories. In Proceedings of EMNLP ’11, pages 1447–1455. Fuchun Peng, Fangfang Feng, and Andrew McCallum. 2004. Chinese segmentation and new word detection using conditional random fields. In Proceedings of COLING ’04. Dan Roth and Wen-tau Yih. 2002. Probabilistic reasoning for entity & relation recognition. In Proceedings of COLING ’02, pages 1–7. Dan Roth and Wen-tau Yih. 2007. Global inference for entity and relation identification via a linear programming formulation. Introduction to Statistical Relational Learning, pages 553–580. Richard Sproat, William Gale, Chilin Shih, and Nancy Chang. 1996. A stochastic finite-state wordsegmentation algorithm for chinese. Computational linguistics, 22(3):377–404. Shingo Takamatsu, Issei Sato, and Hiroshi Nakagawa. 2012. Reducing wrong labels in distant supervision for relation extraction. In Proceedings of ACL ’12, pages 721–729. Fei Wu and Daniel S. Weld. 2007. Autonomously semantifying wikipedia. In Proceedings of CIKM ’07, pages 41–50. Nianwen Xue. 2003. Chinese word segmentation as character tagging. Computational Linguistics and Chinese Language Processing, 8(1):29–48. Ming Yan, Reinhold Kliegl, Eike Richter, Antje Nuthmann, and Hua Shu. 2010. Flexible saccade-target selection in chinese reading. The Quarterly Journal of Experimental Psychology, 63(4):705–725. Dmitry Zelenko, Chinatsu Aone, and Anthony Richardella. 2003. Kernel methods for relation extraction. The Journal of Machine Learning Research, 3:1083–1106. Hua-Ping Zhang, Hong-Kui Yu, De-Yi Xiong, and Qun Liu. 2003. Hhmm-based chinese lexical analyzer ictclas. In Proceedings of SIGHAN ’03, pages 184– 187. Min Zhang, Jie Zhang, and Jian Su. 2006a. Exploring syntactic features for relation extraction using a convolution tree kernel. In Proceedings of HLT-NAACL ’06, pages 288–295. Min Zhang, Jie Zhang, Jian Su, and Guodong Zhou. 2006b. A composite kernel to extract relations between entities with both flat and structured features. In Proceedings of ACL ’06, pages 825–832. Peng Zhang, Wenjie Li, Furu Wei, Qin Lu, and Yuexian Hou. 2008. Exploiting the role of position feature in chinese relation extraction. In Proceedings of LREC ’08. Peng Zhang, Wenjie Li, Yuexian Hou, and Dawei Song. 2011. Developing position structure-based framework for chinese entity relation extraction. ACM Transactions on Asian Language Information Processing (TALIP), 10(3):14. Shubin Zhao and Ralph Grishman. 2005. Extracting relations with integrated information using kernel methods. In Proceedings of ACL ’05, pages 419– 426. Ming Zhong, Sheng Wang, and Ming Wu. 2012. Revising word lattice using support vector machine for chinese word segmentation. In Proceedings of IIWAS ’12, pages 352–355. Guodong Zhou, Longhua Qian, and Jianxi Fan. 2010. Tree kernel-based semantic relation extraction with rich syntactic and semantic information. Information Sciences, 180(8):1313–1325. 581
Proceedings of the 52nd Annual Meeting of the Association for Computational Linguistics, pages 582–592, Baltimore, Maryland, USA, June 23-25 2014. c⃝2014 Association for Computational Linguistics Bilingual Active Learning for Relation Classification via Pseudo Parallel Corpora Longhua Qian Haotian Hui Ya’nan Hu Guodong Zhou* Qiaoming Zhu Natural Language Processing Lab School of Computer Science and Technology, Soochow University 1 Shizi Street, Suzhou, China 215006 {qianlonghua,20134227019,20114227025,gdzhou,qmzhu}@suda.edu.cn Abstract Active learning (AL) has been proven effective to reduce human annotation efforts in NLP. However, previous studies on AL are limited to applications in a single language. This paper proposes a bilingual active learning paradigm for relation classification, where the unlabeled instances are first jointly chosen in terms of their prediction uncertainty scores in two languages and then manually labeled by an oracle. Instead of using a parallel corpus, labeled and unlabeled instances in one language are translated into ones in the other language and all instances in both languages are then fed into a bilingual active learning engine as pseudo parallel corpora. Experimental results on the ACE RDC 2005 Chinese and English corpora show that bilingual active learning for relation classification significantly outperforms monolingual active learning. 1 Introduction Semantic relation extraction between named entities (aka. entity relation extraction or more concisely relation extraction) is an important subtask of Information Extraction (IE) as well as Natural Language Processing (NLP). With its aim to identify and classify the semantic relationship between two entities (ACE 2002-2007), relation extraction is of great significance to many NLP applications, such as question answering, information fusion, social network construction, and knowledge mining and population etc. * Corresponding author In the literature, the mainstream research on relation extraction adopts statistical machine learning methods, which can be grouped into supervised learning (Zelenko et al., 2003; Culotta and Soresen, 2004; Zhou et al., 2005; Zhang et al., 2006; Qian et al., 2008; Chan and Roth, 2011), semi-supervised learning (Zhang et al., 2004; Chen et al., 2006; Zhou et al., 2008; Qian et al., 2010) and unsupervised learning (Hasegawa et al., 2004; Zhang et al., 2005) in terms of the amount of labeled training data they need. Usually the extraction performance depends heavily on the quality and quantity of the labeled data, however, the manual annotation of a largescale corpus is labor-intensive and timeconsuming. In the last decade researchers have turned to another effective learning paradigm-active learning (AL), which, given a small number of labeled instances and a large number of unlabeled instances, selects the most informative unlabeled instances to be manually annotated and add them into the training data in an iterative fashion. Essentially active learning attempts to decrease the quantity of labeled instances by enhancing their quality, gauged by their informativeness to the learner. Since its emergence, active learning has been successfully applied to many tasks in NLP (Engelson and Dagan, 1996; Hwa, 2004; Tomanek et al., 2007; Settles and Craven, 2008). It is trivial to validate, as we will do later in this paper, that active learning can also alleviate the annotation burden for relation extraction in one language while retaining the extraction performance. 
However, there are cases when we may exploit relation extraction in multiple languages and there are corpora with relation instances annotated for more than one language, such as the ACE RDC 2005 English and Chinese corpora. Hu et al. (2013) shows that supervised relation extraction in one language (e.g. Chinese) 582 can be enhanced by relation instances translated from another language (e.g. English). This demonstrates that there is some complementariness between relation instances in two languages, particularly when the training data is scarce. One natural question is: Can this characteristic be made full use of so that active learning can maximally benefit relation extraction in two languages? To the best of our knowledge, so far the issue of joint active learning in two languages has yet been addressed. Moreover, the success of joint bilingual learning may lend itself to many inherent multilingual NLP tasks such as POS tagging (Yarowsky and Ngai, 2001), name entity recognition (Yarowsky et al., 2001), sentiment analysis (Wan, 2009), and semantic role labeling (Sebastian and Lapata, 2009) etc. This paper proposes a bilingual active learning (BAL) paradigm to relation classification with a small number of labeled relation instances and a large number of unlabeled instances in two languages (non-parallel). Instead of using a parallel corpus which should have entity/relation alignment information and is thus difficult to obtain, this paper employs an off-the-shelf machine translator to translate both labeled and unlabeled instances from one language into the other language, forming pseudo parallel corpora. These translated instances along with the original instances are then fed into a bilingual active learning engine. Findings obtained from experiments with relation classification on the ACE 2005 corpora show that this kind of pseudoparallel corpora can significantly improve the classification performance for both languages in a BAL framework. The rest of the paper is organized as follows. Section 2 reviews the previous work on relation extraction while Section 3 describes our baseline systems. Section 4 elaborates on the bilingual active learning paradigm and Section 5 discusses the experimental results. Finally conclusions and directions for future work are presented in Section 6. 2 Related Work While there are many studies in monolingual relation extraction, there are only a few on multilingual relation extraction in the literature. Monolingual relation extraction: A wide range of studies on relation extraction focus on monolingual resources. As far as representation of relation instances is concerned, there are feature-based methods (Zhao et al., 2004; Zhou et al., 2005; Chan and Roth, 2011) and kernelbased methods (Zelenko et al., 2003; Zhang et al., 2006; Qian et al., 2008), mainly for the English language. Both methods are also widely used in relation extraction in other languages, such as those in Chinese relation extraction (Che et al., 2005; Li et al., 2008; Yu et al., 2010). Multilingual relation extraction: There are only two studies related to multilingual relation extraction. Kim et al. (2010) propose a crosslingual annotation projection approach which uses parallel corpora to acquire a relation detector on the target language. However, the mapping of two entities involved in a relation instance may leads to errors. 
Therefore, Kim and Lee (2012) further employ a graph-based semisupervised learning method, namely Label Propagation (LP), to indirectly propagate labels from the source language to the target language in an iterative fashion. Both studies transfer relation annotations via parallel corpora from the resource-rich language (English) to the resourcepoor language (Korean), but not vice versa. Based on a small number of labeled instances and a large number of unlabeled instances in both languages, our method differs from theirs in that we adopt a bilingual active learning paradigm via machine translation and improve the performance for both languages simultaneously. Active Learning in NLP: Active learning has become an active research topic due to its potential to significantly reduce the amount of labeled training data while achieving comparable performance with supervised learning. It has been successfully applied to many NLP applications, such as POS tagging (Engelson and Dagan, 1996; Ringger et al., 2007), word sense disambiguation (Chan and Ng, 2007; Zhu and Hovy, 2007), sentiment detection (Brew et al., 2010; Li et al., 2012), syntactical parsing (Hwa, 2004; Osborne and Baldridge, 2004), and named entity recognition (Shen et al., 2004; Tomanek et al., 2007; Tomanek and Hahn, 2009) etc. Different from these AL studies on a single task, Reichart et al. (2008) introduce a multi-task active learning (MTAL) paradigm, where unlabeled instances are selected for two annotation tasks (i.e. named entity and syntactic parse tree). They demonstrate that MTAL in the same language outperforms one-sided and random selection AL. From a different perspective, we propose an active learning framework for the same task, but across two different languages. Another related study (Haffari and Sarkar, 2009) deals with active learning for multilingual 583 machine translation, which make use of multilingual corpora to decrease human annotation efforts by selecting highly informative sentences for a newly added language in multilingual parallel corpora. While machine translation inherently deals with multilingual parallel corpora, our task focuses on relation extraction by pseudo parallel corpora in two languages. 3 Baseline Systems This section first introduces the fundamental supervised learning method, and then describes a baseline active learning algorithm. 3.1 Supervised Learning We adopt the feature-based method for fundamental supervised relation classification, rather than the tree kernel-based method, since active learning needs a large number of iterations and the kernel-based method usually performs much slower than the feature-based one. Following is a list of our used features, much similar to Zhou et al. 
(2005): a) Lexical features of entities and their contexts WM1: bag-of-words in the 1st entity mention HM1: headword of M1 WM2: bag-of-words in the 2nd entity mention HM2: headword of M2 HM12: combination of HM1 and HM2 WBNULL: when no word in between WBFL: the only one word in between WBF: the first word in between when at least two words in between WBL: the last word in between when at least two words in between WBO: other words in between except the first and last words when at least three words in between b) Entity type ET12: combination of entity types EST12: combination of entity subtypes EC12: combination of entity classes c) Mention level ML12: combination of entity mention levels MT12: combination of LDC mention types d) Overlap #WB: number of other mentions in between #MB: number of words in between M1>M2 or M1<M2: flag indicating whether M2/M1 is included in M1/M2. 3.2 Active Learning Algorithm We use a pool-based active learning procedure with uncertainty sampling (Scheffer et al., 2001; Culotta and McCallum, 2005; Kim et al., 2006) for both Chinese and English relation classification as illustrated in Fig. 1. During iterations a batch of unlabeled instances are chosen in terms of their informativeness to the current classifier, labeled by an oracle and in turn added into the labeled data to retrain the classifier. Due to our focus on the effectiveness of bilingual active learning on relation classification, we only use uncertainty sampling without incorporating more complex measures, such as diversity and representativeness (Settles and Craven, 2008), and leave them for future work. Input: - L, labeled data set - U, unlabeled data set - n, batch size Output: - SVM, classifier Repeat: 1. Train a single classifier SVM on L 2. Run the classifier on U 3. Find at most n instances in U that the classifier has the highest prediction uncertainty 4. Have these instances labeled by an oracle 5. Add them into L Until: certain number of instances are labeled or certain performance is reached Algorithm uncertainty-based active learning Figure 1. Pool-based active learning with uncertainty sampling Since the SVMLIB package used in this paper can output probabilities assigned to the class labels on an instance, we have three uncertainty metrics readily available, i.e., least confidence (LC), margin (M) and entropy (E). The NER experimental results on multiple corpora (Settles and Craven, 2008) show that there is no single clear winner among these three metrics. This conclusion is also validated by our preliminary experiments on the task of active learning relation extraction, thus we adopt the LC metric for simplicity. Specifically, with a sequence of K probabilities for a relation instance at some iteration, denoted as {p1,p2,…pK} in the descending order, the LC metric of the relation instance can be simply picked as the first one, i.e. 1p H LC = (1) Where K denotes the total number of relation classes. Note that this metric actually reflects prediction reliability (i.e. reverse uncertainty) rather than uncertainty in order to facilitate joint 584 confidence calculation for two languages (cf. §4.4). Intuitively, the smaller the HLC is, the less confident the prediction is. 4 Bilingual Active Learning for Relation Classification In this section, we elaborate on the bilingual active learning for relation extraction. 
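Before presenting the bilingual extension, the monolingual selection loop of Figure 1 and the scores it relies on can be summarized in the following sketch. This is an illustration rather than our actual implementation: train_svm, predict_proba and oracle_label are hypothetical hooks, and we only assume a classifier that outputs class probabilities, as LIBSVM does. The least-confidence score follows Equation (1), H_LC = p1 (the largest predicted class probability), and the geometric-mean combination used for joint bilingual selection in Section 4.4 follows Equation (2), H_g = sqrt(H_c * H_et).

```python
import math

def h_lc(probs):
    # Least-confidence score of Eq. (1): the top predicted class
    # probability p1. Smaller values mean less confident predictions.
    return max(probs)

def joint_confidence(h_c, h_et):
    # Geometric-mean combination of Eq. (2), used in Sec. 4.4.
    # Instances without a translation counterpart get h_et = 1 upstream.
    return math.sqrt(h_c * h_et)

def uncertainty_sampling(labeled, unlabeled, n, rounds,
                         train_svm, predict_proba, oracle_label):
    # Pool-based active learning with uncertainty sampling (Figure 1).
    # train_svm / predict_proba / oracle_label are hypothetical hooks.
    for _ in range(rounds):
        model = train_svm(labeled)                        # step 1
        scored = [(h_lc(predict_proba(model, x)), x)      # steps 2-3
                  for x in unlabeled]
        scored.sort(key=lambda s: s[0])                   # least confident first
        batch = [x for _, x in scored[:n]]
        for x in batch:                                   # steps 4-5
            labeled.append((x, oracle_label(x)))
            unlabeled.remove(x)
    return train_svm(labeled)
```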
4.1 Problem Definition With Chinese and English (designated as c and e) as two languages used in our study, this paper intends to address the task of bilingual relation classification, i.e., assigning relation labels to candidate instances that have semantic relationships. Suppose we have a small number of labeled instances in both languages, denoted as Lc and Le (non-parallel) respectively, and a large number of unlabeled instances in both languages, denoted as Uc and Ue (non-parallel). The test instances in both languages are represented as Tc and Te. In order to take full advantage of bilingual resources, we translate both labeled and unlabeled instances in one language to ones in the other language as follows: Lc Æ Let Uc Æ Uet Le Æ Lct Ue Æ Lct The objective is to learn SVM classifiers in both languages, denoted as SVMc and SVMe respectively, in a BAL fashion to improve their classification performance. 4.2 Bilingual Active Learning Framework Currently, AL is widely used in NLP tasks in a single language, i.e., during iterations unlabeled instances least confident only in one language are picked and manually labeled to augment the training data. The only exception is AL for machine translation (Haffari et al., 2009; Haffari and Sarkar, 2009), whose purpose is to select the most informative sentences in the source language to be manually translated into the target language. Previous studies (Reichart et al., 2008; Haffari and Sarkar, 2009) show that multi-task active learning (MTAL) can yield promising overall results, no matter whether they are two different tasks or the task of machine translation on multiple language pairs. If a specific NLP task on two languages, such as relation classification, can be regarded as two tasks, it is reasonable to argue that these two tasks can benefit each other when jointly performed in the BAL framework. Yet, to our knowledge, this issue remains unexplored. An important issue for bilingual learning is how to obtain two language views for relation instances from multilingual resources. There are three solutions to this problem, i.e. parallel corpora (Lu et al., 2011), translated corpora (aka. pseudo parallel corpora) (Wan 2009), and bilingual lexicons (Oh et al., 2009). We adopt the one with pseudo parallel corpora, using the machine translation method to generate instances from one language to the other in the BAL paradigm, as depicted in Fig. 2. English View Labeled Chinese Instances (Lc) Labeled Translated English Instances (Let) Labeled English Instances (Le) Labeled Translated Chinese Instances (Lct) Machine Translation Machine Translation Unlabeled Chinese Instances (Uc) Unlabeled Translated Chinese Instances (Uct) Unlabeled Translated English Instances (Uet) Unlabeled English Instances (Ue) Machine Translation Machine Translation Chinese View Bilingual active learning Test Chinese Instances (Tc) Test English Instances (Te) Figure 2. 
Framework of bilingual active learning In order to make full use of pseudo parallel corpora, translated labeled and unlabeled instances are augmented in the following two ways: z For labeled Chinese instances (Lc) and English instances (Le), their translated counterparts (Let and Lct), along with their labels, are directly added into the labeled instances in the other language; z For unlabeled Chinese instances (Uc) and English instances (Ue), during an active learning iteration the top n unlabeled instances in Uc and Uet which are least confidently jointly 585 predicted by SVMc and SVMe are labeled by an oracle and added to Lc and Le respectively. (cf. §4.4) 4.3 Instance Projection via MT Among the several off-the-shelf machine translation services, we select the Google Translator1 because of its high quality and easy accessibility. Both the mentions of relation instances and the mentions of two involved entities are first translated into the other language via machine translation. Then, two entities in the original instance are aligned with their counterparts in the translated instance in order to form an aligned bilingual relation instance pair. Instance translation All the positive instances in the ACE 2005 Chinese and English corpora are translated to another language respectively, i.e. Chinese to English and vice versa. The relation instance is represented as the word sequence between two entities. This word sequence, rather than the whole sentence, is then translated to another language by the Google Translator. The reason is that, although this sequence loses partial contextual information of the relation instance, its translation quality is supposed to be better. Our preliminary experiments indicate that the addition of contextual information fail to benefit the task. After translation, word segmentation is performed on Chinese instances translated from English while tokenization is needed for translated English instances. Entity alignment The objective of entity alignment is to build a mapping from the entities in the original instances to the entities in the translated instances. Put in another way, entity alignment automatically marks the entity mentions in the translated instance, thereby the feature vector corresponding to the translated instance can be constructed. Entity alignment is vital in cross-language relation extraction whose difficulty lies in the fact that the same entity mention as an isolated phrase and as an integral phrase in the relation instance can be translated to different phrases. For example, the Chinese entity mention “官员” (officer) is translated to “officer” in isolation, it is, however, translated to “officials” when in the relation instance “叙利亚 官员” (Syrian officials). 1 http://translate.google.com Input: - Me, entity mention in English - Re, relation instance in English - Mct, translation of Me in Chinese - Rc, translation of Re in Chinese - L, a lexicon consisting of entries like (ei, ci, pi), where pi is the translation probability from ei to ci - α, probability threshold Output: - Mc, the counterpart of Me in Rc Steps: 1. If Mct can be exactly found in Rc, then return Mct 2. If the rightmost part of Mct can be found in Rc, then this part can be returned 3. For very word we in Me, a) If there exists a word wc in Rc and (we, wc, p) in L and p>α, then (we, wc) is a match of two words b) Return a successive sequence of matching words wc 4. Return null Algorithm entity alignment Figure 3. 
Entity alignment algorithm Therefore, we devise some heuristics to align entity mentions between Chinese and English. The basic idea is that the word sequence in one mention successively matches the word sequence in the other mention. Take entity alignment from English to Chinese as an example, given entity mention Me in relation instance Re in English and their respective translations Mct and Rc in Chinese, the objective of entity alignment is to find Mc, the counterpart of Me in Rc. The procedure of entity alignment algorithm can be described in Fig. 3. In the algorithm, the probability threshold α is empirically set to 0.002 where the precision and recall of entity alignment are balanced. Our lexicon is derived from the FBIS parallel corpus (#LDC2003E14), which is widely used in machine translation between English and Chinese. It should be noted that the process of relation translation and entity alignment are far from perfection, leading to reduction in the number of instances being mapping to the other language, i.e. |Lc| > |Let| |Uc| > |Uet| |Le| > |Lct| |Ue| > |Lct| 4.4 Bilingual Active Learning Algorithm The basic idea of our BAL paradigm is that, while unlabeled instances uncertain in one lan586 guage are informative to the learner in that language, unlabeled instances jointly uncertain in both languages are informative to the learners in both languages, thus potentially improving classification performance for both languages more than their individual active learners do. This idea is embodied in the BAL algorithm in Fig. 4, where n is the batch size, i.e., the number of instances selected, labeled and augmented at each iteration. Figure 4. Bilingual active learning algorithm The key point of this algorithm lies in Step 5 and Step 6, where unlabeled instances from Uc and Ue are selected and labeled respectively. Take Chinese for an example, when gauging the prediction uncertainty for an unlabeled instance in Uc, not only its own uncertainty measure Hc predicted by SVMc is considered, but also the uncertainty measure Het for its translation counterpart in Uet, which is predicted by SVMe, is considered. Generally, in order to jointly consider these two measures, there are three methods to compute their means, namely, arithmetic mean, geometric mean and harmonic mean. Preliminary experiments show that among these three means, there is no single winner, so we simply take the geometric mean defined as follows: et c g H H H * = (2) Considering that we adopt the LC measure as the uncertainty score, when an instance in Uc can’t find its translation counterpart in Uet due to translation error or entity alignment failure, Het is set to 1, i.e. the maximum. Since the bigger H is, the more confident the prediction is, the less likely the instance will be chosen, in this way we discourage the unlabeled instances without translation counterparts. 5 Experimentation We have systematically evaluated our BAL paradigm on the relation classification task using ACE RDC 2005 RDC Chinese and English corpora. 5.1 Experimental Settings Corpora and Preprocessing We use the ACE 2005 RDC Chinese and English corpora as the benchmark data (hereafter we refer to them as the Chinese corpus (ACE2005c) and the English corpus (ACE2005e) respectively). Both corpora have the same entity/relation hierarchies, which define 7 entity types, 6 major relation types. 
However, the Chinese corpus contains 633 documents and 9,147 positive relation instances while the English corpus only contains 498 files and 6,253 positive instances. Therefore, in order to balance the corpus scale to fairly evaluate bilingual active learning impact on relation classification, we randomly select 458 Chinese files and thus get 6,268 positive instances, comparable to the English corpus. Preprocessing steps for both corpora include sentence splitting and tokenization (word segmentation for Chinese using ICTCLAS2). Then, positive relation mentions with word sequences between two entities and their feature vectors are extracted from sentences while negative relation mentions are simply discarded because we focus on the task of relation classification. After entity and relation mentions in one language are trans 2 http://ictclas.org/ 587 lated into the other language using the Google translator, entity alignment is performed between relation mentions and their translations. Finally 4,747 Chinese relation mentions are successfully translated and aligned from English and vice versa, 4,936 English relation mentions are translated and aligned from Chinese. SVMLIB (Chang and Lin, 2011) is selected as our classifier since it supports multi-class classification. The training parameters C (SVM) is set to 2.4 according to our previous work on relation extraction (Qian et al., 2010). Relation classification performance is evaluated using the standard Precision (P), Recall (R) and their harmonic average (F1) as well as deficiency measure (cf. latter in this section.). Overall performance scores are averaged over 10 runs. For each run, 1/40 and 1/5 randomly selected instances are used as the training and test set respectively while the remaining instances are used as the unlabeled set for further labeling during active learning iterations. Methods for Comparison For fair comparison, two baseline methods of supervised learning are included to augment their training sets with labeled instances during iterations. However, these labeled instances are chosen randomly from the corpus. SL-MO (Supervised Learning with monolingual labeled instances): only the monolingual labeled instances are fed to the SVM classifiers for both Chinese and English relation classification respectively. The initial training data only contain Lc and Le for Chinese and English respectively. SL-CR (Supervised Learning with crosslingual labeled instances): in addition to monolingual labeled instances (SL-MO), the training data for supervised learning contain labeled instances translated from the other language. That is, the initial training data contain Lc and Lct for Chinese, or Le and Let for English. More important, at each iteration not only the labeled instances are added to the training data of its own language, but their translated instances are also added to the training data of the other language. AL-MO (Active Learning with monolingual instances): labeled and unlabeled data for active learning only contain monolingual instances. No translated instances are involved. That is, the data contain Lc and Uc for Chinese, or Le and Ue for English respectively. This is the normal active learning method applied to a single language. AL-CR (Active Learning with cross-lingual instances): both the manually labeled instances and their translated ones are added to the respective training data. The initial training data contain Lc and Lct for Chinese, or Le and Let for English. 
At each iteration, the n least confidently classified instances in Uc and Ue are labeled and added to the Chinese and English training data respectively. Their translated instances in Uet and Uct are also added to the English and Chinese training data respectively.
AL-BI (Active Learning with bilingual labeled and unlabeled instances): similar to AL-CR, with the exception that the unlabeled instances are chosen not by uncertainty scores in one language, but by the joint uncertainty scores in the two languages (cf. §4.4).
Evaluation Metric
Although learning curves are often used to evaluate the performance of active learning, it is preferable to quantitatively compare various active learning methods using a statistical metric, deficiency (Schein and Ungar, 2007), defined as:

\mathrm{def}(AL, REF) = \frac{\sum_{i=1}^{n} \left( F_n(REF) - F_i(AL) \right)}{\sum_{i=1}^{n} \left( F_n(REF) - F_i(REF) \right)}  (3)

where n is the number of iterations involved in active learning and F_i is the F1-score of relation classification at the ith iteration. REF is the baseline active learning method and AL is an improved variant of REF, such as AL-CR or AL-BI. Essentially, this deficiency metric measures the degree to which REF outperforms AL. Thus, a smaller deficiency value (i.e., <1.0) indicates that AL outperforms REF, while a larger value (i.e., >1.0) indicates that AL underperforms REF.
5.2 Experimental Results and Analysis
Comparison of overall deficiency
Table 1 compares the deficiency scores of relation classification on the Chinese (ACE2005c) and English (ACE2005e) corpora for various learning methods, i.e., SL-CR, AL-MO, AL-CR and AL-BI. In particular, SL-MO is used as the baseline system against which deficiency scores for the other methods are computed. The batch size n is set to 100 and iterations stop once all the unlabeled instances have been exhausted. Deficiency scores are averaged over 10 runs and the best ones are highlighted in bold font. Each run has a different test set and a different seed set.
Figure 5. Deficiency comparison for different batch sizes: (a) Chinese, (b) English.
Figure 6. Learning curves for different methods: (a) Chinese, (b) English.
The table shows that among the three active learning methods, bilingual active learning (AL-BI) achieves the best performance for both Chinese and English relation classification. This demonstrates that bilingual active learning, by jointly selecting the unlabeled instances, can not only enhance relation classification for its own language, but also help relation classification for the other language, due to the complementary nature of relation instances between Chinese and English.

Corpora     SL-CR   AL-MO   AL-CR   AL-BI
ACE2005c    0.934   0.383   0.323   0.254
ACE2005e    0.779   0.405   0.298   0.160
Table 1. Deficiency comparison of different methods

The table also shows the consistent utility of cross-lingual information for relation classification in both languages: when cross-lingual information is added, SL-CR outperforms SL-MO and AL-CR outperforms AL-MO.
Comparison of different batch sizes
Figures 5(a) and 5(b) illustrate the deficiency scores for the four learning methods (SL-CR, AL-MO, AL-CR and AL-BI) against the SL-MO method with different batch sizes (n), where the prefixes "C" and "E" denote Chinese and English respectively. The horizontal axes denote the range of n (≤1000) while the vertical ones denote the deficiency scores. The figures show that the deficiency scores for the three active learning methods run virtually parallel with each other while they increase monotonically with the batch size n.
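The deficiency values plotted in Figure 5 and listed in Table 1 follow Equation (3) directly; the sketch below is a minimal illustration of that computation, assuming the two methods are run for the same number of iterations and their F1 curves are given as plain lists.

```python
def deficiency(f1_al, f1_ref):
    # Deficiency of Eq. (3): compares an AL variant against a reference
    # method via their F1 curves over n iterations. Values below 1.0 mean
    # AL outperforms REF overall; values above 1.0 mean it underperforms.
    assert len(f1_al) == len(f1_ref)
    f_ref_final = f1_ref[-1]                  # F_n(REF)
    num = sum(f_ref_final - f for f in f1_al)
    den = sum(f_ref_final - f for f in f1_ref)
    return num / den
```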
This pattern suggests that for both Chinese and English, AL-BI consistently performs best against the other methods across a wide range of batch sizes, though the overall advantage of the three active learning methods generally diminishes.
Comparison of learning curves
In order to gain an intuition into how the performance evolves as labeled instances are added into the training data during iterations, we depict the learning curves for the various learning methods on the Chinese and English corpora in Figures 6(a) and 6(b) respectively. The horizontal axes denote learning iterations while the vertical ones denote F1-scores. For simplicity of illustration, the F1-scores are collected from one of the 10 runs.
The figures clearly demonstrate the performance differences for both languages among the five methods at the beginning of the iterations, while the F1-scores converge at the end of the iterations. In particular, at the very outset AL-BI outperforms the other methods and quickly jumps to a point comparable to its best performance. However, after the 10th iteration the performance scores for the three AL variants show only trivial differences, probably because most highly informative instances have already been added to the training data.
Comparison of annotation scale
In order to better compare BAL with the other AL methods, Figure 7 re-plots partial data for the three AL methods from Figure 6 and rescales the data for AL-MO, where "C" and "E" denote Chinese and English respectively. Likewise, the vertical axis denotes F1-scores while the horizontal axis denotes the number of instances labeled for AL-CR and AL-BI; for AL-MO that number is doubled. This figure tries to answer the question: with n instances labeled in each of the two languages for BAL, versus 2n instances labeled in just one language for monolingual AL, can the former rival the latter?
Figure 7. Comparison of annotation scale among three AL methods.
The figure shows that for both Chinese and English, when the number of instances (n) to be labeled is no greater than 400, AL-BI with n instances can achieve performance comparable to AL-MO with 2n instances. This implies that when the labeled instances are limited, labeling half of the instances in one language and half in the other for BAL is competitive with labeling the same total number of instances in just one language for monolingual AL, not to mention that the former yields relation extractors for two languages.
6 Conclusion
This paper proposes a bilingual active learning paradigm for Chinese and English relation classification. Given a small number of labeled relation instances and a large number of unlabeled relation instances in both languages, we translate both the labeled and unlabeled instances in one language to the other as pseudo parallel corpora. After entity alignment, these labeled and unlabeled instances in both languages are fed into a bilingual active learning engine.
Experiments with the task of relation classification on the ACE RDC 2005 Chinese and English corpora show that bilingual active learning can significantly outperforms monolingual active learning for both Chinese and English simultaneously. Moreover, we demonstrate that BAL across two languages can compete against monolingual AL when the annotation scale is limited, though the overall number of labeled instances remains the same. For future work, on one hand, we plan to combine uncertainty sampling with diversity and informativeness measures; on the other hand, we intend to combine BAL with semi-supervised learning to further reduce human annotation efforts. Acknowledgments This research is supported by Grants 61373096, 61305088, 61273320, and 61331011 under the National Natural Science Foundation of China; Project 2012AA011102 under the “863” National High-Tech Research and Development of China; Grant 11KJA520003 under the Education Bureau of Jiangsu, China. We would like to thank the excellent and insightful comments from the three anonymous reviewers. Thanks also go to my colleague Dr. Shoushan Li for his helpful suggestions. Reference ACE. 2002-2007. Automatic Content Extraction. http://www.ldc.upenn.edu/Projects/ACE/ A. Brew, D. Greene, and P. Cunningham. 2010. Using crowdsourcing and active learning to track sentiment in online media. ECAI’2010: 145–150. Y.S. Chan and D. Roth. 2011. Exploiting SyntacticoSemantic Structures for Relation Extraction. ACL’2011: 551-560 Y.S. Chan and H.T. Ng. 2007. Domain adaptation with active learning for word sense disambiguation. ACL’2007. 590 C.C. Chang and C.J. Lin. 2011. LIBSVM: a library for support vector machines. ACM Transactions on Intelligent Systems and Technology, 2(27):1-27. W.X. Che, T. Liu, and S. Li. 2005. Automatic Extraction of Entity Relation (in Chinese). Journal of Chinese Information Processing, 19(2): 1-6. J.X. Chen, D.H. Ji, and C. L. Tan. 2006. Relation Extraction using Label Propagation-based Semisupervised Learning. ACL/COLING’2006: 129-136. A. Culotta and J. Sorensen. 2004. Dependency tree kernels for relation extraction. ACL’2004: 423-439. A. Culotta and A. McCallum. 2005. Reducing labeling effort for stuctured prediction tasks. AAAI’2005: 746–751. S. P. Engelson and I. Dagan. 1996. Minimizing manual annotation cost in supervised training from corpora. ACL’1996: 319–326. G. Haffari, M. Roy, and A. Sarkar. 2009. Active learning for statistical phrase-based machine translation. NAACL’2009: 415–423. G. Haffari and A. Sarkar. 2009. Active learning for multilingual statistical machine translation. ACL/IJCNLP’2009: 181–189. T. Hasegawa, S. Sekine, and R. Grishman. 2004. Discovering Relations among Named Entities from Large Corpora. ACL’2004. Y.N. Hu, J.G. Shu, L.H. Qian, and Q.M. Qiao. 2013. Cross-lingual Relation Extraction based on Machine Translation (in Chinese). Journal of Chinese Information Processing, 27(5): 191-197. R. Hwa. 2004. Sample selection for statistical parsing. Computational Linguistics, 30(3): 253–276. S. Kim, M. Jeong, J. Lee, and G.G. Lee. 2010. A Cross-lingual Annotation Projection Approach for Relation Detection. COLING’2010: 564-571. S. Kim and G.G. Lee. 2012. A Graph-based Crosslingual Projection Approach for Weakly Supervised Relation Extraction. ACL’2012: 48-53. S. Kim, Y. Song, K. Kim, J.W. Cha, and G.G. Lee. 2006. MMR-based active machine learning for bio named entity recognition. HLT-NAACL’2006: 69– 72. W.J. Li, P. Zhang, F.R. Wei, Y.X. Hou, and Q. Lu. 2008. 
A Novel Feature-based Approach to Chinese Entity Relation Extraction. ACL’2008: 89-92. S.S. Li, S.F. Ju, G.D. Zhou, and X.J. Li. 2012. Active learning for imbalanced sentiment classification. EMNLP-CoNLL’2012: 139-148. B. Lu, C.H. Tan, C. Cardie, and B.K. Tsou. 2011. Joint Bilingual Sentiment Classification with Unlabeled Parallel Corpora. ACL’2011: 320-330. J. Oh, K. Uchimoto, and K. Torisawa. 2009. Bilingual Co-Training for Monolingual HyponymyRelation Acquisition. ACL’2009: 432-440. M. Osborne and J. Baldridge. 2004. Ensemble based active learning for parse selection. HLT-NAACL’ 2004: 89–96. L.H. Qian, G.D. Zhou, F. Kong, and Q.M. Zhu. 2010. Clustering-based Stratified Seed Sampling for Semi-Supervised Relation Classification. EMNLP2010: 346-355. L.H. Qian, G.D. Zhou, Q.M. Zhu, and P.D. Qian. 2008. Exploiting constituent dependencies for tree kernel-based semantic relation extraction. COLING’2008: 697-704. R. Reichart, K. Tomanek, U. Hahn, and A. Rappoport. 2008. Multi-task active learning for linguistic annotations. ACL’2008: 861-869. E. Ringger, P. McClanahan, R. Haertel, G. Busby, M. Carmen, J. Carroll, K. Seppi, and D. Lonsdale. 2007. Active learning for part-of-speech tagging: Accelerating corpus annotation. In Proceedings of the Linguistic Annotation Workshop at ACL’2007: 101–108. T. Scheffer, C. Decomain, and S. Wrobel. 2001. Active hidden Markov models for information extraction. In Proceedings of the International Conference on Advances in Intelligent Data Analysis (CAIDA), pages 309–318. A. I. Schein and L. H. Ungar. 2007. Active learning for logistic regression: an evaluation. Machine Learning, 68(3): 235-265. P. Sebastian and M. Lapata. 2009. Cross-lingual annotation projection of semantic roles. Journal of Artificial Intelligence Research, 36(1): 307-340. B. Settles and M. Craven. 2008. An Analysis of Active Learning Strategies for Sequence Labeling Tasks. EMNLP’2008: 1070–1079. D. Shen, J. Zhang, J. Su, G.D. Zhou and C.-L. Tan. 2004. Multi-criteria-based active learning for named entity recognition. ACL’2004. K. Tomanek and U. Hahn. 2009. Semi-Supervised Active Learning for Sequence Labeling. ACLIJCNLP’2009: 1039-1047. K. Tomanek, J. Wermter, and U. Hahn. 2007. An approach to text corpus construction which cuts annotation costs and maintains reusability of annotated data. EMNLP-CoNLL’2007: 486–495. X.J. Wan. 2009. Co-Training for Cross-Lingual Sentiment Classification. ACL-AFNLP’2009: 235-243. D. Yarowsky and G. Ngai. 2001. Inducing multilingual POS taggers and NP bracketers via robust projection across aligned corpora. NAACL’2001: 1-8. 591 D. Yarowsky, G. Ngai, and R. Wicentorski. 2001. Inducing multilingual text analysis tools via robust projection across aligned corpora. HLT’2001:1-8. H.H. Yu, L.H. Qian, G.D. Zhou, and Q.M. Zhu. 2010. Chinese Semantic Relation Extraction based on Unified Syntactic and Entity Semantic Tree (in Chinese). Journal of Chinese Information Processing, 24(5): 17-23. D. Zelenko, C. Aone, and A. Richardella. 2003. Kernel Methods for Relation Extraction. Journal of Machine Learning Research, 3: 1083-1106. Z. Zhang. 2004. Weakly-supervised relation classification for Information Extraction. CIKM’2004. M. Zhang, J. Su, D. M. Wang, G. D. Zhou, and C. L. Tan. 2005. Discovering Relations between Named Entities from a Large Raw Corpus Using Tree Similarity-Based Clustering. IJCNLP’2005: 378389. M. Zhang, J. Zhang, J. Su, and G.D. Zhou. 2006. A Composite Kernel to Extract Relations between Entities with both Flat and Structured Features. 
ACL/COLING’2006: 825-832. S.B. Zhao and R. Grishman. 2005. Extracting relations with integrated information using kernel methods. ACL’2005: 419-426. G.D. Zhou, J.H. Li, L.H. Qian, and Q.M. Zhu. 2008. Semi-Supervised Learning for Relation Extraction. IJCNLP’2008: 32-38. G.D. Zhou, J. Su, J. Zhang, and M. Zhang. 2005. Exploring various knowledge in relation extraction. ACL’2005: 427-434. J.B. Zhu and E. Hovy. 2007. Active learning for word sense disambiguation with methods for addressing the class imbalance problem. EMNLPCoNLL’2007: 783-790. 592
Proceedings of the 52nd Annual Meeting of the Association for Computational Linguistics, pages 593–602, Baltimore, Maryland, USA, June 23-25 2014. c⃝2014 Association for Computational Linguistics Learning Soft Linear Constraints with Application to Citation Field Extraction Sam Anzaroot Alexandre Passos David Belanger Andrew McCallum Department of Computer Science University of Massachusetts, Amherst {anzaroot, apassos, belanger, mccallum}@cs.umass.edu Abstract Accurately segmenting a citation string into fields for authors, titles, etc. is a challenging task because the output typically obeys various global constraints. Previous work has shown that modeling soft constraints, where the model is encouraged, but not require to obey the constraints, can substantially improve segmentation performance. On the other hand, for imposing hard constraints, dual decomposition is a popular technique for efficient prediction given existing algorithms for unconstrained inference. We extend dual decomposition to perform prediction subject to soft constraints. Moreover, with a technique for performing inference given soft constraints, it is easy to automatically generate large families of constraints and learn their costs with a simple convex optimization problem during training. This allows us to obtain substantial gains in accuracy on a new, challenging citation extraction dataset. 1 Introduction Citation field extraction, an instance of information extraction, is the task of segmenting and labeling research paper citation strings into their constituent parts, including authors, editors, year, journal, volume, conference venue, etc. This task is important because citation data is often provided only in plain text; however, having an accurate structured database of bibliographic information is necessary for many scientometric tasks, such as mapping scientific sub-communities, discovering research trends, and analyzing networks of researchers. Automated citation field extraction needs further research because it has not yet reached a level of accuracy at which it can be practically deployed in real-world systems. Hidden Markov models and linear-chain conditional random fields (CRFs) have previously been applied to citation extraction (Hetzner, 2008; Peng and McCallum, 2004) . These models support efficient dynamic-programming inference, but only model local dependencies in the output label sequence. However citations have strong global regularities not captured by these models. For example many book citations contain both an author section and an editor section, but none have two disjoint author sections. Since linear-chain models are unable to capture more than Markov dependencies, the models sometimes mislabel the editor as a second author. If we could enforce the global constraint that there should be only one author section, accuracy could be improved. One framework for adding such global constraints into tractable models is constrained inference, in which at inference time the original model is augmented with restrictions on the outputs such that they obey certain global regularities. When hard constraints can be encoded as linear equations on the output variables, and the underlying model’s inference task can be posed as linear optimization, one can formulate this constrained inference problem as an integer linear program (ILP) (Roth and Yih, 2004). Alternatively, one can employ dual decomposition (Rush et al., 2010). 
Dual decompositions’s advantage over ILP is is that it can leverage existing inference algorithms for the original model as a black box. Such a modular algorithm is easy to implement, and works quite well in practice, providing certificates of optimality for most examples. The above two approaches have previously been applied to impose hard constraints on a model’s output. On the other hand, recent work has demonstrated improvements in citation field extraction by imposing soft constraints (Chang et al., 2012). Here, the model is not required obey the global constraints, but merely pays a penalty for their vi593 4 .ref-marker [ J. first D. middle Monk ,last person ]authors [ Cardinal Functions on Boolean Algebra , ]title [ Lectures in Mathematics , ETH Zurich , series Birkhause Verlag , publisher Basel , Boston , Berlin , address 1990 . year date ]venue Figure 1: Example labeled citation olation. This paper introduces a novel method for imposing soft constraints via dual decomposition. We also propose a method for learning the penalties the prediction problem incurs for violating these soft constraints. Because our learning method drives many penalties to zero, it allows practitioners to perform ‘constraint selection,’ in which a large number of automatically-generated candidate global constraints can be considered and automatically culled to a smaller set of useful constraints, which can be run quickly at test time. Using our new method, we are able to incorporate not only all the soft global constraints of Chang et al. (2012), but also far more complex data-driven constraints, while also providing stronger optimality certificates than their beam search technique. On a new, more broadly representative, and challenging citation field extraction data set, we show that our methods achieve a 17.9% reduction in error versus a linear-chain conditional random field. Furthermore, we demonstrate that our inference technique can use and benefit from the constraints of Chang et al. (2012), but that including our data-driven constraints on top of these is beneficial. While this paper focusses on an application to citation field extraction, the novel methods introduced here would easily generalize to many problems with global output regularities. 2 Background 2.1 Structured Linear Models The overall modeling technique we employ is to add soft constraints to a simple model for which we have an existing efficient prediction algorithm. For this underlying model, we employ a chainstructured conditional random field (CRF), since CRFs have been shown to perform better than other simple unconstrained models like hidden markov models for citation extraction (Peng and McCallum, 2004). We produce a prediction by performing MAP inference (Koller and Friedman, 2009). The MAP inference task in a CRF be can expressed as an optimization problem with a linear objective (Sontag, 2010; Sontag et al., 2011). Here, we define a binary indicator variable for each candidate setting of each factor in the graphical model. Each of these indicator variables is associated with the score that the factor takes on when it has the indictor variable’s corresponding value. Since the log probability of some y in the CRF is proportional to sum of the scores of all the factors, we can concatenate the indicator variables as a vector y and the scores as a vector w and write the MAP problem as max. ⟨w, y⟩ s.t. y ∈U, (1) where the set U represents the set of valid configurations of the indicator variables. 
Here, the constraints are that all neighboring factors agree on the components of y in their overlap. Structured Linear Models are the general family of models where prediction requires solving a problem of the form (1), and they do not always correspond to a probabilistic model. The algorithms we present in later sections for handling soft global constraints and for learning the penalties of these constraints can be applied to general structured linear models, not just CRFs, provided we have an available algorithm for performing MAP inference. 2.2 Dual Decomposition for Global Constraints In order to perform prediction subject to various global constraints, we may need to augment the problem (1) with additional constraints. Dual Decomposition is a popular method for performing MAP inference in this scenario, since it leverages known algorithms for MAP in the base problem where these extra constraints have not been added (Komodakis et al., 2007; Sontag et al., 2011; Rush and Collins, 2012). In this case, the MAP problem can be formulated as a structured linear model similar to equation (1), for which we have a MAP algorithm, but where we have imposed some additional constraints Ay ≤b that no longer allow us to use the algorithm. In other 594 Algorithm 1 DD: projected subgradient for dual decomposition with hard constraints 1: while has not converged do 2: y(t) = argmaxy∈U w + AT λ, y 3: λ(t) = Π0≤· h λ(t−1) −η(t)(Ay −b) i words, we consider the problem max. ⟨w, y⟩ s.t. y ∈U Ay ≤b, (2) for an arbitrary matrix A and vector b. We can write the Lagrangian of this problem as L(y, λ) = ⟨w, y⟩+ λT (Ay −b). (3) Regrouping terms and maximizing over the primal variables, we have the dual problem min.λD(λ) = max y∈U w + AT λ, y −λT b. (4) For any λ, we can evaluate the dual objective D(λ), since the maximization in (4) is of the same form as the original problem (1), and we assumed we had a method for performing MAP in this. Furthermore, a subgradient of D(λ) is Ay∗−b, for an y∗which maximizes this inner optimization problem. Therefore, we can minimize D(λ) with the projected subgradient method (Boyd and Vandenberghe, 2004), and the optimal y can be obtained when evaluating D(λ∗). Note that the subgradient of D(λ) is the amount by which each constraint is violated by λ when maximizing over y. Algorithm 1 depicts the basic projected subgradient descent algorithm for dual decomposition. The projection operator Π consists of truncating all negative coordinates of λ to 0. This is necessary because λ is a vector of dual variables for inequality constraints. The algorithm has converged when each constraint is either satisfied by y(t) with equality or its corresponding component of λ is 0, due to complimentary slackness (Boyd and Vandenberghe, 2004). 3 Soft Constraints in Dual Decomposition We now introduce an extension of Algorithm 1 to handle soft constraints. In our formulation, a soft-constrained model imposes a penalty for each unsatisfied constraint, proportional to the amount by which it is violated. Therefore, our derivation parallels how soft-margin SVMs are derived from hard-margin SVMs by introducing auxiliary slack variables (Cortes and Vapnik, 1995). Note that when performing MAP subject to soft constraints, optimal solutions might not satisfy some constraints, since doing so would reduce the model’s score by too much. Consider the optimization problems of the form: max. ⟨w, y⟩−⟨c, z⟩ s.t. 
y ∈U Ay −b ≤z −z ≤0, (5) For positive ci, it is clear that an optimal zi will be equal to the degree to which aT i y ≤bi is violated. Therefore, we pay a cost ci times the degree to which the ith constraint is violated, which mirrors how slack variables are used to represent the hinge loss for SVMs. Note that ci has to be positive, otherwise this linear program is unbounded and an optimal value can be obtained by setting zi to infinity. Using a similar construction as in section 2.2 we write the Lagrangian as: (6) L(y, z, λ, µ) = ⟨w, y⟩−⟨c, z⟩ + λT (Ay −b −z) + µT (−z). The optimality constraints with respect to z tell us that −c −λ −µ = 0, hence µ = −c −λ. Substituting, we have L(y, λ) = ⟨w, y⟩+ λT (Ay −b), (7) except the constraint that µ = −c −λ implies that for µ to be positive λ ≤c. Since this Lagrangian has the same form as equation (3), we can also derive a dual problem, which is the same as in equation (4), with the additional constraint that each λi can not be bigger than its cost ci. In other words, the dual problem can not penalize the violation of a constraint more than the soft constraint model in the primal would penalize you if you violated it. This optimization problem can still be solved with projected subgradient descent and is depicted in Algorithm 2. The only modifications to Algorithm 1 are replacing the coordinate-wise projection Π0≤· with Π0≤·≤c and how we check for convergence. Now, we check for the KKT conditions of (5), where for every constraint i, either the constraint is satisfied with equality, λi = 0, or λi = ci. 595 Algorithm 2 Soft-DD: projected subgradient for dual decomposition with soft constraints 1: while has not converged do 2: y(t) = argmaxy∈U w + AT λ, y 3: λ(t) = Π0≤·≤c h λ(t−1) −η(t)(Ay −b) i Therefore, implementing soft-constrained dual decomposition is as easy as implementing hardconstrained dual decomposition, and the periteration complexity is the same. We encourage further applications of soft-constraint dual decomposition to existing and new NLP problems. 3.1 Learning Penalties One consideration when using soft v.s. hard constraints is that soft constraints present a new training problem, since we need to choose the vector c, the penalties for violating the constraints. An important property of problem (5) in the previous section is that it corresponds to a structured linear model over y and z. Therefore, we can apply known training algorithms for estimating the parameters of structured linear models to choose c. All we need to employ the structured perceptron algorithm (Collins, 2002) or the structured SVM algorithm (Tsochantaridis et al., 2004) is a blackbox procedure for performing MAP inference in the structured linear model given an arbitrary cost vector. Fortunately, the MAP problem for (5) can be solved using Soft-DD, in Algorithm 2. Each penalty ci has to be non-negative; otherwise, the optimization problem in equation (5) is ill-defined. This can be ensured by simple modifications of the perceptron and subgradient descent optimization of the structured SVM objective simply by truncating c coordinate-wise to be non-negative at every learning iteration. Intuitively, the perceptron update increases the penalty for a constraint if it is satisfied in the ground truth and not in an inferred prediction, and decreases the penalty if the constraint is satisfied in the prediction and not the ground truth. 
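Putting Algorithm 2 and this penalty update together, the following is a minimal sketch rather than a drop-in implementation: it assumes a black-box MAP routine for the underlying chain CRF, dense numpy arrays for A, b and the penalty vector c, and illustrative step-size and iteration-count choices.

```python
import numpy as np

def soft_dd_map(map_oracle, w, A, b, c, steps=100, eta=1.0):
    # Soft-DD (Algorithm 2 as stated above): projected subgradient on the
    # dual, with the dual variables truncated to the box [0, c] rather
    # than [0, inf). map_oracle(scores) returns the argmax y in U of a
    # structured linear model with the given per-indicator scores.
    lam = np.zeros(len(b))
    y = map_oracle(w)
    for t in range(1, steps + 1):
        y = map_oracle(w + A.T @ lam)          # inner MAP with modified scores
        violation = A @ y - b                  # subgradient with respect to lam
        lam = np.clip(lam - (eta / t) * violation, 0.0, c)
        # Converged when each constraint is tight, or lam_i = 0, or lam_i = c_i.
    return y, lam

def perceptron_penalty_update(c, y_gold, y_pred, A, b, rate=1.0):
    # Structured-perceptron-style update for the penalty vector c: raise
    # c_i when the i-th constraint is violated more by the prediction than
    # by the ground truth, lower it in the opposite case, and truncate at
    # zero so penalties stay non-negative.
    viol_pred = np.maximum(A @ y_pred - b, 0.0)
    viol_gold = np.maximum(A @ y_gold - b, 0.0)
    return np.maximum(c + rate * (viol_pred - viol_gold), 0.0)
```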
Since we truncate penalties at 0, this suggests that we will learn a penalty of 0 for constraints in three categories: constraints that do not hold in the ground truth, constraints that hold in the ground truth but are satisfied in practice by performing inference in the base CRF model, and constraints that are satisfied in practice as a side-effect of imposing non-zero penalties on some other constraints . A similar analysis holds for the structured SVM approach. Therefore, we can view learning the values of the penalties not just as parameter tuning, but as a means to perform ‘constraint selection,’ since constraints that have a penalty of 0 can be ignored. This property allows us to consider large families of constraints, from which the useful ones are automatically identified. We found it beneficial, though it is not theoretically necessary, to learn the constraints on a heldout development set, separately from the other model parameters, as during training most constraints are satisfied due to overfitting, which leads to an underestimation of the relevant penalties. 4 Citation Extraction Data We consider the UMass citation dataset, first introduced in Anzaroot and McCallum (2013). It has over 1800 citation from many academic fields, extracted from the arXiv. This dataset contains both coarse-grained and fine-grained labels; for example it contains labels for the segment of all authors, segments for each individual author, and for the first and last name of each author. There are 660 citations in the development set and 367 citation in the test set. The labels in the UMass dataset are a concatenation of labels from a hierarchically-defined schema. For example, a first name of an author is tagged as: authors/person/first. In addition, individual tokens are labeled using a BIO label schema for each level in the hierarchy. BIO is a commonly used labeling schema for information extraction tasks. BIO labeling allows individual labels on tokens to label segmentation information as well as labels for the segments. In this schema, labels that begin segments are prepended with a B, labels that continue a segment are prepended with an I, and tokens that don’t have a labeling in this schema are given an O label. For example, in a hierarchical BIO label schema the first token in the first name for the second author may be labeled as: I-authors/B-person/B-first. An example labeled citation in this dataset can be viewed in figure 1. 5 Global Constraints for Citation Extraction 5.1 Constraint Templates We now describe the families of global constraints we consider for citation extraction. Note these 596 constraints are all linear, since they depend only on the counts of each possible conditional random field label. Moreover, since our labels are BIO-encoded, it is possible, by counting B tags, to count how often each citation tag itself appears in a sentence. The first two families of constraints that we describe are general to any sequence labeling task while the last is specific to hierarchical labeling such as available in the UMass dataset. Our sequence output is denoted as y and an element of this sequence is yk. We denote [[yk = i]] as the function that outputs 1 if yk has a 1 at index i and 0 otherwise. Here, yk represents an output tag of the CRF, so if [[yk = i]] = 1, then we have that yk was given a label with index i. 5.2 Singleton Constraints Singleton constraints ensure that each label can appear at most once in a citation. 
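As a concrete illustration of such count-based constraints (formalized next), the sketch below computes the per-label segment counts s(i) from a BIO-encoded output and the slack by which each singleton constraint s(i) <= 1 is violated. The flat label names and the example sequence are hypothetical; the dataset's actual labels are hierarchical.

```python
from collections import Counter

def segment_counts(bio_tags):
    # s(i): number of segments of each label type, obtained by counting
    # B- prefixes in a BIO-encoded tag sequence.
    counts = Counter()
    for tag in bio_tags:
        if tag.startswith("B-"):
            counts[tag[2:]] += 1
    return counts

def singleton_violations(bio_tags):
    # Soft singleton constraints s(i) <= 1: returns the amount by which
    # each constraint is violated (the z_i slack values).
    return {label: max(count - 1, 0)
            for label, count in segment_counts(bio_tags).items()}

# Hypothetical example with flat labels for readability:
tags = ["B-authors", "I-authors", "B-title", "I-title", "B-authors"]
print(singleton_violations(tags))   # {'authors': 1, 'title': 0}
```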
These are the same global constraints that were used for citation field extraction in Chang et al. (2012). We define s(i) to be the number of times the label with index i is predicted in a citation; formally,

s(i) = Σ_{yk∈y} [[yk = i]].

The constraint that each label can appear at most once takes the form s(i) ≤ 1.

5.3 Pairwise Constraints
Pairwise constraints are constraints on the counts of two labels in a citation. We define z1(i, j) to be

z1(i, j) = Σ_{yk∈y} [[yk = i]] + Σ_{yk∈y} [[yk = j]]

and z2(i, j) to be

z2(i, j) = Σ_{yk∈y} [[yk = i]] − Σ_{yk∈y} [[yk = j]].

We consider all constraints of the forms z(i, j) ≤ 0, 1, 2, 3 and z(i, j) ≥ 0, 1, 2, 3, where z(i, j) stands for either z1(i, j) or z2(i, j). Note that some pairs of these constraints are redundant or logically incompatible. However, we use them as soft constraints, so they will not necessarily be satisfied by the output of the model, which eliminates the concern of enforcing logically impossible outputs. Furthermore, in section 3.1 we described how our procedure for learning penalties drives some penalties to 0, which effectively removes the corresponding constraints from the set we consider. It can be shown, for example, that we will never learn non-zero penalties for certain pairs of logically incompatible constraints using the perceptron-style algorithm described in section 3.1.

5.4 Hierarchical Equality Constraints
The labels in the citation dataset are hierarchical: each label is the concatenation of all the levels in the hierarchy. We can therefore create constraints that depend on only one or a couple of elements in the hierarchy. We define C(x, i) as the function that returns 1 if the output x contains the label i in the hierarchy and 0 otherwise, and we define e(i, j) to be

e(i, j) = Σ_{yk∈y} [[C(yk, i)]] − Σ_{yk∈y} [[C(yk, j)]].

Hierarchical equality constraints take the forms

e(i, j) ≥ 0, (8)
e(i, j) ≤ 0. (9)

5.5 Local Constraints
We constrain the output labeling of the chain-structured CRF to be a valid BIO encoding. This both improves the performance of the underlying model when it is used without global constraints and ensures the validity of the global constraints we impose, since they operate only on B labels. The constraint that the labeling is valid BIO can be expressed as a collection of pairwise constraints on adjacent labels in the sequence. Rather than enforcing these constraints with dual decomposition, they can be enforced directly during MAP inference in the CRF by modifying the dynamic program of the Viterbi algorithm to allow only valid pairs of adjacent labels.
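As an illustration of how such count-based constraints can be compiled into the matrix A and vector b used by Soft-DD, the following sketch builds the singleton and pairwise families over hypothetical hierarchical BIO labels; the hierarchical-equality and lower-bound variants can be encoded the same way by negating rows. The label format, the helper names, the restriction to top-level fields, and the simplification of letting A act on per-label B-tag counts rather than on the full position-by-label indicator vector are assumptions made for illustration.

```python
import numpy as np

def starts_field(label):
    """True if a hierarchical BIO label such as 'B-authors/B-person/B-first'
    begins a new top-level segment (outermost tag is B-)."""
    return label.split("/")[0].startswith("B-")

def label_counts(y, label_vocab):
    """counts[k] = number of tokens in the predicted sequence y (a list of
    label strings) that carry label_vocab[k]."""
    index = {lab: k for k, lab in enumerate(label_vocab)}
    counts = np.zeros(len(label_vocab))
    for lab in y:
        k = index.get(lab)
        if k is not None:
            counts[k] += 1
    return counts

def build_constraint_matrix(label_vocab):
    """Compile singleton and pairwise count constraints as rows of (A, b),
    so that A @ label_counts(y, label_vocab) <= b.  Only labels beginning a
    top-level field are constrained here, since the global constraints
    operate on B tags."""
    n = len(label_vocab)
    field_ids = [k for k, lab in enumerate(label_vocab) if starts_field(lab)]
    rows, bounds = [], []

    # Singleton constraints: s(i) <= 1.
    for i in field_ids:
        a = np.zeros(n)
        a[i] = 1.0
        rows.append(a)
        bounds.append(1.0)

    # Pairwise constraints: z1(i,j) = count_i + count_j and
    # z2(i,j) = count_i - count_j, each bounded above by 0..3.
    # Lower bounds z >= m can be added as -z <= -m in the same way.
    for idx, i in enumerate(field_ids):
        for j in field_ids[idx + 1:]:
            for m in range(4):
                a = np.zeros(n); a[i] = 1.0; a[j] = 1.0
                rows.append(a); bounds.append(float(m))
                a = np.zeros(n); a[i] = 1.0; a[j] = -1.0
                rows.append(a); bounds.append(float(m))

    return np.vstack(rows), np.array(bounds)
```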
5.6 Constraint Pruning
While the techniques from section 3.1 can easily cope with a large number of constraints at training time, this can be computationally costly, especially if one is considering very large constraint families. This is problematic because the size of some constraint families we consider grows quadratically with the number of candidate labels, and there are about 100 candidate labels in the UMass dataset. Such a family consists of constraints that the sum of the counts of two different label types has to be bounded (a useful example is that there cannot be more than one of "phd thesis" and "journal"). Therefore, quickly pruning bad constraints can save a substantial amount of training time and can lead to better generalization.

Figure 2: Two examples where imposing soft global constraints improves field extraction errors. Soft-DD converged in 1 iteration on the first example, and in 7 iterations on the second. When a reference cites a book rather than a section of a book, the correct labeling of the name of the book is title. In the first example, the baseline CRF incorrectly outputs booktitle, but this is fixed by Soft-DD, which penalizes outputs based on the constraint that booktitle should co-occur with an address label. In the second example, the unconstrained CRF output violates the constraint that title and status labels should not co-occur. The ground-truth labeling also violates a constraint that title and language labels should not co-occur. At convergence of the Soft-DD algorithm, the correct labeling of language is predicted, which is possible because of the use of soft constraints.

Table 1: Set of constraints learned and F1 scores. The last row depicts the result of inference using all constraints as hard constraints.

Constraints    F1 score   Sparsity   # of cons
Baseline       94.44      --         --
Only-one       94.62      0%         3
Hierarchical   94.55      56.25%     16
Pairwise       95.23      43.19%     609
All            95.39      32.96%     628
All DD         94.60      0%         628

To prune bad constraints, we calculate a score that estimates how useful each constraint is expected to be. Our score compares how often the constraint is violated in the ground-truth examples versus in our predictions. Here, prediction is done with respect to the base chain-structured CRF tagger and does not include global constraints. Note that it may make sense to consider a constraint that is sometimes violated in the ground truth, as the penalty-learning algorithm can learn a small penalty for it, which will allow it to be violated some of the time. Our importance score for a constraint c on a labeled set D is defined as

imp(c) = ( Σ_{d∈D} [[argmax_y w_d^T y]]_c ) / ( Σ_{d∈D} [[y_d]]_c ), (10)

where [[y]]_c is 1 if constraint c is violated on output y and 0 otherwise. Here, y_d denotes the ground-truth labeling and w_d is the vector of scores for the CRF tagger. We prune constraints by picking a cutoff value for imp(c). A value of imp(c) above 1 implies that the constraint is violated more often on the predicted examples than on the ground truth, and hence that we might want to keep it. We also find that the constraints with the largest imp values are semantically interesting.

6 Related Work
There are multiple previous examples of augmenting chain-structured sequence models with terms capturing global relationships by expanding the chain into a more complex graphical model with non-local dependencies between the outputs. Inference in these models can be performed, for example, with loopy belief propagation (Bunescu and Mooney, 2004; Sutton and McCallum, 2004) or Gibbs sampling (Finkel et al., 2005).
Belief propagation is prohibitively expensive in our 598 model due to the high cardinalities of the output variables and of the global factors, which involve all output variables simultaneously. There are various methods for exploiting the combinatorial structure of these factors, but performance would still have higher complexity than our method. While Gibbs sampling has been shown to work well tasks such as named entity recognition (Finkel et al., 2005), our previous experiments show that it does not work well for citation extraction, where it found only low-quality solutions in practice because the sampling did not mix well, even on a simple chain-structured CRF. Recently, dual decomposition has become a popular method for solving complex structured prediction problems in NLP (Koo et al., 2010; Rush et al., 2010; Rush and Collins, 2012; Paul and Eisner, 2012; Chieu and Teow, 2012). Soft constraints can be implemented inefficiently using hard constraints and dual decomposition— by introducing copies of output variables and an auxiliary graphical model, as in Rush et al. (2012). However, at every iteration of dual decomposition, MAP must be run in this auxiliary model. Furthermore the copying of variables doubles the number of iterations needed for information to flow between output variables, and thus slows convergence. On the other hand, our approach to soft constraints has identical per-iteration complexity as for hard constraints, and is a very easy modification to existing hard constraint code. Initial work in machine learning for citation extraction used Markov models with no global constraints. Hidden Markov models (HMMs), were originally employed for automatically extracting information from research papers on the CORA dataset (Seymore et al., 1999; Hetzner, 2008). Later, CRFs were shown to perform better on CORA, improving the results from the Hmm’s token-level F1 of 86.6 to 91.5 with a CRF(Peng and McCallum, 2004). Recent work on globally-constrained inference in citation extraction used an HMMCCM, which is an HMM with the addition of global features that are restricted to have positive weights (Chang et al., 2012). Approximate inference is performed using beam search. This method increased the HMM token-level accuracy from 86.69 to 93.92 on a test set of 100 citations from the CORA dataset. The global constraints added into the model are simply that each label only occurs once per citation. This approach is limited in its use of an HMM as an underlying model, as it has been shown that CRFs perform significantly better, achieving 95.37 token-level accuracy on CORA (Peng and McCallum, 2004). In our experiments, we demonstrate that the specific global constraints used by Chang et al. (2012) help on the UMass dataset as well. 7 Experimental Results Our baseline is the one used in Anzaroot and McCallum (2013), with some labeling errors removed. This is a chain-structured CRF trained to maximize the conditional likelihood using LBFGS with L2 regularization. We use the same features as Anzaroot and McCallum (2013), which include word type, capitalization, binned location in citation, regular expression matches, and matches into lexicons. In addition, we use a rule-based segmenter that segments the citation string based on punctuation as well as probable start or end segment words (e.g. ‘in’ and ‘volume’). We add a binary feature to tokens that correspond to the start of a segment in the output of this simple segmenter. 
This final feature improves the F1 score on the cleaned test set from 94.0 F1 to 94.44 F1, which we use as a baseline score. We then use the development set to learn the penalties for the soft constraints, using the perceptron algorithm described in section 3.1. MAP inference in the model with soft constraints is performed using Soft-DD, shown in Algorithm 2. We instantiate constraints from each template in section 5.1, iterating over all possible labels that contain a B prefix at any level in the hierarchy and pruning all constraints with imp(c) < 2.75 calculated on the development set. We asses performance in terms of field-level F1 score, which is the harmonic mean of precision and recall for predicted segments. Table 1 shows how each type of constraint family improved the F1 score on the dataset. Learning all the constraints jointly provides the largest improvement in F1 at 95.39. This improvement in F1 over the baseline CRF as well as the improvement in F1 over using only-one constraints was shown to be statistically significant using the Wilcoxon signed rank test with p-values < 0.05. In the all-constraints settings, 32.96% of the constraints have a learned parameter of 0, and therefore only 599 Stop F1 score Convergence Avg Iterations 1 94.44 76.29% 1.0 2 95.07 83.38% 1.24 5 95.12 95.91% 1.61 10 95.39 99.18% 1.73 Table 2: Performance from terminating Soft-DD early. Column 1 is the number of iterations we allow each example. Column 3 is the % of test examples that converged. Column 4 is the average number of necessary iterations, a surrogate for the slowdown over performing unconstrained inference. 421 constraints are active. Soft-DD converges, and thus solves the constrained inference problem exactly, for all test set examples after at most 41 iterations. Running Soft-DD to convergence requires 1.83 iterations on average per example. Since performing inference in the CRF is by far the most computationally intensive step in the iterative algorithm, this means our procedure requires approximately twice as much work as running the baseline CRF on the dataset. On examples where unconstrained inference does not satisfy the constraints, Soft-DD converges after 4.52 iterations on average. For 11.99% of the examples, the Soft-DD algorithm satisfies constraints that were not satisfied during unconstrained inference, while in the remaining 11.72% Soft-DD converges with some constraints left unsatisfied, which is possible since we are imposing them as soft constraints. We could have enforced these constraints as hard constraints rather than soft ones. This experiment is shown in the last row of Table 1, where F1 only improves to 94.6. In addition, running the DD algorithm with these constraints takes 5.21 iterations on average per example, which is 2.8 times slower than Soft-DD with learned penalties. In Figure 2, we analyze the performance of Soft-DD when we don’t necessarily run it to convergence, but stop after a fixed number of iterations on each test set example. We find that a large portion of our gain in accuracy can be obtained when we allow ourselves as few as 2 dual decomposition iterations. However, this only amounts to 1.24 times as much work as running the baseline CRF on the dataset, since the constraints are satisfied immediately for many examples. In Figure 2 we consider two applications of our Soft-DD algorithm, and provide analysis in the caption. 
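A small sketch of the pruning computation described in section 5.6, under the same simplified count-vector representation as in the earlier sketch: the importance score of equation (10) is estimated from gold and unconstrained-CRF label counts, and constraints below the cutoff (2.75 on the development set in the setup above) are dropped. The handling of constraints that are never violated in the ground truth is an assumption made for illustration.

```python
import numpy as np

def constraint_importance(A, b, gold_counts, pred_counts):
    """imp(c) from equation (10): for each constraint (row of A), the number
    of examples whose unconstrained CRF prediction violates it divided by
    the number of examples whose ground-truth labeling violates it.
    gold_counts and pred_counts are lists of per-example label-count vectors."""
    gold_viol = np.zeros(len(b))
    pred_viol = np.zeros(len(b))
    for g, p in zip(gold_counts, pred_counts):
        gold_viol += (A @ g > b)
        pred_viol += (A @ p > b)
    imp = np.zeros(len(b))
    seen = gold_viol > 0
    imp[seen] = pred_viol[seen] / gold_viol[seen]
    imp[~seen & (pred_viol > 0)] = np.inf   # never violated in gold: always keep
    return imp

def prune_constraints(A, b, imp, cutoff=2.75):
    """Drop all constraints whose importance score falls below the cutoff."""
    keep = imp >= cutoff
    return A[keep], b[keep]
```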
We train and evaluate on the UMass dataset instead of CORA, because it is significantly larger, has a useful finer-grained labeling schema, and its annotation is more consistent. We were able to obtain better performance on CORA using our baseline CRF than the HMM CCM results presented in Chang et al. (2012), which include soft constraints. Given this high performance of our base model on CORA, we did not apply our Soft-DD algorithm to the dataset. Furthermore, since the dataset is so small, learning the penalties for our large collection of constraints is difficult, and test set results are unreliable. Rather than compare our work to Chang et al. (2012) via results on CORA, we apply their constraints on the UMass data using Soft-DD and demonstrate accuracy gains, as discussed above. 7.1 Examples of learned constraints We now describe a number of the useful constraints that receive non-zero learned penalties and have high importance scores, defined in Section 5.6. The importance score of a constraint provides information about how often it is violated by the CRF, but holds in the ground truth, and a non-zero penalty implies we enforce it as a soft constraint at test time. The two singleton constraints with highest importance score are that there should only be at most one title segment in a citation and that there should be at most one author segment in a citation. The only one author constraint is particularly useful for correctly labeling editor segments in cases where unconstrained inference mislabels them as author segments. As can be seen in Table 3, editor fields are among the most improved with our new method, largely due to this constraint. The two hierarchical constraints with the highest importance scores with non-zero learned penalties constrain the output such that number of person segments does not exceed the number of first segments and vice-versa. Together, these constraints penalize outputs in which the number of person segments do not equal the number of first segments, i.e., every author should have a first name. One important pairwise constraint penalizes outputs in which thesis segments don’t co-occur with school segments. School segments label the name of the university that the thesis was submitted to. The application of this constraint increases the performance of the model on school segments 600 Label U C + venue/series 35.29 66.67 31.37 venue/editor/person/first 66.67 94.74 28.07 venue/school 40.00 66.67 26.67 venue/editor/person/last 75.00 94.74 19.74 venue/editor 77.78 90.00 12.22 venue/editor/person/middle 81.82 91.67 9.85 Table 3: Labels with highest improvement in F1. U is in unconstrained inference. C is the results of constrained inference. + is the improvement in F1. dramatically, as can be seen in table 3. An interesting form of pairwise constraints penalize outputs in which some labels do not cooccur with other labels. Some examples of constraints in this form enforce that journal segments should co-occur with pages segments and that booktitle segments should co-occur with address segments. An example of the latter constraint being employed during inference is the first example in Figure 2. Here, the constrained inference penalizes output which contains a booktitle segment but no address segment. This penalization leads allows the constrained inference to correctly label the booktitle segment as a title segment. The above example constraints are almost always satisfied on the ground truth, and would be useful to enforce as hard constraints. 
However, there are a number of learned constraints that are often violated on the ground truth but are still useful as soft constraints. Take, for example, the constraint that the number of number segments does not exceed the number of booktitle segments, as well as the constraint that it does not exceed the number of journal segments. These constraints are moderately violated on ground truth examples, however. For example, when booktitle segments co-occur with number segments but not with journal segments, the second constraint is violated. It is still useful to impose these soft constraints, as strong evidence from the CRF allows us to violate them, and they can guide the model to good predictions when the CRF is unconfident. 8 Conclusion We introduce a novel modification to the standard projected subgradient dual decomposition algorithm for performing MAP inference subject to hard constraints to one for performing MAP in the presence of soft constraints. In addition, we offer an easy-to-implement procedure for learning the penalties on soft constraints. This method drives many penalties to zero, which allows users to automatically discover discriminative constraints from large families of candidates. We show via experiments on a recent substantial dataset that using soft constraints, and selecting which constraints to use with our penalty-learning procedure, can lead to significant gains in accuracy. We achieve a 17% gain in accuracy over a chain-structured CRF model, while only needing to run MAP in the CRF an average of less than 2 times per example. This minor incremental cost over Viterbi, plus the fact that we obtain certificates of optimality on 100% of our test examples in practice, suggests the usefulness of our algorithm for large-scale applications. We encourage further use of our Soft-DD procedure for other structured prediction problems. Acknowledgments This work was supported in part by the Center for Intelligent Information Retrieval, in part by DARPA under agreement number FA8750-132-0020, in part by NSF grant #CNS-0958392, and in part by IARPA via DoI/NBC contract #D11PC20152. The U.S. Government is authorized to reproduce and distribute reprint for Governmental purposes notwithstanding any copyright annotation thereon. Any opinions, findings and conclusions or recommendations expressed in this material are those of the authors and do not necessarily reflect those of the sponsor. References Sam Anzaroot and Andrew McCallum. 2013. A new dataset for fine-grained citation field extraction. In ICML Workshop on Peer Reviewing and Publishing Models. Stephen Poythress Boyd and Lieven Vandenberghe. 2004. Convex optimization. Cambridge university press. Razvan Bunescu and Raymond J Mooney. 2004. Collective information extraction with relational markov networks. In Proceedings of the 42nd Annual Meeting on Association for Computational Linguistics, page 438. Association for Computational Linguistics. M. Chang, L. Ratinov, and D. Roth. 2012. Structured learning with constrained conditional models. Machine Learning, 88(3):399–431, 6. Hai Leong Chieu and Loo-Nin Teow. 2012. Combining local and non-local information with dual decomposition for named entity recognition from text. 601 In Information Fusion (FUSION), 2012 15th International Conference on, pages 231–238. IEEE. Michael Collins. 2002. Discriminative training methods for hidden markov models: Theory and experiments with perceptron algorithms. 
In Proceedings of the ACL-02 conference on Empirical methods in natural language processing-Volume 10, pages 1–8. Association for Computational Linguistics. Corinna Cortes and Vladimir Vapnik. 1995. Supportvector networks. Machine learning, 20(3):273–297. Jenny Rose Finkel, Trond Grenager, and Christopher Manning. 2005. Incorporating non-local information into information extraction systems by gibbs sampling. In Proceedings of the 43rd Annual Meeting on Association for Computational Linguistics, pages 363–370. Association for Computational Linguistics. Erik Hetzner. 2008. A simple method for citation metadata extraction using hidden markov models. In Proceedings of the 8th ACM/IEEE-CS joint conference on Digital libraries, pages 280–284. ACM. Daphne Koller and Nir Friedman. 2009. Probabilistic graphical models: principles and techniques. The MIT Press. Nikos Komodakis, Nikos Paragios, and Georgios Tziritas. 2007. Mrf optimization via dual decomposition: Message-passing revisited. In Computer Vision, 2007. ICCV 2007. IEEE 11th International Conference on, pages 1–8. IEEE. Terry Koo, Alexander M Rush, Michael Collins, Tommi Jaakkola, and David Sontag. 2010. Dual decomposition for parsing with non-projective head automata. In Proceedings of the 2010 Conference on Empirical Methods in Natural Language Processing, pages 1288–1298. Association for Computational Linguistics. Michael J Paul and Jason Eisner. 2012. Implicitly intersecting weighted automata using dual decomposition. In Proceedings of the 2012 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, pages 232–242. Association for Computational Linguistics. Fuchun Peng and Andrew McCallum. 2004. Accurate information extraction from research papers using conditional random fields. In Daniel Marcu Susan Dumais and Salim Roukos, editors, HLT-NAACL 2004: Main Proceedings, pages 329–336, Boston, Massachusetts, USA, May 2 - May 7. Association for Computational Linguistics. Dan Roth and Wen-tau Yih. 2004. A linear programming formulation for global inference in natural language tasks. Defense Technical Information Center. Alexander M. Rush and Michael Collins. 2012. A tutorial on dual decomposition and lagrangian relaxation for inference in natural language processing. J. Artif. Intell. Res. (JAIR), 45:305–362. Alexander M Rush, David Sontag, Michael Collins, and Tommi Jaakkola. 2010. On dual decomposition and linear programming relaxations for natural language processing. In Proceedings of the 2010 Conference on Empirical Methods in Natural Language Processing, pages 1–11. Association for Computational Linguistics. Alexander M Rush, Roi Reichart, Michael Collins, and Amir Globerson. 2012. Improved parsing and pos tagging using inter-sentence consistency constraints. In Proceedings of the 2012 Joint Conference on Empirical Methods in Natural Language Processing and Computational Natural Language Learning, pages 1434–1444. Kristie Seymore, Andrew McCallum, Roni Rosenfeld, et al. 1999. Learning hidden markov model structure for information extraction. In AAAI-99 Workshop on Machine Learning for Information Extraction, pages 37–42. David Sontag, Amir Globerson, and Tommi Jaakkola. 2011. Introduction to dual decomposition for inference. In Suvrit Sra, Sebastian Nowozin, and Stephen J. Wright, editors, Optimization for Machine Learning. MIT Press. David Sontag. 2010. Approximate Inference in Graphical Models using LP Relaxations. Ph.D. 
thesis, Massachusetts Institute of Technology, Department of Electrical Engineering and Computer Science. Charles Sutton and Andrew McCallum. 2004. Collective segmentation and labeling of distant entities in information extraction. Technical report, DTIC Document. Ioannis Tsochantaridis, Thomas Hofmann, Thorsten Joachims, and Yasemin Altun. 2004. Support vector machine learning for interdependent and structured output spaces. In Proceedings of the twenty-first international conference on Machine learning, page 104. ACM. 602
Proceedings of the 52nd Annual Meeting of the Association for Computational Linguistics, pages 603–612, Baltimore, Maryland, USA, June 23-25 2014. c⃝2014 Association for Computational Linguistics A Study of Concept-based Weighting Regularization for Medical Records Search Yue Wang, Xitong Liu, Hui Fang Department of Electrical & Computer Engineering, University of Delaware, USA {wangyue,xtliu,hfang}@udel.edu Abstract An important search task in the biomedical domain is to find medical records of patients who are qualified for a clinical trial. One commonly used approach is to apply NLP tools to map terms from queries and documents to concepts and then compute the relevance scores based on the conceptbased representation. However, the mapping results are not perfect, and none of previous work studied how to deal with them in the retrieval process. In this paper, we focus on addressing the limitations caused by the imperfect mapping results and study how to further improve the retrieval performance of the concept-based ranking methods. In particular, we apply axiomatic approaches and propose two weighting regularization methods that adjust the weighting based on the relations among the concepts. Experimental results show that the proposed methods are effective to improve the retrieval performance, and their performances are comparable to other top-performing systems in the TREC Medical Records Track. 1 Introduction With the increasing use of electronic health records, it becomes urgent to leverage this rich information resource about patients’ health conditions to transform research in health and medicine. As an example, when developing a cohort for a clinical trial, researchers need to identify patients matching a set of clinical criteria based on their medical records during their hospital visits (Safran et al., 2007; Friedman et al., 2010). This selection process is clearly a domain-specific retrieval problem, which searches for relevant medical records that contain useful information about their corresponding patients’ qualification to the criteria specified in a query, e.g., “female patient with breast cancer with mastectomies during admission”. Intuitively, to better solve this domain-specific retrieval problem, we need to understand the requirements specified in a query and identify the documents satisfying these requirements based on their semantic meanings. In the past decades, significant efforts have been put on constructing biomedical knowledge bases (Aronson and Lang, 2010; Lipscomb, 2000; Corporation, 1999) and developing natural language processing (NLP) tools, such as MetaMap, to utilize the information from the knowledge bases (Aronson, 2001; McInnes et al., 2009). These efforts make it possible to map free text to concepts and use these concepts to represent queries and documents. Indeed, concept-based representation is one of the commonly used approaches that leverage knowledge bases to improve the retrieval performance (Limsopatham et al., 2013d; Limsopatham et al., 2013b). The basic idea is to represent both queries and documents as “bags of concepts”, where the concepts are identified based on the information from the knowledge bases. This method has been shown to be more effective than traditional term-based representation in the medical record retrieval because of its ability to handle the ambiguity in the medical terminology. However, this method also suffers the limitation that its effectiveness depends on the accuracy of the concept mapping results. 
As a result, directly applying existing weighting strategies might lead to nonoptimal retrieval performance. In this paper, to address the limitation caused by the inaccurate concept mapping results, we propose to regularize the weighting strategies in the concept-based representation methods. Specifically, by applying the axiomatic approaches (Fang and Zhai, 2005), we analyze the retrieval func603 tions with concept-based representation and find that they may violate some reasonable retrieval constraints. We then propose two concept-based weighting regularization methods so that the regularized retrieval functions would satisfy the retrieval constraints and achieve better retrieval performance. Experimental results over two TREC collections show that both proposed conceptbased weighting regularization methods can improve the retrieval performance, and their performance is comparable with the best systems of the TREC Medical Records tracks (Voorhees and Tong, 2011; Voorhees and Hersh, 2012). Many NLP techniques have been developed to understand the semantic meaning of textual information, and are often applied to improve the search accuracy. However, due to the inherent ambiguity of natural languages, the results of NLP tools are not perfect. One of our contributions is to present a general methodology that can be used to adjust existing IR techniques based on the inaccurate NLP results. 2 Related Work The Medical Records track of the Text REtrieval Conference (TREC) provides a common platform to study the medical records retrieval problem and evaluate the proposed methods (Voorhees and Tong, 2011; Voorhees and Hersh, 2012). Concept-based representation has been studied for the medical record retrieval problem (Limsopatham et al., 2013d; Limsopatham et al., 2013b; Limsopatham et al., 2013a; Qi and Laquerre, 2012; Koopman et al., 2011; Koopman et al., 2012). For example, Qi and Laquerre used MetaMap to generate the concept-based representation and then apply a vector space retrieval model for ranking, and their results are one of the top ranked runs in the TREC 2012 Medical Records track (Qi and Laquerre, 2012). To further improve the performance, Limsopatham et al. proposed a task-specific representation, i.e., using only four types of concepts (symptom, diagnostic test, diagnosis and treatment) in the concept-based representation and a query expansion method based on the relationships among the medical concepts (Limsopatham et al., 2013d; Limsopatham et al., 2013a). Moreover, they also proposed a learning approach to combine both term-based and concept-based representation to further improve the performance (Limsopatham et Figure 1: Example of MetaMap result for a query. al., 2013b). Our work is also related to domain-specific IR (Yan et al., 2011; Lin and Demner-Fushman, 2006; Zhou et al., 2007). For example, Yan et al. proposed a granularity-based document ranking model that utilizes ontologies to identify document concepts. However, none of the previous work has studied how to regularize the weight of concepts based on their relations. It is well known that the effectiveness of a retrieval function is closely related to the weighting strategies (Fang and Zhai, 2005; Singhal et al., 1996). Various term weighting strategies have been proposed and studied for the term-based representation (Amati and Van Rijsbergen, 2002; Singhal et al., 1996; Robertson et al., 1996). 
However, existing studies on concept-based representation still used weighting strategies developed for term-based representation such as vector space models (Qi and Laquerre, 2012) and divergence from randomness (DFR) (Limsopatham et al., 2013a) and did not take the inaccurate concept mapping results into consideration. Compared with previous work, we focus on addressing the limitation caused by the inaccurate concept mapping. Note that our efforts are orthogonal to existing work, and it is expected to bring additional improvement to the retrieval performance. 3 Concept-based Representation for Medical Records Retrieval 3.1 Problem Formulation We follow the problem setup used in the TREC medical record track (Voorhees and Tong, 2011; Voorhees and Hersh, 2012). The task is to retrieve relevant patient visits with respect to a query. Since each visit can be associated with multiple medical records, the relevance of a visit is related to the relevance of individual associated medical records. Existing studies computed the relevance 604 scores at either visit-level, where all the medical records of a visit are merged into a visit document (Demner-Fushman et al., 2012; Limsopatham et al., 2013c), or record-level, where we can first compute the relevance score of individual records and then aggregate their scores as the relevance score of a visit (Limsopatham et al., 2013c; Zhu and Carterette, 2012; Limsopatham et al., 2013d). In this paper, we focus on the visit-level relevance because of its simplicity. In particular, given a patient’s visit, all the medical records generated from this visit are merged as a document. Note that our proposed concept-weighting strategies can also be easily applied to record-level relevance modeling. Since the goal is to retrieve medical records of patients that satisfying requirements specified in a query, the relevance of medical records should be modeled based on how well they match all the requirements (i.e., aspects) specified in the queries. 3.2 Background: UMLS and MetaMap Unified Medical Language System (UMLS) is a metathesaurus containing information from more than 100 controlled medical terminologies such as the Systematized Nomenclature of Medicine Clinical Terms (SNOMED-CT) and Medical Subject Headings (MeSH). Specifically, it contains the information about over 2.8 million biomedical concepts. Each concept is labeled with a Concept Unique Identifier (CUI) and has a preferred name and a semantic type. Moreover, NLP tools for utilizing the information from UMLS have been developed. In particular, MetaMap (Aronson, 2001) can take a text string as the input, segment it into phrases, and then map each phrase to multiple UMLS CUIs with confidence scores. The confidence score is an indicator of the quality of the phrase-to-concept mapping by MetaMap. It is computed by four metrics: centrality, variation, coverage and cohesiveness (Aronson, 2001). These four measures try to evaluate the mapping from different angles, such as the involvement of the central part, the distance of the concept to the original phrase, and how well the concept matches the phrase. The maximum confidence in MetaMap is 1000. Figure 1 shows the MetaMap results for an example query “children with dental caries”. Two query aspects, i.e., “children” and “dental caries”, are identified. Each of them is mapped to multiple concepts, and each concept is associated with the confidence score as well as more detailed information about this concept. 
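For concreteness, here is a minimal sketch of how the MetaMap output for a query could be held in memory, with confidences normalized by the stated maximum of 1000. The class names, the particular confidence values, and the grouping of the CUIs from the example into the two aspects are illustrative assumptions, not MetaMap's actual API or output.

```python
from dataclasses import dataclass, field
from typing import List

MAX_METAMAP_CONFIDENCE = 1000.0   # stated maximum MetaMap confidence

@dataclass
class ConceptMapping:
    cui: str            # UMLS Concept Unique Identifier
    confidence: float   # raw MetaMap confidence score (0-1000)

    def normalized_confidence(self) -> float:
        """i(e) in the notation used later in this paper."""
        return self.confidence / MAX_METAMAP_CONFIDENCE

@dataclass
class QueryAspect:
    phrase: str
    concepts: List[ConceptMapping] = field(default_factory=list)

# Illustrative reconstruction of the Figure 1 example; the confidence values
# and the assignment of CUIs to the two aspects are assumptions.
query = [
    QueryAspect("children", [ConceptMapping("C0008059", 1000),
                             ConceptMapping("C0680063", 966)]),
    QueryAspect("dental caries", [ConceptMapping("C0011334", 1000),
                                  ConceptMapping("C0333519", 827),
                                  ConceptMapping("C0226984", 827)]),
]

# Bag-of-concepts representation of the whole query: all candidate CUIs.
bag_of_concepts = [c.cui for aspect in query for c in aspect.concepts]
```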
3.3 Concept-based Representation Traditional retrieval models are based on “bag of terms” representation. One limitation of this representation is that relevance scores are computed based on the matching of terms rather than the meanings. As a result, the system may fail to retrieve the relevant documents that do not contain any query terms. To overcome this limitation, concept-based representation has been proposed to bridge the vocabulary gap between documents and queries (Qi and Laquerre, 2012; Limsopatham et al., 2013b; Koopman et al., 2012). In particular, MetaMap is used to map terms from queries and documents (e.g., medical records) to the semantic concepts from biomedical knowledge bases such as UMLS. Within the concept-based representation, the query can then be represented as a bag of all the generated CUIs in the MetaMap results. For example, the query from Figure 1 can be represented as {C0008059, C0680063, C0011334, C0333519, C0226984}. Documents can be represented in a similar way. After converting both queries and documents to concept-based representations using MetaMap, previous work applied existing retrieval functions such as vector space models (Singhal et al., 1996) to rank the documents. Note that when referring to existing retrieval functions in the paper, they include traditional keyword matching based functions such as pivoted normalization (Singhal et al., 1996), Okapi (Robertson et al., 1996), Dirichlet prior (Zhai and Lafferty, 2001) and basic axiomatic functions (Fang and Zhai, 2005). 4 Weighting Strategies for Concept-based Representation 4.1 Motivation Although existing retrieval functions can be directly applied to concept-based representation, they may lead to non-optimal performance. This is mainly caused by the fact that MetaMap may generate more than one mapped concepts for an aspect, i.e., a semantic unit in the text. Ideally, an aspect will be mapped to only one concept, and different concepts would represent different semantic meanings. Under such a situ605 Figure 2: Exploratory data analysis (From left to right are choosing minimum, average and maximum IDF concepts as the representing concepts, respectively. The removed concepts are highlighted in the figures.). ation, traditional retrieval functions would likely work well and generate satisfying retrieval performance since the relations among concepts are independent which is consistent with the assumptions made in traditional IR (Manning et al., 2008). However, the mapping results generated by MetaMap are not perfect. Although MetaMap is able to rank all the candidate concepts with the confidence score and pick the most likely one, the accuracy is not very high. In particular, our preliminary results show that turning on the disambiguation functionality provided by MetaMap (i.e., returning only the most likely concept for each query) could lead to worse retrieval performance than using all the candidate mappings. Thus, we use the one-to-many mapping results generated by MetaMap, in which each aspect can be mapped to multiple concepts. Unfortunately, such one-to-many concept mappings could hinder the retrieval performance in the following two ways. • The multiple concepts generated from the same aspect are related, which is inconsistent with the independence assumption made in the existing retrieval functions (Manning et al., 2008). For example, as shown in Figure 1, “dental caries” is mapped to three concepts. 
It is clear that the concepts are related, but existing retrieval functions are unable to capture their relations and would compute the weight of each concept independently. • The one-to-many mapping results generated by MetaMap could arbitrarily inflate the weights of some query aspects. For example, as shown in Figure 1, query aspect “children” is mapped to 2 concepts while “dental caries” is mapped to 3 concepts. In the existing retrieval functions, term occurrences are important relevance signals. However, when converting the text to concepts representation using MetaMap, the occurrences of the concepts are determined by not only the original term occurrences, a good indicator of relevance, but also the number of mapped concepts, which is determined by MetaMap and has nothing to do with the relevance status. As a result, the occurrences of concepts might not be a very accurate indicator of importance of the corresponding query aspect. To address the limitations caused by the inaccurate mapping results, we propose to apply axiomatic approaches (Fang and Zhai, 2005) to regularize the weighting strategies for concept-based representation methods. In particular, we first formalize retrieval constraints that any reasonable concept-based representation methods should satisfy and then discuss how to regularize the existing weighting strategies to satisfy the constraints and improve the retrieval performance. We first explain the notations used in this section. Q and D denote a query and a document with the concept-based representation. S(Q, D) is the relevance score of D with respect to Q. ei denotes a concept, and A(e) denotes the query aspect associated with e, i.e., a set of concepts that are mapped to the same phrases as e by using MetaMap. i(e) is the normalized confidence score of the mapping for concept e generated by MetaMap. c(e, D) denotes the occurrences of concept e in document D, df(e) denotes the number of documents containing e. |D| is the document length of D. Impc(e) is the importance of the concept such as the concept IDF value, and ImpA(A) is the importance of the aspect. 606 4.2 Unified concept weighting regularization We now discuss how to address the first challenge, i.e,. how to regularize the weighting strategy so that we can take into consideration the fact that concepts associated with the same query aspect are not independent. We call a concept is a variant of another one if both of them are associated with the same aspect. Intuitively, given a query with two aspects, a document covering both aspects should be ranked higher than those covering only one aspect. We can formalize the intuition in the concept-based representation as the following constraint. Unified Constraint: Let query be Q = {e1, e2, e3}, and we know that e2 is a variant of e3. Assume we have two documents D1 and D2 with the same document length, i.e., |D1| = |D2|. If we know that c(e1, D1) = c(e3, D2) > 0, c(e1, D2) = c(e3, D1) = 0 and c(e2, D1) = c(e2, D2) > 0, then S(Q, D1) > S(Q, D2). It is clear that existing retrieval functions would violate this constraint since they ignore the relations among concepts. One simple strategy to fix this problem is to merge all the concept variants as a single concept and select one representative concept to replace all occurrences of other variants in both queries and documents. By merging the concepts together, we are aiming to purify the concepts and make the similar concepts centralized so that the assumption that all the concepts are independent would hold. 
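Equation (1) below formalizes this adjustment; as a concrete illustration of the merging step itself, here is a small sketch that groups variants by preferred name and credits all of their occurrences to a single representative. The dictionary-based interface is an assumption for illustration, and choosing the maximum-IDF variant as the representative anticipates the option ultimately adopted in the analysis that follows.

```python
from collections import defaultdict

def merge_variants(concepts, preferred_name, idf):
    """Map every concept to the representative of its variant set EC(e):
    concepts sharing a preferred name are grouped, and the variant with the
    maximum IDF is used as the representative."""
    groups = defaultdict(list)
    for cui in concepts:
        groups[preferred_name[cui]].append(cui)
    representative = {}
    for variants in groups.values():
        rep = max(variants, key=lambda cui: idf[cui])
        for cui in variants:
            representative[cui] = rep
    return representative

def modified_counts(doc_concepts, representative):
    """c_mod(e, D): occurrences of all variants are credited to their
    representative concept; non-representative variants end up with count 0."""
    counts = defaultdict(int)
    for cui in doc_concepts:
        counts[representative.get(cui, cui)] += 1
    return counts
```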
Formally, the adjusted occurrence of a concept e in a document D is defined as

cmod(e, D) = Σ_{e′∈EC(e)} c(e′, D)  if e = Rep(EC(e));  0 otherwise, (1)

where c(e, D) is the original occurrence count of concept e in document D, EC(e) denotes the set of all the variants of e including e itself (i.e., all the concepts with the same preferred name as e), and Rep(EC(e)) denotes the representative concept of EC(e). It is easy to show that, with this change, existing retrieval functions satisfy the above constraint, since the constraint implies the TFC2 constraint defined in a previous study (Fang et al., 2004). The remaining question is how to select the representative concept from all the variants. There are three options: select the concept with the maximum, average, or minimum IDF. We conduct exploratory data analysis on these three options. In particular, for each option, we generate a plot indicating the correlation between the IDF value of a concept and the relevance probability of the concept (i.e., the probability that a document containing the concept is relevant). Note that both original and replaced IDF values are shown in the plot for each option. Figure 2 shows the results. It is clear that the right plot (i.e., selecting the concept with the maximum IDF as the representative concept) is the best choice, since the changes make the points less scattered. In fact, this is also confirmed by the experimental results reported in Table 5. Thus, we use the concept with the maximum IDF value as the representative concept of all the variants.

4.3 Balanced concept weighting regularization
We now discuss how to address the second challenge, i.e., how to regularize the weighting strategy to deal with the arbitrarily inflated statistics caused by the one-to-many mappings. The arbitrary inflation could affect the importance of the query aspects. For example, as shown in Figure 1, one aspect is mapped to two concepts while the other is mapped to three. Moreover, it could also affect the accuracy of the concept IDF values. Consider "colonoscopies" and "adult": it is clear that the first term is more important than the second, which is consistent with their term IDF values, i.e., 7.52 and 2.92, respectively. However, with the concept-based representation, the IDF value of the concept "colonoscopies" (C0009378) is 2.72, which is even smaller than that of the concept "adult" (C1706450), i.e., 2.92. To fix the negative impact on query aspects, we can leverage the findings of a previous study (Zheng and Fang, 2010) and regularize the weighting strategy based on the length of query aspects so as to favor documents covering more query aspects. Since each concept mapping is associated with a confidence score, we incorporate these scores into the regularization function as follows:

f(e, Q) = (1 − α) + α · ( Σ_{e′∈Q} i(e′) / Σ_{e″∈A(e)} i(e″) ), (2)

where i(e) is the normalized confidence score of concept e generated by MetaMap, and α is a parameter between 0 and 1 that controls the effect of the regularization. When α is set to 0, there is no regularization. This regularization function aims to
This regularization can be formalized as the following constraint. Balanced Constraint: Let Q be a query with two concepts and the concepts are associated with different aspects, i.e., Q = {e1, e2}, and A(e1) ̸= A(e2). Assume D1 and D2 are two documents with the same length, i.e., |D1| = |D2|, and they cover different concepts with the same occurrences, i.e., c(e1, D1) = c(e2, D2) > 0 and c(e2, D1) = c(e1, D2) = 0. If we know Impc(e1) = Impc(e2) and ImpA(A(e1)) < ImpA(A(e2)), then we have S(Q, D1) < S(Q, D2). This constraint requires that the relevance score of a document should be affected by not only the importance of the concepts but also the importance of the associated query aspect. In a way, the constraint aims to counteract the arbitrary statistics inflation caused by MetaMap results and balance the weight among concepts based on the importance of the associated query aspects. And it is not difficult to show that existing retrieval functions violate this constraint. Now the question is how to revise the retrieval functions to make them satisfy this constraint. We propose to incorporate the importance of query aspect into the previous regularization function in Equation (2) as follows: f(e, Q) = (1−α)+α·  P e′∈Q i(e′) P e′′∈A(e) i(e′′)  ·ImpA(A(e)). (3) Note that ImpA(A(e)) is the importance of a query aspect and can be estimated based on the terms from the query aspect. In this paper, we use the maximum term IDF value from the aspect to estimate the importance, which performs better than using minimum and average IDF values as shown in the experiments (i.e., Table 6). We plan to study other options in the future work. 4.4 Discussions Both proposed regularization methods can be combined with any existing retrieval functions. In this paper, we focus on one of the state of the art weighting strategies, i.e., F2-EXP function derived from axiomatic retrieval model (Fang and Zhai, 2005), and explain how to incorporate the regularization methods into the function. The original F2-EXP retrieval function is shown as follows: S(Q, D) = X e∈Q∩D c(e, Q) · ( N df(e) )0.35 · c(e, D) c(e, D) + b + b×|D| avdl (4) where b is a parameter control the weight of the document length normalization. With the unified concept weighting regularization, the revised function based on F2-EXP function, i.e., Unified, is shown as follows: S(Q, D)= X e∈Q∩D cmod(e, Q)·( N df(t) )0.35· cmod(e, D) cmod(e, D)+b+ b×|D| avdl (5) where cmod(e, D) and cmod(e, Q) denote the modified occurrences as shown in Equation (1). It can be shown that this function satisfies the unified constraint but violates the balanced constraint. Following the similar strategy used in the previous study (Zheng and Fang, 2010), we can further incorporate the regularization function proposed in Equation (3) to the above function to make it satisfy the balanced constraint as follows: S(Q, D) = X e∈Q∩D cmod(e, Q)·( N df(t) )0.35·f(e, Q) (6) · cmod(e, D) cmod(e, D)+b+ b×|D| avdl where f(e, Q) is the newly proposed regularization function as shown in Equation (3). This method is denoted as Balanced, and can be shown that it satisfies both constraints. Table 1: Statistics of collections. # of unique tokens AvgDL AvgQL11 AvgQL12 Term 263,356 2,659 10.23 8.82 Concept 58,192 2,673 8.79 7.81 5 Experiments 5.1 Experiment Setup We conduct experiments using two data sets from the TREC Medical Records track 2011 and 2012. 
5 Experiments
5.1 Experiment Setup
We conduct experiments using two data sets from the TREC Medical Records track 2011 and 2012. The data sets are denoted as Med11 and Med12. Both data sets use the same document collection of 100,866 medical records, each of which is associated with a unique patient visit to the hospital or emergency department. Since the task is to retrieve relevant visits, we merged all the records from a visit to form a single document for the visit, which leads to 17,198 documents in the collection. There are 34 queries in Med11 and 47 in Med12. These queries were developed by domain experts based on the "inclusion criteria" of a clinical study (Voorhees and Tong, 2011; Voorhees and Hersh, 2012). After applying MetaMap to both documents and queries, we construct a concept-based collection. Since documents are often much longer, we first segment them into sentences, obtain the mapping results for each sentence, and then merge them together to generate the concept-based representation of the documents.

Table 1 compares the statistics of the term-based and the concept-based collections, including the number of unique tokens in the collection (i.e., the number of terms for the term-based representation and the number of concepts for the concept-based representation), the average number of tokens in the documents (AvgDL), and the average number of tokens in the queries of the two collections (AvgQL11 and AvgQL12).

Table 1: Statistics of collections.
            # of unique tokens   AvgDL   AvgQL11   AvgQL12
Term        263,356              2,659   10.23     8.82
Concept     58,192               2,673   8.79      7.81

It is interesting to see that the number of unique tokens is much smaller with concept-based indexing. This is expected, since terms are semantically related and a group of related terms is mapped to one semantic concept. Moreover, we observe that the document length and query length are similar for both collections. This is caused by the fact that concepts are related and MetaMap maps an aspect to multiple related concepts.

Table 2 summarizes the methods that we compare in the experiments.

Table 2: Description of methods.
Name           Representation                                      Ranking strategy
Term-BL        Term                                                F2-EXP (i.e., Equation (4))
Concept-BL     Concept (i.e., Section 3.3)                         F2-EXP (i.e., Equation (4))
TSConcept-BL   Task-specific concept (Limsopatham et al., 2013d)   F2-EXP (i.e., Equation (4))
Unified        Concept (i.e., Section 4.2)                         F2-EXP + Unified (i.e., Equation (5))
Balanced       Concept (i.e., Section 4.3)                         F2-EXP + Balanced (i.e., Equation (6))

Following the evaluation methodology used in the Medical Records track, we use MAP@1000 as the primary measure for Med11 and also report bpref. For Med12, we take infNDCG@100 as the primary measure and also report infAP@100. Different measures were chosen for the two sets mainly because different pooling strategies were used to create the judgment pools (Voorhees and Hersh, 2012).

5.2 Performance Comparison
Table 3 shows the performance under optimized parameter settings for all the methods over both data sets. The performance is optimized in terms of MAP on Med11 and infNDCG on Med12, respectively. α and b are tuned from 0 to 1 with step 0.1. The superscripts T, C and TS indicate that the improvement over Term-BL, Concept-BL and TSConcept-BL, respectively, is statistically significant at the 0.05 level according to the Wilcoxon signed-rank test.

Table 3: Performance under optimized parameter settings.
               Med11                  Med12
               MAP        bpref       infNDCG      infAP
Term-BL        0.3474     0.4727      0.4695       0.2106
Concept-BL     0.3967     0.5476      0.5243       0.2497
TSConcept-BL   0.3964     0.5329      0.5283       0.2694
Unified        0.4235^T   0.5443^T    0.5416^T     0.2586^T
Balanced       0.4561^T,C,TS  0.5697^T,C,TS  0.5767^T,C,TS  0.2859^T,C,TS

Results show that the Balanced method significantly improves the retrieval performance on both collections.
Unified method outperforms the baseline methods in terms of the primary measure on both collections, although it fails to improve the infAP on Med12 for one baseline method. It is not surprising to see that Balanced method is more effective than Unified since the former satisfies both of the proposed retrieval constraints while the lat609 Table 4: Testing Performance Trained on Med12 Med11 Tested on Med11 Med12 Measures MAP bpref infNDCG infAP Term-BL 0.3451 0.4682 0.4640 0.2040 Concept-BL 0.3895 0.5394 0.5194 0.2441 TSConcept-BL 0.3901 0.5286 0.5208 0.2662 Unified 0.4176T,C 0.5391T 0.5346T 0.2514T Balanced 0.4497T,C,TS 0.5627T,C,TS 0.5736T,C,TS 0.2811T,C,TS ter satisfies only one. Finally, we noticed that the performance difference between TSConceptBL and Concept-BL is not as significant as the ones reported in the previous study (Limsopatham et al., 2013d), which is probably caused by the difference of problem set up (i.e., record-level vs. visit-level as discussed in Section 3.1). We also conduct experiments to train parameters on one collection and compare the testing performance on the other collection. The results are summarized in Table 4. Clearly, Balanced is still the most effective regularization method. The testing performance is very close to the optimal performance, which indicates that the proposed methods are robust with respect to the parameter setting. Moreover, we would like to point out that the testing performance of Balanced is comparable to the top ranked runs from the TREC Medical records track. For example, the performance of the best automatic system in Med11 (e.g., CengageM11R3) is 0.552 in terms of bpref, while the performance of the best automatic system in Med12 (e.g., udelSUM) is 0.578 in terms of infNDCG. Note that the top system of Med12 used multiple external resources such as Wikipedia and Web, while we did not use such resources. Moreover, our performance might be further improved if we apply the result filtering methods used by many TREC participants (Leveling et al., 2012). Table 5: Selecting representative concepts MAP bpref Unified (i.e., Unified-max) 0.4235 0.5443 Unified-min 0.3894 0.5202 Unified-avg 0.4164 0.5303 5.3 More Analysis In the Unified method, we chose the concept with the maximum IDF as the representative concept Table 6: Estimating query aspect importance MAP bpref Balanced (i.e., Balanced-max) 0.4561 0.5697 Balanced-min 0.4216 0.5484 Balanced-avg 0.4397 0.5581 Table 7: Regularization components in Balanced MAP bpref Balanced 0.4561 0.5697 Confidence only 0.4294 0.5507 Importance only 0.4373 0.5598 among all the variants. We now conduct experiments on Med11 to compare its performance with those of using average IDF and minimum IDF ones as the representative concept. The results are shown in Table 5. It is clear that using maximum IDF is the best choice, which is consistent with our observation from the data exploratory analysis shown in Figure 2. In the Balanced method, we used the maximum IDF value to estimate the query importance. We also conduct experiments to compare its performance with those using the minimum and average IDF values. Table 6 summarizes the results, and shows that using the maximum IDF value performs better than the other choices. As shown in Equation (3), the Balanced method regularizes the weights through two components: (1) normalized confidence score of each aspect, i.e., P e′∈Q i(e′) P e′′∈A(e) i(e′′); and (2) the importance of the query aspect, i.e., ImpA(A(e)). 
To examine the effectiveness of each component, we conduct experiments using the modified Balanced method with only one of the components. The results are shown in Table 7. It is clear that both components are essential to improve the retrieval performance. Finally, we report the performance improvement of the proposed methods over the ConceptBL for each query in Figure 3. Clearly, both of the 610 -0.2 -0.1 0 0.1 0.2 0.3 0.4 0.5 0.6 0.7 0.8 101 106 111 116 121 126 131 136 141 146 151 156 161 165 171 176 181 185 Performance Difference Query ID Improvement(Unified) Improvement(Balanced) Figure 3: Improvement of proposed methods (Compared with the Concept-BL method). proposed methods can improve the effectiveness of most queries, and the Balanced method is more robust than the Unified method. 6 Conclusions and Future Work Medical record retrieval is an important domainspecific IR problem. Concept-based representation is an effective approach to dealing with ambiguity terminology in medical domain. However, the results of the NLP tools used to generate the concept-based representation are often not perfect. In this paper, we present a general methodology that can use axiomatic approaches as guidance to regularize the concept weighting strategies to address the limitations caused by the inaccurate concept mapping and improve the retrieval performance. In particular, we proposed two weighting regularization methods based on the relations among concepts. Experimental results show that the proposed methods can significantly outperform existing retrieval functions. There are many interesting directions for our future work. First, we plan to study how to automatically predict whether to use concept-based indexing based on the quality of MetaMap results, and explore whether the proposed methods are applicable for other entity linking methods. Second, we will study how to leverage other information from knowledge bases to further improve the performance. Third, more experiments could be conducted to examine the effectiveness of the proposed methods when using other ranking strategies. Finally, it would be interesting to study how to follow the proposed methodology to study other domain-specific IR problems. References Gianni Amati and Cornelis Joost Van Rijsbergen. 2002. Probabilistic models of information retrieval based on measuring the divergence from randomness. ACM TOIS. Alan R. Aronson and Franc¸ois-Michel Lang. 2010. An overview of metamap: historical perspective and recent advances. JAMIA, 17(3):229–236. Alan R. Aronson. 2001. Effective mapping of biomedical text to the UMLS Metathesaurus: the MetaMap program. In Proceedings of AMIA Symposium. Practice Management Information Corporation. 1999. ICD-9-CM: International Classification of Diseases, 9th Revision, Clinical Modification, 5th Edition. Practice Management Information Corporation. Dina Demner-Fushman, Swapna Abhyankar, Antonio Jimeno-Yepes, Russell Loane, Francois Lang, James G. Mork, Nicholas Ide, and Alan R. Aronson. 2012. NLM at TREC 2012 Medical Records Track. In Proceedings of TREC 2012. Hui Fang and ChengXiang Zhai. 2005. An exploration of axiomatic approaches to information retrieval. In Proceedings of SIGIR’05. Hui Fang, Tao Tao, and ChengXiang Zhai. 2004. A formal study of information retrieval heuristics. In Proceedings of SIGIR’04. Charles P. Friedman, Adam K. Wong, and David Blumenthal. 2010. Achieving a nationwide learning health system. Science Translational Medicine. Beval Koopman, Michael Lawley, and Peter Bruza. 2011. 
AEHRC & QUT at TREC 2011 Medical Track : a concept-based information retrieval approach. In Proceedings of TREC’11. Bevan Koopman, Guido Zuccon, Anthony Nguyen, Deanne Vickers, Luke Butt, and Peter D. Bruza. 611 2012. Exploiting SNOMED CT Concepts & Relationships for Clinical Information Retrieval: Australian e-Health Research Centre and Queensland University of Technology at the TREC 2012 Medical Track. In Proceedings of TREC’12. Johannes Leveling, Lorraine Goeuriot, Liadh Kelly, and Gareth J. F. Jones. 2012. DCU@TRECMed 2012: Using adhoc Baselines for Domain-Specific Retrieval. In Proceedings of TREC 2012. Nut Limsopatham, Craig Macdonald, and Iadh Ounis. 2013a. Inferring conceptual relationships to improve medical records search. In Proceedings of OAIR’13. Nut Limsopatham, Craig Macdonald, and Iadh Ounis. 2013b. Learning to combine representations for medical records search. In Proceedings of SIGIR’13. Nut Limsopatham, Craig Macdonald, and Iadh Ounis. 2013c. Learning to selectively rank patients’ medical history. In Proceedings of CIKM’13. Nut Limsopatham, Craig Macdonald, and Iadh Ounis. 2013d. A task-specific query and document representation for medical records search. In Proceedings of ECIR’13. Jimmy Lin and Dina Demner-Fushman. 2006. The role of knowledge in conceptual retrieval: a study in the domain of clinical medicine. In Proceedings of the 29th annual international ACM SIGIR conference on Research and development in information retrieval, SIGIR ’06, pages 99–106, New York, NY, USA. ACM. Carolyn E Lipscomb. 2000. Medical Subject Headings (MeSH). The Medical Library Association. Christopher D. Manning, P. Raghavan, and H. Schutze. 2008. Introduction to Information Retrieval. Cambridge University Press. Bridget T. McInnes, Ted Pedersen, and Serguei V. S. Pakhomov. 2009. UMLS-Interface and UMLSSimilarity : Open Source Software for Measuring Paths and Semantic Similarity. In Proceedings of AMIA Symposium. Yanjun Qi and Pierre-Francois Laquerre. 2012. Retrieving Medical Records: NEC Labs America at TREC 2012 Medical Record Track. In Proceedings of TREC 2012. S.E. Robertson, S. Walker, S. Jones, M.M. HancockBeaulieu, and M. Gatford. 1996. Okapi at TREC-3. pages 109–126. Charles Safran, Meryl Bloomrosen, W. Edward Hammond, Steven Labkoff, Suzanne Markel-Fox, Paul C. Tang, and Don E. Detmer. 2007. White paper: Toward a national framework for the secondary use of health data: An american medical informatics association white paper. JAMIA, 14(1):1–9. Amit Singhal, Chris Buckley, and Mandar Mitra. 1996. Pivoted document length normalization. In Proceedings of SIGIR’96. Ellen M. Voorhees and William Hersh. 2012. Overview of the TREC 2012 Medical Records Track. In Proceedings of TREC 2012. Ellen M. Voorhees and Richard M. Tong. 2011. Overview of the TREC 2011 Medical Records Track. In Proceedings of TREC 2011. Xin Yan, Raymond Y.K. Lau, Dawei Song, Xue Li, and Jian Ma. 2011. Toward a semantic granularity model for domain-specific information retrieval. ACM TOIS. Chengxiang Zhai and John Lafferty. 2001. A study of smoothing methods for language models applied to Ad Hoc information retrieval. In Proceedings of SIGIR’01. Wei Zheng and Hui Fang. 2010. Query aspect based term weighting regularization in information retrieval. In Proceedings of ECIR’10. Wei Zhou, Clement Yu, Neil Smalheiser, Vetle Torvik, and Jie Hong. 2007. Knowledge-intensive conceptual retrieval and passage extraction of biomedical literature. In Proceedings of SIGIR’07. Dongqing Zhu and Ben Carterette. 2012. 
Combining multi-level evidence for medical record retrieval. In Proceedings of SHB’12.
Proceedings of the 52nd Annual Meeting of the Association for Computational Linguistics, pages 613–623, Baltimore, Maryland, USA, June 23-25 2014. c⃝2014 Association for Computational Linguistics Learning to Predict Distributions of Words Across Domains Danushka Bollegala Department of Computer Science University of Liverpool Liverpool, L69 3BX, UK danushka.bollegala@ liverpool.ac.uk David Weir Department of Informatics University of Sussex Falmer, Brighton, BN1 9QJ, UK d.j.weir@ sussex.ac.uk John Carroll Department of Informatics University of Sussex Falmer, Brighton, BN1 9QJ, UK j.a.carroll@ sussex.ac.uk Abstract Although the distributional hypothesis has been applied successfully in many natural language processing tasks, systems using distributional information have been limited to a single domain because the distribution of a word can vary between domains as the word’s predominant meaning changes. However, if it were possible to predict how the distribution of a word changes from one domain to another, the predictions could be used to adapt a system trained in one domain to work in another. We propose an unsupervised method to predict the distribution of a word in one domain, given its distribution in another domain. We evaluate our method on two tasks: cross-domain partof-speech tagging and cross-domain sentiment classification. In both tasks, our method significantly outperforms competitive baselines and returns results that are statistically comparable to current stateof-the-art methods, while requiring no task-specific customisations. 1 Introduction The Distributional Hypothesis, summarised by the memorable line of Firth (1957) – You shall know a word by the company it keeps – has inspired a diverse range of research in natural language processing. In such work, a word is represented by the distribution of other words that co-occur with it. Distributional representations of words have been successfully used in many language processing tasks such as entity set expansion (Pantel et al., 2009), part-of-speech (POS) tagging and chunking (Huang and Yates, 2009), ontology learning (Curran, 2005), computing semantic textual similarity (Besanc¸on et al., 1999), and lexical inference (Kotlerman et al., 2012). However, the distribution of a word often varies from one domain1 to another. For example, in the domain of portable computer reviews the word lightweight is often associated with positive sentiment bearing words such as sleek or compact, whereas in the movie review domain the same word is often associated with negative sentimentbearing words such as superficial or formulaic. Consequently, the distributional representations of the word lightweight will differ considerably between the two domains. In this paper, given the distribution wS of a word w in the source domain S, we propose an unsupervised method for predicting its distribution wT in a different target domain T . The ability to predict how the distribution of a word varies from one domain to another is vital for numerous adaptation tasks. For example, unsupervised cross-domain sentiment classification (Blitzer et al., 2007; Aue and Gamon, 2005) involves using sentiment-labeled user reviews from the source domain, and unlabeled reviews from both the source and the target domains to learn a sentiment classifier for the target domain. 
Domain adaptation (DA) of sentiment classification becomes extremely challenging when the distributions of words in the source and the target domains are very different, because the features learnt from the source domain labeled reviews might not appear in the target domain reviews that must be classified. By predicting the distribution of a word across different domains, we can find source domain features that are similar to the features in target domain reviews, thereby reducing the mismatch of features between the two domains. We propose a two-step unsupervised approach to predict the distribution of a word across domains. First, we create two lower dimensional la1In this paper, we use the term domain to refer to a collection of documents about a particular topic, for example reviews of a particular kind of product. 613 tent feature spaces separately for the source and the target domains using Singular Value Decomposition (SVD). Second, we learn a mapping from the source domain latent feature space to the target domain latent feature space using Partial Least Square Regression (PLSR). The SVD smoothing in the first step both reduces the data sparseness in distributional representations of individual words, as well as the dimensionality of the feature space, thereby enabling us to efficiently and accurately learn a prediction model using PLSR in the second step. Our proposed cross-domain word distribution prediction method is unsupervised in the sense that it does not require any labeled data in either of the two steps. Using two popular multi-domain datasets, we evaluate the proposed method in two prediction tasks: (a) predicting the POS of a word in a target domain, and (b) predicting the sentiment of a review in a target domain. Without requiring any task specific customisations, systems based on our distribution prediction method significantly outperform competitive baselines in both tasks. Because our proposed distribution prediction method is unsupervised and task independent, it is potentially useful for a wide range of DA tasks such entity extraction (Guo et al., 2009) or dependency parsing (McClosky et al., 2010). Our contributions are summarised as follows: • Given the distribution wS of a word w in a source domain S, we propose a method for learning its distribution wT in a target domain T . • Using the learnt distribution prediction model, we propose a method to learn a crossdomain POS tagger. • Using the learnt distribution prediction model, we propose a method to learn a crossdomain sentiment classifier. To our knowledge, ours is the first successful attempt to learn a model that predicts the distribution of a word across different domains. 2 Related Work Learning semantic representations for words using documents from a single domain has received much attention lately (Vincent et al., 2010; Socher et al., 2013; Baroni and Lenci, 2010). As we have already discussed, the semantics of a word varies across different domains, and such variations are not captured by models that only learn a single semantic representation for a word using documents from a single domain. The POS of a word is influenced both by its context (contextual bias), and the domain of the document in which it appears (lexical bias). For example, the word signal is predominately used as a noun in MEDLINE, whereas it appears predominantly as an adjective in the Wall Street Journal (WSJ) (Blitzer et al., 2006). Consequently, a tagger trained on WSJ would incorrectly tag signal in MEDLINE. Blitzer et al. 
(2006) append the source domain labeled data with predicted pivots (i.e. words that appear in both the source and target domains) to adapt a POS tagger to a target domain. Choi and Palmer (2012) propose a cross-domain POS tagging method by training two separate models: a generalised model and a domain-specific model. At tagging time, a sentence is tagged by the model that is most similar to that sentence. Huang and Yates (2009) train a Conditional Random Field (CRF) tagger with features retrieved from a smoothing model trained using both source and target domain unlabeled data. Adding latent states to the smoothing model further improves the POS tagging accuracy (Huang and Yates, 2012). Schnabel and Sch¨utze (2013) propose a training set filtering method where they eliminate shorter words from the training data based on the intuition that longer words are more likely to be examples of productive linguistic processes than shorter words. The sentiment of a word can vary from one domain to another. In Structural Correspondence Learning (SCL) (Blitzer et al., 2006; Blitzer et al., 2007), a set of pivots are chosen using pointwise mutual information. Linear predictors are then learnt to predict the occurrence of those pivots, and SVD is used to construct a lower dimensional representation in which a binary classifier is trained. Spectral Feature Alignment (SFA) (Pan et al., 2010) also uses pivots to compute an alignment between domain specific and domain independent features. Spectral clustering is performed on a bipartite graph representing domain specific and domain independent features to find a lowerdimensional projection between the two sets of features. The cross-domain sentiment-sensitive thesaurus (SST) (Bollegala et al., 2011) groups together words that express similar sentiments in 614 different domains. The created thesaurus is used to expand feature vectors during train and test stages in a binary classifier. However, unlike our method, SCL, SFA, or SST do not learn a prediction model between word distributions across domains. Prior knowledge of the sentiment of words, such as sentiment lexicons, has been incorporated into cross-domain sentiment classification. He et al. (2011) propose a joint sentiment-topic model that imposes a sentiment-prior depending on the occurrence of a word in a sentiment lexicon. Ponomareva and Thelwall (2012) represent source and target domain reviews as nodes in a graph and apply a label propagation algorithm to predict the sentiment labels for target domain reviews from the sentiment labels in source domain reviews. A sentiment lexicon is used to create features for a document. Although incorporation of prior sentiment knowledge is a promising technique to improve accuracy in cross-domain sentiment classification, it is complementary to our task of distribution prediction across domains. The unsupervised DA setting that we consider does not assume the availability of labeled data for the target domain. However, if a small amount of labeled data is available for the target domain, it can be used to further improve the performance of DA tasks (Xiao et al., 2013; Daum´e III, 2007). 3 Distribution Prediction 3.1 In-domain Feature Vector Construction Before we tackle the problem of learning a model to predict the distribution of a word across domains, we must first compute the distribution of a word from a single domain. For this purpose, we represent a word w using unigrams and bigrams that co-occur with w in a sentence as follows. 
Given a document H, such as a user-review of a product, we split H into sentences, and lemmatize each word in a sentence using the RASP system (Briscoe et al., 2006). Using a standard stop word list, we filter out frequent non-content unigrams and select the remainder as unigram features to represent a sentence. Next, we generate bigrams of word lemmas and remove any bigrams that consists only of stop words. Bigram features capture negations more accurately than unigrams, and have been found to be useful for sentiment classification tasks. Table 1 shows the unigram and bigram features we extract for a sentence using this procedure. Using data from a single dosentence This is an interesting and well researched book unigrams this, is, an, interesting, and, well, researched, (surface) book unigrams this, be, an, interest, and, well, research, book (lemma) unigrams interest, well, research, book (features) bigrams this+be, be+an, an+interest, interest+and, (lemma) and+well, well+research, research+book bigrams an+interest, interest+and, and+well, (features) well+research, research+book Table 1: Extracting unigram and bigram features. main, we construct a feature co-occurrence matrix A in which columns correspond to unigram features and rows correspond to either unigram or bigram features. The value of the element aij in the co-occurrence matrix A is set to the number of sentences in which the i-th and j-th features cooccur. Typically, the number of unique bigrams is much larger than that of unigrams. Moreover, cooccurrences of bigrams are rare compared to cooccurrences of unigrams, and co-occurrences involving a unigram and a bigram. Consequently, in matrix A, we consider co-occurrences only between unigrams vs. unigrams, and bigrams vs. unigrams. We consider each row in A as representing the distribution of a feature (i.e. unigrams or bigrams) in a particular domain over the unigram features extracted from that domain (represented by the columns of A). We apply Positive Pointwise Mutual Information (PPMI) to the cooccurrence matrix A. This is a variation of the Pointwise Mutual Information (PMI) (Church and Hanks, 1990), in which all PMI values that are less than zero are replaced with zero (Lin, 1998; Bullinaria and Levy, 2007). Let F be the matrix that results when PPMI is applied to A. Matrix F has the same number of rows, nr, and columns, nc, as the raw co-occurrence matrix A. Note that in addition to the above-mentioned representation, there are many other ways to represent the distribution of a word in a particular domain (Turney and Pantel, 2010). For example, one can limit the definition of co-occurrence to words that are linked by some dependency relation (Pado and Lapata, 2007), or extend the window of co-occurrence to the entire document (Baroni and Lenci, 2010). Since the method we propose in Section 3.2 to predict the distribution of a word across domains does not depend on the particular 615 feature representation method, any of these alternative methods could be used. To reduce the dimensionality of the feature space, and create dense representations for words, we perform SVD on F. We use the left singular vectors corresponding to the k largest singular values to compute a rank k approximation ˆF, of F. We perform truncated SVD using SVDLIBC2. Each row in ˆF is considered as representing a word in a lower k (≪nc) dimensional feature space corresponding to a particular domain. 
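As a rough illustration of this pipeline, the sketch below builds a PPMI-weighted matrix from raw co-occurrence counts and reduces it with a rank-k truncated SVD. It is a minimal dense-NumPy sketch that uses scikit-learn's TruncatedSVD as a stand-in for SVDLIBC; in practice the co-occurrence matrix would be kept sparse, and the function names are ours.

```python
import numpy as np
from sklearn.decomposition import TruncatedSVD

def ppmi(A):
    """Positive PMI weighting of a raw co-occurrence count matrix A
    (rows: unigram/bigram features, columns: unigram features)."""
    total = A.sum()
    row_sums = A.sum(axis=1, keepdims=True)
    col_sums = A.sum(axis=0, keepdims=True)
    with np.errstate(divide="ignore", invalid="ignore"):
        pmi = np.log((A * total) / (row_sums * col_sums))
    pmi[~np.isfinite(pmi)] = 0.0      # zero counts give -inf/NaN; set to 0
    return np.maximum(pmi, 0.0)       # clip negative PMI values to zero

def domain_space(A, k=1000):
    """PPMI weighting followed by truncated SVD; each row of the result is
    the k-dimensional in-domain representation of one feature."""
    F = ppmi(A.astype(float))
    return TruncatedSVD(n_components=k).fit_transform(F)

# toy usage: a tiny 6-feature x 4-context count matrix
A = np.array([[3, 0, 1, 0],
              [0, 2, 0, 1],
              [1, 1, 4, 0],
              [0, 0, 1, 2],
              [2, 1, 0, 0],
              [0, 3, 1, 1]])
F_hat = domain_space(A, k=3)
print(F_hat.shape)   # (6, 3)
```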
Distribution prediction in this lower dimensional feature space is preferrable to prediction over the original feature space because there are reductions in overfitting, feature sparseness, and the learning time. We created two matrices, ˆFS and ˆFT from the source and target domains, respectively, using the above mentioned procedure. 3.2 Cross-Domain Feature Vector Prediction We propose a method to learn a model that can predict the distribution wT of a word w in the target domain T , given its distribution wS in the source domain S. We denote the set of features that occur in both domains by W = {w(1), . . . , w(n)}. In the literature, such features are often referred to as pivots, and they have been shown to be useful for DA, allowing the weights learnt to be transferred from one domain to another. Various criteria have been proposed for selecting a small set of pivots for DA, such as the mutual information of a word with the two domains (Blitzer et al., 2007). However, we do not impose any further restrictions on the set of pivots W other than that they occur in both domains. For each word w(i) ∈W, we denote the corresponding rows in ˆFS and ˆFT by column vectors w(i) S and w(i) T . Note that the dimensionality of w(i) S and w(i) T need not be equal, and we may select different numbers of singular vectors to approximate ˆFS and ˆFT . We model distribution prediction as a multivariate regression problem where, given a set {(w(i) S , w(i) T )}n i=1 consisting of pairs of feature vectors selected from each domain for the pivots in W, we learn a mapping from the inputs (w(i) S ) to the outputs (w(i) T ). We use Partial Least Squares Regression (PLSR) (Wold, 1985) to learn a regression model using pairs of vectors. PLSR has been applied in 2http://tedlab.mit.edu/˜dr/SVDLIBC/ Algorithm 1 Learning a prediction model. Input: X, Y, L. Output: Prediction matrix M. 1: Randomly select γl from columns in Yl. 2: vl = Xl ⊤γl/ Xl ⊤γl 3: λl = Xlvl 4: ql = Yl ⊤λl/ Yl ⊤λl 5: γl = Ylql 6: If γl is unchanged go to Line 7; otherwise go to Line 2 7: cl = λl ⊤γl/ λl ⊤γl 8: pl = Xl ⊤λl/λl ⊤λl 9: Xl+1 = Xl −λlpl ⊤and Yl+1 = Yl −clλlql ⊤. 10: Stop if l = L; otherwise l = l + 1 and return to Line 1. 11: Let C = diag(c1, . . . , cL), and V = [v1 . . . vL] 12: M = V(P⊤V)−1CQ⊤ 13: return M Chemometrics (Geladi and Kowalski, 1986), producing stable prediction models even when the number of samples is considerably smaller than the dimensionality of the feature space. In particular, PLSR fits a smaller number of latent variables (10 −100 in practice) such that the correlation between the feature vectors for pivots in the two domains are maximised in this latent space. Let X and Y denote matrices formed by arranging respectively the vectors w(i) S s and w(i) T in rows. PLSR decomposes X and Y into a series of products between rank 1 matrices as follows: X ≈ L X l=1 λlpl ⊤= ΛP⊤ (1) Y ≈ L X l=1 γlql ⊤= ΓQ⊤. (2) Here, λl, γl, pl, and ql are column vectors, and the summation is taken over the rank 1 matrices that result from the outer product of those vectors. The matrices, Λ, Γ, P, and Q are constructed respectively by arranging λl, γl, pl, and ql vectors as columns. Our method for learning a distribution prediction model is shown in Algorithm 1. It is based on the two block NIPALS routine (Wold, 1975; Rosipal and Kramer, 2006) and iteratively discovers L pairs of vectors (λl, γl) such that the covariances, Cov(λl, γl), are maximised under the constraint ||pl|| = ||ql|| = 1. 
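A minimal way to reproduce this training step without re-implementing the NIPALS routine of Algorithm 1 is to use an off-the-shelf PLS regression, for example scikit-learn's NIPALS-based PLSRegression. The sketch below is only an approximation: the library's centring, scaling, and deflation details may differ from Algorithm 1, and the variable names are ours.

```python
import numpy as np
from sklearn.cross_decomposition import PLSRegression

def learn_prediction_model(X_source, Y_target, n_components=50):
    """Fit a PLS regression from source-domain pivot vectors (rows of
    X_source) to the corresponding target-domain vectors (rows of
    Y_target); a stand-in for Algorithm 1, not an exact reimplementation."""
    pls = PLSRegression(n_components=n_components, scale=False)
    pls.fit(X_source, Y_target)
    return pls

def predict_target_distribution(pls, w_source):
    """Approximate the prediction step described next (w_hat_T = M w_S):
    map a word's source-domain vector into the target-domain space."""
    return pls.predict(w_source.reshape(1, -1)).ravel()

# toy usage with random "pivot" vectors of dimensionality k = 100
rng = np.random.default_rng(0)
X = rng.standard_normal((500, 100))   # 500 pivots in the source space
Y = rng.standard_normal((500, 100))   # their vectors in the target space
model = learn_prediction_model(X, Y, n_components=20)
w_hat_T = predict_target_distribution(model, X[0])
```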
Finally, the prediction matrix, M is computed using λl, γl, pl, ql. The predicted distribution ˆwT of a word w in T is given by ˆwT = MwS. (3) 616 Our distribution prediction learning method is unsupervised in the sense that it does not require manually labeled data for a particular task from any of the domains. This is an important point, and means that the distribution prediction method is independent of the task to which it may subsequently be applied. As we go on to show in Section 6, this enables us to use the same distribution prediction method for both POS tagging and sentiment classification. 4 Domain Adaptation The main reason that a model trained only on the source domain labeled data performs poorly in the target domain is the feature mismatch – few features in target domain test instances appear in source domain training instances. To overcome this problem, we use the proposed distribution prediction method to find those related features in the source domain that correspond to the features appearing in the target domain test instances. We consider two DA tasks: (a) cross-domain POS tagging (Section 4.1), and (b) cross-domain sentiment classification (Section 4.2). Note that our proposed distribution prediction method can be applied to numerous other NLP tasks that involve sequence labelling and document classification. 4.1 Cross-Domain POS Tagging We represent each word using a set of features such as capitalisation (whether the first letter of the word is capitalised), numeric (whether the word contains digits), prefixes up to four letters, and suffixes up to four letters (Miller et al., 2011). Next, for each word w in a source domain labeled (i.e. manually POS tagged) sentence, we select its neighbours u(i) in the source domain as additional features. Specifically, we measure the similarity, sim(u(i) S , wS), between the source domain distributions of u(i) and w, and select the top r similar neighbours u(i) for each word w as additional features for w. We refer to such features as distributional features in this work. The value of a neighbour u(i) selected as a distributional feature is set to its similarity score sim(u(i) S , wS). Next, we train a CRF model using all features (i.e. capitalisation, numeric, prefixes, suffixes, and distributional features) on source domain labeled sentences. We train a PLSR model, M, that predicts the target domain distribution Mu(i) S of a word u(i) in the source domain labeled sentences, given its distribution, u(i) S . At test time, for each word w that appears in a target domain test sentence, we measure the similarity, sim(Mu(i) S , wT ), and select the most similar r words u(i) in the source domain labeled sentences as the distributional features for w, with their values set to sim(Mu(i) S , wT ). Finally, the trained CRF model is applied to a target domain test sentence. Note that distributional features are always selected from the source domain during both train and test times, thereby increasing the number of overlapping features between the trained model and test sentences. To make the inference tractable and efficient, we use a first-order Markov factorisation, in which we consider all pairwise combinations between the features for the current word and its immediate predecessor. 4.2 Cross-Domain Sentiment Classification Unlike in POS tagging, where we must individually tag each word in a target domain test sentence, in sentiment classification we must classify the sentiment for the entire review. 
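For concreteness, a small sketch of the distributional-feature selection just described for POS tagging is given below: cosine-based top-r neighbours are taken from the source domain, directly at training time and through the learnt projection at test time. The helper names and the `model` object are carried over from the sketches above and are illustrative, not the authors' code.

```python
import numpy as np

def top_r_neighbours(query_vec, candidate_vecs, candidate_words, r=10):
    """Return the r most cosine-similar source-domain words together with
    their similarity scores, which become (feature, value) pairs for the CRF."""
    q = query_vec / (np.linalg.norm(query_vec) + 1e-12)
    C = candidate_vecs / (np.linalg.norm(candidate_vecs, axis=1, keepdims=True) + 1e-12)
    sims = C @ q
    top = np.argsort(-sims)[:r]
    return [(candidate_words[i], float(sims[i])) for i in top]

# Training time: neighbours of w in the source space, i.e. sim(u_S, w_S)
# feats = top_r_neighbours(w_source_vec, U_source, source_words, r=10)
#
# Test time: project the source candidates with the learnt PLSR model and
# compare them with the word's target-domain vector, i.e. sim(M u_S, w_T)
# projected = model.predict(U_source)            # hypothetical `model` from above
# feats = top_r_neighbours(w_target_vec, projected, source_words, r=10)
```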
We modify the DA method presented in Section 4.1 to satisfy this requirement as follows. Let us assume that we are given a set {(x(i) S , y(i))}n i=1 of n labeled reviews x(i) S for the source domain S. For simplicity, let us consider binary sentiment classification where each review x(i) is labeled either as positive (i.e. y(i) = 1) or negative (i.e. y(i) = −1). Our cross-domain binary sentiment classification method can be easily extended to the multi-class setting as well. First, we lemmatise each word in a source domain labeled review x(i) S , and extract both unigrams and bigrams as features to represent x(i) S by a binaryvalued feature vector. Next, we train a binary classification model, θ, using those feature vectors. Any binary classification algorithm can be used to learn θ. In our experiments, we used L2 regularised logistic regression. Next, we train a PLSR model, M, as described in Section 3.2 using unlabeled reviews in the source and target domains. At test time, we represent a test target review H using a binary-valued feature vector h of unigrams and bigrams of lemmas of the words in H, as we did for source domain labeled train reviews. Next, for each feature w(j) extracted from H, we measure the similarity, 617 sim(Mu(i) S , w(j) T ), between the target domain distribution of w(j), and each feature (unigram or bigram) u(i) in the source domain labeled reviews. We score each source domain feature u(i) for its relatedness to H using the formula: score(u(i), H) = 1 |H| |H| X j=1 sim(Mu(i) S , w(j) T ) (4) where |H| denotes the total number of features extracted from the test review H. We select the top scoring r features u(i) as distributional features for H, and append those to h. The corresponding values of those distributional features are set to the scores given by Equation 4. Finally, we classify h using the trained binary classifier θ. Note that given a test review, we find the distributional features that are similar to all the words in the test review from the source domain. In particular, we do not find distributional features independently for each word in the test review. This enables us to find distributional features that are consistent with all the features in a test review. 4.3 Model Choices For both POS tagging and sentiment classification, we experimented with several alternative approaches for feature weighting, representation, and similarity measures using development data, which we randomly selected from the training instances from the datasets described in Section 5. For feature weighting for sentiment classification, we considered using the number of occurrences of a feature in a review and tf-idf weighting (Salton and Buckley, 1983). For representation, we considered distributional features u(i) in descending order of their scores given by Equation 4, and then taking the inverse-rank as the values for the distributional features (Bollegala et al., 2011). However, none of these alternatives resulted in performance gains. With respect to similarity measures, we experimented with cosine similarity and the similarity measure proposed by Lin (1998); cosine similarity performed consistently well over all the experimental settings. The feature representation was held fixed during these similarity measure comparisons. For POS tagging, we measured the effect of varying r, the number of distributional features, using a development dataset. 
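A compact sketch of the Equation (4) scoring step follows: every candidate source feature is scored by its average cosine similarity to all features of the test review, and the best-scoring candidates are appended as distributional features. The matrix and function names are ours, and the projected source vectors are assumed to come from the PLSR model sketched earlier.

```python
import numpy as np

def score_source_features(projected_source, review_target):
    """Equation (4): score(u, H) = (1/|H|) * sum_j sim(M u_S, w_T^(j)).
    Rows of projected_source are PLSR-projected source features (M u_S);
    rows of review_target are target-domain vectors of the review's features."""
    A = projected_source / (np.linalg.norm(projected_source, axis=1, keepdims=True) + 1e-12)
    B = review_target / (np.linalg.norm(review_target, axis=1, keepdims=True) + 1e-12)
    return (A @ B.T).mean(axis=1)    # one matrix product gives all cosines

def expand_review(h, scores, r=None):
    """Append the r best-scoring source features to the review's feature
    vector h (all of them when r is None), valued by their scores."""
    order = np.argsort(-scores)
    keep = order if r is None else order[:r]
    return np.concatenate([h, scores[keep]])
```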
We observed that setting r larger than 10 did not result in significant improvements in tagging accuracy, but only increased the train time due to the larger feature space. Consequently, we set r = 10 in POS tagging. For sentiment analysis, we used all features in the source domain labeled reviews as distributional features, weighted by their scores given by Equation 4, taking the inverse-rank. In both tasks, we parallelised similarity computations using BLAS3 level-3 routines to speed up the computations. The source code of our implementation is publicly available4. 5 Datasets To evaluate DA for POS tagging, following Blitzer et al. (2006), we use sections 2 −21 from Wall Street Journal (WSJ) as the source domain labeled data. An additional 100, 000 WSJ sentences from the 1988 release of the WSJ corpus are used as the source domain unlabeled data. Following Schnabel and Sch¨utze (2013), we use the POS labeled sentences in the SACNL dataset (Petrov and McDonald, 2012) for the five target domains: QA forums, Emails, Newsgroups, Reviews, and Blogs. Each target domain contains around 1000 POS labeled test sentences and around 100, 000 unlabeled sentences. To evaluate DA for sentiment classification, we use the Amazon product reviews collected by Blitzer et al. (2007) for four different product categories: books (B), DVDs (D), electronic items (E), and kitchen appliances (K). There are 1000 positive and 1000 negative sentiment labeled reviews for each domain. Moreover, each domain has on average 17, 547 unlabeled reviews. We use the standard split of 800 positive and 800 negative labeled reviews from each domain as training data, and the remainder for testing. 6 Experiments and Results For each domain D in the SANCL (POS tagging) and Amazon review (sentiment classification) datasets, we create a PPMI weighted cooccurrence matrix FD. On average, FD created for a target domain in the SANCL dataset contains 104, 598 rows and 65, 528 columns, whereas those numbers in the Amazon dataset are 27, 397 and 35, 200 respectively. In cross-domain sentiment classification, we measure the binary sentiment classification accuracy for the target domain 3http://www.openblas.net/ 4http://www.csc.liv.ac.uk/˜danushka/ software.html 618 test reviews for each pair of domains (12 pairs in total for 4 domains). On average, we have 40, 176 pivots for a pair of domains in the Amazon dataset. In cross-domain POS tagging, WSJ is always the source domain, whereas the five domains in SANCL dataset are considered as the target domains. For this setting we have 9822 pivots on average. The number of singular vectors k selected in SVD, and the number of PLSR dimensions L are set respectively to 1000 and 50 for the remainder of the experiments described in the paper. Later we study the effect of those two parameters on the performance of the proposed method. The L-BFGS (Liu and Nocedal, 1989) method is used to train the CRF and logistic regression models. 6.1 POS Tagging Results Table 2 shows the token-level POS tagging accuracy for unseen words (i.e. words that appear in the target domain test sentences but not in the source domain labeled train sentences). By limiting the evaluation to unseen words instead of all words, we can evaluate the gain in POS tagging accuracy solely due to DA. The NA (no-adapt) baseline simulates the effect of not performing any DA. 
Specifically, in POS tagging, a CRF trained on source domain labeled sentences is applied to target domain test sentences, whereas in sentiment classification, a logistic regression classifier trained using source domain labeled reviews is applied to the target domain test reviews. The Spred baseline directly uses the source domain distributions for the words instead of projecting them to the target domain. This is equivalent to setting the prediction matrix M to the unit matrix. The Tpred baseline uses the target domain distribution wT for a word w instead of MwS. If w does not appear in the target domain, then wT is set to the zero vector. The Spred and Tpred baselines simulate the two alternatives of using source and target domain distributions instead of learning a PLSR model. The DA method proposed in Section 4.1 is shown as the Proposed method. Filter denotes the training set filtering method proposed by Schnabel and Schütze (2013) for the DA of POS taggers.

From Table 2, we see that the Proposed method achieves the best performance in all five domains, followed by the Tpred baseline. Recall that the Tpred baseline cannot find source domain words that do not appear in the target domain as distributional features for the words in the target domain test reviews. Therefore, when the overlap between the vocabularies used in the source and the target domains is small, Tpred cannot reduce the mismatch between the feature spaces. Poor performance of the Spred baseline shows that the distributions of a word in the source and target domains are different to the extent that the distributional features found using source domain distributions are inadequate. The two baselines Spred and Tpred collectively motivate our proposal to learn a distribution prediction model from the source domain to the target. The improvements of Proposed over the previously proposed Filter are statistically significant in all domains except the Emails domain (denoted by † in Table 2 according to the Binomial exact test at 95% confidence). However, the differences between the Tpred and Proposed methods are not statistically significant.

Table 2: POS tagging accuracies on SANCL.
Target       NA     Spred  Tpred  Filter  Proposed
QA           67.34  68.18  68.75  57.08   69.28†
Emails       65.62  66.62  67.07  65.61   67.09
Newsgroups   75.71  75.09  75.57  70.37   75.85†
Reviews      56.36  54.60  56.68  47.91   56.93†
Blogs        76.64  54.78  76.90  74.56   76.97†

6.2 Sentiment Classification Results

In Figure 1, we compare the Proposed cross-domain sentiment classification method (Section 4.2) against several baselines and the current state-of-the-art methods. The baselines NA, Spred, and Tpred are defined similarly as in Section 6.1. SST is the Sentiment Sensitive Thesaurus proposed by Bollegala et al. (2011). SST creates a single distribution for a word using both source and target domain reviews, instead of two separate distributions as done by the Proposed method. SCL denotes the Structural Correspondence Learning method proposed by Blitzer et al. (2006). SFA denotes the Spectral Feature Alignment method proposed by Pan et al. (2010). SFA and SCL represent the current state-of-the-art methods for cross-domain sentiment classification. All methods are evaluated under the same settings, including train/test split, feature spaces, pivots, and classification algorithms so that any differences in performance can be directly attributable to their domain adaptability.
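The NA, Spred, Tpred, and Proposed variants differ only in which vector a source-domain candidate u contributes to the similarity computation. The few lines below summarise that choice; they are illustrative only, with hypothetical dictionaries of vectors and the `pls` model from the earlier sketches.

```python
import numpy as np

def candidate_vector(u, source_vecs, target_vecs, pls, mode="proposed"):
    """Vector used for a source-domain candidate u when computing its
    similarity to target-domain words: Proposed projects with the learnt
    model (M u_S); Spred keeps u_S unchanged (M = identity); Tpred uses the
    observed target vector u_T, or a zero vector if u is unseen in the target."""
    if mode == "spred":
        return source_vecs[u]
    if mode == "tpred":
        dim = len(next(iter(source_vecs.values())))
        return target_vecs.get(u, np.zeros(dim))
    return pls.predict(source_vecs[u].reshape(1, -1)).ravel()   # "proposed"
```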
For each domain, the accuracy obtained by a classifier trained using labeled data from that domain is indicated by a solid horizontal line in each sub-figure. This upper baseline represents the classification accuracy we could hope to obtain if we were to have labeled data for the target domain. Clopper-Pearson 95% binomial confidence intervals are superimposed on each vertical bar.

[Figure 1: Cross-Domain sentiment classification. Accuracies of NA, SFA, SST, SCL, Spred, Tpred, and Proposed for the twelve source->target pairs over the B, D, E, and K domains.]

From Figure 1 we see that the Proposed method reports the best results in 8 out of the 12 domain pairs, whereas SCL, SFA, and Spred report the best results in other cases. Except for the D-E setting, in which the Proposed method significantly outperforms both SFA and SCL, the performance of the Proposed method is not statistically significantly different to that of SFA or SCL. The selection of pivots is vital to the performance of SFA. However, unlike SFA, which requires us to carefully select a small subset of pivots (ca. less than 500) using some heuristic approach, our Proposed method does not require any pivot selection. Moreover, SFA projects source domain reviews to a lower-dimensional latent space, in which a binary sentiment classifier is subsequently trained. At test time SFA projects a target review into this lower-dimensional latent space and applies the trained classifier. In contrast, our Proposed method predicts the distribution of a word in the target domain, given its distribution in the source domain, thereby explicitly translating the source domain reviews to the target. This property enables us to apply the proposed distribution prediction method to tasks other than sentiment analysis, such as POS tagging, where we must identify distributional features for individual words.

Unlike our distribution prediction method, which is unsupervised, SST requires labeled data for the source domain to learn a feature mapping between a source and a target domain in the form of a thesaurus. However, from Figure 1 we see that in 10 out of the 12 domain pairs the Proposed method returns higher accuracies than SST.

To evaluate the overall effect of the number of singular vectors k used in the SVD step, and the number of PLSR components L used in Algorithm 1, we conduct two experiments. To evaluate the effect of the PLSR dimensions, we fixed k = 1000 and measured the cross-domain sentiment classification accuracy over a range of L values. As shown in Figure 2, accuracy remains stable across a wide range of PLSR dimensions. Because the time complexity of Algorithm 1 increases linearly with L, it is desirable that we select smaller L values in practice.

[Figure 2: The effect of PLSR dimensions (accuracy vs. L, for the E->B and D->B pairs).]

To evaluate the effect of the SVD dimensions, we fixed L = 100 and measured the cross-domain sentiment classification accuracy for different k values, as shown in Figure 3. We see an overall decrease in classification accuracy when k is increased. Because the dimensionality of the source and target domain feature spaces is equal to k, the complexity of the least squares regression problem increases with k. Therefore, larger k values result in overfitting to the training data, and classification accuracy is reduced on the target test data.

[Figure 3: The effect of SVD dimensions (accuracy vs. k, for the E->B and D->B pairs).]

As an example of the distribution prediction method, in Table 3 we show the top 3 similar distributional features u in the books (source) domain, predicted for the electronics (target) domain word w = lightweight, by different similarity measures. Bigrams are indicated by a + sign and the similarity scores of the distributional features are shown within brackets.

Table 3: Top 3 distributional features u ∈ S for the word lightweight (w).
Measure          Distributional features
sim(uS, wS)      thin (0.1733), digestible (0.1728), small+print (0.1722)
sim(uT, wT)      travel+companion (0.6018), snap-in (0.6010), touchpad (0.6016)
sim(uS, wT)      segregation (0.1538), participation (0.1512), depression+era (0.1508)
sim(MuS, wT)     small (0.2794), compact (0.2641), sturdy (0.2561)

Using the source domain distributions for both u and w (i.e., sim(uS, wS)) produces distributional features that are specific to the books domain, or to the dominant adjectival sense of having no importance or influence. On the other hand, using target domain distributions for u and w (i.e., sim(uT, wT)) returns distributional features of the dominant nominal sense of lower in weight, frequently associated with electronic devices. Simply using source domain distributions uS (i.e., sim(uS, wT)) returns totally unrelated distributional features. This shows that word distributions in the source and target domains are very different, and some adaptation is required prior to computing distributional features. Interestingly, we see that by using the distributions predicted by the proposed method (i.e., sim(MuS, wT)) we overcome this problem and find relevant distributional features from the source domain. Although for illustrative purposes we used the word lightweight, which occurs in both the source and the target domains, our proposed method does not require the source domain distribution wS for a word w in a target domain document. Therefore, it can find distributional features even for words occurring only in the target domain, thereby reducing the feature mismatch between the two domains.

7 Conclusion

We proposed a method to predict the distribution of a word across domains. We first create a distributional representation for a word using the data from a single domain, and then learn a Partial Least Squares Regression (PLSR) model to predict the distribution of a word in a target domain given its distribution in a source domain. We evaluated the proposed method in two domain adaptation tasks: cross-domain POS tagging and cross-domain sentiment classification. Our experiments show that, without requiring any task-specific customisations to our distribution prediction method, it outperforms competitive baselines and achieves comparable results to the current state-of-the-art domain adaptation methods.

References

Anthony Aue and Michael Gamon. 2005. Customizing sentiment classifiers to new domains: a case study. Technical report, Microsoft Research.

Marco Baroni and Alessandro Lenci. 2010. Distributional memory: A general framework for corpus-based semantics. Computational Linguistics, 36(4):673 – 721.

Romaric Besançon, Martin Rajman, and Jean-Cédric Chappelier. 1999. Textual similarities based on a distributional approach. In Proc. of DEXA, pages 180 – 184.

John Blitzer, Ryan McDonald, and Fernando Pereira. 2006. Domain adaptation with structural correspondence learning. In Proc. of EMNLP, pages 120 – 128.
John Blitzer, Mark Dredze, and Fernando Pereira. 2007. Biographies, bollywood, boom-boxes and blenders: Domain adaptation for sentiment classification. In Proc. of ACL, pages 440 – 447. Danushka Bollegala, David Weir, and John Carroll. 2011. Using multiple sources to construct a sentiment sensitive thesaurus for cross-domain sentiment classification. In Proc. of ACL/HLT, pages 132 – 141. Ted Briscoe, John Carroll, and Rebecca Watson. 2006. The second release of the RASP system. In Proc. of COLING/ACL Interactive Presentation Sessions. John A. Bullinaria and Jospeh P. Levy. 2007. Extracting semantic representations from word cooccurrence statistics: A computational study. Behavior Research Methods, 39(3):510 – 526. Jinho D. Choi and Martha Palmer. 2012. Fast and robust part-of-speech tagging using dynamic model selection. In Proc. of ACL Short Papers, volume 2, pages 363 – 367. Kenneth W. Church and Patrick Hanks. 1990. Word association norms, mutual information, and lexicography. Computational Linguistics, 16(1):22 – 29, March. James Curran. 2005. Supersense tagging of unknown nouns using semantic similarity. In Proceedings of the 43rd Annual Meeting of the Association for Computational Linguistics, pages 26 – 33. Hal Daum´e III. 2007. Frustratingly easy domain adaptation. In Proc. of ACL, pages 256 – 263. John R. Firth. 1957. A synopsis of linguistic theory 1930-55. Studies in Linguistic Analysis, pages 1 – 32. Paul Geladi and Bruce R. Kowalski. 1986. Partial least-squares regression: a tutorial. Analytica Chimica Acta, 185(0):1 – 17. Honglei Guo, Huijia Zhu, Zhili Guo, Xiaoxun Zhang, Xian Wu, and Zhong Su. 2009. Domain adaptation with latent semantic association for named entity recognition. In Proc. of NAACL, pages 281 – 289. Yulan He, Chenghua Lin, and Harith Alani. 2011. Automatically extracting polarity-bearing topics for cross-domain sentiment classification. In Proc. of ACL/HLT, pages 123 – 131. Fei Huang and Alexander Yates. 2009. Distributional representations for handling sparsity in supervised sequence-labeling. In ACL-IJCNLP’09, pages 495 – 503. Fei Huang and Alexander Yates. 2012. Biased representation learning for domain adaptation. In Proc. of EMNLP/CoNLL, pages 1313 – 1323. Lili Kotlerman, Ido Dagan, Idan Szpektor, and Maayan Zhitomirsky-Geffet. 2012. Directional distributional similarity for lexical inference. Natural Language Engineering, 16(4):359 – 389. Dekang Lin. 1998. Automatic retrieval and clustering of similar words. In Proc. of ACL, pages 768 – 774. Dong C. Liu and Jorge Nocedal. 1989. On the limited memory BFGS method for large scale optimization. Mathematical Programming, 45:503 – 528. David McClosky, Eugene Charniak, and Mark Johnson. 2010. Automatic domain adaptation for parsing. In Proc. of NAACL/HLT, pages 28 – 36. John E. Miller, Manabu Torii, and K. Vijay-Shanker. 2011. Building domain-specific taggers without annotated (domain) data. In Proc. of EMNLP/CoNLL, pages 1103 – 1111. Sebastian Pado and Mirella Lapata. 2007. Dependency-based construction of semantic space models. Computational Linguistics, 33(2):161 – 199. Sinno Jialin Pan, Xiaochuan Ni, Jian-Tao Sun, Qiang Yang, and Zheng Chen. 2010. Cross-domain sentiment classification via spectral feature alignment. In Proc. of WWW, pages 751 – 760. Patrick Pantel, Eric Crestan, Arkady Borkovsky, AnaMaria Popescu, and Vishnu Vyas. 2009. Web-scale distributional similarity and entity set expansion. In Proc. of EMNLP, pages 938 – 947. Slav Petrov and Ryan McDonald. 2012. 
Overview of the 2012 shared task on parsing the web. In Notes of the 1st SANCL Workshop. Natalia Ponomareva and Mike Thelwall. 2012. Do neighbours help? an exploration of graph-based algorithms for cross-domain sentiment classification. In Proc. of EMNLP, pages 655 – 665. Roman Rosipal and Nicole Kramer. 2006. Overview and recent advances in partial least squares. In C. Saunders et al., editor, SLSFS’05, volume 3940 of LNCS, pages 34 – 51, Berlin Heidelberg. SpringerVerlag. G. Salton and C. Buckley. 1983. Introduction to Modern Information Retreival. McGraw-Hill Book Company. Tobias Schnabel and Hinrich Sch¨utze. 2013. Towards robust cross-domain domain adaptation for part-ofspeech tagging. In Proc. of IJCNLP, pages 198 – 206. 622 Richard Socher, Alex Perelygin, Jean Wu, Jason Chuang, Christopher D. Manning, Andrew Ng, and Christopher Potts. 2013. Recursive deep models for semantic compositionality over a sentiment treebank. In Proc. of EMNLP, pages 1631 – 1642. Peter D. Turney and Patrick Pantel. 2010. From frequency to meaning: Vector space models of semantics. Journal of Aritificial Intelligence Research, 37:141 – 188. Pascal Vincent, Hugo Larochelle, Isabelle Lajoie, Yoshua Bengio, and Pierre-Antonie Manzagol. 2010. Stacked denoising autoencoders: Learning useful representations in a deep network with a local denoising criterion. Journal of Machine Learning Research, 11:3371 – 3408. Herman Wold. 1975. Path models with latent variables: the NIPALS approach. In H. M. Blalock et al., editor, Quantitative socialogy: international perspective on mathematical and statistical modeling, pages 307 – 357. Academic. Herman Wold. 1985. Partial least squares. In Samel Kotz and Norman L. Johnson, editors, Encyclopedia of the Statistical Sciences, pages 581 – 591. Wiley. Min Xiao, Feipeng Zhao, and Yuhong Guo. 2013. Learning latent word representations for domain adaptation using supervised word clustering. In Proc. of EMNLP, pages 152 – 162. 623
Proceedings of the 52nd Annual Meeting of the Association for Computational Linguistics, pages 624–633, Baltimore, Maryland, USA, June 23-25 2014. c⃝2014 Association for Computational Linguistics How to make words with vectors: Phrase generation in distributional semantics Georgiana Dinu and Marco Baroni Center for Mind/Brain Sciences University of Trento, Italy (georgiana.dinu|marco.baroni)@unitn.it Abstract We introduce the problem of generation in distributional semantics: Given a distributional vector representing some meaning, how can we generate the phrase that best expresses that meaning? We motivate this novel challenge on theoretical and practical grounds and propose a simple data-driven approach to the estimation of generation functions. We test this in a monolingual scenario (paraphrase generation) as well as in a cross-lingual setting (translation by synthesizing adjectivenoun phrase vectors in English and generating the equivalent expressions in Italian). 1 Introduction Distributional methods for semantics approximate the meaning of linguistic expressions with vectors that summarize the contexts in which they occur in large samples of text. This has been a very successful approach to lexical semantics (Erk, 2012), where semantic relatedness is assessed by comparing vectors. Recently these methods have been extended to phrases and sentences by means of composition operations (see Baroni (2013) for an overview). For example, given the vectors representing red and car, composition derives a vector that approximates the meaning of red car. However, the link between language and meaning is, obviously, bidirectional: As message recipients we are exposed to a linguistic expression and we must compute its meaning (the synthesis problem). As message producers we start from the meaning we want to communicate (a “thought”) and we must encode it into a word sequence (the generation problem). If distributional semantics is to be considered a proper semantic theory, then it must deal not only with synthesis (going from words to vectors), but also with generation (from vectors to words). Besides these theoretical considerations, phrase generation from vectors has many useful applications. We can, for example, synthesize the vector representing the meaning of a phrase or sentence, and then generate alternative phrases or sentences from this vector to accomplish true paraphrase generation (as opposed to paraphrase detection or ranking of candidate paraphrases). Generation can be even more useful when the source vector comes from another modality or language. Recent work on grounding language in vision shows that it is possible to represent images and linguistic expressions in a common vectorbased semantic space (Frome et al., 2013; Socher et al., 2013). Given a vector representing an image, generation can be used to productively construct phrases or sentences that describe the image (as opposed to simply retrieving an existing description from a set of candidates). Translation is another potential application of the generation framework: Given a semantic space shared between two or more languages, one can compose a word sequence in one language and generate translations in another, with the shared semantic vector space functioning as interlingua. Distributional semantics assumes a lexicon of atomic expressions (that, for simplicity, we take to be words), each associated to a vector. 
Thus, at the single-word level, the problem of generation is solved by a trivial generation-by-synthesis approach: Given an arbitrary target vector, “generate” the corresponding word by searching through the lexicon for the word with the closest vector to the target. This is however unfeasible for larger expressions: Given n vocabulary elements, this approach requires checking nk phrases of length k. This becomes prohibitive already for relatively short phrases, as reasonably-sized vocabularies do not go below tens of thousands of words. The search space for 3-word phrases in a 10K-word vocabulary is already in the order of trillions. In 624 this paper, we introduce a more direct approach to phrase generation, inspired by the work in compositional distributional semantics. In short, we revert the composition process and we propose a framework of data-induced, syntax-dependent functions that decompose a single vector into a vector sequence. The generated vectors can then be efficiently matched against those in the lexicon or fed to the decomposition system again to produce longer phrases recursively. 2 Related work To the best of our knowledge, we are the first to explicitly and systematically pursue the generation problem in distributional semantics. Kalchbrenner and Blunsom (2013) use top-level, composed distributed representations of sentences to guide generation in a machine translation setting. More precisely, they condition the target language model on the composed representation (addition of word vectors) of the source language sentence. Andreas and Ghahramani (2013) discuss the the issue of generating language from vectors and present a probabilistic generative model for distributional vectors. However, their emphasis is on reversing the generative story in order to derive composed meaning representations from word sequences. The theoretical generating capabilities of the methods they propose are briefly exemplified, but not fully explored or tested. Socher et al. (2011) come closest to our target problem. They introduce a bidirectional languageto-meaning model for compositional distributional semantics that is similar in spirit to ours. However, we present a clearer decoupling of synthesis and generation and we use different (and simpler) training methods and objective functions. Moreover, Socher and colleagues do not train separate decomposition rules for different syntactic configurations, so it is not clear how they would be able to control the generation of different output structures. Finally, the potential for generation is only addressed in passing, by presenting a few cases where the generated sequence has the same syntactic structure of the input sequence. 3 General framework We start by presenting the familiar synthesis setting, focusing on two-word phrases. We then introduce generation for the same structures. Finally, we show how synthesis and generation of longer phrases is handled by recursive extension of the two-word case. We assume a lexicon L, that is, a bi-directional look-up table containing a list of words Lw linked to a matrix Lv of vectors. Both synthesis and generation involve a trivial lexicon look-up step to retrieve vectors associated to words and vice versa: We ignore it in the exposition below. 3.1 Synthesis To construct the vector representing a two-word phrase, we must compose the vectors associated to the input words. 
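Before turning to composition, the lexicon look-up and the single-word generation-by-synthesis step mentioned above can be sketched in a few lines. This is a minimal illustration with our own class and variable names, not the authors' implementation; it also makes clear why enumerating all k-word phrases this way blows up as n^k.

```python
import numpy as np

class Lexicon:
    """Bi-directional look-up table: a list of words Lw aligned with a
    row-matrix of vectors Lv (illustrative names, one vector per word)."""
    def __init__(self, words, vectors):
        self.words = list(words)                     # Lw
        self.vectors = np.asarray(vectors, float)    # Lv, one row per word
        self.index = {w: i for i, w in enumerate(self.words)}

    def vector(self, word):
        """Word -> vector direction of the look-up."""
        return self.vectors[self.index[word]]

    def nearest(self, v, t=1):
        """Generation-by-synthesis for a single word: the t words whose
        lexicon vectors are closest (by cosine) to the target vector v."""
        L = self.vectors / (np.linalg.norm(self.vectors, axis=1, keepdims=True) + 1e-12)
        sims = L @ (v / (np.linalg.norm(v) + 1e-12))
        return [self.words[i] for i in np.argsort(-sims)[:t]]
```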
More formally, similarly to Mitchell and Lapata (2008), we define a syntaxdependent composition function yielding a phrase vector ⃗p: ⃗p = fcompR(⃗u,⃗v) where ⃗u and ⃗v are the vector representations associated to words u and v. fcompR : Rd × Rd →Rd (for d the dimensionality of vectors) is a composition function specific to the syntactic relation R holding between the two words.1 Although we are not bound to a specific composition model, throughout this paper we use the method proposed by Guevara (2010) and Zanzotto et al. (2010) which defines composition as application of linear transformations to the two constituents followed by summing the resulting vectors: fcompR(⃗u,⃗v) = W1⃗u + W2⃗v. We will further use the following equivalent formulation: fcompR(⃗u,⃗v) = WR[⃗u;⃗v] where WR ∈Rd×2d and [⃗u;⃗v] is the vertical concatenation of the two vectors (using Matlab notation). Following Guevara, we learn WR using examples of word and phrase vectors directly extracted from the corpus (for the rest of the paper, we refer to these phrase vectors extracted non-compositionally from the corpus as observed vectors). To estimate, for example, the weights in the WAN (adjective-noun) matrix, we use the corpus-extracted vectors of the words in tuples such as ⟨red, car, red.car⟩, ⟨evil, cat, evil.cat⟩, etc. Given a set of training examples stacked into matrices U, V (the constituent vectors) and P (the corresponding observed vectors), we estimate WR by solving the least-squares regression problem: 1Here we make the simplifying assumption that all vectors have the same dimensionality, however this need not necessarily be the case. 625 min WR∈Rd×2d ∥P −WR[U; V ]∥ (1) We use the approximation of observed phrase vectors as objective because these vectors can provide direct evidence of the polysemous behaviour of words: For example, the corpus-observed vectors of green jacket and green politician reflect how the meaning of green is affected by its occurrence with different nouns. Moreover, it has been shown that for two-word phrases, despite their relatively low frequency, such corpus-observed representations are still difficult to outperform in phrase similarity tasks (Dinu et al., 2013; Turney, 2012). 3.2 Generation Generation of a two-word sequence from a vector proceeds in two steps: decomposition of the phrase vectors into two constituent vectors, and search for the nearest neighbours of each constituent vector in Lv (the lexical matrix) in order to retrieve the corresponding words from Lw. Decomposition We define a syntax-dependent decomposition function: [⃗u;⃗v] = fdecompR(⃗p) where ⃗p is a phrase vector, ⃗u and ⃗v are vectors associated to words standing in the syntactic relation R and fdecompR : Rd →Rd × Rd. We assume that decomposition is also a linear transformation, W ′ R ∈R2d×d, which, given an input phrase vector, returns two constituent vectors: fdecompR(⃗p) = W ′ R⃗p Again, we can learn from corpus-observed vectors associated to tuples of word pairs and the corresponding phrases by solving: min W ′ R∈R2d×d ∥[U; V ] −W ′ RP∥ (2) If a composition function fcompR is available, an alternative is to learn a function that can best revert this composition. The decomposition function is then trained as follows: min W ′ R∈R2d×d ∥[U; V ] −W ′ RWR[U; V ]∥ (3) where the matrix WR is a given composition function for the same relation R. Training with observed phrases, as in eq. 
Training with observed phrases, as in eq. (2), should be better at capturing the idiosyncrasies of the actual distribution of phrases in the corpus, and it is more robust because it does not depend on the availability and quality of composition functions. On the other hand, if the goal is to revert the composition process as faithfully as possible and retrieve the original constituents (e.g., in a different modality or a different language), then the objective in eq. (3) is better motivated. Nearest neighbour search We retrieve the nearest neighbours of each constituent vector ⃗u obtained by decomposition by applying a search function s: NN_⃗u = s(⃗u, Lv, t), where NN_⃗u is a list containing the t nearest neighbours of ⃗u from Lv, the lexical vectors. Depending on the task, t might be set to 1 to retrieve just one word sequence, or to larger values to retrieve t alternatives. The similarity measure used to determine the nearest neighbours is another parameter of the search function; we omit it here as we only experiment with the standard cosine measure (Turney and Pantel, 2010). (Note that in terms of computational efficiency, cosine-based nearest neighbour searches reduce to vector-matrix multiplications, for which many efficient implementations exist. Methods such as locality-sensitive hashing can be used for further speedups when working with particularly large vocabularies (Andoni and Indyk, 2008).) 3.3 Recursive (de)composition Extension to longer sequences is straightforward if we assume binary tree representations as syntactic structures. In synthesis, the top-level vector can be obtained by applying composition functions recursively. For example, the vector of big red car would be obtained as fcompAN(⃗big, fcompAN(⃗red, ⃗car)), where fcompAN is the composition function for adjective-noun phrase combinations. Conversely, for generation, we decompose the phrase vector with fdecompAN. The first vector is used for retrieving the nearest adjective from the lexicon, while the second vector is further decomposed. In the experiments in this paper we assume that the syntactic structure is given. In Section 7, we discuss ways to eliminate this assumption. 4 Evaluation setting In the empirical part of this paper, we focus on noun phrase generation. A noun phrase can be a single noun or a noun with one or more modifiers, where a modifier can be an adjective or a prepositional phrase. A prepositional phrase is in turn composed of a preposition and a noun phrase. We learn two composition (and corresponding decomposition) functions: one for modifier-noun phrases, trained on adjective-noun (AN) pairs, and a second one for prepositional phrases, trained on preposition-noun (PN) combinations. For the rest of this section we describe the construction of the vector spaces and the (de)composition function learning procedure. Construction of vector spaces We test two types of vector representations. The cbow model introduced in Mikolov et al. (2013a) learns vector representations using a neural network architecture by trying to predict a target word given the words surrounding it. We use the word2vec software3 to build vectors of size 300, using a context window of 5 words to either side of the target. We set the sub-sampling option to 1e-05 and estimate the probability of a target word with the negative sampling method, drawing 10 samples from the noise distribution (see Mikolov et al. (2013a) for details).
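The authors use the original word2vec tool; purely as a hedged illustration of the configuration just described, a roughly equivalent cbow setup using the gensim reimplementation (assuming gensim 4.x parameter names, which differ from the C tool's command-line flags) could look like this:

```python
from gensim.models import Word2Vec

# toy corpus; in the paper each token is a word form plus POS tag and the
# source corpus has ~2.8 billion tokens, so treat this purely as a sketch
corpus = [["the", "red", "car", "drives"], ["a", "green", "jacket"]]

model = Word2Vec(
    sentences=corpus,
    vector_size=300,   # vectors of size 300
    window=5,          # 5 words to either side of the target
    sample=1e-5,       # sub-sampling threshold
    negative=10,       # 10 samples from the noise distribution
    sg=0,              # cbow: predict the target from its context
    min_count=1,       # kept at 1 only so the toy corpus is usable
)
vec = model.wv["car"]  # 300-dimensional cbow vector for "car"
```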
We also implement a standard countbased bag-of-words distributional space (Turney and Pantel, 2010) which counts occurrences of a target word with other words within a symmetric window of size 5. We build a 300Kx300K symmetric co-occurrence matrix using the top most frequent words in our source corpus, apply positive PMI weighting and Singular Value Decomposition to reduce the space to 300 dimensions. For both spaces, the vectors are finally normalized to unit length.4 For both types of vectors we use 2.8 billion tokens as input (ukWaC + Wikipedia + BNC). The Italian language vectors for the cross-lingual experiments of Section 6 were trained on 1.6 billion tokens from itWaC.5 A word token is a wordform + POS-tag string. We extract both word vectors and the observed phrase vectors which are 3Available at https://code.google.com/p/ word2vec/ 4The parameters of both models have been chosen without specific tuning, based on their observed stable performance in previous independent experiments. 5Corpus sources: http://wacky.sslmit.unibo. it, http://www.natcorp.ox.ac.uk required for the training procedures. We sanitycheck the two spaces on MEN (Bruni et al., 2012), a 3,000 items word similarity data set. cbow significantly outperforms count (0.80 vs. 0.72 Spearman correlations with human judgments). count performance is consistent with previously reported results.6 (De)composition function training The training data sets consist of the 50K most frequent ⟨u, v, p⟩tuples for each phrase type, for example, ⟨red, car, red.car⟩or ⟨in, car, in.car⟩.7 We concatenate ⃗u and ⃗v vectors to obtain the [U; V ] matrix and we use the observed ⃗p vectors (e.g., the corpus vector of the red.car bigram) to obtain the phrase matrix P. We use these data sets to solve the least squares regression problems in eqs. (1) and (2), obtaining estimates of the composition and decomposition matrices, respectively. For the decomposition function in eq. (3), we replace the observed phrase vectors with those composed with fcompR(⃗u,⃗v), where fcompR is the previously estimated composition function for relation R. Composition function performance Since the experiments below also use composed vectors as input to the generation process, it is important to provide independent evidence that the composition model is of high quality. This is indeed the case: We tested our composition approach on the task of retrieving observed AN and PN vectors, based on their composed vectors (similarly to Baroni and Zamparelli (2010), we want to retrieve the observed red.car vector using fcompAN(red, car)). We obtain excellent results, with minimum accuracy of 0.23 (chance level <0.0001). We also test on the AN-N paraphrasing test set used in Dinu et al. (2013) (in turn adapting Turney (2012)). The dataset contains 620 ANs, each paired with a single-noun paraphrase (e.g., false belief/fallacy, personal appeal/charisma). The task is to rank all nouns in the lexicon by their similarity to the phrase, and return the rank of the correct paraphrase. Results are reported in the first row of Table 1. To facilitate comparison, we search, like Dinu et al., through a vocabulary containing the 20K most frequent nouns. The count vectors results are similar to those reported by Dinu and colleagues for the same model, and with cbow vec6See Baroni et al. (2014) for an extensive comparison of the two types of vector representations. 7For PNs, we ignore determiners and we collapse, for example, in.the.car and in.car occurrences. 
627 Input Output cbow count A◦N N 11 171 N A, N 67,29 204,168 Table 1: Median rank on the AN-N set of Dinu et al. (2013) (e.g., personal appeal/charisma). First row: the A and N are composed and the closest N is returned as a paraphrase. Second row: the N vector is decomposed into A and N vectors and their nearest (POS-tag consistent) neighbours are returned. tors we obtain a median rank that is considerably higher than that of the methods they test. 5 Noun phrase generation 5.1 One-step decomposition We start with testing one-step decomposition by generating two-word phrases. A first straightforward evaluation consists in decomposing a phrase vector into the correct constituent words. For this purpose, we randomly select (and consequently remove) from the training sets 200 phrases of each type (AN and PN) and apply decomposition operations to 1) their corpus-observed vectors and 2) their composed representations. We generate two words by returning the nearest neighbours (with appropriate POS tags) of the two vectors produced by the decomposition functions. Table 2 reports generation accuracy, i.e., the proportion of times in which we retrieved the correct constituents. The search space consists of the top most frequent 20K nouns, 20K adjectives and 25 prepositions respectively, leading to chance accuracy <0.0001 for nouns and adjectives and <0.05 for prepositions. We obtain relatively high accuracy, with cbow vectors consistently outperforming count ones. Decomposing composed rather than observed phrase representations is easier, which is to be expected given that composed representations are obtained with a simpler, linear model. Most of the errors consist in generating synonyms (hard case→difficult case, true cost →actual cost) or related phrases (stereo speakers→omni-directional sound). Next, we use the AN-N dataset of Dinu and colleagues for a more interesting evaluation of one-step decomposition. In particular, we reverse the original paraphrasing direction by attempting to generate, for example, personal charm from charisma. It is worth stressing the nature of the Input Output cbow count A.N A, N 0.36,0.61 0.20,0.41 P.N P, N 0.93,0.79 0.60,0.57 A◦N A, N 1.00,1.00 0.86,0.99 P◦N P, N 1.00,1.00 1.00,1.00 Table 2: Accuracy of generation models at retrieving (at rank 1) the constituent words of adjective-noun (AN) and preposition-noun (PN) phrases. Observed (A.N) and composed representations (A◦N) are decomposed with observed(eq. 2) and composed-trained (eq. 3) functions respectively. paraphrase-by-generation task we tackle here and in the next experiments. Compositional distributional semantic systems are often evaluated on phrase and sentence paraphrasing data sets (Blacoe and Lapata, 2012; Mitchell and Lapata, 2010; Socher et al., 2011; Turney, 2012). However, these experiments assume a pre-compiled list of candidate paraphrases, and the task is to rank correct paraphrases above foils (paraphrase ranking) or to decide, for a given pair, if the two phrases/sentences are mutual paraphrases (paraphrase detection). Here, instead, we do not assume a given set of candidates: For example, in N→AN paraphrasing, any of 20K2 possible combinations of adjectives and nouns from the lexicon could be generated. This is a much more challenging task and it paves the way to more realistic applications of distributional semantics in generation scenarios. The median ranks of the gold A and N of the Dinu set are shown in the second row of Table 1. 
As the top-generated noun is almost always, uninterestingly, the input one, we return the next noun. Here we report results for the more motivated corpus-observed training of eq. (2) (unsurprisingly, using composed-phrase training for the task of decomposing single nouns leads to lower performance). Although considerably more difficult than the previous task, the results are still very good, with median ranks under 100 for the cbow vectors (random median rank at 10K). Also, the dataset provides only one AN paraphrase for each noun, out of many acceptable ones. Examples of generated phrases are given in Table 3. In addition to generating topically related ANs, we also see nouns disambiguated in different ways than intended in 628 Input Output Gold reasoning deductive thinking abstract thought jurisdiction legal authority legal power thunderstorm thundery storm electrical storm folk local music common people superstition old-fashioned religion superstitious notion vitriol political bitterness sulfuric acid zoom fantastic camera rapid growth religion religious religion religious belief Table 3: Examples of generating ANs from Ns using the data set of Dinu et al. (2013). the gold standard (for example vitriol and folk in Table 3). Other interesting errors consist of decomposing a noun into two words which both have the same meaning as the noun, generating for example religion →religious religions. We observe moreover that sometimes the decomposition reflects selectional preference effects, by generating adjectives that denote typical properties of the noun to be paraphrased (e.g., animosity is a (political, personal,...) hostility or a fridge is a (big, large, small,...) refrigerator). This effect could be exploited for tasks such as property-based concept description (Kelly et al., 2012). 5.2 Recursive decomposition We continue by testing generation through recursive decomposition on the task of generating nounpreposition-noun (NPN) paraphrases of adjectivenouns (AN) phrases. We introduce a dataset containing 192 AN-NPN pairs (such as pre-election promises→promises before election), which was created by the second author and additionally corrected by an English native speaker. The data set was created by analyzing a list of randomly selected frequent ANs. 49 further ANs (with adjectives such as amazing and great) were judged not NPN-paraphrasable and were used for the experiment reported in Section 7. The paraphrased subset focuses on preposition diversity and on including prepositions which are rich in semantic content and relevant to paraphrasing the AN. This has led to excluding of, which in most cases has the purely syntactic function of connecting the two nouns. The data set contains the following 14 prepositions: after, against, at, before, between, by, for, from, in, on, per, under, with, without.8 NPN phrase generation involves the application of two decomposition functions. In the first 8This dataset is available at http://clic.cimec. unitn.it/composes step we decompose using the modifier-noun rule (fdecompAN). We generate a noun from the head slot vector and the “adjective” vector is further decomposed using fdecompPN (returning the top noun which is not identical to the previously generated one). The results, in terms of top 1 accuracy and median rank, are shown in Table 4. Examples are given in Table 5. 
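As an illustration of this two-step generation procedure, here is a minimal numpy sketch (ours, with hypothetical variable names). It assumes the two decomposition matrices and POS-restricted lexicons (unit-normalised vector matrices paired with word lists for nouns and prepositions) are already available, mirroring the POS-consistent nearest-neighbour search used in the experiments.

```python
import numpy as np

def nearest_words(vec, lex_vecs, lex_words, t=1):
    """Return the t lexicon words whose (unit-normalised) vectors have the
    highest cosine similarity to vec."""
    sims = lex_vecs @ (vec / np.linalg.norm(vec))
    return [lex_words[i] for i in np.argsort(-sims)[:t]]

def decompose(p, W_dec):
    """Split a d-dimensional phrase vector into its two constituent vectors
    using a 2d x d decomposition matrix W'_R."""
    d = p.shape[0]
    uv = W_dec @ p
    return uv[:d], uv[d:]          # (modifier slot, head slot)

def generate_npn(p, W_dec_AN, W_dec_PN, nouns, preps):
    """AN -> NPN generation: the head slot of the AN decomposition yields the
    first noun; the modifier slot is decomposed again with the PN rule."""
    mod, head = decompose(p, W_dec_AN)
    n1 = nearest_words(head, *nouns)[0]
    prep_vec, n2_vec = decompose(mod, W_dec_PN)
    prep = nearest_words(prep_vec, *preps)[0]
    # return the top second noun that does not simply repeat the first one
    n2_cands = nearest_words(n2_vec, *nouns, t=2)
    n2 = n2_cands[0] if n2_cands[0] != n1 else n2_cands[1]
    return n1, prep, n2
```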
For observed phrase vector training, accuracy and rank are well above chance for all constituents (random accuracy 0.00005 for nouns and 0.04 for prepositions, corresponding median ranks: 10K, 12). Preposition generation is clearly a more difficult task. This is due at least in part to their highly ambiguous and broad semantics, and the way in which they interact with the nouns. For example, cable through ocean in Table 5 is a reasonable paraphrase of undersea cable despite the gold preposition being under. Other than several cases which are acceptable paraphrases but not in the gold standard, phrases related in meaning but not synonymous are the most common error (overcast skies →skies in sunshine). We also observe that often the A and N meanings are not fully separated when decomposing and “traces” of the adjective or of the original noun meaning can be found in both generated nouns (for example nearby school →schools after school). To a lesser degree, this might be desirable as a disambiguation-in-context effect as, for example, in underground cavern, in secret would not be a context-appropriate paraphrase of underground. 6 Noun phrase translation This section describes preliminary experiments performed in a cross-lingual setting on the task of composing English AN phrases and generating Italian translations. Creation of cross-lingual vector spaces A common semantic space is required in order to map words and phrases across languages. This problem has been extensively addressed in the bilingual lexicon acquisition literature (Haghighi et al., 2008; Koehn and Knight, 2002). We opt for a very simple yet accurate method (Klementiev et al., 2012; Rapp, 1999) in which a bilingual dictionary is used to identify a set of shared dimensions across spaces and the vectors of both languages are projected into the subspace defined by these (Subspace Projection - SP). This method is applicable to count-type vector spaces, for which the dimen629 Input Output Training cbow count A◦N N, P, N observed 0.98(1),0.08(5.5),0.13(20.5) 0.82(1),0.17(4.5),0.05(71.5) A◦N N, P, N composed 0.99(1),0.02(12), 0.12(24) 0.99(1),0.06(10), 0.05(150.5) Table 4: Top 1 accuracy (median rank) on the AN→NPN paraphrasing data set. AN phrases are composed and then recursively decomposed into N, (P, N). Comma-delimited scores reported for first noun, preposition, second noun in this order. Training is performed on observed (eq. 2) and composed (eq. 3) phrase representations. Input Output Gold mountainous region region in highlands region with mountains undersea cable cable through ocean cable under sea underground cavern cavern through rock cavern under ground interdisciplinary field field into research field between disciplines inter-war years years during 1930s years between wars post-operative pain pain through patient pain after operation pre-war days days after wartime days before war intergroup differences differences between intergroup differences between minorities superficial level level between levels level on surface Table 5: Examples of generating NPN phrases from composed ANs. sions correspond to actual words. As the cbow dimensions do not correspond to words, we align the cbow spaces by using a small dictionary to learn a linear map which transforms the English vectors into Italian ones as done in Mikolov et al. (2013b). This method (Translation Matrix - TM) is applicable to both cbow and count spaces. 
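A minimal sketch of this translation-matrix approach, assuming a seed dictionary whose paired word vectors are stacked row-wise; the function names and matrix orientation are ours, and the final nearest-neighbour lookup mirrors the monolingual search function:

```python
import numpy as np

def learn_translation_matrix(X_src, X_tgt):
    """Mikolov et al. (2013b)-style linear map: find W minimising
    ||X_src @ W - X_tgt||, where row i of X_src and X_tgt are the vectors of
    the i-th dictionary-paired source and target words."""
    W, *_ = np.linalg.lstsq(X_src, X_tgt, rcond=None)
    return W                                          # d_src x d_tgt

def translate(vec, W, L_v_tgt, L_w_tgt, t=1):
    """Map a source-language vector into the target space and return the t
    nearest target-language words by cosine similarity."""
    mapped = vec @ W
    L = L_v_tgt / np.linalg.norm(L_v_tgt, axis=1, keepdims=True)
    sims = L @ (mapped / np.linalg.norm(mapped))
    return [L_w_tgt[i] for i in np.argsort(-sims)[:t]]

# toy illustration: a 5K-entry seed dictionary of 300-dimensional vectors
d, n_seed = 300, 5000
X_en = np.random.randn(n_seed, d)
X_it = np.random.randn(n_seed, d)
W = learn_translation_matrix(X_en, X_it)
```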
We tune the parameters (TM or SP for count and dictionary size 5K or 25K for both spaces) on a standard task of translating English words into Italian. We obtain TM-5K for cbow and SP-25K for count as optimal settings. The two methods perform similarly for low frequency words while cbow-TM-5K significantly outperforms count-SP-25K for high frequency words. Our results for the cbow-TM-5K setting are similar to those reported by Mikolov et al. (2013b). Cross-lingual decomposition training Training proceeds as in the monolingual case, this time concatenating the training data sets and estimating a single (de)-composition function for the two languages in the shared semantic space. We train both on observed phrase representations (eq. 2) and on composed phrase representations (eq. 3). Adjective-noun translation dataset We randomly extract 1,000 AN-AN En-It phrase pairs from a phrase table built from parallel movie subtitles, available at http://opus.lingfil. uu.se/ (OpenSubtitles2012, en-it) (Tiedemann, 2012). Input Output cbow count A◦N(En) A,N (It) 0.31,0.59 0.24,0.54 A◦N (It) A,N(En) 0.50,0.62 0.28,0.48 Table 6: Accuracy of En→It and It→En phrase translation: phrases are composed in source language and decomposed in target language. Training on composed phrase representations (eq. (3)) (with observed phrase training (eq. 2) results are ≈50% lower). Results are presented in Table 6. While in these preliminary experiments we lack a proper term of comparison, the performance is very good both quantitatively (random < 0.0001) and qualitatively. The En→It examples in Table 7 are representative. In many cases (e.g., vicious killer, rough neighborhood) we generate translations that are arguably more natural than those in the gold standard. Again, some differences can be explained by different disambiguations (chest as breast, as in the generated translation, or box, as in the gold). Translation into related but not equivalent phrases and generating the same meaning in both constituents (stellar star) are again the most significant errors. We also see cases in which this has the desired effect of disambiguating the constituents, such as in the examples in Table 8, showing the nearest neighbours when translating black tie and indissoluble tie. 630 Input Output Gold vicious killer assassino feroce (ferocious killer) killer pericoloso spectacular woman donna affascinante (fascinating woman) donna eccezionale huge chest petto grande (big chest) scrigno immenso rough neighborhood zona malfamata (ill-repute zone) quartiere difficile mortal sin peccato eterno (eternal sin) pecato mortale canine star stella stellare (stellar star) star canina Table 7: En→It translation examples (back-translations of generated phrases in parenthesis). black tie cravatta (tie) nero (black) velluto (velvet) bianco (white) giacca (jacket) giallo (yellow) indissoluble tie alleanza (alliance) indissolubile (indissoluble) legame (bond) sacramentale (sacramental) amicizia (friendship) inscindibile (inseparable) Table 8: Top 3 translations of black tie and indissoluble tie, showing correct disambiguation of tie. 7 Generation confidence and generation quality In Section 3.2 we have defined a search function s returning a list of lexical nearest neighbours for a constituent vector produced by decomposition. Together with the neighbours, this function can naturally return their similarity score (in our case, the cosine). 
We call the score associated to the top neighbour the generation confidence: if this score is low, the vector has no good match in the lexicon. We observe significant Spearman correlations between the generation confidence of a constituent and its quality (e.g., accuracy, inverse rank) in all the experiments. For example, for the AN(En)→AN(It) experiment, the correlations between the confidence scores and the inverse ranks for As and Ns, for both cbow and count vectors, range between 0.34 (p < 1e−28) and 0.42. In the translation experiments, we can use this to automatically determine a subset on which we can translate with very high accuracy. Table 9 shows AN-AN accuracies and coverage when translating only if confidence is above a certain threshold. Throughout this paper we have assumed that the syntactic structure of the phrase to be generated is given. In future work we will exploit the correlation between confidence and quality for the purpose of eliminating this assumption. As a concrete example, we can use confidence scores to distinguish the two subsets of the AN-NPN dataset introduced in Section 5: the ANs which are paraphrasable with an NPN from those that do not En→It It→En Thr. Accuracy Cov. Accuracy Cov. 0.00 0.21 100% 0.32 100% 0.55 0.25 70% 0.40 63% 0.60 0.31 32% 0.45 37% 0.65 0.45 9% 0.52 16% Table 9: AN-AN translation accuracy (both A and N correct) when imposing a confidence threshold (random: 1/20K2). Figure 1: ROC of distinguishing ANs paraphrasable as NPNs from non-paraphrasable ones. have this property. We assign an AN to the NPNparaphrasable class if the mean confidence of the PN expansion in its attempted N(PN) decomposition is above a certain threshold. We plot the ROC curve in Figure 1. We obtain a significant AUC of 0.71. 8 Conclusion In this paper we have outlined a framework for the task of generation with distributional semantic models. We proposed a simple but effective approach to reverting the composition process to obtain meaningful reformulations of phrases through a synthesis-generation process. For future work we would like to experiment with more complex models for (de-)composition in order to improve the performance on the tasks we used in this paper. Following this, we 631 would like to extend the framework to handle arbitrary phrases, including making (confidencebased) choices on the syntactic structure of the phrase to be generated, which we have assumed to be given throughout this paper. In terms of applications, we believe that the line of research in machine translation that is currently focusing on replacing parallel resources with large amounts of monolingual text provides an interesting setup to test our methods. For example, Klementiev et al. (2012) reconstruct phrase tables based on phrase similarity scores in semantic space. However, they resort to scoring phrase pairs extracted from an aligned parallel corpus, as they do not have a method to freely generate these. Similarly, in the recent work on common vector spaces for the representation of images and text, the current emphasis is on retrieving existing captions (Socher et al., 2014) and not actual generation of image descriptions. From a more theoretical point of view, our work fills an important gap in distributional semantics, making it a bidirectional theory of the connection between language and meaning. We can now translate linguistic strings into vector “thoughts”, and the latter into their most appropriate linguistic expression. 
Several neuroscientific studies suggest that thoughts are represented in the brain by patterns of activation over broad neural areas, and vectors are a natural way to encode such patterns (Haxby et al., 2001; Huth et al., 2012). Some research has already established a connection between neural and distributional semantic vector spaces (Mitchell et al., 2008; Murphy et al., 2012). Generation might be the missing link to powerful computational models that take the neural footprint of a thought as input and produce its linguistic expression. Acknowledgments We thank Kevin Knight, Andrew Anderson, Roberto Zamparelli, Angeliki Lazaridou, Nghia The Pham, Germ´an Kruszewski and Peter Turney for helpful discussions and the anonymous reviewers for their useful comments. We acknowledge the ERC 2011 Starting Independent Research Grant n. 283554 (COMPOSES). References Alexandr Andoni and Piotr Indyk. 2008. Near-optimal hashing algorithms for approximate nearest neighbor in high dimensions. Commun. ACM, 51(1):117– 122, January. Jacob Andreas and Zoubin Ghahramani. 2013. A generative model of vector space semantics. In Proceedings of the Workshop on Continuous Vector Space Models and their Compositionality, pages 91– 99, Sofia, Bulgaria. Marco Baroni and Roberto Zamparelli. 2010. Nouns are vectors, adjectives are matrices: Representing adjective-noun constructions in semantic space. In Proceedings of EMNLP, pages 1183–1193, Boston, MA. Marco Baroni, Georgiana Dinu, and Germ´an Kruszewski. 2014. Don’t count, predict! A systematic comparison of context-counting vs. context-predicting semantic vectors. In Proceedings of ACL, To appear, Baltimore, MD. Marco Baroni. 2013. Composition in distributional semantics. Language and Linguistics Compass, 7(10):511–522. William Blacoe and Mirella Lapata. 2012. A comparison of vector-based representations for semantic composition. In Proceedings of EMNLP, pages 546–556, Jeju Island, Korea. Elia Bruni, Gemma Boleda, Marco Baroni, and Nam Khanh Tran. 2012. Distributional semantics in Technicolor. In Proceedings of ACL, pages 136– 145, Jeju Island, Korea. Georgiana Dinu, Nghia The Pham, and Marco Baroni. 2013. General estimation and evaluation of compositional distributional semantic models. In Proceedings of ACL Workshop on Continuous Vector Space Models and their Compositionality, pages 50– 58, Sofia, Bulgaria. Katrin Erk. 2012. Vector space models of word meaning and phrase meaning: A survey. Language and Linguistics Compass, 6(10):635–653. Andrea Frome, Greg Corrado, Jon Shlens, Samy Bengio, Jeff Dean, Marc’Aurelio Ranzato, and Tomas Mikolov. 2013. DeViSE: A deep visual-semantic embedding model. In Proceedings of NIPS, pages 2121–2129, Lake Tahoe, Nevada. Emiliano Guevara. 2010. A regression model of adjective-noun compositionality in distributional semantics. In Proceedings of GEMS, pages 33–37, Uppsala, Sweden. Aria Haghighi, Percy Liang, Taylor Berg-Kirkpatrick, and Dan Klein. 2008. Learning bilingual lexicons from monolingual corpora. In Proceedings of ACL, pages 771–779, Columbus, OH, USA, June. James Haxby, Ida Gobbini, Maura Furey, Alumit Ishai, Jennifer Schouten, and Pietro Pietrini. 2001. Distributed and overlapping representations of faces and objects in ventral temporal cortex. Science, 293:2425–2430. 632 Alexander Huth, Shinji Nishimoto, An Vu, and Jack Gallant. 2012. A continuous semantic space describes the representation of thousands of object and action categories across the human brain. Neuron, 76(6):1210–1224. Nal Kalchbrenner and Phil Blunsom. 2013. 
Recurrent continuous translation models. In Proceedings of the 2013 Conference on Empirical Methods in Natural Language Processing, Seattle, October. Association for Computational Linguistics. Colin Kelly, Barry Devereux, and Anna Korhonen. 2012. Semi-supervised learning for automatic conceptual property extraction. In Proceedings of the 3rd Workshop on Cognitive Modeling and Computational Linguistics, pages 11–20, Montreal, Canada. Alexandre Klementiev, Ann Irvine, Chris CallisonBurch, and David Yarowsky. 2012. Toward statistical machine translation without parallel corpora. In Proceedings of EACL, pages 130–140, Avignon, France. Philipp Koehn and Kevin Knight. 2002. Learning a translation lexicon from monolingual corpora. In In Proceedings of ACL Workshop on Unsupervised Lexical Acquisition, pages 9–16, Philadelphia, PA, USA. Tomas Mikolov, Kai Chen, Greg Corrado, and Jeffrey Dean. 2013a. Efficient estimation of word representations in vector space. http://arxiv.org/ abs/1301.3781/. Tomas Mikolov, Quoc Le, and Ilya Sutskever. 2013b. Exploiting similarities among languages for Machine Translation. http://arxiv.org/abs/ 1309.4168. Jeff Mitchell and Mirella Lapata. 2008. Vector-based models of semantic composition. In Proceedings of ACL, pages 236–244, Columbus, OH. Jeff Mitchell and Mirella Lapata. 2010. Composition in distributional models of semantics. Cognitive Science, 34(8):1388–1429. Tom Mitchell, Svetlana Shinkareva, Andrew Carlson, Kai-Min Chang, Vincente Malave, Robert Mason, and Marcel Just. 2008. Predicting human brain activity associated with the meanings of nouns. Science, 320:1191–1195. Brian Murphy, Partha Talukdar, and Tom Mitchell. 2012. Selecting corpus-semantic models for neurolinguistic decoding. In Proceedings of *SEM, pages 114–123, Montreal, Canada. Reinhard Rapp. 1999. Automatic identification of word translations from unrelated english and german corpora. In Proceedings of the 37th annual meeting of the Association for Computational Linguistics on Computational Linguistics, ACL ’99, pages 519– 526. Association for Computational Linguistics. Richard Socher, Eric Huang, Jeffrey Pennin, Andrew Ng, and Christopher Manning. 2011. Dynamic pooling and unfolding recursive autoencoders for paraphrase detection. In Proceedings of NIPS, pages 801–809, Granada, Spain. Richard Socher, Milind Ganjoo, Christopher Manning, and Andrew Ng. 2013. Zero-shot learning through cross-modal transfer. In Proceedings of NIPS, pages 935–943, Lake Tahoe, Nevada. Richard Socher, Quoc Le, Christopher Manning, and Andrew Ng. 2014. Grounded compositional semantics for finding and describing images with sentences. Transactions of the Association for Computational Linguistics. In press. J¨org Tiedemann. 2012. Parallel data, tools and interfaces in opus. In Proceedings of the Eight International Conference on Language Resources and Evaluation (LREC’12), Istanbul, Turkey. Peter Turney and Patrick Pantel. 2010. From frequency to meaning: Vector space models of semantics. Journal of Artificial Intelligence Research, 37:141–188. Peter Turney. 2012. Domain and function: A dualspace model of semantic relations and compositions. Journal of Artificial Intelligence Research, 44:533– 585. Fabio Zanzotto, Ioannis Korkontzelos, Francesca Falucchi, and Suresh Manandhar. 2010. Estimating linear models for compositional distributional semantics. In Proceedings of COLING, pages 1263– 1271, Beijing, China. 633
Multilingual Models for Compositional Distributed Semantics Karl Moritz Hermann and Phil Blunsom Department of Computer Science University of Oxford Oxford, OX1 3QD, UK {karl.moritz.hermann,phil.blunsom}@cs.ox.ac.uk Abstract We present a novel technique for learning semantic representations, which extends the distributional hypothesis to multilingual data and joint-space embeddings. Our models leverage parallel data and learn to strongly align the embeddings of semantically equivalent sentences, while maintaining sufficient distance between those of dissimilar sentences. The models do not rely on word alignments or any syntactic information and are successfully applied to a number of diverse languages. We extend our approach to learn semantic representations at the document level, too. We evaluate these models on two cross-lingual document classification tasks, outperforming the prior state of the art. Through qualitative analysis and the study of pivoting effects we demonstrate that our representations are semantically plausible and can capture semantic relationships across languages without parallel data. 1 Introduction Distributed representations of words provide the basis for many state-of-the-art approaches to various problems in natural language processing today. Such word embeddings are naturally richer representations than those of symbolic or discrete models, and have been shown to be able to capture both syntactic and semantic information. Successful applications of such models include language modelling (Bengio et al., 2003), paraphrase detection (Erk and Padó, 2008), and dialogue analysis (Kalchbrenner and Blunsom, 2013). Within a monolingual context, the distributional hypothesis (Firth, 1957) forms the basis of most approaches for learning word representations. In this work, we extend this hypothesis to multilingual data and joint-space embeddings. (Figure 1: Model with parallel input sentences a and b. The model minimises the distance between the sentence level encoding of the bitext. Any composition functions (CVM) can be used to generate the compositional sentence level representations.) We present a novel unsupervised technique for learning semantic representations that leverages parallel corpora and employs semantic transfer through compositional representations. Unlike most methods for learning word representations, which are restricted to a single language, our approach learns to represent meaning across languages in a shared multilingual semantic space. We present experiments on two corpora. First, we show that for cross-lingual document classification on the Reuters RCV1/RCV2 corpora (Lewis et al., 2004), we outperform the prior state of the art (Klementiev et al., 2012). Second, we also present classification results on a massively multilingual corpus which we derive from the TED corpus (Cettolo et al., 2012). The results on this task, in comparison with a number of strong baselines, further demonstrate the relevance of our approach and the success of our method in learning multilingual semantic representations over a wide range of languages. 2 Overview Distributed representation learning describes the task of learning continuous representations for discrete objects.
Here, we focus on learning semantic representations and investigate how the use of multilingual data can improve learning such representations at the word and higher level. We present a model that learns to represent each word in a lexicon by a continuous vector in Rd. Such distributed representations allow a model to share meaning between similar words, and have been used to capture semantic, syntactic and morphological content (Collobert and Weston, 2008; Turian et al., 2010, inter alia). We describe a multilingual objective function that uses a noise-contrastive update between semantic representations of different languages to learn these word embeddings. As part of this, we use a compositional vector model (CVM, henceforth) to compute semantic representations of sentences and documents. A CVM learns semantic representations of larger syntactic units given the semantic representations of their constituents (Clark and Pulman, 2007; Mitchell and Lapata, 2008; Baroni and Zamparelli, 2010; Grefenstette and Sadrzadeh, 2011; Socher et al., 2012; Hermann and Blunsom, 2013, inter alia). A key difference between our approach and those listed above is that we only require sentencealigned parallel data in our otherwise unsupervised learning function. This removes a number of constraints that normally come with CVM models, such as the need for syntactic parse trees, word alignment or annotated data as a training signal. At the same time, by using multiple CVMs to transfer information between languages, we enable our models to capture a broader semantic context than would otherwise be possible. The idea of extracting semantics from multilingual data stems from prior work in the field of semantic grounding. Language acquisition in humans is widely seen as grounded in sensory-motor experience (Bloom, 2001; Roy, 2003). Based on this idea, there have been some attempts at using multi-modal data for learning better vector representations of words (e.g. Srivastava and Salakhutdinov (2012)). Such methods, however, are not easily scalable across languages or to large amounts of data for which no secondary or tertiary representation might exist. Parallel data in multiple languages provides an alternative to such secondary representations, as parallel texts share their semantics, and thus one language can be used to ground the other. Some work has exploited this idea for transferring linguistic knowledge into low-resource languages or to learn distributed representations at the word level (Klementiev et al., 2012; Zou et al., 2013; Lauly et al., 2013, inter alia). So far almost all of this work has been focused on learning multilingual representations at the word level. As distributed representations of larger expressions have been shown to be highly useful for a number of tasks, it seems to be a natural next step to attempt to induce these, too, cross-lingually. 3 Approach Most prior work on learning compositional semantic representations employs parse trees on their training data to structure their composition functions (Socher et al., 2012; Hermann and Blunsom, 2013, inter alia). Further, these approaches typically depend on specific semantic signals such as sentiment- or topic-labels for their objective functions. While these methods have been shown to work in some cases, the need for parse trees and annotated data limits such approaches to resourcefortunate languages. 
Our novel method for learning compositional vectors removes these requirements, and as such can more easily be applied to low-resource languages. Specifically, we attempt to learn semantics from multilingual data. The idea is that, given enough parallel data, a shared representation of two parallel sentences would be forced to capture the common elements between these two sentences. What parallel sentences share, of course, are their semantics. Naturally, different languages express meaning in different ways. We utilise this diversity to abstract further from mono-lingual surface realisations to deeper semantic representations. We exploit this semantic similarity across languages by defining a bilingual (and trivially multilingual) energy as follows. Assume two functions f : X → R^d and g : Y → R^d, which map sentences from languages x and y onto distributed semantic representations in R^d. Given a parallel corpus C, we then define the energy of the model given two sentences (a, b) ∈ C as: E_bi(a, b) = ∥f(a) − g(b)∥² (1) We want to minimize E_bi for all semantically equivalent sentences in the corpus. In order to prevent the model from degenerating, we further introduce a noise-contrastive large-margin update which ensures that the representations of non-aligned sentences observe a certain margin from each other. For every pair of parallel sentences (a, b) we sample a number of additional sentence pairs (·, n) ∈ C, where n, with high probability, is not semantically equivalent to a. We use these noise samples as follows: E_hl(a, b, n) = [m + E_bi(a, b) − E_bi(a, n)]_+ where [x]_+ = max(x, 0) denotes the standard hinge loss and m is the margin. This results in the following objective function: J(θ) = Σ_{(a,b)∈C} ( Σ_{i=1}^{k} E_hl(a, b, n_i) + (λ/2) ∥θ∥² ) (2) where θ is the set of all model variables. 3.1 Two Composition Models The objective function in Equation 2 could be coupled with any two given vector composition functions f, g from the literature. As we aim to apply our approach to a wide range of languages, we focus on composition functions that do not require any syntactic information. We evaluate the following two composition functions. The first model, ADD, represents a sentence by the sum of its word vectors. This is a distributed bag-of-words approach, as sentence ordering is not taken into account by the model. Second, the BI model is designed to capture bigram information, using a non-linearity over bigram pairs in its composition function: f(x) = Σ_{i=1}^{n} tanh(x_{i−1} + x_i) (3) The use of a non-linearity enables the model to learn interesting interactions between words in a document, which the bag-of-words approach of ADD is not capable of learning. We use the hyperbolic tangent as activation function. 3.2 Document-level Semantics For a number of tasks, such as topic modelling, representations of objects beyond the sentence level are required. While most approaches to compositional distributed semantics end at the word level, our model extends to document-level learning quite naturally, by recursively applying the composition and objective function (Equation 2) to compose sentences into documents. (Figure 2: Description of a parallel document-level compositional vector model (DOC). The model recursively computes semantic representations for each sentence of a document and then for the document itself, treating the sentence vectors as inputs for a second CVM.) This is achieved by first computing semantic representations for each sentence in a document.
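Before turning to the document level, here is a minimal numpy sketch (ours, not the authors' implementation) of the two composition functions and the noise-contrastive objective in eqs. (1)–(3) together with the hinge loss E_hl. It only evaluates the loss for given word vectors, leaving out the gradients and the AdaGrad updates used for training; the shared composition function for f and g is a simplification.

```python
import numpy as np

def compose_add(word_vecs):
    """ADD composition: a sentence is the sum of its word vectors."""
    return word_vecs.sum(axis=0)

def compose_bi(word_vecs):
    """BI composition (Eq. 3): sum of tanh over adjacent word-vector pairs."""
    return np.tanh(word_vecs[:-1] + word_vecs[1:]).sum(axis=0)

def energy_bi(a_vecs, b_vecs, compose=compose_add):
    """Eq. (1): squared distance between the two sentence representations."""
    diff = compose(a_vecs) - compose(b_vecs)
    return float(diff @ diff)

def energy_hinge(a_vecs, b_vecs, n_vecs, m, compose=compose_add):
    """E_hl(a, b, n) = [m + E_bi(a, b) - E_bi(a, n)]_+ ."""
    return max(0.0, m + energy_bi(a_vecs, b_vecs, compose)
                      - energy_bi(a_vecs, n_vecs, compose))

def objective(pairs, sample_noise, k, m, theta_sq_norm, lam, compose=compose_add):
    """Eq. (2): for each parallel pair, k noise-contrastive hinge terms plus the
    L2 penalty (lambda/2)||theta||^2, summed over the corpus."""
    return sum(
        sum(energy_hinge(a, b, sample_noise(), m, compose) for _ in range(k))
        + 0.5 * lam * theta_sq_norm
        for (a, b) in pairs
    )

# toy check: two "parallel" sentences of 4-d word vectors and one noise sentence
rng = np.random.default_rng(1)
a, b, n = rng.standard_normal((3, 4)), rng.standard_normal((2, 4)), rng.standard_normal((5, 4))
print(energy_hinge(a, b, n, m=4.0, compose=compose_bi))
```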
Next, these representations are used as inputs in a higher-level CVM, computing a semantic representation of a document (Figure 2). This recursive approach integrates documentlevel representations into the learning process. We can thus use corpora of parallel documents— regardless of whether they are sentence aligned or not—to propagate a semantic signal back to the individual words. If sentence alignment is available, of course, the document-signal can simply be combined with the sentence-signal, as we did with the experiments described in §5.3. This concept of learning compositional representations for documents contrasts with prior work (Socher et al., 2011; Klementiev et al., 2012, inter alia) who rely on summing or averaging sentencevectors if representations beyond the sentencelevel are required for a particular task. We evaluate the models presented in this paper both with and without the document-level signal. We refer to the individual models used as ADD and BI if used without, and as DOC/ADD and DOC/BI is used with the additional document composition function and error signal. 4 Corpora We use two corpora for learning semantic representations and performing the experiments described in this paper. 60 The Europarl corpus v71 (Koehn, 2005) was used during initial development and testing of our approach, as well as to learn the representations used for the Cross-Lingual Document Classification task described in §5.2. We considered the English-German and English-French language pairs from this corpus. From each pair the final 100,000 sentences were reserved for development. Second, we developed a massively multilingual corpus based on the TED corpus2 for IWSLT 2013 (Cettolo et al., 2012). This corpus contains English transcriptions and multilingual, sentencealigned translations of talks from the TED conference. While the corpus is aimed at machine translation tasks, we use the keywords associated with each talk to build a subsidiary corpus for multilingual document classification as follows.3 The development sections provided with the IWSLT 2013 corpus were again reserved for development. We removed approximately 10 percent of the training data in each language to create a test corpus (all talks with id ≥1,400). The new training corpus consists of a total of 12,078 parallel documents distributed across 12 language pairs4. In total, this amounts to 1,678,219 nonEnglish sentences (the number of unique English sentences is smaller as many documents are translated into multiple languages and thus appear repeatedly in the corpus). Each document (talk) contains one or several keywords. We used the 15 most frequent keywords for the topic classification experiments described in section §5.3. Both corpora were pre-processed using the set of tools provided by cdec5 for tokenizing and lowercasing the data. Further, all empty sentences and their translations were removed from the corpus. 5 Experiments We report results on two experiments. First, we replicate the cross-lingual document classification task of Klementiev et al. (2012), learning distributed representations on the Europarl corpus and evaluating on documents from the Reuters RCV1/RCV2 corpora. Subsequently, we design a 1http://www.statmt.org/europarl/ 2https://wit3.fbk.eu/ 3http://www.clg.ox.ac.uk/tedcldc/ 4English to Arabic, German, French, Spanish, Italian, Dutch, Polish, Brazilian Portuguese, Romanian, Russian and Turkish. Chinese, Farsi and Slowenian were removed due to the small size of those datasets. 
5http://cdec-decoder.org/ multi-label classification task using the TED corpus, both for training and evaluating. The use of a wider range of languages in the second experiments allows us to better evaluate our models’ capabilities in learning a shared multilingual semantic representation. We also investigate the learned embeddings from a qualitative perspective in §5.4. 5.1 Learning All model weights were randomly initialised using a Gaussian distribution (µ=0, σ2=0.1). We used the available development data to set our model parameters. For each positive sample we used a number of noise samples (k ∈{1, 10, 50}), randomly drawn from the corpus at each training epoch. All our embeddings have dimensionality d=128, with the margin set to m=d.6 Further, we use L2 regularization with λ=1 and step-size in {0.01, 0.05}. We use 100 iterations for the RCV task, 500 for the TED single and 5 for the joint corpora. We use the adaptive gradient method, AdaGrad (Duchi et al., 2011), for updating the weights of our models, in a mini-batch setting (b ∈ {10, 50}). All settings, our model implementation and scripts to replicate our experiments are available at http://www.karlmoritz.com/. 5.2 RCV1/RCV2 Document Classification We evaluate our models on the cross-lingual document classification (CLDC, henceforth) task first described in Klementiev et al. (2012). This task involves learning language independent embeddings which are then used for document classification across the English-German language pair. For this, CLDC employs a particular kind of supervision, namely using supervised training data in one language and evaluating without further supervision in another. Thus, CLDC can be used to establish whether our learned representations are semantically useful across multiple languages. We follow the experimental setup described in Klementiev et al. (2012), with the exception that we learn our embeddings using solely the Europarl data and use the Reuters corpora only during for classifier training and testing. Each document in the classification task is represented by the average of the d-dimensional representations of all its sentences. We train the multiclass classifier using an averaged perceptron (Collins, 2002) with the same settings as in Klementiev et al. (2012). 6On the RCV task we also report results for d=40 which matches the dimensionality of Klementiev et al. (2012). 61 Model en →de de →en Majority Class 46.8 46.8 Glossed 65.1 68.6 MT 68.1 67.4 I-Matrix 77.6 71.1 dim = 40 ADD 83.7 71.4 ADD+ 86.2 76.9 BI 83.4 69.2 BI+ 86.9 74.3 dim = 128 ADD 86.4 74.7 ADD+ 87.7 77.5 BI 86.1 79.0 BI+ 88.1 79.2 Table 1: Classification accuracy for training on English and German with 1000 labeled examples on the RCV corpus. Cross-lingual compositional representations (ADD, BI and their multilingual extensions), I-Matrix (Klementiev et al., 2012) translated (MT) and glossed (Glossed) word baselines, and the majority class baseline. The baseline results are from Klementiev et al. (2012). We present results from four models. The ADD model is trained on 500k sentence pairs of the English-German parallel section of the Europarl corpus. The ADD+ model uses an additional 500k parallel sentences from the English-French corpus, resulting in one million English sentences, each paired up with either a German or a French sentence, with BI and BI+ trained accordingly. The motivation behind ADD+ and BI+ is to investigate whether we can learn better embeddings by introducing additional data from other languages. 
A similar idea exists in machine translation where English is frequently used to pivot between other languages (Cohn and Lapata, 2007). The actual CLDC experiments are performed by training on English and testing on German documents and vice versa. Following prior work, we use varying sizes between 100 and 10,000 documents when training the multiclass classifier. The results of this task across training sizes are in Figure 3. Table 1 shows the results for training on 1,000 documents compared with the results published in Klementiev et al. (2012). Our models outperform the prior state of the art, with the BI models performing slightly better than the ADD models. As the relative results indicate, the addition of a second language improves model performance. It it interesting to note that results improve in both directions of the task, even though no additional German data was used for the ‘+‘ models. 5.3 TED Corpus Experiments Here we describe our experiments on the TED corpus, which enables us to scale up to multilingual learning. Consisting of a large number of relatively short and parallel documents, this corpus allows us to evaluate the performance of the DOC model described in §3.2. We use the training data of the corpus to learn distributed representations across 12 languages. Training is performed in two settings. In the single mode, vectors are learnt from a single language pair (en-X), while in the joint mode vectorlearning is performed on all parallel sub-corpora simultaneously. This setting causes words from all languages to be embedded in a single semantic space. First, we evaluate the effect of the documentlevel error signal (DOC, described in §3.2), as well as whether our multilingual learning method can extend to a larger variety of languages. We train DOC models, using both ADD and BI as CVM (DOC/ADD, DOC/BI), both in the single and joint mode. For comparison, we also train ADD and DOC models without the document-level error signal. The resulting document-level representations are used to train classifiers (system and settings as in §5.2) for each language, which are then evaluated in the paired language. In the English case we train twelve individual classifiers, each using the training data of a single language pair only. As described in §4, we use 15 keywords for the classification task. Due to space limitations, we report cumulative results in the form of F1-scores throughout this paper. MT System We develop a machine translation baseline as follows. We train a machine translation tool on the parallel training data, using the development data of each language pair to optimize the translation system. We use the cdec decoder (Dyer et al., 2010) with default settings for this purpose. With this system we translate the test data, and then use a Na¨ıve Bayes classifier7 for the actual experiments. To exemplify, this means the de→ar result is produced by training a translation system from Arabic to German. The Arabic test set is translated into German. A classifier is then trained 7We use the implementation in Mallet (McCallum, 2002) 62 Setting Languages Arabic German Spanish French Italian Dutch Polish Pt-Br Roman. 
Russian Turkish en →L2 MT System 0.429 0.465 0.518 0.526 0.514 0.505 0.445 0.470 0.493 0.432 0.409 ADD single 0.328 0.343 0.401 0.275 0.282 0.317 0.141 0.227 0.282 0.338 0.241 BI single 0.375 0.360 0.379 0.431 0.465 0.421 0.435 0.329 0.426 0.423 0.481 DOC/ADD single 0.410 0.424 0.383 0.476 0.485 0.264 0.402 0.354 0.418 0.448 0.452 DOC/BI single 0.389 0.428 0.416 0.445 0.473 0.219 0.403 0.400 0.467 0.421 0.457 DOC/ADD joint 0.392 0.405 0.443 0.447 0.475 0.453 0.394 0.409 0.446 0.476 0.417 DOC/BI joint 0.372 0.369 0.451 0.429 0.404 0.433 0.417 0.399 0.453 0.439 0.418 L2 →en MT System 0.448 0.469 0.486 0.358 0.481 0.463 0.460 0.374 0.486 0.404 0.441 ADD single 0.380 0.337 0.446 0.293 0.357 0.295 0.327 0.235 0.293 0.355 0.375 BI single 0.354 0.411 0.344 0.426 0.439 0.428 0.443 0.357 0.426 0.442 0.403 DOC/ADD single 0.452 0.476 0.422 0.464 0.461 0.251 0.400 0.338 0.407 0.471 0.435 DOC/BI single 0.406 0.442 0.365 0.479 0.460 0.235 0.393 0.380 0.426 0.467 0.477 DOC/ADD joint 0.396 0.388 0.399 0.415 0.461 0.478 0.352 0.399 0.412 0.343 0.343 DOC/BI joint 0.343 0.375 0.369 0.419 0.398 0.438 0.353 0.391 0.430 0.375 0.388 Table 2: F1-scores for the TED document classification task for individual languages. Results are reported for both directions (training on English, evaluating on L2 and vice versa). Bold indicates best result, underline best result amongst the vector-based systems. Training Language Test Language Arabic German Spanish French Italian Dutch Polish Pt-Br Rom’n Russian Turkish Arabic 0.378 0.436 0.432 0.444 0.438 0.389 0.425 0.420 0.446 0.397 German 0.368 0.474 0.460 0.464 0.440 0.375 0.417 0.447 0.458 0.443 Spanish 0.353 0.355 0.420 0.439 0.435 0.415 0.390 0.424 0.427 0.382 French 0.383 0.366 0.487 0.474 0.429 0.403 0.418 0.458 0.415 0.398 Italian 0.398 0.405 0.461 0.466 0.393 0.339 0.347 0.376 0.382 0.352 Dutch 0.377 0.354 0.463 0.464 0.460 0.405 0.386 0.415 0.407 0.395 Polish 0.359 0.386 0.449 0.444 0.430 0.441 0.401 0.434 0.398 0.408 Portuguese 0.391 0.392 0.476 0.447 0.486 0.458 0.403 0.457 0.431 0.431 Romanian 0.416 0.320 0.473 0.476 0.460 0.434 0.416 0.433 0.444 0.402 Russian 0.372 0.352 0.492 0.427 0.438 0.452 0.430 0.419 0.441 0.447 Turkish 0.376 0.352 0.479 0.433 0.427 0.423 0.439 0.367 0.434 0.411 Table 3: F1-scores for TED corpus document classification results when training and testing on two languages that do not share any parallel data. We train a DOC/ADD model on all en-L2 language pairs together, and then use the resulting embeddings to train document classifiers in each language. These classifiers are subsequently used to classify data from all other languages. Setting Languages English Arabic German Spanish French Italian Dutch Polish Pt-Br Roman. Russian Turkish Raw Data NB 0.481 0.469 0.471 0.526 0.532 0.524 0.522 0.415 0.465 0.509 0.465 0.513 Senna 0.400 Polyglot 0.382 0.416 0.270 0.418 0.361 0.332 0.228 0.323 0.194 0.300 0.402 0.295 single Setting DOC/ADD 0.462 0.422 0.429 0.394 0.481 0.458 0.252 0.385 0.363 0.431 0.471 0.435 DOC/BI 0.474 0.432 0.362 0.336 0.444 0.469 0.197 0.414 0.395 0.445 0.436 0.428 joint Setting DOC/ADD 0.475 0.371 0.386 0.472 0.451 0.398 0.439 0.304 0.394 0.453 0.402 0.441 DOC/BI 0.378 0.329 0.358 0.472 0.454 0.399 0.409 0.340 0.431 0.379 0.395 0.435 Table 4: F1-scores on the TED corpus document classification task when training and evaluating on the same language. Baseline embeddings are Senna (Collobert et al., 2011) and Polyglot (Al-Rfou’ et al., 2013). 
63 100 200 500 1000 5000 10k 60 70 80 Training Documents (de) Classification Accuracy (%) 100 200 500 1000 5000 10k 50 60 70 80 90 Training Documents (en) ADD+ BI+ I-Matrix MT Glossed Figure 3: Classification accuracy for a number of models (see Table 1 for model descriptions). The left chart shows results for these models when trained on German data and evaluated on English data, the right chart vice versa. on the German training data and evaluated on the translated Arabic. While we developed this system as a baseline, it must be noted that the classifier of this system has access to significantly more information (all words in the document) as opposed to our models (one embedding per document), and we do not expect to necessarily beat this system. The results of this experiment are in Table 2. When comparing the results between the ADD model and the models trained using the documentlevel error signal, the benefit of this additional signal becomes clear. The joint training mode leads to a relative improvement when training on English data and evaluating in a second language. This suggests that the joint mode improves the quality of the English embeddings more than it affects the L2-embeddings. More surprising, perhaps, is the relative performance between the ADD and BI composition functions, especially when compared to the results in §5.2, where the BI models relatively consistently performed better. We suspect that the better performance of the additive composition function on this task is related to the smaller amount of training data available which could cause sparsity issues for the bigram model. As expected, the MT system slightly outperforms our models on most language pairs. However, the overall performance of the models is comparable to that of the MT system. Considering the relative amount of information available during the classifier training phase, this indicates that our learned representations are semantically useful, capturing almost the same amount of information as available to the Na¨ıve Bayes classifier. We next investigate linguistic transfer across languages. We re-use the embeddings learned with the DOC/ADD joint model from the previous experiment for this purpose, and train classifiers on all non-English languages using those embeddings. Subsequently, we evaluate their performance in classifying documents in the remaining languages. Results for this task are in Table 3. While the results across language-pairs might not be very insightful, the overall good performance compared with the results in Table 2 implies that we learnt semantically meaningful vectors and in fact a joint embedding space across thirteen languages. In a third evaluation (Table 4), we apply the embeddings learnt with out models to a monolingual classification task, enabling us to compare with prior work on distributed representation learning. In this experiment a classifier is trained in one language and then evaluated in the same. We again use a Na¨ıve Bayes classifier on the raw data to establish a reasonable upper bound. We compare our embeddings with the SENNA embeddings, which achieve state of the art performance on a number of tasks (Collobert et al., 2011). Additionally, we use the Polyglot embeddings of Al-Rfou’ et al. (2013), who published word embeddings across 100 languages, including all languages considered in this paper. We represent each document by the mean of its word vectors and then apply the same classifier training and testing regime as with our models. 
Even though both of these sets of embeddings were trained on much larger datasets than ours, our models outperform these baselines on all languages, even outperforming the Naïve Bayes system on several languages. While this may partly be attributed to the fact that our vectors were learned on in-domain data, this is still a very positive outcome.

Figure 4: t-SNE projections for a number of English, French and German words as represented by the BI+ model. Even though the model did not use any parallel French-German data during training, it learns semantic similarity between these two languages using English as a pivot, and semantically clusters words across all languages.

Figure 5: t-SNE projections for a number of short phrases in three languages as represented by the BI+ model. The projection demonstrates linguistic transfer through a pivot. It separates phrases by gender (red for female, blue for male, and green for neutral) and aligns matching phrases across languages.

5.4 Linguistic Analysis

While the classification experiments focused on establishing the semantic content of the sentence-level representations, we also want to briefly investigate the induced word embeddings. We use the BI+ model trained on the Europarl corpus for this purpose. Figure 4 shows the t-SNE projections for a number of English, French and German words. Even though the model did not use any parallel French-German data during training, it still managed to learn semantic word-word similarity across these two languages. Going one step further, Figure 5 shows t-SNE projections for a number of short phrases in these three languages. We use the English "the president" and the gender-specific expressions "Mr President" and "Madam President", as well as their gender-specific equivalents in French and German. The projection demonstrates a number of interesting results: First, the model correctly clusters the words into three groups, corresponding to the three English forms and their associated translations. Second, a separation between genders can be observed, with male forms on the bottom half of the chart and female forms on the top, with the neutral "the president" in the vertical middle. Finally, if we assume a horizontal line going through "the president", this line could be interpreted as a "gender divide", with male and female versions of one expression mirroring each other on that line. In the case of "the president" and its translations, this effect becomes even clearer, with the neutral English expression being projected close to the mid-point between the gender-specific versions in each of the other languages. These results further support our hypothesis that the bilingual contrastive error function can learn semantically plausible embeddings and, furthermore, that it can abstract away from mono-lingual surface realisations into a shared semantic space across languages.

6 Related Work

Distributed Representations Distributed representations can be learned through a number of approaches. In their simplest form, distributional information from large corpora can be used to learn embeddings, where the words appearing within a certain window of the target word are used to compute that word's embedding. This is related to topic-modelling techniques such as LSA (Dumais et al., 1988), LSI, and LDA (Blei et al., 2003), but these methods use a document-level context, and tend to capture the topics a word is used in rather than its more immediate syntactic context.
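To make the window-based notion of distributional context concrete, the following sketch counts, for every word, the words occurring within a fixed symmetric window. It is a generic illustration of this family of methods, not the procedure of any particular cited model; the window size of 2 is an arbitrary choice.

```python
from collections import Counter, defaultdict

def window_cooccurrences(sentences, window=2):
    """Count, for each target word, the words appearing within `window`
    positions of it; the counts form the word's distributional context."""
    contexts = defaultdict(Counter)
    for sent in sentences:
        for i, target in enumerate(sent):
            lo, hi = max(0, i - window), min(len(sent), i + window + 1)
            for j in range(lo, hi):
                if j != i:
                    contexts[target][sent[j]] += 1
    return contexts

# contexts["bank"] is then a sparse count vector over neighbouring words,
# which can be normalised or reweighted (e.g., with PMI) before use.
```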
Neural language models are another popular approach for inducing distributed word representations (Bengio et al., 2003). They have received a lot of attention in recent years (Collobert and Weston, 2008; Mnih and Hinton, 2009; Mikolov et al., 2010, inter alia) and have achieved state of the art performance in language modelling. Collobert et al. (2011) further popularised using neural network architectures for learning word embeddings from large amounts of largely unlabelled data by showing the embeddings can then be used to improve standard supervised tasks. 65 Unsupervised word representations can easily be plugged into a variety of NLP related tasks. Tasks, where the use of distributed representations has resulted in improvements include topic modelling (Blei et al., 2003) or named entity recognition (Turian et al., 2010; Collobert et al., 2011). Compositional Vector Models For a number of important problems, semantic representations of individual words do not suffice, but instead a semantic representation of a larger structure—e.g. a phrase or a sentence—is required. Self-evidently, sparsity prevents the learning of such representations using the same collocational methods as applied to the word level. Most literature instead focuses on learning composition functions that represent the semantics of a larger structure as a function of the representations of its parts. Very simple composition functions have been shown to suffice for tasks such as judging bigram semantic similarity (Mitchell and Lapata, 2008). More complex composition functions using matrix-vector composition, convolutional neural networks or tensor composition have proved useful in tasks such as sentiment analysis (Socher et al., 2011; Hermann and Blunsom, 2013), relational similarity (Turney, 2012) or dialogue analysis (Kalchbrenner and Blunsom, 2013). Multilingual Representation Learning Most research on distributed representation induction has focused on single languages. English, with its large number of annotated resources, has enjoyed most attention. However, there exists a corpus of prior work on learning multilingual embeddings or on using parallel data to transfer linguistic information across languages. One has to differentiate between approaches such as Al-Rfou’ et al. (2013), that learn embeddings across a large variety of languages and models such as ours, that learn joint embeddings, that is a projection into a shared semantic space across multiple languages. Related to our work, Yih et al. (2011) proposed S2Nets to learn joint embeddings of tf-idf vectors for comparable documents. Their architecture optimises the cosine similarity of documents, using relative semantic similarity scores during learning. More recently, Lauly et al. (2013) proposed a bag-of-words autoencoder model, where the bagof-words representation in one language is used to train the embeddings in another. By placing their vocabulary in a binary branching tree, the probabilistic setup of this model is similar to that of Mnih and Hinton (2009). Similarly, Sarath Chandar et al. (2013) train a cross-lingual encoder, where an autoencoder is used to recreate words in two languages in parallel. This is effectively the linguistic extension of Ngiam et al. (2011), who used a similar method for audio and video data. 
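The "very simple composition functions" mentioned above for building phrase or sentence vectors out of word vectors reduce, in their most basic form, to element-wise addition or multiplication (the two operations studied by Mitchell and Lapata, 2008). The sketch below is only a generic illustration; it is not the exact formulation of the ADD or BI objectives used in this paper.

```python
import numpy as np

def compose_additive(word_vecs):
    """Additive composition: the phrase vector is the sum of its word vectors."""
    return np.sum(word_vecs, axis=0)

def compose_multiplicative(word_vecs):
    """Element-wise multiplicative composition over the word vectors."""
    return np.prod(np.vstack(word_vecs), axis=0)
```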
Hermann and Blunsom (2014) propose a largemargin learner for multilingual word representations, similar to the basic additive model proposed here, which, like the approaches above, relies on a bag-of-words model for sentence representations. Klementiev et al. (2012), our baseline in §5.2, use a form of multi-agent learning on wordaligned parallel data to transfer embeddings from one language to another. Earlier work, Haghighi et al. (2008), proposed a method for inducing bilingual lexica using monolingual feature representations and a small initial lexicon to bootstrap with. This approach has recently been extended by Mikolov et al. (2013a), Mikolov et al. (2013b), who developed a method for learning transformation matrices to convert semantic vectors of one language into those of another. Is was demonstrated that this approach can be applied to improve tasks related to machine translation. Their CBOW model is also worth noting for its similarities to the ADD composition function used here. Using a slightly different approach, Zou et al. (2013), also learned bilingual embeddings for machine translation. 7 Conclusion To summarize, we have presented a novel method for learning multilingual word embeddings using parallel data in conjunction with a multilingual objective function for compositional vector models. This approach extends the distributional hypothesis to multilingual joint-space representations. Coupled with very simple composition functions, vectors learned with this method outperform the state of the art on the task of cross-lingual document classification. Further experiments and analysis support our hypothesis that bilingual signals are a useful tool for learning distributed representations by enabling models to abstract away from mono-lingual surface realisations into a deeper semantic space. Acknowledgements This work was supported by a Xerox Foundation Award and EPSRC grant number EP/K036580/1. 66 References R. Al-Rfou’, B. Perozzi, and S. Skiena. 2013. Polyglot: Distributed word representations for multilingual nlp. In Proceedings of CoNLL. M. Baroni and R. Zamparelli. 2010. Nouns are vectors, adjectives are matrices: Representing adjective-noun constructions in semantic space. In Proceedings of EMNLP. Y. Bengio, R. Ducharme, P. Vincent, and C. Janvin. 2003. A neural probabilistic language model. Journal of Machine Learning Research, 3:1137–1155, March. D. M. Blei, A. Y. Ng, and M. I. Jordan. 2003. Latent dirichlet allocation. Journal of Machine Learning Research, 3:993–1022. P. Bloom. 2001. Precis of how children learn the meanings of words. Behavioral and Brain Sciences, 24:1095–1103. M. Cettolo, C. Girardi, and M. Federico. 2012. Wit3: Web inventory of transcribed and translated talks. In Proceedings of EAMT. S. Clark and S. Pulman. 2007. Combining symbolic and distributional models of meaning. In Proceedings of AAAI Spring Symposium on Quantum Interaction. AAAI Press. T. Cohn and M. Lapata. 2007. Machine translation by triangulation: Making effective use of multi-parallel corpora. In Proceedings of ACL. M. Collins. 2002. Discriminative training methods for hidden markov models: Theory and experiments with perceptron algorithms. In Proceedings of ACLEMNLP. R. Collobert and J. Weston. 2008. A unified architecture for natural language processing: Deep neural networks with multitask learning. In Proceedings of ICML. R. Collobert, J. Weston, L. Bottou, M. Karlen, K. Kavukcuoglu, and P. Kuksa. 2011. Natural language processing (almost) from scratch. 
Journal of Machine Learning Research, 12:2493–2537. J. Duchi, E. Hazan, and Y. Singer. 2011. Adaptive subgradient methods for online learning and stochastic optimization. Journal of Machine Learning Research, 12:2121–2159, July. S. T. Dumais, G. W. Furnas, T. K. Landauer, S. Deerwester, and R. Harshman. 1988. Using latent semantic analysis to improve access to textual information. In Proceedings of the SIGCHI Conference on Human Factors in Computing Systems. C. Dyer, A. Lopez, J. Ganitkevitch, J. Weese, F. Ture, P. Blunsom, H. Setiawan, V. Eidelman, and P. Resnik. 2010. cdec: A Decoder, Alignment, and Learning framework for finite-state and context-free translation models. In Proceedings of ACL. K. Erk and S. Pad´o. 2008. A structured vector space model for word meaning in context. Proceedings of EMNLP. J. R. Firth. 1957. A synopsis of linguistic theory 193055. 1952-59:1–32. E. Grefenstette and M. Sadrzadeh. 2011. Experimental support for a categorical compositional distributional model of meaning. In Proceedings of EMNLP. A. Haghighi, P. Liang, T. Berg-Kirkpatrick, and D. Klein. 2008. Learning bilingual lexicons from monolingual corpora. In Proceedings of ACL-HLT. K. M. Hermann and P. Blunsom. 2013. The Role of Syntax in Vector Space Models of Compositional Semantics. In Proceedings of ACL. K. M. Hermann and P. Blunsom. 2014. Multilingual Distributed Representations without Word Alignment. In Proceedings of ICLR. N. Kalchbrenner and P. Blunsom. 2013. Recurrent convolutional neural networks for discourse compositionality. Proceedings of the ACL Workshop on Continuous Vector Space Models and their Compositionality. A. Klementiev, I. Titov, and B. Bhattarai. 2012. Inducing crosslingual distributed representations of words. In Proceedings of COLING. P. Koehn. 2005. Europarl: A Parallel Corpus for Statistical Machine Translation. In Proceedings of the Machine Translation Summit. S. Lauly, A. Boulanger, and H. Larochelle. 2013. Learning multilingual word representations using a bag-of-words autoencoder. In Deep Learning Workshop at NIPS. D. D. Lewis, Y. Yang, T. G. Rose, and F. Li. 2004. Rcv1: A new benchmark collection for text categorization research. Journal of Machine Learning Research, 5:361–397, December. A. K. McCallum. 2002. Mallet: A machine learning for language toolkit. http://mallet.cs.umass.edu. T. Mikolov, M. Karafi´at, L. Burget, J. ˇCernock´y, and S. Khudanpur. 2010. Recurrent neural network based language model. In Proceedings of INTERSPEECH. T. Mikolov, K. Chen, G. Corrado, and J. Dean. 2013a. Efficient Estimation of Word Representations in Vector Space. CoRR. T. Mikolov, Q. V. Le, and I. Sutskever. 2013b. Exploiting Similarities among Languages for Machine Translation. CoRR. J. Mitchell and M. Lapata. 2008. Vector-based models of semantic composition. In In Proceedings of ACL. 67 A. Mnih and G. Hinton. 2009. A scalable hierarchical distributed language model. In Proceedings of NIPS. J. Ngiam, A. Khosla, M. Kim, J. Nam, H. Lee, and A. Y. Ng. 2011. Multimodal deep learning. In ICML. D. Roy. 2003. Grounded spoken language acquisition: Experiments in word learning. IEEE Transactions on Multimedia, 5(2):197–209, June. A. P. Sarath Chandar, M. K. Mitesh, B. Ravindran, V. Raykar, and A. Saha. 2013. Multilingual deep learning. In Deep Learning Workshop at NIPS. R. Socher, J. Pennington, E. H. Huang, A. Y. Ng, and C. D. Manning. 2011. Semi-supervised recursive autoencoders for predicting sentiment distributions. In Proceedings of EMNLP. R. Socher, B. Huval, C. D. Manning, and A. Y. 
Ng. 2012. Semantic compositionality through recursive matrix-vector spaces. In Proceedings of EMNLPCoNLL, pages 1201–1211. N. Srivastava and R. Salakhutdinov. 2012. Multimodal learning with deep boltzmann machines. In Proceedings of NIPS. J. Turian, L. Ratinov, and Y. Bengio. 2010. Word representations: a simple and general method for semisupervised learning. In Proceedings of ACL. P. D. Turney. 2012. Domain and function: A dualspace model of semantic relations and compositions. Journal of Artificial Intelligence Research, 44:533– 585. W.-T. Yih, K. Toutanova, J. C. Platt, and C. Meek. 2011. Learning discriminative projections for text similarity measures. In Proceedings of CoNLL. W. Y. Zou, R. Socher, D. Cer, and C. D. Manning. 2013. Bilingual word embeddings for phrase-based machine translation. In Proceedings of EMNLP. 68
2014
6
Proceedings of the 52nd Annual Meeting of the Association for Computational Linguistics, pages 634–643, Baltimore, Maryland, USA, June 23-25 2014. c⃝2014 Association for Computational Linguistics Vector space semantics with frequency-driven motifs Shashank Srivastava Carnegie Mellon University Pittsburgh, PA 15217 [email protected] Eduard Hovy Carnegie Mellon University Pittsburgh, PA 15217 [email protected] Abstract Traditional models of distributional semantics suffer from computational issues such as data sparsity for individual lexemes and complexities of modeling semantic composition when dealing with structures larger than single lexical items. In this work, we present a frequencydriven paradigm for robust distributional semantics in terms of semantically cohesive lineal constituents, or motifs. The framework subsumes issues such as differential compositional as well as noncompositional behavior of phrasal consituents, and circumvents some problems of data sparsity by design. We design a segmentation model to optimally partition a sentence into lineal constituents, which can be used to define distributional contexts that are less noisy, semantically more interpretable, and linguistically disambiguated. Hellinger PCA embeddings learnt using the framework show competitive results on empirical tasks. 1 Introduction Meaning in language is a confluence of experientially acquired semantics of words or multi-word phrases, and their semantic composition to create new meanings. For instance, successfully interpreting a sentence such as The old senator kicked the bucket. requires the knowledge that the semantic connotations of ‘kicking the bucket’ as a unit are the same as those for ‘dying’. Short of explicit supervision, such semantic mappings must be inferred by a new language speaker through inductive mechanisms operating on observed linguistic usage. This perspective of acquired meaning aligns with the ‘meaning is usage’ adage, consonant with Wittgenstein’s view of semantics. At the same time, the ability to adaptively communicate elaborate meanings can only be conciled through Frege’s principle of compositionality, i.e., meanings of larger linguistic constructs can be derived from the meanings of individual components, modulated by their syntactic interrelations. Indeed, most linguistic usage appears compositional. This is supported by the fact even with very limited vocabulary, children and non-native speakers can often communicate surprisingly effectively. It can be argued that to be sustainable, inductive aspects of meaning must be recurrent enough to be learnable by new users. That is, a noncompositional phrase such as ‘kick the bucket’ is likely to persist in common parlance only if it is frequently used with its associated semantic mapping. If a usage-driven meaning of a motif is not recurrent enough, learning this mapping is inefficient in two ways. First, the sparseness of observations would severely limit accurate inductive acquisition by new observers. Second, the value of learning a very infrequent semantic mapping is likely marginal. This motivates the need for a frequency-driven view of lexical semantics. In particular, such a perspective can be especially advantageous for distributional semantics for reasons we outline below. Distributional semantic models (DSMs) that represent words as distributions over neighbouring contexts have been particularly effective in capturing fine-grained lexical semantics (Turney et al., 2010). 
Such models have engendered improvements in diverse applications such as selectional preference modeling (Erk, 2007), word-sense discrimination (McCarthy and Carroll, 2003), automatic dictionary building (Curran, 2003), and information retrieval (Manning et al., 2008). However, while conventional DSMs consider collocation strengths (through counts and PMI scores) of word neighbourhoods, they disregard much of the regularity in human language. Most significantly, word tokens that act as latent dimensions are often derived from arbitrary tokenization. The example given in Table 1 succinctly describes this.

Sentence: With the bad press in wake of the financial crisis, businesses are leaving our shores.
crisis: <bad, businesses, financial, leaving, press, shores, wake>
financial crisis: <bad press, businesses, in wake of, leaving our shores>
Table 1: Meaning representation by conventional DSMs vs notional ideal

The first row in the table shows a representation of the meaning of the token 'crisis' that a conventional DSM might extract from the given sentence after stopword removal. While helpful, the representation seems unsatisfying since words such as 'press', 'wake' and 'shores' seem to have little to do with a crisis. From a semantic perspective, a representation similar to the second is more valuable: not only does it represent a semantic mapping for a more specific meaning, but the latent dimensions of the representation are less noisy (e.g., while 'wake' is semantically ambiguous, its surrounding context in 'in wake of' disambiguates it) and more intuitive in terms of semantic interpretability. This is the overarching theme of this work: we present a frequency-driven paradigm for extending distributional semantics to phrasal and sentential levels in terms of such semantically cohesive, recurrent lexical units or motifs. We propose to identify such semantically cohesive motifs in terms of features inspired by frequency characteristics, linguistic idiosyncrasies, and shallow syntactic analysis; and we explore both supervised and semi-supervised models to optimally segment a sentence into such motifs. Through exploiting regularities in language usage, the framework can efficiently account for both compositional and non-compositional word usage, while avoiding the issue of data sparsity by design. Our principal contributions in this paper are:
• We present a framework for extending distributional semantics to learn semantic representations of both words and phrases in terms of recurrent motifs, rather than arbitrary word tokens
• We present a simple model to segment a sentence into such motifs using a feature-set drawing from frequency statistics, information theory, linguistic theories and shallow syntactic analysis
• Word and phrasal representations learnt through the approach outperform conventional DSM representations on empirical tasks
This paper is organized as follows: In Section 2, we briefly review related work in the domain of compositional distributional semantics, and motivate our formulation. Section 3 describes our methodology, which consists of a frequency-driven segmentation model to partition text into semantically meaningful recurring lineal sub-units, a representation-learning framework for learning new semantic embeddings based on this segmentation, and an approach to use such embeddings in downstream applications. We present experiments and empirical evaluations for our method in Section 4.
Finally, we conclude in Section 5 with a summary of our principal findings, and a discussion of possible directions for future work. 2 Related Work While DSMs have been valuable in representing semantics of single words, approaches to extend them to represent the semantics of phrases and sentences has met with only marginal success. While there is considerable variety in approaches and formulations, existing approaches for phrasal level and sentential semantics can broadly be partitioned into two categories. 2.1 Compositional approaches These have aimed at using semantic representations for individual words to learn semantic representations for larger linguistic structures. These methods implicitly make an assumption of compositionality, and often include explicit computational models of compositionality. Notable among such models are the additive and multiplicative models of composition by Mitchell and Lapata (2008), Grefenstette et al. (2010), Baroni and 635 Zamparelli’s (2010) model that differentially models content and function words for semantic composition, and Goyal et al.’s SDSM model (2013) that incorporates syntactic roles to model semantic composition. Notable among the most effective distributional representations are the recent deep-learning approaches by Socher et al. (2012), that model vector composition through non-linear transformations. While word embeddings and language models from such methods have been useful for tasks such as relation classification, polarity detection, event coreference and parsing; much of existing literature on composition is based on abstract linguistic theory and conjecture, and there is little evidence to support that learnt representations for larger linguistic units correspond to their semantic meanings. While works such as the SDSM model suffer from the problem of sparsity in composing structures beyond bigrams and trigrams, methods such as Mitchell and Lapata (2008)and (Socher et al., 2012) and Grefenstette and Sadrzadeh (2011) are restricted by significant model biases in representing semantic composition by generic algebraic operations. Finally, the assumption that semantic meanings for sentences could have representations similar to those for smaller individual tokens is in some sense unintuitive, and not supported by linguistic or semantic theories. 2.2 Tree kernels Tree Kernel methods have gained popularity in the last decade for capturing syntactic information in the structure of parse trees (Collins and Duffy, 2002; Moschitti, 2006). Instead of procuring explicit representations, the kernel paradigm directly focuses on the larger goal of quantifying semantic similarity of larger linguistic units. Structural kernels for NLP are based on matching substructures within two parse trees , consisting of word-nodes with similar labels. These methods have been useful for eclectic tasks such as parsing, NER, semantic role labeling, and sentiment analysis. Recent approaches such as by Croce et al. (2011) and Srivastava et al. (2013) have attempted to provide formulations to incorporate semantics into tree kernels through the use of distributional word vectors at the individual word-nodes. While this framework is attractive in the lack of assumptions on representation that it makes, the use of distributional embeddings for individual tokens means that it suffers from the same shortcomings as described for the example in Table 1, and hence these methods model semantic relations between wordnodes very weakly. 
Figure 1 shows an example of the shortcomings of this general approach. Figure 1: Tokenwise syntactic and semantic similarities don’t imply sentential semantic similarity While the two sentences in consideration have near-identical syntax and could be argued to have semantically aligned words in similar positions, the semantics of the complete sentences are widely divergent. Specifically, the ‘bag of words’ assumption in tree kernels doesn’t suffice for these lexemes, and a stronger semantic model is needed to capture phrasal semantics as well as diverging inter-word relations such as in ‘coffee table’ and ‘water table’. Our hypothesis is that a model that can even weakly identify recurrent motifs such as ‘water table’ or ‘breaking a fall’ would be helpful in building more effective semantic representations. A significant advantage of a frequency driven view is that it makes the concern of compositionality of recurrent phrases immaterial. If a motif occurs frequently enough in common parlance, its semantics could be captured with distributional models irrespective of whether its associated semantics are compositional or acquired. 2.3 Identifying multi-word expressions Several approaches have focused on supervised identification of multi-word expressions (MWEs) through statistical (Pecina, 2008; Villavicencio et al., 2007) and linguistically motivated (Piao et al., 2005) techniques. More recently, hybrid methods based on both statistical as well as linguistic features have been popular (Tsvetkov and Wintner, 2011). Ramisch et al. (2008) demonstrate that adding part-of-speech tags to frequency counts substantially improves performance. Other methods have attempted to exploit morphological, syntactic and semantic characteristics of MWEs. In 636 particular, approaches such as Bannard (2007) use syntactic rigidity to characterize MWEs. While existing work has focused on the classification task of categorizing a phrasal constituent as a MWE or a non-MWE, the general ideas of most of these works are in line with our current framework, and the feature-set for our motif segmentation model is designed to subsume most of these ideas. It is worthwhile to point out that the task of motif segmentation is slightly different from MWE identification. Specifically, the onus on recurrent occurrences means that nondecomposibility is not an essential consideration for a word to be considered a motif. In line with the proposed paradigm, typical MWEs such as ‘shoot the breeze’, ‘sour note’ and ‘hot dog’ would be considered valid lineal motifs. 1 In addition, even decomposable recurrent lineal phrases such as ‘love story’, ‘federal government’, and ‘millions of people’ are marked as meaningful recurrent motifs. Finally, and least interestingly, we include common named entities such as ‘United States’ and ‘Java Virtual Machine’ within the ambit of motifs. 3 Method In this section, we define our frequency-driven framework for distributional semantics in detail. As just described above, our definition for motifs is less specific than MWEs. With such a working definition, contiguous motifs are likely to make distributional representations less noisy and also assist in disambiguating context. Also, the lack of specificity ensures that such motifs are common enough to meaningfully influence distributional representation beyond single tokens. 
A method towards frequency-driven distributional semantics could involve the following principal components:

3.1 Linear segmentation model

The segmentation model forms the core of the framework. Ideally, it fragments a given sentence into non-overlapping, semantically meaningful, empirically frequent contiguous sub-units or motifs. The model accounts for possible segmentations of a sentence into potential motifs, and prefers recurrent and cohesive motifs through features that capture frequency-based and statistical features, as well as linguistic idiosyncrasies. This is accomplished using a very simple linear chain model and a rich feature set consisting of a combination of frequency-driven, information-theoretic and linguistically motivated features.

1 We note that since we take motifs as lineal units, the current method doesn't subsume several common non-contiguous MWEs such as 'let off' in 'let him off'.

Let an observed sentence be denoted by x, with x_i denoting the i'th token in the sentence. The segmentation model is a chain LVM (latent variable model) that aims to maximize a linear objective defined by:

J = \sum_i w_i f_i(y_k, y_{k-1}, x)

where the f_i are arbitrary Markov features that can depend on segments (potential motifs) of the observed sentence x and on contiguous latent states. The features are chosen so as to best represent frequency-based, statistical as well as linguistic considerations for treating a segment as an agglutinative unit, or a motif. Specifically, these features could encode characteristics such as frequency statistics, collocation strengths and syntactic distinctness, or inflectional rigidity of the considered segments; they are described in detail in Section 3.2. The model is an instantiation of a simple featurized HMM, and the weighted sum of features corresponding to a segment is cognate with an affinity score for the 'stickiness' of the segment, i.e., the affinity for the segment to be treated as a holistic unit or a single motif. We also associate a penalizing cost with each non-unary motif to avoid aggressive agglutination of tokens. In particular, for an ngram occurrence to be considered a motif, the marginal contribution due to the affinity of the prospective motif should at minimum exceed this penalty. The weights for the affinity functions as well as these penalties are learnt from data using full as well as partial annotations.

The latent state variable y_k denotes the membership of the token x_k in a unary or a larger motif, and the state sequence collectively gives the segmentation of the sentence. An individual state variable y_k encodes a pairing of the size of the encompassing ngram motif and the position of the word x_k within it. For instance, y_k = T3 denotes that the token x_k is in the final position of a trigram motif.

3.1.1 Inference of optimal segmentation

If the optimal weights w_i are known, inference for the best motif segmentation can be performed
We describe learning of the model parameters with fully annotated training data, as well as an approach for learning motif segmentation that requires only partial supervision. Supervised learning: In the supervised case, optimal state sequences y(k) are fully observed for the training set. For this purpose, we created a dataset of 1000 sentences from the Simple English Wikipedia and the Gigaword Corpus, and manually annotated it with motif boundaries using BRAT (Stenetorp et al., 2012). In this case, learning can follow the online structured perceptron learning procedure by Collins (2002), where weights updates for the k’th training example (x(k), y(k)) are given as: wi ←wi + α(fi(x(k), y(k)) −fi(x(k), y′)) Here y′ = Decode(x(k), w) is the optimal Viterbi decoding using the current estimates of the weights. Updates are run for a large number of iterations until the change in objective drops below a threshold, and the learning rate α is adaptively modified as described in Collins et al. Implicitly, the weight learning algorithm can be seen as a gradient descent procedure minimizing the difference between the scores of highest scoring (Viterbi) state sequences, and the label state sequences. Semi-supervised learning: In the semisupervised case, the labels y(k) i are known only for some of the tokens in x(k). This is a commonplace scenario, where a part of a sentence has clear motif-boundaries, whereas the rest of the sentence is not annotated. For accumulating such data, we looked for occurrences of 2500 expressions from the WikiMWE dataset in sentences from the combined Simple English Wikipedia and Gigaword corpora. The query expressions in the retrieved sentences were marked with motif boundaries, while the remaining tokens in the sentences were left unannotated. While the Viterbi algorithm can be used for tagging optimal state-sequences given the weights, the structured perceptron can learn optimal model weights given gold-standard sequence labels. Hence, in this case, we use a variation of the hard EM algorithm for learning. The algorithm proceeds as follows: in the E-step, we use the current values of weights to compute hard-expectations, i.e., the best scoring Viterbi sequences among those consistent with the observed state labels. In the M-step, we take the decoded state-sequences in the E-step as observed, and run perceptron learning to update feature weights wi. Pseudocode of the learning algorithm for the partially labeled case is given in Algorithm 1. Algorithm 1 1: Input: Partially labeled data D = {(x, y)i} 2: Output: Weights w 3: Initialization: Set wi randomly, ∀i 4: for i : 1 to maxIter do 5: Decode D with current w to find optimal Viterbi paths that agree with (partial) ground truths. 6: Run Structured Perceptron algorithm with decoded tag-sequences to update weights w 7: end for 8: return w The semi-supervised approach enables incorporation of significantly more training data. In particular, this method could be used in conjunction with a supervised approach. This would involve initializing the weights prior to the semisupervised procedure with the weights from the supervised learning model, so as to seed the semisupervised approach with reasonable model, and use the partially annotated data to fine-tune the supervised model. The sequential approach, akin to annealing weights, can efficiently utilize both full and partial annotations. 
3.2.1 Feature engineering In this section, we describe the principal features used in the segmentation model Transitional features and penalties: • Transitional features ftrans(yi−1, yi) = 638 Iyi−1,yi 2 describing the transitional affinities of state pairs. Since our state definitions preclude certain transitions (such as from state T2 to T1), these weights are initialized to −∞ to expedite training. • N-gram penalties: fngram We define a penalty for tagging each non-unary motif as described before. For a motif to be tagged, the improvement in objective score should at least exceed the corresponding penalty. e.g., fqgram(yi) = Iyi=Q4 denotes the penalty for tagging a tetragram. 3 Frequency-based, information theoretic, and POS features: • Absolute and log-normalized motif frequencies fngram(xi−n+1, ...xi−1, xi, yi). This feature is associated with a particular tokensequence and ngram-tag, and takes the value of the motif-frequency if the motif token-sequence matches the feature tokensequence, and is marked as with a matching tag. e.g., fbgram(xi−1 = love, xi = story, yi = B2). • Absolute and log-normalized motif frequencies for a particular POS-sequence. This feature is associated with a particular POStag sequence and ngram-tag, and takes the value of the motif-frequency if the motif token-sequence gets a matching tag, and is marked as with a matching ngram tag. e.g., fbgram(pi−1 = V B, pi = NN, yi = B2). • Medians and maxima of pairwise collocation statistics for tokens for a particular size of ngram motifs: we use the following statistics: pointwise mutual information, Chisquare statistic, and conditional probability. We also used POS sensitive versions of these, which performed much better than plain versions in our evaluations. • Histogram counts of inflectional forms of token sequence for the corresponding ngram motif and POS sequence: this features takes the value of the count of inflectional forms of an ngram that account for 90% of occurrences of all inflectional forms. 2Here, I denotes the indicator function 3It is straightforward to preclude partial n-gram annotations near sentence boundaries with prohibitive penalties. • Entropies of histogram distributions of inflectional variants (described above). • Features encoding syntactic rigidity: ratios and log-ratios of frequencies of an ngram motif and variations by replacing a token using near synonyms from its synset. Additionally, a few feature for the segmentations model contained minor orthographic features based on word shape (length and capitalization patterns). Also, all numbers, URLs, and currency symbols were normalized to the special NUMERIC, URL, and CURRENCY tokens respectively. Finally, a gazetteer feature checked for occurrences of motifs in a gazetteer of named entities. 3.3 Representation learning With the segmentation model described in the previous section, we process text from the English Gigaword corpus and the Simple English Wikipedia to partition sentences into motifs. Since the segmentation model accounts for the contexts of the entire sentence in determining motifs, different instances of the same token could evoke different meaning representations. Consider the following sentences tagged by the segmentation model, that would correspond to different representations of the token ‘remains’: once as a standalone motif, and once as part of an encompassing bigram motif (‘remains classified’). Hog prices have declined sharply , while the cost of corn remains relatively high. 
Even with the release of such documents, questions are not answered, since only the agency knows what remains classified.

Given constituent motifs of each sentence in the data, we can now define neighbourhood distributions for unary or phrasal motifs in terms of other motifs (as envisioned in Table 1). In our experiments, we use a window-length of 5 adjoining motifs on either side to define the neighbourhood of a constituent. Naturally, in the presence of multi-word motifs, the neighbourhood boundary could be more extended than in a conventional DSM. With such neighbourhood contexts, the distributional paradigm posits that semantic similarity between a pair of motifs can be given by a sense of 'distance' between the two distributions. Most popularly, traditional measures of vector distance such as cosine similarity, Euclidean distance and city-block distance have been used in several distributional approaches. Additionally, several distance measures between discrete distributions exist in the statistical literature, most famously the Kullback-Leibler divergence, the Bhattacharyya distance and the Hellinger distance. Recent work (Lebret and Lebret, 2013) has shown that the Hellinger distance is an especially effective measure for learning distributional embeddings, with Hellinger PCA being much less computationally expensive than neural language modeling approaches, while performing much better than standard PCA, and competitive with the state-of-the-art in downstream evaluations. Hence, we use the Hellinger measure between neighbourhood motif distributions in learning representations. The Hellinger distance between two categorical distributions P = (p_1, ..., p_k) and Q = (q_1, ..., q_k) is defined as:

H(P, Q) = \frac{1}{\sqrt{2}} \sqrt{\sum_{i=1}^{k} \left(\sqrt{p_i} - \sqrt{q_i}\right)^2} = \frac{1}{\sqrt{2}} \left\lVert \sqrt{P} - \sqrt{Q} \right\rVert_2

The Hellinger measure has intuitively desirable properties: specifically, it can be seen as the Euclidean distance between the square-root-transformed distributions, where both vectors √P and √Q are length-normalized under the same (Euclidean) norm. Finally, we perform SVD on the motif similarity matrix (with size of the order of the total vocabulary in the corpus), and retain the first k principal eigenvectors to obtain low-dimensional vector representations that are more convenient to work with. In our preliminary experiments, we found that k = 300 gave quantitatively good results, with marginal change with added dimensionality. We use this setting for all our experiments.

4 Experiments

In this section, we describe some experimental evaluations and findings for our approach. We first quantitatively and qualitatively analyze the performance of the segmentation model, and then evaluate the distributional motif representations learnt by the model through two downstream applications.

4.1 Motif segmentation

In an evaluation of the motif segmentation model within the perspective of our framework, we believe that exact correspondence to human judgment is unrealistic, since guiding principles for defining motifs, such as semantic cohesion, are hard to define and only serve as working principles. However, for purposes of relative comparison, we quantitatively evaluate the performance of the motif segmentation models on the fully annotated dataset. For this experiment, the gold-annotated corpus was split into training and test sets in a 9:1 proportion. A small fraction of the training split was set apart for development and validation.
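Before turning to the evaluation itself, the representation-learning step of Section 3.3 can be made concrete with a short sketch: given motif-segmented sentences, collect neighbouring-motif counts within a window of 5 motifs and compare two motifs via the Hellinger distance. Variable names are hypothetical, and the subsequent Hellinger-PCA/SVD step is omitted.

```python
import numpy as np
from collections import Counter, defaultdict

def motif_neighbourhoods(segmented_sentences, window=5):
    """segmented_sentences: each sentence is a list of motifs (strings),
    e.g. ["the cost of corn", "remains", "relatively high"]."""
    contexts = defaultdict(Counter)
    for motifs in segmented_sentences:
        for i, m in enumerate(motifs):
            lo, hi = max(0, i - window), min(len(motifs), i + window + 1)
            for j in range(lo, hi):
                if j != i:
                    contexts[m][motifs[j]] += 1
    return contexts

def hellinger(p_counts, q_counts):
    """Hellinger distance between two (unnormalised) context count vectors."""
    keys = sorted(set(p_counts) | set(q_counts))
    p = np.array([p_counts[k] for k in keys], dtype=float)
    q = np.array([q_counts[k] for k in keys], dtype=float)
    p, q = p / p.sum(), q / q.sum()
    return np.linalg.norm(np.sqrt(p) - np.sqrt(q)) / np.sqrt(2.0)
```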
For this evaluation, we considered a predicted motif as correct only in the case of an exact match, i.e., when both its boundaries (left and right) were correctly predicted. Also, since a majority of motifs are unary tokens, taking them into consideration artificially boosts the accuracy, whereas we are more interested in the prediction of larger ngram tokens. Hence we report performance on non-unary motifs only.

                          P     R     F
Rule-based baseline      0.85  0.10  0.18
Supervised               0.62  0.28  0.39
Semi-supervised          0.30  0.17  0.22
Supervised + annealing   0.69  0.38  0.49
Table 2: Results for motif segmentations

Table 2 shows the performance of the segmentation model with the three proposed learning approaches described earlier. For a baseline, we consider a rule-based model that simply learns all ngram segmentations seen in the training data and marks any occurrence of a matching token sequence as a motif, without taking neighbouring context into account. We observe that this model has a very high precision (since many token sequences marked as motifs would recur in similar contexts, and would thus have the same motif boundaries). However, the rule-based method has a very low recall due to its lack of generalization capabilities. We see that while all three learning algorithms perform better than the baseline, the performance of the purely unsupervised system is inferior to that of the supervised approaches. This is not unexpected: the supervision provided to the model is very weak due to a lack of negative examples (which leads to spurious motif taggings
Elephant White elephant tusks expensive trunk spend african biggest white the project indian very high baby multibillion dollar The above table shows some of the top results for the unary token ‘elephant’ by frequency, and frequent unary and non-unary motifs for the motif ‘white elephant’ retrieved by the segmentation model. 4.2 Distributional representations For evaluating distributional representations for motifs (in terms of other motifs) learnt by the framework, we test these representations in two downstream tasks: sentence polarity classification and metaphor detection. For sentence polarity, we consider the Cornell Sentence Polarity corpus by Pang and Lee (2005), where the task is to classify the polarity of a sentence as positive or negative. The data consists of 10662 sentences from movie reviews that have been annotated as either positive or negative. For composing the motifs representations to get judgments on semantic similarity of sentences, we use our recent Vector Tree Kernel approach The VTK approach defines a convolutional kernel over graphs defined by the dependency parses of sentences, using a vector representation at each graph node that representing a single lexical token. For our purposes, we modify the approach to merge the nodes of all tokens that constitute a motif occurrence, and use the motif representation as the vector associated with the node. Table 4 shows results for the sentence polarity task. P R F1 DSM 0.56 0.50 0.53 AVM 0.55 0.53 0.54 MVM 0.55 0.49 0.52 VTK 0.65 0.58 0.62 VTK + MotifDSM 0.66 0.60 0.63 Table 4: Results for Sentence Polarity detection For this task, the motif based distributional embeddings vastly outperform a conventional distributional model (DSM) based on token distributions, as well as additive (AVM) and multiplicative (MVM) models of vector compositionality, as 641 proposed by Lapata et al. The model is competitive with the state-of-the-art VTK (Srivastava et al., 2013) that uses the SENNA neural embeddings by Collobert et al. (2011). P R F1 CRF 0.74 0.50 0.59 SVM+DSM 0.63 0.80 0.71 VTK+ SENNA 0.67 0.87 0.76 VTK+ MotifDSM 0.70 0.87 0.78 Table 5: Results for Metaphor identification On the metaphor detection task, we use the Metaphor dataset (Hovy et al., 2013). The data consists of sentences with defined phrases, and the task consists of identifying the linguistic use in these phrases as metaphorical or literal. For this task, the motif based model is expected to perform well as common metaphorical usage is generally through idiosyncratic MWEs, which the motif based models is specially geared to capture through the features of the segmentation model. For this task, we again use the VTK formalism for combining vector representations of the individual motifs. Table 5 shows that the motif-based DSM does better than discriminative models such as CRFs and SVMs, and also slightly improves on the VTK kernel with distributional embeddings. 5 Conclusion We have presented a new frequency-driven framework for distributional semantics of not only lexical items but also longer cohesive motifs. The theme of this work is a general paradigm of seeking motifs that are recurrent in common parlance, are semantically coherent, and are possibly noncompositional. Such a framework for distributional models avoids the issue of data sparsity in learning of representations for larger linguistic structures. 
The approach depends on drawing features from frequency statistics, statistical correlations, and linguistic theories; and this work provides a computational framework to jointly model recurrence and semantic cohesiveness of motifs through compositional penalties and affinity scores in a data driven way. While being deliberately vague in our working definition of motifs, we have presented simple efficient formulations to extract such motifs that uses both annotated as well as partially unannotated data. The qualitative and quantitative analyis of results from our preliminary motif segmentation model indicate that such motifs can help to disambiguate contexts of single tokens, and provide cleaner, more interpretable representations. Finally, we obtain motif representations in form of low-dimensional vector-space embeddings, and our experimental findings indicate value of the learnt representations in downstream applications. We believe that the approach has considerable theoretical as well as practical merits, and provides a simple and clean formulation for modeling phrasal and sentential semantics. In particular, we believe that ours is the first method that can invoke different meaning representations for a token depending on textual context of the sentence. The flexibility of having separate representations to model different semantic senses has considerable valuable, as compared with extant approaches that assign a single representation to each token, and are hence constrained to conflate several semantic senses into a common representation. The approach also elegantly deals with the problematic issue of differential compositional and non-compositional usage of words. Future work can focus on a more thorough quantitative evaluation of the paradigm, as well as extension to model non-contiguous motifs. References Colin Bannard. 2007. A measure of syntactic flexibility for automatically identifying multiword expressions in corpora. In Proceedings of the Workshop on a Broader Perspective on Multiword Expressions, pages 1–8. Association for Computational Linguistics. Marco Baroni and Roberto Zamparelli. 2010. Nouns are vectors, adjectives are matrices: Representing adjective-noun constructions in semantic space. In Proceedings of the 2010 Conference on Empirical Methods in Natural Language Processing, pages 1183–1193. Association for Computational Linguistics. Michael Collins and Nigel Duffy. 2002. New ranking algorithms for parsing and tagging: Kernels over discrete structures, and the voted perceptron. In Proceedings of the 40th annual meeting on association for computational linguistics, pages 263–270. Association for Computational Linguistics. Michael Collins. 2002. Discriminative training methods for hidden markov models: Theory and experiments with perceptron algorithms. In Proceedings of the ACL-02 conference on Empirical methods in natural language processing-Volume 10, pages 1–8. Association for Computational Linguistics. 642 Ronan Collobert, Jason Weston, L´eon Bottou, Michael Karlen, Koray Kavukcuoglu, and Pavel Kuksa. 2011. Natural language processing (almost) from scratch. The Journal of Machine Learning Research, 12:2493–2537. Danilo Croce, Alessandro Moschitti, and Roberto Basili. 2011. Structured lexical similarity via convolution kernels on dependency trees. In Proceedings of the Conference on Empirical Methods in Natural Language Processing, pages 1034–1046. Association for Computational Linguistics. James Richard Curran. 2003. From Distributional to Semantic Similarity. 
Ph.D. thesis, Institute for Communicating and Collaborative Systems School of Informatics University of Edinburgh. Katrin Erk. 2007. A simple, similarity-based model for selectional preferences. In ACL. Kartik Goyal, Sujay Kumar Jauhar, Huiying Li, Mrinmaya Sachan, Shashank Srivastava, and Eduard Hovy. 2013. A structured distributional semantic model: Integrating structure with semantics. ACL 2013, page 20. Edward Grefenstette and Mehrnoosh Sadrzadeh. 2011. Experimental support for a categorical compositional distributional model of meaning. In Proceedings of the Conference on Empirical Methods in Natural Language Processing, pages 1394–1404. Association for Computational Linguistics. Edward Grefenstette, Mehrnoosh Sadrzadeh, Stephen Clark, Bob Coecke, and Stephen Pulman. 2010. Concrete sentence spaces for compositional distributional models of meaning. arXiv preprint arXiv:1101.0309. Dirk Hovy, Shashank Srivastava, Sujay Kumar Jauhar, Mrinmaya Sachan, Kartik Goyal, Huiying Li, Whitney Sanders, and Eduard Hovy. 2013. Identifying metaphorical word use with tree kernels. Meta4NLP 2013, page 52. R´emi Lebret and Ronan Lebret. 2013. Word emdeddings through hellinger pca. arXiv preprint arXiv:1312.5542. Christopher D Manning, Prabhakar Raghavan, and Hinrich Sch¨utze. 2008. Introduction to information retrieval, volume 1. Cambridge University Press Cambridge. Diana McCarthy and John Carroll. 2003. Disambiguating nouns, verbs, and adjectives using automatically acquired selectional preferences. Computational Linguistics, 29(4):639–654. Jeff Mitchell and Mirella Lapata. 2008. Vector-based models of semantic composition. In ACL, pages 236–244. Alessandro Moschitti. 2006. Efficient convolution kernels for dependency and constituent syntactic trees. In Machine Learning: ECML 2006, pages 318–329. Springer. Bo Pang and Lillian Lee. 2005. Seeing stars: Exploiting class relationships for sentiment categorization with respect to rating scales. In ACL. Pavel Pecina. 2008. A machine learning approach to multiword expression extraction. In Proceedings of the LREC MWE 2008 Workshop, pages 54–57. Citeseer. Scott Songlin Piao, Paul Rayson, Dawn Archer, and Tony McEnery. 2005. Comparing and combining a semantic tagger and a statistical tool for mwe extraction. Computer Speech & Language, 19(4):378– 397. Carlos Ramisch, Paulo Schreiner, Marco Idiart, and Aline Villavicencio. 2008. An evaluation of methods for the extraction of multiword expressions. In Proceedings of the LREC Workshop-Towards a Shared Task for Multiword Expressions (MWE 2008), pages 50–53. Richard Socher, Brody Huval, Christopher D Manning, and Andrew Y Ng. 2012. Semantic compositionality through recursive matrix-vector spaces. In Proceedings of the 2012 Joint Conference on Empirical Methods in Natural Language Processing and Computational Natural Language Learning, pages 1201–1211. Association for Computational Linguistics. Shashank Srivastava, Dirk Hovy, and Eduard H. Hovy. 2013. A walk-based semantically enriched tree kernel over distributed word representations. In EMNLP, pages 1411–1416. Pontus Stenetorp, Sampo Pyysalo, Goran Topi´c, Tomoko Ohta, Sophia Ananiadou, and Jun’ichi Tsujii. 2012. Brat: a web-based tool for nlp-assisted text annotation. In Proceedings of the Demonstrations at the 13th Conference of the European Chapter of the Association for Computational Linguistics, pages 102–107. Association for Computational Linguistics. Yulia Tsvetkov and Shuly Wintner. 2011. 
Identification of multi-word expressions by combining multiple linguistic information sources. In Proceedings of the Conference on Empirical Methods in Natural Language Processing, pages 836–845. Association for Computational Linguistics. Peter D Turney, Patrick Pantel, et al. 2010. From frequency to meaning: Vector space models of semantics. Journal of artificial intelligence research, 37(1):141–188. Aline Villavicencio, Valia Kordoni, Yi Zhang, Marco Idiart, and Carlos Ramisch. 2007. Validation and evaluation of automatically acquired multiword expressions for grammar engineering. In EMNLPCoNLL, pages 1034–1043. 643
Proceedings of the 52nd Annual Meeting of the Association for Computational Linguistics, pages 644–654, Baltimore, Maryland, USA, June 23-25 2014. c⃝2014 Association for Computational Linguistics Lexical Inference over Multi-Word Predicates: A Distributional Approach Omri Abend Shay B. Cohen Mark Steedman School of Informatics, University of Edinburgh, Edinburgh EH8 9AB, United Kingdom {oabend,scohen,steedman}@inf.ed.ac.uk Abstract Representing predicates in terms of their argument distribution is common practice in NLP. Multi-word predicates (MWPs) in this context are often either disregarded or considered as fixed expressions. The latter treatment is unsatisfactory in two ways: (1) identifying MWPs is notoriously difficult, (2) MWPs show varying degrees of compositionality and could benefit from taking into account the identity of their component parts. We propose a novel approach that integrates the distributional representation of multiple sub-sets of the MWP’s words. We assume a latent distribution over sub-sets of the MWP, and estimate it relative to a downstream prediction task. Focusing on the supervised identification of lexical inference relations, we compare against state-of-the-art baselines that consider a single sub-set of an MWP, obtaining substantial improvements. To our knowledge, this is the first work to address lexical relations between MWPs of varying degrees of compositionality within distributional semantics. 1 Introduction Multi-word expressions (MWEs) constitute a large part of the lexicon and account for much of its growth (Jackendoff, 2002; Seaton and Macaulay, 2002). However, despite their importance, MWEs remain difficult to define and model, and consequently pose serious difficulties for NLP applications (Sag et al., 2001). Multi-word Predicates (MWPs; sometimes termed Complex Predicates) form an important and much addressed subclass of MWEs and are the focus of this paper. MWPs are informally defined as multiple words that constitute a single predicate (Alsina et al., 1997). MWPs encompass a wide range of phenomena, including causatives, light verbs, phrasal verbs, serial verb constructions and many others, and pose considerable challenges to both linguistic theory and NLP applications (see Section 2). Part of the difficulty in treating them stems from their position on the borderline between syntax and the lexicon. It is therefore often unclear whether they should be treated as fixed expressions, as compositional phrases that reflect the properties of their component parts or as both. This work addresses the modelling of MWPs within the context of distributional semantics (Turney and Pantel, 2010), in which predicates are represented through the distribution of arguments they may take. In order to collect meaningful statistics, the predicate’s lexical unit should be sufficiently frequent and semantically unambiguous. MWPs pose a challenge to such models, as na¨ıvely collecting statistics over all instances of highly ambiguous verbs is likely to result in noisy representations. For instance, the verb “take” may appear in MWPs as varied as “take time”, “take effect” and “take to the hills”. This heterogeneity of “take” is likely to have a negative effect on downstream systems that use its distributional representation. For instance, while “take” and “accept” are often considered lexically similar, the high frequency in which “take” participates in non-compositional MWPs is likely to push the two verbs’ distributional representations apart. 
A straightforward approach to this problem is to represent the predicate as a conjunction of multiple words, thereby trading ambiguity for sparsity. For instance, the verb “take” could be conjoined with its object (e.g., “take care”, “take a bus”). This approach, however, raises the challenge of identifying the sub-set of the predicate’s words that should be taken to represent it (henceforth, its lexical components or LCs). We propose a novel approach that addresses this 644 challenge in the context of identifying lexical inference relations between predicates (Lin and Pantel, 2001; Schoenmackers et al., 2010; Melamud et al., 2013a, inter alia). A (lexical) inference relation pL →pR is said to hold if the relation denoted by pR generally holds between a set of arguments whenever the relation pL does. For instance, an inference relation holds between “annex” and “control” since if a country annexes another, it generally controls it. Most works to this task use distributional similarity, either as their main component (Szpektor and Dagan, 2008; Melamud et al., 2013b), or as part of a more comprehensive system (Berant et al., 2011; Lewis and Steedman, 2013). For example, consider the verb “take”. While the inference relation “have →take” does not generally hold, it does hold in the case of some light verbs, such as “have a look →take a look”, underscoring the importance of taking more inclusive LCs into account. On the other hand, the predicate “likely to give a green light” is unlikely to appear often even within a very large corpus, and could benefit from taking its lexical sub-units (e.g., “likely” or “give a green light”) into account. We present a novel approach to the task that models the selection and relative weighting of the predicate’s LCs using latent variables. This approach allows the classifier that uses the distributional representations to take into account the most relevant LCs in order to make the prediction. By doing so, we avoid the notoriously difficult problem of defining and identifying MWPs and account for predicates of various sizes and degrees of compositionality. To our knowledge, this is the first work to address lexical relations between MWPs of varying degrees of compositionality within distributional semantics. We conduct experiments on the dataset of Zeichner et al. (2012) and compare our methods with analogous ones that select a fixed LC, using stateof-the-art feature sets. Our method obtains substantial performance gains across all scenarios. Finally, we note that our approach is cognitively appealing. Significant cognitive findings support the claim that a speaker’s lexicon consists of partially overlapping lexical units of various sizes, of which several can be evoked in the interpretation of an utterance (Jackendoff, 2002; Wray, 2008). 2 Background and Related Work Inference Relations. The detection of inference relations between predicates has become a central task over the past few years (Sekine, 2005; Zanzotto et al., 2006; Schoenmackers et al., 2010; Berant et al., 2011; Melamud et al., 2013a, inter alia). Inference rules are used in a wide variety of applications including Question Answering (Ravichandran and Hovy, 2002), Information Extraction (Shinyama and Sekine, 2006), and as a main component in Textual Entailment systems (Dinu and Wang, 2009; Dagan et al., 2013). Most approaches to the task used distributional similarity as a major component within their system. 
Lin and Pantel (2001) introduced DIRT, an unsupervised distributional system for detecting inference relations. The system is still considered a state-of-the-art baseline (Melamud et al., 2013a), and is often used as a component within larger systems. Schoenmackers et al. (2010) presented an unsupervised system for learning inference rules directly from open-domain web data. Melamud et al. (2013a) used topic models to combine typelevel predicate inference rules with token-level information from their arguments in a specific context. Melamud et al. (2013b) used lexical expansion to improve the representation of infrequent predicates. Lewis and Steedman (2013) combined distributional and symbolic representations, evaluating on a Question Answering task, as well as on a quantification-focused entailment dataset. Several studies tackled the task using supervised systems. Weisman et al. (2012) used a set of linguistically motivated features, but evaluated their system on a corpus that consists almost entirely of single-word predicates. Mirkin et al. (2006) presented a system for learning inference rules between nouns, using distributional similarity and pattern-based features. Hagiwara et al. (2009) identified synonyms using a supervised approach relying on distributional and syntactic features. Berant et al. (2011) used distributional similarity between predicates to weight the edges of an entailment graph. By imposing global constraints on the structure of the graph, they obtained a more accurate set of inference rules. Previous work used simple methods to select the predicate’s LC. Some filtered out frequent highly ambiguous verbs (Lewis and Steedman, 2013), others selected a single representative word (Melamud et al., 2013a), while yet others used multi-word LCs but treated them as fixed expressions (Lin and Pantel, 2001; Berant et al., 2011). The goals of the above studies are largely com645 plementary to ours. While previous work focused either on improving the quality of the distributional representations themselves or on their incorporation into more elaborate systems, we focus on the integration of the distributional representation of multiple LCs to improve the identification of inference relations between MWPs. MWP Extraction and Identification. MWPs have received considerable attention over the years in both theoretical and applicative contexts. Their position on the crossroads of syntax and the lexicon, their varying degrees of compositionality, as well as the wealth of linguistic phenomena they exhibit, made them the object of ongoing linguistic discussion (Alsina et al., 1997; Butt, 2010). In NLP, the discovery and identification of MWEs in general and MWPs in particular has been the focus of much work over the years (Lin, 1999; Baldwin et al., 2003; Biemann and Giesbrecht, 2011). Despite wide interest, the field has yet to converge to a general and widely agreed-upon method for identifying MWPs. See (Ramisch et al., 2013) for an overview. Most work on MWEs emphasized idiosyncratic or non-compositional expressions. Other lines of work focused on specific MWP classes such as light verbs (Tu and Roth, 2011; Vincze et al., 2013) and phrasal verbs (McCarthy et al., 2003; Pichotta and DeNero, 2013). Our work proposes a uniform treatment to MWPs of varying degrees of compositionality, and avoids defining MWPs explicitly by modelling their LCs as latent variables. Compositional Distributional Semantics. 
Much work in recent years has concentrated on the relation between the distributional representations of composite phrases and the representations of their component sub-parts (Widdows, 2008; Mitchell and Lapata, 2010; Baroni and Zamparelli, 2010; Coecke et al., 2010). Several works have used compositional distributional semantics (CDS) representations to assess the compositionality of MWEs, such as noun compounds (Reddy et al., 2011) or verb-noun combinations (Kiela and Clark, 2013). Despite significant advances, previous work has mostly been concerned with highly compositional cases and does not address the distributional representation of predicates of varying degrees of compositionality. 3 Our Proposal: A Latent LC Approach This section details our approach for distributionally representing MWPs by leveraging their component LCs. Section 3.1 describes our general approach, Section 3.2 presents our model and Section 3.3 details the feature set. 3.1 General Approach and Notation We propose a method for addressing MWPs of varying degrees of compositionality through the integration of the distributional representation of multiple sub-sets of the predicate’s words (LCs). We use it to tackle a supervised prediction task that represents predicates distributionally. Our model assumes a latent distribution over the LCs, and estimates its parameters so to best conform to the goals of the target prediction task. Formally, given a predicate p, we denote the set of words comprising it as W(p). The set of allowable LCs for p is denoted with Hp ⊂2W(p). Hp contains all sub-sets of p that we consider as apriori possible to represent p. For instance, if p is “likely to give a green light”, Hp may include LCs such as “likely” or “give light”. As our method is aimed at discovering the most relevant LCs, we do not attempt to analyze the MWPs in advance, but rather take an inclusive Hp, allowing the model to estimate the relative weights of the LCs. The task we use as a testbed for our approach is the lexical inference identification task between predicates. Given a pair of predicates p = (pL, pR), the task is to predict whether an inference relation holds between them. For instance, if pL is “devour” and pR is “eat greedily”, the classifier should use the similarity between “devour” and “eat” in order to correctly predict an inference relation in this case. Selecting the wider LC “eat greedily” might result in sparser statistics. In other examples, however, taking a wider LC is potentially beneficial. For instance, the dissimilarity between “take” and “make” should not prevent the classifier from identifying the inference relation between “take a step” and “make a step”. Our statistical model aims at predicting the correct label by making use of partially overlapping LCs of various sizes, both for the premise lefthand side (LHS) predicate pL and the hypothesis right-hand side (RHS) predicate pR. More formally, we take the space of values for our latent LC variables to be HpL,pR = HpL × HpR. Our evaluation dataset consists of pairs p(i) = (p(i) L , p(i) R ) for i ∈{1, . . . , M}, where M is the number of examples available, coupled with their gold-standard labels y(i) ∈{1, −1}. For brevity, we denote H(i) = Hp(i) = Hp(i) L ,p(i) R . We also as646 sume the existence of a feature function Φ(p, y, h) which maps a triplet of a predicate pair p, an inference label y, and a latent state h ∈Hp to Rd for some integer d. We denote the training set by D. 
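To make the notation above concrete, the following is a minimal sketch of how the allowable LC sets H_p (all subsets of size 1 or 2 of a predicate's content words, as used in the experiments) and the joint latent space H_{pL,pR} = H_{pL} × H_{pR} could be enumerated. The tag set, function names, and the example predicate pair are illustrative assumptions, not code from the paper.

```python
from itertools import combinations, product

# Rough content-word test. The paper identifies content words with a POS
# tagger and counts prepositions as content words; the exact coarse tag set
# used here is an assumption for the sake of the example.
CONTENT_TAGS = {"NOUN", "VERB", "ADJ", "ADV", "ADP"}

def content_words(tagged_predicate):
    """tagged_predicate: list of (word, coarse_pos) pairs for one predicate."""
    return [w for w, tag in tagged_predicate if tag in CONTENT_TAGS]

def allowable_lcs(tagged_predicate, max_size=2):
    """H_p: all subsets of size 1..max_size of the predicate's content words."""
    words = content_words(tagged_predicate)
    lcs = []
    for size in range(1, max_size + 1):
        lcs.extend(tuple(c) for c in combinations(words, size))
    return lcs

def joint_latent_space(tagged_left, tagged_right):
    """H_{pL,pR} = H_{pL} x H_{pR}: candidate LC pairs for one predicate pair."""
    return list(product(allowable_lcs(tagged_left), allowable_lcs(tagged_right)))

# Illustrative pair: "likely to give a green light" vs. "give permission to"
left = [("likely", "ADJ"), ("to", "TO"), ("give", "VERB"),
        ("a", "DET"), ("green", "ADJ"), ("light", "NOUN")]
right = [("give", "VERB"), ("permission", "NOUN"), ("to", "TO")]
H = joint_latent_space(left, right)
print(len(H))   # |H_{pL,pR}| stays small, so summing over it directly is feasible
print(H[0])     # e.g. (('likely',), ('give',))
```

Because the latent space is enumerated exhaustively rather than searched, the model can weight every candidate LC pair, which is exactly what the latent-variable formulation in the next section relies on.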
3.2 The Model

We address the task with a latent-variable log-linear model, representing the LCs of the predicates. We choose this model for its generality, conceptual simplicity, and because it allows us to easily incorporate various feature sets and sets of latent variables. We introduce L2 regularization to avoid over-fitting. We use maximum likelihood estimation, and arrive at the following objective function:

L(w | D) = (1/M) Σ_{i=1}^{M} log P(y^{(i)} | p^{(i)}, w) − (λ/2) ‖w‖²
         = (1/M) Σ_{i=1}^{M} ( log Σ_{h ∈ H^{(i)}} exp(w^⊤ Φ(p^{(i)}, y^{(i)}, h)) − log Z(w, i) ) − (λ/2) ‖w‖²

where

Z(w, i) = Σ_{y ∈ {−1,1}} Σ_{h ∈ H^{(i)}} exp(w^⊤ Φ(p^{(i)}, y, h)).

We maximize L using the BFGS algorithm (Nocedal and Wright, 1999). The gradient (with respect to w) is the following:

∇L = E_h[Φ(p^{(i)}, y^{(i)}, h)] − E_{h,y}[Φ(p^{(i)}, y, h)] − λ · w

H_p can be defined to be any subset of 2^{W(p)}, given that taking an expectation over H can be done efficiently. It is therefore possible to use prior linguistic knowledge to consider only subsets of p that are likely to be non-compositional (e.g., verb-preposition or verb-noun pairs). In our experiments we attempt to keep the approach maximally general, and define H_p to be the set of all subsets of size 1 or 2 of the content words in W(p); we use a POS tagger to identify content words, and prepositions are considered content words under this definition. We bound the size of h ∈ H_p in order to retain computational efficiency and a sufficient frequency of the LCs in H_p. MWPs of length greater than 2 are effectively approximated by their set of subsets of sizes 1 and 2. Each h can therefore be written as a 4-tuple (h_L^A, h_L^B, h_R^A, h_R^B), where h_L^A (h_R^A) denotes the first word of the LHS (RHS) predicate's LC, and h_L^B (h_R^B) denotes the (possibly empty) second word of the predicate. Inference is carried out by maximizing P(y | p^{(i)}) over y. As |H_p| = O(k^4), where k is the number of content words in p, and as the number of content words is usually small, inference can be carried out by directly summing over H^{(i)}.

Initialization. The introduction of latent variables into the log-linear model leads to a non-convex objective function. Consequently, BFGS is not guaranteed to converge to the global optimum, but rather to a stationary point. The result may therefore depend on the parameter initialization. Indeed, preliminary experiments showed that both initializing w to be zero and using a random initializer result in lower performance. Instead, we initialize our model with a simplified convex model that fixes the LC of each of the two predicates to be its left-most content word. This is a common method for selecting the predicate's LC (e.g., Melamud et al., 2013a). Once h has been fixed, the model collapses to a convex log-linear model. The optimal w is then taken as an initialization point for the latent variable model. While this method may still not converge to the global maximum, our experiments show that this initialization technique yields high-quality values for w (see Section 6).

3.3 Feature Set

This section lists the features used for our experiments. We intentionally select a feature set that relies on either completely unsupervised or shallow processing tools that are available for a wide variety of languages and domains. Given a predicate pair p^{(i)}, a label y ∈ {1, −1} and a latent state h ∈ H^{(i)}, we define their feature vector as Φ(p^{(i)}, y, h) = y · Φ(p^{(i)}, h). The computation of Φ(p^{(i)}, h) requires a reference corpus R that contains triplets of the type (p, x, y), where p is a binary predicate and x and y are its arguments.
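Before turning to the individual features, the following is a minimal sketch of the scoring, inference, and gradient computations defined in Section 3.2 above, assuming a generic feature function phi(pair, y, h) that returns a NumPy vector. It illustrates direct summation over the small latent space H; it is not the authors' implementation, which optimises the full regularised likelihood with BFGS.

```python
import numpy as np

def log_potentials(w, phi, pair, H):
    """w^T phi(pair, y, h) for y in {-1, +1} and every latent LC pair h in H."""
    return {y: np.array([w @ phi(pair, y, h) for h in H]) for y in (-1, 1)}

def predict_proba(w, phi, pair, H):
    """P(y | pair, w), summing the latent variable h out directly;
    feasible because |H| is O(k^4) for k content words and k is small."""
    scores = log_potentials(w, phi, pair, H)
    m = max(s.max() for s in scores.values())          # for numerical stability
    unnorm = {y: np.exp(s - m).sum() for y, s in scores.items()}
    Z = sum(unnorm.values())
    return {y: v / Z for y, v in unnorm.items()}

def example_gradient(w, phi, pair, H, y_gold, lam):
    """One example's contribution to the gradient:
    E_h[phi | y_gold] - E_{h,y}[phi] - lam * w  (cf. the expression above)."""
    scores = log_potentials(w, phi, pair, H)
    m = max(s.max() for s in scores.values())
    weights = {y: np.exp(s - m) for y, s in scores.items()}
    Z = sum(wts.sum() for wts in weights.values())

    feats = {y: np.stack([phi(pair, y, h) for h in H]) for y in (-1, 1)}
    # Expectation over h conditioned on the gold label
    e_h = (weights[y_gold][:, None] * feats[y_gold]).sum(0) / weights[y_gold].sum()
    # Expectation over both h and y
    e_hy = sum((weights[y][:, None] * feats[y]).sum(0) for y in (-1, 1)) / Z
    return e_h - e_hy - lam * w
```

In the full model these per-example contributions are averaged over the training set, the regularised objective is maximised with BFGS from the convex initialiser described above, and prediction takes the arg max of P(y | pair, w).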
We use the Reverb corpus as R in our experiments (Fader et al., 2011; see Section 4). We refrain from encoding features that directly reflect the vocabulary of the training set. Such features are not applicable beyond that set’s vocabulary, and as available datasets contain no more than a few thousand examples, these features are unlikely to generalize well. Table 1 presents the set of features we use in our experiments. The features can be divided into two main categories: similarity features between the LHS and the RHS predicates (table’s top), and features that reflect the individual properties of each 2|Hp| is about 15 on average in our dataset, where less than 5% of the H(i) are of size greater than 50. 647 Category Name Description Similarity COSINE DIRT cosine similarity between the vectors of hL and hR COSINEA DIRT cosine similarity between the vectors of hA L and hA R BInc DIRT BInc similarity between the vectors of hL and hR BIncA DIRT BInc similarity between the vectors of hA L and hA R Word A LHS POSA L The most frequent POS tag for the lemma of hA L POS2A L The second most frequent POS tag for the word lemma of hA L FREQA L The number of occurrences of hA L in the reference corpus COMMONA L A binary feature indicating whether hA L appears in both predicates ORDINALA L The ordinal number of hA L among the content words of the LHS predicate Pair LHS POSAB L The conjunction of POSA L and POSB L FREQAB L The frequency of hA L and hB L in the reference corpus PREFABL P(hA L|hA L) as estimated from the reference corpus PREFBAL P(hB L|hA L) as estimated from the reference corpus PMIABL The point-wise mutual information of hA L and hB L LDA TOPICSL P(topic|hL) for each of the induced topics. TOPICENTL The entropy of the topic distribution P(topic|hL) Table 1: The feature set used in our experiments. The top part presents the similarity measures based on the DIRT approach. The rest of the listed features apply to the LHS predicate (hL), and to the first word in it (hA L). Analogous features are introduced for the second word, hB L, and for the RHS predicate. The upper-middle part presents the word features for hA L. The lower-middle part presents features that apply where hL is of size 2. The bottom part lists the LDA-based features. of them. Within the LHS feature set, we distinguish between two sub-types of features: word features that encode the individual properties of hA L and hB L (table’s upper middle part), and pair features that only apply to LCs of size 2 and reflect the relation between hA L and hB L (table’s lower middle part). We further incorporate LDA-based features that reflect the selectional preferences of the predicates (table’s bottom). Distributional Similarity Features. The distributional similarity features are based on the DIRT system (Lin and Pantel, 2001). The score defines for each predicate p and for each argument slot s ∈{L, R} (corresponding to the arguments to the right and left of that predicate) a vector vp s which represents the distribution of arguments appearing in that slot. We take vp s(x) to be the number of times that the argument x appeared in the slot s of the predicate p. Given these vectors, the similarity between the predicates p1 and p2 is defined as: score(p1, p2) = q sim(vp1 L , vp2 L ) · sim(vp1 R , vp2 R ) where sim is some vector similarity measure. We use two common similarity measures: the vector cosine metric, and the BInc (Szpektor and Dagan, 2008) similarity measure. 
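As a companion to the score just defined, here is a minimal sketch of a DIRT-style computation: one argument vector per slot is collected from reference-corpus triplets, and the left- and right-slot similarities are combined by a geometric mean. Only the cosine measure is shown (BInc is omitted), and the data structures, function names, and toy extractions are assumptions for illustration.

```python
import math
from collections import Counter, defaultdict

def slot_vectors(triplets):
    """triplets: iterable of (predicate, left_arg, right_arg) extractions.
    Returns v[predicate]['L'|'R'][argument] = count of that argument in that slot."""
    v = defaultdict(lambda: {"L": Counter(), "R": Counter()})
    for pred, x, y in triplets:
        v[pred]["L"][x] += 1
        v[pred]["R"][y] += 1
    return v

def cosine(u, w):
    shared = set(u) & set(w)
    num = sum(u[a] * w[a] for a in shared)
    den = math.sqrt(sum(c * c for c in u.values())) * \
          math.sqrt(sum(c * c for c in w.values()))
    return num / den if den else 0.0

def dirt_score(v, p1, p2):
    """score(p1, p2) = sqrt( sim(v_L^p1, v_L^p2) * sim(v_R^p1, v_R^p2) )."""
    sim_l = cosine(v[p1]["L"], v[p2]["L"])
    sim_r = cosine(v[p1]["R"], v[p2]["R"])
    return math.sqrt(sim_l * sim_r)

# Toy usage with made-up extractions, echoing the paper's "annex -> control" example
triplets = [("annex", "russia", "crimea"), ("control", "russia", "crimea"),
            ("annex", "germany", "austria"), ("control", "germany", "austria"),
            ("eat", "cat", "mouse")]
v = slot_vectors(triplets)
print(round(dirt_score(v, "annex", "control"), 3))   # 1.0 on this toy data
print(round(dirt_score(v, "annex", "eat"), 3))       # 0.0: no shared arguments
```

The same slot vectors can be reused for the BInc measure, which differs only in replacing the symmetric cosine with a directional inclusion score.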
These measures give complementary perspectives on the similarity between the predicates, as the cosine similarity is symmetric between the LHS and RHS predicates, while BInc takes into account the directionality of the inference relation. Preliminary experiments with other measures, such as those of Lin (1998) and Weeds and Weir (2003) did not yield additional improvements. We encode the similarity of all measures for the pair hL and hR as well as the pair hA L and hA R. The latter feature is an approximation to the similarity between the heads of the predicates, as heads in English tend to be to the left of the predicates. These two features coincide for h values of size 1. Word and Pair Features. These features encode the basic properties of the LC. The motivation behind them is to allow a more accurate leveraging of the similarity features, as well as to better determine the relative weights of h ∈H(i). The feature set is composed of four analogous sets corresponding to hA L,hB L,hA R and hB R, as well as two sets of features that capture relations between hA L, hB L and hA R, hB R (in cases h is of size 2). The features include the ordinal index of the word within the predicate, the lemma’s frequency according to R, and a feature that indicates whether that word’s lemma also appears in both predicates of the pair. For instance, when considering the predicates “likely to come” and “likely to leave”, “likely” appears in both predicates, while “come” and “leave” appear only in one of them. In addition, we use POS-based features that encode the most frequent POS tag for the word lemma and the second most frequent POS tag (according to R). Information about the second most frequent POS tag can be important in identifying light verb constructions, such as “take a swim” or “give a smile”, where the object is derived from a verb. It can thus be interpreted as a generalization 648 of the feature that indicates whether the object is a deverbal noun, which is used by some light verb identification algorithms (Tu and Roth, 2011). In cases where hL is of size 2, we additionally encode features that apply to the conjunction of hA L and hB L. We encode the conjunction of their POS and the number of times the two lemmas occurred together in R. We also introduce features that capture the statistical correlation between the words of hL. To do so, we use point-wise mutual information, and the conditional probabilities P(hA L|hB L) and P(hB L|hA L). Similar measures have often been used for the unsupervised detection of MWEs (Villavicencio et al., 2007; Fazly and Stevenson, 2006). We also include the analogous set of features for hR. LDA-based Features. We further incorporate features based on a Latent Dirichlet Allocation (LDA) topic model (Blei et al., 2003). Several recent works have underscored the usefulness of using topic models to model a predicate’s selectional preferences (Ritter et al., 2010; Dinu and Lapata, 2010; S´eaghdha, 2010; Lewis and Steedman, 2013; Melamud et al., 2013a). We adopt the approach of Lewis and Steedman (2013), and define a pseudo-document for each LC in the evaluation corpus. We populate the pseudo-documents of an LC with its arguments according to R. We then train an LDA model with 25 topics over these documents. This yields a probability distribution P(topic|h) for each LC h, reflecting the types of arguments h may take. We further include a feature for the entropy of the topic distribution of the predicate, which reflects its heterogeneity. 
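The following is a minimal sketch of how the LDA-based features could be computed with Gensim, whose online variational Bayes LDA (Hoffman et al., 2010) is what the paper reports using (Section 4): one pseudo-document of observed arguments per LC, a topic model over those documents, and the resulting topic distribution plus its entropy as features. The function names, the toy pseudo-documents, and the input format are assumptions; the paper trains 25 topics over the Reverb arguments.

```python
import math
from gensim import corpora, models

def lda_features(lc_to_args, num_topics=25):
    """lc_to_args: dict mapping each LC to the list of argument tokens observed
    with it in the reference corpus (its pseudo-document).
    Returns, per LC, the topic distribution P(topic | LC) and its entropy."""
    lcs = list(lc_to_args)
    docs = [lc_to_args[lc] for lc in lcs]
    dictionary = corpora.Dictionary(docs)
    bows = [dictionary.doc2bow(doc) for doc in docs]
    lda = models.LdaModel(bows, id2word=dictionary, num_topics=num_topics)

    features = {}
    for lc, bow in zip(lcs, bows):
        dist = lda.get_document_topics(bow, minimum_probability=0.0)
        probs = [float(p) for _, p in dist]
        entropy = -sum(p * math.log(p) for p in probs if p > 0)
        features[lc] = {"topics": probs, "topic_entropy": entropy}
    return features

# Toy usage: arguments collected from the reference corpus for two LCs
pseudo_docs = {
    ("take", "bus"): ["passenger", "tourist", "student", "commuter"],
    ("take", "care"): ["mother", "nurse", "patient", "child"],
}
feats = lda_features(pseudo_docs, num_topics=2)
print(feats[("take", "bus")]["topic_entropy"])
```

A low topic entropy indicates a semantically homogeneous LC, while a high entropy flags the kind of heterogeneous predicate for which a more inclusive LC is expected to help.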
This feature is motivated by the assumption that a heterogeneous predicate is more likely to benefit from selecting a more inclusive LC than a homogeneous one. Technical Issues. All features used, except the similarity ones and the topic distribution features are binary. Frequency features are binned into 4 bins of equal frequency. We conjoin some of the feature sets by multiplying their values. Specifically, we add the cross product of the features of the category “Similarity” (see Table 1) with the rest of the features. In addition, we conjoin all LHS (RHS) features with an indicator feature that indicates whether hL (hR) is of size two. This results in 1605 non-constant features. We further note that some LCs that appear in the evaluation corpus do not appear at all in R. In our experiments they amounted to 0.2% of the LCs in our evaluation dataset. While previous work often discarded predicates below a certain frequency from the evaluation, we include them in order to facilitate comparison to future work. We assign the similarity features of such examples a 0 value, and assign their other numerical features the mean value of those features. 4 Experimental Setup Corpora and Preprocessing. As a reference corpus R, we use Reverb (Fader et al., 2011), a web-based corpus consisting of 15M web extractions of binary relations. Each relation is a triplet of a predicate and two arguments, one preceding it and one following it. Relations were extracted using regular expressions over the output of a POS tagger and an NP chunker. Each predicate may consist of a single verb, a verb and a preposition or a sequence of words starting in a verb and ending in a preposition, between which there may nouns, adjectives, adverbs, pronouns, determiners and verbs. The verb may also be a copula. Examples of predicates are “make the most of”, “could be exchanged for” and “is happy with”. Reverb is an appealing reference corpus for this task for several reasons. First, it uses fairly shallow preprocessing technology which is available for many domains and languages. Second, Reverb applies considerable noise filtering, which results in extractions of fair quality. Third, our evaluation dataset is based on Reverb extractions. We evaluate our algorithm on the dataset of Zeichner et al. (2012). This publicly available corpus3 provides pairs of Reverb binary relations and an indication of whether an inference relation holds between them within the context of a specific pair of argument fillers. The corpus was compiled using distributional methods to detect pairs of relations in Reverb that are likely to have an inference relation between. Annotators, employed through Amazon Mechanical Turk, were then asked to determine whether each pair is meaningful, and if so, to determine whether an inference relation holds. Further measures were taken to monitor the accuracy of the annotation. For example, the pair of predicates “make the most of” and “take advantage of” appears in the corpus as a pair between which an inference relation holds. The arguments in this case are “students” and “their university experience”. An ex3http://tinyurl.com/krx2acd 649 ample of a pair between which an inference relation does not hold is “tend to neglect” and “underestimate the importance of”, where the arguments are “Robert” and “his family”. The dataset contains 6,565 instances in total. We use 5,411 pairs of them, discarding instances that were deemed as meaningless by the annotators. 
We also discard cases where the set of arguments is reversed between the LHS and RHS predicates. In these examples, pR(x, y) is inferable from pL(y, x), rather than from pL(x, y). As there are less than 150 reversed instances in the corpus, experimenting on this sub-set is unlikely to be informative. The average length of a predicate in the corpus is 2.7 words (including function words). In 87.3% of the predicate pairs, there was more than one LC (i.e., |Hp| > 1), underscoring the importance of correctly leveraging the different LCs. We randomly partition the corpus into a training set which contains 4,343 instances (∼80%), and a test set that contains 1,068 instances, maintaining the same positive to negative label ratio in both datasets4. Development was carried out using cross-validation on the training data (see below). We use a Maximum Entropy POS Tagger, trained on the Penn Treebank, and the WordNet lemmatizer, both implemented within the NLTK package (Loper and Bird, 2002). To obtain a coarse-grained set of POS tags, we collapse the tag set to 7 categories: nouns, verbs, adjectives, adverbs, prepositions, the word “to” and a category that includes all other words. A Reverb argument is represented as the conjunction of its content words that appear more than 10 times in the corpus. Function words are defined according to their POS tags and include determiners, possessive pronouns, existential “there”, numbers and coordinating conjunctions. Auxiliary verbs and copulas are also considered function words. To compute the LDA features, we use the online variational Bayes algorithm of (Hoffman et al., 2010) as implemented in the Gensim software package (Rehurek and Sojka, 2010). Evaluated Algorithms. The only two previous works on this dataset (Melamud et al., 2013a; Melamud et al., 2013b) are not directly comparable, as they used unsupervised systems and evalu4A script that replicates our train-test partition of the corpus can be found here: http://homepages.inf.ed. ac.uk/oabend/mwpreds.html ated on sub-sets of the evaluation dataset. Instead, we use several baselines to demonstrate the usefulness of integrating multiple LCs, as well as the relative usefulness of our feature sets. The simplest baseline is ALLNEG, which predicts the most frequent label in the dataset (in our case: “no inference”). The other evaluated systems are formed by taking various subsets of our feature set. We experiment with 4 feature sets. The smallest set, SIM, includes only the similarity features. This feature set is related to the compositional distributional model of Mitchell and Lapata (2010) (see Section 6). We note that despite recent advances in identifying predicate inference relations, the DIRT system (Lin and Pantel, 2001) remains a strong baseline, and is often used as a component in state-of-the-art systems (Berant et al., 2011), and specifically in the two aforementioned works that used the same evaluation corpus. The next feature set BASIC includes the features found to be most useful during the development of the model: the most frequent POS tag, the frequency features and the feature Common. More inclusive is the feature set NO-LDA, which includes all features except the LDA features. Experiments with this set were performed in order to isolate the effect of the LDA features. Finally, ALL includes our complete set of features. The more direct comparison is against partial implementations of our system where the LC h is deterministically selected. 
Determining h for each predicate yields a regular log-linear binary classification model. We use two variants of this baseline. The first, LEFTMOST, selects the left-most content word for each predicate. Similar selection strategy was carried out by Melamud et al. (2013a). The second, VPREP, selects h to be the verb along with its following preposition. In cases the predicate contains multiple verbs, the one preceding the preposition is selected, and where the predicate does not contain any non-copula verbs, it regresses to LEFTMOST. This LC selection method approximates a baseline that includes subcategorized prepositions. Such cases are highly frequent and account for a large portion of the MWPs in English. Including a verb’s preposition in its LC was commonly done in previous work (e.g., Lewis and Steedman, 2013). We also attempted to identify verb-preposition constructions using a dependency parser. Unfortunately, our evaluation dataset is only available in 650 a lemmatized version, which posed a difficulty for the parser. Due to the low quality of the resulting parses, we implemented VPREP using POS-based regular expressions as defined above. The full model is denoted with LATENTLC. For each system and feature set, we report results using 10-fold cross-validation on the training set, as well as results on the test set. Both cases use the same set of parameters determined by crossvalidation on the training set. As the task at hand is a binary classification problem, we use accuracy scores to rate the performance of our systems. 5 Results Table 2 presents the results of our experiments. Rows correspond to the evaluated algorithms, while columns correspond to the feature sets used and the evaluation scenarios (i.e., training set cross-validation or test set evaluation). Our experiments make first use of this dataset in its fullest form for the problem of supervised learning of inference relations, and may serve as a starting point for further exploration of this dataset. For all feature sets and settings, LATENTLC scored highest, often with a considerable margin of up to 3.0% in the cross-validation and up to 4.6% on the test set relative to the LEFTMOST baseline, and 5.1% (cross-validation) and 6.8% (test) margins relative to VPREP. The best scoring result of our LATENTLC model in the cross-validation scenario is 65.72%, obtained by the feature set All. The best scoring result by any of the baseline models in this scenario is 62.7%, obtained by the same feature set. For the test set scenario, LATENTLC obtained its highest accuracy, 65.73%, when using the feature set Basic. This is a substantial improvement over the highest scoring baseline model in this scenario that obtained 61.6% accuracy, using the feature set All. This performance gap is substantial when taking into consideration that the improvements obtained by the highly competitive DIRT similarity features using the stronger LEFTMOST baseline, result in an improvement of 3.1% and 5.3% over the trivial ALLNEG baseline in the test set and cross-validation scenarios respectively. Comparing the different feature sets on our proposed model, we find that the Basic feature set gives a consistent and substantial increase over the Sim feature set. Improvements are of 2.8% (test) and 2.2% (cross-validation). Introducing more elaborate features (i.e., the feature sets NoLDA and All) yields some improvements in the crossvalidation, but these improvements are not replicated on the test set. 
This may be due to idiosyncrasies in the test set that are averaged out in the cross-validation scenario. For a qualitative analysis, we took the best performing model of the data set (i.e., with the Basic feature set), and extracted the set of instances where it made a correct prediction while both baselines made an error. This set contains many verb-preposition pairs, such as “list as →report as” or “submit via →deliver by”, underscoring the utility of leveraging multiple LCs rather than considering only a head word (as with LEFTMOST) or the entire phrase (as with VPREP). Other examples in this set contain more complex patterns. These include the positive pairs “talk much about →have much to say about” and “increase with →go up with”, and the negative “make prediction about →meet the challenge of” and “enjoy watching →love to play”. 6 Discussion Relation to CDS. Much recent work subsumed under the title Compositional Distributional Semantics addressed the distributional representation of multi-word phrases (see Section 2). This line of work focuses on compositional predicates, such as “kick the ball” and not on idiosyncratic predicates such as “kick the bucket”. A variant of the CDS approach can be framed within ours. Assume we wish to compute the similarity of the predicates pL = (w1, ..., wn) and pR = (w′ 1, ..., w′ m). Let us denote the vector space representations of the individual words as v1, ..., vn and v′ 1, ..., v′ m respectively. A standard approach in CDS is to compose distributional representations by taking their vector sum vL = v1 + v2... + vn and vR = v′ 1 + ... + v′ m (Mitchell and Lapata, 2010). One of the most effective similarity measures is the cosine similarity, which is a normalized dot product. The distributional similarity between pL and pR under this model is sim(pL, pR) = Pn i=1 Pm j=1 sim(wi, w′ j), where sim(wi, w′ j) is the dot product between vi and v′ j. This similarity score is similar in spirit to a simplified version of our statistical model that restricts the set of allowable LCs Hp to be {({wi}, {w′ j})|i ≤n, j ≤m}, i.e., only LCs of size 1. Indeed, taking Hp as above, and cosine similarity as the only feature (i.e., w ∈R), yields the distribution 651 Test Set Cross Validation Algorithm Sim Basic NoLDA All Sim Basic NoLDA All LATENTLC 62.9 65.7 64.4 64.6 62.7 ± 1.9 64.9 ± 1.9 65.0 ± 1.7 65.7 ±1.9 LEFTMOST 59.0 61.1 60.0 60.4 61.2 ± 2.1 62.5 ± 2.4 62.4 ±2.2 62.7 ± 2.0 VPREP 56.1 60.9 60.7 61.6∗ 58.1 ± 1.7 60.8 ± 2.2 60.4 ± 2.6 60.6 ± 2.2 ALLNEG 55.9 55.9 Table 2: Results for the various evaluated systems. Accuracy results are presented in percents, followed in the cross validation scenario by the standard deviation over the folds. The rows correspond to the various systems as defined in Section 4. LATENTLC is our proposed model. The columns correspond to the various feature sets, from the least to the most inclusive. SIM includes only similarity features. BASIC additionally includes POS-based and frequency features. NOLDA includes all features except LDA-based features. ALL is the full feature set. ALLNEG is the classifier that invariably predicts the label “no inference”. Bold marks best overall accuracy per column, and ∗marks figures that are not significantly worse (McNemar’s test, p < 0.05). The same positive to negative label ratio was maintained in both the cross validation and test set scenarios. In all cases, LATENTLC obtains substantial improvements over the baseline systems. P(y|p) ∝ X (wi,w′ j)∈Hp exp ` w · y · sim(wi, w′ j) ´ . 
This derivation highlights the relation of a simplified version of our approach to the additive CDS model, as both approaches effectively average over the similarities of all pairs of words in pL and pR. The derivation also highlights a few advantages of our approach. First, our approach allows to straightforwardly introduce additional features and to weight them in a way most consistent with the task at hand. Second, it allows much more flexibility in defining the set of allowable LCs, Hp. Specifically, Hp may contain LCs of sizes greater than 1. Third, our approach uses standard probabilistic modelling, and therefore has a natural statistical interpretation. In order to appreciate the effect of these advantages, we perform an experiment that takes H to be the set of all LCs of size 1, and uses a single similarity measure. We run a 10-fold crossvalidation on our training data, obtaining 61.3% accuracy using COSINE and 62.2% accuracy using BInc. The performance gap between these results and the accuracy obtained by our full model (65.7%) underscores the latter’s effectiveness in integrating multiple features and LCs. Effectiveness of Optimization Method. Our maximization of the log-likelihood function is not guaranteed to converge to a global optimum. Therefore, the quality of the learned parameters may be sensitive to the initialization point. We hereby describe an experiment that tests the sensitivity of our approach to such variance. Selecting the highest scoring feature set on our test set (i.e., BASIC), we ran the model with multiple initializers, by randomly perturbing our standard convex initializer (see Section 3). Concretely, given a convex initializer w, we select the starting point to be w + η, where ηi ∼N(0, α|wi|). We ran this experiment 400 times with α = 0.8. To combine the resulting weight vectors into a single classifier, we apply two types of standard approaches: a Product of Experts (Hinton, 2002), as well as a voting approach that selects the most frequently predicted label. Neither of these experiments yielded any significant performance gain. This demonstrates the robustness of our optimization method to the initialization point. 7 Conclusion We have presented a novel approach to the distributional representation of multi-word predicates. Since MWPs demonstrate varying levels of compositionality, a uniform treatment of MWPs either as fixed expressions or through head words is lacking. Instead, our approach integrates multiple lexical units contained in the predicate. The approach takes into account both multi-word LCs that address low compositionality cases, as well as single-word LCs that address compositional cases and are more frequent. It assumes a latent distribution over the LCs of the predicates, and estimates it relative to a target application task. We addressed the supervised inference identification task, obtaining substantial improvement over state-of-the-art baseline systems. In future work we intend to assess the benefit of this approach in MWP classes that are well-known from the literature. We believe that a permissive approach that integrates multiple analyses would perform better than standard single-analysis methods in a wide range of applications. Acknowledgements. We would like to thank Mike Lewis, Reshef Meir, Oren Melamud, Michael Roth and Nathan Schneider for their helpful comments. This work was supported by ERC Advanced Fellowship 249520 GRAMPLUS. 652 References Alex Alsina, Joan Wanda Bresnan, and Peter Sells. 1997. Complex predicates. 
Center for the Study of Language and Information. Timothy Baldwin, Colin Bannard, Takaaki Tanaka, and Dominic Widdows. 2003. An empirical model of multiword expression decomposability. In Proceedings of the ACL 2003 workshop on Multiword expressions: analysis, acquisition and treatmentVolume 18, pages 89–96. Marco Baroni and Roberto Zamparelli. 2010. Nouns are vectors, adjectives are matrices: Representing adjective-noun constructions in semantic space. In EMNLP, pages 1183–1193. Jonathan Berant, Jacob Goldberger, and Ido Dagan. 2011. Global learning of typed entailment rules. In ACL, pages 610–619. Chris Biemann and Eugenie Giesbrecht. 2011. Distributional semantics and compositionality 2011: Shared task description and results. In Workshop on Distributional Semantics and Compositionality, pages 21–28. David M Blei, Andrew Y Ng, and Michael I Jordan. 2003. Latent Dirichlet allocation. the Journal of machine Learning research, 3:993–1022. Miriam Butt. 2010. The light verb jungle: still hacking away. In Complex predicates: cross-linguistic perspectives on event structure, pages 48–78. Cambridge University Press. Bob Coecke, Mehrnoosh Sadrzadeh, and Stephen Clark. 2010. Mathematical foundations for a compositional distributional model of meaning. In J. van Bentham, M. Moortgat, and W. Buszkowski, editors, Linguistic Analysis, volume 36, pages 435–384. Ido Dagan, Dan Roth, Mark Sammons, and Fabio Massimo Zanzotto. 2013. Recognizing textual entailment: Models and applications. Synthesis Lectures on Human Language Technologies, 6(4):1–220. Georgiana Dinu and Mirella Lapata. 2010. Topic models for meaning similarity in context. In COLING: Posters, pages 250–258. Georgiana Dinu and Rui Wang. 2009. Inference rules and their application to recognizing textual entailment. In EACL, pages 211–219. Anthony Fader, Stephen Soderland, and Oren Etzioni. 2011. Identifying relations for open information extraction. In EMNLP, pages 1535–1545. Afsaneh Fazly and Suzanne Stevenson. 2006. Automatically constructing a lexicon of verb phrase idiomatic combinations. In EACL, pages 337–344. Masato Hagiwara, Yasuhiro Ogawa, and Katsuhiko Toyama. 2009. Supervised synonym acquisition using distributional features and syntactic patterns. Information and Media Technologies, 4(2):558–582. Geoffrey E Hinton. 2002. Training products of experts by minimizing contrastive divergence. Neural computation, 14(8):1771–1800. Matthew Hoffman, Francis R Bach, and David M Blei. 2010. Online learning for latent Dirichlet allocation. In NIPS, pages 856–864. Ray Jackendoff. 2002. Foundations of language: Brain, meaning, grammar, evolution. Oxford University Press. Douwe Kiela and Stephen Clark. 2013. Detecting compositionality of multi-word expressions using nearest neighbours in vector space models. In EMNLP, pages 1427–1432. Mike Lewis and Mark Steedman. 2013. Combined distributional and logical semantics. TACL, 1:179– 192. Dekang Lin and Patrick Pantel. 2001. DIRT – discovery of inference rules from text. In SIGKDD 2001, pages 323–328. Dekang Lin. 1998. Automatic retrieval and clustering of similar words. In COLING-ACL, pages 768–774. Dekang Lin. 1999. Automatic identification of noncompositional phrases. In ACL, pages 317–324. Edward Loper and Steven Bird. 2002. NLTK: The natural language toolkit. In ACL Workshop on Effective tools and methodologies for teaching natural language processing and computational linguistics, pages 63–70. Diana McCarthy, Bill Keller, and John Carroll. 2003. 
Detecting a continuum of compositionality in phrasal verbs. In ACL workshop on Multiword expressions: analysis, acquisition and treatment, pages 73–80. Oren Melamud, Jonathan Berant, Ido Dagan, Jacob Goldberger, and Idan Szpektor. 2013a. A two level model for context sensitive inference rules. In ACL 2013, pages 1331–1340. Oren Melamud, Ido Dagan, Jacob Goldberger, and Idan Szpektor. 2013b. Using lexical expansion to learn inference rules from sparse data. In ACL: Short Papers, pages 283–288. Shachar Mirkin, Ido Dagan, and Maayan Geffet. 2006. Integrating pattern-based and distributional similarity methods for lexical entailment acquisition. In COLING-ACL: Poster Session, pages 579–586. Jeff Mitchell and Mirella Lapata. 2010. Composition in distributional models of semantics. Cognitive Science, 34(8):1388–1429. Jorge. Nocedal and Stephen J Wright. 1999. Numerical optimization, volume 2. Springer New York. 653 Karl Pichotta and John DeNero. 2013. Identifying phrasal verbs using many bilingual corpora. In EMNLP, pages 636–646. Carlos Ramisch, Aline Villavicencio, and Valia Kordoni. 2013. Introduction to the special issue on multiword expressions: From theory to practice and use. ACM Transactions on Speech and Language Processing (TSLP), 10(2):3. Deepak Ravichandran and Eduard Hovy. 2002. Learning surface text patterns for a question answering system. In ACL, pages 41–47. Siva Reddy, Diana McCarthy, and Suresh Manandhar. 2011. An empirical study on compositionality in compound nouns. In IJCNLP, pages 210–218. Radim Rehurek and Petr Sojka. 2010. Software framework for topic modelling with large corpora. In Proceedings of LREC 2010 workshop New Challenges for NLP Frameworks, pages 46–50. Alan Ritter, Mausam, and Oren Etzioni. 2010. A latent Dirichlet allocation method for selectional preferences. In ACL, pages 424–434. Ivan A. Sag, Timothy Baldwin, Francis Bond, Ann Copestake, and Dan Flickinger. 2001. Multiword expressions: A pain in the neck for NLP. In CICLing, pages 1–15. Stefan Schoenmackers, Oren Etzioni, Daniel S Weld, and Jesse Davis. 2010. Learning first-order Horn clauses from web text. In EMNLP, pages 1088– 1098. Diarmuid ´O. S´eaghdha. 2010. Latent variable models of selectional preference. In ACL 2010, pages 435– 444. Maggie Seaton and Alison Macaulay, editors. 2002. Collins COBUILD Idioms Dictionary. HarperCollins Publishers, 2nd edition. Satoshi Sekine. 2005. Automatic paraphrase discovery based on context and keywords between NE pairs. In IWP, pages 4–6. Yusuke Shinyama and Satoshi Sekine. 2006. Preemptive information extraction using unrestricted relation discovery. In HLT-NAACL, pages 304–311. Idan Szpektor and Ido Dagan. 2008. Learning entailment rules for unary templates. In COLING, pages 849–856. Yuancheng Tu and Dan Roth. 2011. Learning English light verb constructions: contextual or statistical. In ACL HLT 2011, page 31. Peter D Turney and Patrick Pantel. 2010. From frequency to meaning: Vector space models of semantics. Journal of artificial intelligence research, 37(1):141–188. Aline Villavicencio, Valia Kordoni, Yi Zhang, Marco Idiart, and Carlos Ramisch. 2007. Validation and evaluation of automatically acquired multiword expressions for grammar engineering. In EMNLPCoNLL, pages 1034–1043. Veronika Vincze, Istv´an Nagy T., and Rich´ard Farkas. 2013. Identifying English and Hungarian light verb constructions: A contrastive approach. In ACL: Short Papers, pages 255–261. Julie Weeds and David Weir. 2003. A general framework for distributional similarity. 
In EMNLP, pages 81–88. Hila Weisman, Jonathan Berant, Idan Szpektor, and Ido Dagan. 2012. Learning verb inference rules from linguistically-motivated evidence. In EMNLPCoNLL, pages 194–204. Dominic Widdows. 2008. Semantic vector products: Some initial investigations. In Second AAAI Symposium on Quantum Interaction, volume 26, pages 28–35. Alison Wray. 2008. Formulaic language: Pushing the boundaries. Oxford University Press. Fabio Massimo Zanzotto, Marco Pennacchiotti, and Maria Teresa Pazienza. 2006. Discovering asymmetric entailment relations between verbs using selectional preferences. In ACL-COLING, pages 849– 856. Naomi Zeichner, Jonathan Berant, and Ido Dagan. 2012. Crowdsourcing inference-rule evaluation. In ACL: Short Papers, pages 156–160. 654
Proceedings of the 52nd Annual Meeting of the Association for Computational Linguistics, pages 655–665, Baltimore, Maryland, USA, June 23-25 2014. c⃝2014 Association for Computational Linguistics A Convolutional Neural Network for Modelling Sentences Nal Kalchbrenner Edward Grefenstette {nal.kalchbrenner, edward.grefenstette, phil.blunsom}@cs.ox.ac.uk Department of Computer Science University of Oxford Phil Blunsom Abstract The ability to accurately represent sentences is central to language understanding. We describe a convolutional architecture dubbed the Dynamic Convolutional Neural Network (DCNN) that we adopt for the semantic modelling of sentences. The network uses Dynamic k-Max Pooling, a global pooling operation over linear sequences. The network handles input sentences of varying length and induces a feature graph over the sentence that is capable of explicitly capturing short and long-range relations. The network does not rely on a parse tree and is easily applicable to any language. We test the DCNN in four experiments: small scale binary and multi-class sentiment prediction, six-way question classification and Twitter sentiment prediction by distant supervision. The network achieves excellent performance in the first three tasks and a greater than 25% error reduction in the last task with respect to the strongest baseline. 1 Introduction The aim of a sentence model is to analyse and represent the semantic content of a sentence for purposes of classification or generation. The sentence modelling problem is at the core of many tasks involving a degree of natural language comprehension. These tasks include sentiment analysis, paraphrase detection, entailment recognition, summarisation, discourse analysis, machine translation, grounded language learning and image retrieval. Since individual sentences are rarely observed or not observed at all, one must represent a sentence in terms of features that depend on the words and short n-grams in the sentence that are frequently observed. The core of a sentence model involves a feature function that defines the process The cat sat on the red mat The cat sat on the red mat Figure 1: Subgraph of a feature graph induced over an input sentence in a Dynamic Convolutional Neural Network. The full induced graph has multiple subgraphs of this kind with a distinct set of edges; subgraphs may merge at different layers. The left diagram emphasises the pooled nodes. The width of the convolutional filters is 3 and 2 respectively. With dynamic pooling, a filter with small width at the higher layers can relate phrases far apart in the input sentence. by which the features of the sentence are extracted from the features of the words or n-grams. Various types of models of meaning have been proposed. Composition based methods have been applied to vector representations of word meaning obtained from co-occurrence statistics to obtain vectors for longer phrases. In some cases, composition is defined by algebraic operations over word meaning vectors to produce sentence meaning vectors (Erk and Pad´o, 2008; Mitchell and Lapata, 2008; Mitchell and Lapata, 2010; Turney, 2012; Erk, 2012; Clarke, 2012). In other cases, a composition function is learned and either tied to particular syntactic relations (Guevara, 2010; Zanzotto et al., 2010) or to particular word types (Baroni and Zamparelli, 2010; Coecke et al., 2010; Grefenstette and Sadrzadeh, 2011; Kartsaklis and Sadrzadeh, 2013; Grefenstette, 2013). 
Another approach represents the meaning of sentences by way of automatically extracted logical forms (Zettlemoyer and Collins, 2005). 655 A central class of models are those based on neural networks. These range from basic neural bag-of-words or bag-of-n-grams models to the more structured recursive neural networks and to time-delay neural networks based on convolutional operations (Collobert and Weston, 2008; Socher et al., 2011; Kalchbrenner and Blunsom, 2013b). Neural sentence models have a number of advantages. They can be trained to obtain generic vectors for words and phrases by predicting, for instance, the contexts in which the words and phrases occur. Through supervised training, neural sentence models can fine-tune these vectors to information that is specific to a certain task. Besides comprising powerful classifiers as part of their architecture, neural sentence models can be used to condition a neural language model to generate sentences word by word (Schwenk, 2012; Mikolov and Zweig, 2012; Kalchbrenner and Blunsom, 2013a). We define a convolutional neural network architecture and apply it to the semantic modelling of sentences. The network handles input sequences of varying length. The layers in the network interleave one-dimensional convolutional layers and dynamic k-max pooling layers. Dynamic k-max pooling is a generalisation of the max pooling operator. The max pooling operator is a non-linear subsampling function that returns the maximum of a set of values (LeCun et al., 1998). The operator is generalised in two respects. First, kmax pooling over a linear sequence of values returns the subsequence of k maximum values in the sequence, instead of the single maximum value. Secondly, the pooling parameter k can be dynamically chosen by making k a function of other aspects of the network or the input. The convolutional layers apply onedimensional filters across each row of features in the sentence matrix. Convolving the same filter with the n-gram at every position in the sentence allows the features to be extracted independently of their position in the sentence. A convolutional layer followed by a dynamic pooling layer and a non-linearity form a feature map. Like in the convolutional networks for object recognition (LeCun et al., 1998), we enrich the representation in the first layer by computing multiple feature maps with different filters applied to the input sentence. Subsequent layers also have multiple feature maps computed by convolving filters with all the maps from the layer below. The weights at these layers form an order-4 tensor. The resulting architecture is dubbed a Dynamic Convolutional Neural Network. Multiple layers of convolutional and dynamic pooling operations induce a structured feature graph over the input sentence. Figure 1 illustrates such a graph. Small filters at higher layers can capture syntactic or semantic relations between noncontinuous phrases that are far apart in the input sentence. The feature graph induces a hierarchical structure somewhat akin to that in a syntactic parse tree. The structure is not tied to purely syntactic relations and is internal to the neural network. We experiment with the network in four settings. The first two experiments involve predicting the sentiment of movie reviews (Socher et al., 2013b). The network outperforms other approaches in both the binary and the multi-class experiments. The third experiment involves the categorisation of questions in six question types in the TREC dataset (Li and Roth, 2002). 
The network matches the accuracy of other state-of-theart methods that are based on large sets of engineered features and hand-coded knowledge resources. The fourth experiment involves predicting the sentiment of Twitter posts using distant supervision (Go et al., 2009). The network is trained on 1.6 million tweets labelled automatically according to the emoticon that occurs in them. On the hand-labelled test set, the network achieves a greater than 25% reduction in the prediction error with respect to the strongest unigram and bigram baseline reported in Go et al. (2009). The outline of the paper is as follows. Section 2 describes the background to the DCNN including central concepts and related neural sentence models. Section 3 defines the relevant operators and the layers of the network. Section 4 treats of the induced feature graph and other properties of the network. Section 5 discusses the experiments and inspects the learnt feature detectors.1 2 Background The layers of the DCNN are formed by a convolution operation followed by a pooling operation. We begin with a review of related neural sentence models. Then we describe the operation of onedimensional convolution and the classical TimeDelay Neural Network (TDNN) (Hinton, 1989; Waibel et al., 1990). By adding a max pooling 1Code available at www.nal.co 656 layer to the network, the TDNN can be adopted as a sentence model (Collobert and Weston, 2008). 2.1 Related Neural Sentence Models Various neural sentence models have been described. A general class of basic sentence models is that of Neural Bag-of-Words (NBoW) models. These generally consist of a projection layer that maps words, sub-word units or n-grams to high dimensional embeddings; the latter are then combined component-wise with an operation such as summation. The resulting combined vector is classified through one or more fully connected layers. A model that adopts a more general structure provided by an external parse tree is the Recursive Neural Network (RecNN) (Pollack, 1990; K¨uchler and Goller, 1996; Socher et al., 2011; Hermann and Blunsom, 2013). At every node in the tree the contexts at the left and right children of the node are combined by a classical layer. The weights of the layer are shared across all nodes in the tree. The layer computed at the top node gives a representation for the sentence. The Recurrent Neural Network (RNN) is a special case of the recursive network where the structure that is followed is a simple linear chain (Gers and Schmidhuber, 2001; Mikolov et al., 2011). The RNN is primarily used as a language model, but may also be viewed as a sentence model with a linear structure. The layer computed at the last word represents the sentence. Finally, a further class of neural sentence models is based on the convolution operation and the TDNN architecture (Collobert and Weston, 2008; Kalchbrenner and Blunsom, 2013b). Certain concepts used in these models are central to the DCNN and we describe them next. 2.2 Convolution The one-dimensional convolution is an operation between a vector of weights m ∈Rm and a vector of inputs viewed as a sequence s ∈Rs. The vector m is the filter of the convolution. Concretely, we think of s as the input sentence and si ∈R is a single feature value associated with the i-th word in the sentence. 
The idea behind the one-dimensional convolution is to take the dot product of the vector m with each m-gram in the sentence s to obtain another sequence c: cj = m⊺sj−m+1:j (1) Equation 1 gives rise to two types of convolution depending on the range of the index j. The narrow type of convolution requires that s ≥m and yields s1 s1 ss ss c1 c5 c5 Figure 2: Narrow and wide types of convolution. The filter m has size m = 5. a sequence c ∈Rs−m+1 with j ranging from m to s. The wide type of convolution does not have requirements on s or m and yields a sequence c ∈ Rs+m−1 where the index j ranges from 1 to s + m −1. Out-of-range input values si where i < 1 or i > s are taken to be zero. The result of the narrow convolution is a subsequence of the result of the wide convolution. The two types of onedimensional convolution are illustrated in Fig. 2. The trained weights in the filter m correspond to a linguistic feature detector that learns to recognise a specific class of n-grams. These n-grams have size n ≤m, where m is the width of the filter. Applying the weights m in a wide convolution has some advantages over applying them in a narrow one. A wide convolution ensures that all weights in the filter reach the entire sentence, including the words at the margins. This is particularly significant when m is set to a relatively large value such as 8 or 10. In addition, a wide convolution guarantees that the application of the filter m to the input sentence s always produces a valid non-empty result c, independently of the width m and the sentence length s. We next describe the classical convolutional layer of a TDNN. 2.3 Time-Delay Neural Networks A TDNN convolves a sequence of inputs s with a set of weights m. As in the TDNN for phoneme recognition (Waibel et al., 1990), the sequence s is viewed as having a time dimension and the convolution is applied over the time dimension. Each sj is often not just a single value, but a vector of d values so that s ∈Rd×s. Likewise, m is a matrix of weights of size d × m. Each row of m is convolved with the corresponding row of s and the convolution is usually of the narrow type. Multiple convolutional layers may be stacked by taking the resulting sequence c as input to the next layer. The Max-TDNN sentence model is based on the architecture of a TDNN (Collobert and Weston, 2008). In the model, a convolutional layer of the narrow type is applied to the sentence matrix s, where each column corresponds to the feature vec657 tor wi ∈Rd of a word in the sentence: s =  w1 . . . ws   (2) To address the problem of varying sentence lengths, the Max-TDNN takes the maximum of each row in the resulting matrix c yielding a vector of d values: cmax =   max(c1,:) ... max(cd,:)   (3) The aim is to capture the most relevant feature, i.e. the one with the highest value, for each of the d rows of the resulting matrix c. The fixed-sized vector cmax is then used as input to a fully connected layer for classification. The Max-TDNN model has many desirable properties. It is sensitive to the order of the words in the sentence and it does not depend on external language-specific features such as dependency or constituency parse trees. It also gives largely uniform importance to the signal coming from each of the words in the sentence, with the exception of words at the margins that are considered fewer times in the computation of the narrow convolution. But the model also has some limiting aspects. The range of the feature detectors is limited to the span m of the weights. 
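For concreteness, the two convolution types of Eq. 1 and the max-over-time pooling of the Max-TDNN (Eqs. 2–3) can be sketched in a few lines of NumPy. The code below is illustrative only and is not the implementation used in the paper; the names conv1d and max_tdnn_pool are introduced purely for this example.

```python
import numpy as np

def conv1d(s, m, mode="wide"):
    """One-dimensional convolution of a sequence s with a filter m (Eq. 1).
    mode="narrow": len(s) - len(m) + 1 values, requires len(s) >= len(m);
    mode="wide":   len(s) + len(m) - 1 values, out-of-range inputs taken as zero."""
    s, m = np.asarray(s, dtype=float), np.asarray(m, dtype=float)
    if mode == "narrow":
        return np.array([m @ s[j - len(m):j] for j in range(len(m), len(s) + 1)])
    padded = np.concatenate([np.zeros(len(m) - 1), s, np.zeros(len(m) - 1)])
    return np.array([m @ padded[j:j + len(m)] for j in range(len(s) + len(m) - 1)])

def max_tdnn_pool(C):
    """Max-over-time pooling of the Max-TDNN: one maximum per feature row (Eq. 3)."""
    return C.max(axis=1)

# A 7-word "sentence" with one feature value per word and a filter of width m = 3.
s = [1.0, 2.0, 0.5, -1.0, 3.0, 0.0, 1.5]
m = [0.2, -0.1, 0.4]
print(conv1d(s, m, mode="narrow").shape)   # (5,) = s - m + 1
print(conv1d(s, m, mode="wide").shape)     # (9,) = s + m - 1

# In the Max-TDNN each row of the sentence matrix is convolved (narrowly) with the
# corresponding filter row; max pooling over time then gives one value per row.
s2 = [0.5, -0.2, 1.0, 0.3, -1.5, 2.0, 0.1]
m2 = [0.1, 0.1, 0.1]
C = np.stack([conv1d(s, m, mode="narrow"), conv1d(s2, m2, mode="narrow")])
print(max_tdnn_pool(C))                    # a vector with d = 2 entries
```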
Increasing m or stacking multiple convolutional layers of the narrow type makes the range of the feature detectors larger; at the same time it also exacerbates the neglect of the margins of the sentence and increases the minimum size s of the input sentence required by the convolution. For this reason higher-order and long-range feature detectors cannot be easily incorporated into the model. The max pooling operation has some disadvantages too. It cannot distinguish whether a relevant feature in one of the rows occurs just one or multiple times and it forgets the order in which the features occur. More generally, the pooling factor by which the signal of the matrix is reduced at once corresponds to s−m+1; even for moderate values of s the pooling factor can be excessive. The aim of the next section is to address these limitations while preserving the advantages. 3 Convolutional Neural Networks with Dynamic k-Max Pooling We model sentences using a convolutional architecture that alternates wide convolutional layers K-Max pooling (k=3) Fully connected layer Folding Wide convolution (m=2) Dynamic k-max pooling (k= f(s) =5) Projected sentence matrix (s=7) Wide convolution (m=3) The cat sat on the red mat Figure 3: A DCNN for the seven word input sentence. Word embeddings have size d = 4. The network has two convolutional layers with two feature maps each. The widths of the filters at the two layers are respectively 3 and 2. The (dynamic) k-max pooling layers have values k of 5 and 3. with dynamic pooling layers given by dynamic kmax pooling. In the network the width of a feature map at an intermediate layer varies depending on the length of the input sentence; the resulting architecture is the Dynamic Convolutional Neural Network. Figure 3 represents a DCNN. We proceed to describe the network in detail. 3.1 Wide Convolution Given an input sentence, to obtain the first layer of the DCNN we take the embedding wi ∈Rd for each word in the sentence and construct the sentence matrix s ∈Rd×s as in Eq. 2. The values in the embeddings wi are parameters that are optimised during training. A convolutional layer in the network is obtained by convolving a matrix of weights m ∈Rd×m with the matrix of activations at the layer below. For example, the second layer is obtained by applying a convolution to the sentence matrix s itself. Dimension d and filter width m are hyper-parameters of the network. We let the operations be wide one-dimensional convolutions as described in Sect. 2.2. The resulting matrix c has dimensions d × (s + m −1). 658 3.2 k-Max Pooling We next describe a pooling operation that is a generalisation of the max pooling over the time dimension used in the Max-TDNN sentence model and different from the local max pooling operations applied in a convolutional network for object recognition (LeCun et al., 1998). Given a value k and a sequence p ∈Rp of length p ≥k, kmax pooling selects the subsequence pk max of the k highest values of p. The order of the values in pk max corresponds to their original order in p. The k-max pooling operation makes it possible to pool the k most active features in p that may be a number of positions apart; it preserves the order of the features, but is insensitive to their specific positions. It can also discern more finely the number of times the feature is highly activated in p and the progression by which the high activations of the feature change across p. The k-max pooling operator is applied in the network after the topmost convolutional layer. 
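A minimal sketch of the operator (illustrative NumPy code, not the paper's implementation; the name kmax_pool is introduced only for this example):

```python
import numpy as np

def kmax_pool(p, k):
    """k-max pooling over a sequence p: keep the k largest values of p,
    in their original left-to-right order (Sect. 3.2)."""
    p = np.asarray(p, dtype=float)
    keep = np.sort(np.argsort(p)[-k:])    # positions of the k largest values
    return p[keep]

row = [0.2, 1.5, -0.3, 0.9, 2.1, 0.0, 1.1]
print(kmax_pool(row, k=3))                # [1.5 2.1 1.1]
# For a d x s matrix of activations, the same operation is applied to each row
# independently; after the topmost convolutional layer k is fixed to k_top.
```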
This guarantees that the input to the fully connected layers is independent of the length of the input sentence. But, as we see next, at intermediate convolutional layers the pooling parameter k is not fixed, but is dynamically selected in order to allow for a smooth extraction of higherorder and longer-range features. 3.3 Dynamic k-Max Pooling A dynamic k-max pooling operation is a k-max pooling operation where we let k be a function of the length of the sentence and the depth of the network. Although many functions are possible, we simply model the pooling parameter as follows: kl = max( ktop, ⌈L −l L s⌉) (4) where l is the number of the current convolutional layer to which the pooling is applied and L is the total number of convolutional layers in the network; ktop is the fixed pooling parameter for the topmost convolutional layer (Sect. 3.2). For instance, in a network with three convolutional layers and ktop = 3, for an input sentence of length s = 18, the pooling parameter at the first layer is k1 = 12 and the pooling parameter at the second layer is k2 = 6; the third layer has the fixed pooling parameter k3 = ktop = 3. Equation 4 is a model of the number of values needed to describe the relevant parts of the progression of an l-th order feature over a sentence of length s. For an example in sentiment prediction, according to the equation a first order feature such as a positive word occurs at most k1 times in a sentence of length s, whereas a second order feature such as a negated phrase or clause occurs at most k2 times. 3.4 Non-linear Feature Function After (dynamic) k-max pooling is applied to the result of a convolution, a bias b ∈Rd and a nonlinear function g are applied component-wise to the pooled matrix. There is a single bias value for each row of the pooled matrix. If we temporarily ignore the pooling layer, we may state how one computes each d-dimensional column a in the matrix a resulting after the convolutional and non-linear layers. Define M to be the matrix of diagonals: M = [diag(m:,1), . . . , diag(m:,m)] (5) where m are the weights of the d filters of the wide convolution. Then after the first pair of a convolutional and a non-linear layer, each column a in the matrix a is obtained as follows, for some index j: a = g   M   wj ... wj+m−1  + b    (6) Here a is a column of first order features. Second order features are similarly obtained by applying Eq. 6 to a sequence of first order features aj, ..., aj+m′−1 with another weight matrix M′. Barring pooling, Eq. 6 represents a core aspect of the feature extraction function and has a rather general form that we return to below. Together with pooling, the feature function induces position invariance and makes the range of higher-order features variable. 3.5 Multiple Feature Maps So far we have described how one applies a wide convolution, a (dynamic) k-max pooling layer and a non-linear function to the input sentence matrix to obtain a first order feature map. The three operations can be repeated to yield feature maps of increasing order and a network of increasing depth. We denote a feature map of the i-th order by Fi. As in convolutional networks for object recognition, to increase the number of learnt feature detectors of a certain order, multiple feature maps Fi 1, . . . , Fi n may be computed in parallel at the same layer. 
Each feature map Fi j is computed by convolving a distinct set of filters arranged in a matrix mi j,k with each feature map Fi−1 k of the lower order i −1 and summing the results: 659 Fi j = n X k=1 mi j,k ∗Fi−1 k (7) where ∗indicates the wide convolution. The weights mi j,k form an order-4 tensor. After the wide convolution, first dynamic k-max pooling and then the non-linear function are applied individually to each map. 3.6 Folding In the formulation of the network so far, feature detectors applied to an individual row of the sentence matrix s can have many orders and create complex dependencies across the same rows in multiple feature maps. Feature detectors in different rows, however, are independent of each other until the top fully connected layer. Full dependence between different rows could be achieved by making M in Eq. 5 a full matrix instead of a sparse matrix of diagonals. Here we explore a simpler method called folding that does not introduce any additional parameters. After a convolutional layer and before (dynamic) k-max pooling, one just sums every two rows in a feature map component-wise. For a map of d rows, folding returns a map of d/2 rows, thus halving the size of the representation. With a folding layer, a feature detector of the i-th order depends now on two rows of feature values in the lower maps of order i −1. This ends the description of the DCNN. 4 Properties of the Sentence Model We describe some of the properties of the sentence model based on the DCNN. We describe the notion of the feature graph induced over a sentence by the succession of convolutional and pooling layers. We briefly relate the properties to those of other neural sentence models. 4.1 Word and n-Gram Order One of the basic properties is sensitivity to the order of the words in the input sentence. For most applications and in order to learn fine-grained feature detectors, it is beneficial for a model to be able to discriminate whether a specific n-gram occurs in the input. Likewise, it is beneficial for a model to be able to tell the relative position of the most relevant n-grams. The network is designed to capture these two aspects. The filters m of the wide convolution in the first layer can learn to recognise specific n-grams that have size less or equal to the filter width m; as we see in the experiments, m in the first layer is often set to a relatively large value such as 10. The subsequence of n-grams extracted by the generalised pooling operation induces invariance to absolute positions, but maintains their order and relative positions. As regards the other neural sentence models, the class of NBoW models is by definition insensitive to word order. A sentence model based on a recurrent neural network is sensitive to word order, but it has a bias towards the latest words that it takes as input (Mikolov et al., 2011). This gives the RNN excellent performance at language modelling, but it is suboptimal for remembering at once the ngrams further back in the input sentence. Similarly, a recursive neural network is sensitive to word order but has a bias towards the topmost nodes in the tree; shallower trees mitigate this effect to some extent (Socher et al., 2013a). As seen in Sect. 2.3, the Max-TDNN is sensitive to word order, but max pooling only picks out a single ngram feature in each row of the sentence matrix. 4.2 Induced Feature Graph Some sentence models use internal or external structure to compute the representation for the input sentence. 
In a DCNN, the convolution and pooling layers induce an internal feature graph over the input. A node from a layer is connected to a node from the next higher layer if the lower node is involved in the convolution that computes the value of the higher node. Nodes that are not selected by the pooling operation at a layer are dropped from the graph. After the last pooling layer, the remaining nodes connect to a single topmost root. The induced graph is a connected, directed acyclic graph with weighted edges and a root node; two equivalent representations of an induced graph are given in Fig. 1. In a DCNN without folding layers, each of the d rows of the sentence matrix induces a subgraph that joins the other subgraphs only at the root node. Each subgraph may have a different shape that reflects the kind of relations that are detected in that subgraph. The effect of folding layers is to join pairs of subgraphs at lower layers before the top root node. Convolutional networks for object recognition also induce a feature graph over the input image. What makes the feature graph of a DCNN peculiar is the global range of the pooling operations. The (dynamic) k-max pooling operator can draw together features that correspond to words that are many positions apart in the sentence. Higher-order features have highly variable ranges that can be ei660 ther short and focused or global and long as the input sentence. Likewise, the edges of a subgraph in the induced graph reflect these varying ranges. The subgraphs can either be localised to one or more parts of the sentence or spread more widely across the sentence. This structure is internal to the network and is defined by the forward propagation of the input through the network. Of the other sentence models, the NBoW is a shallow model and the RNN has a linear chain structure. The subgraphs induced in the MaxTDNN model have a single fixed-range feature obtained through max pooling. The recursive neural network follows the structure of an external parse tree. Features of variable range are computed at each node of the tree combining one or more of the children of the tree. Unlike in a DCNN, where one learns a clear hierarchy of feature orders, in a RecNN low order features like those of single words can be directly combined with higher order features computed from entire clauses. A DCNN generalises many of the structural aspects of a RecNN. The feature extraction function as stated in Eq. 6 has a more general form than that in a RecNN, where the value of m is generally 2. Likewise, the induced graph structure in a DCNN is more general than a parse tree in that it is not limited to syntactically dictated phrases; the graph structure can capture short or long-range semantic relations between words that do not necessarily correspond to the syntactic relations in a parse tree. The DCNN has internal input-dependent structure and does not rely on externally provided parse trees, which makes the DCNN directly applicable to hard-to-parse sentences such as tweets and to sentences from any language. 5 Experiments We test the network on four different experiments. We begin by specifying aspects of the implementation and the training of the network. We then relate the results of the experiments and we inspect the learnt feature detectors. 5.1 Training In each of the experiments, the top layer of the network has a fully connected layer followed by a softmax non-linearity that predicts the probability distribution over classes given the input sentence. 
The network is trained to minimise the cross-entropy of the predicted and true distributions; the objective includes an L2 regularisation Classifier Fine-grained (%) Binary (%) NB 41.0 81.8 BINB 41.9 83.1 SVM 40.7 79.4 RECNTN 45.7 85.4 MAX-TDNN 37.4 77.1 NBOW 42.4 80.5 DCNN 48.5 86.8 Table 1: Accuracy of sentiment prediction in the movie reviews dataset. The first four results are reported from Socher et al. (2013b). The baselines NB and BINB are Naive Bayes classifiers with, respectively, unigram features and unigram and bigram features. SVM is a support vector machine with unigram and bigram features. RECNTN is a recursive neural network with a tensor-based feature function, which relies on external structural features given by a parse tree and performs best among the RecNNs. term over the parameters. The set of parameters comprises the word embeddings, the filter weights and the weights from the fully connected layers. The network is trained with mini-batches by backpropagation and the gradient-based optimisation is performed using the Adagrad update rule (Duchi et al., 2011). Using the well-known convolution theorem, we can compute fast one-dimensional linear convolutions at all rows of an input matrix by using Fast Fourier Transforms. To exploit the parallelism of the operations, we train the network on a GPU. A Matlab implementation processes multiple millions of input sentences per hour on one GPU, depending primarily on the number of layers used in the network. 5.2 Sentiment Prediction in Movie Reviews The first two experiments concern the prediction of the sentiment of movie reviews in the Stanford Sentiment Treebank (Socher et al., 2013b). The output variable is binary in one experiment and can have five possible outcomes in the other: negative, somewhat negative, neutral, somewhat positive, positive. In the binary case, we use the given splits of 6920 training, 872 development and 1821 test sentences. Likewise, in the fine-grained case, we use the standard 8544/1101/2210 splits. Labelled phrases that occur as subparts of the training sentences are treated as independent training instances. The size of the vocabulary is 15448. Table 1 details the results of the experiments. 661 Classifier Features Acc. (%) HIER unigram, POS, head chunks 91.0 NE, semantic relations MAXENT unigram, bigram, trigram 92.6 POS, chunks, NE, supertags CCG parser, WordNet MAXENT unigram, bigram, trigram 93.6 POS, wh-word, head word word shape, parser hypernyms, WordNet SVM unigram, POS, wh-word 95.0 head word, parser hypernyms, WordNet 60 hand-coded rules MAX-TDNN unsupervised vectors 84.4 NBOW unsupervised vectors 88.2 DCNN unsupervised vectors 93.0 Table 2: Accuracy of six-way question classification on the TREC questions dataset. The second column details the external features used in the various approaches. The first four results are respectively from Li and Roth (2002), Blunsom et al. (2006), Huang et al. (2008) and Silva et al. (2011). In the three neural sentence models—the MaxTDNN, the NBoW and the DCNN—the word vectors are parameters of the models that are randomly initialised; their dimension d is set to 48. The Max-TDNN has a filter of width 6 in its narrow convolution at the first layer; shorter phrases are padded with zero vectors. The convolutional layer is followed by a non-linearity, a maxpooling layer and a softmax classification layer. The NBoW sums the word vectors and applies a non-linearity followed by a softmax classification layer. The adopted non-linearity is the tanh function. 
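As a concrete sketch of the training objective of Sect. 5.1 and of the NBoW baseline just described, the following NumPy code computes the forward pass and the regularised cross-entropy loss for a single example. It is illustrative only: the regularisation strength, the initialisation scale and the omission of the gradient/Adagrad updates are simplifications made here rather than details taken from the paper; the vocabulary size and embedding dimension follow the values quoted in the text.

```python
import numpy as np

rng = np.random.default_rng(0)
d, vocab, n_classes, l2 = 48, 15448, 5, 1e-4   # l2 is an assumed placeholder value

E = 0.01 * rng.standard_normal((vocab, d))      # randomly initialised word embeddings
W = 0.01 * rng.standard_normal((n_classes, d))  # fully connected softmax layer
b = np.zeros(n_classes)

def nbow_forward(word_ids):
    """NBoW baseline: sum the word vectors, apply tanh, then a softmax layer."""
    h = np.tanh(E[word_ids].sum(axis=0))
    logits = W @ h + b
    probs = np.exp(logits - logits.max())
    return probs / probs.sum()

def loss(word_ids, label):
    """Cross-entropy of the predicted distribution plus an L2 penalty on the parameters."""
    p = nbow_forward(word_ids)
    return -np.log(p[label]) + l2 * (np.sum(E**2) + np.sum(W**2))

print(loss(word_ids=np.array([3, 17, 2040, 9]), label=2))
```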
The hyper parameters of the DCNN are as follows. The binary result is based on a DCNN that has a wide convolutional layer followed by a folding layer, a dynamic k-max pooling layer and a non-linearity; it has a second wide convolutional layer followed by a folding layer, a k-max pooling layer and a non-linearity. The width of the convolutional filters is 7 and 5, respectively. The value of k for the top k-max pooling is 4. The number of feature maps at the first convolutional layer is 6; the number of maps at the second convolutional layer is 14. The network is topped by a softmax classification layer. The DCNN for the finegrained result has the same architecture, but the filters have size 10 and 7, the top pooling parameter k is 5 and the number of maps is, respectively, 6 and 12. The networks use the tanh non-linear Classifier Accuracy (%) SVM 81.6 BINB 82.7 MAXENT 83.0 MAX-TDNN 78.8 NBOW 80.9 DCNN 87.4 Table 3: Accuracy on the Twitter sentiment dataset. The three non-neural classifiers are based on unigram and bigram features; the results are reported from (Go et al., 2009). function. At training time we apply dropout to the penultimate layer after the last tanh non-linearity (Hinton et al., 2012). We see that the DCNN significantly outperforms the other neural and non-neural models. The NBoW performs similarly to the non-neural n-gram based classifiers. The Max-TDNN performs worse than the NBoW likely due to the excessive pooling of the max pooling operation; the latter discards most of the sentiment features of the words in the input sentence. Besides the RecNN that uses an external parser to produce structural features for the model, the other models use ngram based or neural features that do not require external resources or additional annotations. In the next experiment we compare the performance of the DCNN with those of methods that use heavily engineered resources. 5.3 Question Type Classification As an aid to question answering, a question may be classified as belonging to one of many question types. The TREC questions dataset involves six different question types, e.g. whether the question is about a location, about a person or about some numeric information (Li and Roth, 2002). The training dataset consists of 5452 labelled questions whereas the test dataset consists of 500 questions. The results are reported in Tab. 2. The nonneural approaches use a classifier over a large number of manually engineered features and hand-coded resources. For instance, Blunsom et al. (2006) present a Maximum Entropy model that relies on 26 sets of syntactic and semantic features including unigrams, bigrams, trigrams, POS tags, named entity tags, structural relations from a CCG parse and WordNet synsets. 
We evaluate the three neural models on this dataset with mostly the same hyper-parameters as in the binary senti662 POSITIVE lovely comedic moments and several fine performances good script , good dialogue , funny sustains throughout is daring , inventive and well written , nicely acted and beautifully remarkably solid and subtly satirical tour de NEGATIVE , nonexistent plot and pretentious visual style it fails the most basic test as so stupid , so ill conceived , , too dull and pretentious to be hood rats butt their ugly heads in 'NOT' n't have any huge laughs in its no movement , no , not much n't stop me from enjoying much of not that kung pow is n't funny not a moment that is not false 'TOO' , too dull and pretentious to be either too serious or too lighthearted , too slow , too long and too feels too formulaic and too familiar to is too predictable and too self conscious Figure 4: Top five 7-grams at four feature detectors in the first layer of the network. ment experiment of Sect. 5.2. As the dataset is rather small, we use lower-dimensional word vectors with d = 32 that are initialised with embeddings trained in an unsupervised way to predict contexts of occurrence (Turian et al., 2010). The DCNN uses a single convolutional layer with filters of size 8 and 5 feature maps. The difference between the performance of the DCNN and that of the other high-performing methods in Tab. 2 is not significant (p < 0.09). Given that the only labelled information used to train the network is the training set itself, it is notable that the network matches the performance of state-of-the-art classifiers that rely on large amounts of engineered features and rules and hand-coded resources. 5.4 Twitter Sentiment Prediction with Distant Supervision In our final experiment, we train the models on a large dataset of tweets, where a tweet is automatically labelled as positive or negative depending on the emoticon that occurs in it. The training set consists of 1.6 million tweets with emoticon-based labels and the test set of about 400 hand-annotated tweets. We preprocess the tweets minimally following the procedure described in Go et al. (2009); in addition, we also lowercase all the tokens. This results in a vocabulary of 76643 word types. The architecture of the DCNN and of the other neural models is the same as the one used in the binary experiment of Sect. 5.2. The randomly initialised word embeddings are increased in length to a dimension of d = 60. Table 3 reports the results of the experiments. We see a significant increase in the performance of the DCNN with respect to the non-neural n-gram based classifiers; in the presence of large amounts of training data these classifiers constitute particularly strong baselines. We see that the ability to train a sentiment classifier on automatically extracted emoticon-based labels extends to the DCNN and results in highly accurate performance. The difference in performance between the DCNN and the NBoW further suggests that the ability of the DCNN to both capture features based on long n-grams and to hierarchically combine these features is highly beneficial. 5.5 Visualising Feature Detectors A filter in the DCNN is associated with a feature detector or neuron that learns during training to be particularly active when presented with a specific sequence of input words. In the first layer, the sequence is a continuous n-gram from the input sentence; in higher layers, sequences can be made of multiple separate n-grams. 
We visualise the feature detectors in the first layer of the network trained on the binary sentiment task (Sect. 5.2). Since the filters have width 7, for each of the 288 feature detectors we rank all 7-grams occurring in the validation and test sets according to their activation of the detector. Figure 5.2 presents the top five 7-grams for four feature detectors. Besides the expected detectors for positive and negative sentiment, we find detectors for particles such as ‘not’ that negate sentiment and such as ‘too’ that potentiate sentiment. We find detectors for multiple other notable constructs including ‘all’, ‘or’, ‘with...that’, ‘as...as’. The feature detectors learn to recognise not just single n-grams, but patterns within n-grams that have syntactic, semantic or structural significance. 6 Conclusion We have described a dynamic convolutional neural network that uses the dynamic k-max pooling operator as a non-linear subsampling function. The feature graph induced by the network is able to capture word relations of varying size. The network achieves high performance on question and sentiment classification without requiring external features as provided by parsers or other resources. Acknowledgements We thank Nando de Freitas and Yee Whye Teh for great discussions on the paper. This work was supported by a Xerox Foundation Award, EPSRC grant number EP/F042728/1, and EPSRC grant number EP/K036580/1. 663 References Marco Baroni and Roberto Zamparelli. 2010. Nouns are vectors, adjectives are matrices: Representing adjective-noun constructions in semantic space. In EMNLP, pages 1183–1193. ACL. Phil Blunsom, Krystle Kocik, and James R. Curran. 2006. Question classification with log-linear models. In SIGIR ’06: Proceedings of the 29th annual international ACM SIGIR conference on Research and development in information retrieval, pages 615–616, New York, NY, USA. ACM. Daoud Clarke. 2012. A context-theoretic framework for compositionality in distributional semantics. Computational Linguistics, 38(1):41–71. Bob Coecke, Mehrnoosh Sadrzadeh, and Stephen Clark. 2010. Mathematical Foundations for a Compositional Distributional Model of Meaning. March. Ronan Collobert and Jason Weston. 2008. A unified architecture for natural language processing: Deep neural networks with multitask learning. In International Conference on Machine Learning, ICML. John Duchi, Elad Hazan, and Yoram Singer. 2011. Adaptive subgradient methods for online learning and stochastic optimization. J. Mach. Learn. Res., 12:2121–2159, July. Katrin Erk and Sebastian Pad´o. 2008. A structured vector space model for word meaning in context. Proceedings of the Conference on Empirical Methods in Natural Language Processing - EMNLP ’08, (October):897. Katrin Erk. 2012. Vector space models of word meaning and phrase meaning: A survey. Language and Linguistics Compass, 6(10):635–653. Felix A. Gers and Jrgen Schmidhuber. 2001. Lstm recurrent networks learn simple context-free and context-sensitive languages. IEEE Transactions on Neural Networks, 12(6):1333–1340. Alec Go, Richa Bhayani, and Lei Huang. 2009. Twitter sentiment classification using distant supervision. Processing, pages 1–6. Edward Grefenstette and Mehrnoosh Sadrzadeh. 2011. Experimental support for a categorical compositional distributional model of meaning. In Proceedings of the Conference on Empirical Methods in Natural Language Processing, pages 1394–1404. Association for Computational Linguistics. Edward Grefenstette. 2013. 
Category-theoretic quantitative compositional distributional models of natural language semantics. arXiv preprint arXiv:1311.1539. Emiliano Guevara. 2010. Modelling Adjective-Noun Compositionality by Regression. ESSLLI’10 Workshop on Compositionality and Distributional Semantic Models. Karl Moritz Hermann and Phil Blunsom. 2013. The Role of Syntax in Vector Space Models of Compositional Semantics. In Proceedings of the 51st Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers), Sofia, Bulgaria, August. Association for Computational Linguistics. Forthcoming. Geoffrey E. Hinton, Nitish Srivastava, Alex Krizhevsky, Ilya Sutskever, and Ruslan Salakhutdinov. 2012. Improving neural networks by preventing co-adaptation of feature detectors. CoRR, abs/1207.0580. Geoffrey E. Hinton. 1989. Connectionist learning procedures. Artif. Intell., 40(1-3):185–234. Zhiheng Huang, Marcus Thint, and Zengchang Qin. 2008. Question classification using head words and their hypernyms. In Proceedings of the Conference on Empirical Methods in Natural Language Processing, EMNLP ’08, pages 927–936, Stroudsburg, PA, USA. Association for Computational Linguistics. Nal Kalchbrenner and Phil Blunsom. 2013a. Recurrent continuous translation models. In Proceedings of the 2013 Conference on Empirical Methods in Natural Language Processing, Seattle, October. Association for Computational Linguistics. Nal Kalchbrenner and Phil Blunsom. 2013b. Recurrent Convolutional Neural Networks for Discourse Compositionality. In Proceedings of the Workshop on Continuous Vector Space Models and their Compositionality, Sofia, Bulgaria, August. Association for Computational Linguistics. Dimitri Kartsaklis and Mehrnoosh Sadrzadeh. 2013. Prior disambiguation of word tensors for constructing sentence vectors. In Proceedings of the 2013 Conference on Empirical Methods in Natural Language Processing (EMNLP), Seattle, USA, October. Andreas K¨uchler and Christoph Goller. 1996. Inductive learning in symbolic domains using structuredriven recurrent neural networks. In G¨unther G¨orz and Steffen H¨olldobler, editors, KI, volume 1137 of Lecture Notes in Computer Science, pages 183–197. Springer. Yann LeCun, L´eon Bottou, Yoshua Bengio, and Patrick Haffner. 1998. Gradient-based learning applied to document recognition. Proceedings of the IEEE, 86(11):2278–2324, November. Xin Li and Dan Roth. 2002. Learning question classifiers. In Proceedings of the 19th international conference on Computational linguistics-Volume 1, pages 1–7. Association for Computational Linguistics. Tomas Mikolov and Geoffrey Zweig. 2012. Context dependent recurrent neural network language model. In SLT, pages 234–239. 664 Tomas Mikolov, Stefan Kombrink, Lukas Burget, Jan Cernock´y, and Sanjeev Khudanpur. 2011. Extensions of recurrent neural network language model. In ICASSP, pages 5528–5531. IEEE. Jeff Mitchell and Mirella Lapata. 2008. Vector-based models of semantic composition. In Proceedings of ACL, volume 8. Jeff Mitchell and Mirella Lapata. 2010. Composition in distributional models of semantics. Cognitive Science, 34(8):1388–1429. Jordan B. Pollack. 1990. Recursive distributed representations. Artificial Intelligence, 46:77–105. Holger Schwenk. 2012. Continuous space translation models for phrase-based statistical machine translation. In COLING (Posters), pages 1071–1080. Joo Silva, Lusa Coheur, AnaCristina Mendes, and Andreas Wichert. 2011. From symbolic to subsymbolic information in question classification. 
Artificial Intelligence Review, 35(2):137–154. Richard Socher, Jeffrey Pennington, Eric H. Huang, Andrew Y. Ng, and Christopher D. Manning. 2011. Semi-Supervised Recursive Autoencoders for Predicting Sentiment Distributions. In Proceedings of the 2011 Conference on Empirical Methods in Natural Language Processing (EMNLP). Richard Socher, Quoc V. Le, Christopher D. Manning, and Andrew Y. Ng. 2013a. Grounded Compositional Semantics for Finding and Describing Images with Sentences. In Transactions of the Association for Computational Linguistics (TACL). Richard Socher, Alex Perelygin, Jean Wu, Jason Chuang, Christopher D. Manning, Andrew Y. Ng, and Christopher Potts. 2013b. Recursive deep models for semantic compositionality over a sentiment treebank. In Proceedings of the 2013 Conference on Empirical Methods in Natural Language Processing, pages 1631–1642, Stroudsburg, PA, October. Association for Computational Linguistics. Joseph Turian, Lev Ratinov, and Yoshua Bengio. 2010. Word representations: a simple and general method for semi-supervised learning. In Proceedings of the 48th Annual Meeting of the Association for Computational Linguistics, pages 384–394. Association for Computational Linguistics. Peter Turney. 2012. Domain and function: A dualspace model of semantic relations and compositions. J. Artif. Intell. Res.(JAIR), 44:533–585. Alexander Waibel, Toshiyuki Hanazawa, Geofrey Hinton, Kiyohiro Shikano, and Kevin J. Lang. 1990. Readings in speech recognition. chapter Phoneme Recognition Using Time-delay Neural Networks, pages 393–404. Morgan Kaufmann Publishers Inc., San Francisco, CA, USA. Fabio Massimo Zanzotto, Ioannis Korkontzelos, Francesca Fallucchi, and Suresh Manandhar. 2010. Estimating linear models for compositional distributional semantics. In Proceedings of the 23rd International Conference on Computational Linguistics, pages 1263–1271. Association for Computational Linguistics. Luke S. Zettlemoyer and Michael Collins. 2005. Learning to map sentences to logical form: Structured classification with probabilistic categorial grammars. In UAI, pages 658–666. AUAI Press. 665
Proceedings of the 52nd Annual Meeting of the Association for Computational Linguistics, pages 666–675, Baltimore, Maryland, USA, June 23-25 2014. c⃝2014 Association for Computational Linguistics Online Learning in Tensor Space Yuan Cao Sanjeev Khudanpur Center for Language & Speech Processing and Human Language Technology Center of Excellence The Johns Hopkins University Baltimore, MD, USA, 21218 {yuan.cao, khudanpur}@jhu.edu Abstract We propose an online learning algorithm based on tensor-space models. A tensorspace model represents data in a compact way, and via rank-1 approximation the weight tensor can be made highly structured, resulting in a significantly smaller number of free parameters to be estimated than in comparable vector-space models. This regularizes the model complexity and makes the tensor model highly effective in situations where a large feature set is defined but very limited resources are available for training. We apply with the proposed algorithm to a parsing task, and show that even with very little training data the learning algorithm based on a tensor model performs well, and gives significantly better results than standard learning algorithms based on traditional vectorspace models. 1 Introduction Many NLP applications use models that try to incorporate a large number of linguistic features so that as much human knowledge of language can be brought to bear on the (prediction) task as possible. This also makes training the model parameters a challenging problem, since the amount of labeled training data is usually small compared to the size of feature sets: the feature weights cannot be estimated reliably. Most traditional models are linear models, in the sense that both the features of the data and model parameters are represented as vectors in a vector space. Many learning algorithms applied to NLP problems, such as the Perceptron (Collins, 2002), MIRA (Crammer et al., 2006; McDonald et al., 2005; Chiang et al., 2008), PRO (Hopkins and May, 2011), RAMPION (Gimpel and Smith, 2012) etc., are based on vector-space models. Such models require learning individual feature weights directly, so that the number of parameters to be estimated is identical to the size of the feature set. When millions of features are used but the amount of labeled data is limited, it can be difficult to precisely estimate each feature weight. In this paper, we shift the model from vectorspace to tensor-space. Data can be represented in a compact and structured way using tensors as containers. Tensor representations have been applied to computer vision problems (Hazan et al., 2005; Shashua and Hazan, 2005) and information retrieval (Cai et al., 2006a) a long time ago. More recently, it has also been applied to parsing (Cohen and Collins, 2012; Cohen and Satta, 2013) and semantic analysis (Van de Cruys et al., 2013). A linear tensor model represents both features and weights in tensor-space, hence the weight tensor can be factorized and approximated by a linear sum of rank-1 tensors. This low-rank approximation imposes structural constraints on the feature weights and can be regarded as a form of regularization. With this representation, we no longer need to estimate individual feature weights directly but only a small number of “bases” instead. This property makes the the tensor model very effective when training a large number of feature weights in a low-resource environment. On the other hand, tensor models have many more degrees of “design freedom” than vector space models. 
While this makes them very flexible, it also creates much difficulty in designing an optimal tensor structure for a given training set. We give detailed description of the tensor space 666 model in Section 2. Several issues that come with the tensor model construction are addressed in Section 3. A tensor weight learning algorithm is then proposed in 4. Finally we give our experimental results on a parsing task and analysis in Section 5. 2 Tensor Space Representation Most of the learning algorithms for NLP problems are based on vector space models, which represent data as vectors φ ∈Rn, and try to learn feature weight vectors w ∈Rn such that a linear model y = w · φ is able to discriminate between, say, good and bad hypotheses. While this is a natural way of representing data, it is not the only choice. Below, we reformulate the model from vector to tensor space. 2.1 Tensor Space Model A tensor is a multidimensional array, and is a generalization of commonly used algebraic objects such as vectors and matrices. Specifically, a vector is a 1st order tensor, a matrix is a 2nd order tensor, and data organized as a rectangular cuboid is a 3rd order tensor etc. In general, a Dth order tensor is represented as T ∈Rn1×n2×...nD, and an entry in T is denoted by Ti1,i2,...,iD. Different dimensions of a tensor 1, 2, . . . , D are named modes of the tensor. Using a Dth order tensor as container, we can assign each feature of the task a D-dimensional index in the tensor and represent the data as tensors. Of course, shifting from a vector to a tensor representation entails several additional degrees of freedom, e.g., the order D of the tensor and the sizes {nd}D d=1 of the modes, which must be addressed when selecting a tensor model. This will be done in Section 3. 2.2 Tensor Decomposition Just as a matrix can be decomposed as a linear combination of several rank-1 matrices via SVD, tensors also admit decompositions1 into linear combinations of “rank-1” tensors. A Dth order tensor A ∈Rn1×n2×...nD is rank-1 if it can be 1The form of tensor decomposition defined here is named as CANDECOMP/PARAFAC(CP) decomposition (Kolda and Bader, 2009). Another popular form of tensor decomposition is called Tucker decomposition, which decomposes a tensor into a core tensor multiplied by a matrix along each mode. We focus only on the CP decomposition in this paper. written as the outer product of D vectors, i.e. A = a1 ⊗a2⊗, . . . , ⊗aD, where ai ∈Rnd, 1 ≤d ≤D. A Dth order tensor T ∈Rn1×n2×...nD can be factorized into a sum of component rank-1 tensors as T = R X r=1 Ar = R X r=1 a1 r ⊗a2 r⊗, . . . , ⊗aD r where R, called the rank of the tensor, is the minimum number of rank-1 tensors whose sum equals T . Via decomposition, one may approximate a tensor by the sum of H major rank-1 tensors with H ≤R. 2.3 Linear Tensor Model In tensor space, a linear model may be written (ignoring a bias term) as f(W ) = W ◦Φ, where Φ ∈Rn1×n2×...nD is the feature tensor, W is the corresponding weight tensor, and ◦denotes the Hadamard product. If W is further decomposed as the sum of H major component rank-1 tensors, i.e. W ≈PH h=1 w1 h ⊗w2 h⊗, . . . , ⊗wD h , then f(w1 1, . . . , wD 1 , . . . , w1 h, . . . , wD h ) = H X h=1 Φ ×1 w1 h ×2 w2 h . . . ×D wD h , (1) where ×l is the l-mode product operator between a Dth order tensor T and a vector a of dimension nd, yielding a (D −1)th order tensor such that (T ×l a)i1,...,il−1,il+1,...,iD = nd X il=1 Ti1,...,il−1,il,il+1,...,iD · ail. The linear tensor model is illustrated in Figure 1. 
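To make Eq. 1 concrete, the following NumPy sketch (illustrative code with arbitrary small mode sizes, not taken from the paper) scores a 3rd-order feature tensor against a weight tensor given as a sum of H rank-1 components, and checks that the result equals explicitly forming W and summing its entry-wise product with Φ:

```python
import numpy as np

rng = np.random.default_rng(0)
n1, n2, n3, H = 4, 5, 6, 2

Phi = rng.standard_normal((n1, n2, n3))                      # feature tensor Φ
Ws = [(rng.standard_normal(n1), rng.standard_normal(n2),
       rng.standard_normal(n3)) for _ in range(H)]           # rank-1 components w1_h, w2_h, w3_h

def score(Phi, Ws):
    """f = sum_h Phi x1 w1_h x2 w2_h x3 w3_h  (Eq. 1 for a 3rd-order tensor)."""
    return sum(np.einsum('ijk,i,j,k->', Phi, w1, w2, w3) for (w1, w2, w3) in Ws)

# Equivalently, the implied weight tensor W = sum_h w1_h ⊗ w2_h ⊗ w3_h gives the same
# value when its entry-wise (Hadamard) product with Φ is summed over all entries.
W = sum(np.einsum('i,j,k->ijk', w1, w2, w3) for (w1, w2, w3) in Ws)
assert np.isclose(score(Phi, Ws), np.sum(W * Phi))
```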
2.4 Why Learning in Tensor Space? So what is the advantage of learning with a tensor model instead of a vector model? Consider the case where we have defined 1,000,000 features for our task. A vector space linear model requires estimating 1,000,000 free parameters. However if we use a 2nd order tensor model, organize the features into a 1000 × 1000 matrix Φ, and use just 667 Figure 1: A 3rd order linear tensor model. The feature weight tensor W can be decomposed as the sum of a sequence of rank-1 component tensors. one rank-1 matrix to approximate the weight tensor, then the linear model becomes f(w1, w2) = wT 1 Φw2, where w1, w2 ∈R1000. That is to say, now we only need to estimate 2000 parameters! In general, if V features are defined for a learning problem, and we (i) organize the feature set as a tensor Φ ∈Rn1×n2×...nD and (ii) use H component rank-1 tensors to approximate the corresponding target weight tensor. Then the total number of parameters to be learned for this tensor model is H PD d=1 nd, which is usually much smaller than V = QD d=1 nd for a traditional vector space model. Therefore we expect the tensor model to be more effective in a low-resource training environment. Specifically, a vector space model assumes each feature weight to be a “free” parameter, and estimating them reliably could therefore be hard when training data are not sufficient or the feature set is huge. By contrast, a linear tensor model only needs to learn H PD d=1 nd “bases” of the m feature weights instead of individual weights directly. The weight corresponding to the feature Φi1,i2,...,iD in the tensor model is expressed as wi1,i2,...,iD = H X h=1 w1 h,i1w2 h,i2 . . . wD h,iD, (2) where wj h,ij is the ith j element in the vector wj h. In other words, a true feature weight is now approximated by a set of bases. This reminds us of the well-known low-rank matrix approximation of images via SVD, and we are applying similar techniques to approximate target feature weights, which is made possible only after we shift from vector to tensor space models. This approximation can be treated as a form of model regularization, since the weight tensor is represented in a constrained form and made highly structured via the rank-1 tensor approximation. Of course, as we reduce the model complexity, e.g. by choosing a smaller and smaller H, the model’s expressive ability is weakened at the same time. We will elaborate on this point in Section 3.1. 3 Tensor Model Construction To apply a tensor model, we first need to convert the feature vector into a tensor Φ. Once the structure of Φ is determined, the structure of W is fixed as well. As mentioned in Section 2.1, a tensor model has many more degrees of “design freedom” than a vector model, which makes the problem of finding a good tensor structure a nontrivial one. 3.1 Tensor Order The order of a tensor affects the model in two ways: the expressiveness of the model and the number of parameters to be estimated. We assume H = 1 in the analysis below, noting that one can always add as many rank-1 component tensors as needed to approximate a tensor with arbitrary precision. Obviously, the 1st order tensor (vector) model is the most expressive, since it is structureless and any arbitrary set of numbers can always be represented exactly as a vector. The 2nd order rank-1 tensor (rank-1 matrix) is less expressive because not every set of numbers can be organized into a rank-1 matrix. 
In general, a Dth order rank-1 tensor is more expressive than a (D + 1)th order rank-1 tensor, as a lower-order tensor imposes less structural constraints on the set of numbers it can express. We formally state this fact as follows: Theorem 1. A set of real numbers that can be represented by a (D + 1)th order tensor Q can also be represented by a Dth order tensor P, provided P and Q have the same volume. But the reverse is not true. Proof. See appendix. On the other hand, tensor order also affects the number of parameters to be trained. Assuming that a Dth order has equal size on each mode (we will elaborate on this point in Section 3.2) and the volume (number of entries) of the tensor is fixed as V , then the total number of parameters 668 of the model is DV 1 D . This is a convex function of D, and the minimum2 is reached at either D∗= ⌊ln V ⌋or D∗= ⌈ln V ⌉. Therefore, as D increases from 1 to D∗, we lose more and more of the expressive power of the model but reduce the number of parameters to be trained. However it would be a bad idea to choose a D beyond D∗. The optimal tensor order depends on the nature of the actual problem, and we tune this hyper-parameter on a held-out set. 3.2 Mode Size The size nd of each tensor mode, d = 1, . . . , D, determines the structure of feature weights a tensor model can precisely represent, as well as the number of parameters to estimate (we also assume H = 1 in the analysis below). For example, if the tensor order is 2 and the volume V is 12, then we can either choose n1 = 3, n2 = 4 or n1 = 2, n2 = 6. For n1 = 3, n2 = 4, the numbers that can be precisely represented are divided into 3 groups, each having 4 numbers, that are scaled versions of one another. Similarly for n1 = 2, n2 = 6, the numbers can be divided into 2 groups with different scales. Obviously, the two possible choices of (n1, n2) also lead to different numbers of free parameters (7 vs. 8). Given D and V , there are many possible combinations of nd, d = 1, . . . , D, and the optimal combination should indeed be determined by the structure of target features weights. However it is hard to know the structure of target feature weights before learning, and it would be impractical to try every possible combination of mode sizes, therefore we choose the criterion of determining the mode sizes as minimization of the total number of parameters, namely we solve the problem: min n1,...,nD D X d=1 nd s.t D Y d=1 nd = V The optimal solution is reached when n1 = n2 = . . . = nD = V 1 D . Of course it is not guaranteed that V 1 D is an integer, therefore we choose nd = ⌊V 1 D ⌋or ⌈V 1 D ⌉, d = 1, . . . , D such that QD d=1 nd ≥V and hQD d=1 nd i −V is minimized. The hQD d=1 nd i −V extra entries of the tensor correspond to no features and are used just for 2The optimal integer solution can be determined simply by comparing the two function values. padding. Since for each nd there are only two possible values to choose, we can simply enumerate all the possible 2D (which is usually a small number) combinations of values and pick the one that matches the conditions given above. This way n1, . . . , nD are fully determined. Here we are only following the principle of minimizing the parameter number. While this strategy might work well with small amount of training data, it is not guaranteed to be the best strategy in all cases, especially when more data is available we might want to increase the number of parameters, making the model more complex so that the data can be more precisely modeled. 
Ideally the mode size needs to be adaptive to the amount of training data as well as the property of target weights. A theoretically guaranteed optimal approach to determining the mode sizes remains an open problem, and will be explored in our future work. 3.3 Number of Rank-1 Tensors The impact of using H > 1 rank-1 tensors is obvious: a larger H increases the model complexity and makes the model more expressive, since we are able to approximate target weight tensor with smaller error. As a trade-off, the number of parameters and training complexity will be increased. To find out the optimal value of H for a given problem, we tune this hyper-parameter too on a heldout set. 3.4 Vector to Tensor Mapping Finally, we need to find a way to map the original feature vector to a tensor, i.e. to associate each feature with an index in the tensor. Assuming the tensor volume V is the same as the number of features, then there are in all V ! ways of mapping, which is an intractable number of possibilities even for modest sized feature sets, making it impractical to carry out a brute force search. However while we are doing the mapping, we hope to arrange the features in a way such that the corresponding target weight tensor has approximately a low-rank structure, this way it can be well approximated by very few component rank-1 tensors. Unfortunately we have no knowledge about the target weights in advance, since that is what we need to learn after all. As a way out, we first run a simple vector-model based learning algorithm (say the Perceptron) on the training data and estimate a weight vector, which serves as a “surro669 gate” weight vector. We then use this surrogate vector to guide the design of the mapping. Ideally we hope to find a permutation of the surrogate weights to map to a tensor in such a way that the tensor has a rank as low as possible. However matrix rank minimization is in general a hard problem (Fazel, 2002). Therefore, we follow an approximate algorithm given in Figure 2a, whose main idea is illustrated via an example in Figure 2b. Basically, what the algorithm does is to divide the surrogate weights into hierarchical groups such that groups on the same level are approximately proportional to each other. Using these groups as units we are able to “fill” the tensor in a hierarchical way. The resulting tensor will have an approximate low-rank structure, provided that the sorted feature weights have roughly group-wise proportional relations. For comparison, we also experimented a trivial solution which maps each entry of the feature tensor to the tensor just in sequential order, namely φ0 is mapped to Φ0,0,...,0, φ1 is mapped to Φ0,0,...,1 etc. This of course ignores correlation between features since the original feature order in the vector could be totally meaningless, and this strategy is not expected to be a good solution for vector to tensor mapping. 4 Online Learning Algorithm We now turn to the problem of learning the feature weight tensor. Here we propose an online learning algorithm similar to MIRA but modified to accommodate tensor models. Let the model be f(T ) = T ◦Φ(x, y), where T = PH h=1 w1 h ⊗w2 h⊗, . . . , ⊗wD h is the weight tensor, Φ(x, y) is the feature tensor for an inputoutput pair (x, y). Training samples (xi, yi), i = 1, . . . , m, where xi is the input and yi is the reference or oracle hypothesis, are fed to the weight learning algorithm in sequential order. 
A prediction zt is made by the model Tt at time t from a set of candidates Z(xt), and the model updates the weight tensor by solving the following problem: min T ∈Rn1×n2×...nD 1 2∥T −T t∥2 + Cξ (3) s.t. Lt ≤ξ, ξ ≥0 where T is a decomposed weight tensor and Lt = T ◦Φ(xt, zt) −T ◦Φ(xt, yt) + ρ(yt, zt) Input: Tensor order D, tensor volume V , mode size nd, d = 1, . . . , D, surrogate weight vector v Let v+ = [v+ 1 , . . . , v+ p ] be the non-negative part of v v−= [v− 1 , . . . , v− q ] be the negative part of v Algorithm: ˜v+ = sort(v+) in descending order ˜v−= sort(v−) in ascending order u = V/nD e = p −mod(p, u), f = q −mod(q, u) Construct vector X = [˜v+ 1 , . . . , ˜v+ e , ˜v− 1 , . . . , ˜v− f , ˜v+ e+1, . . . , ˜v+ p , ˜v− f+1, . . . , ˜v− q ] Map Xa, a = 1, . . . , p + q to the tensor entry Ti1,...,iD, such that a = D X d=1 (id −1)ld−1 + 1 where ld = ld−1nd, and l0 = 1 (a) Mapping a surrogate weight vector to a tensor (b) Illustration of the algorithm Figure 2: Algorithm for mapping a surrogate weight vector X to a tensor. (2a) provides the algorithm; (2b) illustrates it by mapping a vector of length V = 12 to a (n1, n2, n3) = (2, 2, 3) tensor. The bars Xi represent the surrogate weights — after separately sorting the positive and negative parts — and the labels along a path of the tree correspond to the tensor-index of the weight represented by the leaf resulting from the mapping. 670 is the structured hinge loss. This problem setting follows the same “passiveaggressive” strategy as in the original MIRA. To optimize the vectors wd h, h = 1, . . . , H, d = 1, . . . , D, we use a similar iterative strategy as proposed in (Cai et al., 2006b). Basically, the idea is that instead of optimizing wd h all together, we optimize w1 1, w2 1, . . . , wD H in turn. While we are updating one vector, the rest are fixed. For the problem setting given above, each of the sub-problems that need to be solved is convex, and according to (Cai et al., 2006b) the objective function value will decrease after each individual weight update and eventually this procedure will converge. We now give this procedure in more detail. Denote the weight vector of the dth mode of the hth tensor at time t as wd h,t. We will update the vectors in turn in the following order: w1 1,t, . . . , wD 1,t, w1 2,t, . . . , wD 2,t, . . . , w1 H,t, . . . , wD H,t. Once a vector has been updated, it is fixed for future updates. By way of notation, define W d h,t = w1 h,t+1⊗, . . . , ⊗wd−1 h,t+1 ⊗wd h,t⊗, . . . , ⊗wD h,t (and let W D+1 h,t ≜w1 h,t+1⊗, . . . , ⊗wD h,t+1), c W d h,t = w1 h,t+1⊗, . . . , ⊗wd−1 h,t+1 ⊗wd⊗, . . . , ⊗wD h,t (where wd ∈Rnd), T d h,t = h−1 X h′=1 W D+1 h′,t + W d h,t + H X h′=h+1 W 1 h′,t(4) bT d h,t = h−1 X h′=1 W D+1 h′,t + c W d h,t + H X h′=h+1 W 1 h′,t φd h,t(x, y) = Φ(x, y) ×2 w2 h,t+1 . . . ×d−1 wd−1 h,t+1 ×d+1 wd+1 h,t . . . ×D wD h,t (5) In order to update from wd h,t to get wd h,t+1, the sub-problem to solve is: min wd∈Rnd 1 2∥bT d h,t −T d h,t∥2 + Cξ = min wd∈Rnd 1 2∥c W d h,t −W d h,t∥2 + Cξ = min wd∈Rnd 1 2β1 h,t+1 . . . βd−1 h,t+1βd+1 h,t . . . βD h,t ∥wd −wd h,t∥2 + Cξ s.t. L d h,t ≤ξ, ξ ≥0. where βd h,t = ∥wd h,t∥2 L d h,t = bT d h,t ◦Φ(xt, zt) −bT d h,t ◦Φ(xt, yt) +ρ(yt, zt) = wd ·  φd h,t(xt, zt) −φd h,t(xt, yt)  − h−1 X h′=1 W D+1 h′,t + H X h′=h+1 W 1 h′,t ! ◦ (Φ(xt, yt) −Φ(xt, zt)) +ρ(yt, zt) Letting ∆φd h,t ≜φd h,t(xt, yt) −φd h,t(xt, zt) and sd h,t ≜ h−1 X h′=1 W D+1 h′,t + H X h′=h+1 W 1 h′,t ! ◦ (Φ(xt, yt) −Φ(xt, zt)) we may compactly write L d h,t = ρ(yt, zt) −sd h,t −wd · ∆φd h,t. 
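This sub-problem has the familiar passive-aggressive form and, as the next paragraph notes, admits the same closed-form solution as the original MIRA. Below is a minimal sketch of one such per-mode update; it is not the authors' exact derivation — in particular, the constant beta scaling factors are folded into the aggressiveness parameter C for simplicity.

```python
import numpy as np

def pa_mode_update(w_d, dphi, margin_violation, C):
    """One passive-aggressive update of a single mode vector w^d_h.
    margin_violation = cost(y, z) - (score of y - score of z) under the
    current weights; dphi = phi^d(x, y) - phi^d(x, z) for this mode.
    The beta factors from the objective are absorbed into C here."""
    if margin_violation <= 0.0:          # no hinge loss -> no update
        return w_d
    denom = float(np.dot(dphi, dphi))
    if denom == 0.0:
        return w_d
    tau = min(C, margin_violation / denom)
    return w_d + tau * dphi

# toy usage (weights start non-zero, as required by the mode products
# discussed below)
w = np.full(4, 0.5)
dphi = np.array([0.5, -1.0, 0.0, 2.0])
print(pa_mode_update(w, dphi, margin_violation=1.3, C=0.1))
```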
This convex optimization problem is just like the original MIRA and may be solved in a similar way. The updating strategy for wd h,t is derived as wd h,t+1 = wd h,t + τ∆φd h,t τ = (6) min ( C, ρ(yt, zt) −T d h,t ◦(Φ(xt, yt) −Φ(xt, zt)) ∥∆φd h,t∥2 ) The initial vectors wi h,1 cannot be made all zero, since otherwise the l-mode product in Equation (5) would yield all zero φd h,t(x, y) and the model would never get a chance to be updated. Therefore, we initialize the entries of wi h,1 uniformly such that the Frobenius-norm of the weight tensor W is unity. We call the algorithm above “Tensor-MIRA” and abbreviate it as T-MIRA. 671 5 Experiments In this section we shows empirical results of the training algorithm on a parsing task. We used the Charniak parser (Charniak et al., 2005) for our experiment, and we used the proposed algorithm to train the reranking feature weights. For comparison, we also investigated training the reranker with Perceptron and MIRA. 5.1 Experimental Settings To simulate a low-resource training environment, our training sets were selected from sections 2-9 of the Penn WSJ treebank, section 24 was used as the held-out set and section 23 as the evaluation set. We applied the default settings of the parser. There are around V = 1.33 million features in all defined for reranking, and the n-best size for reranking is set to 50. We selected the parse with the highest f-score from the 50-best list as the oracle. We would like to observe from the experiments how the amount of training data as well as different settings of the tensor degrees of freedom affects the algorithm performance. Therefore we tried all combinations of the following experimental parameters: Parameters Settings Training data (m) Sec. 2, 2-3, 2-5, 2-9 Tensor order (D) 2, 3, 4 # rank-1 tensors (H) 1, 2, 3 Vec. to tensor mapping approximate, sequential Here “approximate” and “sequential” means using, respectively, the algorithm given in Figure 2 and the sequential mapping mentioned in Section 3.4. According to the strategy given in 3.2, once the tensor order and number of features are fixed, the sizes of modes and total number of parameters to estimate are fixed as well, as shown in the tables below: D Size of modes Number of parameters 2 1155 × 1155 2310 3 110 × 110 × 111 331 4 34 × 34 × 34 × 34 136 5.2 Results and Analysis The f-scores of the held-out and evaluation set given by T-MIRA as well as the Perceptron and MIRA baseline are given in Table 1. From the results, we have the following observations: 1. When very few labeled data are available for training (compared with the number of features), T-MIRA performs much better than the vector-based models MIRA and Perceptron. However as the amount of training data increases, the advantage of T-MIRA fades away, and vector-based models catch up. This is because the weight tensors learned by T-MIRA are highly structured, which significantly reduces model/training complexity and makes the learning process very effective in a low-resource environment, but as the amount of data increases, the more complex and expressive vector-based models adapt to the data better, whereas further improvements from the tensor model is impeded by its structural constraints, making it insensitive to the increase of training data. 2. To further contrast the behavior of T-MIRA, MIRA and Perceptron, we plot the f-scores on both the training and held-out sets given by these algorithms after each training epoch in Figure 3. 
Figure 3: f-scores of T-MIRA, MIRA, and Perceptron on (a) the training set and (b) the held-out set, plotted against training iterations (see text for the setting).

The plots are for the experimental setting with the approximate (surrogate-based) mapping, # rank-1 tensors = 2, tensor order = 2, and training data = sections 2-3. It is clearly seen that both MIRA and Perceptron do much better than T-MIRA on the training set. Nevertheless, with a huge number of parameters to fit a limited amount of data, they tend to over-fit and give much worse results on the held-out set than T-MIRA does. As an aside, observe that MIRA consistently outperformed Perceptron, as expected.

3. Properties of the linear tensor model: The heuristic vector-to-tensor mapping strategy given by Figure 2 gives consistently better results than the sequential mapping strategy, as expected. To compare the two strategies further, in Figure 4 we plot the 20 largest singular values of the matrices to which the surrogate weights (given by the Perceptron after running for 1 epoch) are mapped by the two strategies (from the experiment with training data sections 2-5). From the contrast between the largest and the second-largest singular values, it can be seen that the matrix generated by the first strategy approximates a low-rank structure much better than the one generated by the second. Therefore, the performance of T-MIRA is influenced significantly by the way features are mapped to the tensor: if the corresponding target weight tensor has internal structure that makes it approximately low-rank, the learning procedure becomes more effective. The best results are consistently given by 2nd order tensor models, and the differences between the 3rd and 4th order tensors are not significant. As discussed in Section 3.1, although 3rd and 4th order tensors have fewer parameters, the benefit of reduced training complexity does not compensate for the loss of expressiveness. A 2nd order tensor has already reduced the number of parameters from the original 1.33 million to only 2310, and it does not help to further reduce the number of parameters using higher order tensors.

4. As the amount of training data increases, there is a trend that the best results come from models with more rank-1 component tensors. Adding more rank-1 tensors increases the model's complexity and expressive ability, making the model more adaptive to larger data sets.

6 Conclusion and Future Work

In this paper, we reformulated traditional linear vector-space models as tensor-space models and proposed an online learning algorithm named Tensor-MIRA. A tensor-space model is a compact representation of data, and via rank-1 tensor approximation the weight tensor can be made highly structured, so that the number of parameters to be trained is significantly reduced. This can be regarded as a form of model regularization. Therefore, compared with traditional vector-space models, learning in the tensor space is very effective when a large feature set is defined but only a small amount of training data is available. Our experimental results corroborated this argument.

As mentioned in Section 3.2, one interesting problem that merits further investigation is how to determine optimal mode sizes. The challenge of applying a tensor model comes from finding a proper tensor structure for a given problem, and
the key to solving this problem is to find a balance between the model complexity (indicated by the order and sizes of modes) and the number of parameters. Developing a theoretically guaranteed approach of finding the optimal structure for a given task will make the tensor model not only perform well in low-resource environments, but adaptive to larger data sets. 7 Acknowledgements This work was partially supported by IBM via DARPA/BOLT contract number HR0011-12-C0015 and by the National Science Foundation via award number IIS-0963898. References Deng Cai , Xiaofei He , and Jiawei Han. 2006. Tensor Space Model for Document Analysis Proceedings of the 29th Annual International ACM SIGIR Conference on Research and Development in Information Retrieval(SIGIR), 625–626. Deng Cai, Xiaofei He, and Jiawei Han. 2006. Learning with Tensor Representation Technical Report, Department of Computer Science, University of Illinois at Urbana-Champaign. 673 Mapping Approximate Sequential Rank-1 tensors 1 2 3 1 2 3 Tensor order 2 3 4 2 3 4 2 3 4 2 3 4 2 3 4 2 3 4 Held-out score 89.43 89.16 89.22 89.16 89.21 89.24 89.27 89.14 89.24 89.21 88.90 88.89 89.13 88.88 88.88 89.15 88.87 88.99 Evaluation score 89.83 89.69 MIRA 88.57 Percep 88.23 (a) Training data: Section 2 only Mapping Approximate Sequential Rank-1 tensors 1 2 3 1 2 3 Tensor order 2 3 4 2 3 4 2 3 4 2 3 4 2 3 4 2 3 4 Held-out score 89.26 89.06 89.12 89.33 89.11 89.19 89.18 89.14 89.15 89.2 89.01 88.82 89.24 88.94 88.95 89.19 88.91 88.98 Evaluation score 90.02 89.82 MIRA 89.00 Percep 88.59 (b) Training data: Section 2-3 Mapping Approximate Sequential Rank-1 tensors 1 2 3 1 2 3 Tensor order 2 3 4 2 3 4 2 3 4 2 3 4 2 3 4 2 3 4 Held-out score 89.40 89.44 89.17 89.5 89.37 89.18 89.47 89.32 89.18 89.23 89.03 88.93 89.24 88.98 88.94 89.16 89.01 88.85 Evaluation score 89.96 89.78 MIRA 89.49 Percep 89.10 (c) Training data: Section 2-5 Mapping Approximate Sequential Rank-1 tensors 2 3 4 2 3 4 Tensor order 2 3 4 2 3 4 2 3 4 2 3 4 2 3 4 2 3 4 Held-out score 89.43 89.23 89.06 89.37 89.23 89.1 89.44 89.22 89.06 89.21 88.92 88.94 89.23 88.94 88.93 89.23 88.95 88.93 Evaluation score 89.95 89.84 MIRA 89.95 Percep 89.77 (d) Training data: Section 2-9 Table 1: Parsing f-scores. Tables (a) to (d) correspond to training data with increasing size. The upper-part of each table shows the T-MIRA results with different settings, the lower-part shows the MIRA and Perceptron baselines. The evaluation scores come from the settings indicated by the best held-out scores. The best results on the held-out and evaluation data are marked in bold. 0 100 200 300 400 500 2 4 6 8 10 12 14 16 18 20 Singular value Approximate Sequential Figure 4: The top 20 singular values of the surrogate weight matrices given by two mapping algorithms. Eugene Charniak, and Mark Johnson 2005. Coarseto-fine n-Best Parsing and MaxEnt Discriminative Reranking Proceedings of the 43th Annual Meeting on Association for Computational Linguistics(ACL) 173–180. David Chiang, Yuval Marton, and Philip Resnik. 2008. Online Large-Margin Training of Syntactic and Structural Translation Features Proceedings of Empirical Methods in Natural Language Processing(EMNLP), 224–233. Shay Cohen and Michael Collins. 2012. Tensor Decomposition for Fast Parsing with Latent-Variable PCFGs Proceedings of Advances in Neural Information Processing Systems(NIPS). Shay Cohen and Giorgio Satta. 2013. Approximate PCFG Parsing Using Tensor Decomposition Proceedings of NAACL-HLT, 487–496. Michael Collins. 2002. 
Discriminative training methods for hidden Markov Models: Theory and Experiments with Perceptron. Algorithms Proceedings of Empirical Methods in Natural Language Processing(EMNLP), 10:1–8. Koby Crammer, Ofer Dekel, Joseph Keshet, Shai Shalev-Schwartz, and Yoram Singer. 2006. Online Passive-Aggressive Algorithms Journal of Machine Learning Research(JMLR), 7:551–585. Maryam Fazel. 2002. Matrix Rank Minimization with Applications PhD thesis, Stanford University. Kevin Gimpel, and Noah A. Smith 2012. Structured Ramp Loss Minimization for Machine Translation Proceedings of North American Chapter of the Association for Computational Linguistics(NAACL), 221-231. 674 Tamir Hazan, Simon Polak, and Amnon Shashua 2005. Sparse Image Coding using a 3D Non-negative Tensor Factorization Proceedings of the International Conference on Computer Vision (ICCV). Mark Hopkins and Jonathan May. 2011. Tuning as Reranking Proceedings of Empirical Methods in Natural Language Processing(EMNLP), 13521362. Tamara Kolda and Brett Bader. 2009. Tensor Decompositions and Applications SIAM Review, 51:455550. Ryan McDonald, Koby Crammer, and Fernando Pereira. 2005. Online Large-Margin Training of Dependency Parsers Proceedings of the 43rd Annual Meeting of the ACL, 91–98. Amnon Shashua, and Tamir Hazan. 2005. NonNegative Tensor Factorization with Applications to Statistics and Computer Vision Proceedings of the International Conference on Machine Learning (ICML). Tim Van de Cruys, Thierry Poibeau, and Anna Korhonen. 2013. A Tensor-based Factorization Model of Semantic Compositionality Proceedings of NAACLHLT, 1142–1151. A Proof of Theorem 1 Proof. For D = 1, it is obvious that if a set of real numbers {x1, . . . , xn} can be represented by a rank-1 matrix, it can always be represented by a vector, but the reverse is not true. For D > 1, if {x1, . . . , xn} can be represented by P = p1 ⊗p2 ⊗. . . ⊗pD, namely xi = Pi1,...,iD = QD d=1 pd id, then for any component vector in mode d, [pd 1, pd 2, . . . , pd nd] = [sd 1pd 1, sd 2pd 1, . . . , sd np dpd 1] where np d is the size of mode d of P, sd j is a constant and sd j = pi1,...,id−1,j,id+1,...,iD pi1,...,id−1,1,id+1,...,iD Therefore xi = Pi1,...,iD = x1,...,1 D Y d=1 sd id (7) and this representation is unique for a given D(up to the ordering of pj and sd j in pj, which simply assigns {x1, . . . , xn} with different indices in the tensor), due to the pairwise proportional constraint imposed by xi/xj, i, j = 1, . . . , n. If xi can also be represented by Q, then xi = Qi1,...,iD+1 = x1,...,1 QD+1 d=1 td id, where td j has a similar definition as sd j. Then it must be the case that ∃d1, d2 ∈{1, . . . , D + 1}, d ∈{1, . . . , D}, d1 ̸= d2 s.t. td1 id1td2 id2 = sd id, (8) tda ida = sdb idb, da ̸= d1, d2, db ̸= d since otherwise {x1, . . . , xn} would be represented by a different set of factors than those given in Equation (7). Therefore, in order for tensor Q to represent the same set of real numbers that P represents, there needs to exist a vector [sd 1, . . . , sd nd] that can be represented by a rank-1 matrix as indicated by Equation (8), which is in general not guaranteed. On the other hand, if {x1, . . . , xn} can be represented by Q, namely xi = Qi1,...,iD+1 = D+1 Y d=1 qd id then we can just pick d1 ∈{1, . . . , D}, d2 = d1 + 1 and let q′ = [qd1 1 qd2 1 , qd1 1 qd2 2 , . . . , qd1 nq d2 qd2 nq d1 ] and Q′ = q1 ⊗. . .⊗qd1−1 ⊗q′ ⊗qd2+1 ⊗. . .⊗qD+1 Hence {x1, . . . , xn} can also be represented by a Dth order tensor Q′. 675
2014
63
Proceedings of the 52nd Annual Meeting of the Association for Computational Linguistics, pages 676–686, Baltimore, Maryland, USA, June 23-25 2014. c⃝2014 Association for Computational Linguistics Graph-based Semi-Supervised Learning of Translation Models from Monolingual Data Avneesh Saluja⇤ Carnegie Mellon University Pittsburgh, PA 15213, USA [email protected] Hany Hassan, Kristina Toutanova, Chris Quirk Microsoft Research Redmond, WA 98502, USA hanyh,kristout,[email protected] Abstract Statistical phrase-based translation learns translation rules from bilingual corpora, and has traditionally only used monolingual evidence to construct features that rescore existing translation candidates. In this work, we present a semi-supervised graph-based approach for generating new translation rules that leverages bilingual and monolingual data. The proposed technique first constructs phrase graphs using both source and target language monolingual corpora. Next, graph propagation identifies translations of phrases that were not observed in the bilingual corpus, assuming that similar phrases have similar translations. We report results on a large Arabic-English system and a medium-sized Urdu-English system. Our proposed approach significantly improves the performance of competitive phrasebased systems, leading to consistent improvements between 1 and 4 BLEU points on standard evaluation sets. 1 Introduction Statistical approaches to machine translation (SMT) use sentence-aligned, parallel corpora to learn translation rules along with their probabilities. With large amounts of data, phrase-based translation systems (Koehn et al., 2003; Chiang, 2007) achieve state-of-the-art results in many typologically diverse language pairs (Bojar et al., 2013). However, the limiting factor in the success of these techniques is parallel data availability. Even in resource-rich languages, learning reliable translations of multiword phrases is a challenge, and an adequate phrasal inventory is crucial ⇤This work was done while the first author was interning at Microsoft Research for effective translation. This problem is exacerbated in the many language pairs for which parallel resources are either limited or nonexistent. While parallel data is generally scarce, monolingual resources exist in abundance and are being created at accelerating rates. Can we use monolingual data to augment the phrasal translations acquired from parallel data? The challenge of learning translations from monolingual data is of long standing interest, and has been approached in several ways (Rapp, 1995; Callison-Burch et al., 2006; Haghighi et al., 2008; Ravi and Knight, 2011). Our work introduces a new take on the problem using graphbased semi-supervised learning to acquire translation rules and probabilities by leveraging both monolingual and parallel data resources. On the source side, labeled phrases (those with known translations) are extracted from bilingual corpora, and unlabeled phrases are extracted from monolingual corpora; together they are embedded as nodes in a graph, with the monolingual data determining edge strengths between nodes (§2.2). Unlike previous work (Irvine and Callison-Burch, 2013a; Razmara et al., 2013), we use higher order n-grams instead of restricting to unigrams, since our approach goes beyond OOV mitigation and can enrich the entire translation model by using evidence from monolingual text. This enhancement alone results in an improvement of almost 1.4 BLEU points. 
On the target side, phrases initially consisting of translations from the parallel data are selectively expanded with generated candidates (§2.1), and are embedded in a target graph. We then limit the set of translation options for each unlabeled source phrase (§2.3), and using a structured graph propagation algorithm, where translation information is propagated from labeled to unlabeled phrases proportional to both source and target phrase similarities, we estimate probability distributions over translations for 676 Source! Target! el gato! los gatos! un gato! cat! the cat! the cats! a cat! Target! Prob.! the cat! 0.7! cat! 0.15! …! …! felino! canino! el perro! Target! Prob.! canine! 0.6! dog! 0.3! …! …! Target! Prob.! the cats! 0.8! cats! 0.1! …! …! Target! Prob.! the dog! 0.9! dog! 0.05! …! …! canine! dog! the dog! catlike! Figure 1: Example source and target graphs used in our approach. Labeled phrases on the source side are black (with their corresponding translations on the target side also black); unlabeled and generated (§2.1) phrases on the source and target sides respectively are white. Labeled phrases also have conditional probability distributions defined over target phrases, which are extracted from the parallel corpora. the unlabeled source phrases (§2.4). The additional phrases are incorporated in the SMT system through a secondary phrase table (§2.5). We evaluated the proposed approach on both ArabicEnglish and Urdu-English under a range of scenarios (§3), varying the amount and type of monolingual corpora used, and obtained improvements between 1 and 4 BLEU points, even when using very large language models. 2 Generation & Propagation Our goal is to obtain translation distributions for source phrases that are not present in the phrase table extracted from the parallel corpus. Both parallel and monolingual corpora are used to obtain these probability distributions over target phrases. We assume that sufficient parallel resources exist to learn a basic translation model using standard techniques, and also assume the availability of larger monolingual corpora in both the source and target languages. Although our technique applies to phrases of any length, in this work we concentrate on unigram and bigram phrases, which provides substantial computational cost savings. Monolingual data is used to construct separate similarity graphs over phrases (word sequences), as illustrated in Fig. 1. The source similarity graph consists of phrase nodes representing sequences of words in the source language. If a source phrase is found in the baseline phrase table it is called a labeled phrase: its conditional empirical probability distribution over target phrases (estimated from the parallel data) is used as the label, and is subsequently never changed. Otherwise it is called an unlabeled phrase, and our algorithm finds labels (translations) for these unlabeled phrases, with the help of the graph-based representation. The label space is thus the phrasal translation inventory, and like the source side it can also be represented in terms of a graph, initially consisting of target phrase nodes from the parallel corpus. For the unlabeled phrases, the set of possible target translations could be extremely large (e.g., all target language n-grams). Therefore, we first generate and fix a list of possible target translations for each unlabeled source phrase. We then propagate by deriving a probability distribution over these target phrases using graph propagation techniques. 
Next, we will describe the generation, graph construction and propagation steps. 2.1 Generation The objective of the generation step is to populate the target graph with additional target phrases for all unlabeled source phrases, yielding the full set of possible translations for the phrase. Prior to generation, one phrase node for each target phrase occurring in the baseline phrase table is added to the target graph (black nodes in Fig. 1’s target graph). We only consider target phrases whose source phrase is a bigram, but it is worth noting that the target phrases are of variable length. The generation component is based on the observation that for structured label spaces, such as translation candidates for source phrases in SMT, even similar phrases have slightly different labels (target translations). The exponential dependence 677 of the sizes of these spaces on the length of instances is to blame. Thus, the target phrase inventory from the parallel corpus may be inadequate for unlabeled instances. We therefore need to enrich the target or label space for unknown phrases. A na¨ıve way to achieve this goal would be to extract all n-grams, from n = 1 to a maximum ngram order, from the monolingual data, but this strategy would lead to a combinatorial explosion in the number of target phrases. Instead, by intelligently expanding the target space using linguistic information such as morphology (Toutanova et al., 2008; Chahuneau et al., 2013), or relying on the baseline system to generate candidates similar to self-training (McClosky et al., 2006), we can tractably propose novel translation candidates (white nodes in Fig. 1’s target graph) whose probabilities are then estimated during propagation. We refer to these additional candidates as “generated” candidates. To generate new translation candidates using the baseline system, we decode each unlabeled source bigram to generate its m-best translations. This set of candidate phrases is filtered to include only n-grams occurring in the target monolingual corpus, and helps to prune passed-through OOV words and invalid translations. To generate new translation candidates using morphological information, we morphologically segment words into prefixes, stem, and suffixes using linguistic resources. We assume that a morphological analyzer which provides context-independent analysis of word types exists, and implements the functions STEM(f) and STEM(e) for source and target word types. Based on these functions, source and target sequences of words can be mapped to sequences of stems. The morphological generation step adds to the target graph all target word sequences from the monolingual data that map to the same stem sequence as one of the target phrases occurring in the baseline phrase table. In other words, this step adds phrases that are morphological variants of existing phrases, differing only in their affixes. 2.2 Graph Construction At this stage, there exists a list of source bigram phrases, both labeled and unlabeled, as well as a list of target language phrases of variable length, originating from both the phrase table and the generation step. To determine pairwise phrase similarities in order to embed these nodes in their graphs, we utilize the monolingual corpora on both the source and target sides to extract distributional features based on the context surrounding each phrase. 
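The next few sentences spell out the exact feature definition (windows of p words on each side, PMI weighting, cosine similarity, an inverted index, and top-k neighbors). As a rough illustration of this kind of distributional graph construction — not the authors' implementation, restricted to unigram phrases for brevity and omitting the frequent-feature filter q — consider the following sketch.

```python
from collections import defaultdict
import math

def build_graph(corpus_sents, phrases, p=2, k=10):
    """Toy distributional-similarity graph over phrases.
    Context = bag of words within p tokens on either side of an occurrence;
    counts -> PMI; cosine over PMI vectors; keep the top-k neighbors."""
    ctx = defaultdict(lambda: defaultdict(int))   # phrase -> feature -> count
    inverted = defaultdict(set)                   # feature -> phrases sharing it
    total = 0
    for sent in corpus_sents:
        toks = sent.split()
        for i, tok in enumerate(toks):            # unigram "phrases" only here
            if tok not in phrases:
                continue
            for j in range(max(0, i - p), min(len(toks), i + p + 1)):
                if j == i:
                    continue
                feat = ("L" if j < i else "R", toks[j])   # side-marked context word
                ctx[tok][feat] += 1
                inverted[feat].add(tok)
                total += 1
    # convert co-occurrence counts to PMI values
    feat_tot, phr_tot = defaultdict(int), defaultdict(int)
    for ph, feats in ctx.items():
        for f, c in feats.items():
            feat_tot[f] += c
            phr_tot[ph] += c
    pmi = {ph: {f: math.log(c * total / (phr_tot[ph] * feat_tot[f]))
                for f, c in feats.items()}
           for ph, feats in ctx.items()}

    def cos(a, b):
        shared = set(a) & set(b)
        num = sum(a[f] * b[f] for f in shared)
        den = math.sqrt(sum(v * v for v in a.values())) * \
              math.sqrt(sum(v * v for v in b.values()))
        return num / den if den else 0.0

    graph = {}
    for ph in pmi:
        # only compare against phrases sharing at least one feature (inverted index)
        cands = {o for f in ctx[ph] for o in inverted[f] if o != ph}
        sims = sorted(((cos(pmi[ph], pmi[o]), o) for o in cands), reverse=True)
        graph[ph] = sims[:k]
    return graph
```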
For a phrase, we look at the p words before and the p words after the phrase, explicitly distinguishing between the two sides, but not distance (i.e., bag of words on each side). Co-occurrence counts for each feature (context word) are accumulated over the monolingual corpus, and these counts are converted to pointwise mutual information (PMI) values, as is standard practice when computing distributional similarities. Cosine similarity between two phrases’ PMI vectors is used for similarity, and we take only the k most similar phrases for each phrase, to create a k-nearest neighbor similarity matrix for both source and target language phrases. These graphs are distinct, in that propagation happens within the two graphs but not between them. While accumulating co-occurrence counts for each phrase, we also maintain an inverted index data structure, which is a mapping from features (context words) to phrases that co-occur with that feature within a window of p.1 The inverted index structure reduces the graph construction cost from ✓(n2), by only computing similarities for a subset of all possible pairs of phrases, namely other phrases that have at least one feature in common. 2.3 Candidate Translation List Construction As mentioned previously, we construct and fix a set of translation candidates, i.e., the label set for each unlabeled source phrase. The probability distribution over these translations is estimated through graph propagation, and the probabilities of items outside the list are assumed to be zero. We obtain these candidates from two sources:2 1. The union of each unlabeled phrase’s labeled neighbors’ labels, which represents the set of target phrases that occur as translations of source phrases that are similar to the unlabeled source phrase. For un gato in Fig. 1, this source would yield the cat and cat, among others, as candidates. 2. The generated candidates for the unlabeled phrase – the ones from the baseline system’s 1The q most frequent words in the monolingual corpus were removed as keys from this mapping, as these high entropy features do not provide much information. 2We also obtained the k-nearest neighbors of the translation candidates generated through these methods by utilizing the target graph, but this had minimal impact. 678 decoder output, or from a morphological generator (e.g., a cat and catlike in Fig. 1). The morphologically-generated candidates for a given source unlabeled phrase are initially defined as the target word sequences in the monolingual data that have the same stem sequence as one of the baseline’s target translations for a source phrase which has the same stem sequence as the unlabeled source phrase. These candidates are scored using stem-level translation probabilities, morpheme-level lexical weighting probabilities, and a language model, and only the top 30 candidates are included. After obtaining candidates from these two possible sources, the list is sorted by forward lexical score, using the lexical models of the baseline system. The top r candidates are then chosen for each phrase’s translation candidate list. In Figure 2 we provide example outputs of our system for a handful of unlabeled source phrases, and explicitly note the source of the translation candidate (‘G’ for generated, ‘N’ for labeled neighbor’s label). 2.4 Graph Propagation A graph propagation algorithm transfers label information from labeled nodes to unlabeled nodes by following the graph’s structure. 
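Before describing propagation itself, recall that each unlabeled phrase first needs the fixed candidate list of §2.3. A minimal sketch of how such a list could be assembled follows; the function and argument names are illustrative rather than taken from the paper, and `lex_score` stands in for the baseline's forward lexical model.

```python
def candidate_list(unlabeled_phrase, neighbors, phrase_table,
                   generated, lex_score, r=20):
    """Assemble the fixed candidate list for an unlabeled source phrase:
    labeled neighbors' translations plus generated candidates,
    ranked by forward lexical score, keeping the top r."""
    candidates = set(generated.get(unlabeled_phrase, []))   # decoder / morphology
    for neighbor in neighbors:                 # k-nearest source phrases
        if neighbor in phrase_table:           # labeled neighbor: take its labels
            candidates.update(phrase_table[neighbor].keys())
    ranked = sorted(candidates,
                    key=lambda e: lex_score(unlabeled_phrase, e),
                    reverse=True)
    return ranked[:r]
```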
In some applications, a label may consist of class membership information, e.g., each node can belong to one of a certain number of classes. In our problem, the “label” for each node is actually a probability distribution over a set of translation candidates (target phrases). For a given node f, let e refer to a candidate in the label set for node f; then in graph propagation, the probability of candidate e given source phrase f in iteration t + 1 is: Pt+1(e|f) = X j2N(f) Ts(j|f)Pt(e|j) (1) where the set N(f) contains the (labeled and unlabeled) neighbors of node f, and Ts(j|f) is a term that captures how similar nodes f and j are. This quantity is also known as the propagation probability, and its exact form will depend on the type of graph propagation algorithm used. For our purposes, node f is a source phrasal node, the set N(f) refers to other source phrases that are neighbors of f (restricted to the k-nearest neighbors as in §2.2), and the aim is to estimate P(e|f), the probability of target phrase e being a phrasal translation of source phrase f. A classic propagation algorithm that has been suitably modified for use in bilingual lexicon induction (Tamura et al., 2012; Razmara et al., 2013) is the label propagation (LP) algorithm of Zhu et al. (2003). In this case, Ts(f, j) is chosen to be: Ts(j|f) = ws f,j P j02N(f) ws f,j0 (2) where ws f,j is the cosine similarity (as computed in §2.2) between phrase f and phrase j on side s (the source side). As evident in Eq. 2, LP only takes into account source language similarity of phrases. To see this observation more clearly, let us reformulate Eq. 1 more generally as: Pt+1(e|f) = X j2N (f) Ts(j|f) X e02H(j) Tt(e0|e)Pt(e0|j) (3) where H(j) is the translation candidate set for source phrase j, and Tt(e0|e) is the propagation probability between nodes or phrases e and e0 on the target side. We have simply replaced Pt(e|j) with P e02H(j) Tt(e0|e)Pt(e0|j), defining it in terms of j’s translation candidate list. Note that in the original LP formulation the target side information is disregarded, i.e., Tt(e0|e) = 1 if and only if e = e0 and 0 otherwise. As a result, LP is suboptimal for our needs, since it is unable to appropriately handle generated translation candidates for the unlabeled phrases. These translation candidates are usually not present as translations for the labeled phrases (or for the labeled phrases that neighbor the unlabeled one in question). When propagating information from the labeled phrases, such candidates will obtain no probability mass since e 6= e0. Thus, due to the setup of the problem, LP naturally biases away from translation candidates produced during the generation step (§2.1). 2.4.1 Structured Label Propagation The label set we are considering has a similarity structure encoded by the target graph. How can we exploit this structure in graph propagation on the source graph? In Liu et al. (2012), the authors generalize label propagation to structured label propagation (SLP) in an effort to work more elegantly with structured labels. In particular, the definition of target similarity is similar to that of source similarity: Tt(e0|e) = wt e,e0 P e002H(j) wt e,e00 (4) 679 Therefore, the final update equation in SLP is: Pt+1(e|f) = X j2N (f) Ts(j|f) X e02H(j) Tt(e0|e)Pt(e0|j) (5) With this formulation, even if e 6= e0, the similarity Tt(e0|e) as determined by the target phrase graph will dictate propagation probability. 
We renormalize the probability distributions after each propagation step to sum to one over the fixed list of translation candidates, and run the SLP algorithm to convergence.3 2.5 Phrase-based SMT Expansion After graph propagation, each unlabeled phrase is labeled with a categorical distribution over the set of translation candidates defined in §2.3. In order to utilize these newly acquired phrase pairs, we need to compute their relevant features. The phrase pairs have four log-probability features with two likelihood features and two lexical weighting features. In addition, we use a sophisticated lexicalized hierarchical reordering model (HRM) (Galley and Manning, 2008) with five features for each phrase pair. We utilize the graph propagation-estimated forward phrasal probabilities P(e|f) as the forward likelihood probabilities for the acquired phrases; to obtain the backward phrasal probability for a given phrase pair, we make use of Bayes’ Theorem: P(f|e) = P(e|f)P(f) P(e) where the marginal probabilities of source and target phrases e and f are obtained from the counts extracted from the monolingual data. The baseline system’s lexical models are used for the forward and backward lexical scores. The HRM probabilities for the new phrase pairs are estimated from the baseline system by backing-off to the average values for phrases with similar length. 3 Evaluation We performed an extensive evaluation to examine various aspects of the approach along with overall system performance. Two language pairs were used: Arabic-English and Urdu-English. The Arabic-English evaluation was used to validate the decisions made during the development of our 3Empirically within a few iterations and a wall-clock time of less than 10 minutes in total. method and also to highlight properties of the technique. With it, in §3.2 we first analyzed the impact of utilizing phrases instead of words and SLP instead of LP; the latter experiment underscores the importance of generated candidates. We also look at how adding morphological knowledge to the generation process can further enrich performance. In §3.3, we then examined the effect of using a very large 5-gram language model training on 7.5 billion English tokens to understand the nature of the improvements in §3.2. The Urdu to English evaluation in §3.4 focuses on how noisy parallel data and completely monolingual (i.e., not even comparable) text can be used for a realistic low-resource language pair, and is evaluated with the larger language model only. We also examine how our approach can learn from noisy parallel data compared to the traditional SMT system. Baseline phrasal systems are used both for comparison and for generating translation candidates for unlabeled phrases as described in §2.1. The baseline is a state-of-the-art phrase-based system; we perform word alignment using a lexicalized hidden Markov model, and then the phrase table is extracted using the grow-diag-final heuristic (Koehn et al., 2003). The 13 baseline features (2 lexical, 2 phrasal, 5 HRM, and 1 language model, word penalty, phrase length feature and distortion penalty feature) were tuned using MERT (Och, 2003), which is also used to tune the 4 feature weights introduced by the secondary phrase table (2 lexical and 2 phrasal, other features being shared between the two tables). For all systems, we use a distortion limit of 4. We use case-insensitive BLEU (Papineni et al., 2002) to evaluate translation quality. 
3.1 Datasets Bilingual corpus statistics for both language pairs are presented in Table 2. For Arabic-English, our training corpus consisted of 685k sentence pairs from standard LDC corpora4. The NIST MT06 and MT08 Arabic-English evaluation sets (combining the newswire and weblog domains for both sets), with four references each, were used as tuning and testing sets respectively. For UrduEnglish, the training corpus was provided by the LDC for the NIST Urdu-English MT evaluation, and most of the data was automatically acquired from the web, making it quite noisy. After filtering, there are approximately 65k parallel sen4LDC2007T08 and LDC2008T09 680 Parameter Description Value m m-best candidate list size when bootstrapping candidates in generation stage. 100 p Window size on each side when extracting features for phrases. 2 q Filter the q most frequent words when storing the inverted index data structure for graph construction. Both source and target sides share the same value. 25 k Number of neighbors stored for each phrase for both source and target graphs. This parameter controls the sparsity of the graph. 500 r Maximum size of translation candidate list for unlabeled phrases. 20 Table 1: Parameters, explanation of their function, and value chosen. tences; these were supplemented by an additional 100k dictionary entries. Tuning and test data consisted of the MT08 and MT09 evaluation corpora, once again a mixture of news and web text. Corpus Sentences Words (Src) Ar-En Train 685,502 17,055,168 Ar-En Tune (MT06) 1,664 33,739 Ar-En Test (MT08) 1,360 42,472 Ur-En Train 165,159 1,169,367 Ur-En Tune (MT08) 1,864 39,925 Ur-En Test (MT09) 1,792 39,922 Table 2: Bilingual corpus statistics for the Arabic-English and Urdu-English datasets used. Table 3 contains statistics for the monolingual corpora used in our experiments. From these corpora, we extracted all sentences that contained at least one source or target phrase match to compute features for graph construction. For the Arabic to English experiments, the monolingual corpora are taken from the AFP Arabic and English Gigaword corpora and are of a similar date range to each other (1994-2010), rendering them comparable but not sentence-aligned or parallel. Corpus Sentences Words Ar Comparable 10.2m 290m En I Comparable 29.8m 900m Ur Noisy Parallel 470k 5m En II Noisy Parallel 470k 4.7m Ur Non-Comparable 7m 119m En II Non-Comparable 17m 510m Table 3: Monolingual corpus statistics for the Arabic-English and Urdu-English evaluations. The monolingual corpora can be sub-divided into comparable, noisy parallel, and noncomparable components. En I refers to the English side of the Arabic-English corpora, and En II to the English side of the Urdu-English corpora. For the Urdu-English experiments, completely non-comparable monolingual text was used for graph construction; we obtained the Urdu side through a web-crawler, and a subset of the AFP Gigaword English corpus was used for English. In addition, we obtained a corpus from the ELRA5, which contains a mix of parallel and monolingual data; based on timestamps, we extracted a comparable English corpus for the ELRA Urdu monolingual data to form a roughly 470k-sentence “noisy parallel” set. We used this set in two ways: either to augment the parallel data presented in Table 2, or to augment the non-comparable monolingual data in Table 3 for graph construction. 
For the parameters introduced throughout the text, we present in Table 1 a reminder of their interpretation as well as the values used in this work. 3.2 Experimental Variations In our first set of experiments, we looked at the impact of choosing bigrams over unigrams as our basic unit of representation, along with performance of LP (Eq. 2) compared to SLP (Eq. 4). Recall that LP only takes into account source similarity; since the vast majority of generated candidates do not occur as labeled neighbors’ labels, restricting propagation to the source graph drastically reduces the usage of generated candidates as labels, but does not completely eliminate it. In these experiments, we utilize a reasonably-sized 4-gram language model trained on 900m English tokens, i.e., the English monolingual corpus. Table 4 presents the results of these variations; overall, by taking into account generated candidates appropriately and using bigrams (“SLP 2gram”), we obtained a 1.13 BLEU gain on the test set. Using unigrams (“SLP 1-gram”) actually does worse than the baseline, indicating the importance of focusing on translations for sparser bigrams. While LP (“LP 2-gram”) does reasonably well, its underperformance compared to SLP underlines the importance of enriching the translation space with generated candidates and handling these candidates appropriately.6 In “SLP5ELRA-W0038 6It is relatively straightforward to combine both unigrams and bigrams in one source graph, but for experimental clarity we did not mix these phrase lengths. 681 HalfMono”, we use only half of the monolingual comparable corpora, and still obtain an improvement of 0.56 BLEU points, indicating that adding more monolingual data is likely to improve the system further. Interestingly, biasing away from generated candidates using all the monolingual data (“LP 2-gram”) performs similarly to using half the monolingual corpora and handling generated candidates properly (“SLP-HalfMono”). BLEU Setup Tune Test Baseline 39.33 38.09 SLP 1-gram 39.47 37.85 LP 2-gram 40.75 38.68 SLP 2-gram 41.00 39.22 SLP-HalfMono 2-gram 40.82 38.65 SLP+Morph 2-gram 41.02 39.35 Table 4: Results for the Arabic-English evaluation. The LP vs. SLP comparison highlights the importance of target side enrichment via translation candidate generation, 1-gram vs. 2-gram comparisons highlight the importance of emphasizing phrases, utilizing half the monolingual data shows sensitivity to monolingual corpus size, and adding morphological information results in additional improvement. Additional morphologically generated candidates were added in this experiment as detailed in §2.3. We used a simple hand-built Arabic morphological analyzer that segments word types based on regular expressions, and an English lexiconbased morphological analyzer. The morphological candidates add a small amount of improvement, primarily by targeting genuine OOVs. 3.3 Large Language Model Effect In this set of experiments, we examined if the improvements in §3.2 can be explained primarily through the extraction of language model characteristics during the semi-supervised learning phase, or through orthogonal pieces of evidence. Would the improvement be less substantial had we used a very large language model? To answer this question we trained a 5-gram language model on 570M sentences (7.6B tokens), with data from various sources including the Gigaword corpus7, WMT and European Parliamentary Proceedings8, and web-crawled data from Wikipedia and the web. 
Only m-best generated candidates from the baseline were considered during generation, along with labeled neighbors’ labels. 7LDC2011T07 8http://www.statmt.org/wmt13/ BLEU Setup Tune Test Baseline+LargeLM 41.48 39.86 SLP+LargeLM 42.82 41.29 Table 5: Results with the large language model scenario. The gains are even better than with the smaller language model. Table 5 presents the results of using this language model. We obtained a robust, 1.43-BLEU point gain, indicating that the addition of the newly induced phrases provided genuine translation improvements that cannot be compensated by the language model effect. Further examination of the differences between the two systems yielded that most of the improvements are due to better bigrams and trigrams, as indicated by the breakdown of the BLEU score precision per n-gram, and primarily leverages higher quality generated candidates from the baseline system. We analyze the output of these systems further in the output analysis section below (§3.5). 3.4 Urdu-English In order to evaluate the robustness of these results beyond one language pair, we looked at UrduEnglish, a low resource pair likely to benefit from this approach. In this set of experiments, we used the large language model in §3.3, and only used baseline-generated candidates. We experimented with two extreme setups that differed in the data assumed parallel, from which we built our baseline system, and the data treated as monolingual, from which we built our source and target graphs. In the first setup, we use the noisy parallel data for graph construction and augment the noncomparable corpora with it: • parallel: “Ur-En Train” • Urdu monolingual: “Ur Noisy Parallel”+“Ur Non-Comparable” • English monolingual: “En II Noisy Parallel”+“En II Non-Comparable” The results from this setup are presented as “Baseline” and “SLP+Noisy” in Table 6. In the second setup, we train a baseline system using the data in Table 2, augmented with the noisy parallel text: • parallel: “Ur-En Train”+“Ur Noisy Parallel”+“En II Noisy Parallel” • Urdu monolingual: “Ur Non-Comparable” • English monolingual: “En II NonComparable” 682 ! Ex Source Reference Baseline System 1 (Ar) !ﺳﺎ$% !"#$#"ﻟﺗﻌ! sending reinforcements strong reinforcements sending reinforcements (N) 2 (Ar) !ﺛﺎ$'ﻻﻧ!+!! with extinction OOV with extinction (N) 3 (Ar) !ﺗﺣﺑ ﻟﺔ#ﻣﺣﺎ! thwarts address thwarted (N) 4 (Ar) !ﻧﺳﺑ ﻟﻲ# ! was quoted as saying attributed to was quoted as saying (G) 5 (Ar) ﺿﺢ#$ !ﻋﺑ !"&ﻟﻣﺣﻣ! abdalmahmood said he said abdul mahmood mahmood said (G) 6 (Ar) ﻣﻧﻛﺑﺎ !"#ﺗ it deems OOV it deems (G) 7 (Ur) !ﭘ !"ﻣ$ ! I am hopeful this hope I am hopeful (N) 8 (Ur) ﭘﻧﺎ$ !ﻓﺎ$ ! to defend him to defend to defend himself (G) 9 (Ur) !ﮔﻔﺗﮕ ۔ﮐﯽ! while speaking In the in conversation (N) Figure 2: Nine example outputs of our system vs. the baseline highlighting the properties of our approach. Each example is labeled (Ar) for Arabic source or (Ur) for Urdu source, and system candidates are labeled with (N) if the candidate unlabeled phrase’s labeled neighbor’s label, or (G) if the candidate was generated. The results from this setup are presented as “Baseline+Noisy” and “SLP” in Table 6. The two setups allow us to examine how effectively our method can learn from the noisy parallel data by treating it as monolingual (i.e., for graph construction), compared to treating this data as parallel, and also examines the realistic scenario of using completely non-comparable monolingual text for graph construction as in the second setup. 
BLEU Setup Tune Test Baseline 21.87 21.17 SLP+Noisy 26.42 25.38 Baseline+Noisy 27.59 27.24 SLP 28.53 28.43 Table 6: Results for the Urdu-English evaluation evaluated with BLEU. All experiments were conducted with the larger language model, and generation only considered the m-best candidates from the baseline system. In the first setup, we get a huge improvement of 4.2 BLEU points (“SLP+Noisy”) when using the monolingual data and the noisy parallel data for graph construction. Our method obtained much of the gains achieved by the supervised baseline approach that utilizes the noisy parallel data in conjunction with the NIST-provided parallel data (“Baseline+Noisy”), but with fewer assumptions on the nature of the corpora (monolingual vs. parallel). Furthermore, despite completely unaligned, non-comparable monolingual text on the Urdu and English sides, and a very large language model, we can still achieve gains in excess of 1.2 BLEU points (“SLP”) in a difficult evaluation scenario, which shows that the technique adds a genuine translation improvement over and above na¨ıve memorization of n-gram sequences. 3.5 Analysis of Output Figure 2 looks at some of the sample hypotheses produced by our system and the baseline, along with reference translations. The outputs produced by our system are additionally annotated with the origin of the candidate, i.e., labeled neighbor’s label (N) or generated (G). The Arabic-English examples are numbered 1 to 5. The first example shows a source bigram unknown to the baseline system, resulting in a suboptimal translation, while our system proposes the correct translation of “sending reinforcements”. The second example shows a word that was an OOV for the baseline system, while our system got a perfect translation. The third and fourth examples represent bigram phrases with much better translations compared to backing off to the lexical translations as in the baseline. The fifth Arabic-English example demonstrates the pitfalls of over-reliance on the distributional hypothesis: the source bigram corresponding to the name “abd almahmood” is distributional similar to another named entity “mahmood” and the English equivalent is offered as a translation. The distributional hypothesis can sometimes be misleading. The sixth example shows how morphological information can propose novel candidates: an OOV word is broken down to its stem via the analyzer and candidates are generated based on the stem. The Urdu-English examples are numbered 7 to 9. In example 7, the bigram “par umeed” (corresponding to “hopeful”) is never seen in the baseline system, which has only seen “umeed” (“hope”). By leveraging the monolingual corpus to understand the context of this unlabeled bigram, we can utilize the graph structure to propose a syntactically correct form, also resulting in a more fluent and correct sentence as determined by the language model. Examples 8 & 9 show cases where the baseline deletes words or translates them into more common words e.g., “conversation” to “the”, while our system proposes reasonable candidates. 683 4 Related Work The idea presented in this paper is similar in spirit to bilingual lexicon induction (BLI), where a seed lexicon in two different languages is expanded with the help of monolingual corpora, primarily by extracting distributional similarities from the data using word context. 
This line of work, initiated by Rapp (1995) and continued by others (Fung and Yee, 1998; Koehn and Knight, 2002) (inter alia) is limited from a downstream perspective, as translations for only a small number of words are induced and oftentimes for common or frequently occurring ones only. Recent improvements to BLI (Tamura et al., 2012; Irvine and Callison-Burch, 2013b) have contained a graph-based flavor by presenting label propagation-based approaches using a seed lexicon, but evaluation is once again done on top-1 or top-3 accuracy, and the focus is on unigrams. Razmara et al. (2013) and Irvine and CallisonBurch (2013a) conduct a more extensive evaluation of their graph-based BLI techniques, where the emphasis and end-to-end BLEU evaluations concentrated on OOVs, i.e., unigrams, and not on enriching the entire translation model. As with previous BLI work, these approaches only take into account source-side similarity of words; only moderate gains (and in the latter work, on a subset of language pairs evaluated) are obtained. Additionally, because of our structured propagation algorithm, our approach is better at handling multiple translation candidates and does not need to restrict itself to the top translation. Klementiev et al. (2012) propose a method that utilizes a pre-existing phrase table and a small bilingual lexicon, and performs BLI using monolingual corpora. The operational scope of their approach is limited in that they assume a scenario where unknown phrase pairs are provided (thereby sidestepping the issue of translation candidate generation for completely unknown phrases), and what remains is the estimation of phrasal probabilities. In our case, we obtain the phrase pairs from the graph structure (and therefore indirectly from the monolingual data) and a separate generation step, which plays an important role in good performance of the method. Similarly, Zhang and Zong (2013) present a series of heuristics that are applicable in a fairly narrow setting. The notion of translation consensus, wherein similar sentences on the source side are encouraged to have similar target language translations, has also been explored via a graph-based approach (Alexandrescu and Kirchhoff, 2009). Liu et al. (2012) extend this method by proposing a novel structured label propagation algorithm to deal with the generalization of propagating sets of labels instead of single labels, and also integrated information from the graph into the decoder. In fact, we utilize this algorithm in our propagation step (§2.4). However, the former work operates only at the level of sentences, and while the latter does extend the framework to sub-spans of sentences, they do not discover new translation pairs or phrasal probabilities for new pairs at all, but instead re-estimate phrasal probabilities using the graph structure and add this score as an additional feature during decoding. The goal of leveraging non-parallel data in machine translation has been explored from several different angles. Paraphrases extracted by “pivoting” via a third language (Callison-Burch et al., 2006) can be derived solely from monolingual corpora using distributional similarity (Marton et al., 2009). Snover et al. (2008) use cross-lingual information retrieval techniques to find potential sentence-level translation candidates among comparable corpora. In this case, the goal is to try and construct a corpus as close to parallel as possible from comparable corpora, and is a fairly different take on the problem we are looking at. 
Decipherment-based approaches (Ravi and Knight, 2011; Dou and Knight, 2012) have generally taken a monolingual view to the problem and combine phrase tables through the log-linear model during feature weight training. 5 Conclusion In this work, we presented an approach that can expand a translation model extracted from a sentence-aligned, bilingual corpus using a large amount of unstructured, monolingual data in both source and target languages, which leads to improvements of 1.4 and 1.2 BLEU points over strong baselines on evaluation sets, and in some scenarios gains in excess of 4 BLEU points. In the future, we plan to estimate the graph structure through other learned, distributed representations. Acknowledgments The authors would like to thank Chris Dyer, Arul Menezes, and the anonymous reviewers for their helpful comments and suggestions. 684 References Andrei Alexandrescu and Katrin Kirchhoff. 2009. Graph-based learning for statistical machine translation. In Proceedings of Human Language Technologies: The 2009 Annual Conference of the North American Chapter of the Association for Computational Linguistics, NAACL-HLT ’09, pages 119– 127. Association for Computational Linguistics, June. Ondˇrej Bojar, Christian Buck, Chris Callison-Burch, Christian Federmann, Barry Haddow, Philipp Koehn, Christof Monz, Matt Post, Radu Soricut, and Lucia Specia. 2013. Findings of the 2013 Workshop on Statistical Machine Translation. In Proceedings of the Eighth Workshop on Statistical Machine Translation, pages 1–44, Sofia, Bulgaria, August. Association for Computational Linguistics. Chris Callison-Burch, Philipp Koehn, and Miles Osborne. 2006. Improved statistical machine translation using paraphrases. In Proceedings of the Human Language Technology Conference of the NAACL, Main Conference, pages 17–24, New York City, USA, June. Association for Computational Linguistics. Victor Chahuneau, Eva Schlinger, Noah A. Smith, and Chris Dyer. 2013. Translating into morphologically rich languages with synthetic phrases. In Proc. of EMNLP. David Chiang. 2007. Hierarchical phrase-based translation. Computational Linguistics, 33(2):201–228, June. Qing Dou and Kevin Knight. 2012. Large scale decipherment for out-of-domain machine translation. In Proceedings of the 2012 Joint Conference on Empirical Methods in Natural Language Processing and Computational Natural Language Learning, pages 266–275. Association for Computational Linguistics, July. Pascale Fung and Lo Yuen Yee. 1998. An ir approach for translating new words from nonparallel, comparable texts. In Proceedings of the 36th Annual Meeting of the Association for Computational Linguistics and 17th International Conference on Computational Linguistics - Volume 1, ACL ’98, pages 414– 420, Stroudsburg, PA, USA. Association for Computational Linguistics. Michel Galley and Christopher D. Manning. 2008. A simple and effective hierarchical phrase reordering model. EMNLP ’08, pages 848–856, Stroudsburg, PA, USA. Association for Computational Linguistics. Aria Haghighi, Percy Liang, Taylor Berg-Kirkpatrick, and Dan Klein. 2008. Learning bilingual lexicons from monolingual corpora. In Proceedings of ACL08: HLT, pages 771–779, Columbus, Ohio, June. Association for Computational Linguistics. Ann Irvine and Chris Callison-Burch. 2013a. Combining bilingual and comparable corpora for low resource machine translation. In Proceedings of the Eighth Workshop on Statistical Machine Translation, pages 262–270, Sofia, Bulgaria, August. Association for Computational Linguistics. 
Ann Irvine and Chris Callison-Burch. 2013b. Supervised bilingual lexicon induction with multiple monolingual signals. In Proceedings of the 2013 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, pages 518–523, Atlanta, Georgia, June. Association for Computational Linguistics. Alexandre Klementiev, Ann Irvine, Chris CallisonBurch, and David Yarowsky. 2012. Toward statistical machine translation without parallel corpora. In Proceedings of the 13th Conference of the European Chapter of the Association for Computational Linguistics, pages 130–140, Avignon, France, April. Association for Computational Linguistics. Philipp Koehn and Kevin Knight. 2002. Learning a translation lexicon from monolingual corpora. In In Proceedings of ACL Workshop on Unsupervised Lexical Acquisition, pages 9–16. Philipp Koehn, Franz Josef Och, and Daniel Marcu. 2003. Statistical phrase-based translation. In Proceedings of the 2003 Conference of the North American Chapter of the Association for Computational Linguistics on Human Language Technology - Volume 1, NAACL ’03, pages 48–54, Stroudsburg, PA, USA. Association for Computational Linguistics. Shujie Liu, Chi-Ho Li, Mu Li, and Ming Zhou. 2012. Learning translation consensus with structured label propagation. In Proceedings of the 50th Annual Meeting of the Association for Computational Linguistics: Long Papers - Volume 1, ACL ’12, pages 302–310, Stroudsburg, PA, USA. Association for Computational Linguistics. Yuval Marton, Chris Callison-Burch, and Philip Resnik. 2009. Improved statistical machine translation using monolingually-derived paraphrases. In Proceedings of the 2009 Conference on Empirical Methods in Natural Language Processing, EMNLP ’09, pages 381–390, Singapore, August. Association for Computational Linguistics. David McClosky, Eugene Charniak, and Mark Johnson. 2006. Effective self-training for parsing. In Proceedings of the Human Language Technology Conference of the NAACL, Main Conference, pages 152–159, New York City, USA, June. Association for Computational Linguistics. Franz Josef Och. 2003. Minimum error rate training in statistical machine translation. In Proceedings of the 41st Annual Meeting on Association for Computational Linguistics - Volume 1, ACL ’03, pages 160– 167, Stroudsburg, PA, USA. Association for Computational Linguistics. 685 Kishore Papineni, Salim Roukos, Todd Ward, and Wei jing Zhu. 2002. Bleu: a method for automatic evaluation of machine translation. pages 311–318. Reinhard Rapp. 1995. Identifying word translations in non-parallel texts. In Proceedings of the 33rd Annual Meeting of the Association for Computational Linguistics, ACL ’95. Sujith Ravi and Kevin Knight. 2011. Deciphering foreign language. In Proceedings of the 49th Annual Meeting of the Association for Computational Linguistics: Human Language Technologies, pages 12– 21, Portland, Oregon, USA, June. Association for Computational Linguistics. Majid Razmara, Maryam Siahbani, Gholamreza Haffari, and Anoop Sarkar. 2013. Graph propagation for paraphrasing out-of-vocabulary words in statistical machine translation. In Proceedings of the 51st of the Association for Computational Linguistics, ACL-51, Stroudsburg, PA, USA. Association for Computational Linguistics. Matthew Snover, Bonnie Dorr, and Richard Schwartz. 2008. Language and translation model adaptation using comparable corpora. 
In Proceedings of the Conference on Empirical Methods in Natural Language Processing, EMNLP ’08, pages 857–866, Stroudsburg, PA, USA. Association for Computational Linguistics. Akihiro Tamura, Taro Watanabe, and Eiichiro Sumita. 2012. Bilingual lexicon extraction from comparable corpora using label propagation. In Proceedings of the 2012 Joint Conference on Empirical Methods in Natural Language Processing and Computational Natural Language Learning, EMNLP-CoNLL ’12, pages 24–36. Kristina Toutanova, Hisami Suzuki, and Achim Ruopp. 2008. Applying morphology generation models to machine translation. In Proceedings of ACL-08: HLT, pages 514–522, Columbus, Ohio, June. Association for Computational Linguistics. Jiajun Zhang and Chengqing Zong. 2013. Learning a phrase-based translation model from monolingual data with application to domain adaptation. In Proceedings of the 51st Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers), pages 1425–1434, Sofia, Bulgaria, August. Association for Computational Linguistics. Xiaojin Zhu, Zoubin Ghahramani, and John D. Lafferty. 2003. Semi-supervised learning using gaussian fields and harmonic functions. In Proceedings of the Twentieth International Conference on Machine Learning, ICML ’03, pages 912–919. 686
2014
64
Proceedings of the 52nd Annual Meeting of the Association for Computational Linguistics, pages 687–698, Baltimore, Maryland, USA, June 23-25 2014. c⃝2014 Association for Computational Linguistics Using Discourse Structure Improves Machine Translation Evaluation Francisco Guzm´an Shafiq Joty Llu´ıs M`arquez and Preslav Nakov ALT Research Group Qatar Computing Research Institute — Qatar Foundation {fguzman,sjoty,lmarquez,pnakov}@qf.org.qa Abstract We present experiments in using discourse structure for improving machine translation evaluation. We first design two discourse-aware similarity measures, which use all-subtree kernels to compare discourse parse trees in accordance with the Rhetorical Structure Theory. Then, we show that these measures can help improve a number of existing machine translation evaluation metrics both at the segment- and at the system-level. Rather than proposing a single new metric, we show that discourse information is complementary to the state-of-the-art evaluation metrics, and thus should be taken into account in the development of future richer evaluation metrics. 1 Introduction From its foundations, Statistical Machine Translation (SMT) had two defining characteristics: first, translation was modeled as a generative process at the sentence-level. Second, it was purely statistical over words or word sequences and made little to no use of linguistic information. Although modern SMT systems have switched to a discriminative log-linear framework, which allows for additional sources as features, it is generally hard to incorporate dependencies beyond a small window of adjacent words, thus making it difficult to use linguistically-rich models. Recently, there have been two promising research directions for improving SMT and its evaluation: (a) by using more structured linguistic information, such as syntax (Galley et al., 2004; Quirk et al., 2005), hierarchical structures (Chiang, 2005), and semantic roles (Wu and Fung, 2009; Lo et al., 2012), and (b) by going beyond the sentence-level, e.g., translating at the document level (Hardmeier et al., 2012). Going beyond the sentence-level is important since sentences rarely stand on their own in a well-written text. Rather, each sentence follows smoothly from the ones before it, and leads into the ones that come afterwards. The logical relationship between sentences carries important information that allows the text to express a meaning as a whole beyond the sum of its separate parts. Note that sentences can be made of several clauses, which in turn can be interrelated through the same logical relations. Thus, in a coherent text, discourse units (sentences or clauses) are logically connected: the meaning of a unit relates to that of the previous and the following units. Discourse analysis seeks to uncover this coherence structure underneath the text. Several formal theories of discourse have been proposed to describe the coherence structure (Mann and Thompson, 1988; Asher and Lascarides, 2003; Webber, 2004). For example, the Rhetorical Structure Theory (Mann and Thompson, 1988), or RST, represents text by labeled hierarchical structures called Discourse Trees (DTs), which can incorporate several layers of other linguistic information, e.g., syntax, predicate-argument structure, etc. Modeling discourse brings together the above research directions (a) and (b), which makes it an attractive goal for MT. 
This is demonstrated by the establishment of a recent workshop dedicated to Discourse in Machine Translation (Webber et al., 2013), collocated with the 2013 annual meeting of the Association of Computational Linguistics. The area of discourse analysis for SMT is still nascent and, to the best of our knowledge, no previous research has attempted to use rhetorical structure for SMT or machine translation evaluation. One possible reason could be the unavailability of accurate discourse parsers. However, this situation is likely to change given the most recent advances in automatic discourse analysis (Joty et al., 2012; Joty et al., 2013). 687 We believe that the semantic and pragmatic information captured in the form of DTs (i) can help develop discourse-aware SMT systems that produce coherent translations, and (ii) can yield better MT evaluation metrics. While in this work we focus on the latter, we think that the former is also within reach, and that SMT systems would benefit from preserving the coherence relations in the source language when generating target-language translations. In this paper, rather than proposing yet another MT evaluation metric, we show that discourse information is complementary to many existing evaluation metrics, and thus should not be ignored. We first design two discourse-aware similarity measures, which use DTs generated by a publiclyavailable discourse parser (Joty et al., 2012); then, we show that they can help improve a number of MT evaluation metrics at the segment- and at the system-level in the context of the WMT11 and the WMT12 metrics shared tasks (Callison-Burch et al., 2011; Callison-Burch et al., 2012). These metrics tasks are based on sentence-level evaluation, which arguably can limit the benefits of using global discourse properties. Fortunately, several sentences are long and complex enough to present rich discourse structures connecting their basic clauses. Thus, although limited, this setting is able to demonstrate the potential of discourselevel information for MT evaluation. Furthermore, sentence-level scoring (i) is compatible with most translation systems, which work on a sentence-bysentence basis, (ii) could be beneficial to modern MT tuning mechanisms such as PRO (Hopkins and May, 2011) and MIRA (Watanabe et al., 2007; Chiang et al., 2008), which also work at the sentence-level, and (iii) could be used for reranking n-best lists of translation hypotheses. 2 Related Work Addressing discourse-level phenomena in machine translation is relatively new as a research direction. Some recent work has looked at anaphora resolution (Hardmeier and Federico, 2010) and discourse connectives (Cartoni et al., 2011; Meyer, 2011), to mention two examples.1 However, so far the attempts to incorporate discourse-related knowledge in MT have been only moderately successful, at best. 1We refer the reader to (Hardmeier, 2012) for an in-depth overview of discourse-related research for MT. A common argument, is that current automatic evaluation metrics such as BLEU are inadequate to capture discourse-related aspects of translation quality (Hardmeier and Federico, 2010; Meyer et al., 2012). Thus, there is consensus that discourseinformed MT evaluation metrics are needed in order to advance research in this direction. Here we suggest some simple ways to create such metrics, and we also show that they yield better correlation with human judgments. 
The field of automatic evaluation metrics for MT is very active, and new metrics are continuously being proposed, especially in the context of the evaluation campaigns that run as part of the Workshops on Statistical Machine Translation (WMT 2008-2012), and NIST Metrics for Machine Translation Challenge (MetricsMATR), among others. For example, at WMT12, 12 metrics were compared (Callison-Burch et al., 2012), most of them new. There have been several attempts to incorporate syntactic and semantic linguistic knowledge into MT evaluation. For instance, at the syntactic level, we find metrics that measure the structural similarity between shallow syntactic sequences (Gim´enez and M`arquez, 2007; Popovic and Ney, 2007) or between constituency trees (Liu and Gildea, 2005). In the semantic case, there are metrics that exploit the similarity over named entities and predicate-argument structures (Gim´enez and M`arquez, 2007; Lo et al., 2012). In this work, instead of proposing a new metric, we focus on enriching current MT evaluation metrics with discourse information. Our experiments show that many existing metrics can benefit from additional knowledge about discourse structure. In comparison to the syntactic and semantic extensions of MT metrics, there have been very few attempts to incorporate discourse information so far. One example are the semantics-aware metrics of Gim´enez and M`arquez (2009) and Comelles et al. (2010), which use the Discourse Representation Theory (Kamp and Reyle, 1993) and treebased discourse representation structures (DRS) produced by a semantic parser. They calculate the similarity between the MT output and references based on DRS subtree matching, as defined in (Liu and Gildea, 2005), DRS lexical overlap, and DRS morpho-syntactic overlap. However, they could not improve correlation with human judgments, as evaluated on the MetricsMATR dataset. 688 Compared to the previous work, (i) we use a different discourse representation (RST), (ii) we compare discourse parses using all-subtree kernels (Collins and Duffy, 2001), (iii) we evaluate on much larger datasets, for several language pairs and for multiple metrics, and (iv) we do demonstrate better correlation with human judgments. Wong and Kit (2012) recently proposed an extension of MT metrics with a measure of document-level lexical cohesion (Halliday and Hasan, 1976). Lexical cohesion is achieved using word repetitions and semantically similar words such as synonyms, hypernyms, and hyponyms. For BLEU and TER, they observed improved correlation with human judgments on the MTC4 dataset when linearly interpolating these metrics with their lexical cohesion score. Unlike their work, which measures lexical cohesion at the document-level, here we are concerned with coherence (rhetorical) structure, primarily at the sentence-level. 3 Our Discourse-Based Measures Our working hypothesis is that the similarity between the discourse structures of an automatic and of a reference translation provides additional information that can be valuable for evaluating MT systems. In particular, we believe that good translations should tend to preserve discourse relations. As an example, consider the three discourse trees (DTs) shown in Figure 1: (a) for a reference (human) translation, and (b) and (c) for translations of two different systems on the WMT12 test dataset. The leaves of a DT correspond to contiguous atomic text spans, called Elementary Discourse Units or EDUs (three in Figure 1a). 
Adjacent spans are connected by certain coherence relations (e.g., Elaboration, Attribution), forming larger discourse units, which in turn are also subject to this relation linking. Discourse units linked by a relation are further distinguished based on their relative importance in the text: nuclei are the core parts of the relation, while satellites are supportive ones. Note that the nuclearity and relation labels in the reference translation are also realized in the system translation in (b), but not in (c), which makes (b) a better translation compared to (c), according to our hypothesis. We argue that existing metrics that only use lexical and syntactic information cannot distinguish well between (b) and (c). In order to develop a discourse-aware evaluation metric, we first generate discourse trees for the reference and the system-translated sentences using a discourse parser, and then we measure the similarity between the two discourse trees. We describe these two steps below.

3.1 Generating Discourse Trees
In Rhetorical Structure Theory, discourse analysis involves two subtasks: (i) discourse segmentation, or breaking the text into a sequence of EDUs, and (ii) discourse parsing, or the task of linking the units (EDUs and larger discourse units) into labeled discourse trees. Recently, Joty et al. (2012) proposed discriminative models for both discourse segmentation and discourse parsing at the sentence level. The segmenter uses a maximum entropy model that achieves state-of-the-art accuracy on this task, having an F1-score of 90.5%, while human agreement is 98.3%. The discourse parser uses a dynamic Conditional Random Field (Sutton et al., 2007) as a parsing model in order to infer the probability of all possible discourse tree constituents. The inferred (posterior) probabilities are then used in a probabilistic CKY-like bottom-up parsing algorithm to find the most likely DT. Using the standard set of 18 coarse-grained relations defined in (Carlson and Marcu, 2001), the parser achieved an F1-score of 79.8%, which is very close to the human agreement of 83%. These high scores allowed us to develop successful discourse similarity metrics.2

3.2 Measuring Similarity
A number of metrics have been proposed to measure the similarity between two labeled trees, e.g., Tree Edit Distance (Tai, 1979) and Tree Kernels (Collins and Duffy, 2001; Moschitti and Basili, 2006). Tree kernels (TKs) provide an effective way to integrate arbitrary tree structures in kernel-based machine learning algorithms like SVMs. In the present work, we use the convolution TK defined in (Collins and Duffy, 2001), which efficiently calculates the number of common subtrees in two trees. Note that this kernel was originally designed for syntactic parsing, where the subtrees are subject to the constraint that their nodes are taken with either all or none of the children. This constraint of the TK imposes some limitations on the type of substructures that can be compared.

2 The discourse parser is freely available from http://alt.qcri.org/tools/

[Figure 1: Example of three different discourse trees for the translations of a source sentence: (a) the reference (human) translation, "Voices are coming from Germany, suggesting that ECB be the last resort creditor."; (b) a higher quality system translation; (c) a lower quality system translation, "In Germany the ECB should be for the creditors of last resort."]
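As a concrete reference point, the convolution kernel of Collins and Duffy (2001) can be written as a short recursion over pairs of nodes. The sketch below is a simplified illustration, assuming trees are plain (label, children) tuples and taking a node's "production" to be its label together with the sequence of its children's labels; it omits the decay factor and the word-level refinements discussed later, and it is not the exact implementation used here.

```python
def nodes(tree):
    """Yield every node of a tree given as a (label, [children...]) tuple."""
    label, children = tree
    yield tree
    for child in children:
        yield from nodes(child)

def production(node):
    """A node's 'production': its label plus the labels of its children."""
    label, children = node
    return (label, tuple(child[0] for child in children))

def common_subtrees(n1, n2):
    """C(n1, n2): number of common tree fragments rooted at n1 and n2."""
    if production(n1) != production(n2):
        return 0
    _, ch1 = n1
    _, ch2 = n2
    if not ch1:                      # matching leaves
        return 1
    result = 1
    for c1, c2 in zip(ch1, ch2):     # same production => same number of children
        result *= 1 + common_subtrees(c1, c2)
    return result

def tree_kernel(t1, t2):
    """K(T1, T2): sum of C(n1, n2) over all node pairs of the two trees."""
    return sum(common_subtrees(n1, n2) for n1 in nodes(t1) for n2 in nodes(t2))

# Toy discourse-like trees: labels stand in for relation/nuclearity tags.
ref = ("Elaboration", [("Nucleus", []),
                       ("Attribution", [("Satellite", []), ("Nucleus", [])])])
hyp = ("Elaboration", [("Nucleus", []),
                       ("Attribution", [("Satellite", []), ("Nucleus", [])])])
bad = ("SPAN", [("Nucleus", [])])
print(tree_kernel(ref, hyp), tree_kernel(ref, bad))
```

A translation whose discourse tree shares more labeled fragments with the reference tree, such as hyp above, receives a larger kernel value than one with a divergent structure.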
One way to cope with the limitations of the TK is to change the representation of the trees to a form that is suitable to capture the relevant information for our task. We experiment with TKs applied to two different representations of the discourse tree: non-lexicalized (DR), and lexicalized (DR-LEX). In Figure 2 we show the two representations for the subtree that spans the text "suggest the ECB should be the lender of last resort", which is highlighted in Figure 1b. As shown in Figure 2a, DR does not include any lexical item, and therefore measures the similarity between two translations in terms of their discourse structures only. On the contrary, DR-LEX includes the lexical items to account for lexical matching; moreover, it separates the structure (the skeleton) of the tree from its labels, i.e., the nuclearity and the relations, in order to allow the tree kernel to give partial credit to subtrees that differ in labels but match in their skeletons. More specifically, it uses the tags SPAN and EDU to build the skeleton of the tree, and considers the nuclearity and/or the relation labels as properties, added as children, of these tags. For example, a SPAN has two properties (its nuclearity and its relation), and an EDU has one property (its nuclearity). The words of an EDU are placed under the predefined children NGRAM. In order to allow the tree kernel to find subtree matches at the word level, we include an additional layer of dummy leaves as was done in (Moschitti et al., 2007); not shown in Figure 2, for simplicity.

4 Experimental Setup
In our experiments, we used the data available for the WMT12 and the WMT11 metrics shared tasks for translations into English.3 This included the output from the systems that participated in the WMT12 and the WMT11 MT evaluation campaigns, both consisting of 3,003 sentences, for four different language pairs: Czech-English (CS-EN), French-English (FR-EN), German-English (DE-EN), and Spanish-English (ES-EN); as well as a dataset with the English references. We measured the correlation of the metrics with the human judgments provided by the organizers. The judgments represent rankings of the output of five systems chosen at random, for a particular sentence, also chosen at random. Note that each judgment effectively constitutes 10 pairwise system rankings. The overall coverage, i.e., the number of unique sentences that were evaluated, was only a fraction of the total; the total number of judgments, along with other information about the datasets, is shown in Table 1.

4.1 MT Evaluation Metrics
In this study, we evaluate to what extent existing evaluation metrics can benefit from additional discourse information. To do so, we contrast different MT evaluation metrics with and without discourse information. The evaluation metrics we used are described below.

3 http://www.statmt.org/wmt{11,12}/results.html

[Figure 2: Two different DT representations for the highlighted subtree shown in Figure 1b: (a) the DT for DR; (b) the DT for DR-LEX.]
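To make the two representations more concrete, the sketch below builds toy DR and DR-LEX trees for a single nucleus–satellite relation, using the same (label, children) tuple convention as the kernel sketch above. The exact node inventory and ordering in the actual implementation may differ; this is an illustrative reconstruction of the description given in the text.

```python
def dr_node(relation, nuclearity_children):
    """DR: purely structural -- a relation label over nuclearity-labeled spans."""
    return (relation, [(nuc, []) for nuc in nuclearity_children])

def dr_lex_span(nuclearity, relation, children):
    """DR-LEX: a SPAN whose first children are its NUC and REL properties."""
    return ("SPAN", [("NUC", [(nuclearity, [])]),
                     ("REL", [(relation, [])])] + children)

def dr_lex_edu(nuclearity, words):
    """DR-LEX: an EDU with a NUC property and its words under an NGRAM node."""
    return ("EDU", [("NUC", [(nuclearity, [])]),
                    ("NGRAM", [(w, []) for w in words])])

# The Attribution subtree highlighted in Figure 1b, in both representations.
dr = dr_node("Attribution", ["Satellite", "Nucleus"])
dr_lex = dr_lex_span(
    "Nucleus", "Attribution",
    [dr_lex_edu("Satellite", ["suggest"]),
     dr_lex_edu("Nucleus",
                "the ECB should be lender of the last resort .".split())])

# Either representation can be compared with the tree kernel sketched earlier,
# e.g. tree_kernel(dr_of_reference, dr_of_hypothesis).
```

Separating the skeleton (SPAN/EDU) from the NUC and REL property children is what lets the kernel award partial credit when two trees share their shape but not all of their labels.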
WMT12 WMT11 systs ranks sents judges systs ranks sents judges CS-EN 6 1,294 951 45 8 498 171 20 DE-EN 16 1,427 975 47 20 924 303 31 ES-EN 12 1,141 923 45 15 570 207 18 FR-EN 15 1,395 949 44 18 708 249 32 Table 1: Number of systems (systs), judgments (ranks), unique sentences (sents), and different judges (judges) for the different language pairs, for the human evaluation of the WMT12 and WMT11 shared tasks. Metrics from WMT12. We used the publicly available scores for all metrics that participated in the WMT12 metrics task (Callison-Burch et al., 2012): SPEDE07PP, AMBER, METEOR, TERRORCAT, SIMPBLEU, XENERRCATS, WORDBLOCKEC, BLOCKERRCATS, and POSF. Metrics from ASIYA. We used the freely available version of the ASIYA toolkit4 in order to extend the set of evaluation measures contrasted in this study beyond those from the WMT12 metrics task. ASIYA (Gim´enez and M`arquez, 2010a) is a suite for MT evaluation that provides a large set of metrics that use different levels of linguistic information. For reproducibility, below we explain the individual metrics with the exact names required by the toolkit to calculate them. First, we used ASIYA’s ULC (Gim´enez and M`arquez, 2010b), which was the best performing metric at the system and the segment levels at the WMT08 and WMT09 metrics tasks. This is a uniform linear combination of 12 individual metrics. From the original ULC, we only replaced TER and Meteor individual metrics by newer versions taking into account synonymy lookup and paraphrasing: TERp-A and METEOR-pa in ASIYA’s terminology. We will call this combined metric Asiya0809 in our experiments. 4http://nlp.lsi.upc.edu/asiya/ To complement the set of individual metrics that participated at the WMT12 metrics task, we also computed the scores of other commonlyused evaluation metrics: BLEU (Papineni et al., 2002), NIST (Doddington, 2002), TER (Snover et al., 2006), ROUGE-W (Lin, 2004), and three METEOR variants (Denkowski and Lavie, 2011): METEOR-ex (exact match), METEOR-st (+stemming) and METEOR-sy (+synonyms). The uniform linear combination of the previous 7 individual metrics plus the 12 from Asiya-0809 is reported as Asiya-ALL in the experimental section. The individual metrics combined in Asiya-ALL can be naturally categorized according to the type of linguistic information they use to compute the quality scores. We grouped them in the following four families and calculated the uniform linear combination of the metrics in each group:5 1. Asiya-LEX. Combination of five metrics based on lexical similarity: BLEU, NIST, METEOR-ex, ROUGE-W, and TERp-A. 2. Asiya-SYN. Combination of four metrics ba-sed on syntactic information from constituency and dependency parse trees: ‘CP-STM-4’, ‘DP-HWCM c-4’, ‘DPHWCM r-4’, and ‘DP-Or(*)’. 3. Asiya-SRL. Combination of three metric variants based on predicate argument structures (semantic role labeling): ‘SR-Mr(*)’, ‘SR-Or(*)’, and ‘SR-Or’. 4. Asiya-SEM. Combination of two metrics variants based on semantic parsing:6 ‘DROr(*)’ and ‘DR-Orp(*)’. 5A detailed description of every individual metric can be found at (Gim´enez and M`arquez, 2010b). For a more up-todate description, see the User Manual from ASIYA’s website. 6In ASIYA the metrics from this family are referred to as “Discourse Representation” metrics. However, the structures they consider are actually very different from the discourse structures exploited in this paper. See the discussion in Section 2. For clarity, we will refer to them as semantic parsing metrics. 
691 All uniform linear combinations are calculated outside ASIYA. In order to make the scores of the different metrics comparable, we performed a min–max normalization, for each metric, and for each language pair combination. 4.2 Human Judgements and Learning The human-annotated data from the WMT campaigns encompasses series of rankings on the output of different MT systems for every source sentence. Annotators rank the output of five systems according to perceived translation quality. The organizers relied on a random selection of systems, and a large number of comparisons between pairs of them, to make comparisons across systems feasible (Callison-Burch et al., 2012). As a result, for each source sentence, only relative rankings were available. As in the WMT12 experimental setup, we use these rankings to calculate correlation with human judgments at the sentencelevel, i.e. Kendall’s Tau; see (Callison-Burch et al., 2012) for details. For the experiments reported in Section 5.4, we used pairwise rankings to discriminatively learn the weights of the linear combinations of individual metrics. In order to use the WMT12 data for training a learning-to-rank model, we transformed the five-way relative rankings into ten pairwise comparisons. For instance, if a judge ranked the output of systems A, B, C, D, E as A > B > C > D > E, this would entail that A > B, A > C, A > D and A > E, etc. To determine the relative weights for the tuned combinations, we followed a similar approach to the one used by PRO to tune the relative weights of the components of a log-linear SMT model (Hopkins and May, 2011), also using Maximum Entropy as the base learning algorithm. Unlike PRO, (i) we use human judgments, not automatic scores, and (ii) we train on all pairs, not on a subsample. 5 Experimental Results In this section, we explore how discourse information can be used to improve machine translation evaluation metrics. Below we present the evaluation results at the system- and segment-level, using our two basic metrics on discourse trees (Section 3.1), which are referred to as DR and DR-LEX. 5.1 Evaluation In our experiments, we only consider translation into English, and use the data described in Table 1. For evaluation, we follow the setup of the metrics task of WMT12 (Callison-Burch et al., 2012): at the system-level, we use the official script from WMT12 to calculate the Spearman’s correlation, where higher absolute values indicate better metrics performance; at the segment-level, we use Kendall’s Tau for measuring correlation, where negative values are worse than positive ones.7 In our experiments, we combine DR and DR-LEX to other metrics in two different ways: using uniform linear interpolation (at system- and segment-level), and using a tuned linear interpolation for the segment-level. We only present the average results over all four language pairs. For simplicity, in our tables we show results divided into evaluation groups: 1. Group I: contains our evaluation metrics, DR and DR-LEX. 2. Group II: includes the metrics that participated in the WMT12 metrics task, excluding metrics which did not have results for all language pairs. 3. Group III: contains other important evaluation metrics, which were not considered in the WMT12 metrics task: NIST and ROUGE for both system- and segment-level, and BLEU and TER at segment-level. 4. Group IV: includes the metric combinations calculated with ASIYA and described in Section 4. 
For each metric in groups II, III and IV, we present the results for the original metric as well for the linear interpolation of that metric with DR and with DR-LEX. The combinations with DR and DR-LEX that improve over the original metrics are shown in bold, and those that degrade are in italic. Furthermore, we also present overall results for: (i) the average score over all metrics, excluding DR and DR-LEX, and (ii) the differences in the correlations for the DR/DR-LEX-combined and the original metrics. 7We have fixed a bug in the scoring tool from WMT12, which was making all scores positive. This made TERRORCAT’s score negative, as we present it in Table 3. 692 Metrics +DR +DR-LEX I DR .807 – – DR-LEX .876 – – II SEMPOS .902 .853 .903 AMBER .857 .829 .869 METEOR .834 .861 .888 TERRORCAT .831 .854 .889 SIMPBLEU .823 .826 .859 TER .812 .836 .848 BLEU .810 .830 .846 POSF .754 .841 .857 BLOCKERRCATS .751 .859 .855 WORDBLOCKEC .738 .822 .843 XENERRCATS .735 .819 .843 III NIST .817 .842 .875 ROUGE .884 .899 .869 IV Asiya-LEX .879 .881 .882 Asiya-SYN .891 .913 .883 Asiya-SRL .917 .911 .909 Asiya-SEM .891 .889 .886 Asiya-0809 .905 .914 .905 Asiya-ALL .899 .907 .896 average .839 .862 .874 diff. +.024 +.035 Table 2: Results on WMT12 at the system-level. Spearman’s correlation with human judgments. 5.2 System-level Results Table 2 shows the system-level experimental results for WMT12. We can see that DR is already competitive by itself: on average, it has a correlation of .807, very close to BLEU and TER scores (.810 and .812, respectively). Moreover, DR yields improvements when combined with 15 of the 19 metrics; worsening only four of the metrics. Overall, we observe an average improvement of +.024, in the correlation with the human judgments. This suggests that DR contains information that is complementary to that used by the other metrics. Note that this is true both for the individual metrics from groups II and III, as well as for the metric combinations in group IV. Combinations in the last group involve several metrics that already use linguistic information at different levels and are hard to improve over; yet, adding DR does improve, which shows that it has some complementary information to offer. As expected, DR-LEX performs better than DR since it is lexicalized (at the unigram level), and also gives partial credit to correct structures. Individually, DR-LEX outperforms most of the metrics from group II, and ranks as the second best metric in that group. Furthermore, when combined with individual metrics in group II, DR-LEX is able to improve consistently over each one of them. Metrics +DR +DR-LEX I DR -.433 – – DR-LEX .133 – – II SPEDE07PP .254 .190 .223 METEOR .247 .178 .217 AMBER .229 .180 .216 SIMPBLEU .172 .141 .191 XENERRCATS .165 .132 .185 POSF .154 .125 .201 WORDBLOCKEC .153 .122 .181 BLOCKERRCATS .074 .068 .151 TERRORCAT -.186 -.111 -.104 III NIST .214 .172 .206 ROUGE .185 .144 .201 TER .217 .179 .229 BLEU .185 .154 .190 IV Asiya-LEX .254 .237 .253 Asiya-SYN .177 .169 .191 Asiya-SRL -.023 .015 .161 Asiya-SEM .134 .152 .197 Asiya-0809 .254 .250 .258 Asiya-ALL .268 .265 .270 average .165 .145 .190 diff. -.019 +.026 Table 3: Results on WMT12 at the segment-level. Kendall’s Tau with human judgments. 
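For reference, segment-level Kendall's Tau as used in these tables can be sketched directly from the pairwise human comparisons. The function below is a simplified illustration under the assumption, consistent with the discussion of ties in the surrounding text, that pairs where the human judged a tie are excluded beforehand and pairs where the metric ties are counted as discordant; the official WMT12 scoring script contains details not reproduced here, and the toy scores are invented.

```python
def segment_kendall_tau(pairs, metric_scores):
    """Kendall's Tau between a metric and pairwise human preferences.

    pairs: list of (segment_id, better_system, worse_system) triples, one per
           pairwise human comparison in which a preference was expressed.
    metric_scores: dict mapping (segment_id, system) -> metric score,
                   where higher is assumed to be better.
    """
    concordant = discordant = 0
    for seg, better, worse in pairs:
        diff = metric_scores[(seg, better)] - metric_scores[(seg, worse)]
        if diff > 0:
            concordant += 1
        else:                         # ties (diff == 0) count as discordant
            discordant += 1
    return (concordant - discordant) / (concordant + discordant)

# Toy usage with two segments and invented scores.
pairs = [(1, "sysA", "sysB"), (1, "sysA", "sysC"), (2, "sysB", "sysA")]
scores = {(1, "sysA"): 0.7, (1, "sysB"): 0.5, (1, "sysC"): 0.7,
          (2, "sysA"): 0.4, (2, "sysB"): 0.6}
print(segment_kendall_tau(pairs, scores))   # (2 - 1) / 3
```

Under this definition a metric that frequently assigns identical scores to competing translations, as DR does, is penalized even when it never prefers the worse translation.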
Note that, even though DR-LEX has better individual performance than DR, it does not yield improvements when combined with most of the metrics in group IV.8 However, over all metrics and all language pairs, DR-LEX is able to obtain an average improvement in correlation of +.035, which is remarkably higher than that of DR. Thus, we can conclude that at the system-level, adding discourse information to a metric, even using the simplest of the combination schemes, is a good idea for most of the metrics, and can help to significantly improve the correlation with human judgments. 5.3 Segment-level Results: Non-tuned Table 3 shows the results for WMT12 at the segment-level. We can see that DR performs badly, with a high negative Kendall’s Tau of -.433. This should not be surprising: (a) the discourse tree structure alone does not contain enough information for a good evaluation at the segment-level, and (b) this metric is more sensitive to the quality of the DT, which can be wrong or void. 8In this work, we have not investigated the reasons behind this phenomenon. We speculate that this might be caused by the fact that the lexical information in DR-LEX is incorporated only in the form of unigram matching at the sentencelevel, while the metrics in group IV are already complex combined metrics, which take into account stronger lexical models. Note, however, that the variations are very small and might not be significant. 693 Tuned Metrics Orig. +DR +DR-LEX I DR -.433 – – – DR-LEX .133 – – – II SPEDE07PP .254 – .253 .254 METEOR .247 – .250 .251 AMBER .229 – .230 .232 SIMPBLEU .172 – .181 .199 TERRORCAT -.186 – .181 .196 XENERRCATS .165 – .175 .194 POSF .154 – .160 .201 WORDBLOCKEC .153 – .161 .189 BLOCKERRCATS .074 – .087 .150 III NIST .214 – .222 .224 ROUGE .185 – .196 .218 TER .217 – .229 .246 BLEU .185 – .189 .194 IV Asiya-LEX .254 .266 .269 .270 Asiya-SYN .177 .229 .228 .232 Asiya-SRL -.023 -.004 .039 .181 Asiya-SEM .134 .146 .179 .202 Asiya-0809 .254 .295 .295 .295 Asiya-ALL .268 .296 .295 .295 average .165 .201 .222 diff. +.036 +.057 Table 4: Results on WMT12 at the segmentlevel: tuning with cross-validation on WMT12. Kendall’s Tau with human judgments. Additionally, DR is more likely to produce a high number of ties, which is harshly penalized by WMT12’s definition of Kendall’s Tau. Conversely, ties and incomplete discourse analysis were not a problem at the system-level, where evidence from all 3,003 test sentences is aggregated, and allows to rank systems more precisely. Due to the low score of DR as an individual metric, it fails to yield improvements when uniformly combined with other metrics. Again, DR-LEX is better than DR; with a positive Tau of +.133, yet as an individual metric, it ranks poorly compared to other metrics in group II. However, when linearly combined with other metrics, DR-LEX outperforms 14 of the 19 metrics in Table 3. Across all metrics, DR-LEX yields an average Tau improvement of +.026, i.e. from .165 to .190. This is a large improvement, taking into account that the combinations are just uniform linear combinations. In subsection 5.4, we present the results of tuning the linear combination in a discriminative way. 5.4 Segment-level Results: Tuned We experimented with tuning the weights of the individual metrics in the metric combinations, using the learning method described in Section 4.2. First, we did this using cross-validation to tune and test on WMT12. Later we tuned on WMT12 and evaluated on WMT11. 
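The tuning procedure of Section 4.2 can be sketched as follows: each five-way ranking is expanded into its ten pairwise comparisons, and a logistic-regression (maximum-entropy) model is fit on differences of the individual metric scores, whose learned coefficients then serve as the weights of the tuned linear combination. This is a schematic reconstruction rather than the actual tuning code; the scikit-learn call, the mirrored training examples, and the handling of rankings (assumed strict, with tied judgments already removed) are assumptions.

```python
from itertools import combinations
import numpy as np
from sklearn.linear_model import LogisticRegression

def rankings_to_pairs(ranked_systems):
    """Expand one ranking (best system first) into its (better, worse) pairs."""
    return list(combinations(ranked_systems, 2))    # 5 systems -> 10 pairs

def tune_weights(judgments, metric_vectors):
    """Learn linear-combination weights from pairwise human judgments.

    judgments: list of (segment_id, ranked_systems) with systems ordered
               best-to-worst by the human judge.
    metric_vectors: dict mapping (segment_id, system) -> np.array of the
                    individual metric scores (already min-max normalized).
    """
    X, y = [], []
    for seg, ranking in judgments:
        for better, worse in rankings_to_pairs(ranking):
            diff = metric_vectors[(seg, better)] - metric_vectors[(seg, worse)]
            X.append(diff)            # "better minus worse" labeled positive
            y.append(1)
            X.append(-diff)           # mirrored example keeps classes balanced
            y.append(0)
    model = LogisticRegression(fit_intercept=False).fit(np.array(X), y)
    return model.coef_.ravel()        # weights for the tuned combination

def tuned_score(weights, metric_scores):
    """Score one translation with the tuned linear combination."""
    return float(np.dot(weights, metric_scores))
```

Training on score differences with no intercept makes the learned model exactly a weighted sum of the component metrics, which is what allows the same weights to be reused directly as a segment-level scoring function.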
For cross-validation in WMT12, we used ten folds of approximately equal sizes, each containing about 300 sentences: we constructed the folds by putting together entire documents, thus not allowing sentences from a document to be split over two different folds. During each cross-validation run, we trained our pairwise ranker using the human judgments corresponding to nine of the ten folds. We aggregated the data for different language pairs, and produced a single set of tuning weights for all language pairs.9 We then used the remaining fold for evaluation The results are shown in Table 4. As in previous sections we present the average results over all four language pairs. We can see that the tuned combinations with DR-LEX improve over most of the individual metrics in groups II and III. Interestingly, the tuned combinations that include the much weaker metric DR now improve over 12 out of 13 of the individual metrics in groups II and III, and only slightly degrades the score of the 13th one (SPEDE07PP). Note that the ASIYA metrics are combinations of several metrics, and these combinations (which exclude DR and DR-LEX) can be also tuned; this yields sizable improvements over the untuned versions as column three in the table shows. Compared to this baseline, DR improves for three of the six ASIYA metrics, while DR-LEX improves for four of them. Note that improving over the last two ASIYA metrics is very hard: they have very high scores of .296 and .295; for comparison, the best segment-level system at WMT12 (SPEDE07PP) achieved a Tau of .254. On average, DR improves Tau from .165 to .201, which is +.036, while DR-LEX improves to .222, or +.057. These much larger improvements highlight the importance of tuning the linear combination when working at the segment-level. 5.4.1 Testing on WMT11 In order to rule out the possibility that the improvement of the tuned metrics on WMT12 comes from over-fitting, and to verify that the tuned metrics do generalize when applied to other sentences, we also tested on a new test set: WMT11. 9Tuning separately for each language pair yielded slightly lower results. 694 Therefore, we tuned the weights on all WMT12 pairwise judgments (no cross-validation), and we evaluated on WMT11. Since the metrics that participated in WMT11 and WMT12 are different (and even when they have the same name, there is no guarantee that they have not changed from 2011 to 2012), we only report results for the versions of NIST, ROUGE, TER, and BLEU available in ASIYA, as well as for the ASIYA metrics, thus ensuring that the metrics in the experiments are consistent for 2011 and 2012. The results are shown in Table 5. Once again, tuning yields sizable improvements over the simple combination for the ASIYA metrics (third column in Table 5). Adding DR and DR-LEX to the combinations manages to improve over five and four of the six tuned ASIYA metrics, respectively. However, some of the differences are very small. On the contrary, DR and DR-LEX significantly improve over NIST, ROUGE, TER, and BLEU. Overall, DR improves the average Tau from .207 to .244, which is +.037, while DR-LEX improves to .267 or +.061. These improvements are very close to those for the WMT12 cross-validation. This shows that the weights learned on WMT12 generalize well, as they are also good for WMT11. What is also interesting to note is that when tuning is used, DR helps achieve sizeable improvements, even if not as strong as for DR-LEX. 
This is remarkable given that DR has a strong negative Tau as an individual metric at the sentence-level. This suggests that both DR and DR-LEX contain information that is complementary to that of the individual metrics that we experimented with. Overall, from the experimental results in this section, we can conclude that discourse structure is an important information source to be taken into account in the automatic evaluation of machine translation output. 6 Conclusions and Future Work In this paper we have shown that discourse structure can be used to improve automatic MT evaluation. First, we defined two simple discourse-aware similarity metrics (lexicalized and un-lexicalized), which use the all-subtree kernel to compute similarity between discourse parse trees in accordance with the Rhetorical Structure Theory. Then, after extensive experimentation on WMT12 and WMT11 data, we showed that a variety of existing evaluation metrics can benefit from our Tuned Metrics Orig. +DR +DR-LEX I DR -.447 – – – DR-LEX .146 – – – III NIST .219 – .226 .232 ROUGE .205 – .218 .242 TER .262 – .274 .296 BLEU .186 – .192 .207 IV Asiya-LEX .282 .301 .302 .303 Asiya-SYN .216 .259 .260 .260 Asiya-SRL -.004 .017 .051 .200 Asiya-SEM .189 .194 .220 .239 Asiya-0809 .300 .348 .349 .348 Asiya-ALL .313 .347 .347 .347 average .207 .244 .267 diff. +.037 +.061 Table 5: Results on WMT11 at the segment-level: tuning on the entire WMT12. Kendall’s Tau with human judgments. discourse-based metrics, both at the segment- and the system-level, especially when the discourse information is incorporated in an informed way (i.e. using supervised tuning). Our results show that discourse-based metrics can improve the state-ofthe-art MT metrics, by increasing correlation with human judgments, even when only sentence-level discourse information is used. Addressing discourse-level phenomena in MT is a relatively new research direction. Yet, many of the ongoing efforts have been moderately successful according to traditional evaluation metrics. There is a consensus in the MT community that more discourse-aware metrics need to be proposed for this area to move forward. We believe this work is a valuable contribution towards this longer-term goal. The tuned combined metrics tested in this paper are just an initial proposal, i.e. a simple adjustment of the relative weights for the individual metrics in a linear combination. In the future, we plan to work on integrated representations of syntactic, semantic and discourse-based structures, which would allow us to train evaluation metrics based on more fine-grained features. Additionally, we propose to use the discourse information for MT in two different ways. First, at the sentence-level, we can use discourse information to re-rank alternative MT hypotheses; this could be applied either for MT parameter tuning, or as a post-processing step for the MT output. Second, we propose to move in the direction of using discourse information beyond the sentence-level. 695 References Nicholas Asher and Alex Lascarides, 2003. Logics of Conversation. Cambridge University Press. Chris Callison-Burch, Philipp Koehn, Christof Monz, and Omar Zaidan. 2011. Findings of the 2011 workshop on statistical machine translation. In Proceedings of the Sixth Workshop on Statistical Machine Translation, pages 22–64, Edinburgh, Scotland, July. ACL. Chris Callison-Burch, Philipp Koehn, Christof Monz, Matt Post, Radu Soricut, and Lucia Specia. 2012. Findings of the 2012 workshop on statistical machine translation. 
In Proceedings of the Seventh Workshop on Statistical Machine Translation, pages 10–51, Montr´eal, Canada, June. ACL. Lynn Carlson and Daniel Marcu. 2001. Discourse Tagging Reference Manual. Technical Report ISI-TR545, University of Southern California Information Sciences Institute. Bruno Cartoni, Sandrine Zufferey, Thomas Meyer, and Andrei Popescu-Belis. 2011. How comparable are parallel corpora? measuring the distribution of general vocabulary and connectives. In Proceedings of the 4th Workshop on Building and Using Comparable Corpora: Comparable Corpora and the Web, pages 78–86, Portland, Oregon, June. ACL. David Chiang, Yuval Marton, and Philip Resnik. 2008. Online large-margin training of syntactic and structural translation features. In Proceedings of the 2008 Conference on Empirical Methods in Natural Language Processing (EMNLP’08), Honolulu, Hawaii, USA. David Chiang. 2005. A hierarchical phrase-based model for statistical machine translation. In Proceedings of the 43rd Annual Meeting on Association for Computational Linguistics, ACL ’05, pages 263– 270, Ann Arbor, Michigan. Michael Collins and Nigel Duffy. 2001. Convolution Kernels for Natural Language. In Neural Information Processing Systems, NIPS’01, pages 625–632, Vancouver, Canada. Elisabet Comelles, Jes´us Gim´enez, Llu´ıs M`arquez, Irene Castell´on, and Victoria Arranz. 2010. Document-level automatic mt evaluation based on discourse representations. In Proceedings of the Joint Fifth Workshop on Statistical Machine Translation and MetricsMATR, pages 333–338, Uppsala, Sweden, July. ACL. Michael Denkowski and Alon Lavie. 2011. Meteor 1.3: Automatic metric for reliable optimization and evaluation of machine translation systems. In Proceedings of the Sixth Workshop on Statistical Machine Translation, pages 85–91, Edinburgh, Scotland, July. ACL. George Doddington. 2002. Automatic evaluation of machine translation quality using n-gram cooccurrence statistics. In Proceedings of the Second International Conference on Human Language Technology Research, HLT ’02, pages 138–145, San Francisco, CA, USA. Morgan Kaufmann Publishers Inc. Michel Galley, Mark Hopkins, Kevin Knight, and Daniel Marcu. 2004. What’s in a translation rule? In Proceedings of the 2004 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technology, HLT-NAACL, pages 273–280. Jes´us Gim´enez and Llu´ıs M`arquez. 2007. Linguistic features for automatic evaluation of heterogenous MT systems. In Proceedings of the Second Workshop on Statistical Machine Translation, pages 256– 264, Prague, Czech Republic, June. ACL. Jes´us Gim´enez and Llu´ıs M`arquez. 2009. On the robustness of syntactic and semantic features for automatic MT evaluation. In Proceedings of the Fourth Workshop on Statistical Machine Translation, pages 250–258, Athens, Greece, March. ACL. Jes´us Gim´enez and Llu´ıs M`arquez. 2010a. Asiya: an Open Toolkit for Automatic Machine Translation (Meta-)Evaluation. The Prague Bulletin of Mathematical Linguistics, 94:77–86. Jes´us Gim´enez and Llu´ıs M`arquez. 2010b. Linguistic Measures for Automatic Machine Translation Evaluation. Machine Translation, 24(3–4):77–86. Michael Halliday and Ruqaiya Hasan, 1976. Cohesion in English. Longman, London. Christian Hardmeier and Marcello Federico. 2010. Modelling pronominal anaphora in statistical machine translation. In Proceedings of the International Workshop on Spoken Language Translation, pages 283–289. Christian Hardmeier, Joakim Nivre, and J¨org Tiedemann. 2012. 
Document-wide decoding for phrasebased statistical machine translation. In Proceedings of the 2012 Joint Conference on Empirical Methods in Natural Language Processing and Computational Natural Language Learning, EMNLPCoNLL ’12, pages 1179–1190, Jeju Island, Korea. ACL. Christian Hardmeier. 2012. Discourse in statistical machine translation. a survey and a case study. Discours. Revue de linguistique, psycholinguistique et informatique, 11(8726). Mark Hopkins and Jonathan May. 2011. Tuning as ranking. In Proceedings of the 2011 Conference on Empirical Methods in Natural Language Processing, EMNLP ’11. 696 Shafiq Joty, Giuseppe Carenini, and Raymond T. Ng. 2012. A Novel Discriminative Framework for Sentence-Level Discourse Analysis. In Proceedings of the 2012 Joint Conference on Empirical Methods in Natural Language Processing and Computational Natural Language Learning, EMNLP-CoNLL ’12, pages 904–915, Jeju Island, Korea. ACL. Shafiq Joty, Giuseppe Carenini, Raymond T. Ng, and Yashar Mehdad. 2013. Combining Intra- and Multi-sentential Rhetorical Parsing for Documentlevel Discourse Analysis. In Proceedings of the 51st Annual Meeting of the Association for Computational Linguistics, ACL ’13, pages 486–496, Sofia, Bulgaria. ACL. Hans Kamp and Uwe Reyle. 1993. From Discourse to Logic: Introduction to Model theoretic Semantics of Natural Language, Formal Logic and Discourse Representation Theory. Number 42 in Studies in Linguistics and Philosophy. Kluwer Academic Publishers. Chin-Yew Lin. 2004. ROUGE: A Package for Automatic Evaluation of Summaries. In Proceedings of Workshop on Text Summarization Branches Out, pages 74–81, Barcelona. Ding Liu and Daniel Gildea. 2005. Syntactic features for evaluation of machine translation. In Proceedings of the ACL Workshop on Intrinsic and Extrinsic Evaluation Measures for Machine Translation and/or Summarization, pages 25–32, Ann Arbor, Michigan, June. ACL. Chi-kiu Lo, Anand Karthik Tumuluru, and Dekai Wu. 2012. Fully automatic semantic mt evaluation. In Proceedings of the Seventh Workshop on Statistical Machine Translation, pages 243–252, Montr´eal, Canada, June. ACL. William Mann and Sandra Thompson. 1988. Rhetorical Structure Theory: Toward a Functional Theory of Text Organization. Text, 8(3):243–281. Thomas Meyer, Andrei Popescu-Belis, Najeh Hajlaoui, and Andrea Gesmundo. 2012. Machine translation of labeled discourse connectives. In Proceedings of the Tenth Biennial Conference of the Association for Machine Translation in the Americas (AMTA). Thomas Meyer. 2011. Disambiguating temporalcontrastive connectives for machine translation. In Proceedings of the ACL 2011 Student Session, pages 46–51, Portland, OR, USA, June. ACL. Alessandro Moschitti and Roberto Basili. 2006. A Tree Kernel approach to Question and Answer Classification in Question Answering Systems. In Proceedings of the 5th international conference on Language Resources and Evaluation, Genoa, Italy. Alessandro Moschitti, Silvia Quarteroni, Roberto Basili, and Suresh Manandhar. 2007. Exploiting Syntactic and Shallow Semantic Kernels for Question/Answer Classification. In Proceedings of the ACL-2007, pages 776–783, Prague, Czech Republic. Kishore Papineni, Salim Roukos, Todd Ward, and WeiJing Zhu. 2002. BLEU: a method for automatic evaluation of machine translation. In Proceedings of the Association for Computational Linguistics (ACL’02), Philadelphia, PA, USA. Maja Popovic and Hermann Ney. 2007. Word error rates: Decomposition over POS classes and applications for error analysis. 
In Proceedings of the Second Workshop on Statistical Machine Translation, pages 48–55, Prague, Czech Republic, June. ACL. Chris Quirk, Arul Menezes, and Colin Cherry. 2005. Dependency treelet translation: Syntactically informed phrasal smt. In Proceedings of the 43rd Annual Meeting on Association for Computational Linguistics, ACL ’05, pages 271–279, Ann Arbor, Michigan. Matthew Snover, Bonnie Dorr, Richard Schwartz, Linnea Micciulla, and John Makhoul. 2006. A study of translation edit rate with targeted human annotation. In Proceedings of the 7th Biennial Conference of the Association for Machine Translation in the Americas, AMTA ’06, Cambridge, MA, USA. Charles Sutton, Andrew McCallum, and Khashayar Rohanimanesh. 2007. Dynamic Conditional Random Fields: Factorized Probabilistic Models for Labeling and Segmenting Sequence Data. Journal of Machine Learning Research (JMLR), 8:693–723. Kuo-Chung Tai. 1979. The tree-to-tree correction problem. Journal of the ACM, 26(3):422–433, July. Taro Watanabe, Jun Suzuki, Hajime Tsukada, and Hideki Isozaki. 2007. Online large-margin training for statistical machine translation. In Proceedings of the 2007 Joint Conference on Empirical Methods in Natural Language Processing and Computational Natural Language Learning, EMNLP-CoNLL ’07, Prague, Czech Republic. Bonnie Webber, Andrei Popescu-Belis, Katja Markert, and J¨org Tiedemann, editors. 2013. Proceedings of the Workshop on Discourse in Machine Translation. ACL, Sofia, Bulgaria, August. Bonnie Webber. 2004. D-LTAG: Extending Lexicalized TAG to Discourse. Cognitive Science, 28(5):751–779. Billy T. M. Wong and Chunyu Kit. 2012. Extending machine translation evaluation metrics with lexical cohesion to document level. In Proceedings of the 2012 Joint Conference on Empirical Methods in Natural Language Processing and Computational Natural Language Learning, EMNLPCoNLL, pages 1060–1068, Jeju Island, Korea, July. ACL. 697 Dekai Wu and Pascale Fung. 2009. Semantic roles for smt: A hybrid two-pass model. In Proceedings of Human Language Technologies: The 2009 Annual Conference of the North American Chapter of the Association for Computational Linguistics, Companion Volume: Short Papers, pages 13– 16, Boulder, Colorado, June. 698
2014
65
Proceedings of the 52nd Annual Meeting of the Association for Computational Linguistics, pages 699–709, Baltimore, Maryland, USA, June 23-25 2014. c⃝2014 Association for Computational Linguistics Learning Continuous Phrase Representations for Translation Modeling Jianfeng Gao Xiaodong He Wen-tau Yih Li Deng Microsoft Research One Microsoft Way Redmond, WA 98052, USA {jfgao,xiaohe,scottyih,deng}@microsoft.com Abstract This paper tackles the sparsity problem in estimating phrase translation probabilities by learning continuous phrase representations, whose distributed nature enables the sharing of related phrases in their representations. A pair of source and target phrases are projected into continuous-valued vector representations in a low-dimensional latent space, where their translation score is computed by the distance between the pair in this new space. The projection is performed by a neural network whose weights are learned on parallel training data. Experimental evaluation has been performed on two WMT translation tasks. Our best result improves the performance of a state-of-the-art phrase-based statistical machine translation system trained on WMT 2012 French-English data by up to 1.3 BLEU points. 1 Introduction The phrase translation model, also known as the phrase table, is one of the core components of phrase-based statistical machine translation (SMT) systems. The most common method of constructing the phrase table takes a two-phase approach (Koehn et al. 2003). First, the bilingual phrase pairs are extracted heuristically from an automatically word-aligned training data. The second phase, which is the focus of this paper, is parameter estimation where each phrase pair is assigned with some scores that are estimated based on counting these phrases or their words using the same word-aligned training data. Phrase-based SMT systems have achieved state-of-the-art performance largely due to the fact that long phrases, rather than single words, are used as translation units so that useful context information can be captured in selecting translations. However, longer phrases occur less often in training data, leading to a severe data sparseness problem in parameter estimation. There has been a plethora of research reported in the literature on improving parameter estimation for the phrase translation model (e.g., DeNero et al. 2006; Wuebker et al. 2010; He and Deng 2012; Gao and He 2013). This paper revisits the problem of scoring a phrase translation pair by developing a Continuous-space Phrase Translation Model (CPTM). The translation score of a phrase pair in this model is computed as follows. First, we represent each phrase as a bag-of-words vector, called word vector henceforth. We then project the word vector, in either the source language or the target language, into a respective continuous feature vector in a common low-dimensional space that is language independent. The projection is performed by a multi-layer neural network. The projected feature vector forms the continuous representation of a phrase. Finally, the translation score of a source-target phrase pair is computed by the distance between their feature vectors. The main motivation behind the CPTM is to alleviate the data sparseness problem associated with the traditional counting-based methods by grouping phrases with a similar meaning across different languages. This style of grouping is made possible because of the distributed nature of the continuous-space representations for phrases. 
No such sharing was possible in the original symbolic space for representing words or phrases. In this model, semantically or grammatically related phrases, in both the source and the target languages, would tend to have similar (close) feature vectors in the continuous space, guided by the training objective. Since the translation score is a smooth function of these feature vectors, a small 699 change in the features should only lead to a small change in the translation score. The primary research task in developing the CPTM is learning the continuous representation of a phrase that is effective for SMT. Motivated by recent studies on continuous-space language models (e.g., Bengio et al. 2003; Mikolov et al. 2011; Schwenk et al., 2012), we use a neural network to project a word vector to a feature vector. Ideally, the projection would discover those latent features that are useful to differentiate good translations from bad ones, for a given source phrase. However, there is no training data with explicit annotation on the quality of phrase translations. The phrase translation pairs are hidden in the parallel source-target sentence pairs, which are used to train the traditional translation models. The quality of a phrase translation can only be judged implicitly through the translation quality of the sentences, as measured by BLEU, which contain the phrase pair. In order to overcome this challenge and let the BLEU metric guide the projection learning, we propose a new method to learn the parameters of a neural network. This new method, via the choice of an appropriate objective function in training, automatically forces the feature vector of a source phrase to be closer to the feature vectors of its candidate translations. As a result, the BLEU score is improved when these translations are selected by an SMT decoder to produce final, sentence-level translations. The new learning method makes use of the L-BFGS algorithm and the expected BLEU as the objective function defined on N-best lists. To the best of our knowledge, the CPTM proposed in this paper is the first continuous-space phrase translation model that makes use of joint representations of a phrase in the source language and its translation in the target language (to be detailed in Section 4) and that is shown to lead to significant improvement over a standard phrasebased SMT system (to be detailed in Section 6). Like the traditional phrase translation model, the translation score of each bilingual phrase pair is modeled explicitly in our model. However, instead of estimating the phrase translation score on aligned parallel data, our model intends to capture the grammatical and semantic similarity between a source phrase and its paired target phrase by projecting them into a common, continuous space that is language independent. 1 Niehues et al. (2011) use different translation units in order to integrate the n-gram translation model into the phrasebased approach. However, it is not clear how a continuous The rest of the paper is organized as follows. Section 2 reviews previous work. Section 3 reviews the log-linear model for phrase-based SMT and Sections 4 presents the CPTM. Section 5 describes the way the model parameters are estimated, followed by the experimental results in Section 6. Finally, Section 7 concludes the paper. 2 Related Work Representations of words or documents as continuous vectors have a long history. 
Most of the earlier latent semantic models for learning such vectors are designed for information retrieval (Deerwester et al. 1990; Hofmann 1999; Blei et al. 2003). In contrast, recent work on continuous space language models, which estimate the probability of a word sequence in a continuous space (Bengio et al. 2003; Mikolov et al. 2010), have advanced the state of the art in language modeling, outperforming the traditional n-gram model on speech recognition (Mikolov et al. 2012; Sundermeyer et al. 2013) and machine translation (Mikolov 2012; Auli et al. 2013). Because these models are developed for monolingual settings, word embedding from these models is not directly applicable to translation. As a result, variants of such models for cross-lingual scenarios have been proposed so that words in different languages are projected into the shared latent vector space (Dumais et al. 1997; Platt et al. 2010; Vinokourov et al. 2002; Yih et al. 2011; Gao et al. 2011; Huang et al. 2013; Zou et al. 2013). In principle, a phrase table can be derived using any of these cross-lingual models, although decoupling the derivation from the SMT training often results in suboptimal performance (e.g., measured in BLEU), as we will show in Section 6. Recently, there is growing interest in applying continuous-space models for translation. The most related to this study is the work of continuous space n-gram translation models (Schwenk et al. 2007; Schwenk 2012; Son et al. 2012), where the feed-forward neural network language model is extended to represent translation probabilities. However, these earlier studies focused on the ngram translation models, where the translation probability of a phrase or a sentence is decomposed as a product of n-gram probabilities as in a standard n-gram language model. Therefore, it is not clear how their approaches can be applied to the phrase translation model1, which is much more version of such a model can be trained efficiently because the factor models used by Son et al. cannot be applied directly. 700 widely used in modern SMT systems. In contrast, our model learns jointly the representations of a phrase in the source language as well as its translation in the target language. The recurrent continuous translation models proposed by Kalchbrenner and Blunsom (2013) also adopt the recurrent language model (Mikolov et al. 2010). But unlike the n-gram translation models above, they make no Markov assumptions about the dependency of the words in the target sentence. Continuous space models have also been used for generating translations for new words (Mikolov et al. 2013a) and ITG reordering (Li et al. 2013). There has been a lot of research on improving the phrase table in phrase-based SMT (Marcu and Wong 2002; Lamber and Banchs 2005; Denero et al. 2006; Wuebker et al. 2010; Zhang et al., 2011; He and Deng 2012; Gao and He 2013). Among them, (Gao and He 2013) is most relevant to the work described in this paper. They estimate phrase translation probabilities using a discriminative training method under the N-best reranking framework of SMT. In this study we use the same objective function to learn the continuous representations of phrases, integrating the strengths associated with these earlier studies. 3 The Log-Linear Model for SMT Phrase-based SMT is based on a log-linear model which requires learning a mapping between input 𝐹∈ℱ to output 𝐸∈ℰ. 
We are given  Training samples (𝐹𝑖, 𝐸𝑖) for 𝑖= 1 … 𝑁, where each source sentence 𝐹𝑖 is paired with a reference translation in target language 𝐸𝑖;  A procedure GEN to generate a list of N-best candidates GEN(𝐹𝑖) for an input 𝐹𝑖, where GEN in this study is the baseline phrasebased SMT system, i.e., an in-house implementation of the Moses system (Koehn et al. 2007) that does not use the CPTM, and each 𝐸∈GEN(𝐹𝑖) is labeled by the sentence-level BLEU score (He and Deng 2012), denoted by sBleu(𝐸𝑖, 𝐸) , which measures the quality of 𝐸 with respect to its reference translation 𝐸𝑖;  A vector of features 𝐡∈ℝ𝑀 that maps each (𝐹𝑖, 𝐸) to a vector of feature values2; and  A parameter vector 𝛌∈ℝ𝑀, which assigns a real-valued weight to each feature. 2 Our baseline system uses a set of standard features suggested in Koehn et al. (2007), which is also detailed in Section 6. The components GEN(. ), 𝐡 and 𝛌 define a loglinear model that maps 𝐹𝑖 to an output sentence as follows: 𝐸∗= argmax (𝐸,𝐴)∈GEN(𝐹𝑖) 𝛌T𝐡(𝐹𝑖, 𝐸, 𝐴) (1) which states that given 𝛌 and 𝐡, argmax returns the highest scoring translation 𝐸∗, maximizing over correspondences 𝐴. In phrase-based SMT, 𝐴 consists of a segmentation of the source and target sentences into phrases and an alignment between source and target phrases. Since computing the argmax exactly is intractable, it is commonly performed approximatedly by beam search (Och and Ney 2004). Following Liang et al. (2006), we assume that every translation candidate is always coupled with a corresponding 𝐴, called the Viterbi derivation, generated by (1). 4 A Continuous-Space Phrase Translation Model (CPTM) The architecture of the CPTM is shown in Figures 1 and 2, where for each pair of source and target phrases (𝑓𝑖, 𝑒𝑗) in a source-target sentence pair, we first project them into feature vectors 𝐲𝑓𝑖 and 𝐲𝑒𝑗 in a latent, continuous space via a neural network with one hidden layer (as shown in Figure 2), and then compute the translation score, score(𝑓𝑖, 𝑒𝑗), by the distance of their feature vectors in that space. We start with a bag-of-words representation of a phrase 𝐱∈ℝ𝑑, where 𝐱 is a word vector and 𝑑 is the size of the vocabulary consisting of words in both source and target languages, which is set to 200K in our experiments. We then learn to project 𝐱 to a low-dimensional continuous space ℝ𝑘: 𝜙(𝐱): ℝ𝑑→ℝ𝑘 The projection is performed using a fully connected neural network with one hidden layer and tanh activation functions. Let 𝐖1 be the projection matrix from the input layer to the hidden layer and 𝐖2 the projection matrix from the hidden layer to the output layer, we have 𝐲≡𝜙(𝐱) = tanh (𝐖2 T(tanh(𝐖1 T𝐱))) (2) 701 Figure 2. A neural network model for phrases giving rise to their continuous representations. The model with the same form is used for both source and target languages. The translation score of a source phrase f and a target phrase e can be measured as the similarity (or distance) between their feature vectors. We choose the dot product as the similarity function3: score(𝑓, 𝑒) ≡sim𝛉(𝐱𝑓, 𝐱𝑒) = 𝐲𝑓 T𝐲𝑒 (3) According to (2), we see that the value of the scoring function is determined by the projection matrices 𝛉= {𝐖1, 𝐖2}. The CPTM of (2) and (3) can be incorporated into the log-linear model for SMT (1) by 3 In our experiments, we compare dot product and the cosine similarity functions and find that the former works better for nonlinear multi-layer neural networks, and the latter works better for linear neural networks. 
For the sake of clarity, we choose dot product when we describe the CPTM and its training in Sections 4 and 5, respectively. 4 The baseline SMT needs to be reasonably good in the sense that the oracle BLEU score on the generated n-best introducing a new feature ℎ𝑀+1 and a new feature weight 𝜆𝑀+1. The new feature is defined as ℎ𝑀+1(𝐹𝑖, 𝐸, 𝐴) = ∑ sim𝛉(𝐱𝑓, 𝐱𝑒) (𝑓,𝑒 )∈𝐴 (4) Thus, the phrase-based SMT system, into which the CPTM is incorporated, is parameterized by (𝛌, 𝛉), where 𝛌 is a vector of a handful of parameters used in the log-linear model of (1), with one weight for each feature; and 𝛉 is the projection matrices used in the CPTM defined by (2) and (3). In our experiments we take three steps to learn (𝛌, 𝛉): 1. We use a baseline phrase-based SMT system to generate for each source sentence in training data an N-best list of translation hypotheses4. 2. We set 𝛌 to that of the baseline system and let 𝜆𝑀+1 = 1, and optimize 𝛉 w.r.t. a loss function on training data5. 3. We fix 𝛉, and optimize 𝛌 using MERT (Och 2003) to maximize BLEU on dev data. In the next section, we will describe Step 2 in detail as it is directly related to the CPTM training. lists needs to be significantly higher than that of the top-1 translations so that the CPTM can be effectively trained. 5 The initial value of 𝜆𝑀+1 can also be tuned using the dev set. However, we find in a pilot study that it is good enough to set it to 1 when the values of all the baseline feature weights, used in the log-linear model of (1), are properly normalized, such as by setting 𝜆𝑚= 𝜆𝑚/𝐶 for 𝑚= 1 … 𝑀, where 𝐶 is the unnormalized weight value of the target language model. Figure 1. The architecture of the CPTM, where the mapping from a phrase to its continuous representation is shown in Figure 2. 200K (d) 100 100 (𝑘) (𝑤1 … 𝑤𝑛) Word vector Neural network Feature vector 𝐖1 𝐖2 𝐱 𝐲 Raw phrase 𝑒 or 𝑓 … (the process of) (machine translation) (consists of). . . … (le processus de) (traduction automatique) (consiste en). . . 𝑒𝑗−1 𝑒𝑗 𝑒𝑗+1 𝑓𝑖−1 𝑓𝑖 𝑓𝑖+1 𝐲𝑒𝑗 𝐲𝑓𝑖 𝑦𝑓(𝑘) score(𝑓𝑖, 𝑒𝑗) = 𝐲𝑓𝑖 T 𝐲𝑒𝑗 Target phrases Continuous representations of target phrases Source phrases Continuous representations of source phrases Translation score as dot product of feature vectors in the continuous space 702 5 Training CPTM This section describes the loss function we employ with the CPTM and the algorithm to train the neural network weights. We define the loss function ℒ(𝛉) as the negative of the N-best list based expected BLEU, denoted by xBleu(𝛉). In the reranking framework of SMT outlined in Section 3, xBleu(𝛉) over one training sample (𝐹𝑖, 𝐸𝑖) is defined as xBleu(𝛉) = ∑ 𝑃(𝐸|𝐹𝑖)sBleu(𝐸𝑖, 𝐸) 𝐸∈GEN(𝐹𝑖) (5) where sBleu(𝐸𝑖, 𝐸) is the sentence-level BLEU score, and 𝑃(𝐸|𝐹𝑖) is the translation probability from 𝐹𝑖 to 𝐸 computed using softmax as 𝑃(𝐸|𝐹𝑖) = exp(𝛾𝛌T𝐡(𝐹𝑖,𝐸,𝐴)) ∑ exp(𝛾𝛌T𝐡(𝐹𝑖,𝐸′,𝐴)) 𝐸′∈GEN(𝐹𝑖) (6) where 𝛌T𝐡 is the log-linear model of (1), which also includes the feature derived from the CPTM as defined by (4), and 𝛾 is a tuned smoothing factor. Let ℒ(𝛉) be a loss function which is differentiable w.r.t. the parameters of the CPTM, 𝛉. We can compute the gradient of the loss and learn 𝛉 using gradient-based numerical optimization algorithms, such as L-BFGS or stochastic gradient descent (SGD). 
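To make the scoring function and the training objective concrete, the following is a minimal NumPy sketch of Eqs. (2)–(6): the two-layer tanh projection of a bag-of-words phrase vector, the dot-product translation score, the derived feature h_{M+1}, and the N-best expected BLEU. This is an illustration only, not the authors' implementation; the function names, the representation of an N-best entry as a (feature-vector, sentence-BLEU) pair, and the default smoothing factor are assumptions made here for readability.

```python
import numpy as np

def project(x, W1, W2):
    """Two-layer tanh projection of a bag-of-words phrase vector x (Eq. 2)."""
    return np.tanh(W2.T @ np.tanh(W1.T @ x))

def phrase_score(x_f, x_e, W1, W2):
    """Translation score of a source-target phrase pair: dot product of the
    two continuous representations in the latent space (Eq. 3)."""
    return float(project(x_f, W1, W2) @ project(x_e, W1, W2))

def cptm_feature(phrase_pairs, W1, W2):
    """CPTM feature h_{M+1} for one derivation A: the sum of the scores of
    all phrase pairs used in the derivation (Eq. 4)."""
    return sum(phrase_score(x_f, x_e, W1, W2) for x_f, x_e in phrase_pairs)

def expected_bleu(nbest, lam, gamma=1.0):
    """N-best expected BLEU for one source sentence (Eqs. 5-6).

    `nbest` is a list of (h, sbleu) pairs, where h is the full feature vector
    of a candidate (already including h_{M+1}) and sbleu its sentence-level
    BLEU; `lam` holds the log-linear weights and `gamma` the smoothing factor.
    The training loss L(theta) is the negative of this value.
    """
    scores = np.array([gamma * float(lam @ h) for h, _ in nbest])
    probs = np.exp(scores - scores.max())   # softmax over the N-best list
    probs /= probs.sum()
    sbleus = np.array([sbleu for _, sbleu in nbest])
    return float(probs @ sbleus)
```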
5.1 Computing the Gradient Since the loss does not explicitly depend on 𝛉, we use the chain rule for differentiation: 𝜕ℒ(𝛉) 𝜕𝛉 = ∑ 𝜕ℒ(𝛉) 𝜕sim𝛉(𝐱𝑓, 𝐱𝑒) 𝜕sim𝛉(𝐱𝑓, 𝐱𝑒) 𝜕𝛉 (𝑓,𝑒 ) = ∑−𝛿(𝑓,𝑒) 𝜕sim𝛉(𝐱𝑓, 𝐱𝑒) 𝜕𝛉 (𝑓,𝑒 ) (7) which takes the form of summation over all phrase pairs occurring either in a training sample (stochastic mode) or in the entire training data (batch mode). 𝛿(𝑓,𝑒) in (7) is known as the error term of the phrase pair (𝑓, 𝑒), and is defined as 𝛿(𝑓,𝑒) = − 𝜕ℒ(𝛉) 𝜕sim𝛉(𝐱𝑓,𝐱𝑒) (8) It describes how the overall loss changes with the translation score of the phrase pair (𝑓, 𝑒). We will leave the derivation of 𝛿(𝑓,𝑒) to Section 5.1.2, and will first describe how the gradient of sim𝛉(𝐱𝑓, 𝐱𝑒) w.r.t. 𝛉 is computed. 5.1.1 Computing 𝝏𝐬𝐢𝐦𝛉(𝐱𝒇, 𝐱𝒆)/𝝏𝛉 Without loss of generality, we use the following notations to describe a neural network:  𝐖𝑙 is the projection matrix for the l-th layer of the neural network;  𝐱 is the input word vector of a phrase;  𝐳𝑙 is the sum vector of the l-th layer; and  𝐲𝑙= 𝜎(𝐳𝑙) is the output vector of the l-th layer, where 𝜎 is an activation function; Thus, the CPTM defined by (2) and (3) can be represented as 𝐳1 = 𝐖1 T𝐱 𝐲1 = 𝜎(𝐳1) 𝐳2 = 𝐖2 T𝐲1 𝐲2 = 𝜎(𝐳2) sim𝛉(𝐱𝑓, 𝐱𝑒) = (𝐲𝑓 2) T𝐲𝑒2 The gradient of the matrix 𝐖2 which projects the hidden vector to the output vector is computed as: ∂sim𝛉(𝐱𝑓, 𝐱𝑒) ∂𝐖2 = ∂(𝐲𝑓 2) T ∂𝐖2 𝐲𝑒2 + (𝐲𝑓 2) T ∂𝐲𝑒2 ∂𝐖2 = 𝐲𝑓 1 (𝐲𝑒2 ∘𝜎′(𝐳𝑓 2)) T + 𝐲𝑒1 (𝐲𝑓 2 ∘𝜎′(𝐳𝑒2)) T (9) where ∘ is the element-wise multiplication (Hadamard product). Applying the back propagation principle, the gradient of the projection matrix mapping the input vector to the hidden vector 𝐖1 is computed as ∂sim𝛉(𝐱𝑓, 𝐱𝑒) ∂𝐖1 = 𝐱𝑓(𝐖2 (𝐲𝑒2 ∘𝜎′(𝐳𝑓 2)) ∘𝜎′(𝐳𝑓 1)) T +𝐱𝑒(𝐖2 (𝐲𝑓 2 ∘𝜎′(𝐳𝑒2)) ∘𝜎′(𝐳𝑒1)) T (10) The derivation can be easily extended to a neural network with multiple hidden layers. 5.1.2 Computing 𝜹(𝒇,𝒆) To simplify the notation, we rewrite our loss function of (5) and (6) over one training sample as 703 ℒ(𝛉) = −xBleu(𝛉) = −G(𝛉) Z(𝛉) (11) where G(𝛉) = ∑sBleu(𝐸, 𝐸𝑖) exp(𝛌T𝐡(𝐹𝑖, 𝐸, 𝐴)) 𝐸 Z(𝛉) = ∑exp(𝛌T𝐡(𝐹𝑖, 𝐸, 𝐴)) 𝐸 Combining (8) and (11), we have 𝛿(𝑓,𝑒) = 𝜕xBleu(𝛉) 𝜕sim𝛉(𝐱𝑓, 𝐱𝑒) (12) = 1 Z(𝛉) ( 𝜕G(𝛉) 𝜕sim𝛉(𝐱𝑓, 𝐱𝑒) − 𝜕Z(𝛉) 𝜕sim𝛉(𝐱𝑓, 𝐱𝑒) xBleu(𝛉)) Because 𝛉 is only relevant to ℎ𝑀+1 which is defined in (4), we have 𝜕𝛌T𝐡(𝐹𝑖, 𝐸, 𝐴) 𝜕sim𝛉(𝐱𝑓, 𝐱𝑒) = 𝜆𝑀+1 𝜕ℎ𝑀+1(𝐹𝑖, 𝐸, 𝐴) 𝜕sim𝛉(𝐱𝑓, 𝐱𝑒) = 𝜆𝑀+1𝑁(𝑓, 𝑒; 𝐴) (13) where 𝑁(𝑓, 𝑒; 𝐴) is the number of times the phrase pair (𝑓, 𝑒) occurs in 𝐴. Combining (12) and (13), we end up with the following equation 𝛿(𝑓,𝑒) = ∑ U(𝛉, 𝐸)𝑃(𝐸|𝐹𝑖)𝜆𝑀+1𝑁(𝑓, 𝑒; 𝐴) (𝐸,𝐴)∈𝐺𝐸𝑁(𝐹𝑖) where (14) U(𝛉, 𝐸) = sBleu(𝐸𝑖, 𝐸) −xBleu(𝛉). 5.2 The Training Algorithm In our experiments we train the parameters of the CPTM, 𝛉, using the L-BFGS optimizer described in Andrew and Gao (2007), together with the loss function described in (5). The gradient is computed as described in Sections 5.1. Although SGD has been advocated for neural network training due to its simplicity and its robustness to local minima (Bengio 2009), we find that in our task that the L-BFGS minimizes the loss in a desirable fashion empirically when iterating over the complete training data (batch mode). For example, the convergence of the algorithm was found to be smooth, despite the non-convexity in our loss. Another merit of batch training is that the gradient over all training data can be computed efficiently. As shown in Section 5.1, computing 𝜕simθ(x𝑓, x𝑒)/𝜕θ requires large-scale matrix multiplications, and is expensive for multi-layer neural networks. Eq. 
(7) suggests that 𝜕simθ(x𝑓, x𝑒)/𝜕θ and 𝛿(𝑓,𝑒) can be computed separately, thus making the computation cost of the former term only depends on the number of phrase pairs in the phrase table, but not the size of training data. Therefore, the training method described here can be used on larger amounts of training data with little difficulty. As described in Section 4, we take three steps to learn the parameters for both the log-linear model of SMT and the CPTM. While steps 1 and 3 can be easily parallelized on a computer cluster, the CPTM training is performed on a single machine. For example, given a phrase table containing 16M pairs and a 1M-sentence training set, it takes a couple of hours to generate the N-best lists on a cluster, and about 10 hours to train the CPTM on a Xeon E5-2670 2.60GHz machine. For a non-convex problem, model initialization is important. In our experiments we always initialize 𝐖1 using a bilingual topic model trained on parallel data (see detail in Section 6.2), and 𝐖2 as an identity matrix. In principle, the loss function of (5) can be further regularized (e.g. by adding a term of 𝐿2 norm) to deal with overfitting. However, we did not find clear empirical advantage over the simpler early stop approach in a pilot study, which is adopted in the experiments in this paper. 6 Experiments This section evaluates the CPTM presented on two translation tasks using WMT data sets. We first describe the data sets and baseline setup. Then we present experiments where we compare different versions of the CPTM and previous models. 6.1 Experimental Setup Baseline. We experiment with an in-house phrase-based system similar to Moses (Koehn et al. 2007), where the translation candidates are scored by a set of common features including maximum likelihood estimates of source given target phrase mappings 𝑃𝑀𝐿𝐸(𝑒|𝑓) and vice versa 𝑃𝑀𝐿𝐸(𝑓|𝑒), as well as lexical weighting estimates 𝑃𝐿𝑊(𝑒|𝑓) and 𝑃𝐿𝑊(𝑓|𝑒), word and phrase penalties, a linear distortion feature, and a lexicalized reordering feature. The baseline includes a standard 5-gram modified Kneser-Ney language model trained on the target side of the parallel corpora described below. Log-linear weights are estimated with the MERT algorithm (Och 2003). 704 Evaluation. We test our models on two different data sets. First, we train an English to French system based on the data of WMT 2006 shared task (Koehn and Monz 2006). The parallel corpus includes 688K sentence pairs of parliamentary proceedings for training. The development set contains 2000 sentences, and the test set contains other 2000 sentences, all from the official WMT 2006 shared task. Second, we experiment with a French to English system developed using 2.1M sentence pairs of training data, which amounts to 102M words, from the WMT 2012 campaign. The majority of the training data set is parliamentary proceedings except for 5M words which are newswire. We use the 2009 newswire data set, comprising 2525 sentences, as the development set. We evaluate on four newswire domain test sets from 2008, 2010 and 2011 as well as the 2010 system combination test set, containing 2034 to 3003 sentences. In this study we perform a detailed empirical comparison using the WMT 2006 data set, and verify our best models and results using the larger WMT 2012 data set. The metric used for evaluation is case insensitive BLEU score (Papineni et al. 2002). We also perform a significance test using the Wilcoxon signed rank test. 
Differences are considered statistically significant when the p-value is less than 0.05. 6.2 Results of the CPTM Table 1 shows the results measured in BLEU evaluated on the WMT 2006 data set, where Row 1 is the baseline system. Rows 2 to 4 are the systems enhanced by integrating different versions of the CPTM. Rows 5 to 7 present the results of previous models. Row 8 is our best system. Table 2 shows the main results on the WMT 2012 data set. CPTM is the model described in Sections 4. As illustrated in Figure 2, the number of the nodes in the input layer is the vocabulary size 𝑑. Both the hidden layer and the output layer have 100 nodes6. That is, 𝐖1 is a 𝑑× 100 matrix and 𝐖2 a 100 × 100 matrix. The result shows that CPTM leads to a substantial improvement over the baseline system with a statistically significant margin of 1.0 BLEU points as in Table 1. We have developed a set of variants of CPTM to investigate two design choices we made in developing the CPTM: (1) whether to use a linear 6 We can achieve slightly better results using more nodes in the hidden and output layers, say 500 nodes. But the model projection or a multi-layer nonlinear projection; and (2) whether to compute the phrase similarity using word-word similarities as suggested by e.g., the lexical weighting model (Koehn et al. 2003). We compare these variants on the WMT 2006 data set, as shown in Table 1. CPTML (Row 3 in Table 1) uses a linear neural network to project a word vector of a phrase 𝐱 to a feature vector 𝐲: 𝐲≡𝜙(𝐱) = 𝐖T𝐱, where 𝐖 is a 𝑑× 100 projection matrix. The translation score of a source phrase f and a target phrase e is measured as the similarity of their feature vectors. We choose cosine similarity because it works better than dot product for linear projection. CPTMW (Row 4 in Table 1) computes the phrase similarity using word-word similarity scores. This follows the common smoothing strategy of addressing the data sparseness problem in modeling phrase translations, such as the lexical weighting model (Koehn et al. 2003) and the word factored n-gram translation model (Son et al. 2012). Let 𝑤 denote a word, and 𝑓 and 𝑒 the source and target phrases, respectively. We define sim(𝑓, 𝑒) = 1 |𝑓| ∑ sim𝜏(𝑤, 𝑒) + 𝑤∈𝑓 1 |𝑒| ∑ sim𝜏(𝑤, 𝑓) 𝑤∈𝑒 where sim𝜏(𝑤, 𝑒) (or sim𝜏(𝑤, 𝑓)) is the wordphrase similarity, and is defined as a smooth approximation of the maximum function sim𝜏(𝑤, 𝑒) = ∑ sim(𝑤, 𝑤′) exp(𝜏sim(𝑤, 𝑤′)) 𝑤′∈𝑒 ∑ exp(𝜏sim(𝑤, 𝑤′)) 𝑤′∈𝑒 training is too slow to perform a detailed study within a reasonable time. Therefore, all the models reported in this paper use 100 nodes. # Systems WMT test2006 1 Baseline 33.06 2 CPTM 34.10α 3 CPTML 33.60αβ 4 CPTMW 33.25β 5 BLTMPR 33.15β 6 DPM 33.29β 7 MRFP 33.91α 8 Comb (2 + 7) 34.39αβ Table 1: BLEU results for the English to French task using translation models and systems built on the WMT 2006 data set. The superscripts α and β indicate statistically significant difference (p < 0.05) from Baseline and CPTM, respectively. 705 where sim𝜏(𝑤, 𝑒) (or sim𝜏(𝑤, 𝑓)) is the wordphrase similarity, and is defined as a smooth approximation of the maximum function where 𝜏 is the tuned smoothing parameter. Similar to CPTM, CPTMW also uses a nonlinear projection to map each word (not a phrase vector as in CPTM) to a feature vector. Two observations can be made by comparing CPTM in Row 2 to its variants in Table 1. First of all, it is more effective to model the phrase translation directly than decomposing it into wordword translations in the CPTMs. 
Second, we see that the nonlinear projection is able to generate more effective features, leading to better results than the linear projection. We also compare the best version of the CPTM i.e., CPTM, with three related models proposed previously. We start the discussion with the results on the WMT 2006 data set in Table 1. Rows 5 and 6 in Table 1 are two state-of-theart latent semantic models that are originally trained on clicked query-document pairs (i.e., clickthrough data extracted from search logs) for query-document matching (Gao et al. 2011). To adopt these models for SMT, we view source-target sentence pairs as clicked query-document pairs, and trained both models using the same methods as in Gao et al. (2011) on the parallel bilingual training data described earlier. Specifically, BTLMPR is an extension to PLSA, and is the best performer among different versions of the Bi-Lingual Topic Model (BLTM) described in Gao et al. (2011). BLTM with Posterior Regularization (BLTMPR) is trained on parallel training data using the EM algorithm with a constraint enforcing a source sentence and its paralleled target sentence to not only share the same prior topic distribution, but to also have similar fractions of words assigned to each topic. We incorporated the model into the log-linear model for SMT (1) as 7 Gao and He (2013) reported results of MRF models with different feature sets. We picked the MRF using phrase features only (MRFP) for comparison since we are mainly interested in phrase representation. follows. First of all, the topic distribution of a source sentence 𝐹𝑖, denoted by 𝑃(𝑧|𝐹𝑖), is induced from the learned topic-word distributions using EM. Then, each translation candidate 𝐸 in the N-best list GEN(𝐹𝑖) is scored as 𝑃(𝐸|𝐹𝑖) = ∏ ∑𝑃(𝑤|𝑧)𝑃(𝑧|𝐹𝑖) 𝑧 𝑤∈𝐸 𝑃(𝐹𝑖|𝐸) can be similarly computed. Finally, the logarithms of the two probabilities are incorporated into the log-linear model of (1) as two additional features. DPM is the Discriminative Projection Model described in Gao et al. (2011), which is an extension of LSA. DPM uses a matrix to project a word vector of a sentence to a feature vector. The projection matrix is learned on parallel training data using the S2Net algorithm (Yih et al. 2011). DPM can be incorporated into the log-linear model for SMT (1) by introducing a new feature ℎ𝑀+1 for each phrase pair, which is defined as the cosine similarity of the phrases in the project space. As we see from Table 1, both latent semantic models, although leading to some slight improvement over Baseline, are much less effective than CPTM. Finally, we compare the CPTM with the Markov Random Field model using phrase features (MRFP in Tables 1 and 2), proposed by Gao and He (2013)7, on both the WMT 2006 and WMT 2012 datasets. MRFp is a state-of-the-art large scale discriminative training model that uses the same expected BLEU training criterion, which has proven to give superior performance across a range of MT tasks recently (He and Deng 2012, Setiawan and Zhou 2013, Gao and He 2013). Unlike CPTM, MRFp is a linear model that simply treats each phrase pair as a single feature. Therefore, although both are trained using the # Systems dev news2011 news2010 news2008 newssyscomb2010 1 Baseline 23.58 25.24 24.35 20.36 24.14 2 MRFP 24.07α 26.00α 24.90 20.84α 25.05α 3 CPTM 24.12α 26.25α 25.05α 21.15αβ 24.91α 4 Comb (2 + 3) 24.46αβ 26.56αβ 25.52αβ 21.64αβ 25.22α Table 2: BLEU results for the French to English task using translation models and systems built on the WMT 2012 data set. 
The superscripts α and β indicate statistically significant difference (p < 0.05) from Baseline and MRFp, respectively. 706 same expected BLEU based objective function, CPTM and MRFp model the translation relationship between two phrases from different angles. MRFp estimates one translation score for each phrase pair explicitly without parameter sharing, while in CPTM, all phrases share the same neural network that projects raw phrases to the continuous space, providing a more smoothed estimation of the translation score for each phrase pair. The results in Tables 1 and 2 show that CPTM outperforms MRFP on most of the test sets across the two WMT data sets, but the difference between them is often not significant. Our interpretation is that although CPTM provides a better smoothed estimation for low-frequent phrase pairs, which otherwise suffer the data sparsity issue, MRFp provides a more precise estimation for those high-frequent phrase pairs. That is, CPTM and MRFp capture complementary information for translation. We thus combine CPTM and MRFP (Comb in Tables 1 and 2) by incorporating two features, each for one model, into the log-linear model of SMT (1). We observe that for both translation tasks, accuracy improves by up to 0.8 BLEU over MRFP alone (e.g., on the news2008 test set in Table 2). The results confirm that CPTM captures complementary translation information to MRFp. Overall, we improve accuracy by up to 1.3 BLEU over the baseline on both WMT data sets. 7 Conclusions The work presented in this paper makes two major contributions. First, we develop a novel phrase translation model for SMT, where joint representations are exploited of a phrase in the source language and of its translation in the target language, and where the translation score of the pair of source-target phrases are represented as the distance between their feature vectors in a low-dimensional, continuous space. The space is derived from the representations generated using a multilayer neural network. Second, we present a new learning method to train the weights in the multilayer neural network for the end-to-end BLEU metric directly. The training method is based on L-BFGS. We describe in detail how the gradient in closed form, as required for efficient optimization, is derived. The objective function, which takes the form of the expected BLEU computed from N-best lists, is very different from the usual objective functions used in most existing architectures of neural networks, e.g., cross entropy (Hinton et al. 2012) or mean square error (Deng et al. 2012). We hence have provided details in the derivation of the gradient, which can serve as an example to guide the derivation of neural network learning with other non-standard objective functions in the future. Our evaluation on two WMT data sets show that incorporating the continuous-space phrase translation model into the log-linear framework significantly improves the accuracy of a state-ofthe-art phrase-based SMT system, leading to a gain up to 1.3 BLEU. Careful implementation of the L-BFGS optimization based on the BLEUcentric objective function, together with the associated closed-form gradient, is a key to the success. A natural extension of this work is to expand the model and learning algorithm from shallow to deep neural networks. The deep models are expected to produce more powerful and flexible semantic representations (e.g., Tur et al., 2012), and thus greater performance gain than what is presented in this paper. 
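As a companion to the closed-form gradient summarized above and derived in Section 5.1, here is a rough NumPy sketch of how Eqs. (9), (10) and (14) fit together for one sentence. It is a sketch under assumed data structures (each hypothesis carries a dictionary mapping a phrase pair to its count N(f, e; A)), not the released or reference implementation, and the activation σ is taken to be tanh as in Eq. (2).

```python
import numpy as np

def d_tanh(z):
    """Derivative of the tanh activation."""
    return 1.0 - np.tanh(z) ** 2

def forward(x, W1, W2):
    """Intermediate quantities of the projection in Eq. (2)."""
    z1 = W1.T @ x
    y1 = np.tanh(z1)
    z2 = W2.T @ y1
    y2 = np.tanh(z2)
    return z1, y1, z2, y2

def grad_sim(x_f, x_e, W1, W2):
    """Gradients of sim_theta(x_f, x_e) w.r.t. W2 (Eq. 9) and W1 (Eq. 10)."""
    z1f, y1f, z2f, y2f = forward(x_f, W1, W2)
    z1e, y1e, z2e, y2e = forward(x_e, W1, W2)
    gW2 = np.outer(y1f, y2e * d_tanh(z2f)) + np.outer(y1e, y2f * d_tanh(z2e))
    gW1 = (np.outer(x_f, (W2 @ (y2e * d_tanh(z2f))) * d_tanh(z1f)) +
           np.outer(x_e, (W2 @ (y2f * d_tanh(z2e))) * d_tanh(z1e)))
    return gW1, gW2

def error_terms(nbest, probs, xbleu, lam_m1):
    """delta_(f,e) for all phrase pairs in one sentence's N-best list (Eq. 14).

    `nbest` is a list of (sbleu, pair_counts) entries, where pair_counts maps
    a phrase pair to N(f, e; A); `probs` are the softmax probabilities P(E|F_i)
    and `xbleu` is the expected BLEU of the list.
    """
    delta = {}
    for (sbleu, pair_counts), p in zip(nbest, probs):
        u = sbleu - xbleu                       # U(theta, E) = sBleu - xBleu
        for pair, count in pair_counts.items():
            delta[pair] = delta.get(pair, 0.0) + u * p * lam_m1 * count
    return delta

# Eq. (7): the loss gradient is minus the sum, over phrase pairs, of
# delta_(f,e) times grad_sim(f, e), accumulated over the data for L-BFGS.
```

Note that, as observed in Section 5.2, the two ingredients are independent: grad_sim depends only on the distinct phrase pairs in the phrase table, while the error terms depend only on the N-best lists, so they can be computed separately and combined at the end.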
8 Acknowledgements We thank Michael Auli for providing a dataset and for helpful discussions. We also thank the four anonymous reviewers for their comments. References Andrew, G. and Gao, J. 2007. Scalable training of L1-regularized log-linear models. In ICML. Auli, M., Galley, M., Quirk, C. and Zweig, G. 2013 Joint language and translation modeling with recurrent neural networks. In EMNLP. Bengio, Y. 2009. Learning deep architectures for AI. Fundamental Trends Machine Learning, vol. 2, no. 1, pp. 1–127. Bengio, Y., Duharme, R., Vincent, P., and Janvin, C. 2003. A neural probabilistic language model. JMLR, 3:1137-1155. Blei, D. M., Ng, A. Y., and Jordan, M. J. 2003. Latent Dirichlet allocation. Journal of Machine Learning Research, 3: 993-1022. Collobert, R., Weston, J., Bottou, L., Karlen, M., Kavukcuoglu, K., and Kuksa, P. 2011. Natural language processing (almost) from scratch. Journal of Machine Learning Research, vol. 12. 707 Deerwester, S., Dumais, S. T., Furnas, G. W., Landauer, T., and Harshman, R. 1990. Indexing by latent semantic analysis. Journal of the American Society for Information Science, 41(6): 391-407 DeNero, J., Gillick, D., Zhang, J., and Klein, D. 2006. Why generative phrase models underperform surface heuristics. In Workshop on Statistical Machine Translation, pp. 31-38. Deng, L., Yu, D., and Platt, J. 2012. Scalable stacking and learning for building deep architectures. In ICASSP. Diamantaras, K. I., and Kung, S. Y. 1996. Principle Component Neural Networks: Theory and Applications. Wiley-Interscience. Dumais S., Letsche T., Littman M. and Landauer T. 1997. Automatic cross-language retrieval using latent semantic indexing. In AAAI-97 Spring Symposium Series: Cross-Language Text and Speech Retrieval. Ganchev, K., Graca, J., Gillenwater, J., and Taskar, B. 2010. Posterior regularization for structured latent variable models. Journal of Machine Learning Research, 11 (2010): 20012049. Gao, J., and He, X. 2013. Training MRF-based translation models using gradient ascent. In NAACL-HLT, pp. 450-459. Gao, J., Toutanova, K., Yih., W-T. 2011. Clickthrough-based latent semantic models for web search. In SIGIR, pp. 675-684. He, X., and Deng, L. 2012. Maximum expected bleu training of phrase and lexicon translation models. In ACL, pp. 292-301. Hinton, G., and Salakhutdinov, R., 2010. Discovering Binary Codes for Documents by Learning Deep Generative Models. Topics in Cognitive Science, pp. 1-18. Hinton, G., Deng, L., Yu, D., Dahl, G., Mohamed, A., Jaitly, N., Senior, A., Vanhoucke, V., Nguyen, P., Sainath, T., and Kingsbury, B., 2012. Deep neural networks for acoustic modeling in speech recognition. IEEE Signal Processing Magazine, vol. 29, no. 6, pp. 82-97. Hofmann, T. 1999. Probabilistic latent semantic indexing. In SIGIR, pp. 50-57. Huang, P-S., He, X., Gao, J., Deng, L., Acero, A. and Heck, L. 2013. Learning deep structured semantic models for web search using clickthrough data. In CIKM. Kalchbrenner, N. and Blunsom, P. 2013. Recurrent continuous translation models. In EMNLP. Koehn, P., Hoang, H., Birch, A., Callison-Burch, C., Federico, M., Bertoldi, N., Cowan, B., Shen, W., Moran, C., Zens, R., Dyer, C., Bojar, O., Constantin, A., and Herbst, E. 2007. Moses: open source toolkit for statistical machine translation. In ACL 2007, demonstration session. Koehn, P. and Monz, C. 2006. Manual and automatic evaluation of machine translation between European languages. In Workshop on Statistical Machine Translation, pp. 102-121. Koehn, P., Och, F., and Marcu, D. 2003. 
Statistical phrase-based translation. In HLT-NAACL, pp. 127-133. Lambert, P. and Banchs, R. E. 2005. Data inferred multi-word expressions for statistical machine translation. In MT Summit X, Phuket, Thailand. Li, P., Liu, Y., and Sun, M. 2013. Recursive autoencoders for ITG-based translation. In EMNLP. Liang,P., Bouchard-Cote,A., Klein, D. and Taskar, B. 2006. An end-to-end discriminative approach to machine translation. In COLINGACL. Marcu, D., and Wong, W. 2002. A phrase-based, joint probability model for statistical machine translation. In EMNLP. Mikolov, T., Karafiat, M., Burget, L., Cernocky, J., and Khudanpur, S. 2010. Recurrent neural network based language model. In INTERSPEECH, pp. 1045-1048. Mikolov, T., Kombrink, S., Burget, L., Cernocky, J., and Khudanpur, S. 2011. Extensions of recurrent neural network language model. In ICASSP, pp. 5528-5531. Mikolov, T. 2012. Statistical Language Model based on Neural Networks. Ph.D. thesis, Brno University of Technology. Mikolov, T., Le, Q. V., and Sutskever, H. 2013a. Exploiting similarities among languages for machine translation. CoRR. 2013; abs/1309.4148. Mikolov, T., Yih, W. and Zweig, G. 2013b. Linguistic Regularities in Continuous Space Word Representations. In NAACL-HLT. Mimno, D., Wallach, H., Naradowsky, J., Smith, D. and McCallum, A. 2009. Polylingual topic models. In EMNLP. 708 Niehues J., Herrmann, T., Vogel, S., and Waibel, A. 2011. Wider context by using bilingual language models in machine translation. Och, F. 2003. Minimum error rate training in statistical machine translation. In ACL, pp. 160167. Och, F., and Ney, H. 2004. The alignment template approach to statistical machine translation. Computational Linguistics, 29(1): 19-51. Papineni, K., Roukos, S., Ward, T., and Zhu W-J. 2002. BLEU: a method for automatic evaluation of machine translation. In ACL. Platt, J., Toutanova, K., and Yih, W. 2010. Translingual Document Representations from Discriminative Projections. In EMNLP. Rosti, A-V., Hang, B., Matsoukas, S., and Schwartz, R. S. 2011. Expected BLEU training for graphs: bbn system description for WMT system combination task. In Workshop on Statistical Machine Translation. Schwenk, H., Costa-Jussa, M. R. and Fonollosa, J. A. R. 2007. Smooth bilingual n-gram translation. In EMNLP-CoNLL, pp. 430-438. Schwenk, H. 2012. Continuous space translation models for phrase-based statistical machine translation. In COLING. Schwenk, H., Rousseau, A., and Mohammed A. 2012. Large, pruned or continuous space language models on a GPU for statistical machine translation. In NAACL-HLT Workshop on the future of language modeling for HLT, pp. 1119. Setiawan, H. and Zhou, B., 2013. Discriminative training of 150 million translation parameters and its application to pruning. In NAACL. Socher, R., Huval, B., Manning, C., Ng, A., 2012. Semantic Compositionality through Recursive Matrix-Vector Spaces. In EMNLP. Socher, R., Lin, C., Ng, A. Y., and Manning, C. D. 2011. Parsing natural scenes and natural language with recursive neural networks. In ICML. Son, L. H., Allauzen, A., and Yvon, F. 2012. Continuous space translation models with neural networks. In NAACL-HLT, pp. 29-48. Sundermeyer, M., Oparin, I., Gauvain, J-L. Freiberg, B., Schluter, R. and Ney, H. 2013. Comparison of feed forward and recurrent neural network language models. In ICASSP, pp. 8430–8434. Tur, G, Deng, L., Hakkani-Tur, D., and He, X., 2012. Towards deeper understanding: deep convex networks for semantic utterance classification. In ICASSP. Vinokourov,A., Shawe-Taylor,J. 
and Cristianini,N. 2002. Inferring a semantic representation of text via cross-language correlation analysis. In NIPS. Weston, J., Bengio, S., and Usunier, N. 2011. Large scale image annotation: learning to rank with joint word-image embeddings. In IJCAI. Wuebker, J., Mauser, A., and Ney, H. 2010. Training phrase translation models with leaving-oneout. In ACL, pp. 475-484. Yih, W., Toutanova, K., Platt, J., and Meek, C. 2011. Learning discriminative projections for text similarity measures. In CoNLL. Zhang, Y., Deng, L., He, X., and Acero, A. 2011. A novel decision function and the associated decision-feedback learning for speech translation. In ICASSP. Zhila, A., Yih, W., Meek, C., Zweig, G. and Mikolov, T. 2013. Combining heterogeneous models for measuring relational similarity. In NAACL-HLT. Zou, W. Y., Socher, R., Cer, D., and Manning, C. D. 2013. Bilingual word embeddings for phrase-based machine translation. In EMNLP. 709
Proceedings of the 52nd Annual Meeting of the Association for Computational Linguistics, pages 710–720, Baltimore, Maryland, USA, June 23-25 2014. c⃝2014 Association for Computational Linguistics Adaptive Quality Estimation for Machine Translation Marco Turchi(1) Antonios Anastasopoulos(3) Jos´e G. C. de Souza(1,2) Matteo Negri(1) (1) FBK - Fondazione Bruno Kessler, Via Sommarive 18, 38123 Trento, Italy (2) University of Trento, Italy (3) National Technical University of Athens, Greece {turchi,desouza,negri}@fbk.eu [email protected] Abstract The automatic estimation of machine translation (MT) output quality is a hard task in which the selection of the appropriate algorithm and the most predictive features over reasonably sized training sets plays a crucial role. When moving from controlled lab evaluations to real-life scenarios the task becomes even harder. For current MT quality estimation (QE) systems, additional complexity comes from the difficulty to model user and domain changes. Indeed, the instability of the systems with respect to data coming from different distributions calls for adaptive solutions that react to new operating conditions. To tackle this issue we propose an online framework for adaptive QE that targets reactivity and robustness to user and domain changes. Contrastive experiments in different testing conditions involving user and domain changes demonstrate the effectiveness of our approach. 1 Introduction After two decades of steady progress, research in statistical machine translation (SMT) started to cross its path with translation industry with tangible mutual benefit. On one side, SMT research brings to the industry improved output quality and a number of appealing solutions useful to increase translators’ productivity. On the other side, the market needs suggest concrete problems to solve, providing real-life scenarios to develop and evaluate new ideas with rapid turnaround. The evolution of computer-assisted translation (CAT) environments is an evidence of this trend, shown by the increasing interest towards the integration of suggestions obtained from MT engines with those derived from translation memories (TMs). The possibility to speed up the translation process and reduce its costs by post-editing goodquality MT output raises interesting research challenges. Among others, these include deciding what to present as a suggestion, and how to do it in the most effective way. In recent years, these issues motivated research on automatic QE, which addresses the problem of estimating the quality of a translated sentence given the source and without access to reference translations (Blatz et al., 2003; Specia et al., 2009; Mehdad et al., 2012). Despite the substantial progress done so far in the field and in successful evaluation campaigns (Callison-Burch et al., 2012; Bojar et al., 2013), focusing on concrete market needs makes possible to further define the scope of research on QE. For instance, moving from controlled lab testing scenarios to real working environments poses additional constraints in terms of adaptability of the QE models to the variable conditions of a translation job. Such variability is due to two main reasons: 1. The notion of MT output quality is highly subjective (Koponen, 2012; Turchi et al., 2013; Turchi and Negri, 2014). Since the quality standards of individual users may vary considerably (e.g. 
according to their knowledge of the source and target languages), the estimates of a static QE model trained with data collected from a group of post-editors might not fit with the actual judgements of a new user; 2. Each translation job has its own specificities (domain, complexity of the source text, average target quality). Since data from a new job may differ from those used to train the QE model, its estimates on the new instances might result to be biased or uninformative. The ability of a system to self-adapt to the be710 haviour of specific users and domain changes is a facet of the QE problem that so far has been disregarded. To cope with these issues and deal with the erratic conditions of real-world translation workflows, we propose an adaptive approach to QE that is sensitive and robust to differences between training and test data. Along this direction, our main contribution is a framework in which QE models can be trained and can continuously evolve over time accounting for knowledge acquired from post editors’ work. Our approach is based on the online learning paradigm and exploits a key difference between such framework and the batch learning methods currently used. On one side, the QE models obtained with batch methods are learned exclusively from a predefined set of training examples under the assumption that they have similar characteristics with respect to the test data. This makes them suitable for controlled evaluation scenarios where such condition holds. On the other side, online learning techniques are designed to learn in a stepwise manner (either from scratch, or by refining an existing model) from new, unseen test instances by taking advantage of external feedback. This makes them suitable for real-life scenarios where the new instances to be labelled can considerably differ from the data used to train the QE model. To develop our approach, different online algorithms have been embedded in the backbone of a QE system. This required the adaptation of its standard batch learning workflow to: 1. Perform online feature extraction from a source–target pair (i.e. one instance at a time instead of processing an entire training set); 2. Emit a prediction for the input instance; 3. Gather user feedback for the instance (i.e. calculating a “true label” based on the amount of user post-editions); 4. Send the true label back to the model to update its predictions for future instances. Focusing on the adaptability to user and domain changes, we report the results of comparative experiments with two online algorithms and the standard batch approach. The evaluation is carried out by measuring the global error of each algorithm on test sets featuring different degrees of similarity with the data used for training. Our results show that the sensitivity of online QE models to different distributions of training and test instances makes them more suitable than batch methods for integration in a CAT framework. Our adaptive QE infrastructure has been released as open source. Its C++ implementation is available at http://hlt.fbk.eu/technologies/ aqet. 2 Related work QE is generally cast as a supervised machine learning task, where a model trained from a collection of (source, target, label) instances is used to predict labels1 for new, unseen test items (Specia et al., 2010). In the last couple of years, research in the field received a strong boost by the shared tasks organized within the WMT workshop on SMT,2 which is also the framework of our first experiment in §5. 
Current approaches to the tasks proposed at WMT have mainly focused on three main directions, namely: i) feature engineering, as in (Hardmeier et al., 2012; de Souza et al., 2013a; de Souza et al., 2013b; Rubino et al., 2013b), ii) model learning with a variety of classification and regression algorithms, as in (Bicici, 2013; Beck et al., 2013; Soricut et al., 2012), and iii) feature selection as a way to overcome sparsity and overfitting issues, as in (Soricut et al., 2012). Being optimized to perform well on specific WMT sub-tasks and datasets, current systems reflect variations along these directions but leave important aspects of the QE problem still partially investigated or totally unexplored.3 Among these, the necessity to model the diversity of human quality judgements and correction strategies (Koponen, 2012; Koponen et al., 2012) calls for solutions that: i) account for annotator-specific behaviour, thus being capable of learning from inherently noisy datasets produced by multiple annotators, and ii) self-adapt to changes in data distribution, learning from user feedback on new, unseen test items. 1Possible label types include post-editing effort scores (e.g. 1-5 Likert scores indicating the estimated percentage of MT output that has to be corrected), HTER values (Snover et al., 2006), and post-editing time (e.g. seconds per word). 2http://www.statmt.org/wmt13/ 3For a comprehensive overview of the QE approaches proposed so far we refer the reader to the WMT12 and WMT13 QE shared task reports (Callison-Burch et al., 2012; Bojar et al., 2013). 711 These interconnected issues are particularly relevant in the CAT framework, where translation jobs from different domains are routed to professional translators with different idiolect, background and quality standards. The first aspect, modelling annotators’ individual behaviour and interdependences, has been addressed by Cohn and Specia (2013), who explored multi-task Gaussian Processes as a way to jointly learn from the output of multiple annotations. This technique is suitable to cope with the unbalanced distribution of training instances and yields better models when heterogeneous training datasets are available. The second problem, the adaptability of QE models, has not been explored yet. A common trait of all current approaches, in fact, is the reliance on batch learning techniques, which assume a “static” nature of the world where new unseen instances that will be encountered will be similar to the training data.4 However, similarly to translation memories that incrementally store translated segments and evolve over time incorporating users style and terminology, all components of a CAT tool (the MT engine and the mechanisms to assign quality scores to the suggested translations) should take advantage of translators feedback. On the MT system side, research on adaptive approaches tailored to interactive SMT and CAT scenarios explored the online learning protocol (Littlestone, 1988) to improve various aspects of the decoding process (Cesa-Bianchi et al., 2008; Ortiz-Mart´ınez et al., 2010; Mart´ınez-G´omez et al., 2011; Mart´ınez-G´omez et al., 2012; Mathur et al., 2013; Bertoldi et al., 2013). As regards QE models, our work represents the first investigation on incremental adaptation by exploiting users feedback to provide targeted (system, user, or project specific) quality judgements. 
3 Online QE for CAT environments When operating with advanced CAT tools, translators are presented with suggestions (either matching fragments from a translation memory or automatic translations produced by an MT system) for each sentence of a source document. Before being approved and published, translation suggestions may require different amounts of post-editing operations depending on their quality. 4This assumption holds in the WMT evaluation scenario, but it is not necessarily valid in real operating conditions. Each post-edition brings a wealth of dynamic knowledge about the whole translation process and the involved actors. For instance, adaptive QE components could exploit information about the distance between automatically assigned scores and the quality standards of individual translators (inferred from the amount of their corrections) to “profile” their behaviour. The online learning paradigm fits well with this research objective. In the online framework, differently from the batch mode, the learning algorithm sequentially processes an unknown sequence of instances X = x1, x2, ..., xn, returning a prediction p(xi) as output at each step. Differences between p(xi) and the true label ˆp(xi) obtained as feedback are used by the learner to refine the next prediction p(xi+1). In our experiments on adaptive QE we aim to predict the quality of the suggested translations in terms of HTER, which measures the minimum edit distance between the MT output and its manually post-edited version in the [0,1] interval.5 In this scenario: • The set of instances X is represented by (source, target) pairs; • The prediction p(xi) is the automatically estimated HTER score; • The true label ˆp(xi) is the actual HTER score calculated over the target and its post-edition. At each step of the process, the goal of the learner is to exploit user post-editions to reduce the difference between the predicted HTER values and the true labels for the following (source, target) pairs. As depicted in Figure 1, this is done as follows: 1. At step i, an unlabelled (source, target) pair xi is sent to a feature extraction component. To this aim, we used an adapted version (Shah et al., 2014) of the open-source QuEst6 tool (Specia et al., 2013). The tool, which implements a large number of features proposed by participants in the WMT QE shared tasks, has been modified to process one sentence at a time as requested for integration in a CAT environment; 5Edit distance is calculated as the number of edits (word insertions, deletions, substitutions, and shifts) divided by the number of words in the reference. Lower HTER values indicate better translations. 6http://www.quest.dcs.shef.ac.uk/ 712 Figure 1: Online QE workflow. <src>, <trg> and <pe> respectively stand for the source sentence, the target translation and the post-edited target. 2. The extracted features are sent to an online regressor, which returns a QE prediction score p(xi) in the [0,1] interval (set to 0 at the first round of the iteration); 3. Based on the post-edition done by the user, the true HTER label ˆp(xi) is calculated by means of the TERCpp7 open source tool; 4. The true label is sent back to the online algorithm for a stepwise model improvement. The updated model is then ready to process the following instance xi+1. 
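A minimal sketch of this four-step loop is given below. It treats feature extraction and HTER computation as black boxes (in the actual system these are handled by the adapted QuEst tool and by TERCpp) and uses scikit-learn's PassiveAggressiveRegressor as the online regressor purely for illustration; the released implementation is in C++, and the hyper-parameter values shown here are assumptions.

```python
import numpy as np
from sklearn.linear_model import PassiveAggressiveRegressor

def online_qe_loop(segments, extract_features, compute_hter):
    """segments: (source, target, postedit) triples processed in order;
    extract_features: stand-in for the sentence-level QuEst extractor;
    compute_hter: stand-in for TERCpp (HTER of target vs. its post-edition)."""
    model = PassiveAggressiveRegressor(C=1.0, epsilon=0.01)  # illustrative values
    fitted = False
    predictions = []
    for src, tgt, pe in segments:
        x = np.asarray(extract_features(src, tgt), dtype=float).reshape(1, -1)  # step 1
        pred = float(model.predict(x)[0]) if fitted else 0.0                    # step 2
        predictions.append(min(max(pred, 0.0), 1.0))   # HTER estimates live in [0, 1]
        true_hter = compute_hter(tgt, pe)                                       # step 3
        model.partial_fit(x, [true_hter])                                       # step 4
        fitted = True
    return predictions
```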
This new paradigm for QE makes it possible to: i) let the QE system learn from one point at a time without complete re-training from scratch, ii) customize the predictions of an existing QE model with respect to a specific situation (posteditor or domain), or even iii) build a QE model from scratch when training data is not available. For the sake of clarity it is worth observing that, at least in principle, a model built in a batch fashion could also be adapted to new test data. For instance, this could be done by running periodic retraining routines once a certain amount of new labelled instances has been collected (de facto mimicking an online process). Such periodic updates, however, would not represent a viable solution in the CAT framework where post-editors’ work cannot be slowed by time-consuming procedures to re-train core system components from scratch. 7goo.gl/nkh2rE 4 Evaluation framework To measure the adaptation capability of different QE models, we experiment with a range of conditions defined by variable degrees of similarity between training and test data. The degree of similarity depends on several factors: the MT engine used, the domain of the documents to be translated, and the post-editing style of individual translators. In our experiments, the degree of similarity is measured in terms of ∆HTER, which is computed as the absolute value of the difference between the average HTER of the training and test sets. Large values indicate a low similarity between training and test data and a more challenging scenario for the learning algorithms. 4.1 Experimental setup In the range of possible evaluation scenarios, our experiments cover: • One artificial setting (§5) obtained from the WMT12 QE shared task data, in which training/test instances are arranged to reflect homogeneous distributions of the HTER labels. • Two settings obtained from data collected with a CAT tool in real working conditions, in which different facets of the adaptive QE problem interact with each other. In the first (user change, §6.1), training and test data from the same domain are obtained from different users. In the sec713 ond (user+domain change, §6.2), training and test data are obtained from different users and domains. For each setting, we compare an adaptive and an empty model against a system trained in batch mode. The adaptive model is built on top of an existing model created from the training data and exploits the new test instances to refine its predictions in a stepwise manner. The empty model only learns from the test set, simulating the worst condition where training data is not available. The batch model is built by learning only from the training data and is evaluated on the test set without exploiting information from the test instances. Each model is also compared against a common baseline for regression tasks, which is particularly relevant in settings featuring different data distributions between training and test sets. This baseline (µ henceforth) is calculated by labelling each instance of the test set with the mean HTER score of the training set. Previous works (Rubino et al., 2013a) demonstrated that its results can be particularly hard to beat. 4.2 Performance indicator and feature set To measure the adaptability of our model to a given test set we compute the Mean Absolute Error (MAE), a metric for regression problems also used in the WMT QE shared tasks. 
The MAE is the average of the absolute errors ei = |fi −yi|, where fi is the prediction of the model and yi is the true value for the ith instance. As our focus is on the algorithmic aspect, in all experiments we use the same feature set, which consists of the seventeen features proposed in (Specia et al., 2009). This feature set, fully described in (Callison-Burch et al., 2012), takes into account the complexity of the source sentence (e.g. number of tokens, number of translations per source word) and the fluency of the target translation (e.g. language model probabilities). The results of previous WMT QE shared tasks have shown that these baseline features are particularly competitive in the regression task (with only few systems able to beat them at WMT12). 4.3 Online algorithms In our experiments we evaluate two online algorithms, OnlineSVR (Parrella, 2007)8 and Passive8http://www2.imperial.ac.uk/˜gmontana/ onlinesvr.htm Aggressive Perceptron (Crammer et al., 2006),9 by comparing their performance with a batch learning strategy based on the Scikit-learn implementation of Support Vector Regression (SVR).10 The choice of the OnlineSVR and PassiveAggressive (OSVR and PA henceforth) is motivated by different considerations. From a performance point of view, as an adaptation of ϵ-SVR which proved to be one of the top performing algorithms in the regression QE tasks at WMT, OSVR seems to be the best candidate. For this reason, we use the online adaptation of ϵ-SVR proposed by (Ma et al., 2003). The goal of OnlineSVR is to find a way to add each new sample to one of three sets (support, empty, error) maintaining the consistency of a set of conditions known as KarushKuhn Tucker (KKT) conditions. For each new point, OSVR starts a cycle where the samples are moved across the three sets until the KKT conditions are verified and the new point is assigned to one of the sets. If the point is identified as a support vector, the parameters of the model are updated. This allows OSVR to benefit from the prediction capability of ϵ-SVR in an online setting. From a practical point of view, providing the best trade off between accuracy and computational time (He and Wang, 2012), PA represents a good solution to meet the demand of efficiency posed by the CAT framework. For each instance i, after emitting a prediction and receiving the true label, PA computes the ϵ-insensitive hinge loss function. If its value is larger than the tolerance parameter (ϵ), the weights of the model are updated as much as the aggressiveness parameter C allows. In contrast with OSVR, which keeps track of the most important points seen in the past (support vectors), the update of the weights is done without considering the previously processed i-1 instances. Although it makes PA faster than OSVR, this is a riskier strategy because it may lead the algorithm to change the model to adapt to outlier points. 5 Experiments with WMT12 data The motivations for experiments with training and test data featuring homogeneous label distributions are twofold. First, since in this artificial scenario adaptation capabilities are not required for the QE component, batch methods operate in the ideal conditions (as training and test are indepen9https://code.google.com/p/sofia-ml/ 10http://scikit-learn.org/ 714 WMT Dataset Train Test ∆ µ Batch Adaptive Empty HTER MAE MAE MAE Alg. MAE Alg. 
200 754 0.39 13.7 13.2 13.2∗ OSVR 13.5∗ OSVR 600 754 1.32 13.8 12.7 12.9∗ OSVR 13.5∗ OSVR 1500 754 1.22 13.8 12.7 12.8∗ OSVR 13.5∗ OSVR Table 1: MAE of the best performing batch, adaptive and empty models on WMT12 data. Training sets of different size and the test set have been arranged to reflect homogeneous label distributions. dent and identically distributed). This makes possible to obtain from batch models the best possible performance to compare with. Second, this scenario provides the fairest conditions for such comparison because, in principle, online algorithms are not favoured by the possibility to learn from the diversity of the test instances. For our controlled experiments we use the WMT12 English-Spanish corpus, which consists of 2,254 source-target pairs (1,832 for training, 422 for test). The HTER labels for our regression task are calculated from the post-edited version and the target sentences provided in the dataset. To avoid biases in the label distribution, the WMT12 training and test data have been merged, shuffled, and eventually separated to generate three training sets of different size (200, 600, and 1500 instances), and one test set with 754 instances. For each algorithm, the training sets are used for learning the QE models, optimizing parameters (i.e. C, ϵ, the kernel and its parameters for SVR and OSVR; tolerance and aggressiveness for PA) through grid search in 10-fold crossvalidation. Evaluation is carried out by measuring the performance of the batch (learning only from the training set), the adaptive (learning from the training set and adapting to the test set), and the empty (learning from scratch from the test set) models in terms of global MAE scores on the test set. Table 1 reports the results achieved by the best performing algorithm for each type of model (batch, adaptive, empty). As can be seen, close MAE values show a similar behaviour for the three types of models.11 With the same amount of training data, the performance of the batch and the adaptive models (in this case always obtained with OSVR) is almost identical. This demonstrates that, as expected, the online algorithms do not take 11Results marked with the “∗” symbol are NOT statistically significant compared to the corresponding batch model. The others are always statistically significant at p≤0.005, calculated with approximate randomization (Yeh, 2000). advantage of test data with a label distribution similar to the training set. All the models outperform the baseline, even if the minimal differences confirm the competitiveness of such a simple approach. Overall, these results bring some interesting indications about the behaviour of the different online algorithms. First, the good results achieved by the empty models (less than one MAE point separates them from the best ones built on the largest training set) suggest their high potential when training data are not available. Second, our results show that OSVR is always the best performing algorithm for the adaptive and empty models. This suggests a lower capability of PA to learn from instances similar to the training data. 6 Experiments with CAT data To experiment with adaptive QE in more realistic conditions we used a CAT tool12 to collect two datasets of (source, target, post edited target) English-Italian tuples.The source sentences in the datasets come from two documents from different domains, respectively legal (L) and information technology (IT). 
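For reference, the ε-insensitive Passive-Aggressive update described in Section 4.3 and discussed above can be written in a few lines. The fragment below follows the PA-I variant of Crammer et al. (2006) for regression; the experiments use the sofia-ml implementation, so this is only an illustration of the update rule, with the weight vector, feature vector and hyper-parameters taken as plain NumPy objects.

```python
import numpy as np

def pa_regression_update(w, x, y_true, epsilon=0.01, C=1.0):
    """One Passive-Aggressive (PA-I style) update for regression: if the
    epsilon-insensitive loss is positive, move w just enough to bring the
    prediction within epsilon of the true HTER, capped by the aggressiveness C."""
    y_pred = float(w @ x)
    loss = max(0.0, abs(y_pred - y_true) - epsilon)
    if loss > 0.0:
        tau = min(C, loss / float(x @ x))          # step size, bounded by C
        w = w + np.sign(y_true - y_pred) * tau * x
    return w, y_pred
```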
The L document, which was extracted from a European Parliament resolution published on the EUR-Lex platform,13 contains 164 sentences. The IT document, which was taken from a software user manual, contains 280 sentences. The source sentences were translated with two SMT systems built by training the Moses toolkit (Koehn et al., 2007) on parallel data from the two domains (about 2M sentences for IT and 1.5M for L). Post-editions were collected from eight professional translators (four for each document) operating with the CAT tool in real working conditions. According to the way they are created, the two datasets allow us to evaluate the adaptability of different QE models with respect to user changes 12MateCat – http://www.matecat.com/ 13http://eur-lex.europa.eu/ 715 user change Legal Domain Train Test ∆ µ Batch Adaptive Empty HTER MAE MAE MAE Alg. MAE Alg. rad cons 20.5 21.4 20.6 14.5 PA 12.5 OSVR cons rad 19.4 21.2 21.3 16.1 PA 11.3 OSVR sim1 sim2 3.3 14.7 12.2 12.6∗ OSVR 12.9∗ OSVR sim2 sim1 3.2 13.4 13.3 13.9∗ OSVR 15.2∗ OSVR IT Domain Train Test ∆ µ Batch Adaptive Empty HTER MAE MAE MAE Alg MAE Alg cons rad 12.8 19.2 19.8 17.5∗ OSVR 16.6 OSVR rad cons 9.6 16.8 16.6 15.6 PA 15.5 OSVR sim2 sim1 3.3 14.7 14.4 15∗ OSVR 15.5∗ OSVR sim1 sim2 1.1 15 13.9 14.4∗ OSVR 16.1∗ OSVR Table 2: MAE of the best performing batch, adaptive and empty models on CAT data collected from different users in the same domain. within the same domain (§6.1), as well as user and domain changes at the same time (§6.2). For each document D (L or IT), these two scenarios are obtained by dividing D into two parts of equal size (80 instances for L and 140 for IT). The result is one training set and one test set for each post-editor within the same domain. For the user change experiments, training and test sets are selected from different post-editors within the same domain. For the user+domain change experiments, training and test sets are selected from different post-editors in different domains. On each combination of training and test sets, the batch, adaptive, and empty models are trained and evaluated in terms of global MAE scores on the test set. 6.1 Dealing with user changes Among the possible combinations of training and test data from different post-editors in the same domain, Table 2 refers to two opposite scenarios. For each domain, these respectively involve the most dissimilar and the most similar post-editors according to the ∆HTER. Also in this case, for each model (batch, adaptive and empty) we only report the MAE of the best performing algorithm. The first scenario defines a challenging situation where two post-editors (rad and cons) are characterized by opposite behaviour. As evidenced by the high ∆HTER values, one of them (rad) is the most “radical” post-editor (performing more corrections) while the other (cons) is the most “conservative” one. As shown in Table 2, global MAE scores for the online algorithms (both adaptive and empty) indicate their good adaptation capabilities. This is evident from the significant improvements both over the baseline (µ) and the batch models. Interestingly, the best results are always achieved by the empty models (with MAE reductions up to 10 points when tested on rad in the L domain, and 3.2 points when tested on rad in the IT domain). 
These results (MAE reductions are always statistically significant) suggest that, when dealing with datasets with very different label distributions, the evident limitations of batch methods are more easily overcome by learning from scratch from the feedback of a new post-editor. This also holds when the amount of test points to learn from is limited, as in the L domain where the test set contains only 80 instances. From the applicationoriented perspective that motivates our work, considering the high costs of acquiring large and representative QE training data, this is an important finding. The second scenario defines a less challenging situation where the two post-editors (sim1 and sim2) are characterized by the most similar behaviour (small ∆HTER). This scenario is closer to the situation described in Section §5. Also in this case MAE results for the adaptive and empty models are slightly worse, but not significantly, than those of the batch models and the baseline. However, considering the very small amount of “uninformative” instances to learn from (especially for the empty models), these lower results are not surprising. A closer look at the behaviour of the online algorithms in the two domains leads to other observations. First, OSVR always outperforms PA for the empty models and when post-editors have sim716 user+domain change Train Test ∆ µ Batch Adaptive Empty HTER MAE MAE MAE Alg MAE Alg L cons IT rad 24.5 26.4 27 18.2 OSVR 16.6 OSVR IT rad L cons 24.0 24.9 25.4 19.7 OSVR 12.5 OSVR L rad L cons 20.5 21.4 20.6 14.5 PA 12.5 OSVR L cons L rad 19.4 21.2 21.3 16.1 PA 11.3 OSVR IT cons L cons 13.5 17.3 17.5 15.7 OSVR 12.5 OSVR IT cons IT rad 12.8 19.2 19.8 17.5 OSVR 16.6 OSVR L cons IT cons 12.7 17.6 17.6 15.1 OSVR 15.5 OSVR IT rad IT cons 9.6 16.8 16.6 15.6 PA 15.5 OSVR IT cons L rad 8.3 12.3 13 10.7 OSVR 11.3 OSVR L rad IT rad 6.8 17 16.9 16.2 OSVR 16.6 OSVR L rad IT cons 5.0 15.4 16.2 14.7 OSVR 15.5 OSVR IT rad L rad 2.2 10.6 10.8 10.5 OSVR 11.3 OSVR Table 3: MAE of the best performing batch, adaptive and empty models on CAT data collected from different users and domains. ilar behaviour, which are situations where the algorithm does not have to quickly adapt or react to sudden changes. Second, PA seems to perform better for the adaptive models when the post-editors have significantly different behaviour and a quick adaptation to the incoming points is required. This can be motivated by the fact that PA relies on a simpler and less robust learning strategy that does not keep track of all the information coming from the previously processed instances, and can easily modify its weights taking into consideration the last seen point (see Section §3). For OSVR the addition of new points to the support set may have a limited effect on the whole model, in particular if the number of points in the set is large. This also results in a different processing time for the two algorithms.14 For instance, in the empty configurations on IT data, OSVR devotes 6.0 ms per instance to update the model, while PA devotes 4.8 ms, which comes at the cost of lower performance. 6.2 Dealing with user and domain changes In the last round of experiments we evaluate the reactivity of different online models to simultaneous user and domain changes. To this aim, our QE models are created using a training set coming from one domain (L or IT), and then used to predict the HTER labels for the test instances coming from the other domain (e.g. training on L, testing on IT). 
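As an illustration of the update strategy discussed above, here is a minimal sketch of the Passive-Aggressive update for regression with an epsilon-insensitive loss (the PA-I variant of Crammer et al., 2006), written from the description in the text rather than taken from the released toolkit; function and variable names are illustrative.

```python
import numpy as np

def pa_regression_update(w, x, y, C=1.0, eps=0.1):
    """One PA-I update for regression: if the epsilon-insensitive loss on the
    current point is non-zero, move the weights just enough to correct it,
    capped by the aggressiveness parameter C. Only the last seen instance is
    used -- no support set is kept, unlike OSVR."""
    pred = w @ x
    loss = max(0.0, abs(y - pred) - eps)          # epsilon-insensitive hinge loss
    if loss > 0.0:
        tau = min(C, loss / (x @ x + 1e-12))      # capped step size
        w = w + np.sign(y - pred) * tau * x
    return w

def run_pa(stream, n_features, C=1.0, eps=0.1):
    """Online loop: emit a prediction, receive the true HTER, update."""
    w = np.zeros(n_features)
    abs_errors = []
    for x, y in stream:                            # (feature vector, true HTER)
        abs_errors.append(abs(w @ x - y))
        w = pa_regression_update(w, x, y, C, eps)
    return w, float(np.mean(abs_errors))           # final weights and MAE
```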
Among the possible combinations of training 14Their complexity depends on the number of features (f) and the number of previously seen instances (n). While for PA it is linear in f, i.e. O(f), for OSVR it is quadratic in n, i.e. O(n2*f). and test data, Table 3 refers to scenarios involving the most conservative and radical post-editors in each domain (previously identified with cons and rad)15. In the table, results are ordered according to the ∆HTER computed between the selected post-editor in the training domain (e.g. L cons) and the selected post-editor in the test domain (e.g. IT rad). For the sake of comparison, we also report (grey rows) the results of the experiments within the same domain presented in §6.1. For each type of model (batch, adaptive and empty) we only show the MAE obtained by the best performing algorithm. Intuitively, dealing with simultaneous user and domain changes represents a more challenging problem compared to the previous setting where only post-editors changes were considered. Such intuition is confirmed by the results of the adaptive models that outperform both the baseline (µ) and the batch models even for low ∆HTER values. Although in these cases the distance between training and test data is comparable to the experiments with similar post-editors working in the same domain (sim1 and sim2), here the predictive power of the batch models seems in fact to be lower. The same holds also for the empty models except in two cases where the ∆HTER is the smallest (2.2 and 5.0). This is a strong evidence of the fact that, in case of domain changes, online models can still learn from new test instances even if they have a label distribution similar to the training set. When the distance between training and test increases, our results confirm our previous findings 15For brevity, we omit the results for the other post-editors which, however, show similar trends with respect to the previous experiments. 717 about the potential of the empty models. The observed MAE reductions range in fact from 10.4 to 12.9 points for the two combinations with the highest ∆HTER. From the algorithmic point of view, our results indicate that OSVR achieves the best performance for all the combinations involving user and domain changes. This contrasts with the results of most of the combinations involving only user changes with post-editors characterized by opposite behaviour (grey rows in Table 3). However, it has to be remarked that in the case of heterogeneous datasets the difference between the two algorithms is always very high. In our experiments, when PA outperforms OSVR, its MAE results are significantly lower and vice-versa (respectively up to 1.5 and 1.7 MAE points). This suggests that, although PA is potentially capable of achieving higher results and better adapt to the new test points, its instability makes it less reliable for practical use. As a final analysis of our results, we investigated how the performance of the different types of models (batch, adaptive, empty) relates to the distance between training and test sets. To this aim, we computed the Pearson correlation between the ∆HTER (column 3 in Table 3) and the MAE of each model (columns 5, 6 and 8), which respectively resulted in 0.9 for the batch, 0.63 for the adaptive and -0.07 for the empty model. These values confirm that batch models are heavily affected by the dissimilarity between training and test data: large differences in the label distribution imply higher MAE results and vice-versa. 
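The correlation analysis just described can be reproduced from the published numbers; in the sketch below the ΔHTER and MAE columns are transcribed from Table 3, and the computed coefficients should come out close to the reported 0.9 (batch), 0.63 (adaptive) and -0.07 (empty).

```python
from scipy.stats import pearsonr

# Column 3 of Table 3 (Delta-HTER) and the MAE of each model type, row by row.
delta_hter = [24.5, 24.0, 20.5, 19.4, 13.5, 12.8, 12.7, 9.6, 8.3, 6.8, 5.0, 2.2]
mae = {
    "batch":    [27.0, 25.4, 20.6, 21.3, 17.5, 19.8, 17.6, 16.6, 13.0, 16.9, 16.2, 10.8],
    "adaptive": [18.2, 19.7, 14.5, 16.1, 15.7, 17.5, 15.1, 15.6, 10.7, 16.2, 14.7, 10.5],
    "empty":    [16.6, 12.5, 12.5, 11.3, 12.5, 16.6, 15.5, 15.5, 11.3, 16.6, 15.5, 11.3],
}
for name, scores in mae.items():
    r, _ = pearsonr(delta_hter, scores)
    print(f"{name}: r = {r:.2f}")
```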
This is in line with our previous findings about batch models that, learning only from the training set, cannot leverage possible dissimilarities of the test set. The lower correlation observed for the adaptive models also confirms our intuitions: adapting to the new test points, these models are in fact more robust to differences with the training data. As expected, the results of the empty models are completely uncorrelated with the ∆HTER since they only use the test set. This analysis confirms that, even when dealing with different domains, the similarity between the training and test data is one of the main factors that should drive the choice of the QE model. When this distance is minimal, batch models can be a reasonable option, but when the gap between training and test data increases, adaptive or empty models are a preferable choice to achieve good results. 7 Conclusion In the CAT scenario, each translation job can be seen as a complex situation where the user (his personal style and background), the source document (the language and the domain) and the underlying technology (the translation memory and the MT engine that generate translation suggestions) contribute to make the task unique. So far, the adaptability to such specificities (a major challenge for CAT technology) has been mainly supported by the evolution of translation memories, which incrementally store translated segments incorporating the user style. The wide adoption of translation memories demonstrates the importance of capitalizing on such information to increase translators productivity. While this lesson recently motivated research on adaptive MT decoders that learn from user corrections, nothing has been done to develop adaptive QE components. In the first attempt to address this problem, we proposed the application of the online learning protocol to leverage users feedback and to tailor QE predictions to their quality standards. Besides highlighting the limitations of current batch methods to adapt to user and domain changes, we performed an applicationoriented analysis of different online algorithms focusing on specific aspects relevant to the CAT scenario. Our results show that the wealth of dynamic knowledge brought by user corrections can be exploited to refine in a stepwise fashion the quality judgements in different testing conditions (user changes as well as simultaneous user and domain changes). As an additional contribution, to spark further research on this facet of the QE problem, our adaptive QE infrastructure (integrating all the components and the algorithms described in this paper) has been released as open source. Its C++ implementation is available at http://hlt.fbk.eu/ technologies/aqet. Acknowledgements This work has been partially supported by the ECfunded project MateCat (ICT-2011.4.2-287688). References Daniel Beck, Kashif Shah, Trevor Cohn, and Lucia Specia. 2013. SHEF-Lite: When less is more for translation quality estimation. In Proceedings of the 718 8th Workshop on Statistical Machine Translation, Sofia, Bulgaria, August. Nicola Bertoldi, Mauro Cettolo, and Federico Marcello. 2013. Cache-based Online Adaptation for Machine Translation Enhanced Computer Assisted Translation. In Proceedings of the XIV Machine Translation Summit, pages 1147–1162, Nice, France. Ergun Bicici. 2013. Feature decay algorithms for fast deployment of accurate statistical machine translation systems. In Proceedings of the 8th Workshop on Statistical Machine Translation, Sofia, Bulgaria, August. 
John Blatz, Erin Fitzgerald, George Foster, Simona Gandrabur, Cyril Goutte, Alex Kulesza, Alberto Sanchis, and Nicola Ueffing. 2003. Confidence Estimation for Machine Translation. Summer workshop final report, JHU/CLSP. Ondrej Bojar, Christian Buck, Chris Callison-Burch, Christian Federmann, Barry Haddow, Philipp Koehn, Christof Monz, Matt Post, Radu Soricut, and Lucia Specia. 2013. Findings of the 2013 Workshop on Statistical Machine Translation. In Proceedings of the 8th Workshop on Statistical Machine Translation, WMT-2013, pages 1–44, Sofia, Bulgaria. Chris Callison-Burch, Philipp Koehn, Christof Monz, Matt Post, Radu Soricut, and Lucia Specia. 2012. Findings of the 2012 Workshop on Statistical Machine Translation. In Proceedings of the 7th Workshop on Statistical Machine Translation (WMT’12), pages 10–51, Montr´eal, Canada. Nicol`o Cesa-Bianchi, Gabriel Reverberi, and Sandor Szedmak. 2008. Online Learning Algorithms for Computer-Assisted Translation. Deliverable D4.2, SMART: Statistical Multilingual Analysis for Retrieval and Translation. Trevor Cohn and Lucia Specia. 2013. Modelling Annotator Bias with Multi-task Gaussian Processes: An Application to Machine Translation Quality Estimation. In Proceedings of the 51st Annual Meeting of the Association for Computational Linguistics, ACL-2013, pages 32–42, Sofia, Bulgaria. Koby Crammer, Ofer Dekel, Joseph Keshet, Shai Shalev-Shwartz, and Yoram Singer. 2006. Online Passive-Aggressive Algorithms. J. Mach. Learn. Res., 7:551–585, December. Jos´e G.C. de Souza, Christian Buck, Marco Turchi, and Matteo Negri. 2013a. FBK-UEdin participation to the WMT13 quality estimation shared task. In Proceedings of the 8th Workshop on Statistical Machine Translation, Sofia, Bulgaria, August. Jos´e G.C. de Souza, Miquel Espl`a-Gomis, Marco Turchi, and Matteo Negri. 2013b. Exploiting Qualitative Information from Automatic Word Alignment for Cross-lingual NLP Tasks. In Proceedings of the 51st Annual Meeting of the Association for Computational Linguistics - Short Papers, pages 771–776, Sofia, Bulgaria. Christian Hardmeier, Joakim Nivre, and J¨org Tiedemann. 2012. Tree Kernels for Machine Translation Quality Estimation. In Proceedings of the Seventh Workshop on Statistical Machine Translation (WMT’12), pages 109–113, Montr´eal, Canada. Zhengyan He and Houfeng Wang. 2012. A Comparison and Improvement of Online Learning Algorithms for Sequence Labeling. In Proceedings of the 24th International Conference on Computational Linguistics (COLING 2012), pages 1147– 1162, Mumbai, India. Philipp Koehn, Hieu Hoang, Alexandra Birch, Chris Callison-Burch, Marcello Federico, Nicola Bertoldi, Brooke Cowan, Wade Shen, Christine Moran, Richard Zens, Chris Dyer, Ondˇrej Bojar, Alexandra Constantin, and Evan Herbst. 2007. Moses: open source toolkit for statistical machine translation. In Proceedings of the 45th Annual Meeting of the ACL on Interactive Poster and Demonstration Sessions, ACL ’07, pages 177–180. Maarit Koponen, Wilker Aziz, Luciana Ramos, and Lucia Specia. 2012. Post-editing Time as a Measure of Cognitive Effort. In Proceedings of the AMTA 2012 Workshop on Post-editing Technology and Practice (WPTP 2012), San Diego, California. Maarit Koponen. 2012. Comparing Human Perceptions of Post-editing Effort with Post-editing Operations. In Proceedings of the Seventh Workshop on Statistical Machine Translation, pages 181–190, Montr´eal, Canada. Nick Littlestone. 1988. Learning Quickly when Irrelevant Attributes Abound: A New Linear-Threshold Algorithm. 
In Machine Learning, pages 285–318. Junshui Ma, James Theiler, and Simon Perkins. 2003. Accurate Online Support Vector Regression. Neural Computation, 15:2683–2703. Pascual Mart´ınez-G´omez, Germ´an Sanchis-Trilles, and Francisco Casacuberta. 2011. Online Learning via Dynamic Reranking for Computer Assisted Translation. In Proceedings of the 12th international conference on Computational linguistics and intelligent text processing - Volume Part II, CICLing’11. Pascual Mart´ınez-G´omez, Germ´an Sanchis-Trilles, and Francisco Casacuberta. 2012. Online adaptation strategies for statistical machine translation in postediting scenarios. Pattern Recognition, 45(9):3193– 3203, September. Prashant Mathur, Mauro Cettolo, and Marcello Federico. 2013. Online Learning Approaches in Computer Assisted Translation. In Proceedings of the 8th Workshop on Statistical Machine Translation, Sofia, Bulgaria. 719 Yashar Mehdad, Matteo Negri, and Marcello Federico. 2012. Match without a Referee: Evaluating MT Adequacy without Reference Translations. In Proceedings of the 7th Workshop on Statistical Machine Translation, pages 171–180, Montr´eal, Canada. Daniel Ortiz-Mart´ınez, Ismael Garc´ıa-Varea, and Francisco Casacuberta. 2010. Online learning for interactive statistical machine translation. In Human Language Technologies: The 2010 Annual Conference of the North American Chapter of the Association for Computational Linguistics, HLT ’10, pages 546–554, Stroudsburg, PA, USA. Francesco Parrella. 2007. Online support vector regression. Master’s Thesis, Department of Information Science, University of Genoa, Italy. Raphael Rubino, Jos´e G.C. de Souza, Jennifer Foster, and Lucia Specia. 2013a. Topic Models for Translation Quality Estimation for Gisting Purposes. In Proceedings of the Machine Translation Summit XIV, Nice, France. Raphael Rubino, Antonio Toral, S Cort´es Va´ıllo, Jun Xie, Xiaofeng Wu, Stephen Doherty, and Qun Liu. 2013b. The CNGL-DCU-Prompsit translation systems for WMT13. In Proceedings of the 8th Workshop on Statistical Machine Translation, pages 211– 216, Sofia, Bulgaria. Kashif Shah, Marco Turchi, and Lucia Specia. 2014. An Efficient and User-friendly Tool for Machine Translation Quality Estimation. In Proceedings of the 9th International Conference on Language Resources and Evaluation, Reykjavik, Iceland. Matthew Snover, Bonnie Dorr, Richard Schwartz, Linnea Micciulla, and John Makhoul. 2006. A study of translation edit rate with targeted human annotation. In Proceedings of Association for Machine Translation in the Americas, pages 223–231, Cambridge, Massachusetts, USA. Radu Soricut, Nguyen Bach, and Ziyuan Wang. 2012. The SDL Language Weaver Systems in the WMT12 Quality Estimation Shared Task. In Proceedings of the 7th Workshop on Statistical Machine Translation (WMT’12), pages 145–151, Montr´eal, Canada. Lucia Specia, Nicola Cancedda, Marc Dymetman, Marco Turchi, and Nello Cristianini. 2009. Estimating the sentence-level quality of machine translation systems. In Proceedings of the 13th Annual Conference of the European Association for Machine Translation (EAMT’09), pages 28–35, Barcelona, Spain. Lucia Specia, Dhwaj Raj, and Marco Turchi. 2010. Machine Translation Evaluation versus Quality Estimation. Machine translation, 24(1):39–50. Lucia Specia, Kashif Shah, Jos´e G.C. de Souza, and Trevor Cohn. 2013. QuEst - A Translation Quality Estimation Framework. 
In Proceedings of the 51st Annual Meeting of the Association for Computational Linguistics: System Demonstrations, ACL2013, pages 79–84, Sofia, Bulgaria. Marco Turchi and Matteo Negri. 2014. Automatic Annotation of Machine Translation Datasets with Binary Quality Judgements. In Proceedings of the 9th International Conference on Language Resources and Evaluation, Reykjavik, Iceland. Marco Turchi, Matteo Negri, and Marcello Federico. 2013. Coping with the Subjectivity of Human Judgements in MT Quality Estimation. In Proceedings of the 8th Workshop on Statistical Machine Translation, pages 240–251, Sofia, Bulgaria. Alexander Yeh. 2000. More Accurate Tests for the Statistical Significance of Result Differences. In Proceedings of the 18th conference on Computational linguistics (COLING 2000) - Volume 2, pages 947–953, Saarbrucken, Germany. 720
2014
67
Proceedings of the 52nd Annual Meeting of the Association for Computational Linguistics, pages 721–732, Baltimore, Maryland, USA, June 23-25 2014. c⃝2014 Association for Computational Linguistics Learning Grounded Meaning Representations with Autoencoders Carina Silberer and Mirella Lapata Institute for Language, Cognition and Computation School of Informatics, University of Edinburgh 10 Crichton Street, Edinburgh EH8 9AB [email protected], [email protected] Abstract In this paper we address the problem of grounding distributional representations of lexical meaning. We introduce a new model which uses stacked autoencoders to learn higher-level embeddings from textual and visual input. The two modalities are encoded as vectors of attributes and are obtained automatically from text and images, respectively. We evaluate our model on its ability to simulate similarity judgments and concept categorization. On both tasks, our approach outperforms baselines and related models. 1 Introduction Recent years have seen a surge of interest in single word vector spaces (Turney and Pantel, 2010; Collobert et al., 2011; Mikolov et al., 2013) and their successful use in many natural language applications. Examples include information retrieval (Manning et al., 2008), search query expansions (Jones et al., 2006), document classification (Sebastiani, 2002), and question answering (Yih et al., 2013). Vector spaces have been also popular in cognitive science figuring prominently in simulations of human behavior involving semantic priming, deep dyslexia, text comprehension, synonym selection, and similarity judgments (see Griffiths et al., 2007). In general, these models specify mechanisms for constructing semantic representations from text corpora based on the distributional hypothesis (Harris, 1970): words that appear in similar linguistic contexts are likely to have related meanings. Word meaning, however, is also tied to the physical world. Words are grounded in the external environment and relate to sensorimotor experience (Regier, 1996; Landau et al., 1998; Barsalou, 2008). To account for this, new types of perceptually grounded distributional models have emerged. These models learn the meaning of words based on textual and perceptual input. The latter is approximated by feature norms elicited from humans (Andrews et al., 2009; Steyvers, 2010; Silberer and Lapata, 2012), visual information extracted automatically from images, (Feng and Lapata, 2010; Bruni et al., 2012a; Silberer et al., 2013) or a combination of both (Roller and Schulte im Walde, 2013). Despite differences in formulation, most existing models conceptualize the problem of meaning representation as one of learning from multiple views corresponding to different modalities. These models still represent words as vectors resulting from the combination of representations with different statistical properties that do not necessarily have a natural correspondence (e.g., text and images). In this work, we introduce a model, illustrated in Figure 1, which learns grounded meaning representations by mapping words and images into a common embedding space. Our model uses stacked autoencoders (Bengio et al., 2007) to induce semantic representations integrating visual and textual information. The literature describes several successful approaches to multimodal learning using different variants of deep networks (Ngiam et al., 2011; Srivastava and Salakhutdinov, 2012) and data sources including text, images, audio, and video. 
Unlike most previous work, our model is defined at a finer level of granularity — it computes meaning representations for individual words and is unique in its use of attributes as a means of representing the textual and visual modalities. We follow Silberer et al. (2013) in arguing that an attribute-centric representation is expedient for several reasons. Firstly, attributes provide a natural way of expressing salient properties of word meaning as demonstrated in norming studies (e.g., McRae et al., 2005) where humans often employ attributes when asked to describe a concept. Secondly, from 721 a modeling perspective, attributes allow for easier integration of different modalities, since these are rendered in the same medium, namely, language. Thirdly, attributes are well-suited to describing visual phenomena (e.g., objects, scenes, actions). They allow to generalize to new instances for which there are no training examples available and to transcend category and task boundaries whilst offering a generic description of visual data (Farhadi et al., 2009). Our model learns multimodal representations from attributes which are automatically inferred from text and images. We evaluate the embeddings it produces on two tasks, namely word similarity and categorization. In the first task, model estimates of word similarity (e.g., gem–jewel are similar but glass–magician are not) are compared against elicited similarity ratings. We performed a large-scale evaluation on a new dataset consisting of human similarity judgments for 7,576 word pairs. Unlike previous efforts such as the widely used WordSim353 collection (Finkelstein et al., 2002), our dataset contains ratings for visual and textual similarity, thus allowing to study the two modalities (and their contribution to meaning representation) together and in isolation. We also assess whether the learnt representations are appropriate for categorization, i.e., grouping a set of objects into meaningful semantic categories (e.g., peach and apple are members of FRUIT, whereas chair and table are FURNITURE). On both tasks, our model outperforms baselines and related models. 2 Related Work The presented model has connections to several lines of work in NLP, computer vision research, and more generally multimodal learning. We review related work in these areas below. Grounded Semantic Spaces Grounded semantic spaces are essentially distributional models augmented with perceptual information. A model akin to Latent Semantic Analysis (Landauer and Dumais, 1997) is proposed in Bruni et al. (2012b) who concatenate two independently constructed textual and visual spaces and subsequently project them onto a lower-dimensional space using Singular Value Decomposition. Several other models have been extensions of Latent Dirichlet Allocation (Blei et al., 2003) where topic distributions are learned from words and other perceptual units. Feng and Lapata (2010) use visual words which they extract from a corpus of multimodal documents (i.e., BBC news articles and their associated images), whereas others (Steyvers, 2010; Andrews et al., 2009; Silberer and Lapata, 2012) use feature norms obtained in longitudinal elicitation studies (see McRae et al. (2005) for an example) as an approximation of the visual environment. More recently, topic models which combine both feature norms and visual words have also been introduced (Roller and Schulte im Walde, 2013). 
Drawing inspiration from the successful application of attribute classifiers in object recognition, Silberer et al. (2013) show that automatically predicted visual attributes act as substitutes for feature norms without any critical information loss. The visual and textual modalities on which our model is trained are decoupled in that they are not derived from the same corpus (we would expect co-occurring images and text to correlate to some extent) but unified in their representation by natural language attributes. The use of stacked autoencoders to extract a shared lexical meaning representation is new to our knowledge, although, as we explain below, it is related to a large body of work on deep learning. Multimodal Deep Learning Our work employs deep learning (a.k.a. deep networks) to project linguistic and visual information onto a unified representation that fuses the two modalities together. The goal of deep learning is to learn multiple levels of representations through a hierarchy of network architectures, where higher-level representations are expected to help define higher-level concepts. A large body of work has focused on projecting words and images into a common space using a variety of deep learning methods ranging from deep and restricted Boltzmann machines (Srivastava and Salakhutdinov, 2012; Feng et al., 2013), to autoencoders (Wu et al., 2013), and recursive neural networks (Socher et al., 2013b). Similar methods have been employed to combine other modalities such as speech and video (Ngiam et al., 2011) or images (Huang and Kingsbury, 2013). Although our model is conceptually similar to these studies (especially those applying stacked autoencoders), it differs considerably from them in at least two aspects. Firstly, most of these approaches aim to learn a shared representation between modalities so as to infer some missing modality from others (e.g., to infer text from images and vice versa); in contrast, we aim to learn an optimal representation for each modality and their optimal combination. Secondly, our problem setting is different from the former studies, which usually deal with classification tasks and fine-tune the deep neural networks using training data with explicit class labels; in contrast, we fine-tune our autoencoders using a semi-supervised criterion. That is, we use indirect supervision in the form of object classification in addition to the objective of reconstructing the attribute-centric input representation. 3 Autoencoders for Grounded Semantics 3.1 Background Our model learns higher-level meaning representations for single words from textual and visual input in a joint fashion. We first briefly review autoencoders in Section 3.1 with emphasis on aspects relevant to our model, which we then describe in Section 3.2. Autoencoders An autoencoder is an unsupervised neural network which is trained to reconstruct a given input from its latent representation (Bengio, 2009). It consists of an encoder fθ which maps an input vector x(i) to a latent representation y(i) = fθ(x(i)) = s(Wx(i) + b), with s being a non-linear activation function, such as a sigmoid function. A decoder gθ′ then aims to reconstruct input x(i) from y(i), i.e., ˆx(i) = gθ′(y(i)) = s(W′y(i) + b′). The training objective is the determination of parameters ˆθ = {W,b} and ˆθ′ = {W′,b′} that minimize the average reconstruction error over a set of input vectors {x(1),...,x(n)}: $$\hat{\theta}, \hat{\theta}' = \mathop{\arg\min}_{\theta,\theta'} \frac{1}{n} \sum_{i=1}^{n} L\!\left(x^{(i)}, g_{\theta'}(f_\theta(x^{(i)}))\right), \qquad (1)$$ where L is a loss function, such as cross-entropy.
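As a concrete illustration of the training objective in Equation (1), the sketch below implements a tied-weight autoencoder with tanh activations; for simplicity it minimizes squared reconstruction error rather than the entropic loss used in the paper, and the layer sizes, initialization and optimizer settings are illustrative assumptions.

```python
import torch

class TiedAutoencoder(torch.nn.Module):
    def __init__(self, n_in, n_hidden):
        super().__init__()
        self.W = torch.nn.Parameter(torch.randn(n_hidden, n_in) * 0.01)
        self.b = torch.nn.Parameter(torch.zeros(n_hidden))     # encoder bias b
        self.b_prime = torch.nn.Parameter(torch.zeros(n_in))   # decoder bias b'

    def encode(self, x):                   # y = s(Wx + b)
        return torch.tanh(x @ self.W.T + self.b)

    def decode(self, y):                   # x_hat = s(W'y + b'), with W' = W^T
        return torch.tanh(y @ self.W + self.b_prime)

    def forward(self, x):
        return self.decode(self.encode(x))

def pretrain(ae, X, epochs=50, lr=1e-3):
    """Minimize the average reconstruction error over the input vectors."""
    opt = torch.optim.Adam(ae.parameters(), lr=lr)
    for _ in range(epochs):
        opt.zero_grad()
        loss = torch.mean((ae(X) - X) ** 2)   # reconstruction loss L
        loss.backward()
        opt.step()
    return ae
```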
Parameters θ and θ′ can be optimized by gradient descent methods. Autoencoders are a means to learn representations of some input by retaining useful features in the encoding phase which help to reconstruct the input, whilst discarding useless or noisy ones. To this end, different strategies have been employed to guide parameter learning and constrain the hidden representation. Examples include imposing a bottleneck to produce an under-complete representation of the input, using sparse representations, or denoising. Denoising Autoencoders The training criterion with denoising autoencoders is the reconstruction of clean input x(i) given a corrupted version ˜x(i) (Vincent et al., 2010). The underlying idea is that the learned latent representation is good if the autoencoder is capable of reconstructing the actual input from its corruption. The reconstruction error for an input x(i) with loss function L then is: L(x(i),gθ′(fθ(˜x(i)))) (2) One possible corruption process is masking noise, where the corrupted version ˜x(i) results from randomly setting a fraction v of x(i) to 0. Stacked Autoencoders Several (denoising) autoencoders can be used as building blocks to form a deep neural network (Bengio et al., 2007; Vincent et al., 2010). For that purpose, the autoencoders are pre-trained layer by layer, with the current layer being fed the latent representation of the previous autoencoder as input. Using this unsupervised pre-training procedure, initial parameters are found which approximate a good solution. Subsequently, the original input layer and hidden representations of all the autoencoders are stacked and all network parameters are fine-tuned with backpropagation. To further optimize the parameters of the network, a supervised criterion can be imposed on top of the last hidden layer such as the minimization of a prediction error on a supervised task (Bengio, 2009). Another approach is to unfold the stacked autoencoders and fine-tune them with respect to the minimization of the global reconstruction error (Hinton and Salakhutdinov, 2006). Alternatively, a semi-supervised criterion can be used (Ranzato and Szummer, 2008; Socher et al., 2011) through combination of the unsupervised training criterion (global reconstruction) with a supervised criterion (prediction of some target given the latent representation). 3.2 Semantic Representations To learn meaning representations of single words from textual and visual input, we employ stacked (denoising) autoencoders (SAEs). Both input modalities are vector-based representations of words, or, more precisely, the objects they refer to (e.g., canary, trolley). The vector dimensions correspond to textual and visual attributes, examples of which are shown in Table 1. We explain how these representations are obtained in more detail 723 ... ... ... input x TEXT W (1) W (3) ... ... ... IMAGES W (2) W (4) ... bimodal coding ˘y W (5′) W (5) ... softmax ˆt W (6) ... ... W (3′) ... ... W (4′) ... reconstruction ˆx W (1′) ... W (2′) Figure 1: Stacked autoencoder trained with semi-supervised objective. Input to the model are singleword vector representations obtained from text and images. Vector dimensions correspond to textual and visual attributes, respectively (see Table 1). in Section 4.1. We first train SAEs with two hidden layers (codings) for each modality separately. Then, we join these two SAEs by feeding their respective second coding simultaneously to another autoencoder, whose hidden layer thus yields the fused meaning representation. 
Finally, we stack all layers and unfold them in order to fine-tune the SAE. Figure 1 illustrates the model. Unimodal Autoencoders For both modalities, we use the hyperbolic tangent function as activation function for encoder fθ and decoder gθ′ and an entropic loss function for L. The weights of each autoencoder are tied, i.e., W′ = WT. We employ denoising autoencoders (DAEs) for pre-training the textual modality. Regarding the visual autoencoder, we derive a new (‘denoised’) target vector to be reconstructed for each input vector x(i), and treat x(i) itself as corrupted input. The unimodal autoencoder is thus trained to denoise a given input. The target vector is derived as follows: each object o in our data is represented by multiple images, and each image is in turn represented by a visual attribute vector x(i). The target vector is the sum of x(i) and the centroid x(j) of the remaining attribute vectors representing object o. Bimodal Autoencoder The bimodal autoencoder is fed with the concatenated final hidden codings of the visual and textual modalities as input and maps these inputs to a joint hidden layer ˘y with B units. We normalize both unimodal input codings to unit length. Again, we use tied weights for the bimodal autoencoder. We also encourage the autoencoder to detect dependencies between the two modalities while learning the mapping to the bimodal hidden layer. We therefore apply masking noise to one modality with a masking factor v (see Section 3.1), so that the corrupted modality optimally has to rely on the other modality in order to reconstruct its missing input features. Stacked Bimodal Autoencoder We finally build a stacked bimodal autoencoder (SAE) with all pre-trained layers and fine-tune them with respect to a semi-supervised criterion. That is, we unfold the stacked autoencoder and furthermore add a softmax output layer on top of the bimodal layer ˘y that outputs predictions ˆt with respect to the inputs’ object labels (e.g., boat): ˆt(i) = exp(W(6)˘y(i) +b(6)) ∑O k=1 exp(W(6) k. ˘y(i) +b(6) k ) , (3) with weights W(6) ∈RO×B, b(6) ∈RO×1, where O is the number of unique object labels. The overall objective to be minimized is therefore the weighted sum of the reconstruction error Lr and the classification error Lc: L = 1 n n ∑ i=1  δrLr(x(i), ˆx(i))+δcLc(t(i),ˆt(i))  +λR (4) where δr and δc are weighting parameters that give different importance to the partial objectives, 724 eats seeds has beak has claws has handlebar has wheels has wings is yellow made of wood canary 0.05 0.24 0.15 0.00 –0.10 0.19 0.34 0.00 Visual trolley 0.00 0.00 0.00 0.30 0.32 0.00 0.00 0.25 bird:n breed:v cage:n chirp:v fly:v track:n ride:v run:v rail:n wheel:n canary 0.16 0.19 0.39 0.13 0.13 0.00 0.00 0.00 0.00 –0.05 Textual trolley –0.40 0.00 0.00 0.00 0.00 0.14 0.16 0.33 0.17 0.20 Table 1: Examples of attribute-based representations provided as input to our autoencoders. Lc and Lr are entropic loss functions, and R is a regularization term with R = ∑5 j=1 2||W(j)||2 + ||W(6)||2. Finally, ˆt(i) is the object label vector predicted by the softmax layer for input vector x(i), and t(i) is the correct object label, represented as a O-dimensional one-hot vector1. The additional supervised criterion drives the learning towards a representation capable of discriminating between different objects. Furthermore, the semi-supervised setting affords flexibility, allowing to adapt the architecture to specific tasks. 
For example, by setting the corruption parameter v for the textual modality to one and δr to zero, a standard object classification model for images can be trained. Setting v close to one for either modality enables the model to infer the other (missing) modality. As our input consists of natural language attributes, the model would infer textual attributes given visual attributes and vice versa. 4 Experimental Setup In this section we present our experimental setup for assessing the performance of our model. We give details on the tasks and datasets used for evaluation, we explain how the textual and visual inputs were constructed, how the SAE model was trained, and describe the approaches used for comparison with our own work. 4.1 Data We learn meaning representations for the nouns contained in McRae et al.’s (2005) feature norms. These are 541 concrete animate and inanimate objects (e.g., animals, clothing, vehicles, utensils, fruits, and vegetables). The norms were elicited by asking participants to list properties (e.g., barks, an animal, has legs) describing the nouns they were presented with. 1In a one-hot vector, the element corresponding to the object label is one and the others are zero. As shown in Figure 1, our model takes as input two (real-valued) vectors representing the visual and textual modalities. Vector dimensions correspond to textual and visual attributes, respectively. Textual attributes were extracted by running Strudel (Baroni et al., 2010) on a 2009 dump of the English Wikipedia.2 Strudel is a fully automatic method for extracting weighted wordattribute pairs (e.g., bat–species:n, bat–bite:v) from a lemmatized and POS-tagged corpus. Weights are log-likelihood ratio scores expressing how strongly an attribute and a word are associated. We only retained the ten highest scored attributes for each target word. This returned a total of 2,362 dimensions for the textual vectors. Association scores were scaled to the [−1,1] range. To obtain visual vectors, we followed the methodology put forward in Silberer et al. (2013). Specifically, we used an updated version of their dataset to train SVM-based attribute classifiers that predict visual attributes for images (Farhadi et al., 2009). The dataset is a taxonomy of 636 visual attributes (e.g., has wings, made of wood) and nearly 700K images from ImageNet (Deng et al., 2009) describing more than 500 of McRae et al.’s (2005) nouns. The classifiers perform reasonably well with an interpolated average precision of 0.52. We only considered attributes assigned to at least two nouns in the dataset, obtaining a 414 dimensional vector for each noun. Analogously to the textual representations, visual vectors were scaled to the [−1,1] range. We follow Silberer et al.’s (2013) partition of the dataset into training, validation, and test set and acquire visual vectors for each of the sets. We use the visual vectors of the training and development set for training the autoencoders, and the vectors for the test set for evaluation. 2The corpus is downloadable from http://wacky. sslmit.unibo.it/doku.php?id=corpora. 725 4.2 Model Architecture Model parameters were optimized on a subset of the word association norms collected by Nelson et al. (1998).3 These were established by presenting participants with a cue word (e.g., canary) and asking them to name an associate word in response (e.g., bird, sing, yellow). For each cue, the norms provide a set of associates and the frequencies with which they were named. 
The dataset contains a very large number of cue-associate pairs (63,619 in total) some of which luckily are covered in McRae et al. (2005).4 During training we used correlation analysis (Spearman’s ρ) to monitor the degree of linear relationship between model cue-associate (cosine) similarities and human probabilities. The best autoencoder on the word association task obtained a correlation coefficient of 0.33. This performance is superior to the results reported in Silberer et al. (2013) (their correlation coefficients range from 0.16 to 0.28). This model has the following architecture: the textual autoencoder (see Figure 1, left-hand side) consists of 700 hidden units which are then mapped to the second hidden layer with 500 units (the corruption parameter was set to v = 0.1); the visual autoencoder (see Figure 1, right-hand side) has 170 and 100 hidden units, in the first and second layer, respectively. The 500 textual and 100 visual hidden units were fed to a bimodal autoencoder containing 500 latent units, and masking noise was applied to the textual modality with v = 0.2. The weighting parameters for the joint training objective of the stacked autoencoder were set to δr = 0.8 and δc = 1 (see Equation (4)). We used the model described above and the meaning representations obtained from the output of the bimodal latent layer for all the evaluation tasks detailed below. Some performance gains could be expected if parameter optimization took place separately for each task. However, we wanted to avoid overfitting, and show that our parameters are robust across tasks and datasets. 4.3 Evaluation Tasks Word Similarity We first evaluated how well our model predicts word similarity ratings. Although several relevant datasets exist, such as 3http://w3.usf.edu/Freeassociation. 4435 word pairs constitute the overlap between Nelson et al.’s norms (1998) and McRae et al.’s (2005) nouns. the widely used WordSim353 (Finkelstein et al., 2002) or the more recent Rel-122 norms (Szumlanski et al., 2013), they contain many abstract words, (e.g., love–sex or arrest–detention) which are not covered in McRae et al. (2005). This is for a good reason, as most abstract words do not have discernible attributes, or at least attributes that participants would agree upon. We thus created a new dataset consisting exclusively of McRae et al. (2005) nouns which we hope will be useful for the development and evaluation of grounded semantic space models.5 Initially, we created all possible pairings over McRae et al.’s (2005) nouns and computed their semantic relatedness using Patwardhan and Pedersen (2006)’s WordNet-based measure. We opted for this specific measure as it achieves high correlation with human ratings and has a high coverage on our nouns. Next, for each word we randomly selected 30 pairs under the assumption that they are representative of the full variation of semantic similarity. This resulted in 7,576 word pairs for which we obtained similarity ratings using Amazon Mechanical Turk (AMT). Participants were asked to rate a pair on two dimensions, visual and semantic similarity using a Likert scale of 1 (highly dissimilar) to 5 (highly similar). Each task consisted of 32 pairs covering examples of weak to very strong semantic relatedness. Two control pairs from Miller and Charles (1991) were included in each task to potentially help identify and eliminate data from participants who assigned random scores. Examples of the stimuli and mean ratings are shown in Table 2. 
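The evaluation just described, correlating model cosine similarities with human ratings via Spearman's ρ, can be sketched as follows; `embeddings` and `pairs` are hypothetical containers for the learned word vectors and the rated word pairs.

```python
import numpy as np
from scipy.stats import spearmanr

def cosine(u, v):
    return float(u @ v / (np.linalg.norm(u) * np.linalg.norm(v)))

def similarity_correlation(embeddings, pairs):
    """embeddings: dict word -> vector; pairs: list of (word1, word2, rating)."""
    model_scores = [cosine(embeddings[w1], embeddings[w2]) for w1, w2, _ in pairs]
    human_scores = [rating for _, _, rating in pairs]
    rho, _ = spearmanr(model_scores, human_scores)
    return rho
```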
The elicitation study comprised overall 255 tasks, each task was completed by five volunteers. The similarity data was post-processed so as to identify and remove outliers. We considered an outlier to be any individual whose mean pairwise correlation fell outside two standard deviations from the mean correlation. 11.5% of the annotations were detected as outliers and removed. After outlier removal, we further examined how well the participants agreed in their similarity judgments. We measured inter-subject agreement as the average pairwise correlation coefficient (Spearman’s ρ) between the ratings of all annotators for each task. For semantic similarity, the mean correlation was 0.76 (Min =0.34, Max 5Available from http://homepages.inf.ed.ac.uk/ mlap/index.php?page=resources. 726 Word Pairs Semantic Visual football–pillow 1.0 1.2 dagger–pencil 1.0 2.2 motorcycle–wheel 2.4 1.8 orange–pumpkin 2.5 3.0 cherry–pineapple 3.6 1.2 pickle–zucchini 3.6 4.0 canary–owl 4.0 2.4 jeans–sweater 4.5 2.2 pan–pot 4.7 4.0 hornet–wasp 4.8 4.8 airplane–jet 5.0 5.0 Table 2: Mean semantic and visual similarity ratings for the McRae et al. (2005) nouns using a scale of 1 (highly dissimilar) to 5 (highly similar). =0.97, StD =0.11) and for visual similarity 0.63 (Min =0.19, Max =0.90, SD =0.14). These results indicate that the participants found the task relatively straightforward and produced similarity ratings with a reasonable level of consistency. For comparison, Patwardhan and Pedersen’s (2006) measure achieved a coefficient of 0.56 on the dataset for semantic similarity and 0.48 for visual similarity. The correlation between the average ratings of the AMT annotators and the Miller and Charles (1991) dataset was ρ = 0.91. In our experiments (see Section 5), we correlate modelbased cosine similarities with mean similarity ratings (again using Spearman’s ρ). Categorization The task of categorization (i.e., grouping objects into meaningful categories) is a classic problem in the field of cognitive science, central to perception, learning, and the use of language. We evaluated model output against a gold standard set of categories created by Fountain and Lapata (2010). The dataset contains a classification, produced by human participants, of McRae et al.’s (2005) nouns into (possibly multiple) semantic categories (40 in total).6 To obtain a clustering of nouns, we used Chinese Whispers (Biemann, 2006), a randomized graph-clustering algorithm. In the categorization setting, Chinese Whispers (CW) produces a hard clustering over a weighted graph whose nodes cor6The dataset can be downloaded from http: //homepages.inf.ed.ac.uk/s0897549/data/. respond to words and edges to cosine similarity scores between vectors representing their meaning. CW is a non-parametric model, it induces the number of clusters (i.e., categories) from the data as well as which nouns belong to these clusters. In our experiments, we initialized Chinese Whispers with different graphs resulting from different vector-based representations of the McRae et al. (2005) nouns. We also transformed the dataset into hard categorizations by assigning each noun to its most typical category as extrapolated from human typicality ratings (for details see Fountain and Lapata, 2010). CW can optionally apply a minimum weight threshold which we optimized using the categorization dataset from Baroni et al. (2010). The latter contains a classification of 82 McRae et al. (2005) nouns into 10 categories. 
These nouns were excluded from the gold standard (Fountain and Lapata, 2010) in our final evaluation. We evaluated the clusters produced by CW using the F-score measure introduced in the SemEval 2007 task (Agirre and Soroa, 2007); it is the harmonic mean of precision and recall defined as the number of correct members of a cluster divided by the number of items in the cluster and the number of items in the gold-standard class, respectively. 4.4 Comparison with Other Models Throughout our experiments we compare a bimodal stacked autoencoder against unimodal autoencoders based solely on textual and visual input (left- and right-hand sides in Figure 1, respectively). We also compare our model against two approaches that differ in their fusion mechanisms. The first one is based on kernelized canonical correlation (kCCA, Hardoon et al., 2004) with a linear kernel which was the best performing model in Silberer et al. (2013). The second one emulates Bruni et al.’s (2014) fusion mechanism. Specifically, we concatenate the textual and visual vectors and project them onto a lower dimensional latent space using SVD (Golub and Reinsch, 1970). All these models run on the same datasets/items and are given input identical to our model, namely attribute-based textual and visual representations. We furthermore report results obtained with Bruni et al.’s (2014) bimodal distributional model, which employs SVD to integrate co-occurrencebased textual representations with visual repre727 Semantic Visual Models T V T+V T V T+V McRae 0.71 0.49 0.68 0.58 0.52 0.62 Attributes 0.58 0.61 0.68 0.46 0.56 0.58 SAE 0.65 0.60 0.70 0.52 0.60 0.64 SVD — — 0.67 — — 0.57 kCCA — — 0.57 — — 0.55 Bruni — — 0.52 — — 0.46 RNN-640 0.41 — — 0.34 — — Table 3: Correlation of model predictions against similarity ratings for McRae et al. (2005) noun pairs (using Spearman’s ρ). sentations constructed from low-level image features. In their model, the textual modality is represented by the 30K-dimensional vectors extracted from UKWaC and WaCkypedia.7 The visual modality is represented by bag-of-visualwords histograms built on the basis of clustered SIFT features (Lowe, 2004). We rebuilt their model on the ESP image dataset (von Ahn and Dabbish, 2004) using Bruni et al.’s (2013) publicly available system. Finally, we also compare to the word embeddings obtained using Mikolov et al.’s (2011) recurrent neural network based language model. These were pre-trained on Broadcast news data (400M words) using the word2vec tool.8 We report results with the 640-dimensional embeddings as they performed best. 5 Results Table 3 presents our results on the word similarity task. We report correlation coefficients of model predictions against similarity ratings. As an indicator to how well automatically extracted attributes can approach the performance of clean human generated attributes, we also report results of a distributional model induced from McRae et al.’s (2005) norms (see the row labeled McRae in the table). Each noun is represented as a vector with dimensions corresponding to attributes elicited by participants of the norming study. Vector components are set to the (normalized) frequency with which participants generated the corresponding attribute. We show results for three models, using all attributes except those classified as visual (T), only 7We thank Elia Bruni for providing us with their data. 8Available from http://www.rnnlm.org/. 
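For reference, the following is a minimal sketch of Chinese Whispers over a cosine-similarity graph with a minimum edge-weight threshold, as used for the categorization experiments above; it is an illustrative re-implementation based on Biemann's (2006) description, not the authors' code, and the threshold, iteration count and seed values are arbitrary.

```python
import random
import numpy as np

def chinese_whispers(vectors, threshold=0.3, iterations=20, seed=0):
    """vectors: dict noun -> embedding; returns dict noun -> induced cluster id."""
    rng = random.Random(seed)
    words = list(vectors)

    def cos(u, v):
        return float(u @ v / (np.linalg.norm(u) * np.linalg.norm(v)))

    # Build the weighted graph: edges are cosine similarities above the threshold.
    edges = {w: {} for w in words}
    for i, w1 in enumerate(words):
        for w2 in words[i + 1:]:
            s = cos(vectors[w1], vectors[w2])
            if s >= threshold:
                edges[w1][w2] = s
                edges[w2][w1] = s

    # Each node starts in its own cluster; in random order, every node adopts
    # the label with the highest total edge weight among its neighbours.
    labels = {w: i for i, w in enumerate(words)}
    for _ in range(iterations):
        rng.shuffle(words)
        for w in words:
            if not edges[w]:
                continue
            scores = {}
            for nb, weight in edges[w].items():
                scores[labels[nb]] = scores.get(labels[nb], 0.0) + weight
            labels[w] = max(scores, key=scores.get)
    return labels
```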
# Pair # Pair 1 pliers–tongs 11 cello–violin 2 cathedral–church 12 cottage–house 3 cathedral–chapel 13 horse–pony 4 pistol–revolver 14 gun–rifle 5 chapel–church 15 cedar–oak 6 airplane–helicopter 16 bull–ox 7 dagger–sword 17 dress–gown 8 pistol–rifle 18 bolts–screws 9 cloak–robe 19 salmon–trout 10 nylons–trousers 20 oven–stove Table 4: Word pairs with highest semantic and visual similarity according to SAE model. Pairs are ranked from highest to lowest similarity. visual attributes (V), and all available attributes (V+T).9 As baselines, we also report the performance of a model based solely on textual attributes (which we obtain from Strudel), visual attributes (obtained from our classifiers), and their concatenation (see row Attributes in Table 3, and columns T, V, and T+V, respectively). The automatically obtained textual and visual attribute vectors serve as input to SVD, kCCA, and our stacked autoencoder (SAE). The third row in the table presents three variants of our model trained on textual and visual attributes only (T and V, respectively) and on both modalities jointly (T+V). Recall that participants were asked to provide ratings on two dimensions, namely semantic and visual similarity. We would expect the textual modality to be more dominant when modeling semantic similarity and conversely the perceptual modality to be stronger with respect to visual similarity. This is borne out in our unimodal SAEs. The textual SAE correlates better with semantic similarity judgments (ρ = 0.65) than its visual equivalent (ρ = 0.60). And the visual SAE correlates better with visual similarity judgments (ρ = 0.60) compared to the textual SAE (ρ = 0.52). Interestingly, the bimodal SAE is better than the unimodal variants on both types of similarity judgments, semantic and visual. This suggests that both modalities contribute complementary information and that the SAE model is able to extract a shared representation which improves generalization performance across tasks by learning them 9Classification of attributes into categories is provided by McRae et al. (2005) in their dataset. 728 Models T V T+V McRae 0.52 0.31 0.42 Attributes 0.35 0.37 0.33 SAE 0.36 0.35 0.43 SVD — — 0.39 kCCA — — 0.37 Bruni — — 0.34 RNN-640 0.32 — — Table 5: F-score results on concept categorization. jointly. The bimodal autoencoder (SAE, T+V) outperforms all other bimodal models on both similarity tasks. It yields a correlation coefficient of ρ = 0.70 on semantic similarity and ρ = 0.64 on visual similarity. Human agreement on the former task is 0.76 and 0.63 on the latter. Table 4 shows examples of word pairs with highest semantic and visual similarity according to the SAE model. We also observe that simply concatenating textual and visual attributes (Attributes, T+V) performs competitively with SVD and better than kCCA. This indicates that the attribute-based representation is a powerful predictor on its own. Interestingly, both Bruni et al. (2013) and Mikolov et al. (2011) which do not make use of attributes are out-performed by all other attribute-based systems (see columns T and T+V in Table 3). Our results on the categorization task are given in Table 5. In this task, simple concatenation of visual and textual attributes does not yield improved performance over the individual modalities (see row Attributes in Table 5). In contrast, all bimodal models (SVD, kCCA, and SAE) are better than their unimodal equivalents and RNN-640. 
The SAE outperforms both kCCA and SVD by a large margin delivering clustering performance similar to the McRae et al.’s (2005) norms. Table 6 shows examples of clusters produced by Chinese Whispers when using vector representations provided by the SAE model. In sum, our experiments show that the bimodal SAE model delivers superior performance across the board when compared against competitive baselines and related models. It is interesting to note that the unimodal SAEs are in most cases better than the raw textual or visual attributes. This indicates that higher level embeddings may be beneficial to NLP tasks in general, not only to those requiring multimodal information. STICK-LIKE UTENSILS baton, ladle, peg, spatula, spoon RELIGIOUS BUILDINGS cathedral, chapel, church WIND INSTRUMENTS clarinet, flute, saxophone, trombone, trumpet, tuba AXES axe, hatchet, machete, tomahawk FURNITURE W/ LEGS bed, bench, chair, couch, desk, rocker, sofa, stool, table FURNITURE W/O LEGS bookcase, bureau, cabinet, closet, cupboard, dishwasher, dresser LIGHTINGS candle, chandelier, lamp, lantern ENTRY POINTS door, elevator, gate UNGULATES bison, buffalo, bull, calf, camel, cow, donkey, elephant, goat, horse, lamb, ox, pig, pony, sheep BIRDS crow, dove, eagle, falcon, hawk, ostrich, owl, penguin, pigeon, raven, stork, vulture, woodpecker Table 6: Examples of clusters produced by CW using the representations obtained from the SAE model. 6 Conclusions In this paper, we presented a model that uses stacked autoencoders to learn grounded meaning representations by simultaneously combining textual and visual modalities. The two modalities are encoded as vectors of natural language attributes and are obtained automatically from decoupled text and image data. To the best of our knowledge, our model is novel in its use of attributebased input in a deep neural network. Experimental results in two tasks, namely simulation of word similarity and word categorization, show that our model outperforms competitive baselines and related models trained on the same attribute-based input. Our evaluation also reveals that the bimodal models are superior to their unimodal counterparts and that higher-level unimodal representations are better than the raw input. In the future, we would like to apply our model to other tasks, such as image and text retrieval (Hodosh et al., 2013; Socher et al., 2013b), zero-shot learning (Socher et al., 2013a), and word learning (Yu and Ballard, 2007). Acknowledgment We would like to thank Vittorio Ferrari, Iain Murray and members of the ILCC at the School of Informatics for their valuable feedback. We acknowledge the support of EPSRC through project grant EP/I037415/1. 729 References Agirre, Eneko and Aitor Soroa. 2007. SemEval2007 Task 02: Evaluating Word Sense Induction and Discrimination Systems. In Proceedings of the Fourth International Workshop on Semantic Evaluations. Prague, Czech Republic, pages 7–12. Andrews, M., G. Vigliocco, and D. Vinson. 2009. Integrating Experiential and Distributional Data to Learn Semantic Representations. Psychological Review 116(3):463–498. Baroni, M., B. Murphy, E. Barbu, and M. Poesio. 2010. Strudel: A Corpus-Based Semantic Model Based on Properties and Types. Cognitive Science 34(2):222–254. Barsalou, Lawrence W. 2008. Grounded Cognition. Annual Review of Psychology 59:617–845. Bengio, Y., P. Lamblin, D. Popovici, and H. Larochelle. 2007. Greedy Layer-Wise Training of Deep Networks. 
In Bernhard Sch¨olkopf, John Platt, and Thomas Hoffman, editors, Advances in Neural Information Processing Systems 19. MIT Press, pages 153–160. Bengio, Yoshua. 2009. Learning Deep Architectures for AI. Foundations and Trends in Machine Learning 2(1):1–127. Biemann, Chris. 2006. Chinese Whispers – an Efficient Graph Clustering Algorithm and its Application to Natural Language Processing Problems. In Proceedings of TextGraphs: the 1st Workshop on Graph Based Methods for Natural Language Processing. New York, NY, pages 73–80. Blei, D. M., A. Y. Ng, and M. I. Jordan. 2003. Latent Dirichlet Allocation. Journal of Machine Learning Research 3:993–1022. Bruni, E., G. Boleda, M. Baroni, and N. Tran. 2012a. Distributional Semantics in Technicolor. In Proceedings of the 50th Annual Meeting of the Association for Computational Linguistics. Jeju Island, Korea, pages 136–145. Bruni, E., U. Bordignon, A. Liska, J. Uijlings, and I. Sergienya. 2013. Vsem: An open library for visual semantics representation. In Proceedings of the 51st Annual Meeting of the Association for Computational Linguistics: System Demonstrations. Sofia, Bulgaria, pages 187–192. Bruni, E., N. Tran, and M. Baroni. 2014. Multimodal distributional semantics. J. Artif. Intell. Res. (JAIR) 49:1–47. Bruni, E., J. Uijlings, M. Baroni, and N. Sebe. 2012b. Distributional Semantics with Eyes: Using Image Analysis to Improve Computational Representations of Word Meaning. In Proceedings of the 20th ACM International Conference on Multimedia. Nara, Japan, pages 1219–1228. Collobert, R., J. Weston, L. Bottou, M. Karlen, K. Kavukcuoglu, and P. Kuksa. 2011. Natural Language Processing (almost) from Scratch. Journal of Machine Learning Research 12:2493–2537. Deng, J., W. Dong, R. Socher, L. Li, K. Li, and L. Fei-Fei. 2009. ImageNet: A Large-Scale Hierarchical Image Database. In Proceedings of the IEEE Computer Society Conference on Computer Vision and Pattern Recognition. Miami, Florida, pages 248–255. Farhadi, A., I. Endres, D. Hoiem, and D. Forsyth. 2009. Describing Objects by their Attributes. In Proceedings of the IEEE Computer Society Conference on Computer Vision and Pattern Recognition. Miami Beach, Florida, pages 1778–1785. Feng, Fangxiang, Ruifan Li, and Xiaojie Wang. 2013. Constructing Hierarchical Image-tags Bimodal Representations for Word Tags Alternative Choice. In Proceedings of the ICML 2013 Workshop on Challenges in Representation Learning. Atlanta, Georgia. Feng, Yansong and Mirella Lapata. 2010. Visual Information in Semantic Representation. In Human Language Technologies: The 2010 Annual Conference of the North American Chapter of the Association for Computational Linguistics. Los Angeles, California, pages 91–99. Finkelstein, L., E. Gabrilovich, Y. Matias, E. Rivlin, Z. Solan, G. Wolfman, and E. Ruppin. 2002. Placing Search in Context: The Concept Revisited. ACM Transactions on Information Systems 20(1):116–131. Fountain, Trevor and Mirella Lapata. 2010. Meaning Representation in Natural Language Categorization. In Proceedings of the 31st Annual Conference of the Cognitive Science Society. Amsterdam, The Netherlands, pages 1916– 1921. 730 Golub, Gene and Christian Reinsch. 1970. Singular Value Decomposition and Least Squares Solutions. Numerische Mathematik 14(5):403– 420. Griffiths, T. L., M. Steyvers, and J. B. Tenenbaum. 2007. Topics in Semantic Representation. Psychological Review 114(2):211–244. Hardoon, D. R., S. R. Szedmak, and J. R. ShaweTaylor. 2004. 
Canonical Correlation Analysis: An Overview with Application to Learning Methods. Neural Computation 16(12):2639– 2664. Harris, Zellig. 1970. Distributional Structure. In Papers in Structural and Transformational Linguistics, pages 775–794. Hinton, Geoffrey E. and Ruslan R. Salakhutdinov. 2006. Reducing the Dimensionality of Data with Neural Networks. Science 313(5786):504– 507. Hodosh, Micah, Peter Young, and Julia Hockenmaier. 2013. Framing Image Description as a Ranking Task: Data, Models and Evaluation Metrics. Journal of Artificial Intelligence Research 47:853–899. Huang, Jing and Brian Kingsbury. 2013. Audiovisual Deep Learning for Noise Robust Speech Recognition. In Proceedings of the 38th International Conference on Acoustics, Speech, and Signal Processing. Vancouver, Canada, pages 7596–7599. Jones, R., B. Rey, O. Madani, and W. Greiner. 2006. Generating Query Substititions. In Proceedings of the 15th International Conference on the World-Wide Web. Edinburgh, Scotland, pages 387–396. Landau, B., L. Smith, and S. Jones. 1998. Object Perception and Object Naming in Early Development. Trends in Cognitive Science 27:19–24. Landauer, Thomas and Susan T. Dumais. 1997. A Solution to Plato’s Problem: the Latent Semantic Analysis Theory of Acquisition, Induction, and Representation of Knowledge. Psychological Review 104(2):211–240. Lowe, D. 2004. Distinctive Image Features from Scale-invariant Keypoints. International Journal of Computer Vision 60(2):91–110. Manning, C. D., P. Raghavan, and H. Sch¨utze. 2008. Introduction to Information Retrieval. Cambridge University Press, New York, NY. McRae, K., G. S. Cree, M. S. Seidenberg, and C. McNorgan. 2005. Semantic Feature Production Norms for a Large Set of Living and Nonliving Things. Behavior Research Methods 37(4):547–559. Mikolov, T., S. Kombrink, L. Burget, J. ˇCernock´y, and S. Khudanpur. 2011. Extensions of Recurrent Neural Network Language Model. In Proceedings of the 2011 IEEE International Conference on Acoustics, Speech, and Signal Processing. Prague, Czech Republic, pages 5528– 5531. Mikolov, T., Wen-tau Yih, and G. Zweig. 2013. Linguistic Regularities in Continuous Space Word Representations. In Proceedings of the 2013 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies. Atlanta, Georgia, pages 746–751. Miller, George A. and Walter G. Charles. 1991. Contextual Correlates of Semantic Similarity. Language and Cognitive Processes 6(1). Nelson, D. L., C. L. McEvoy, and T. A. Schreiber. 1998. The University of South Florida Word Association, Rhyme, and Word Fragment Norms. Ngiam, Jiquan, Aditya Khosla, Mingyu Kim, Juhan Nam, Honglak Lee, and Andrew Ng. 2011. Multimodal Deep Learning. In Proceedings of the 28th International Conference on Machine Learning. Bellevue, Washington, pages 689–696. Patwardhan, Siddharth and Ted Pedersen. 2006. Using WordNet-based Context Vectors to Estimate the Semantic Relatedness of Concepts. In Proceedings of the EACL 2006 Workshop on Making Sense of Sense: Bringing Computational Linguistics and Psycholinguistics Together. Trento, Italy, pages 1–8. Ranzato, Marc’Aurelio and Martin Szummer. 2008. Semi-supervised Learning of Compact Document Representations with Deep Networks. In Proceedings of the 25th International Conference on Machine Learning. Helsinki, Finland, pages 792–799. Regier, Terry. 1996. The Human Semantic Potential. MIT Press, Cambridge, Massachusetts. Roller, Stephen and Sabine Schulte im Walde. 2013. 
A Multimodal LDA Model integrating 731 Textual, Cognitive and Visual Modalities. In Proceedings of the 2013 Conference on Empirical Methods in Natural Language Processing. Seattle, Washington, pages 1146–1157. Sebastiani, Fabrizio. 2002. Machine Learning in Automated Text Categorization. ACM Computing Surveys 34:1–47. Silberer, C., V. Ferrari, and M. Lapata. 2013. Models of Semantic Representation with Visual Attributes. In Proceedings of the 51st Annual Meeting of the Association for Computational Linguistics. Sofia, Bulgaria, pages 572–582. Silberer, Carina and Mirella Lapata. 2012. Grounded Models of Semantic Representation. In Proceedings of the 2012 Joint Conference on Empirical Methods in Natural Language Processing and Computational Natural Language Learning. Jeju Island, Korea, pages 1423–1433. Socher, R., M. Ganjoo, C. D. Manning, and A. Y. Ng. 2013a. Zero-shot learning through crossmodal transfer. In Advances in Neural Information Processing Systems 26, pages 935–943. Socher, R., Quoc V. Le, C. D. Manning, and A. Y. Ng. 2013b. Grounded Compositional Semantics for Finding and Describing Images with Sentences. In Proceedings of the NIPS Deep Learning Workshop. Socher, R., J. Pennington, E. H. Huang, A. Y. Ng, and C. D. Manning. 2011. Semi-Supervised Recursive Autoencoders for Predicting Sentiment Distributions. In Proceedings of the 2011 Conference on Empirical Methods in Natural Language Processing. Edinburgh, Scotland, pages 151–161. Srivastava, Nitish and Ruslan Salakhutdinov. 2012. Multimodal Learning with Deep Boltzmann Machines. In Advances in Neural Information Processing Systems 25, pages 2231– 2239. Steyvers, Mark. 2010. Combining Feature Norms and Text Data with Topic Models. Acta Psychologica 133(3):234–342. Szumlanski, S. R., F. Gomez, and V. K. Sims. 2013. A New Set of Norms for Semantic Relatedness Measures. In Proceedings of the 51st Annual Meeting of the Association for Computational Linguistics. Sofia, Bulgaria, pages 890– 895. Turney, Peter D. and Patrick Pantel. 2010. From Frequency to Meaning: Vector Space Models of Semantics. Journal of Artificial Intelligence Research 37(1):141–188. Vincent, P., H. Larochelle, I. Lajoie, Y. Bengio, and P. Manzagol. 2010. Stacked Denoising Autoencoders: Learning Useful Representations in a Deep Network with a Local Denoising Criterion. Journal of Machine Learning Research 11:3371–3408. von Ahn, Luis and Laura Dabbish. 2004. Labeling Images with a Computer Game. In Proceedings of the SIGCHI Conference on Human Factors in Computing Systems. Vienna, Austria, pages 319–326. Wu, Pengcheng, Steven C. H. Hoi, Hao Xia, Peilin Zhao, Dayong Wang, and Chunyan Miao. 2013. Online Multimodal Deep Similarity Learning with Application to Image Retrieval. In Proceedings of the 21st ACM International Conference on Multimedia. Barcelona, Spain, pages 153–162. Yih, Wen-tau, Ming-Wei Chang, Christopher Meek, and Andrzej Pastusiak. 2013. Question Answering Using Enhanced Lexical Semantic Models. In Proceedings of the 51st Annual Meeting of the Association for Computational Linguistics. Sofia, Bulgaria, pages 1744–1753. Yu, C. and D. H. Ballard. 2007. A Unified Model of Early Word Learning Integrating Statistical and Social Cues. Neurocomputing 70:2149– 2165. 732
2014
68
Proceedings of the 52nd Annual Meeting of the Association for Computational Linguistics, pages 733–742, Baltimore, Maryland, USA, June 23-25 2014. c⃝2014 Association for Computational Linguistics Joint POS Tagging and Transition-based Constituent Parsing in Chinese with Non-local Features Zhiguo Wang Brandeis University Waltham, MA, USA [email protected] Nianwen Xue Brandeis University Waltham, MA, USA [email protected] Abstract We propose three improvements to address the drawbacks of state-of-the-art transition-based constituent parsers. First, to resolve the error propagation problem of the traditional pipeline approach, we incorporate POS tagging into the syntactic parsing process. Second, to alleviate the negative influence of size differences among competing action sequences, we align parser states during beam-search decoding. Third, to enhance the power of parsing models, we enlarge the feature set with non-local features and semisupervised word cluster features. Experimental results show that these modifications improve parsing performance significantly. Evaluated on the Chinese TreeBank (CTB), our final performance reaches 86.3% (F1) when trained on CTB 5.1, and 87.1% when trained on CTB 6.0, and these results outperform all state-of-the-art parsers. 1 Introduction Constituent parsing is one of the most fundamental tasks in Natural Language Processing (NLP). It seeks to uncover the underlying recursive phrase structure of sentences. Most of the state-of-theart parsers are based on the PCFG paradigm and chart-based decoding algorithms (Collins, 1999; Charniak, 2000; Petrov et al., 2006). Chart-based parsers perform exhaustive search with dynamic programming, which contributes to their high accuracy, but they also suffer from higher runtime complexity and can only exploit simple local structural information. Transition-based constituent parsing (Sagae and Lavie, 2005; Wang et al., 2006; Zhang and Clark, 2009) is an attractive alternative. It utilizes a series of deterministic shift-reduce decisions to construct syntactic trees. Therefore, it runs in linear time and can take advantage of arbitrarily complex structural features from already constructed subtrees. The downside is that they only search a tiny fraction of the whole space and are therefore commonly considered to be less accurate than chartbased parsers. Recent studies (Zhu et al., 2013; Zhang et al., 2013) show, however, that this approach can also achieve the state-of-the-art performance with improved training procedures and the use of additional source of information as features. However, there is still room for improvement for these state-of-the-art transition-based constituent parsers. First, POS tagging is typically performed separately as a preliminary step, and POS tagging errors will propagate to the parsing process. This problem is especially severe for languages where the POS tagging accuracy is relatively low, and this is the case for Chinese where there are fewer contextual clues that can be used to inform the tagging process and some of the tagging decisions are actually influenced by the syntactic structure of the sentence. This creates a chicken and egg problem that needs to be addressed when designing a parsing model. Second, due to the existence of unary rules in constituent trees, competing candidate parses often have different number of actions, and this increases the disambiguation difficulty for the parsing model. 
Third, transition-based parsers have the freedom to define arbitrarily complex structural features, but this freedom has not fully been taken advantage of and most of the present approaches only use simple structural features. In this paper, we address these drawbacks to improve the transition-based constituent parsing for Chinese. First, we integrate POS tagging into the parsing process and jointly optimize these two processes simultaneously. Because non-local syntactic information is now available to POS tag 733 determination, the accuracy of POS tagging improves, and this will in turn improve parsing accuracy. Second, we propose a novel state alignment strategy to align candidate parses with different action sizes during beam-search decoding. With this strategy, parser states and their unary extensions are put into the same beam, therefore the parsing model could decide whether or not to use unary actions within local decision beams. Third, we take into account two groups of complex structural features that have not been previously used in transition-based parsing: nonlocal features (Charniak and Johnson, 2005) and semi-supervised word cluster features (Koo et al., 2008). With the help of the non-local features, our transition-based parsing system outperforms all previous single systems in Chinese. After integrating semi-supervised word cluster features, the parsing accuracy is further improved to 86.3% when trained on CTB 5.1 and 87.1% when trained on CTB 6.0, and this is the best reported performance for Chinese. The remainder of this paper is organized as follows: Section 2 introduces the standard transitionbased constituent parsing approach. Section 3 describes our three improvements to standard transition-based constituent parsing. We discuss and analyze the experimental results in Section 4. Section 5 discusses related work. Finally, we conclude this paper in Section 6. 2 Transition-based Constituent Parsing This section describes the transition-based constituent parsing model, which is the basis of Section 3 and the baseline model in Section 4. 2.1 Transition-based Constituent Parsing Model A transition-based constituent parsing model is a quadruple C = (S, T, s0, St), where S is a set of parser states (sometimes called configurations), T is a finite set of actions, s0 is an initialization function to map each input sentence into a unique initial state, and St ∈S is a set of terminal states. Each action t ∈T is a transition function to transit a state into a new state. A parser state s ∈S is defined as a tuple s = (σ, β), where σ is a stack which is maintained to hold partial subtrees that are already constructed, and β is a queue which is used for storing word-POS pairs that remain unprocessed. In particular, the initial state has an B0,3 c2,3 w2 A0,2 b1,2 w1 a0,1 w0 sh,sh,rr-A,sh,rl-B (a) B0,3 F2,3 c2,3 w2 E0,2 A0,2 D1,2 b1,2 w1 C0,1 a0,1 w0 sh,ru-C,sh,ru-D,rr-A, ru-E,sh,ru-F,rl-B (b) Figure 1: Two constituent trees for an example sentence w0w1w2 with POS tags abc. The corresponding action sequences are given below, the spans of each nodes are annotated and the head nodes are written with Bold font type. empty stack σ and a queue β containing the entire input sentence (word-POS pairs), and the terminal states have an empty queue β and a stack σ containing only one complete parse tree. The task of transition-based constituent parsing is to scan the input POS-tagged sentence from left to right and perform a sequence of actions to transform the initial state into a terminal state. 
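As an illustration of the state representation just defined, the following minimal Python sketch encodes a parser state s = (σ, β) together with the initialization function s0 and the terminal-state test; the type and field names are ours for exposition and are not taken from the authors' implementation.

from collections import namedtuple

Subtree = namedtuple("Subtree", "label head children")   # head is a (word, POS) pair
State = namedtuple("State", "stack queue")               # stack top is stack[-1]

def initial_state(tagged_sentence):
    # s0: empty stack, queue holding every word-POS pair of the input sentence.
    return State(stack=(), queue=tuple(tagged_sentence))

def is_terminal(state):
    # Terminal states have an empty queue and exactly one complete tree on the stack.
    return not state.queue and len(state.stack) == 1

s0 = initial_state([("w0", "a"), ("w1", "b"), ("w2", "c")])
print(is_terminal(s0))   # False: no actions have been performed yet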
In order to construct lexicalized constituent parse trees, we define the following actions for the action set T according to (Sagae and Lavie, 2005; Wang et al., 2006; Zhang and Clark, 2009): • SHIFT (sh): remove the first word-POS pair from β, and push it onto the top of σ; • REDUCE-UNARY-X (ru-x): pop the top subtree from σ, construct a new unary node labeled with X for the subtree, then push the new subtree back onto σ. The head of the new subtree is inherited from its child; • REDUCE-BINARY-{L/R}-X (rl/rr-x): pop the top two subtrees from σ, combine them into a new tree with a node labeled with X, then push the new subtree back onto σ. The left (L) and right (R) versions of the action indicate whether the head of the new subtree is inherited from its left or right child. With these actions, our parser can process trees with unary and binary branches easily. For example, in Figure 1, for the input sentence w0w1w2 and its POS tags abc, our parser can construct two parse trees using action sequences given below these trees. However, parse trees in Treebanks often contain an arbitrary number of branches. To 734 Type Feature Templates unigrams p0tc, p0wc, p1tc, p1wc, p2tc p2wc, p3tc, p3wc, q0wt, q1wt q2wt, q3wt, p0lwc, p0rwc p0uwc, p1lwc, p1rwc, p1uwc bigrams p0wp1w, p0wp1c, p0cp1w, p0cp1c p0wq0w, p0wq0t, p0cq0w, p0cq0t q0wq1w, q0wq1t, q0tq1w, q0tq1t p1wq0w, p1wq0t, p1cq0w, p1cq0t trigrams p0cp1cp2c, p0wp1cp2c, p0cp1wq0t p0cp1cp2w, p0cp1cq0t, p0wp1cq0t p0cp1wq0t, p0cp1cq0w Table 1: Baseline features, where pi represents the ith subtree in the stack σ and qi denotes the ith item in the queue β. w refers to the head lexicon, t refers to the head POS, and c refers to the constituent label. pil and pir refer to the left and right child for a binary subtree pi, and piu refers to the child of a unary subtree pi. process such trees, we employ binarization and debinarization processes described in Zhang and Clark (2009) to transform multi-branch trees into binary-branch trees and restore the generated binary trees back to their original forms. 2.2 Modeling, Training and Decoding To determine which action t ∈T should the parser perform at a state s ∈S, we use a linear model to score each possible ⟨s, t⟩combination: score(s, t) = ⃗w · φ(s, t) = X i wifi(s, t) (1) where φ(s, t) is the feature function used for mapping a state-action pair into a feature vector, and ⃗w is the weight vector. The score of a parser state s is the sum of the scores for all state-action pairs in the transition path from the initial state to the current state. Table 1 lists the feature templates used in our baseline parser, which is adopted from Zhang and Clark (2009). To train the weight vector ⃗w, we employ the averaged perceptron algorithm with early update (Collins and Roark, 2004). We employ the beam search decoding algorithm (Zhang and Clark, 2009) to balance the tradeoff between accuracy and efficiency. Algorithm 1 gives details of the process. In the algorithm, we maintain a beam (sometimes called agenda) to keep k best states at each step. The first beam0 Algorithm 1 Beam-search Constituent Parsing Input: A POS-tagged sentence, beam size k. Output: A constituent parse tree. 
1: beam0 ←{s0} ▷initialization 2: i ←0 ▷step index 3: loop 4: P ←{} ▷a priority queue 5: while beami is not empty do 6: s ←POP(beami) 7: for all possible t ∈T do 8: snew ←apply t to s 9: score snew with E.q (1) 10: insert snew into P 11: beami+1 ←k best states of P 12: sbest ←best state in beami+1 13: if sbest ∈St then 14: return sbest 15: i ←i + 1 is initialized with the initial state s0 (line 1). At step i, each of the k states in beami is extended by applying all possible actions (line 5-10). For all newly generated states, only the k best states are preserved for beami+1 (line 11). The decoding process repeats until the highest scored state in beami+1 reaches a terminal state (line 12-14). 3 Joint POS Tagging and Parsing with Non-local Features To address the drawbacks of the standard transition-based constituent parsing model (described in Section 1), we propose a model to jointly solve POS tagging and constituent parsing with non-local features. 3.1 Joint POS Tagging and Parsing POS tagging is often taken as a preliminary step for transition-based constituent parsing, therefore the accuracy of POS tagging would greatly affect parsing performance. In our experiment (described in Section 4.2), parsing accuracy would decrease by 8.5% in F1 in Chinese parsing when using automatically generated POS tags instead of gold-standard ones. To tackle this issue, we integrate POS tagging into the transition-based constituent parsing process and jointly optimize these two processes simultaneously. Inspired from Hatori et al. (2011), we modify the sh action by assigning a POS tag for the word when it is shifted: • SHIFT-X (sh-x): remove the first word from 735 β, assign POS tag X to the word and push it onto the top of σ. With such an action, POS tagging becomes a natural part of transition-based parsing. However, some feature templates in Table 1 become unavailable, because POS tags for the look-ahead words are not specified yet under the joint framework. For example, for the template q0wt , the POS tag of the first word q0 in the queue β is required, but it is not specified yet at the present state. To overcome the lack of look-ahead POS tags, we borrow the concept of delayed features originally developed for dependency parsing (Hatori et al., 2011). Features that require look-ahead POS tags are defined as delayed features. In these features, look-ahead POS tags are taken as variables. During parsing, delayed features are extracted and passed from one state to the next state. When a sh-x action is performed, the look-ahead POS tag of some delayed features is specified, therefore these delayed features can be transformed into normal features (by replacing variable with the newly specified POS tag). The remaining delayed features will be transformed similarly when their look-ahead POS tags are specified during the following parsing steps. 3.2 State Alignment Assuming an input sentence contains n words, in order to reach a terminal state, the initial state requires n sh-x actions to consume all words in β, and n −1 rl/rr-x actions to construct a complete parse tree by consuming all the subtrees in σ. However, ru-x is a very special action. It only constructs a new unary node for the subtree on top of σ, but does not consume any items in σ or β. As a result, the number of ru-x actions varies among terminal states for the same sentence. For example, the parse tree in Figure 1a contains no ru-x action, while the parse tree for the same input sentence in Figure 1b contains four ru-x actions. 
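The transition actions of Sections 2.1 and 3.1 can likewise be made concrete with a short, self-contained sketch; the function names and tuple encoding below are ours, not the authors' code. Here sh-x shifts a word and assigns its tag (so the queue holds bare words, as in the joint model), ru-x wraps the top subtree in a unary node, and rl-x/rr-x combine the top two subtrees, inheriting the head from the left or right child. Replaying the action sequence of Figure 1a in its joint sh-x form (sh-a, sh-b, rr-A, sh-c, rl-B) yields a single complete tree rooted in B whose head is w1.

from collections import namedtuple

Subtree = namedtuple("Subtree", "label head children")   # as in the sketch above
State = namedtuple("State", "stack queue")               # queue now holds bare words

def shift_x(state, pos):
    # sh-x: remove the first word from the queue, assign it POS tag x, and
    # push the resulting leaf onto the stack (the joint variant of sh).
    word = state.queue[0]
    leaf = Subtree(label=pos, head=(word, pos), children=())
    return State(state.stack + (leaf,), state.queue[1:])

def reduce_unary(state, label):
    # ru-x: wrap the top subtree in a new unary node labelled x; the head is
    # inherited from the single child (used in the derivation of Figure 1b).
    top = state.stack[-1]
    return State(state.stack[:-1] + (Subtree(label, top.head, (top,)),), state.queue)

def reduce_binary(state, label, head_from_left):
    # rl-x / rr-x: combine the top two subtrees under a new node labelled x,
    # inheriting the head from the left (rl) or right (rr) child.
    left, right = state.stack[-2], state.stack[-1]
    head = left.head if head_from_left else right.head
    return State(state.stack[:-2] + (Subtree(label, head, (left, right)),), state.queue)

# Replay the derivation of Figure 1a in joint form: sh-a, sh-b, rr-A, sh-c, rl-B.
s = State(stack=(), queue=("w0", "w1", "w2"))
s = shift_x(s, "a")
s = shift_x(s, "b")
s = reduce_binary(s, "A", head_from_left=False)   # rr-A
s = shift_x(s, "c")
s = reduce_binary(s, "B", head_from_left=True)    # rl-B
print(len(s.stack), s.stack[0].label, s.stack[0].head)   # 1 B ('w1', 'b')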
This makes the lengths of complete action sequences very different, and the parsing model has to disambiguate among terminal states with varying action sizes. Zhu et al. (2013) proposed a padding method to align terminal states containing different number of actions. The idea is to append some IDLE actions to terminal states with shorter action sequence, and make sure all terminal states contain the same number of actions (including IDLE actions). Algorithm 2 Beam-search with State Alignment Input: A word-segmented sentence, beam size k. Output: A constituent parse tree. 1: beam0 ←{s0} ▷initialization 2: for i ←0 to 2n −1 do ▷n is sentence length 3: P0 ←{}, P1 ←{} ▷two priority queues 4: while beami is not empty do 5: s ←POP(beami) 6: for t ∈{sh-x, rl-x, rr-x} do 7: snew ←apply t to s 8: score snew with E.q (1) 9: insert snew into P0 10: for all state s in P0 do 11: for all possible t ∈{ru-x} do 12: snew ←apply t to s 13: score snew with E.q (1) 14: insert snew into P1 15: insert all states of P1 into P0 16: beami+1 ←k best states of P0 17: return the best state in beam2n−1 We propose a novel method to align states during the parsing process instead of just aligning terminal states like Zhu et al. (2013). We classify all the actions into two groups according to whether they consume items in σ or β. sh-x, rl-x, and rr-x belong to consuming actions, and ru-x belongs to non-consuming action. Algorithm 2 gives the details of our method. It is based on the beam search decoding algorithm described in Algorithm 1. Different from Algorithm 1, Algorithm 2 is guaranteed to perform 2n −1 parsing steps for an input sentence containing n words (line 2), and divides each parsing step into two parsing phases. In the first phase (line 4-9), each of the k states in beami is extended by consuming actions. In the second phase (line 10-14), each of the newly generated states is further extended by nonconsuming actions. Then, all these states extended by both consuming and non-consuming actions are considered together (line 15), and only the k highest-scored states are preserved for beami+1 (line 16). After these 2n −1 parsing steps, the highest scored state in beam2n−1 is returned as the final result (line 17). Figure 2 shows the states aligning process for the two trees in Figure 1. We find that our new method aligns states with their ru-x extensions in the same beam, therefore the parsing model could make decisions on whether using ru-x actions or not within local decision 736 s0 a0,1 b1,2 A0,2 c2,3 B0,3 T0 C0,1 b1,2 D1,2 A0,2 E0,2 c2,3 F2,3 B0,3 T1 beam0 beam1 beam2 beam3 beam4 beam5 Figure 2: State alignment for the two trees in Figure 1, where s0 is the initial state, T0 and T1 are terminal states corresponding to the two trees in Figure 1. For clarity, we represent each state as a rectangle with the label of top subtree in the stack σ. We also denote sh-x with →, ru-x with ↑or ↓, rl-x with ↗, and rr-x with ↘. beams. 3.3 Feature Extension One advantage of transition-based constituent parsing is that it is capable of incorporating arbitrarily complex structural features from the already constructed subtrees in σ and unprocessed words in β. However, all the feature templates given in Table 1 are just some simple structural features. To further improve the performance of our transition-based constituent parser, we consider two group of complex structural features: non-local features (Charniak and Johnson, 2005; Collins and Koo, 2005) and semi-supervised word cluster features (Koo et al., 2008). 
Table 2 lists all the non-local features we want to use. These features have been proved very helpful for constituent parsing (Charniak and Johnson, 2005; Collins and Koo, 2005). But almost all previous work considered non-local features only in parse reranking frameworks. Instead, we attempt to extract non-local features from newly constructed subtrees during the decoding process as they become incrementally available and score newly generated parser states with them. One difficulty is that the subtrees built by our baseline parser are binary trees (only the complete parse tree is debinarized into its original multi-branch form), but most of the non-local features need to be extracted from their original multi-branch forms. To resolve this conflict, we integrate the debinarization process into the parsing process, i.e., when a (Collins and Koo, 2005) (Charniak and Johnson, 2005) Rules CoPar HeadTree Bigrams CoLenPar Grandparent Rules RightBranch Grandparent Bigrams Heavy Lexical Bigrams Neighbours Two-level Rules NGramTree Two-level Bigrams Heads Trigrams Wproj Head-Modifiers Word Table 2: Non-local features for constituent parsing. new subtree is constructed during parsing, we debinarize it immediately if it is not rooted with an intermediate node 1. The other subtrees for subsequent parsing steps will be built based on these debinarized subtrees. After the modification, our parser can extract non-local features incrementally during the parsing process. Semi-supervised word cluster features have been successfully applied to many NLP tasks (Miller et al., 2004; Koo et al., 2008; Zhu et al., 2013). Here, we adopt such features for our transition-based constituent parser. Given a largescale unlabeled corpus (word segmentation should be performed), we employ the Brown cluster algorithm (Liang, 2005) to cluster all words into a binary tree. Within this binary tree, words appear as leaves, left branches are labeled with 0 and right branches are labeled with 1. Each word can be uniquely identified by its path from the root, and represented as a bit-string. By using various length of prefixes of the bit-string, we can produce word clusters of different granularities (Miller et al., 2004). Inspired from Koo et al. (2008), we employ two types of word clusters: (1) taking 4 bit-string prefixes of word clusters as replacements of POS tags, and (2) taking 8 bit-string prefixes as replacements of words. Using these two types of clusters, we construct semi-supervised word cluster features by mimicking the template structure of the original baseline features in Table 1. 4 Experiment 4.1 Experimental Setting We conducted experiments on the Penn Chinese Treebank (CTB) version 5.1 (Xue et al., 2005): Articles 001-270 and 400-1151 were used as the training set, Articles 301-325 were used as the development set, and Articles 271-300 were used 1Intermediate nodes are produced by binarization process. 737 as the test set. Standard corpus preparation steps were performed before our experiments: empty nodes and functional tags were removed, and the unary chains were collapsed to single unary rules as Harper and Huang (2011). To build word clusters, we used the unlabeled Chinese Gigaword (LDC2003T09) and conducted Chinese word segmentation using a CRF-based segmenter. We used EVALB 2 tool to evaluate parsing performance. The metrics include labeled precision (LP), labeled recall (LR), bracketing F1 and POS tagging accuracy. We set the beam size k to 16, which brings a good balance between efficiency and accuracy. 
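For concreteness, the following sketch shows how the semi-supervised word cluster features of Section 3.3 can be derived from Brown cluster bit-strings, with 4-bit prefixes standing in for POS tags and 8-bit prefixes standing in for words. The cluster map and feature-template names are invented for illustration (real clusters would be induced from the segmented Gigaword data), and only unigram-style templates are shown.

def cluster_prefix(word, length, clusters):
    # Prefixes of different lengths yield clusters of different granularities.
    bits = clusters.get(word)
    return bits[:length] if bits else "UNK"

def cluster_features(stack_head_words, queue_words, clusters):
    # Mimic the unigram templates of Table 1: 4-bit prefixes replace POS
    # tags, 8-bit prefixes replace words.
    feats = []
    for i, w in enumerate(stack_head_words):
        feats.append(f"p{i}c4={cluster_prefix(w, 4, clusters)}")
        feats.append(f"p{i}c8={cluster_prefix(w, 8, clusters)}")
    for i, w in enumerate(queue_words):
        feats.append(f"q{i}c4={cluster_prefix(w, 4, clusters)}")
        feats.append(f"q{i}c8={cluster_prefix(w, 8, clusters)}")
    return feats

# Toy bit-strings; each word is identified by its path in the binary cluster tree.
clusters = {"w0": "110100101", "w1": "010011101", "w2": "110111010"}
print(cluster_features(["w1"], ["w2", "w0"], clusters))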
We tuned the optimal number of iterations of perceptron training algorithm on the development set. 4.2 Pipeline Approach vs Joint POS Tagging and Parsing In this subsection, we conducted some experiments to illustrate the drawbacks of the pipeline approach and the advantages of our joint approach. We built three parsing systems: Pipeline-Gold system is our baseline parser (described in Section 2) taking gold-standard POS tags as input; Pipeline system is our baseline parser taking as input POS tags automatically assigned by Stanford POS Tagger 3; and JointParsing system is our joint POS tagging and transition-based parsing system described in subsection 3.1. We trained these three systems on the training set and evaluated them on the development set. The second, third and forth rows in Table 3 show the parsing performances. We can see that the parsing F1 decreased by about 8.5 percentage points in F1 score when using automatically assigned POS tags instead of gold-standard ones, and this shows that the pipeline approach is greatly affected by the quality of its preliminary POS tagging step. After integrating the POS tagging step into the parsing process, our JointParsing system improved the POS tagging accuracy to 94.8% and parsing F1 to 85.8%, which are significantly better than the Pipeline system. Therefore, the joint parsing approach is much more effective for transition-based constituent parsing. 4.3 State Alignment Evaluation We built two new systems to verify the effectiveness of our state alignment strategy proposed in 2http://nlp.cs.nyu.edu/evalb/ 3http://nlp.stanford.edu/downloads/tagger.shtml System LP LR F1 POS Pipeline-Gold 92.2 92.5 92.4 100 Pipeline 83.9 83.8 83.8 93.0 JointParsing 85.1 86.6 85.8 94.8 Padding 85.4 86.4 85.9 94.8 StateAlign 86.9 85.9 86.4 95.2 Nonlocal 88.0 86.5 87.2 95.3 Cluster 89.0 88.3 88.7 96.3 Nonlocal&Cluster 89.4 88.7 89.1 96.2 Table 3: Parsing performance on Chinese development set. Subsection 3.2. The first system Padding extends our JointParsing system by aligning terminal states with the padding strategy proposed in Zhu et al. (2013), and the second system StateAlign extends the JointParsing system with our state alignment strategy. The fifth and sixth rows of Table 3 give the performances of these two systems. Compared with the JointParsing system which does not employ any alignment strategy, the Padding system only achieved a slight improvement on parsing F1 score, but no improvement on POS tagging accuracy. In contrast, our StateAlign system achieved an improvement of 0.6% on parsing F1 score and 0.4% on POS tagging accuracy. All these results show us that our state alignment strategy is more helpful for beam-search decoding. 4.4 Feature Extension Evaluation In this subsection, we examined the usefulness of the new non-local features and the semisupervised word cluster features described in Subsection 3.3. We built three new parsing systems based on the StateAlign system: Nonlocal system extends the feature set of StateAlign system with non-local features, Cluster system extends the feature set with semi-supervised word cluster features, and Nonlocal&Cluster system extend the feature set with both groups of features. Parsing performances of the three systems are shown in the last three rows of Table 3. 
Compared with the StateAlign system which takes only the baseline features, the non-local features improved parsing F1 by 0.8%, while the semi-supervised word cluster features result in an improvement of 2.3% in parsing F1 and an 1.1% improvement on POS tagging accuracy. When integrating both groups of features, the final parsing F1 reaches 89.1%. Al738 Type System LP LR F1 POS Our Systems Pipeline 80.0 80.3 80.1 94.0 JointParsing 82.4 83.0 82.7 95.1 Padding 82.7 83.6 83.2 95.1 StateAlign 84.2 82.9 83.6 95.5 Nonlocal 85.6 84.2 84.9 95.9 Cluster 85.2 84.5 84.9 95.8 Nonlocal&Cluster 86.6 85.9 86.3 96.0 Single Systems Petrov and Klein (2007) 81.9 84.8 83.3 Zhu et al. (2013) 82.1 84.3 83.2 Reranking Systems Charniak and Johnson (2005)∗ 80.8 83.8 82.3 Wang and Zong (2011) 85.7 Semi-supervised Systems Zhu et al. (2013) 84.4 86.8 85.6 Table 4: Parsing performance on Chinese test set. ∗Huang (2009) adapted the parse reranker to CTB5. l these results show that both the non-local features and the semi-supervised features are helpful for our transition-based constituent parser. 4.5 Final Results on Test Set In this subsection, we present the performances of our systems on the CTB test set. The corresponding results are listed in the top rows of Table 4. We can see that all these systems maintain a similar relative relationship as they do on the development set, which shows the stability of our systems. To further illustrate the effectiveness of our systems, we compare them with some state-ofthe-art systems. We group parsing systems into three categories: single systems, reranking systems and semi-supervised systems. Our Pipeline, JointParsing, Padding, StateAlign and Nonlocal systems belong to the category of single systems, because they don’t utilize any extra processing steps or resources. Our Cluster and Nonlocal&Cluster systems belong to semi-supervised systems, because both of them have employed semi-supervised word cluster features. The parsing performances of state-of-the-art systems are shown in the bottom rows of Table 4. We can see that the final F1 of our Nonlocal system reached 84.9%, and it outperforms state-of-the-art single systems by more than 1.6%. As far as we know, this is the best result on the CTB test set acquired by single systems. Our Nonlocal&Cluster system further improved the parsing F1 to 86.3%, and it outperforms all reranking systems and semisupervised systems. To our knowledge, this is the System F1 Huang and Harper (2009) 85.2 Nonlocal&Cluster 87.1 Table 5: Parsing performance based on CTB 6. best reported performance in Chinese parsing. All previous experiments were conducted on CTB 5. To check whether more labeled data can further improve our parsing system, we evaluated our Nonlocal&Cluster system on the Chinese TreeBank version 6.0 (CTB6), which is a super set of CTB5 and contains more annotated data. We used the same development set and test set as CTB5, and took all the remaining data as the new training set. Table 5 shows the parsing performances on CTB6. Our Nonlocal&Cluster system improved the final F1 to 87.1%, which is 1.9% better than the state-of-the-art performance on CTB6 (Huang and Harper, 2009). Compared with its performance on CTB5 (in Table 4), our Nonlocal&Cluster system also got 0.8% improvement. All these results show that our approach can become more powerful when given more labeled training data. 
4.6 Error Analysis To better understand the linguistic behavior of our systems, we employed the berkeley-parseranalyser tool 4 (Kummerfeld et al., 2013) to categorize the errors. Table 6 presents the average 4http://code.google.com/p/berkeley-parser-analyser/ 739 System NP Int. Unary 1-Word Span Coord Mod. Attach Verb Args Diff Label Clause Attach Noun Edge Worst 1.75 0.74 0.44 0.49 0.39 0.37 0.29 0.15 0.14 Pipeline JointParsing Padding StateAlign Nonlocal Cluster Nonlocal&Cluster Best 1.33 0.42 0.28 0.29 0.19 0.21 0.17 0.07 0.09 Table 6: Parse errors on Chinese test set. The shaded area of each bar indicates average number of that error type per sentence, and the completely full bar indicates the number in the Worst row. System VV→NN NN→VV DEC→DEG JJ→NN NR→NN DEG→DEC NN→NR NN→JJ Worst 0.26 0.18 0.15 0.09 0.08 0.07 0.06 0.05 Pipeline JointParsing Padding StateAlign Nonlocal Cluster Nonlocal&Cluster Best 0.14 0.10 0.03 0.07 0.05 0.03 0.03 0.02 Table 7: POS tagging error patterns on Chinese test set. For each error pattern, the left hand side tag is the gold-standard tag, and the right hand side is the wrongly assigned tag. number of errors for each error type by our parsing systems. We can see that almost all the Worst numbers are produced by the Pipeline system. The JointParsing system reduced errors of all types produced by the Pipeline system except for the coordination error type (Coord). The StateAlign system corrected a lot of the NP-internal errors (NP Int.). The Nonlocal system and the Cluster system produced similar numbers of errors for all error types. The Nonlocal&Cluster system produced the Best numbers for all the error types. NPinternal errors are still the most frequent error type in our parsing systems. Table 7 presents the statistics of frequent POS tagging error patterns. We can see that JointParsing system disambiguates {VV, NN} and {DEC, DEG} better than Pipeline system, but cannot deal with the NN→JJ pattern very well. StateAlign system got better results in most of the patterns, but cannot disambiguate {NR, NN} well. Nonlocal&Cluster system got the best results in disambiguating the most ambiguous POS tag pairs of {VV, NN}, {DEC, DEG}, {JJ, NN} and {NN, NR}. 5 Related Work Joint POS tagging with parsing is not a new idea. In PCFG-based parsing (Collins, 1999; Charniak, 2000; Petrov et al., 2006), POS tagging is considered as a natural step of parsing by employing lexical rules. For transition-based parsing, Hatori et al. (2011) proposed to integrate POS tagging with dependency parsing. Our joint approach can be seen as an adaption of Hatori et al. (2011)’s approach for constituent parsing. Zhang et al. (2013) proposed a transition-based constituent parser to process an input sentence from the character level. However, manual annotation of the word-internal structures need to be added to the original Treebank in order to train such a parser. Non-local features have been successfully used for constituent parsing (Charniak and Johnson, 2005; Collins and Koo, 2005; Huang, 2008). However, almost all of the previous work use nonlocal features at the parse reranking stage. The reason is that the single-stage chart-based parser cannot use non-local structural features. In contrast, the transition-based parser can use arbitrarily complex structural features. Therefore, we can concisely utilize non-local features in a single740 stage parsing system. 6 Conclusion In this paper, we proposed three improvements to transition-based constituent parsing for Chinese. 
First, we incorporated POS tagging into transitionbased constituent parsing to resolve the error propagation problem of the pipeline approach. Second, we proposed a state alignment strategy to align competing decision sequences that have different number of actions. Finally, we enhanced our parsing model by enlarging the feature set with nonlocal features and semi-supervised word cluster features. Experimental results show that all these methods improved the parsing performance substantially, and the final performance of our parsing system outperformed all state-of-the-art systems. Acknowledgments We thank three anonymous reviewers for their cogent comments. This work is funded by the DAPRA via contract HR0011-11-C-0145 entitled /Linguistic Resources for Multilingual Processing0. All opinions expressed here are those of the authors and do not necessarily reflect the views of DARPA. References Eugene Charniak and Mark Johnson. 2005. Coarseto-fine n-best parsing and maxent discriminative reranking. In Proceedings of the 43rd Annual Meeting on Association for Computational Linguistics, pages 173–180. Association for Computational Linguistics. Eugene Charniak. 2000. A maximum-entropyinspired parser. In Proceedings of the 1st North American chapter of the Association for Computational Linguistics conference, pages 132–139. Association for Computational Linguistics. Michael Collins and Terry Koo. 2005. Discriminative reranking for natural language parsing. Computational Linguistics, 31(1):25–70. Michael Collins and Brian Roark. 2004. Incremental parsing with the perceptron algorithm. In Proceedings of the 42nd Meeting of the Association for Computational Linguistics (ACL’04), Main Volume, pages 111–118, Barcelona, Spain, July. Michael Collins. 1999. HEAD-DRIVEN STATISTICAL MODELS FOR NATURAL LANGUAGE PARSING. Ph.D. thesis, University of Pennsylvania. Mary Harper and Zhongqiang Huang. 2011. Chinese statistical parsing. Handbook of Natural Language Processing and Machine Translation. Jun Hatori, Takuya Matsuzaki, Yusuke Miyao, and Jun’ichi Tsujii. 2011. Incremental joint pos tagging and dependency parsing in chinese. In Proceedings of 5th International Joint Conference on Natural Language Processing, pages 1216–1224, Chiang Mai, Thailand, November. Asian Federation of Natural Language Processing. Zhongqiang Huang and Mary Harper. 2009. Selftraining pcfg grammars with latent annotations across languages. In Proceedings of the 2009 Conference on Empirical Methods in Natural Language Processing: Volume 2-Volume 2, pages 832–841. Association for Computational Linguistics. Liang Huang. 2008. Forest reranking: Discriminative parsing with non-local features. In ACL, pages 586– 594. Ling-Ya Huang. 2009. Improve chinese parsing with max-ent reranking parser. Master Project Report, Brown University. Terry Koo, Xavier Carreras, and Michael Collins. 2008. Simple semi-supervised dependency parsing. In Proceedings of ACL-08: HLT, pages 595–603, Columbus, Ohio, June. Association for Computational Linguistics. Jonathan K. Kummerfeld, Daniel Tse, James R. Curran, and Dan Klein. 2013. An empirical examination of challenges in chinese parsing. In Proceedings of the 51st Annual Meeting of the Association for Computational Linguistics (Volume 2: Short Papers), pages 98–103, Sofia, Bulgaria, August. Association for Computational Linguistics. Percy Liang. 2005. Semi-supervised learning for natural language. Ph.D. thesis, Massachusetts Institute of Technology. Scott Miller, Jethran Guinness, and Alex Zamanian. 2004. 
Name tagging with word clusters and discriminative training. In HLT-NAACL, volume 4, pages 337–342. Citeseer. Slav Petrov and Dan Klein. 2007. Improved inference for unlexicalized parsing. In HLT-NAACL, pages 404–411. Slav Petrov, Leon Barrett, Romain Thibaux, and Dan Klein. 2006. Learning accurate, compact, and interpretable tree annotation. In Proceedings of the 21st International Conference on Computational Linguistics and the 44th annual meeting of the Association for Computational Linguistics, pages 433– 440. Association for Computational Linguistics. Kenji Sagae and Alon Lavie. 2005. A classifier-based parser with linear run-time complexity. In Proceedings of the Ninth International Workshop on Parsing Technology, pages 125–132. Association for Computational Linguistics. 741 Zhiguo Wang and Chengqing Zong. 2011. Parse reranking based on higher-order lexical dependencies. In IJCNLP, pages 1251–1259. Mengqiu Wang, Kenji Sagae, and Teruko Mitamura. 2006. A fast, accurate deterministic parser for chinese. In Proceedings of the 21st International Conference on Computational Linguistics and the 44th annual meeting of the Association for Computational Linguistics, pages 425–432. Association for Computational Linguistics. Naiwen Xue, Fei Xia, Fu-Dong Chiou, and Martha Palmer. 2005. The penn chinese treebank: Phrase structure annotation of a large corpus. Natural language engineering, 11(2):207–238. Yue Zhang and Stephen Clark. 2009. Transition-based parsing of the chinese treebank using a global discriminative model. In Proceedings of the 11th International Conference on Parsing Technologies, pages 162–171. Association for Computational Linguistics. Meishan Zhang, Yue Zhang, Wanxiang Che, and Ting Liu. 2013. Chinese parsing exploiting characters. In Proceedings of the 51st Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers), pages 125–134, Sofia, Bulgaria, August. Association for Computational Linguistics. Muhua Zhu, Yue Zhang, Wenliang Chen, Min Zhang, and Jingbo Zhu. 2013. Fast and accurate shiftreduce constituent parsing. In Proceedings of the 51st Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers), pages 434–443, Sofia, Bulgaria, August. Association for Computational Linguistics. 742
2014
69
Proceedings of the 52nd Annual Meeting of the Association for Computational Linguistics, pages 69–78, Baltimore, Maryland, USA, June 23-25 2014. c⃝2014 Association for Computational Linguistics Simple Negation Scope Resolution through Deep Parsing: A Semantic Solution to a Semantic Problem Woodley Packard♣, Emily M. Bender♣, Jonathon Read♠, Stephan Oepen♥♦, and Rebecca Dridan♥ ♣University of Washington, Department of Linguistics ♠Teesside University, School of Computing ♥University of Oslo, Department of Informatics ♦Potsdam University, Department of Linguistics [email protected], [email protected], [email protected], { oe | rdridan }@ifi.uio.no Abstract In this work, we revisit Shared Task 1 from the 2012 *SEM Conference: the automated analysis of negation. Unlike the vast majority of participating systems in 2012, our approach works over explicit and formal representations of propositional semantics, i.e. derives the notion of negation scope assumed in this task from the structure of logical-form meaning representations. We relate the task-specific interpretation of (negation) scope to the concept of (quantifier and operator) scope in mainstream underspecified semantics. With reference to an explicit encoding of semantic predicate-argument structure, we can operationalize the annotation decisions made for the 2012 *SEM task, and demonstrate how a comparatively simple system for negation scope resolution can be built from an off-the-shelf deep parsing system. In a system combination setting, our approach improves over the best published results on this task to date. 1 Introduction Recently, there has been increased community interest in the theoretical and practical analysis of what Morante and Sporleder (2012) call modality and negation, i.e. linguistic expressions that modulate the certainty or factuality of propositions. Automated analysis of such aspects of meaning is important for natural language processing tasks which need to consider the truth value of statements, such as for example text mining (Vincze et al., 2008) or sentiment analysis (Lapponi et al., 2012). Owing to its immediate utility in the curation of scholarly results, the analysis of negation and so-called hedges in bio-medical research literature has been the focus of several workshops, as well as the Shared Task at the 2011 Conference on Computational Language Learning (CoNLL). Task 1 at the First Joint Conference on Lexical and Computational Semantics (*SEM 2012; Morante and Blanco, 2012) provided a fresh, principled annotation of negation and called for systems to analyze negation—detecting cues (affixes, words, or phrases that express negation), resolving their scopes (which parts of a sentence are actually negated), and identifying the negated event or property. The task organizers designed and documented an annotation scheme (Morante and Daelemans, 2012) and applied it to a little more than 100,000 tokens of running text by the novelist Sir Arthur Conan Doyle. While the task and annotations were framed from a semantic perspective, only one participating system actually employed explicit compositional semantics (Basile et al., 2012), with results ranking in the middle of the 12 participating systems. Conversely, the bestperforming systems approached the task through machine learning or heuristic processing over syntactic and linguistically relatively coarse-grained representations; see § 2 below. 
Example (1), where ⟨⟩marks the cue and {} the in-scope elements, illustrates the annotations, including how negation inside a noun phrase can scope over discontinuous parts of the sentence.1 (1) {The German} was sent for but professed to {know} ⟨nothing⟩{of the matter}. In this work, we return to the 2012 *SEM task from a deliberately semantics-centered point of view, focusing on the hardest of the three sub-problems: scope resolution.2 Where Morante and Daelemans (2012) characterize negation as an “extra-propositional aspect of meaning” (p.1563), 1Our running example is a truncated variant of an item from the Shared Task training data. The remainder of the original sentence does not form part of the scope of this cue. 2Resolving negation scope is a more difficult sub-problem at least in part because (unlike cue and event identification) it is concerned with much larger, non-local and often discontinuous parts of each utterance. This intuition is confirmed by Read et al. (2012), who report results for each sub-problem using gold-standard inputs; in this setup, scope resolution showed by far the lowest performance levels. 69 we in fact see it as a core piece of compositionally constructed logical-form representations. Though the task-specific concept of scope of negation is not the same as the notion of quantifier and operator scope in mainstream underspecified semantics, we nonetheless find that reviewing the 2012 *SEM Shared Task annotations with reference to an explicit encoding of semantic predicate-argument structure suggests a simple and straightforward operationalization of their concept of negation scope. Our system implements these findings through a notion of functorargument ‘crawling’, using as our starting point the underspecified logical-form meaning representations provided by a general-purpose deep parser. Our contributions are three-fold: Theoretically, we correlate the structures at play in the Morante and Daelemans (2012) view on negation with formal semantic analyses; methodologically, we demonstrate how to approach the task in terms of underspecified, logical-form semantics; and practically, our combined system retroactively ‘wins’ the 2012 *SEM Shared Task. In the following sections, we review related work (§ 2), detail our own setup (§ 3), and present and discuss our experimental results (§ 4 and § 5, respectively). 2 Related Work Read et al. (2012) describe the best-performing submission to Task 1 of the 2012 *SEM Conference. They investigated two approaches for scope resolution, both of which were based on syntactic constituents. Firstly, they created a set of 11 heuristics that describe the path from the preterminal of a cue to the constituent whose projection is predicted to match the scope. Secondly they trained an SVM ranker over candidate constituents, generated by following the path from a cue to the root of the tree and describing each candidate in terms of syntactic properties along the path and various surface features. Both approaches attempted to handle discontinuous instances by applying two heuristics to the predicted scope: (a) removing preceding conjuncts from the scope when the cue is in a conjoined phrase and (b) removing sentential adverbs from the scope. The ranking approach showed a modest advantage over the heuristics (with F1 equal to 77.9 and 76.7, respectively, when resolving the scope of gold-standard cues in evaluation data). Read et al. 
(2012) noted however that the annotated scopes did not align with the Shared Task–provided constituents for 14% of the instances in the training data, giving an F1 upper-bound of around 86.0 for systems that depend on those constituents. Basile et al. (2012) present the only submission to Task 1 of the 2012 *SEM Conference which employed compositional semantics. Their scope resolution pipeline consisted primarily of the C&C parser and Boxer (Curran et al., 2007), which produce Discourse Representation Structures (DRSs). The DRSs represent negation explicitly, including representing other predications as being within the scope of negation. Basile et al. (2012) describe some amount of tailoring of the Boxer lexicon to include more of the Shared Task scope cues among those that produce the negation operator in the DRSs, but otherwise the system appears to directly take the notion of scope of negation from the DRS and project it out to the string, with one caveat: As with the logical-forms representations we use, the DRS logical forms do not include function words as predicates in the semantics. Since the Shared Task gold standard annotations included such arguably semantically vacuous (see Bender, 2013, p.107) words in the scope, further heuristics are needed to repair the string-based annotations coming from the DRS-based system. Basile et al. resort to counting any words between in-scope tokens which are not themselves cues as in-scope. This simple heuristic raises their F1 for full scopes from 20.1 to 53.3 on system-predicted cues. 3 System Description The new system described here is what we call the MRS Crawler. This system operates over the normalized semantic representations provided by the LinGO English Resource Grammar (ERG; Flickinger, 2000).3 The ERG maps surface strings to meaning representations in the format of Minimal Recursion Semantics (MRS; Copestake et al., 2005). MRS makes explicit predicate-argument relations, as well as partial information about scope (see below). We used the grammar together with one of its pre-packaged conditional Maximum Entropy models for parse ranking, trained on a combination of encyclopedia articles and tourism brochures. Thus, the deep parsing frontend system to our MRS Crawler has not been 3In our experiments, we use the 1212 release of the ERG, in combination with the ACE parser (http://sweaglesw .org/linguistics/ace/). The ERG and ACE are DELPHIN resources; see http://www.delph-in.net. 70 ⟨h1, h4:_the_q⟨0:3⟩(ARG0 x6, RSTR h7, BODY h5), h8:_german_n_1⟨4:10⟩(ARG0 x6), h9:_send_v_for⟨15:19⟩(ARG0 e10, ARG1 , ARG2 x6), h2:_but_c⟨24:27⟩(ARG0 e3, L-HNDL h9, R-HNDL h14), h14:_profess_v_to⟨28:37⟩(ARG0 e13, ARG1 x6, ARG2 h15), h16:_know_v_1⟨41:45⟩(ARG0 e17, ARG1 x6, ARG2 x18), h20:_no_q⟨46:53⟩(ARG0 x18, RSTR h21, BODY h22), h19:thing⟨46:53⟩(ARG0 x18), h19:_of_p⟨54:56⟩(ARG0 e23, ARG1 x18, ARG2 x24), h25:_the_q⟨57:60⟩(ARG0 x24, RSTR h27, BODY h26), h28:_matter_n_of⟨61:68⟩(ARG0 x24, ARG1 ) { h27 =q h28, h21 =q h19, h15 =q h16, h7 =q h8, h1 =q h2 } ⟩ Figure 1: MRS analysis of our running example (1). adapted to the task or its text type; it is applied in an ‘off the shelf’ setting. We combine our system with the outputs from the best-performing 2012 submission, the system of Read et al. (2012), firstly by relying on the latter for system negation cue detection,4 and secondly as a fall-back in system combination as described in § 3.4 below. 
Scopal information in MRS analyses delivered by the ERG fixes the scope of operators—such as negation, modals, scopal adverbs (including subordinating conjunctions like while), and clause-embedding verbs (e.g. believe)—based on their position in the constituent structure, while leaving the scope of quantifiers (e.g. a or every, but also other determiners) free. From these underspecified representations of possible scopal configurations, a scope resolution component can spell out the full range of fully-connected logical forms (Koller and Thater, 2005), but it turns out that such enumeration is not relevant here: the notion of scope encoded in the Shared Task annotations is not concerned with the relative scope of quantifiers and negation, such as the two possible readings of (2), represented informally below:5

(2) Everyone didn’t leave.
a. ∀(x)¬leave(x) ∼ Everyone stayed.
b. ¬∀(x)leave(x) ∼ At least some stayed.

[Footnote 5: In other words, a possible semantic interpretation of the (string-based) Shared Task annotation guidelines and data is in terms of a quantifier-free approach to meaning representation, or in terms of one where quantifier scope need not be made explicit (as once suggested by, among others, Alshawi, 1992). From this interpretation, it follows that the notion of scope assumed in the Shared Task does not encompass interactions of negation operators and quantifiers.]

However, as shown below, the information about fixed scopal elements in an underspecified MRS is sufficient to model the Shared Task annotations.

3.1 MRS Crawling

Fig. 1 shows the ERG semantic analysis for our running example. The heart of the MRS is a multiset of elementary predications (EPs). Each elementary predication includes a predicate symbol, a label (or ‘handle’, prefixed to predicates with a colon in Fig. 1), and one or more argument positions, whose values are semantic variables. Eventualities (ei) in MRS denote states or activities, while instance variables (xj) typically correspond to (referential or abstract) entities. All EPs have the argument position ARG0, called the distinguished variable (Oepen and Lønning, 2006), and no variable is the ARG0 of more than one non-quantifier EP. The arguments of one EP are linked to the arguments of others either directly (sharing the same variable as their value), or indirectly (through so-called ‘handle constraints’, where =q in Fig. 1 denotes equality modulo quantifier insertion). Thus a well-formed MRS forms a connected graph. In addition, the grammar links the EPs to the elements of the surface string that give rise to them, via character offsets recorded in each EP (shown in angle brackets in Fig. 1).

For the purposes of the present task, we take a negation cue as our entry point into the MRS graph (as our initial active EP), and then move through the graph according to the following simple operations to add EPs to the active set:

Argument Crawling: Add to the scope all EPs whose distinguished variable or label is an argument of the active EP; for arguments of type hk, treat any =q constraints as label equality.

Label Crawling: Add all EPs whose label is identical to that of the active EP.

Functor Crawling: Add all EPs that take the distinguished variable or label of the active EP as an argument (directly or via =q constraints).

Our MRS crawling algorithm is sketched in Fig. 2.

1: Activate the cue EP
2: if the cue EP is a quantifier then
3:   Activate EPs reached by functor crawling from the distinguished variable (ARG0) of the cue EP
4: end if
5: repeat
6:   for each active EP X do
7:     Activate EPs reached by argument crawling or label crawling unless they are co-modifiers of the negation cue.a
8:     Activate EPs reached by functor crawling if they are modal verbs, or one of the following subordinating conjunctions reached by ARG1: whether, when, because, to, with, although, unless, until, or as.
9:   end for
10: until a fixpoint is reached (no additional EPs were activated)
11: Deactivate zero-pronoun EPs (from imperative constructions)
12: Apply semantically empty word handling rules (iterate until a fixpoint is reached)
13: Apply punctuation heuristics

Figure 2: Algorithm for scope detection by MRS crawling

[Footnote a: Formally: If an EP shares its label with the negation cue, or is a quantifier whose restriction (RSTR) is =q equated with the label of the negation cue, it cannot be in-scope unless its ARG0 is an argument of the negation cue, or the ARG0 of the negation cue is one of its own arguments. See § 3.3 for elaboration.]
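To make the control flow of Fig. 2 concrete, the following sketch implements the three crawling operations and the fixpoint loop over the toy EP encoding introduced after Fig. 1. It deliberately omits the co-modifier exception (footnote a), the restriction of functor crawling to modals and selected subordinating conjunctions, and the post-processing in lines 11–13, so it is an approximation of the algorithm rather than the system itself.

```python
# Assumes the EP class and the eps / hcons objects from the previous sketch.

def resolve(h):
    """Treat a =q-constrained handle as equal to the label it is qeq to."""
    return hcons.get(h, h)

def argument_crawl(active_ep, eps):
    """EPs whose distinguished variable (ARG0) or label is an argument of the active EP."""
    targets = {resolve(v) for role, v in active_ep.args.items() if role != "ARG0"}
    return {ep for ep in eps if ep.args.get("ARG0") in targets or ep.label in targets}

def label_crawl(active_ep, eps):
    """EPs sharing the active EP's label (e.g. intersective modifiers)."""
    return {ep for ep in eps if ep.label == active_ep.label}

def functor_crawl(active_ep, eps):
    """EPs that take the active EP's ARG0 or label as one of their arguments."""
    handles = {active_ep.args.get("ARG0"), active_ep.label}
    return {ep for ep in eps
            if any(v in handles or resolve(v) in handles for v in ep.args.values())}

def crawl(cue_ep, eps):
    """Simplified version of Fig. 2: no co-modifier blocking, unrestricted crawling."""
    active = {cue_ep}
    if cue_ep.pred.endswith("_q"):                 # quantifier cue such as _no_q
        active |= functor_crawl(cue_ep, eps)
    changed = True
    while changed:                                 # repeat until a fixpoint is reached
        reached = set()
        for ep in active:
            reached |= argument_crawl(ep, eps) | label_crawl(ep, eps)
        changed = not reached <= active
        active |= reached
    return active

# Example: crawl from the _no_q cue of Fig. 1 and project back to character spans.
scope = crawl(eps[1], eps)                         # eps[1] is the _no_q EP
print(sorted((ep.cfrom, ep.cto, ep.pred) for ep in scope))
```

Run on the _no_q cue of the toy MRS, this paints _know_v_1, thing, _of_p, _the_q⟨57:60⟩, and _matter_n_of, mirroring the relevant portion of the trace in § 3.1 (the subject EPs are absent from the toy subset).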
To illustrate how the rules work, we will trace their operation in the analysis of example (1), i.e. traverse the EP graph in Fig. 1. The negation cue is nothing, from character position 46 to 53. This leads us to _no_q as our entry point into the graph. Our algorithm states that for this type of cue (a quantifier) the first step is functor crawling (see § 3.3 below), which brings _know_v_1 into the scope. We proceed with argument crawling and label crawling, which pick up _the_q⟨0:3⟩ and _german_n_1 as the ARG1. Further, as the ARG2 of _know_v_1, we reach thing and through recursive invocation we activate _of_p and, in yet another level of recursion, _the_q⟨57:60⟩ and _matter_n_of. At this point, crawling has no more links to follow. Thus, the MRS crawling operations ‘paint’ a subset of the MRS graph as in-scope for a given negation cue.

3.2 Semantically Empty Word Handling

Our crawling rules operate on semantic representations, but the annotations are with reference to the surface string. Accordingly, we need projection rules to map from the ‘painted’ MRS to the string. We can use the character offsets recorded in each EP to project the scope to the string. However, the string-based annotations also include words which the ERG treats as semantically vacuous. Thus in order to match the gold annotations, we define a set of heuristics for when to count vacuous words as in scope. In (1), there are no semantically empty words in-scope, so we illustrate these heuristics with another example:

(3) “I trust that {there is} ⟨nothing⟩ {of consequence which I have overlooked}?”

The MRS crawling operations discussed above paint the EPs corresponding to is, thing, of, consequence, I, and overlooked as in-scope (underlined in (3)). Conversely, the ERG treats the words that, there, which, and have as semantically empty. Of these, we need to add all except that to the scope. Our vacuous word handling rules use the syntactic structure provided by the ERG as scaffolding to help link the scope information gleaned from contentful words to vacuous words. Each node in the syntax tree is initially colored either in-scope or out-of-scope in agreement with the decision made by the crawler about the lexical head of the corresponding subtree.
A semantically empty word is determined to be in-scope if there is an in-scope syntax tree node in the right position relative to it, as governed by a short list of templates organized by the type of the semantically empty word (particles, complementizers, non-referential pronouns, relative pronouns, and auxiliary verbs). As an example, the rule for auxiliary verbs like have in our example (3) is that they are in scope when their verb phrase complement is in scope. Since overlooked is marked as in-scope by the crawler, the semantically empty have becomes in-scope as well. Sometimes the rules need to be iterated. For example, the main rule for relative pronouns is that they are in-scope when they fill a gap in an in-scope constituent; which fills a gap in the constituent have overlooked, but since have is the (syntactic) lexical head of that constituent, the verb phrase is not considered in-scope the first time the rules are tried. Similar rules deal with that (complementizers are in-scope when the complement phrase is an argument of an in-scope verb, which is not the case here) and there (non-referential pronouns are in-scope when they are the subject of an in-scope VP, which is true here).

3.3 Re-Reading the Annotation Guidelines

Our MRS crawling algorithm was defined by looking at the annotated data rather than the annotation guidelines for the Shared Task (Morante et al., 2011). Nonetheless, our algorithm can be seen as a first-pass formalization of the guidelines. In this section, we briefly sketch how our algorithm corresponds to different aspects of the guidelines.

For negated verbs, the guidelines state that “If the negated verb is the main verb in the sentence, the entire sentence is in scope.” (Morante et al., 2011, p.17). In terms of our operations defined over semantic representations, this is rendered as follows: all arguments of the negated verb are selected by argument crawling, all intersective modifiers by label crawling, and functor crawling (Fig. 2, line 8) captures modal auxiliaries and non-intersective modifiers. The guidelines treat predicative adjectives under a separate heading from verbs, but describe the same desired annotations (scope over the whole clause; ibid., p.20). Since these structures are analogous in the semantic representations, the same operations that handle negated verbs also handle negated predicative adjectives correctly.

For negated subjects and objects, the guidelines state that the negation scopes over “all the clause” and “the clause headed by the verb” (Morante et al., 2011, p.19), respectively. The examples given in the annotation guidelines suggest that these are in fact meant to refer to the same thing. The negation cue for a negated nominal argument will appear as a quantifier EP in the MRS, triggering line 3 of our algorithm. This functor crawling step will get to the verb’s EP, and from there, the process is the same as the last two cases. In contrast to subjects and objects, negation of a clausal argument is not treated as negation of the verb (ibid., p.18). Since in this case the negation cue will not be a quantifier in the MRS, there will be no functor crawling to the verb’s EP.

For negated modifiers, the situation is somewhat more complex, and this is a case where our crawling algorithm, developed on the basis of the annotated data, does not align directly with the guidelines as given.
The guidelines state that negated attributive adjectives have scope over the entire NP (including the determiner) (ibid., p.20) and analogously negated adverbs have scope over the entire clause (ibid., p.21). However, the annotations are not consistent, especially with respect to the treatment of negated adjectives: while the head noun and determiner (if present) are typically annotated as in scope, other co-modifiers, especially long, post-nominal modifiers (including relative clauses), are not necessarily included:

(4) “A dabbler in science, Mr. Holmes, a picker up of shells on the shores of {the} great ⟨un⟩{known ocean}.
(5) Our client looked down with a rueful face at {his} own ⟨un⟩{conventional appearance}.
(6) Here was {this} ⟨ir⟩{reproachable Englishman} ready to swear in any court of law that the accused was in the house all the time.
(7) {There is}, on the face of it, {something} ⟨un⟩{natural about this strange and sudden friendship between the young Spaniard and Scott Eccles}.

Furthermore, the guidelines treat relative clauses as subordinate clauses and thus negation inside a relative clause is treated as bound to that clause only, and includes neither the head noun of the relative clause nor any of its other dependents in its scope. However, from the perspective of MRS, a negated relative clause is indistinguishable from any other negated modifier of a noun. This treatment of relative clauses (as well as the inconsistencies in other forms of co-modification) is the reason for the exception noted at line 7 of Fig. 2. By disallowing the addition of EPs to the scope if they share the label of the negation cue but are not one of its arguments, we block the head noun’s EP (and any EPs only reachable from it) in cases of relative clauses where the head verb inside the relative clause is negated. It also blocks co-modifiers like great, own, and the phrases headed by ready and about in (4)–(7). As illustrated in these examples, this is correct some but not all of the time. Having been unable to find a generalization capturing when co-modifiers are annotated as in scope, we stuck with this approximation.

For negation within clausal modifiers of verbs, the annotation guidelines have further information, but again, our existing algorithm has the correct behavior: The guidelines state that a negation cue inside of the complement of a subordinating conjunction (e.g. if) has scope only over the subordinate clause (ibid., p.18 and p.26). The ERG treats all subordinating conjunctions as two-place predicates taking two scopal arguments. Thus, as with clausal complements of clause-embedding verbs, the embedding subordinating conjunction and any other arguments it might have are inaccessible, since functor crawling is restricted to a handful of specific configurations.

As is usually the case with exercises in formalization, our crawling algorithm generalizes beyond what is given explicitly in the annotation guidelines. For example, all arguments that are treated as semantically nominal (including PP arguments where the preposition is semantically null) are treated in the same way as subjects and objects; similarly, all arguments which are semantically clausal (including certain PP arguments) are handled the same way as clausal complements. This is possible because we take advantage of the high degree of normalization that the ERG accomplishes in mapping to the MRS representation.
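Returning to the co-modifier exception invoked at line 7 of Fig. 2 (footnote a), that condition can be read as a single predicate over EPs. The sketch below is our rendering of it against the toy EP encoding used in the earlier sketches; it is an interpretation of the footnote, not the system’s actual implementation.

```python
def blocked_as_comodifier(ep, cue_ep):
    """Footnote a of Fig. 2, as we read it: an EP that shares the cue's label, or is a
    quantifier whose RSTR is =q-equated with the cue's label, may only enter the scope
    if it is an argument of the cue or takes the cue's ARG0 as one of its own arguments.
    Assumes the EP class and resolve() helper from the earlier sketches."""
    shares_label = ep.label == cue_ep.label
    restricts_cue_label = (ep.pred.endswith("_q")
                           and resolve(ep.args.get("RSTR")) == cue_ep.label)
    if not (shares_label or restricts_cue_label):
        return False                     # the exception does not apply to this EP
    is_argument_of_cue = ep.args.get("ARG0") in cue_ep.args.values()
    takes_cue_arg0 = cue_ep.args.get("ARG0") in ep.args.values()
    return not (is_argument_of_cue or takes_cue_arg0)
```

In the full algorithm, EPs reached by argument or label crawling would simply be skipped when this predicate holds, which is what keeps the head noun of a negated relative clause, and co-modifiers such as great and own in (4)–(5), out of the predicted scope.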
There are also cases where we are more specific. The guidelines do not handle coordination in detail, except to state that in coordinated clauses negation is restricted to the clause it appears in (ibid., p.17–18) and to include a few examples of coordination under the heading ‘ellipsis’. In the case of VP coordination, our existing algorithm does not need any further elaboration to pick up the subject of the coordinated VP but not the non-negated conjunct, as shown in discussion of (1) in § 3.1 above. In the case of coordination of negated NPs, recall that to reach the main portion of the negated scope we must first apply functor crawling. The functor crawling procedure has a general mechanism to transparently continue crawling up through coordinated structures while blocking future crawling from traversing them again.6

[Footnote 6: This allows ate to be reached in We ate bread but no fish., while preventing but and bread from being reached, which they otherwise would via argument crawling from ate.]

On the other hand, there are some cases in the annotation guidelines which our algorithm does not yet handle. We have not yet provided any analysis of the special cases for save and except discussed in Morante et al., 2011, pp.22–23, and also do not have a means of picking out the overt verb in gapping constructions (p.24).

Finally, we note that even carefully worked out annotation guidelines such as these are never followed perfectly consistently by the human annotators who apply them. Because our crawling algorithm so closely models the guidelines, this puts our system in an interesting position to provide feedback to the Shared Task organizers.

3.4 Fall-Back Configurations

The close match between our crawling algorithm and the annotation guidelines supported by the mapping to MRS provides for very high precision and recall when the analysis engine produces the desired MRS.7 However, the analysis engine does not always provide the desired analysis, largely because of idiosyncrasies of the genre (e.g. vocatives appearing mid-sentence) that are either not handled by the grammar or not well modeled in the parse selection component. In addition, as noted above, there are a handful of negation cues we do not yet handle. Thus, we also tested fall-back configurations which use scope predictions based on MRS in some cases, and scope predictions from the system of Read et al. (2012) in others.

[Footnote 7: And in fact, the task is somewhat noise-tolerant: some parse selection decisions are independent of each other, and a mistake in a part of the analysis far enough away from the negation cue does not harm performance.]

Our first fall-back configuration (CrawlerN in Table 1) uses MRS-based predictions whenever there is a parse available and the cue is one that our system handles. Sometimes, the analysis picked by the ERG’s statistical model is not the correct analysis for the given context. To combat such suboptimal parse selection performance, we investigated using the probability of the top ranked analysis (as determined by the parse selection model and conditioned on the sentence) as a confidence metric. Our second fall-back configuration (CrawlerP in Table 1) uses MRS-based predictions when there is a parse available whose conditional probability is at least 0.5.8

[Footnote 8: This threshold was determined empirically on the development data. We also experimented with other confidence metrics—the probability ratio of the top-ranked and second parse or the entropy over the probability distribution of the top 10 parses—but found no substantive differences.]

4 Experiments

We evaluated the performance of our system using the Shared Task development and evaluation data (respectively CDD and CDE in Table 1). Since we do not attempt to perform cue detection, we report performance using gold cues and also using the system cues predicted by Read et al. (2012). We used the official Shared Task evaluation script to compute all scores.

4.1 Data Sets

The Shared Task data consists of chapters from the Adventures of Sherlock Holmes mystery novels and short stories.
As such, the text is carefully edited turn-of-the-20th-century British English,9 annotated with token-level information about the cues and scopes in every negated sentence. The training set contains 848 negated sentences, the development set 144, and the evaluation set 235. As there can be multiple usages of negation in one sentence, this corresponds to 984, 173, and 264 instances, respectively.

[Footnote 9: In contrast, the ERG was engineered for the analysis of contemporary American English, and an anecdotal analysis of parse failures and imperfect top-ranked parses suggests that the archaic style in the 2012 *SEM Shared Task texts has a strong adverse effect on the parser.]

Being rule-based, our system does not require any training data per se. However, the majority of our rule development and error analysis were performed against the designated training data. We used the designated development data for a single final round of error analysis and corrections. The system was declared frozen before running with the formal evaluation data. All numbers reported here reflect this frozen system.10

[Footnote 10: The code and data are available from http://www.delph-in.net/crawler/, for replicability (Fokkens et al., 2013).]

                         Gold Cues                        System Cues
                  Scopes            Tokens          Scopes            Tokens
Set  Method     Prec  Rec   F1    Prec  Rec   F1    Prec  Rec   F1    Prec  Rec   F1
CDD  Ranker    100.0 68.5 81.3    84.8 86.8 85.8    91.7 66.1 76.8    79.5 84.9 82.1
     Crawler   100.0 53.0 69.3    89.3 67.0 76.6    90.8 53.0 66.9    84.7 65.9 74.1
     CrawlerN  100.0 64.9 78.7    89.0 83.5 86.1    90.8 64.3 75.3    82.6 82.1 82.3
     CrawlerP  100.0 70.2 82.5    86.4 86.8 86.6    91.2 67.9 77.8    80.0 84.9 82.4
     Oracle    100.0 76.8 86.9    91.5 89.1 90.3
CDE  Ranker     98.8 64.3 77.9    85.3 90.7 87.9    87.4 61.5 72.2    82.0 88.8 85.3
     Crawler   100.0 44.2 61.3    85.8 68.4 76.1    87.8 43.4 58.1    78.8 66.7 72.2
     CrawlerN   98.6 56.6 71.9    83.8 88.4 86.1    86.0 54.2 66.5    78.4 85.7 81.9
     CrawlerP   98.8 65.5 78.7    86.1 90.4 88.2    87.6 62.7 73.1    82.6 88.5 85.4
     Oracle    100.0 70.3 82.6    89.5 93.1 91.3

Table 1: Scope resolution performance of various configurations over each subset of the Shared Task data. Ranker refers to the system of Read et al. (2012); Crawler refers to our current system in isolation, or falling back to the Ranker prediction either when the sentence is not covered by the parser (CrawlerN), or when the parse probability is predicted to be less than 0.5 (CrawlerP); finally, Oracle simulates best possible selection among the Ranker and Crawler predictions (and would be ill-defined on system cues).

4.2 Results

Table 1 presents the results of our various configurations in terms of both (a) whole scopes (i.e. a true positive is only generated when the predicted scope matches the gold scope exactly) and (b) in-scope tokens (i.e. a true positive for every token the system correctly predicts to be in scope). The table also details the performance upper-bound for system combination, in which an oracle selects the system prediction which scores the greater token-wise F1 for each gold cue.

The low recall levels for Crawler can be mostly
attributed to imperfect parser coverage. CrawlerN, which falls back just for parse failure, brings the recall back up, and results in F1 levels closer to the system of Read et al. (2012), albeit still not quite advancing the state of the art (except over the development set). Our best results are from CrawlerP, which outperforms all other configurations on the development and evaluation sets. The Oracle results are interesting because they show that there is much more to be gained in combining our semantics-based system with the Read et al. (2012) syntactically-focused system. Further analysis of these results to draw out the patterns of complementary errors and strengths is a promising avenue for future work.

4.3 Error Analysis

To shed more light on specific strengths and weaknesses of our approach, we performed a manual error analysis of scope predictions by Crawler, starting from gold cues so as to focus in-depth analysis on properties specific to scope resolution over MRSs. This analysis was performed on CDD, in order to not bar future work on this task. Of the 173 negation cue instances in CDD, Crawler by itself makes 94 scope predictions that exactly match the gold standard. In comparison, the system of Read et al. (2012) accomplishes 119 exact scope matches, of which 80 are shared with Crawler; in other words, there are 14 cue instances (or 8% of all cues) in which our approach can improve over the best-performing syntax-based submission to the original Shared Task.

We reviewed the 79 negation instances where Crawler made a wrong prediction in terms of exact scope match, categorizing the source of failure into five broad error types:

(1) Annotation Error In 11% of all instances, we consider the annotations erroneous or inconsistent. These judgments were made by two of the authors, who both were familiar with the annotation guidelines and conventions observable in the data. For example, Morante et al. (2011) unambiguously state that subordinating conjunctions shall not be in-scope (8), whereas relative pronouns should be (9), and a negated predicative argument to the copula must scope over the full clause (10):

(8) It was after nine this morning {when we} reached his house and {found} ⟨neither⟩ {you} ⟨nor⟩ {anyone else inside it}.
(9) “We can imagine that in the confusion of flight something precious, something which {he could} ⟨not⟩ {bear to part with}, had been left behind.
(10) He said little about the case, but from that little we gathered that he also was not ⟨dis⟩{satisfied} at the course of events.

(2) Parser Failure Close to 30% of Crawler failures reflect lacking coverage in the ERG parser, i.e. inputs for which the parser does not make available an analysis (within certain bounds on time and memory usage).11 In this work, we have treated the ERG as an off-the-shelf system, but coverage could certainly be straightforwardly improved by adding analyses for phenomena particular to turn-of-the-20th-century British English.

[Footnote 11: Overall parsing coverage on this data is about 86%, but of course all parser failures on sentences containing negation surface in our error analysis of Crawler in isolation.]

(3) MRS Inadequacy Another 33% of our false scope predictions are Crawler-external, viz. owing to erroneous input MRSs due to imperfect disambiguation by the parser or other inadequacies in the parser output. Again, these judgments (assigning blame outside our own work) were double-checked by two authors, and we only counted MRS imperfections that actually involve the cue or in-scope elements. Here, we could anticipate improvements by training the parse ranker on in-domain data or otherwise adapting it to this task.
(4) Cue Selection In close to 9% of all cases, there is a valid MRS, but Crawler fails to pick out an initial EP that corresponds to the negation cue. This first type of genuine crawling failure often relates to cues expressed as affixation (11), as well as to rare usages of cue expressions that predominantly occur with different categories, e.g. neither as a generalized quantifier (12):

(11) Please arrange your thoughts and let me know, in their due sequence, exactly what those events are {which have sent you out} ⟨un⟩{brushed} and unkempt, with dress boots and waistcoat buttoned awry, in search of advice and assistance.
(12) You saw yourself {how} ⟨neither⟩ {of the inspectors dreamed of questioning his statement}, extraordinary as it was.

(5) Crawler Deficiency Finally, a little more than 16% of incorrect predictions we attribute to our crawling rules proper, where we see many instances of under-coverage of MRS elements (13, 14) and a few cases of extending the scope too wide (15). In the examples below, erroneous scope predictions by Crawler are indicated through underlining. Hardly any of the errors in this category, however, involve semantically vacuous tokens.

(13) He in turn had friends among the indoor servants who unite in {their} fear and ⟨dis⟩{like of their master}.
(14) He said little about the case, but from that little we gathered that {he also was} ⟨not⟩ {dissatisfied at the course of events}.
(15) I tell you, sir, {I could}n’t move a finger, ⟨nor⟩ {get my breath}, till it whisked away and was gone.

                  Scopes              Tokens
Set  Method     Prec  Rec   F1      Prec  Rec   F1
CDE  Boxer      76.1  41.0  53.3    69.2  82.3  75.2
     Crawler    87.8  43.4  58.1    78.8  66.7  72.2
     CrawlerP   87.6  62.7  73.1    82.6  88.5  85.4

Table 2: Comparison to Basile et al. (2012).

5 Discussion and Comparison

The example in (1) nicely illustrates the strengths of the MRS Crawler and of the abstraction provided by the deep linguistic analysis made possible by the ERG. The negated verb in that sentence is know, and its first semantic argument is The German. This semantic dependency is directly and explicitly represented in the MRS, but the phrase expressing the dependent is not adjacent to the head in the string. Furthermore, even a system using syntactic structure to model scope would be faced with a more complicated task than our crawling rules: At the level of syntax the dependency is mediated by both verb phrase coordination and the control verb profess, as well as by the semantically empty infinitival marker to.

The system we propose is very similar in spirit to that of Basile et al. (2012). Both systems map from logical forms with explicit representations of scope of negation out to string-based annotations in the format provided by the Shared Task gold standard. The main points of difference are in the robustness of the system and in the degree of tailoring of both the rules for determining scope on the logical form level and the rules for handling semantically vacuous elements. The system description in Basile et al. (2012) suggests relatively little tailoring at either level: aside from adjustments to the Boxer lexicon to make more negation cues take the form of the negation operator in the DRS, the notion of scope is directly that given in the DRS. Similarly, their heuristic for picking up semantically vacuous words is string-based and straightforward.
Our system, on the other hand, models the annotation guidelines more closely in the definition of the MRS crawling rules, and has more elaborated rules for handling semantically empty words. The Crawler alone is less robust than the Boxer-based system, returning no output for 29% of the cues in CDE. These factors all point to higher precision and lower recall for the Crawler compared to the Boxer-based system. At the token level, that is what we see. Since full-scope recall depends on token-level precision, the Crawler does better across the board at the full-scope level. A comparison of the results is shown in Table 2. A final key difference between our results and those of Basile et al. (2012) is the cascading with a fall-back system. Presumably a similar system combination strategy could be pursued with the Boxer-based system in place of the Crawler. 6 Conclusion and Outlook Our motivation in this work was to take the design of the 2012 *SEM Shared Task on negation analysis at face value—as an overtly semantic problem that takes a central role in our long-term pursuit of language understanding. Through both theoretical and practical reflection on the nature of representations at play in this task, we believe we have demonstrated that explicit semantic structure will be a key driver of further progress in the analysis of negation. We were able to closely align two independently developed semantic analyses— the negation-specific annotations of Morante et al. (2011), on the one hand, and the broad-coverage, MRS meaning representations of the ERG, on the other hand. In our view, the conceptual correlation between these two semantic views on negation analysis reinforces their credibility. Unlike the rather complex top-performing systems from the original 2012 competition, our MRS Crawler is defined by a small set of general rules that operate over general-purpose, explicit meaning representations. Thus, our approach scores high on transparency, adaptability, and replicability. In isolation, the Crawler provides premium precision but comparatively low recall. Its limitations, we conjecture, reflect primarily on ERG parsing challenges and inconsistencies in the target data. In a sense, our approach pushes a larger proportion of the task into the parser, meaning (a) there should be good opportunities for parser adaptation to this somewhat idiosyncratic text type; (b) our results can serve to offer feedback on ERG semantic analyses and parse ranking; and (c) there is a much smaller proportion of very task-specific engineering. When embedded in a confidence-thresholded cascading architecture, our system advances the state of the art on this task, and oracle combination scores suggest there is much remaining room to better exploit the complementarity of approaches in our study. In future work, we will seek to better understand the division of labor between the systems involved through contrastive error analysis and possibly another oracle experiment, constructing gold-standard MRSs for part of the data. It would also be interesting to try a task-specific adaptation of the ERG parse ranking model, for example retraining on the pre-existing treebanks but giving preference to analyses that lead to correct Crawler results downstream. Acknowledgments We are grateful to Dan Flickinger, the main developer of the ERG, for many enlightening discussions and continuous assistance in working with the analyses available from the grammar. 
This work grew out of a discussion with colleagues of the Language Technology Group at the University of Oslo, notably Elisabeth Lien and Jan Tore Lønning, to whom we are indebted for stimulating cooperation. Furthermore, we have benefited from comments by participants of the 2013 DELPH-IN Summit, in particular Joshua Crowgey, Guy Emerson, Glenn Slayden, Sanghoun Song, and Rui Wang.

References

Alshawi, H. (Ed.). 1992. The Core Language Engine. Cambridge, MA, USA: MIT Press.

Basile, V., Bos, J., Evang, K., and Venhuizen, N. 2012. UGroningen. Negation detection with Discourse Representation Structures. In Proceedings of the 1st Joint Conference on Lexical and Computational Semantics (p. 301 – 309). Montréal, Canada.

Bender, E. M. 2013. Linguistic fundamentals for natural language processing: 100 essentials from morphology and syntax. San Rafael, CA, USA: Morgan & Claypool Publishers.

Copestake, A., Flickinger, D., Pollard, C., and Sag, I. A. 2005. Minimal Recursion Semantics. An introduction. Research on Language and Computation, 3(4), 281 – 332.

Curran, J., Clark, S., and Bos, J. 2007. Linguistically motivated large-scale NLP with C&C and Boxer. In Proceedings of the 45th Meeting of the Association for Computational Linguistics Demo and Poster Sessions (p. 33 – 36). Prague, Czech Republic.

Flickinger, D. 2000. On building a more efficient grammar by exploiting types. Natural Language Engineering, 6(1), 15 – 28.

Fokkens, A., van Erp, M., Postma, M., Pedersen, T., Vossen, P., and Freire, N. 2013. Offspring from reproduction problems. What replication failure teaches us. In Proceedings of the 51st Meeting of the Association for Computational Linguistics (p. 1691 – 1701). Sofia, Bulgaria.

Koller, A., and Thater, S. 2005. Efficient solving and exploration of scope ambiguities. In Proceedings of the 43rd Meeting of the Association for Computational Linguistics: Interactive Poster and Demonstration Sessions (p. 9 – 12). Ann Arbor, MI, USA.

Lapponi, E., Read, J., and Øvrelid, L. 2012. Representing and resolving negation for sentiment analysis. In Proceedings of the 2012 ICDM Workshop on Sentiment Elicitation from Natural Text for Information Retrieval and Extraction. Brussels, Belgium.

Morante, R., and Blanco, E. 2012. *SEM 2012 Shared Task. Resolving the scope and focus of negation. In Proceedings of the 1st Joint Conference on Lexical and Computational Semantics (p. 265 – 274). Montréal, Canada.

Morante, R., and Daelemans, W. 2012. ConanDoyle-neg. Annotation of negation in Conan Doyle stories. In Proceedings of the 8th International Conference on Language Resources and Evaluation. Istanbul, Turkey.

Morante, R., Schrauwen, S., and Daelemans, W. 2011. Annotation of negation cues and their scope guidelines v1.0 (Tech. Rep. # CTRS-003). Antwerp, Belgium: Computational Linguistics & Psycholinguistics Research Center, Universiteit Antwerpen.

Morante, R., and Sporleder, C. 2012. Modality and negation. An introduction to the special issue. Computational Linguistics, 38(2), 223 – 260.

Oepen, S., and Lønning, J. T. 2006. Discriminant-based MRS banking. In Proceedings of the 5th International Conference on Language Resources and Evaluation (p. 1250 – 1255). Genoa, Italy.

Read, J., Velldal, E., Øvrelid, L., and Oepen, S. 2012. UiO1. Constituent-based discriminative ranking for negation resolution. In Proceedings of the 1st Joint Conference on Lexical and Computational Semantics (p. 310 – 318). Montréal, Canada.

Vincze, V., Szarvas, G., Farkas, R., Móra, G., and Csirik, J. 2008. The BioScope corpus.
Biomedical texts annotated for uncertainty, negation and their scopes. BMC Bioinformatics, 9(Suppl 11).